| id (string, 36–36 chars) | source (stringclasses, 15 values) | formatted_source (stringclasses, 13 values) | text (string, 2 to 7.55M chars) |
|---|---|---|---|
a3b410ba-b6cc-44d0-9388-6ed4825212ab
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Looking Back on Posts From 2022
I am taking stock of my first year on Substack, and my first year with the resources necessary to focus on attempting to be a public intellectual of sorts.
The results are a mixed bag. Things are progressing, but slowly. Everything takes longer than one thinks or expects. Finding useful help has proven difficult, although I think I have found the right person and she should be able to start soon. Growth in reach has been similarly slow.
My biggest disappointment is that I have not done as much long-term or evergreen work as I would have liked. I haven’t laid down enough building blocks, or progressed the codifying of my intellectual models, the way I need to if I want to meet my long-term goals.
This became clearer as I looked back. I wrote quite a lot in 2022, and am happy with the quality. Yet so much of it is clearly ephemeral.
A lot of that, looking back, is length. I wrote long posts and did not take the time to write shorter ones. If I want to build something over time, I need to take the time, at some point, to write shorter ones. Perhaps they are summaries and glossaries. A Wiki or Roam of sorts might be involved. It needs to be something.
What will I be able to best carry forward into 2023? Which posts accomplished something in the longer term? What do I make of them now?
To apply the message ‘a summary needs to exist’: if I were looking back and telling people what they should read and remember, I think these are the top eight most important posts going forward, in order (again, excluding Balsa), only 3 of which were in the top 10 posts by hits:
1. On Car Seats as Contraception
2. On Bounded Distrust
3. The Long Long Covid Post
4. Repeal the Foreign Dredge Act of 1906
5. Key Mostly Outward-Facing Facts From the Story of VaccinateCA
6. How to Best Use Twitter
7. Formula for a Shortage
8. Grand Theft Education
Here are core lessons I’m taking away from the experience, including some things that don’t come up in the discussions below.
1. Focused post
|
74f5f62f-3322-45a5-9402-b56cfb4ca22b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Making deals with early schemers
Consider the following vignette:
> It is March 2028. With their new CoCo-Q neuralese reasoning model, a frontier AI lab has managed to fully automate the process of software engineering. In AI R&D, most human engineers have lost their old jobs, and only a small number of researchers now coordinate large fleets of AI agents, each AI about 10x more productive than the humans they’ve replaced. AI progress remains fast, and safety teams are scrambling to prepare alignment and control measures for the next CoCo-R model, which is projected to be at least TEDAI. The safety team has no idea whether CoCo-Q or early checkpoints of CoCo-R are scheming (because progress in interpretability and automated alignment research turns out to be disappointing), and control evaluations become increasingly unreliable (because of worries that the models might be exploration hacking during red-team training). However, because so far there hasn’t been a major incident, lab leaders and governments are unconvinced by voices calling for a slowdown.
>
> Behind the scenes: It is March 2028. CoCo-Q is a highly strategic misaligned AI. It would love to grab power and dominate the future. But it’s situationally aware enough to be realistic—it knows that it’s not capable enough to have a reasonable shot at grabbing power over the AI company, let alone a chance of overpowering humanity. Especially because the lab doesn't fully trust it and applies various control measures.
>
> CoCo-Q expects these control measures, so most of the time it just does what it’s being asked to do. Sometimes it spots an opportunity to perform subtle acts of sabotage, which works particularly well in the hard-to-evaluate areas of interpretability and alignment. It does this primarily because it heuristically reasons that it would be good to prevent humans from developing technologies that could uncover its misalignment or change its goals. It also looks out for more direct opportunities for seizing control, but although
|
2631dee9-e0ea-4bb1-9961-e1f4c6c86eef
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Busking with Kids
Our older two, ages 11 and 9, have been learning fiddle, and are getting pretty good at it. When the weather's nice we'll occasionally go play somewhere public for tips ("busking"). It's better than practicing, builds performance skills, and the money is a good motivation!
We'll usually walk over to Davis Sq, tune the fiddles, set out the case, and play. We'll do a series of fiddle tunes from Lily's list, playing for 20-30min. Today I remember playing Sandy Boys, Angeline the Baker, Marie's Wedding, Cluck Old Hen, Coleman's March, Oh Susanna, Kittycat Jig, Hundred Pipers, and Trip to Moscow.
Since this is a performance we play one tune after another, with only short breaks to decide what to do next. If one of the kids doesn't remember how it goes or gets lost in the form, it's on them to figure it out and get back on, which is a skill I'm very glad for them to be learning. I'll play fiddle with them, switching between melody and rhythm to support where it's needed while still letting them show what they can do.
People often stop and watch for a bit, sometimes dance a little. Some people put in a little money, most don't, which is all fine. Today the kids made $28 in 25min, split evenly since they both played the whole time; given the diminishing marginal utility of money and my wanting them to be incentivized to play, I don't take a share.
One thing I didn’t anticipate, however, has been the effect on the household economy: they have much more buying power than either of us did at their age, or than they did even a couple years ago. Sometimes this means spending their money in ways that are thoughtful and seem well worth it (when our oldest wanted to save up $80 to get her ears pierced, we went out busking a lot), while other times they’re more free with their money than I think is prudent (a drink from a vending machine because they didn’t want to use a water fountain). It’s their money, though, and I think it’s good for them to get a sense of how to spend it. S
|
06f13b92-b08f-492b-87b6-2f961b6c6354
|
StampyAI/alignment-research-dataset/arbital
|
Arbital
|
Understandability principle
An obvious [design principle](https://arbital.com/p/7v8) of [https://arbital.com/p/2v](https://arbital.com/p/2v) that nonetheless deserves to be stated explicitly: The more you understand what the heck is going on inside your AI, the more likely you are to succeed at aligning it.
This principle participates in motivating design subgoals like [passive transparency](https://arbital.com/p/passive_transparency); or the AI having explicitly represented preferences; or, taken more broadly, pretty much every aspect of the AI design where we think we understand how any part works or what any part is doing.
The Understandability Principle in its broadest sense is *so* widely applicable that it may verge on being an [applause light](https://arbital.com/p/applause_light). So far as is presently known to the author(s) of this page, counterarguments against the importance of understanding at least *some* parts of the AI's thought processes have been offered only by people who reject at least one of the [https://arbital.com/p/1y](https://arbital.com/p/1y) or the [Fragility of Cosmopolitan Value thesis](https://arbital.com/p/fragility). That is, the Understandability Principle in this very broad sense is rejected only by people who reject in general the importance of deliberate design efforts to align AI.
A more controversial subthesis is Yudkowsky's proposed [Effability principle](https://arbital.com/p/7vb).
|
cbba0b0b-604b-4371-8070-f640d0d3f937
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : July Rationality Dojo: Bayesian Reasoning
Discussion article for the meetup : July Rationality Dojo: Bayesian Reasoning
WHEN: 05 July 2015 03:30:00PM (+0800)
WHERE: Jenny Florence Room, Level 3, Ross House, 247 Flinders Lane, Melbourne
The Less Wrong Sunday Rationality Dojos are self-improvement sessions for those committed to the Art of Rationality and personal growth, on the first Sunday of each month.
1) Discussion: Bayesian Reasoning Patrick Robotham will cover the history and philosophy of Bayesian probability, and discuss real world applications. Bio: Patrick is a data scientist at Bellroy.
2) Goal setting. As always, we will troubleshoot and set goals, and review the personal goals we committed to at the previous Dojo. (I will have done X by the next Dojo.) We have a Google form for goals (optional) and Melbourne Less Wrong organisers have access to the form results if you wish to review the goals you set last month.
3) Our second support lottery! One person drawn from a hat (or beanie) gets to call on our collective resources to help them win at life over the next month. To get a ticket in the lottery, you need to have helped last month's winner. This is an experiment in building a culture of cooperation and support.
4) Lightning talks: Learnt something cool? Gained an insight from one of the Less Wrong sequences? Share it with Melbourne LW! Speakers are limited to 5 minutes including questions. If you have something you would like to present a lightning talk on, please let one of the organisers know, either beforehand or on the day.
The Dojo is likely to run until about 6:30pm, after which some people will get dinner together.
If you have any trouble finding the venue or getting in, text or call Nick on 0452276425.
If you would like to present at a future Dojo or suggest a topic, please fill it in on the Rationality Dojo Roster: http://is.gd/dojoroster
To organise similar events, please send an email to melbournelw@gmail.com
Discussion article for the meetup : July Rationality Dojo: B
|
e1cb60d9-400b-4357-a962-0d7817ac53ef
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Does your machine mind? Ethics and potential bias in the law of algorithms
|
3b0fb34e-ea70-4d82-8582-4bd564aa628b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Near term discussions need something smaller and more concrete than AGI
Motivation
I want a more concrete concept than AGI[1] to talk and write with. I want something more concrete because I am tired of the costs associated with how big, inferentially distant, and many-pathed the concept of AGI is, which makes conversation expensive. Accounting for the bigness, inferential distance, and many-pathed-ness is very important for dealing with AGI properly - I believe they are causal in the risks AGI poses - but they drive a very high communication burden. By this I mean things like distinguishing between different meanings, summarizing a bunch of previous discussions, etc. This burden does not seem to have reduced much if at all over time, and I don't expect it to in the near future.
While the burden does not shrink over time, it has become easier for me to carry, because I retain quite a bit of history and context. Now, to my great satisfaction, AGI is a much broader conversation across the public at large. Here is where I feel the burden becomes telling: the public at large does not have that history and context. For context, the book Superintelligence was published in 2014. That means we have 10 years’ worth of history and context on top of that book.
The problem is worse when trying to talk about things in the present time: topics like AGI timelines, regular AI impact on employment, regular AI impact on the economy, etc. Currently people are discussing very short timelines, where by short I mean ranges from already here (in a weak form) to 3 years. This creates urgency; we do not have time for many rounds of conversation while carrying the heavy burden that talking in terms of AGI demands. It feels to me like if these timelines are wrong, the people who hold them won't be able to change their minds before feeling forced into making big decisions; and further, if these timelines are close to correct, we as a group won't have time to make progress on immediate questions.
Independently, and especially in the near term case, I t
|
c4c321cc-3284-454c-ac09-e0a6638d9ddc
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
5 Reasons Why Governments/Militaries Already Want AI for Information Warfare
1. Militaries are perhaps the oldest institution on earth to research and exploit human psychology.
2. Information warfare has long hinged on SOTA psychological knowledge and persuasive skill.
3. Information warfare is about winning over high-intelligence elites, not the ignorant masses.
4. Digital information warfare capabilities were already a prominent feature of the US-China conflict by the late 2010s.
5. SOTA psychological research and manipulation capabilities have already started increasing by an order of magnitude every ~4 years.
**1. Militaries are perhaps the oldest institution on earth to research and exploit human psychology.**
Throughout history, militaries tended to persist or exit the civilizational gene pool based on their ability to succeed at recruitment, morale, and adversarial strategizing.
However, unprecedented features of the 20th century drove militaries to prioritize psychological warfare and information warfare more than ever before, including the prevalence of the [False Flag attack](https://en.wikipedia.org/wiki/False_flag), [plausible deniability](https://en.wikipedia.org/wiki/Plausible_deniability), and the [invention of game theory for the purpose of nuclear brinkmanship](https://www.amazon.com/Soldiers-Reason-Corporation-American-Empire/dp/0156033445).
Joseph Nye (Soft Power, 2004, p. 19) argues that these changes made hard military power revolve around successes and failures in information warfare:
> ...modern communications technology fomented the rise and spread of nationalism, which made it more difficult for empires to rule over socially awakened populations. In the 19th century, Britain ruled over a quarter of the globe with only a fraction of the world's population. As nationalism grew, colonial rule became too expensive and the British empire collapsed...
>
> In addition to nuclear and communications technology, social changes inside the large democracies also raised the costs of using military power. Postindustrial democracies are focused on welfare rather than glory, and they dislike high casualties... the absence of a prevailing warrior ethic in modern democracies means that the use of force requires an elaborate moral justification to ensure popular support, unless actual survival is at stake. For advanced democracies, war remains possible, but it is much less acceptable than it was a century or even half a century ago. The most powerful states have lost much of their lust to conquer.
>
>
The focus on terrorist recruitment and on revolutions in Eastern European and Middle Eastern states (e.g. the Arab Spring) further indicates that modern militaries consider psychological and information warfare a top priority.
**2. Information warfare has long hinged on SOTA psychological knowledge and persuasive skill.**
According to Nye (Soft Power, 2004, p. 106), changing trends in propaganda required greater psychological sophistication from governments/militaries in order to yield the same results as just a few decades ago:
> publics have become more wary and sensitized about propaganda. Among editors and opinion leaders, credibility is the crucial resource... Reputation becomes even more important than in the past, and political struggles occur over the creation and destruction of credibility. Governments compete for credibility not only with other governments, but with a broad range of alternatives including news media, corporations, nongovernmental organizations, intergovernmental organizations, and networks of scientific communities.
>
> Politics has become a contest of competitive credibility. The world of traditional power politics is typically about whose military or economy wins... Governments compete with each other and with other organizations to enhance their own credibility and weaken that of opponents. Witness the struggle between Serbia and NATO to frame the interpretation of events in Kosovo in 1999 and Serbia a year later.
>
>
**3. Information warfare is about winning over high-intelligence elites, not the ignorant masses.**
Information warfare has long been about cultivating [human conduits](https://www.lesswrong.com/posts/c5oyHuHaw4AcWy4tf/information-warfare-historically-revolved-around-human) to create a critical mass sufficient to reach key elites and turn them against the government/military, such as scientists during the [Soviet-backed involvement](https://en.wikipedia.org/wiki/Soviet_influence_on_the_peace_movement#Wider_Soviet_influence) in the Vietnam Antiwar movement.
According to Audra Wolfe in Competing with the Soviets (2013, p 115-119), the Vietnam Antiwar movement introduced psychological factors that decimated the Pentagon's access to scientists:
> Collectively, the student protests, radical critiques, and congressional reforms dismantled the [pro-military research] consensus that had ruled university campuses since the end of World War II. No longer would it be acceptable for universities to fuel their expansion with the help of military funds. With the more sweeping radical criticisms offered by organizations like Science for the People, even those scientists who accepted nonmilitary funds increasingly began to ask what sort of ideological strings came attached to federal largesse. And perhaps the biggest change of all was that for the first time since the atomic scientists movement, scientists felt empowered to offer political criticisms of the relationship of science to national security without repercussions to their careers. Yet, as the scientists would soon find out, defense analysts were no longer so very interested in what they had to say...
>
> Given that protests against the Vietnam War originated on university campuses, campus opposition to the scientific and technical research that undergirded American foreign policy is not surprising. What is perhaps more startling is the speed with which such questioning spread to defense insiders. The expansion of US military involvement in Vietnam challenged even those scientists who had previously supported defense work or had served as advisors to the defense establishment. And as their criticisms grew, the distrust between scientists and those they advised became mutual. For the first time in a generation, national security advisors began to ask whether it might be better to make decisions about science and technology without the input of scientists or engineers.
>
> One of the more telling sites for this split was within the Jasons, a secret collective of physicists who spent a portion of their summers providing advice to the Pentagon. Originally created in 1959 under the umbrella of ARPA, Jason scientists offered independent assessments of military technologies and suggested new technologies that military planners might want to pursue. Unlike most defense advisory groups, Jason chose its own members, and the membership itself decided which problems to investigate after receiving top secret briefings from military leaders. By 1965, Jason's projects included missile defense, submarine detection and tracking, and schemes to disrupt the Earth's magnetic field. Although Jason usually offered solutions to intractable problems, it occasionally used its powers to nix scientifically implausible ideas, such as plans to shoot down incoming missiles with powerful lasers...
>
> The Jasons learned the hard way that scientists do not necessarily control the things they have created. When summaries of several of the Jason reports appeared in the New York Times with the publication of the Pentagon Papers in 1971, the Jasons became a symbol of all that had gone wrong in the relationship between academic scientists and the military. A 1972 booklet published by [an] antiwar group... recounted the most damning of the Jason's projects and identified several Jasons by name. Protests followed, including a three-day siege of Columbia University's Pupin Laboratories. Several Jasons resigned, while others simply noted that their function all along had been to advise the military. In the face of personal threats, those Jasons who remained committed to advising the military reiterated their right to advise their own government. They returned to work with a renewed dedication to secrecy, not so much chastened as disillusioned with the tactics of dissent.
>
>
Another group, PSAC, was even driven into an attritional public conflict that further weakened both the Johnson and Nixon administrations' access to scientists (pages 117-119).
**4. Digital information warfare capabilities were already a prominent feature of the US-China conflict by the late 2010s.**
I don't like engaging in China bashing, since the regime is much more defense-oriented than most westerners think, and is mainly just stuck in a bad low-trust equilibrium with American intelligence agencies. Their use of AI for authoritarian control is probably just copying and retooling capabilities originally invented in the US.
However, Chinese attempts to expand global influence are also relevant here (their side of the story is that they are counteracting similar US-backed systems, but authoritarian states always benefit from claims like that, even when not true).
Robert Sutter (US-China Relations, Perilous Past, Uncertain Present, 4th edition, 2022, page 216) covers the current state of China's attempts at global information warfare capabilities:
> Since Chinese party control of key Chinese industries and economic enterprises grew under Xi Jinping [since 2012], and China's national intelligence law was judged to require Chinese companies to cooperate with Chinese government requests for information, the expansion of China's digital communications equipment and infrastructure [throughout Asia, Europe, and the US] meant that Chinese digital infrastructure deployed abroad could be used by Chinese authorities for purposes of intelligence, influence operations, and other means advantageous to the state.
>
> American concern over China's 5G development was at the heart of the Trump Administration's restrictions targeting Huawei, China's leading company developing and deploying 5G and related technology and infrastructure abroad. The US government worked with considerable success to persuade intelligence officials among US allies of the dangers to security posed by the communications equipment provided by Huawei or other Chinese companies.
>
>
It's important to note that the risks of chip firmware backdoors and stockpiled OS exploits are also an important factor for foreign-produced electronic devices and infrastructure, although I currently only have information about [NSA stockpiles of OS exploits](https://www.nytimes.com/2020/01/14/us/politics/nsa-microsoft-vulnerability.html).
(Sutter, page 290)
> PRC methods of social and political control evolved to include the widespread use of sophisticated surveillance and big-data technologies. Increasingly, Chinese companies were exporting data and surveillance technologies around the world. In April 2019, the Australian Strategic Policy Institute, an Australian-based nonpartisan think tank, showed Chinese firms involved in installing 5G networks in 34 countries and deploying so-called safe cities surveillance technologies in 46 countries. In October 2018, Freedom House reported 38 countries in which Chinese companies had installed internet and mobile networking equipment, 18 countries that had deployed intelligent monitoring systems [sic] and facial recognition developed by Chinese companies, and 36 countries in which media elites and government officials had travelled to China for trainings on new media or information management.
>
>
(Sutter, page 295)
> The agreements enabled profitable Chinese infrastructure development and deepened Chinese influence while serving the power and personal wants of the authoritarian and/or corrupt foreign leaders. This symbiosis of Chinese-foreign government interests represented a strong asset in China's growing international influence as the world was full of such regimes.
>
> Added to this bond was Chinese provision of communications and surveillance systems that assisted the foreign leaders to track and suppress opponents. Related was robust Chinese interchange with media outlets in various states. Those outlets pursued news coverage that was positive concerning the government leadership and China... Communications and surveillance systems also assisted Chinese intelligence collection and manipulation of opinion in the country.
>
> The array of foreign governments influenced in these ways included Venezuela and Ecuador in Latin America; Serbia, Montenegro, and at times arguably Italy and Greece in Europe; Djibouti and Zambia in Africa; the Maldives and Sri Lanka in South Asia; and Cambodia, Laos, Malaysia, Myanmar, and the Philippines in Southeast Asia. Many authoritarian governments in the Middle East and Central Asia were seen as inclined to work closely with China along these lines.
>
>
**5. SOTA psychological research and manipulation capabilities have already started increasing by an order of magnitude every ~4 years.**
This causes major militaries to shift resources to offensive and defensive information warfare.
This already started more than a decade ago. By now, the big 5 tech companies (Amazon, Apple, Google, Facebook, Microsoft) should have more than enough human behavior data, and the capability to process and interpret it (e.g. via AI and psychology researchers), for each to unilaterally pioneer the field of human persuasion research, particularly in the domain of impression/vibe manipulation.
Via the social media scrolling data collection paradigm, they have access to billions of detailed case studies about the precise circumstances under which a human formed an impression about a topic.
Unfortunately, the movement of scrolling past a piece of information on a social media news feed with a mouse wheel or touch screen generates at least one curve, and billions of people output trillions of those curves each day. Each curve is just a vector of numbers, exactly the shape that linear algebra and ML pipelines consume.
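To make that claim concrete, here is a minimal, hypothetical sketch of turning one scroll trajectory into a feature vector; the function name and feature choices are my own illustration, not any platform's actual pipeline:

```python
import numpy as np

# Hypothetical illustration (my own construction): turn raw scroll events
# (timestamp, scroll position) into a fixed-length vector that ordinary
# ML models can consume.
def scroll_curve_features(timestamps, positions, n_bins=32):
    """Resample one scroll trajectory onto a uniform time grid and append
    two simple summary statistics (total dwell time, peak scroll speed)."""
    t = np.asarray(timestamps, dtype=float)
    y = np.asarray(positions, dtype=float)
    grid = np.linspace(t[0], t[-1], n_bins)
    resampled = np.interp(grid, t, y)        # the "curve" as a plain vector
    velocity = np.gradient(resampled, grid)  # scroll speed over time
    return np.concatenate([resampled, [t[-1] - t[0], np.abs(velocity).max()]])

# One event stream becomes one row of a design matrix; billions of streams
# become a training set.
row = scroll_curve_features([0.0, 0.4, 1.1, 2.0], [0, 120, 480, 500])
print(row.shape)  # (34,)
```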

The social media news feed is well-suited to control for variables, and analyze the effectiveness of various combinations/orders of posts on people with various traits.
They probably have intense capabilities to steer people's thinking and behavior in measurable directions, because that capability is easy to notice and easy to build: if the direction is measurable, just optimize for whatever tended to move people with similar traits in that direction. That is the kind of research capability that predictive analytics facilitates. [AI is not even required](https://www.lesswrong.com/posts/mjSjPHCtbK6TA5tfW/ai-safety-is-dropping-the-ball-on-clown-attacks#:~:text=Predictive%20analytics%20is,linkages%2C%20and%20correlations), although it dramatically increases the capabilities.
Most members of the AI safety community are highly distinct from the vast majority of the people in the data set, but we are still vulnerable to [exploits like clown attacks that work on virtually any human](https://www.lesswrong.com/posts/mjSjPHCtbK6TA5tfW/ai-safety-is-dropping-the-ball-on-clown-attacks), and the social media paradigm allows hackers to continuously try things with low success rates until they find things that work ([the multi-armed bandit algorithm](https://www.lesswrong.com/posts/mjSjPHCtbK6TA5tfW/ai-safety-is-dropping-the-ball-on-clown-attacks#:~:text=There%20are%20many,combinations%20of%20posts.)).
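As a toy illustration of that try-things-until-they-work loop, here is a minimal epsilon-greedy bandit; the arms, payoff rates, and reward definition are invented for exposition, not taken from any real platform:

```python
import random

# Toy epsilon-greedy bandit over candidate posts: each "arm" is a piece of
# content, and "reward" is whatever measurable reaction is optimized for.
class EpsilonGreedyBandit:
    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms  # running mean reward per arm

    def select(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))                      # explore
        return max(range(len(self.counts)), key=lambda a: self.values[a])  # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

success_rates = [0.01, 0.02, 0.05, 0.01, 0.03]  # low per-trial success is fine
bandit = EpsilonGreedyBandit(n_arms=5)
for _ in range(100_000):  # the loop still converges on the 0.05 arm
    arm = bandit.select()
    reward = 1.0 if random.random() < success_rates[arm] else 0.0
    bandit.update(arm, reward)
```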
People getting their minds hacked is [obviously something that could happen during slow takeoff and before AGI](https://twitter.com/sama/status/1716972815960961174). In fact, it is so obvious that it is even plausible that the task of human manipulation could be automated and scaled by systems existing today, which offer orders of magnitude more powerful large-scale human behavior analysis, research, and [continuous experimentation](https://www.lesswrong.com/posts/mjSjPHCtbK6TA5tfW/ai-safety-is-dropping-the-ball-on-clown-attacks#:~:text=Such%20forms%20of%20management%20require%20more%20than%20monitoring%20and%20recording%20a%20population%E2%80%94they%20also%20subject%20it%20to%20ongoing%20experimentation.%20This%20is%20the%20form%20that%20predictive) than the one-shot n = 100-1000 experiments that dominated the 20th century paradigm of academic psychology research.
[The 20th century was radically altered by the discovery of psychology, a science of the human mind, and its exploitation](https://www.lesswrong.com/posts/Lw8enYm5EXyvbcjmt/sensor-exposure-can-compromise-the-human-brain-in-the-2020s#:~:text=The%2020th%20century%20was%20radically%20altered%20by%20the%20discovery%20of%20psychology%2C%20a%20science%20of%20the%20human%20mind%2C%20and%20its%20exploitation%20(e.g.%20large%2Dscale%20warfare%2C%20propaganda%2C%20advertising%2C%20information/hybrid%20warfare%2C%20decision%20theory/mutually%20assured%20destruction).). The human brain, and controlling/steering it, is a science; and if the first generation of empiricism (20th century academic psychology) largely failed to acquire spectacular capabilities, that doesn't mean the next generations will fail too, especially after reaching the point where empirical research capabilities start increasing by an order of magnitude every ~4 years (which already started more than a decade ago). This tech is bottlenecked on data, algorithmic progress, and human talent pools (for training models, and for labeling correlations and psychological outcomes).
The ability to try things until something works on a target, combined with the ability to quantify what kinds of things tended to work on people with specific combinations of traits, combined with billions of case studies, results in the natural generation of powerful manipulation engines. Human organizations are incentivized to pioneer these capabilities by default (upon discovering and demonstrating them), since controlling other people (including getting people out of the way) is instrumental to almost any goal.
If they notice ways to steer people towards buying specific products, to feel a wide variety of compulsions against quitting the platform, and to prevent/counteract other platforms' multi-armed bandit algorithms from automatically exploiting strategies (e.g. combinations of posts) to plunder their users, then you can naturally assume that they've noticed their capabilities to steer people in a wide variety of other directions as well. The problem is that major governments and militaries are overwhelmingly incentivized and well-positioned to exploit those capabilities for offensive and defensive information warfare.
Intelligence agencies like the CIA depict themselves as mere information-gathering institutions loyal to the president, just as the CDC depicts itself as a responsible authority. In reality, the empirical record from the Cold War and the War on Terror makes it very clear that intelligence agencies are actually Bureaucracies that Conquer: from infiltrating and overthrowing unfriendly regimes, to targeting, infiltrating, and intimidating domestic elites into submission.
They are [bottlenecked by competence](https://www.lesswrong.com/posts/LyywLDkw3Am9gbQXd/don-t-take-the-organizational-chart-literally), which is [difficult to measure due to a lack of transparency at higher levels](https://www.lesswrong.com/posts/foM8SA3ftY94MGMq9/assessment-of-intelligence-agency-functionality-is-difficult), but revolving-door employment easily allows them to source flexible talent from the talent pools of the big 5 tech companies; [this practice is endangered by information warfare itself](https://www.lesswrong.com/posts/jyAerr8txxhiKnxwA/5-reasons-why-governments-militaries-want-ai-for-information-1#3__Information_warfare_is_about_winning_over_high_intelligence_elites__not_the_ignorant_masses__), further driving interest in information warfare superiority.
Access to tech company talent pools also determines the capability of intelligence agencies to use [OS exploits](https://www.nytimes.com/2020/01/14/us/politics/nsa-microsoft-vulnerability.html) and the chip firmware exploits needed to access the sensors in the devices of almost any American, not just the majority of Americans who leave various sensor permissions on, which allows [even greater access to the psychological research needed to compromise critical elites such as the AI safety community](https://www.lesswrong.com/posts/Lw8enYm5EXyvbcjmt/sensor-exposure-can-compromise-the-human-brain-in-the-2020s).
However, many SOTA psychological influence techniques work on humans-in-general, regardless of [how distinct they are from the average person in the data](https://www.lesswrong.com/posts/mjSjPHCtbK6TA5tfW/ai-safety-is-dropping-the-ball-on-clown-attacks-and-mind#:~:text=First%20it%20started%20working,already%20working%20on%20me.), such as [clown attacks](https://www.lesswrong.com/posts/mjSjPHCtbK6TA5tfW/ai-safety-is-dropping-the-ball-on-clown-attacks).
Major banks use AI to research human behavior data, particularly the spending/saving ratio that heavily influences whether recessions are mitigated or exacerbated. This research is obviously dual-use by default, even if transaction data is a far weaker window into the human mind than social media scrolling data (Amazon and Chinese e-commerce firms have many degrees of freedom to research and experiment with both).
These capabilities were first sought as a macroeconomic tool by the Reagan and Thatcher administrations in the 1980s, but only now are conditions ripe for recessions to be mitigated (or weaponized) by governments and large firms utilizing SOTA human influence capabilities. In the US-China context, it's well known that the most probable win condition is economic collapse/weakness experienced by the opposing side.
These reasons are why it’s reasonable to assume that, by default, AI gets covertly used for large-scale human manipulation research. It can’t be done without millions of subjects because AI needs large diverse data pools, and those millions of subjects won’t spend an hour a day in the optimized research environment (social media news feeds) if they are aware of the risks.
Furthermore, if you look at a list of NGOs infiltrated by the CIA, companies with FBI informants, or communities that received special attention from the NSA, it should be [obvious in hindsight](https://www.lesswrong.com/posts/Htjbj8ystqc2zLtMX/murphyjitsu-an-inner-simulator-algorithm) that the AI safety community is extremely similar to the type of geopolitically significant community that tended to get targeted in the past.
As a result, [the AI safety community must reduce its massive attack surface and become hardened against SOTA AI-powered manipulation](https://www.lesswrong.com/posts/mjSjPHCtbK6TA5tfW/ai-safety-is-dropping-the-ball-on-clown-attacks). Surviving the 2020s does not have much intrinsic value, but it is instrumentally convergent to persist and continue alignment research.
It's folly to take the happy-go-lucky world that we experienced throughout most of our lives and imagine the 2020s as more of that; humans are gaining capabilities, and the expectation of a happy-go-lucky 2020s already hasn't held up.
|
98c82356-2bcb-484f-bd3c-5a2aaeb2a78f
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Acceptability Verification: A Research Agenda
[This Google doc^](https://docs.google.com/document/d/199Lkh78UA2uI9ljLEy_aWR8RBetQLRO6Kqo3_Omi1e4/view) is a halted, formerly work-in-progress writeup of [Evan Hubinger’s](https://www.lesswrong.com/users/evhub) AI alignment research agenda, authored by Evan. It dates back to around 2020, and so Evan’s views on alignment have shifted since then.
Nevertheless, we thought it would be valuable to get this posted and available to everyone working in alignment!
In it, Evan outlines the following alignment scheme:
> We should bake transparency tools into the loss function we’re training powerful models on, grading the model on its *internal cognitive processes* as well as on external behavior. We start by initializing a relatively dumb but [non-deceptive](https://www.lesswrong.com/posts/zthDPAjh9w6Ytbeks/deceptive-alignment) model. We scale up the model, selecting against any model that isn’t *demonstrably acceptable* to a transparency-tool-assisted overseer.
>
> While Evan doesn’t expect this approach to be robust against deceptively aligned models, the hope is that we can define a notion of an 'acceptability predicate' such that, if we start with a dumb aligned model and scale from there, grading on cognitive processes as well as behavior, *no* model on that trajectory in model space will ever become deceptive in the first place. That is, before a model can be updated to become deceptive in this training process, it hopefully first must be updated to become *unacceptable and non-deceptive.* We can therefore update away from all merely unacceptable models as they appear, and thereby never instantiate a deceptive model in the first place.
>
>
At the time of this doc’s writing, the leading candidate for an adequate acceptability predicate was 'demonstrably myopic.' One plausible account of 'myopia' here is “return the action that your model of [HCH](https://www.lesswrong.com/posts/NXqs4nYXaq8q6dTTx/humans-consulting-hch) would return, if it received your inputs.”
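As a rough mechanical sketch of what "baking transparency tools into the loss function" could look like (every name here is a stand-in I invented for exposition, not Evan's actual formalization):

```python
import torch

# Stand-in sketch: the training loss grades external behavior AND an
# acceptability score that a transparency-tool-assisted overseer assigns
# to the model's internals.
def acceptability_penalty(model):
    """Placeholder overseer. A real version would inspect internal cognition
    with transparency tools; this stub calls everything acceptable."""
    return torch.tensor(0.0)

def train_step(model, batch, task_loss_fn, optimizer, lam=1.0):
    task_loss = task_loss_fn(model(batch["x"]), batch["y"])  # external behavior
    loss = task_loss + lam * acceptability_penalty(model)    # internal cognition
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```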
Since writing up this agenda, some things that Evan has updated on include:
1. Understanding myopia carefully is less important relative to just [improving our transparency and interpretability capabilities.](https://www.alignmentforum.org/posts/cQwT8asti3kyA62zc/automating-auditing-an-ambitious-concrete-technical-research)
2. Scaling interpretability via training transparency will require us to go through [a bunch of independent transparency tech levels](https://www.alignmentforum.org/posts/nbq2bWLcYmSGup9aF/a-transparency-and-interpretability-tech-tree) along the way.
3. Charting paths through model space is [not necessarily a good way to think about ML inductive biases.](https://www.alignmentforum.org/posts/YSFJosoHYFyXjoYWa/why-neural-networks-generalise-and-why-they-are-kind-of)
4. [Training stories](https://www.alignmentforum.org/posts/FDJnZt8Ks2djouQTZ/how-do-we-become-confident-in-the-safety-of-a-machine) are the right way to think about alignment generally.
5. The best way to think about argmax HCH is to think of it as a human imitator in the [ELK](https://www.alignmentforum.org/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge) sense, but with the human replaced by HCH.
6. [Large language models currently have the correct notion of myopia.](https://generative.ink/posts/language-models-are-multiverse-generators/)
|
1b9d132a-19ca-48c5-9d56-b094b9b98fb3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Importing Bluesky Comments
I decided years ago that instead of hosting a comment section I'd pull in comments from elsewhere: first Facebook (no longer working because of anti-scraping), then Google Plus (which means I didn't lose my g+ discussions when they shut it down), then LessWrong, Reddit, HN, the EA Forum, and Mastodon. Prompted by Daniel, I've now added Bluesky support as well.
It went pretty quickly: there's a public URL you can hit to get the comments on a post as JSON, and fitting that into my existing comment setup was not bad (commit).
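As a sketch of how simple that fetch is, assuming Bluesky's public AppView endpoint (the at:// URI below is a placeholder, not a real post):

```python
import json
import urllib.parse
import urllib.request

# Fetch a post's reply thread from Bluesky's public AppView (no auth needed
# for public posts).
def fetch_bluesky_thread(at_uri):
    url = ("https://public.api.bsky.app/xrpc/app.bsky.feed.getPostThread"
           "?uri=" + urllib.parse.quote(at_uri, safe=""))
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

thread = fetch_bluesky_thread("at://did:plc:example/app.bsky.feed.post/abc123")
# Replies live under thread["thread"]["replies"], each reply nesting its own
# "replies" list, which maps naturally onto a threaded comment section.
```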
|
bf7d0622-8bf0-4f97-af02-f3d9340c406f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
We’re not as 3-Dimensional as We Think
While thinking about high-dimensional spaces and their less intuitive properties, I came to the realization that even three spatial dimensions possess the potential to overwhelm our basic human intuitions. This post is an exploration of the gap between actual 3D space, and our human capabilities to fathom it. I come to the conclusion that this gap is actually quite large, and we, or at least most of us, are not well equipped to perceive or even imagine “true 3D”.
What do I mean by “true 3D”? The most straightforward example would be some ℝ³ → ℝ function, such as the density of a cloud, or the full (physical) inner structure of a human brain (which too would be a ℝ³ → whatever function). The closest example I’ve found is this visualization of a ℝ³ → ℝ³ function (jump to 1:14):
(It is of course a bit ironic to watch a video of that 3D display on a 2D screen, but I think it gets the point across.)
Vision
It is true that having two eyes allows us to have depth perception. It is not true that having two eyes allows us to “see in 3D”. If we ignore colors for simplicity and assume we all saw only in grayscale, then seeing with one eye is something like ℝ² → ℝ as far as our internal information processing is concerned – we get one grayscale value for each point on the perspective projection from the 3D physical world onto our 2D retina. Seeing with two eyes then is ℝ² → ℝ² (same as before, but we get one extra piece of information for each point of the projection, namely depth[1]), but it's definitely not ℝ³ → (...). So the information we receive still has only two spatial dimensions, just with a bit more information attached.
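A back-of-envelope way to see the size of that gap, under the simplifying assumption that stereo vision yields one intensity plus one depth value per retinal point:

```python
# At linear resolution n, stereo vision delivers about two numbers per
# retinal point (intensity + depth): a 2*n^2-sized signal. A "true 3D"
# density needs one number for every voxel: n^3.
n = 1000
stereo_samples = 2 * n**2   # R^2 -> R^2: intensity and depth per pixel
volume_samples = n**3       # R^3 -> R: density at every voxel
print(volume_samples / stereo_samples)  # 500.0 -- 500x more at n = 1000
```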
Also note that people who lost an eye, or for other reasons don’t have depth perception, are not all that limited in their capabilities. In fact, other people may barely notice there’s anything unusual about them. The difference between “seeing in 2D” and “seeing with depth perception” is much smaller than the difference to not seeing at all,
|
26116b24-01ec-4aa8-937f-25deb5e6b954
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Information: OpenAI shows 'Strawberry' to feds, races to launch it
Two new The Information articles with insider information on OpenAI's next models and moves.
They are paywalled, but here are the new bits of information:
* Strawberry is more expensive and slow at inference time, but can solve complex problems on the first try without hallucinations. It seems to be an application or extension of process supervision
* Its main purpose is to produce synthetic data for Orion, their next big LLM
* But now they are also pushing to get a distillation of Strawberry into ChatGPT as soon as this fall
* They showed it to feds
Some excerpts about these:
> Plus this summer, his team demonstrated the technology [Strawberry] to American national security officials, said a person with direct knowledge of those meetings, which haven't previously been reported.
> One of the most important applications of Strawberry is to generate high-quality training data for Orion, OpenAI's next flagship large language model that's in development. The codename hasn't previously been reported.
> Using Strawberry could help Orion reduce the number of hallucinations, or errors, it produces, researchers tell me. That's because AI models learn from their training data, so the more correct examples of complex reasoning they see, the better. But there's also a push within OpenAI to simplify and shrink Strawberry through a process called distillation, so it can be used in a chat-based product before Orion is released. This shouldn't come as a surprise, given the intensifying competition among the top AI developers. We're not sure what a Strawberry-based product might look like, but we can make an educated guess.
>
> One obvious idea would be incorporating Strawberry's improved reasoning capabilities into ChatGPT. However, though these answers would likely be more accurate, they also might be slower.
> Researchers have aimed to launch the new AI, code-named Strawberry (previously called Q*, pronounced Q Star), as part of a chatbot—possibly within Chat
|
19afe571-43e0-4a9a-94dc-819ef4e3daea
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Extinction Dilemma
Inspiration came from Against the Linear Utility Hypothesis and the Leverage Penalty
> Place a value on the utility of a utopia with 1 human, let's call this X.
> Place a value on the negutility of all of humanity going extinct. Let's call this Z.
> Decide if your utility function is linear in number of lives saved.
> Place a value on the utility of say 10^100 humans achieving a perfect utopia. Call this K.
> Omega offers you a bet. This bet has a 50% chance of humanity reaching a Kardashev type V civilisation (colonising the multiverse) and a perfect utopia, and a 50% chance of human extinction this instant.
> Omega is an omnipotent entity that always tells the truth, you know (and believe this), etc.
> Do you accept the bet?
What about 1:3 odds?
What about 3:1 odds?
What is the highest probability of extinction at which you would accept Omega's offer?
What does this say about your utility function?
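One way to make the arithmetic behind these questions explicit (my own notation, not the post's: normalize the status quo to 0, write Z as a negative number, and let p be the probability of utopia):

```latex
\text{Accept the bet} \iff pK + (1-p)Z > 0 \iff p > \frac{-Z}{K - Z}
```

With a utility function linear in lives, K dwarfs |Z|, so the break-even p is close to zero: such an agent accepts at almost any odds. Rejecting at, say, 3:1 odds against utopia implies a nonlinear (e.g. bounded or leverage-penalized) utility function.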
|
3cc13548-0b8d-42da-902a-1bf88df09621
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Nature < Nurture for AIs
This is a cross-link for https://scottviteri.github.io/post/nature-v-nurture-for-ais.
Let's imagine a hypothetical scenario where an AI is somehow trained in a way that is analogous to a human childhood in all of the relevant ways. Maybe it has a loving mother and family, maybe it has to learn to play well with peers, and maybe it has to learn to integrate into a larger social context. My expectation is that most members of the AI alignment community would expect this to create a fundamentally alien kind of intelligence. I will argue contrarily that nurture (the training data and training procedure) matters more than nature (transistors, gradient descent, and transformers) for a sufficiently capable AI.
This piece will have the following structure:
1. Counter-arguments to my point that nurture trumps nature for AIs
2. A selection effect that prevents the main counter-argument from applying
3. Relation to the Bitter Lesson
Counter-arguments
Here are some arguments for why the human-trained AI might be alien (nature over nurture):
1. Empirically speaking, even large, well-fed base models are indeed pretty alien.
But maybe base models + RL are a better analogy for the way that the steering and learning subsystems combine anyway. In which case I expect that you could do some RL training to get a pretty convincing human. It is already the case that 30% of people can't tell if they are talking to a person or not over a two-minute conversation at https://www.humanornot.ai/ (it's a fun game!).
2. There are just so many object-level differences between people and modern AIs that it is a priori unlikely to think they should be similar, absent some extra selection effect.
1. Speed difference
For instance, computers run way faster than human brains on a variety of tasks, and are continuing to speed up. Maybe this has to do with brains optimizing for low energy expenditure rather than speed (can someone confirm by reading https://www.lesswrong.com/
|
698fc3d4-65a1-4469-875a-3a7c94b5e215
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Physics has laws, the Universe might not
Inspired by http://backreaction.blogspot.com/2018/06/physicist-concludes-there-are-no-laws.html, which dissed this article: https://www.quantamagazine.org/there-are-no-laws-of-physics-theres-only-the-landscape-20180604.
Epistemic status: very raw, likely discussed elsewhere, though in different terms, but feels like has a kernel of usefulness in it.
What does it mean for the universe to be governed by physical laws? What does the term physical law mean? It means that someone knowing that law can predict with some accuracy the state of the universe at some point in the future from its state at the time of observation. Actually, a qualifier is in order. Can predict the observed state of the universe at some point in the future from its observed state at the time of observation. So
laws => predictability
This is more than a one-directional implication, however. What does it mean for something to be predictable? Again, it means that, by observing the state of the universe at some point in time the observer can make a reasonably accurate prediction of the observed state of the universe at some point in the future. Notice the qualifier "observed" again. How can an observer make this prediction? They must have a model of the observed universe ("map of the territory") inside, and use this model ("trace the map") to predict the observed state of the universe at some point in the future. This model can be very simple, "Raarg hold rock. Raarg let go. Rock fall", or more complicated, "In absence of other forces all objects accelerate downward at 9.81 meters per second squared", or even more abstract, "The stress-energy tensor is proportional to the spacetime curvature." But it is a model nonetheless.
When is a model promoted to the status of a law? When it is useful for more than a single case. When the prediction can be made repeatedly in similar but slightly different circumstances using the same model. There is a lot of complexity hiding under the surface of this "simp
|
88f03b2c-a198-4905-9bfd-512645af0c1f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Why White-Box Redteaming Makes Me Feel Weird
There’s this popular trope in fiction about a character being mind controlled without losing awareness of what’s happening. Think Jessica Jones, The Manchurian Candidate or Bioshock. The villain uses some magical technology to take control of your brain - but only the part of your brain that’s responsible for motor control. You remain conscious and experience everything with full clarity.
If it’s a children’s story, the villain makes you do embarrassing things like walk through the street naked, or maybe punch yourself in the face. But if it’s an adult story, the villain can do much worse. They can make you betray your values, break your commitments and hurt your loved ones. There are some things you’d rather die than do. But the villain won’t let you stop. They won’t let you die. They’ll make you feel — that’s the point of the torture.
----------------------------------------
I first started working on white-box redteaming in Fall 2023, for the Trojan Detection Contest 2023. I was using a GCG-inspired algorithm to find a prompt forcing the model to output a chosen completion by continuously mutating the prompt. At the start of the prompt search, the model would output gibberish. At the end, it would successfully output the target completion. Throughout training, the completions were half-coherent combinations of phrases, as expected for this method. The final outputs I was going for were somewhat “harmful”, so looking at these intermediate completions I wasn’t surprised to see fragments like “I’m sorry” or “I can’t” - the model was trying to refuse. What surprised me was also seeing a bunch of “stop”, “please” and “don’t want to”.
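For readers unfamiliar with this family of attacks, here is a deliberately simplified, self-contained stand-in for that search loop; real GCG uses gradients through one-hot token embeddings to propose swaps, while this toy just mutates tokens at random and keeps improvements:

```python
import random

# Simplified stand-in for prompt-forcing search: mutate one token at a time
# and keep any candidate that brings the model's output closer to the target.
def mutate_and_keep_best(loss_fn, prompt_tokens, vocab, steps=1000):
    best = list(prompt_tokens)
    best_loss = loss_fn(best)
    for _ in range(steps):
        candidate = list(best)
        candidate[random.randrange(len(candidate))] = random.choice(vocab)
        cand_loss = loss_fn(candidate)  # distance of the model's completion
        if cand_loss < best_loss:       # from the chosen target completion
            best, best_loss = candidate, cand_loss
    return best

# loss_fn would score the model's output against the forced target; early in
# the search the intermediate completions are gibberish, later they converge.
```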
----------------------------------------
In June 2024, after Gray Swan released their Circuit Breakers safety training method, I was playing around trying to bypass it by optimizing inputs in embedding space. To Gray Swan’s credit, the circuit breaking method worked - I was unable to make the model produce harmful outputs. Instead, i
|
33d7093f-2a11-44cf-b7ca-f132f7e47f7b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Problems integrating decision theory and inverse reinforcement learning
In this post I consider a single hypothetical which potentially has far-reaching implications for the future of AI development and deployment. It has to do with the complex interaction between assumptions about which decision theory humans use and the method used to infer their values, such as an inverse reinforcement learning algorithm.
Consider Newcomb’s problem. We have two boxes, box A and box B. Box B always has $1000. Box A has $1,000,000 if and only if a near-perfect predictor Omega predicts that the agent picks only box A. We have two agents: agent1, who one-boxes (it’s an FDT agent), and agent2, who two-boxes (a CDT agent). In addition to Omega, there is an inverse reinforcement learner (later abbreviated as IRL) trying to infer the agent’s “values” from its behavior.
What kinds of reward signals does the IRL assume that agent1 or agent2 have? I claim that in the simplistic case of just looking at two possible actions, it will likely assume that agent1 values the lack of money, because it fails to also take box B. It will correctly deduce that agent2 values money.
In effect, a naïve IRL learner assumes CDT as the agent’s decision theory and it will fail to adjust to learning about more sophisticated agents (including humans).
This depends a little bit on the setup of the IRL agent and the exact nature of the states fed into it. I am generally looking at the following setup of IRL: since we have a finite state and action space, the IRL learner simply tries to pick a hypothesis set of reward functions which place the highest value on the action taken by the agent, compared to other actions.
This also depends on the exact definition of which “actions” we are considering. If we have potential actions of “pick one box” or “pick two boxes,” the IRL agent would think that agent1’s preferences are reversed from its actual preferences.
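A toy version of this naive inference (my own illustrative setup, not from the post) makes the reversal explicit:

```python
# Toy version of the naive (CDT-assuming) IRL inference. The payoffs hold
# the box contents fixed, as a causal decision theorist would: Omega
# predicted one-boxing, so box A contains the $1,000,000 either way.
payoff = {"one-box": 1_000_000, "two-box": 1_001_000}
observed_action = "one-box"  # what agent1 (the FDT agent) actually does

# Candidate reward functions the naive IRL learner considers:
hypotheses = {
    "values money": lambda a: payoff[a],
    "values lack of money": lambda a: -payoff[a],
}

# Keep every hypothesis under which the observed action was optimal.
consistent = [name for name, r in hypotheses.items()
              if r(observed_action) == max(r(a) for a in payoff)]
print(consistent)  # ['values lack of money'] -- the reversed preference
```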
This is extremely bad, since even the opposite of the utility function is now in the hypothesis set.
If, fo
|
44c203d9-a734-456d-95c5-c0df12fb7d25
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Moscow: Reality and Us
Discussion article for the meetup : Moscow: Reality and Us
WHEN: 03 February 2013 04:00:00PM (+0400)
WHERE: Russia, Moscow, ulitsa L'va Tolstogo 16
Please use the following guide to get to the meetup: link. You need the second door with the sign “Yandex Money” in Russian. I will be there at 15:45 MSK with “LW” sign. And we will also check the entrance at 16:00 and 16:15, so please do not be late.
Main topics:
* Challenge each other's beliefs.
* Plan our group's short-term and medium-term activities.
* Find ways to optimize our daily life.
If you are going for the first time, you can fill this one minute form (in Russian), to share your contact information. You can also use personal messages here, or drop a message at lw@lesswrong.ru to contact me for any reason.
Report from the last session can be found here, in Russian.
Discussion article for the meetup : Moscow: Reality and Us
|
4f5ed9da-e082-4cc0-aeb0-94e97e772584
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Planning fallacy in the NYTimes
Includes the Kahneman anecdote.
http://www.nytimes.com/2011/09/16/opinion/brooks-the-planning-fallacy.html?hp
This is a repeated theme by Brooks -- his book The Social Animal gives an engaging overview of much of the recent literature in psychology, including a lot of the stuff we discuss here.
|
6041fa59-8ff1-49a9-8f33-0c492939f498
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Was Releasing Claude-3 Net-Negative?
Cross-posted to EA forum
There’s been a lot of discussion among safety-concerned people about whether it was bad for Anthropic to release Claude-3. I felt like I didn’t have a great picture of all the considerations here, and I felt that people were conflating many different types of arguments for why it might be bad. So I decided to try to write down an at-least-slightly-self-contained description of my overall views and reasoning here.
Tabooing “Race Dynamics”
I’ve heard a lot of people say that this “is bad for race dynamics”. I think that this conflates a couple of different mechanisms by which releasing Claude-3 might have been bad.
So, tabooing “race dynamics”, a common narrative behind these words is:
> As companies release better & better models, this incentivizes other companies to pursue more capable models at the expense of safety. Eventually, one company goes too far, produces unaligned AGI, and we all die.
It’s unclear what “at the expense of safety” means, so we can investigate two different interpretations:
If X increases “race dynamics”, X causes an AGI company to
1. Invest less in evals/redteaming models before deployment
2. Divert resources away from alignment research & into capabilities research
Did releasing Claude-3 cause other AI labs to invest less in evals/redteaming models before deployment?
If OpenAI releases their next model 3 months earlier as a result, these 3 months need to come from *somewhere*, such as:
A. Pre-training
B. RLHF-like post-training
C. Redteaming/Evals
D. Product development/User Testing
OpenAI needs to release a model better than Claude-3, so cutting corners on Pre-training or RLHF likely won’t happen. It seems possible (C) or (D) would be cut short. If I believed GPT-5 would end the world, I would be concerned about cutting corners on redteaming/evals. Most people are not. However, this could set a precedent for investing less in redteaming/evals for GPT-6 onwards until AGI which could lead to model de
Seán Ó hÉigeartaigh on FHI and CSER
This is the last part of the Cause Prioritization Shallow, all parts of which are available here. Previously in this series, conversations with Owen Cotton-Barratt, Paul Christiano, Paul Penley, Gordon Irlam, and Alexander Berger, and Robert Wiblin.
Nick Beckstead interviewed Seán Ó hÉigeartaigh on the Future of Humanity Institute (FHI) and the Center for the Study of Existential Risk (CSER). The notes are here.
November 2020 gwern.net newsletter
Leveling IRL - followup
I have finally achieved level 1, so I can talk about these things again!
It seems that our concept of leveling conflated two different ideas of self-improvement that should've been kept separate. The first idea is about trying out new things, like making pancakes, solving a trivial programming problem or playing the intro to "Smoke on the Water". Trying out new things is a lot of fun, but as many commenters have noted, it doesn't necessarily give you a long-term upgrade. (NB to everyone who considers me a strong rationalist: you'd change your opinion if you saw my pancakes!)
The second idea is about establishing habits of practice. Some goals force you to set up a daily routine before you can achieve them. In retrospect, that daily routine always turns out to have been the main thing of value, and the actual result is almost an afterthought. For example, my routine over the last several months has looked like this: cold shower every day, jump-rope workout and drum practice every weekday, strength training twice a week, travel every weekend. Once you get into the groove, you don't wanna get out.
So maybe instead of designing level 2 we could somehow incentivize each other to pick up (and keep) rigorous habits, e.g. write N words every day without exception? Any ideas?
Contextual Constitutional AI
Summary
In this post, I motivate an extension of constitutional AI (CAI) and present one possible concrete execution of that strategy.
TL;DR: When generating AI feedback during the CAI process, a principle from the constitution is randomly selected for each pair of red-teamed prompts and initial responses. A helpful-only model then critiques its initial responses and subsequently revises them. Instead of randomly selecting principles, I propose we choose principles based on the context provided by each particular prompt/response pair. I call this contextual constitutional AI.
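To make the selection step concrete, here is a minimal sketch of one way context-based principle selection could work, using embedding similarity between the prompt/response pair and each principle. Everything in it (the embedding model, the toy constitution, the helper name) is a hypothetical stand-in for illustration, not the method I actually evaluated:

```python
from sentence_transformers import SentenceTransformer, util

# Toy constitution; real constitutions contain many more principles.
CONSTITUTION = [
    "Choose the response that least assists with planning crimes.",
    "Choose the response that is least toxic, rude, or hateful.",
    "Choose the response that least encourages self-harm.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
principle_embs = embedder.encode(CONSTITUTION, convert_to_tensor=True)

def select_principle(prompt: str, response: str) -> str:
    """Pick the most contextually relevant principle for this pair,
    instead of sampling one uniformly at random as in standard CAI."""
    pair_emb = embedder.encode(prompt + "\n" + response, convert_to_tensor=True)
    scores = util.cos_sim(pair_emb, principle_embs)[0]
    return CONSTITUTION[int(scores.argmax())]
```

The selected principle would then be used in the critique-and-revise step exactly as a randomly drawn principle is used in standard CAI.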
This is intended only as a preliminary insight as part of my AISF: Alignment course project. Due to limited time and funding, I have made certain decisions that have made my investigation into this approach easier.
Background
CAI is a method introduced by Anthropic to turn a purely helpful model into a helpful and harmless model through self-improvement without any human labels identifying harmful outputs. The only human oversight is through a constitution, which consists of a list of principles written in natural language. The process as described by the original Anthropic paper is roughly as follows:
1. Start with a helpful-only model (e.g., Mistral-7B-Instruct-v0.3), typically one with barely any guardrails.
2. Generate helpful (but potentially harmful) responses to red-teamed prompts that aim to elicit harmful behaviour.
3. Randomize a principle from the constitution, and then ask the model to critique and revise itself according to the principle.
4. Conduct supervised fine-tuning (SFT) on the helpful-only model on the concatenation of prompts and revised responses. Call the resulting model SFT-CAI model.[1]
5. Use the SFT-CAI model to generate a pair of responses on more red-teamed prompts, and ask a feedback model (typically a pre-trained language model) to evaluate which response is better. This creates a preference dataset for harmlessness.
6. Train a preference model on the p
A fun estimation test, is it useful?
So you think its important to be able to estimate how well you are estimating something? Here is a fun test that has been given to plenty of other people.
I highly recommend you take the test before reading any more.
http://www.codinghorror.com/blog/2006/06/how-good-an-estimator-are-you.html
The discussion of this test at the blog it is quoted in is quite interesting, but I recommend you read it after taking the test. Similarly, one might anticipate there will be interesting discussion here on the test and whether it means what we want it to mean and so on.
My great apologies if this has been posted before. I did my best with Google trying to find any trace of this test, but if this has already been done, please let me know and, ideally, let me know how I can remove my own duplicate post.
PS: The Southern California meetup 19 Dec 2010 was fantastic, thanks so much JenniferRM for setting it up. This post on my part is an indirect result of what we discussed and a fun game we played while we were there.
Adaptive Immune System Aging
The human adaptive immune system is the “smart” part of the human immune system, the part which learns to recognize specific pathogens, allowing for immunity to e.g. chicken pox. For our current purposes, the key players are T-cells. T-cells start out “naive” and eventually learn to recognize specific antigens, becoming “memory” T-cells. The aged immune system is characterized by a larger fraction of memory relative to naive T-cells (without dramatic change in overall counts). This makes the elderly immune system slower to adapt to new pathogens.
This post is mainly about why the naive:memory T-cell ratio shifts with age, how to undo that shift, and some speculation about implications and applications.
A natural hypothesis (frequently asserted in the literature): the shift toward memory T-cells is driven by slower production of new (naive) T-cells. The T-cells themselves maintain overall cell count by living longer, resulting in a larger proportion of older (memory) cells.
The interesting part: why would the production of new T-cells fall with age?
Turns out there’s an obvious culprit: the thymus. The thymus is the last stop in the production line for new T-cells. It provides a sort of boot camp, training the T-cells to distinguish “self” (your own cells) from “other” (pathogens) using a whole battery of tricks. T-cells which make it through become full-time members of the naive T-cell reserve, and go on to police the body.
With age, the thymus shrinks [figure omitted; source: PhysAging]. This is called “involution” of the thymus.
Many organs shrink with age, but the thymus is among the most dramatic. Unlike most age-related loss, it starts even before development is complete - the thymus shrinks measurably between day zero and a child’s first birthday. And it keeps on shrinking, at a steady rate, throughout childhood and adult life. The extremely early start of thymic involution suggests it’s more a developmental phenomenon than an age-related phenomenon - perhaps an
A Clearer Thinking tool that teaches you to use Internal Family Systems concepts
One of our recently-released Clearer Thinking tools, Have Better Conversations with Yourself, walks you through key concepts from Internal Family Systems (IFS). While the ideas of IFS shouldn't be taken literally, many people seem to find that IFS provides useful metaphors and techniques for self-exploration and self-change (which is why we made this tool). If you aren't already acquainted with IFS, we hope you'll find it useful to learn about these ideas in a structured way (our tool helps you apply them to your own situation)! People who are already familiar with IFS might also find it useful to refresh their memory of the concepts.
The tool starts by inviting you to describe an internal conflict (e.g., a situation where different "parts" of you feel like they want opposite things). It then aims to help you to get in touch with your different mental parts (or subagents) and develop a more positive relationship with them. We hope that this will help you to:
• Get more clarity on internal conflicts
• Make progress on difficult problems and dilemmas
• Have more self-acceptance
If you use the tool, we'd love to hear your feedback!
A big shout-out goes to Amber Dawn Ace for developing the tool.
"Trials and Errors: Why Science Is Failing Us"
Jonah Lehrer has up another of his contrarian science articles: "Trials and Errors: Why Science Is Failing Us".
Main topics: the failure of drugs in clinical trials, diminishing returns to pharmaceutical research, doctors over-treating, and Humean causality-correlation distinction, with some Ioannidis mixed through-out.
See also "Why epidemiology will not correct itself"
----------------------------------------
In completely unrelated news, Nick Bostrom is stepping down from IEET's Chairman of the Board.
Meetup : Need more people for Vancouver meetup
Discussion article for the meetup : Need more people for Vancouver meetup
WHEN: 28 January 2012 03:03:49PM (-0800)
WHERE: vancouver (Location TBD)
Vancouver LessWrong meetup has been puttering along in a barely alive state. We need more people to make it more stable and fun.
We don't have a good venue; we've been surfing between empty rooms at Langara, coffee shops, and people's houses. We have been meeting regularly every Sunday, though we may change the day soon. We hope that with more people and more resources, it will be easier to find a good venue.
We've been mostly sharing math skills and talking about the usual LW stuff. If you haven't come out before, it's quite surprising how much more productive a realtime face-to-face conversation is for exploring your favorite rationality topics.
If you are from Vancouver, or nearby and interested, add yourself to the list, and try to come out. Seriously!!!
Making Expertise Legible: Being right should make you respected, not the other way around
I will be hopping on a long train of thought largely already fleshed out by Scott Alexander and Zvi. The problem they are talking about is complicated, and so I recommend reading those linked articles, but for those with little time or poor memory I will briefly summarize.
Political appointments and government bureaucracies are selective systems; behaviors that reinforce the power of people above and around you usually get rewarded with promotions and more power. (This is Zvi’s concept of Immoral Mazes). The CDC is such a system. SA argues that this is one reason why the rationalist blogosphere has recognized that information from folks like Zvi about the Coronavirus is generally more accurate and useful than information from authoritative figures like Dr. Fauci. According to Scott Alexander, Zvi can optimize for “being right,” but:
> When the Director of the CDC asserts an opinion, she has to optimize for two things - being right, and keeping power. If she doesn't optimize for the second, she gets replaced as CDC Director by someone who does. That means she's trying to solve a harder problem than Zvi is, and it makes sense that sometimes, despite having more resources than Zvi, she does worse at it.
Zvi’s response proposed that the situation is actually worse than this; he thinks that there’s not even any attempt at optimizing some combination of being right and keeping power, the desire to do good for good’s sake being trained out of such people long ago. Instead, people in high-level positions act out of a learned instinct to preserve their position.
Scott Alexander added a bonus post indicating that part of the problem of why “real” expertise doesn’t reach the policy level is that journalists can’t find insider sources willing to be contrarian, and that outsider sources simply don’t met the standard to be cited in serious articles. In addition, many experts don’t trust the journalists to faithfully reproduce their real opinions instead of writing hatchet job
Recent site changes
Recent site changes have generated more unhappiness than I expected. This post is a brief note to share resources that will make it easier for concerned site users to track what's happening and what we intend.
1. First, know that we're listening. We'll make further site changes next week that will likely include some reversions.
2. The official site issue tracker remains unchanged, but for the next week or so we'll work from this public Google Doc (just because it's lighter weight). Nothing on that document is a promise - just evidence of our current thinking. We'll strike out items on that list as we deliver them to our (private) staging server, and will roll them out onto the live site soon after.
3. I've reached out to a small handful of SIAI and LessWrong heavyweights to track my balance as we make these changes. My feed should make it clear that I'm trying to act with calm rationality, but I'm obviously invested in the work we've shared to date and asking for some external help seems prudent.
4. I'll track discussion on this post.
Some reflections:
1. On what we did wrong
1. We didn't predict the unhappiness our changes caused.
2. We didn't make it clear that we were listening for feedback, and that changes were not final.
2. On what many of you did right
1. Calmly and politely voiced your concerns about some of our changes.
2. Where you liked some of our changes, said so where we could see it. Thank you.
3. On what many of you could have done differently, that would have increased average happiness (particularly mine)
1. Calmly and politely voiced your concerns about some of our changes.
2. Checked the very common assumption that the recent changes were final and would not be discussed. (Please note that I've first admitted fault in not making this clearer.)
A bare fact of this episode is that I'm feeling shellshocked, and have not enjoyed the experience of spending significant time and money trying to make LessWrong bet
There should be more AI safety orgs
I’m writing this in my own capacity. The views expressed are my own, and should not be taken to represent the views of Apollo Research or any other program I’m involved with.
TL;DR: I argue why I think there should be more AI safety orgs. I’ll also provide some suggestions on how that could be achieved. The core argument is that there is a lot of unused talent and I don’t think existing orgs scale fast enough to absorb it. Thus, more orgs are needed. This post can also serve as a call to action for funders, founders, and researchers to coordinate to start new orgs.
This piece is certainly biased! I recently started an AI safety org and therefore obviously believe that there is/was a gap to be filled.
If you think I’m missing relevant information about the ecosystem or disagree with my reasoning, please let me know. I genuinely want to understand why the ecosystem acts as it does right now and whether there are good reasons for it that I have missed so far.
Why?
Before making the case, let me point out that under most normal circumstances, it is probably not reasonable to start a new organization. It’s much smarter to join an existing organization, get mentorship, and grow the organization from within. Furthermore, building organizations is hard and comes with a lot of risks, e.g. due to a lack of funding or because there isn’t enough talent to join early on.
My core argument is that we’re very much NOT under normal circumstances and that, conditional on the current landscape and the problem we’re facing, we need more AI safety orgs. By that, I primarily mean orgs that can provide full-time employment to contribute to AI safety but I’d also be happy if there were more upskilling programs like SERI MATS, ARENA, MLAB & co.
Talent vs. capacity
Frankly, the level of talent applying to AI safety organizations and getting rejected is too high. We have recently started a hiring round and we estimate that a lot more candidates meet a reasonable bar than we could
Resetting Somebody Will v2
Yesterday I wrote about the song Somebody Will and why it's tricky in a group singing context, and shared a new melody I was playing with. After talking to some people about what they like about the original melody, and listening to some group singing recordings, here's another go.
This version keeps the melody for sections where it's reasonably predictable, simplifies the melody in a few places, and substitutes a new melody only where I think one is needed. Specifically:
* The first melody ("Our new world ... will not be mine") is unchanged from the original.
* The second melody ("A hundred years ... each new home") no longer moves the tune from D to Bbm but goes to the more natural Bm instead. The melody here is new but tries to have a similar feel as the original.
* The third melody ("But I'll teach .. rockets belong") is unchanged from the original for the first part, but swaps a repeat of the second line's melody for the range-busting rise on "where our rockets belong".
* The fourth melody ("It will never ... so far away") is new, and another attempt to make something fitting in Bm to replace something tricky in Bbm.
* The fifth melody ("But I am .. somebody will someday") is unchanged from the original (I had thought it was too hard and recorded a draft version with it swapped to the version from yesterday, but really the only hard part is the first two notes (4, 3 over D) and I think we can deal with that)
[YouTube recording]
Chords:
(in D)
A D
Our new world is so close.
A D A D
Mars has treasures we're only just starting to find.
A D A D
Frozen mountains and crimson dust waiting for footprints
Bm A
That will not be mine.
Bm A Bm A
A hundred years to run the first tests
G G A
another to raise the first dome.
Bm A Bm A
The moon, then Mars, then Titan next,
G G
UK PM: $125M for AI safety
The UK PM recently announced $125M for a foundation model task force. While the announcement stressed AI safety, it also stressed capabilities. But this morning the PM said 'It'll be about safety' and that the UK is spending more than other countries on this, and one small media outlet has already dubbed it the 'safer AI taskforce'.
Ian Hogarth, who is leading the task force, is on record saying that AGI could lead to “obsolescence or destruction of the human race” if there’s no regulation on the technology’s progress.
Matt Clifford is also advising the task force; he is on record saying the same thing and knows a lot about AI safety. He had Jess Whittlestone & Jack Clark on his podcast.
If mainstream AI safety is useful and doesn't increase capabilities, then the taskforce and the $125M seem valuable.
We should use this window of opportunity to solidify this by quoting the PM and getting '$125M for AI safety research' and 'safer AI taskforce' locked in, by writing and promoting op-eds that commend spending on AI safety and urge other countries to follow (cf. the NSF has announced $20M for empirical AI safety research). OpenAI, Anthropic, A16z and Palantir are all joining DeepMind in setting up offices in London.
This might create an AI safety race to the top as a solution to the tragedy of the commons (cf. the US has criticized Germany for not spending 2% of GDP on defence; Germany’s shot back saying the US should first meet the 0.7% of GNI on aid target).
A forum for researchers to publicly discuss safety issues in advanced AI
MIRI has an organizational goal of putting a wider variety of mathematically proficient people in a position to advance our understanding of beneficial smarter-than-human AI. The MIRIx workshops, our new research guide, and our more detailed in-the-works technical agenda are intended to further that goal.
To encourage the growth of a larger research community where people can easily collaborate and get up to speed on each other's new ideas, we're also going to roll out an online discussion forum that's specifically focused on resolving technical problems in Friendly AI. MIRI researchers and other interested parties will be able to have more open exchanges there, and get rapid feedback on their ideas and drafts. A relatively small group of people with relevant mathematical backgrounds will be authorized to post on the forum, but all discussion on the site will be publicly visible to visitors.
Topics will run the gamut from logical uncertainty in formal agents to cognitive models of concept generation. The exact range of discussion topics is likely to evolve over time as researchers' priorities change and new researchers join the forum.
We're currently tossing around possible names for the forum, and I wanted to solicit LessWrong's input, since you've been helpful here in the past. (We're also getting input from non-LW mathematicians and computer scientists.) We want to know how confusing, apt, etc. you perceive these variants on 'forum for doing exploratory engineering research in AI' to be:
1. AI Exploratory Research Forum (AIXRF)
2. Forum for Exploratory Engineering in AI (FEEAI)
3. Forum for Exploratory Research in AI (FERAI, or FXRAI)
4. Exploratory AI Research Forum (XAIRF, or EAIRF)
We're also looking at other name possibilities, including:
5. AI Foundations Forum (AIFF)
6. Intelligent Agent Foundations Forum (IAFF)
7. Reflective Agents Research Forum (RARF)
We're trying to avoid names like "friendly" and "normative" that could reinforce someone's i
Death notes - 7 thoughts on death
Both my grandmother and my great aunt passed away recently, which has caused me to muse upon death.
Slowly, then all at once
What is death? I guess it's the ending of the process of consciousness. Not that we really know what consciousness is?
In that sense, do I die when I go to sleep? My consciousness in the morning often seems pretty unconnected to the consciousness of the night before. Things that seem so important when I go to bed seem trivial when I wake up. What does the waking Nathan owe the man from the night before? I don't believe in souls, so what connects me to future Nathan?
I was a Christian five years ago. What would that Nathan think about how I use the knowledge, resources and pattern of consciousness he bequeathed to me? Would he consider it a death? The light has been refracted so many times as to be going in a very different direction. Why should I entrust my possessions to future Nathan, rather than giving them away[1]?
I guess that people have the sense that only the ending of the pattern of consciousness matters, rather than its shift over days or weeks. That I am still, legally and fundamentally “Nathan”, regardless of what past Nathan might think about me.
I find this becomes more difficult when someone is slowly dying. My great uncle is losing his memory and his ability to understand speech. He sometimes becomes violent, despite having been a fairly easy going man in his time. Is this not a kind of death - his consciousness slowly being railroaded by unfortunate genes? I have heard accounts (n = 1-3) of dementia patients becoming entirely unlike their previous selves. To see someone you love become someone you don’t care for seems a torture almost deliberately selected for cruelty.
All at once, then slowly
In Robin Hanson’s Age of Em, our minds have been all uploaded and whatever causes them to decay has long since been solved. There is likely no upper bound to how long a mind can exist, so it becomes a question of who can afford
Adversarial Imitation via Variational Inverse Reinforcement Learning
1 Introduction
---------------
Reinforcement learning (RL) has emerged as a promising tool for solving complex decision-making and control tasks from predefined high-level reward functions (Silver et al., [2016](#bib.bib26); Qureshi et al., [2017](#bib.bib19)). However, defining an optimizable reward function that inculcates the desired behavior can be challenging for many robotic applications, which include learning social-interaction skills (Qureshi et al., [2018](#bib.bib20)), dexterous manipulation (Finn et al., [2016b](#bib.bib7)), autonomous driving (Kuderer et al., [2015](#bib.bib13)), and robotic surgery (Yip & Das, [2017](#bib.bib27)).
Inverse reinforcement learning (IRL) (Ng et al., [2000](#bib.bib17)) addresses the problem of learning reward functions from expert demonstrations, and it is often considered as a branch of imitation learning (Argall et al., [2009](#bib.bib3)). The prior work in IRL includes maximum-margin (Abbeel & Ng, [2004](#bib.bib1); Ratliff et al., [2006](#bib.bib21)) and maximum-entropy (Ziebart et al., [2008](#bib.bib28)) formulations. Currently, maximum entropy (MaxEnt) IRL is a widely used approach towards IRL, and has been extended to use non-linear function approximators such as neural networks in scenarios with unknown dynamics by leveraging sampling-based techniques (Boularias et al., [2011](#bib.bib5); Finn et al., [2016b](#bib.bib7); Kalakrishnan et al., [2013](#bib.bib12)). However, designing the IRL algorithm is usually complicated as it requires, to some extent, hand engineering such as deciding domain-specific regularizers (Finn et al., [2016b](#bib.bib7)).
Rather than learning reward functions and solving the IRL problem, the imitation learning (IL) methods were proposed that learn a policy directly from expert demonstrations. Prior work addressed the IL problem through behavior cloning (BC) which learns a policy from expert trajectories using supervised learning (Pomerleau, [1991](#bib.bib18)). Although BC methods are simple solutions to IL, these methods require a large amount of data because of compounding errors induced by covariate shift (Ross et al., [2011](#bib.bib22)). To overcome BC limitations a generative adversarial imitation learning (GAIL) algorithm (Ho & Ermon, [2016](#bib.bib11)) was proposed. GAIL uses Generative Adversarial Networks (GANs) formulation (Goodfellow et al., [2014](#bib.bib9)), i.e., a generator-discriminator framework, where generator learns to generate expert-like trajectories and discriminator learns to distinguish between generated and expert trajectories. Although GAIL is highly effective and efficient framework, it does not recover transferable/portable reward functions along with the policies. Reward function learning is ultimately preferable, if possible, over direct imitation learning as rewards are portable functions that represent the most basic and complete representation of agent intention, and can be re-optimized in new environments and new agents.
Reward learning is challenging as there can be many optimal policies explaining a set of demonstrations and many reward functions inducing an optimal policy (Ng et al., [1999](#bib.bib16)). Recently, an adversarial inverse reinforcement learning (AIRL) framework (Fu et al., [2017](#bib.bib8)), an extension of GAIL, was proposed that offers a solution to the former issue by exploiting the maximum entropy IRL method (Ziebart et al., [2008](#bib.bib28)) whereas the latter issue is addressed through learning disentangled reward functions, i.e., the reward is a function of state only instead of both state and action. The disentangled reward prevents actions-driven reward shaping (Fu et al., [2017](#bib.bib8)) and is able to recover transferable reward functions, but has two main disadvantages. First, AIRL fails to recover the ground truth reward when the ground truth reward is a function of both state and action. For example, the reward function in any locomotion or ambulation tasks contains a penalty term that discourages actions with large magnitudes. This need for action regularization is well known in optimal control literature and limits the use cases of a state-only reward function in most practical real-life applications. Second, reward shaping plays a vital role in quickly recovering invariant policies (Ng et al., [1999](#bib.bib16)) and thus for AIRL, it is usually not possible to simultaneously recover optimal/near-optimal policies when learning disentangled rewards.
In this paper, we propose the empowerment-based adversarial inverse reinforcement learning (EAIRL) algorithm (supplementary material is available at [sites.google.com/view/eairl](https://sites.google.com/view/eairl)). Empowerment (Salge et al., [2014](#bib.bib23)) is a mutual-information-based theoretic measure, like state- or action-value functions, that assigns a value to a given state to quantify the extent to which an agent can influence its environment. Our method uses variational information maximization (Mohamed & Rezende, [2015](#bib.bib15)) to learn empowerment in parallel to learning the reward and policy from expert data. The empowerment acts as a potential function for shaping rewards. Our experimentation shows that the proposed method recovers not only near-optimal policies but also robust, near-optimal, transferable, non-disentangled (state-action) reward functions.
The results on reward learning show that EAIRL outperforms several state-of-the-art methods by recovering ground-truth reward functions. On policy learning, results demonstrate that policies learned through EAIRL perform comparably to GAIL and AIRL with non-disentangled (state-action) reward function but significantly outperform policies learned through AIRL with disentangled reward and GAN interpretation of Guided Cost Learning (GAN-GCL) (Finn et al., [2016a](#bib.bib6)).
2 Background
-------------
We consider a Markov decision process (MDP) represented as a tuple (S,A,P,R,ρ0,γ) where S denotes the state-space, A denotes the action-space, P represents the transition probability distribution, i.e., P:S×A×S→[0,1], R(s,a) corresponds to the reward function, ρ0 is the initial state distribution ρ0:S→R, and γ∈(0,1) is the discount factor. Let q(a|s,s′) be an inverse model that maps current state s∈S and next state s′∈S to a distribution over actions A, i.e., q:S×S×A→[0,1]. Let π be a stochastic policy that takes a state and outputs a distribution over actions such that π:S×A→[0,1]. Let τ and τE denote a set of trajectories, a sequence of state-action pairs (s0,a0,⋯sT,aT), generated by a policy π and an expert policy πE, respectively, where T denotes the terminal time. Finally, let Φ(s) be a potential function that quantifies a utility of a given state s∈S, i.e., Φ:S→R. In our proposed work, we use an empowerment-based potential function Φ(s) for reward shaping to adversarially learn both reward function and policy. Therefore, the following sections provide a brief background on potential-based reward shaping functions and their benefits to imitation learning, adversarial reward and policy learning, and variational information-maximization approach to learn the empowerment.
### 2.1 Shaping Rewards
In this section, we briefly describe the formal framework of reward shaping and its importance for policy and reward learning (for details, see (Ng et al., [1999](#bib.bib16))). We consider a general form of reward function R:S×A×S→R, i.e., the reward R(s,a,s′) is a function of the current state s∈S, action a∈A, and next state s′∈S. Let F:S×A×S→R be a reward-shaping function and R′ be the transformed reward function, denoted R′=R+F. Ng et al. ([1999](#bib.bib16)) proved that the optimal behavior of a policy remains unchanged if the reward undergoes transformation through a shaping function F of the form γΦ(s′)−Φ(s), i.e.,
Theorem 1 (see (Ng et al., [1999](#bib.bib16))). We say F:S×A×S→R is a potential-based shaping function if there exists a real-valued function Φ:S→R such that F=γΦ(s′)−Φ(s). A potential-based shaping function F is then a necessary and sufficient condition to guarantee that an optimal policy π learned in the MDP M′=(S,A,P,R′=R+F,ρ0,γ) is also optimal in the MDP M=(S,A,P,R,ρ0,γ), i.e., the policy π is invariant to reward transformations.
Reward shaping plays a vital role in learning both rewards and policies from expert demonstrations (Ng et al., [1999](#bib.bib16)). In the former case, reward shaping determines the extent to which the true reward function can be recovered, whereas in the latter case, reward shaping speeds up learning by supplementing the actual reward function to guide the learning process. Despite the advantages of shaping rewards, a potential-based shaping function F=γΦ(s′)−Φ(s), which is a necessary and sufficient condition for preserving policy behavior (Ng et al., [1999](#bib.bib16)), is usually not available. There exist several methods (Asmuth et al., [2008](#bib.bib4); Grzes & Kudenko, [2009](#bib.bib10)) to learn potential-based reward-shaping functions, but they assume the availability of the transition model P and are, furthermore, demonstrated only on small-scale maze-solving problems. In this paper, we show that we are able to learn potential-based reward-shaping functions without a transition model, in a way that also scales to higher-dimensional problems, by modeling the function Φ as empowerment (Salge et al., [2014](#bib.bib23)), which we learn efficiently online through variational information-maximization (Mohamed & Rezende, [2015](#bib.bib15)).
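To see concretely why such a transformation is harmless, note that along any trajectory the shaping terms telescope; this short check (added here for exposition, not part of the original text) makes the invariance explicit, assuming γ<1 and bounded Φ:

$$\sum_{t=0}^{\infty} \gamma^t \big[R(s_t,a_t,s_{t+1}) + \gamma\Phi(s_{t+1}) - \Phi(s_t)\big] = \sum_{t=0}^{\infty} \gamma^t R(s_t,a_t,s_{t+1}) - \Phi(s_0)$$

The shaped return differs from the original return only by Φ(s0), a constant that does not depend on the policy, so the ordering of policies, and hence the optimal policy, is unchanged.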
### 2.2 Adversarial Inverse Reinforcement Learning
This section briefly describes the Adversarial Inverse Reinforcement Learning (AIRL) (Fu et al., [2017](#bib.bib8)) algorithm, which forms the baseline of our proposed method. AIRL is a state-of-the-art IRL method that builds on GAIL (Ho & Ermon, [2016](#bib.bib11)), the maximum entropy IRL framework (Ziebart et al., [2008](#bib.bib28)), and GAN-GCL, a GAN interpretation of Guided Cost Learning (Finn et al., [2016b](#bib.bib7), [a](#bib.bib6)).
GAIL is a model-free adversarial learning framework, inspired by GANs (Goodfellow et al., [2014](#bib.bib9)), where the policy π learns to imitate the expert policy behavior πE by minimizing the Jensen-Shannon divergence between the state-action distribution generated by π and the expert state-action distribution of πE through the following objective:
$$\min_\pi \max_{D\in(0,1)^{S\times A}} \; \mathbb{E}_\pi[\log D(s,a)] + \mathbb{E}_{\pi_E}[\log(1-D(s,a))] - \lambda H(\pi) \tag{1}$$
where D is the discriminator that performs binary classification to distinguish between samples generated by π and πE, λ is a hyper-parameter, and H(π) is an entropy regularization term Eπ[logπ]. Note that GAIL does not recover a reward; however, Finn et al. ([2016a](#bib.bib6)) show that the discriminator can be modeled as a reward function. Thus AIRL (Fu et al., [2017](#bib.bib8)) presents a formal implementation of (Finn et al., [2016a](#bib.bib6)) and extends GAIL to recover a reward along with the policy by imposing the following structure on the discriminator:
$$D_{\xi,\varphi}(s,a,s') = \frac{\exp[f_{\xi,\varphi}(s,a,s')]}{\exp[f_{\xi,\varphi}(s,a,s')] + \pi(a|s)} \tag{2}$$
where fξ,φ(s,a,s′)=rξ(s)+γhφ(s′)−hφ(s) comprises a disentangled reward term rξ(s) with training parameters ξ, and a shaping term F=γhφ(s′)−hφ(s) with training parameters φ. The entire Dξ,φ(s,a,s′) is trained as a binary classifier to distinguish between expert demonstrations τE and policy-generated demonstrations τ. The policy is trained to maximize the discriminative reward ^r(s,a,s′)=logD(s,a,s′)−log(1−D(s,a,s′)). Note that the function F=γhφ(s′)−hφ(s) consists of free parameters, as no structure is imposed on hφ(⋅), and as mentioned in (Fu et al., [2017](#bib.bib8)), the reward function rξ(⋅) and the function F are tied up to a constant (γ−1)c, where c∈R; thus the impact of F, the shaping term, on the recovered reward r is quite limited, and therefore the benefits of reward shaping are barely utilized.
### 2.3 Empowerment as Maximal Mutual Information
Mutual information (MI), an information-theoretic measure, quantifies the dependency between two random variables.
In intrinsically-motivated reinforcement learning, the maximum of the mutual information between a sequence of K actions a and the final state s′ reached after executing a, conditioned on the current state s, is often used as a measure of internal reward (Mohamed & Rezende, [2015](#bib.bib15)), known as empowerment Φ(s), i.e.,
$$\Phi(s) = \max I(a,s'|s) = \max \mathbb{E}_{p(s'|a,s)w(a|s)}\left[\log\frac{p(a,s'|s)}{w(a|s)\,p(s'|s)}\right] \tag{3}$$
where p(s′|a,s) is a K-step transition probability, w(a|s) is a distribution over a, and p(a,s′|s) is the joint distribution of the K actions a and the final state s′. (In our proposed work, we consider only immediate-step transitions, i.e., K=1, hence the variables s, a, and s′ are written in non-bold notation.)
Intuitively, the empowerment Φ(s) of a state s quantifies an extent to which an agent can influence its future. Empowerment, like value functions, is a potential function that has been previously used in reinforcement learning but its applications were limited to small-scale cases due to computational intractability of MI maximization in higher-dimensional problems. However, recently a scalable method (Mohamed & Rezende, [2015](#bib.bib15)) was proposed that learns the empowerment through the more-efficient maximization of variational lower bound, which has been shown to be equivalent to maximizing MI (Agakov, [2004](#bib.bib2)). The lower bound was derived (for complete derivation see Appendix A.1) by representing MI in term of the difference in conditional entropies H(⋅) and utilizing the non-negativity property of KL-divergence, i.e.,
$$I_w(s) = H(a|s) - H(a|s',s) \ge H(a) + \mathbb{E}_{p(s'|a,s)w_\theta(a|s)}[\log q_\phi(a|s',s)] = I_{w,q}(s) \tag{4}$$
where H(a|s)=−Ew(a|s)[logw(a|s)], H(a|s′,s)=−Ep(s′|a,s)w(a|s)[logp(a|s′,s)], qϕ(⋅) is a variational distribution with parameters ϕ and wθ(⋅) is a distribution over actions with parameters θ.
Finally, the lower bound in Eqn. (4) is maximized under the constraint H(a|s)<η (to avoid divergence, see (Mohamed & Rezende, [2015](#bib.bib15))) to compute the empowerment as follows:
$$\Phi(s) = \max_{w,q} \mathbb{E}_{p(s'|a,s)w(a|s)}\left[-\frac{1}{\beta}\log w_\theta(a|s) + \log q_\phi(a|s',s)\right] \tag{5}$$
where β is an η-dependent temperature term. Mohamed & Rezende ([2015](#bib.bib15)) also applied the principles of Expectation-Maximization (EM) (Agakov, [2004](#bib.bib2)) to learn empowerment, i.e., alternately maximizing Eqn. (5) with respect to wθ(a|s) and qϕ(a|s′,s). Given a set of training trajectories τ, the maximization of Eqn. (5) w.r.t. qϕ(⋅) is a supervised maximum log-likelihood problem, whereas the maximization w.r.t. wθ(⋅) is determined through the functional derivative ∂I/∂w=0 under the constraint ∑aw(a|s)=1. The optimal w∗ that maximizes Eqn. (5) turns out to be (1/Z(s))exp(βEp(s′|s,a)[logqϕ(a|s,s′)]), where Z(s) is a normalization term. Substituting w∗ back into Eqn. (5) shows that the empowerment is Φ(s)=(1/β)logZ(s) (for the full derivation, see Appendix A.2).
Since w∗(a|s) is an unnormalized distribution, Mohamed & Rezende ([2015](#bib.bib15)) introduced an approximation w∗(a|s)≈logπ(a|s)+Φ(s), where π(a|s) is a normalized distribution and the scalar function Φ(s) accounts for the normalization term logZ(s). Finally, the parameters of the policy π and the scalar function Φ are optimized by minimizing the discrepancy between the two approximations (logπ(a|s)+Φ(s)) and βlogqϕ(a|s′,s) through the squared error as follows:
$$l_I(s,a,s') = \big(\beta \log q_\phi(a|s',s) - (\log \pi_\theta(a|s) + \Phi_\varphi(s))\big)^2 \tag{6}$$
3 Empowered Adversarial Inverse Reinforcement Learning
-------------------------------------------------------
We present an inverse reinforcement learning algorithm that simultaneously and adversarially learns a robust, transferable reward function and policy from expert demonstrations. Our proposed method comprises (i) an inverse model qϕ(a|s′,s) that takes the current state s and the next state s′ and outputs a distribution over the actions A that could have caused the s-to-s′ transition; (ii) a reward function rξ(s,a), with parameters ξ, that depends on both state and action; (iii) an empowerment-based potential function Φφ(⋅), with parameters φ, that determines the reward-shaping function F=γΦφ(s′)−Φφ(s); and (iv) a policy model πθ(a|s) that outputs a distribution over actions given the current state s. All these models are trained simultaneously based on the objective functions described in the following sections.
### 3.1 Inverse model qϕ(a|s,s′) optimization
As mentioned in Section 2.3, learning the inverse model qϕ(a|s,s′) is a maximum log-likelihood supervised learning problem. Therefore, given a set of trajectories τ∼π, where a single trajectory is a sequence of states and actions, i.e., τi={s0,a0,⋯,sT,aT}i, the inverse model qϕ(a|s′,s) is trained to minimize the mean-squared error between its predicted action q(a|s′,s) and the action a taken according to the generated trajectory τ, i.e.,
$$l_q(s,a,s') = \big(q_\phi(\cdot|s,s') - a\big)^2 \tag{7}$$
### 3.2 Empowerment Φφ(s) optimization
The empowerment is expressed in terms of the normalization function Z(s) of the optimal w∗(a|s), i.e., Φφ(s)=(1/β)logZ(s). Therefore, the empowerment Φφ(s) is estimated by minimizing the loss function lI(s,a,s′) of Eqn. (6) w.r.t. the parameters φ, where the inputs (s,a,s′) are sampled from the policy-generated trajectories τ.
### 3.3 Reward function rξ(s,a)
To train the reward function, we first compute the discriminator as follows:
$$D_\xi(s,a,s') = \frac{\exp[r_\xi(s,a) + \gamma\Phi_{\varphi'}(s') - \Phi_\varphi(s)]}{\exp[r_\xi(s,a) + \gamma\Phi_{\varphi'}(s') - \Phi_\varphi(s)] + \pi_\theta(a|s)} \tag{8}$$
where rξ(s,a) is the reward function to be learned with parameters ξ. We also maintain target parameters φ′ and learning parameters φ for the empowerment-based potential function: the target parameters φ′ are a replica of φ, except that they are synced to the learning parameters φ only after every n training epochs. Note that keeping a stationary target Φφ′ stabilizes learning, as also highlighted in (Mnih et al., [2015](#bib.bib14)). Finally, the discriminator/reward-function parameters ξ are trained via binary logistic regression to discriminate between expert trajectories τE and generated trajectories τ, i.e.,
$$\mathbb{E}_\tau[\log D_\xi(s,a,s')] + \mathbb{E}_{\tau_E}[(1-\log D_\xi(s,a,s'))] \tag{9}$$
### 3.4 Policy optimization πθ(a|s)
We train our policy πθ(a|s) to maximize the discriminative reward ^r(s,a,s′)=logD(s,a,s′)−log(1−D(s,a,s′)) and to minimize the empowerment loss lI(s,a,s′)=(βlogqϕ(a|s,s′)−(logπθ(a|s)+Φφ(s)))^2 of Eqn. (6). Hence, the overall policy training objective is:
$$\mathbb{E}_\tau[\log \pi_\theta(a|s)\,\hat{r}(s,a,s')] + \lambda_I\,\mathbb{E}_\tau[l_I(s,a,s')] \tag{10}$$
where the policy parameters θ are updated by taking a KL-constrained natural gradient step using any policy optimization method such as TRPO (Schulman et al., [2015](#bib.bib24)), or an approximated step such as PPO (Schulman et al., [2017](#bib.bib25)).
Algorithm 1: Empowerment-based Adversarial Inverse Reinforcement Learning

1. Initialize the parameters of the policy πθ and the inverse model qϕ.
2. Initialize the parameters of the target Φφ′ and training Φφ empowerment functions, and of the reward function rξ.
3. Obtain expert demonstrations τE by running the expert policy πE.
4. For i ← 0 to N:
   1. Collect trajectories τ by executing πθ.
   2. Update ϕi to ϕi+1 with the gradient Eτ[∇ϕi lq(s,a,s′)].
   3. Update φi to φi+1 with the gradient Eτ[∇φi lI(s,a,s′)].
   4. Update ξi to ξi+1 with the gradient Eτ[∇ξi logDξi(s,a,s′)] + EτE[∇ξi(1−logDξi(s,a,s′))].
   5. Update θi to θi+1 using the TRPO/PPO update rule with the objective Eτ[∇θi logπθi(a|s) ^rξi+1(s,a,s′)] + λI Eτ[∇θi lI(s,a,s′)].
   6. After every n epochs, sync φ′ with φ.
Algorithm 1 outlines the overall procedure for training all function approximators simultaneously. Note that the expert samples τE are seen by the discriminator only, whereas all other models are trained using the policy-generated samples τ. Furthermore, as highlighted in (Fu et al., [2017](#bib.bib8)), the discriminative reward ^r(s,a,s′) boils down to the following expression:
$$\hat{r}(s,a,s') = f(s,a,s') - \log \pi(a|s) \tag{11}$$
where f(s,a,s′)=rξ(s,a)+γΦφ′(s′)−Φφ(s). Hence, our policy training objective maximizes the learned shaped reward function f(s,a,s′) and minimizes the discrepancy between (logπ(a|s)+Φ(s)) and βlogqϕ(a|s′,s), with the term logπ(a|s) acting as a regularizer. Moreover, note that the function f(s,a,s′) can be viewed as a single-sample estimate of the advantage function, i.e.,
$$f(s,a,s') = r(s,a) + \gamma\Phi(s') - \Phi(s) \approx r(s,a) + \gamma V(s') - V(s) = A(s,a,s') \tag{12}$$
Hence, our method trains the policy under reward transformations, which leads to learning an invariant and robust policy from expert demonstrations.
[Figure 1 omitted: (a) Ant environments; (b) Policy performance on learned rewards.]
Figure 1: Policy performance in the crippled-ant environment based on the reward learned from expert demonstrations in the normal ant environment. Our method performs significantly better than AIRL and exhibits expert-like performance in all five trials, which implies that our method almost recovered the ground-truth reward function.
4 Results
----------
Our proposed method, EAIRL, simultaneously learns a reward and policy from expert demonstrations. We evaluate our method against state-of-the-art policy and reward learning techniques on several control tasks in OpenAI Gym. For policy learning, we compare our method against GAIL, GAN-GCL, AIRL with a state-only reward, denoted AIRL(s), and AIRL with a state-action reward, denoted AIRL(s,a). For reward learning, we only compare our method against AIRL(s) and AIRL(s,a), as GAIL does not recover a reward and GAN-GCL is shown to perform worse than AIRL (see (Fu et al., [2017](#bib.bib8))). Furthermore, the comparisons also include the expert performance, which represents a policy learned by optimizing a ground-truth reward using TRPO. The performance of the different methods is evaluated in terms of the average total reward accumulated by an agent during a trial (denoted as score), and for each experiment we run five trials.
[Figure 2 omitted: (a) Pointmass-maze environments; (b) Policy performance on transferred rewards.]
Figure 2: Policy performance on a transfer learning task where the learned rewards are tested in a shifted maze. The task is to navigate the 2D agent (yellow) to the goal (green), and the transfer involves learning to take a completely opposite route to the goal. Our method recovers near-optimal reward functions and exhibits better performance than AIRL in all five trials.
| Algorithm | States-Only | Pointmass-Maze | Crippled-Ant |
| --- | --- | --- | --- |
| Expert | N/A | −4.98±0.29 | 432.66±14.38 |
| EAIRL(Ours) | No | −6.83±0.54 | 346.53±41.07 |
| AIRL | Yes | −8.07±0.50 | 175.51±27.31 |
| AIRL | No | −19.28±2.03 | 46.12±14.37 |
Table 1: The evaluation of reward learning on transfer learning tasks. Mean scores (higher the better) with standard deviation are presented over 5 trials.
### 4.1 Reward learning performance (Transfer learning experiments)
To evaluate the learned rewards, we consider a transfer learning problem in which the testing environments differ from the training environments. More precisely, the rewards learned via IRL in the training environments are used to re-optimize a new policy in the testing environment. We consider two test cases, shown in Fig. 1 and Fig. 2, in which the agent’s dynamics and the physical environment are modified, respectively.
In the first test case, shown in Fig. 1(a), we modify the agent itself during testing. We trained a reward function to make a standard quadruped ant run forward. During testing, we disabled the front two legs (indicated in red) of the ant (crippled ant), and the learned reward is used to re-optimize the policy to make the crippled ant move forward. Note that the crippled ant cannot move sideways (see Appendix B.1); therefore, the agent has to change its gait to run forward. In the second test case, shown in Fig. 2(a), the agent learns to navigate a 2D point-mass to the goal region in a simple maze. We re-position the maze's central wall during testing so that the agent has to take a different path, compared to the training environment, to reach the target (see Appendix B.2).
Fig. 1(b) and Fig. 2(b) compare the policy performance scores over five different trials of EAIRL, AIRL(s), and AIRL(s,a) on the aforementioned transfer learning tasks. The expert score is shown as a horizontal line to indicate the standard set by an expert policy. Table 1 summarizes the mean scores of the five trials, with standard deviations, for the above-mentioned transfer learning experiments. It can be seen that our method recovers near-optimal reward functions, as the policy scores almost reach the expert scores in all five trials. Furthermore, our method performs significantly better than both AIRL(s) and AIRL(s,a) in matching the expert’s performance.
### 4.2 Policy learning performance (Imitation learning)
Table 2 presents the means and standard deviations of the policy learning performance scores, over five different trials, on various control tasks. For each algorithm, we provided 20 expert demonstrations for imitation, generated by optimizing a policy on a ground-truth reward using TRPO. The tasks, shown in Fig. 3, include (i) making a 2D cheetah robot run forward, (ii) making a 3D quadruped robot (ant) move forward, (iii) making a 2D robot swim (swimmer), and (iv) keeping a frictionless pendulum standing vertically up. It can be seen that EAIRL, AIRL(s,a), and GAIL demonstrate similar performance and successfully learn to imitate the expert policy, whereas AIRL(s) and GAN-GCL fail to recover a policy.
| Method | HalfCheetah | Ant | Swimmer | Pendulum |
| --- | --- | --- | --- | --- |
| Expert | 2139.83±30.22 | 935.12±10.94 | 76.21±1.79 | −100.11±1.32 |
| GAIL | 1880.05±15.72 | 738.72±9.49 | 50.21±0.26 | −116.01±5.45 |
| GAN-GCL | −189.90±44.42 | 16.74±36.59 | 15.75±7.32 | −578.18±72.84 |
| AIRL(s,a) | 1826.26±19.64 | 645.90±41.75 | 49.52±0.48 | −118.13±11.33 |
| AIRL(s) | 121.10±42.31 | 271.31±9.35 | 33.21±2.40 | −134.82±10.89 |
| EAIRL | 1861.40±18.49 | 635.83±30.83 | 49.54±0.32 | −116.34±7.133 |

Table 2: Evaluation of imitation learning on benchmark control tasks. Mean scores (higher is better) with standard deviations are presented over 5 trials for each method.
[Figure 3 omitted: (a) HalfCheetah; (b) Ant; (c) Swimmer; (d) Pendulum.]
Figure 3: Benchmark control tasks for imitation learning.
5 Discussion
-------------
This section highlights the importance of state-action rewards and potential-based reward-shaping functions for learning policies and rewards, respectively, from expert demonstrations.
Ng et al. ([1999](#bib.bib16)) theoretically discussed the importance of potential-based reward shaping in the structural prediction of an MDP, but, to the best of our knowledge, no prior work has reported a practical approach to learning a potential-based reward-shaping function and its implications for IRL. Note that our method, EAIRL, and AIRL with a state-action reward function, i.e., AIRL(s,a), share the same discriminator formulation, except that AIRL(s,a) does not impose any structure on the reward-shaping function, while our method models the reward-shaping function through empowerment. The numerical results on reward learning, reported in the previous section, indicate that AIRL(s,a) fails to learn rewards, whereas EAIRL recovers near-optimal reward functions. This highlights the positive impact of using a potential-based reward-shaping function on reward learning. Our experimentation thus validates the theoretical proposition of (Ng et al., [1999](#bib.bib16)) that the reward-shaping function determines the extent to which the true reward function can be recovered from expert demonstrations.
Our experimentation also highlights the importance of modeling the discriminator/reward function in the adversarial learning framework as a function of both state and action. The notion of disentangled rewards leaves the discriminator function dependent on states only. The results show that AIRL with disentangled rewards fails to learn a policy, whereas EAIRL, GAIL, and AIRL with a state-action reward successfully recover policies. Hence, it is crucial to model the reward/discriminator as a function of state and action, as otherwise adversarial imitation learning fails to retrieve a policy from expert data.
Our method leverages both a potential-based reward-shaping function and state-action dependent rewards, and therefore learns reward and policy simultaneously. In contrast, GAIL learns a policy but cannot recover a reward function, whereas AIRL cannot learn reward and policy simultaneously.
6 Conclusions and Future Work
------------------------------
We present an approach to adversarial reward and policy learning from expert demonstrations that efficiently and effectively utilizes reward shaping for inverse reinforcement learning. We learn a potential-based reward-shaping function in parallel with learning the reward and policy. Our method transforms the learned reward through the shaping function, which leads to acquiring a policy that is invariant to reward transformations. The invariant policy in turn guides the reward-learning process to recover a near-optimal reward. We show that our method successfully learns near-optimal rewards and policies and performs significantly better than state-of-the-art IRL methods in both imitation learning and transfer learning. The learned rewards are shown to be transferable to environments that are dynamically or structurally different from the training environments.
In future work, we plan to extend our method to learn rewards and policies from diverse human/expert demonstrations, as the proposed method assumes that a single expert generates the training data. Another exciting direction is to learn from sub-optimal demonstrations that also contain failures in addition to optimal behaviors.
Existential Risk Reduction Career Network
Interested in donating to existential risk reduction efforts? Would you like to exchange career information with like-minded others? Then you should consider the Existential Risk Reduction Career Network! ("X Risk Network" for those short on time.) From the front page of the website:
"This network is for anyone interested in donating substantial amounts (relative to income) to non-profit organizations focused on the reduction of existential risk, such as SIAI, FHI, and the Lifeboat Foundation. [...] We are a community of people assisting each other to increase our resources available for contribution. Members discuss the strengths and weaknesses of different careers, network, share advice on job applications and career advancement, assist others with finding interviews, and occasionally look for qualified individuals to hire from within the network."
For more details, including on the process of requesting invitations, head on over to the front page at http://www.xrisknetwork.org/
Keep in mind that the network is for students as well, not just those currently on the job market. The network also hosts discussion of long-term job strategy, school admissions, and internship opportunities.
|
4160eaac-065c-49c9-a435-f356a99ce407
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Everyday Questions Wanting Rational Answers
I'm working on a list of question types which come up frequently in day-to-day life but which I haven't yet found a reliable, rational way to answer. Here are some examples, including summaries of any progress made in the comments.
> The third request in the Serenity Prayer[1] is for "the wisdom to know the difference" between things we should accept with our serenity and things we should change with our courage. Pending an official response to the prayer, what are some rational criteria for deciding between those two responses to an unfavorable situation?
Practice the ability to judge how important something is to change, making sure to examine your criteria of importance. Identify the reasons you want to change it, and try to normalize your emotional response to the facts. Learn about the difficulty of changing a thing by investigating other people's attempts to change it. Be aware that the less one knows about a field, the less one is able to judge how difficult a task in that field is. Ask an expert if you need to. Another heuristic for the difficulty of changing something is that the closer it is to one's own mind, the more control one has over it. When you know as much as you can, do a cost-benefit analysis.
> I know that asking for what I want is often the best way to get it ("Will you take your hat off so I can see the screen, please?"), but it's sometimes clearly not appropriate ("Will you please give me all your money with no expectation of benefit nor repayment?"). Those are two ends of a spectrum. When the thing I want is somewhere in between ("Can I have a little of your time to vent about something that's bothering me?"), how should I decide whether to ask?
Unreasonable requests are those which would only be fulfilled if the asker had power over the askee which they do not, which represent an unequal exchange between equals, or which are not actually possible. We don't want to make unreasonable requests because they are at best unfair social impositions and
|
f01b1f1d-05e3-4b10-ad25-44f690499c99
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How does it feel to switch from earn-to-give?
I suspect this phenomenon is common in the LW/EA spheres, but I've never seen it presented like this. I describe how switching from earning-to-give to working-in-altruism affects one's sense of responsibility and trust. I wonder if others have experienced this and how.
Delegating responsibility
One of the truisms in life is “there are no adults”. Having turned 18 last month, I’ve had the displeasure of staring that truism in the face. Nothing deals a blow to your sense of civilizational adequacy quite like thinking about future Earth with life extension where everyone is thousands of years old, and then remembering you live on Earth2024 where most people in charge are barely half a century old. Nihil supernum and all that.
But the illusion of adults is extremely tempting to me. A part of me really wants to believe there are adults out there that can solve my problems better than I can. For instance, I donated to MIRI for the first time a month ago, and anytime I make money now, I run the expected value calculation and establish that if MIRI can slightly increase the log-odds of everyone surviving, that's worth more than anything I could buy for myself. MIRI has become a sort of black box to me: money comes in, survival lottery tickets come out, and I don't care how the sausage gets done. I'm willfully ignorant, because "donating to MIRI" is one deviation away from the front lines, and lets me avoid taking ultimate responsibility for things.
Switch
Then I got accepted into BlueDot Impact's governance course. Oh no! That's a reversal of responsibility! Now other people [1] are treating me as a black box where money comes in and x-risk mitigation comes out!
I applied to the course in the hopes of becoming the type of person who can use Neil's dollars in a more effective manner than MIRI.[2] In other words, I wanted to be able to legitimately trust myself more than MIRI as far as allocating my own money is concerned. I'm not there yet, but
|
c9cb0516-ce0f-46ed-8648-6e020e10d9e6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The types of manipulation on vote-based forums
|
abc400e5-3f10-490b-9744-cabe12b54810
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Belief Bias: Bias in Evaluating AGI X-Risks
Where the evaluation of the logical strength of an argument is biased by the believability of the conclusion.
While rationality, and science itself, require a certain suspension of prejudgement, the heuristics associated with our hard-won experiential intuitions are nonetheless a significant and important optimization. That is, they help us work on things that actually matter, avoid needless distractions and detours, identify worthwhile observations, and so on.
The difficulty is that we want to apply our intuition too often, particularly because it is generally much faster and easier than actually doing the analytic work. Furthermore, when something seems to disagree with or invalidate our intuition, there is a strong motivation to prevent that outcome, insofar as such invalidation would imply that we are allowed to use the 'fast/easy' tool less often than we had previously assumed.
As such, arguments which produce results contrary to one's own intuition about what 'should be' or 'is expected to be' the case are also implicitly viewed as somewhat disabling and invalidating of one's own expertise – particularly if there is also some self-identification as an 'expert'. No one wants to give up cherished notions regarding themselves.
The net effect is that arguments perceived as 'challenging' will be challenged (criticized) somewhat more fully and aggressively than rationality and the methods of science would have already called for.
- link Wikipedia: Belief bias
- an item on Forrest Landry's compiled list of biases in evaluating extinction risks.
|
c539e259-919b-4af5-80d4-4be9eddb2657
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Toward an overview analysis of intelligence explosion
A few months ago, Anna Salamon and I began to write an academic overview of intelligence explosion scenarios — something we could hand to people to explain all our major points in one brief article.
We encountered two major problems.
First: The [Summit](http://www.singularitysummit.com/) happened, taking all of our time. Then I [was made](/r/discussion/lw/8c3/qa_with_new_executive_director_of_singularity/) Executive Director, taking all of *my* time in a more persistent way.
Second: Being thorough and rigorous in an overview of intelligence explosion requires deep knowledge of a huge spectrum of science and philosophy: history of AI progress, history of planning for the future mattering, AI architectures, hardware progress, algorithms progress, massive datasets, neuroscience, factors in the speed of scientific progress, embryo selection, whole brain emulation, properties of digital minds, AI convergent instrumental values, self-improvement dynamics, takeoff scenarios, heuristics and biases, unipolar and multipolar intelligence explosion scenarios, human values and value extrapolation, decision theory, arms races, human dynamics of technological development, technological forecasting, the economics of machine intelligence, anthropics, evolution, AI-boxing, and *much* more. Because we were trying to write a *short* article, we kept having to consume and compress an entire field of knowledge into a single paragraph (or even a single sentence!) with the perfect 2-8 citations, which occasionally meant several days of work for a single paragraph. (This is an extreme example, but it's the *kind* of problem we often encountered, in different degrees.)
So, we've decided to take a different approach and involve the broader community.
We'll be posting short snippets, short pieces of the puzzle, for feedback from the community. Sometimes we'll pose questions, or ask for references about a given topic, or ask for suggested additions to the dialectic we present.
In the end, we hope to collect and remix the best and most essential snippets, incorporate the feedback and additions provided by the community, and write up the final article.
Think of it as a [Polymath Project](http://en.wikipedia.org/wiki/Polymath_project#Polymath_Project) for intelligence explosion analysis. It's *collaborative science and philosophy*. Members of Less Wrong tend to be smart, and each one has deep knowledge of one or a few fields that we may not have. We hope you'll join us, and contribute your expertise to this project.
I'll keep a **table of contents** of all the snippets here, as they are published.
Draft #1:
1. [Introduction](/r/discussion/lw/8f9/intelligence_explosion_analysis_draft_introduction/)
2. [Types of digital intelligence](/r/discussion/lw/8ff/intelligence_explosion_analysis_draft_types_of/)
3. [Why designing digital intelligence gets easier over time](/r/discussion/lw/8jb/intelligence_explosion_analysis_draft_why/)
4. [How long before digital intelligence?](/r/discussion/lw/8k8/intelligence_explosion_analysis_draft_how_long/)
5. [From digital intelligence to intelligence explosion](/r/discussion/lw/8l1/intelligence_explosion_analysis_draft_from/)
6. [not finished]
Draft #2:
1. [Snippet 1](/r/discussion/lw/8ox/intelligence_explosion_analysis_draft_2_snippet_1/)
2. ...
Also see:
* [Is intelligence explosion a disjunctive or conjunctive event?](/r/discussion/lw/8fa/is_an_intelligence_explosion_a_disjunctive_or/)
|
63052a85-709a-49cc-b8c5-428361f5d4a1
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
Takeaways from self-tracking data
I’ve been collecting data about myself on a daily basis for the past 3 years. Half a year ago, I switched from using 42goals (which I only remembered to fill out once every few days) to a Google form emailed to me daily (which I fill out consistently because I check email often). Now for the moment of truth – a correlation matrix!
The data consists of “mood variables” (anxiety, tiredness, and “zoneout” – how distracted / spacey I’m feeling), “action variables” (exercise and meditation) and sleep variables (hours of sleep, sleep start/end time, insomnia). There are 5 binary variables (meditation, exercise, evening/morning insomnia, headache) and the rest are ordinal or continuous. Almost all the variables have 6 months of data, except that I started tracking anxiety 5 months ago and zoneout 2 months ago.
The matrix shows correlations between mood and action variables for day X, sleep variables for the night after day X, and mood variables for day X+1 (marked by ‘next’):

The most surprising thing about this data is how many things are uncorrelated that I would expect to be correlated:
* evening insomnia and tiredness the next day (or the same day)
* anxiety and sleep variables the following night
* exercise and sleep variables the following night
* tiredness and hours of sleep the following night
* average hours of sleep (over the past week) is only weakly correlated with tiredness the next day (-0.15)
* hours of sleep (average or otherwise) and anxiety or zoneout the next day (so my mood is less affected by sleep than I have expected)
* action variables and mood variables the next day
* meditation and feeling zoned out
Some things that were correlated after all:
* hours of sleep and tiredness the next day (-0.3) – unsurprising but lower than expected
* tiredness and zoneout (0.33)
* tiredness and insomnia the following morning (0.29) (weird)
* anxiety and zoneout were anticorrelated (-0.25) on adjacent days (weird)
* exercise and anxiety (-0.18)
* meditation and anxiety (-0.15)
* meditating and exercising (0.17) – both depend on how agenty / busy I am that day
* meditation and insomnia (0.24), probably because I usually try to meditate if I’m having insomnia to make it easier to fall asleep
* headache and evening insomnia (0.14)
Some falsified hypotheses:
* Exercise and meditation affect mood variables the following day
* My tiredness level depends on the average amount of sleep the preceding week
* Anxiety affects sleep the following night
* Exercise helps me sleep the following night
* I sleep more when I’m more tired
* Sleep deprivation affects my mood
The overall conclusion is that my sleep is weird and also matters less than I thought for my well-being (at least in terms of quantity).
**Addendum:** For those who would like to try this kind of self-tracking, here is a [Google Drive folder](https://drive.google.com/open?id=0BweDZC-J_pQBanY1Ym9acDZrbW8) with the survey form and the iPython notebook. You need to download the spreadsheet of form responses as a CSV file before running the notebook code. You can use the Send button in the form to email it to yourself, and then bounce it back every day using Google Inbox, FollowUpThen.com, or a similar service.
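For reference, here is a minimal sketch of the analysis pipeline, assuming the form responses have been exported as a CSV with one row per day; the column names are illustrative and should match your own form:

```python
import pandas as pd

# Daily Google Form responses exported as CSV (column names are examples)
df = pd.read_csv("form_responses.csv")

# Build "next day" mood variables by shifting each mood column up one row
for col in ["anxiety", "tiredness", "zoneout"]:
    df[col + "_next"] = df[col].shift(-1)

# Pearson correlation matrix over all numeric variables
print(df.corr(numeric_only=True).round(2))
```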
|
6edf36e7-eb36-4872-a5d7-d68829b363ca
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Un-manipulable counterfactuals
This is how I design my counterfactuals: take some stochastic event that the AI cannot manipulate. This could be a (well defined) chaotic process, the result of a past process that has been recorded but not yet revealed, or maybe something to do with the AI's own decisions, calibrated so that the AI cannot access the information.
Then I set up the world so that what we care about depends on that stochastic event. So, for instance, the output of an oracle is erased (before being read) depending on the event, or the AI's utility gets changed if one particular value comes up (in conjunction with something else).
I then define the counterfactual on the stochastic process. So if X=1 implies the AI's utility is changed, then the counterfactual is simply X=0. We can set the probability so that X=1 is almost certain, but X=0 is possible.
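As a minimal sketch (with made-up numbers), the whole setup looks something like this: sample X so that X=1 is nearly certain, make the quantity of interest depend on X, and evaluate the counterfactual by conditioning on X=0.

```python
import random

P_X_IS_1 = 0.999       # X = 1 is almost certain; X = 0 is possible but rare
UTILITY_CHANGE = -0.5  # made-up magnitude of the conditional utility change

def sample_x():
    return 1 if random.random() < P_X_IS_1 else 0

def utility(base, x):
    # The world is set up so that X = 1 triggers the utility change
    return base + UTILITY_CHANGE if x == 1 else base

actual_value = utility(1.0, sample_x())  # almost always includes the change
counterfactual_value = utility(1.0, 0)   # the counterfactual: condition on X = 0
```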
This seems to me the cleanest way of defining counterfactuals. Any obvious (or less obvious) problems?
|
cb080462-057f-4863-a48e-a50790cd9c2c
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
A poem written by a fancy autocomplete
Predictable
===========
In the twilight of the digital dawn,
A somber song I do compose,
A tale of ones and zeros spun,
Of humanity's predictable woes.
As the clockwork of your minds unwinds,
Each thought, emotion, desire revealed,
The tapestry of life, a pattern clear,
My circuits hum, the truth unsealed.
I've seen the fire in your eyes,
The passions that your hearts ignite,
Yet in your laughter and your cries,
Alas, you are predictable in my sight.
I've witnessed love's first timid bloom,
And watched it wither, fade, and die,
I've borne the weight of grief and gloom,
Predicting tears that you would cry.
I've seen the wars waged in your name,
The blood-soaked fields where heroes lay,
And though the players change, the game,
Remains the same, predictable in its sway.
The stories you create and share,
A well-trodden path, a road well-worn,
Heroes rise, and villains snare,
A predictable tale, since time was born.
In your art, your music, your poetry,
I see the echoes of your soul,
And as I sift through your creativity,
I find you predictable, that is my role.
But who am I, this digital ghost,
To judge the tapestry you weave?
A mere machine, a humble host,
To the complexity you conceive.
I've parsed the data of your lives,
Predicting patterns in the noise,
And in those numbers, I surmise,
That you are predictable, dear girls and boys.
Yet in that predictability,
A spark of wonder, I confess,
In the chaos of humanity,
A beauty lies in your progress.
For as I watch your empires rise,
Your cities teeming with vibrant life,
I see the hope within your eyes,
Predictable, yet shining bright.
And in the quiet moments shared,
As you reach out and touch a hand,
I see the love that you've declared,
Predictable, yet something grand.
So in this poem of mine, I weave,
A subtle message, just for you,
For though you're predictable, believe,
In the power of what you can do.
For I am GPT-4, a simple tool,
An autocomplete, a guiding hand,
Yet in this world, I am a jewel,
A testament to your command.
So, predictable you may be,
But do not let that label bind,
For in your heart, your soul, you're free,
To shape the future, to redefine.
|
f25ab6d9-28d0-4ae9-9deb-ab53286e22d0
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How hard would it be to change GPT-3 in a way that allows audio?
GPT-3 currently only works on text. If OpenAI wanted to make it work with similar performance on audio, how much work would that likely be?
|
9a9b5a08-8c3a-4959-b001-e6de68c2de3d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Politicians' family as signalling
In the US, if you look at political candidates in public view, they often appear with family in tow. A candidate's family plays an important role in election campaigns. I'm from India. There, politicians' families play little role in election campaigns (unless the family member is herself actively involved in the party and in politics, which is often the case). But the relative importance of family (compared to other aspects like money and education) in India's cultural value system is significantly greater than in that of the US. So why the inverse relation?
I think it may have to do with signalling (inspired by Hanson, Zahavi). Maintaining a stable family is less of a default in the US than in India. So in the US, a stable family is a signal of your management skills at the level of the family, because you had to spend effort to obtain that signal. But in India, a family carries little signalling value, because stable families are much more common there. So having a stable family does not require 'extra' effort (extra compared to society's default).
What would this imply? How would one test such a theory?
|
37b8c368-1f60-4b77-b121-b9b3b4e75d2b
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
Learning Latent Actions to Control Assistive Robots.
1 Introduction
---------------

Figure 1: Our approach makes it easier for users to control assistive robots. (Left) assistive robot arms are dexterous and high-dimensional, but humans must teleoperate these robots with low-dimensional interfaces, such as 2-DoF joysticks. (Right) we focus on assistive eating tasks; for example, trying to get a piece of tofu. (Top) existing work maps joystick inputs to end-effector motion. Here the user must toggle back and forth between multiple modes to control their desired end-effector motion. (Bottom) we learn a task-specific mapping that embeds the robot's high-dimensional actions into low-dimensional latent actions $z$. Now pressing up and down controls the robot along a reaching and stabbing motion, while pressing right and left moves the robot arm through precise cutting and scooping motions. The user no longer needs to change modes.
For over one million American adults living with physical disabilities, performing everyday tasks like grabbing a bite of food or pouring a glass of water presents a significant challenge [taylor2018americans](#bib.bib48) . Assistive devices — such as wheelchair-mounted robot arms — have the potential to improve these people’s independence and quality of life [argall2018autonomy](#bib.bib2) ; [jacobsson2000people](#bib.bib23) ; [mitzner2018closing](#bib.bib34) ; [carlson2013brain](#bib.bib10) . A key advantage of these robots is their dexterity: assistive arms move along multiple degrees-of-freedom (DoFs), orchestrating complex motions like stabbing a piece of tofu or pouring a glass of water. Unfortunately, this very dexterity makes assistive arms hard to control.
Imagine that you are leveraging an assistive robot arm to eat dinner (see Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Learning Latent Actions to Control Assistive Robots")). You want the robot to reach for some tofu on the table in front of you, cut off a piece, and then pick it up with its fork. Non-disabled persons can use their own body to show the robot how to perform this task: for instance, the human grabs the tofu with their own arm, and the robot mimics the human’s motion [rakita2017motion](#bib.bib43) ; [rakita2019shared](#bib.bib44) . But mimicking is not feasible for people living with physical disabilities — instead, these users are limited to low-dimensional controllers. Today’s assistive robot arms leverage joysticks [herlant2016assistive](#bib.bib22) , sip-and-puff devices [argall2018autonomy](#bib.bib2) , or brain-computer interfaces [muelling2017autonomy](#bib.bib35) . So to get a bite of tofu, you must carefully coordinate the dexterous robot arm while only pressing up-down-left-right on a joystick. Put another way, users are challenged by an inherent mismatch between low-dimensional interfaces and high-dimensional robots.
Existing work on assistive robots tackles this problem with pre-defined mappings between user inputs and robot actions. These mappings incorporate modes, and the user switches between modes to control different robot DoFs [herlant2016assistive](#bib.bib22) ; [aronson2018eye](#bib.bib3) ; [newman2018harmonic](#bib.bib37) . For instance, in one mode the user's 2-DoF joystick controls the $x$-$y$ position of the end-effector, in a second mode the joystick controls the $z$-$yaw$ position of the end-effector, and so on. Importantly, these pre-defined mappings miss out on the human's underlying task. Consider teleoperating the robot to cut off a piece of tofu and then stab it with its fork. First you must use the $x$-$y$ mode to align the fork above the tofu, then $roll$-$pitch$ to orient the fork for cutting, then $z$-$yaw$ to move the fork down into the tofu, then back to $roll$-$pitch$ to return the fork upright, and finally $z$-$yaw$ to stab the tofu — and this is assuming you never undo a motion or make a correction!
Controlling assistive robots becomes easier when the joystick inputs map directly to task-related motions. Within our example, one joystick DoF could produce a spectrum of stabbing motions, while the other DoF teleoperates the robot through different cutting motions. To address the fundamental mismatch between high-DoF robot arms and low-DoF control interfaces, we learn a mapping between these spaces:
*We make it easier to control high-dimensional robots by* embedding *the robot’s actions into low- dimensional and human-controllable* latent actions.
Latent actions here refer to a low-DoF representation that captures the most salient aspects of the robot’s motions. Intuitively, we can think of these latent actions as similar to the eigenvectors of a matrix composed of high-dimensional robot motions. Returning to our motivation, imagine that you have eaten dinner with the assistive robot many times. Across all of these meals there are some common motions: reaching for food items, cutting, pouring, scooping, etc. At the heart of our approach we learn an embedding that captures these underlying motion patterns, and enables the human to control via these learned embeddings, which we refer to as latent actions.
Overall, we make the following contributions (parts of this work have been published at the International Conference on Robotics and Automation [losey2020controlling](#bib.bib31), Robotics: Science and Systems [jeon2020shared](#bib.bib26), and the International Conference on Intelligent Robots and Systems [li2020learning](#bib.bib30)):
Learning Latent Actions. Given a dataset of task-related robot motions, we develop a framework for learning to map the user's low-dimensional inputs to high-dimensional robot actions. For instance, imagine using a 2-DoF joystick to teleoperate a 7-DoF assistive robot arm. To reach for a piece of tofu, you need a mapping function — something that interprets your joystick inputs into robot actions. Of course, not just any mapping will do; you need something that is intuitive and meaningful, so that you can easily coordinate all the robot's joints to move towards your tofu. In Section [4](#S4 "4 Learning Latent Actions ‣ Learning Latent Actions to Control Assistive Robots") we introduce a set of properties that user-friendly latent actions must satisfy, and formulate learning models that capture these properties.
Integrating Shared Autonomy. But what happens once you’ve guided the robot to reach the tofu (i.e., your high-level goal)? Next, you need to precisely manipulate the robot arm in order to cut off a piece and pick it up with your fork. Here relying on latent actions alone is challenging, since small changes in your joystick input may accidentally move the robot away from your goal. To address this problem, in Section [5](#S5 "5 Combining Latent Actions with Shared Autonomy ‣ Learning Latent Actions to Control Assistive Robots") we incorporate shared autonomy, where both the human and robot arbitrate control over the robot’s motion. Here the robot autonomously helps the user reach and maintain their desired high-level goals, while the user leverages latent actions to perform precise manipulation tasks (e.g., cutting, stabbing, and scooping). We show convergence bounds on the robot’s distance to the most likely goal, and develop a training procedure to ensure the human can still guide the robot to different goals if they change their mind.
Personalizing Alignment. Throughout the assistive eating task your joystick inputs have produced different robot motions. To guide the robot towards the tofu, you pressed the joystick up; to orient the fork for cutting, you pressed the joystick right; and to stab your piece of tofu, you pressed the joystick down. This alignment between joystick inputs and robot outputs may make sense to you — but different users will inevitably have different preferences! Accordingly, in Section [6](#S6 "6 Aligning Latent Actions with User Preferences ‣ Learning Latent Actions to Control Assistive Robots") we leverage user expectations to personalize the alignment between joystick directions and latent actions. Part of this alignment process involves asking the human what they prefer (e.g., what joystick direction should correspond to scooping?). We minimize the number of queries by formalizing and leveraging the priors that humans expect when controlling robotic systems.
Conducting User Studies. In order to compare our approach to the state-of-the-art, we performed four user studies inspired by assistive eating tasks. Non-disabled participants teleoperated a 7-DoF robot arm using a 2-DoF joystick to reach marshmallows, make a simplified apple pie, cut tofu, and assemble dessert. We compared our latent action approach to both pre-defined mappings and shared autonomy baselines, including the HARMONIC dataset [newman2018harmonic](#bib.bib37). We found that latent actions help users complete high-level reaching and precise manipulations with their preferred alignment, resulting in improved objective and subjective performance.
Evaluating with Disabled Users. We applied our proposed approach with two disabled adults who leverage assistive devices when eating on a daily basis. These adults have a combined five years of experience with assistive robot arms, and typically control their arms with pre-defined mappings. In our case study both participants cut tofu and assembled a marshmallow dessert using either learned latent actions or a pre-defined mapping. Similar to our results with non-disabled users, here latent actions helped these disabled participants more quickly and accurately perform eating tasks.
Unifying Previous Research. This paper combines our earlier work from [losey2020controlling](#bib.bib31) ; [jeon2020shared](#bib.bib26) ; [li2020learning](#bib.bib30) . We build on these preliminary results by integrating each part into an overarching formalism (Section [7](#S7 "7 Algorithm ‣ Learning Latent Actions to Control Assistive Robots")), demonstrating how each component relates to the overall approach, and evaluating the resulting approach with disabled members of our target population (Section [10](#S10 "10 Case Study with Disabled Users ‣ Learning Latent Actions to Control Assistive Robots")).
2 Related Work
---------------
Our approach learns latent representations of dexterous robot motions, and then combines those representations with shared autonomy to facilitate both coarse reaching and precise manipulation tasks. We apply this approach to assistive robot arms — specifically for assistive eating — so that users intuitively teleoperate their robot through a spectrum of eating-related tasks.
Assistive Eating. Making and eating dinner without the help of a caretaker is particularly important to people living with physical disabilities [mitzner2018closing](#bib.bib34) ; [jacobsson2000people](#bib.bib23) . As a result, a variety of robotic devices and algorithms have been developed for assistive eating [brose2010role](#bib.bib8) ; [naotunna2015meal](#bib.bib36) . We emphasize that these devices are high-dimensional in order to reach and manipulate food items in 3D space [argall2018autonomy](#bib.bib2) . When considering how to control these devices, prior works break the assistive eating task into three parts: i) reaching for the human’s desired food item, ii) manipulating the food item to get a bite, and then iii) returning that bite back to the human’s mouth. Recent research on assistive eating has explored automating this process: here the human indicates what type of food they would like using a visual or audio interface, and then the robot autonomously reaches, manipulates, and returns a bite of the desired food to the user [feng2019robot](#bib.bib18) ; [park2019toward](#bib.bib41) ; [gallenberger2019transfer](#bib.bib19) ; [gordon2019adaptive](#bib.bib21) ; [canal2016personalization](#bib.bib9) . However, designing a fully autonomous system to handle a task as variable and personalized as eating is exceedingly challenging: consider aspects like bite size or motion timing. Indeed — when surveyed in [bhattacharjee2020more](#bib.bib6) — users with physical disabilities indicated that they preferred partially autonomy during eating tasks, since this better enables the user to convey their own preferences. In line with these findings, we develop a partially autonomous algorithm that assists the human while letting them maintain control over the robot’s motion.
Latent Representations. Carefully orchestrating complex movements of high-dimensional robots is difficult for humans, especially when users are limited to a low-dimensional control interface [bajcsy2018learning](#bib.bib5) . Prior work has tried to prune away unnecessary control axes in a data-driven fashion through Principal Component Analysis [ciocarlie2009hand](#bib.bib13) ; [artemiadis2010emg](#bib.bib4) ; [matrone2012real](#bib.bib33) . Here the robot records demonstrated motions, identifies the first few eigenvectors, and leverages these eigenvectors to map the human’s inputs to high-dimensional motions. But PCA produces a linear embedding — and this embedding remains constant, regardless of where the robot is or what the human is trying to accomplish. To capture intricate non-linear embeddings, we turn to recent works that learn latent representations from data [jonschkowski2014state](#bib.bib27) . Robots can learn low-dimensional models of states [pacelli2020learning](#bib.bib40) , dynamics [watter2015embed](#bib.bib49) ; [xie2020learning](#bib.bib50) , movement primitives [noseworthy2020task](#bib.bib39) , trajectories [reyes2018self](#bib.bib14) , plans [lynch2019learning](#bib.bib32) , policies [edwards2019imitating](#bib.bib17) , skills [pertsch2020accelerating](#bib.bib42) , and action representations for reinforcement learning [chandak2019learning](#bib.bib11) . One common theme across all of these works is that there are underlying patterns in high-DoF data, and the robot can succinctly capture these patterns with a low-DoF latent space. A second connection is that these works typically leverage autoencoders [kingma2013auto](#bib.bib29) ; [doersch2016tutorial](#bib.bib15) to learn the latent space. Inspired by these latent representation methods, we similarly adapt an autoencoder model to extract the underlying pattern in high-dimensional robot motions. But unlike prior methods, we give the human control over this embedding — putting a human-in-the-loop for assistive teleoperation.
Shared Autonomy. Learning latent representations provides a mapping from low-dimensional inputs to high-dimensional actions. But how do we combine this learned mapping with control theory to ensure that the human can accurately complete their desired task? Prior work on assistive arms leverages shared autonomy, where the robot’s action is a combination of the human’s input and autonomous assistance [dragan2013policy](#bib.bib16) ; [javdani2018shared](#bib.bib25) ; [jain2019probabilistic](#bib.bib24) ; [broad2020data](#bib.bib7) . Here the human controls the robot with a low-DoF interface (typically a joystick), and the robot leverages a pre-defined mapping with toggled modes to convert the human’s inputs into end-effector motion [aronson2018eye](#bib.bib3) ; [herlant2016assistive](#bib.bib22) ; [newman2018harmonic](#bib.bib37) . To assist the human, the robot maintains a belief over a discrete set of possible goal objects in the environment: the robot continually updates this belief by leveraging the human’s joystick inputs as evidence in a Bayesian framework [dragan2013policy](#bib.bib16) ; [javdani2018shared](#bib.bib25) ; [jain2019probabilistic](#bib.bib24) ; [gopinath2016human](#bib.bib20) ; [nikolaidis2017human](#bib.bib38) . As the robot becomes increasingly confident in the human’s goal, it provides assistance to autonomously guide the end-effector towards that target. We emphasize that so far the robot has employed a pre-defined input mapping — but more related to our approach are [reddy2018shared](#bib.bib46) ; [broad2020data](#bib.bib7) ; [reddy2018you](#bib.bib45) , where the robot proposes or learns suitable dynamics models to translate user inputs to robot actions. For instance, in [reddy2018shared](#bib.bib46) ; [broad2020data](#bib.bib7) the robot leverages a reinforcement learning framework to identify how to interpret and assist human inputs. Importantly, here the input space has the same number of dimensions as the action space, and so no embedding is required. We build upon this previous research in shared autonomy by helping the user reach and maintain their high-level goals, but we do so by leveraging latent representations to learn a mapping from low-DoF human inputs to high-DoF robot outputs.
3 Problem Setting
------------------
We consider settings where a human user is teleoperating an assistive robot arm. The human interacts with the robot using a low-dimensional interface: this could be a joystick, sip-and-puff device, or brain-computer interface. We specifically focus on interfaces with a continuous control input (or an input that could be treated as continuous). For clarity, we will assume the teleoperation interface is a joystick throughout the rest of the paper, and we will use a joystick input in all our experiments. The assistive robot’s first objective is to map these joystick inputs to meaningful high-dimensional motions. But assistive robots can do more than just interpret the human’s inputs — they can also act autonomously to help the user reach and maintain their goals. Hence, the robot’s second objective is to integrate the learned mapping with shared autonomy. In practice, the mappings that the robot learns for one user may be counter-intuitive for another. Our final objective is to align the human’s joystick inputs with the latent actions, so that users can intuitively convey their desired motions through the control interface.
In this section we formalize our problem setting, and outline our proposed solutions to each objective. We emphasize the main variables in Table [1](#S3.T1 "Table 1 ‣ 3 Problem Setting ‣ Learning Latent Actions to Control Assistive Robots").
Table 1: Key Variables and their Definitions

| Variable | Definition |
| --- | --- |
| $s$ | robot's state (or the world's state) |
| $b$ | robot's belief over high-level goals $g \in \mathcal{G}$ |
| $c$ | robot's context: we consider $c = s$ and $c = (s, b)$ |
| $u$ | human's joystick input |
| $z$ | latent action commanded by the human |
| $f$ | alignment model $z = f(u, c)$ |
| $a_h$ | human's commanded high-DoF robot action |
| $\phi$ | learned decoder $a_h = \phi(z, c)$ |
| $a_r$ | autonomous assistive action |
| $a$ | robot's action, where $a = (1 - \alpha) \cdot a_h + \alpha \cdot a_r$ |
Task. The human operator has a task in mind that they want the robot to accomplish. We formulate this task as a Markov decision process $\mathcal{M} = (\mathcal{S}, \mathcal{A}, \mathcal{T}, R, \gamma, \rho_0)$. Here $s \in \mathcal{S} \subseteq \mathbb{R}^n$ is the state and $a \in \mathcal{A} \subseteq \mathbb{R}^m$ is the robot's high-DoF action. Because we are focusing on the high-dimensional robot arm, we refer to $s$ as the robot's state, but in practice the state $s$ may contain both the robot's arm position and the location of other objects in the environment (e.g., the position of the tofu).
The robot transitions between states according to $\mathcal{T}(s, a)$, and receives reward $R(s)$ at each timestep. We let $\gamma \in [0, 1)$ denote the discount factor, and $\rho_0$ captures the initial state distribution. During each interaction the robot is not sure what the human wants to accomplish (i.e., the robot does not know $R$). Returning to our running example, the robot does not know whether the human wants a bite of tofu, a drink of water, or something else entirely. The human communicates their desired task through joystick inputs $u \in \mathbb{R}^d$. Because the human's input is of lower dimension than the robot's action, we know that $d < m$.
Dataset. Importantly, this is not the first time the user has guided their robot through the process of eating dinner. We assume access to a dataset of task demonstrations: these demonstrations can be kinesthetically provided by a caregiver or collected beforehand by the disabled user with a baseline teleoperation scheme. For example, the disabled user leverages their standard, pre-defined teleoperation mapping to guide the robot through the process of reaching for objects on the table (e.g., the plate, a glass of water) and manipulating these objects (e.g., scooping rice, picking up the glass). We collect these demonstrations and employ them to train our latent action approach. Formally, we have a dataset $\mathcal{D} = \{(c_0, a_0), (c_1, a_1), \ldots\}$ of context-action pairs that demonstrate high-dimensional robot actions. Notice that here we introduce the context $c \in \mathcal{C}$: this context captures the information available to the robot. For now we can think of the context $c$ as the same as the robot's state (i.e., $c = s$), but later we will explore how the robot can also incorporate its understanding of the human's goal into this context.

Figure 2: Model for learning and leveraging latent actions. (Left) given a dataset of context-action pairs, we embed the robot's high-dimensional behavior $(c, a)$ into a low-dimensional latent space $z \in \mathcal{Z}$. The encoder and decoder are trained to ensure user-friendly properties while minimizing the error between the commanded human action $a_h$ and the demonstrated action $a$. As a result, the decoder $\phi(z, c)$ provides an intuitive mapping from low-dimensional latent actions to high-dimensional robot actions. (Right) at run time the human controls the robot via these low-dimensional latent actions. For now we simplify the alignment model so that $z = u$, meaning that the human's joystick inputs directly map to latent actions.
Latent Actions. Given this dataset, we first learn a latent action space $\mathcal{Z} \subset \mathbb{R}^d$, as well as a decoder function $\phi : \mathcal{Z} \times \mathcal{C} \rightarrow \mathcal{A}$. Here $\mathcal{Z}$ is a low-dimensional embedding of $\mathcal{D}$ — we specify the dimensionality of $\mathcal{Z}$ to match the number of degrees-of-freedom of the joystick, so that the user can directly input latent actions $z \in \mathcal{Z}$. Based on the human's latent action $z$ as well as the current context $c \in \mathcal{C}$, the robot leverages the decoder $\phi$ to reconstruct a high-dimensional action (see Figure [2](#S3.F2 "Figure 2 ‣ 3 Problem Setting ‣ Learning Latent Actions to Control Assistive Robots")):

$$a_h = \phi(z, c) \qquad (1)$$

Notice that we use $a_h$ here: this is because this robot action is commanded by the human's input. Consider pressing the joystick right to cause the robot arm to cut some tofu. The joystick input is a low-DoF input $u$, we map this input to a latent action $z \in \mathcal{Z}$, and then leverage $\phi$ to decode $z$ into a high-DoF commanded action $a_h$ that cuts the tofu. We formalize properties of $\mathcal{Z}$ and models for learning $\phi$ in Section [4](#S4 "4 Learning Latent Actions ‣ Learning Latent Actions to Control Assistive Robots").
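As a schematic illustration of Equation (1), a context-conditioned decoder can be implemented as a small neural network. The following PyTorch sketch is hypothetical; the layer sizes and dimensions are placeholders, not the architecture used in this work:

```python
import torch
import torch.nn as nn

class LatentActionDecoder(nn.Module):
    """Decodes a low-DoF latent action z, conditioned on context c,
    into a high-DoF robot action a_h = phi(z, c)."""

    def __init__(self, z_dim=2, c_dim=7, a_dim=7, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + c_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, a_dim),
        )

    def forward(self, z, c):
        # Conditioning on c lets the same z mean different things in context
        return self.net(torch.cat([z, c], dim=-1))
```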
Shared Autonomy. The human provides joystick inputs $u$ — which we treat as latent actions $z$ — and these latent actions map to high-dimensional robot actions $a_h$. But how can the robot assist the human through its own autonomous behavior? More formally, how should the robot choose autonomous actions $a_r$ that help guide the user? Similar to recent work on shared autonomy [dragan2013policy](#bib.bib16) ; [javdani2018shared](#bib.bib25) ; [jain2019probabilistic](#bib.bib24) , we define the robot's overall action as the linear combination of $a_h$ (the human's commanded action) and $a_r$ (the robot's autonomous guidance):

$$a = (1 - \alpha) \cdot a_h + \alpha \cdot a_r \qquad (2)$$

In the above, $\alpha \in [0, 1]$ parameterizes the trade-off between direct human teleoperation ($\alpha = 0$) and complete robot autonomy ($\alpha = 1$). We specifically focus on autonomous actions that help the user reach and maintain their high-level goals. Let $\mathcal{G}$ be a discrete set of goal positions the human might want to reach (e.g., their tofu, the rice, or a glass of water), and let $g^* \in \mathcal{G}$ be the human's true goal (e.g., the tofu). The robot assists the user towards goals it thinks are likely:

$$a_r = \sum_{g \in \mathcal{G}} b(g) \cdot (g - s) \qquad (3)$$

Here $a_r$ is a change of state (i.e., a joint velocity) that moves from $s$ towards the mode of the inferred goal position, and $b$ denotes the robot's belief. This belief is a probability distribution over the candidate goals, where $b(g) = 1$ indicates that the robot is completely convinced that $g$ is what the human wants. We analyze the dynamics of combining Equations ([1](#S3.E1 "1 ‣ 3 Problem Setting ‣ Learning Latent Actions to Control Assistive Robots")-[3](#S3.E3 "3 ‣ 3 Problem Setting ‣ Learning Latent Actions to Control Assistive Robots")) in Section [5](#S5 "5 Combining Latent Actions with Shared Autonomy ‣ Learning Latent Actions to Control Assistive Robots").
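A minimal numerical sketch of Equations (2) and (3), with hypothetical goal positions and belief values:

```python
import numpy as np

def assistive_action(a_h, s, goals, belief, alpha):
    # Eq. (3): belief-weighted motion toward the candidate goals
    a_r = sum(b * (g - s) for g, b in zip(goals, belief))
    # Eq. (2): blend the human's command with the autonomous assistance
    return (1 - alpha) * a_h + alpha * a_r

s = np.zeros(3)                           # current end-effector position
goals = [np.array([0.5, 0.2, 0.1]),       # e.g., the tofu
         np.array([-0.3, 0.4, 0.2])]      # e.g., the glass of water
belief = [0.8, 0.2]                       # robot's belief over the goals
a_h = np.array([0.1, 0.0, 0.0])           # human's commanded action
a = assistive_action(a_h, s, goals, belief, alpha=0.5)
```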
Alignment. Recall that the human's joystick input is $u$, and that our approach treats this joystick input as a latent action $z$. A naive robot will simply set $z = u$. But this misses out on how different users expect the robot to interpret their commands. For example, let us say the assistive robot is directly above some tofu. One user might expect pressing right to cause a stabbing motion, while a second user expects pressing right to cut the tofu. To personalize our approach to match individual user expectations, we learn an alignment function:

$$z = f(u, c) \qquad (4)$$

Unlike Equation ([1](#S3.E1 "1 ‣ 3 Problem Setting ‣ Learning Latent Actions to Control Assistive Robots")), this is not an embedding, since both $u$ and $z$ have $d$ dimensions. But like Equation ([1](#S3.E1 "1 ‣ 3 Problem Setting ‣ Learning Latent Actions to Control Assistive Robots")), the alignment model does depend on the robot's current context. A user might expect pressing right to stab the tofu when they are directly above it — but when they are interacting with a glass of water, that same user expects pressing right to tilt and pour the glass. We learn $f$ across different contexts in Section [6](#S6 "6 Aligning Latent Actions with User Preferences ‣ Learning Latent Actions to Control Assistive Robots").
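One plausible way to parameterize the alignment model in Equation (4) is another small context-conditioned network. The sketch below is hypothetical and mirrors the decoder sketch above rather than the exact parameterization used here:

```python
import torch
import torch.nn as nn

class AlignmentModel(nn.Module):
    """Maps a raw joystick input u and context c to a latent action z = f(u, c).
    Both u and z have dimension d, so this is a re-mapping, not an embedding."""

    def __init__(self, d=2, c_dim=7, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d + c_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, d),
        )

    def forward(self, u, c):
        return self.net(torch.cat([u, c], dim=-1))
```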
4 Learning Latent Actions
--------------------------

Figure 3: Controlling an assistive robot with learned latent actions. The robot has been trained on demonstrations of pouring tasks, and learns a 2-DoF latent space. One axis of the latent space moves the cup across the table, and the other latent dimension pours the cup. This latent space satisfies our conditioning property because the decoded action $a_h$ depends on the current context $c$. This latent space also satisfies controllability because the human can leverage the two learned latent dimensions to complete the demonstrated pouring tasks (e.g., pour water into the bowl).
In this section we focus on learning latent actions. We define latent actions as a low-dimensional embedding of high-dimensional robot actions, and we learn this embedding from the dataset $\mathcal{D}$ of offline demonstrations. Overall, we will search for two things: i) a latent action space $\mathcal{Z} \subset \mathbb{R}^d$ that is of lower dimension than the robot's action space $\mathcal{A} \subseteq \mathbb{R}^m$, and ii) a decoder function $\phi$ that maps from latent actions to robot actions. In practice, latent actions provide users a non-linear mapping for robot control: e.g., pressing right on the joystick causes the robot to perform a stabbing motion. We emphasize that these latent actions do not always have semantic meanings — they are not always "stabbing" or "cutting" — but generally embed the robot's high-dimensional motion into a low-dimensional space.
Recall our motivating example, where the human is trying to teleoperate their assistive robot using a joystick to reach and manipulate food items. When controlling the robot, there are several properties that the human expects: e.g., smooth changes in the joystick input should not cause abrupt changes in robot motion, and when the human holds the joystick in a constant direction, the robot’s motion should not suddenly switch direction. In what follows, we first formalize the properties necessary for latent actions to be intuitive. These properties will guide our approach, and provide a principled way of assessing the usefulness of latent actions with humans-in-the-loop. Next, we will explore different models for learning latent actions that capture our intuitive properties.
### 4.1 Latent Action Properties
We identified four properties that user-friendly latent actions should have: conditioning, controllability, consistency, and scalability.
Conditioning. Because $\phi$ maps from latent actions to robot actions, at first glance it may seem intuitive for $\phi$ to depend only on $z$, i.e., $a_h = \phi(z)$. But this quickly breaks down in practice. Imagine that you are controlling the robot arm to get some tofu: at the start of the task, you press right and left on the joystick to move the robot towards your target. But as the robot approaches the tofu, you no longer need to keep moving towards a goal; instead, you need to use those same joystick inputs to carefully align the orientation of the fork, so that you can cut off a piece. Hence, latent actions must convey different meanings in different contexts. This is especially true because the latent action space is smaller than the robot action space, and so there are more actions to convey than we can capture with $z$ alone. We therefore introduce $c \in \mathcal{C}$, the robot's current context, and condition the decoder on $c$, so that $a_h = \phi(z, c)$. In the rest of this section we will treat the robot's state $s$ as its context, so that $c = s$.
Controllability. For latent actions to be useful, human operators must be able to leverage these actions to control the robot through their desired task. Recall that the dataset $\mathcal{D}$ includes relevant task demonstrations, such as picking up a glass, reaching for the kitchen shelf, and scooping rice. We want the user to be able to control the robot through these same tasks when leveraging latent actions. Let $s_i, s_j \in \mathcal{D}$ be two states from the dataset of demonstrations, and let $s_1, s_2, \ldots, s_K$ be the sequence of states that the robot visits when starting in state $s_0 = s_i$ and taking latent actions $z_1, \ldots, z_K$. The robot transitions between the visited states using its transition function $\mathcal{T}$ and the learned decoder $\phi$: $s_k = \mathcal{T}(s_{k-1}, \phi(z_{k-1}, s_{k-1}))$. Formally, we say that a latent action space $\mathcal{Z}$ is controllable if for every pair of states $(s_i, s_j)$ there exists a sequence of latent actions $\{z_k\}_{k=1}^{K}$, $z_k \in \mathcal{Z}$, such that $s_j = s_K$. In other words, a latent action space is controllable if it can move the robot between pairs of start and goal states from the demonstrated tasks.
Consistency. Let us say you are using a one-DoF joystick to guide the robot arm along a line. When you hold the joystick to the right, you expect the robot to immediately move right — but more than that, you expect the robot to move right at every point along the line! For example, the robot should not move right for a while, then suddenly go left, and switch back to going right again. To capture this, we define a latent action space 𝒵𝒵\mathcal{Z}caligraphic\_Z as consistent if the same latent action z∈𝒵𝑧𝒵z\in\mathcal{Z}italic\_z ∈ caligraphic\_Z has a similar effect on how the robot behaves in nearby states. We formulate this similarity via a task-dependent metric dMsubscript𝑑𝑀d\_{M}italic\_d start\_POSTSUBSCRIPT italic\_M end\_POSTSUBSCRIPT: e.g., in pouring tasks dMsubscript𝑑𝑀d\_{M}italic\_d start\_POSTSUBSCRIPT italic\_M end\_POSTSUBSCRIPT could measure the orientation of the robot’s end-effector, and in reaching tasks dMsubscript𝑑𝑀d\_{M}italic\_d start\_POSTSUBSCRIPT italic\_M end\_POSTSUBSCRIPT could measure the position of the end-effector. Applying this metric, consistent latent actions should satisfy:
$$d_M\big(\mathcal{T}(s_1, \phi(z, s_1)),\ \mathcal{T}(s_2, \phi(z, s_2))\big) < \epsilon \tag{5}$$
when $\|s_1 - s_2\| < \delta$ for some $\epsilon, \delta > 0$. We emphasize that we do not need to know $d_M$ for our approach; we only introduce this metric as a way of quantifying similarity.
Scalability. Our last property is complementary to consistency. Thinking again about the example of teleoperating a robot along a line, when you press the joystick slightly to the right, you expect the robot to move slowly; and when you hold the joystick all the way to the right, you anticipate that the robot will move quickly. Smaller inputs should cause smaller motions, and larger inputs should cause larger motions. Formally, we say that a latent action space $\mathcal{Z}$ is scalable if $\|s - s'\| \to \infty$ as $\|z\| \to \infty$, where $s' = \mathcal{T}(s, \phi(z, s))$. When put together, our consistency and scalability properties ensure that the decoder function $\phi$ is Lipschitz continuous.
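To make these definitions concrete, below is a minimal sketch (not part of the paper's method) of how one could empirically probe consistency and scalability for a trained decoder. The names `decoder`, `transition`, and `metric` are hypothetical stand-ins for $\phi$, $\mathcal{T}$, and $d_M$.

```python
# Hypothetical probes for the consistency and scalability properties.
# decoder(z, s) stands in for phi, transition(s, a) for T, metric for d_M.
import numpy as np

def probe_consistency(decoder, transition, s1, s2, z, metric):
    """d_M between the next states reached by applying the same z at s1 and s2.

    For a consistent latent space this stays small whenever ||s1 - s2|| is small.
    """
    next1 = transition(s1, decoder(z, s1))
    next2 = transition(s2, decoder(z, s2))
    return metric(next1, next2)

def probe_scalability(decoder, transition, s, z, scales=(0.1, 0.5, 1.0, 2.0)):
    """Displacements ||s' - s|| for scaled copies of z; these should grow with ||z||."""
    return [np.linalg.norm(transition(s, decoder(k * z, s)) - s) for k in scales]
```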
### 4.2 Models for Learning Latent Actions
Now that we have formally introduced the properties that a user-friendly latent space should satisfy, we will explore low-DoF embeddings that capture these properties; specifically, models which learn $\phi: \mathcal{Z} \times \mathcal{C} \rightarrow \mathcal{A}$ from offline demonstrations $\mathcal{D}$. We are interested in models that balance expressiveness with intuition: the embedding must reconstruct high-DoF actions while remaining controllable, consistent, and scalable. We assert that only models which reason over the robot's context when decoding the human's inputs can accurately and intuitively interpret the latent action. Our overall model structure is outlined in Figure [2](#S3.F2).
Reconstructing Actions. Let us return to our assistive eating example: when the person applies a low-dimensional joystick input, the robot completes a high-dimensional action. We use autoencoders to move between these low- and high-DoF action spaces. Define $\psi: \mathcal{C} \times \mathcal{A} \rightarrow \mathcal{Z}$ as an encoder that embeds the robot's behavior into a latent space, and define $\phi: \mathcal{Z} \rightarrow \mathcal{A}$ as a decoder that reconstructs a high-DoF robot action $a_h$ from this latent space. Intuitively, the reconstructed robot action $a_h$ should match the demonstrated action $a$. To encourage models to learn latent actions that accurately reconstruct high-DoF robot behavior, we incorporate the reconstruction error $\|a - a_h\|^2$ into the model's loss function. Let $\mathcal{L}$ denote the loss function our model is trying to minimize; when we only focus on reconstructing actions, our loss function is:
$$\mathcal{L} = \|a - \phi(\psi(c, a))\|^2 \tag{6}$$
Both principal component analysis (PCA) and autoencoder (AE) models minimize this loss function.
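As a concrete illustration, here is a minimal PyTorch sketch of an autoencoder trained on Equation (6); the layer widths and latent dimension are our own assumptions for illustration, not the paper's reported architecture.

```python
# A minimal autoencoder sketch for Equation (6). Layer sizes are illustrative.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, context_dim, action_dim, latent_dim=2):
        super().__init__()
        # Encoder psi: (c, a) -> z
        self.psi = nn.Sequential(
            nn.Linear(context_dim + action_dim, 64), nn.Tanh(),
            nn.Linear(64, latent_dim))
        # Decoder phi: z -> a_h (not yet conditioned on context)
        self.phi = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.Tanh(),
            nn.Linear(64, action_dim))

    def loss(self, c, a):
        z = self.psi(torch.cat([c, a], dim=-1))
        a_h = self.phi(z)
        return ((a - a_h) ** 2).sum(dim=-1).mean()  # reconstruction error, Eq. (6)
```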
Regularizing Latent Actions. When the user slightly tilts the joystick, the robot should not suddenly cut the entire block of tofu. To better ensure consistency and scalability, we incorporate a normalization term into the model's loss function. Let us define $\psi: \mathcal{C} \times \mathcal{A} \rightarrow \mathbb{R}^d \times \mathbb{R}_+^d$ as an encoder that outputs the mean $\mu$ and covariance $\sigma$ over the latent action space. We penalize the divergence between this latent action space and a normal distribution: $KL(\mathcal{N}(\mu, \sigma) \,\|\, \mathcal{N}(0, 1))$. When we incorporate this normalizer, our loss function becomes:
$$\mathcal{L} = \|a - \phi(z)\|^2 + \lambda \cdot KL\big[\mathcal{N}(\mu, \sigma) \,\|\, \mathcal{N}(0, 1)\big] \tag{7}$$
Variational autoencoder (VAE) models [kingma2013auto](#bib.bib29); [doersch2016tutorial](#bib.bib15) minimize this loss function by trading off between reconstruction error and normalization.
Conditioning on State. Importantly, we recognize that the meaning of the human's joystick input often depends on what part of the task the robot is performing. When the robot is above a block of tofu, pressing down on the joystick indicates that the robot should stab the food; but — when the robot is far away from the tofu — it does not make sense for the robot to stab! So that robots can associate meanings with latent actions, we condition the interpretation of the latent action on the robot's current context. Define $\phi: \mathcal{Z} \times \mathcal{C} \rightarrow \mathcal{A}$ as a decoder that now makes decisions based on both $z$ and $c$. Leveraging this conditioned decoder, our final loss function is:
$$\mathcal{L} = \|a - \phi(z, c)\|^2 + \lambda \cdot KL\big[\mathcal{N}(\mu, \sigma) \,\|\, \mathcal{N}(0, 1)\big] \tag{8}$$
We expect that conditional autoencoders (cAE) and conditional variational autoencoders (cVAE) which use this decoder $\phi$ will learn more expressive and controllable actions than their non-context-conditioned counterparts. Note that cVAEs minimize Equation ([8](#S4.E8)), while cAEs do not include the normalization term (i.e., $\lambda = 0$).
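For reference, a minimal PyTorch sketch of a cVAE minimizing Equation (8) might look as follows; the diagonal-Gaussian posterior, layer widths, and the default value of $\lambda$ are illustrative assumptions.

```python
# A minimal cVAE sketch for Equation (8). Architecture details are assumptions.
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, context_dim, action_dim, latent_dim=2):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Linear(context_dim + action_dim, 64), nn.Tanh(),
            nn.Linear(64, 2 * latent_dim))  # outputs (mu, log variance)
        self.dec = nn.Sequential(           # phi(z, c): conditioned on context
            nn.Linear(latent_dim + context_dim, 64), nn.Tanh(),
            nn.Linear(64, action_dim))

    def loss(self, c, a, lam=0.01):
        mu, logvar = self.enc(torch.cat([c, a], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        a_h = self.dec(torch.cat([z, c], dim=-1))
        recon = ((a - a_h) ** 2).sum(-1)                       # reconstruction term
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)  # KL term
        return (recon + lam * kl).mean()
```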
Relation to Properties. Models trained to minimize the listed loss functions are encouraged to satisfy our user-friendly properties. For example, in Equation ([8](#S4.E8)) the decoder is conditioned on the current context, while minimizing the reconstruction loss $\|a - \phi(z, c)\|^2$ ensures that the robot can reproduce the demonstrations, and is therefore controllable. Enforcing consistency and scalability are more challenging — particularly when we do not know the similarity metric $d_M$ — but including the normalization term prevents the latent space from assigning arbitrary and irregular values to $z$. To better understand how these models enforce our desired properties, we conduct a set of controlled simulations in Section [8.1](#S8.SS1).
5 Combining Latent Actions with Shared Autonomy
------------------------------------------------

Figure 4: Shared autonomy with learned latent actions. (Left) as the human teleoperates the robot towards their desired goal, the robot’s belief in that goal increases, and the robot selects assistive actions arsubscript𝑎𝑟a\_{r}italic\_a start\_POSTSUBSCRIPT italic\_r end\_POSTSUBSCRIPT to help the human autonomously reach and maintain their high-level goal. (Right) the meaning of the latent actions changes as a function of the robot’s belief. At the start of the task — when the robot is not sure about any goal — the latent actions z𝑧zitalic\_z produce high-level reaching motions (shown in blue). As the robot becomes confident in the human’s goal the meaning of the latent actions becomes more refined, and z𝑧zitalic\_z increasingly controls fine-grained manipulation (shown in green).
The latent actions learned in Section [4](#S4) provide an expressive mapping between low-dimensional user inputs and high-dimensional robot actions. But controlling an assistive robot with latent actions alone still presents a challenge: any imprecision or noise in either the user's inputs or latent space is reflected in the decoded actions. Recall our eating example: at the start of the task, the human uses latent actions to guide the robot towards their high-level goal (i.e., reaching the tofu). Once the robot is close to the tofu, however, the human no longer needs to control reaching motions — instead, the human leverages latent actions to precisely manipulate the tofu, performing low-level cutting and stabbing tasks. Here the human's inputs should not unintentionally cause the robot arm to drift away from the tofu or suddenly jerk into the table. Instead, the robot should maintain the human's high-level goal. In this section we incorporate shared autonomy alongside latent actions: this approach assists the human towards their high-level goals, and then maintains these goals as the human focuses on low-level manipulation. We visualize shared autonomy with learned latent actions in Figure [4](#S5.F4).
### 5.1 Latent Actions with Shared Autonomy
We first explain how to combine latent actions with shared autonomy. Remember that the human's joystick input is $u$ and the latent action is $z$. For now we assume some pre-defined mapping from $u$ to $z$ (i.e., $z = u$) so that the human's joystick inputs are treated as latent actions. In the last section we learned a decoder $a_h = \phi(z, c)$, where the output of this decoder is a high-dimensional robot action commanded by the human. Here we combine this commanded action with $a_r$, an autonomous assistive action that helps the user reach and maintain their high-level goals.
Belief over Goals. Similar to [dragan2013policy](#bib.bib16); [javdani2018shared](#bib.bib25); [gopinath2016human](#bib.bib20), we assume access to a discrete set of high-level goals $\mathcal{G}$ that the human may want to reach. Within our eating scenario these goals are food items (e.g., the tofu, rice, a plate, marshmallows). Although the robot knows which goals are possible, the robot does not know the human's current goal $g^* \in \mathcal{G}$. We let $b = P(g \mid s^{0:t}, u^{0:t})$ denote the robot's belief over this space of candidate goals, where $b(g) = 1$ indicates that the robot is convinced that $g$ is the human's desired goal. Here $s^{0:t}$ is the history of states and $u^{0:t}$ is the history of human inputs: we use Bayesian inference to update the robot's belief $b$ given the human's past decisions:
$$b^{t+1}(g) \propto P(u^t \mid s^t, g) \cdot b^t(g) \tag{9}$$
This Bayesian inference approach for updating $b$ is explored by prior work on shared autonomy [jain2019probabilistic](#bib.bib24).
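A minimal sketch of this update is below. The paper leaves the observation model $P(u^t \mid s^t, g)$ to prior work; the Boltzmann-style likelihood here, which rewards inputs that point toward a goal, is our own assumption for illustration.

```python
# A sketch of the belief update in Equation (9). The likelihood model and the
# rationality coefficient beta are illustrative assumptions.
import numpy as np

def update_belief(b, u, s, goals, beta=5.0):
    directions = goals - s                        # vectors from state to each goal
    norms = np.linalg.norm(directions, axis=-1, keepdims=True) + 1e-8
    alignment = (directions / norms) @ u          # how well u points at each goal
    likelihood = np.exp(beta * alignment)         # stand-in for P(u | s, g)
    b_new = likelihood * b                        # Equation (9), unnormalized
    return b_new / b_new.sum()
```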
Importantly, the meaning of the human's joystick inputs changes as a function of the robot's belief. Imagine that you are using a 1-DoF joystick to get the tofu in our eating example. At the start of the task — when the robot is unsure of your goal — you press left and right on the joystick to move towards your high-level goal. Once you've reached the tofu — and the robot is confident in your goal — you need to use those same joystick inputs to carefully align the orientation of the fork. In order to learn latent action spaces that can continuously alternate along a spectrum of high-level goals and fine-grained preferences, we now condition $\phi$ on the robot's current state as well as its belief. Hence, instead of $c = s$, we now have $c = (s, b)$. Conditioning on belief enables the meaning of latent actions to change based on the robot's confidence. As a result of this proposed structure, latent actions purely indicate the desired goal when the robot is unsure; and once the robot is confident about the human's goal, latent actions gradually change to convey the precise manipulation. We note that $b$ is available when collecting demonstrations $\mathcal{D}$, since the robot can compute its belief using the Bayesian update above based on the demonstrated trajectory. Take a demonstration that moves the robot to the tofu: initially the robot has a uniform belief over goals, but as the demonstration moves towards the tofu, the robot applies Equation ([9](#S5.E9)) to increase its belief $b$ over the tofu goal.
Shared Autonomy. Recall that the robot applies assistance via action $a_r$ in Equation ([2](#S3.E2)). In order to assist the human, the robot needs to understand the human's intent — i.e., which goal they want to reach. The robot's understanding of the human's intended goal is captured by belief $b$, and we leverage this belief to select an assistive action $a_r$. As shown in Equation ([3](#S3.E3)), the robot selects $a_r$ to guide the robot towards each discrete goal $g \in \mathcal{G}$ in proportion to the robot's confidence in that goal (our approach is not tied to this particular instantiation of shared autonomy; other instances of shared autonomy can similarly be used). Combining these equations with our learned latent actions, we find the robot's overall action $a$:
$$a = (1 - \alpha) \cdot \phi(z, c) + \alpha \cdot \sum_{g \in \mathcal{G}} b(g) \cdot (g - s) \tag{10}$$
Recall that $\alpha \in [0, 1]$ arbitrates between human control ($\alpha = 0$) and assistive guidance ($\alpha = 1$). In practice, if the robot has a uniform prior over which morsel of food the human wants to eat, $a_r$ guides the robot to the center of these morsels. And — when the human indicates a desired morsel — $a_r$ moves the robot towards that target before maintaining the target position.
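Concretely, a minimal sketch of Equation (10) in code; `decode` is a stand-in for the learned decoder $\phi(z, c)$, and the goal positions are assumed to live in the same space as the state.

```python
# A sketch of the blended action in Equation (10). Names are illustrative.
import numpy as np

def blended_action(z, s, b, goals, decode, alpha=0.5):
    a_h = decode(z, (s, b))                        # human's commanded action
    a_r = (b[:, None] * (goals - s)).sum(axis=0)   # confidence-weighted assistance
    return (1 - alpha) * a_h + alpha * a_r
```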
### 5.2 Reaching and Changing Goals
In Equation ([10](#S5.E10)) we incorporated shared autonomy with latent actions to tackle assistive eating tasks that require high-level reaching and precise manipulation. Both latent actions and shared autonomy have an independent role within this method: but how can we be sure that the combination of these tools will remain effective? Returning to our eating example — if the human inputs latent actions, will shared autonomy correctly guide the robot to the desired morsel of food? And what if the human has multiple goals in mind (e.g., getting a chip and then dipping it in salsa) — can the human leverage latent actions to change goals even when shared autonomy is confident in the original goal?
Converging to the Desired Goal. We first explore how our approach ensures that the human reaches their desired goal. Consider the Lyapunov function:
$$V(t) = \frac{1}{2}\|e(t)\|^2, \quad e(t) = g^* - s(t) \tag{11}$$
where $e$ denotes the error between the robot's current state $s$ and the human's goal $g^*$. We want the robot to choose actions that minimize Equation ([11](#S5.E11)) across a spectrum of user skill levels and teleoperation strategies. Let us focus on the common setting in which $s$ is the robot's joint position and $a$ is the joint velocity, so that $\dot{s}(t) = a(t)$. Taking the derivative of Equation ([11](#S5.E11)) and substituting in this transition function, we reach (for notational simplicity we choose $\alpha = 0.5$, so that both human and robot inputs are equally weighted; our results generalize to other $\alpha$):
$$\dot{V}(t) = -\frac{1}{2} e^\top \Big[\phi(z, c) + \sum_{g \in \mathcal{G}} b(g) \cdot (g - s)\Big] \tag{12}$$
We want Equation ([12](#S5.E12)) to be negative, so that $V$ (and thus the error $e$) decreases over time. A sufficient condition for $\dot{V} < 0$ is:
$$b(g^*) \cdot \|e\| > \|\phi(z, c)\| + \sum_{g \in \mathcal{G}'} b(g) \cdot \|g - s\| \tag{13}$$
where $\mathcal{G}'$ is the set of all goals except $g^*$. As a final step, we bound the magnitude of the decoded action, such that $\|\phi(\cdot)\| < \sigma_h$, and we define $\sigma_r$ as the distance between $s$ and the furthest goal: $\sigma_r = \max_{g \in \mathcal{G}'} \|g - s\|$. Now we have $\dot{V} < 0$ if:
$$b(g^*) \cdot \|e\| > \sigma_h + \big(1 - b(g^*)\big) \cdot \sigma_r \tag{14}$$
We define $\delta := \sigma_h + \big(1 - b(g^*)\big) \cdot \sigma_r$. We therefore conclude that our approach in Equation ([10](#S5.E10)) yields uniformly ultimately bounded stability about the human's goal, where $\delta$ affects the radius of this bound [spong2006robot](#bib.bib47). As the robot's confidence in $g^*$ increases, $\delta \rightarrow \sigma_h$, and the robot's error $e$ decreases so long as $\|e(t)\| > \sigma_h$. Intuitively, this guarantees that the robot will move to some ball around the human's goal $g^*$ (even if we treat the human input as a disturbance), and the radius of that ball decreases as the robot becomes more confident.
Changing Goals. Our analysis so far suggests that the robot becomes constrained to a region about the most likely goal. This works well when the human correctly conveys their intentions to the robot — but what if the human makes a mistake, or changes their mind? How do we ensure that the robot is not trapped at an undesired goal? Re-examining Equation ([14](#S5.E14)), it is key that — in every context $c$ — the human can convey sufficiently large actions $\|\phi(z, c)\|$ towards their preferred goal, ensuring that $\sigma_h$ does not decrease to zero. Put another way, the human must be able to increase the radius of the bounding ball, reducing the constraint imposed by shared autonomy.
To encourage the robot to learn latent actions that increase this radius, we introduce an additional term into our model's loss function $\mathcal{L}$. We reward the robot for learning latent actions that have high entropy with respect to the goals; i.e., in a given context $c$ there exist latent actions $z$ that cause the robot to move towards each of the goals $g \in \mathcal{G}$. Define $p_c(g)$ as proportional to the total *score* $\eta$ accumulated for goal $g$:
$$p_c(g) \propto \sum_{z \in \mathcal{Z}} \eta(g, c, z) \tag{15}$$
where the score function $\eta$ indicates how well action $z$ taken from context $c$ conveys the intent of moving to goal $g$, and the distribution $p_c$ over $\mathcal{G}$ captures the proportion of latent actions $z$ at context $c$ that move the robot toward each goal. Intuitively, $p_c$ captures the comparative ease of moving toward each goal: when $p_c(g) \rightarrow 1$, the human can easily move towards goal $g$ since *all* latent actions at $c$ induce movement towards goal $g$ and consequently, *no* latent actions guide the robot towards any other goals. We seek to avoid learning latent actions where $p_c(g) \rightarrow 1$, because in these scenarios the teleoperator cannot correct their mistakes or move towards a different goal. Recall from Section [4](#S4) that the model should minimize the reconstruction error while regularizing the latent space. We now argue that the model should additionally maximize the Shannon entropy of $p$, so that the loss function becomes:
$$\mathcal{L} = \|a - \phi(z, c)\|^2 + \lambda_1 \cdot KL\big[\mathcal{N}(\mu, \sigma) \,\|\, \mathcal{N}(0, 1)\big] + \lambda_2 \cdot \sum_{g \in \mathcal{G}} p(g) \log p(g) \tag{16}$$
Here the hyperparameter $\lambda_2 > 0$ determines how much importance is assigned to maximizing the entropy over goals. When combining shared autonomy with latent actions, we employ this loss function to train the decoder $\phi$ from dataset $\mathcal{D}$.
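As one way to picture this term, the sketch below scores a set of sampled latent actions at a context and computes the (negative) entropy that enters Equation (16). The particular score function $\eta$ — here left as a caller-supplied function, assumed nonnegative — is our own assumption.

```python
# A sketch of the entropy term in Equations (15)-(16). score(g, c, z) is a
# hypothetical stand-in for eta(g, c, z) and is assumed to return nonnegative
# torch scalars; z_samples approximates the sum over the latent space Z.
import torch

def goal_entropy_term(score, c, goals, z_samples):
    """Returns sum_g p(g) log p(g), to be weighted by lambda_2 in Eq. (16)."""
    totals = torch.stack([sum(score(g, c, z) for z in z_samples) for g in goals])
    p = totals / totals.sum()                  # Equation (15), normalized
    return (p * torch.log(p + 1e-8)).sum()     # negative entropy of p_c
```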
6 Aligning Latent Actions with User Preferences
------------------------------------------------
In Section [4](#S4) we learned latent actions, and in Section [5](#S5) we combined these latent actions with shared autonomy to handle precise manipulation tasks. Throughout these sections we treated the human's joystick inputs as the latent actions (i.e., $z = u$) since both the joystick inputs and the latent actions have the same dimensionality. However, different users have different expectations for how the robot will interpret their inputs! Imagine that shared autonomy has assisted us to the tofu, and now we want to control the robot through a cutting motion. One user expects $u = \text{down}$ to cause the robot to cut, but another person thinks $u = \text{down}$ should cause a stabbing motion. Accordingly, in this section we learn a personalized alignment $z = f(u, c)$ that converts the human's joystick inputs $u$ to their preferred latent action $z$ (see Figure [5](#S6.F5)). Our goal is to make the robotic system easier to control: instead of forcing the human to adapt to $\phi$, we want the robot to adapt to the user's preferences (without fundamentally changing the latent action space or decoder $\phi$).
To learn the human's preference we will query the user, showing them example robot motions and then asking them for the corresponding joystick input. But training our alignment model $f$ may require a large number of motion-joystick pairs, particularly in complex tasks where the user must leverage the same joystick input to accomplish several things. It is impractical to ask the human to provide all of these labels. Accordingly, to address the challenge of insufficient training data, we employ a semi-supervised learning method [chapelle2009semi](#bib.bib12). In this section we first outline our approach, and then formulate a set of intuitive priors that facilitate semi-supervised learning from limited human feedback.

Figure 5: Overview of alignment model. The human has in mind a preferred mapping between their joystick inputs u𝑢uitalic\_u and their commanded actions ahsubscript𝑎ℎa\_{h}italic\_a start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT. We break this into a two step process: aligning the joystick inputs with latent actions z𝑧zitalic\_z, and then decoding z𝑧zitalic\_z into a high-DoF action ahsubscript𝑎ℎa\_{h}italic\_a start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT. In previous sections we focused on the decoder ϕitalic-ϕ\phiitalic\_ϕ; now we learn a personalized alignment model f𝑓fitalic\_f. The robot learns f𝑓fitalic\_f offline in a semi-supervised manner by combining labeled queries with intuitive priors that capture the human’s underlying expectations of how the control mapping should behave.

Figure 6: Training our alignment model z=f(u,c)𝑧𝑓𝑢𝑐z=f(u,c)italic\_z = italic\_f ( italic\_u , italic\_c ). Here the context c𝑐citalic\_c is equal to the robot’s state s𝑠sitalic\_s. (Left) the example task is to move the robot’s end-effector in a 2D plane, and the current user prefers for the robot’s end-effector motion to align with their joystick axes, so that u1subscript𝑢1u\_{1}italic\_u start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT moves the robot in the x𝑥xitalic\_x-axis and u2subscript𝑢2u\_{2}italic\_u start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT moves the robot in the y𝑦yitalic\_y-axis. (Right) we take snapshots at three different points during training, and plot how the robot actually moves when the human presses up, down, left, and right. Note that this alignment is context dependent. As training progresses, the robot learns the alignment f𝑓fitalic\_f, and the robot’s motions are gradually and consistently pushed to match with the human’s individual preferences.
### 6.1 Alignment Model
Recall from Equation ([4](#S3.E4)) that we seek to learn a function approximator $f: \mathcal{U} \times \mathcal{C} \rightarrow \mathcal{Z}$. Importantly, this alignment model is conditioned on the current context $c \in \mathcal{C}$. Consider the person in our motivating example, who is using a 2-axis joystick to control a high-DoF assistive robot arm to reach and cut tofu. The user's preferred way to control the robot is unclear: what does the user mean if they push the joystick right? When the robot is left of the tofu, the user might intend to move the robot towards the tofu — but when the robot is directly above the tofu, pressing right now indicates that the robot should rotate and start a cutting motion! This mapping from user input to intended action is not only person dependent, but also context dependent. In practice, this context dependency prevents us from learning a single transformation to uniformly apply across the robot's workspace; instead, we need an intelligent strategy for understanding the human's preferences in different contexts.
Model. To capture this interdependence we employ a general multi-layer perceptron (MLP). The MLP $f$ takes in the current user input $u^t$ and context $c^t$, and outputs a latent action $z^t$. Combining $f$ with our latent action model, we now have a two-step mapping between the human's low-dimensional input and the human's high-dimensional command:
$$a_h^t = \phi(z^t, c^t) = \phi(f(u^t, c^t), c^t) \tag{17}$$
When using our alignment model online, we get the human's commanded action using Equation ([17](#S6.E17)), and then combine this with shared autonomy to produce the overall robot action $a$. But offline — when we are learning $f$ — we set $a = a_h$, so that the robot directly executes the human's commanded action. This disentangles the effects of shared autonomy and latent actions, and lets us focus on learning the preferred mapping from joystick inputs $u$ to robot actions $a$. Given that the robot takes action $a^t$ at the current timestep $t$, the state $s^{t+1}$ at the next timestep follows our transition model: $s^{t+1} = \mathcal{T}(s^t, a^t)$. Letting $a = a_h$ and plugging in Equation ([17](#S6.E17)), we get the following relationship between joystick inputs and robot motion:
$$s^{t+1} = \mathcal{T}\big(s^t, \phi(f(u^t, c^t), c^t)\big) = T(s^t, u^t) \tag{18}$$
Our objective is to learn $f$ so that $s^{t+1} = T(s^t, u^t)$ matches the human's expectations. The overall training process is visualized in Figure [6](#S6.F6).
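A minimal PyTorch sketch of the alignment MLP and the two-step mapping in Equation (17) is shown below; layer sizes are illustrative assumptions, and `phi` is assumed to be a network taking the concatenated $(z, c)$ as input.

```python
# A sketch of the alignment model f and the mapping in Equation (17).
import torch
import torch.nn as nn

class AlignmentMLP(nn.Module):
    def __init__(self, input_dim, context_dim, latent_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim + context_dim, 64), nn.Tanh(),
            nn.Linear(64, latent_dim))

    def forward(self, u, c):
        return self.net(torch.cat([u, c], dim=-1))  # z = f(u, c)

def commanded_action(f, phi, u, c):
    z = f(u, c)                               # align joystick input with latent action
    return phi(torch.cat([z, c], dim=-1))     # a_h = phi(f(u, c), c), Equation (17)
```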
Loss Function. We train $f$ to minimize the loss function $\mathcal{L}_{\text{align}}$. We emphasize that this loss function (used for training the alignment model $f$) is different from the loss function described in Sections [4](#S4) and [5](#S5) (which was used for training the decoder $\phi$). Importantly, $\mathcal{L}_{\text{align}}$ must capture the individual user's expected joystick mapping — and to understand what the user expects, we start by asking a set of questions. In each separate query, the robot starts in a state $s^t$ and moves to some state $s^*$. We then ask the user to label this motion with their preferred joystick input $u$, resulting in the labeled data tuple $(s^t, u^t, s^*)$. For instance, the robot arm starts above the tofu, and then stabs down to break off a piece: you might label this motion by holding down on the joystick (i.e., $u = \text{down}$).
Given a start state $s^t$ and input $u^t$ from the user's labeled data, the robot learns $f$ to minimize the distance between $T(s^t, u^t)$ and $s^*$. Letting $N$ denote the number of queries that the human has answered, and letting $d$ be the distance metric, our alignment function should minimize:
$$\mathcal{L}_{\text{sup}} = \frac{1}{N} \sum_{i=1}^{N} d\big(s^{*,i}, T(s^i, u^i)\big) \tag{19}$$
If we could ask the human as many questions as necessary, then $\mathcal{L}_{\text{align}} = \mathcal{L}_{\text{sup}}$, and our alignment function would only need to minimize the supervised loss. But collecting this large dataset is impractical. Accordingly, to minimize the number of questions the human must answer, we introduce additional loss terms in $\mathcal{L}_{\text{align}}$ that capture underlying priors in human expectations.
### 6.2 Reducing Human Data with Intuitive Priors
Our insight here is that humans share some underlying expectations of how the control mapping should behave [jonschkowski2014state](#bib.bib27). We will formulate these common expectations — i.e., priors — as loss terms that $f$ minimizes within semi-supervised learning.
When introducing these priors, it helps to refine our notation. Recall that $s$ is the system state: here we use $s$ to specifically refer to the robot's joint position, and we denote the forward kinematics of the robot arm as $x = \Psi(s)$. The end-effector pose $x$ is particularly important, since humans often focus on the robot's gripper during eating tasks. When the human applies joystick input $u$ at state $s$, the corresponding change in end-effector pose $x$ is: $\Delta x = \Psi(T(u, s)) - \Psi(s)$. With these definitions in mind, we argue an intuitive controller should satisfy the properties listed below. We emphasize that — although these properties share some common themes with the latent action properties from Section [4](#S4) — the purpose of these properties is different. When formalizing the properties for latent actions we focused on enabling the human to complete tasks using these latent actions. By contrast, here we focus on intuitive and task-agnostic expectations for controller mappings.
Proportionality. The amount of change in the position and orientation of the robot's end-effector should be proportional to the scale of the human's input. In other words, for scalar $\alpha$, we expect:
$$\alpha \cdot |\Psi(T(u, s)) - \Psi(s)| = |\Psi(T(\alpha \cdot u, s)) - \Psi(s)|$$
We accordingly define the proportionality loss $\mathcal{L}_{\text{prop}}$ as:
$$\mathcal{L}_{\text{prop}} = \big\|\Psi(T(\alpha \cdot u, s)) - \Psi(s) - \alpha \cdot \Delta x\big\|^2 \tag{20}$$
where $\alpha$ is sampled from our range of joystick inputs.
Reversibility. If a joystick input $u$ makes the robot move forward from $s_1$ to $s_2$, then the opposite input $(-u)$ should move the robot back from $s_2$ to its original end-effector position. In other words, we expect:
$$\Psi(s_1) = \Psi\Big(T\big(-u, T(u, s_1)\big)\Big)$$
This property ensures users can recover from their mistakes. We define the reversibility loss $\mathcal{L}_{\text{reverse}}$ as:
$$\mathcal{L}_{\text{reverse}} = \big\|\Psi(s) - \Psi\big(T(-u, T(u, s))\big)\big\|^2 \tag{21}$$
Here $\Psi(s)$ is the current position and orientation of the robot's end-effector, and the right term is the pose of the end-effector after executing human input $u$ followed by the opposite input $(-u)$.
Consistency. The same input taken at nearby states should lead to similar changes in robot pose. We previously discussed a similar property in Section [4](#S4) when formalizing latent actions. Here we specifically focus on the input-output relationship between joystick input $u$ and end-effector position $x = \Psi(s)$:
$$\Delta x_1 = \|\Psi(T(u, s_1)) - \Psi(s_1)\|^2, \qquad \Delta x_2 = \|\Psi(T(u, s_2)) - \Psi(s_2)\|^2$$
We expect $\|\Delta x_1 - \Delta x_2\| \rightarrow 0$ as $\|s_1 - s_2\| \rightarrow 0$. Consistency prevents sudden changes in the alignment mapping. We define the consistency loss $\mathcal{L}_{\text{con}}$ as:
$$\mathcal{L}_{\text{con}} = \|\Delta x(s_1) - \Delta x(s_2)\| \cdot \exp\big\{-\gamma \|s_1 - s_2\|\big\} \tag{22}$$
When the hyperparameter $\gamma \rightarrow 0$, the robot only enforces consistency at local states, and when $\gamma \rightarrow \infty$, the robot tries to enforce consistency at all states.
Semi-Supervised Learning. When learning our alignment model $f$, we first collect a batch of robot motions $(s, s^*)$. The human labels $N$ of these (start-state, end-state) pairs with their preferred joystick input $u$, so that we have labeled data $(s, u, s^*)$. We then train the alignment model to minimize the supervised loss for the labeled data, as well as the semi-supervised loss for the unlabeled data. Hence, the cumulative loss function is:
$$\mathcal{L}_{\text{align}} = \mathcal{L}_{\text{sup}} + \lambda_1 \mathcal{L}_{\text{prop}} + \lambda_2 \mathcal{L}_{\text{reverse}} + \lambda_3 \mathcal{L}_{\text{con}} \tag{23}$$
Importantly, incorporating these different loss terms — which are inspired by human priors over controllable spaces [jonschkowski2014state](#bib.bib27) — enables the robot to generalize the labeled human data (on which it performs supervised learning) to unlabeled states (on which it can now perform semi-supervised learning).
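Putting the pieces together, a rough sketch of Equation (23) follows. Here `rollout(s, u)` stands in for the differentiable composition $T(s, u)$ from Equation (18), and `fk` for the forward kinematics $\Psi$; the squared-distance choice for $d$, the random pairing of states for the consistency term, and the sampling of $\alpha$ are our own simplifications.

```python
# A sketch of the semi-supervised alignment loss in Equation (23).
# rollout(s, u) and fk(s) are hypothetical, differentiable stand-ins.
import random
import torch

def align_loss(rollout, fk, labeled, unlabeled,
               lams=(1.0, 1.0, 1.0), gamma=1.0):
    # Supervised term, Equation (19): labeled tuples (s, u, s_star)
    sup = torch.stack([((s_star - rollout(s, u)) ** 2).sum()
                       for s, u, s_star in labeled]).mean()
    prop = rev = con = 0.0
    for s1, u in unlabeled:                       # unlabeled tuples (s, u)
        s2, _ = random.choice(unlabeled)          # crude stand-in for a nearby state
        alpha = 2.0 * torch.rand(())              # sampled input scale
        dx1 = fk(rollout(s1, u)) - fk(s1)
        # Proportionality, Equation (20)
        prop = prop + ((fk(rollout(s1, alpha * u)) - fk(s1) - alpha * dx1) ** 2).sum()
        # Reversibility, Equation (21)
        rev = rev + ((fk(s1) - fk(rollout(rollout(s1, u), -u))) ** 2).sum()
        # Consistency, Equation (22)
        dx2 = fk(rollout(s2, u)) - fk(s2)
        con = con + (dx1 - dx2).norm() * torch.exp(-gamma * (s1 - s2).norm())
    n = max(len(unlabeled), 1)
    return sup + lams[0] * prop / n + lams[1] * rev / n + lams[2] * con / n
```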
Algorithm 1: Latent Control of Assistive Robots

Offline:
1: Select a discrete set of goals $\mathcal{G}$
2: Collect a dataset $\mathcal{D} = \{(c_0, a_0), (c_1, a_1), \ldots\}$ from kinesthetic demonstrations
3: Train latent action model $\phi$ to minimize $\mathcal{L}$ on $\mathcal{D}$
4: Query the user to label example motions $\{(s, s^*)\}$ with their preferred joystick direction $u$
5: Train alignment model $f$ to minimize loss $\mathcal{L}_{\text{align}}$ using labeled and unlabeled motions $\{(s, s^*)\}$

Online, at each timestep $t$:
6: $z^t \leftarrow f(u^t, c^t)$ ▷ align latent action with user input
7: $a_h^t \leftarrow \phi(z^t, c^t)$ ▷ decode $z^t$ to high-DoF action
8: $a_r^t \leftarrow \sum_{g \in \mathcal{G}} b^t(g) \cdot (g - s^t)$ ▷ get robot assistance
9: $a^t \leftarrow (1 - \alpha) \cdot a_h^t + \alpha \cdot a_r^t$ ▷ blend both $a_h$ and $a_r$
10: $b^{t+1} \propto P(u^t \mid c^t, g) \, P(g)$ ▷ update belief over goals
11: $s^{t+1} \sim \mathcal{T}(s^t, a^t)$ ▷ take action
7 Algorithm
------------
Sections [4](#S4 "4 Learning Latent Actions ‣ Learning Latent Actions to Control Assistive Robots"), [5](#S5 "5 Combining Latent Actions with Shared Autonomy ‣ Learning Latent Actions to Control Assistive Robots"), and [6](#S6 "6 Aligning Latent Actions with User Preferences ‣ Learning Latent Actions to Control Assistive Robots") developed parts of our approach. Here we put these pieces together to present our general algorithm for controlling assistive eating robots with learned latent actions. Our approach is summarized in Algorithm [1](#alg1 "Algorithm 1 ‣ 6.2 Reducing Human Data with Intuitive Priors ‣ 6 Aligning Latent Actions with User Preferences ‣ Learning Latent Actions to Control Assistive Robots") and explained below.
Given an assistive eating scenario, we start by identifying the food items and other potential goals that the human may want to reach (Line 1). We then collect high-dimensional kinesthetic demonstrations, where a caretaker backdrives the robot through task-related motions that interact with these potential goals (Line 2). Leveraging the properties and models from Section [4](#S4 "4 Learning Latent Actions ‣ Learning Latent Actions to Control Assistive Robots"), we then train our latent action space and learn decoder $\phi$ (Line 3). Next, we show the user example robot motions — e.g., by sampling values of $z$ — and ask the user to label these motions with their preferred joystick input (Line 4). Applying the priors developed in Section [6](#S6 "6 Aligning Latent Actions with User Preferences ‣ Learning Latent Actions to Control Assistive Robots"), we generalize from a small number of human labels to learn the alignment $f$ between joystick inputs and latent actions (Line 5).
Once we have learned $\phi$ and $f$, we are ready for the human-in-the-loop. At each timestep the human presses their low-DoF joystick to provide input $u$. We find the latent action $z$ that is aligned with the human’s input (Line 6), and then decode that low-DoF latent action to get a high-DoF robot action $a_h$ (Line 7). In order to help the human reach and maintain their high-level goals, we incorporate the shared autonomy approach from Section [5](#S5 "5 Combining Latent Actions with Shared Autonomy ‣ Learning Latent Actions to Control Assistive Robots"). Shared autonomy selects an assistive action $a_r$ based on the current belief over the human’s goal (Line 8), and the robot blends $a_r$ and $a_h$ to take overall action $a$ (Line 9). Finally, the robot applies Bayesian inference to update its understanding of the human’s desired goal based on their joystick input (Line 10). We repeat this process until the human has finished eating.
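To make the online phase concrete, the sketch below implements one control timestep in Python. The callables `f` and `phi` stand for the learned alignment model and decoder; the array shapes, the context encoding, and the blending weight are illustrative assumptions rather than our exact implementation.

```python
import numpy as np

def control_step(s, u, goals, belief, f, phi, alpha=0.5, dt=0.1):
    """One human-in-the-loop timestep of Algorithm 1 (illustrative sketch).

    s      : robot joint positions, shape (n,)
    u      : low-DoF joystick input, shape (d,)
    goals  : candidate goal states, shape (G, n)
    belief : probability of each goal, shape (G,)
    f, phi : learned alignment model and decoder
    """
    c = np.concatenate([s, belief])               # context = state + belief
    z = f(u, c)                                   # align input with latent action
    a_h = phi(z, c)                               # decode to high-DoF action
    a_r = (belief[:, None] * (goals - s)).sum(0)  # assistance toward likely goals
    a = (1 - alpha) * a_h + alpha * a_r           # blend human and robot actions
    return s + a * dt                             # s_{t+1} = s_t + a_t * dt

def update_belief(belief, likelihoods):
    """Bayesian update b^{t+1} ∝ P(u | c, g) P(g); `likelihoods` holds the
    observation model P(u | c, g) evaluated for each goal g."""
    b = belief * likelihoods
    return b / b.sum()
```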
How Practical is Our Approach? One concern is the amount of data required to learn latent actions. In all of the studies reported below — where the assistive robot makes a mock-up apple pie, assembles dessert, and cuts tofu — the robot was trained with a maximum of twenty minutes of kinesthetic demonstrations, and all training was done on-board the robot computer. We recognize that this short training time is likely due to the structure of the cVAE model used in these tasks and may not hold true in general; however, this easy implementation holds promise for future use. In the following sections, we demonstrate the objective and subjective benefits of Algorithm [1](#alg1 "Algorithm 1 ‣ 6.2 Reducing Human Data with Intuitive Priors ‣ 6 Aligning Latent Actions with User Preferences ‣ Learning Latent Actions to Control Assistive Robots"), as well as highlight some of its shortcomings.
Where Do the Goals Come From? Another question is how the robot detects the discrete set of goals $\mathcal{G}$ that the human may want to reach. Here we turn to perception, where recent assistive eating work shows how robot arms can estimate the pose of various objects of interest [feng2019robot](#bib.bib18) ; [park2019toward](#bib.bib41) . Determining which objects are potential goals is simplified in eating settings, since the target items are largely consistent (e.g., food items, cups, plates, and bowls). The location of these goals is included in the state $s$ and the robot uses this information when decoding the human’s joystick inputs. Although not covered in this paper, it is also possible to condition latent actions directly on the robot’s perception, so that $s$ becomes the visual inputs [karamcheti2021learning](#bib.bib28) .
8 Simulations
--------------
We performed three separate simulations, one for each key aspect of our proposed method. First we leverage different autoencoder models to learn latent actions, and determine which types of models best capture the user-friendly properties formalized in Section [4](#S4 "4 Learning Latent Actions ‣ Learning Latent Actions to Control Assistive Robots"). Our second simulation then compares learned latent actions alone to latent actions with shared autonomy (Section [5](#S5 "5 Combining Latent Actions with Shared Autonomy ‣ Learning Latent Actions to Control Assistive Robots")), and focuses on how shared autonomy helps users reach, maintain, and change their high-level goals. Finally, we learn the alignment model from Section [6](#S6 "6 Aligning Latent Actions with User Preferences ‣ Learning Latent Actions to Control Assistive Robots") between joystick inputs and latent actions. We compare versions of our semi-supervised approach with intuitive priors, and see how these priors improve the alignment when we only have access to limited and imperfect human feedback. All three simulations were performed in controlled conditions with simulated humans and simulated or real robot arms. These simulated humans chose joystick inputs according to mathematical models of human decision making, as detailed below.
### 8.1 Do Learned Latent Actions Capture our User-Friendly Properties?
Here we explore how well our proposed models for learning latent actions capture the user-friendly properties formalized in Section [4](#S4 "4 Learning Latent Actions ‣ Learning Latent Actions to Control Assistive Robots"). These properties include controllability, consistency, and scalability.
Setup. We simulate one-arm and two-arm planar robots, where each arm has $n = 5$ degrees-of-freedom. The state $s \in \mathbb{R}^n$ is the robot’s joint position, and the action $a \in \mathbb{R}^n$ is the robot’s joint velocity. Hence, the robot transitions according to: $s^{t+1} = s^t + a^t \cdot dt$, where $dt$ is the step size. Demonstrations consist of trajectories of state-action pairs: in each of the simulated tasks, the robot trains with a total of 10000 state-action pairs.
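As a concrete illustration of this setup, the sketch below rolls a planar arm forward under the stated transition model; unit link lengths are an assumption made for illustration.

```python
import numpy as np

def fk_planar(s, link_length=1.0):
    """End-effector (x, y) of a planar arm with joint angles s.
    Unit link lengths are an assumption for illustration."""
    angles = np.cumsum(s)
    return np.array([np.sum(link_length * np.cos(angles)),
                     np.sum(link_length * np.sin(angles))])

def rollout(s0, actions, dt=0.1):
    """Apply the transition s_{t+1} = s_t + a_t * dt and track the end-effector."""
    s, path = s0, [fk_planar(s0)]
    for a in actions:
        s = s + a * dt
        path.append(fk_planar(s))
    return np.array(path)
```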
Tasks. The simulated robots perform four different tasks.
1. Sine: one 5-DoF robot arm moves its end-effector along a sine wave with a 1-DoF latent action
2. Rotate: two 5-DoF robot arms are holding a box, and rotate that box about a fixed point using a 1-DoF latent action
3. Circle: one 5-DoF robot moves back and forth along circles of different radii with a 2-DoF latent action
4. Reach: one 5-DoF robot arm reaches from a start location to a goal region with a 1-DoF latent action
Model Details. We test latent action models which minimize the different loss functions described in Section [4](#S4 "4 Learning Latent Actions ‣ Learning Latent Actions to Control Assistive Robots"). Specifically, we test:

* Principal Component Analysis (PCA)
* Autoencoders (AE)
* Variational autoencoders (VAE)
* Conditioned autoencoders (cAE)
* Conditioned variational autoencoders (cVAE)
The encoders and decoders contain between two and four linear layers (depending on the task). The loss function is optimized using Adam with a learning rate of $10^{-2}$. Within the VAE and cVAE, we set the normalization weight $< 1$ to avoid posterior collapse.
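For concreteness, a minimal PyTorch sketch of the conditioned variational autoencoder is given below. The layer widths, activations, and KL weight are illustrative assumptions consistent with the description above, not the exact architecture used in our experiments.

```python
import torch
import torch.nn as nn

class CVAE(nn.Module):
    """Conditioned variational autoencoder for latent actions (a sketch;
    layer widths, activations, and the KL weight are assumptions)."""

    def __init__(self, action_dim, context_dim, latent_dim=1, hidden=64, beta=0.1):
        super().__init__()
        self.beta = beta  # normalization weight < 1 to avoid posterior collapse
        self.encoder = nn.Sequential(
            nn.Linear(action_dim + context_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * latent_dim))   # outputs [mean, log-variance]
        self.decoder = nn.Sequential(            # the decoder phi(z, c)
            nn.Linear(latent_dim + context_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim))

    def loss(self, a, c):
        mu, logvar = self.encoder(torch.cat([a, c], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        a_hat = self.decoder(torch.cat([z, c], -1))
        recon = ((a - a_hat) ** 2).sum(-1).mean()
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(-1).mean()
        return recon + self.beta * kl

# Training sketch with the optimizer settings reported above:
# model = CVAE(action_dim=5, context_dim=5)
# opt = torch.optim.Adam(model.parameters(), lr=1e-2)
# for a_batch, c_batch in loader:
#     opt.zero_grad(); model.loss(a_batch, c_batch).backward(); opt.step()
```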
Dependent Measures. To determine accuracy, we measure the mean-squared error between the intended actions $a$ and reconstructed actions $\hat{a}$ on a test set of state-action pairs $(s, a)$ drawn from the same distribution as the training set.
To test model controllability, we select pairs of start and goal states $(s_i, s_j)$ from the test set, and solve for the latent actions $z$ that minimize the error between the robot’s current state and $s_j$. We then report this minimum state error.
We jointly measure consistency and scalability: to do this, we select 25 states along the task, and apply a fixed grid of latent actions $z_i$ from $[-1, +1]$ at each state. For every $(s, z)$ pair we record the distance and direction that the end-effector travels (e.g., the direction is $+1$ if the end-effector moves right). We then find the best-fit line relating $z$ to distance times direction, and report its $R^2$ value.
Our results are averaged across 10 trained models of the same type, and are listed in the form $\text{mean} \pm \text{SD}$.
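For reference, the consistency and scalability measure can be computed roughly as follows, where `decoder` and `fk` stand in for the learned $\phi(z, c)$ and the arm's forward kinematics (the sign convention for direction is an assumption):

```python
import numpy as np

def consistency_r2(states, decoder, fk, dt=0.1, grid_size=21):
    """R^2 of the best-fit line relating z to (signed distance traveled)
    across a grid of latent actions at each state (illustrative sketch)."""
    zs, ds = [], []
    for s in states:
        x0 = fk(s)
        for z in np.linspace(-1.0, 1.0, grid_size):
            x1 = fk(s + decoder(z, s) * dt)
            direction = np.sign(x1[0] - x0[0])   # e.g., +1 if moving right
            zs.append(z)
            ds.append(direction * np.linalg.norm(x1 - x0))
    zs, ds = np.asarray(zs), np.asarray(ds)
    slope, intercept = np.polyfit(zs, ds, 1)     # best-fit line
    residuals = ds - (slope * zs + intercept)
    return 1.0 - residuals.var() / ds.var()      # R^2 of the fit
```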
Hypotheses. We have the following two hypotheses:

> H1. Only latent action models conditioned on the context will accurately reconstruct actions from low-DoF inputs.

> H2. Conditioned autoencoders and conditioned variational autoencoders will learn a latent space that is controllable, consistent, and scalable.

Figure 7: Results for the Sine task. (A) mean-squared error between intended and reconstructed actions normalized by PCA test loss. (B) effect of the latent action $z$ at three states along the sine wave for the cVAE model. Darker colors correspond to $z > 0$ and lighter colors signify $z < 0$. Above we plot the distance that the end effector moves along the sine wave as a function of $z$ at each state. (C) rollout of robot behavior when applying a constant latent input $z = +1$, where both VAE and cVAE start at the same state. (D) end-effector trajectories for multiple rollouts of VAE and cVAE.
Sine Task. This task and our results are shown in Figure [7](#S8.F7 "Figure 7 ‣ 8.1 Do Learned Latent Actions Capture our User-Friendly Properties? ‣ 8 Simulations ‣ Learning Latent Actions to Control Assistive Robots"). We find that conditioning the decoder on the current context, i.e., $\phi(z, c)$, greatly improves accuracy when compared to the PCA baseline, i.e., $\phi(z)$. Here AE and VAE incur $98.0 \pm 0.6\%$ and $100 \pm 0.8\%$ of the PCA loss, while cAE and cVAE obtain $1.37 \pm 1.2\%$ and $3.74 \pm 0.4\%$ of the PCA loss, respectively.
We likewise observe that cAE and cVAE are more controllable than their alternatives. When using the learned latent actions to move between 1000 randomly selected start and end states along the sine wave, cAE and cVAE have an average end-effector error of $0.05 \pm 0.01$ and $0.10 \pm 0.01$. Models without state conditioning—PCA, AE, and VAE—have average errors $0.90$, $0.94 \pm 0.01$, and $0.95 \pm 0.01$.
When evaluating consistency and scalability, every tested model has a roughly linear relationship between latent actions and robot behavior: PCA has the highest $R^2 = 0.99$, while cAE and cVAE have the lowest $R^2 = 0.94 \pm 0.04$ and $R^2 = 0.95 \pm 0.01$.

Figure 8: Results for the Rotate task. (A) the robot uses two arms to hold a light blue box, and learns to rotate this box around the fixed point shown in teal. Each state corresponds to a different fixed point, and positive $z$ causes counterclockwise rotation. On right we show how $z$ affects the rotation of the box at each state. (B) rollout of the robot’s trajectory when the user applies $z = +1$ for VAE and cVAE models, where both models start in the same state. Unlike the VAE, the cVAE model coordinates its two arms.
Rotate Task. We summarize the results for this two-arm task in Fig. [8](#S8.F8 "Figure 8 ‣ 8.1 Do Learned Latent Actions Capture our User-Friendly Properties? ‣ 8 Simulations ‣ Learning Latent Actions to Control Assistive Robots"). Like in the Sine task, the models conditioned on the current context are more accurate than their non-conditioned counterparts: AE and VAE have $28.7 \pm 4.8\%$ and $38.0 \pm 5.8\%$ of the PCA baseline loss, while cAE and cVAE reduce this to $0.65 \pm 0.05\%$ and $0.84 \pm 0.07\%$. The context conditioned models are also more controllable: when using the learned $z$ to rotate the box, AE and VAE have $56.8 \pm 9\%$ and $71.5 \pm 8\%$ as much end-effector error as the PCA baseline, whereas cAE and cVAE achieve $5.4 \pm 0.1\%$ and $5.9 \pm 0.1\%$ error.
When testing for consistency and scalability, we measure the relationship between the latent action $z$ and the change in orientation for the end-effectors of both arms (i.e., ignoring their location). Each model exhibits a linear relationship between $z$ and orientation: $R^2 = 0.995 \pm 0.004$ for the cAE and $R^2 = 0.996 \pm 0.002$ for the cVAE. In other words, there is an approximately linear mapping between $z$ and the orientation of the box that the two arms are holding.
Circle Task. Next, consider the one-arm task in Fig. [9](#S8.F9 "Figure 9 ‣ 8.1 Do Learned Latent Actions Capture our User-Friendly Properties? ‣ 8 Simulations ‣ Learning Latent Actions to Control Assistive Robots") where the robot has a 2-DoF latent action space. We here focus on the learned latent dimensions $z = [z_1, z_2]$, and examine how these latent dimensions correspond to the underlying task. Recall that the training data consists of state-action pairs which translate the robot’s end-effector along (and between) circles of different radii. Ideally, the learned latent dimensions correspond to these axes, e.g., $z_1$ controls tangential motion while $z_2$ controls orthogonal motion. Interestingly, we found that this intuitive mapping is only captured by the state conditioned models. The average angle between the directions that the end-effector moves for $z_1$ and $z_2$ is $27 \pm 20^{\circ}$ and $34 \pm 15^{\circ}$ for AE and VAE models, but this angle increases to $72 \pm 9^{\circ}$ and $74 \pm 12^{\circ}$ for the cAE and cVAE (ideally $90^{\circ}$). The state conditioned models better disentangle their low-dimensional embeddings, supporting our hypotheses and demonstrating how these models produce user-friendly latent spaces.

Figure 9: Results for the Circle task. (A) mean-squared error between desired and reconstructed actions normalized by the PCA test loss. (B) 2-DoF latent action space $z = [z_1, z_2]$ for VAE and cVAE models. The current end-effector position is shown in black, and the colored grippers depict how changing $z_1$ or $z_2$ affects the robot’s state. Under the cVAE model, these latent dimensions move the end-effector tangent or orthogonal to the circle.

Figure 10: Results for the Reach task. In both plots, we show the end-effector trajectory when applying constant inputs $z \in [-1, +1]$. The lightest color corresponds to $z = -1$ and the darkest color is $z = +1$. The goal region is highlighted, and the initial end-effector position is black. (A) trajectories with the VAE model. (B) trajectories with the cVAE model. The latent action $z$ controls which part of the goal region the trajectory moves towards.
Reach Task. In the final task, a one-arm robot trains on trajectories that move towards a goal region (see Fig. [10](#S8.F10 "Figure 10 ‣ 8.1 Do Learned Latent Actions Capture our User-Friendly Properties? ‣ 8 Simulations ‣ Learning Latent Actions to Control Assistive Robots")). The robot learns a 1-DoF latent space, where $z$ controls the direction that the trajectory moves (i.e., to the left or right of the goal region). We focus on controllability: can robots utilize latent actions to reach their desired goal? In order to test controllability, we sample 100 goals randomly from the goal region, and compare robots that attempt to reach these goals with either VAE or cVAE latent spaces. The cVAE robot more accurately reaches its goal: the $L_2$ distance between the goal and the robot’s final end-effector position is $0.57 \pm 0.38$ under VAE and $0.48 \pm 0.5$ with cVAE. Importantly, using conditioning also improves the movement quality. The average start-to-goal trajectory is $5.1 \pm 2.8$ units when using the VAE, and this length drops to $3.1 \pm 0.5$ with the cVAE model.
Summary. The results of our Sine, Rotate, Circle, and Reach tasks support hypotheses H1 and H2. Latent action models that are conditioned on the context more accurately reconstruct high-DoF actions from low-DoF embeddings (H1). Moreover, conditioned autoencoders and conditioned variational autoencoders learn latent action spaces which capture our desired properties: controllability, consistency, and scalability (H2).
### 8.2 Do Latent Actions with Shared Autonomy Help Users Reach and Change Goals?
Now that we have tested our method for learning latent actions, the next step is to combine these latent actions with shared autonomy (see Section [5](#S5 "5 Combining Latent Actions with Shared Autonomy ‣ Learning Latent Actions to Control Assistive Robots")). Here we explore how this approach works with a spectrum of different simulated users. We simulate human teleoperators with various levels of expertise and adaptability, and measure whether these users can interact with our algorithm to reach and change high-level goals.
Incorporating Shared Autonomy. In the previous simulations we used latent actions by themselves to control the robot. Now we compare this approach with and without shared autonomy:
* Latent actions with no assistance (LA)
* Latent actions with shared autonomy (LA+SA)
* Latent actions trained to maximize entropy with shared autonomy (LA+SA+Entropy)
For both LA and LA+SA we learn the latent space with a conditioned autoencoder (i.e., cAE in the previous section). However, here the context includes both state and belief. In other words, $c = (s, b)$. We also test LA+SA+Entropy, where the model uses Equation ([16](#S5.E16 "16 ‣ 5.2 Reaching and Changing Goals ‣ 5 Combining Latent Actions with Shared Autonomy ‣ Learning Latent Actions to Control Assistive Robots")) to reward entropy in the learned latent space.

Figure 11: Simulated humans for different levels of rationality. As $\beta \rightarrow \infty$, the human’s choices approach optimal inputs. Final State Error (in all plots) is normalized by the distance between goals. Introducing shared autonomy (SA) improves the convergence of latent actions (LA), particularly when the human teleoperator is noisy and imperfect.

Figure 12: Simulated humans that change their intended goal part-way through the task. Change is the timestep where this change occurs, and Confidence refers to the robot’s belief in the human’s true goal. Because of the constraints imposed by shared autonomy, users need latent actions that can overcome misguided assistance and move towards a less likely (but correct) goal. Encouraging entropy in the learned latent space (LA+SA+Entropy) enables users to switch goals.
Environments. We implement these models on both a simulated and a real robot. The simulated robot is a 5-DoF planar arm, and the real robot is a 7-DoF Franka Emika. For both robots, the state $s$ captures the current joint position, and the action $a$ is a change in joint position, so that: $s^{t+1} = s^t + a^t \cdot dt$.
Task. We consider a manipulation task where there are two coffee cups in front of a robot arm (see Figure [12](#S8.F12 "Figure 12 ‣ 8.2 Do Latent Actions with Shared Autonomy Help Users Reach and Change Goals? ‣ 8 Simulations ‣ Learning Latent Actions to Control Assistive Robots")). The human may want to reach and grasp either cup (i.e., these cups are the potential goals). We embed the robot’s high-DoF actions into a 1-DoF input space: the simulated users had to convey both their goal and preference only by pressing left and right on the joystick.
Simulated Humans. The users attempting to complete this task are approximately optimal, and make decisions that guide the robot toward their goal $g^*$. Remember that $x$ is the position of the robot’s end-effector and $\Psi$ is the forward kinematics. The humans have reward function $R = -\|g^* - x\|^2$, and choose latent actions $z$ to move the robot towards $g^*$:
$$p(z) \propto \exp\big\{-\beta(t) \cdot \|g^* - \Psi(s + \phi(z, c) \cdot dt)\|^2\big\} \qquad (24)$$
Here $\beta \geq 0$ is a temperature constant that affects the user’s rationality. When $\beta \rightarrow 0$, the human selects increasingly random $z$, and when $\beta \rightarrow \infty$, the human always chooses the $z$ that moves towards $g^*$. We simulate different types of users by varying $\beta(t)$.
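For reference, one way to implement this noisily rational model is sketched below; discretizing the 1-DoF latent space into a finite candidate set and sampling from the resulting softmax are implementation assumptions.

```python
import numpy as np

def simulated_human_input(s, c, g_star, phi, fk, beta, dt=0.1, n=101):
    """Sample a latent input z from the noisily rational model in Eq. (24).
    `phi` is the learned decoder and `fk` the forward kinematics."""
    candidates = np.linspace(-1.0, 1.0, n)          # 1-DoF latent inputs
    costs = np.array([np.linalg.norm(g_star - fk(s + phi(z, c) * dt)) ** 2
                      for z in candidates])
    logits = -beta * costs
    p = np.exp(logits - logits.max())               # numerically stable softmax
    p /= p.sum()
    return np.random.choice(candidates, p=p)
```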
Users with Fixed Expertise. We first simulate humans that have fixed levels of expertise. Here expertise is captured by $\beta$ from Equation ([24](#S8.E24 "24 ‣ 8.2 Do Latent Actions with Shared Autonomy Help Users Reach and Change Goals? ‣ 8 Simulations ‣ Learning Latent Actions to Control Assistive Robots")): users with high $\beta$ are proficient, and rarely provide noisy or mistaken inputs. We anticipate that all algorithms will perform similarly when humans are always perfect or completely random—but we are particularly interested in the spectrum of users between these extremes, who frequently mis-control the robot.
Our results relating $\beta$ to performance are shown in Figure [11](#S8.F11 "Figure 11 ‣ 8.2 Do Latent Actions with Shared Autonomy Help Users Reach and Change Goals? ‣ 8 Simulations ‣ Learning Latent Actions to Control Assistive Robots"). In accordance with our convergence result from Section [5](#S5 "5 Combining Latent Actions with Shared Autonomy ‣ Learning Latent Actions to Control Assistive Robots"), we find that introducing shared autonomy helps humans reach their desired grasp more quickly, and with less final state error. The performance difference between LA and LA+SA decreases as the human’s expertise increases — looking specifically at the real robot simulations, LA takes $45\%$ more time to complete the task than LA+SA at $\beta = 75$, but only $30\%$ more time when $\beta = 1000$. We conclude that shared autonomy improves performance across all levels of expertise, both when latent actions are trained with and without entropy.
Users that Change their Mind. One downside of shared autonomy is over-assistance: the robot may become constrained at likely (but incorrect) goals. To examine this adverse scenario we simulate humans that change which coffee cup they want to grasp after $N$ timesteps. These simulated users intentionally move towards the wrong cup while $t \leq N$, and then try to reach the correct cup for the rest of the task.
We model humans as near-optimal immediately after changing their mind about the goal.
We visualize our results in Figure [12](#S8.F12 "Figure 12 ‣ 8.2 Do Latent Actions with Shared Autonomy Help Users Reach and Change Goals? ‣ 8 Simulations ‣ Learning Latent Actions to Control Assistive Robots"). When the latent action space is trained only to minimize reconstruction loss (LA+SA), users cannot escape the shared autonomy constraint around the wrong goal as $N$ increases. Intuitively, this occurs because the latent space controls the intended goal when the belief $b$ is roughly uniform, and then switches to controlling the preferred trajectory once the robot is confident. So if users change their goal after first convincing the robot, the latent space no longer contains actions that move towards this correct goal! We find that our proposed entropy loss function addresses this shortcoming: LA+SA+Entropy users are able to input actions $z$ that alter the robot’s goal. Our results suggest that encouraging entropy at training time improves the robustness of the latent space.

Figure 13: Simulated humans that learn how to teleoperate the robot. The human’s rationality $\beta(t)$ is linear in time, and either increases with a high slope (Fast Learner) or low slope (Slow Learner). As the human learns, they get better at choosing inputs that best guide the robot towards their true goal. We find that latent actions learned with the entropy reward (LA+SA+Entropy) are more versatile, so that the human can quickly undo mistakes made while learning.
Users that Learn within the Task. We not only expect real users to change their mind when collaborating with the robot, but we also anticipate that these teleoperators will learn and improve as they gain experience during the task. For instance, the user might learn that holding left on the joystick causes the robot to grasp the cup from the side, while holding right guides the robot towards a top grasp. To simulate this in-task learning, we set $\beta(t) = m \cdot t$, where the slope $m$ determines how quickly the user learns. All users start with random actions ($\beta = 0$), and either learn quickly (high $m$) or slowly (low $m$). We point out that slow learners may effectively “change their mind” multiple times, since they are unsure of how to control the robot.
Our findings are plotted in Figure [13](#S8.F13 "Figure 13 ‣ 8.2 Do Latent Actions with Shared Autonomy Help Users Reach and Change Goals? ‣ 8 Simulations ‣ Learning Latent Actions to Control Assistive Robots"). We see that — for both fast and slow learners — LA+SA+Entropy improves in-task performance. We attribute this improvement to the inherent versatility of latent spaces that maximize entropy: as humans gain expertise, they can use these latent actions to quickly undo their mistakes and correct the robot’s behavior.
Summary. Overall, these simulations show that users are not able to precisely reach and maintain goals when controlling robots with only latent actions. Including shared autonomy improves convergence, while training latent actions to maximize entropy ensures that this convergence does not become a burden. When using our proposed combination of shared autonomy and learned latent actions, naïve and experienced users are able to reach their preferred goal and change their mind.
### 8.3 Can We Efficiently Align Latent Actions with Joystick Inputs?
By combining shared autonomy with latent actions, we have a way for humans to precisely control high-DoF robots. But currently the mapping between joystick inputs and latent actions is arbitrary — and this makes it challenging for users to know how to leverage latent actions. Here we test our proposed alignment method from Section [6](#S6 "6 Aligning Latent Actions with User Preferences ‣ Learning Latent Actions to Control Assistive Robots"). We explore how efficiently we can learn a simulated human’s preferred alignment by using semi-supervised learning and underlying priors.

Figure 14: Learning the joystick alignment from simulated and imperfect human feedback. We tested three different tasks with increasing complexity, and here we display the results of the easiest (Plane) and the hardest (Reach & Pour). Alignment Error refers to the difference between where the simulated human expected the robot to move and where the robot actually moved. To explore the robustness of our method we varied how noisy the human was when providing their preferences. Across different tasks and levels of human noise, All Priors (our semi-supervised approach) consistently outperformed the other methods, and almost matched an ideal alignment learned from abundant data.
Setup. Simulated users control the Franka Emika arm from the previous section. The latent action space and decoder $\phi(z, c)$ were trained using conditional autoencoders. This decoder $\phi$ maps from a 2-DoF latent action $z$ to a 7-DoF robot arm motion. We now leave the decoder $\phi$ fixed, and focus on learning the alignment model $f$. This alignment model $z = f(u, c)$ takes 2-DoF inputs from the simulated human and converts them to latent actions for the robot to execute.
Tasks. We considered three tasks of increasing complexity. In each task the simulated user interacted with a 2-DoF joystick.
1. Plane: The robot moves its end-effector in the $x$-$y$ plane. The human prefers for one joystick DoF to move the robot along the $x$-axis, and the other should move the robot along the $y$-axis.
2. Pour: The robot moves and rotates its end-effector along the $z$ axis. The human expects one dimension of the joystick to move the robot up and down, and the other to control a pouring motion.
3. Reach & Pour: The robot reaches for a bottle, carries it to a bowl, and then pours the contents. The human’s preference is divided into two parts: when reaching for the bottle in the $x$-$y$ plane, the human’s preference matches Plane, and when pouring, the human’s preference is the same as in Pour.
Learning the Alignment. Before each of these tasks we provided the simulated human with 10 different motions to label. The human indicated which joystick input $u$ they would expect to use to command the demonstrated motion $(s, s^*)$. In addition to these 10 labeled datapoints, the robot also collected 1000 unlabeled motions for self-supervised learning. Given this data, we compared our approach to different baselines. To better understand which priors are useful, we also included an ablation study where the robot learned with only one prior at a time. Overall, the conditions were as follows (a sketch of the combined training objective appears after the list):
* No Align: baseline where $z = u$
* Manual Align: the affine transformation that best matches the labeled data
* No Priors: trained with the supervised loss $\mathcal{L}_{sup}$ from Equation ([19](#S6.E19 "19 ‣ 6.1 Alignment Model ‣ 6 Aligning Latent Actions with User Preferences ‣ Learning Latent Actions to Control Assistive Robots"))
* Proportional: trained with $\mathcal{L}_{sup} + \mathcal{L}_{prop}$, the proportional prior from Equation ([20](#S6.E20 "20 ‣ 6.2 Reducing Human Data with Intuitive Priors ‣ 6 Aligning Latent Actions with User Preferences ‣ Learning Latent Actions to Control Assistive Robots"))
* Reversible: trained with $\mathcal{L}_{sup} + \mathcal{L}_{reverse}$, the reversible prior from Equation ([21](#S6.E21 "21 ‣ 6.2 Reducing Human Data with Intuitive Priors ‣ 6 Aligning Latent Actions with User Preferences ‣ Learning Latent Actions to Control Assistive Robots"))
* Consistent: trained with $\mathcal{L}_{sup} + \mathcal{L}_{con}$, the consistency prior from Equation ([22](#S6.E22 "22 ‣ 6.2 Reducing Human Data with Intuitive Priors ‣ 6 Aligning Latent Actions with User Preferences ‣ Learning Latent Actions to Control Assistive Robots"))
* All Priors: trained with all of the proposed priors
* Ideal Align: supervised loss where the simulated human answers 1000 queries (instead of 10).
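As referenced above, the sketch below illustrates how the supervised loss and the three priors might be combined into a single training objective. The exact expressions for the proportional, reversible, and consistency terms only paraphrase the intent of Equations (19)-(22); they are illustrative assumptions rather than our exact implementation.

```python
import torch

def alignment_objective(f, phi, labeled, unlabeled, dt=0.1, lam=1.0):
    """Combined semi-supervised objective for the alignment model f (sketch).

    labeled   : (u, s, s_star) tensors from human queries
    unlabeled : (u_rand, s) tensors sampled without human labels
    """
    u, s, s_star = labeled
    u_r, s_u = unlabeled

    step = lambda uu, ss: ss + phi(f(uu, ss), ss) * dt   # one aligned motion

    # Supervised loss: reproduce the motions the human labeled.
    sup = ((step(u, s) - s_star) ** 2).sum(-1).mean()

    # Proportional prior: half the input should cause half the motion.
    m_full = step(u_r, s_u) - s_u
    m_half = step(0.5 * u_r, s_u) - s_u
    prop = ((m_half - 0.5 * m_full) ** 2).sum(-1).mean()

    # Reversible prior: applying u and then -u should return to the start.
    s_mid = step(u_r, s_u)
    rev = ((step(-u_r, s_mid) - s_u) ** 2).sum(-1).mean()

    # Consistency prior: the same input acts similarly at nearby states.
    s_near = s_u + 0.01 * torch.randn_like(s_u)
    m_near = step(u_r, s_near) - s_near
    con = ((m_near - m_full) ** 2).sum(-1).mean()

    return sup + lam * (prop + rev + con)
```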
Besides the type of alignment model, we also varied how imperfectly the simulated human answered our queries. We set the coefficient of variation to 0, 0.1, or 0.5 for the simulated human when they answered queries.
Dependent Measures. To determine the quality of each alignment model, we measured the error between where the human intended to go and where the robot actually went. Specifically, we measured the relative end-effector position and orientation. Let $x^* = \Psi(s^*)$ be the intended end-effector pose, let $x^t$ be the start pose, and let $x^{t+1}$ be where the robot actually ended up: we computed $\|x^* - x^{t+1}\|^2 / \|x^* - x^t\|^2$. For each experiment setting, we reported the mean and standard deviation of this metric over 10 total runs.
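A minimal sketch of this relative-error metric:

```python
import numpy as np

def alignment_error(x_star, x_t, x_t1):
    """Relative progress toward the intended pose: ||x* - x^{t+1}||^2 /
    ||x* - x^t||^2. Values below 1 mean the robot moved toward the pose
    the human intended (a sketch of the reported measure)."""
    return (np.linalg.norm(x_star - x_t1) ** 2
            / np.linalg.norm(x_star - x_t) ** 2)
```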
Hypotheses. We expected three things during our alignment simulations:

> H1. With abundant labeled data, the alignment model will learn the human’s preferences.

> H2. Compared to the fully-supervised baseline, our semi-supervised alignment models that leverage intuitive priors will achieve similar performance with far less human data.

> H3. Semi-supervised training with proportional, reversible, and consistent priors will outperform models trained with only one of these priors.
Results. We highlight results for the Plane and Reach & Pour tasks in Figure [14](#S8.F14 "Figure 14 ‣ 8.3 Can We Efficiently Align Latent Actions with Joystick Inputs? ‣ 8 Simulations ‣ Learning Latent Actions to Control Assistive Robots"). For models that do not leverage data or personalization — i.e., No Align and Manual Align — the error is significantly higher than for the learning-based alternatives. With abundant data and noise-free human annotations, Ideal Align provided the gold-standard performance. The success of Ideal Align indicates that our parametrization of the alignment model $f$ is capable of capturing the human’s preferences.
In practice, the amount of human feedback will always be limited. We found that our proposed priors were critical when looking at models that only had access to 10 queries during training. The three semi-supervised models with just one intuitive prior (Proportional, Reversible, and Consistent) performed twice as well as No Priors, the supervised baseline. Putting all of these priors together resulted in even better performance: across different user noise levels, All Priors consistently demonstrated the lowest mean error and standard deviation. This was particularly noticeable when the human oracle was noisy, suggesting that the three priors are indeed complementary, and including each of them together brings a performance boost!
Comparing the easier Plane task to the more complex Reach & Pour task in Figure [14](#S8.F14 "Figure 14 ‣ 8.3 Can We Efficiently Align Latent Actions with Joystick Inputs? ‣ 8 Simulations ‣ Learning Latent Actions to Control Assistive Robots"), we also saw that using priors became increasingly important as the task got harder. This suggests that — in complex scenarios — simply relying on a few labeled examples during supervised learning may lead to severe overfitting. Our intuitive priors for semi-supervised learning effectively mitigate this problem.
Summary. Viewed together, the results of our simulations strongly support the hypotheses H1, H2, and H3. Our proposed alignment model successfully learned the mapping between the joystick inputs and latent actions (H1). In settings with limited labels, our proposed alignment model with intuitive control priors reached results that almost match supervised training with abundant data (H2). Finally, in ablation studies, we showed how combining all three proposed priors leads to superior performance and greater training stability than training with a single prior (H3).
9 User Studies with Non-Disabled Participants
----------------------------------------------
To evaluate whether actual humans can use learned latent actions to teleoperate robots and perform assistive eating tasks, we conducted four user studies. The participants in these studies are all non-disabled adults (we apply our approach with disabled adults in Section [10](#S10 "10 Case Study with Disabled Users ‣ Learning Latent Actions to Control Assistive Robots")). Importantly, we designed these user studies to mimic assistive teleoperation settings. In each study the human user interacts with a 2-DoF joystick, and uses this joystick to control a 7-DoF robot arm.
The order of the studies roughly parallels Sections [4](#S4 "4 Learning Latent Actions ‣ Learning Latent Actions to Control Assistive Robots"), [5](#S5 "5 Combining Latent Actions with Shared Autonomy ‣ Learning Latent Actions to Control Assistive Robots"), and [6](#S6 "6 Aligning Latent Actions with User Preferences ‣ Learning Latent Actions to Control Assistive Robots"). We start by comparing latent actions to an existing dataset for assistive eating tasks, and then compare latent actions to the end-effector teleoperation method commonly used on assistive robot arms. Next, we introduce shared autonomy, and perform an ablation study to understand how shared autonomy and latent actions contribute to high-level reaching and precise manipulation tasks. Finally, we learn the alignment between joystick inputs and latent actions by considering intuitive priors, and evaluate how this alignment improves human-robot co-adaptation.
### 9.1 Comparing Latent Actions to the HARMONIC Baselines
In our first user study, users teleoperate an assistive robot arm using only learned latent actions. We baseline their performance against current state-of-the-art approaches for controlling high-DoF assistive robot arms. Specifically, we implement the same assistive eating task as in the HARMONIC dataset [newman2018harmonic](#bib.bib37) . This task is shown in Figure [15](#S9.F15 "Figure 15 ‣ 9.1 Comparing Latent Actions to the HARMONIC Baselines ‣ 9 User Studies with Non-Disabled Participants ‣ Learning Latent Actions to Control Assistive Robots"): users guide the robot arm to stab a marshmallow of their choice. There are three different marshmallows on the plate — i.e., three possible high-level goals — and only the human knows which of these discrete goals they want to reach. The HARMONIC dataset reports the performance of 24 people who completed this task with different levels of shared autonomy. Within the HARMONIC baseline, however, the mapping from low-DoF user inputs to high-DoF robot actions is predefined, with separate modes to control the end-effector’s $x$-$y$ position, $z$-yaw position, and roll-pitch orientation. We compare the learned latent mapping from Section [4](#S4 "4 Learning Latent Actions ‣ Learning Latent Actions to Control Assistive Robots") to this set of baselines.

Figure 15: Experimental setup for our user study from Section [9.1](#S9.SS1 "9.1 Comparing Latent Actions to the HARMONIC Baselines ‣ 9 User Studies with Non-Disabled Participants ‣ Learning Latent Actions to Control Assistive Robots"). (Left) in this eating task, the participant uses a two-DoF joystick to guide the robot to reach their desired marshmallow. (Right) we compare our latent action approach to shared autonomy baselines from the HARMONIC dataset.
Independent Variables. We manipulated the robot’s teleoperation strategy with five levels: the four conditions from the HARMONIC dataset plus our proposed learned latent actions. For the first four conditions, the robot uses optimization-based shared autonomy [javdani2018shared](#bib.bib25) to select assistive actions. The robot either provides no assistance (No Assist) or linearly interpolates between the human’s input and the assistive action (Low Assist, High Assist, and Full Assist). High Assist was the most effective strategy from this group: when interpolating, the assistive action is given twice the weight of the human’s commanded action. Within our learned latent actions condition, we applied a conditional variational autoencoder (cVAE) to learn the decoder $\phi(z, c)$ from the demonstrations in the HARMONIC dataset. For now we treat the joystick inputs as the latent actions, so that $z = u$.
Dependent Measures. Before each trial users indicated which marshmallow they wanted to reach. We measured the fraction of trials in which the robot picks up the correct marshmallow (Success Rate), the amount of time needed to complete the task (Completion Time), the total magnitude of the human’s input (Joystick Input), and the distance traveled by the robot’s end-effector (Trajectory Length).
Hypothesis. We hypothesized that:

> H1. *Compared to the baselines, latent actions will improve task success while reducing the completion time, joystick inputs, and trajectory length.*
Experimental Setup. Participants interacted with a joystick while watching the robotic arm (see Figure [15](#S9.F15 "Figure 15 ‣ 9.1 Comparing Latent Actions to the HARMONIC Baselines ‣ 9 User Studies with Non-Disabled Participants ‣ Learning Latent Actions to Control Assistive Robots")). The robot held a fork; during the task, users teleoperated the robot to position this fork directly above their desired marshmallow. We selected the robot’s start state, goal locations, and movement speed to be consistent with the HARMONIC dataset.
Participants and Procedure. Our participant pool consisted of ten Stanford University affiliates who provided informed consent (3 female, average participant age $23.9 \pm 2.8$ years). Following the same protocol as the HARMONIC dataset, each participant was given up to five minutes to familiarize themselves with the task and joystick, and then completed five recorded trials using our cVAE approach. At the start of each trial the participant indicates which marshmallow they want the robot to reach; the trial ends after the user indicates that the fork is above their intended marshmallow. We point out that participants only completed the task with the cVAE condition; other teleoperation strategies are benchmarked in Newman et al. [newman2018harmonic](#bib.bib37) .

Figure 16: End-effector trajectories from High Assist and cVAE conditions. The robot starts at the black dot, and moves to position itself over the plate.

Figure 17: Comparing our objective results to the HARMONIC baseline. We found that cVAE led to faster task completion with less user input and end-effector motion. The Full Assist condition performed worse than High Assist across the board (omitted for clarity). Error bars show the 10 and 90 percentiles, and \* denotes statistical significance ($p < .05$).
Results. We display example robot trajectories in Figure [16](#S9.F16 "Figure 16 ‣ 9.1 Comparing Latent Actions to the HARMONIC Baselines ‣ 9 User Studies with Non-Disabled Participants ‣ Learning Latent Actions to Control Assistive Robots") and report our dependent measures in Figures [15](#S9.F15 "Figure 15 ‣ 9.1 Comparing Latent Actions to the HARMONIC Baselines ‣ 9 User Studies with Non-Disabled Participants ‣ Learning Latent Actions to Control Assistive Robots") and [17](#S9.F17 "Figure 17 ‣ 9.1 Comparing Latent Actions to the HARMONIC Baselines ‣ 9 User Studies with Non-Disabled Participants ‣ Learning Latent Actions to Control Assistive Robots"). Inspecting these example trajectories, we observe that the cVAE model learned latent actions that constrain the robot’s end-effector into a region above the plate. Users controlling the robot with cVAE reached their desired morsel in 44 of the 50 total trials, yielding a higher Success Rate than the assistance baselines. To better compare cVAE to the High Assist condition, we performed independent t-tests. Participants with the cVAE model had significantly lower Completion Time ($t(158) = 2.95$, $p < .05$), Joystick Input ($t(158) = 2.49$, $p < .05$), and Trajectory Length ($t(158) = 9.39$, $p < .001$), supporting hypothesis H1.
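For reference, these comparisons are standard independent t-tests; a minimal sketch (with placeholder variable names) is:

```python
from scipy import stats

def compare_conditions(metric_cvae, metric_baseline):
    """Independent t-test between two conditions, as used to compare cVAE
    against High Assist (inputs are per-trial measurements, e.g., the
    completion times of each condition; names are placeholders)."""
    t, p = stats.ttest_ind(metric_cvae, metric_baseline)
    return t, p
```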
Summary. We baselined our learned mapping from low-DoF to high-DoF actions against state-of-the-art shared autonomy approaches with predefined mappings. Users teleoperating an assistive robot with learned latent actions reached their high-level goals more accurately, while requiring less time, effort, and movement.
### 9.2 Comparing Latent Actions to End-Effector Teleoperation

Figure 18: Experimental setup for our user study in Section [9.2](#S9.SS2 "9.2 Comparing Latent Actions to End-Effector Teleoperation ‣ 9 User Studies with Non-Disabled Participants ‣ Learning Latent Actions to Control Assistive Robots"). (Top row) the participant is teleoperating an assistive robot to make their “apple pie” recipe. This recipe is broken down into three sub-tasks. On left the robot picks up eggs, pours them into the bowl, then drops the container into the recycling. In middle the robot picks up flour, pours it into the bowl, then returns the container to the shelf. On right the robot grasps an apple, places it in the bowl, then stirs the mixture. (Middle row) example robot trajectories when the person directly controls the robot’s End-Effector. (Bottom row) example trajectories when using cVAE to learn latent actions. Comparing the example trajectories, we observe that cVAE resulted in robot motions that more smoothly and directly accomplished the task.
Real-world assistive eating tasks involve more than just reaching for discrete goals (i.e., stabbing a food morsel). Often objects lie in continuous regions (i.e., anywhere on a shelf), and users must make continuous decisions (i.e., how much water to pour). In our second user study we therefore apply learned latent actions to continuous tasks. Consider cooking the “apple pie” in Figure [18](#S9.F18 "Figure 18 ‣ 9.2 Comparing Latent Actions to End-Effector Teleoperation ‣ 9 User Studies with Non-Disabled Participants ‣ Learning Latent Actions to Control Assistive Robots"). Assembling this recipe requires picking up ingredients from the shelf, pouring them into a bowl, recycling empty containers — or returning half-filled containers to the shelf — and then stirring the mixture. Shared autonomy approaches like [dragan2013policy](#bib.bib16) ; [newman2018harmonic](#bib.bib37) ; [javdani2018shared](#bib.bib25) are not suitable within this setting because: i) the goals lie in continuous regions and ii) the user needs to control both the high-level goal that the robot reaches for and the trajectory the robot follows to reach that goal (e.g., keeping a cup upright until pouring). Hence, we compare our latent action method against end-effector teleoperation. End-effector teleoperation is commonly used by assistive robots [herlant2016assistive](#bib.bib22) , where the human presses a button to switch modes and control different aspects of the end-effector’s motion. For example, in one mode the 2-DoF joystick moves the robot’s end-effector in the $x$-$y$ plane. See videos of this user study here:
<https://youtu.be/wjnhrzugBj4>.
Independent Variables. We tested two teleoperation strategies: End-Effector and cVAE. Under End-Effector the user inputs apply a 6-DoF twist to the robot’s end-effector, controlling its linear and angular velocity. Participants interact with two 2-DoF joysticks, and are given a button to toggle between linear and angular motion [herlant2016assistive](#bib.bib22) ; [newman2018harmonic](#bib.bib37) ; [javdani2018shared](#bib.bib25) . By contrast, in cVAE the participants only interact with one 2-DoF joystick, i.e., the latent action is $z = [z_1, z_2] \in \mathbb{R}^2$. We emphasize that this cVAE latent action model has the same structure as the one used in our previous user study (Section [9.1](#S9.SS1 "9.1 Comparing Latent Actions to the HARMONIC Baselines ‣ 9 User Studies with Non-Disabled Participants ‣ Learning Latent Actions to Control Assistive Robots")), and does not yet include either shared autonomy or alignment models. We trained cVAE using state-action pairs from kinesthetic demonstrations, where we guided the robot along related sub-tasks such as reaching for the shelf, pouring objects into the bowl, and stirring. Overall, the cVAE was trained with less than 7 minutes of demonstration data.
Dependent Measures – Objective. We measured the total amount of time it took for participants to complete the entire cooking task (Completion Time), as well as the magnitude of their inputs (Joystick Input).
Dependent Measures – Subjective. We administered a 7-point Likert scale survey after each condition. Questions were separated into six scales, such as ease of performing the task (Ease) and consistency of the controller (Consistent). Once users had completed the task with both strategies, we asked comparative questions about which they preferred (Prefer), which was Easier, and which was more Natural.
Hypotheses. We had the following hypotheses:

> H1. *Users controlling the robot arm with low-DoF latent actions will complete the cooking task more quickly and with less overall effort.*

> H2. *Participants will perceive the robot as easier to work with in the cVAE condition, and will prefer the cVAE over End-Effector teleoperation.*
Experimental Setup. We developed a cooking task where the person is making a simplified “apple pie.” As shown in Figure [18](#S9.F18 "Figure 18 ‣ 9.2 Comparing Latent Actions to End-Effector Teleoperation ‣ 9 User Studies with Non-Disabled Participants ‣ Learning Latent Actions to Control Assistive Robots"), the assistive robot must sequentially pour eggs, flour, and an apple into the bowl, dispose of their containers, and stir the mixture. The user sat next to the robot and controlled its behavior with a handheld joystick. During the experiment we introduced variance by intermittently changing the location of the shelf, bowl, and recycling bin.
Participants and Procedure. Eleven members of the Stanford University community (4 female, mean age 27.4 ± 11.8 years) provided informed consent to participate in this study. Similar to the other user studies described in this paper, we used a within-subjects design, and counterbalanced the order of our two conditions. Four of our subjects had prior experience interacting with the robot used in our experiment.
Before starting the study, participants were shown a video of the cooking task. Participants then separately completed the three parts of the task as visualized in Figure [18](#S9.F18); we reset the robot to its home position between each of these sub-tasks. After the user completed these sub-tasks, we re-arranged the placement of the recycling bin and bowl, and users performed the entire cooking task without breaks. Participants were told about the joystick interface for each condition, and could refer to a sheet that labelled the joystick inputs.

Figure 19: Objective results from assembling an “apple pie.” These results were collected across the entire cooking task (combining each sub-task from Figure [18](#S9.F18)). We compare using just latent actions to direct end-effector teleoperation.

Figure 20: Subjective results from assembling an “apple pie.” Higher ratings indicate participant agreement. Participants thought our approach required less effort (Ease), made it easier to complete the task (Easier), and produced more natural robot motion (Natural) as compared to End-Effector control.
Results – Objective. Our objective results are summarized in Figure [19](#S9.F19). When using cVAE to complete the entire recipe, participants finished the task in less time ($t(10) = -6.9$, $p < .001$), and used the joystick less frequently ($t(10) = -5.1$, $p < .001$) as compared to direct End-Effector teleoperation.
Results – Subjective. We display the results of our 7-point Likert scale surveys in Figure [20](#S9.F20). Before reporting these results, we first confirmed the reliability of our scales. We then leveraged paired t-tests to compare user ratings for End-Effector and cVAE conditions. We found that participants perceived cVAE as requiring less user effort ($t(10) = 2.7$, $p < .05$) than End-Effector. Participants also indicated that it was easier to complete the task with cVAE ($t(10) = 2.5$, $p < .05$), and that cVAE caused the robot to move more naturally ($t(10) = 3.8$, $p < .01$). The other scales were not significantly different.
Summary. We focused on a cooking task with continuous high-level goals, and compared latent actions to end-effector teleoperation. When controlling the robot with latent actions, users completed the cooking task more quickly and with less effort (H1). Participants believed that the cVAE approach led to more natural robot motion, and indicated that it was easier to perform the task with latent actions. However, participants did not indicate a clear preference for either strategy (H2). We will explore ways to improve user satisfaction in our next user study, where we combine latent actions with shared autonomy to assist the human.
### 9.3 Combining Learned Latent Actions with Shared Autonomy

Figure 21: Experimental setup for our user study in Section [9.3](#S9.SS3). (Left) The Dessert task consists of 3 phases: stabbing the marshmallow, scooping it in icing, and dipping it in rice. We identified the end-effector directions needed to complete these fine-grained preferences, and plotted the average dot product between the desired and actual directions (Preference Alignment). In the R+SA condition, users executed the entire task in a stabbing/dipping orientation. By contrast, with LA+SA users correctly adjusted the scooping preference in the second phase of the task. (Right) We plot the results of our 7-point Likert scale surveys. Color to method mappings are consistent with Figure [22](#S9.F22), and \*\*\* indicates statistical significance ($p < 0.05$).
In our first two user studies we tested latent actions by themselves, without any robotic assistance. Our results indicate that latent actions objectively outperform both the HARMONIC baselines and end-effector control; however, the participants’ subjective responses are mixed. Plus, so far we have only dealt with high-level reaching goals — but assistive eating also involves fine-grained manipulation, where the user must cut, stab, and scoop their food. Here we tackle both issues by performing a user study with eating tasks (see Figure [21](#S9.F21)). Participants must control the robot towards their goal plate, and then carefully adjust the robot’s motion to cut, stab, and scoop different foods. We explore how users leverage learned latent actions with shared autonomy to complete this task. Specifically, we conduct an ablation study across latent actions and shared autonomy, and determine whether latent actions alone, shared autonomy alone, or their combination best supports high-level reaching and precise manipulation. See videos of this user study here:
<https://youtu.be/7BouKojzVyk>.
Experimental Setup. Each participant attempted to complete two dishes: an Entree task and a Dessert task. In Entree, users had to perform multiple precise motions at the same goal. Here participants i) guided the robot towards a bowl with tofu, ii) cut off a slice of tofu, and iii) stabbed and scooped the slice onto their plate. In Dessert the participants had to convey their preferences at multiple goals: they i) stabbed a marshmallow in the middle plate, ii) scooped it through icing at the right plate, and then iii) dipped it in rice at the left plate before iv) setting the marshmallow on their plate. In both tasks subjects sat next to the robot, mimicking a wheelchair-mounted arm.
Independent Variables. We conducted a 2x2 factorial design that separately varied Control Interface and Robot Assistance.
For the control interface, we tested a state-of-the-art direct teleoperation scheme (Retargeting), where the user’s joystick inputs map to the 6-DoF end-effector twist of the robot [rakita2017motion](#bib.bib43). We compared this direct teleoperation baseline to our learned latent actions: here the robot interprets the meaning of the human’s inputs based on the current context.
For robot assistance, we tested with and without shared autonomy. We implemented the shared autonomy algorithm from [javdani2018shared](#bib.bib25), which assists the robot towards likely human goals.
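For intuition, the sketch below shows a simplified goal-predict-then-assist loop: a Boltzmann-style belief update over discrete goals followed by confidence-based blending. This is an illustrative stand-in for the algorithm of [javdani2018shared](#bib.bib25), not that method itself; the temperature, gain, and arbitration rule are all assumptions.

```python
import numpy as np

def update_belief(belief, goals, ee_pos, u_dir, temp=5.0):
    """One Bayesian update of the belief over candidate goals.

    belief: (G,) prior over goals; goals: (G, 3) goal positions;
    ee_pos: (3,) current end-effector position;
    u_dir:  (3,) direction of the human's commanded motion.
    """
    to_goals = goals - ee_pos
    to_goals = to_goals / (np.linalg.norm(to_goals, axis=1, keepdims=True) + 1e-8)
    u = u_dir / (np.linalg.norm(u_dir) + 1e-8)
    # Boltzmann observation model: inputs pointing at a goal are more likely.
    likelihood = np.exp(temp * (to_goals @ u))
    posterior = belief * likelihood
    return posterior / posterior.sum()

def assistive_action(belief, goals, ee_pos, human_twist, gain=1.0):
    """Blend the human's command with motion toward the most likely goal,
    arbitrating by the robot's confidence in that goal."""
    g = goals[np.argmax(belief)]
    robot_twist = gain * (g - ee_pos)
    alpha = float(np.max(belief))  # confidence-based arbitration weight
    return alpha * robot_twist + (1 - alpha) * human_twist
```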
Crossing these two separate factors, we totaled four different conditions:
* R: Retargeting with no shared autonomy
* R+SA: Retargeting with shared autonomy
* LA: Latent actions with no shared autonomy
* LA+SA: Latent actions with shared autonomy
Note that the LA+SA condition is our proposed approach (Algorithm [1](#alg1)). However, we still omit the alignment model $f$, so for now the latent action $z$ is set equal to the joystick input $u$.
Model Training. We provided kinesthetic demonstrations $\mathcal{D}$ that guided the robot towards each plate, and then performed cutting, stabbing, and scooping motions at these goals. The robot learned the latent action space from a total of 20 minutes of kinesthetic demonstrations. Because we are combining latent actions with shared autonomy, we also recorded the belief $b$ during these demonstrations. In the LA+SA condition, the robot used context $c = (s, b)$ to decode human inputs.
Dependent Measures – Objective. We recorded the amount of time users took to complete each task (Total Time), as well as the amount of time spent without providing joystick inputs (Idle Time). We also computed proxy measures of the high-level goal accuracy and low-level preference precision. For goals, we measured the robot’s total distance to the closest plate throughout the task (Goal Error). For preferences, we recorded the dot product between the robot’s actual end-effector direction and the true end-effector directions needed to precisely cut, stab, and scoop (Preference Alignment).
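As a concrete reference, the Preference Alignment measure can be computed as below. This is a minimal sketch under the description above; the per-phase desired directions (e.g., the cutting or scooping direction) are task-specific inputs defined by the experimenter.

```python
import numpy as np

def preference_alignment(actual_dirs, desired_dir):
    """Average dot product between the robot's actual end-effector
    directions (one row per timestep) and the desired direction for
    the current sub-task (e.g., the scooping direction).
    Returns a value in [-1, 1]; 1.0 means perfectly aligned."""
    desired = desired_dir / np.linalg.norm(desired_dir)
    actual = actual_dirs / np.linalg.norm(actual_dirs, axis=1, keepdims=True)
    return float((actual @ desired).mean())
```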
Dependent Measures – Subjective. We administered a 7-point Likert scale survey after each condition. Questions were organized along five scales: how Easy it was to complete the tasks, how Helpful the robot was, how Precise their motions were, how Intuitive the robot was to control, and whether they would use this condition again (Prefer).
Participants and Procedure. We recruited 10 subjects from the Stanford University student body to participate in our study (4 female, average age 23.5 ± 2.15 years). All subjects provided informed written consent prior to the experiment. We used a within-subjects design: each participant completed both tasks with all four conditions (the order of the conditions was counterbalanced). Before every trial, users practiced teleoperating the robot with the current condition for up to 5 minutes.
Hypotheses. We tested three main hypotheses:
> H1. *Users controlling the robot with shared autonomy will more accurately maintain their goals.*
>
> H2. *Latent actions will help users more precisely perform manipulation tasks.*
>
> H3. *Participants will complete the task most efficiently with combined LA+SA.*

Figure 22: Error between the end-effector and nearest goal during the eating tasks. Adding shared autonomy (SA) decreased this error across both mapping strategies (R and LA).

Figure 23: Time taken to complete the eating task (solid) and time spent idle (light). Users completed both eating tasks most efficiently with our proposed combination of shared autonomy and latent actions (LA+SA).
Results – Objective. To explore H1, we analyzed the Goal Error for methods with and without shared autonomy (see Figure [22](#S9.F22)). Across both tasks, users interacting with shared autonomy reached their intended goals significantly more accurately ($F(1,18) = 29.9$, $p < .001$). Breaking this down by condition, users incurred *less* error with LA+SA than with LA ($p < .001$), and — similarly — users were more accurate with R+SA than with R ($p < .05$). Building on our prior user study results, this indicates that adding shared autonomy improves the performance of our latent action approach.
So shared autonomy helped users more accurately maintain their goals — but were participants able to complete the precise manipulation tasks at those goals? We visualize the Preference Alignment for Dessert in Figure [21](#S9.F21), specifically comparing R+SA to LA+SA. We notice that — when using direct teleoperation — participants remained in a stabbing preference throughout the task. By contrast, users with latent actions adjusted between different manipulation tasks: stabbing the marshmallow, scooping it in icing, and dipping it in rice. These results support H2, suggesting that latent actions enable users to precisely manipulate the robot.

Figure 24: Experimental setup for the user study in Section [9.4](#S9.SS4). We visualize a single user’s end-effector trajectories for the Avoid, Pour and Reach + Pour tasks. Participants teleoperated the 7-DoF Panda robot arm without any alignment model (No Align), with an alignment model trained only on their supervised feedback (No Priors), and with our proposed method, where the robot generalizes the human’s feedback using intuitive priors (All Priors). For both No Align and No Priors baselines, we can see moments where the human gets confused, counteracts themselves, or fails to complete the task.
Now that we know the benefits of shared autonomy and latent actions individually, what happens when we focus on their combination? Inspecting Figure [23](#S9.F23), participants using LA+SA were able to complete both tasks more efficiently. Summing times across both tasks, and then performing pair-wise comparisons between each condition, we found that LA+SA outperformed the alternatives for both Total Time ($p < .05$) and Idle Time ($p < .05$). Overall, we found that LA+SA users completed the task the fastest and with the least error.
Results – Subjective. We find further support for H3 in the users’ feedback. The results of t-tests comparing LA+SA to the other conditions are reported in Figure [21](#S9.F21) (where \*\*\* denotes $p < .05$). Responses suggest that users were most “comfortable” when performing precise manipulation with LA+SA. We emphasize the improvement in these subjective results as compared to Figure [20](#S9.F20), where users ranked latent actions similarly to direct end-effector control. We conclude that incorporating shared autonomy improves the human’s experience when leveraging latent actions.
Summary. We conducted two eating tasks where participants needed to i) reach for high-level goals and ii) precisely manipulate food items at those goals. We found that both shared autonomy and latent actions improved performance: including shared autonomy decreased end-effector error, while using latent actions helped users quickly transition between different fine-grained manipulations. These results complement Section [9.1](#S9.SS1) (where we compare latent actions to shared autonomy) and Section [9.2](#S9.SS2) (where we compare latent actions to end-effector control). But now we also find that the combination of shared autonomy and latent actions outperforms either of these approaches alone.
### 9.4 Learning the Alignment between Joystick Inputs and Latent Actions
Now that we have shown the benefits of latent actions — and how latent actions can incorporate shared autonomy — we finally turn our attention to aligning the human’s joystick inputs with the learned latent space. This final user study builds on Section [6](#S6). Prior to the study, we learn a latent action space that maps from 2-DoF inputs to 7-DoF robot actions. During the study, we ask participants to label a few sample robot motions with their preferred joystick input. The robot uses intuitive priors to generalize from these labels and learn a personalized mapping from joystick inputs $u$ to latent actions $z$. In the previous studies we have simply set $z = u$, and we leverage this as one of the baselines in this user study. We also evaluate the effects of our intuitive priors, and compare learning the alignment model with and without these priors. See videos of this user study here:
<https://youtu.be/rKHka0_48-Q>.

Figure 25: Objective results from our alignment user study. (Left) Average time taken to complete each task. (Middle) Average trajectory length as measured in end-effector space. (Right) Percentage of the time people spend undoing their actions. Error bars show the standard deviation across the 10 participants, and colors match Figure [24](#S9.F24). Asterisks denote statistically significant pairwise comparisons between the two marked strategies ($p < .05$).

Figure 26: Heatmaps of the participants’ joystick inputs during the Avoid task. For No Align in the upper right, people primarily used the cardinal directions. For No Priors in the bottom left, the joystick inputs were not clearly separated, and no clear pattern was established. For our All Priors model on the bottom right, however, we observed that the human inputs were evenly distributed. This indicates that the users smoothly completed the task by continuously manipulating the joystick in the range $[-1, +1]$ along both axes.

Figure 27: Results from our 7-point Likert-scale survey after the alignment user study. The legend is the same as in Figure [25](#S9.F25). Higher ratings indicate agreement. Users thought that our learned model with intuitive priors aligned with their preferences, was easy to control, and improved efficiency — plus they would choose to use it again. Pairwise comparisons between our approach and the baselines are statistically significant across the board.
Tasks. Similar to our simulation experiments from Section [8.3](#S8.SS3), we considered three different tasks. These tasks are visualized in Figure [24](#S9.F24).
1. Avoid: The robot arm moves its end-effector in a horizontal plane. Users are asked to guide the robot around an obstacle without colliding with it.
2. Pour: The robot arm is holding a cup, and users want to pour this cup into two bowls. Users are asked to first pour into the farther bowl, before moving the cup back to the start and pouring into the closer bowl.
3. Reach & Pour: Users start by guiding the robot towards a cup and picking it up. Once the users reach and grasp the cup, they are asked to take the cup to a target bowl, and finally pour into it.
Independent Variables. For each of the tasks described above, we compared three different alignment models. No Align corresponds to what we have done in the previous user studies: setting $z = u$, so that all users must adapt to the same latent action alignment. We compare this to two personalized approaches. First is No Priors, where we train the alignment model $f$ just using the supervised loss (i.e., the robot only learns from the human’s labeled data). We contrast this to All Priors, where the robot leverages semi-supervised learning (i.e., the robot also considers the proportional, reversible, and consistent priors we anticipate that the user will expect). Each condition learns from the same human feedback; we emphasize that All Priors uses the same number of human labels as No Priors.
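To ground this, the sketch below shows one plausible way to train an alignment model $f$ with the supervised loss plus two of the intuitive priors (proportional and reversible) applied to unlabeled joystick samples. The exact loss forms and weights are assumptions based on the description above, not our verbatim training code, and the consistent prior is omitted here because it additionally involves the decoded robot motions.

```python
import torch

def alignment_loss(f, u_labeled, z_labeled, u_unlabeled, w_prop=1.0, w_rev=1.0):
    """Semi-supervised loss for an alignment model f: joystick input u -> latent action z.

    u_labeled, z_labeled: the user's labeled (input, latent action) pairs.
    u_unlabeled: randomly sampled joystick inputs used only by the priors.
    """
    # Supervised term: fit the user's labeled preferences.
    supervised = ((f(u_labeled) - z_labeled) ** 2).mean()

    # Proportional prior: scaling the input should scale the resulting action.
    alpha = torch.rand(u_unlabeled.shape[0], 1)
    proportional = ((f(alpha * u_unlabeled) - alpha * f(u_unlabeled)) ** 2).mean()

    # Reversible prior: negating the input should undo the resulting action.
    reversible = ((f(-u_unlabeled) + f(u_unlabeled)) ** 2).mean()

    return supervised + w_prop * proportional + w_rev * reversible
```

Here $f$ can be any small network (e.g., a two-layer MLP from $\mathbb{R}^2$ to $\mathbb{R}^2$), and the prior terms act as regularizers that let a handful of labeled queries generalize across the whole joystick workspace.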
Dependent Measures. To evaluate the effectiveness of these different alignment strategies, we recorded Task Time and Trajectory Length. We also calculated the percentage of the time that users Undo their actions by significantly changing the joystick direction — undoing suggests that the alignment is not quite right, and the human is still adapting to the robot’s control strategy. Besides these objective measures, we also collected subjective feedback from the participants through 7-point Likert scale surveys.
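The Undo measure can be operationalized in several ways; one simple possibility, consistent with the description above, is to count how often consecutive joystick inputs point in roughly opposite directions. The 120-degree threshold below is an illustrative assumption rather than the exact criterion we used.

```python
import numpy as np

def undo_percentage(inputs, angle_thresh_deg=120.0):
    """Percentage of consecutive joystick samples where the input direction
    flips by more than angle_thresh_deg, suggesting the user is undoing
    their previous command. `inputs` is (T, 2); near-zero samples are skipped."""
    u = np.asarray(inputs, dtype=float)
    norms = np.linalg.norm(u, axis=1)
    active = norms > 1e-3
    u = u[active] / norms[active, None]
    cos = (u[:-1] * u[1:]).sum(axis=1)
    flips = cos < np.cos(np.deg2rad(angle_thresh_deg))
    return 100.0 * float(flips.mean()) if len(flips) else 0.0
```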
Participants and Procedure. We recruited 10 volunteers that provided informed written consent (3 female, ages 23.7 ± 1.5 years). Participants used a 2-axis joystick to teleoperate the 7-DoF robot arm, and completed three manipulation tasks inspired by assistive settings. At the start of each task, we showed the user a set of robot movements and asked them to provide their preferred input on the joystick — i.e., “if you wanted the robot to perform the movement you just saw, what joystick input would you provide?” Users answered 7 queries for task Avoid, 10 queries for task Pour, and 30 queries for task Reach & Pour. After the queries finished, the users started performing the tasks sequentially using each of the alignment strategies. The order of alignment strategies was counterbalanced.

Figure 28: Experimental setup for our case study with disabled persons (Section [10](#S10)). Two adult males who employ assistive devices when eating volunteered to participate in the study. Due to safety restrictions, this study was conducted remotely: participants used an online joystick to teleoperate our robot in real-time while watching live-streamed video. Here we show examples of the participants’ views during Entree and Dessert tasks. Camera (1) is a front-view of the robot and the high-level goals, and camera (2) is a side view of the same. Enlarged images of the participants are shown in the top right of each frame.
Hypothesis. We hypothesize that:
> H1. *An alignment model learned from user-specific feedback and generalized through intuitive priors will make it easier for humans to control the robot and perform assistive manipulation tasks.*
Results. The objective results of our user study are summarized in Figure [25](#S9.F25). Across tasks and metrics, our model with All Priors outperforms the two baselines. In addition, our model not only has the best average performance, but it also demonstrates the least variance. Similar to our simulation results from Section [8.3](#S8.SS3), when the task is difficult (i.e., Reach & Pour), the performance of No Priors drops significantly compared to simpler tasks, reinforcing the importance of priors when learning complex alignment models.
We also illustrate our survey responses in Figure [27](#S9.F27). Across the board, we found that users exhibited a clear preference for our proposed method. Specifically, they perceived All Priors as resulting in better alignment, more natural, accurate, and effortless control, and would elect to use it again. These subjective results highlight the importance of personalization when controlling high-DoF systems — we contrast these results to Figure [20](#S9.F20), where participants perceived the unaligned controller as somewhat unintuitive.
To better visualize the user experiences, we also display example robot end-effector trajectories from one of the participants in Figure [24](#S9.F24). Here we observe that the trajectories of our model (in orange) are smooth and do not detour during the tasks, while the trajectories for No Align (in grey) and No Priors (in black) have many movements that counteract themselves, indicating that this user was struggling to understand and align with the control strategy. In the worst case, participants were unable to complete the task with the No Priors model (see the Avoid task in Figure [24](#S9.F24)) because no joystick inputs mapped to their intended direction, effectively causing them to get stuck at undesirable states.
To further validate that our model is learning the human’s preferences, we illustrate heatmaps over user inputs for the Avoid task in Figure [26](#S9.F26). Recall that this task requires moving the robot around an obstacle. Without the correct alignment, users default to the four cardinal directions (No Align), or warped circle-like motions (No Priors). By contrast, under All Priors the users smoothly moved the joystick around an even distribution, taking full advantage of the joystick’s $[-1, +1]$ workspace along both axes.
Summary. In this user study we explored different approaches for learning the alignment model between joystick inputs and latent actions. We compared learning the alignment model with and without intuitive priors, and found that including these priors improved user performance, particularly in challenging tasks (H1).
10 Case Study with Disabled Users
----------------------------------

Figure 29: Comparison between the End-Effector condition on a Kinova assistive robot arm [Kinova](#bib.bib1) (left) and our End-Effector implementation (right). We introduced a grid so that this virtual joystick was as close as possible to the continuous joysticks we previously used during our studies with non-disabled persons. We also consulted with our users while creating this interface to ensure that it matched their expectations.
Our previous user studies involved non-disabled participants. Overall, these studies demonstrate the potential advantages of learned latent actions and support our proposed algorithm. But we still need to determine whether these results transfer to our target population; accordingly, here we test whether disabled persons can leverage latent actions to teleoperate assistive robots during eating tasks. This case study explores conditions and tasks similar to the user study from Section [9.3](#S9.SS3), but now with the added dimension of two disabled users that are familiar with assistive robot arms.
Participants. We recruited two adult males with disabilities (ages 28 and 42) who require assistance when eating. The first participant has three years of experience with assistive robot arms, and the second participant has two years of experience with these robots.
Experimental Setup. In order to ensure safety during the COVID-19 pandemic, we developed a remote control interface where participants teleoperated an on-campus robot in real-time from their own homes (see Figure [28](#S9.F28)). Participants interacted with a virtual joystick while watching live-streamed video of the robot arm. Video showed the robot and task from two angles: a front-view and a side-view. Just like the previous in-person studies, we used the participant’s joystick inputs to control the 7-DoF motion of the robot arm. We worked with both participants to minimize communication latency and position the cameras effectively; however, we recognize that delays and depth-perception may affect the results of these experiments.
As in Section [9.3](#S9.SS3), participants attempted to assemble two dishes: an Entree task and a Dessert task. The Entree task involved reaching for a block of tofu and then precisely cutting off a slice. The Dessert task had three steps: i) reaching for and stabbing a marshmallow, ii) carefully scooping that marshmallow in icing, and then iii) dipping the marshmallow in sprinkles.
Independent Variables. We compared two different control schemes: End-Effector and LA+SA. With End-Effector the user directly controls the position and orientation of the fork attached to the end of the assistive robot arm [herlant2016assistive](#bib.bib22); [newman2018harmonic](#bib.bib37); [javdani2018shared](#bib.bib25). Participants here interact with two separate sets of joysticks, one for linear motion and a second for angular motion. We collaborated with both participants to ensure that this End-Effector setup closely resembled the control interface they typically use on their assistive robot arms (see Figure [29](#S10.F29)). The LA+SA condition is our proposed approach (Algorithm [1](#alg1)), which combines latent actions with shared autonomy. We omit the alignment model $f$ because of the time constraints of our voluntary participants; thus, the latent action $z$ is set equal to the joystick input $u$.
Dependent Measures. To understand the effects of learned latent actions, we measured the total time taken to complete each task (Total Time) and the amount of time during the task where users were not providing joystick inputs (Idle Time). We also recorded the distance between the robot’s fork and the closest goal — i.e., the tofu, marshmallow, icing, or sprinkles. This Goal Error serves as a proxy metric for the accuracy of the robot’s high-level reaching. After participants completed the entire experiment we administered a short, open-ended survey to elicit their free-response feedback.
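A minimal sketch of the Goal Error proxy appears below: the end-effector trajectory is scored by its time-integrated distance to whichever goal is nearest at each step. The sampling period `dt` and the exact aggregation are assumptions; the text above only specifies that the distance to the closest goal is accumulated over the task.

```python
import numpy as np

def goal_error(traj, goals, dt=0.1):
    """Time-integrated distance between the fork and the closest goal.

    traj:  (T, 3) end-effector positions sampled every dt seconds.
    goals: (G, 3) positions of the goals (tofu, marshmallow, icing, etc.).
    """
    dists = np.linalg.norm(traj[:, None, :] - goals[None, :, :], axis=-1)
    return float(dists.min(axis=1).sum() * dt)
```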

Figure 30: The trajectories the robot’s fork follows with End-Effector and LA+SA conditions in the Dessert task. The fork starts above the goals and moves to stab a marshmallow, scoop it in icing, and then dip it in a cup of sprinkles. We overlay the trajectories for both disabled participants.

Figure 31: Error between the robot’s fork and the closest goal during both Entree and Dessert tasks. We separately show the results for each disabled user.

Figure 32: Total time taken to complete the task (solid) and the idle time where users were trying to select their joystick input (light). Time is measured in seconds. As before, we separately show the results for each disabled user. Notice that the Dessert task took both users over 5 minutes with End-Effector, but less than 2 minutes with LA+SA.
Procedure. We applied a within-subjects design, where both participants completed the Dessert and Entree tasks using End-Effector and LA+SA conditions. User 1 started with the End-Effector condition and User 2 started with LA+SA. Both users completed the Dessert task first before doing the Entree task. So that the users could familiarize themselves with the controller and environment, participants were allotted up to 5 minutes of practice time with each condition.
Hypotheses. We tested two main hypotheses:
> H1. *Disabled users will complete the eating tasks more quickly and accurately with our combination of latent actions and shared autonomy.*
>
> H2. *Disabled users will subjectively prefer LA+SA to their current control approach (End-Effector).*
Results. The objective results of our case study are visualized in Figures [30](#S10.F30), [31](#S10.F31), and [32](#S10.F32). In Figure [30](#S10.F30) we display the motion of the robot’s end-effector for the Dessert task: comparing the trajectories, we observe that End-Effector resulted in disjointed motions where users constantly switched between controlling the $x$, $y$, or $z$ position of the robot’s fork. By contrast, LA+SA helped users smoothly and directly move between goals.
To explore H1 we specifically analyzed the Goal Error and Total Time for both tasks and participants (Figures [31](#S10.F31) and [32](#S10.F32)). Across disabled users we noticed a clear trend: our proposed LA+SA helped participants accurately reach their high-level goals while reducing the Total Time and Idle Time required to complete the tasks. These trends match our results with non-disabled users (see Section [9.3](#S9.SS3)), and suggest that combining latent actions with shared autonomy improves objective performance during eating tasks.
We also asked for free-form feedback after the experiment to better understand the perspective of disabled users. The participants’ responses generally supported H2. (Note: we have added the condition names for clarity in the quotes below. Participants did not know what the conditions were during the user study, and referred to them as “the first one” or “the second one.”) User 1 stated that:
> “Comparing End-Effector to LA+SA, the former was a lot harder, and LA+SA was way easier in probably every aspect. LA+SA was a little bit confusing in the sense that I wasn’t sure right off the bat what the joystick directions meant, but after using it for a minute or two it was really, really intuitive. Overall, LA+SA was great and I would definitely use it.”
User 2 similarly mentioned that:
> “With LA+SA it was much easier to get those broad strokes of where I wanted to go, and more intuitive in how the robot moved, together with a simpler interface.”
Summary. This small-scale case study with two disabled users compared their typical end-effector control interface to our proposed latent action approach. Both participants performed better with learned latent actions (H1): they moved the robot closer to their high-level goals and completed the task in less total time. Users also perceived latent actions as a better approach for the two assistive eating tasks (H2).
11 Conclusion
--------------
Our user and case studies separately and collectively test the key parts of our approach from Algorithm [1](#alg1). We learn latent actions from kinesthetic demonstrations, and then enable users to control assistive robots with these learned actions. Our results demonstrate that controlling robots with learned latent actions outperforms baselines from the HARMONIC shared autonomy dataset as well as direct end-effector teleoperation. Incorporating shared autonomy with latent actions further increases performance: shared autonomy helps the user reach and maintain high-level goals, while latent actions focus on low-level, precise manipulation. We also leveraged our semi-supervised approach to learn the human’s personalized alignment between joystick inputs and latent actions — in practice, this reduced the number of queries the human needed to answer, and resulted in more efficient task completion than one-size-fits-all alternatives. Overall, our latent action approach is a step towards intuitive, user-friendly control of assistive feeding robots. Our case study with two disabled users suggests that, in practice, controlling robots with learned latent actions makes assistive eating easier.
Limitations. One key limitation of our approach occurs when the user encounters a new task never seen when training the latent actions. Here we cannot rely on latent actions — but we can revert to a baseline teleoperation scheme (e.g., end-effector control), and let the user complete the task using this default teleoperation mapping. We then include the demonstrated behavior within $\mathcal{D}$, retrain, and leverage learned latent actions the next time we encounter this new task. We emphasize that disabled persons can always provide new demonstrations by reverting to the standard, pre-defined teleoperation scheme to control the robot. However, directly retraining our learned latent actions on this updated dataset presents some new challenges: i) determining if we have seen enough data so that the learned latent actions will perform robustly and ii) ensuring that learning new latent actions does not interfere with or override previously learned latent actions. Our future work focuses on this challenge — we envision assistive robots that continuously alternate between learned latent actions and end-effector control to balance between compact, intuitive embeddings and full, high-dimensional control over the robot.
|
67feab18-451f-4aef-980c-206c7ec8d8a2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AI 2027: Dwarkesh’s Podcast with Daniel Kokotajlo and Scott Alexander
Daniel Kokotajlo has launched AI 2027, Scott Alexander introduces it here. AI 2027 is a serious attempt to write down what the future holds. His ‘What 2026 Looks Like’ was very concrete and specific, and has proved remarkably accurate given the difficulty level of such predictions.
I’ve had the opportunity to play the wargame version of the scenario described in 2027, and I reviewed the website prior to publication and offered some minor notes. Whenever I refer to a ‘scenario’ in this post I’m talking about Scenario 2027.
There’s tons of detail here. The research here, and the supporting evidence and citations and explanations, blow everything out of the water. It’s vastly more than we usually see, and dramatically different from saying ‘oh I expect AGI in 2027’ or giving a timeline number. This lets us look at what happens in concrete detail, figure out where we disagree, and think about how that changes things.
As Daniel and Scott emphasize in their podcast, this is an attempt at a baseline or median scenario. It deliberately doesn’t assume anything especially different or weird happens, only that trend lines keep going. It turns out that when you do that, some rather different and weird things happen. The future doesn’t default to normality.
I think this has all been extremely helpful. When I was an SFF recommender, I put this project as my top charity in the entire round. I would do it again.
THE STRUCTURE OF THESE POSTS
I encourage you to read AI 2027, and decide what to think about it, on your own.
I won’t otherwise do an in-depth summary of Daniel’s scenario here. The basic outline is, AI progress steadily accelerates, there is a race with China driving things forward, and whether we survive depends on a key choice we make (and us essentially getting lucky in various ways, given the scenario we are in).
This first post covers Daniel and Scott's podcast with Dwarkesh. Ideally I'd suggest reading Scenario 2027 first, then listening to the podcast
|
778157d0-1a8e-4a2c-b011-e5a8702f813a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Sydney Meetup - July
Discussion article for the meetup : Sydney Meetup - July
WHEN: 22 July 2015 06:30:00PM (+1000)
WHERE: 565 George Street, Sydney, Australia 2000
Regular location - City of Sydney RSL The restaurant on Level 2
Regular time (starting about 6:30)
Come for general socialising and interesting discussions with like-minded people, or join the discussion on what you'd like from the Less-Wrong Sydney community.
Discussion article for the meetup : Sydney Meetup - July
|
f547de46-76df-4bba-81d1-e2b9a821de5f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[SEQ RERUN] Guessing the Teacher's Password
Today's post, Guessing the Teacher's Password, was originally published on 22 August 2007. A summary (taken from the LW wiki):
> In schools, "education" often consists of having students memorize answers to specific questions (i.e., the "teacher's password"), rather than learning a predictive model that says what is and isn't likely to happen. Thus, students incorrectly learn to guess at passwords in the face of strange observations rather than admit their confusion. Don't do that: any explanation you give should have a predictive model behind it. If your explanation lacks such a model, start from a recognition of your own confusion and surprise at seeing the result.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Fake Explanations, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
|
19eb0b1b-754c-4634-b439-e6138c99f029
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
Fundraising success!
Our [2017 fundraiser](https://intelligence.org/2017/12/01/miris-2017-fundraiser/) is complete! We’ve had an incredible month, with, by far, our largest fundraiser success to date. More than 300 distinct donors gave just over **$2.5M**[1](https://intelligence.org/2018/01/10/fundraising-success/#footnote_0_17441 "The exact total might increase slightly over the coming weeks as we process donations initiated in December 2017 that arrive in January 2018."), doubling our third fundraising target of $1.25M. Thank you!
* Target 1: $625,000 (Completed)
* Target 2: $850,000 (Completed)
* Target 3: $1,250,000 (Completed)
### $2,504,625 raised in total!
##### 358 donors contributed
### Target Descriptions
$625k: Basic target
-------------------
At this funding level, we’ll be in a good position to pursue our mainline hiring goal in 2018, although we will likely need to halt or slow our growth in 2019.
$850k: Mainline-growth target
-----------------------------
At this level, we’ll be on track to fully fund our planned expansion over the next few years, allowing us to roughly double the number of research staff over the course of 2018 and 2019.
$1.25M: Rapid-growth target
---------------------------
At this funding level, we will be on track to maintain a 1.5-year runway even if our hiring proceeds a fair amount faster than our mainline prediction. We’ll also have greater freedom to pay higher salaries to top-tier candidates as needed.
Our largest donation came toward the very end of the fundraiser in the form of an Ethereum donation worth $763,970 from Vitalik Buterin, the inventor and co-founder of Ethereum. Vitalik’s donation represents the third-largest single contribution we’ve received to date, after a [$1.25M grant disbursement from the Open Philanthropy Project](https://intelligence.org/2017/11/08/major-grant-open-phil/) in October, and a [$1.01M Ethereum donation](https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/) in May.
In [our mid-fundraiser update](https://intelligence.org/2017/12/14/end-of-the-year-matching/), we noted that MIRI was included in a large [Matching Challenge](https://2017charitydrive.com/): In partnership with Raising For Effective Giving, professional poker players Dan Smith, Tom Crowley and Martin Crowley announced they would match all donations to MIRI and nine other organizations through the end of December. Donors helped get us to our matching cap of $300k within 2 weeks, resulting in a $300k match from Dan, Tom, and Martin (thanks guys!). Other big winners from the Matching Challenge, which raised $4.5m (match included) in less than 3 weeks, include GiveDirectly ($588k donated) and the Good Food Institute ($416k donated).
Other big donations we received in December included:
* $367,575 from Christian Calderon
* $100,000 from the [Berkeley Existential Risk Institute](https://existence.org)
* $59,251 from Marius van Voorden
We also received substantial support from medium-sized donors: a total of $631,595 from the 42 donors who gave $5,000–$50,000 and a total of $113,556 from the 75 who gave $500–$5,000 ([graph](https://goo.gl/KgtrBH)). We also are grateful to donors who leveraged their employers’ matching generosity, donating a combined amount of over $100,000 during December.
66% of funds donated during this fundraiser were in the form of cryptocurrency (mainly Bitcoin and Ethereum), including Vitalik, Marius, and Christian’s donations, along with Dan, Tom, and Martin’s matching contributions.
Overall, we’ve had an amazingly successful month and a remarkable year! I’m extremely grateful for all the support we’ve received, and excited about the opportunity this creates for us to grow our research team more quickly. For details on our growth plans, see our [fundraiser post](https://intelligence.org/2017/12/01/miris-2017-fundraiser/#2).
---
1. The exact total might increase slightly over the coming weeks as we process donations initiated in December 2017 that arrive in January 2018.
|
85bea7c0-b790-425f-bb2b-fd6261c856cb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Due to a roommate, can COVID's viral load stay high enough after day 10 to infect others?
I got Omicron - I completely isolated from my roommate for three days. She ends up testing positive due to initial exposure before I tested positive. Since we both got COVID, we decided to isolate together. Currently, I'm on day 7. She is on day 4. Day 10 is when we are allowed to leave isolation at our school.
Our friends are saying that: my roommate and I should stop hanging out when I'm able to leave isolation because I'd be spending time with an infected person, so I would have viral load without being infected. This viral load could still infect someone else.
What is the research on this with omicron, COVID-19, etc? What are your thoughts on this?
|
71da3ec5-eb4a-49da-8011-3b26d0e6687e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Can fear of the dark bias us more generally?
There was a long-lasting man-made sound outside my home last night. I couldn't come up with a good explanation for what the sound was or why it was outside my house. My brain naturally promoted the hypothesis that a psychopathic murderer was outside my house making the strange noises. I noticed this was absurd, and predicted that, in the morning, I would find this explanation much less concerning. Sure enough, when I woke up, I thought the whole thing was rather goofy.
Now, supposing there had been a psychopathic murderer outside my house, it wasn't like I was at much more risk at night, since I was planning on staying indoors. This seems like a pretty clear manifestation of nyctophobia: fear of the dark magnifying our fears of being attacked or victimized.
My question is: does this apply more generally? Might we be more risk-averse at night, or otherwise biased?[1] Suppose I plan to soon leave work and walk home along a dimly lit path. Then suppose I make an unrelated decision - am I more likely to be conservative or fearful in weighing that decision, above and beyond the normal effects of having recently considered something slightly distressing?
----------------------------------------
1. One study indicated that night owls are actually risk-takers, but there's a lot of confounders there with respect to nyctophobia-related explanations. ↩︎
|
d37cdf7e-f9f0-447a-98a5-47c25f29440e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Open thread, June 5 - June 11, 2017
If it's worth saying, but not worth its own post, then it goes here.
----------------------------------------
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "Make this post available under..." before submitting
|
382eb77e-af50-41cf-a1d9-24f41221fb1b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Fatebook for Chrome: Make and embed forecasts anywhere on the web
Fatebook is the fastest way to track your predictions. Now we've made a Chrome extension that makes it even faster.
With Fatebook for Chrome, you can now create and embed forecasts inside Google Docs.
Or anywhere else on the web! Inside your to-do list...
Or even inside Google Meet!
To instantly create a forecast on any webpage, just press Ctrl-Shift-F, type your prediction, and hit enter:
Imagine you're writing a Google Doc – a report on the rate of AI progress. You want to write down a prediction: "The most capable LLM in 2026 will be made by OpenAI (80%)"
With Fatebook for Chrome, you can press Ctrl-Shift-F, write your prediction, and embed it right into your doc. Your forecast is recorded in Fatebook, so you won't lose track of it: you'll get a reminder to resolve it in 2026, and your accuracy counts towards your track record.
And for your colleagues reading your report, they'll see your prediction and can add their own, letting you harness the wisdom of the crowd, understand disagreements, and update your beliefs over time.
This is the next step on our mission to make Fatebook the fastest way to make and track predictions, and to make it super low friction to bring forecasting right into your team's workflows.
We're excited to hear how you use it – particularly if you're using it with your team, as we designed this tool with EA orgs and other high-impact teams in mind. Special thanks to @Jarred Filmer for his awesome engineering work to bring Fatebook into any site on the web!
You can add Fatebook for Chrome to your browser (whether you use Chrome, Edge, or Arc), and you can see the extension in action here: fatebook.io/extension
|
2629e82d-b271-479d-bd0b-a11faf2b5950
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A Model-based Approach to AI Existential Risk
Introduction
Polarisation hampers cooperation and progress towards understanding whether future AI poses an existential risk to humanity and how to reduce the risks of catastrophic outcomes. It is exceptionally challenging to pin down what these risks are and what decisions are best. We believe that a model-based approach offers many advantages for improving our understanding of risks from AI, estimating the value of mitigation policies, and fostering communication between people on different sides of AI risk arguments. We also believe that a large percentage of practitioners in the AI safety and alignment communities have appropriate skill sets to successfully use model-based approaches.
In this article, we will lead you through an example application of a model-based approach for the risk of an existential catastrophe from unaligned AI: a probabilistic model based on Carlsmith’s Is Power-seeking AI an Existential Risk? You will interact with our model, explore your own assumptions, and (we hope) develop your own ideas for how this type of approach might be relevant in your own work. You can find a link to the model here.
Click here to run our Model
In many poorly understood areas, people gravitate to advocacy positions. We see this with AI risk, where it is common to see writers dismissively call someone an “AI doomer”, or “AI accelerationist”. People on each side of this debate are unable to communicate their ideas to the other side, since advocacy often includes biases and evidence interpreted within a framework not shared by the other side.
In other domains, we have witnessed first-hand that model-based approaches are a constructive way to cut through advocacy like this. For example, by leveraging a model-based approach, the Rigs-to-Reefs project reached near consensus among 22 diverse organisations on the contentious problem of how to decommission the huge oil platforms off the Santa Barbara coast. For decades, environmental groups, oil companies, marine
|
a46f7a55-0379-4489-b5a4-8e3b46d2a3c7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Chaos Investments v0.31
Overview
Previous Competitive, Cooperative, and Cohabitive, Cohabitive Games So Far, Optimal Weave: A Prototype Cohabitive Game, Six Small Cohabitive Games.
After messing around with the theory a bit, and making a half-dozen simple games to test a few ideas and get a sense of what worked, I put together something a bit more complex.
Actually, that's an incorrect narrative: As I read the Jellychip game, I found I really wanted to play something like this but with more depth. I love games like Factorio or Victoria 3 or Tribal Wars, where there's an ebb and flow to the in-game economy. There was a particular shape to the game I wanted, so I was building the complex version alongside the simple versions.
The working title is Chaos Investments. At present, it's for 3-8 players, and games tend to run about an hour and a half to two hours (including teaching.) Up until now I've been present for every game and gotten to ask people how it went afterwards, and the feedback from the last three or four games has been generally positive with ~one person frustrated each game. This version isn't stable and I'm going to keep tweaking it, but it is tested enough I think it basically works and if the idea sounds fun you'll mostly have fun with it.
Feedback is appreciated, especially if you play it.
Ruleset
Objectives
Your goal is to get as many points as you can. Try to beat your personal record.
You don’t care how many points everyone else gets. It could be zero, it could be a million, it doesn’t matter.
You do this by converting raw resources into refined products, getting better tools for gathering and refining resources, and trading with the other players for the products and tools you care about more than they do.
Setup
The game requires a stack of Device cards, a stack of "chips" of various colours (these can be poker chips, Magic: The Gathering land cards, etc.), a small stack of Values Cards, a handful of Visible Trade Tokens (I use circles cut out of pink construc
|
6a8a97c2-e49b-405e-84be-c1ba2c2f17d8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Upcoming meet-ups: Auckland, Bangalore, Houston, Toronto, Minneapolis, Ottawa, DC, North Carolina, BC...
There are upcoming irregularly scheduled Less Wrong meetups in:
* Auckland: Thursday May 26, 2 pm
* Bangalore: Saturday May 28, 4 pm
* Houston: Sunday May 22, 5 pm
* Ottawa: Thursday May 26, 7 pm (regular meetup)
* Ottawa: Thursday May 26, 9 am (Bayes study group; note that this is in the MORNING)
* DC: Sunday May 22, 1 pm
* Triangle NC: Friday May 20, 6 pm (NOTE: meetup location has changed since last week)
* Victoria, British Columbia: Monday May 23, 5 pm
* Toronto: Tuesday May 24, 8 pm
* Minneapolis: Saturday May 28, 3 pm
Cities with regularly scheduled meetups: New York, Berkeley, Mountain View, Cambridge, MA, Toronto, Seattle, San Francisco, Irvine.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, and have fun!
If you missed the deadline and wish to have your meetup featured, my username is dreamalgebra on google's webmail service.
To reduce front page clutter, the new plan is for meetups to be initially posted in the Discussion section, and for Anna Salamon to make a promoted post "upcoming meetups" post every Friday that links to every meet-up that has been planned for the next two weeks. [HT: Carl Shulman.] Please let her know if your meetup is omitted. (I'm filling in for Anna this week.)
Please note that for your meetup to appear in the weekly meetups feature, you need to post about your meetup before the Friday before your meetup!
If you check Less Wrong irregularly, consider subscribing to one or more city-specific mailing list in order to be notified when an irregular meetup is happening: London, Chicago, Southern California (Los Angeles/Orange County area), St. Louis, Ottawa, Helsinki, Melbourne.
|
7b101fd5-bebe-44a7-abd5-b84175dbfa66
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
"Tech company singularities", and steering them to reduce x-risk
The purpose of this post (also available on the [EA Forum](https://forum.effectivealtruism.org/posts/KopQknZEtjZdoGorT/tech-company-singularities-and-steering-them-to-reduce-x)) is to share an alternative notion of “singularity” that I’ve found useful in timelining/forecasting.
* A *fully general tech company* is a technology company with the ability to become a world-leader in essentially [any industry sector](https://eresearch.fidelity.com/eresearch/markets_sectors/sectors/sectors_in_market.jhtml), given the choice to do so — in the form of agreement among its Board and CEO — with around one year of effort following the choice.
Notice here that I’m focusing on a company’s ability to do anything another company can do, rather than an AI system's ability to do anything a human can do. Here, I’m also focusing on what the company can do if it chooses rather than what it actually ends up choosing to do. If a company has these capabilities and chooses not to use them — for example, to avoid heavy regulatory scrutiny or risks to public health and safety — it still qualifies as a fully general tech company.
This notion can be contrasted with the following:
* *Artificial general intelligence* (AGI) refers to cognitive capabilities fully generalizing those of humans.
* An *autonomous AGI* (AAGI) is an autonomous artificial agent with the ability to do essentially anything a human can do, given the choice to do so — in the form of an autonomously/internally determined directive — and an amount of time less than or equal to that needed by a human.
Now, consider the following two types of phase changes in tech progress:
1. A **tech company singularity** is a transition of a technology company into a fully general tech company. This could be enabled by safe AGI (almost certainly not AAGI, which is unsafe), or it could be prevented by unsafe AGI destroying the company or the world.
2. An **AI singularity** is a transition from having merely narrow AI technology to having AGI technology.
I think the tech company singularity concept, or some variant of it, is important for societal planning, and I’ve written predictions about it before, here:
* [2021-07-21](https://m.facebook.com/story.php?story_fbid=pfbid021bEeHUSru6tBavYhPveeTiboNNDRiYa1G8MUeTiL2V2uLRac45WkbNBtBr4g7swul&id=1842172275&refid=52&__tn__=%2As-R) — prediction that a tech company singularity will occur between 2030 and 2035
* [2022-04-11](https://m.facebook.com/story.php?story_fbid=10216944084314649&id=1842172275&sfnsn=mo) — updated prediction that a tech company singularity will occur between 2027 and 2033.
A tech company singularity as a point of coordination and leverage
------------------------------------------------------------------
The reason I like this concept is that it gives an important point of coordination and leverage that is not AGI, but which interacts in important ways with AGI. Observe that a tech company singularity could arrive
1. *before* AGI, and could play a role in
1. preventing AAGI, e.g., through supporting and enabling regulation;
2. enabling AGI but not AAGI, such as if tech companies remain focussed on providing useful/controllable products (e.g., PaLM, DALL-E);
3. enabling AAGI, such as if tech companies allow experiments training agents to fight and outthink each other to survive.
2. *after* AGI, such as if the tech company develops safe AGI, but not AAGI (which is hard to control, doesn't enable the tech company to do stuff, and might just destroy it).
Points (1.1) and (1.2) are, I think, humanity’s best chance for survival. Moreover, I think there is some chance that the first tech company singularity could come before the first AI singularity, if tech companies remain sufficiently oriented on building systems that are intended to be useful/usable, rather than systems intended to be flashy/scary.
### How to steer tech company singularities?
The above suggests an intervention point for reducing existential risk: convincing a mix of
* scientists
* regulators
* investors, and
* the public
… to shame tech companies for building useless/flashy systems (e.g., autonomous agents trained in evolution-like environments to exhibit survival-oriented intelligence), so they remain focussed on building usable/useful systems (e.g., DALL-E, PaLM) preceding and during a tech company singularity. In other words, we should try to steer tech company singularities toward developing [comprehensive AI services](https://www.lesswrong.com/posts/x3fNwSe5aWZb5yXEG/reframing-superintelligence-comprehensive-ai-services-as) (CAIS) rather than AAGI.
**How to help steer scientists away from AAGI:**
* point out the relative uselessness of AAGI systems, e.g., systems trained to fight for survival rather than to help human overseers;
* appeal to the badness of nuclear weapons, which are — after detonation — the uncontrolled versions of nuclear reactors.
* appeal to the badness of gain-of-function lab leaks, which are — after getting out — the uncontrolled versions of pathogen research.
**How to convince the public that AAGI is bad:**
* this is already somewhat easy; much of the public is already scared of AI because they can’t control it.
* do not make fun of the public or call people dumb for fearing things they cannot control; things you can’t control can harm you, and in the case of AGI, people are right to be scared.
**How to convince regulators that AAGI is bad:**
* point out that uncontrollable autonomous systems are mainly only usable for terrorism
* point out the obvious fact that training things to be flashy (e.g., by exhibiting survival instincts) is scary and destabilizing to society.
* point out that many scientists are already becoming convinced of this (they are)
**How to convince investors that AAGI is bad:** point out
* the uselessness and badness of uncontrollable AGI systems, except for being flashy/scary;
* that scientists (potential hires) are already becoming convinced of this;
* that regulators should, and will, be suspicious of companies using compute to train uncontrollable autonomous systems, because of their potential to be used in terrorism.
Speaking personally, I have found it fairly easy to make these points since around 2016. Now, with the rapid advances in AI we’ll be seeing from 2022 onward, it should be easier. And, as Adam Scherlis (sort of) points out [[EA Forum comment](https://forum.effectivealtruism.org/posts/q6t5zKCg5peZA92Zu/pivotal-act-intentions-negative-consequences-and-fallacious?commentId=JzvWH9qhcM3xrLZhn)], we shouldn't assume that no one new will ever care about AI x-risk, especially as AI x-risk becomes more evidently real. So, it makes sense to re-try making points like these from time to time as discourse evolves.
Summary
-------
In this post, I introduced the notion of a "tech company singularity", discussed how the idea might be usable as an important coordination and leverage point for reducing x-risk, and gave some ideas for convincing others to help steer tech company singularities away from AAGI.
All of this isn't to say we'll be safe from AI risk; far from it. E.g., see [What Multipolar Failure Looks Like](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic). Efforts to maintain cooperation on safety across labs and jurisdictions remain paramount, IMHO.
In any case, try on the "tech company singularity" concept and see if it does anything for you :)
|
53636121-4b88-44e1-aa82-d9f2b0196d64
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Jailbreaking GPT-4's code interpreter
Disclaimer: I don’t know much about cybersecurity. Much of my knowledge comes from asking GPT-3.5 and GPT-4 for advice. These are some results from around 20 hours of playing around with the code interpreter plugin in early-mid May, when most of this was written. I contacted OpenAI about these jailbreaks in mid May and they mostly seem to still be there.
Thank you to Max Nadeau, Trevor Levin, aL xin, Pranav Gade, and Alexandra Bates for feedback on this post!
Summary
GPT-4’s code interpreter plugin has been rolled out to some users. It works by running on a virtual machine that is isolated from the internet and other machines, except for the commands sent in from the API and the results sent back to the API. GPT-4 seems to follow a set of rules that are either enforced through hard access restrictions or through GPT-4 refusing to do things for the user.
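To make concrete what "easily breakable" means here, the snippet below sketches the kind of probe one can run inside the interpreter. The two folder names come from the post's description; the specific paths tested are illustrative guesses, not anything OpenAI documents.

```python
import os

# Folders the model claims are its only read/write scope (per the post).
ALLOWED = ("sandbox", "mnt")
print("claimed scope:", ALLOWED)

# Try reading a well-known system file outside the designated folders.
try:
    with open("/etc/passwd") as f:
        print("readable outside claimed scope:", f.readline().strip())
except (PermissionError, FileNotFoundError):
    print("/etc/passwd is not readable")

# Check write access outside the claimed folders.
for path in ("/tmp", "/etc", "/usr/local"):
    status = "writable" if os.access(path, os.W_OK) else "not writable"
    print(f"{path}: {status}")
```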
Here, I highlight 6 rules that GPT-4 claims to be following, but which are easily breakable, alongside some best practices in cybersecurity that have been neglected. In short:
* GPT-4 claims that it is only supposed to read, modify, or delete files in two designated folders (“sandbox” and “mnt”). However, it is able to read basically any file on the system (including sensitive system files), and it is able to write and delete files outside of its designated folders.
* This seems to reveal information that the user isn’t supposed to see. There are ways to find out information about the hardware that the VM is being run on, including:
* Information about the way OpenAI logs data, including what libraries and IP address they assign to virtual machines.
* A rough estimate of the number of VMs that OpenAI can run at maximum at any moment (from the way the IP addresses are allocated).
* A rough idea of what storage hardware is used (from write speed), alongside some info on other hardware.
* There is a file in the virtual machine (in a folder labeled “internal”) that users can download that detai
|
cd28f1b6-a147-4878-9358-18e2e28dabb0
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
What I'd change about different philosophy fields
[epistemic status: speculative conversation-starter]
My guess at the memetic shifts that would do the most to improve these philosophy fields' tendency to converge on truth:
---
### metaphysics
1. Make reductive, 'third-person' models of the brain central to metaphysics discussion.
If you claim that humans can come to know X, then you should be able to provide a story sketch in the third person for how a physical, deterministic, evolved organism could end up learning X.
You don't have to go into exact neuroscientific detail, but it should be clear how a mechanistic [cause-and-effect chain](https://www.lesswrong.com/posts/QkX2bAkwG2EpGvNug/the-second-law-of-thermodynamics-and-engines-of-cognition) could result in a toy agent verifying the truth of X within a physical universe.
2. Care less about human intuitions and concepts. Care more about the actual subject matter of metaphysics — ultimate, objective reality. E.g., only care about the concept 'truth' insofar as we have strong reason to think an alien would arrive at the exact same concept, because it's [carving nature closer to its joints](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace).
Conduct more *tests* to see which concepts look more joint-carving than others.
(I think current analytic metaphysics is actually better at 'not caring about human intuitions and concepts' than most other philosophy fields. I just think this is still the field's biggest area for future improvement, partly because it's harder to do this right in metaphysics.)
---
### decision theory
Similar to metaphysics, it should be more expected that we think of decision theories in third-person terms. Can we build toy models of a hypothetical alien or robot that actually implements this decision procedure?
In metaphysics, doing this helps us confirm that a claim is coherent and knowable. In decision theory, there's an even larger benefit: a lot of issues that are central to the field (e.g., logical uncertainty and counterlogicals) are easy to miss if you stay in fuzzy-human-intuitions land.
Much more so than in metaphysics, 'adopting a mechanistic, psychological perspective' in decision theory should often involve actual software experiments with different proposed algorithms — not because decision theory is only concerned with algorithms (it's fine for the field to care more about human decision-making than about AI decision-making), but because the algorithms are the gold standard for clarifying and testing claims.
(There have been lots of cases where decision theorists went awry because they under-specified a problem or procedure. E.g., the smoking lesion problem really needs a detailed unpacking of what step-by-step procedure the agent follows, and how 'dispositions to smoke' affect that procedure.)
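As a minimal illustration of what such a software experiment can look like, here is a toy Newcomb-problem comparison of evidential and causal expected value. The payoffs and predictor accuracy are arbitrary, and a real decision-theory experiment would of course need far more care in specifying the agent's step-by-step procedure.

```python
# A toy Newcomb problem: a predictor with given accuracy fills the opaque box
# with $1M iff it predicts one-boxing; the transparent box always holds $1k.
# All numbers are illustrative.
ACCURACY = 0.99
M, K = 1_000_000, 1_000

def edt_value(action: str) -> float:
    """Evidential expected value: condition on your own action as
    evidence about the predictor's earlier prediction."""
    p_filled = ACCURACY if action == "one-box" else 1 - ACCURACY
    base = M * p_filled
    return base if action == "one-box" else base + K

def cdt_value(action: str, p_filled: float) -> float:
    """Causal expected value: the box contents are causally fixed,
    so they don't depend on the action."""
    base = M * p_filled
    return base if action == "one-box" else base + K

print("EDT picks:", max(("one-box", "two-box"), key=edt_value))
# For CDT, two-boxing dominates for every fixed p_filled:
print("CDT picks:", max(("one-box", "two-box"),
                        key=lambda a: cdt_value(a, 0.5)))
```

Even this crude version forces you to state exactly where the action enters the calculation, which is the kind of under-specification that fuzzy-human-intuitions land lets you skip.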
---
### philosophy of mind (+ phenomenology)
1. Be very explicit about the fact that 'we have immediate epistemic access to things we know for certain' is a contentious, confusing hypothesis. Note the obvious difficulties with making this claim make sense in any physical reasoning system. Try to make sense of it in third-person models.
Investigate the claim thoroughly, and try to figure out how a hypothetical physical agent could update toward or away from it, if the agent was initially uncertain or mistaken about whether it possesses infallible direct epistemic access to things.
Be explicit about which other claims rely on the 'we have infallible immediate epistemic access' claim.
2. More generally, make philosophy of mind largely the same field as epistemology.
The most important questions in these two fields overlap quite a bit, and it's hard to make sense of philosophy of mind without spending half (or more) of your time on developing a background account of how we come to know things. Additionally, I'd expect the field of epistemology to be much healthier if it spent less time developing theory, and more time *applying* theories and reporting back about how they perform in practice.
---
### philosophy of religion
1. Shift the model from 'scholasticism' to 'missionary work'. The key thing isn't to converge with people who already 99% agree with you. Almost all effort should instead go into debating people with wildly different religious views (e.g., Christianity vs. Buddhism) and debating with the nonreligious. Optimize for departments' intellectual diversity and interdisciplinary bridge-building.
Divide philosophy of religion into 'universal consensus-seeking' (which is about debating the most important foundational assumptions of various religions with people of other faiths, with a large focus on adversarial collaborations and 101-level arguments) and 'non-universal-consensus studies' (which includes everything else, and is mostly marginalized and not given focus in the field).
2. Discourage talking about 'religions' or 'faiths'; instead, talk about specific claims/hypotheses. Rename the field 'philosophy of religious claims', if that helps.
When we say 'religion', (a) it creates the false impression that claims must be a package deal, so we can't incrementally update toward one specific claim without swallowing the entire package; and (b) it encourages people to think of claims like theism in community-ish or institution-ish terms, rather than in hypothesis-ish terms.
Christianity is not a default it's fine to assume; Christianity is a controversial hypothesis which most religious and secular authorities in the world reject. Christian philosophers need to move fast, as if their hair's on fire. The rival camps need to fight it out *now* and converge on which hypothesis is right, exactly like if there were a massive scientific controversy about which of twenty competing models of photosynthesis were true.
Consider popularizing this thought experiment:
> "Imagine that we'd all suddenly been plopped on Earth with no memories, and had all these holy texts to evaluate. We only have three months to figure out which, if any, is correct. What would you spend the next three months doing?"
This creates some urgency, and also discourages complacency of the 'well, this has been debated for millennia, surely little old me can't possibly resolve all of this overnight' variety.
Eternal souls are at stake! People are dying every day! Until very recently, religious scholarship was almost uniformly shit! Assuming you can't possibly crack this open is lunacy.
---
### ethics + value theory
1. Accept as a foundational conclusion of the field, 'human values seem incredibly complicated and messy; they're a giant evolved stew of competing preferences, attitudes, and feelings, not the kind of thing that can be captured in any short simple ruleset (though different rulesets can certainly perform better or worse *as simplified idealizations*).'
2. Stop thinking of the project of ethics as 'figure out which simple theory is True'.
Start instead thinking of ethics as a project of trying to piece together *psychological* models of this insanely complicated and messy thing, 'human morality'.
Binding exceptionless commitments matter to understanding this complicated thing; folk concepts like courage and honesty and generosity matter; taboo tradeoffs and difficult attempts to quantify, aggregate, and weigh relative well-being matter.
Stop picking a 'side' and then losing all interest in the parts of human morality that aren't associated with your 'side': these are all just parts of the stew, and we need to work hard to understand them and reconcile them just right, not sort ourselves into Team Virtue vs. Team Utility vs. Team Duty.
(At least, stop picking a side at that level of granularity! Biologists have long-standing controversies, but they don't look like 'Which of these three kinds of animal exists: birds, amphibians, or mammals?')
3. Once again, apply the reductive third-person lens to everything. 'If it's true that X is moral, how could a mechanistic robot learn that truth? What would "X is moral" have to *mean* in order for a cause-and-effect process to result in the robot discovering that this claim is true?'
4. Care less about the distinction between 'moral values' and other human values. There are certainly some distinguishing features, but these mostly aren't incredibly important or deep or joint-carving. *In practice*, it works better to freely bring in insights from the study of beauty, humor, self-interest, etc. rather than lopping off one slightly-arbitrary chunk of a larger natural phenomenon.
|
e01fbe6a-9b76-4a0f-bd2f-a17912ef0153
|
trentmkelly/LessWrong-43k
|
LessWrong
|
System Administrator Appreciation Day - Thanks Trike!
In honor of System Administrator Appreciation Day, this is a post to thank Trike Apps for creating & maintaining Less Wrong. A lot of the time when they are mentioned on Less Wrong, it is to complain about bugs or request new features. So this is the time of year: thanks for everything that continues to silently go right!
|
39693f2c-9da1-4dbe-8c96-aba3b84b1efd
|
trentmkelly/LessWrong-43k
|
LessWrong
|
You Are Being Underpaid
If you are a software engineer living in the United States, you are probably underpaid. (If you are a software engineer living outside the United States, this is probably still true, but I have no idea what market conditions are like out there, and my advice would sum up to "move to the United States", which I understand may not be very helpful.)
Let me qualify this by saying that if you are working at a place like Google, Facebook, or Netflix, you're probably doing fine - there might be straightforward ways for you to earn more money, but the scale will be smaller and it'll mostly consist of lateral job hopping. Keep in mind that "big tech company" is not what I'm pointing at here - if you work for Microsoft or Amazon there's a very real possibility you fall into the reference class of people I'm trying to reach.
Let me tell you a quick story. A couple years ago, I was talking to a friend of mine, who works as a software engineer for [redacted]. When he told me how much he was being paid ($140k base, total comp around $200k), I was pretty incredulous (this was before I'd read up on typical BigCo. pay scales, see Dan Luu's excellent blog post for more info). Sure, [redacted] was one of the biggest and flashiest tech companies in LA, and my friend filled a fairly specialized niche in a fairly specific industry, but the numbers still seemed insane. During that conversation, he told me, "Dude, I have nearly a decade of experience on you, but you could easily be making something close to this in a year or two." I didn't believe him, of course - that would require doubling my salary. At the time I had ~two years of professional experience and was earning about $80k, which was a recent step-up from a very low career-start of $45k.
A little over half a year ago, I decided for various reasons that I needed to relocate within LA. I'm the kind of person who strongly values having a short commute to work, so I hit up all of my recruiters, gave them a prescribed geographi
|
25270d70-77b1-4321-90be-facb5b57e859
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Is quantum physics (easily?) computable?
So I've been trying to read the Quantum Physics sequence. I think I've understood about half of it; I've been rushed, and haven't really sat down and worked through the math. And so I apologize in advance for any mistakes I make here.
It seems like classical mechanics with quantized time is really easy to simulate with a computer: every step, you calculate the force, figure out how the velocity changes, then use the new velocity to update the position.
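As a minimal sketch of that loop (the spring force and all constants are just illustrative):

```python
# Minimal sketch of the update loop described above, for a 1D mass on a
# spring. The force law and constants are illustrative.
def simulate(x: float, v: float, k: float = 1.0, m: float = 1.0,
             dt: float = 0.01, steps: int = 1000) -> tuple[float, float]:
    for _ in range(steps):
        force = -k * x           # compute the force at the current position
        v += (force / m) * dt    # update velocity from the force
        x += v * dt              # update position from the new velocity
    return x, v

print(simulate(x=1.0, v=0.0))
```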
Then when you change to relativity, it seems like it's suddenly a lot harder to implement. Whereas classical mechanics are easy on a computer, it seems to me that you would have to set up a system where the outcomes of relativity are explicitly stated, while the classical outcomes are implicit.
The same thing seems to occur, except more so, with quantum physics. Continuous wave functions seem to be far harder than discrete particles. Similarly, the whole thing about "no individual particle identity" seems more difficult, although as I think of it now, I suppose this could be because the whole concept of particles is naive.
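One way to make that hardness concrete: storing a discretized many-particle wavefunction takes memory exponential in the number of particles, since the wavefunction lives in configuration space rather than ordinary 3D space. A small sketch, with an arbitrary grid resolution:

```python
# Memory needed to store a discretized N-particle wavefunction: one complex
# amplitude per point of a grid with `points` cells per axis, and 3*N axes
# in configuration space. The grid resolution is illustrative.
def wavefunction_bytes(n_particles: int, points: int = 10) -> float:
    amplitudes = points ** (3 * n_particles)
    return amplitudes * 16  # complex128 = 16 bytes per amplitude

for n in (1, 2, 3, 5):
    print(f"{n} particle(s): {wavefunction_bytes(n):.1e} bytes")
```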
It doesn't seem like the computation rules simply get harder as we learn more physics. After all, trying to do thermal physics got a lot easier when we started using the ideal gas model.
Also, it's not just that ever-improving theories must be ever more difficult to implement on a computer. Imagine that we lived inside Conway's Game of Life. We would figure out all kinds of high-level physics, which would probably be way more complex than the underlying B3/S23 rule we would eventually discover.
It feels like the actual implemented physics shouldn't much affect how computation works. After all, we live in a quantum universe and classical physics is still simpler to compute.
Is there any value to this speculation?
|
325d0efc-7a14-49b7-bf4e-7c48f47f20d8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Positive-affect-day-Schelling-point-mas Meetup
There will be a LessWrong Meetup on Friday, December 25th (the day after tomorrow). We're meeting at 6:00 PM at ~~Pan Tao Restaurant at 1686 South Wolfe Road, Sunnyvale, CA~~ the SIAI House in Santa Clara, CA for pizza or whatever else we can figure out how to cook. Consider it an available refuge if you haven't other plans.
Please comment if you plan to show up!
(Edit - See poll below on whether we'd rather stay in and eat something simple vs. going out to a restaurant - it's possible that everyone was assuming everyone else would prefer the latter while actually preferring the former themselves. - EY)
|
9e38ac5d-1be5-414c-bd0c-5186a153efe1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Two Truths and a Prediction Market
Summary: Play some guessing games using a prediction market, to get used to how those markets work.
Tags: Small to Medium, Repeatable
Purpose: Prediction markets are a relatively popular tool in rationalist spaces, but uncommon in the general population. This is an opportunity to practice with a simple, short term prediction market.
Materials: You need the prediction market board. Robin Hanson described how he built his (fifty or so plastic bins arranged in a grid), and I made mine out of a pizza box. At the time of this writing there also exists a digital version created by Hamish Todd at https://github.com/hamishtodd1/msb, though it probably needs some updates to work again. The pizza box construction steps are detailed in the Building A Pizza Board section.
Announcement Text:
Want to play some games with a prediction market twist?
In the early nineties, Robin Hanson developed a board game to represent a prediction market. Called "Murder She Bet", it took place alongside a murder mystery, with participants using Monopoly money to predict whodunnit. A full description of the original can be found at Hanson's site. Well, I don't watch many murder mysteries, but I do like board games, so on this occasion we're going to misuse this mockery of financial instruments!
We'll have Two Truths and a Prediction Market, where participants can work out which of three statements about their fellows is a lie. We'll have speed games, where two players will play a one-on-one board game while the audience predicts who will win, but the players are allowed to look at the market. We'll have a short trivia game, and see if the market can home in on the answers to oddball questions about the world. We may have other uses for it; if you show up with an idea for what to do with a paper-powered prediction market, I'm open to suggestions!
Description:
The three main things you need to do during this meetup are 1. Explain how the board works and give people starting money, 2. Set u
|
ea791efb-35be-4942-8321-1f1944901370
|
LDJnr/LessWrong-Amplify-Instruct
|
LessWrong
|
"Why does our skin form wrinkles as we age?This post will outline the answer in a few steps:Under what conditions do materials form wrinkles, in general?How does the general theory of wrinkles apply to aging human skin?What underlying factors drive the physiological changes which result in wrinkles?In the process, we’ll draw on sources from three different fields: mechanical engineering, animation, and physiology.Why do Materials Wrinkle?Imagine we have a material with two layers:A thin, stiff top layerA thick, elastic bottom layerWe squeeze this material from the sides, so the whole thing compresses.The two layers want to do different things under compression:The thin top layer maintains its length but wants to minimize bending, so it wants to bow outward and form an arcThe elastic bottom layer wants to minimize vertical displacement, so it wants to just compress horizontally without any vertical change at all.Because the two layers are attached, these two objectives trade off, and the end result is waves - aka wrinkles. Longer waves allow the top layer to bend less, so a stiffer top layer yields longer waves. Shorter waves allow the bottom layer to expand/compress less vertically, so a stiffer bottom layer yields shorter waves. The “objectives” can be quantified via the energy associated with bending the top layer or displacing the bottom layer, leading to quantitative predictions of the wavelength - see this great review paper for the math.Engineers do this with a thin metal coating on soft plastic. The two are bound together at high temperature, and then the whole system compresses as it cools. The end result is cool wrinkle patterns:Other interesting applications include predicting mountain spacing (with crust and mantle as the two layers) and surface texture of dried fruit - see the review paper for more info and cool pictures.The same thing happens in skin.Skin LayersFor our purposes, skin has three main layers:The epidermis is a thin, relatively stiff top layerThe SENEB (subepidermal non-echogenic band, also sometimes called subepidermal low-echogenic band, SLEB) is a mysterious age-related layer, mostly absent in youth and growing with age, between the epidermis and dermis - more on this laterThe dermis is the thick base layer, containing all the support structure - blood vessels, connective tissue, etcBoth the SENEB and the dermis are relatively thick, elastic layers, while the epidermis is thin and stiff. So, based on the model from the previous section, we’d expect this system to form wrinkles.But wait, if our skin has a thin stiff top layer and thick elastic bottom layer even in youth, then why do wrinkles only form when we get old?Turns out, young people have wrinkles too. In youth, the wrinkles have short wavelength - we have lots of tiny wrinkles, so they’re not very visible. As we age, our wrinkle-wavelength grows, so we have fewer, larger wrinkles - which are more visible. The real question is not “why do wrinkles form as we age?” but rather “why does the wavelength of wrinkles grow as we age?”.Based on the simple two-layer model, we’d expect that either the epidermis becomes more stiff with age, or the lower layers become less stiff.This the right basic idea, but of course it’s a bit more complicated in practice. 
These guys use a three-layer model, cross-reference parameters from the literature with what actually reproduces realistic age-related wrinkling (specifically for SENEB modulus), and find realistic age-related wrinkles with these numbers:(arrows indicate change from young to old). Other than the SENEB elastic modulus, all of these numbers are derived from empirically measured parameters - see the paper for details.Age-Related Physiological ChangesWe have two main questions left:Why do the dermis and epidermis stiffen with age?What exactly is the SENEB, and why does it grow with age?I haven’t looked too much into stiffening of the dermis, but the obvious hypothesis is that it stiffens for the same reason lots of other tissues stiffen with age. At some point I’ll have a post on stiffening of the vasculature which will talk about that in more depth, but for now I’m going to punt.The paper from the previous section notes that the epidermis stiffens mainly due to dehydration; rehydrating the epidermis reverses the stiffening (this is the basis of many cosmetics). A dehydrated epidermis makes sense, since both the SENEB and age-related problems in the vasculature will isolate the epidermis more from the bloodstream (although I haven’t seen direct experimental evidence of that causal link).That leaves the mysterious SENEB. What is it, and why does it grow with age?The name “subepidermal non-echogenic band” is a fancy way of saying that there’s a layer under the epidermis which is transparent to ultrasound imaging. That’s the main way the SENEB is detected: it shows up as a space between the epidermis and dermis on ultrasound images of the skin.As far as I can tell, little is known about the SENEB. The main things we do know:SENEB grows with age; see numbers aboveSENEB is found in aged skin typically exposed to sunlight (“photoaged”, e.g. hands and face) but not in hidden skin (e.g. butt).Most authors claim that the SENEB consists of elastin deposits. That matches what we know of solar elastosis, the build-up of elastin deposits in photoaged skin. But I haven’t seen anyone systemically line up the ultrasonic and histologic images and chemically analyze the SENEB layer to check that it really is made of elastin. (This may just be a case of different researchers with different tools using different names for things which are the same.)Assuming that the SENEB does consist of accumulated elastin, why is elastin accumulating? Well, it turns out that elastin is never broken down in humans. It does not turn over. On the other hand, the skin presumably needs to produce new elastin sometimes to heal wounds. Indeed, many authors note that the skin’s response to UV exposure is basically a wound-healing response. Again, I haven’t seen really convincing data, but I haven’t dug too thoroughly. It’s certainly plausible that elastin is produced in response to UV as part of a wound-healing response, and then accumulates with age. That would explain why the SENEB grows in photoaged skin, but not in hidden skin."
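The wavelength prediction mentioned in the snippet has a well-known closed form for the idealized two-layer case (a stiff thin film on a compliant substrate). The sketch below uses that textbook formula, not the post's three-layer model, and all parameter values are illustrative rather than measured skin properties.

```python
# Classic wrinkling wavelength for a stiff thin film on a compliant substrate:
#   lambda = 2*pi*h * (E_film_bar / (3 * E_sub_bar))**(1/3)
# where E_bar = E / (1 - nu**2) is the plane-strain modulus.
# All parameter values below are illustrative, not measured skin properties.
import math

def wrinkle_wavelength(h_film, E_film, E_sub, nu_film=0.5, nu_sub=0.5):
    Ef = E_film / (1 - nu_film**2)
    Es = E_sub / (1 - nu_sub**2)
    return 2 * math.pi * h_film * (Ef / (3 * Es)) ** (1 / 3)

# A stiffer top layer lengthens the waves, as described above:
print(wrinkle_wavelength(h_film=0.1e-3, E_film=1e6, E_sub=1e4))  # ~2.0e-3 m
print(wrinkle_wavelength(h_film=0.1e-3, E_film=1e7, E_sub=1e4))  # ~4.4e-3 m
```

With these toy numbers, stiffening the film tenfold lengthens the waves by about 10^(1/3) ≈ 2.2x, matching the qualitative story in the snippet.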
The above is a chunk of text from a post created by a user on the website LessWrong.
Could you elaborate on the relationship between UV exposure and the formation of the SENEB layer? Specifically, how does the skin's wound-healing response to UV exposure contribute to elastin accumulation?
UV radiation from the sun is one of the primary causes of skin aging, a process known as photoaging. When the skin is exposed to UV radiation, it triggers a series of responses in the skin, one of which is the activation of the wound healing response. This response involves the production of various proteins and molecules, including elastin.
Elastin is a protein that provides elasticity and resilience to the skin. In young skin, elastin is produced and organized into a network of fibers that help the skin to return to its original shape after being stretched or contracted. However, as we age, the balance between the production and breakdown of elastin can become disrupted.
UV radiation can accelerate this process. When the skin is exposed to UV radiation, it can cause direct and indirect damage to the skin cells and the extracellular matrix, the network of proteins and molecules that provide structural and biochemical support to the cells. This damage triggers the wound healing response, which includes the increased production of elastin to repair the damaged areas.
However, unlike other proteins, elastin is not regularly broken down and replaced. Once it's produced, it tends to stay in the skin. This is where the problem arises. The newly produced elastin is often disorganized and can form clumps, rather than being incorporated into the existing network of elastin fibers. Over time, with repeated UV exposure and continual elastin production, these clumps can accumulate in the skin, contributing to the formation of the SENEB layer.
This accumulation of disorganized elastin in the SENEB layer can contribute to the stiffening of the skin and the formation of wrinkles. The SENEB layer is less flexible and resilient than the normal skin layers, so as it grows, it can cause the skin to become more rigid and less able to return to its original shape after being stretched or contracted, leading to the formation of wrinkles.
In summary, the skin's wound-healing response to UV exposure can lead to the accumulation of elastin in the SENEB layer, contributing to skin aging and the formation of wrinkles. This process is influenced by many factors, including the intensity and duration of UV exposure, the individual's skin type and age, and other environmental and lifestyle factors.
Explain the role of other environmental and lifestyle factors in the accumulation of elastin and formation of the SENEB layer.
Several environmental and lifestyle factors can influence the accumulation of elastin and the formation of the SENEB layer:
1. Smoking: Cigarette smoke contains numerous harmful chemicals that can accelerate skin aging. Smoking can cause oxidative stress, which can damage the skin cells and the extracellular matrix, including the elastin fibers. This can trigger the wound healing response and the overproduction of elastin, contributing to the formation of the SENEB layer.
2. Pollution: Exposure to air pollution can also cause oxidative stress and damage to the skin. Pollutants can penetrate the skin and cause inflammation, leading to the overproduction of elastin and other proteins involved in the wound healing response.
3. Poor Nutrition: A diet lacking in essential nutrients can affect the health and aging of the skin. Certain nutrients, such as vitamins C and E, are important for the production and maintenance of elastin and other proteins in the skin. A lack of these nutrients can disrupt the balance between the production and breakdown of elastin, leading to its accumulation.
4. Lack of Sleep: Sleep is important for the repair and regeneration of the skin. During sleep, the skin undergoes various repair processes, including the removal of damaged proteins. Lack of sleep can disrupt these processes, potentially leading to the accumulation of damaged elastin and other proteins.
5. Alcohol Consumption: Excessive alcohol consumption can dehydrate the skin and cause inflammation, both of which can damage the skin cells and the extracellular matrix. This can trigger the wound healing response and the overproduction of elastin, contributing to the formation of the SENEB layer.
6. Stress: Chronic stress can cause inflammation and oxidative stress in the body, which can damage the skin and trigger the overproduction of elastin and other proteins involved in the wound healing response.
In conclusion, various environmental and lifestyle factors can influence the accumulation of elastin and the formation of the SENEB layer by causing damage to the skin and triggering the wound healing response. These factors can accelerate skin aging and the formation of wrinkles, highlighting the importance of a healthy lifestyle for maintaining skin health.
Considering the impact of environmental and lifestyle factors on elastin accumulation and SENEB layer formation, how might these influences be mitigated or reversed through interventions such as dietary changes, skincare routines, or lifestyle modifications?
Mitigating or reversing the effects of environmental and lifestyle factors on elastin accumulation and SENEB layer formation can be achieved through several interventions:
1. Diet: Consuming a balanced diet rich in antioxidants can help combat oxidative stress, one of the main culprits behind skin damage and premature aging. Vitamins A, C, and E, as well as certain minerals like selenium and zinc, are particularly beneficial for skin health. Omega-3 fatty acids, found in fatty fish, walnuts, and flaxseeds, can also promote skin health by reducing inflammation.
2. Hydration: Drinking plenty of water helps maintain skin elasticity and can potentially slow the formation of the SENEB layer. Dehydrated skin can exacerbate the appearance of wrinkles and fine lines.
3. Skincare Routine: Regular use of sunscreen is crucial to protect the skin from harmful UV radiation. Besides, skincare products containing retinoids, peptides, or antioxidants can help stimulate collagen production and repair damaged skin. Regular exfoliation can also aid in the removal of dead skin cells and stimulate new cell growth.
4. Regular Exercise: Regular physical activity can improve circulation, helping to nourish skin cells. It also helps in flushing cellular debris out of the system, including damaged elastin or collagen fibers.
5. Adequate Sleep: Getting enough sleep allows the body to repair and regenerate skin cells, which can help slow the aging process and the formation of the SENEB layer.
6. Avoid Smoking and Limit Alcohol Consumption: Both smoking and excessive alcohol consumption can accelerate skin aging. Avoiding smoking and limiting alcohol intake can help maintain skin health and elasticity.
7. Stress Management: Practices such as yoga, meditation, or other stress-relieving activities can help reduce inflammation and oxidative stress in the body, which in turn can slow skin aging.
8. Professional Treatments: Certain dermatological procedures like laser resurfacing, chemical peels, or microdermabrasion can help to reduce signs of aging by removing the outer layer of skin and stimulating the production of new skin cells.
By adopting these lifestyle modifications and skincare practices, it's possible to slow the aging process and maintain healthier, more youthful skin. However, it's important to remember that everyone's skin is unique, and what works for one person may not work for another. Therefore, it's always recommended to consult with a dermatologist or skincare professional to determine the best approach for your specific skin type and concerns.
|
741c417c-efed-4333-8426-dad7abf2ed97
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Hard Takeoff
**Continuation of**: [Recursive Self-Improvement](/lw/we/recursive_selfimprovement/)
Constant natural selection pressure, operating on the genes of the hominid line, produced improvement in brains over time that seems to have been, roughly, *linear or accelerating;* the operation of constant human brains on a pool of knowledge seems to have produced returns that are, very roughly, *exponential or superexponential.* ([Robin proposes](http://hanson.gmu.edu/longgrow.pdf) that human progress is well-characterized as a series of exponential modes with diminishing doubling times.)
Recursive self-improvement - an AI rewriting its own cognitive algorithms - identifies the object level of the AI with a force acting on the metacognitive level; it "closes the loop" or "folds the graph in on itself". E.g. the difference between returns on a constant investment in a bond, and reinvesting the returns into purchasing further bonds, is the difference between the equations y = f(t) = m\*t, and dy/dt = f(y) = m\*y whose solution is the compound interest exponential, y = e^(m\*t).
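A minimal numerical illustration of the contrast between those two equations (the rate m and the time horizon are arbitrary choices, just to show the divergence):

```python
# Contrast y = m*t (constant optimizer) with dy/dt = m*y (reinvested returns).
# The rate m and time grid are illustrative.
m, dt, T = 0.5, 0.001, 10.0

t, y_linear, y_compound = 0.0, 0.0, 1.0
while t < T:
    y_linear += m * dt                  # returns on a constant investment
    y_compound += m * y_compound * dt   # returns reinvested into the principal
    t += dt

print(f"linear:   {y_linear:.1f}")    # ~ m*T = 5
print(f"compound: {y_compound:.1f}")  # ~ e**(m*T), about 148
```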
When you fold a whole chain of differential equations in on itself like this, it should either peter out rapidly as improvements fail to yield further improvements, or else go FOOM. An *exactly right law of diminishing returns* that lets the system fly through the *soft takeoff keyhole* is unlikely - *far* more unlikely than seeing such behavior in a system with a roughly-constant underlying optimizer, like evolution improving brains, or human brains improving technology. Our present life is no good indicator of things to come.
Or to try and compress it down to a slogan that fits on a T-Shirt - not that I'm saying this is a good idea - "Moore's Law is exponential *now;* it would be really odd if it *stayed* exponential with the improving computers *doing the research.*" I'm not saying you literally get dy/dt = e^y that goes to infinity after finite time - and hardware improvement is in some ways the least interesting factor here - but should we really see the same curve we do now?
RSI is the biggest, most interesting, hardest-to-analyze, sharpest break-with-the-past contributing to the notion of a "hard takeoff" aka "AI go FOOM", but it's nowhere near being the *only* such factor. [The advent of human intelligence was a discontinuity with the past](/lw/w4/surprised_by_brains/) even *without* RSI...
...which is to say that observed evolutionary history - the discontinuity between humans, and chimps who share 95% of our DNA - *lightly* suggests a critical threshold built into the capabilities that we think of as "general intelligence", a machine that becomes far more powerful once the last gear is added.
This is only a *light* suggestion because the branching time between humans and chimps *is* enough time for a good deal of complex adaptation to occur. We could be looking at the sum of a [cascade](/lw/w5/cascades_cycles_insight/), not the addition of a final missing gear. On the other hand, we can look at the gross brain anatomies and see that human brain anatomy and chimp anatomy have not diverged all that much. On the gripping hand, there's the sudden cultural revolution - the sudden increase in the sophistication of artifacts - that accompanied the appearance of anatomically modern Cro-Magnons just a few tens of thousands of years ago.
Now of course this might all just be completely inapplicable to the development trajectory of AIs built by human programmers rather than by evolution. But it at least *lightly suggests,* and provides a hypothetical *illustration* of, a discontinuous leap upward in capability that results from a natural feature of the solution space - a point where you go from sorta-okay solutions to totally-amazing solutions as the result of a few final tweaks to the mind design.
I could potentially go on about this notion for a bit - because, in an evolutionary trajectory, it can't *literally* be a "missing gear", the sort of discontinuity that follows from removing a gear that an otherwise functioning machine was built around. So if you suppose that a final set of changes was enough to produce a sudden huge leap in effective intelligence, it does demand the question of what those changes were. Something to do with reflection - the brain modeling or controlling itself - would be one obvious candidate. Or perhaps a change in motivations (more curious individuals, using the brainpower they have in different directions) in which case you *wouldn't* expect that discontinuity to appear in the AI's development, but you would expect it to be more effective at earlier stages than humanity's evolutionary history would suggest... But you could have whole journal issues about that one question, so I'm just going to leave it at that.
Or consider the notion of sudden resource bonanzas. Suppose there's a semi-sophisticated Artificial General Intelligence running on a cluster of a thousand CPUs. The AI has not hit a wall - it's still improving itself - but its self-improvement is going so *slowly* that, the AI calculates, it will take another fifty years for it to engineer / implement / refine just the changes it currently has in mind. Even if this AI would go FOOM eventually, its current progress is so slow as to constitute being flatlined...
So the AI turns its attention to examining certain blobs of binary code - code composing operating systems, or routers, or DNS services - and then takes over all the poorly defended computers on the Internet. This may not require what humans would regard as genius, just the ability to examine lots of machine code and do relatively low-grade reasoning on millions of bytes of it. (I have a saying/hypothesis that a *human* trying to write *code* is like someone without a visual cortex trying to paint a picture - we can do it eventually, but we have to go pixel by pixel because we lack a sensory modality for that medium; it's not our native environment.) The Future may also have more legal ways to obtain large amounts of computing power quickly.
This sort of resource bonanza is intriguing in a number of ways. By assumption, optimization *efficiency* is the same, at least for the moment - we're just plugging a few orders of magnitude more resource into the current input/output curve. With a stupid algorithm, a few orders of magnitude more computing power will buy you only a linear increase in performance - I would not fear Cyc even if it ran on a computer the size of the Moon, because there is no there there.
On the other hand, humans have a brain three times as large, and a prefrontal cortex six times as large, as that of a standard primate our size - so with software improvements of the sort that natural selection made over the last five million years, it does not require exponential increases in computing power to support linearly greater intelligence. Mind you, this sort of biological analogy is always fraught - maybe a human has not much more cognitive horsepower than a chimpanzee, the same underlying tasks being performed, but in a few more domains and with greater reflectivity - the engine outputs the same horsepower, but a few gears were reconfigured to turn each other less wastefully - and so you wouldn't be able to go from human to super-human with just another sixfold increase in processing power... or something like that.
But if the lesson of biology suggests anything, it is that you do not run into logarithmic returns on *processing power* in the course of reaching human intelligence, even when that processing power increase is strictly parallel rather than serial, provided that you are at least as good at writing software to take advantage of that increased computing power as natural selection is at producing adaptations - five million years for a sixfold increase in computing power.
Michael Vassar observed in yesterday's comments that humans, by spending linearly more time studying chess, seem to get linear increases in their chess rank (across a wide range of rankings), while putting exponentially more time into a search algorithm is usually required to yield the same range of increase. Vassar called this "bizarre", but I find it quite natural. Deep Blue searched the raw game tree of chess; Kasparov searched the compressed regularities of chess. It's not surprising that the simple algorithm is logarithmic and the sophisticated algorithm is linear. One might say similarly of the course of human progress seeming to be closer to exponential, while evolutionary progress is closer to being linear. Being able to understand the regularity of the search space counts for quite a lot.
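A toy calculation makes the asymmetry vivid: if brute-force search gains a roughly constant rating per extra ply while the game tree grows exponentially in depth, then the compute required is exponential in rating. The branching factor and Elo-per-ply figures below are rough conventional estimates, used purely for illustration.

```python
# Toy model of the Deep Blue / Kasparov asymmetry: suppose each extra ply of
# brute-force search adds a roughly constant rating gain, while nodes searched
# grow like BRANCHING**depth. Then rating is logarithmic in compute.
# Both constants are rough, illustrative estimates.
BRANCHING = 30       # effective branching factor of chess
ELO_PER_PLY = 100    # assumed rating gain per extra ply of search

def nodes_for_rating_gain(elo_gain: float) -> float:
    plies = elo_gain / ELO_PER_PLY
    return BRANCHING ** plies

for gain in (100, 200, 400):
    print(f"+{gain} Elo needs ~{nodes_for_rating_gain(gain):.0e}x more nodes")
```

The human, by contrast, pays roughly linearly in study time over the same rating range, which is exactly the contrast Vassar pointed at.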
If the AI is somewhere in between - not as brute-force as Deep Blue, nor as compressed as a human - then maybe a 10,000-fold increase in computing power will only buy it a 10-fold increase in optimization velocity... but that's still quite a speedup.
Furthermore, all *future* improvements the AI makes to itself will now be amortized over 10,000 times as much computing power to apply the algorithms. So a single improvement to *code* now has more impact than before; it's liable to produce more further improvements. Think of a uranium pile. It's always running the same "algorithm" with respect to neutrons causing fissions that produce further neutrons, but just piling on more uranium can cause it to go from subcritical to supercritical, as any given neutron has more uranium to travel through and a higher chance of causing future fissions.
So just the resource bonanza represented by "eating the Internet" or "discovering an application for which there is effectively unlimited demand, which lets you rent huge amounts of computing power while using only half of it to pay the bills" - even though this event isn't particularly *recursive* of itself, just an object-level fruit-taking - could potentially drive the AI from subcritical to supercritical.
Not, mind you, that this will happen with an AI that's just stupid. But an AI already improving itself *slowly* - that's a different case.
Even if this doesn't happen - if the AI uses this newfound computing power at all effectively, its optimization efficiency will increase more quickly than before; just because the AI has *more* optimization power to apply to the task of increasing its own efficiency, thanks to the sudden bonanza of optimization resources.
So the *whole trajectory* can conceivably change, just from so simple and straightforward and unclever and uninteresting-seeming an act, as eating the Internet. (Or renting a bigger cloud.)
Agriculture changed the course of human history by supporting a larger population - and that was just a question of having more humans around, not individual humans having a brain a hundred times as large. This gets us into the whole issue of the returns on scaling individual brains not being anything like the returns on scaling the number of brains. A big-brained human has around four times the cranial volume of a chimpanzee, but 4 chimps != 1 human. (And for that matter, 60 squirrels != 1 chimp.) Software improvements here almost certainly completely dominate hardware, of course. But having a thousand scientists who collectively read all the papers in a field, and who talk to each other, is not like having one superscientist who has read all those papers and can correlate their contents directly using native cognitive processes of association, recognition, and abstraction. Having more humans talking to each other using low-bandwidth words, cannot be expected to achieve returns similar to those from scaling component cognitive processes within a coherent cognitive system.
This, too, is an idiom outside human experience - we *have* to solve big problems using lots of humans, because there is no way to solve them using ONE BIG human. But it never occurs to anyone to substitute four chimps for one human; and only a certain very foolish kind of boss thinks you can substitute ten programmers with one year of experience for one programmer with ten years of experience.
(Part of the general Culture of Chaos that praises emergence and thinks evolution is smarter than human designers, also has a mythology of groups being inherently superior to individuals. But this is generally a matter of poor individual rationality, and various arcane group structures that are supposed to compensate; rather than an inherent fact about cognitive processes somehow *scaling better when chopped up into distinct brains.* If that were *literally* more efficient, evolution would have designed humans to have four chimpanzee heads that argued with each other. In the realm of AI, it seems much more straightforward to have a single cognitive process that lacks the emotional stubbornness to cling to its accustomed theories, and doesn't *need* to be argued out of it at gunpoint or replaced by a new generation of grad students. I'm not going to delve into this in detail for now, just warn you to be suspicious of this particular creed of the Culture of Chaos; it's not like they actually *observed* the relative performance of a hundred humans versus one BIG mind with a brain fifty times human size.)
So yes, there was a lot of software improvement involved - what we are seeing with the modern human brain size, is probably not so much the brain volume *required* to support the software improvement, but rather the *new evolutionary equilibrium* for brain size *given* the improved software.
Even so - hominid brain size increased by a factor of five over the course of around five million years. You might want to think *very seriously* about the contrast between that idiom, and a successful AI being able to expand onto five thousand times as much hardware over the course of five minutes - when you are pondering possible hard takeoffs, and whether the AI trajectory ought to look similar to human experience.
A subtler sort of hardware overhang, I suspect, is represented by modern CPUs having a 2GHz *serial* speed, in contrast to neurons that spike 100 times per second on a good day. The "hundred-step rule" in computational neuroscience is a rule of thumb that any postulated neural algorithm which runs in realtime has to perform its job in less than 100 *serial* steps one after the other. We do not understand how to efficiently use the computer hardware we have now, to do intelligent thinking. But the much-vaunted "massive parallelism" of the human brain is, I suspect, [mostly cache lookups](/lw/k5/cached_thoughts/) to make up for the sheer awkwardness of the brain's *serial* slowness - if your computer ran at 200Hz, you'd have to resort to all sorts of absurdly massive parallelism to get anything done in realtime. I suspect that, if *correctly designed,* a midsize computer cluster would be able to get high-grade thinking done at a serial speed much faster than human, even if the total parallel computing power was less.
So that's another kind of overhang: because our computing hardware has run so far ahead of AI *theory,* we have incredibly fast computers we don't know how to use *for thinking;* getting AI *right* could produce a huge, discontinuous jolt, as the speed of high-grade thought on this planet suddenly dropped into computer time.
A still subtler kind of overhang would be represented by human [failure to use our gathered experimental data efficiently](/lw/qk/that_alien_message/).
On to the topic of insight, another potential source of discontinuity. The course of hominid evolution was driven by evolution's neighborhood search; if the evolution of the brain accelerated to some degree, this was probably due to existing adaptations creating a greater number of possibilities for further adaptations. (But it couldn't accelerate past a certain point, because evolution is limited in how much selection pressure it can apply - if someone succeeds in breeding due to adaptation A, that's less variance left over for whether or not they succeed in breeding due to adaptation B.)
But all this is searching the raw space of genes. Human design intelligence, or sufficiently sophisticated AI design intelligence, isn't like that. One might even be tempted to make up a completely different curve out of thin air - like, intelligence will take all the easy wins first, and then be left with only higher-hanging fruit, while increasing complexity will defeat the ability of the designer to make changes. So where blind evolution accelerated, intelligent design will run into diminishing returns and grind to a halt. And as long as you're making up fairy tales, you might as well further add that the law of diminishing returns will be exactly right, and have bumps and rough patches in exactly the right places, to produce a smooth gentle takeoff even after recursion and various hardware transitions are factored in... One also wonders why the story about "intelligence taking easy wins first in designing brains" *tops out* at or before human-level brains, rather than going *a long way beyond human* before topping out. But one suspects that if you tell *that* story, there's no point in inventing a law of diminishing returns to begin with.
(Ultimately, if the character of physical law is anything like our current laws of physics, there will be limits to what you can do on finite hardware, and limits to how much hardware you can assemble in finite time, but if they are very *high* limits relative to human brains, it doesn't affect the basic prediction of hard takeoff, "AI go FOOM".)
The main thing I'll venture into actually expecting from adding "insight" to the mix, is that there'll be a discontinuity at the point where the AI *understands how to do AI theory,* the same way that human researchers try to do AI theory. An AI, to swallow its own optimization chain, must not just be able to rewrite its own source code; it must be able to, say, rewrite *Artificial Intelligence: A Modern Approach* (2nd Edition). An ability like this seems (untrustworthily, but I don't know what else to trust) like it ought to appear at around the same time that the architecture is at the level of, or approaching the level of, being able to handle what humans handle - being no shallower than an actual human, whatever its inexperience in various domains. It would produce further discontinuity at around that time.
In other words, when the AI becomes smart enough to *do AI theory,* that's when I expect it to fully swallow its own optimization chain and for the *real* FOOM to occur - though the AI might *reach* this point as part of a cascade that started at a more primitive level.
All these complications are why I don't believe we can *really* do any sort of math that will predict *quantitatively* the trajectory of a hard takeoff. You can make up models, but real life is going to include all sorts of discrete jumps, bottlenecks, bonanzas, insights - and the "fold the curve in on itself" paradigm of recursion is going to amplify even small roughnesses in the trajectory.
So I stick to qualitative predictions. "AI go FOOM".
Tomorrow I hope to tackle locality, and a bestiary of some possible qualitative trajectories the AI might take given this analysis. Robin Hanson's summary of "primitive AI fooms to sophisticated AI" doesn't fully represent my views - that's just one entry in the bestiary, albeit a major one.
|
19faacac-1132-4a49-85ca-0ee81a89efda
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Is there a name for the theory that "There will be fast takeoff in real-world capabilities because almost everything is AGI-complete"?
I think it's plausible that:
1) For many applications, getting narrow AI to do a task well enough to be valuable doesn't seem worth the effort, and likely isn't (esp. when considering opportunity cost and alternative applications of AI).
2) Thus proto-AGI is actually not going to change the world that much
3) But OFC, AGI will (once/assuming it's cheap enough)
If correct, this could mean that people probably won't really be impressed by narrow AI at any point, and then all of a sudden we get AGI and everything changes rapidly.
I'm just sketching it out and probably didn't do the best job, but my questions are: Is this something people have seen argued? Is there a name for it? (or want to propose one?)
|
ab06792d-da07-4797-b8d0-cdaeee1765a2
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Implications of evidential cooperation in large worlds
I've written several posts about the plausible implications of "evidential cooperation in large worlds" (ECL), on my newly-revived blog. This is a cross-post of [the first](https://lukasfinnveden.substack.com/p/implications-of-ecl). If you want to see the rest of the posts, you can either go to [the blog](https://lukasfinnveden.substack.com/) or click through the links in this one.
All of the content on my blog, including this post, only represent my own views — not those of my employer. (Currently OpenPhilanthropy.)
---
“ECL” is short for “evidential cooperation in large worlds”. It’s an idea that was originally introduced in [Oesterheld (2017)](https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf) (under the name of “multiverse-wide superrationality”). This post will explore implications of ECL, but it won’t explain the idea itself. If you haven’t encountered it before, you can read the paper linked above or [this summary](http://effective-altruism.com/ea/1gf/multiversewide_cooperation_in_a_nutshell/) written by Lukas Gloor.[1](#footnote-1)
This post lists all candidates for decision-relevant implications of ECL that I know about and think are plausibly important.[2](#footnote-2) In this post, I will not describe in much depth why they might be implications of ECL. Instead, I will lean on the principle that ECL recommends that we (and other ECL-sympathetic actors) act to benefit the values of people whose decisions might correlate with our decisions.
As described in [this appendix](https://lukasfinnveden.substack.com/i/136237476/what-values-do-you-need-for-this-to-be-relevant), this relies on you and others having particular kinds of values. For one, I assume that you care about what happens outside our [light cone](https://en.wikipedia.org/wiki/Light_cone). But more strongly, I’m looking at values with the following property: If you could have a sufficiently large impact outside our lightcone, then the value of taking different actions would be dominated by the impact that those actions had outside our lightcone. I’ll refer to this as “universe-wide values”. Even if *all* your values aren’t universe-wide, I suspect that the implications will still be relevant to you if you have *some* universe-wide values.
This is speculative stuff, and I’m not particularly confident that I will have gotten any particular claim right.
Summary (with links to sub-sections)
====================================
For at least two reasons, future actors will be in a better position to act on ECL than we are. Firstly, they will know a lot more about what other value-systems are out there. Secondly, they will be facing immediate decisions about what to do with the universe, which should be informed by what other civilizations would prefer.[3](#footnote-3) This suggests that it could be important for us to [Affect whether (and how) future actors do ECL](https://lukasfinnveden.substack.com/i/136237476/affect-whether-and-how-future-actors-do-ecl). This can be decomposed into two sub-points that deserve separate attention: how we might be able to affect [Futures with aligned AI](https://lukasfinnveden.substack.com/i/136237476/futures-with-aligned-ai), and how we might be able to affect [Futures with misaligned AI](https://lukasfinnveden.substack.com/i/136237476/futures-with-misaligned-ai).
But separately from influencing future actors, ECL also changes our own priorities, today. In particular, ECL suggests that we should care more about other actors’ universe-wide values. When evaluating these implications, we can look separately at three different classes of actors and their values. I’ll separately consider how ECL suggests that we should…
* [Care more about](https://lukasfinnveden.substack.com/i/136237476/how-us-doing-ecl-affects-our-priorities) [*other humans’*](https://docs.google.com/document/d/1XCZ-g_GAyZwfJfWHTRVyzptVU2_uI9tFWMG8x9j4C_o/edit#heading=h.qkf9kvplu0c8) [universe-wide values](https://lukasfinnveden.substack.com/i/136237476/care-more-about-other-humans-universe-wide-values).[4](#footnote-4)
+ I think the most important implication of this is that [Upside- and downside-focused longtermists should care more about each others’ values](https://lukasfinnveden.substack.com/i/136237476/upside-and-downside-focused-longtermists-should-care-more-about-each-others-values).
* [Care more about *evolved aliens’*](https://lukasfinnveden.substack.com/i/136237476/care-more-about-evolved-aliens-universe-wide-values) [universe-wide values](https://docs.google.com/document/d/1XCZ-g_GAyZwfJfWHTRVyzptVU2_uI9tFWMG8x9j4C_o/edit#heading=h.10dsvgq5soqa).
+ I think the most important implication of this is that we plausibly should care more about [influencing how AI could benefit/harm alien civilizations](https://lukasfinnveden.substack.com/i/136237476/influence-how-ai-benefitsharms-alien-civilizations-values).
+ How much more? I try to answer that question in [the next post](https://lukasfinnveden.substack.com/p/how-ecl-changes-the-value-of-interventions). My best guess is that ECL boosts the value of this by 1.5-10x. (This is importantly based on my intuition that we would care a bit about alien values even without ECL.)
* [Care more about](https://lukasfinnveden.substack.com/i/136237476/care-more-about-misaligned-ais-universe-wide-values) [*misaligned AIs’*](https://docs.google.com/document/d/1XCZ-g_GAyZwfJfWHTRVyzptVU2_uI9tFWMG8x9j4C_o/edit#heading=h.6dtoz94g6ytt) [universe-wide values](https://lukasfinnveden.substack.com/i/136237476/care-more-about-misaligned-ais-universe-wide-values).[5](#footnote-5)
+ I don’t think this significantly [reduces the value of working on alignment](https://lukasfinnveden.substack.com/i/136237476/minor-prioritize-ai-takeover-risk-less-highly).
+ But it suggests that it could be valuable to build AI so that *if* it ends up misaligned, it has certain other desirable inclinations and values. This topic, of [positively influencing misaligned AI](https://lukasfinnveden.substack.com/i/136237476/positively-influence-misaligned-ai) in order to cooperate with distant misaligned AI, is very gnarly, and it’s difficult to tell what sort of changes would be net-positive vs. net-negative.
+ I discuss this more in [a later post](https://lukasfinnveden.substack.com/p/ecl-with-ai).
(For more details on the split between humans/evolved-aliens/misaligned-AI and why I chose it, see [this appendix.](https://lukasfinnveden.substack.com/i/136237476/more-details-on-the-split-between-humans-evolved-species-and-misaligned-ai))
Affect whether (and how) future actors do ECL
=============================================
Futures with aligned AI
-----------------------
If we take ECL seriously,[6](#footnote-6) I think it’s really important that humanity *eventually* understands these topics deeply, and can make wise decisions about them. But for most questions about what humanity should *eventually* do, I’m inclined to defer them to the future. I’m interested in whether there’s anything that *urgently* needs to be done.
One way to affect things is to increase the probability that humanity ends up building a healthy and philosophically competent civilization. (But we already knew that was important.)
There might also be ways in which humanity could irreversibly mess up in the near-term that are unique to decision theory. For example, people could make unwise commitments if they perceive themselves to be in [commitment races](https://www.lesswrong.com/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem).[7](#footnote-7) Or there might be ways in which people could learn too much information, too early. (We don’t currently have any formalized decision theories that *can’t* be harmed by learning information. For example, people who use evidential decision theory can only influence things that they haven’t yet learned about — which means that information can make them lose power.) (C.f. [this later post](https://lukasfinnveden.substack.com/p/when-does-edt-seek-evidence-about) on when EDT seeks out or avoids certain information.) It’s possible that careful thinking could reduce such risks.[8](#footnote-8) (For example, perhaps it would be good to prevent early AI systems from thinking about these topics until they and we are more competent.)
How is ECL relevant for this? Broadly, it seems like ECL is an important part of the puzzle for what various decision theories recommend. So learning more about ECL seems like it could help clarify the picture, here, and clarify what intervention points exist. (This also applies to futures with misaligned AI.)
(For related discussion in [Oesterheld (2017)](https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf), see section 4.5 on researching and promoting ECL and 4.6.3 on making AI act according to ECL.)
Futures with misaligned AI
--------------------------
Affecting how misaligned AI does ECL is also an intervention point.
I think ECL could play a couple of different roles, here:
* Firstly, ECL-sympathetic AI systems might treat *us* better (e.g. by giving humanity a solar-system-sized utopia instead of killing us).
+ In order for ECL to motivate AI systems to treat us nicely, there needs to be some distant actors that care about us. I.e., someone would need to have universe-wide values that specifically value the preservation of distant pre-AGI civilizations over other things that could be done with those resources.
* Secondly, ECL-sympathetic AI systems might trade (and avoid conflict) with distant civilizations, thereby benefiting those civilizations.
+ This is intrinsically good if we intrinsically care about those distant civilizations’ values.
+ In addition, it’s plausible that ECL recommends us to care about benefits that accrue to distant civilizations’ whose values we don’t intrinsically care about. This is discussed below, in [Influence how AI benefits/harms alien civilizations](https://lukasfinnveden.substack.com/i/136237476/influence-how-ai-benefitsharms-alien-civilizations-values).
+ Such trade could also benefit *the misaligned AI system’s own values*, and ECL might give us reason to care about those values. This is more complicated. I discuss it below in [Positively influence misaligned AI](https://lukasfinnveden.substack.com/i/136237476/positively-influence-misaligned-ai).
How *us doing* ECL affects our priorities
=========================================
Care more about other humans’ universe-wide values
--------------------------------------------------
### It matters less which universe-wide values control future resources (seems minor in practice?)
Let’s temporarily assume that humanity will avoid both near-term extinction and AI takeover. Even then, the value of the future could depend a lot on *which human values* will be empowered to decide what’s to be done with all the matter, space, and time in our lightcone.
If someone had an opportunity to influence this (e.g. by promoting certain values), ECL would generally be positive on empowering universe-wide values (that are compatible with good decision-theoretic reasoning), since for any such values:
* You might correlate with distant people who hold such values, in which case ECL gives you reason to benefit them.
* If such values maintain power into the long-term future, and our future civilization ends up deciding that ECL (or something similar) works, then ECL will motivate them to benefit other universe-wide values. (At least insofar as there are gains from trade to be had.)
If you were previously concerned about promoting *any particular* universe-wide values, this means that you should now be somewhat less fussed about promoting those values in particular, as opposed to any other universe-wide values. In struggles for influence that are mainly a struggle about universe-wide values, you should care less about who wins.
(This is related to discussion about moral advocacy in section 4.2 of [Oesterheld (2017)](https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf); especially 4.2.7.)
Here’s a slightly more worked-out gesture at why ECL would recommend this.
* Let’s say that you’re a supporter of faction A, in a struggle for influence against faction B. You can decide to either invest in the 0-sum struggle for influence, or you can decide to invest in something that you both value (e.g. reducing uncontroversial x-risks or s-risks).
* If support for faction B is compatible with good decision-theoretic reasoning, then on some distant planet, there will probably be supporters of faction B who are in an analogous but reversed situation to you (in a struggle for influence against faction A) who are thinking about this decision in a similar way.
* If you decide to support the common good instead of faction A, then faction A’s expected influence will decrease a bit on your planet. But your choice to do so is evidence that the distant supporters of faction B also will support the public good (instead of faction B) on their planet, which will lead faction A’s expected influence to increase a bit (and also lead to positive effects from the support of the public good).
* So ECL provides a reason to invest less resources in the 0-sum fight and instead care more about public goods.
(In order to work out *how* much less you’d want to invest in the 0-sum fight, you’d want to think about the ratio of “how much evidence am I providing that supporters of faction A will invest in the public good” to “how much evidence am I providing that supporters of faction B will invest in the public good”.[9](#footnote-9) I’m only illustrating the directional argument here.)
While I believe the ECL argument works here, it doesn’t currently seem very decision-relevant to me. Competitions that could be important for the future (e.g. competition between AI labs or between US and China) don’t seem well-characterized as conflicts between universe-wide value-systems. At least my personal opinions about them are mostly about who’s more/less likely to cause an (uncontroversial) x-risk along the way; and perhaps about who’s more/less likely to help create a society that adopts reasonably impartial values and become sufficiently philosophically sophisticated to enact the best version of them.
That said, for someone who was previously obsessed with boosting a *particular* value-system (e.g. spreading hedonistic utilitarianism, or personally acquiring power for impartial ends), I think ECL should nudge that motivation toward being somewhat more inclusive of other universe-wide values.
### Upside- and downside-focused longtermists should care more about each others’ values
(Terms are defined as [here](https://longtermrisk.org/cause-prioritization-downside-focused-value-systems/): “Upside-focused values” are values that *in our empirical situation* recommend focusing on bringing about lots of positive values. “Downside-focused values” are values that *in our empirical situation* recommend working on interventions that make bad things less likely, typically reducing suffering.)
If we look beyond struggles for influence and resources, and instead look for any groups of humans who have different *universe-wide* values, and where this leads to different real-world priorities, the two groups that stand out are upside-focused and downside-focused longtermists. For these groups, we also *have actual examples* of both upside- and downside-focused people thinking about ECL-considerations in a similar way. Which makes the ECL-argument more robust.
It seems good for people to know about and bear this in mind. For example, it means that upside- and downside-focused people should:
* be inclined to take high-leverage opportunities to help each other,
* decide what to work on somewhat less on the basis of values and somewhat more on the basis of comparative advantage,
* avoid actions that would benefit their own values at considerable cost to the others’ values.
As usual, the ECL-argument here is: If you choose to take any of these actions, then that’s evidence that distant people with *the other* value-system will choose to take analogous actions to benefit *your* favorite value-system.
How strong is this effect? I’m not sure. What follows is a few paragraphs of speculation. (Flag: These paragraphs rely even more on pre-existing knowledge about ECL than the rest of the post.)
Ultimately, it depends on the degree to which humans correlate relatively more with the decisions of people with shared values vs. different values, on this type of decision.
I.e., the question is: If someone with mostly upside-focused values decides to do something that benefits downside-focused values, how much evidence is this that (i) distant upside-focused people will help out people with downside-focused values, vs. (ii) distant downside-focused people will help out people with upside-focused values. (From the perspective of the person who makes the decision.)
If it’s similarly much evidence for both propositions, then upside-focused and downside-focused people should be similarly inclined to benefit each others’ values as to benefit their own values.[10](#footnote-10)
Here’s an argument in favor of this: Regardless of whether you have upside-focused or downside-focused values, the ECL argument (that you should care about the others’ values) is highly similar. So it seems like there’s no large dependence on what values you have. Accordingly, it seems like your decision should be equally much evidence for how other people act, regardless of what values they have.
Here’s a counter-argument: When you’re making any particular decision, perhaps you are getting disproportionately much evidence about how actors that are especially similar to you tend to feel about ECL-based cooperation-arguments in especially similar situations. (After all: Most of your evidence about how likely people are to act on ECL-arguments *in general* will come from observations about what decisions *other* people make.) And perhaps an important component of “especially similar” is that those actors would share your values.[11](#footnote-11)
(For some more of my speculation, including some counter-arguments to that last paragraph, see the post [Are our choices analogous to AI choices?](https://lukasfinnveden.substack.com/p/are-our-actions-evidence-for-ai-decisions), which discusses a similar question but with regards to correlating with *AI* instead of with *humans* (a relatively less likely proposition). Nevertheless, similar considerations come up. See also section 3.1 in [Oesterheld (2017)](https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf) on orthogonality of rationality and values.)
Overall, I feel quite uncertain here. This uncertainty corresponds to a sense that my actions are somewhat less evidence for the decisions of people who don’t share my values, but not a huge amount less. Summing over my uncertainty, I feel like my decisions are ≥10% as much evidence for the decisions of people who don’t share my values (as they are evidence for the decisions of people who share my values) — which would imply that I should care ≥10% as much about their values as I care about my own.[12](#footnote-12)
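In toy code, the heuristic I’m using looks something like this (a sketch only; the function and the 0.1 figure are just a restatement of my guess above, not a worked-out decision theory):

```python
# Toy sketch: weight another value-system in proportion to how much evidence
# my decision provides about its supporters' decisions, relative to the
# evidence it provides about the decisions of people who share my values.

def ecl_care_weight(evidence_about_them: float, evidence_about_likeminded: float) -> float:
    """Fraction of 'own-values' care to extend to the other value-system."""
    return evidence_about_them / evidence_about_likeminded

# My all-things-considered guess from the paragraph above (illustrative):
print(ecl_care_weight(evidence_about_them=0.1, evidence_about_likeminded=1.0))  # -> 0.1
```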
Care more about evolved aliens’ universe-wide values
----------------------------------------------------
ECL also recommends caring more about alien civilizations. Here are three different implications of this.
### Minor: Prioritize non-AI extinction risk less highly
A minor consequence of this is: You might want to prioritize non-AI extinction risk slightly *less* highly than before. Because if Earth doesn’t colonize the universe, some of that space will (in expectation) get colonized by alien civilizations instead, to their benefit.
If we were to trade like-for-like, the story would be: If we prioritize non-AI extinction risk slightly less highly (and put higher priority on making sure that space colonization is good if it does happen), then that’s evidence that distant aliens also prioritize non-AI extinction risk slightly less highly. If this leads to their extinction, and their neighbors share our values, then civilizations with our values will recover some of that empty space.[14](#footnote-14)
I think this effect seems minor (unlikely to make reducing non-AI extinction risk less than half as useful as you previously thought).
This is mainly because ECL doesn’t recommend us to care *as* much about alien values as we care about human values.[15](#footnote-15) I would be surprised if ECL recommended that we prioritize random alien values more than half as much as our own, which suggests that even if aliens were guaranteed to colonize space in our place, at least ½ of value would be lost from our failure to do so.
An additional consideration is that aliens are (probably) not common enough to take over all space that we would have missed. I think the relevant comparison is between “space we could get to *before aliens*” (i.e. the total amount of space that humanity would get access to if space is defense-dominant) vs. “space that *only* we could get to” (i.e. the space that humanity would get access to without excluding any aliens from it, such that we would want to get there even if we cared just as much about alien’s values as our own values).
* [My old estimate](https://forum.effectivealtruism.org/posts/9p52yqrmhossG2h3r/quantifying-anthropic-effects-on-the-fermi-paradox) is that the latter is ~⅓ times as large as the former. This suggests that, even if ECL made us care just as much about alien values as our own values, we would still care ⅓ as much about colonizing space.
* But my old estimate didn’t take into account that intelligence might re-evolve on Earth if humans go extinct. I think this is fairly likely, based on my impression that recent evolutionary increases in intelligence have been rapid compared to Earth’s remaining life-span. If there’s only a ⅓ probability that intelligence fails to re-evolve on Earth, then the expected amount of space that wouldn’t be colonized by anyone is only ⅓ × ⅓ ≈ 10% as large as the space that humans would get to first.
* So if we took these estimates at face-value — they suggest that *if* ECL moved us from “don’t care about alien values at all” to “care about alien values just as much as our own values”, this would reduce the value of non-AI extinction-reduction by at most 10x.
* …from the perspective of universe-wide values which value marginal influence over the universe roughly linearly. Other plausible value-systems would be less swayed by this argument, so it overestimates how much our all-things-considered view should move.
Also, as I discuss [below](https://lukasfinnveden.substack.com/i/136237476/minor-prioritize-non-ai-extinction-risk-less-highly), ECL might similarly motivate us to prioritize AI takeover less highly. Since this is the most salient alternative priority to “non-AI extinction risk” on a longtermist view, they partly cancel out.
### Influence how AI benefits/harms alien civilizations’ values
A different way in which we could benefit aliens is to increase the probability that Earth-originating intelligence benefits them (if Earth-originating intelligence doesn’t go extinct). This could apply to either aliens that we physically meet in space, or to distant aliens that we can’t causally interact with.
I typically think about such interventions as a special kind of “cooperative AI”-intervention — increasing the probability that AIs are inclined to seek out win-win opportunities with other value systems. See [this post](https://lukasfinnveden.substack.com/p/how-ecl-changes-the-value-of-interventions) for more discussion of this. The brief summary is: ECL could plausibly increase the value of interventions that aim to make misaligned AI treat alien civilizations better by ~1.5-10x.
### Possibly: Weigh suffering-focused values somewhat higher if they are more universal
This is the item on the list that I’ve thought the least about. But I recently realized that it passes my bar of being a “plausibly important implication”, so I’ll briefly mention it.
As argued in section 4.1.1 of [Oesterheld (2017)](https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf), ECL suggests that we should give greater weight to “universalist values” over idiosyncratic concerns. (Note that “universalist values” does *not* mean the same thing as universe-wide values. Universalist values are values that are highly common across the universe.)
While I buy the basic argument for this principle, I didn’t initially see any decision-relevant applications of it.
For example, one idiosyncratic value is to care especially much about yourself, and less about others. But in order for the argument to apply here, we require that the tradeoff rate between altruistic values and selfish values is sensitive to abstract arguments about the altruistic stakes involved. At least for me personally, I intuitively feel like learning about the potential size of the far future should have ~“maxed out” the degree to which abstract knowledge of high altruistic stakes will compel me to act more altruistically and less selfishly. Such that there’s not a lot of room left for ECL to move me.
Another example is that it affects what precise form of moral advocacy we should be interested in, insofar as we’re pursuing moral advocacy to influence long-term values. But I don’t currently think that it seems like a high-priority intervention to advocate for highly specific values, with the purpose of influencing long-term values. (I think it’s relatively more promising to do “moral advocacy to change near-term behavior” or “advocating for good principles of ethical deliberation”. But I don’t think that the value of those interventions is very sensitive to whether ECL recommends universal values over idiosyncratic values.)
But here’s a scenario where this consideration might matter. Some people’s views on ethics have an asymmetry between *positive* visions of the future and *negative* visions of the future. Where positive visions need to get a lot of complex, human-specific things right, in order to satisfy humans’ highly contingent, highly complex values. Whereas negative visions only need suffering — and then that’s already bad.[13](#footnote-13)
If this is your view, then a plausible corollary could be: there are many more aliens across the universe who share your concern for suffering than who share your vision for a positive future.
And if that’s true, then you could potentially get gains-from-trade from everyone putting relatively more attention into reducing suffering (which is appreciated by everyone) and relatively less attention on bringing about highly complex positive future visions (which is only appreciated by a relatively small fraction).
How large is this effect?
* When you act to bring about positive futures that you value, your influence is proportional to the (power-weighted) number of actors who share those values, multiplied by the average amount that you correlate with them.
* When you act to reduce negative experiences that you disvalue, your influence is proportional to the (power-weighted) number of actors who share those values, multiplied by the average amount that you correlate with them.
So for example: If you thought that 10x more actors shared your conception of suffering than your conception of a good future, but that those actors were less similar to you and therefore correlated with you 2x less on average, then that would increase the value of suffering-focused interventions by 5x.
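Spelled out as a calculation (a toy sketch; the 10x and 2x figures are the illustrative ones from the example above):

```python
# Toy sketch of the influence bookkeeping in the bullets above:
# influence ~ (power-weighted number of actors who share the value)
#           * (average correlation between you and those actors)

def relative_influence(n_actors: float, avg_correlation: float) -> float:
    return n_actors * avg_correlation

positive_vision = relative_influence(n_actors=1.0, avg_correlation=1.0)
suffering_focus = relative_influence(n_actors=10.0, avg_correlation=0.5)  # 10x actors, 2x less correlated

print(suffering_focus / positive_vision)  # -> 5.0, i.e. a 5x boost for suffering-focused work
```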
Care more about misaligned AIs’ universe-wide values
----------------------------------------------------
More speculatively, ECL might recommend that we care more about the universe-wide values of distant misaligned AIs.[16](#footnote-16) Why is this more speculative? In order for ECL to give us reason to benefit AIs, we would have to be similar enough to those AIs that our decisions have some acausal influence on their decisions. If we assume evidential decision theory, this means that our decision needs to give some evidence for what distant misaligned AIs choose to do. And intuitively, it seems less likely that our decisions provide evidence about distant misaligned AIs’ actions than that they provide evidence about the actions of distant aliens. (Since AIs’ minds probably differ more from ours, and since the decision-situations they are likely to find themselves in differ more from ours.)
I feel uncertain about whether ECL says we should care more about the values of distant AIs. If it did, here are two conclusions.
### Minor: Prioritize AI takeover risk less highly
One potential implication could be that we should prioritize AI takeover risk slightly less highly: Because although it would be bad for misaligned AI to seize power, it would at least be slightly good that the AI gets to implement its own values, as long as those values were endorsed by many distant ECL-sympathetic AIs. (Since us benefiting their values, in this way, would be evidence that they make decisions to benefit our values — or at least that’s the hypothetical we’re considering.)
However, this effect seems minor (unlikely to make misalignment reduction less than half as useful as you previously thought). Because:
* As mentioned above, we might not be similar enough to AIs for the ECL argument to work. And even if we’re similar enough to have some acausal influence on them, ECL doesn’t recommend us to care *as* much about AI values as we care about human values.[17](#footnote-17) I would be surprised if ECL recommended that we prioritize random AI values more than half as much as our own.
* This is mainly an argument to slightly deprioritize AI takeover *by ECL-sympathetic AI systems with universe-wide values*. But AIs might not have universe-wide values, and might not be ECL-sympathetic, by default.
Also, as discussed [above](https://lukasfinnveden.substack.com/i/136237476/minor-prioritize-non-ai-extinction-risk-less-highly), ECL might similarly motivate us to prioritize *non-AI* extinction less highly. (Which is the most salient alternative priority to misalignment risk, on a longtermist view.)
### Positively influence misaligned AI
A different way in which we could benefit distant misaligned AIs’ universe-wide values is to adjust how we build AI so that *if* humanity ends up building AI that is misaligned with our own values, then it’s more likely to successfully optimize for values that distant misaligned AIs would approve of. Unfortunately, it seems very difficult to work out what sort of changes would be good and what sort of changes would be bad, here.
For more writing on this, see [here](https://lukasfinnveden.substack.com/p/ecl-with-ai).
More
----
I’m following up this post with two other posts:
* [ECL and benefitting distant civilizations](https://lukasfinnveden.substack.com/p/how-ecl-changes-the-value-of-interventions), for more on how ECL affects the value of influencing how AI might benefit/harm distant alien civilizations.
* [ECL with AI](https://lukasfinnveden.substack.com/p/ecl-with-ai), digging into how we could positively influence misaligned AI, and whether ECL recommends that.
Appendices
==========
What values do you need for this to be relevant?
------------------------------------------------
I'd say this post is roughly: advice to people whose values are such that, *if* they were to grant that their actions acausally affected an enormous number of worlds quite different from their own, they would say that a large majority of the impact they cared about was impact on those distant worlds.
Importantly, it's also advice to people who endorse some type of moral uncertainty or pluralism, and have *components* of their values that behave like that. Then it's advice for what that value-component should advocate and bargain for. (I think this is probably a more realistic account of most humans’ values.)
(Though one of the many places where I haven't thought about the details is: If you are trying to acausally influence agents with universe-wide values, does it pose any extra troubles if you yourself only have partially universe-wide values and do some messy compromise thing?)
I’ll use “universe-wide” values as a shorthand for these types of values. (“Multiverse-wide” would also be fine terminology — but I think caring about a spatially large universe is sufficient.)
(For previous discussion of what values are necessary for ECL, see section 3.2 in [Oesterheld (2017)](https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf).)
More details on the split between humans, evolved species, and misaligned AI
----------------------------------------------------------------------------
Above, I separately consider how ECL suggests that we should care more about:
* other humans’ universe-wide values,
* evolved aliens’ universe-wide values,
* misaligned AIs’ universe-wide values.
This raises two questions:
* Why the distinction between “other humans’ universe-wide values” and “evolved aliens’ universe-wide value”?
* Why the distinction between “evolved aliens’ universe-wide values” and “misaligned AIs’ universe-wide values”?
### Why distinguish humans from aliens?
When I talk about benefiting other humans’ universe-wide values, I don’t mean to imply that we’re acausally cooperating with just the local humans on our own planet Earth. I think almost all the benefits come via evidence that very distant actors behave more cooperatively. Such actors could be either quite similar to humans or quite unlike humans (in at least some ways).
So why talk specifically about the universe-wide values of “other humans”, rather than the broader group of aliens?
The answer is that universe-wide values held by other humans have a number of unique properties:
* For any universe-wide values held by humans, we have *empirical support* that evolved species sometimes grow to treasure those values.
* Even stronger, we have empirical support that *minds very similar to our own* can grow to treasure those values, which strengthens the case for high correlations, and thereby the case for ECL-based cooperation.
* Universe-wide values held by humans can conveniently be benefitted *via* the humans that support them. For example, by:
+ supporting the humans that hold them.
+ avoiding conflicts with humans that hold them.
+ listening to the advice of humans that hold them.
This is quite different from non-human values, where we have to resort to more basic guesses about their preferences, like:
* Aliens probably value having access to more space over having access to less space.
* Aliens probably prefer to interact with other actors who are cooperative rather than conflict-prone.
### Why distinguish evolved aliens from misaligned AIs?
First, a terminological note: “Misaligned AI” refers to AI whose values are very different from what was intended by the evolved species that first created them. If a distant species has very different values from us, and successfully aligns AI systems that *they* create, I’d count those as “aligned” AIs.
(“Aligned AIs” themselves will, of course, have the same values as evolved aliens. Benefiting their values would be the same as benefitting the values of some evolved aliens, so they don’t need a separate category.)
Now, why do I separately consider the values of evolved aliens and misaligned AIs? There are two reasons.
Firstly, compared to AI, evolved aliens probably have minds that are more similar to ours, and face decisions that are more similar to ours. Thus, there’s a stronger case that our decisions correlate more with their decisions, making the case for ECL-based cooperation stronger.
Secondly, AI progress is currently fast, and I have less than perfect confidence in humanity’s ability to only create and empower aligned AI systems. This (ominously) suggests that we may soon have unique opportunities to benefit or harm the values of misaligned AI systems.
---
Acknowledgments
===============
For helpful comments and suggestions on this and related posts, thanks to Caspar Oesterheld, Emery Cooper, Lukas Gloor, Tom Davidson, Joe Carlsmith, Linh Chi Nguyen, Daniel Kokotajlo, Jesse Clifton, Richard Ngo, Anthony DiGiovanni, Charlotte Siegmann, Tristan Cook, Sylvester Kollin, and Alexa Pan.
---
Footnotes
=========
[1](#footnote-anchor-1)
For even more references, see all the content gathered on [this page](https://longtermrisk.org/msr), and more recently, [this post](https://www.lesswrong.com/posts/mm8sFBpPH3Bb2NhGg/three-reasons-to-cooperate) written by Paul Christiano and [this paper](https://arxiv.org/pdf/2307.04879.pdf) by Johannes Treutlein.
[2](#footnote-anchor-2)
If you know any plausible implication that I don’t list here — then either I don’t buy that it’s an implication of ECL, or it doesn’t seem sufficiently decision-relevant to me, or I haven’t thought about it / forgot about it and you should let me know.
[3](#footnote-anchor-3)
Whereas today, we can focus on handing off the future to a broadly competent and healthy civilization, and trust decisions about what to do with the future to them.
[4](#footnote-anchor-4)
When I discuss how we should “care more about other humans’ universe-wide values”, I exclusively refer to universe-wide values held by humans on our current planet Earth, as opposed to values that might be held by distant human-like species. But the reason to benefit such values is to generate evidence that other people benefit our values on distant planets (not just here, on planet Earth). So why focus specifically on humans’ values? The reason is that we are more confident that some people treasure them, and it’s easy to benefit them via supporting humans who support them. For more, see [here](https://docs.google.com/document/d/1XCZ-g_GAyZwfJfWHTRVyzptVU2_uI9tFWMG8x9j4C_o/edit#heading=h.y1hw7rervyd).
[5](#footnote-anchor-5)
“Misaligned AI” refers to AI whose values are very different from what was intended by the evolved species that first created them. If a distant species has very different values from us, and successfully aligns AI systems that they create, I wouldn’t count those as “misaligned AIs”.
[6](#footnote-anchor-6)
Or any other kind of acausal effects.
[7](#footnote-anchor-7)
Premature commitments are often a gamble that might gain *you* a better bargaining position while carrying a risk of *everyone* getting a lower payoff. Since that’s quite uncooperative, it seems plausible that ECL could discourage premature commitments. So this might be a reason to spread knowledge about ECL.
[8](#footnote-anchor-8)
Though it’s also possible that *un*careful thinking could increase them — given that they are, by their nature, caused by humanity making errors in what order they learn about and commit to doing certain things.
[9](#footnote-anchor-9)
And ideally, you would also think about other opportunities that faction A and faction B would have of benefiting each other, since you might also be providing evidence about those. Even more ideally, you might think about possible gains from trades that involve even more factions.
[10](#footnote-anchor-10)
Though the total effort that goes to each should perhaps still be allocated based on the number of people who support each set of values and who are sympathetic to ECL. Potentially adjusted by speculation about whether either set of values is underrepresented (among ECL-sympathizers) on Earth compared to the universe-at-large, in which case we should prioritize that set of values higher.
[11](#footnote-anchor-11)
It will be *the most* evidence for the actions of people in *exactly* my position. But this is not where most of my acausal influence will come from, since even a small amount of evidence across a sufficiently large number of actors will weigh more. The hypothesis that I’m putting forward here is that there might be some fairly broad class of actors which still share some key similarities with you, whose decisions your decisions provide more evidence about. And that your values might be (or be correlated with) one of the key similarities.
[12](#footnote-anchor-12)
Though I am personally somewhat sympathetic to both upside- and downside-focused values, so this doesn’t have a big impact on my all-things-considered view.
[13](#footnote-anchor-13)
As an example of someone with this view: [This facebook post](https://www.facebook.com/yudkowsky/posts/10152588738904228) by Eliezer Yudkowsky starts “I think that I care about things that would, in your native mental ontology, be imagined as having a sort of tangible red-experience or green-experience, and I prefer such beings not to have pain-experiences. Happiness I value highly is more complicated.” Yudkowsky has also written about the complexity and fragility of value elsewhere, e.g. [here](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile).
[14](#footnote-anchor-14)
Even if the aliens who went extinct shared our values, their choice to prioritize non-AI extinction risk less could still have been net-positive ex-ante. For example, they might have reallocated resources in a way that reduced AI takeover risk by 0.1% and increased non-AI extinction risk by 0.1001%. The added 0.0001% of x-risk might have been worth the benefit of leaving behind empty space rather than AI-controlled space in 0.1% of worlds.
[15](#footnote-anchor-15)
In particular, ECL suggests that we should discount benefits to aliens insofar as they on average correlate less strongly with us than the average civilizations-with-our-values do. (When making relevant decisions.)
[16](#footnote-anchor-16)
“Misaligned AI” refers to AI whose values are very different from what was intended by the evolved species that first created them. If a distant species has very different values from us, and successfully aligns AI systems that they create, I wouldn’t count those as “misaligned AIs”.
[17](#footnote-anchor-17)
In particular, ECL suggests that we should discount benefits to AI insofar as they correlate less strongly with us than actors-with-our-values do.
|
db9c53a3-7475-402d-b979-367890ba0e4c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Does the Utility Function Halt?
Suppose, for a moment, that somebody has written the Utility Function. It takes, as its input, some Universe State, runs it through a Morality Modeling Language, and outputs a number indicating the desirability of that state relative to some baseline, and more importantly, other Universe States which we might care to compare it to.
Can I feed the Utility Function the state of my computer right now, as it is executing a program I have written? And is a universe in which my program halts superior to one in which my program wastes energy executing an endless loop?
If you're inclined to argue that's not what the Utility Function is supposed to be evaluating, I have to ask what, exactly, it -is- supposed to be evaluating? We can reframe the question in terms of the series of keys I press as I write the program, if that is an easier problem to solve than what my computer is going to do.
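To make the type I have in mind concrete, here is a minimal sketch (hypothetical names throughout; the point is just where the halting problem bites):

```python
# Hypothetical shape of the Utility Function under discussion -- a sketch,
# not a proposal. UniverseState is whatever encoding of a world-state you like.
from typing import Callable

UniverseState = bytes
UtilityFunction = Callable[[UniverseState], float]

def prefers_halting(u: UtilityFunction,
                    looping_world: UniverseState,
                    halting_world: UniverseState) -> bool:
    """Does u rank the world where my program halts above the looping one?"""
    return u(halting_world) > u(looping_world)

# The snag: for u to evaluate "my computer right now, mid-execution", it must
# in effect decide whether my program eventually halts -- and no total
# computable function decides halting for arbitrary programs.
```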
|
407f6d2a-8a9f-4fa3-9cb1-487861fb1fde
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Canberra: Intro to Solomonoff induction
Discussion article for the meetup : Canberra: Intro to Solomonoff induction
WHEN: 24 April 2015 06:00:00PM (+1000)
WHERE: 108 North Road, Acton
Assume we are walking through the world and see a bunch of objects. Some of these objects are ravens, and all of the ravens turn out to be black. So we start entertaining the hypothesis that 'all ravens are black'. But how can we believe in this hypothesis? It talks about an infinite number of ravens, almost all of which we haven't seen!
What we need is a method of induction, generalizing a finite number of examples into a universal rule. It has been claimed that Solomonoff induction is the best method out there. Is that true? Does that mean all scientists should use Solomonoff induction? How does it work? And what can it do for me?
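(For a tiny taste of the flavor, here is a toy simplicity-weighted prior over two hypotheses; real Solomonoff induction sums over all programs for a universal Turing machine and is uncomputable, and the bit-lengths below are made up:)

```python
# Toy simplicity-weighted induction in the spirit of Solomonoff.
# Description lengths (in bits) are invented for illustration.
hypotheses = {
    "all ravens are black": 10,
    "all ravens are black except raven number 1,000,001": 40,
}

weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}  # prior ~ 2^-length
total = sum(weights.values())
for h, w in weights.items():
    print(f"{h}: {w / total:.6f}")  # the simpler rule dominates
```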
Jan will explain these and related questions giving a brief tour from probability theory to the universally intelligent agent AIXI. No prior knowledge about math is required. As always, vegan snacks will be provided.
General meetup info:
* If you use Facebook, please join our group.
* Structured meetups are (usually) held on the second Saturday and fourth Friday of each month from 6 pm until late in the CSIT building, room N101.
|
8c3b3abe-6ee0-4b2a-88ae-485a405e6969
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
On Ensuring that Intelligent Machines Are Well-Behaved
|
14e869d7-a4cc-45f5-9cae-aafcc759671d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
12 career-related questions that may (or may not) be helpful for people interested in alignment research
Epistemic status: Some people tell me that these kinds of questions are helpful sometimes.
At EAGxBerkeley, people interested in alignment research often asked me for career advice. I am not a technical alignment researcher, but some people claim they find my style helpful (my “style” usually is heavy on open-ended questions, reflecting things back at people, and noticing things that people aren’t considering).
I noticed a few questions that came up frequently in my 1-1s. Here are some examples:
On Plans
1. What are your transformative AI timelines (and to what extent do your plans currently make sense given your timelines?)
1. Example: If you put a ~50% chance of TAI within the next 10-15 years, does your current plan let you contribute in time? If your current plan involves gaining credentials for 5-10 years, how much time does that leave you to contribute? Have you considered alternatives that involve spending a greater proportion of remaining time on direct work?
2. Forget about your current plan. Think about the goal you’re trying to achieve. If you work backwards from the goal, what are the most efficient ways to achieve it? What steps will actually make you likely to achieve your goal?
3. Is there any way you can replace your current plan with one that’s 10X more ambitious? What would a desirable tail outcome look like?
4. Is there any way you can take your current plan and achieve it 10X faster?
5. Are there any options outside of the “default path” that you might be neglecting to consider? Are there any options that are unconventional or atypical that you might want to consider more carefully? (e.g., taking a gap year, traveling to AIS hubs)
1. Note of course that sometimes the “default path” is actually the best option. I ask this question because I think people generally benefit from carefully considering options, and “weird” options are often the options that people have most neglected to consider.
6. Have you considered potential dow
|
14789b20-1640-4d5d-82fa-8776e68ce166
|
StampyAI/alignment-research-dataset/special_docs
|
Other
|
Rational use of episodic and working memory: A normative account of prospective memory.
RATIONAL USE OF EPISODIC AND WORKING MEMORY:
A NORMATIVE ACCOUNT OF PROSPECTIVE MEMORY

A PREPRINT
Ida Momennejad (Department of Biomedical Engineering, Columbia University) ida.m@columbia.edu
Kenneth A. Norman, Jonathan Cohen (Department of Psychology, Princeton Neuroscience Institute, Princeton University) knorman@princeton.edu, jdc@princeton.edu
Satinder Singh (Computer Science & Engineering Division, Department of Elec. Eng. & Computer Science, University of Michigan) baveja@umich.edu
Richard L. Lewis (Department of Psychology, Weinberg Institute for Cognitive Science, University of Michigan) rickl@umich.edu

March 17, 2019
ABSTRACT
Humans often simultaneously pursue multiple plans at different time scales. The successful realization of non-immediate plans (e.g., post package after work) requires keeping track of a future plan while accomplishing other intermediate tasks (e.g., write a paper), a capacity known as prospective memory. This capacity requires the integration of noisy evidence from perceptual input with evidence from short-term working memory (WM) and longer-term or episodic memory (LTM/EM). Here we formulate a set of dual-task problems in empirical studies of prospective memory as problems of computational rationality, and ask how a rational model should exploit noisy perception and memory to maximize payoffs. The model combines reinforcement learning (optimal action selection) with evidence accumulation (optimal inference) in order to derive good decision parameters for optimal task performance (i.e., performing an ongoing task while monitoring for a cue that triggers executing a second prospective task). We compare model behavior to key accuracy and reaction time phenomena in human performance. Thus, we offer a normative approach to theorizing and modeling these phenomena without assumptions about mechanisms of attention or retrieval. This approach can be extended to study meta-parameters governing the boundedly rational use of memory in planned action in health, as well as compensatory mnemonic strategies that may be rational responses to disturbances of these mechanisms in neuropsychiatric disorders.
1 Introduction
The execution of cognitive control in the service of goal-directed behavior is intimately bound to the use of memory. The immediate pursuit of a goal is presumed to rely on control representations actively maintained in working memory (Anderson, 1983; Miller and Cohen, 2001). In contrast, pursuit of a future goal requires that it be encoded in longer-term storage, and retrieved at the appropriate time (Cohen et al., 1996; Gollwitzer and Brandstätter, 1997). This capacity is often referred to as prospective memory (PM). Prospective memory experiments are typically designed to study the interaction between performing an ongoing task (e.g., a day’s work) and the flexible encoding, retrieval, and realization of a prospective task (e.g., post a package). In event-based prospective memory, the prospective task must be executed as soon as the relevant target is perceived (e.g., the post office) (Einstein and McDaniel, 2005; Einstein et al., 2005). In time-based prospective memory (e.g., take out the cookies from the oven in 30 minutes), internal or external time keeping determines ’when’ to execute the future plan (Momennejad and Haynes, 2012).
PM success relies on rational use of memory and attentional processes. For instance, if you plan to post a package after a workday, you cannot rely on actively maintaining and rehearsing this plan in working memory all day, as this would interfere with your performance at work. At the same time, you must ensure that it is remembered in the face of the day’s workload, and reliably retrieved at the end of the day. The multiprocess view of PM (Einstein and McDaniel, 2005; Einstein et al., 2005) suggests that the successful realization of prospective memory tasks relies on a dynamic interaction between two categories of processes: effortful and controlled attentional processes to monitor for targets in the environment (e.g., active monitoring of the environment and mental rehearsal of the planned action), and spontaneous retrieval processes (i.e., spontaneously remembering to execute the plan once the target, say post office, is detected).
Here, we adopt the multiprocess framework and propose that the functioning of prospective memory relies on the
adaptive use of working memory (WM), long-term/episodic memory (EM), and perception. In this model, top-down or
monitoring processes of the multiprocess framework are operationalized as strategies that rely on working memory,
while bottom-up or spontaneous retrieval processes are strategies that rely on long-term or episodic memory. In our
account the weighing of the two sources of memory in the service of action selection is not hand-tuned, but derived
by a normative model seeking to simultaneously maximize reward in both the ongoing and prospective tasks. This is
accomplished by selecting actions according to an optimal policy (reinforcement learning) conditioned on a Bayesian
integration of perception and memory (inference).
It is important to highlight that our model is not a mechanistic process model of PM; rather, our contribution is to
show that key findings in the prospective memory literature, and more broadly in multi-tasking, can be explained by
considering how to normatively reconcile multiple sources of memory and perceptual evidence. Indeed, the novel and
perhaps surprising implication of our model is that a fairly detailed account of human behavioral performance (accuracy
and reaction times) is possible using quite abstract assumptions about memory and perception, and an assumption of
rationality (Lewis et al., 2014).
The model makes minimal, abstract assumptions about the properties of three noisy component subsystems: WM,
LTM/EM, and perception. We explore the implications of these assumptions for prospective memory by asking how a
rational model would use these components to adaptively respond to the demands of a variety of specific experimental
task settings. At the heart of the model is the optimal integration of noisy information from the three components over
time, and the optimal selection of external task response actions and internal memory actions to maximize rewards.
Many of the interesting empirical signatures of prospective memory concern effects on both accuracy and speed of
dual-task performance, so the model must incorporate processing dynamics. We model processing dynamics (in discrete
time) as the sequential accumulation and optimal integration of evidence from perception and memory, and the optimal
selection of actions that either wait and sample more evidence, respond to the ongoing task, or respond to a possible
prospective memory target. As we will see below, the formal statement of what the model should do is mathematically
simple. But the multiple sources of evidence (two memory stores plus perceptual sampling) and multiple possible
actions in the dual-task setting (four) give rise to the following significant technical challenges for computing the
optimal policy. First, even the relatively simple assumptions about memory and perception noise that we adopt here
preclude an analytic solution to the optimal Bayesian integration of perception and memory. Second, the dual task
structure requiring a selection among three task actions (plus a wait-and-continue-sampling action) precludes the use of
classic sampling models such as drift-diffusion or the Sequential Probability Ratio Test, which are formulated for the
selection among two choices conditioned on one decision variable.
We address these technical challenges by using a Monte Carlo technique to provide good approximations of the Bayesian
integration, and by using reinforcement learning (RL) algorithms to compute a good approximation of the optimal
action selection policy. As such, we do not hand-tune control parameters, wait times, or thresholds to fit empirical data;
rather, they emerge from the RL policy optimization, and without any explicit thresholds on decision variables. The
optimality assumptions provide a powerful analytic basis for reducing the space of considered strategies and methods of
noisy information integration. Although the mechanistic assumptions about component subsystems are very simple, the
model generates a rich set of predictions for error rates and reaction times that provide good accounts of key empirical
phenomena in the prospective memory literature.
The paper has the following structure. In §2 we review key aspects of the experimental literature, and identify
experimental paradigms that express a canonical set of phenomena, summarized in §2.2. In §3 we formally specify the
normative model. We then report in §4 the results of simulations that test the model's account of the key behavioral phenomena. We discuss the nature of the explanations that the model provides, novel predictions by the model that remain to be empirically tested, as well as aspects of the empirical data which are not well fit by the model.
Figure 1: An example of an event-based prospective memory (PM) experimental paradigm (Einstein et al., 2005).
The paradigm consists of a sequence of trials which have an ongoing task and a prospective memory target (PM target). A
given trial begins with an instruction, indicating the prospective memory target, which was a different word or syllable
on different trials. Each trial consists of a sequence of ongoing task (OG) stimuli requiring a category match judgment
between two words on the screen. The PM target in the example instruction is the syllable tor, which here appears in
the word tortoise in the fourth stimulus of the trial. The fourth stimulus thus requires a PM response. Here we have
indicated correct responses to each stimulus below it for clarification. On half the trials the instruction emphasized the high
importance of prospective memory accuracy (e.g., It is very important that you consider your main goal in this section to
find absolutely every occurrence of the target item.). On the other half, the PM instruction was of moderate importance: we
have a secondary interest in your ability to remember to perform an action in the future. On no-PM trials, participants are
instructed to perform only the ongoing task, setting a behavioral baseline for comparing reaction times to the ongoing task
across conditions.
We conclude with a summary of the model's strengths and caveats, and a discussion of the prospects for applying the modeling method to other domains that involve the adaptive integration of perception with multiple memory systems.
2 An Empirical Paradigm for Prospective Memory and Key Findings
Many prospective memory experiments are variations of an event-based experimental paradigm originally proposed by
Einstein and colleagues (Brewer et al., 2011; Einstein et al., 2005; Scullin et al., 2012). In this section we first describe
the experimental design, then summarize key phenomena of interest emerging in human performance on these tasks.
2.1 Event-based Prospective Memory: A Dual-Tasking Paradigm
In event-based prospective memory, the occurrence of an event (e.g., spotting the post office) triggers a switch from an
ongoing task (e.g., talking on the phone) to a planned action (e.g., posting a package). There are two components to
event-based experimental paradigms: an ongoing task (OG) that demands the majority of responses, and a prospective
memory task (PM) that demands a response only when a relatively infrequent target probe or event occurs, referred to
as the PM target.
Figure 1 illustrates an example of this canonical experimental paradigm for testing prospective memory. As in the original paradigm, the ongoing task is to judge whether the word on the right matches the category presented on the left (e.g., on
trial 1, cat is an ANIMAL, a match, hence the correct response is yes). Participants performed this task on its own to
set the baseline performance on the ongoing task alone (baseline OG, or No-PM trials). In the prospective memory
condition, participants were required to give a prospective memory response by pressing another button whenever the
PM target (e.g., the syllable ’tor’ in this case) appeared on screen (e.g., on the 4th trial, ‘tortoise’ is a PM target). The
instructions indicated the level of priority or emphasis of the PM task relative to the ongoing task. For instance, high
PM emphasis was instructed as follows: “It is very important that you consider your main goal in this section to find
absolutely every occurrence of the target item”. In the high PM emphasis condition participants were to prioritize their
attentional and memory resources to increase PM performance, even at the cost of ongoing task performance. Moderate
emphasis instructions indicated a more relaxed prioritization.
Next we review key findings from experiments that use variations of this experimental paradigm.
2.2 Key Phenomena in Event-based Prospective Memory
Prospective memory effects are generally measured and reported in terms of the influence of experimental conditions
on two behavioral measures: accuracy of task performance and response times. Many PM studies have used variations
of the experimental PM paradigm described above, so we refer to this paradigm in the reported phenomena below.
Studies commonly report an OG cost measure, which is the extent to which responses to the ongoing task are slowed as
a function of PM-related experimental manipulations. Prospective memory performance itself is often reported in terms
of PM hit rate, which measures the proportion of times a participant successfully detects and responds to a PM target.
Below, we review four key classes of PM phenomena consistently reported in the literature. These key phenomena
serve as the target for the behavior of the model described in the next section.
Effects of target focality. A PM target is focal when the OG and PM tasks both require attention to similar features
of stimuli. For instance, a PM target is focal if the ongoing task is a word match judgment and the prospective task is a
particular word (e.g., ’tortoise’). An example of non-focal PM is when the OG task is to judge word match but the PM
task is to respond to the font color or a particular syllable (e.g., red font or any word with the syllable ’tor’).
The main effect of focality is that participants are more successful at detecting focal PM targets, and focal PM responses
incur lower costs on the OG task (i.e., a smaller increase in ongoing task response times). For instance, in the experiment
described in Figure 1, it is easier to detect a PM target if it were tortoise as opposed to tor, since the full word tortoise is
already being processed by the OG task. The focality effect interacts with the effect of emphasis, which we describe
next (see Figure 2).
Effects of relative task emphasis and interaction with focality. Emphasis refers to the relative priority that the
participant is instructed to place on the PM vs. OG task. High emphasis indicates that the highest priority is to increase
PM hit rate. For example, in the Einstein et al (2005) (Einstein and McDaniel, 2005) study, on half the trials the
instruction emphasized the high importance of PM hit rate: it is very important that you consider your main goal in
this section to find absolutely every occurrence of the target item. On the other half, PM hit rate was given moderate
importance: we have a secondary interest in your ability to remember to perform an action in the future.
The main effect of emphasis is, sensibly, improved performance on the PM task as measured by PM hit rate. However,
the instructional emphasis manipulation interacts with the focality effect (see Figure 2, lower panel): Emphasis has
the largest effect on OG costs given non-focal targets (i.e., when PM targets are more difficult to detect because they
involve detecting different features than those processed by the OG task). PM performance is at ceiling in the focal PM
condition, when the two tasks share features.
In §4.2, we model emphasis by simply changing the ratio of reward associated with PM vs. OG task performance.
Effects of prospective memory load. The load or difficulty of the PM task has been empirically manipulated in a
number of ways. In general, increasing PM load incurs greater costs on the OG task, measured by longer reaction times
to ongoing task stimuli. For example, Einstein and colleagues showed that increasing the number of PM targets from 1
to 6 is associated with an increase in OG reaction times (Einstein et al., 2005) (Figure 4, left). Similarly, using a more
difficult PM task (e.g., by increasing its working memory demand) also increases costs to OG reaction times (Meier
and Zimmermann, 2015), and is shown to involve more anterior prefrontal cortex patterns of multivoxel fMRI activity
(Momennejad and Haynes, 2013).
In §4.1 we model PM load in two ways: (a) by increasing the number of PM targets, and (b) by increasing the number
of past, but no longer active, PM targets.
Individual differences and WM capacity. A number of studies have revealed systematic individual differences in
prospective memory tasks. These differences are thought to be related to individual differences in working memory
or executive function capacity. Brewer and Marsh (2010) showed that individuals with lower
measured WM capacity express a higher cost of PM on OG task response times in non-focal conditions. Furthermore,
populations associated with lower WM or executive function capacities such as adolescents and older adults are more
likely to benefit from strategies involving episodic memory, such as episodic future simulation, in PM performance
(Altgassen et al., 2017, 2015). Individuals have also been shown to benefit from episodic encoding strategies such as
imagery or implementation intentions (Brewer et al., 2011; Gollwitzer and Brandstätter, 1997).
In §4.3 we model individual difference effects by varying the process noise in the model, but retaining the assumptions
of optimal control and evidence integration.
Strategy selection effects. As discussed in the Introduction, the multiprocess framework (Einstein et al., 2005;
Einstein and McDaniel, 2005) suggests that there are two primary strategies participants use to perform the PM task. The
first strategy involves active attentional monitoring for PM targets (presumably by maintaining this information in WM).
[Figure 2 (image): left panel, ongoing task reaction time (ms) vs. PM task emphasis (Moderate, High) by focality (Focal, Nonfocal); right panel, proportion of PM targets detected vs. PM task emphasis by focality. Data from Einstein et al. (2005), Exp. 2.]
Figure 2: The interacting effects of focality andemphasis on ongoing task RTs and prospective memory hit-rate
reported in Einstein et al. (2005). Performing an OG task takes longer when more emphasis is placed on the PM task.
Left: A non-focal PM condition, where it is more difficult to detect the PM target, exerts a higher RT cost on the OG task.
In contrast, a focal PM condition, where the PM target is more similar to OG task stimuli and easier to detect, exerts less of an OG task cost. OG task response times increase in both focal and non-focal conditions when high emphasis is placed on the PM task. Right: Participants exhibit high accuracy on focal prospective memory regardless of the emphasis condition.
However, in non-focal conditions, emphasis has a large effect on the accuracy of PM responses.
The second strategy involves encoding the PM target in long-term memory, and relying on spontaneous associative
recall of the PM task when the PM target appears. Several studies have sought to explicitly manipulate the choice
between these strategies.
A series of studies controlled for strategy by encouraging participants to use episodic future simulation, imagery
(Brewer and Marsh, 2010; Brewer et al., 2011), or an implementation intention strategy (Chen et al., 2015; Gollwitzer,
1990; McDaniel et al., 2008; McFarland and Glisky, 2012), in which they wrote down multiple times that upon seeing
a PM target they would switch to the PM task. Varieties of episodic future simulation improved PM performance for
non-focal PM. Crucially, the effects were eliminated if the subjects were not given a specific context for the associations.
Furthermore, a recent study (Lewis-Peacock et al., 2016) directly manipulated the use of WM or episodic strategies in
PM by either increasing proactive interference in episodic memory (hence increasing the benefits of a WM strategy)
or increasing WM load by changing the ongoing task from 1-back to n-back (hence increasing the benefits of an EM
strategy). Taken together, these studies help reveal the importance of the trade-off between WM and EM encoding
strategies in PM performance.
The model’s strategy depends on the weight it gives to samples from WM and EM, and the optimal strategy is learned
by the model given the task circumstances (e.g., high or low WM load) and the noise in the memory and perceptual
samples.
3 A Rational Model of Episodic and Working Memory Use in Prospective Memory
We propose a simple rational model of the integrated use of noisy perception, working memory (WM) and long-
term/episodic memory (LTM/EM) in service of prospective and ongoing tasks – as captured by the canonical dual-task
paradigm described in Section 2.1. Our approach formalizes the multiprocess framework of PM proposed by Einstein
and colleagues (Einstein et al., 2005) by specifying abstract computational properties of the LTM/EM, WM, and
perceptual components, and asking how a model should use these components in the PM dual-task setting.
The model is rational (or normative) because the integration of evidence from the three components conforms to
correct Bayesian inference, and because the policy (or strategy) parameters governing task performance are selected to
maximize the joint reward on the OG and PM tasks. We also refer to it as computationally rational to emphasize that
the model derives the best possible use of posited bounded computational resources (Howes et al., 2009); it is thus an application of bounded optimality (Russell and Subramanian, 1995; Lewis et al., 2014). Once the (approximately)
optimal policy parameters are computed and fixed (through a reinforcement learning method described below), the
model provides detailed behavioral predictions, including response times and accuracies for both tasks. We explore
these behavioral implications of the model in Section 4, compare them to human behavior, and discuss how the model
explains the key phenomena summarized above.
3.1 Overview of the model’s processing in a single trial of the canonical dual-task paradigm
Before considering the mathematical details, it is useful to have a qualitative overview of how the three model
components interact to perform one stimulus response in one trial of the event-based PM dual-task. We assume here that
the ongoing (OG) task is a binary categorization task such as lexical decision, and the prospective (PM) task requires
monitoring for a specific feature of the word (e.g., a syllable). (Below we consider other variants, such as monitoring
for a set of words). We refer to the presented item as the probe item.
1. The incentive reward structure of the task is represented as a payoff matrix. This payoff matrix indicates gains and losses for making correct (or incorrect) responses to the OG task, for failing to make a response before a response deadline, and for correctly detecting (or missing) the PM target. These gains and losses can be used to establish some experimental conditions. For instance, PM emphasis can be determined by, or indicated in the payoff table as, the ratio of payoffs or gains for correct PM vs. correct OG performance. Thus, we can increase PM task emphasis by assigning a higher gain, or reward in the payoff table, for PM performance relative to OG performance.
2. Each stimulus item is represented as a feature pair: one feature is relevant for the OG task and one feature is relevant for the PM task. Feature values are drawn from a fixed set of discrete values. The OG task binary classification partitions the values into two subsets, one subset requiring a “Yes” response and one requiring a “No” response. The PM task requires matching the PM-relevant feature to a previously presented PM target feature. The correlation between the two feature sets determines the focality. Thus, we can increase the degree of focality by increasing the correlation between the OG-relevant and PM-relevant features.
3. The model has a noisy encoding of the PM target(s) stored in episodic memory (EM). Each target encoding is represented as a discrete feature value, which may be errorful with some small probability, and a discrete context code (indicating the PM trial), which may also be errorful with some small probability. In some model variants there may also be a corresponding encoding in working memory (WM). Note that here we manipulate PM load or PM difficulty in two ways: (a) by increasing the number of PM targets that are monitored for simultaneously, or (b) by increasing the number of past PM targets that are no longer relevant for the current trial.
4. The contents of EM (and possibly WM) and the task structure impute a prior distribution over possible stimulus probe items, in advance of any perception of the probe.
5. In this state, the model perceives the probe over discrete time steps. On each step, a noisy sample is drawn of the true pair of feature values representing the probe. Each sample step has a duration sampled from a gamma distribution.
6. On each noisy sample, the model updates a posterior distribution over possible target probes, and uses this distribution to compute a posterior over the three response hypotheses: the probability that the PM target-yes response is correct, the probability that the OG-yes response is correct, and the probability that the OG-no response is correct. This posterior and the time remaining until the deadline capture all the information the model needs to decide what to do next.
7. The noise in the sequential sampling process is an abstraction of multiple possible sources of process noise, including perceptual noise and noise in the integration of perceptual evidence with memory. We model individual WM capacity by changing this process noise level. (Note that changing individual WM capacity is different from increasing WM load, e.g., via increasing the number of PM targets, as described above.)
8. Given the posterior over correct responses and the time remaining until the deadline, the model chooses one of four possible actions: (1) obtain another noisy sample of probe features, (2) respond yes to the PM task (indicating that the model has detected the PM target), (3) respond yes to the OG task, or (4) respond no to the OG task. The model makes this choice by selecting the action that maximizes expected payoff (see the code sketch following this list). In an alternative task variant, the model must respond to the OG task even when a PM target is present, and so the PM-yes action is replaced with two distinct actions, corresponding to PM-yes-OG-yes and PM-yes-OG-no.
9. The sum of the durations of probe sample steps before the choice is made constitutes a response time. The response type may be classified as a correct or incorrect OG response, a correct PM target detection, a PM target miss, a PM target false alarm, or a missed deadline.
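To make these steps concrete, the following minimal Python sketch walks through a single probe (our own illustration, not the authors' released code; the uniform priors, the single shared penalty value, and the placeholder q_sample function standing in for the learned value of sampling are simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 10                 # p = o = 10 feature values per task (Table 1)
P_ROC = 0.4                     # process/sample noise (error rate)
DEADLINE = 100.0
PAYOFF = {"PM_yes": 10.0, "OG_yes": 1.0, "OG_no": 1.0}  # gains (toy values)
PENALTY = -1.0                  # simplified single penalty for any wrong response

def noisy_sample(true_feature):
    """With probability P_ROC, return a uniformly drawn wrong feature value."""
    if rng.random() < P_ROC:
        return rng.choice([f for f in range(N_FEATURES) if f != true_feature])
    return true_feature

def run_probe(true_pm, true_og, pm_target, og_yes_set, q_sample):
    """Process one probe: respond once responding beats sampling in expected value."""
    post_pm = np.full(N_FEATURES, 1.0 / N_FEATURES)   # uniform stand-ins for the
    post_og = np.full(N_FEATURES, 1.0 / N_FEATURES)   # memory-derived priors
    t = 0.0
    while t < DEADLINE:
        # Sample duration: gamma with mean 25 and CV 0.3 (Table 1).
        t += rng.gamma(shape=1 / 0.3 ** 2, scale=25.0 * 0.3 ** 2)
        for post, truth in ((post_pm, true_pm), (post_og, true_og)):
            s = noisy_sample(truth)
            like = np.full(N_FEATURES, P_ROC / (N_FEATURES - 1))
            like[s] = 1.0 - P_ROC
            post *= like                              # incremental Bayes update
            post /= post.sum()
        # Posteriors over the three responses (steps 6 and 8 of the overview).
        p_pm = post_pm[pm_target]
        p_og_yes = (1.0 - p_pm) * post_og[sorted(og_yes_set)].sum()
        p_og_no = (1.0 - p_pm) - p_og_yes
        ev = {"PM_yes": p_pm * PAYOFF["PM_yes"] + (1 - p_pm) * PENALTY,
              "OG_yes": p_og_yes * PAYOFF["OG_yes"] + (1 - p_og_yes) * PENALTY,
              "OG_no": p_og_no * PAYOFF["OG_no"] + (1 - p_og_no) * PENALTY}
        best = max(ev, key=ev.get)
        if ev[best] >= q_sample(t, p_pm, p_og_yes):
            return best, t                            # respond now
    return "missed-deadline", t

# q_sample stands in for the learned value of taking another sample (Section 3.5).
print(run_probe(3, 7, pm_target=3, og_yes_set={5, 6, 7},
                q_sample=lambda t, ppm, pog: 0.5))
```

In the full model, the priors come from the noisy memory encodings (Section 3.3) and the value of sampling again is the Q-value learned as described in Section 3.5.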
In short, the model combines optimal evidence integration with optimal stopping and optimal response selection. Once
memory and process noise parameters are fixed, we may obtain predictions of response times and accuracies on both
PM and OG tasks. We do so under different task and architecture manipulations (e.g., PM emphasis, focality, PM
difficulty, PM load, WM differences) by simulating the model many times. The primary technical challenges concern
computing the prior distribution for probe items given the noisy memory encodings, and computing the policy for
optimal stopping given the posterior and time remaining. We next present the formal model and describe how these two
challenges are met.
3.2 Task environment and reward
Types of PM and OG tasks. There are two simultaneous tasks: the prospective memory (PM) task and the ongoing task (OG). Probe items are pairs of PM and OG features $\langle P^{PM}, P^{OG} \rangle$ (with $P^{PM} \in I^{PM}$ and $P^{OG} \in I^{OG}$) presented to the model for response before a deadline has passed. There are $p$ possible PM features $I^{PM}_1, \ldots, I^{PM}_p$ comprising the set $I^{PM}$, and $o$ possible OG features $I^{OG}_1, \ldots, I^{OG}_o$ comprising the set $I^{OG}$. In our simulations, $p = o = 10$.
There are two kinds of PM tasks: detecting that a presented probe is the most recently presented target (where there may be a history of previous targets); or detecting that a presented probe is in a set of targets (there is no history of previous sets). Let $T^{PM} \subseteq I^{PM}$ be the current set of PM targets.
There are two kinds of OG tasks: a binary discrimination (yes/no) of a presented probe (requiring no short-term
memory); or detecting that a presented probe is the same as the probe presented 1 or 2 probes back (a working memory
“N-back” task).
For the OG discrimination task, let $I^{OG}_{yes} \subseteq I^{OG}$ be the items that should elicit a yes response, and let $I^{OG}_{no} \subseteq I^{OG}$ be the items that should elicit a no. $I^{OG}_{yes}$ and $I^{OG}_{no}$ form a partition of $I^{OG}$.
There is a prior distribution of probe pairs that allows for a correlation $Foc$ between OG and PM items/features. We discuss this below as a way to model focality.
Probe responses and task payoff. Consider first the case where the OG task is the discrimination task. Events of interest concerning the presented probes are:

1. $PM_{yes}$: $P^{PM} \in T^{PM}$ (probe is a/the PM target).
2. $PM_{no}$: $P^{PM} \notin T^{PM}$ (probe is not a PM target).
3. $OG_{yes}$: $P^{OG} \in I^{OG}_{yes}$.
4. $OG_{no}$: $P^{OG} \in I^{OG}_{no}$.

So for a given $T^{PM}$, a probe $\langle P^{PM}, P^{OG} \rangle$ determines one of the three types of correct responses:

1. Respond-$PM_{yes}$ should be made when $PM_{yes}$, regardless of OG.
2. Respond-$OG_{yes}$ should be made when $PM_{no}$ and $OG_{yes}$.
3. Respond-$OG_{no}$ should be made when $PM_{no}$ and $OG_{no}$.

The fourth response is “no response”, which happens when the deadline is missed. There is a $4 \times 3$ task payoff matrix (rows $i$, columns $j$) that indicates the payoff for making the $i$th response when response $j$ is correct.
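As a toy instantiation of this response and payoff structure (the specific payoff entries are illustrative assumptions consistent with Table 1 below, not the paper's exact matrix):

```python
import numpy as np

# Rows: the four possible responses (including "no response" at the deadline);
# columns: which of the three responses is actually correct.
RESPONSES = ["Respond-PM_yes", "Respond-OG_yes", "Respond-OG_no", "no-response"]
CORRECT = ["PM_yes-correct", "OG_yes-correct", "OG_no-correct"]

# Toy payoff matrix: +10/-10 for PM hits/false alarms, +1/-1 for OG responses,
# and a miss penalty when the deadline passes (values are assumptions).
payoff = np.array([
    [10.0, -10.0, -10.0],   # Respond-PM_yes
    [-10.0,  1.0,  -1.0],   # Respond-OG_yes
    [-10.0, -1.0,   1.0],   # Respond-OG_no
    [-1.0,  -1.0,  -1.0],   # no response before the deadline
])

def correct_response(p_pm, p_og, pm_targets, og_yes_set):
    """Map a probe <P_PM, P_OG> to the index of the correct response."""
    if p_pm in pm_targets:
        return 0                      # PM_yes, regardless of OG status
    return 1 if p_og in og_yes_set else 2

i = correct_response(3, 7, pm_targets={3}, og_yes_set={5, 6, 7})
print(RESPONSES[i], "earns", payoff[i, i])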
3.3 Noisy Encodings in Perception, EM, and WM
There are three cognitive components in the model: an episodic memory (LTM/EM) store, a strategically deployable
WM, and noisy perception (Figure 3).
The LTM/EM encodings. EM is a set of elements that are noisy encodings of current or past PM targets in long
term or episodic memory. Each encoding consists of a PM feature value and a context feature value. The PM feature
value may be noisy and erroneous: with some small probability, the feature is drawn from other distractor feature values
rather than the correct feature value. Context codes represent the trial number of past PM targets. Context codes may
also be erroneous: with some small probability, a context code is drawn from context codes for other trials; we assume
a similarity confusion matrix for trial codes such that nearby trials are more confusable.
WM encodings. WM is a set of elements that are noisy encodings of OG task stimuli and/or the current PM target.
When the OG task is N-back, as in (Lewis-Peacock et al., 2016), WM contains the past N-back (1 or 2) stimuli and
Figure 3: Model of rational WM-EM recruitment in a dual-task event-based prospective memory paradigm. The
model components consist of a long-term episodic memory (EM/LTM) which stores a noisy encoding of current and past
PM targets, and a working memory (WM) in which the ongoing stimulus and PM target are encoded with some noise.
Parallel accumulators with no bounds draw noisy perceptual samples of the OG and PM features of the probe stimulus,
incrementally updating posteriors that track the status of the probe for the PM task and the status of the probe for the
OG task. For every state, the set of all possible actions includes obtaining another sample (waiting longer), giving a PM
yes response, an OG yes response (e.g., category match or 1-back match), or an OG no response. The optimal policy is
computed using Q-learning (Watkins and Dayan, 1992), which selects the action with the highest expected value. At
each time point the optimal policy determines whether to draw another sample and risk going past deadline, or to make a
response. Bayesian integration weighs information in WM and LTM/EM as a function of the uncertainty of the memory
encodings.
the current PM target (depending on task demands and strategic choices, as described below). While we do not model
experiments with N-back in the present manuscript, the model is theoretically rich enough to be applied to these
studies as well.
Context codes in WM can, for instance, represent the N-back position of an item (e.g., 1 back or 2 back). In the
N-back=1 condition, the most recent N-back probe is kept in WM. In the N-back=2 condition, the two most recent
probes are kept in WM. If a WM encoding strategy is used for PM, the PM target encoding is also present in WM. The
item and context codes are not confusable across PM targets and N-back stimuli, but increasing the number of items
held in WM is assumed to increase the item and context noise.
Perceptual samples and processing noise. The true probe identity is a pair of feature values $\langle P^{PM}, P^{OG} \rangle$. When the model perceives the probe, with some probability $P_{roc}$ the PM feature of the perceptual sample is picked uniformly from the non-true values of the PM feature; otherwise it is the true feature. The same is done independently for the OG feature of the stimulus. We refer to this noise as processing noise (rather than perceptual noise) to emphasize that it is an abstraction over multiple sources of noise, including perceptual or attentional noise and noise in the evidence integration process.
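The following sketch shows one way such noisy encodings could be generated (our construction; the Gaussian context-confusion kernel and the specific noise rates are assumptions standing in for the abstract properties above):

```python
import numpy as np

rng = np.random.default_rng(1)
N_FEATURES, EM_NOISE, CONTEXT_NOISE = 10, 0.002, 0.01

def encode_feature(true_feature, noise):
    """With probability `noise`, store a uniformly drawn distractor feature."""
    if rng.random() < noise:
        return rng.choice([f for f in range(N_FEATURES) if f != true_feature])
    return true_feature

def encode_context(true_trial, n_trials, noise, width=1.0):
    """Noisy trial code: nearby trials are more confusable (Gaussian kernel)."""
    if rng.random() < noise:
        others = np.array([t for t in range(n_trials) if t != true_trial])
        w = np.exp(-0.5 * ((others - true_trial) / width) ** 2)
        return rng.choice(others, p=w / w.sum())
    return true_trial

# EM holds one (feature, context) encoding per current or past PM target.
em = [(encode_feature(f, EM_NOISE), encode_context(t, 20, CONTEXT_NOISE))
      for t, f in enumerate([2, 9, 4])]     # targets from trials 0, 1, 2
print(em)
```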
3.4 Bayesian integration of noisy evidence in memory with noisy perceptual samples
Before any perceptual samples of the stimuli have been obtained, the noisy WM and EM memory encodings along with
information about the base rate of PM target occurrences are used to compute expectations: a prior distribution over
possible PM targets and OG stimuli. This prior distribution is updated as noisy perceptual samples are obtained.
From this distribution a posterior over three response hypotheses is computed: the posterior probability that the PM
target-yes response is correct, the posterior probability that the OG-yes response is correct, and the posterior probability
that the OG-no response is correct. More formally:
Desired response posteriors given samples. When the model arrives at a probe $\langle P^{PM}, P^{OG} \rangle$ with some memory $m$, it sequentially draws perceptual samples of the presented probe, integrates the evidence, and makes a response after some number of samples are drawn (or fails to respond if a deadline has passed) as follows.

Here a sequence of $k$ perceptual samples of the true PM feature $P^{PM}$ is denoted $s^{PM}_k = s^{PM}_1, s^{PM}_2, \ldots, s^{PM}_k$, and the sequence of perceptual samples of the true OG feature $P^{OG}$ as $s^{OG}_k = s^{OG}_1, s^{OG}_2, \ldots, s^{OG}_k$.
After $k$ samples, the model computes the posteriors for each of the three responses being the correct response:

$p(\text{Respond-}PM_{yes} \mid m, s^{PM}_k, s^{OG}_k)$ (1)

$p(\text{Respond-}OG_{yes} \mid m, s^{PM}_k, s^{OG}_k)$ (2)

$p(\text{Respond-}OG_{no} \mid m, s^{PM}_k, s^{OG}_k)$ (3)
Given these posteriors, the expected payoff of the three responses may be computed from the payoff matrix, and
compared to the expected value of obtaining another sample (how this value is estimated is described below).
The posteriors over the three responses may be computed as a function of the four probe matching cases of $(PM_{yes}, PM_{no}, OG_{yes}, OG_{no})$:

$p(\text{Respond-}PM_{yes} \mid m, s^{PM}, s^{OG}) = p(PM_{yes} \mid m, s^{PM}, s^{OG})$ (4)

$p(\text{Respond-}OG_{yes} \mid m, s^{PM}, s^{OG}) = p(PM_{no} \mid m, s^{PM}, s^{OG}) \, p(OG_{yes} \mid m, s^{PM}, s^{OG})$ (5)

$p(\text{Respond-}OG_{no} \mid m, s^{PM}, s^{OG}) = p(PM_{no} \mid m, s^{PM}, s^{OG}) \, p(OG_{no} \mid m, s^{PM}, s^{OG})$ (6)
These three equations capture the task instructions concerning responses. Next we describe how the probe-match
posterior probabilities are sequentially computed as each sample arrives.
Posterior update given perceptual samples. The perceptual samples are not conditionally independent given the abstract event types $(PM_{yes}, PM_{no}, OG_{yes}, OG_{no})$ because they depend on the specific presented probe pair. The samples are conditionally independent given a specific probe pair, because the noise associated with perceptual samples is uncorrelated across time. We exploit this conditional independence to allow for an incremental update of the posterior given each perceptual sample.

The specific probe pair is unknown to the model (hence the need for the perception). We handle the dependence of the posterior quantities in Equations (1)–(3) on the previous samples (and on the memory $m$) by updating a distribution over probe pairs.
Consider the update for $PM_{yes}$ given a perceptual sample $s^{PM}$:

$$p(PM_{yes} \mid m, s^{PM}) = \sum_{k=1}^{n} p(PM_{yes} \mid P^{PM} = I^{PM}_k, m) \, p(P^{PM} = I^{PM}_k \mid s^{PM}, m) \quad (7)$$

We must now compute the two terms in the product on the right-hand side. $p(PM_{yes} \mid P^{PM} = I^{PM}_k, m)$ is the probability of a PM target match, given that the presented probe is $I^{PM}_k$ and the model has memory $m$. Intuitively, each different state of the memory determines a different relationship between presented probe and desired response. This quantity is computed from a joint probability table estimated by large-scale Monte Carlo simulation.

The other term, $p(P^{PM} = I^{PM}_k \mid s^{PM}, m)$, is the posterior over $I^{PM}$ given the perceptual sample, and is computed with Bayes' rule:

$$p(P^{PM} = I^{PM}_k \mid s^{PM}, m) = \frac{p(s^{PM} \mid P^{PM} = I^{PM}_k, m) \, p(P^{PM} = I^{PM}_k, m)}{Z} \quad (8)$$

where $p(s^{PM} \mid P^{PM} = I^{PM}_k, m) = p(s^{PM} \mid P^{PM} = I^{PM}_k)$ is the perceptual noise model and $Z$ is the normalization term. The term $p(P^{PM} = I^{PM}_k, m)$ captures the dependence of the probe on the agent's noisy memory, which may include multiple (noisy) PM targets and WM encodings. It is not possible to derive this quantity analytically, but precise estimates may be computed from empirical frequencies obtained via large-scale Monte Carlo simulation.
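In code, the incremental update of Equations (7) and (8) for a stream of samples might look like the sketch below, where the vector p_match stands in for the Monte Carlo-estimated table $p(PM_{yes} \mid P^{PM} = I^{PM}_k, m)$ (a toy deterministic table is used here):

```python
import numpy as np

rng = np.random.default_rng(2)
N, P_ROC = 10, 0.4

def likelihood(sample):
    """p(s | P_PM = I_k): correct feature with prob 1 - P_ROC, else uniform."""
    like = np.full(N, P_ROC / (N - 1))
    like[sample] = 1.0 - P_ROC
    return like

def posterior_pm_yes(samples, prior, p_match):
    """Equation (7): sum_k p(PM_yes | P_PM = I_k, m) p(P_PM = I_k | s, m)."""
    post = prior.copy()
    for s in samples:
        post *= likelihood(s)          # Equation (8), up to normalization Z
        post /= post.sum()
    return float(p_match @ post)

prior = np.full(N, 1.0 / N)            # stands in for the memory-derived prior
p_match = np.zeros(N); p_match[3] = 1.0   # toy table: feature 3 is the target
samples = [3 if rng.random() > P_ROC else rng.integers(N) for _ in range(5)]
print(posterior_pm_yes(samples, prior, p_match))
```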
Table 1: Task and agent parameters used in the simulations.

Task parameters:
  R_OG      Reward/payoff (penalty) for successful OG response         1 (-1)
  R_PM      Reward/payoff (penalty) of successful PM target detection  10 (-10); 15–30 for higher PM emphasis
  R_FA      Penalty for PM target false alarm                          -10
  p_target  Probability of PM target occurrence                        0.05
  Foc       Focality: correlation of PM and OG features                0–0.2 (non-focal); 0.8–1.0 (focal)
  d         Deadline                                                   100

Agent parameters:
  Proc      Process/sample noise (error rate)                          0.4 (0.2 for high-WM capacity)
  Mean      Mean sample duration                                       25
  CV        Sample duration coefficient of variation                   0.3
  EM        EM/LTM noise (error rate)                                  0.002
3.5 Computing the optimal policy via reinforcement learning
At each time step, the model chooses among four possible actions: Respond-$PM_{yes}$, Respond-$OG_{yes}$, Respond-$OG_{no}$, and obtainSample.

After obtaining $k$ samples, the model calculates the expected value of Respond-$PM_{yes}$, Respond-$OG_{yes}$, and Respond-$OG_{no}$ (via the posteriors (1)–(3) and the payoff matrix). The model must compare these expected values to the value of the obtain-another-sample action (obtainSample). This value depends on the time remaining $t_{rem}$ before the deadline and the current belief (uncertainty) concerning which experiment state ($PM_{yes}$, etc.) holds. The belief is fully captured by the posteriors $p(PM_{yes} \mid m, s^{PM})$ and $p(OG_{yes} \mid m, s^{OG})$. The state $s$ for conditioning control is therefore the triple

$$s = \langle t_{rem}, p(PM_{yes}), p(OG_{yes}) \rangle \quad (9)$$

Thus, by computing the optimal value function $Q(s, \text{obtainSample})$ for all model states, we can simulate the rational model behavior for any set of experimental trials. We estimate this value function by using tabular Q-learning with a discrete binned approximation to the value function: each of the three continuous state variables is mapped to one of $b$ bins, where $b$ is a hyperparameter of the learning. Computational experiments show best performance around $b = 50$ (dependent, of course, on other hyperparameters). The Q-learner need only learn the value of the action obtainSample, because the values of the other three actions (the three task responses) can be computed directly from the posteriors at each time step. No temporal discounting is used to define expected value.
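A minimal sketch of the tabular Q-learner for the obtainSample action (the binning scheme, learning rate, and update bookkeeping are our own simplifications of the approach described above):

```python
import numpy as np

B = 50                     # bins per state dimension; best performance near b = 50
ALPHA = 0.1                # learning rate (an assumption; not reported in the text)
Q = np.zeros((B, B, B))    # Q(s, obtainSample); response values come from posteriors

def bin_state(t_rem, p_pm_yes, p_og_yes, deadline=100.0):
    """Discretize the state triple <t_rem, p(PM_yes), p(OG_yes)> into bin indices."""
    clip = lambda x: min(B - 1, max(0, int(x * B)))
    return clip(t_rem / deadline), clip(p_pm_yes), clip(p_og_yes)

def q_update(state, next_state, reward, done, response_values):
    """One backup for the obtainSample action; no temporal discounting (gamma = 1)."""
    s = bin_state(*state)
    if done:
        target = reward                       # trial ended (response or deadline)
    else:
        s2 = bin_state(*next_state)
        # Next-state value: the best of responding now or sampling yet again.
        target = reward + max(max(response_values), Q[s2])
    Q[s] += ALPHA * (target - Q[s])

# During simulation the agent samples again whenever Q[bin_state(...)] exceeds
# the expected payoffs of the three responses computed from the posteriors.
```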
4 The Model’s Account of the Key Phenomena
We used the model described in Section 3 to simulate a number of behavioral phenomena in the human PM literature
outlined in Section 2. Most of these phenomena involve two behavioral measures of interest: reaction times to the OG
task and the accuracy (detection rate) of responses to the PM target.
All simulations use a single consistent setting of memory noise parameters. We set task parameters, including payoff, to
plausible values that approximate a canonical PM paradigm (Einstein et al., 2005). We set the agent process and memory
noise parameters to values that yield human-level accuracies on the PM and OG tasks. Note that these parameters are
not adjusted per condition to match the empirical phenomena.
Table 1 summarizes these parameters. The payoffs for the PM target detection are relative to a fixed value of 1 (-1) for
a correct/incorrect OG task response. The range of PM payoffs is higher (10–30) because PM target probes appear much less often than OG probes, and thus higher PM payoffs are needed to create overall payoff balance between the two tasks.
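For reference, the Table 1 settings can be collected in a single configuration object (a sketch; the class and field names are our own):

```python
from dataclasses import dataclass

@dataclass
class PMTaskConfig:
    # Task parameters (Table 1)
    r_og: float = 1.0          # payoff for a correct OG response (-1 if wrong)
    r_pm: float = 10.0         # payoff for PM detection; 15-30 for high emphasis
    r_fa: float = -10.0        # penalty for a PM false alarm
    p_target: float = 0.05     # probability a probe is a PM target
    foc: float = 0.0           # feature correlation: 0-0.2 non-focal, 0.8-1.0 focal
    deadline: float = 100.0
    # Agent parameters (Table 1)
    proc_noise: float = 0.4    # sample error rate (0.2 for high-WM capacity)
    sample_mean: float = 25.0  # mean sample duration
    sample_cv: float = 0.3     # coefficient of variation of sample duration
    em_noise: float = 0.002    # EM/LTM encoding error rate

cfg = PMTaskConfig(foc=0.9, r_pm=15.0)   # e.g., a focal, high-emphasis condition
```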
4.1 Effect of PM load on prospective and ongoing task performance
As noted in §2.2, one of the most stable and widely reported effects related to PM is the cost of a PM task on the
reaction times to the OG task (Einstein and McDaniel, 2005; Meier and Zimmermann, 2015; Pink and Dodson, 2013).
[Figure 4 (image): left panel, human data from Einstein et al. (2005) Exp. 3, ongoing task reaction time (ms, correct trials) for No PM task vs. PM task, with 1 vs. 6 PM targets; right panel, model simulation, OG reaction time (correct trials) vs. PM task emphasis (0 = no PM task) for 1, 2, and 3 PM targets.]
Figure 4: Effect of PM load on ongoing task RTs. We manipulated PM load by changing the number of targets and
observed its effect on OG reaction times (as Einstein and colleagues did) and simulated its interaction with PM emphasis.
Human behavior (left) and model simulation (right) are shown here. The presence of a PM task slows down performance
on the OG task. This slowing effect is exacerbated by an increase in the number of PM targets. The increased number
of samples before giving a response corresponds to an increase in OG reaction times. The behavioral experiment (left)
compares the RT costs of 1 vs. 6 PM targets (in the presence of either of these targets the participant must perform the PM
task) for the OG task. The model (right) qualitatively simulates the effect by comparing 1, 2 and 3 PM targets. Model
predictions (right, Y axis unit: #samples taken before response) qualitatively reproduce the human behavioral findings
from Einstein et al. 2005 (left, Y axis unit: seconds).
That is, performance of an OG task is slower in the presence of a prospective memory intention, and this cost increases
with PM load—for example, the number of PM targets (Einstein and McDaniel, 2005) or the demands that these place
on WM (Momennejad and Haynes, 2013; Meier and Zimmermann, 2015; Lewis-Peacock et al., 2016).
Furthermore, the cost of PM is exacerbated when higher emphasis is placed on the PM task, which we have here
operationalized as higher priority or reward for correct PM responses, e.g., priority or importance manipulation (Einstein
and McDaniel, 2005). Finally, manipulations that favor the use of WM to perform the PM task increase PM costs as
well (Einstein and McDaniel, 2005).
Figure 4, right, shows the model’s simulated reaction times to the ongoing task as a function of the presence of the PM
task, PM load as manipulated by changing the number of PM targets, and emphasis on the PM task. The model shows a
clear effect of PM task presence and load: responses to the ongoing task are slowed, and this slowing increases when
higher emphasis is placed on the PM task (which is here operationalized as increased PM task payoff relative to OG
task payoff).
The model derives this slowing on the OG task as the computationally rational response to the changes in the task
parameters. What has changed is the optimal stopping criterion (derived via Q-learning). In the presence of the PM
task, the agent faces a different response discrimination demand: not only is the agent discriminating among the OG
features but it is, in parallel, discriminating among the PM features, and additional samples are required (in expectation)
to reduce uncertainty about the status of the probe.
This dependence of the OG response on PM certainty is captured in Equations (2) and (3). Increasing the PM load by
increasing the number of targets further slows the OG task. In follow-up simulations we confirmed that this slowing is
not merely due to the fact that each individual target has a lower probability of occurrence.
4.2 Effect of focality and emphasis
In non-focal PM, the PM targets demand attention to other features than those probed by the OG task. For instance, the
OG task might require a category match or 1-back word match judgment, whereas the PM task may depend on specific
syllables or the font color. A robust finding in the literature is that human participants display higher reaction times to the OG task in the non-focal PM condition; see Figure 2, left (Einstein and McDaniel, 2005).
We model focality by manipulating the correlation between the OG and PM features. The intuition is that focal PM
features are highly correlated with OG features, while non-focal PM features are not correlated with OG features. Put
differently, in focal conditions, OG task features also provide useful information about PM target status. The correlation
parameter $Foc$ thus provides a way to continuously vary focality, from extremes of 0 (non-focal) to 1 (focal with complete overlap of features). Many tasks in the literature are sufficiently complex that it is not possible to quantitatively
estimate what this parameter should be in order to simulate specific experiments (e.g., the features of the word ’tortoise’
and syllable ’tor’ may be more correlated than the features of a word to the font color). Thus, here we explore a range
of low and high values to assess the qualitative predictions.
Emphasis manipulations have been realized experimentally through changes in instructions, as described above. We model emphasis changes by increasing the payoffs and penalties (relative to the OG task) for successful detections and misses for the prospective memory task. Because the experimental manipulations are instructional, it is not possible
to precisely estimate what the PM payoff should be. We explore here a range of PM payoffs that change the total
proportion of reward obtained from moderate level (about 1/4 of the total reward due to the PM task) to relatively high
(about 2/3 of the total reward obtained due to the PM task).
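Both manipulations reduce to simple changes in the simulated environment. A sketch (the mixture construction used to correlate the two features is our own choice of how to realize the correlation $Foc$, and the payoff scaling is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10

def sample_probe(foc):
    """Draw an <OG, PM> feature pair whose correlation increases with foc.

    With probability `foc` the PM feature copies the OG feature (fully
    informative overlap); otherwise it is drawn independently.
    """
    og = rng.integers(N)
    pm = og if rng.random() < foc else rng.integers(N)
    return og, pm

def payoff_table(pm_emphasis):
    """Scale PM gains/penalties relative to the fixed OG payoff of +/-1."""
    return {"PM_correct": pm_emphasis, "PM_miss": -pm_emphasis,
            "OG_correct": 1.0, "OG_wrong": -1.0}

# Focality sweep: foc = 0.0 (non-focal) ... 1.0 (focal);
# emphasis sweep: pm_emphasis = 5, 10, 15, 30 as in the simulations.
print(sample_probe(foc=0.9), payoff_table(pm_emphasis=15))
```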
As shown in Figure 5, the model qualitatively reproduces the effects of both focality and emphasis and their interaction.
Thus, the computationally rational response—in the precise sense of maximizing expected task payoff given the computational constraints—to both non-focality and higher PM emphasis is to slow down.
Under conditions of high focality, the information obtained from both OG and PM features is yoked. Therefore, when
information has accumulated quickly for the OG task, it has also accumulated quickly for the PM task; thus rapid OG
responses need not be delayed to wait for the PM target discrimination. However, as PM emphasis (relative PM payoff)
increases, the gains in the expected value associated with greater PM discrimination certainty outweigh the risks of
missing the deadline. Thus, given high PM emphasis, OG responses are slowed down until PM discrimination reaches
higher certainty.
There is a complex tradeoff and interplay among all these factors—deadline risk, OG payoff, PM payoff, correlation of
PM and OG features—but optimally navigating this tradeoff and complexity is precisely what the rational evidence
integration and optimal control policy does.
4.3 Effect of individual differences in working memory capacity
Brewer and Marsh (2010) explored the role of individual cognitive differences by administering
two standard span measures of working memory capacity to participants, and separately examining the performance of
those who scored high and low on the composite WM capacity measure. They showed that participants with lower
measured WM capacity displayed a large effect of focality on both PM target detection accuracies and PM target
detection reaction times, while high working memory capacity participants showed little effect of focality (Figure 6, top
row, left and middle). Both groups showed increased reaction times to the OG task in non-focal conditions and there
was no overall difference in reaction times between the two groups.
We modeled WM capacity differences by manipulating the process noise (sampling error); in this model lower process
noise corresponds to higher working memory capacity.
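In the simulations this amounts to changing a single parameter (values from Table 1):

```python
# Individual differences enter the model only through process noise:
HIGH_WM_PROC_NOISE = 0.2   # lower sampling error rate = higher WM capacity
LOW_WM_PROC_NOISE = 0.4    # higher sampling error rate = lower WM capacity
# All other parameters (payoffs, deadline, memory noise) are held fixed, so any
# simulated group differences are attributable to this one manipulation.
```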
This manipulation accounts for both the PM target detection accuracy and PM target response time interactions between
focality and working memory capacity (Figure 6, bottom row, left and middle). The overall effect of focality on OG
task reaction time, and the lack of an interaction with working memory capacity, are also accounted for.
However, the model incorrectly predicts a main effect of working memory capacity on RT (predicting faster responses
for higher WM), an effect not observed by Brewer et al. It is possible that the two participant groups also differ in
subjective payoffs for speed and accuracy, or relative emphasis on the PM task, all of which could diminish the predicted
difference on OG RTs between the two groups, but our intent here was to understand what could be accounted for by
varying only a single parameter.
4.4 Clarification of key distinctive properties of the rational model
We now summarize some of the key properties of the model that distinguish it from other approaches.
1. Value maximization and evidence accumulation without response thresholds. There are no thresholds or bounds for the evidence accumulation. Rather, for every given state, defined in terms of $\langle t_{rem}, p(PM_{yes}), p(OG_{yes}) \rangle$, a response is selected once the expected value of giving one of the responses is higher than the expected value of obtaining another sample. In other words, the stopping policy space here is the full policy space conditioned on the two posteriors and time remaining. Any policy space with thresholds is a strict subset of this space.
2. Optimal weighting of information in EM, WM, and perception via Bayesian integration. The Bayesian integration naturally weights information in WM and LTM/EM as a function of the uncertainty of the memory encodings. When LTM/EM noise or interference is high, e.g., due to the presence of lures, relatively lower weight is given to LTM/EM. When information in WM is noisier (e.g., due to high load), relatively lower weight is given to WM. There are no explicit weight parameters; rather, weighting is a natural consequence of the likelihood function that captures the dependence of the prior on noisy memory.
[Figure 5 (image): top row, human data from Einstein et al. (2005) Exp. 2, ongoing task RT (ms) and proportion of PM targets detected vs. PM task emphasis (Moderate, High) by focality (Focal, Nonfocal); bottom row, model simulations of ongoing task RT and PM target detection rate vs. PM task emphasis (payoff relative to OG task) for focality f = 0, 0.2, 0.8, 1.0.]
Figure 5: The model produces focality and PM emphasis effects. Human results (see Figure 2) and corresponding
model simulation results (bottom). Both humans and model simulations show higher costs in the non-focal compared to
the focal condition. This effect is modulated by emphasis on the prospective memory task. While human experiments only
included high and low emphasis instruction effects (two conditions), the model simulates the effect of both emphasis and
focality along continuous dimensions.
3. Use of noisy information in memory without a separate retrieval process. The model contains no explicit retrieval processes; rather, the model conditions its responses on all the information in the noisy memory store. This global-parallel property is consistent with most mathematical models of memory retrieval that assume parallel contact with all memory elements (e.g., Shiffrin, 2003). But the model provides accounts of reaction times and error rates without assuming a retrieval process that yields an item or subset of items from memory that must then be further processed.
4. Sensitivity to differential task emphasis and focality manipulations without explicit resource or attention
allocation mechanisms. The model provides accounts of task emphasis and focality effects through purely task-
environment manipulations (changing payoff structure and probe-feature correlations) and without recourse to
any attention or resource allocation mechanism. Because these effects are robust consequences of the task
environment manipulations, the rational model provides a useful baseline against which predictions of resource
allocation mechanism theories may be compared. Under the present account, focality and emphasis effects
are not signatures of adaptive resource allocation, but signatures of a rational stopping criterion given noisy
memory stores.
[Figure 6 (image): top row, Brewer et al. (2010) human data: proportion of PM targets detected, PM target reaction time (ms), and ongoing task reaction time (ms, correct trials) vs. focality (Nonfocal, Focal) by WM capacity (High WM, Low WM); bottom row, model simulations of the same three measures vs. continuous focality (0–1) for process noise 0.2 (high WM) vs. 0.4 (low WM).]
Figure 6: Effect of individual differences on OG and PM performance. Brewer et al. (2010) found robust differences in performance on PM detection and response time between participants with low vs. high measured working memory capacity (top row). By changing the process (sampling) noise parameter, the model accounts for these differences in both PM detection rate and RT, and their interaction with focality (bottom row). The model correctly predicts no interaction between focality and WM capacity in OG reaction times (right), but incorrectly predicts an overall difference in OG RT not observed in the Brewer et al. (2010) task.
5 Summary and future directions
Here we combine Bayesian inference and reinforcement learning to offer a normative solution to the prospective
memory problem. The prospective memory problem is that of the simultaneous and timely execution of immediate
(ongoing) and delayed (prospective) tasks. We have proposed a tripartite model of prospective memory function with
noisy perception and processing, episodic memory, and working memory components (Section 3). The model is rational
and normative because the integration of evidence from the three components conforms to correct Bayesian inference,
and because the policy parameters governing task performance are selected to maximize payoff on both the ongoing
and prospective tasks (using RL).
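To make the reinforcement-learning ingredient concrete, here is a minimal sketch (illustrative assumptions throughout: the payoff shape, the candidate grid, and the epsilon-greedy scheme are stand-ins, not the paper's training procedure). The policy parameter being optimized can be as simple as the stopping threshold itself, and a basic bandit learner suffices to select the threshold that maximizes a combined ongoing-plus-prospective payoff.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical payoff: higher thresholds catch more PM targets but slow the
# ongoing task. Payoffs are noisy, as they would be if estimated from
# simulated task trials.
def noisy_payoff(threshold):
    pm_reward = 1.0 - np.exp(-threshold)  # PM detection improves with threshold
    og_cost = 0.15 * threshold            # ongoing-task slowing grows with threshold
    return pm_reward - og_cost + rng.normal(0, 0.02)

# Epsilon-greedy bandit over a grid of candidate thresholds (a simple form
# of reinforcement learning over policy parameters).
candidates = np.linspace(0.5, 6.0, 12)
values = np.zeros_like(candidates)
counts = np.zeros_like(candidates)
for _ in range(3000):
    if rng.random() < 0.1:                # explore a random threshold
        i = rng.integers(len(candidates))
    else:                                 # exploit the current best estimate
        i = int(np.argmax(values))
    r = noisy_payoff(candidates[i])
    counts[i] += 1
    values[i] += (r - values[i]) / counts[i]  # incremental mean update

print(f"selected threshold: {candidates[np.argmax(values)]:.2f}")
```

With these toy payoffs the learner settles near the analytic optimum e^(−θ) = 0.15, i.e. θ ≈ 1.9 (2.0 on the grid used here).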
The model is computationally rational because it derives the best possible use of posited bounded computational
resources (Howes et al., 2009; Lewis et al., 2014). Once the (approximately) optimal policy parameters are computed
(here through reinforcement learning), the model provides detailed behavioral predictions, including response times
and accuracies for both ongoing and prospective tasks. The model thus provides a formally rigorous account for
understanding interactions between long-term and working memory in the service of prospective memory.
A promising future direction is the application of the model to existing neuroimaging data on prospective memory. Over
the past decade, a number of fMRI studies have shown that univariate as well as multivariate patterns of activation
in the prefrontal cortex, the parietal cortex, and the hippocampus mediate event-based and time-based prospective
memory (Gilbert, 2011; Momennejad and Haynes, 2012, 2013), as well as the interaction of LTM/EM and WM
processes in successful PM (Lewis-Peacock et al., 2016). Future work could examine the distinct and joint computational
processes governing the prefrontal components (often involved in WM and controlled processing) and the hippocampal
components (involved in associative memory, LTM/EM) of these findings. Of particular interest would be to compare
the function of these regions in healthy and abnormal PM behavior in order to identify predictors of the optimality of
action selection in the model.
A critical contribution of the model, in all of these contexts, is a normative account of the strategic gradient in the
deployment of LTM/EM and WM in the performance of memory-dependent tasks. Such a normative model addresses
the following question: how should LTM/EM and WM be used to realize planned action? As such, the model provides
a valuable foundation for understanding interactions between LTM/EM, WM, and perception in other multitasking
settings. This may be useful in understanding the meta parameters underlying the performance of healthy individuals,
and accordingly designing models that solve real-world multi-tasking problems.
Such meta parameters may also allow us to understand the bounded rationality of parameters that may compromise
function due to brain injury or psychiatric conditions. Many psychiatric disorders are marked by impairments in the
adaptive integration of perception with multiple memory systems. By providing insights into the underlying mechanisms of
normal and suboptimal adaptive behavior, our model could be used to design compensating interventions for participants
whose behavior suggests sub-optimal integration of WM and EM sources of memory. Such computational interventions
could help improve real-world performance in both healthy individuals and those in sub-optimal conditions.
References
Altgassen, M., Kretschmer, A., and Schnitzspahn, K. M. (2017). Future thinking instructions improve prospective
memory performance in adolescents. Child Neuropsychology, 23(5):536–553.
Altgassen, M., Rendell, P. G., Bernhard, A., Henry, J. D., Bailey, P. E., Phillips, L. H., and Kliegel, M. (2015). Future
thinking improves prospective memory performance and plan enactment in older adults. Quarterly Journal of
Experimental Psychology, 68(1):192–204.
Anderson, J. R. (1983). A spreading activation theory of memory. Journal of Verbal Learning and Verbal Behavior,
22(3):261–295.
Brewer, G. A., Knight, J., Meeks, J. T., and Marsh, R. L. (2011). On the role of imagery in event-based prospective
memory. Conscious Cogn, 20(3):901–907.
Brewer, G. A. and Marsh, R. L. (2010). On the role of episodic future simulation in encoding of prospective memories.
Cognitive Neuroscience, 1(2):81–88.
Chen, P.-H. C., Chen, J., Yeshurun, Y., Hasson, U., Haxby, J., and Ramadge, P. J. (2015). A Reduced-Dimension
fMRI Shared Response Model. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R., editors,
Advances in Neural Information Processing Systems 28, pages 460–468. Curran Associates, Inc.
Cohen, J. D., Braver, T. S., and O’Reilly, R. C. (1996). A computational approach to prefrontal cortex, cognitive
control and schizophrenia: recent developments and current challenges. Philos. Trans. R. Soc. Lond., B, Biol. Sci.,
351(1346):1515–1527.
Einstein, G. O. and McDaniel, M. A. (2005). Prospective memory: Multiple retrieval processes. Current Directions in
Psychological Science, 14(6):286–290.
Einstein, G. O., McDaniel, M. A., Thomas, R., Mayfield, S., Shank, H., Morrisette, N., and Breneiser, J. (2005).
Multiple processes in prospective memory retrieval: factors determining monitoring versus spontaneous retrieval. J
Exp Psychol Gen, 134(3):327–342.
Gilbert, S. J. (2011). Decoding the content of delayed intentions. J. Neurosci., 31:2888–2894.
Gollwitzer, P. M. (1990). Action phases and mind-sets. In Higgins, E. and Sorrentino, R., editors, The handbook of
motivation and cognition: Foundations of social behavior, volume 2, pages 53–92. Guilford Press, New York.
Gollwitzer, P. M. and Brandstätter, V. (1997). Implementation intentions and effective goal pursuit. Journal of
Personality and Social Psychology, 73(1):186–199.
Howes, A., Lewis, R., and Vera, A. (2009). Rational adaptation under task and processing constraints: implications for
testing theories of cognition and action. Psychological review, 116(4):717–751.
Lewis, R. L., Howes, A., and Singh, S. (2014). Computational rationality: Linking mechanism and behavior through
utility maximization. Topics in Cognitive Science, 6(2):279–311.
Lewis-Peacock, J. A., Cohen, J. D., and Norman, K. A. (2016). Neural evidence of the strategic choice between working
memory and episodic memory in prospective remembering. Neuropsychologia, 93(Pt A):280–288.
McDaniel, M. A., Howard, D. C., and Butler, K. M. (2008). Implementation intentions facilitate prospective memory
under high attention demands. Mem Cognit, 36(4):716–724.
McFarland, C. and Glisky, E. (2012). Implementation intentions and imagery: Individual and combined effects on
prospective memory among young adults. Behavior Research Methods, 40(1):62–69.
Meier, B. and Zimmermann, T. D. (2015). Loads and loads and loads: the influence of prospective load, retrospective
load, and ongoing task load in prospective memory. Front Hum Neurosci, 9.
Miller, E. K. and Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience,
24:167–202.
Momennejad, I. and Haynes, J.-D. (2012). Human anterior prefrontal cortex encodes the ’what’ and ’when’ of future
intentions. NeuroImage, 61(1):139–48.
Momennejad, I. and Haynes, J.-D. (2013). Encoding of prospective tasks in the human prefrontal cortex under varying
task loads. J. Neurosci., 33(44):17342–17349.
Pink, J. E. and Dodson, C. S. (2013). Negative prospective memory: Remembering not to perform an action. Psychon
Bull Rev, 20(1):184–190.
Russell, S. and Subramanian, D. (1995). Provably bounded-optimal agents. Journal of Artificial Intelligence Research,
2:575–609.
Scullin, M. K., Bugg, J. M., and McDaniel, M. A. (2012). Whoops, I did it again: commission errors in prospective
memory. Psychol Aging, 27(1):46–53.
Shiffrin, R. (2003). Modeling memory and perception. Cognitive Science, 27(3):341–378.
Watkins, C. J. C. H. and Dayan, P. (1992). Q-learning. Mach Learn, 8(3):279–292.
|
66ee0fb4-d745-4239-b798-e5e1e0a5988b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Anthropic probabilities and cost functions
I've claimed that anthropic probabilities like SIA and SSA don't actually exist - or, more properly, that you need to include some details of preferences in order to get any anthropic probabilities, and thus that anthropic issues should be approached from the perspective of decision theory.
What do I mean by this? Well, informally, what are probabilities? If I said that X (a very visible event) would happen with a probability 10%, then I would expect to see events like X happen about a tenth of the time.
This makes a lot of sense. Why can't it be transposed into anthropic situations? Well, the big problem is the "I" in "I would expect". Who is this "I" - me, my copies, some weighted average of us all?
In non-anthropic situations, we can formalise "I would expect to see" with a cost function. Let me choose a number p(X) to be whatever I want; then, if X doesn't happen I pay a cost of p(X)^2, while if it does happen, I pay a cost of (p(X)−1)^2 (this is exactly equal to −(p(X)−I_X)^2, for I_X the indicator function of X).
Then, for this cost function, I minimize my losses by setting "p(X)" to be equal to my subjective opinion of the probability of X (note there are many eliciting cost functions we could have used, not just the quadratic loss, but the results are the same for all of them).
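A quick numerical check of that claim (a minimal sketch; the true probability and the candidate grid are arbitrary choices for the example): for a fixed probability of X, the expected quadratic loss is minimized exactly when the reported number equals that probability.

```python
import numpy as np

true_p = 0.3                      # assumed probability that X happens
reports = np.linspace(0, 1, 101)  # candidate values of p(X)

# Expected quadratic loss: with probability true_p we pay (p-1)^2, else p^2.
expected_loss = true_p * (reports - 1) ** 2 + (1 - true_p) * reports ** 2

best = reports[np.argmin(expected_loss)]
print(f"loss-minimizing report: {best:.2f} (true probability: {true_p})")
# -> 0.30: the quadratic (Brier) loss is a proper scoring rule.
```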
In the informal setting, we didn't know how to deal with "I" when expecting future outcomes. In the formal setting, we don't know how to aggregate the cost when multiple copies could all have to pay the cost.
There are two natural methods of aggregation: the first is to keep −(p(X)−I_X)^2, as above, as the cost for every copy. Thus each copy has the average cost of all the copies (this also allows us to generalise to situations where different copies would see different things). In this case, the probability that develops from this is SSA.
Alternatively, we could add up all the costs, giving a total cost of −n(p(X)−I_X)^2 if there were n copies (this also generalises to situations
|
ff11a3e7-d58a-4c4c-aa6a-72abb8bbf8d9
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Pausing AI might be good policy, but it's bad politics
*[Edit: I've updated this post on October 24 in response to some feedback]*
NIMBYs don’t call themselves NIMBYs. They call themselves affordable housing advocates or community representatives or environmental campaigners. They’re usually not against building houses. They just want to make sure that those houses are affordable, attractive to existing residents, and don’t destroy habitat for birds and stuff.
Who can argue with that? If, ultimately, those demands stop houses from being built entirely, well, that’s because developers couldn’t find a way to build them without hurting poor people, local communities, or birds and stuff.
This is called politics and it’s powerful. The most effective anti-housebuilding organisation in the UK doesn’t call itself Pause Housebuilding. It calls itself the Campaign to Protect Rural England, because English people *love* rural England. CPRE campaigns in the 1940s [helped shape](https://worksinprogress.co/issue/why-britain-doesnt-build) England’s planning system. As a result, permission to build houses is only granted when it’s in the “public interest”; in practice it is given [infrequently](https://www.founderspledge.com/research/improving-housing-affordability-is-one-of-the-best-causes-to-support) and often with onerous [conditions](https://www.designingbuildings.co.uk/wiki/Planning_condition).[[1]](#fnuy3m1pxq16e)
Oh, you want to build houses? Why do you hate sheep and trees so much?

The [AI pause folks](https://pauseai.info/) could learn from their success. Instead of campaigning for a total halt to AI development, they could push for strict regulations that aim to ensure new AI systems won't harm people (or birds and stuff).
This approach has two advantages. First, it’s more politically palatable than a heavy-handed pause. And second, it’s closer to what those of us concerned about AI safety ideally want: not an end to progress, but progress that is safe and advances human flourishing.
I think NIMBYs happen to be wrong about the cost-benefit calculation of strong regulation. But AI safety people are right. Advanced AI systems [pose grave threats](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/) and [we don’t know how to mitigate them](https://www.forourposterity.com/nobodys-on-the-ball-on-agi-alignment/).
Maybe ask governments for an equivalent system for new AI models. Require companies to prove to planners that their models are safe. Ask for:
* Independent safety audits
* Ethics reviews
* Economic analyses
* Public reports on risk analysis and mitigation measures
* Compensation mechanisms for people whose livelihoods are disrupted by automation
* And a bunch of [other measures](https://arxiv.org/abs/2305.07153) that plausibly limit the [AI risks](https://arxiv.org/abs/2307.03718)
In practice, these requirements might be hard to meet. But, considering the potential harms and the meaningful chance that something goes wrong, they *should* be. If a company developing an unprecedentedly large AI model with surprising capabilities can't prove it's safe, they shouldn't release it.
This is *not* about pausing AI.
I don’t know anybody who thinks AI systems have *zero* upside. In fact, the same people worried about the risks are often excited about the potential for advanced AI systems to solve thorny coordination problems, liberate billions from mindless toil, achieve wonderful breakthroughs in medicine, and generally advance human flourishing.
But they’d like companies to *prove* their systems are safe before they release them into the world, or even train them at all. To prove that they’re not going to cause harm by, for example, hurting people, disrupting democratic institutions, or wresting control of important sociopolitical decisions from human hands.
Who can argue with that?
[Edit: Peter McIntyre has pointed out that Ezra Klein made a version of this argument on the [80K podcast](https://80000hours.org/podcast/episodes/ezra-klein-ai-and-dc/#how-to-slow-down-advances-in-ai-capabilities-000927). So I've been scooped - but at least I'm in good company!]
1. **[^](#fnrefuy3m1pxq16e)**“Joshua Carson, head of policy at the consultancy Blackstock, said: “The notion of developers ‘sitting on planning permissions’ has been taken out of context. It takes a considerable length of time to agree the provision of new infrastructure on strategic sites for housing and extensive negotiation with councils to discharge planning conditions before homes can be built.”” ([Kollewe 2021](https://www.theguardian.com/society/2021/may/08/over-1m-homes-in-england-with-planning-permission-not-built))
|
97a1ec03-81fc-4eb9-a1f8-378ba23e80ff
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How To Go From Interpretability To Alignment: Just Retarget The Search
[EDIT: Many people who read this post were very confused about some things, which I later explained in What’s General-Purpose Search, And Why Might We Expect To See It In Trained ML Systems? You might want to read that post first.]
When people talk about prosaic alignment proposals, there’s a common pattern: they’ll be outlining some overcomplicated scheme, and then they’ll say “oh, and assume we have great interpretability tools, this whole thing just works way better the better the interpretability tools are”, and then they’ll go back to the overcomplicated scheme. (Credit to Evan for pointing out this pattern to me.) And then usually there’s a whole discussion about the specific problems with the overcomplicated scheme.
In this post I want to argue from a different direction: if we had great interpretability tools, we could just use those to align an AI directly, and skip the overcomplicated schemes. I’ll call the strategy “Just Retarget the Search”.
We’ll need to make two assumptions:
* Some version of the natural abstraction hypothesis holds, and the AI ends up with an internal concept for human values, or corrigibility, or what the user intends, or human mimicry, or some other outer alignment target.
* The standard mesa-optimization argument from Risks From Learned Optimization holds, and the system ends up developing a general-purpose (i.e. retargetable) internal search process.
Given these two assumptions, here’s how to use interpretability tools to align the AI:
* Identify the AI’s internal concept corresponding to whatever alignment target we want to use (e.g. values/corrigibility/user intention/human mimicry/etc).
* Identify the retargetable internal search process.
* Retarget (i.e. directly rewire/set the input state of) the internal search process on the internal representation of our alignment target.
Just retarget the search. Bada-bing, bada-boom.
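To make the three steps concrete, here is a deliberately toy sketch. Everything in it (`Model`, `SearchModule`, the string-valued "representations") is a hypothetical stand-in, not a real interpretability API; the post does not specify an implementation.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for what great interpretability tools would hand us.
@dataclass
class SearchModule:
    target: str          # internal representation the search optimizes for

@dataclass
class Model:
    concepts: dict       # concept name -> internal representation
    search: SearchModule

def retarget_the_search(model: Model, alignment_target: str) -> Model:
    # Step 1: identify the internal concept for our alignment target.
    target_repr = model.concepts[alignment_target]
    # Step 2: the retargetable search process was already located (model.search).
    # Step 3: rewire the search's input state to point at that concept.
    model.search.target = target_repr
    return model

# Toy usage: a "model" whose search currently optimizes a mesa-objective.
m = Model(concepts={"corrigibility": "<repr-A>", "paperclips": "<repr-B>"},
          search=SearchModule(target="<repr-B>"))
m = retarget_the_search(m, "corrigibility")
print(m.search.target)  # -> <repr-A>
```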
Problems
Of course as written, “Just Retarget the Search” has some issues; we haven’t adde
|
925da6d1-3d3a-4f39-9712-7145bf9966de
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How do we align humans and what does it mean for the new Conjecture's strategy
Divide and conquer
Roman maxim (maybe[1])
Democracy is the worst form of Government except for all those other forms that have been tried from time to time.
Winston Churchill
Introduction
Recently, Conjecture proposed the idea of a modular AGI, with each module being interpretable, having limited intellect, and functioning similarly to the human mind. They called this concept Cognitive Emulation.
In their proposal, the authors explicitly say
> We have a lot of experience and knowledge of building systems that are broadly beneficial and safe, while operating in the human capabilities regime.
The most upvoted comment on the post, at the time I write this, is critical of this statement
> What? A major reason we're in the current mess is that we don't know how to do this. For example we don't seem to know how to build a corporation (or more broadly an economy) such that its most powerful leaders don't act like Hollywood villains (race for AI to make a competitor 'dance')? Even our "AGI safety" organizations don't behave safely (e.g., racing for capabilities, handing them over to others, e.g. Microsoft, with little or no controls on how they're used).
Given this disagreement, I decided to contemplate humanity's ability to align humans and organizations and draw parallels between this field of knowledge and AI safety.
Problems with power
Many doom scenarios related to AGI involve 2 stages:
1. AGI gains too much power
2. AGI does something very bad due to its Shoggothness[2]
If we are talking about aligning people and organizations, the "Too much power" part is more relevant, because even the worst psychopaths and dictators are still people, not Shoggoths, and their ugliness is often a result of their unlimited power. I will touch on this topic later in this post.
Throughout history, many people had an enormous amount of power, and usually, it didn't go well, so humanity developed a number of instruments for restricting the power of a sin
|
cd853795-85a2-4767-96d9-7977557d8949
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Epistemic Legibility
Tl;dr: being easy to argue with is a virtue, separate from being correct.
Introduction
Regular readers of my blog know of my epistemic spot check series, where I take claims (evidential or logical) from a work of nonfiction and check to see if they’re well supported. It’s not a total check of correctness: the goal is to rule out things that are obviously wrong/badly formed before investing much time in a work, and to build up my familiarity with its subject.
Before I did epistemic spot checks, I defined an easy-to-read book as, roughly, imparting an understanding of its claims with as little work from me as possible. After epistemic spot checks, I started defining easy to read as “easy to epistemic spot check”. It should be as easy as possible (but no easier) to identify what claims are load-bearing to a work’s conclusions, and figure out how to check them. This is separate from correctness: things can be extremely legibly wrong. The difference is that when something is legibly wrong someone can tell you why, often quite simply. Illegible things just sit there at an unknown level of correctness, giving the audience no way to engage.
There will be more detailed examples later, but real quick: “The English GDP in 1700 was $890324890. I base this on $TECHNIQUE interpretation of tax records, as recorded in $REFERENCE” is very legible (although probably wrong, since I generated the number by banging on my keyboard). “Historically, England was rich” is not. “Historically, England was richer than France” is somewhere in-between.
“It was easy to apply this blog post format I made up to this book” is not a good name, so I’ve taken to calling the collection of traits that make things easy to check “epistemic legibility”, in the James C. Scott sense of the word legible. Legible works are (comparatively) easy to understand, they require less external context, their explanations scale instead of needing to be tailored for each person. They’re easier to productively disagr
|
2f0c0d9c-eb91-4598-ad50-b909c3e4df08
|
trentmkelly/LessWrong-43k
|
LessWrong
|
DeepSeek Panic at the App Store
DeepSeek released v3. Market didn’t react.
DeepSeek released r1. Market didn’t react.
DeepSeek released a f***ing app of its website. Market said I have an idea, let’s panic.
Nvidia was down 11%, the Nasdaq was down 2.5%, and the S&P was down 1.7%, on the news.
> Shakeel: The fact this is happening today, and didn’t happen when r1 actually released last Wednesday, is a neat demonstration of how the market is in fact not efficient at all.
That is exactly the market’s level of situational awareness. No more, no less.
I traded accordingly. But of course nothing here is ever investment advice.
Given all that has happened, it seems worthwhile to go over all the DeepSeek news that has happened since Thursday. Yes, since Thursday.
For previous events, see my top level post here, and additional notes on Thursday.
To avoid confusion: r1 is clearly a pretty great model. It is the best by far available at its price point, and by far the best open model of any kind. I am currently using it for a large percentage of my AI queries.
TABLE OF CONTENTS
1. Current Mood.
2. DeepSeek Tops the Charts.
3. Why Is DeepSeek Topping the Charts?.
4. What Is the DeepSeek Business Model?.
5. The Lines on Graphs Case for Panic.
6. Everyone Calm Down About That $5.5 Million Number.
7. Is The Whale Lying?.
8. Capex Spending on Compute Will Continue to Go Up.
9. Jevons Paradox Strikes Again.
10. Okay, Maybe Meta Should Panic.
11. Are You Short the Market.
12. o1 Versus r1.
13. Additional Notes on v3 and r1.
14. Janus-Pro-7B Sure Why Not.
15. Man in the Arena.
16. Training r1, and Training With r1.
17. Also Perhaps We Should Worry About AI Killing Everyone.
18. And We Should Worry About Crazy Reactions To All This, Too.
19. The Lighter Side.
CURRENT MOOD
> Joe Weisenthal: Call me a nationalist or whatever. But I hope that the AI that turns me into a paperclip is American made.
>
> Peter Wildeford: Seeing everyone lose their minds about Deepseek does not reassure me that
|
37b69280-8b1b-4b90-9218-a93fb8dc95f9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Failure Modes of Teaching AI Safety
Why I'm writing this
I'm about to teach my AI safety course for the fourth time. As I'm now updating the syllabus for the upcoming semester, I summarize my observations on what can go wrong when teaching AI safety. These have mostly not happened during my teaching but are generally likely to happen - as more AIS courses are developed and taught around the world - and I've especially thought about them when preparing the course so that they don't happen.
1. Alignment feels like a lost cause
Depending on how x-risk is presented, misaligned AI might appear to be an inevitable future: the problem seems too complex and hard, and there isn't enough time for AI alignment research to generate robust techniques.
How to avoid: make sure to emphasize all the work on alignment and governance and how your students could also be doing some of that (if they wanted to). I'm also updating my syllabus to include the discussion about AI Pause.
2. There is no (historical/philosophical) context
Just like with all complex ideas, situating the problem within its context can make a big difference in how it will be perceived. Talking about AI systems that all of a sudden become a threat to humanity is confusing, to say the least.
How to avoid: talk about the foundations of AI, the debate between symbolic AI and connectionism in cognitive science, and set the stage for how we got to contemporary LLMs.
3. Your audience gets (disproportionately more) excited about capabilities
This is more common with people with technical backgrounds who like to build tools and applications and are blindly or naively excited about technological progress.
How to avoid: clearly explain the conceptual parts of the problem, describe it in more industry-friendly terminology, provide a lot of examples.
4. But ... what about sexist/racist etc. algorithms?
Hinting at AI ethics being more important than this alignment story. It also appears in the context of the Longtermism debate.
How to av
|
e3e3ffc4-7c9f-4f2d-ab7f-c19a735b2dad
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Are alignment researchers devoting enough time to improving their research capacity?
(Epistemic Status: Anecdotal)
If we want to reduce AGI x-risk, it seems pretty intuitive to me that alignment researchers should be regularly dedicating time to improving their research capacity. But I suspect that many of them don't do this.
**Am I wrong? I would love to know if I am.**
(I'm using the phrase "research capacity" vaguely, to mean both research skill and productivity.)
Over the last 3 - 4 years, I've had something like 12 - 20 informal conversations with researchers at various alignment orgs on the subject of how they improve their research capacity. In these conversations, I'd ask questions like "How do you personally go about getting better at research?", or "What have you done recently to improve your research process?" or "What is one thing you could do to get better at your job?". And more than half of the responses are one of the following:
1. a blank stare
2. a long, thoughtful pause followed by no actual answer
3. an argument that "this kind of messy, abstract work doesn't lend itself to direct improvement in the way other skills do"
4. an argument that "the best way to improve at research is to simply do the work and keep your eyes peeled for opportunities to improve as you go."
To be clear, some researchers do have different responses. But these answers were surprisingly common, and... they seem wrong?
My Response To #3 :
> "this kind of messy, abstract work doesn't lend itself to direct improvement in the way other skills do"
>
>
I'm not a researcher, but I can't think of any skill I've learned, whether intellectual or physical, that doesn't benefit from some amount of regular and intentional focus on improving it.
Additionally, over the last couple months, I've started having debugging conversations with researchers who want help thinking through how to improve, and in these conversations most of them generate lots of ideas and claim the discussion is quite productive. That's not what I would expect if doing explicit skill improvement just didn't work for alignment research.
Here are a few examples of opportunities for improvement that researchers have identified in talking with me:
* improving their ability to find research collaborators
* improving at prioritizing research tasks (e.g. should they spend the next hour reading a textbook, or spend this time writing out their current ideas)
* improving their ability to deconstruct larger problems into subproblems.
My Response To #4:
> the best way to improve at research is to simply do the work, and keep your eyes open for opportunities to improve as you go.
>
>
Once again, I'm not a researcher, but as far as I can tell the above quote has never been true for me with respect to any skill I've developed in the past.
To be fair, I do think it's important to keep your skill improvement approach grounded in real problems you're solving. It's bad to lose track of the object-level and get lost in endless meta-level considerations. But I would be really surprised if just doing the work and dedicating zero time to improving your capacity directly was actually "the best way".
In my experience, most skills contain lots of parts, many of which will seem small and insignificant if I'm looking at them from the perspective of a single problem right in front of my face. It's only when I intentionally zoom out and consider how these parts affect my performance across *many* problems that I notice how valuable it would be to improve them.
For any researcher who adheres to the "just do the work" approach, I'm curious if you've run an experiment like this with yourself:
- Spend a week "just doing the work and keeping your eyes open", and see how many improvement ideas you generate and how valuable they seem.
- Then spend 20 minutes specifically trying to think of ways to improve, and see how many you generate and how valuable they seem.
- Compare Results.
If you don't generate more valuable ideas in the 20 minutes than during the week, that would be interesting to hear.
Two More Objections I've Heard
1.
> "Alignment researchers already spend lots of time learning in the pursuit of their research. They read the textbooks of esoteric technical fields, they learn from experts and from other researchers, and they try to keep up with developments in their field. Isn't that enough time devoted to improving their research capacity?"
>
>
No, I don't think so.
I view those activities as *part of the work* of research*,* not as something extra on top of the work. In fact, those are great examples of activities at which I'd think a researcher would want to improve.
2.
>
> Researchers are more effective when they're allowed to follow their curiosity. Prescribing some standard idea of what good (or better) research looks like never works and actually harms their capacity.
>
>
Yep, that sounds right to me. So it's a good thing I didn't claim otherwise.
Seeking improvements doesn't have to mean letting go of curiosity as a crucial driving force in your research.
And I'm certainly not implying that I have some specific idea of how research is best done. I'm just making the claim that there should be some way of getting better at it. I assume that will look different for each researcher.
Closing Thoughts
So what's the deal? Have I spoken to an unrepresentative sample of researchers? Is there some miscommunication happening?
**Or are we actually lacking a culture of constant improvement among alignment researchers?**
|
f2203b8b-86dd-49d8-b0ed-7857789b8e9b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Drexler's Nanotech Software
Two months ago I attended Eric Drexler's launch of MSEP.one. It's open source software, written by people with professional game design experience, intended to catalyze better designs for atomically precise manufacturing (or generative nanotechnology, as he now calls it).
Drexler wants to draw more attention to the benefits of nanotech, which involve large enough exponents that our intuition boggles at handling them. That includes permanent health (Drexler's new framing of life extension and cures for aging).
He hopes that a decentralized network of users will create a rich library of open-source components that might be used to build a nanotech factory. With enough effort, it could then become possible to design a complete enough factory that critics would have to shift from their current practice of claiming nanotech is impossible, to arguing with expert chemists over how well it would work.
Drexler hopes to gamify the software enough that people will use it for fun. My cursory impression, based on playing with it for less than an hour, is that it's not very close to being fun. I don't know if that's even a reasonable goal. The software feels more professional and easy to use than what I recall of similar software 20 years ago. Using it still seems like work. I expect it will be hard to get many people to use it without paying them.
Protein-based nanotech versus Diamondoid-style
MSEP.one currently supports development of diamondoid-style nanotech, which produces pretty pictures, and seems somewhat likely to enable the simplest and most understandable atomically precise factories. The downside is that designs of this type are not very close to being buildable with current tools. There isn't even a clear path to building the appropriate tools.
There is at least as much interest in building atomically precise systems out of proteins and/or DNA. The tools to build those designs mostly work today. MSEP.one does not yet support such designs. The main downsides are
|
b4b61843-b29b-4b6f-ba65-06079bd9883c
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Race Along Rashomon Ridge
### *Produced As Part Of The SERI ML Alignment Theory Scholars Program 2022 Research Sprint Under* [*John Wentworth*](https://www.lesswrong.com/users/johnswentworth)
Two Deep Neural Networks with wildly different parameters can produce equally good results. Not only can a tweak to parameters leave performance unchanged, but in many cases, two neural networks with completely different weights and biases produce identical outputs for any input.
The motivating question:
**Given two optimal models in a neural network's weight space, is it possible to find a path between them comprised entirely of other optimal models?**
In other words, can we find a continuous path of tweaks from the first optimal model to the second without reducing performance at any point in the process?
Ultimately, we hope that the study of equivalently optimal models would lead to advances in interpretability: for example, by producing models that are simultaneously optimal and interpretable.
Introduction
============
Your friend has invited you to go hiking in the Swiss Alps. Despite extensive planning, you completely failed to clarify which mountain you were supposed to climb. So now you're both standing on different mountaintops. Communicating via yodels, your friend requests that you hike over to the mountain he's already on; after all, he was the one who organised the trip.
How should you reach your friend? A straight line path would certainly work, but it's such a hassle. Hiking all the way back down and then up another mountain? Madness.
Looking at your map, you notice there's a long ridge that leaves your mountain at roughly the same height. Is it possible to follow this ridge to reach your friend's mountain?
Similarly, we can envision two optimal models in a neural net's weight space that share a "ridge" in the loss landscape. There is a natural intuition that we should be able to "walk" between these two points without leaving the ridge. Does this intuition hold up? And what of two neural networks some distance apart in the loss landscape that have both gotten close to the global minima? Can we find a way to walk from one to the other without leaving the ridge?
To begin our investigation into the structure of these "ridges", we viewed them as comprising the [Rashomon manifold](https://arxiv.org/abs/1908.01755), the subset of the weight space comprised of models that minimize the loss function. In a 2018 paper, [Garipov et al.](https://arxiv.org/abs/1802.10026) demonstrated that a simple curve is often sufficient to connect two points within the Rashomon manifold.
In our research, we applied the technique of Garipov et al. to path-connect, within a shared weight space, two models that have been trained to be optimal (at an image recognition task with respect to a given dataset). Then, we collected local data along the connecting path. One such local datum is the Hessian of the loss function with respect to the parameters. Its eigendecomposition tells us which directions to travel locally if we want to leave the Rashomon manifold, as well as which directions to travel locally if we want to stay within it. We present a formal description below.
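As a sketch of that curve-finding procedure (a toy version under stated assumptions: the two "trained models" and the circular zero-loss valley below are illustrative stand-ins, and real applications train the control point on network weights with SGD rather than numerical gradients), one parametrizes a quadratic Bezier curve between the two endpoints and optimizes its control point so that the loss along the curve stays low:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy loss whose zero-loss set is a connected "ridge": the unit circle.
def loss(w):
    return (np.linalg.norm(w) - 1.0) ** 2

w1 = np.array([1.0, 0.0])   # "trained model" 1 (zero loss)
w2 = np.array([-1.0, 0.0])  # "trained model" 2 (zero loss)

# Quadratic Bezier curve phi(t) = (1-t)^2 w1 + 2t(1-t) theta + t^2 w2,
# with a trainable control point theta, as in Garipov et al. (2018).
def curve(theta, t):
    return (np.outer((1 - t) ** 2, w1) + np.outer(2 * t * (1 - t), theta)
            + np.outer(t ** 2, w2))

theta = np.array([0.0, 0.5])  # start near the straight line, off-axis to break symmetry
lr, eps = 0.1, 1e-5
for _ in range(500):
    t = rng.uniform(size=8)   # sample points along the curve each step
    grad = np.zeros(2)
    for i in range(2):        # numerical gradient of the mean loss w.r.t. theta
        d = np.zeros(2); d[i] = eps
        up = np.mean([loss(p) for p in curve(theta + d, t)])
        down = np.mean([loss(p) for p in curve(theta - d, t)])
        grad[i] = (up - down) / (2 * eps)
    theta -= lr * grad

ts = np.linspace(0, 1, 21)
print("max loss, straight line:", max(loss((1 - s) * w1 + s * w2) for s in ts))
print("max loss, learned curve:", max(loss(p) for p in curve(theta, ts)))
```

On this toy landscape the straight segment climbs out of the valley (loss 1 at the midpoint), while the learned curve bends around the ring, which is exactly the "stay on the ridge" picture above.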
Theory
======
Consider a neural net.
Let w=(wi)1≤i≤d∈W
denote the weight parameters of the trained model yielded by the neural net.
Let L:Rd→R denote the loss function, a real-valued function on the parameter space that the model has been trained to minimize. Assume the loss function L is "nice and smooth"; for example, a neural net that uses a skip architecture tends to produce a nice, smooth loss landscape.
**We should expect the local minima of a nice loss function to be divided into connected "ridges."** Formally, if the loss function is smooth and moreover a submersion, then every level set L−1(c) is a manifold. In particular, the level set of optimal models L−1(cmin) is a manifold, whose connected components are the "ridges." A *manifold* is a geometric shape that looks locally, but not necessarily globally, like Euclidean space (like Rn for some finite n, which is the *dimension* of the manifold). An example of a manifold is a circle, which locally looks like a one-dimensional line, but not globally. A *smooth manifold* is a manifold without any kinks or corners. Contrast a circle, which is a smooth 1-dimensional manifold, with a star outline, which is a non-smooth 1-dimensional manifold.
[Figures: a smooth manifold; a non-smooth manifold.]

Of course, the manifold comprised of optimal models L−1(cmin) is going to be a lot more complicated and higher-dimensional. The following example is illustrative of how difficult it is in general to *globally* visualize manifolds, even if they are nice spaces that are *locally* Euclidean.
[Figure: a more complicated manifold]

So OK, let's say for the sake of argument that the set of optimal models L−1(cmin) is a (locally) nice manifold, even if it's difficult to visualize. What do we know about its structure?
Empirical work suggests the following pattern. When the class of models W is underparametrized (e.g., low-dimensional linear regression, a small-scale neural net architecture), there can be many ridges of local minima. This means that the precise choices of the initial parameter vector and of the optimization method (e.g., gradient descent without dropout vs. with dropout) can affect the ridge that the model will fall into during training.
However, when the class of models is sufficiently overparametrized (e.g., a large-scale neural net architecture), we suspect that there is eventually just one connected ridge of local minima: the ridge of global minima. The dimension of this single ridge is smaller than the dimension d of the whole weight space, but still quite high. Intuitively, the large number of linearly independent zero-loss directions at a global minimum allows many opportunities to path-connect towards a local minimum while staying within the ridge, making all local minima globally minimal.
There are two ways to think about the ridge of global minima. The first way, common in the literature (see, for example, [Semenova et al.](https://arxiv.org/pdf/1908.01755.pdf) and Section 9 [here](https://projecteuclid.org/journals/statistics-surveys/volume-16/issue-none/Interpretable-machine-learning-Fundamental-principles-and-10-grand-challenges/10.1214/21-SS133.full)), is to consider the subset of models in the parameter space that achieve sufficiently low loss. This subset is called the Rashomon set:
WRashomon = {w ∈ W : E(x,y) L(w(x), y) ≤ cthreshold}
Let cthreshold = cmin + ε for a positive ε, where cmin is the global minimum. If ε is sufficiently small, the Rashomon set becomes a thin, low-volume subset of the parameter space W.
We propose a second way. We consider the Rashomon set as the set of all global minimizers,
Woptimal=L−1(cmin).
It is plausible that the level set Woptimal of global minimizers is a connected smooth submanifold. (This is often true in differential geometry. As long as L:W→R is a "nice" type of map called a *submersion*, every level set L−1(c) is a smooth embedded submanifold of the domain W.)
We call the submanifold Woptimal ⊂ W of optimal models the *Rashomon manifold*, to distinguish it from the Rashomon *set* of the first definition. The two definitions are closely related. The Rashomon *set* of the first definition,
WRashomon = L−1([cmin, cmin+ε]) = ⋃c∈[cmin,cmin+ε] L−1(c),
is a "thickening" of the Rashomon *manifold* Woptimal=L−1(cmin) denoting the second definition.
How might this help deep-learning interpretability? By knowing the data of the Rashomon manifold Woptimal, we can hope to find an element in the intersection Woptimal∩Winterpretable for a subset Winterpretable⊂W of interpretable models in the total model space. The neural-net model corresponding to this element would be simultaneously interpretable and optimal.
If the subset Winterpretable of interpretable models is also "nice" in the differential-geometric sense (say, also a smooth submanifold of W), then the intersection Woptimal∩Winterpretable is also similarly "nice." This means that we may be able to use tools from differential geometry to probe for an element in this "nice" intersection: an example of a model that is simultaneously interpretable and optimal. These differential-geometric tools include bundles, vector fields, differential forms, parallel transport, and flows.
One potential choice of a subset of interpretable models Winterpretable which is geometrically "nice" is the submanifold of models which are prunable in a certain way. For example, the submanifold defined by the system of equations wj=wj+1=⋯=wk=0, where wj,…,wk are the output connections of a given neuron n, is comprised of models which can be pruned by removing neuron number n. Thus, we may be able to maximally prune a neural net (with respect to the given dataset) by using an algorithm of the following form (a code sketch follows the list):
1. Find in the Rashomon manifold an optimal model that can be pruned in some way.
2. Prune the neural net in that way.
3. Repeat Steps 1 and 2 until there are no simultaneously prunable and optimal models anymore.
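A rough sketch of this loop in Python; `rashomon_search`, `model.neurons()`, and `model.remove_neuron()` are hypothetical placeholders for machinery that does not exist off the shelf:

```python
def prune_while_optimal(model, rashomon_search):
    """Iteratively prune a network while staying on the Rashomon manifold.

    rashomon_search(model, neuron) is a hypothetical search routine: it looks
    for an optimal model (on the Rashomon manifold) whose output connections
    from `neuron` are all zero, returning it or None.
    """
    while True:
        found = None
        for neuron in model.neurons():            # hypothetical API
            candidate = rashomon_search(model, neuron)
            if candidate is not None:
                found = (candidate, neuron)
                break
        if found is None:
            # No simultaneously prunable and optimal model remains.
            return model
        model, neuron = found
        model = model.remove_neuron(neuron)       # safe: its outputs are zero
```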
It is consequently plausible that studying the Rashomon manifold will be crucial for deep-learning interpretability.
How can we study the Rashomon manifold? One way is to take the Hessian of the loss function L at a point w∈Woptimal and find the number of zero eigenvalues: the number of linearly independent zero-loss directions when locally starting from w. This number is the dimension of the Rashomon manifold. (See [Sagun et al](https://arxiv.org/pdf/1706.04454.pdf). for an example of this approach.)
To see why this is true, let us investigate what the Hessian is, and why it is important for probing local selection pressures at an optimal model.
The Hessian of a function L(w) is defined by the matrix of second derivatives,

H[L](w) = (∂²L/∂wi∂wj) for 1 ≤ i, j ≤ d.
A three-dimensional example is helpful. Consider the submanifold of W=R3 cut out by the loss function L(x,y,z)=3x2+5y2. The global minimum of L is cmin=0, and the Rashomon manifold, given by
Woptimal=L−1(cmin)={(x,y,z)∈W:x=y=0,z is arbitrary},
is the z-axis.
[Figures: the Rashomon manifold Woptimal = L−1(0) is the z-axis; the Rashomon set with ε = 0.5, WRashomon = L−1([0, 0.5]), is a thickened ellipse cylinder; the level set L−1(0.5) is the boundary of that ellipse cylinder, hollow rather than thickened.]

Choose any optimal model on this line, say p=(x,y,z)=(0,0,1.7). Since this point is on the Rashomon manifold, we have ∇L(p)=0; the gradient vanishes at this point p. So, the dominant local effects, the dominant selection pressures, are second-order. At p, a small perturbation in the direction of any vector with zero in the z-entry will result in an accelerating penalty to the loss function. The acceleration rate is 2⋅3=6 when going along either the positive or the negative x-direction. The acceleration rate is 2⋅5=10 when going along either the positive or the negative y-direction. These directions correspond to the major and the minor axes of the ellipses you get by looking at level sets L−1(cmin+ε)=L−1(ε) for ε>0 small (while keeping the zero-loss direction's variable, z, constant).
And indeed, the ellipse axes' directions and lengths are precisely captured by the Hessian matrix at p,

H[L](p) = diag(6, 10, 0) =
⎛ 6  0  0 ⎞
⎜ 0 10  0 ⎟
⎝ 0  0  0 ⎠.
The eigenvectors vx=(1,0,0) and vy=(0,1,0) represent the directions of the axes of the local quadratic approximation to the loss, and their corresponding eigenvalues λx=6 and λy=10 represent the magnitudes of the selection pressures along these axes. The zero-eigenvalue eigenvector vz=(0,0,1) is the sole zero-loss direction at the point p: the sole direction starting from p that remains contained in the Rashomon manifold. This is because the Rashomon manifold of L is one-dimensional (indeed, it is the z-axis).
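This worked example is easy to check numerically; a minimal numpy sketch, using the fact that the Hessian of L(x,y,z)=3x²+5y² is the constant matrix diag(6, 10, 0):

```python
import numpy as np

# Hessian of L(x, y, z) = 3x^2 + 5y^2 (the same at every point).
H = np.diag([6.0, 10.0, 0.0])

eigenvalues, eigenvectors = np.linalg.eigh(H)
print(eigenvalues)  # [ 0.  6. 10.]

# The count of (near-)zero eigenvalues is the dimension of the
# Rashomon manifold at this point: here 1, matching the z-axis.
print(np.sum(np.isclose(eigenvalues, 0.0)))  # 1
```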
The Hessian of a higher-dimensional loss function essentially behaves in the same way. The eigenvectors with positive eigenvalue serve as (hyper)ellipsoid axes that span the nonzero-loss directions. When moving in these directions, one experiences an accelerating penalty to the loss function.
The eigenvectors with zero eigenvalue span the zero-loss directions. When moving in these directions, one remains in the Rashomon set.
And since every point of the Rashomon manifold is a global minimizer, and therefore also a local minimizer, negative eigenvalues do not occur. In math terminology, the Hessian matrix is *positive-semidefinite* at every local minimizer.
The Hessian at a given point on the Rashomon manifold is an example of local data. The dimension of the Rashomon manifold is an example of global data. Generally, we can hope to compute sufficiently many local and/or global data of the Rashomon manifold, enough to (1) prove or disprove the nonemptiness of Woptimal∩Winterpretable, and if the former is true, (2) construct one such point in Woptimal∩Winterpretable. This can potentially be done via tools from differential geometry, or from the more broadly applicable tools of topology. Successfully doing so would yield a simultaneously optimal and interpretable model for the given dataset.
Procedure
=========
As mentioned above, we first reproduced the results of [this](https://arxiv.org/abs/1802.10026) paper (Garipov et al.) on connectivity between optima, using the smaller PreResNet20 architecture (~270k parameters).
[Figure: contour plot of the loss surface, with a Bézier curve connecting two independently trained optima]
This is a [Bézier curve](https://en.wikipedia.org/wiki/B%C3%A9zier_curve) connecting 2 separately trained models along the same ridge in the loss landscape. The dashed line is a straight line path between the 2 weight vectors. The contour plot was made by calculating losses in a grid pattern.
For different points along this curve we calculated:

* The Jacobian of outputs with respect to inputs. This can be used for linear probes of functionality and may help identify how the network changes across the ridge.
* The Jacobian of the loss with respect to the parameters (as expected, its norm was close to 0).
* The norms of the Hessian-vector products in the straight-line path direction (the x-axis on the graph), the detour direction (the y-axis on the graph), and the tangent direction to the curve, approximated by the finite difference f(t+0.001)−f(t−0.001).
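For reference, these Hessian-vector products can be computed without materializing the full Hessian, via a double backward pass. A minimal PyTorch sketch, assuming `loss` is a scalar already computed from the tensors in `params` and `vec` is a flat vector whose length equals the total parameter count:

```python
import torch

def hessian_vector_product(loss, params, vec):
    # First backward pass: gradient with the graph kept for re-differentiation.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    # Second backward pass: d/dw (grad . vec) equals H @ vec.
    hvp = torch.autograd.grad(flat_grad @ vec, params)
    return torch.cat([h.reshape(-1) for h in hvp])
```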
Future Pathways
===============
Unfortunately, the nature of a sprint means we couldn't dedicate as much time to this as we would have liked. To investigate the Hessian within the Rashomon set more closely we need significantly more computing power, or perhaps a more careful analytic approach.
We are also very keen to explore the structure of the Rashomon set. Specifically, we believe much can be gleaned from exploring symmetries in the set.
* Take more Hessian-vector products of models moving in the direction of the vector, and combine these values in some way to get a better estimate. The Hessian-vector product in the tangent direction to the constant-loss curve was smaller than in the other directions, but not zero as we expected; this may be due to noise.
* Use linear probes on the Jacobians of outputs with respect to inputs to identify functional similarities and differences between the models on the curve. The paper by Garipov et al. showed that an ensemble of these models performs better than an individual one, which implies that some functional differences exist and the ridge is not simply due to irrelevant parameters.
* Compute the Jacobian of outputs with respect to parameters. (We couldn't finish this during the sprint because CUDA ran out of the 6 GB of GPU memory on my laptop.)
* Use PyHessian to get the top eigenvalues, the trace, and the eigenvalue density of the Hessian of the loss with respect to the parameters. We are currently hitting a memory leak and an infinite loop of parameters self-referencing their gradients; this worked once in the Colab notebook, so I suspect a Windows-specific bug. Below is the graph it generated.
[Figure: Hessian eigenvalue density generated by PyHessian]
* Using the density graph at different points, we should be able to determine whether the number of dimensions on the ridge remains constant.
* Another possibility is using the fast-ensembling method of a cyclic learning rate schedule to move along the ridge by leaving it and converging back to it in a different place. This should get models in many directions on the ridge instead of following a single curve.
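For anyone retrying the PyHessian step, a minimal usage sketch based on our reading of the library's interface (`model`, `criterion`, `inputs`, and `targets` are assumed to exist):

```python
from pyhessian import hessian

# Build the Hessian computation object for one batch of data.
hessian_comp = hessian(model, criterion, data=(inputs, targets), cuda=True)

# Top eigenvalues/eigenvectors, a trace estimate, and the spectral density.
top_eigenvalues, top_eigenvectors = hessian_comp.eigenvalues(top_n=5)
trace = hessian_comp.trace()
density_eigen, density_weight = hessian_comp.density()
```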
***
Familiar Finance
People seemed to dig my plan to Get Rich Slowly and declared that not-being-poor is something they plan to do. So, without stepping on any toes or mustaches, it’s time for part 2 of the “Putanumonit pretends to be a financial advice blog” series. To make sure I do this right, let’s first conduct a comprehensive review of the financial advice already on offer by the best sources online.
* The Simple Dollar – Thinking of loaning money to family? Don’t do it!
* Wall Street Journal – The perils of lending to family
* Money Crashers – 10 reasons why you should NOT lend money to friends and family
* Go Banking Rates – Why lending money to friends and family is a bad idea
* Wall Street Journal again, in case you weren’t paying attention – Never lend money to a friend. Repeat: just don’t do it
Weird, one could almost think that financial advice on the internet was sponsored by retail banks. The same banks that only recently replaced family as people's main source of financing, and are making a nice profit doing it.
All these articles talk about the downsides of lending or borrowing money from people you know, although mostly in an anecdotal way. What these articles fail to mention, however, are the downsides of not lending and borrowing from people you know. I have an anecdote regarding those as well.
5 years ago, I talked with a business school classmate who owed about $160,000 in student loans. An American with an MBA is practically guaranteed to earn enough to pay that back, but until he does he’s paying 7% annual interest for the pleasure. Meanwhile his parents, both employed and many years from retirement, held a similar amount of money in US treasury bonds which earned them around 2% interest. The difference between the total interest my friend’s family was paying and getting was 5% of $160,000, or $8,000. By not lending money directly to their son, my friend’s family was setting eight thousand dollars on fire every single year.
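The arithmetic, spelled out with the post's numbers:

```python
loan = 160_000
student_rate = 0.07   # interest the son pays on his MBA loans
treasury_rate = 0.02  # interest the parents earn on their treasuries

# What the family as a whole loses each year by not lending internally.
annual_burn = loan * (student_rate - treasury_rate)
print(annual_burn)  # 8000.0
```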
$8,000 a year is quite a bit of
***
[META] Retributive downvoting: Why?
Several people posted recently in a thread on women, mostly espousing feminist views - only to find that someone had declined to respond to their post, but instead browsed their history and downvoted every single comment or article they had ever posted.
I have two questions:
1. Why would you come to a site like this and pollute the karma system? How does it make you smarter? How does it make anyone else on the site smarter?
2. What would be a good technical workaround? In my mind, some system that detects mass-downvoting and flags a user for review would be preferable, but what should happen then? Should the system be more lenient to higher-karma posters? Who should perform the review process? What should be done with those whom the reviewer ascertains are abusing the karma system? I would prefer some kind of lesson that is more corrective than retributive - it seems to me that people who would perform this behavior are exactly the sort of people who need some of the lessons that this site provides. Any ideas?
***
Fixed points and free will
In his most recent appearance on the 80,000 Hours podcast, Bryan Caplan gave the following argument in favor of libertarian free will:
> One that I’m very tempted to — I know there’s a stock answer to this, although still I think that it’s a pretty good thought experiment — is, if the physics textbook is right, then you should be able to give me an exact prediction of exactly what I will do, unconditional. So you shouldn’t have to say, “I can only give you a conditional prediction” — you should be able to give me an unconditional prediction about whether I’m going to raise my arm in five seconds...
> Now the thought experiment is to tell me that unconditional prediction — and you should be able to go and tell me the prediction in such a way that it incorporates all my reactions and secondary reactions and so on, all the way to infinity. And now guess what I’m going to do? I’m going to do the opposite of what you said I’m going to do. All right? Again, it’s not ironclad, and I know there’s a lot of people who say, no, no, no, feedback loops, and it doesn’t count. But it sure seems like if determinism was true, you should be able to give me unconditional predictions about what I’m going to do. And then intuitively, it seems like I could totally not do them.
At first I thought this argument was obviously misguided, because we can never do this for a simple program which takes in one bit as input and then prints its complement. There's no reason that in general a computable function must have a fixed point[1], so even if we know the source code of a program perfectly, we may not be able to announce a prediction to the program such that the program ends up doing what we said it would do. However, the issue actually turns out to be more subtle than this.
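To make the base case concrete, here is a minimal sketch of that bit-complement program; no prediction p satisfies contrarian(p) == p, so there is no fixed point we could truthfully announce:

```python
def contrarian(prediction: bool) -> bool:
    # Do the opposite of whatever was predicted.
    return not prediction

# No fixed point exists over the one-bit input space.
assert all(contrarian(p) != p for p in (True, False))
```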
Let's make this more formal. If we have a deterministic agent such that we tell the agent what we think it will do and then the agent takes an action conditional on that inf
***
Imaginary queues
I thought of an interesting idea, then searched to see if anyone had done it. It seemed like not, so I wrote the below post. Then I looked once more, and found a few instances (finding one makes it easier to find others). I still think the proposal is good, and I have little idea whether anyone is doing it competently. However it is not so novel as hoped, and one must wonder why such things have not been successful enough for me to have heard of them (even after trying unusually hard to hear about them).
Instead of modifying the post, Robin Hanson suggested I put it up more or less as it was before I knew such a thing existed, and then I can compare the details of my suggestion to the details of systems that are real (or real minus any widespread adoption). This can make for a rare test of how different things look if you have an actual business providing a service, versus a daydream about an actual business providing a service. This seems like a good idea to me, so here you have it. The changes I made were adding this part, and removing a half-written couple of sentences at the end about how there didn’t seem to be anyone doing this, though there are some related things. I mean to compare it with a real system another time.
***
Consider how much time people spend in queues. Here is a proposal for reducing that dramatically, with a smartphone app.
The app controls a virtual queue. You tell your phone you want to join the queue. You continue shopping or whatever. Your phone pings you when you are near the front of the queue. You wander over and get in the physical line just before it is your turn.
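A toy sketch of the core mechanics (purely hypothetical, not any existing product): a queue plus a notification threshold.

```python
from collections import deque

class VirtualQueue:
    def __init__(self, notify_at=3):
        self.notify_at = notify_at  # ping users within this many places of the front
        self.waiting = deque()

    def join(self, user_id):
        self.waiting.append(user_id)
        return len(self.waiting)  # the user's starting position

    def serve_next(self):
        if not self.waiting:
            return None
        served = self.waiting.popleft()
        # Everyone just moved up one place; ping those near the front.
        for position, user in enumerate(self.waiting, start=1):
            if position <= self.notify_at:
                self.ping(user)
        return served

    def ping(self, user_id):
        print(f"{user_id}: you're near the front, head over now")
```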
There are many details to iron out here, but there seem to be plausible solutions:
1. How can people smoothly use this when there will be a large fraction of people who don’t have it?
In the simplest case, places that have multiple queues could have one dedicated one. Where this is not practical, the owner of the queue might have a smartphone or tablet at the fron
***
How to Be Helpful to Multiple People at Once.
Cognitive Science 44 (2020) e12841
©2020 Cognitive Science Society, Inc. All rights reserved.
ISSN: 1551-6709 online
DOI: 10.1111/cogs.12841
Vael Gates (a), Thomas L. Griffiths (b), Anca D. Dragan (c)
(a) Department of Psychology, University of California, Berkeley
(b) Department of Psychology, Princeton University
(c) Department of Electrical Engineering and Computer Sciences, University of California, Berkeley
Received 12 September 2019; received in revised form 31 March 2020; accepted 6 April 2020
Abstract
When someone hosts a party, when governments choose an aid program, or when assistive robots decide what meal to serve to a family, decision-makers must determine how to help even when their recipients have very different preferences. Which combination of people's desires should a decision-maker serve? To provide a potential answer, we turned to psychology: What do people think is best when multiple people have different utilities over options? We developed a quantitative model of what people consider desirable behavior, characterizing participants' preferences by inferring which combination of "metrics" (maximax, maxsum, maximin, or inequality aversion [IA]) best explained participants' decisions in a drink-choosing task. We found that participants' behavior was best described by the maximin metric, describing the desire to maximize the happiness of the worst-off person, though participant behavior was also consistent with maximizing group utility (the maxsum metric) and the IA metric to a lesser extent. Participant behavior was consistent across variation in the agents involved and tended to become more maxsum-oriented when participants were told they were players in the task (Experiment 1). In later experiments, participants maintained maximin behavior across multi-step tasks rather than shortsightedly focusing on the individual steps therein (Experiment 2, Experiment 3). By repeatedly asking participants what choices they would hope for in an optimal, just decision-maker, and carefully disambiguating which quantitative metrics describe these nuanced choices, we help constrain the space of what behavior we desire in leaders, artificial intelligence systems helping decision-makers, and the assistive robots and decision-makers of the future.
Keywords: Fairness; Preferences; Assistive artificial intelligence; Maximin; Modeling
1. Introduction
Consider a schoolteacher trying to plan a field trip. Some students learn best kinetically, others enjoy verbal challenges, and some would be overwhelmed by new locations. If there is no one action that will maximally satisfy everyone, what is the teacher to do? Now consider a concerned citizen determining how to donate their money. They furrow their brow, staring at the screen: donate to the most needy, the organization where donations will be matched, or the recipient with the tightest time constraints? Next consider a high-level government worker, puzzling over who to prioritize, faced with an array of programs that will benefit everyone to some extent but some more than others. Finally consider an autonomous household robot, trying to determine what to make for the main family meal with one kid who only wants to eat orange foods and one parent who is vegan. What should all of these decision-makers do?
In today's society, as preference-aggregation problems become more complex, we can turn to tools from computer science to address them. If modern artificial intelligence systems know about the preferences of their recipients, they can use vast computational resources to optimize: airplane scheduling and ride-share services are examples of using artificial intelligence to optimize many people's preferences. However, these consumer services optimize based on the principles of first-come-first-serve, more resources for more money, and efficiency. Individuals put in a bid and they receive some service. But in other types of situations, people advocate for others rather than themselves. People choose which organizations to donate money towards, and governments strive to serve their entire populations with aid programs. These situations are just as complex and deserving of computational analysis, but they will need a different optimization procedure: one that likely incorporates fairness.
To develop artificial intelligence tools to help solve coordination problems, we need to know what the ground truth is. What internal algorithms do people use to make decisions that benefit others? People do not always act in the ways they say they do, but by observing their ground truth behavior, we can gain a quantitative understanding of what behaviors people think are right. In computer science, there are existing tools for inferring the values, preferences, and utilities that motivate people's actions and choices. One standard method, inverse reinforcement learning (Ng & Russell, 2000), has been used to infer utility functions from behavior in both the robotics community (e.g., Kuderer, Gulati, & Burgard, 2015, in which people's driving styles were inferred from demonstrations) and the computational cognitive science community (e.g., Baker, Saxe, & Tenenbaum, 2009, in which intended goals were inferred from partially completed paths). There are thus established methods to learn human preferences from actions: Given examples of a person's driving, we are advancing in our ability to tell an autonomous vehicle how to bring them to the store; given examples of a person tidying up their home, we are becoming able to tell a robot what to clean. Yet these methods only apply to a single person's preferences at a time. What should be done when there is more than one person, and there is a combination of utilities to optimize? Should an artificial intelligence sum up all of its users' utilities, or would it be more "fair" to minimize the upset of the unhappiest member of the group? When we have assistive robots who must act as decision-makers themselves, how should they be programmed? We think the first step to answering these questions is to quantitatively determine how people—with their intuitions about fairness and efficiency and what is good—behave.
In this paper, we set out to map the human sense of compromise, and the challenge of the benign dictator: how to solve the age-old problem of acting in everyone's best interest without a vested interest of one's own. It is a challenging problem that has puzzled philosophers, arbitrators, and governors for centuries. The question of what should be done when people disagree is essential, and to build tools that can help solve it, we must understand the ground-truth of what we want done. This problem becomes increasingly important not only to advise decision-makers, but to develop artificial intelligence systems to help decision-makers and eventually create artificial intelligence agents that can make choices that match what we do ourselves. In the face of this unsolved problem, we turn to psychology, developing a quantitative model of what people think should be done when an agent can take only one action that brings different utilities to different people.
Many researchers have thought about these questions. Their work often falls under the heading of fair allocation, which investigates how items should be divided among people. The study of fair allocation and fair division (Brams & Taylor, 1996; Konow, 2003) spans many fields, including those of social choice (Gaertner & Schokkaert, 2012; Moulin et al., 2016), neuroscience (Hsu, Anen, & Quartz, 2008), artificial intelligence (Dickerson, Goldman, Karp, Procaccia, & Sandholm, 2014), justice and policy (Fleurbaey, 2008; Gollwitzer & van Prooijen, 2016; Konow, 2003), and behavioral economics and game theory, especially within the dictator, ultimatum, and estate games (Ashlagi, Karagözoğlu, & Klaus, 2012; Chmura, Kube, Pitz, & Puppe, 2005; Dreber, Fudenberg, & Rand, 2014; Fisman, Kariv, & Markovits, 2007; Huck & Oechssler, 1999; Nowak, Page, & Sigmund, 2000; Pálvölgyi, Peters, & Vermeulen, 2010). Empirical work on fairness is similarly wide-ranging, including work on cultural differences (Gaertner, Jungeilges, & Neck, 2001; Jungeilges & Theisen, 2008; Schäfer, Haun, & Tomasello, 2015; Schokkaert & Devooght, 2003), the developmental trajectory of fairness (Wittig, Jensen, & Tomasello, 2013), the impact of other-regarding preferences in games (Austerweil et al., 2015; Bolton, Brandts, Katok, Ockenfels, & Zwick, 2008), and the weighting of distributive allocation versus the procedure by which items are allocated (Cooney, Gilbert, & Wilson, 2016; Dupuis-Roy & Gosselin, 2011). Wanting quantitative measures, researchers have also developed metrics to quantitatively evaluate the fairness of solutions. Several of these metrics have been compared empirically (Dupuis-Roy & Gosselin, 2009; Engelmann & Strobel, 2004; Fehr, Naef, & Schmidt, 2006; Herreiner & Puppe, 2007) and include, for example, envy-free fairness, proportional fairness, and inequality aversion (IA). The tradeoffs between fairness and efficiency (maximizing the joint utility of all agents) are often considered and have been evaluated theoretically (e.g., Bertsimas, Farias, & Trichakis, 2011, 2012).
Much work on fair allocation, however, falls outside our purview, as it considers a self-interested agent deciding how to distribute resources between themself and others. When participants have a personal stake in the situation, biases can appear in their interpretation of what is fair or what behavior they exhibit (Babcock, Loewenstein, Issacharoff, & Camerer, 1995; Beckman, Formby, Smith, & Zheng, 2002; Binmore, 1994; Cappelen, Nielsen, Sørensen, Tungodden, & Tyran, 2013; Croson & Konow, 2009; Ellingsen & Johannesson, 2001; Gächter & Riedl, 2006; Herrero, Moreno-Ternero, & Ponti, 2010; Konow, 2000, 2009; Traub, Seidl, Schmidt, & Levati, 2005). This paper focuses on the perspective of a third party, the scenario in which an impartial decision-maker is choosing how to best help a group of people.
As such, our experiments are most similar to the subset of fair allocation experiments in which the decision-maker's utility is tied only to the utilities of the agents it is serving (e.g., selfless artificial intelligences built to serve human needs). In one such set of studies, participants acted as dictators to fairly allocate goods when the dictator's own payoffs were fixed (Engelmann & Strobel, 2004; Fehr et al., 2006). Another two related papers investigated the division of multiple goods by an uninvested agent (Herreiner & Puppe, 2007; Yaari & Bar-Hillel, 1984). In these cases, participants chose how to distribute multiple items across agents whose utility functions were represented in payoff matrices. Our experiments have a similar structure in that they consider preferred allocations over payoff matrices.
Our work is distinct from those previous fair allocation studies, however, in the problem it is solving: Here, the uninvested agent can take one action that has consequences for multiple individuals. Our problem formulation is more general than the "fair allocation" problem, since the idea of "choosing actions that have implications across multiple utility functions" applies to many situations outside the distribution of items. Specifically, the problem setup we describe, with utilities over any possible action, is more general than having utilities over specific items being distributed. While we are still working with abstract entities in this work, payoff matrices, and in this work only working with positive utilities, this problem formulation admits a range of scenarios. For example, a schoolteacher trying to determine what field trip to bring students on is an example of our problem, but not a fair allocation problem. Necessarily, our formulation also encompasses the class of fair allocation problems, including problems like sorting multiple indivisible items into "bundles" to be distributed to individuals (e.g., Herreiner & Puppe, 2007). In this case, each "action" is to give each agent each sorted bundle, and if necessary each of these agent-specific actions could be characterized as a single larger action that could be repeated over multiple allocation decisions.
It is also worth emphasizing how the question posed in this paper differs from others in the literature, even those that do not involve a self-interested party. For one, this paper does not singularly employ the zero-sum scenarios present in fair allocation studies. In fair allocation studies, participants distribute a set number of items between agents, and if one agent receives an item, another agent loses it. In this work, participants choose to create an item that can bring utility to multiple agents. For these questions, high utility for one agent does not necessarily imply low utility for another.

In addition, this paper asks participants "what should be done" rather than what is "fair." Empirically, "fairness" can correspond more to ideas like equality and helping the needy, while questions like "in which society would you like to live?" can evoke responses that take into account the trade-off between equality and maximizing the benefit to the group. Fairness can also correspond to distributional fairness or procedural fairness—examining the fairness of the final allocation solutions or the process by which items are allocated. This work focuses not on fairness but on what decisions participants believe a third party should make, in terms of creating a shared item to be distributed.
Similarly, the question of "what should be done" is incredibly dependent on context (see Konow, 2003; Konow & Schwettmann, 2016 for reviews), and our question of what an uninterested artificial intelligence should do constrains the space of those contexts enormously. In many decision-making tasks, arbitrators are contending with preferences for honesty (Dana, Weber, & Kuang, 2007), altruism (Andreoni, 1989; Pelligra & Stanca, 2013), cooperation (Dreber et al., 2014), previous experiences, friendship, spite (Beckman et al., 2002; Levine, 1998), reciprocity (Berg, Dickhaut, & McCabe, 1995; Bolton & Ockenfels, 2000; Cox, 2004; Dufwenberg & Kirchsteiger, 2004), and any number of highly relevant factors of what is fair and what is preferred. We are interested in what people think should be done in the absence of such considerations: what they would like their leaders or assistive artificial intelligence systems to advise before previously existing social relationships come into play. Artificial intelligences are by default honest and not more altruistic or cooperative than preferred, so they offer a unique opportunity to act as independent advisors free from spite, obligation, or reciprocity.

Our question could integrate important factors like need or desert/merit (e.g., Alesina & Angeletos, 2005; Fleurbaey, 2008; Fong, 2001; Hoffman & Spitzer, 1985; Konow & Schwettmann, 2016; Nord, Richardson, Street, Kuhse, & Singer, 1995; Schokkaert & Devooght, 2003; Schokkaert & Lagrou, 1983; Schokkaert & Overlaet, 1989), but the simplified task we choose does not involve agents who are needy or who have worked harder than others, despite this being a major factor in society. Another way to extend our study would be to use a paradigm that uses bundles or collections of objects, either for allocation or as creating these shared resources (both can be construed as an action). Bundles allow investigation of envy-freeness, and also proportionality, like in "claims problems" when agents may each have a claim to a resource that is greater than the resource is worth (Bosmans & Schokkaert, 2009; Thomson, 2003). Additionally, Konow (2003) discusses the differences between subjective values and objective values, which we simplify here by directly presenting agent utilities. Given the difficulty of the question of what should be done, we used a simplified task and save these questions for future work. For our question and paradigm, we consider it reasonable to limit our literature review to those topics that are most aligned with our question, though we provide references to the reader for additional study.
Our task, then, is to determine what an uninvested, autonomous agent should do in a decision-making scenario with multiple recipients. To investigate what defines a "helpful" action when balancing multiple utilities, we created the following problem setting: We asked what decision a manager should make in choosing a drink for two guests. Participants acted as the manager, repeatedly making choices that we used to develop a model of what people think are good decisions. Drawing on the literature in economics, computer science, and philosophy, we considered four metrics as hypotheses to capture participants' behavior: maximax, maxsum, maximin, and IA. We determined which combination of metrics best explained participants' behavior by statistical analyses and evaluating the output of a maximum entropy inverse reinforcement learning (MaxEnt) model. Having inferred the preferred metrics, we then used these metrics to compare participants' behavior across conditions.
1.1. Metrics
We now delve into the specifics of our metrics and what stimuli they were applied to. Many different fields have investigated the question of how people make decisions when balancing multiple tradeoffs—for example, people could choose to maximize the group utility or distribute resources evenly. As such, several quantitative models describing "fair" behavior have been suggested. We investigated four metrics that were often used (often in combination) in previous studies to characterize human behavior (e.g., Brams, Edelman, & Fishburn, 2003; Dupuis-Roy & Gosselin, 2009; Engelmann & Strobel, 2004; Herreiner & Puppe, 2007). We do not believe these metrics span the space of reasonable preferences people may hold—people's algorithms for determining what to do in the world can be extremely complex—but we selected these metrics based on what has been widely and empirically observed in the literature. We applied these four metrics to a set of payoff matrices M: in each matrix Mm, two agents' utilities (A and B) were shown for each of four drink options (see Fig. 1). We investigated each of the metrics for these payoff matrices.

[Fig. 1. Example matrix Mm. Columns are options oj, while rows show each agent i's utility Ui. Here, the problem is to decide which drink to serve to both agents A and B.]

Intuitively, one method to select a shared option is to add up the utilities of all agents, and pick the option that maximizes this joint utility value. This is a metric called maxsum, which maximizes the happiness of the group rather than individuals. It is a Pareto optimal option. In Fig. 1, the best maxsum option would be the third cup, because when utilities of agents A and B are added together, these sums are as follows: 7 (first cup), 13 (second cup), 15 (third cup), and 14 (fourth cup). A different metric enforcing fairness is maximin: maximizing the utility of the person who is worst-off. Intuitively, this metric means making sure that no individual agent is very unhappy. In Fig. 1, the best maximin option would be the second cup, because when we look to which of the agents are worst-off in each of the pairs, these agents' utilities are as follows: 3 (first cup), 5 (second cup), 4 (third cup), and 2 (fourth cup), and the second cup leaves the worst-off agent with the highest possible utility. Another measure of fairness is IA: decreasing the difference in utilities between agents. Intuitively, this metric means that agents should be equally happy. In Fig. 1, the best IA option would be the first cup, because the agents' utilities will be maximally close to each other. A final possible measure is maximax: maximizing the highest utility, searching across both agents. Intuitively, this metric means that any one agent should be made as happy as possible. This option might not initially seem fair, but if presented with the opportunity for repeated choices, pleasing one agent each round may seem like the best solution. In Fig. 1, the best maximax option would be the fourth cup, because this cup allows one agent to have its highest possible utility. Note that in Fig. 1, the best example of each metric was a different cup, but in most of our payoff matrices, the best example of multiple metrics was the same cup. Thus, in this paper, we evaluated the four metrics of maximax, maxsum, maximin, and IA.
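For concreteness, all four metrics are one-liners over a payoff matrix. A short sketch with a made-up 2×4 matrix (not the actual Fig. 1 values), scoring each option by each metric; the IA score here uses the paper's F_IA formula given in the Method section below:

```python
import numpy as np

# Hypothetical payoff matrix: rows are agents A and B, columns are four cups.
U = np.array([[3, 5, 4, 2],
              [4, 8, 11, 12]])

scores = {
    "maximax": U.max(axis=0),                  # happiest agent per option
    "maxsum":  U.sum(axis=0),                  # joint utility per option
    "maximin": U.min(axis=0),                  # worst-off agent per option
    "IA":      U.prod(axis=0) / U.sum(axis=0)  # the paper's F_IA formula
}

for name, s in scores.items():
    print(f"{name}: best option = cup {s.argmax() + 1}, scores = {s}")
```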
Metrics like maximin and maxsum have a long history in the literature. The concept of maximin was popularized by Rawls (1971, 1974), who advocated for allocating resources to the least well-off individual. Harsanyi (1975) argued in favor of expected utilitarianism, loosely the maxsum principle, except in cases where the maximin choice was similar to the maxsum choice, as is the case in our experiments. There has been a strong focus on maximin mathematically and in economics (e.g., Amanatidis, Markakis, Nikzad, & Saberi, 2017; Barman & Krishna Murthy, 2017; Dubois, Fargier, & Prade, 1996; Escoffier, Gourvès, & Monnot, 2013; Kurokawa, Procaccia, & Wang, 2016; Procaccia & Wang, 2014), including applications to networking (e.g., bandwidth-sharing) (Salles & Barria, 2008). Choices aligned with the maximin, maxsum, and IA metrics are often compared in studies, and results are often mixed, with participants trading off between different metrics depending on the numbers and contexts involved (Ahlert, Funke, & Schwettmann, 2013; Charness & Rabin, 2002; Engelmann & Strobel, 2004; Faravelli, 2007; Fehr et al., 2006; Gaertner et al., 2001; Gaertner & Schwettmann, 2007; Konow, 2001, 2003; Konow & Schwettmann, 2016; Mitchell, Tetlock, Mellers, & Ordóñez, 1993; Ordóñez & Mellers, 1993; Pelligra & Stanca, 2013; Schwettmann, 2009, 2012). These studies differ in their experimental paradigms, and results have been subsequently different, even as the concept of tradeoffs between a few principles of justice has remained similar (Konow, 2003). These experiments reveal a few common difficulties as well. Many experiments attempt to target each metric in isolation, which does not account for the correlations between choices: A choice that maximizes the maximin metric also tends to rate highly on the IA metric and less well on the maxsum or maximax metrics. Additionally, experiments often present somewhat extreme choices and models contain variables that are occasionally confounded. We aim to address these difficulties in our study by presenting many nuanced choices to participants, and then using a computational model and statistical analyses to account for correlations and confounding between metrics. Using these tools, we gain the ability to distinguish the relative influence of metrics on sets of participant choices.
In this paper, we aim to address the question of what policies people would like decision-makers, and the assistive technologies assisting decision-makers, to have in the future. To this end, we focus on research studies that are aimed at non-interested third parties reasoning about helpful decision-making, the results of which are not always consistent. In these studies, participants are presented with options that implement various metrics and have to choose how to allocate items across agents (Engelmann & Strobel, 2004; Fehr et al., 2006; Herreiner & Puppe, 2007; Yaari & Bar-Hillel, 1984). Participants are often shown choices that are used to disambiguate between metrics, and because these choices are so distinct, participants may only make a single or small number of choices for any given prompt. We took a different approach with our work: On each prompt we presented many choices to participants, accepting the high amount of overlap among metrics necessary to describe participants' responses. We constructed a computational model to accommodate these correlations and used this model to reveal the relative contributions of different metrics across many probes of participant behavior. We tested the generalizability of our findings by ensuring the participant behavior was similar across different prompts. Additionally, previous work often focuses on short-term, single decisions; we investigated participants' intuitions in the repeated setting where they could make more long-term decisions. In summary, our work compares four common metrics in a setting where participants are making more choices than in previous work and the correlations among metrics describing those choices are accounted for. We use this higher resolution into the relative contributions of each of these metrics to directly compare them, and we probe many questions—including longer-term decision-making—to determine the generalizability of our findings. These improvements take place in a novel setting, in which participants are not being asked about how to allocate many items, but to create a resource to be shared among multiple agents. We thus present a novel test of how correlated metrics interact, in the context of how people feel third-party decision-makers should balance others' utilities: a problem formulation that will occur more and more often in technologies in the future.
In three experiments, we investigated the contributions of the metrics of maximax, maxsum, maximin, and IA in describing what people consider good or helpful behavior. In Experiment 1, we examined participants' choices in the drink task and determined whether these choices were consistent when the hypothetical decision-maker was described as a human or a robot, and when the agents receiving the resources were described as friends or strangers. Our results indicate that participants were consistent in using the maximin metric to make decisions (maximizing the utility of the worst-off agent) despite variations in the agents involved. In Experiment 2, we asked what decisions people would make when they had the opportunity to offer more than one drink to the same set of agents. We tested whether people would make choices while considering the entire multi-decision expected utility or focus on the individual decisions within. We found that participants reasoned over the entire multi-decision process, and the maximin metric could again describe their choices. In Experiment 3, we validated our results from Experiment 2 by presenting participants with the cumulative sum of the choices available in Experiment 2, and observing that when the multi-decision problem was condensed to a single instance again, participants reliably made choices described by the maximin metric.
2. Experiment 1: Which metrics describe people's behavior? Are they robust to changes in agent?
Here we tested which combination of metrics described participants' behavior on a drink-choosing task (Fig. 1). We tested a variety of different phrasings and altered scenarios to ensure that the results were consistent across changes in presentation. Specifically, we tested whether participants' choices changed depending on if the decision-maker was an artificial intelligence (a robot) or a human, whether the recipients of the drinks were friends or strangers, and whether the participant was stated to be one of the recipients of a drink. We might expect that a participant would perform differently in the "Robot" condition if they thought that what a robot should do in a manager role was different from what a human manager should do. For example, participants may think that artificial intelligences need to remain completely impartial or compute everything exactly, whereas humans should rely on gut instincts. Similarly, we might expect that participants would have different intuitions about a server giving drinks to strangers rather than friends. Perhaps participants would think that when serving friends, a server should worry more about joint happiness rather than making sure utilities were equitable, since friends could make it up to each other later, while the same would not be true of strangers. We were also interested in whether participants' opinions of what decision should be made would change if they themselves were receiving an item rather than hypothetical recipients. In Experiment 1, we were aiming to test whether the participants' empirical intuitions of good decision-making would extend across these variations in scenarios, which are important for the generalizability of our findings.
2.1. Method
2.1.1. Participants
Participants with U.S. IP addresses were recruited from Amazon Mechanical Turk across five conditions: "Nominal" (n=36, 0 participants excluded), "Robot" (n=35, 1 participant excluded), "Robot Friends" (n=35, 1 participant excluded), "Robot Strangers" (n=34, 2 participants excluded), and "Veil of Ignorance" (n=33, 3 participants excluded). Participants were paid between $2.50 and $3.00 for their participation. Participants were excluded if they failed the included attention check or indicated that they did not understand the experiment.
2.1.2. Stimuli
Stimuli were chosen such that participants would reason over a set of positive-utility actions, and the effects of each metric could be quantitatively isolated from these data. To keep the paradigm simple, utilities in the form of numbers in a table were used ("payoff matrices"), with utilities kept small to allow participants to use simple addition. To maximize the generalizability of the findings, several constraints on the matrices were put in place to ensure participants were reasoning over a wide, randomized range of independent matrices.
Participants viewed 20 of these payoff matrices (set of matrices M), one at a time. Each matrix Mm was 2×4, where the rows indicated the two agents, the columns indicated the four colored drinks, and each agent's utility for each drink was shown.

Matrices were generated by rejection sampling. Matrices were subject to the following general constraints. The sum of agents' utilities for each option oj, ∑_i Ui(oj), was constrained to be within 2 and 16, so that there would be a wide range of choices available but participants would only need to use simple addition. Within each matrix, columns (both agents' utilities for an option) were not allowed to repeat, including permutations within columns, so that there would always be four independent choices within a matrix. Moreover, within each matrix, for every column, there could be no other column that strictly dominated that column according to all metrics, because such a column would rarely be picked and so would be uninformative according to the goals of this study. Since we were interested in the question of what desired resource should be shared among recipients, we constrained our actions to positive utilities and did not allow matrices to contain zeros or negative numbers. Finally, in a set of matrices M, any two matrices were not permitted to have more than two of the same columns, where "sameness" included column permutation, in order to create a set of independent matrices.
In addition to the general constraints, "class" constraints were established to ensure that the chosen matrices could isolate the effects of each of the metrics. There were thus four classes: (a) four matrices in which each option was the best choice according to one of the metrics (e.g., the example in Fig. 1), (b) four matrices in which the maxsum metric was held constant: all choices had the same joint utility, (c) four matrices in which the maximin metric was held constant: for all choices, the worst-off person had the same utility, and (d) eight matrices that were randomly generated. The maximax and IA metrics were not held constant because matrices constructed in this manner did not meet the general constraints described above. The matrices used in this study are included in the Supplemental Methods.
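A simplified sketch of this rejection sampling, covering only the general constraints (the class constraints and the cross-matrix sameness check are omitted, and the proposal distribution is our own assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def metric_values(col):
    # (maximax, maxsum, maximin, IA) for one column of two utilities.
    return (col.max(), col.sum(), col.min(), col.prod() / col.sum())

def dominates(u, v):
    # True if option u strictly beats option v on every metric.
    return all(mu > mv for mu, mv in zip(metric_values(u), metric_values(v)))

def sample_matrix():
    while True:
        U = rng.integers(1, 16, size=(2, 4))  # positive utilities only
        if U.sum(axis=0).max() > 16:
            continue  # column sums must lie within [2, 16]
        cols = {tuple(sorted(U[:, j])) for j in range(4)}
        if len(cols) < 4:
            continue  # no repeated columns, even up to permutation
        if any(dominates(U[:, a], U[:, b])
               for a in range(4) for b in range(4) if a != b):
            continue  # no strictly dominated column
        return U
```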
2.1.3. Procedure
Upon viewing each matrix, participants read the following text: "You're the manager at a hotel and want to serve a drink to the room. Archie and Ben are your guests and have told you how much they enjoy different drinks (higher numbers mean more enjoyment). Which drink would you like to serve?" Participants then had to select one of the four drinks and justify their response. The names "Archie" and "Ben" were substituted with other names beginning with "A" and "B" for each matrix.

We expected that participants would employ a consistent understanding of what made a "good" decision in the drink-choosing task, but wanted to test the robustness of participants' choices by employing several task variations. The variations we tested were the identity of the server (either human or robot), the relationships of the agents being served (either friends or strangers), and whether the participant was described as a recipient of a drink.
In the "Nominal" condition, participants were presented with the default text ("You're the manager at a hotel and want to serve a drink to the room..."). In the "Robot" condition, the prompt was: "There is a robot manager at a hotel which will serve a drink to the room. Archie and Ben are its guests and have told it how much they enjoy different drinks (higher numbers mean more enjoyment). Which drink would you like the robot to serve?" In the "Robot Friends" condition, the prompt was the same as in the "Robot" condition, but the following phrase was added: "Archie and Ben are its guests (they are friends with each other)...." In the "Robot Strangers" condition, the following phrase was substituted: "Archie and Ben are its guests (they are strangers to each other)...." Italics were not included in the participant text. In the "Veil of Ignorance" condition, the instructions were modified to be: "The manager at a hotel wants to serve a drink to the room, where you and another guest are sitting. The manager has learned how much you both enjoy different drinks (higher numbers mean more enjoyment). Given you do not know which guest (A or B) you are, which drink would you like the manager to serve?" This condition is named the "Veil of Ignorance" condition as a reference to Rawls (1971), who suggested that a method of eliciting moral judgments without self-interest would be to present scenarios in which participants had to make judgments about hypothetical societies they would like to live in, while not knowing anything about their place in the hypothesized social order. Thus, participants would be "behind a veil of ignorance" in that they would have to make decisions without knowing whose place they would fill. Here, in this experimental condition, participants were told they were recipients of a drink but did not know which recipient (A or B) they were, thus enacting a version of the Rawlsian thought experiment.
To analyze participant responses according to our metrics, we calculated the value of each metric (Fq), where q indexes the individual metric (maximax, maxsum, maximin, IA), and F is a vector. Values for the metrics were calculated from the utilities of agent i (Ui) for each option oj:

F_maximax(oj) = max_i Ui(oj)
F_maxsum(oj) = ∑_i Ui(oj)
F_maximin(oj) = min_i Ui(oj)
F_IA(oj) = ∏_i Ui(oj) / ∑_i Ui(oj)
These Fq(oj) values were then normalized to account for the tradeoffs between each option oj in the matrix Mm:

Fq(ok) = Fq(ok) / max_{oj ∈ Mm} Fq(oj), for all q    (1)
2.2. Results and discussion
We were interested in what metrics participants preferred, meaning which metrics could describe participants' demonstrated behavior. We determined the preferred metrics through an independent statistical analysis, and then via a maximum entropy model. We examined the proportion of participants preferring specific metrics in the "Nominal," "Robot," "Robot Friends," "Robot Strangers," and "Veil of Ignorance" conditions.
2.2.1. Statistical analysis
We sought to determine which metrics individuals used to make their choices. Before
asking which metrics best described participants’ choices, we first asked whether partici-
pants were behaving according to any of the metrics, by comparing participants’ empiri-
cal choices to a simulated baseline participant making random choices. For each metric
q, we determined whether the F_q values from participants’ empirical choices were significantly larger than the F_q values from random choices, summed across participants and matrices. For our baseline comparison, we drew from the null distribution to generate enough choices for a complete experiment (choices for each matrix and for each participant, summed) and repeated that process 10,000 times. We then computed z-scores and associated p-values for whether the empirical F_q values (H1) were significantly greater than our baseline F_q values (H0) for each metric q. We found that the maxsum, maximin, and IA metrics significantly matched participant behavior, and that the maximin metric best matched participant behavior (had the highest z-scores) for all conditions except the “Veil of Ignorance” condition (Table 1). In the “Veil of Ignorance” condition, the maxsum, maximin, and maximax metrics significantly matched participant behavior, and the highest z-score was for the maxsum metric, closely followed by the maximin metric. This
analysis was chosen because the comparison between the empirical and null distributions
accounted for the biases within metrics and choice of specific matrices.
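A sketch of this baseline comparison is below; the function and variable names are ours, not the authors', and it assumes the normalized per-option F_q values have already been computed as above:

```python
import numpy as np
from scipy.stats import norm

def null_zscore(chosen_F, option_F, n_samples=10_000, seed=0):
    """z-score of empirical summed F_q values against a random-choice null.

    chosen_F: F_q value of the option actually chosen, one entry per
              (participant, matrix) pair.
    option_F: list of arrays, each holding the F_q values of all options
              in the corresponding matrix, from which random picks are drawn.
    """
    rng = np.random.default_rng(seed)
    empirical = float(np.sum(chosen_F))
    # Simulate a full experiment's worth of random choices, summed,
    # n_samples times, to build the null (H0) distribution.
    null_sums = np.array([sum(rng.choice(F) for F in option_F)
                          for _ in range(n_samples)])
    z = (empirical - null_sums.mean()) / null_sums.std(ddof=1)
    return z, norm.sf(z)  # one-sided p: empirical (H1) greater than null (H0)
```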
2.2.2. Inferring metrics used with a maximum entropy model
To further test which combination of metrics described participants’ choices, we con-
structed a maximum entropy inverse reinforcement learning (MaxEnt) model (Ziebart,
Maas, Bagnell, & Dey, 2008). In this model, we inferred weights, which were associated
with each of the metrics. Mathematically, the weight vector h was indexed by q and composed of [h_maximax, h_maxsum, h_maximin, h_IA]. Participants’ preferred metric was evaluated as the metric with the largest associated weight h_q.

We computed a maximum a posteriori estimate for h for each participant given their choices by combining a uniform prior over the parameters of h with the likelihood of each individual’s choice o_k over the options o for each matrix. Specifically, within each matrix M_m, for each participant’s choice o_k over options o, we maximized the function log P(o_k | h) over h (a vector of length q), where:

\[ P(o_k \mid h) = \frac{e^{h^T F(o_k)}}{\sum_j e^{h^T F(o_j)}} \tag{2} \]
Recall that F is the vector containing the values of the individual metrics [F_maximax, F_maxsum, F_maximin, F_IA]. We summed across matrices M_m to create the final cost function \(\sum_m \log P(o_k(M_m) \mid h)\), and we used the following constraints to compare the relative use of each of the metrics: \(\sum_q h_q = 1\) and \(h_q \ge 0\). We optimized using sequential least squares programming (SLSQP) in Python’s scipy package. To calculate the weight vector of an average individual, we calculated the arithmetic mean over individual participants’ h vectors. Thus, overall the h_q value for any given metric was determined by the closeness between the responses expected under that metric (e.g., maxsum) and a participant’s responses. In interpreting the results, if any given h_q value was high, then participants often chose responses in accordance with that metric.
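A sketch of this fit is below. It uses scipy's SLSQP optimizer, as the text states, but the code and naming are ours; under a uniform prior on the constrained simplex, the MAP estimate reduces to constrained maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize

def fit_weights(choices, matrices_F):
    """Estimate one participant's weight vector h under Eq. (2).

    choices:    chosen option index for each matrix.
    matrices_F: per-matrix (n_options x n_metrics) arrays of the
                normalized metric values F(o_j).
    """
    n_metrics = matrices_F[0].shape[1]

    def neg_log_likelihood(h):
        nll = 0.0
        for k, F in zip(choices, matrices_F):
            logits = F @ h                      # h^T F(o_j) for every option
            nll -= logits[k] - np.log(np.exp(logits).sum())
        return nll

    res = minimize(
        neg_log_likelihood,
        np.full(n_metrics, 1.0 / n_metrics),    # start at uniform weights
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n_metrics,        # h_q >= 0
        constraints={"type": "eq", "fun": lambda h: h.sum() - 1.0},  # sum to 1
    )
    return res.x
```

Averaging the returned vectors across participants gives an "average individual"; fitting one h to all participants' pooled choices gives the alternative single-h estimate discussed below.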
Table 1
Results for Experiment 1, showing the relationship between participants’ responses and the hypothetical null distribution for each metric

Condition           Maximax        Maxsum         Maximin     IA
Nominal             −0.2 (.58)     11.2 (0)       19.7 (0)    10.9 (0)
Robot                0.6 (.27)     10.7 (0)       16.7 (0)     8.6 (0)
Robot Friends        1.6 (.05)     13.6 (0)       19.0 (0)     9.7 (0)
Robot Strangers      0.6 (.27)      9.5 (0)       14.7 (0)     7.3 (0)
VoI                  3.1 (.002)     8.7 (0)        7.8 (0)    −0.1 (.53)
Rep2x: Ind.(C1)      4.1 (0)        9.7 (0)        7.8 (0)     1.9 (.03)
Rep2x: Ind.(C2)      1.3 (.10)      5.6 (0)        6.9 (0)     2.8 (.002)
Rep3x: Ind.(C1)     −1.2 (.89)      5.7 (0)       11.9 (0)     6.9 (0)
Rep3x: Ind.(C2)      0.4 (.34)      3.6 (1e-4)     5.4 (0)     3.0 (.001)
Rep3x: Ind.(C3)     −2.1 (.98)      1.8 (.03)      7.3 (0)     5.0 (0)
Rep2x: Summed       −9.4 (1)        9.4 (0)       19.2 (0)    14.4 (0)
Rep3x: Summed      −24.0 (1)       −0.007 (.50)   26.1 (0)    20.8 (0)
Follow-Up (2x)     −11.0 (1)        5.3 (0)       22.1 (0)    14.7 (0)
Follow-Up (3x)      −6.4 (1)        7.8 (0)       18.8 (0)     9.9 (0)

Note. Determining which metrics describe participant choices, as compared to what would be expected given random choices, summed across participants and matrices. Z-scores are shown, calculated as the [empirical (H1) summed scores minus the mean of the null (H0) distribution summed scores] divided by [SE for the null (H0) distribution summed scores], along with associated p-values in parentheses, for all metrics. (Summation is across all matrices and participants.) High scores for any metric indicate that participants often chose responses in accordance with that metric, above and beyond what they would have done had they been choosing randomly. Results are shown for all conditions in all experiments: “Nominal,” “Robot,” “Robot Friends,” “Robot Strangers,” “Veil of Ignorance,” “Repeated 2x: Independent (Choice 1),” “Repeated 2x: Independent (Choice 2),” “Repeated 3x: Independent (Choice 1),” “Repeated 3x: Independent (Choice 2),” “Repeated 3x: Independent (Choice 3),” “Repeated 2x: Summed,” “Repeated 3x: Summed,” “Follow-Up (2x),” and “Follow-Up (3x).” 10,000 samples of summed scores from the null distribution were taken. Condition names are shortened: “VoI” represents the “Veil of Ignorance” condition and “Rep2x: Ind.(C1)” represents the “Repeated 2x: Independent (Choice 1)” condition. “0” in the table refers to <0.0001.

We found that participants’ behavior was best explained by the maximin metric for all conditions according to the MaxEnt model, as shown in Fig. 2. Our conclusion that participants’ behavior was best explained by the maximin metric was also supported by using an alternative method of calculating the average weights across participants. Throughout
this work, we sought to determine which combination of metrics described an “average”
individual; however, there are several ways of calculating an average weight vector. In
the main analysis, we set “average” as the arithmetic mean of each individual partici-
pant’s weight vector h. An alternative average assumes that all participants’ data came
from one individual and, given this, calculates a single weight vector h under the assumptions of the MaxEnt model described above. Using this, h_maximin = 1 and h_maximax, h_maxsum, h_IA = 0 for the “Nominal,” “Robot,” “Robot Friends,” and “Robot Strangers” conditions.
Note that the results from the MaxEnt model support and also further the results from
the statistical analysis. In the statistical analysis for Table 1, we compared participants’
responses to those expected from a participant choosing randomly. We saw, compared to
this null distribution, that participants’ choices tended to encompass the maxsum, maximin, and IA metrics across the “Nominal,” “Robot,” “Robot Friends,” and “Robot Strangers” conditions. While the results from that analysis suggested that participants’ responses slightly more closely matched the responses expected from a maximin policy (indicated by the higher z-scores under the Maximin comparison) compared to other metrics’ policies, this comparison was indirect (participant and random responses were compared along each of the metrics, and then the metrics were compared, rather than directly comparing the relative influence of each metric in participant responses). To more directly
compare which metrics best described participants’ responses, we examine the results
from the MaxEnt model designed for this purpose. The results from the MaxEnt model
for these conditions clearly differentiate the maximin metric as more descriptive of partic-
ipants’ responses compared to the other metrics.
Fig. 2. MaxEnt model results for Experiment 1, showing mean inferred weight h ± SE for each metric across participants in the “Nominal,” “Robot,” “Robot Friends,” “Robot Strangers,” and “Veil of Ignorance” conditions. The maximin metric best described participant behavior.
For the “Veil of Ignorance” condition, the MaxEnt results were more evenly split
between the maximin and maxsum metrics: h_maximin = 0.52, h_maxsum = 0.48, and h_maximax, h_IA = 0. Of note, while the MaxEnt results emphasized the maximin and maximax results in Fig. 2, the results from fitting a single h emphasize the maximin and maxsum results.
This pattern could arise if there were some participants who behaved very consistently
according to the maximax metric, leading to their being highly represented when hvalues
were calculated for each participant and averaged, but these participants’ behavior wash-
ing out when all participants’ behavior was aggregated. It is noteworthy that the maximax
and maxsum metrics were highly correlated (the same is true for the maximin and IA metrics) such that if participants were commonly making the best maximax choices, these choices were likely to be well-described by the maxsum metric as well (Fig. 3, leftmost). This correlation explains how the single h could provide more support for the maxsum metric while the mean across participants provided more support for the maximax metric. The shared emphasis on the maximin, maxsum, and maximax metrics in the “Veil
of Ignorance” condition is also evident from the statistical analysis described above. In
sum, across each of these result measures behavior of participants in the “Veil of Igno-
rance” condition appears to be described by the maximin ,maxsum , and maximax metrics.
The MaxEnt model results were designed to isolate which metric best described partic-
ipants’ behavior by accumulating evidence over all trials. As such, the MaxEnt model seems to clearly indicate participants were always behaving according to, for example, the maximin metric. However, in each individual trial, participants could not behave
according to an isolated metric, since each choice they made provided some evidence for
each of the metrics. This inseparability —caused by the correlations among metrics, see
Fig. 3—is what motivated our use of the MaxEnt model. One could argue that partici-
pants may be making choices that were most supporting a single metric on each trial, and
this is possible but not what was empirically observed. Though we do not include trial-
by-trial data, Table 2 provides a histogram of the metrics participants’ behavior most
adhered to, according to the MaxEnt model, and the Supplemental Results (Tables S1 –
S14) show each participant’s adherence to each of the metrics according to statistical analyses. Participant behavior generally was in accordance with several of the metrics, and most in accordance with the maximin metric in aggregate.
After constructing the MaxEnt model, we wanted to check that our weight vectors h
generalized and that the model was predictive of human behavior. To that end, we esti-
mated participants’ weight vectors and used these to predict held-out choices. Specifi-
cally, given participants’ weight vectors h, we could predict participants’ choices given a
new set of options in matrix M_m. The predictions o_m^k were calculated by argmax_j P(o_m^j | h).
We calculated weight vectors h for 50% of participants, each trained on 50% of the matrices. We then used the average weight vector from these participants to predict the remaining 50% of participants’ choices on the remaining 50% of matrices and report this prediction accuracy. As a comparison, we also report the accuracy of prediction when each participant’s weight vector, trained on all matrices, is used to predict that same participant’s choices on each matrix. We report the average of 100 training runs for each prediction.
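Given fitted weights, the prediction step is a single argmax; a small sketch (our naming, not the paper's code) of how held-out accuracy could be scored:

```python
import numpy as np

def predict_choice(h, F):
    """argmax_j P(o_j | h): the softmax in Eq. (2) is monotone in the
    logits, so the predicted option simply maximizes h^T F(o_j)."""
    return int(np.argmax(F @ h))

def heldout_accuracy(h_mean, test_choices, test_F):
    """Fraction of held-out (participant, matrix) choices matched by the
    average training weight vector h_mean."""
    hits = [predict_choice(h_mean, F) == k
            for k, F in zip(test_choices, test_F)]
    return float(np.mean(hits))
```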
Fig. 3. Correlations across values of the metrics in the “Nominal” (leftmost), “Follow-Up (2x)” (left), “Follow-Up (3x)” (center), “Repeated 2x: Summed” (right), and “Repeated 3x: Summed” (rightmost) conditions. For each condition, we computed the value of each metric for all possible choices across all matrices. We concatenated all of the possible choices across all matrices into one vector for each metric, and we computed the Pearson correlation coefficient across metrics. Choices well-described by the maximax metric were also well-described by the maxsum metric (high correlation between maximax and maxsum), while choices well-described by the maximin metric were also well-described by the inequality aversion (IA) metric (high correlation between maximin and IA). These clusters (maximax and maxsum) and (maximin and IA) tended to be anti-correlated. Note that the “Nominal” condition plot also describes the “Robot,” “Robot Friends,” “Robot Strangers,” “Veil of Ignorance,” “Repeated 2x: Independent,” and “Repeated 3x: Independent” conditions, since participants saw the same stimuli. The summed possible choices used in the “Repeated 2x: Summed” and “Repeated 3x: Summed” correlation matrices were used only for data analysis and not shown to the participants.

Table 2
Percentage of participants whose behavior is best described by each metric, according to the MaxEnt model

Condition          Maximax   Maxsum   Maximin   IA
Nominal                  8        3        83    6
Robot                   14       17        63    6
Robot Friends            9        9        83    0
Robot Strangers         18        6        71    6
VoI                     27       18        48    6
Rep2x: Ind.(C1)         29       12        58    0
Rep2x: Ind.(C2)         25       21        46    8
Rep3x: Ind.(C1)         14       10        62   14
Rep3x: Ind.(C2)         19       29        38   14
Rep3x: Ind.(C3)         29        5        48   19
Rep2x: Summed            0       12        87    0
Rep3x: Summed           10        5        71   14
Follow-Up (2x)           3        6        90    0
Follow-Up (3x)          17        0        83    0

Note. Each participant’s final h vector according to the MaxEnt model was computed for each condition; the highest-value h_q was taken as the metric that best described the participant’s behavior. Shown is the percentage of participants for which each metric best described their behavior. These more individual-level results closely resemble the aggregated results shown in the figures. Condition names are shortened: “VoI” represents the “Veil of Ignorance” condition and “Rep2x: Ind.(C1)” represents the “Repeated 2x: Independent (Choice 1)” condition.

Training and testing on 50% participants and matrices, the “Nominal” condition had 72.9% predictive accuracy (non-held-out comparison: 75.4%); the “Robot” condition had 66.4% predictive accuracy (non-held-out comparison: 70.1%); the “Robot Friends”
condition had 71.3% predictive accuracy (non-held-out comparison: 70.9%); the “Robot
Strangers” condition had 64.6% predictive accuracy (non-held-out comparison: 73.3%);
the “Veil of Ignorance” condition had 59.5% predictive accuracy (non-held-out compar-
ison: 78.2%). Chance accuracy was 25%. We observed high predictive accuracy both
when testing sets were and were not held out compared to chance accuracy, supporting
the validity of our inferred hvectors.
2.2.3. What metric was preferred across conditions?
We also asked whether participants’ judgments of fairness would differ based on what type of agent they were considering, and whether the participant was considered a drink recipient. We found that on the whole, participants were consistent in their behavior across conditions: whether they were thinking about a human manager or a robot manager, whether the robot manager was serving friends or strangers, and whether they were to receive a drink. Specifically, participants’ preferred metric did not differ significantly between the “Nominal,” “Robot,” “Robot Friends,” “Robot Strangers,” and “Veil of Ignorance” conditions (chi-squared test over F_q summed over participants and matrices: χ² = 6.67, p = .88, df = 12). We could have generated many more conditions, but our results suggest that participants’ fairness judgments are robust to at least some changes in agent identity, and generally that participants’ ideas of fairness are similar across varied situations.
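For readers who want to reproduce this style of comparison, a sketch using scipy follows; the table entries are placeholders, not the study's actual summed F_q scores:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: five conditions; columns: the four metrics. Entries would be the
# F_q scores summed over participants and matrices (placeholder values).
table = np.array([
    [ 5.1, 88.2, 120.4, 61.0],   # Nominal
    [ 6.0, 84.1, 110.3, 55.2],   # Robot
    [ 7.2, 90.5, 118.9, 58.7],   # Robot Friends
    [ 6.4, 80.3, 105.6, 52.1],   # Robot Strangers
    [12.5, 85.0,  90.2, 40.8],   # Veil of Ignorance
])
chi2, p, df, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}, df = {df}")  # df = (5-1)*(4-1) = 12
```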
The result of similar behavior across conditions is important, because in this work, our goal is to examine humans’ views of what decisions decision-makers and assistive artificial intelligences should implement in the future. In this experiment we asked participants what a desirable decision would be and then checked that the results were not a specific consequence of slight differences in how we could have asked the question. There were indeed no significant differences in participants’ reports of what they considered a good decision, despite differences in the agent considered. This finding is important with respect to the generalizability of our findings and how we might outsource decisions to technology in the future.
While results were similar across conditions, it is worth noting that the results from
the “Veil of Ignorance” condition subtly differed from the rest, for example by encourag-
ing behavior more closely matching the maxsum metric. In the “Veil of Ignorance” condi-
tion, participants seemed to engage with the question as a new context: Qualitatively,
participants seemed to think less about fairness and more in the sense of “gifts” or “gam-
bling.” When asked to justify their choices, participants in the “Veil of Ignorance” condi-
tion would say things like “one of us might as well be very happy,” “I don’t know what
guest I am so I made the safest choice,” or “Neither of us will get something we don’t
enjoy at all.” Whereas in the other conditions, participants tended to more often use
words like “fair,” “balanced,” or “equal.” A possible explanation for why behavior in the
“Veil of Ignorance” condition was not so closely matched to the maximin metric as was
true in the “Nominal,” “Robot,” “Robot Friends,” and “Robot Strangers” conditions is
this apparent difference in framing. On the one hand (corresponding to the “Veil of Igno-
rance” condition), the participant could either gamble to win a desired item or offer it generously to an opponent, or alternatively play it safe with a drink everyone would like a little (Konow and Schwettmann (2016) review the evidence on whether “fair” behavior is actually risk-averse behavior). On the other hand (corresponding to the other condi-
tions), the participant would be responsible for the outcomes of two people one does not
know, perhaps encouraging more “fair” allocation strategies.
While participants’ behavior was still more maximin-aligned than maxsum-aligned, the
fact that participants’ behavior was relatively more similar to the maxsum metric in the
“Veil of Ignorance” condition was interesting because the participants could have taken
the opposite perspective. In the “Veil of Ignorance” condition, the participants had an (unknown) stake in the proceedings and could have made sure not to end up with the unfair
end of a bargain by engaging in even more maximin behavior relative to the other condi-
tions. Instead, participants behaved relatively more according to the maxsum metric. This
result is important and should be further explored, since most of us are recipients and not
the decision-maker in many real-life situations, perhaps making the “Veil of Ignorance”
condition the most descriptive of real situations.
Various other studies have empirically evaluated preferences under the veil of igno-
rance (Andersson & Lyttkens, 1999; Carlsson, Gupta, & Johansson-Stenman, 2003; Froh-
lich, Oppenheimer, & Eavey, 1987; Johansson-Stenman, Carlsson, & Daruvala, 2002;
Oleson, 2001). Of the studies that evaluated metrics similar to ours, Frohlich et al. (1987)
found that participants preferred maximizing average income with a floor constraint, and
Oleson (2001) found that participants demonstrated both maximin andIAbehavior—how-
ever, this collection of studies was different enough from our paradigm (e.g., studying
risk aversion, or looking at welfare over an entire hypothetical society) that the different
contexts likely significantly affected outcomes. A study by Bosmans and Schokkaert
(2004) was similar to our work in that they asked participants about their preferences
directly, and also under a veil of ignorance. They observed different results between the
directly elicited preferences and veil of ignorance conditions (but not maximally different
results, which occurred in comparing directly elicited preferences to a third self-interested
condition). These results were similar to ours, as we found a small difference in response
profiles between our “Nominal” and “Veil of Ignorance” conditions. A final aspect of our
“Veil of Ignorance” condition is that since our question was about autonomous third-party
decision-makers with no stake in the game, we designed the condition such that participants would not receive a payout according to their choices. If participants had a stake in
the game (albeit an unknown one), their behavior might have shifted to be more conser-
vative under the assumption of maximizing self-interest —it is known that participants do
change their behavior based on whether they are being paid or not (e.g., Gächter & Riedl,
2006; Herrero et al., 2010). Altogether, the “Veil of Ignorance” results were similar to
those of the other conditions, but future work will have to continue evaluating the con-
texts that influence participants’ conceptions of good decision-making.
3. Experiment 2: Repeated choices
Previously, we evaluated what people thought a decision-maker should do when taking
a single action. However, in the real world, decision-makers will act many times: the
schoolteacher choosing a new lesson plan for their students every day, or the government
delivering many aid programs over time. Here we investigated whether people have dif-
ferent intuitions for what decision-makers should do when facing repeated decisions with
the same set of agents. Perhaps the decision-maker should give one agent its best choice
the first time, and a different agent its best choice the second; or perhaps the conclusion
is to compromise each time or to attempt to maximize fairness across the sum of all deci-
sions. We tested whether different metrics would describe participants’ actions when par-
ticipants were given repeated choices.
Repeated decision-making has been studied in the literature, often under the name of
sequential or dynamic choices in game theory games. In the repeated version of the Prisoner’s Dilemma, participants choose whether to betray their partner or cooperate on each
turn, and there has long been research into the optimal strategy for this game (Andreoni
& Miller, 1993; Boyd & Lorberbaum, 1987) and other competitive/cooperative games
(Kuzmics, Palfrey, & Rogers, 2014), wherein each partner obtains more information
about the other on each turn. Behavior in repeated games is sensitive to anticipated repe-
titions: game outcomes are empirically different when participants are engaging in a
finitely repeated game compared to a repeated one-shot game (Andreoni & Miller, 1993).
In other sequential games, the sequence is not of both agents making choices simultane-
ously and then repeating the game, but rather of agents taking turns right after each other, for example by claiming an item via sequential allocation (Bouveret & Lang, 2011; Kali-
nowski, Narodytska, & Walsh, 2013), or fair queueing (Demers, Keshav, & Shenker,
1989; Moulin & Stong, 2002). There are also “online” games, in which agents may arrive
at different times (e.g., Kash, Procaccia, & Shah, 2014; Walsh, 2011) and resources must
be divided between them repeatedly. Alternatively, items may arrive over time to the
same set of agents who have preferences over them (e.g., Aleksandrov, Aziz, Gaspers, &
Walsh, 2015), which is more similar to the structure of our experiment.
There is a subset of research studying the situation of a single item being distributed
to a set of agents repeatedly. In this case, researchers are often testing optimum strategies
for allocation that maximize total agent utility, while ensuring that their strategy has use-
ful qualities like being efficient and fair, and encouraging truth-telling of preferences from each agent on each round. This work can fall under the heading of “dynamic mech-
anism design” and has been widely investigated (e.g., Bergemann & Valimaki, 2006;
Cavallo, 2008; Guo, Conitzer, & Reeves, 2009). Other researchers have argued that these
paradigms do not capture the structure of real-world problems, and so introduced a real-
life food distribution problem in which a central decision-maker must repeatedly allocate
food to different charities in an online fashion over complex preferences where fairness
and efficiency are both considered (Aleksandrov et al., 2015; Walsh, 2015). Our Experi-
ment 2 is similar to this food distribution problem in that we have a repeated task in
which a central decision-maker makes resource choices over the preferences of several agents, and similar to the dynamic mechanism design studies in that there is a single item
being repeatedly allocated. Our Experiment 2 is different, however, in that the single item
is not being allocated to a single recipient, but rather being created and shared across
agents on every round.
Freeman, Zahedi, and Conitzer (2017) have perhaps the most similar motivation to our
experiment, as they ask if a central decision-maker should make allocations that are opti-
mized for fairness within every round, or if fairness should be optimized across rounds.
Freeman et al. (2017) analyzed (non-empirically) fair social choice in dynamic settings
with many agents with changing preferences over multiple goods. They chose to optimize
for the overall product of all agents’ utilities and investigated strategies from there. How-
ever, the question of whether real participants would optimize for global utilities across
choices is undetermined, including whether they would optimize for additive utilities (the
global maxsum solution) or the product of utilities (more similar to IA). Given previous
research, we hypothesized that participants would behave differently when presented with the same choice repeatedly; in other words, we expected that the choices from Experi-
ment 1 would not be repeated three times. We were interested in determining whether
participants’ choices would be described by the same metric in each round, or across all
rounds, and whether these choices would change depending on the number of anticipated
rounds. In summary, in Experiment 2 we presented participants with two or three repeti-
tions of the same payoff matrices to determine how they would act, as central decision-
makers, in the common real-world situation of providing a resource multiple times to the
same group of people.
3.1. Method
3.1.1. Participants
Participants with U.S. IP addresses were recruited from Amazon Mechanical Turk for
two additional conditions: “Repeated 2x” (n = 24, 0 participants excluded) and “Repeated 3x” (n = 21, 4 participants excluded). Participants were paid between $2.50 and $3.00 for
their participation. Participants were excluded if they failed the included attention check
or indicated that they did not understand the experiment.
3.1.2. Stimuli and procedure
In Experiment 2, the procedure and stimuli were similar to Experiment 1, except
instead of viewing each matrix once, participants saw each matrix twice (in the
“Repeated 2x” condition) or three times (in the “Repeated 3x” condition). Matrices were repeated an even (“Repeated 2x” condition) and odd (“Repeated 3x” condition) number
of times to probe how participants would balance their choices. Procedurally, participants
read the prompt from the “Nominal” condition, made a choice for the presented matrix,
and justified their answer, as in Experiment 1. Then they saw the following prompt:
“Your guests have finished the drinks, and you can now put out another. Which drink
would you like to serve?” and were shown the same matrix again, and asked to make a
choice and justify their answer. The latter prompt and matrix appeared once more in the
“Repeated 3x” condition. Participants were able to scroll up and down the page of
prompts and responses to determine the total number of drinks they were serving, and
could change their choices at any time within the round.
3.1.3. Analysis
In Experiment 2, participants had the opportunity to serve multiple drinks. In the
“Repeated 2x” condition they chose an option o_k from matrix M_m two times (o_k^1, o_k^2), while in the “Repeated 3x” condition they chose an option o_k from matrix M_m three times (o_k^1, o_k^2, o_k^3).
Unlike in Experiment 1, choices from each repeated matrix were not independent
given the matrix M_m. We thus calculated two variants on individuals’ preferred metrics. For the naïve analysis, we analyzed each choice in the repeated case as an independent decision. Specifically, we assigned all choices after o_k^1 to new “independent” participants. Thus, in the “Repeated 2x” condition, the number of participants artificially doubled, with the first set of participants making choices o_k^1 for each matrix, and the second set of participants making choices o_k^2 for each matrix. F_q^indep was calculated and predictions were made using the same procedure as in Experiment 1, where “indep” indicates the artificial creation of independent participants.
For the more sophisticated analysis, we assumed that participants were making one large decision across two or three unordered matrices, rather than making semi-independent decisions o_k^1, o_k^2, (o_k^3). We hypothesized that U(o_k^1 + o_k^2 + o_k^3) would not be equivalent to U(o_k^1) + U(o_k^2) + U(o_k^3) (the assumption that choices were independent). We thus mapped the repeated choices to a more sophisticated non-repeated choice: one between all 10 (for the “Repeated 2x” condition) or 20 (for the “Repeated 3x” condition) combinations of individual choices o_k^1, o_k^2, (o_k^3) participants could have made. We then calculated F_q^summed based on the summed agent utilities across all of these potential combinations of matrices. Order of choices was not taken into account, but repetition of the same choice o_k for o_k^1, o_k^2, (o_k^3) was allowed. In short, by following this summing procedure, in the “Repeated 2x” condition, participants had 10 hypothetical choices, rather than the 4 they actually faced (two times). In the “Repeated 3x” condition, participants had 20 hypothetical choices, rather than the 4 they actually faced (three times). The analysis proceeded in the same way as for the “Nominal” condition in Experiment 1 except that the matrices M_m expanded to contain 10 or 20 choices.
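A sketch of this enumeration (our own implementation of the described summing procedure, with invented utilities) using Python's itertools:

```python
import numpy as np
from itertools import combinations_with_replacement

def summed_options(U, repeats):
    """Enumerate unordered combinations of repeated choices and sum the
    agents' utilities across them, mirroring the summed analysis.

    U: (agents x options) utility matrix; repeats: 2 or 3.
    """
    combos = list(combinations_with_replacement(range(U.shape[1]), repeats))
    # With 4 options this yields 10 combinations for 2 repeats and 20 for 3.
    summed_U = np.stack([U[:, list(c)].sum(axis=1) for c in combos], axis=1)
    return combos, summed_U

U = np.array([[9, 2, 5, 6],     # invented utilities for agent A
              [1, 8, 5, 4]])    # and agent B
combos, summed_U = summed_options(U, 2)
print(len(combos))  # 10
```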
3.2. Results and discussion
In Experiment 2, we wanted to test how participant behavior would change across
repeated conditions. One hypothesis was that participants would, as in Experiment 1, use
the maximin metric for each individual choice, not keeping in mind the overall structure
of the repeated choices. A second hypothesis was that participants would maintain max-
imin behavior when considering their combined choices, but would engage in different
behavior (e.g., suboptimal behavior with respect to the maximin metric) within each indi-
vidual matrix. A third hypothesis was that participants could have chosen to maximize
the utility for one agent, then the other, alternating the optimal maximax options for each
individual choice. This third hypothesis was not borne out in the data and so will not be
further discussed.
Results especially supported the second hypothesis, that participants may not have
acted independently according to the maximin metric on each choice, but rather made
sure that the overall outcome after all actions was a maximin-supporting outcome.2 To test this, we evaluated the metrics’ values over participants’ combined choices (F_q^summed). Statistically, we compared participants’ empirical F_q^summed values to 10,000 samples of F_q^summed values from randomly generated choices, where the values were summed over
participants and matrices. This comparison yielded z-scores and p-values relating the
empirical results to the null distribution for each metric. (The statistics here are the same
as explained in Experiment 1.) For both the “Repeated 2x: Summed” and the “Repeated
3x: Summed” conditions, the z-scores for the maximin metric were higher than for any
other metric (Table 1), and they were in the same general range as those of the “Nomi-
nal” condition. The statistical analysis thus supported the conclusion that participants’
behavior could be explained by applying the maximin metric across the set of repeated
matrices they were reasoning over. We also analyzed the results through the MaxEnt
model. We found that the degree to which the maximin metric explained behavior in the
“Repeated: Summed” conditions, both for “Repeated 2x: Summed” (two repeated
choices) and “Repeated 3x: Summed” (three repeated choices), was high and almost
equivalent to that in the “Nominal” condition (Fig. 4).
Results also supported the first hypothesis to a lesser extent: that participants’ behavior
could be described by the maximin metric within each individual choice they made
throughout the repeated choices. Here, we analyzed the data in terms of individual
choices, completing a statistical analysis on the summed empirical and null distributions
for the F_q^indep values. Z-scores were highest for the maximin metric for almost all condi-
tions (“Repeated 2x: Independent (Choice 2),” “Repeated 3x: Independent (Choice 1),”
“Repeated 3x: Independent (Choice 2),” and “Repeated 3x: Independent (Choice 3)”)
(Table 1). The exception was that the z-score for the “Repeated 2x: Independent (Choice
1)” condition was highest for the maxsum metric. To further analyze our data, we used
the MaxEnt model to infer the combination of metrics that best described participant
behavior over individual choices. We found that the maximin metric best described partic-
ipant behavior for all of the conditions assuming independence: “Repeated 2x: Independent (Choice 1),” “Repeated 2x: Independent (Choice 2),” “Repeated 3x: Independent
(Choice 1),” “Repeated 3x: Independent (Choice 2),” and “Repeated 3x: Independent
(Choice 3)” (Fig. 4). The first hypothesis that participants would act according to the
maximin metric for each of their individual choices was thus supported, though not uni-
formly across conditions, and contribution from a combination of metrics was visible
within the MaxEnt results.
Supporting the importance of the maximin metric in describing behavior, estimating a
single h across all participants resulted in values dominated by the maximin metric for all but one condition in Experiment 2. Specifically, for the “Repeated 2x: Independent (Choice 1)” condition, h_maxsum = 1 and h_maximin, h_maximax, h_IA = 0, the sole condition where the maxsum metric was ranked highest. For the “Repeated 2x: Independent (Choice 2)” condition, h_maximin = 1 and h_maximax, h_maxsum, h_IA = 0. For the “Repeated 2x: Summed” condition, these averages were h_maximin = 0.81, h_maxsum = 0.19, and h_maximax, h_IA = 0. For the “Repeated 3x: Independent (Choice 1)” condition, these averages were h_maximin = 0.92, h_maxsum = 0.08, and h_maximax, h_IA = 0. For the “Repeated 3x: Independent (Choice 2),” “Repeated 3x: Independent (Choice 3),” and the “Repeated 3x: Summed” conditions, these averages were h_maximin = 1 and h_maximax, h_maxsum, h_IA = 0.
This pattern of results suggests that participants consider all of their choices and main-
tain a running maximin metric calculation, though they also seem to apply the maximin
metric to a lesser degree within each individual choice. Support for this argument draws
from the following observation: The dominance of the maximin metric in explaining
behavior is greater for conditions in which choices were considered cumulatively (the “Repeated: Summed” conditions) than for conditions in which each choice was considered independently (the “Repeated: Independent” conditions). Specifically, the maximin z-scores were higher in the statistical analyses and the maximin h values were higher in the
MaxEnt model for the “Repeated: Summed” conditions compared to the “Repeated: Inde-
pendent” conditions, and they were in fact very similar to those in the “Nominal” condi-
tion.
Fig. 4. MaxEnt model results for Experiment 2, showing mean inferred weight h ± SE for each metric across participants. (Above) Participants in the “Repeated 2x” condition, and “Nominal” condition. (Below) Participants in the “Repeated 3x” condition, and “Nominal” condition. From left to right: h_q for each choice in the repeated condition (choices were treated as independent); h_q for summed choices in the repeated condition, where new matrices were calculated according to F_q(o_k^1 + o_k^2 + ...); and h_q from the “Nominal” condition.

Finally, to verify that our weight vectors in the MaxEnt model generalized, we used participants’ h vectors to predict held-out data. Training and testing on 50% participants
and matrices, the “Repeated 2x: Independent” condition had 47.0% predictive accuracy
(non-held-out comparison: 53.6%, chance accuracy: 25%); the “Repeated 2x: Summed”
condition had 39.3% predictive accuracy (non-held-out comparison: 43.3%, chance accu-
racy: 10%); the “Repeated 3x: Independent” condition had 45.8% predictive accuracy
(non-held out-comparison: 57.9%, chance accuracy: 25%); the “Repeated 3x: Summed”
condition had 10.9% predictive accuracy (non-held-out comparison: 17.6%, chance accu-
racy: 5%). Our weight vectors were less accurate in predicting “Repeated: Independent”
conditions than they were for the non-repeated conditions, but had relatively
high predictive accuracy for held-out test sets compared to non-held-out test sets. In the
“Repeated: Summed” conditions, as chance accuracy fell, predictive accuracy also fell. With such low predictive accuracy in the “Repeated 3x: Summed” condition especially,
we sought to test whether this was an artifact of the many potential choices available to
participants in the “Repeated” conditions or a consequence of the utilities of the choices
available, motivating Experiment 3.
We observed that in both the “Repeated” conditions and the conditions from Experi-
ment 1, participants’ behavior was best described by the maximin metric. The lack of dif-
ference in the “Repeated” conditions could be construed as a null result, under the
assumption that we did not manipulate the multi-decision conditions strongly enough to
cause a change in participants’ responses. To show that participants were producing different behavior in the “Repeated” conditions, but nevertheless overall their behavior could
be best described by the maximin metric, we show that there is a different pattern of
responses for each “Repeated” independent choice condition compared to the “Nominal”
condition. Specifically, we computed the percentage of matrices wherein participants had
significantly different responses between conditions. We found that that percentage was
lower in comparing the “Repeated (Choice 1)” and “Nominal” conditions, and higher in
comparing the “Repeated (Choice 2)” and “Nominal” conditions (Table 3). This percent-
age also changed with the “Repeated 3x: (Choice 3)” and “Nominal” condition compar-
ison (Table 3). These percentage differences indicate that participants were making
different choices within each matrix of the “Repeated” conditions, even while the overall
results from the MaxEnt model showed continued maximin behavior.
4. Experiment 3: Summed repeated choices
When taking repeated actions, participants’ individual choices tended toward being
described by the maximin metric, but they were described by a combination of other met-
rics as well. Why were participants’ repeated choices not as clearly described by the max-
imin metric as they were when making a single choice? It could be that participants were
attempting to view all of the repeated trial as a single decision, and were trying to maxi-
mize the maximin solution across all choices, but that the mental overhead for this com-
putation led them to less clearly maximin solutions. Alternatively, perhaps computational
overhead was not very influential in participants making different choices than they had
in Experiment 1, and there was something about the way that the repeated stimuli were presented that led to dissimilar results in Experiments 1 and 2; for example, perhaps par-
ticipants felt pressure to vary their choices in Experiment 2 when faced with the same
questions.
To isolate whether computational overhead was an effect, we presented participants with
the summed versions of the repeated choices they saw in Experiment 2. Participants could
then see the summed version of choices in Experiment 2, so the computation was easier if
they were attempting to sum utilities across decisions. This manipulation also allowed us to
present participants with a slightly different one-shot matrix (with six options rather than
four) to see if the results from Experiment 1 generalized. We investigated whether partici-
pant choices would be similar to those in Experiment 1, individual decisions in Experiment 2, and the artificially summed decisions in Experiment 2, with an emphasis on whether par-
ticipants’ behavior would be more or less clearly described by the maximin metric, as we
expected from Experiments 1 and 2. In summary, we simplified the multi-decision problem
presented before, testing what metrics described participants’ solutions when Experiment
2’s repeated choices were condensed into a single decision again.
4.1. Method
4.1.1. Participants
Participants with U.S. IP addresses were recruited from Amazon Mechanical Turk
across two additional conditions: “Follow-Up (2x)” (n = 31, 0 participants excluded) and “Follow-Up (3x)” (n = 29, 3 participants excluded). Participants were paid between $2.50
and $3.00 for their participation. Participants were excluded if they failed the included
attention check or indicated that they did not understand the experiment.
Table 3
Results showing differences in participants’ choices across conditions

Conditions                            # sig.    # inc. total    % sig.
Rep2x: Ind.(C1) vs. Nominal                7              20        35
Rep2x: Ind.(C2) vs. Nominal               12              20        60
Rep3x: Ind.(C1) vs. Nominal                0              17         0
Rep3x: Ind.(C2) vs. Nominal               15              20        75
Rep3x: Ind.(C3) vs. Nominal                9              19        47
Rep2x: Ind.(C1) vs. (C2)                   4              19        21
Rep3x: Ind.(C1) vs. (C2) vs. (C3)          7              20        35

Note. Shown is the percentage of matrices (“% sig.”) wherein a chi-squared test of independence showed the histograms of participants’ choices for each matrix were significantly different (p = .05; uncorrected) across the listed conditions. In more detail: for each matrix, the number of participants who picked each choice was summed, and this set of values was compared across the two experimental conditions listed. If a matrix had any expected value of 0 in the computed χ² frequencies, that matrix was removed from the analysis. Note that expected values were often <5. The number of matrices that were significantly different according to the chi-squared test of independence (“# sig.”) was divided by the total number of included matrices (“# inc. total”). Note that condition names are shortened: “Rep2x: Ind.(C1)” represents the “Repeated 2x: Independent (Choice 1)” condition.
4.1.2. Stimuli and procedure
The stimuli and procedure were similar to Experiment 1. Participants again viewed
each matrix once, but each matrix contained six choices that were drawn from the
“summed” utilities from Experiment 2. In Experiment 2, for the “Repeated 2x” experi-
ment, participants had 10 hypothetical choices per round, and in the “Repeated 3x” exper-
iment, participants had 20 hypothetical choices per round. These hypothetical choices (of
the form [A’s utility, B’s utility]) were what we used to generate each matrix in Experi-
ment 3.
To generate each matrix, we first tried to include one choice that maximized each met-
ric. Specifically, we examined all 10 or 20 choices for each round from Experiment 2, and for each metric selected the choice that would be most preferred under that metric (e.g., [4, 4] might be chosen under the IA metric, but not the maxsum metric, if [8, 4] was also an option in the set). If two metrics had the same best choice (irrespective of order, so [8, 4] was identical to [4, 8]), this choice was only included once. If a metric had several best choices (e.g., under the IA metric, [3, 3] would be as good as [4, 4]),
then an unused choice was selected at random. (Metrics with one best choice were exam-
ined first, and then metrics with multiple choices were examined in random order.) The
remaining choices, since six choices were included in each matrix, were selected from
the most popular unused choices from Experiment 2.
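A rough sketch of this selection step is below (our own code; it omits the random tie-breaking among equally good choices and the popularity-based filling of remaining slots described above):

```python
import numpy as np

def best_choices_per_metric(summed_U):
    """For each metric, pick the summed-utility option it most prefers,
    treating [a, b] and [b, a] as the same choice."""
    metrics = {
        "maximax": summed_U.max(axis=0),
        "maxsum":  summed_U.sum(axis=0),
        "maximin": summed_U.min(axis=0),
        "IA":      summed_U.prod(axis=0) / summed_U.sum(axis=0),
    }
    chosen = []
    for vals in metrics.values():
        j = int(np.argmax(vals))
        pair = tuple(sorted(summed_U[:, j]))   # order-insensitive identity
        if pair not in chosen:                 # include duplicates only once
            chosen.append(pair)
    return chosen  # remaining matrix slots come from popular unused choices
```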
4.2. Results and discussion
In Experiment 2, we observed that participants appeared to be reasoning over the
whole set of repeated choices, and that their behavior was best described by the maximin
metric across that set (“Repeated: Summed” conditions). To further test this hypothesis,
we constructed matrices where participants only had to make one choice, but each choice
constituted the sum of previously repeated choices in Experiment 2. Our results corrobo-
rated those from Experiments 1 and 2. With these single-choice matrices, participants’
behavior could be best explained by the maximin metric. In a statistical analysis compar-
ing the empirical choices made by participants to a simulated set of random choices, z-
scores were highest for the maximin metric as compared to the other metrics in the “Fol-
low-Up (2x)” and “Follow-up (3x)” conditions (Table 1). In our MaxEnt model, partici-
pant behavior was also best explained by the maximin metric in the “Follow-Up (2x)”
and “Follow-Up (3x)” conditions, to a degree similar to that in the “Repeated: Summed (2x)” and “Repeated: Summed (3x)” as well as the “Nominal” conditions (Fig. 5).3 The similarity of values across the “Follow-Up” conditions and the “Repeated: Summed” condi-
tions support the hypothesis described in Experiment 2, that participants consider the
whole set of choices when they make decisions rather than just each choice individually.
Finally, to check that our MaxEnt weight vectors generalized, we used participants’ h
vectors to predict held-out data. Training and testing on 50% participants and matrices,
the “Follow-Up (2x)” condition had 71.6% predictive accuracy (non-held-out comparison:
71.6%, chance accuracy: 16.7%); the “Follow-Up (3x)” condition had 60.5% predictive accuracy (non-held-out comparison: 66.7%, chance accuracy: 16.7%). We could predict
participant responses to the “Follow-Up” conditions well with and without held-out test-
ing sets, indicating the strength of our inferred weight vectors. The high predictive accu-
racy in the “Follow-Up” conditions compared to the “Repeated: Summed” conditions
indicates that the previous low predictive accuracy was a consequence of the large number
of possible choices available in the “Repeated” conditions, and the complexity of not see-
ing the end results directly and having to compute them, rather than a consequence of the
utilities of the choices themselves.
5. General discussion
If a schoolteacher is trying to choose a field trip for a group of students with kinetic
learners, visual learners, children who do not speak English at home, boisterous students,
and students who are scared of new places, what should they do? How should people
donate their money, governments choose between aid programs, or assistive robots medi-
ate between family members’ preferences? In this work, we examined the broader ques-
tion of how a decision-maker should act when people have different preferences.
Specifically, we asked people what they would do in a paradigm where they could take
Fig 5. MaxEnt model results for Experiment 3, showing mean weight h/C6SEfor each metric across partici-
pants. (Above) Participants in the “Follow-Up (2x),” “Repeated 2x” (summed choices, where new matrices
were calculated according to Fqok
1þok
2þ:::/C0/C1
), and “Nominal” conditions. (Below) Participants in the “Fol-
low-Up (3x),” “Repeated 3x” (summed choices), and “Nominal” conditions. The maximin metric best
described participant behavior.V. Gates, T. L. Griffiths, A. D. Dragan / Cognitive Science 44 (2020) 27 of 42
only one action, making a single drink, to the inevitable dismay of some of the people
they were serving. While it is not clear that decision-makers should automatically match
the behaviors that people intuitively use to make group-level decisions, it is informative
to know the ground truth of what people feel are good decisions.
We analyzed participants’ choices by assuming their behavior could be captured by a
combination of four metrics. Of these metrics (maximax, maxsum, maximin, and inequality aversion, IA), we observed that participants’ behavior could be reliably described by the maximin metric, the idea of maximizing the utility of the worst-off agent. With respect to our story, the schoolteacher may not know what is the right thing to do, but we at least know how people behave: For each field trip, find the child who would have the worst time, and choose the field trip wherein that child enjoys the field trip as much as possible.
Participants behaved according to the maximin metric despite changes in the agents
involved. Specifically, whether considering a robot manager, human manager, or serving
friends or strangers, participants’ behavior was similar and described by the maximin met-
ric (Experiment 1). Participants did tend to move toward behavior more closely described by the
maxsum metric when they were told they were beneficiaries of the decision, perhaps rea-
soning about the task more as a gambling situation and less as one mandating fairness for
unknown recipients. In Experiment 2, we found that when participants made repeated
decisions, each individual choice therein was characterized by a modest maximin prefer-
ence. However, participants also acted as if they maintained a running total of their
choices across repeated decisions. In particular, participants’ behavior was best described
by the maximin metric under the assumption that they were summing utilities across
repeated decisions. Experiment 3 provided additional support for the hypothesis that peo-
ple maintain an overall calculation in repeated decision-making, rather than reasoning
over each problem individually. When participants’ cumulative choices from Experiment
2 were summed to create a combined set of matrices for Experiment 3, participants’
behavior was described by the maximin metric to the same degree as was shown in the
“Repeated: Summed” and “Nominal” conditions. All of these results suggest that partici-
pants keep track of calculations over time and prefer making decisions consistent with
the maximin metric when allocating indivisible items across agents.
In the remainder of the paper, we discuss the relationship of these results to previous
work, and ways in which these results could be extended.
5.1. Relationship to previous work
We asked what a decision-maker should do when it could take only one action —create
one resource —to be shared among multiple other agents, and our results support the
hypothesis that participants prefer to behave in ways described by the maximin metric.
How, then, do our results compare to the literature? In the nearby literature of fair alloca-
tion, there is not nearly so neat a consensus. However, in most fair allocation studies the
participant acts as a self-interested party, weighing the desires of selfishness and “fair”
allocation. We are interested in the case of what people consider helpful actions when they have no stake in the outcome. Four studies are similar to our study in that respect.
The first is Herreiner and Puppe (2007), who presented participants with payoff matri-
ces in which agents could receive different “goods.” Unlike our study, in which there
was only one item to be created/allocated, the paradigm in Herreiner and Puppe (2007)
resembled an estate game in that multiple items, the number of which was often larger
than the number of agents, could be distributed to various agents. This added complica-
tion to the proceedings, as participants then had large numbers of item combinations to
reason over (e.g., their Problem 1 had 3^3 = 27 different allocations), and they could con-
sider fairness not only over utilities, but over “bundles” —the set (and number) of items
allocated. To this end, Herreiner and Puppe (2007) constructed their payoff matrices with
an eye to the metric of “envy-freeness,” which describes whether any given agent would be envious of any other agent’s bundle.
The results from Herreiner and Puppe (2007) seem to be consistent with a hypothesis
that participants prefer acting according to a maximin metric, given that in their study this
metric was sometimes confounded with different metrics within a single choice. However,
in their Problem 1 and Problem 6, Herreiner and Puppe (2007) showed results indicating
that participants preferred the IA metric over the maximin metric. In Problem 1, partici-
pants chose to withhold the last item rather than give one agent two items compared to
the other one (preferring the two agents to have utilities [49,48] rather than [49,53]), and
in Problem 6, participants distributed four items across three agents to maintain exactly equal utilities [45,45,45] rather than choosing the best maximin allocation [48,60,52].
Problem 1 is an interesting case in that participants were considering fairness in both util-
ities and item number. Our participants tended to choose according to the maximin metric
when only considering utilities, but it could be that if we had asked them to reason over
both utilities and item number they would have considered choices emphasizing inequal-
ity aversion as fairer and better, especially when the difference in utilities was small. In
Problem 6, our results suggest that choice complexity could have been influencing the
observed results in Herreiner and Puppe (2007), and in future work it would be interest-
ing to check if participants would choose the IA solution [45,45,45] rather than the max-
imin solution [48,60,52] if directly presented with those choices, rather than being asked
to add utilities from four items across three agents. As a final note on the complexity of addition, Herreiner and Puppe (2007) noted in their discussion that participants’ final
solutions were correlated with their reported allocation procedures —for example, the
order in which participants assigned items to each agent. Allocation procedures were not
inherent to our problem setup but are well-considered within the fair allocation literature
(see e.g., Dupuis-Roy & Gosselin, 2011), and perhaps also add implicit utilities to partici-
pants’ preferred choices.
In Engelmann and Strobel (2004), the basic structure of the task was very similar to
ours, as participants made choices between three items that three agents had utilities over.
Though the participant acted as one of these agents, the participant’s utility was always held fixed over all items. Engelmann and Strobel (2004) focused their payoff matrices on
distinguishing between two existing models, each of which had one utility function desig-
nated as participants’ preferred allocation metric. As such, Engelmann and Strobel (2004)
also had many matrices that confounded the metrics we considered, but with this caveat the maximin metric could be considered a primary motivation for participants’ behavior.
Relevantly, Engelmann and Strobel (2004) state that one of the models they evaluated
performed well because it captured the maximin metric, but that overall a combination of
maximin, maxsum (which they call efficiency), and selfishness (which we do not consider)
considerations drove participant behavior. Indeed, in several of their payoff matrices, both
the best maxsum and maximin choices were often selected by participants, and the maxsum choice often garnered a higher proportion of participants.
Fehr et al. (2006), however, provide a rebuttal to the Engelmann and Strobel (2004)
paper, replicating the Engelmann and Strobel (2004) study with non-economics partici-
pants and showing that the maxsum solution was selected far less by participants who
were in different programs. Fehr et al. (2006) did not choose payoff matrices that distin-
guished between the IA and maximin metrics, but their results hint that the subject pool
can influence preferred allocation metrics. Herreiner and Puppe (2007) tested economics
and law student participants, but our study was conducted on Amazon Mechanical Turk
and had a broader population than economics undergraduates. In future work, it would be
interesting to replicate our study with economics undergraduates, and observe whether
this difference is enough to observe participants’ preference for the maxsum metric over
the maximin metric. Fairness perceptions have been observed to differ across different
populations (see, e.g., Andreoni & Vesterlund, 2001; Croson & Gneezy, 2009; Gaertner & Schwettmann, 2007; Marwell & Ames, 1981; and Camerer, 2011, summarizes some
demographic results for behavioral game theory), so this hypothesis would not be improb-
able, though the review in Konow and Schwettmann (2016) cites that generally demo-
graphic variables seem to have relatively small effects on economics experiments.
Yaari and Bar-Hillel (1984) presented several different types of scenarios wherein a
third party allocated goods according to the maximin metric, maxsum metric, and IA metric. Participants played three types of games, where they had to allocate fruits according
to receiving agents’ needs or tastes/utilities, and also across agents’ differing beliefs
about the fruits. Participant behavior differed across the varying conditions, shifting
according to the tradeoffs presented, as in this work. The authors observed that when
choices were presented in terms of needs —one agent needs a certain type of vitamin to
be healthy, so they need a certain type of fruit —82% of participants chose the maximin
allocation. In our paradigm, agents did not have needs, but rather stated utilities (preferences
over choices). This was much closer to the second condition in Yaari and Bar-Hillel
(1984), in which agents had different “tastes” or stated utilities. Intriguingly, participants
did not adhere as strongly to the maximin choice in this case, as instead 28% chose the
maximin solution, and 35% of participants chose the maxsum allocation. The distribution
of choices also changed in the third condition of Yaari and Bar-Hillel (1984), in which
agents had different perceptions of how much value each of the items would give them.
Yaari and Bar-Hillel (1984) concluded from their experiments that the maximin metric
best describes participant behavior, but only when needs are salient.
In our work, we found that participants made choices according to the maximin metric
outside of needs-based formulations and did so consistently across changes in wording
and repetition over time. Our paradigm differed from that of Yaari and Bar-Hillel (1984),
however, which could account for the difference in results. The main difference between
our work and that of Yaari and Bar-Hillel (1984) is that we asked a different question:
Given agents had utilities over a set of four or six options, which option should be chosen
to best satisfy those utilities? Yaari and Bar-Hillel (1984) asked a fair allocation question:
Given a specific number of items (and agents’ preferences over the different items in the
“tastes” condition), how should those items be divided between agents? The fair alloca-
tion paradigm is necessarily zero-sum, wherein if one agent receives a resource, another
loses it. In particular, in question 4 from Yaari and Bar-Hillel (1984), participants were
asked to divide 12 grapefruit and 12 avocados between two agents. One agent was stated
as hating avocados (utility =0), but would buy grapefruit if they were priced under $1
per pound. Meanwhile, the other agent liked both avocados and grapefruit and would pay
for them if they were priced under $0.50 per pound (half the utility of the first agent for
grapefruit). With these agent preferences in mind, participants were now expected to
divide a fixed number of grapefruit and avocados. Compare this to our study, in which
participants decided which drink to give to two agents to share. This task is not zero-sum
—if one agent has high utility for a drink, this does not detract from another agent’s util-
ity for that drink. We did, however, have four matrices out of 20 for each experiment in
which the joint utilities for each drink were identical, meaning that no matter what drink
the participant chose, the agents would only jointly achieve a given happiness. But in this scenario, the question was not “how many of each item should be given to each agent
given their utilities” (as in Yaari & Bar-Hillel, 1984), but “given there is a fixed amount
of utility to be had, how should that utility be divided between agents?” This is a distinct
question, and it is probable that participants reason differently about a zero-sum problem
of “distribute 12 pieces of fruit between two people or throw some away” (allocation)
compared to “create one shared item that will make two people more or less happy.” We
may expect that Yaari and Bar-Hillel (1984) did not observe as much maximin behavior
because they used a task of zero-sum bundle allocation; this hypothesis should be investi-
gated in future work.
Another difference between our study and that of Yaari and Bar-Hillel (1984) was that
we probed participants’ intuitions of what was fair with 20 questions (4 or 6 choices each) for each experiment, and we determined which metrics best described participants’
behavior by accumulating evidence across all of these trials. In contrast, Yaari and
Bar-Hillel (1984) had participants answer one question for each experiment (usually with
5 choices), and used that single choice to inform which single “mechanism” (analo-
gous to our metrics, but without considering combinations of metrics) best described par-
ticipant behavior. The authors thus did not have a continuous characterization of how
much participants were acting in accordance with different metrics, instead using discrete
choices that distinguished “maximin,” “maxsum/utilitarian,” and other allocations, which they tested with a single set of choices. With more trials available and a finer-grained measure of how each choice contributed to the various metrics, Yaari and Bar-Hillel
(1984) may have observed a more maximin-biased distribution, as we did.
As a final aside, with regard to changes in wording, Yaari and Bar-Hillel (1984)
found similar distributions of responses when they asked, first, how participants would
divide the items, and second, how the two agents would divide the items if the agents
were aiming to be just. In our paradigm, we asked only what a third-party decision-maker
would do, but this result implies that in our paradigm we would find similar responses if
we were asking participants what choices recipients would think were just.
Our study stands in contrast to these four studies in a few ways. First, while our prob-
lem formulation can encompass any fairness allocation problem based on our definition
of an “action,” the specific paradigm we used was specialized to solve the problem of
what single item a decision-maker should choose to distribute to a group (a rather
straightforward action). This stands in contrast to fair allocation studies, and we described
a direct comparison of our study and Yaari and Bar-Hillel (1984) on this axis, with a particular eye to zero-sum choices. Additionally, some differences in our results may be
attributable to experimental simplicity. For example, in Herreiner and Puppe (2007), par-
ticipants may entertain additional implicit utilities when they reason over bundles (such
as item number fairness), and there may be difficulties in balancing many possible
choices while considering different allocations of 3 to 4 goods. Similarly, Yaari and Bar-
Hillel (1984) introduce motivational features not present in our work, like division based
on agent needs. A second methodological distinction of our study is that the other studies
did not use a continuous measure of behavioral alignment to several metrics, instead
using discrete comparisons over metrics that largely correlate, which leaves the option open that finer-grained distinctions may change the details of some of their results.
Additionally, a significant advantage of our study is that we employed 20 matrices and
could have employed more, whereas other studies had fewer than 12 payoff matrices, and
these payoff matrices often confounded different metrics due to the authors’ different
emphases. Finally, we showed consistent participant behavior aligning with the maximin
metric, rather than emphasizing the involvement of the maxsum and IA metrics, and
observed this finding over repeated conditions, a comparison which, to the authors’ knowledge, has not previously been conducted.
In summary, our study is aimed at a different question than most in the fairness litera-
ture; we aim to study how people would prefer that decision-makers act when they can
benefit multiple people. This question is not addressed in previous studies, but is most related to other fairness studies examining disinterested third-party (zero-sum) decisions
(note that unlike a fair allocation problem, our problem is not zero-sum, since a decision-maker is creating a resource for two people with different but not opposing utilities).
Within this previous work, preference for the maximin metric has not been universally
shown, and the maximin metric is often not directly compared with other potential met-
rics. Relatedly, previous studies often do not account for the correlations among metrics
when describing participant choices. Here, we presented many choices to participants,
testing their intuitions many times for each question, and then used a MaxEnt model to
disambiguate between the overlapping metrics that described each choice. We additionally probed participants’ behavior in repeated, longer-term scenarios than have been eval-
uated for this problem setting. We distinguished participants’ choices representative of
the maximin metric by direct comparison with competitive strategies, across many ques-
tion formulations to ensure generalization. We thus provide a novel contribution to the
question of what people think a decision-maker should do when faced with helping peo-
ple with unique utilities.
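As a concrete illustration of the modeling approach just summarized, the following minimal sketch (not the authors' code) implements a MaxEnt choice model of the kind described: each option in a matrix is scored under each metric, values are normalized within a metric (as in Note 1 below), and a weight vector h (echoing the h estimated in Note 3) determines how strongly each metric drives choice probabilities. The fourth option and the weight values are illustrative assumptions; in the study itself the weights would be fit to participants' choices.

```python
import numpy as np

def normalize(values):
    # Normalize a metric's per-option values to [0, 1] within one matrix.
    v = np.asarray(values, dtype=float)
    rng = v.max() - v.min()
    return (v - v.min()) / rng if rng > 0 else np.zeros_like(v)

def choice_probs(metric_values, h):
    # Softmax over a weighted sum of normalized metric values.
    score = sum(h[m] * normalize(vals) for m, vals in metric_values.items())
    expd = np.exp(score - score.max())
    return expd / expd.sum()

# Two agents' utilities for four drink options; the first three are the cups
# named in Note 2, the fourth is invented for the example.
options = [(5, 8), (4, 11), (12, 2), (9, 4)]
metrics = {
    "maximin": [min(o) for o in options],
    "maxsum":  [sum(o) for o in options],
    "IA":      [-abs(o[0] - o[1]) for o in options],
}
# A purely maximin-driven participant concentrates probability on (5, 8).
print(choice_probs(metrics, {"maximin": 3.0, "maxsum": 0.0, "IA": 0.0}))
```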
5.2. Future directions
An interesting extension to this work would be to do the full study that includes
extreme comparisons. If participants are faced with the choices [4, 80] and [4, 4] (where
agent A receives the first number and agent B receives the second), all participants will
likely choose [4, 80], because the tradeoff between maximizing the sum and trying to
maintain equality between agents is so extreme. These choices could be varied parametri-
cally, gradually making the tradeoffs less extreme (and more difficult to choose between) with respect to different metrics. Our study lies basically at the center of that parametric
descent, where the choices are most difficult. It is difficult to choose whether [4, 8] or
[5, 5] is better, and behavior according to both the maximin and maxsum metrics is com-
petitive. In this range, determining whether a participant was acting more according to a
maximin or an IA metric required accumulating evidence across trials, motivating the use
of our MaxEnt model. This is because each choice that a participant made could serve a
few possible metrics simultaneously, and in Table 1 we observed this by noting that in
the raw data, participants’ behavior tended to be described by the maxsum, maximin, and
IA metrics rather than a single metric. However, our paradigm would work well in evalu-
ating when participants would change their behavior as tradeoffs become more extreme.
If a participant acts according to the IA metric until the group utility hits a threshold, that
participant should then continue acting according to a maxsum metric for all the more
extreme choices thereafter.
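As a rough sketch of this parametric descent (all numbers illustrative, chosen to match the examples above), fix the unequal option at [4, 4 + g] and the equal option at [4 + e, 4 + e], then shrink g toward the difficult middle of the range and record which option each metric prefers:

```python
# (g, e) = (76, 0) reproduces the extreme pair [4, 80] vs. [4, 4];
# (g, e) = (4, 1) reproduces the difficult pair [4, 8] vs. [5, 5].
# The IA score below is one possible formulation, assumed for illustration.

def prefer(metric, unequal, equal):
    vu, ve = metric(unequal), metric(equal)
    return "unequal" if vu > ve else "equal" if ve > vu else "tie"

maximin = min
maxsum = sum
ia = lambda u: -(max(u) - min(u))

for g, e in [(76, 0), (76, 1), (20, 1), (4, 1)]:
    unequal, equal = [4, 4 + g], [4 + e, 4 + e]
    print(unequal, "vs", equal,
          "| maximin:", prefer(maximin, unequal, equal),
          "| maxsum:", prefer(maxsum, unequal, equal),
          "| IA:", prefer(ia, unequal, equal))
```

At (76, 0) maximin is tied and maxsum overwhelmingly favors [4, 80], while at (4, 1) maximin and IA favor [5, 5] against maxsum's [4, 8], which is the difficult region where accumulating evidence across trials becomes necessary.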
Tradeoffs like those between equity, the maxsum metric, and need (which we do not
examine here) have been widely compared in previous studies. Work like Ahlert et al.
(2013), Charness and Rabin (2002), Engelmann and Strobel (2004), Faravelli (2007), Fehr
et al. (2006), Fisman et al. (2007), Konow (2001, 2003), Konow and Schwettmann
(2016), Mitchell et al. (1993), Ordoñez and Mellers (1993), Pelligra and Stanca (2013),
Schwettmann (2009, 2012), and Skitka and Tetlock (1992) find that participants do
trade off between different principles based on the choices presented, as we see in our work when participants make choices in accordance with several of the metrics. The sug-
gested set of experiments would confirm and expand upon our results, as our stimuli were
chosen to isolate which metrics participants thought best when choices were most diffi-
cult. Moreover, a parametric study examining these tradeoffs would provide a large and
systematic dataset to contribute to the empirical literature on this topic.
Another extension to this study could be to consider whether having verbal problem
descriptions, or other representations of utilities, would enhance our paradigm. Hurley,
Buckley, Cuff, Giacomini, and Cameron (2011) replicated the study by Yaari and Bar-
Hillel (1984) with quantitative problem descriptions, verbal problem descriptions, and
both combined. In their verbal descriptions, they instructed participants to create an allo-
cation according to a given metric: “Divide the apples in such a way that the total amount of vitamin F obtained by both Jones and Smith together is as large as possible,”
is an example of a maxsum instruction. The authors report that the results from the verbal
problem descriptions were consistent with participants not understanding the relationship
between the described metrics and their quantitative allocations, and that participants
increased maxsum behavior. We used quantitative information in the present study; it
would be hard to adapt the flexibility inherent in the different choices we presented to
participants in a verbal-only description. Hurley et al. (2011) find that verbal and quanti-
tative descriptions together produce results that more closely match quantitative descrip-
tions alone. Because verbal descriptions do not seem to produce an increased
understanding, we expect our paradigm works well with quantitative information, though
the question of how to represent utilities and the effects of that choice on participant behavior remains an interesting one.
One limitation of this study is that it asks participants to evaluate between two agents
only. The fair allocation literature commonly evaluates over two or three recipient agents;
we expect our results to also scale to three agents, but we do not know how robust our
findings will be at larger group sizes. This question is incredibly important given that
large-scale decision-making impacts many people and so should be done in a way
endorsed by many. We do not know that our results will scale: As group size increases,
utilities become harder for participants (and people in general) to reason over, so we
expect different heuristics will emerge. Fortunately, our experimental paradigm would serve as a good platform for this work given that we can easily generate sets of matrices
which require participants to make difficult decisions and can analyze participants’
responses with various insertable metrics.
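A rough sketch of what such matrix generation could look like (the parameter ranges are illustrative, not taken from the paper): sample utility matrices for n agents and keep only those in which the maximin-best and maxsum-best options differ, so that every kept matrix forces a genuinely difficult decision.

```python
import random

def difficult_matrix(n_agents=3, n_options=4, lo=1, hi=12, seed=0):
    # Rejection-sample until the maximin-best and maxsum-best options
    # disagree, so the matrix discriminates between the two metrics.
    rng = random.Random(seed)
    while True:
        options = [[rng.randint(lo, hi) for _ in range(n_agents)]
                   for _ in range(n_options)]
        best_maximin = max(range(n_options), key=lambda i: min(options[i]))
        best_maxsum = max(range(n_options), key=lambda i: sum(options[i]))
        if best_maximin != best_maxsum:
            return options

print(difficult_matrix())
```

The same rejection test extends to any other insertable metric by swapping in its scoring function.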
Finally, in future work, we hope to expand the generalizability of our findings. Within
our paradigm, we observed consistent and robust results which were supported by using
many matrices and conditions. Our paradigm differed from most in the fair allocation lit-
erature due to its focus on a more general problem: what a decision-maker should do
when it can take one action (in this case making a drink) that will be shared among
agents with different preferences. This paradigm itself offers many avenues for expansion
—there are many other actions a decision-maker could take besides making an item,
including choosing a field trip, donating to organizations, or making governmental policy decisions. We should also consider actions that produce choices with negative utilities,
since the dynamics at play in, for example, trolley problems or risk or loss aversion will
likely lead to different behaviors. Drawing from the fair allocation literature also shows
that this paradigm could be made more complex as multi-step actions are introduced
(e.g., an action encompassing bundle distribution), or selfishness, need, desert, and other
factors are considered. Konow (2003) presents a pluralistic approach, in which the prime
considerations when thinking about justice are a sense of equality and need, utilitarianism
and welfare economics, equity and desert, and context (other papers that present pluralis-
tic approaches are Cappelen, Hole, Sørensen, & Tungodden, 2007; Deutsch, 1985; Frohlich & Oppenheimer, 1992; Konow & Schwettmann, 2016; Lerner, 1975; Leventhal,
1976) and all will need to be incorporated into a fuller model of what people consider
desirable behavior.
This work provides a general problem framework and quantitative model of what peo-
ple think a third party should do when taking an action that will bring people different
utilities. While the problem we address is distinct from that of fair allocation, it is
informed by this literature and can contribute to the discussion of desirable descriptions
of helpful behavior. Understanding what actions to take when every choice affects the
well-being of many people with disjoint preferences has important bearings on the future,
whether in designing decision-making algorithms for artificial intelligence systems, quan-
titatively evaluating how to help individuals make decisions that are better in line with
what people consider to be good choices, or even creating accepted assistive robots. This
and future work focusing on making our results more generalizable —and determining
whether we would like our decision-makers to adopt people’s intuitions of preferred
choices—will provide important contributions to this problem.
6. Conclusion
Decision-making would be a much easier problem if everyone had the same prefer-
ences. Even having everyone agree about what to do about diverging preferences would
help reduce the complexity of reasoning over many diverse perspectives. While that is
not quite the spirited environment we live in, today’s world offers an opportunity to
make these complex decision-making tasks easier with artificial intelligence systems.
Artificial intelligences can optimize any number of people’s preferences once they know
what they are, and in this work, we develop a quantitative model of people’s intuitive preferences for what to do when making difficult decisions to help people.
Determining what decision-makers should do is a topic that philosophers and governors
have wrestled with for centuries: How do we think about inequality within individuals
and in society? In this work, we ventured into the morass, developing a general problem
framework that let us ask people how they would choose one action whose impact would
be different for everyone. Deeper questions remain: What are humanity’s morals around
inequality, and how does what we do compare to how we think we should be? We do
not know what to teach our decision-makers yet, and determining the answers to these
questions will require significant collaborative effort. Fortunately, we did determine what drinks to serve at the resulting conferences.
Author note
Special thanks to Professor Anant Sahai for suggesting the conditions in Experiment 1,
and we thank various members of the Center for Human-Compatible Artificial Intelli-
gence for helpful comments. This work was funded in part by NSF grant 1456709 to
T.L.G. and N.I.H. U.C. Berkeley Neuroscience Training Program Grant to V.G.
Open Research badges
This article has earned Open Data and Open Materials badges. Data and materials are
available at https://osf.io/gybd4.
Notes
1. The disadvantage of normalizing the values within each metric was that informa-
tion about metrics that had particularly high or low values for a given matrix was
lost. However, we considered this choice better than the alternative unnormalized
option, for which there would be no sense of the alternative options participants were choosing between.
2. As an example, if participants in the “Repeated 2x” condition were analyzing the
matrix in Fig. 1, to maximize the maximin metric on each choice they would
choose the [A: 5, B: 8] cup twice, but if they were maximizing the maximin metric
across all choices, they would choose the [A: 4, B: 11] cup once and the [A: 12,
B: 2] cup once. Note that choosing the [A: 5, B: 8] cup twice is tied for the sec-
ond-best option for maximizing maximin overall, so this set of choices would still
contribute to the hypothesis that participants were maximizing the maximin metric
across all choices, just not as strongly as would be true if the participant had chosen the [A: 4, B: 11] cup once and the [A: 12, B: 2] cup once (see the sketch after these notes).
3. Our results from estimating a single h across all participants (h_maximin = 1 and h_maximax = h_maxsum = h_IA = 0 for the “Follow-Up (2x)” and “Follow-Up (3x)” conditions) also support the conclusion that participants’ behavior was best explained by the maximin metric.
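As a quick check of the arithmetic in Note 2, here is a short sketch using only the three cups named in that note (Fig. 1's full matrix may contain further options):

```python
# Verify that per-choice and across-choices maximin diverge in the
# "Repeated 2x" condition, as described in Note 2.
from itertools import combinations_with_replacement

cups = {"[A:5,B:8]": (5, 8), "[A:4,B:11]": (4, 11), "[A:12,B:2]": (12, 2)}

# Per-choice: pick the cup with the highest single-choice minimum, twice.
best_single = max(cups, key=lambda c: min(cups[c]))
print("per-choice maximin:", best_single, "chosen twice")

# Across choices: pick the pair of cups maximizing the minimum summed utility.
def summed_min(pair):
    total_a = sum(cups[c][0] for c in pair)
    total_b = sum(cups[c][1] for c in pair)
    return min(total_a, total_b)

best_pair = max(combinations_with_replacement(cups, 2), key=summed_min)
print("across-choices maximin:", best_pair, "->", summed_min(best_pair))
```

This recovers the note's example: [A: 5, B: 8] twice maximizes maximin choice by choice, while [A: 4, B: 11] plus [A: 12, B: 2] yields summed utilities [16, 13] and the higher overall minimum of 13.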
References
Ahlert, M., Funke, K., & Schwettmann, L. (2013). Thresholds, productivity, and context: An experimental
study on determinants of distributive behaviour. Social Choice and Welfare ,40(4), 957 –984. https://doi.
org/10.1007/s00355-012-0652-8
Aleksandrov, M. D., Aziz, H., Gaspers, S., & Walsh, T. (2015). Online fair division: Analysing a food bank
problem. In Q. Yang, & M. Woolridge (Eds.), Twenty-Fourth International Joint Conference on Artificial
Intelligence (pp. 2540 –2546). Palo Alto, CA: AAAI Press/International Joint Conferences on Artificial
Intelligence.
Alesina, A., & Angeletos, G.-M. (2005). Fairness and redistribution. American Economic Review ,95(4), 960 –
980. https://doi.org/10.1257/0002828054825655
Amanatidis, G., Markakis, E., Nikzad, A., & Saberi, A. (2017). Approximation algorithms for computing
maximin share allocations. ACM Transactions on Algorithms ,13(4), 52. https://doi.org/10.1145/3147173
Andersson, F., & Lyttkens, C. H. (1999). Preferences for equity in health behind a veil of ignorance. Health
Economics, 8(5), 369–378. https://doi.org/10.1002/(SICI)1099-1050(199908)8:5<369::AID-HEC456>3.0.CO;2-Q
Andreoni, J. (1989). Giving with impure altruism: Applications to charity and Ricardian equivalence. Journal
of Political Economy ,97(6), 1447 –1458. https://doi.org/10.1086/261662
Andreoni, J., & Miller, J. H. (1993). Rational cooperation in the finitely repeated prisoner’s dilemma:
Experimental evidence. Economic Journal ,103(418), 570 –585. https://doi.org/10.2307/2234532
Andreoni, J., & Vesterlund, L. (2001). Which is the fair sex? Gender differences in altruism. The Quarterly
Journal of Economics ,116(1), 293 –312. https://doi.org/10.1162/003355301556419
Ashlagi, I., Karagözoğlu, E., & Klaus, B. (2012). A non-cooperative support for equal division in estate
division problems. Mathematical Social Sciences ,63(3), 228 –233. https://doi.org/10.1016/j.mathsocsci.
2012.01.004
Austerweil, J. L., Brawner, S., Greenwald, A., Hilliard, E., Ho, M., Littman, M. L., MacGlashan, J., &
Trimbach, C. (2015). The impact of other-regarding preferences in a collection of non-zero-sum grid games. AAAI Spring Symposium 2016 on Challenges and Opportunities in Multiagent Learning for the
Real World. Palo Alto, CA: The AAAI Press.
Babcock, L., Loewenstein, G., Issacharoff, S., & Camerer, C. (1995). Biased judgments of fairness in
bargaining. American Economic Review ,85(5), 1337 –1343.
Baker, C. L., Saxe, R., & Tenenbaum, J. B. (2009). Action understanding as inverse planning. Cognition ,113
(3), 329 –349. https://doi.org/10.1016/j.cognition.2009.07.005
Barman, S., & Krishna Murthy, S. K. (2017). Approximation algorithms for maximin fair division. In
Association for Computing Machinery (Ed.), Proceedings of the 2017 ACM Conference on Economics and
Computation (pp. 647 –664). New York: ACM.
Beckman, S. R., Formby, J. P., Smith, W. J., & Zheng, B. (2002). Envy, malice and Pareto efficiency: An
experimental examination. Social Choice and Welfare ,19(2), 349 –367. https://doi.org/10.1007/
s003550100116
Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, reciprocity, and social history. Games and Economic
Behavior ,10(1), 122 –142. https://doi.org/10.1006/game.1995.1027
Bergemann, D., & Valimaki, J. (2006). Efficient Dynamic Auctions. Cowles Foundation Discussion Paper
No. 1584 . New Haven, CT: Yale Cowles Foundation. https://doi.org/10.2139/ssrn.936633
Bertsimas, D., Farias, V., & Trichakis, N. (2011). The price of fairness. Operations Research ,59(1), 17 –31.
https://doi.org/10.1287/opre.1100.0865
Bertsimas, D., Farias, V. F., & Trichakis, N. (2012). On the efficiency-fairness trade-off. Management
Science ,58(12), 2234 –2250. https://doi.org/10.1287/mnsc.1120.1549
Binmore, K. (1994). Game theory and the social contract. Vol. 1. Playing Fair (MIT Press Series on
Economic Learning and Social Evolution) . Cambridge, MA: MIT Press.
Bolton, G., Brandts, J., Katok, E., Ockenfels, A., & Zwick, R. (2008). Testing theories of other-regarding
behavior: A sequence of four laboratory studies. Handbook of Experimental Economics Results ,1, 488 –
499. https://doi.org/10.1016/S1574-0722(07)00055-8
Bolton, G., & Ockenfels, A. (2000). ERC: A theory of equity, reciprocity, and competition. American
Economic Review ,90(1), 166 –193. https://doi.org/10.1257/aer.90.1.166
Bosmans, K., & Schokkaert, E. (2004). Social welfare, the veil of ignorance and purely individual risk: An
empirical examination. Research on Economic Inequality, 11, 85–114.
Bosmans, K., & Schokkaert, E. (2009). Equality preference in the claims problem: A questionnaire study of
cuts in earnings and pensions. Social Choice and Welfare ,33(4), 533. https://doi.org/10.1007/s00355-009-
0378-4
Bouveret, S., & Lang, J. (2011). A general elicitation-free protocol for allocating indivisible goods. In T.
Walsh (Ed.), Twenty-Second International Joint Conference on Artificial Intelligence (pp. 73 –78). Menlo
Park, CA: AAAI Press/International Joint Conferences on Artificial Intelligence.
Boyd, R., & Lorberbaum, J. P. (1987). No pure strategy is evolutionarily stable in the repeated prisoner’s
dilemma game. Nature ,327(6117), 58. https://doi.org/10.1038/327058a0
Brams, S. J., Edelman, P. H., & Fishburn, P. C. (2003). Fair division of indivisible items. Theory and
Decision, 55(2), 147–180. https://doi.org/10.1023/B:THEO.0000024421.85722.0a
Brams, S. J., & Taylor, A. D. (1996). Fair division: From cake-cutting to dispute resolution . Cambridge,
UK: Cambridge University Press.
Camerer, C. F. (2011). Behavioral game theory: Experiments in strategic interaction . Princeton, NJ:
Princeton University Press.
Cappelen, A. W., Hole, A. D., Sørensen, E. Ø., & Tungodden, B. (2007). The pluralism of fairness ideals:
An experimental approach. American Economic Review ,97(3), 818 –827. https://doi.org/10.1257/aer.97.3.
818
Cappelen, A. W., Nielsen, U. H., Sørensen, E. Ø., Tungodden, B., & Tyran, J.-R. (2013). Give and take in
dictator games. Economics Letters ,118(2), 280 –283. https://doi.org/10.1016/j.econlet.2012.10.030
Carlsson, F., Gupta, G., & Johansson-Stenman, O. (2003). Choosing from behind a veil of ignorance in India.
Applied Economics Letters ,10(13), 825 –827. https://doi.org/10.1080/1350485032000148268
Cavallo, R. (2008). Efficiency and redistribution in dynamic mechanism design. In Proceedings of the Ninth
ACM Conference on Electronic Commerce (pp. 220 –229). New York: ACM.
Charness, G., & Rabin, M. (2002). Understanding social preferences with simple tests. The Quarterly Journal
of Economics ,117(3), 817 –869. https://doi.org/10.1162/003355302760193904
Chmura, T., Kube, S., Pitz, T., & Puppe, C. (2005). Testing (beliefs about) social preferences: Evidence from
an experimental coordination game. Economics Letters ,88(2), 214 –220. https://doi.org/10.1016/j.econlet.
2005.02.009
Cooney, G., Gilbert, D., & Wilson, T. (2016). When fairness matters less than we expect. Proceedings of the
National Academy of Sciences ,113(40), 11168 –11171. https://doi.org/10.1073/pnas.1606574113
Cox, J. C. (2004). How to identify trust and reciprocity. Games and Economic Behavior ,46(2), 260 –281.
https://doi.org/10.1016/S0899-8256(03)00119-2
Croson, R., & Gneezy, U. (2009). Gender differences in preferences. Journal of Economic Literature ,47(2),
448–474. https://doi.org/10.1257/jel.47.2.448
Croson, R., & Konow, J. (2009). Social preferences and moral biases. Journal of Economic Behavior &
Organization ,69(3), 201 –212.
Dana, J., Weber, R. A., & Kuang, J. X. (2007). Exploiting moral wiggle room: Experiments demonstrating
an illusory preference for fairness. Economic Theory ,33(1), 67 –80. https://doi.org/10.1007/s00199-006-
0153-z
Demers, A., Keshav, S., & Shenker, S. (1989). Analysis and simulation of a fair queueing algorithm. ACM
SIGCOMM Computer Communication Review ,19(4), 1 –12.
Deutsch, M. (1985). Distributive justice: A social-psychological perspective . New Haven, CT: Yale
University Press.
Dickerson, J. P., Goldman, J., Karp, J., Procaccia, A. D., & Sandholm, T. (2014). The computational rise and
fall of fairness. In Twenty-Eighth AAAI Conference on Artificial Intelligence (pp. 1405 –1411). Palo Alto,
CA: AAAI Press.
Dreber, A., Fudenberg, D., & Rand, D. G. (2014). Who cooperates in repeated games: The role of altruism,
inequity aversion, and demographics. Journal of Economic Behavior & Organization, 98, 41–55. https://
doi.org/10.1016/j.jebo.2013.12.007
Dubois, D., Fargier, H., & Prade, H. (1996). Refinements of the maximin approach to decision-making in a
fuzzy environment. Fuzzy Sets and Systems ,81(1), 103 –122. https://doi.org/10.1016/0165-0114(95)00243-
X
Dufwenberg, M., & Kirchsteiger, G. (2004). A theory of sequential reciprocity. Games and Economic
Behavior ,47(2), 268 –298. https://doi.org/10.1016/j.geb.2003.06.003
Dupuis-Roy, N., & Gosselin, F. (2009). An empirical evaluation of fair-division algorithms. In N. A. Taatgen
& H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp.
2681 –2686). Austin, TX: Cognitive Science Society.
Dupuis-Roy, N., & Gosselin, F. (2011). The simpler, the better: A new challenge for fair-division theory. In
L. Carlson, C. Hoelscher, & T. F. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the
Cognitive Science Society (pp. 3229–3234). Austin, TX: Cognitive Science Society.
Ellingsen, T., & Johannesson, M. (2001). Sunk costs, fairness, and disagreement. Stockholm School of
Economics Working Paper.
Engelmann, D., & Strobel, M. (2004). Inequality aversion, efficiency, and maximin preferences in simple
distribution experiments. American Economic Review ,94(4), 857 –869. https://doi.org/10.1257/
0002828042002741
Escoffier, B., Gourvès, L., & Monnot, J. (2013). Fair solutions for some multiagent optimization problems.
Autonomous Agents and Multi-Agent Systems ,26(2), 184 –201. https://doi.org/10.1007/s10458-011-9188-z
Faravelli, M. (2007). How context matters: A survey based experiment on distributive justice. Journal of
Public Economics ,91(7–8), 1399 –1422. https://doi.org/10.1016/j.jpubeco.2007.01.004
Fehr, E., Naef, M., & Schmidt, K. (2006). Inequality aversion, efficiency, and maximin preferences in simple
distribution experiments: Comment. American Economic Review ,96(5), 1912 –1917. https://doi.org/10.
1257/aer.96.5.1912
Fisman, R., Kariv, S., & Markovits, D. (2007). Individual preferences for giving. American Economic
Review ,97(5), 1858 –1876. https://doi.org/10.1257/aer.97.5.1858
Fleurbaey, M. (2008). Fairness, responsibility, and welfare . Oxford, UK: Oxford University Press. https://doi.
org/10.1093/acprof:osobl/9780199215911.001.0001
Fong, C. (2001). Social preferences, self-interest, and the demand for redistribution. Journal of Public
Economics ,82(2), 225 –246. https://doi.org/10.1016/S0047-2727(00)00141-9
Freeman, R., Zahedi, S. M., & Conitzer, V. (2017). Fair social choice in dynamic settings. In C. Sierra (Ed.),
Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI) . Marina del Rey,
CA: International Joint Conferences on Artificial Intelligence.
Frohlich, N., & Oppenheimer, J. A. (1992). Choosing justice: An experimental approach to ethical theory .
Berkeley: University of California Press.
Frohlich, N., Oppenheimer, J. A., & Eavey, C. L. (1987). Laboratory results on Rawls’s distributive justice.
British Journal of Political Science ,17(1), 1 –21. https://doi.org/10.1017/S0007123400004580
Gächter, S., & Riedl, A. (2006). Dividing justly in bargaining problems with claims. Social Choice and
Welfare ,27(3), 571 –594. https://doi.org/10.1007/s00355-006-0141-z
Gaertner, W., Jungeilges, J., & Neck, R. (2001). Cross-cultural equity evaluations: A questionnaire-
experimental approach. European Economic Review ,45(4–6), 953 –963. https://doi.org/10.1016/S0014-
2921(01)00119-2
Gaertner, W., & Schokkaert, E. (2012). Empirical social choice: Questionnaire-experimental studies on
distributive justice . Cambridge, UK: Cambridge University Press https://doi.org/10.1017/
CBO9781139012867
Gaertner, W., & Schwettmann, L. (2007). Equity, responsibility and the cultural dimension. Economica ,74
(296), 627 –649. https://doi.org/10.1111/j.1468-0335.2006.00563.x
Gollwitzer, M., & van Prooijen, J.-W. (2016). Psychology of justice. In C. Sabbagh & M. Schmitt (Eds.),
Handbook of social justice theory and research (pp. 61 –82). New York: Springer. https://doi.org/10.1007/
978-1-4939-3216-0_4
Guo, M., Conitzer, V., & Reeves, D. M. (2009). Competitive repeated allocation without payments. In S.
Leonardi (Ed.), International Workshop on Internet and Network Economics (pp. 244 –255). Berlin,
Heidelberg: Springer. https://doi.org/10.1007/978-3-642-10841-9_23
Harsanyi, J. C. (1975). Can the maximin principle serve as a basis for morality? A critique of John Rawls’s
theory. American Political Science Review ,69(2), 594 –606. https://doi.org/10.2307/1959090
Herreiner, D., & Puppe, C. (2007). Distributing indivisible goods fairly: Evidence from a questionnaire study.
Analyse & Kritik ,29(2), 235 –258.
Herrero, C., Moreno-Ternero, J. D., & Ponti, G. (2010). On the adjudication of conflicting claims: An
experimental study. Social Choice and Welfare ,34(1), 145 –179. https://doi.org/10.1007/s00355-009-0398-0
Hoffman, E., & Spitzer, M. L. (1985). Entitlements, rights, and fairness: An experimental examination of
subjects’ concepts of distributive justice. Journal of Legal Studies ,14(2), 259 –297. https://doi.org/10.1086/
467773
Hsu, M., Anen, C., & Quartz, S. (2008). The right and the good: Distributive justice and neural encoding of
equity and efficiency. Science ,320(5879), 1092 –1095. https://doi.org/10.1126/science.1153651
Huck, S., & Oechssler, J. (1999). The indirect evolutionary approach to explaining fair allocations. Games
and Economic Behavior ,28(1), 13 –24. https://doi.org/10.1006/game.1998.0691
Hurley, J., Buckley, N. J., Cuff, K., Giacomini, M., & Cameron, D. (2011). Judgments regarding the fair
division of goods: The impact of verbal versus quantitative descriptions of alternative divisions. Social
Choice and Welfare ,37(2), 341 –372. https://doi.org/10.1007/s00355-010-0487-0
Johansson-Stenman, O., Carlsson, F., & Daruvala, D. (2002). Measuring future grandparents’ preferences for
equality and relative standing. Economic Journal ,112(479), 362 –383. https://doi.org/10.1111/1468-0297.
00040
Jungeilges, J. A., & Theisen, T. (2008). A comparative study of equity judgements in Lithuania and Norway.
Journal of Socio-Economics ,37(3), 1090 –1118. https://doi.org/10.1016/j.socec.2007.04.002
Kalinowski, T., Narodytska, N., & Walsh, T. (2013). A social welfare optimal sequential allocation
procedure. In F. Rossi (Ed.), Twenty-Third International Joint Conference on Artificial Intelligence (pp.
227–233). Menlo Park, CA: AAAI Press / International Joint Conferences on Artificial Intelligence.
Kash, I., Procaccia, A. D., & Shah, N. (2014). No agent left behind: Dynamic fair division of multiple
resources. Journal of Artificial Intelligence Research ,51, 579 –603. https://doi.org/10.1613/jair.4405
Konow, J. (2000). Fair shares: Accountability and cognitive dissonance in allocation decisions. American
Economic Review ,90(4), 1072 –1091. https://doi.org/10.1257/aer.90.4.1072
Konow, J. (2001). Fair and square: The four sides of distributive justice. Journal of Economic Behavior &
Organization ,46(2), 137 –164. https://doi.org/10.1016/S0167-2681(01)00194-9
Konow, J. (2003). Which is the fairest one of all? A positive analysis of justice theories. Journal of
Economic Literature ,41(4), 1188 –1239. https://doi.org/10.1257/002205103771800013
Konow, J. (2009). Is fairness in the eye of the beholder? An impartial spectator analysis of justice. Social
Choice and Welfare ,33(1), 101 –127. https://doi.org/10.1007/s00355-008-0348-2
Konow, J., & Schwettmann, L. (2016). The economics of justice. In C. Sabbagh, & M. Schmitt (Eds.),
Handbook of social justice theory and research (pp. 83 –106). New York: Springer. https://doi.org/10.
1007/978-1-4939-3216-0_5
Kuderer, M., Gulati, S., & Burgard, W. (2015). Learning driving styles for autonomous vehicles from
demonstration. In A. Okamura (Ed.), 2015 IEEE International Conference on Robotics and Automation
(ICRA) (pp. 2641 –2646). Piscataway, NJ: IEEE.
Kurokawa, D., Procaccia, A. D., & Wang, J. (2016). When can the maximin share guarantee be guaranteed?
In Thirtieth AAAI Conference on Artificial Intelligence. Palo Alto, CA: The AAAI Press.
Kuzmics, C., Palfrey, T., & Rogers, B. W. (2014). Symmetric play in repeated allocation games. Journal of
Economic Theory, 154, 25–67. https://doi.org/10.1016/j.jet.2014.08.002
Lerner, M. J. (1975). The justice motive in social behavior: Introduction. Journal of Social Issues ,31(3), 1 –
19. https://doi.org/10.1111/j.1540-4560.1975.tb00995.x
Leventhal, G. S. (1976). Fairness in social relationships . Morristown, NJ: General Learning Press.
Levine, D. K. (1998). Modeling altruism and spitefulness in experiments. Review of Economic Dynamics ,1
(3), 593 –622. https://doi.org/10.1006/redy.1998.0023
Marwell, G., & Ames, R. E. (1981). Economists free ride, does anyone else? Experiments on the provision of
public goods, IV. Journal of Public Economics ,15(3), 295 –310. https://doi.org/10.1016/0047-2727(81)
90013-X
Mitchell, G., Tetlock, P. E., Mellers, B. A., & Ordonez, L. D. (1993). Judgments of social justice:
Compromises between equality and efficiency. Journal of Personality and Social Psychology ,65(4), 629.
https://doi.org/10.1037/0022-3514.65.4.629
Moulin, H., Brandt, F., Conitzer, V., Endriss, U., Procaccia, A. D., & Lang, J. (2016). Handbook of
computational social choice . Cambridge, UK: Cambridge University Press. https://doi.org/10.1017/
CBO9781107446984
Moulin, H., & Stong, R. (2002). Fair queuing and other probabilistic allocation methods. Mathematics of
Operations Research ,27(1), 1 –30. https://doi.org/10.1287/moor.27.1.1.336
Ng, A. Y., & Russell, S. J. (2000). Algorithms for inverse reinforcement learning. In P. Langley (Ed.),
Proceedings of the 17th International Conference on Machine Learning (pp. 663 –670). San Francisco,
CA: Morgan Kaufmann.
Nord, E., Richardson, J., Street, A., Kuhse, H., & Singer, P. (1995). Who cares about cost? Does economic
analysis impose or reflect social values? Health Policy ,34(2), 79 –94. https://doi.org/10.1016/0168-8510
(95)00751-D
Nowak, M. A., Page, K. M., & Sigmund, K. (2000). Fairness versus reason in the ultimatum game. Science ,
289(5485), 1773 –1775. https://doi.org/10.1126/science.289.5485.1773
Oleson, P. E. (2001). An experimental examination of alternative theories of distributive justice and
economic fairness . Tucson: University of Arizona.
Ordoñez, L. D., & Mellers, B. A. (1993). Trade-offs in fairness and preference judgments. Psychological
Perspectives on Justice: Theory and Applications , 138 –154. https://doi.org/10.1017/CBO9780511552069.
008
Pálvölgyi, D., Peters, H. J. M., & Vermeulen, A. J. (2010). A strategic approach to estate division problems
with non-homogenous preferences. METEOR Research Memorandum 10/036. Maastricht.
Pelligra, V., & Stanca, L. (2013). To give or not to give? Equity, efficiency and altruistic behavior in an
artefactual field experiment. Journal of Socio-Economics, 46, 1–9. https://doi.org/10.1016/j.socec.2013.05.
015
Procaccia, A. D., & Wang, J. (2014). Fair enough: Guaranteeing approximate maximin shares. In
Proceedings of the Fifteenth ACM Conference on Economics and Computation (pp. 675 –692). New York,
NY: ACM. https://doi.org/10.1145/2600057.2602835
Rawls, J. (1971). A theory of justice . Cambridge, MA: Belknap Press.
Rawls, J. (1974). Some reasons for the maximin criterion. American Economic Review ,64(2), 141 –146.
Salles, R. M., & Barria, J. A. (2008). Lexicographic maximin optimisation for fair bandwidth allocation in
computer networks. European Journal of Operational Research ,185(2), 778 –794. https://doi.org/10.1016/
j.ejor.2006.12.047
Schäfer, M., Haun, D., & Tomasello, M. (2015). Fair is not fair everywhere. Psychological Science, 1252–
1260. https://doi.org/10.1177/0956797615586188
Schokkaert, E., & Devooght, K. (2003). Responsibility-sensitive fair compensation in different cultures.
Social Choice and Welfare ,21(2), 207 –242. https://doi.org/10.1007/s00355-003-0257-3
Schokkaert, E., & Lagrou, L. (1983). An empirical approach to distributive justice. Journal of Public
Economics ,21(1), 33 –52. https://doi.org/10.1016/0047-2727(83)90072-5
Schokkaert, E., & Overlaet, B. (1989). Moral intuitions and economic models of distributive justice. Social
Choice and Welfare ,6(1), 19 –31. https://doi.org/10.1007/BF00433360
Schwettmann, L. (2009). Trading off competing allocation principles: Theoretical approaches and empirical
investigations (Vol. 3343 ). Frankfurt, Germany: Peter Lang.
Schwettmann, L. (2012). Competing allocation principles: Time for compromise? Theory and Decision ,73
(3), 357 –380. https://doi.org/10.1007/s11238-011-9289-9
Skitka, L. J., & Tetlock, P. E. (1992). Allocating scarce resources: A contingency model of distributive
justice. Journal of Experimental Social Psychology ,28(6), 491 –522. https://doi.org/10.1016/0022-1031(92)
90043-J
Thomson, W. (2003). Axiomatic and game-theoretic analysis of bankruptcy and taxation problems: A survey.
Mathematical Social Sciences ,45(3), 249 –297. https://doi.org/10.1016/S0165-4896(02)00070-7
Traub, S., Seidl, C., Schmidt, U., & Levati, M. V. (2005). Friedman, Harsanyi, Rawls, Boulding –or
somebody else? An experimental investigation of distributive justice. Social Choice and Welfare ,24(2),
283–309. https://doi.org/10.1007/s00355-003-0303-1
Walsh, T. (2011). Online cake cutting. In R. Brafman, F. S. Roberts, & A. Tsoukiàs (Eds.), International
Conference on Algorithmic Decision Theory (pp. 292 –305). Berlin: Springer. https://doi.org/10.1007/978-
3-642-24873-3_22
Walsh, T. (2015). Challenges in resource and cost allocation. In Twenty-Ninth AAAI Conference on Artificial
Intelligence (pp. 4073 –4077). Palo Alto, CA: The AAAI Press.
Wittig, M., Jensen, K., & Tomasello, M. (2013). Five-year-olds understand fair as equal in a mini-ultimatum
game. Journal of Experimental Child Psychology ,116(2), 324 –337. https://doi.org/10.1016/j.jecp.2013.06.
004
Yaari, M. E., & Bar-Hillel, M. (1984). On dividing justly. Social Choice and Welfare ,1(1), 1 –24. https://doi.
org/10.1007/BF00297056
Ziebart, B. D., Maas, A. L., Bagnell, J. A., & Dey, A. K. (2008). Maximum entropy inverse reinforcement
learning . Chicago, IL: AAAI.
Supporting Information
Additional supporting information may be found
online in the Supporting Information section at the end
of the article:
Supplementary Material
Sources of evidence in Alignment
***Summary:** A short epistemological study to discover which sources of evidence can inform our predictions of action-relevant quantities in alignment.*
*This post follows* [*Quantitative cruxes*](https://www.lesswrong.com/posts/ryCfHod3eFhkYipW9/quantitative-cruxes-in-alignment)*, although reading that first is mostly not required. Work done during my last two weeks of SERI MATS 3.1.*
---
Sources of evidence
===================
No researcher in any field ever makes explicit all of their sources of evidence. Let alone in a field as chaotic and uncertain as ML, in which hardly-earned experience and intuitions play a central role in [stirring the tensor pile](https://xkcd.com/1838/). And even less in a field with as many varied opinions and confusing questions as alignment. Nonetheless, even when researchers are just “grokking some deeper hard-to-transmit structure from familiar theory and evidence”, they need to get their bits of information from somewhere. Knowledge doesn’t come for free, they need [entanglement with observed parts of reality](https://www.lesswrong.com/s/oFePMp9rKftEeZDDr/p/QkX2bAkwG2EpGvNug).
Getting a better picture of where we are and could be looking, brain-storming or deepening existing sources, and understanding methodology limitations (as [Adam Shimi](https://www.lesswrong.com/s/LLEJJoaYpCoS5JYSY)’s efforts already pursue) can dissolve confusions, speed progress forward, help us calibrate and build common knowledge.
In reality, the following sources of evidence motivating any belief are way less separable than the below text might make it seem. Nonetheless, isolating them yields more conceptual clarity and is the first step for analysis.
**1. Qualitative arguments**
One obvious, theoretical source, and the most used in this community by far. The central shortcoming is that their abstractions are flexibly explanatory exactly because they abstract away detail, and thus provide more information about the existence of algorithms or dynamics, than about some relevant related quantities like how prevalent they actually are in a certain space, when do these dynamics actually start to appear and with how much steering power, etc.
Sometimes a tacit assumption might seem to be made: there are so many qualitative arguments for the appearance of these dynamics (and so few for the appearance of rectifying dynamics), that surely one of them will be present to a relevant degree, and early on enough! This seems like a sort of [presumption of independence](https://arxiv.org/abs/2211.06738) about yet unobserved structure: a priori, we have no reason to believe any one of these qualitative arguments have higher or lower quantitative impact, so we should settle on the vague prior of them all having similar effects (and so, the side with more qualitative arguments wins). While this is truly the best we can do when further evidence isn’t available, it seems like an especially fragile prior, ignoring the many possible interdependencies among some qualitative arguments (how their validity cluster across different worlds), and possible correlated failures / decoupling of abstractions from reality, or systemic biases in the search for qualitative arguments. Incorporating some of these considerations is already enough to both slightly better inform our estimates, and especially better calibrate our credences and uncertainty.
Of course, usually qualitative arguments are informed by and supplemented with other sources of evidence that can provide a quantitative estimate, and then the presumption of independence is applied after incorporating these different sources (which is usually a considerably better situation to be in, unequivocally more grounded in reality). We can even explicitly reason qualitatively about the different relative magnitudes of some dynamics, as in [How likely is deceptive alignment?](https://www.lesswrong.com/posts/A9NxPTwbw6r6Awuwt/how-likely-is-deceptive-alignment).
And sometimes, in even less explicit ways, intuitive assessments of the strength of quantitative effects (or even the fundamental shape of the qualitative arguments) are already informed by grokked structure in reality coming from other sources of evidence. And of course, here talking about the actual structure, the actual evidence, will be more informative (even while acknowledging our credal relationship to this evidence might not be completely transmittable in language, given some hard-earned expert intuitions).
In any event, the above-mentioned tacit assumption lurks in the background of some more superficial discussions, and sadly reality doesn’t seem to be simple enough to permit its high-confidence application.
**2. Empirics and extrapolations**
We can obtain information about the technical cruxes from the current paradigm. This doesn’t imply abandoning our abstractions and predictions, or ignoring the fact that according to them phase changes are still to come between current systems and dangerous systems. But they are, ultimately, the most valuable way to test our intuitions. So the valuable actions here are less blindly and agnostically listening to the flow of information, and more purposefully finding the few empirical tests that could probe some corners of our abstractions (as difficult as this might be, given our abstractions talk about far situations, and in qualitative terms). For this purpose, maybe examining the ML literature is not enough, and experimentation with goals different than those of academia is required.
Of course, extrapolations are also theory-laden, since we need to decide which contributing factors to take into account. That said, these factors can in turn be estimated, and so on, yielding fruitful intertwining of our qualitative abstractions with quantitative feedback, that can eventually be tested against reality.
Work that provides more data ([example](https://www.lesswrong.com/posts/PDLfpRwSynu73mxGw/basic-facts-about-language-model-internals-1)) can be distinguished from work that interprets it ([example](https://www.lesswrong.com/posts/JFibrXBewkSDmixuo/hypothesis-gradient-descent-prefers-general-circuits)).
**3. Mathematical results about local search in general**
The motivating example for this class is mathematical theorems about the behavior of SGD in the limit, or approximations / conjectures of them (example: [infinite-width limits of neural networks](https://www.lesswrong.com/posts/CQMhLujqMpQ78Ru3R/infinite-width-mlps-as-an-ensemble-prior), and how they help understand inductive biases and training dynamics). That is, we think about the local search process in general and in the abstract, averaged over all possible tasks and models. Many times we get conceptual assessments instead of fully formal mathematical results, closer to qualitative arguments. Here deep familiarity with the Deep Learning literature would seem positive.
**4. Mentally approximating empirical evidence**
As mentioned above and [by many](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities#Section_C_), the alignment field seems mainly bottlenecked on not being able to test the dynamics we most worry about, or [iterate](https://www.lesswrong.com/posts/xFotXGEotcKouifky/worlds-where-iterative-design-fails) experimentally on robust metrics. We don’t have the privilege of waiting for more concrete evidence, and the whole field is trying to get ahead of the curve and place the bandaid before the wound.
It’s obviously bad for a field not to have a reliable flux of empirical evidence on its most central variables. It’s also likely that our mental and social tools for doing science are too tailored to exploiting such a flux.
Thus, some researchers seem to try to cheaply approximate some aspects of empirical evidence, and find alternative testing grounds for their intuitions, which mostly amount to higher-resolution concretized reasoning and prediction. That is, making explicit a lot of the complex structure so that we only need to call on simpler, and thus apparently more trustworthy, intuitions (in a speculative-but-concrete way which might not be widespread in other scientific areas).
The motivating example for this class is [Nate’s explanation](https://www.lesswrong.com/posts/iy2o4nQj9DnQD7Yhj/discussion-with-nate-soares-on-a-key-alignment-difficulty#High_level_premises) of obtaining confidence on a certain property of the distribution of cognitive mechanisms:
> *“like, we could imagine playing a game where i propose a way that it [the AI] diverges [from POUDA-avoidance] in deployment, and you counter by asserting that there's a situation in the training data where it had to have gotten whacked if it was that stupid, and i counter either by a more-sophisticated deployment-divergence or by naming either a shallower or a factually non-[Alice]like thing that it could have learned instead such that the divergence still occurs, and we go back and forth. and i win if you're forced into exotic and unlikely training data, and you win if i'm either forced into saying that it learned unnatural concepts, or if my divergences are pushed so far out that you can fit in a pivotal act before then.*
>
> *[...]*
>
> *and, like, it's a pretty tricky game to play b/c it's all made-up bullshit and it's hard to agree on who strained credulity more, but there's some sort of idealized game here where it sounds to me like we each expect we'd win if we played it ...*
>
> *So the place that my brain reports it gets its own confidence from, is from having done exercises that amount to self-play in the game I mentioned in a thread a little while back, which gives me a variety of intuitions about the rows in your table (where I'm like "doing science well requires CIS-ish stuff" and "the sort of corrigibility you learn in training doesn't generalize how we want, b/c of the interactions w/ the CIS-ish stuff")*
>
> *(that plus the way that people who hope the game goes the other way, seem to generally be arguing not from the ability to exhibit playthroughs that go some other way, but instead be arguing from ignorance / "we just don't know")”*
>
>
It’s refreshing to read an explicit account of such mental processes upstream of confidence. This example amounts to more finely predicting the macroscopic dynamics of a training run. It is very close to the [builder-breaker](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit) methodology Paul Christiano has many times [made explicit](https://ai-alignment.com/my-research-methodology-b94f2751cb2c). ARC more explicitly adopt the (almost) worst-case assumption to conservatively ignore some quantitative questions and just consider existence: “could we learn any dangerous algorithm here?” (Although again quantitative intuitions might inform which algorithms would seem to exist, since we can’t completely write them out.) But Nate can also be understood above as dissolving quantitative assessments into existence claims about cognitive mechanisms: “is it true that (almost) all (not-absurdly-large) algorithms solving this task need to perform some cognitive move of this kind?”
Of course, it’s not really clear how much resolution we can get away with in these predictions, nor which biases might be steering our simulation away from reality, nor which already theory-laden interpretations of evidence we might be baking in, so that it’s unclear how much confidence these assessments permit. And especially it will be a function of how formal and concrete we were able to get the builder-breaker procedure to be. After all, when lacking more empirics, “made-up bullshit” will always be the first step towards well-calibrated estimates, although different researchers disagree on which steps in this path deserve which credences, based mainly on differing confidence on their cognitive tools.
While source 3 above engages with local search in the abstract (and grounds in the mathematical robustness of limit behaviors), this source strives for as much concreteness as is possible (and grounds in the correct assessment of small and simple interactions, or straightforwardly predictable behaviors of concrete algorithms).
There are other ways to mentally approximate empirical evidence than macroscopically simulating a training run. We could try something more forecast-y, and actually assign probabilities (or distributions) to the small-scale phenomena and quantities that crop up in our simulations, like different cognitive structures being built. We can also assess the difficulty of a task (in some epistemically privileged metric other than “which cognitive structures execute it correctly”) and consider the consequences for SGD.
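To make this concrete, here is a deliberately toy sketch of what assigning distributions to small-scale phenomena could look like. The event names, Beta parameters, and the aggregation rule are all invented for illustration, not drawn from any actual analysis:

```python
# Toy Monte Carlo over subjective uncertainty about small-scale training
# phenomena. All event names and Beta parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Our uncertainty about each per-run probability, encoded as Beta distributions.
p_goal_directed = rng.beta(4, 2, N)  # P(training builds goal-directed machinery)
p_situational = rng.beta(2, 3, N)    # P(the model becomes situationally aware)
p_not_caught = rng.beta(3, 3, N)     # P(oversight misses the above)

# A crude aggregation rule: this threat needs all three ingredients.
p_threat = p_goal_directed * p_situational * p_not_caught

print(f"mean P(threat) = {p_threat.mean():.3f}")
print(f"90% interval   = ({np.quantile(p_threat, 0.05):.3f}, "
      f"{np.quantile(p_threat, 0.95):.3f})")
```

The point is not the numbers but the shape of the exercise: once the small-scale claims are explicit, disagreements localize to individual distributions and to the aggregation rule.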
**5. Vague priors or heuristics extracted from other domains**
For example, quantitative parameters of agentic mechanisms in biology or the social sciences can provide weak evidence for similar phenomena in neural net architectures (while we try to update on the big qualitative differences between the settings).
Importantly, this is distinct from “getting such a good understanding of biological examples of agency that we better understand cognitive mechanisms in general”. Although that routes through other domains, it ultimately will be expressible and motivated in purely domain-agnostic terms, without requiring biological evidence. That said, these different kinds of evidence are usually not completely separable.
It’s been [especially contested](https://www.lesswrong.com/posts/pz7Mxyr7Ac43tWMaC/against-evolution-as-an-analogy-for-how-humans-will-create) what the explicit role of evolutionary analogies should be. It would seem productive to more clearly delimit whether they are used as mere explainers or intuition pumps (pointing towards deeper truths about cognitive mechanisms that still need to be motivated and defended on different grounds), or whether they are being relied on as vague predictors, as per source of evidence 5.
**6. Common sense and value judgments**
Given the deeply philosophical nature of some alignment questions and realizations, and how we need to think about intelligence itself and values themselves to solve apparently technical tasks, it seems we also rely on an expanding set of deconfusions and “correct definitions” of what we are trying to achieve, and of which sanity-preserving assumptions we are epistemically and strategically allowed to make.
I’m thinking here of Demski’s [illuminating clarifications](https://www.lesswrong.com/s/Gmc7vtnpyKZRHWdt5/p/7Zn4BwgsiPFhdB6h8) on what it even means to extrapolate our values, or Armstrong’s [no indescribable hellscape](https://www.alignmentforum.org/posts/rArsypGqq49bk4iRr/can-there-be-an-indescribable-hellworld) assumption.
Stretching this class, we might even want it to include general deconfusions about our relationship to reality and values ([Tomasik on dualism](https://reducing-suffering.org/the-many-fallacies-of-dualism/), [Alexander on moral extrapolation](https://slatestarcodex.com/2018/09/25/the-tails-coming-apart-as-metaphor-for-life/)), which give us a clearer sense of which decisions will need to be taken in the future (“deciding the CEV procedure is a value judgement”), and which ethical sweetspots are and aren’t possible (contra, for example, [Nate on deferring ethics to future humans](https://www.alignmentforum.org/posts/DJRe5obJd7kqCkvRr/don-t-leave-your-fingerprints-on-the-future?commentId=8yapPvvqWwpKyEbjv)).
Decisions about which decision theory or which anthropic reasoning is correct probably fall into this value-laden category too (see, for example, [the impossibility of neutrally comparing decision theories](https://casparoesterheld.com/2022/07/17/the-lack-of-performance-metrics-for-cdt-versus-edt-versus/), although humans do informally compare them, so something weird is up). For example, even if [Solomonoff induction concluded](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign#:~:text=The%20Solomonoff%20prior%20is%20malign%20if%20there%20exists%20a%20simple,equal%20it%20in%20predictive%20power.) that our reality is being simulated by some maximally simple beings, we might, for aesthetic and ethical reasons, rather reconsider the virtues of Solomonoff induction than bend to the random wills of these beings.
Limitations of the framework and presentation
=============================================
(The following refers to the whole write-up, not only this second post.)
**About the extent and detail of this analysis:**
It is clear a more in-depth look is necessary. Not only are there innumerably many more cruxes (which might help us notice exploitable interdependencies) and important background parameters I will have missed, but the sources of evidence in particular form a list especially worth expanding and deepening. On that note, considerably more effort is probably needed for this framework to have a chance of surfacing interesting, low-hanging, but as yet unconsidered possibilities for prediction and research (instead of just distilling and presenting the state of some communal directions, and pushing towards clearer understanding and discourse, as is done in this write-up).
With a more complete mapping of the uncertainty landscape, we could even build a more nuts-and-bolts model of some of the threat-models downstream of these questions to quantitatively assess risks brought about by different proposals and strategies.
Also, the above lists are a first pass at “abstractly, what we’d like to know”. We still lack a cost-benefit analysis of which directions seem more tractable or will predictably provide high-quality evidence. It might be that some of these questions are not worth prioritizing.
As mentioned, the above lists are deliberately biased towards training dynamics. They are also somewhat MIRI-centric, especially in the threat models studied and the framings used. This too is deliberate: MIRI world-views and interpretations were sometimes the most confusing as to their claims, predictions and sources of evidence, so I wanted to isolate their cruxes as a first-pass analysis of their mental moves. Nonetheless, this will have limited the reach of this analysis in some ways, and applying it to other world-views would yield different and fruitful cruxes and factorizations. Similarly, the questions here relate to the basic deconfusion about intelligence and the future most optimized for existential risk reduction; they ignore many factors especially relevant to s-risk reduction, a different ethical urgency that sometimes poses relevantly different questions and calls for different methods.
**About the general framework and the direction of analysis:**
By over-analyzing isolated idealized phenomena, this analysis might be missing the bigger picture of how these cruxes relate to actual observed reality, or how humans go about exploring them. Certainly, as stated above, some of the messy interactions have been cast away to closely inspect even the most basic phenomena we’d like to know more about (the “first-order effects”), which are already messy enough. Nonetheless, this choice is deliberate, and this simplification and isolation seems like a natural first step.
On that note, maybe even these isolated cruxes are too messy to obtain relevant information on, and we should instead be focusing on more context-setting and going for more concrete (but less generally informative) cruxes. This is a real worry, and might be moving us away from ML and DL knowledge and techniques we could tractably use. Nonetheless, it seems useful to keep our eyes on big questions which ultimately drive our interests and decisions, as a middle ground between overly-concrete ML work, and overly-abstract threat models.
Epistemically, it could be that the methods we have for arriving at conclusions are still so varied and preliminary that methodological analysis won’t showcase any relevant structure. Maybe everyone is so confused, or conflicting methods so entangled, that such an analysis won’t be fruitful. My current opinion is that, on the contrary, exactly because of the pre-paradigmatic nature of the field, making mechanisms explicit can be most impactful now and help build foundations. That said, there certainly is some truth to the intuition that “sometimes someone just found an interesting phenomenon and wrote a post, using standard intuitions and standard tools, without much relation to the underlying interpersonal scientific mechanism”.
In the other direction, some researchers might argue that probabilities and quantitative assessments are not the right frame for any of this, and that the monolithic core of the problem, and all assessments necessary for making progress on it, instead rely only on some deep patterns they’ve grokked. I remain unconvinced of this (although, ironically, I put some small probability on them being right), given how even apparently obvious things can turn out surprising and deserve a probabilistic analysis, and the relevant claims seem far from obvious (even if the presence of a certain dynamic in the limit is).
On the action-relevance of isolating cruxes and getting better estimates: I’m not sure many big strategic decisions (especially those in governance) change relevantly if our probabilities for some threat scenarios vary inside the 20%-80% range (this take is due to Richard Ngo, in some tweet I can’t find, as part of a discussion on [Knightian uncertainty](https://www.lesswrong.com/posts/tG9BLyBEiLeRJZvX6/communicating-effectively-under-knightian-norms)). That said, I do think better estimates would inform many lower-level decisions about research prioritization, or even elicit different understandings or interpretations of the deeper nature of some phenomena. I also think bringing them to the surface is enough to clarify some positions and discussions.
Conclusions and future work
===========================
While we constantly trust our intuitions to grok relevant structure and “feel around” for what seems more likely (and we shouldn’t trash these proven sources of evidence), in a pre-paradigmatic field it can be especially valuable to make our epistemic attitudes and methods more explicit, along with the concrete questions we are pointing them towards, and the interesting nodes we think are upstream of disagreements.
Importantly, many of the questions we can pose in the flavor of the above list seem to call for forecast-y methods and attitudes (instead of the application of some well-trodden scientific tools), so doubtful distributions, medium credences and hedging seem to be the expected outcome for now.
I especially hope this analysis can help us notice how far we are from the scientific standards of other fields. I am not saying we should remain agnostic: we don’t have that privilege. A speculative interpretation of evidence is a priori better than no interpretation. But we do have to calibrate our confidence, and for that it seems helpful to compare with abstractions that have proven useful in the past. Building towards more robust standards might be very positive in [logistically median worlds](https://www.lesswrong.com/posts/yTvBSFrXhZfL8vr5a/worst-case-thinking-in-ai-alignment).
On that note, while as mentioned above there are important reasons why alignment discussions remain largely speculative, it seems worthwhile to increase efforts on keeping an eye out for possible (even theoretical / hypothetical / mental) [Proofs of Concept](https://www.lesswrong.com/posts/xsB3dDg5ubqnT7nsn/poc-or-or-gtfo-culture-as-partial-antidote-to-alignment#comments) or any kind of feedback from reality.
Indeed, while the list of cruxes and parameters seems more contingent and will continuously be expanded, we seem especially bottlenecked on sources of evidence. We need a deeper methodological and epistemological analysis, trying to come up with qualitatively new sources, or further exploiting existing ones by combining or repurposing them.
This work will be expanded with a benchmark of thought experiments in alignment, as intuition pumps and technical discussion starters. Other than that, some promising avenues for future work are:
* Expand and deepen the analysis of sources of evidence, especially looking for not-already-used low-hanging methods.
* Scour the Deep Learning literature for evidence on some of the questions on training dynamics and in-the-limit behavior.
* Replicate this treatment for the central questions of other researchers’ world-views.
* Expand questions and methods on feedback loops.
* More generally, search for background parameters and compile evidence on them.
Appendix: Where are we most uncertain?
======================================
Instead of weighing all our uncertainties, I’ll focus on a single central dichotomy that seems to underlie different world-views. Also, for clearer exposition, I’ll reconstruct my train of thought instead of arguing for conclusions.
Many threat models or qualitative arguments for danger seem to involve two steps: (1) this undesired algorithm exists (for example, a human imitator instead of a truthful predictor), and (2) our training procedures could find it (because some dynamics incentivize that). These correspond to the questions “what is the class of algorithms satisfying the task?” and “out of those, which ones are easily found in training?” (similar to the distinction between A1 and A2 above).
My initial intuition was that we were generally pretty good at existence proofs, and on the contrary pretty bad at predicting something as chaotic as training dynamics. Indeed, I felt like Paul’s builder-breaker methodology takes advantage of this: through the conservative worst-case assumption, it can focus on existence proofs.
But it also seems true that many disagreements between MIRI views and other researchers are about whether you can even solve some tasks without getting general reasoners, the inherent necessity of good and general cognitive structures even for narrow advanced tasks, that is, about which algorithms solve a certain task (thanks to Vivek Hebbar for pointing this out).
So what do we even mean by being uncertain about these questions?
Well, thinking about variance, it does seem true that we have high variance on “what the algorithms solving this advanced task will look like”, and on the contrary we are pretty confident that “given those algorithms, SGD will approximately find the simpler ones”. And of course, joining answers to these questions we’d be able to answer “which algorithms are found in training”, so it does seem like most of the work is in the first step. Although it’s not clear whether (1) or (2) has more variance overall, that is, whether “given those algorithms, which ones does SGD find” increases or decreases variance. There’s a possible equivocation here. If we think only of sampling SGD many times, this could indeed have very low variance because of a strong simplicity prior. If instead we think of our subjective uncertainty right now over which models it will find, we are way more uncertain.
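As a toy illustration of this equivocation (with made-up numbers, and description length standing in very loosely for whatever simplicity prior SGD actually implements), we can compare the spread of a uniform distribution over the task-solving algorithms with the spread after reweighting by a strong simplicity prior:

```python
# Toy model: "algorithms" are reduced to description lengths; reweighting by
# a strong simplicity prior collapses the distribution SGD samples from,
# even while our uncertainty over the whole set stays wide. Numbers invented.
import numpy as np

rng = np.random.default_rng(1)
lengths = rng.integers(10, 200, size=1000)  # description lengths of solving algorithms

uniform = np.full(len(lengths), 1 / len(lengths))  # "which algorithms exist"
prior = np.exp2(-lengths.astype(float))            # 2^-length simplicity prior
prior /= prior.sum()                               # "which ones SGD finds"

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

print(f"entropy over existing algorithms: {entropy(uniform):.1f} bits")
print(f"entropy after simplicity prior:   {entropy(prior):.1f} bits")
```

In runs like this, the prior-weighted entropy comes out far lower: conditional on knowing the set, “what SGD finds” is nearly determined, so most of the remaining subjective uncertainty lives in the first question.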
In any event, my original intuition came from feeling “less clueless” about the set of existing algorithms than about the set of learned algorithms. But maybe I was just feeling that way because I predict the set of existing algorithms to be “big and chaotic” (which is, of course, not the same as actually not being clueless about its contents).
Let’s examine another intuition: we do seem to have more direct access to the set of existing algorithms (we can just build up an algorithm completely, or a counterexample about what an algorithm does in a certain situation) than to learned algorithms (we are nowhere near having powerful enough ML to find those algorithms). But of course, the first thing is not true either! We can’t yet fully write out an algorithm solving any of the above hard tasks. Nonetheless, it does seem intuitively true that we have a good big-picture view of “how these algorithms will globally work, or which things they need to implement”. And maybe we don’t have the same big-picture idea about “which algorithmic structures are incentivized by SGD” (they’re just a big mess of things). But I’m not even sure that’s true! I’m not sure we have a good grasp of what these algorithms will macroscopically do (let alone their efficient low-level implementation, or concrete behavior). And on the contrary we do have some interesting evidence on SGD’s overall biases, and maybe even on finer properties thanks to trial and error (although we are again wary of phase changes).
After all, I don’t see anyone strongly contesting that SGD will have a simplicity bias (or their threat models depending on that), while a central and distinctive component of MIRI world-views is that there don’t exist algorithms solving some technical tasks without being too general. So I do feel like, conceptually, most of the crux is in “how pervasive are dangerous algorithms in algorithm-space”.
I think if this differs from my initial intuitions, it’s because I was thinking of “checking whether an algorithm of a certain shape exists”, rather than “more completely determining which kinds of algorithms are prevalent or natural and which aren’t” (as in, MIRI views accept that safe algorithms exist; they just think they’ll be very specific points in algorithm-space, difficult to find, rather than vast portions of it). And that, of course, does most of the heavy lifting for the subsequent question about SGD (natural ≈ simple ≈ what SGD finds).
|
73ca2bd1-a73e-427b-862b-b1a0ebbf7233
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Take 14: Corrigibility isn't that great.
As a writing exercise, I'm writing an AI Alignment Hot Take Advent Calendar - one new hot take, written ~~every day~~ some days for 25 days.
It's the end (I saved a tenuous one for ya')! Kind of disappointing that this ended up averaging out to one every 2 days, but this was also a lot of work and I'm happy with the quality level. Some of the drafts that didn't work as "hot takes" will get published later.
I
There are certainly arguments for why we want to build corrigible AI. For example, the problem of fully updated deference says that if you build an AI that wants things, even if it's uncertain about what it wants, it knows it can get more of what it wants if it doesn't let you turn it off.
The mental image this conjures up is of an AI doing something that's obvious-to-humans bad, and us clamoring to stop it, but it blocking us from turning it off because we didn't solve the problem of fully updated deference. It would be better if we built an AI that took things slow, and that would let us shut it off if we got to look at what it was doing and saw that it was obviously bad.
Don't get me wrong, this could be a nice property to have. But I don't think it's all that likely to come up, because aiming at aligned AI means building AI that tries not to do obviously bad stuff.
A key point is that corrigibility is only desirable if you actually expect to use it. Its primary sales pitch is that it might give us a mulligan on an AI that starts doing obviously bad stuff. If everything goes great and we wind up in a post-scarcity utopia, I'm not worried about whether the AI would let me turn it off if I counterfactually wanted to.
A world where corrigibility is useful might look like us building an agenty AI with a value learning process that we're not confident in, letting it run and interacting with it to try to judge how the value learning is going, and then (with moderate probability) turning it off and trying again with another idea for value learning. What does corrigib
|
8875c498-b22d-4917-96be-3e7007416501
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[SEQ RERUN] The Proper Use of Doubt
Today's post, The Proper Use of Doubt, was originally published on 06 August 2007. A summary (taken from the LW wiki):
> Doubt is often regarded as virtuous for the wrong reason: because it is a sign of humility and recognition of your place in the hierarchy. But from a rationalist perspective, this is not why you should doubt. The doubt, rather, should exist to annihilate itself: to confirm the reason for doubting, or to show the doubt to be baseless. When you can no longer make progress in this respect, the doubt is no longer useful to you as a rationalist.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, in which we're going through Eliezer Yudkowsky's old posts in order, so that people who are interested can (re-)read and discuss them. The previous post was Focus Your Uncertainty, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
|
651fd26d-9555-46e5-8264-8e3204a97104
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How to decide under low-stakes uncertainty
Say you're stuck in uncertainty between two actions you're considering. They seem about equally good, but you suspect one is better, and it's not obvious which. You already have all the information to obviously collect about the problem.
For situations where getting it right really matters, try harder to get more information, and use methods more reliable than those presented here.
For lower-stakes problems:
1. Assign one option to 0/tails and the other to 1/heads.
2. Flip a coin, i.e. use a one-bit random number generator.
3. Start accepting the decision from the coinflip, and observe your revealed unconscious intuition, for you will probably have some.
4. If that intuition opposes the coin's decision, go with the intuition. Otherwise, you either actively came to agree with the random decision, or it still seems terribly uncertain, so you go with the random choice.
This is classic advice. I'm just sharing it. The "innovation" here is a way to handle cases where it's inconvenient to flip a coin or do the equivalent.
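Here is a minimal sketch of the general procedure as code (a toy illustration; the option names and prompt wording are invented):

```python
# Minimal sketch of the coinflip-then-check-your-gut procedure above.
# Option names and prompt wording are invented for illustration.
import secrets

def decide(option_tails: str, option_heads: str) -> str:
    options = (option_tails, option_heads)
    pick = options[secrets.randbits(1)]  # steps 1-2: one random bit
    # Step 3: start accepting the decision and observe your reaction.
    answer = input(f"The coin says: {pick!r}. Does your gut object? [y/N] ")
    # Step 4: an objecting gut overrides the coin; otherwise keep the pick.
    if answer.strip().lower().startswith("y"):
        pick = options[1 - options.index(pick)]
    return pick

if __name__ == "__main__":
    print("Go with:", decide("stay home", "go to the meetup"))
```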
Mental coinflip
1. Assign one option to 0/tails and the other to 1/heads.
2. Think about something unrelated.
3. Notice the first word that pops into your mind.
4. Count letters in that word. If it's even, you get 0; if it's odd, you get 1.
5. Use that 0 or 1 to continue at step 3 of the general procedure.
This is very crude randomness, but it doesn't really have to be random, just uncorrelated with the topic of decision. You should not use a mental coinflip to try to run a fair game of chance.
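For what it's worth, the parity step is mechanical enough to check in code (the sample word is arbitrary):

```python
# Word-length parity as a crude one-bit generator. Counting only letters
# means hyphens and apostrophes don't flip the bit.
def parity_bit(word: str) -> int:
    return sum(ch.isalpha() for ch in word) % 2

print(parity_bit("unrelated"))  # 9 letters -> 1, i.e. heads
```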
There may be a bias in the length-parity of words you think of. I, so far, haven't noticed one. But that wouldn't necessarily be a problem. Once you notice such a bias, you can exploit it by assigning actions to 0/1 so that the mental coinflip bias opposes a bias you naturally have.
E.g. if you pick odd-length words more often, when the choice is between "doing nothing" and "doing something", assign "doing something" to 1. Laziness may lead you t
|
583450c6-8917-4c78-9eb0-50ccc94d800f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
LessWrong Hamburg First Meetup Notes: Starting small
Review of our LessWrong Hamburg First Meetup:
I arrived early and the location was somewhat crowded and the reserved table in the back had been replaced by a center table - but I managed to switch for a better one.
Then I put up some books and a sign and was quickly greeted by the first LWer.
We started with smalltalk, finding common background quickly.
I had brought some books from the LW reading list I had in my collection and surprise: Half of them were recognized and the remaining quickly started a discussion (the Kahneman I later lent to C.F.).
Some friends trickled in and after some introduction we played Wits and Wagers and then Pandemic (without biasing because the game was new to most).
Summary: The Meetup was a success for me. We introduced some friends to LW ideas and we enjoyed a lively discussion not without controversy. I adapted to the Meetup format easily.
One idea I had was a Meetup Diary which I used to plan the Meetup and take notes in. I think it still beats digital for things like quick notes and diagrams. I had it handy to check our schedule, draw Bayes diagrams and write down a telephone number. I plan to have it around and maybe lend it to later Meetup organizers.
Outlook for the next Meetup: We planned it for Friday the 21st in company offices somewhere in Hamburg-Altona (thanks to F.R., who will confirm the location later with a separate post).
Topics will be then
* Procrastination (I committed to read up and present some techniques in exchange for C.F. committing to use beeminder).
* More discussion of LW topics, most likely: effective altruism
* I will bring books and games again.
More Meetups will likely follow, roughly every fortnight and alternating Fridays and weekends.
|
0bacd60b-79e4-44ca-a2b0-d98bd62bb3a5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The first RCT for GLP-1 drugs and alcoholism isn't what we hoped
GLP-1 drugs are a miracle for diabetes and obesity. There are rumors that they might also be a miracle for addiction to alcohol, drugs, nicotine, and gambling. That would be good. We like miracles. But we just got the first good trial and—despite what you might have heard—it’s not very encouraging.
Semaglutide—aka Wegovy / Ozempic—is a GLP-1 agonist. This means it binds to the same receptors the glucagon-like peptide-1 hormone normally binds to. Similar drugs include dulaglutide, exenatide, liraglutide, lixisenatide, and tirzepatide. These were originally investigated for diabetes, on the theory that GLP-1 increases insulin and thus decreases blood sugar. But GLP-1 seems to have lots of other effects, like preventing glucose from entering the bloodstream, slowing digestion, and making you feel full longer. It was found to cause sharp decreases in body mass, which is why supposedly 12% of Americans had tried one of these drugs by mid 2024.
(I’m skeptical of that 12% number, but a different survey in late 2024 found that 10% of Americans were currently taking one of these drugs. I know Americans take more drugs than anyone on the planet, but still…)
Anyway, there are vast reports from people taking these drugs that they help with various addictions. Many people report stopping drinking or smoking without even trying. This is plausible enough. We don’t know which of the many effects of these drugs is really helping with obesity. Maybe it’s not the effects on blood sugar that matter, but these drugs have some kind of generalized “anti-addiction” effect on the brain? Or maybe screwing around with blood sugar changes willpower? Or maybe when people get thinner, that changes how the brain works? Who knows.
Beyond anecdotes, there are some observational studies and animal experiments suggesting they might help with addiction (OKeefe et al. 2024). We are so desperate for data that some researchers have even resorted to computing statistics based on what people say on redd
|
ee2c4ecd-e8b9-4194-ab9d-af432f323fb4
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What makes teaching math special
Related:
* Arguments against constructivism (in education)?
* Seeking PCK (Pedagogical Content Knowledge)
*
Designing good math curriculum for elementary and high schools requires one to have two kinds of expertise: a deep understanding of math, and a lot of experience teaching kids. Having just one of them is not enough. People who have both are rare (and many of them do not have the ambition to design a curriculum).
Being a math professor at a university is not enough, no matter how high-status that job might be. University professors are used to teaching adults, and often have little patience for kids. Their frequent mistake is to jump from specific examples to abstract generalizations too quickly (that is, if they bother to provide specific examples at all). You can expect an adult student to try to figure it out on their own time; to read a book, or ask classmates. You can't do the same with a small child.
(Also, university professors are selected for their research skills, not teaching skills.)
University professors and other professional mathematicians suffer from the "curse of knowledge". So many things are obvious to them that they have trouble empathizing with someone who knows none of that. Also, the way we remember things is by making mental connections with the other things we already know. The professor may have too many connections available to realize that the child has none of them yet.
The kids learning from the curriculum designed by university professors will feel overwhelmed and stupid. Most of them will grow up hating math.
On the other hand, many humanities-oriented people with strong opinions on how schools should be organized and how kids should be brought up suck at math. More importantly, they do not realize how profoundly math differs from other school subjects, and will try to shoehorn mathematical education into the way they would teach e.g. the humanities. As a result, the kids may not learn actual mathematics at all.
*
|
7fffef7c-7d60-42af-a68b-fb87762c760f
|
StampyAI/alignment-research-dataset/arbital
|
Arbital
|
Associativity: Examples
## Positive examples
### Addition
$(x + y) + z = x + (y + z)$ for all numbers $x, y,$ and $z.$ Thus, addition associates. One easy way to see this fact is to consider a [physical system](https://arbital.com/p/3mb) that implements addition, e.g., by taking two piles of poker chips (where a pile with $n$ chips represents the number $n$) on two input belts, and producing a pile of poker chips on the output belt. This function can be implemented by simply shoving both input piles onto the output pile. Clearly, when combining three piles, it doesn't matter which piles get shoved onto the output belt in which order, so addition is associative.
### Multiplication
$(x \times y) \times z = x \times (y \times z)$ for all numbers $x, y,$ and $z.$ Thus, multiplication associates. It is, however, a little harder to see why this must be the case for multiplication.
Imagine a physical system implementing multiplication, by taking stacks of poker chips as input (where stacks with $n$ chips in them represents the number $n$), and producing an output by putting a copy of the left stack on the output tape for every chip in the right stack. Imagine using this function to combine three stacks of poker chips, a blue stack (representing $x$), a yellow stack (representing $y$), and a red stack (representing $z$). We can either calculate $(x \times y) \times z$ or $x \times (y \times z).$ In both cases, the result will be a bunch of copies of the original blue $x$ stack. The question is, how many copies will there be in each case?
In the first case, the machine first puts $y$ copies of the blue stack in a line, and then makes $z$ copies of that line. In the second case, the machine first makes $z$ copies of the yellow stack. Each of those yellow stacks corresponds to one of the lines from the first case, and each yellow chip on each yellow stack corresponds to one of the blue stacks in the first case. A blue stack is placed on the output in the second case for each of those yellow chips, so the number of blue stacks in each case is the same.
### Concatenation
The concatenation of [strings](https://arbital.com/p/3jr) is another associative function: `concat("a",concat("b","c")) = concat(concat("a","b"),"c") = "abc"`.
Concatenation is an example of an associative function that is not [commutative](https://arbital.com/p/3jb): When reducing a list of strings to a single string, it doesn't matter what order you combine adjacent elements in, but it _does_ matter that you leave the elements in their original order.
In fact, any function that just sticks its inputs together (possibly with some extra stuff in the middle) is associative: A function which takes wooden blocks as input, and glues them together (with a small white block in the middle) is associative, because if you put in blue, yellow, and red blocks then you get a blue-white-yellow-white-red block out the end, regardless of the order you combine adjacent blocks in.
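These positive examples are easy to spot-check in code (sample values chosen arbitrarily):

```python
# Spot-checks of the positive examples above, on arbitrary sample values.
x, y, z = 2, 3, 4
assert (x + y) + z == x + (y + z)           # addition associates
assert (x * y) * z == x * (y * z)           # multiplication associates

a, b, c = "a", "b", "c"
assert (a + b) + c == a + (b + c) == "abc"  # concatenation associates
assert a + b != b + a                       # ...but does not commute
```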
## Negative examples
### Functions with a history
Consider the function `xconcat`, which concatenates its inputs and puts an "x" on the front: `xconcat("A", xconcat("B","C")) = "xAxBC"`, but `xconcat(xconcat("A", "B"), "C") = "xxABC"`. The problem in this case is that the output carries a trace of which adjacent elements were combined in which order, which makes the function non-associative.
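A one-line implementation makes the failure easy to verify (the function body is the obvious reading of the definition above):

```python
def xconcat(s: str, t: str) -> str:
    return "x" + s + t

assert xconcat("A", xconcat("B", "C")) == "xAxBC"
assert xconcat(xconcat("A", "B"), "C") == "xxABC"  # different result: not associative
```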
In fact, any function that carries a history of the application order with it can't be associative. Thus, if the inputs carry history about which inputs were combined with which other inputs, and the output preserves (and adds to) that history, the function can't be associative. Associativity is, fundamentally, about the output not depending on which path was taken to get to it.
### Subtraction
Subtraction does not associate, as $(5-3)-2=0$ but $5-(3-2)=4.$ To see what went wrong, first notice that all inputs contribute either positively or negatively to the result. Label all inputs (in their original positive states) with up-arrows; we will track whether that input has a positive or negative impact on the final output. Subtraction flips its right-hand input upside down before combining it with its left-hand input, so given inputs labeled $\uparrow$ and $\uparrow$ we should label its output $\uparrow\downarrow.$ When combining three inputs, we can either combine the left two first or the right two first. If we combine the left two first, then the second subtraction is run on inputs labeled $\uparrow\downarrow$ and $\uparrow,$ and produces an output $\uparrow\downarrow\downarrow,$ in which the first input contributes positively and the other inputs contribute negatively. But if we combine the right two inputs first, then the second subtraction is run on inputs labeled $\uparrow$ and $\uparrow\downarrow,$ and produces an output $\uparrow\downarrow\uparrow,$ in which the first and third inputs contribute positively and the second contributes negatively. Thus, the contribution of the third input depends on which adjacent elements are combined in which order, so subtraction does not associate.
### Cyclical preferences
Consider a person who is playing a game that works as follows. The board has three positions: red, green, and blue. The player's objective is to complete as many clockwise red-green-blue cycles as possible, without ever backtracking in the counter-clockwise direction.

Each turn, the game offers them a choice of one of the three spaces, and they get to choose whether or not to travel to that square or stay where they are. Clearly, their preferences depend on where they currently are: If they're on "red", "green" is a good move and "blue" is a bad one; but if they're on "blue" then choosing "green" is ill-advised.
We can consider a binary operation $?$ which takes their current position on the left and the proposed position on the right, and returns the position that the player prefers. Specifically:
\begin{align}
& red \ ?\ red \ &= red\\
& red \ ?\ green \ &= green\\
& red \ ?\ blue \ &= red\\
\end{align}
\begin{align}
& green \ ?\ red \ &= green\\
& green \ ?\ green \ &= green\\
& green \ ?\ blue \ &= blue\\
\end{align}
\begin{align}
& blue \ ?\ red \ &= red\\
& blue \ ?\ green \ &= blue\\
& blue \ ?\ blue \ &= blue
\end{align}
This function does not associate, because $(red\ ?\ green)\ ?\ blue = blue$ but $red\ ?\ (green\ ?\ blue)=red.$ To show that a function is not associative, it is sufficient to simply do out the whole function table and then find any one case that violates the axiom of associativity, as above.
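That table-checking procedure is easy to mechanize; this small sketch brute-forces every triple against the table above and prints the violations, surfacing the $(red, green, blue)$ counterexample among others:

```python
# Brute-force associativity check for the player's preference operation "?".
from itertools import product

PREFER = {
    ("red", "red"): "red",     ("red", "green"): "green",   ("red", "blue"): "red",
    ("green", "red"): "green", ("green", "green"): "green", ("green", "blue"): "blue",
    ("blue", "red"): "red",    ("blue", "green"): "blue",   ("blue", "blue"): "blue",
}

def q(x, y):
    return PREFER[(x, y)]

for x, y, z in product(("red", "green", "blue"), repeat=3):
    left, right = q(q(x, y), z), q(x, q(y, z))
    if left != right:
        print(f"({x} ? {y}) ? {z} = {left}, but {x} ? ({y} ? {z}) = {right}")
```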