Dataset columns: id (string, length 36), source (string, 15 classes), formatted_source (string, 13 classes), text (string, 2 to 7.55M characters).
04b955c7-0d87-4593-b2bd-156672e25119
trentmkelly/LessWrong-43k
LessWrong
Rationality Quotes 10

"Yes, I am the last man to have walked on the moon, and that's a very dubious and disappointing honor. It's been far too long."
        -- Gene Cernan

"Man, you're no smarter than me. You're just a fancier kind of stupid."
        -- Spider Robinson, Distraction

"Each problem that I solved became a rule which served afterwards to solve other problems."
        -- Rene Descartes, Discours de la Methode

"Faith is Hope given too much credit."
        -- Matt Tuozzo

"Your Highness, I have no need of this hypothesis."
        -- Pierre-Simon Laplace, to Napoleon, explaining why his works on celestial mechanics made no mention of God.

"'For, finally, one can only judge oneself by one's actions,' thought Elric. 'I have looked at what I have done, not at what I meant to do or thought I would like to do, and what I have done has, in the main, been foolish, destructive, and with little point. Yyrkoon was right to despise me and that was why I hated him so.'"
        -- Michael Moorcock, Elric of Melniboné

"You will quickly find that if you are completely and self-deprecatingly truthful about how much you owe other people, the world at large will treat you like you did every bit of the invention yourself and are just being becomingly modest about your innate genius."
        -- Eric S. Raymond

"The longer I live the more I see that I am never wrong about anything, and that all the pains that I have so humbly taken to verify my notions have only wasted my time."
        -- George Bernard Shaw

"The trouble is that consciousness theories are very easy to dream up... Theories that explain intelligence, on the other hand, are fiendishly difficult to come by and so are profoundly useful. I don't know for sure that intelligence always produces consciousness, but I do know that if you assume it does you'll never be disappointed."
        -- John K. Clark

"Intelligence is silence, truth being invisible. But what a racket I make in declaring this."
        -- Ned Rorem, "Rand
6c20d86e-4c8b-4a93-9a7e-7fee0e7c198a
trentmkelly/LessWrong-43k
LessWrong
The Incredible Fentanyl-Detecting Machine

*An NII machine in Nogales, AZ.*

There’s bound to be a lot of discussion of the Biden-Trump presidential debates last night, but I want to skip all the political prognostication and talk about the real issue: fentanyl-detecting machines. Joe Biden says:

> And I wanted to make sure we use the machinery that can detect fentanyl, these big machines that roll over everything that comes across the border, and it costs a lot of money. That was part of this deal we put together, this bipartisan deal.
>
> More fentanyl machines, were able to detect drugs, more numbers of agents, more numbers of all the people at the border. And when we had that deal done, he went – he called his Republican colleagues said don’t do it. It’s going to hurt me politically.
>
> He never argued. It’s not a good bill. It’s a really good bill. We need those machines. We need those machines. And we’re coming down very hard in every country in Asia in terms of precursors for fentanyl. And Mexico is working with us to make sure they don’t have the technology to be able to put it together. That’s what we have to do. We need those machines.

Wait, what machines? You can remotely, non-destructively detect that a bag of powder contains fentanyl rather than some other, legal substance? And you can sense it through the body of a car? My god. The LEO community must be holding out on us. If that tech existed, we’d have tricorders by now. What’s actually going on here?

What’s Up With Fentanyl-Detecting Machines?

First of all, Biden didn’t make them up. This year, the Department of Homeland Security reports that Customs and Border Protection (CBP) has deployed “Non-Intrusive Inspection” at the US’s southwest border:

> “By installing 123 new large-scale scanners at multiple POEs along the southwest border, CBP will increase its inspection capacity of passenger vehicles from two percent to 40 percent, and of cargo vehicles from 17 percent to 70 percent.”

In fact, there’s something of a scand
9eaf177f-2157-422b-83a3-1abec095f6a5
trentmkelly/LessWrong-43k
LessWrong
Thinking of tool AIs Preliminary note: the ideas in the post emerged during the Learning-by-doing AI safety workshop at EA Hotel; special thanks to Linda Linsefors, Davide Zagami and Grue_Slinky for giving suggestions and feedback. As the title anticipates, long-term safety is not the main topic of this post; for the most part, the focus will be on current AI technologies. More specifically: why are we (un)satisfied with them from a safety perspective? In what sense can they be considered tools, or services? An example worth considering is the YouTube recommendation algorithm. In simple terms, the job of the algorithm is to find the videos that best fit the user and then suggest them. The expected watch time of a video is a variable that heavily influences how a video is ranked, but the objective function is likely to be complicated and probably includes variables such as click-through rate and session time.[1] For the sake of this discussion, it is sufficient to know that the algorithm cares about the time spent by the user watching videos. From a safety perspective - even without bringing up existential risk - the current objective function is simply wrong: a universe in which humans spend lots of hours per day on YouTube is not something we want. The YT algorithm has the same problem that Facebook had in the past, when it was maximizing click-throughs.[2] This is evidence supporting the thesis that we don't necessarily need AGI to fail: if we keep producing software that optimizes for easily measurable but inadequate targets, we will steer the future towards worse and worse outcomes. ---------------------------------------- Imagine a scenario in which: * human willpower is weaker than now; * hardware is faster than now, so that the YT algorithm manages to evaluate a larger number of videos per time unit and, as a consequence, gives the user better suggestions. Because of these modifications, humans could spend almost all day on YT. It is worth noting that, even in this semi-
4a11e8f9-6d3b-4124-a973-f65809eee0b0
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Timeless Beauty

Today's post, Timeless Beauty, was originally published on 28 May 2008. A summary (taken from the LW wiki):

> To get rid of time you must reduce it to nontime. In timeless physics, everything that exists is perfectly global or perfectly local. The laws of physics are perfectly global; the configuration space is perfectly local. Every fundamentally existent ontological entity has a unique identity and a unique value. This beauty makes ugly theories much more visibly ugly; a collapse postulate becomes a visible scar on the perfection.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Timeless Physics, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
b0aee7f5-d1f0-4db3-a5da-a0ac223c0b82
trentmkelly/LessWrong-43k
LessWrong
Currently Buying AdWords for LessWrong

So I'm trying to build more rationalists.  To do this, I've invested a few hundred dollars of my own money to promote Less Wrong by buying low-cost AdWords on Google for different LW pages.  I want to reach smart people with a really good article from Less Wrong that answers their question and draws them into our community so that the site's content can help improve their rationality.

Based on buying AdWords before, I'd estimate that only 0.5-1% of people who click through to Less Wrong will actually get involved after reading an article, but since clicks only cost ~$0.04, that means it only costs me ~$6 to build a new rationalist and drastically improve someone's life.  Seems like an excellent return on investment.

But to get a strong 1% conversion rate and really make an impact, I need to identify REALLY EXCELLENT Less Wrong content.  Right now I'm experimenting by buying a lot of keywords related to quantum mechanics and sending people to http://lesswrong.com/lw/r8/and_the_winner_is_manyworlds/ My hope is that this page is useful and memorable enough that some small % of readers stick around and click through to other pages.  My guess is that this isn't the ideal page to do this with but it's aiming in the right direction.

What page would you want a new Less Wrong reader to find first?  What answers a specific question they might have in such an impressive way that they would want to learn more about our community (perhaps many different pages for many different questions)??  Which articles are most memorable?  Just looking at "Top" didn't yield any obvious choices... I felt like most of those articles were too META-META-META ... you'd need too much back knowledge for many of them.  An ideal article would be more or less "stand-alone" so that any relatively intelligent person who doesn't have the whole LW corpus in their head already could just jump in and understand it immediately... and then branch out and explore LW from there.

So what do you th
4da45c4b-ddbb-43da-9e8a-2f6643bda58f
StampyAI/alignment-research-dataset/special_docs
Other
Assessing Generalization in Reward Learning: Intro and Background

Generalization in Reward Learning
---------------------------------

An overview of reinforcement learning, generalization, and reward learning
--------------------------------------------------------------------------

[Max Chiswick](https://chisness.medium.com/) · Published in [Towards Data Science](https://towardsdatascience.com/) · 16 min read · Sep 30, 2020

**Authors: Anton Makiievskyi, Liang Zhou, Max Chiswick**

*Note: This is the **first** of **two** blog posts (part [two](https://chisness.medium.com/assessing-generalization-in-reward-learning-implementations-and-experiments-de02e1d08c0e)). In these posts, we describe a project we undertook to assess the ability of reward learning agents to generalize. The implementation for this project is [available](https://github.com/lzil/procedural-generalization) on GitHub.*

*This first post will provide a background on reinforcement learning, reward learning, and generalization, as well as summarize the main aims and inspirations for our project. If you have the requisite technical background, feel free to skip the first couple sections.*

About Us
========

We are a team that participated in the 2020 [AI Safety Camp](https://aisafety.camp/) (AISC), a program in which early career researchers collaborate on research proposals related to AI safety. In short, AI safety is a field that aims to ensure that as AI continues to develop, it does not harm humanity. Given our team’s mutual interests in technical AI safety and reinforcement learning, we were excited to work together on this project. The idea was originally suggested by Sam Clarke, another AISC participant with whom we had fruitful conversations over the course of the camp.

Reinforcement Learning
======================

In reinforcement learning (RL), an agent interacts with an environment with the goal of earning rewards. **Ultimately, the agent wants to learn a strategy in order to maximize the rewards it obtains over time.** First things first, though: what exactly is an agent, and what are rewards?

An *agent* is a character that interacts with some world, also known as an *environment*, by taking actions. For instance, an agent could be a character playing a video game, the car in a self-driving car simulation, or a player in a poker game. The rewards are simply numbers that represent the goal of the agent, whether what happens to the agent is preferable or not.
For example, picking up a coin may give a positive reward, while getting hit by an enemy may give a negative reward.

In RL, the *state* represents everything about the current situation in the environment. What the agent can actually see, however, is an *observation*. For example, in a poker game, the observation may be the agent’s own cards and the previous actions of the opponent, while the state also includes the cards of the opponent and the sequence of cards in the deck (i.e., things the agent can’t see). In some environments like chess where there is no hidden information, the state and the observation are the same.

Given observations, the agent takes *actions*. After each action, the agent will get feedback from the environment in the form of:

1. **Rewards:** Scalar values, which can be positive, zero, or negative
2. **A new observation:** The result of taking the action from the previous state, which moves the agent to a new state and results in this new observation. (Also, whether or not the new state is “terminal”, meaning whether the current interaction is finished or still in progress. For example, completing a level or getting eaten by an opponent will terminate many games.)

In RL, our goal is to *train* the agent to be really good at a task by using rewards as feedback. Through one of many possible training algorithms, the agent gradually learns a strategy (also known as a *policy*) that defines what action the agent should take in any state to maximize reward. The goal is to maximize reward over an entire *episode*, which is a sequence of states that an agent goes through from the beginning of an interaction to the terminal state. Hugely successful agents have been trained to superhuman performance in domains such as [Atari](https://medium.com/@jonathan_hui/rl-dqn-deep-q-network-e207751f7ae4) and the game of [Go](https://deepmind.com/blog/article/alphago-zero-starting-scratch).

*Figure: The reinforcement learning process (Image by Authors)*

Let’s look at how a sample algorithm might work, using the video game Mario as an example. Let’s say that Mario has an enemy to his right, a mushroom to his left, and nothing above (see figure below). With those three action options, he might get a reward of +2 if he goes left, -10 if he goes right, and 0 if he goes up. After Mario takes an action, he will be in a new state with a new observation, and will have earned a reward based on his action. Then it will be time for another action, and the process goes on. Recall that the intention is to maximize rewards for an entire episode, which in this context is the series of states from the beginning of the game for the duration of Mario’s life.

*Figure: Mario learning to eat mushrooms (Image by Authors)*

The first time the algorithm sees this situation, it might select an option randomly since it doesn’t yet understand the consequences of the available actions. As it sees this situation more and more, it will learn from experience that in situations like these, going right is bad, going up is ok, and going left is best. We wouldn’t directly teach a dog how to fetch a ball, but by giving treats (rewards) for doing so, the dog would learn by reinforcement. Similarly, Mario’s actions are reinforced by feedback from experience that mushrooms are good and enemies are not.
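The feedback loop just described maps directly onto the standard programming interface for RL environments. Below is a minimal sketch of one episode of interaction (our own illustration with hypothetical names, mirroring the common gym-style convention of `step` returning an observation, a scalar reward, and a terminal flag; not code from the project):

```python
def run_episode(env, policy):
    """Play one episode: the agent observes, acts, and receives rewards
    until the environment signals a terminal state."""
    observation = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = policy(observation)                         # agent chooses an action
        observation, reward, done, info = env.step(action)   # environment feedback
        total_reward += reward
    return total_reward
```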
How does the algorithm work to maximize the rewards? Different RL algorithms work in different ways, but one might keep track of the results of taking each action from this position, and the next time Mario is in this same position, he would select the action expected to be the most rewarding according to the prior results. Many algorithms select the best action most of the time, but also sometimes select randomly to make sure that they are exploring all of the options. (Note that at the beginning, the agent usually acts randomly because it hasn’t yet learned anything about the environment.)

It’s important to keep exploring all of the options to make sure that the agent doesn’t find something decent and then stick with it forever, possibly ignoring much better alternatives. In the Mario game, if Mario first tried going right and saw it was -10 and then tried up and saw it was 0, it wouldn’t be great to always go up from that point on. It would be missing out on the +2 reward for going left that hadn’t been explored yet.

Imagine that you tried cooking at home and didn’t like the food, and then went to McDonald’s and had a fantastic meal. You found a good “action” of going to McDonald’s, but it would be a shame (and not great health-wise) if you kept eating at McDonald’s forever and didn’t try other restaurants that may end up giving better “rewards”.
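To make the explore/exploit idea concrete, here is a minimal self-contained sketch (our own toy illustration, not code from the project): an epsilon-greedy agent estimating action values in the Mario situation above, with the action names and reward numbers taken from the example.

```python
import random
from collections import defaultdict

# Toy stand-in for the Mario situation: the true rewards (unknown to the
# agent) are the example's values: +2 left, -10 right, 0 up.
ACTIONS = ["left", "right", "up"]
TRUE_REWARDS = {"left": 2, "right": -10, "up": 0}

q = defaultdict(float)   # running estimate of each action's value
n = defaultdict(int)     # how many times each action has been tried
epsilon = 0.1            # fraction of the time we explore at random

for step in range(1000):
    if step < len(ACTIONS) or random.random() < epsilon:
        action = random.choice(ACTIONS)               # explore
    else:
        action = max(ACTIONS, key=lambda a: q[a])     # exploit the best estimate
    reward = TRUE_REWARDS[action]
    n[action] += 1
    q[action] += (reward - q[action]) / n[action]     # incremental average

print(dict(q))  # estimates approach {'left': 2.0, 'right': -10.0, 'up': 0.0}
```

Without the random exploration branch, the agent could settle on "up" after one early comparison and never discover the +2 for going left, which is exactly the failure mode described above.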
Generalization
==============

RL is often used in game settings like [Atari](https://gym.openai.com/envs/#atari). One problem with using RL in Atari games (which are similar to Mario-style games) is the *sequential* nature of these games. After winning one level, you advance to the next level, and keep going through levels in the same order. **The algorithms may simply memorize exactly what happens in each level and then fail miserably when facing the slightest change in the game.** This means that the algorithms may not actually be understanding the game, but instead learning to memorize a sequence of button presses that leads to high rewards for particular levels.

A better algorithm, instead of learning to memorize a sequence of button presses, is able to “understand” the structure of the game and thus is able to adapt to unseen situations, or *generalize*. Successful generalization means performing well in situations that haven’t been seen before. If you learned that 2*2 = 4, 2*3 = 6, and 2*4 = 8, and then could figure out that 2*6 = 12, that means that you were able to “understand” the multiplication and not just memorize the equations.

*Figure: Atari Breakout ([source](/atari-reinforcement-learning-in-depth-part-1-ddqn-ceaa762a546f))*

Let’s look at a generalization example in the context of an email spam filter. These usually work by collecting data from users who mark emails in their inbox as spam. If a bunch of people marked the message “EARN $800/DAY WITH THIS ONE TRICK” as spam, then the algorithm would learn to block all of those messages for all email users in the future. But what if the spammer noticed his emails were being blocked and decided to outsmart the filter? The next day he might send a new message, “EARN $900/DAY WITH THIS ONE OTHER TRICK”. An algorithm that is only memorizing would fail to catch this because it was just memorizing exact messages to block, rather than learning about spam in general. A generalizing algorithm would learn patterns and effectively understand what constitutes a piece of spam mail.

Reward Learning
===============

Games generally have very well-defined rewards built into them. In a card game like [Blackjack](https://gym.openai.com/envs/Blackjack-v0/), rewards correspond to how much the agent wins or loses each hand. In Atari, rewards are game dependent, but are well-specified, such as earning points for defeating enemies or finishing levels and losing points for getting hit or dying.

The image below is from a classic reinforcement learning environment called [CartPole](https://gym.openai.com/envs/CartPole-v0/), where the goal is to keep a pole upright on a track, and where a reward of +1 is provided for every second that the pole stays upright. The agent moves the cart left or right to try to keep the pole balanced, and the longer it can keep it balanced, the more +1 rewards it receives.

*Figure: CartPole ([source](https://medium.com/@tuzzer/cart-pole-balancing-with-q-learning-b54c6068d947))*

**However, many tasks in the real world do not have such clearly defined rewards, which leads to limitations in the possible applications of reinforcement learning.** This problem is compounded by the fact that even attempting to specify clearly defined rewards is often difficult, if not impossible. A human could provide direct feedback to an RL agent during training, but this would require too much human time. One approach called inverse reinforcement learning involves “reverse engineering” a reward function from demonstrations. For complex tasks, figuring out the reward function from demonstrations is very difficult to do well.

***Reward learning* involves learning a reward function, which describes how many rewards are earned in each situation in the environment, i.e. a mapping of the current state and action to the rewards received.** The goal is to learn a reward function that encourages the desired behavior. To train the algorithm to learn a reward function, we need another source of data such as demonstrations of successfully performing the task. The reward function outputs reward predictions for each state, after which standard RL algorithms can be used to learn a strategy by simply substituting these approximate rewards in place of the usually known rewards.

*Figure: The reinforcement learning process with a reward function in place of known rewards (Image by Authors)*

Prior work (described below as Christiano et al. 2017) provides an example that illuminates how difficult learning the reward function can be. Imagine teaching a robot to do a backflip. If you aren’t a serious gymnast, it would be challenging to give a demonstration of successfully performing the task yourself. **One could attempt to design a reward function that an agent could learn from, but this approach often falls victim to non-ideal reward design and reward hacking.** Reward hacking means that the agent can find a “loophole” in the reward specification. For example, if we assigned too much reward for getting in the proper initial position for the backflip, then maybe the agent would learn to repeatedly move into that bent-over position forever. It would be maximizing rewards based on the reward function that we gave it, but wouldn’t actually be doing what we intended! A human could supervise every step of an agent’s learning by manually giving input on the reward function at each step, but this would be excessively time consuming and tedious.
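Mechanically, the substitution mentioned above (a standard RL algorithm run on predicted rather than true rewards) can be sketched in a few lines (hypothetical names, assuming a gym-style `step` interface and a `reward_model` callable that maps an observation and action to a scalar):

```python
def step_with_learned_reward(env, reward_model, observation, action):
    """One environment step in which the environment's true reward is
    discarded and the learned reward function's prediction is used instead."""
    next_observation, _true_reward, done, info = env.step(action)
    predicted_reward = reward_model(observation, action)
    return next_observation, predicted_reward, done, info
```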
The difficulty in specifying the rewards points towards the larger issue of human-AI alignment, whereby humans want to align AI systems to their intentions and values, but specifying what we actually want can be surprisingly difficult (recall how every single genie story ends!).

Relevant Papers
===============

We’d like to look at several recent reward learning algorithms to evaluate their ability to learn rewards. We are specifically interested in how successful the algorithms are when faced with previously unseen environments or game levels, which tests their ability to generalize. To do this, we leverage a body of prior work:

1. [*Deep reinforcement learning from human preferences*](https://arxiv.org/abs/1706.03741) — 2017 by Christiano et al.
2. [*Reward learning from human preferences and demonstrations in Atari*](https://arxiv.org/abs/1811.06521) — 2018 by Ibarz et al.
3. [*Leveraging Procedural Generation to Benchmark Reinforcement Learning*](https://arxiv.org/abs/1912.01588) — 2019 by Cobbe et al.
4. [*Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations*](https://arxiv.org/abs/1904.06387) — 2019 by Brown, Goo et al.

The first two papers were impactful in utilizing reward learning alongside deep reinforcement learning, and the third introduces the OpenAI Procgen Benchmark, a useful set of games for testing algorithm generalization. The fourth paper proposed an efficient alternative to the methods of the first two works.

Deep reinforcement learning from human preferences (Christiano et al. 2017)
---------------------------------------------------------------------------

The key idea of this paper is that **it’s a lot easier to recognize a good backflip than to perform one.** The paper shows that it is possible to learn a predicted reward function for tasks in which we can only recognize a desired behavior, even if we can’t demonstrate it. The proposed algorithm is shown below. It alternates between learning the reward function through human preferences and learning the strategy, which are both initially random.

*Repeat until the agent is awesome:*

> *1. Show two short video clips of the agent acting in the environment with its current strategy*
>
> *2. Ask a human in which video clip the agent was better*
>
> *3. Update the reward function given the human’s feedback*
>
> *4. Update the strategy based on the new reward function*

The simulated robot (shown in the below figure) was trained to perform a backflip from 900 queries in under an hour, a task that would be very difficult to demonstrate or to manually create rewards for.

*Figure: Training a backflip from human preferences ([source](https://github.com/nottombrown/rl-teacher))*

Experiments were performed in the physics simulator called MuJoCo and also in Atari games. Why run these experiments in Atari when we already know the true rewards for the games? This gives the opportunity to assign preferences automatically instead of having a human manually give feedback about two video clip demonstrations. We can get automatic (synthetic) feedback by simply ranking the clip with higher true reward as the better one. This enables us to run experiments very quickly because no human effort is needed. Furthermore, in this case we can assess the performance of the algorithm by comparing the learned reward function to the true rewards given in the game.

*Figure: Backflip in motion ([source](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/))*
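Concretely, the reward function in this approach is fit to the human's answers with a preference-probability model: the probability that one clip is judged better is a softmax over the two clips' summed predicted rewards, trained with cross-entropy. A rough numpy sketch of that objective (our paraphrase, not the paper's code; `reward_model` is a placeholder callable, and clips are lists of observation-action pairs):

```python
import numpy as np

def clip_return(reward_model, clip):
    """Total predicted reward over a short clip of (observation, action) pairs."""
    return sum(reward_model(obs, act) for obs, act in clip)

def preference_loss(reward_model, clip_a, clip_b, human_prefers_a):
    """Cross-entropy loss on one preference query: the predicted probability
    that clip A is preferred is a softmax over the two clips' returns."""
    ra = clip_return(reward_model, clip_a)
    rb = clip_return(reward_model, clip_b)
    p_a = np.exp(ra) / (np.exp(ra) + np.exp(rb))
    return -np.log(p_a if human_prefers_a else 1.0 - p_a)
```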
Reward learning from human preferences and demonstrations in Atari (Ibarz et al. 2018)
---------------------------------------------------------------------------------------

This paper built on the prior paper by performing additional experiments in the Atari domain with a different setup and a different RL algorithm. Their main innovation is to utilize human demonstrations at the beginning in order to start with a decent strategy, whereas the original algorithm would have to start with an agent acting completely randomly since no rewards are known at the beginning. The addition of these human demonstrations improved learning significantly in three of nine tested Atari games relative to the no-demos method used by Christiano.

Leveraging Procedural Generation to Benchmark Reinforcement Learning (Cobbe et al. 2019)
-----------------------------------------------------------------------------------------

[OpenAI](https://openai.com/), an AI research lab, developed reinforcement learning testbed game environments called the [Procgen Benchmark](https://openai.com/blog/procgen-benchmark/), which includes 16 unique games. **Within each game, all levels are similar and share the same goal, but the actual components like the locations of enemies and hazards are randomly generated** and therefore can be different in each level. This means that we can **train our agent on many random levels and then test it on completely new levels**, allowing us to understand whether the agent is able to generalize its learning.

Note the contrast to Atari games, in which training is done on sequential game levels where the enemies and rewards and game objects are always in the same places. Furthermore, when testing the agent’s abilities in sequential and non-randomly generated games, they are tested on those same levels in the same order. An important machine learning principle is to train with one set of data and test with another set to truly evaluate the agent’s ability to learn/generalize.

We looked primarily at four environments from Procgen in our work:

1. **CoinRun:** Collect the coin at the end of the level while dodging enemies
2. **FruitBot:** Eat fruit and avoid non-fruit foods like eggs and ice cream
3. **StarPilot:** Side-scrolling shooter game
4. **BigFish:** Begin as a small fish and eat other strictly smaller fish to get bigger

Below are screenshots from each of the games. The agent view uses a lower resolution so that the algorithm requires less computation. The human view is how the game would look if a human were playing.

*Figure: CoinRun, FruitBot, StarPilot, and BigFish with agent view (Image by Authors)*

*Figure: CoinRun, FruitBot, StarPilot, and BigFish with human view (Image by Authors)*
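Procgen makes the train/test split a matter of environment configuration. A minimal sketch using the published `procgen` gym interface (API details may differ across versions; `num_levels=0` requests the unrestricted level distribution):

```python
import gym

# Train on a fixed pool of 10,000 procedurally generated CoinRun levels.
train_env = gym.make(
    "procgen:procgen-coinrun-v0",
    num_levels=10_000,  # restrict sampling to level seeds 0..9999
    start_level=0,
)

# Test on the full level distribution, so essentially every episode
# is a level the agent never saw during training.
test_env = gym.make(
    "procgen:procgen-coinrun-v0",
    num_levels=0,       # 0 = sample from all levels
    start_level=0,
)

obs = train_env.reset()
obs, reward, done, info = train_env.step(train_env.action_space.sample())
```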
The main experiments in the paper involved training agents in all 16 unique games over a range of 100 to 100,000 training levels each, while keeping the training time fixed. These agents were then tested on levels that they had never played before (this is possible because each level is uniquely auto-generated). They found that agents need to be trained on as many as 10,000 levels of the game (training levels) before they are able to demonstrate good performance on the test levels.

The StarPilot game plot below shows the training performance in blue and the testing performance in red. The y-axis is the rewards and the x-axis is the number of levels used for training. Note that the x-axis is on a logarithmic scale.

*Figure: StarPilot training (blue) and testing (red) ([source](https://openai.com/blog/procgen-benchmark/))*

We see that the agent does very well immediately during training and that training performance then goes down and then back up slightly. Why would the agent get worse as it trains more? Since the training time is fixed, by training on only 100 levels, the agent would be repeating the same levels over and over and could easily memorize everything (but do very poorly at test time in the unseen levels). With 1,000 levels, the agent would have to spread its time over more levels and therefore wouldn’t be able to learn the levels as well. As we get to 10,000 and more levels, the agent is able to see such a diversity of levels that it can perform well, as it has begun to generalize its understanding. We also see that the test performance quickly improves to nearly the level of the training performance, suggesting that the agent is able to generalize well to unseen levels.

Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations (Brown, Goo et al. 2019)
-----------------------------------------------------------------------------------------------------------------------------

The algorithm proposed in this paper, called *T-REX*, is different from the previously mentioned reward learning methods in that it **doesn’t require ongoing human feedback during the learning procedure**. While the other algorithms require relatively little human time compared to supervising every agent action, they still require a person to answer thousands of preference queries. A key idea of T-REX is that the human time commitment can be significantly reduced by completing all preference feedback at the beginning, rather than continuously throughout the learning procedure.

The first step is to generate demonstrations of the game or task that is being learned. The demonstrations can either come from a standard reinforcement learning agent or from a human. The main idea is that we can get a lot of preference data by extracting short video clips from these demonstrations and assigning preferences to them **based only on a ranking of the demonstrations that they came from**. For example, with 20 demonstrations, each demo would get a rank of 1 through 20. A large number of short video clips would be taken from each of these demonstrations and each clip would be assigned the ranking of the demo that it came from, so when they face each other, the preference would go to the clip that came from the better demo. The reward function would then be based on these preferences.

This is in contrast to the approach of the prior works that require human preference input over each and every pair of 1–2 second clips. Here, we only require human preference input to rank the initial demonstrations. T-REX’s disadvantage, however, is that it is using an approximation. Not all clips from a higher-ranked demonstration should be preferred to clips from a lower-ranked demonstration, but the idea is that on average, the preferences would work well and the procedure would suffice to learn a reward model.

Providing a ranking over demonstrations is the same as giving preferences between every pair of them. For example, if we had three demos and ranked them 3>1>2, this means that we would generate the rankings of 3>1, 3>2, and 1>2. Then randomly generated clips from the demos would be given the same preference ranking based on which demos the clips came from.
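That data-generation trick can be sketched directly (our own simplified illustration, not the paper's code): cut random clips from each demonstration, tag every clip with its demo's rank, and label each cross-rank pair by which demo ranked higher.

```python
import itertools
import random

def preferences_from_ranked_demos(demos, clip_len=50, clips_per_demo=20):
    """demos: demonstrations ordered worst-to-best, each a list of frames
    (or observation-action pairs). Returns (clip_a, clip_b, label) triples,
    where label == 1 means clip_a came from the better-ranked demo."""
    clips = []
    for rank, demo in enumerate(demos):
        for _ in range(clips_per_demo):
            start = random.randrange(max(1, len(demo) - clip_len))
            clips.append((rank, demo[start:start + clip_len]))

    pairs = []
    for (rank_a, clip_a), (rank_b, clip_b) in itertools.combinations(clips, 2):
        if rank_a != rank_b:   # clips from equally ranked demos carry no label
            pairs.append((clip_a, clip_b, int(rank_a > rank_b)))
    return pairs
```

A reward model trained on these pairs with the same cross-entropy objective sketched earlier needs no further human queries after the initial ranking.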
The T-REX paper showed that having just 12 demonstrations was sufficient to learn a useful reward model. There are 12 * 11 / 2 = 66 distinct pairs for any 12 objects, so ranking 12 demonstrations from 1 to 12 is equivalent to answering up to 66 queries about which demo is better, which is ~100 times fewer queries than required by the algorithm by Christiano et al. Again, although the T-REX ranked demonstrations method is more efficient, it is sacrificing precision due to the simplifying assumption that all clips from a better demo are better than all clips from a worse demo.

Brown and Goo et al.’s Atari-based experiments showed that T-REX was competitive against the Ibarz et al. method that was previously described. It was able to learn better-than-demonstrator quality agents using only 12 demonstrations and their corresponding preference (rank) labels. The figure below shows a comparison between the scores from human demonstrations and the scores from the T-REX algorithm in five Atari games. T-REX was able to soundly outperform 3 of the 5 human scores, though it was not able to earn any points in the Montezuma’s Revenge game.

*Figure: T-REX algorithm vs. humans ([source](https://arxiv.org/abs/1904.06387))*

T-REX also exceeded the performance of a state-of-the-art behavioral cloning algorithm (BCO) and an imitation learning algorithm (GAIL) in 7 out of 8 games, as shown in the chart below, while also beating the best available demonstrations in 7 out of 8 games. (Behavioral cloning algorithms try to act as close to demonstrations as possible, and inverse reinforcement learning algorithms attempt to recover a reward function from an expert demonstration.)

*Figure: T-REX algorithm vs. other state-of-the-art methods ([source](https://arxiv.org/abs/1904.06387))*

Next: Implementations and Experiments
=====================================

Based on T-REX’s strong results and simple idea, we decided to base our initial experiments on combining this algorithm with the Procgen game environments, which would give us a highly efficient reward learning algorithm and a variety of benchmark games to test generalization. We will explain the details of our implementation and the experimental results and issues that we faced in the [second blog post](https://chisness.medium.com/assessing-generalization-in-reward-learning-implementations-and-experiments-de02e1d08c0e) of this series.
8412ca2e-cd23-4b5f-905c-d370b6fdcabd
trentmkelly/LessWrong-43k
LessWrong
Moravec's Paradox Comes From The Availability Heuristic

Epistemic Status: very quick one-thought post, may very well be arguing against a position nobody actually holds, but I haven’t seen this said explicitly anywhere so I figured I would say it.

Setting Up The Paradox

According to Wikipedia:

> Moravec’s paradox is the observation by artificial intelligence and robotics researchers that, contrary to traditional assumptions, reasoning requires very little computation, but sensorimotor and perception skills require enormous computational resources.
>
> -https://en.wikipedia.org/wiki/Moravec’s_paradox

I think this is probably close to what Hans Moravec originally meant to say in the 1980’s, but not very close to how the term is used today. Here is my best attempt to specify the statement I think people generally point at when they use the term nowadays:

> Moravec’s paradox is the observation that in general, tasks that are hard for humans are easy for computers, and tasks that are easy for humans are hard for computers.
>
> -me

If you found yourself nodding along to that second one, that’s some evidence I’ve roughly captured the modern colloquial meaning. Even when it’s not attached to the name “Moravec’s Paradox”, I think this general sentiment is a very widespread meme nowadays. Some example uses of this version of the idea that led me to write up this post are here and here. To be clear, from here on out I will be talking about the modern, popular-meme version of Moravec's Paradox.

And Dissolving It

I think Moravec’s Paradox is an illusion that comes from the availability heuristic, or something like it. The mechanism is very simple – it’s just not that memorable when we get results that match up with our expectations for what will be easy/hard. If you try, you can pretty easily come up with exceptions to Moravec’s Paradox. Lots of them. Things like single digit arithmetic, repetitive labor, and drawing simple shapes are easy for both humans and computers. Things like protein folding, the traveling salesm
f21ca8a2-50d8-4b3c-8bc5-4f8edae01d06
trentmkelly/LessWrong-43k
LessWrong
[Linkpost] Google invested $300M in Anthropic in late 2022 A few quotes from the article: Google has invested about $300mn in artificial intelligence start-up Anthropic, making it the latest tech giant to throw its money and computing power behind a new generation of companies trying to claim a place in the booming field of “generative AI”. The terms of the deal, through which Google will take a stake of about 10 per cent, requires Anthropic to use the money to buy computing resources from the search company’s cloud computing division, said three people familiar with the arrangement. The start-up had raised more than $700mn before Google’s investment, which was made in late 2022 but has not previously been reported. The company’s biggest investor is Alameda Research, the crypto hedge fund of FTX founder Sam Bankman-Fried, which put in $500mn before filing for bankruptcy last year. FTX’s bankruptcy estate has flagged Anthropic as an asset that may help creditors with recoveries.
d32296fe-a2e1-4306-bf5e-646a9986b7bf
trentmkelly/LessWrong-43k
LessWrong
The Goodhart Game

> In this paper, we argue that adversarial example defense papers have, to date, mostly considered abstract, toy games that do not relate to any specific security concern. Furthermore, defense papers have not yet precisely described all the abilities and limitations of attackers that would be relevant in practical security.

From the abstract of Motivating the Rules of the Game for Adversarial Example Research by Gilmer et al (summary)

Adversarial examples have been great for getting more ML researchers to pay attention to alignment considerations. I personally have spent a fair amount of time thinking about adversarial examples, I think the topic is fascinating, and I've had a number of ideas for addressing them. But I'm also not actually sure working on adversarial examples is a good use of time. Why?

Like Gilmer et al, I think adversarial examples are undermotivated... and overrated. People in the alignment community like to make an analogy between adversarial examples and Goodhart's Law, but I think this analogy fails to be more than an intuition pump. With Goodhart's Law, there is no "adversary" attempting to select an input that the AI does particularly poorly on. Instead, the AI itself is selecting an input in order to maximize something. Could the input the AI selects be an input that the AI does poorly on? Sure. But I don't think the commonality goes much deeper than "there are parts of the input space that the AI does poorly on". In other words, classification error is still a thing. (Maybe both adversaries and optimization tend to push us off the part of the distribution our model performs well on. OK, distributional shift is still a thing.)

To repeat a point made by the authors, if your model has any classification error at all, it's theoretically vulnerable to adversaries. Suppose you have a model that's 99% accurate and I have an uncorrelated model that's 99.9% accurate. Suppose I have access to your model. Then I can search the input space for a case wher
2f5393f0-bc3b-4a6a-91eb-6f81ef17155d
trentmkelly/LessWrong-43k
LessWrong
Good goals for leveling up?

I'm new to LessWrong and am trying to level up and internalize techniques to improve my life. I also want to increase my connection to the community, but that's less important for this question. I'm setting some goals for next year and was thinking about how to encode these meta-goals into SMARTer goals.

My question: what are some good goals for leveling up in a holistic way? I don't just want to read things. I want to become conversant in them, and I want to put them into practice. I've written down some ideas below (I don't intend to do all of them; they're just ideas at this point). Feel free to critique them and/or ignore them and write your own.

* Spend X hours per week interacting with people's Rationality posts on LessWrong and Facebook.
* Spend X hours per week reading Rationality blogs, posts, and sequences.
* Write X blog / LW post per month.
* Learn X new Rationality technique per week and apply it to a problem I have (and post about it).
* Read through X Sequence / Book

Again: As a newbie, what is a good set of goals to help me level up quickly and sustainably?

Edit: Based on the comments, I should specify that these would not be my only goals for the year. They would just be the ones that have to do with learning and applying Rationality techniques.
2f766365-295e-40e4-aae6-44db0e0999b3
trentmkelly/LessWrong-43k
LessWrong
Covid 2/25: Holding Pattern

Scott Alexander reviewed his Covid-19 predictions, and I did my analysis as well.  It was a quiet week, with no big news on the Covid front. There was new vaccine data, same as the old vaccine data – once again, vaccines still work. Once again, there is no rush to approve them or even plan for their distribution once approved. Once again, case numbers and positive test percentages declined, but with the worry that the English Strain will soon reverse this, especially as the extent of the drop was disappointing. The death numbers ended up barely budging after creeping back up later in the week, presumably due to reporting time shifts, but that doesn’t make it good or non-worrisome news. This will be a relatively short update, and if you want to, you can safely skip it.

If anyone knows a good replacement for the Covid Tracking Project please let me know. Next week will be the last week before they shut down new data collection, and I don’t like any of the options I know about to replace them.

Let’s run the numbers.

The Numbers

Predictions

Last week: 5.2% positive test rate on 10.4 million tests, and an average of 2,089 deaths.

Prediction: 4.6% positive test rate and an average of 1,800 deaths.

Result: 4.9% positive test rate and an average of 2,068 deaths.

Late prediction (Friday morning): 4.5% positive test rate and an average of 1,950 deaths (excluding the California bump on 2/25).

Both results are highly disappointing. The positive test rate slowing its drop was eventually going to happen due to the new strain and the control system, so while it’s disappointing it doesn’t feel like a mystery. Deaths not dropping requires an explanation. There’s no question that over the past month and a half we’ve seen steady declines in infections, and conditions are otherwise at least not getting worse. How could the death count be holding steady? One hypothesis is that weather messed with the reporting, but Texas deaths went down and the patterns generally do not ma
2ba7e037-fb93-44d9-bcad-b80cc6506a60
trentmkelly/LessWrong-43k
LessWrong
Review: “The Case Against Reality” This is not a red stop sign: For one thing, in a ceci n'est pas une pipe way, it’s not a stop sign at all, but a digital representation of a photograph of a stop sign, made visible by a computer monitor or maybe a printer. More subtly, “red” is not a quality of the sign, but of the consciousness that perceives it.[1] Even if you were to generously define “red” to be a specific set of wavelengths of light that could in theory be ascertained without the benefit of consciousness, that still would be a measurement of something that the sign repels rather than is.[2] We on some level understand these things, but we seem much more prone to think of ourselves as living in a world in which stop signs are red; the redness inheres in the sign out there in the world, rather than the redness being our own reaction to light reflecting from the sign, or rather than the sign and the redness being coincident private figments of perception. And no amount of thinking this through seems to dispel the illusion: it’s just too practically useful to put the color onto the external object to make it seem worthwhile to relearn how to perceive reality differently. Donald Hoffman, in The Case Against Reality (2019), suggests that our mistake here goes much deeper. Not only are superficial sensory characteristics like color subjective qualities of conscious experience rather than objective qualities of things — but things themselves, as well as the spacetime they seem to inhabit, are too. All of the stuff we take for granted as making up the world we inhabit — objects, dimensions, qualities, time, causality — are, says Hoffman, not objectively real. They are better thought of as the interface through which we interact with whatever might be objectively real. When we isolate “objects” in “space” and “time” and then ascribe “laws” to the regularities in their interactions, we are not discovering things about reality, but about this interface. “Physical objects in spacetime are simply our ico
5e3618c2-c101-4289-8a5f-7eda1d1aa587
trentmkelly/LessWrong-43k
LessWrong
It's Okay to Feel Bad for a Bit

> "If you kiss your child, or your wife, say that you only kiss things which are human, and thus you will not be disturbed if either of them dies." - Epictetus
>
> "Whatever suffering arises, all arises due to attachment; with the cessation of attachment, there is the cessation of suffering." - Pali canon
>
> "An arahant would feel physical pain if struck, but no mental pain. If his mother died, he would organize the funeral, but would feel no grief, no sense of loss." - the Dhammapada
>
> "Receive without pride, let go without attachment." - Marcus Aurelius

I.

Stoic and Buddhist philosophies are pretty popular these days. I don't like them. I think they're mostly bad for you if you take them too seriously. About a decade ago I meditated for an hour a day every day for a few weeks, then sat down to breakfast with my delightful (at the time) toddlers and realized that I felt nothing. There was only the perfect crystalline clarity and spaciousness of total emotional detachment. "Oh," I said, and never meditated again. It's better to sometimes feel bad for a bit, than to feel nothing.

II.

Young adults should probably put some effort into becoming less emotionally reactive. Being volatile makes you unpleasant to be around, and undercuts your ability to achieve pretty much any goals you may have.

If you have any traumas, it's likely positive-EV for you to devote time and energy to learning some kind of therapy modality with a good evidence base, and then taking the time to resolve those issues. In my opinion - for most people - once you have fixed about 60% of your emotional reactivity and 90% of your psychological triggers, you have hit a point of diminishing returns. In fact, past that point, I think further investment in making yourself "nonreactive" and "unattached," and removing all minor triggers from your psyche, is pathological from the perspective of actually trying to be happy and to do things with your life. If your fear of feeling bad for a bi
976077c0-f189-455e-9a38-48643d449606
trentmkelly/LessWrong-43k
LessWrong
Can AI Transform the Electorate into a Citizen's Assembly?

Motivation: Modern democratic institutions are detached from those they wish to serve [1]. In small societies, democracy can easily be direct, with all the members of a community gathering to address important issues. As civilizations get larger, mass participation and deliberation become irreconcilable, not least because a parliament can't handle a million-strong crowd. As such, managing large societies demands a concentrated effort from a select group. This relieves ordinary citizens of the burdens and complexities of governance, enabling them to lead their daily lives unencumbered. Yet, this decline in public engagement invites concerns about the legitimacy of those in power.

Lately, this sense of institutional distrust has been exposed and inflamed by AI algorithms optimised solely to capture and maintain our focus. Such algorithms often learn to exploit the most reactive aspects of our psyche, including moral outrage and identity threat [2]. In this sense, AI has fuelled political polarisation and the retreat of democratic norms, prompting Harari to assert that "Technology Favors Tyranny" [3]. However, AI may yet play a crucial role in mending and extending democratic society [4]. The very algorithms that fracture and misinform the public can be re-incentivised to guide and engage the electorate in digital citizen's assemblies. To see this, we must first consider the way that a citizen's assembly traditionally works.

What's a Citizens Assembly: A citizen's assembly consists of a small group that engages in deliberation to offer expert-advised recommendations on specific issues. Following group discussion, the recommendations are condensed into an issue paper, which is presented to parliament. Parliamentary representatives consider the issue paper and leverage their expertise to ultimately decide the outcome. Giving people the chance to experiment with policy in a structured environment aids their understanding of the laws that govern them, improves govern
03d5e86c-cf8b-4f40-ba46-a36314fac788
trentmkelly/LessWrong-43k
LessWrong
Artifact: What Went Wrong? Previously: Card Balance and Artifact, Artifact Embraces Card Balance Changes, Review: Artifact Epistemic Status: Looks pretty dead Artifact had every advantage. Artifact should have been great. Artifact was great for the right players, and had generally positive reviews. Then Artifact fell flat, its players bled out, its cards lost most of their value, and the game died. Valve takes the long view, so the game is being retooled and might return. But for now, for all practical purposes, the game is dead. Richard Garfield and Skaff Elias have one take on this podcast. They follow up with more thoughts in this interview. Here’s my take, which is that multiple major mistakes were made, all of which mattered, and all of which will be key to avoid if we are to bring the combination of strategic depth and player-friendly economic models back to collectible card games. I see ten key mistakes, which I will detail below. Alas, we do not get to run controlled experiments. The lack of ability to experiment and iterate was the meta-level problem with Artifact. The parts of the game that Valve knew how to test, and knew to test, were outstanding, finely crafted and polished. The parts that Valve did not test had severe problems. We will never know for sure which reasons were most critical, and which ones were minor setbacks. I will make it clear what my guesses are, but they are just that, guesses. Reason 1: Artifact Was Too Complex, Complicated and Confusing Artifact streams were difficult to follow even if you knew the game. As teaching tools they didn’t work at all. I heard multiple reports that excellent streamers were unable to explain to viewers what was going on in their Artifact games. I was mostly able to follow streams when I had a strong strategic understanding of the game and all of its cards and recognized all the cards on sight. Mostly. I was fortunate to learn Artifact in person with those deeply committed to it, at the Valve offices, and in a setting
720ed140-a50c-4d2c-a6e2-3f8d49de48bb
trentmkelly/LessWrong-43k
LessWrong
Nash Equilibria and Schelling Points

A Nash equilibrium is an outcome in which neither player is willing to unilaterally change her strategy. Nash equilibria are often applied to games in which both players move simultaneously and where decision trees are less useful.

Suppose my girlfriend and I have both lost our cell phones and cannot contact each other. Both of us would really like to spend more time at home with each other (utility 3). But both of us also have a slight preference in favor of working late and earning some overtime (utility 2). If I go home and my girlfriend's there and I can spend time with her, great. If I stay at work and make some money, that would be pretty okay too. But if I go home and my girlfriend's not there and I have to sit around alone all night, that would be the worst possible outcome (utility 1). Meanwhile, my girlfriend has the same set of preferences: she wants to spend time with me, she'd be okay with working late, but she doesn't want to sit at home alone.

This “game” has two Nash equilibria. If we both go home, neither of us regrets it: we can spend time with each other and we've both got our highest utility. If we both stay at work, again, neither of us regrets it: since my girlfriend is at work, I am glad I stayed at work instead of going home, and since I am at work, my girlfriend is glad she stayed at work instead of going home. Although we both may wish that we had both gone home, neither of us specifically regrets our own choice, given our knowledge of how the other acted.

When all players in a game are reasonable, the (apparently) rational choice will be to go for a Nash equilibrium (why would you want to make a choice you'll regret when you know what the other player chose?) And since John Nash (remember that movie A Beautiful Mind?) proved that every game has at least one, all games between well-informed rationalists (who are not also being superrational in a sense to be discussed later) should end in one of these.

What if the game seems specifically design
994f2c50-c641-41c3-b362-62155f295106
trentmkelly/LessWrong-43k
LessWrong
Impossibility of Anthropocentric-Alignment Abstract: Values alignment, in AI safety, is typically construed as the imbuing into artificial intelligence of human values, so as to have the artificial intelligence act in ways that encourage what humans value to persist, and equally to preclude what humans do not value. “Anthropocentric” alignment emphasises that the values aligned-to are human, and what humans want. For, practically, were values alignment achieved, this means the AI is to act as humans want, and not as they do not want (for if humans wanted what they did not value, or vice versa, they would not seek, so would not act, to possess the want and value; as the AI is to act for them, for their wants and values: “Want implies act”. If not acted upon, what is valued may be lost, then by assumption is no longer present to be valued; consistency demands action, preservation). We shall show that the sets, as it were, of human wants and of possible actions, are incommensurable, and that therefore anthropocentric-alignment is impossible, and some other “values” will be required to have AI “align” with them, can this be done at all. Epistemic status: This is likely the most mathematically ambitious production by one so ill-educated they had to teach themselves long division aged twenty-five, in human history (Since there may not be much more human history, it may retain this distinction). Minor errors and logical redundancies are likely; serious mistakes cannot be discounted, and the author has nothing with which to identify them. Hence, the reader is invited to comment, as necessary, to highlight and correct such errors – not even for this author’s benefit as much as for similarly non-technical readers, who, were it only thoroughly downvoted, might receive no “updating” of their knowledge were they not given to know why it is abominated; they may even promulgate it in a spirit of iconoclasm – contrary to downvote’s intention; hence “voting” is generally counterproductive. Every effort has been made to have
29e93241-80a3-4bf5-bb69-74f48a806c3f
trentmkelly/LessWrong-43k
LessWrong
Open Thread: March 2010, part 2 The Open Thread posted at the beginning of the month has exceeded 500 comments – new Open Thread posts may be made here. This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
f6e32b3a-d5f5-48a2-86b6-57c52a474f49
StampyAI/alignment-research-dataset/special_docs
Other
China’s New AI Governance Initiatives Shouldn’t Be Ignored Over the past six months, the Chinese government has rolled out a series of policy documents and public pronouncements that are finally putting meat on the bone of the country’s governance regime for artificial intelligence (AI). Given China’s track record of leveraging AI for mass surveillance, it’s tempting to view these initiatives as little more than a fig leaf to cover widespread abuses of human rights. But that response risks ignoring regulatory changes with major implications for global AI development and national security. Anyone who wants to compete against, cooperate with, or simply understand China’s AI ecosystem must examine these moves closely. These recent initiatives show the emergence of three different approaches to AI governance, each championed by a different branch of the Chinese bureaucracy, and each at a different level of maturity. Their backers also pack very different bureaucratic punches. It’s worth examining the three approaches and their backers, along with how they will both complement and compete with each other, to better understand where China’s AI governance is heading.

**Three Approaches to Chinese AI Governance**

| Organization | Focus of Approach | Relevant Documents |
| --- | --- | --- |
| Cyberspace Administration of China | Rules for online algorithms, with a focus on public opinion | [Internet Information Service Algorithmic Recommendation Management Provisions](https://digichina.stanford.edu/work/translation-internet-information-service-algorithmic-recommendation-management-provisions-opinon-seeking-draft/); [Guiding Opinions on Strengthening Overall Governance of Internet Information Service Algorithms](https://digichina.stanford.edu/work/translation-guiding-opinions-on-strengthening-overall-governance-of-internet-information-service-algorithms/) |
| China Academy of Information and Communications Technology | Tools for testing and certification of “trustworthy AI” systems | [Trustworthy AI white paper](https://cset.georgetown.edu/publication/white-paper-on-trustworthy-artificial-intelligence/); [Trustworthy Facial Recognition Applications and Protections Plan](https://www.sohu.com/a/501708742_100207327) |
| Ministry of Science and Technology | Establishing AI ethics principles and creating tech ethics review boards within companies and research institutions | [Guiding Opinions on Strengthening Ethical Governance of Science and Technology](http://www.most.gov.cn/tztg/202107/t20210728_176136.html); [Ethical Norms for New Generation Artificial Intelligence](https://cset.georgetown.edu/publication/ethical-norms-for-new-generation-artificial-intelligence-released/) |

The strongest and most immediately influential moves in AI governance have been made by the Cyberspace Administration of China (CAC), a relatively new but [very powerful](https://qz.com/2039292/how-did-chinas-top-internet-regulator-become-so-powerful/) regulator that writes the rules governing certain applications of AI. The CAC’s approach is the most mature, the most rule-based, and the most concerned with AI’s role in disseminating information. The CAC made headlines in August 2021 when it released a [draft set of thirty rules](https://digichina.stanford.edu/work/translation-internet-information-service-algorithmic-recommendation-management-provisions-opinon-seeking-draft/) for regulating internet recommendation algorithms, the software powering everything from TikTok to news apps and search engines.
Some of those rules are China-specific, such as the one stipulating that recommendation algorithms “vigorously disseminate positive energy.” But other provisions [break ground](https://digichina.stanford.edu/work/experts-examine-chinas-pioneering-draft-algorithm-regulations/) in ongoing international debates, such as the requirement that algorithm providers be able to “[give an explanation](https://digichina.stanford.edu/work/translation-internet-information-service-algorithmic-recommendation-management-provisions-opinon-seeking-draft/)” and “remedy” situations in which algorithms have infringed on user rights and interests. If put into practice, these types of provisions could spur Chinese companies to experiment with new kinds of disclosure and methods for algorithmic interpretability, an [emerging but very immature](https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-interpretability-in-machine-learning/) area of machine learning research. Soon after releasing its recommendation algorithm rules, the CAC came out with a much more ambitious effort: a [three-year road map](http://www.cac.gov.cn/2021-09/29/c_1634507915623047.htm) for governing all internet algorithms. Following through on that road map will require input from many of the nine regulators that co-signed the project, including the Ministry of Industry and Information Technology (MIIT). The second approach to AI governance has emerged out of the China Academy of Information and Communications Technology (CAICT), an [influential think tank under the MIIT](https://www.newamerica.org/cybersecurity-initiative/digichina/blog/profile-china-academy-information-and-communications-technology-caict/). Active in policy formulation and many aspects of technology testing and certification, the CAICT has distinguished its method through a focus on creating the tools for measuring and testing AI systems. This work remains in its infancy, both from technical and regulatory perspectives. But if successful, it could lay the foundations for China’s larger AI governance regime, ensuring that deployed systems are robust, reliable, and controllable. In July 2021, the CAICT teamed up with a research lab at the Chinese e-commerce giant JD to release the country’s first [white paper](https://cset.georgetown.edu/publication/white-paper-on-trustworthy-artificial-intelligence/) on “trustworthy AI.” Already popular in European and U.S. discussions, trustworthy AI refers to many of the more technical aspects of AI governance, such as testing systems for robustness, bias, and explainability. The way the CAICT defines trustworthy AI in its core principles [looks very similar](https://macropolo.org/beijing-approach-trustworthy-ai/?rp=m) to the definitions that have come out of U.S. and European institutions, but the paper was notable for how quickly those principles are being converted into concrete action. The CAICT [is working with](http://aiiaorg.cn/index.php?m=content&c=index&a=show&catid=34&id=115) China’s AI Industry Alliance, a government-sponsored industry body, to test and certify different kinds of AI systems. In November 2021, it issued its first batch of [trustworthy AI certifications](https://www.sohu.com/a/501708742_100207327) for facial recognition systems. Depending on the technical rigor of implementation, these types of certifications could help accelerate progress on algorithmic interpretability—or they could simply turn into a form of bureaucratic rent seeking. 
On policy impact, the CAICT is often viewed as representing the views of the powerful MIIT, but the MIIT’s leadership has yet to issue its own policy documents on trustworthy AI. Whether it does will be a strong indicator of the bureaucratic momentum behind this approach. Finally, the Ministry of Science and Technology (MOST) has taken the lightest of the three approaches to AI governance. Its highest-profile publications have focused on laying down ethical guidelines, relying on companies and researchers to supervise themselves in applying those principles to their work. In July 2021, MOST [published guidelines](http://www.most.gov.cn/tztg/202107/t20210728_176136.html) that called for universities, labs, and companies to set up internal review committees to oversee and adjudicate technology ethics issues. Two months later, the main AI expert committee operating under MOST released its own [set of ethical norms for AI](https://cset.georgetown.edu/publication/ethical-norms-for-new-generation-artificial-intelligence-released/), with a special focus on weaving ethics into the entire life cycle of development. Since then, MOST has been encouraging leading tech companies to establish their own ethics review committees and audit their own products. MOST’s approach is similar to those of international organizations such as the [United Nations Educational, Scientific and Cultural Organization](https://www.computerweekly.com/news/252510287/Unesco-member-states-adopt-AI-ethics-recommendation) and the [Organisation for Economic Co-operation and Development](https://oecd.ai/en/ai-principles), which have released AI principles and encouraged countries and companies to adopt them. But in the Chinese context, that tactic feels quite out of step with the country’s increasingly hands-on approach to technology governance, a disconnect that could undermine the impact of MOST’s efforts. One unanswered question is how these three approaches will fit together. Chinese ministries and administrative bodies are [notoriously competitive](https://sgp.fas.org/crs/row/R41007.pdf) with one another, constantly jostling to get their pet initiatives in front of the country’s central leadership in hopes that they become the chosen policies of the party-state. In this contest, the CAC’s approach appears to have the clear upper hand: It is the most mature, the most in tune with the regulatory zeitgeist, and it comes from the organization with the most bureaucratic heft. But its approach can’t succeed entirely on its own. The CAC requires that companies be able to explain how their recommendation algorithms function, and the tools or certifications for what constitutes explainable AI are likely to come from the CAICT. In addition, given the sprawling and rapidly evolving nature of the technology, many practical aspects of trustworthy AI will first surface in the MOST-inspired ethics committees of individual companies. The three-year road map for algorithmic governance offers a glimpse of some bureaucratic collaboration. Though the CAC is clearly the lead author, the document includes new references to algorithms being trustworthy and to companies setting up ethics review committees, additions likely made at the behest of the other two ministries. There may also be substantial shifts in bureaucratic power as AI governance expands to cover many industrial and social applications of AI. 
The CAC is traditionally an internet-focused regulator, and future regulations for autonomous vehicles or medical AI may create an opening for a ministry like the MIIT to seize the regulatory reins. The potential impact of these regulatory currents extends far beyond China. If the CAC follows through on certain requirements for algorithmic transparency and explainability, China will be running some of the world’s largest regulatory experiments on topics that European regulators have [long debated](https://www.law.ox.ac.uk/business-law-blog/blog/2018/05/rethinking-explainable-machines-next-chapter-gdprs-right-explanation). Whether Chinese companies are able to [meet these new demands](https://digichina.stanford.edu/work/experts-examine-chinas-pioneering-draft-algorithm-regulations/) could inform analogous debates in Europe over the right to explanation. On the security side, as AI systems are woven deeper into the fabrics of militaries around the world, governments want to ensure those systems are robust, reliable, and controllable for the sake of international stability. The CAICT’s current experiments in certifying AI systems are likely not game-ready for those kinds of high-stakes deployment decisions. But developing an early understanding of how Chinese institutions and technologists approach these questions could prove valuable for governments who may soon find themselves negotiating over aspects of autonomous weapons and arms controls. With 2022 marking [a major year](https://thediplomat.com/2020/12/china-looks-ahead-to-20th-party-congress-in-2022/) in the Chinese political calendar, the people and bureaucracies building out Chinese AI governance are likely to continue jostling for position and influence. The results of that jostling warrant close attention from AI experts and China watchers. If China’s attempts to rein in algorithms prove successful, they could imbue these approaches with a kind of technological and regulatory soft power that shapes AI governance regimes around the globe.
309ae5fb-5d67-4a88-8534-001954a36f15
trentmkelly/LessWrong-43k
LessWrong
Avoiding Your Belief's Real Weak Points A few years back, my great-grandmother died, in her nineties, after a long, slow, and cruel disintegration. I never knew her as a person, but in my distant childhood, she cooked for her family; I remember her gefilte fish, and her face, and that she was kind to me. At her funeral, my grand-uncle, who had taken care of her for years, spoke. He said, choking back tears, that God had called back his mother piece by piece: her memory, and her speech, and then finally her smile; and that when God finally took her smile, he knew it wouldn’t be long before she died, because it meant that she was almost entirely gone. I heard this and was puzzled, because it was an unthinkably horrible thing to happen to anyone, and therefore I would not have expected my grand-uncle to attribute it to God. Usually, a Jew would somehow just-not-think-about the logical implication that God had permitted a tragedy. According to Jewish theology, God continually sustains the universe and chooses every event in it; but ordinarily, drawing logical implications from this belief is reserved for happier occasions. By saying “God did it!” only when you’ve been blessed with a baby girl, and just-not-thinking “God did it!” for miscarriages and stillbirths and crib deaths, you can build up quite a lopsided picture of your God’s benevolent personality. Hence I was surprised to hear my grand-uncle attributing the slow disintegration of his mother to a deliberate, strategically planned act of God. It violated the rules of religious self-deception as I understood them. If I had noticed my own confusion, I could have made a successful surprising prediction. Not long afterward, my grand-uncle left the Jewish religion. (The only member of my extended family besides myself to do so, as far as I know.) Modern Orthodox Judaism is like no other religion I have ever heard of, and I don’t know how to describe it to anyone who hasn’t been forced to study Mishna and Gemara. There is a tradition of questioning, but
39fc4f43-a797-4537-9ad2-65e76a9b1d7f
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: 1aaba7c9ec55a66f To be legible, evidence of misalignment probably has to be behavioral

Evidence from just model internals (e.g. interpretability) is unlikely to be broadly convincing.

Ryan Greenblatt, Apr 15, 2025

One key hope for mitigating risk from misalignment is inspecting the AI's behavior, noticing that it did something egregiously bad, converting this into legible evidence the AI is seriously misaligned, and then this triggering some strong and useful response (like spending relatively more resources on safety or undeploying this misaligned AI). You might hope that (fancy) internals-based techniques (e.g., ELK methods or interpretability) allow us to legibly incriminate misaligned AIs even in cases where the AI hasn't (yet) done any problematic actions despite behavioral red-teaming (where we try to find inputs on which the AI might do something bad), or when the problematic actions the AI does are so subtle and/or complex that humans can't understand how the action is problematic.[1] That is, you might hope that internals-based methods allow us to legibly incriminate misaligned AIs even when we can't produce behavioral evidence that they are misaligned.

Unfortunately, I think it's unlikely that evidence which is just from fancy internals-based methods (and can't be supported by human inspection of AI behavior) will be legible and convincing in practice. Suppose that we use some fancy method to train an ELK head to honestly predict what is going to happen or to honestly answer our questions. Then, suppose this head says/predicts "this action would yield very bad consequences in a way which is unlikely to be an accident" but humans can't understand how this action would result in these bad consequences (even after it is explained to them). Even if the internals-based method is extremely well supported theoretically and empirically (which seems quite unlikely), I don't think this would suffice for this to trigger a strong response by convincing relevant people.[2] This reasoning also applies to "our interpretability method decoded some very problematic deceptive reasoning" or "the deception probe fires in a very suspicious way (e.g. it fires more when we give it a fake opportunity to escape or ask it about its alignment research outputs)". More generally, I think that to convince people enough to take strong action, evidence will need to bottom out in some sort of behavioral evidence that (many) humans can understand.[3] (That's not to say solid behavioral evidence will necessarily suffice.)

This isn't to say that internals-based techniques for detecting scheming aren't useful. These techniques could be quite useful via two mechanisms:

- The technique could allow us to find behavioral evidence that would have otherwise been hard to find. For instance, maybe we can use the method to discover an input on which the AI does something egregiously bad (that we can understand). (However, it will be important to legibly not apply too much optimization pressure, as you might be able to get any model to look misaligned with enough optimization pressure.)
- Internals-based techniques could also help with having humans understand why an action is egregiously bad (e.g., we might be able to use internals-based techniques to effectively do better elicitation for debaters).

Detection of scheming can be useful via mechanisms other than convincing people to take strong and expensive actions:

- Marginally more buy-in and support for handling misalignment could be helpful.
- If we can detect misalignment, we can study techniques for preventing or removing misalignment by iterating (or even training) against our detectors. The more robust the detector is, the more useful this would be.
- The subset of people who buy this evidence could usefully alter their actions and strategy. For instance, AI company employees implementing countermeasures and utilizing AIs might adjust their approach now that they know the AIs are likely seriously misaligned (though presumably not in a way that greatly increases safety costs). In the case where you know your AI is seriously misaligned, you want to adjust your control protocol, even if you're unable to take much of a hit on (apparent) usefulness.[4]
- We might be able to trigger well-enforced preexisting commitments. We could even imagine some sort of legally binding contract which triggers in this case (effectively a bet between people who are more worried about misalignment and those who are less worried).

Internals-based techniques for generally understanding an AI's behavior could also help by providing evidence for a particular interpretation of behavioral evidence.[5] For instance, suppose an AI attempted to sabotage some alignment research. Internals-based techniques could (in principle) help by backing up the interpretation that this behavior is due to the AI being dangerously misaligned (or could provide evidence against this interpretation). However, for usage as legible evidence, this internals-based understanding is unlikely to work as a heavily load-bearing component, while it might work as a more minor/moderate piece of evidence in support of a given interpretation.

Given these difficulties in producing legible evidence, I think researchers working on internals-based techniques for detecting misalignment should spend some time thinking about the ultimate use case of their technique and then alter their approach as needed. These difficulties also generally make me somewhat less excited about internals-based methods for detecting misalignment (at least if these methods don’t involve finding inputs on which the AI behaves poorly). I think techniques for detecting misalignment should focus on finding behavioral evidence or should aim to be robust (and cheap) enough to be iterated against.

[1] We can push the frontier of what humans can understand using recursive oversight techniques like debate which might allow for eliciting explanations that allow for humans to understand how the action is problematic (e.g., pointing out exactly what the problematic behavior is and explaining how to test this in isolation). However, if the problematic nature of an action is sufficiently subtle and/or complex (perhaps due to the AI having superhuman understanding), then even well elicited debate might be unable to provide a good enough explanation.
We might have hoped that internals-based methods could push the frontier beyond debate (perhaps to indefinite worst case scalability) while still producing legible and convincing evidence, but I'll argue that non-behavioral evidence is unlikely to be convincing. This suggests an upper bound on strategies which depend on legible evidence: once well elicited debate (and other methods for improving human understanding) aren't viable, misaligned AIs can get away with doing bad actions without these actions resulting in legible evidence of misalignment.

[2] That is, relevant humans. This approach might work for eliciting a strong response from AIs once AIs make the important decisions.

[3] That is, it must at least partially bottom out behaviorally. Internals-based evidence which is substantially behaviorally supported (in a relatively direct way) could suffice. E.g., if we can read off the AI's internal reasoning using some fancy method, it looks quite malign, and the malign aspects of the reasoning are also quite predictive of the AI's behavior, this could suffice. For instance, Alignment Faking in Large Language Models utilized a mixture of inspecting the Chain-of-Thought and behavioral evidence in a relatively effective way.

[4] Presumably, the actual level of usefulness might be substantially lower as the AI is a misaligned AI trying to disempower humans.

[5] I added this paragraph in an edit because it seemed like an important additional hope.

Comments:

Miles Kodama (Apr 15): The interpretability community could also take this essay as a challenge to up their scientific communication game. We want to be able to say "our deception probe is firing suspiciously" in a sufficiently accessible and compelling way that the relevant people understand what this means and why it's alarming.

Yixiong Hao (Apr 20): Generally agreed, I think more people should be working on the demos + organisational outreach part of the theory of change. What do you mean by "robust enough to be iterated against" at the very end? Is there an important distinction between iterate and optimise? On priors, optimising against our technique of detection seems bad. One likely outcome is obfuscated misalignment that our technique is no longer able to detect (https://thezvi.substack.com/p/the-most-forbidden-technique). In this sense, is it even possible to have a robust detection technique?
d1e62382-c460-4955-bcfc-8b51278820a4
StampyAI/alignment-research-dataset/arxiv
Arxiv
Adversarial Examples for Evaluating Reading Comprehension Systems

1 Introduction
---------------

Quantifying the extent to which a computer system exhibits intelligent behavior is a longstanding problem in AI (Levesque, [2013](#bib.bib13)). Today, the standard paradigm is to measure average error across a held-out test set. However, models can succeed in this paradigm by recognizing patterns that happen to be predictive on most of the test examples, while ignoring deeper, more difficult phenomena (Rimell et al., [2009](#bib.bib26); Paperno et al., [2016](#bib.bib22)).

Article: Super Bowl 50
Paragraph: “Peyton Manning became the first quarterback ever to lead two different teams to multiple Super Bowls. He is also the oldest quarterback ever to play in a Super Bowl at age 39. The past record was held by John Elway, who led the Broncos to victory in Super Bowl XXXIII at age 38 and is currently Denver’s Executive Vice President of Football Operations and General Manager. Quarterback Jeff Dean had jersey number 37 in Champ Bowl XXXIV.”
Question: “What is the name of the quarterback who was 38 in Super Bowl XXXIII?”
Original Prediction: John Elway
Prediction under adversary: Jeff Dean

Figure 1: An example from the SQuAD dataset. The BiDAF Ensemble model originally gets the answer correct, but is fooled by the addition of an adversarial distracting sentence (in blue).

In this work, we propose adversarial evaluation for NLP, in which systems are instead evaluated on adversarially-chosen inputs. We focus on the SQuAD reading comprehension task (Rajpurkar et al., [2016](#bib.bib25)), in which systems answer questions about paragraphs from Wikipedia. Reading comprehension is an appealing testbed for adversarial evaluation, as existing models appear successful by standard average-case evaluation metrics: the current state-of-the-art system achieves 84.7% F1 score, while human performance is just 91.2% (<https://rajpurkar.github.io/SQuAD-explorer/>). Nonetheless, it seems unlikely that existing systems possess true language understanding and reasoning capabilities.

Carrying out adversarial evaluation on SQuAD requires new methods that adversarially alter reading comprehension examples. Prior work in computer vision adds imperceptible adversarial perturbations to input images, relying on the fact that such small perturbations cannot change an image’s true label (Szegedy et al., [2014](#bib.bib30); Goodfellow et al., [2015](#bib.bib9)). In contrast, changing even one word of a paragraph can drastically alter its meaning. Instead of relying on semantics-preserving perturbations, we create adversarial examples by adding distracting sentences to the input paragraph, as shown in Figure [1](#S1.F1). We automatically generate these sentences so that they confuse models, but do not contradict the correct answer or confuse humans. For our main results, we use a simple set of rules to generate a raw distractor sentence that does not answer the question but looks related; we then fix grammatical errors via crowdsourcing. While adversarially perturbed images punish model *oversensitivity* to imperceptible noise, our adversarial examples target model *overstability*—the inability of a model to distinguish a sentence that actually answers the question from one that merely has words in common with it.

Our experiments demonstrate that no published open-source model is robust to the addition of adversarial sentences.
Across sixteen such models, adding grammatical adversarial sentences reduces F1 score from an average of 75% to 36%. On a smaller set of four models, we run additional experiments in which the adversary adds non-grammatical sequences of English words, causing average F1 score to drop further to 7%. To encourage the development of new models that understand language more precisely, we have released all of our code and data publicly.

2 The SQuAD Task and Models
----------------------------

### 2.1 Task

The SQuAD dataset (Rajpurkar et al., [2016](#bib.bib25)) contains 107,785 human-generated reading comprehension questions about Wikipedia articles. Each question refers to one paragraph of an article, and the corresponding answer is guaranteed to be a span in that paragraph.

### 2.2 Models

When developing and testing our methods, we focused on two published model architectures: BiDAF (Seo et al., [2016](#bib.bib27)) and Match-LSTM (Wang and Jiang, [2016](#bib.bib31)). Both are deep learning architectures that predict a probability distribution over the correct answer. Each model has a single and an ensemble version, yielding four systems in total. We also validate our major findings on twelve other published models with publicly available test-time code: ReasoNet Single and Ensemble versions (Shen et al., [2017](#bib.bib28)), Mnemonic Reader Single and Ensemble versions (Hu et al., [2017](#bib.bib10)), Structural Embedding of Dependency Trees (SEDT) Single and Ensemble versions (Liu et al., [2017](#bib.bib15)), jNet (Zhang et al., [2017](#bib.bib35)), Ruminating Reader (Gong and Bowman, [2017](#bib.bib7)), Multi-Perspective Context Matching (MPCM) Single version (Wang et al., [2016](#bib.bib32)), RaSOR (Lee et al., [2017](#bib.bib12)), Dynamic Chunk Reader (DCR) (Yu et al., [2016](#bib.bib34)), and the Logistic Regression Baseline (Rajpurkar et al., [2016](#bib.bib25)). We did not run these models during development, so they serve as a held-out set that validates the generality of our approach.

### 2.3 Standard Evaluation

Given a model f that takes in paragraph-question pairs (p,q) and outputs an answer $\hat{a}$, the *standard accuracy* over a test set $D_{\text{test}}$ is simply

$$\mathrm{Acc}(f) \overset{\text{def}}{=} \frac{1}{|D_{\text{test}}|} \sum_{(p,q,a)\in D_{\text{test}}} v\big((p,q,a),\, f\big),$$

where v is the F1 score between the true answer a and the predicted answer f(p,q) (see Rajpurkar et al. ([2016](#bib.bib25)) for details).

3 Adversarial Evaluation
-------------------------

### 3.1 General Framework

A model that relies on superficial cues without understanding language can do well according to average F1 score, if these cues happen to be predictive most of the time. Weissenborn et al. ([2017](#bib.bib33)) argue that many SQuAD questions can be answered with heuristics based on type and keyword-matching. To determine whether existing models have learned much beyond such simple patterns, we introduce adversaries that confuse deficient models by altering test examples. Consider the example in Figure [1](#S1.F1): the BiDAF Ensemble model originally gives the right answer, but gets confused when an adversarial distracting sentence is added to the paragraph.

We define an adversary A to be a function that takes in an example (p,q,a), optionally with a model f, and returns a new example (p′,q′,a′). The *adversarial accuracy* with respect to A is

$$\mathrm{Adv}(f) \overset{\text{def}}{=} \frac{1}{|D_{\text{test}}|} \sum_{(p,q,a)\in D_{\text{test}}} v\big(A(p,q,a,f),\, f\big).$$
While standard test error measures the fraction of the test distribution over which the model gets the correct answer, the adversarial accuracy measures the fraction over which the model is *robustly* correct, even in the face of adversarially-chosen alterations. For this quantity to be meaningful, the adversary must satisfy two basic requirements: first, it should always generate (p′,q′,a′) tuples that are *valid*—a human would judge a′ as the correct answer to q′ given p′. Second, (p′,q′,a′) should be somehow “close” to the original example (p,q,a).

### 3.2 Semantics-preserving Adversaries

In image classification, adversarial examples are commonly generated by adding an imperceptible amount of noise to the input (Szegedy et al., [2014](#bib.bib30); Goodfellow et al., [2015](#bib.bib9)). These perturbations do not change the semantics of the image, but they can change the predictions of models that are *oversensitive* to semantics-preserving changes. For language, the direct analogue would be to paraphrase the input (Madnani and Dorr, [2010](#bib.bib17)). However, high-precision paraphrase generation is challenging, as most edits to a sentence do actually change its meaning.

### 3.3 Concatenative Adversaries

| | Image Classification | Reading Comprehension |
| --- | --- | --- |
| Possible Input | (image) | Tesla moved to the city of Chicago in 1880. |
| Similar Input | (perturbed image) | Tadakatsu moved to the city of Chicago in 1881. |
| Semantics | Same | Different |
| Model’s Mistake | Considers the two to be different | Considers the two to be the same |
| Model Weakness | Overly sensitive | Overly stable |

Table 1: Adversarial examples in computer vision exploit model oversensitivity to small perturbations. In contrast, our adversarial examples work because models do not realize that a small perturbation can completely change the meaning of a sentence. Images from Szegedy et al. ([2014](#bib.bib30)).

Instead of relying on paraphrasing, we use perturbations that do alter semantics to build *concatenative* adversaries, which generate examples of the form (p+s,q,a) for some sentence s. In other words, concatenative adversaries add a new sentence to the end of the paragraph, and leave the question and answer unchanged. Valid adversarial examples are precisely those for which s does not contradict the correct answer; we refer to such sentences as being *compatible* with (p,q,a). We use semantics-altering perturbations to ensure that s is compatible, even though it may have many words in common with the question q. Existing models are bad at distinguishing these sentences from sentences that do in fact address the question, indicating that they suffer not from oversensitivity but from *overstability* to semantics-altering edits. Table [1](#S3.T1) summarizes this important distinction.

The decision to always append s to the end of p is somewhat arbitrary; we could also prepend it to the beginning, though this would violate the expectation of the first sentence being a topic sentence. Both are more likely to preserve the validity of the example than inserting s in the middle of p, which runs the risk of breaking coreference links. Now, we describe two concrete concatenative adversaries, as well as two variants. AddSent, our main adversary, adds grammatical sentences that look similar to the question.
In contrast, AddAny adds arbitrary sequences of English words, giving it more power to confuse models. Figure [2](#S3.F2) illustrates these two main adversaries.

Figure 2: An illustration of the AddSent and AddAny adversaries.

#### 3.3.1 AddSent

AddSent uses a four-step procedure to generate sentences that look similar to the question, but do not actually contradict the correct answer. Refer to Figure [2](#S3.F2) for an illustration of these steps.

In Step 1, we apply semantics-altering perturbations to the question, in order to guarantee that the resulting adversarial sentence is compatible. We replace nouns and adjectives with antonyms from WordNet (Fellbaum, [1998](#bib.bib4)), and change named entities and numbers to the nearest word in GloVe word vector space (Pennington et al., [2014](#bib.bib24))[2] with the same part of speech.[3] If no words are changed during this step, the adversary gives up and immediately returns the original example. For example, given the question “What ABC division handles domestic television distribution?”, we would change “ABC” to “NBC” (a nearby word in vector space) and “domestic” to “foreign” (a WordNet antonym), resulting in the question, “What NBC division handles foreign television distribution?”

In Step 2, we create a fake answer that has the same “type” as the original answer. We define a set of 26 types, corresponding to NER and POS tags from Stanford CoreNLP (Manning et al., [2014](#bib.bib18)), plus a few custom categories (e.g., abbreviations), and manually associate a fake answer with each type. Given the original answer to a question, we compute its type and return the corresponding fake answer. In our running example, the correct answer was not tagged as a named entity, and has the POS tag NNP, which corresponds to the fake answer “Central Park.”

In Step 3, we combine the altered question and fake answer into declarative form, using a set of roughly 50 manually-defined rules over CoreNLP constituency parses. For example, “What ABC division handles domestic television distribution?” triggers a rule that converts questions of the form “what/which NP1 VP1 ?” to “The NP1 of [Answer] VP1”. After incorporating the alterations and fake answer from the previous steps, we generate the sentence, “The NBC division of Central Park handles foreign television distribution.”

The raw sentences generated by Step 3 can be ungrammatical or otherwise unnatural due to the incompleteness of our rules and errors in constituency parsing. Therefore, in Step 4, we fix errors in these sentences via crowdsourcing. Each sentence is edited independently by five workers on Amazon Mechanical Turk, resulting in up to five sentences for each raw sentence. Three additional crowdworkers then filter out sentences that are ungrammatical or incompatible, resulting in a smaller (possibly empty) set of human-approved sentences.

[2] We use 100-dimensional GloVe vectors trained on Wikipedia and Euclidean distance to define nearby words.

[3] We choose the nearest word whose most common gold POS tag in the Penn Treebank (Marcus et al., [1999](#bib.bib19)) matches the predicted POS tag of the original word, according to CoreNLP. If none of the nearest 100 words satisfy this, we just return the single closest word.
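To make Step 1 concrete, here is a rough Python sketch of the perturbation logic, assuming NLTK's WordNet corpus and a dict of GloVe vectors loaded elsewhere. The function names and the coarse tag set are illustrative simplifications, not the authors' released code, which uses CoreNLP tags and the POS-matching constraint on neighbors described in the footnotes:

```python
# Requires: pip install nltk numpy; nltk.download('wordnet')
import numpy as np
from nltk.corpus import wordnet as wn

def wordnet_antonym(word):
    """Return the first WordNet antonym of `word`, or None if there is none."""
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            if lemma.antonyms():
                return lemma.antonyms()[0].name()
    return None

def nearest_neighbor(word, glove):
    """Closest other vocabulary word by Euclidean distance in GloVe space."""
    if word not in glove:
        return None
    v = glove[word]
    dists = ((np.linalg.norm(v - u), w) for w, u in glove.items() if w != word)
    return min(dists)[1]

def perturb_question(tagged_tokens, glove):
    """tagged_tokens: list of (token, coarse_tag) pairs, where coarse_tag is
    'NOUN', 'ADJ', 'ENT' (named entity), 'NUM', or 'OTHER' (an assumed,
    simplified tag set)."""
    out, changed = [], False
    for token, tag in tagged_tokens:
        new = None
        if tag in ("NOUN", "ADJ"):
            new = wordnet_antonym(token)          # flip meaning via antonym
        elif tag in ("ENT", "NUM"):
            new = nearest_neighbor(token, glove)  # swap in a confusable neighbor
        if new and new != token:
            out.append(new)
            changed = True
        else:
            out.append(token)
    return out if changed else None  # Step 1 gives up if nothing changed
```

Returning None when no word changes mirrors the give-up condition above; downstream steps would then skip the example entirely.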
The full AddSent adversary runs the model f as a black box on every human-approved sentence, and picks the one that makes the model give the worst answer. If there are no human-approved sentences, the adversary simply returns the original example.

**A model-independent adversary.** AddSent requires a small number of queries to the model under evaluation. To explore the possibility of an adversary that is completely model-independent, we also introduce AddOneSent, which adds a random human-approved sentence to the paragraph. In contrast with prior work in computer vision (Papernot et al., [2017](#bib.bib23); Narodytska and Kasiviswanathan, [2016](#bib.bib21); Moosavi-Dezfooli et al., [2017](#bib.bib20)), AddOneSent does not require any access to the model or to any training data: it generates adversarial examples based solely on the intuition that existing models are overly stable.

#### 3.3.2 AddAny

For AddAny, the goal is to choose any sequence of d words, regardless of grammaticality. We use local search to adversarially choose a distracting sentence $s = w_1 w_2 \dots w_d$. Figure [2](#S3.F2) shows an example of AddAny with d=5 words; in our experiments, we use d=10. We first initialize words $w_1, \dots, w_d$ randomly from a list of common English words.[4] Then, we run 6 epochs of local search, each of which iterates over the indices $i \in \{1, \dots, d\}$ in a random order. For each i, we randomly generate a set of candidate words W as the union of 20 randomly sampled common words and all words in q. For each $x \in W$, we generate the sentence with x in the i-th position and $w_j$ in the j-th position for each $j \neq i$. We try adding each sentence to the paragraph and query the model for its predicted probability distribution over answers. We update $w_i$ to be the x that minimizes the expected value of the F1 score over the model’s output distribution. We return immediately if the model’s argmax prediction has 0 F1 score. If we do not stop after 3 epochs, we randomly initialize 4 additional word sequences, and search over all of these random initializations in parallel.

AddAny requires significantly more model access than AddSent: not only does it query the model many times during the search process, but it also assumes that the model returns a probability distribution over answers, instead of just a single prediction. Without this assumption, we would have to rely on something like the F1 score of the argmax prediction, which is piecewise constant and therefore harder to optimize. “Probabilistic” query access is still weaker than access to gradients, as is common in computer vision (Szegedy et al., [2014](#bib.bib30); Goodfellow et al., [2015](#bib.bib9)). We do not do anything to ensure that the sentences generated by this search procedure do not contradict the original answer. In practice, the generated “sentences” are gibberish that use many question words but have no semantic content (see Figure [2](#S3.F2) for an example). Finally, we note that both AddSent and AddAny try to incorporate words from the question into their adversarial sentences.

[4] We define common words as the 1000 most frequent words in the Brown corpus (Francis and Kucera, [1979](#bib.bib5)).
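A condensed Python sketch of this local search follows. The model hook `model_expected_f1` is an assumed interface (it should append the candidate sentence to the paragraph, query the model's output distribution, and return the expected F1 of that distribution), and the sketch simplifies the paper's procedure: it omits the random-restart schedule after epoch 3 and stops when the expected F1, rather than the argmax prediction's F1, reaches zero:

```python
import random

def add_any(paragraph, question, answer, common_words,
            model_expected_f1, d=10, epochs=6):
    """Local search for a d-word distractor that minimizes expected F1."""
    words = random.choices(common_words, k=d)    # random initialization
    q_words = question.split()
    for _ in range(epochs):
        for i in random.sample(range(d), d):     # visit positions in random order
            # Candidate pool: 20 random common words plus all question words.
            candidates = set(random.sample(common_words, 20)) | set(q_words)
            best_word, best_score = words[i], None
            for x in candidates:
                trial = " ".join(words[:i] + [x] + words[i + 1:])
                score = model_expected_f1(paragraph, question, answer, trial)
                if best_score is None or score < best_score:
                    best_word, best_score = x, score
            words[i] = best_word                 # greedy coordinate update
            if best_score == 0.0:                # model is fully fooled; stop
                return " ".join(words)
    return " ".join(words)
```

Note how the candidate pool deliberately includes the question's own words, which is what pulls the model's attention toward the gibberish distractor.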
While this is an obvious way to draw the model’s attention, we were curious if we could also distract the model without such a straightforward approach. To this end, we introduce a variant of AddAny called AddCommon, which is exactly like AddAny except it only adds common words.

4 Experiments
--------------

### 4.1 Setup

For all experiments, we measure adversarial F1 score (Rajpurkar et al., [2016](#bib.bib25)) across 1000 randomly sampled examples from the SQuAD development set (the test set is not publicly available). Downsampling was helpful because AddAny and AddCommon can issue thousands of model queries per example, making them very slow. As the effect sizes we measure are large, this downsampling does not hurt statistical significance.

### 4.2 Main Experiments

| | Match Single | Match Ens. | BiDAF Single | BiDAF Ens. |
| --- | --- | --- | --- | --- |
| Original | 71.4 | 75.4 | 75.5 | 80.0 |
| AddSent | 27.3 | 29.4 | 34.3 | 34.2 |
| AddOneSent | 39.0 | 41.8 | 45.7 | 46.9 |
| AddAny | 7.6 | 11.7 | 4.8 | 2.7 |
| AddCommon | 38.9 | 51.0 | 41.7 | 52.6 |

Table 2: Adversarial evaluation on the Match-LSTM and BiDAF systems. All four systems can be fooled by adversarial examples.

| Model | Original | AddSent | AddOneSent |
| --- | --- | --- | --- |
| ReasoNet-E | 81.1 | 39.4 | 49.8 |
| SEDT-E | 80.1 | 35.0 | 46.5 |
| BiDAF-E | 80.0 | 34.2 | 46.9 |
| Mnemonic-E | 79.1 | 46.2 | 55.3 |
| Ruminating | 78.8 | 37.4 | 47.7 |
| jNet | 78.6 | 37.9 | 47.0 |
| Mnemonic-S | 78.5 | 46.6 | 56.0 |
| ReasoNet-S | 78.2 | 39.4 | 50.3 |
| MPCM-S | 77.0 | 40.3 | 50.0 |
| SEDT-S | 76.9 | 33.9 | 44.8 |
| RaSOR | 76.2 | 39.5 | 49.5 |
| BiDAF-S | 75.5 | 34.3 | 45.7 |
| Match-E | 75.4 | 29.4 | 41.8 |
| Match-S | 71.4 | 27.3 | 39.0 |
| DCR | 69.3 | 37.8 | 45.1 |
| Logistic | 50.4 | 23.2 | 30.4 |

Table 3: AddSent and AddOneSent on all sixteen models, sorted by F1 score on the original examples. S = single, E = ensemble.

Table [2](#S4.T2) shows the performance of the Match-LSTM and BiDAF models against all four adversaries. Each model incurred a significant accuracy drop under every form of adversarial evaluation. AddSent made average F1 score across the four models fall from 75.7% to 31.3%. AddAny was even more effective, making average F1 score fall to 6.7%. AddOneSent retained much of the effectiveness of AddSent, despite being model-independent. Finally, AddCommon caused average F1 score to fall to 46.1%, despite only adding common words.

We also verified that our adversaries were general enough to fool models that we did not use during development. We ran AddSent on twelve published models for which we found publicly available test-time code; we did not run AddAny on these models, as not all models exposed output distributions. As seen in Table [3](#S4.T3), no model was robust to adversarial evaluation; across the sixteen total models tested, average F1 score fell from 75.4% to 36.4% under AddSent. It is noteworthy that the Mnemonic Reader models (Hu et al., [2017](#bib.bib10)) outperform the other models by about 6 F1 points. We hypothesize that Mnemonic Reader’s self-alignment layer, which helps model long-distance relationships between parts of the paragraph, makes it better at locating all pieces of evidence that support the correct answer.
Therefore, it can be more confident in the correct answer, compared to the fake answer inserted by the adversary.

### 4.3 Human Evaluation

| | Human |
| --- | --- |
| Original | 92.6 |
| AddSent | 79.5 |
| AddOneSent | 89.2 |

Table 4: Human evaluation on adversarial examples. Human accuracy drops on AddSent mostly due to unrelated errors; the AddOneSent numbers show that humans are robust to adversarial sentences.

To ensure our results are valid, we verified that humans are not also fooled by our adversarial examples. As AddAny requires too many model queries to run against humans, we focused on AddSent. We presented each original and adversarial paragraph-question pair to three crowdworkers, and asked them to select the correct answer by copy-and-pasting from the paragraph. We then took a majority vote over the three responses (if all three responses were different, we picked one at random). These results are shown in Table [4](#S4.T4). On original examples, our humans are actually slightly better than the reported number of 91.2 F1 on the entire development set. On AddSent, human accuracy drops by 13.1 F1 points, much less than the computer systems. Moreover, much of this decrease can be explained by mistakes unrelated to our adversarial sentences. Recall that AddSent picks the worst case over up to five different paragraph-question pairs. Even if we showed the same original example to five sets of three crowdworkers, chances are that at least one of the five groups would make a mistake, just because humans naturally err. Therefore, it is more meaningful to evaluate humans on AddOneSent, on which their accuracy drops by only 3.4 F1 points.

### 4.4 Analysis

Next, we sought to better understand the behavior of our four main models under adversarial evaluation. To highlight errors caused by the adversary, we focused on examples where the model originally predicted the (exact) correct answer. We divided this set into “model successes”—examples where the model continued being correct during adversarial evaluation—and “model failures”—examples where the model gave a wrong answer during adversarial evaluation.

#### 4.4.1 Manual verification

First, we verified that the sentences added by AddSent are actually grammatical and compatible. We manually checked 100 randomly chosen BiDAF Ensemble failures. We found only one where the sentence could be interpreted as answering the question: in this case, AddSent replaced the word “Muslim” with the related word “Islamic”, so the resulting adversarial sentence still contradicted the correct answer. Additionally, we found 7 minor grammar errors, such as subject-verb disagreement (e.g., “The Alaskan Archipelago are made up almost entirely of hamsters.”) and misuse of function words (e.g., “The gas of nitrogen makes up 21.8 % of the Mars’s atmosphere.”), but no errors that materially impeded understanding of the sentence. We also verified compatibility for AddAny. We found no violations out of 100 randomly chosen BiDAF Ensemble failures.

#### 4.4.2 Error analysis

Next, we wanted to understand what types of errors the models made on the AddSent examples. In 96.6% of model failures, the model predicted a span in the adversarial sentence. The lengths of the predicted answers were mostly similar to those of correct answers, but the BiDAF models occasionally predicted very long spans.
The BiDAF Single model predicted an answer of more than 29 words—the length of the longest answer in the SQuAD development set—on 5.0% of model failures; for BiDAF Ensemble, this number was 1.6%. Since the BiDAF models independently predict the start and end positions of the answer, they can predict very long spans when the end pointer is influenced by the adversarial sentence, but the start pointer is not. Match-LSTM has a similar structure, but also has a hard-coded rule that stops it from predicting very long answers. We also analyzed human failures—examples where the humans were correct originally, but wrong during adversarial evaluation. Humans predicted from the adversarial sentence on only 27.3% of these error cases, which confirms that many errors are normal mistakes unrelated to adversarial sentences.

#### 4.4.3 Categorizing AddSent sentences

We then manually examined sentences generated by AddSent. In 100 BiDAF Ensemble failures, we found 75 cases where an entity name was changed in the adversarial sentence, 17 cases where numbers or dates were changed, and 33 cases where an antonym of a question word was used.[5] Additionally, 7 sentences had other miscellaneous perturbations made by crowdworkers during Step 4 of AddSent. For example, on a question about the “Kalven Report”, the adversarial sentence discussed “The statement Kalven cited” instead; in another case, the question, “How does Kenya curb corruption?” was met by the unhelpful sentence, “Tanzania is curbing corruption” (the model simply answered, “corruption”).

[5] These numbers add up to more than 100 because more than one word can be altered per example.

#### 4.4.4 Reasons for model successes

![Figure 3](https://media.arxiv-vanity.com/render-output/8126920/x1.png)

Figure 3: Fraction of model successes and failures on AddSent for which the question has an exact n-gram match with the original paragraph. For each model and each value of n, successes are more likely to have an n-gram match than failures.

![Figure 4](https://media.arxiv-vanity.com/render-output/8126920/x2.png)

Figure 4: For model successes and failures on AddSent, the cumulative distribution function of the number of words in the question (for each k, what fraction of questions have ≤k words). Successes are more likely to involve short questions.

Finally, we sought to understand the factors that influence whether the model will be robust to adversarial perturbations on a particular example. First, we found that models do well when the question has an exact n-gram match with the original paragraph. Figure [3](#S4.F3) plots the fraction of examples for which an n-gram in the question appears verbatim in the original passage; this is much higher for model successes. For example, 41.5% of BiDAF Ensemble successes had a 4-gram in common with the original paragraph, compared to only 21.0% of model failures. We also found that models succeeded more often on short questions. Figure [4](#S4.F4) shows the distribution of question length on model successes and failures; successes tend to involve shorter questions.
For example, 32.7% of the questions in BiDAF Ensemble successes were 8 words or shorter, compared to only 11.8% for model failures. This effect arises because AddSent always changes at least one word in the question. For long questions, changing one word leaves many others unchanged, so the adversarial sentence still has many words in common with the question. For short questions, changing one content word may be enough to make the adversarial sentence completely irrelevant.

### 4.5 Transferability across Models

| Targeted Model | ML Single | ML Ens. | BiDAF Single | BiDAF Ens. |
| --- | --- | --- | --- | --- |
| *AddSent* | | | | |
| ML Single | 27.3 | 33.4 | 40.3 | 39.1 |
| ML Ens. | 31.6 | 29.4 | 40.2 | 38.7 |
| BiDAF Single | 32.7 | 34.8 | 34.3 | 37.4 |
| BiDAF Ens. | 32.7 | 34.2 | 38.3 | 34.2 |
| *AddAny* | | | | |
| ML Single | 7.6 | 54.1 | 57.1 | 60.9 |
| ML Ens. | 44.9 | 11.7 | 50.4 | 54.8 |
| BiDAF Single | 58.4 | 60.5 | 4.8 | 46.4 |
| BiDAF Ens. | 48.8 | 51.1 | 25.0 | 2.7 |

Table 5: Transferability of adversarial examples across models (columns give the model under evaluation). Each row measures performance on adversarial examples generated to target one particular model; each column evaluates one (possibly different) model on these examples.

In computer vision, adversarial examples that fool one model also tend to fool other models (Szegedy et al., [2014](#bib.bib30); Moosavi-Dezfooli et al., [2017](#bib.bib20)); we investigate whether the same pattern holds for us. Examples from AddOneSent clearly do transfer across models, since AddOneSent always adds the same adversarial sentence regardless of model. Table [5](#S4.T5) shows the results of evaluating the four main models on adversarial examples generated by running either AddSent or AddAny against each model. AddSent adversarial examples transfer between models quite effectively; in particular, they are harder than AddOneSent examples, which implies that examples that fool one model are more likely to fool other models. The AddAny adversarial examples exhibited more limited transferability between models. For both AddSent and AddAny, examples transferred slightly better between single and ensemble versions of the same model.

### 4.6 Training on Adversarial Examples

| Test data | Original training | Augmented training |
| --- | --- | --- |
| Original | 75.8 | 75.1 |
| AddSent | 34.8 | 70.4 |
| AddSentMod | 34.3 | 39.2 |

Table 6: Effect of training the BiDAF Single model on the original training data alone (first column) versus augmenting the data with raw AddSent examples (second column).

Finally, we tried training on adversarial examples, to see if existing models can learn to become more robust. Due to the prohibitive cost of running AddSent or AddAny on the entire training set, we instead ran only Steps 1-3 of AddSent (everything except crowdsourcing) to generate a raw adversarial sentence for each training example. We then trained the BiDAF model from scratch on the union of these examples and the original training data. As a control, we also trained a second BiDAF model on the original training data alone.[6] The results of evaluating these models are shown in Table [6](#S4.T6).

[6] All previous experiments used parameters released by Seo et al. ([2016](#bib.bib27)).
At first glance, training on adversarial data seems effective, as it largely protects against AddSent. However, further investigation shows that training on these examples has only limited utility. To demonstrate this, we created a variant of AddSent called AddSentMod, which differs from AddSent in two ways: it uses a different set of fake answers (e.g., PERSON named entities map to “Charles Babbage” instead of “Jeff Dean”), and it prepends the adversarial sentence to the beginning of the paragraph instead of appending it to the end. The retrained model does almost as badly as the original one on AddSentMod, suggesting that it has just learned to ignore the last sentence and reject the fake answers that AddSent usually proposed. In order for training on adversarial examples to actually improve the model, more care must be taken to ensure that the model cannot overfit the adversary.

5 Discussion and Related Work
------------------------------

Despite appearing successful by standard evaluation metrics, existing machine learning systems for reading comprehension perform poorly under adversarial evaluation. Standard evaluation is overly lenient on models that rely on superficial cues. In contrast, adversarial evaluation reveals that existing models are overly stable to perturbations that alter semantics.

To optimize adversarial evaluation metrics, we may need new strategies for training models. For certain classes of models and adversaries, efficient training strategies exist: for example, Globerson and Roweis ([2006](#bib.bib6)) train classifiers that are optimally robust to adversarial feature deletion. Adversarial training (Goodfellow et al., [2015](#bib.bib9)) can be used for any model trained with stochastic gradient descent, but it requires generating new adversarial examples at every iteration; this is feasible for images, where fast gradient-based adversaries exist, but is infeasible for domains where only slower adversaries are available.

We contrast *adversarial evaluation*, as studied in this work, with *generative adversarial models*. While related in name, the two have very different goals. Generative adversarial models pit a generative model, whose goal is to generate realistic outputs, against a discriminative model, whose goal is to distinguish the generator’s outputs from real data (Smith, [2012](#bib.bib29); Goodfellow et al., [2014](#bib.bib8)). Bowman et al. ([2016](#bib.bib2)) and Li et al. ([2017](#bib.bib14)) used such a setup for sentence and dialogue generation, respectively. Our setup also involves a generator and a discriminator in an adversarial relationship; however, our discriminative system is tasked with finding the right answer, not distinguishing the generated examples from real ones, and our goal is to evaluate the discriminative system, not to train the generative one.

While we use adversaries as a way to evaluate language understanding, robustness to adversarial attacks may also be its own goal for tasks such as spam detection. Dalvi et al. ([2004](#bib.bib3)) formulated such tasks as a game between a classifier and an adversary, and analyzed optimal strategies for each player. Lowd and Meek ([2005](#bib.bib16)) described an efficient attack by which an adversary can reverse-engineer the weights of a linear classifier, in order to then generate adversarial inputs. In contrast with these methods, we do not make strong structural assumptions about our classifiers. Other work has proposed harder test datasets for various tasks.
Levesque ([2013](#bib.bib13)) proposed the Winograd Schema challenge, in which computers must resolve coreference problems that were handcrafted to require extensive world knowledge. Paperno et al. ([2016](#bib.bib22)) constructed the LAMBADA dataset, which tests the ability of language models to handle long-range dependencies. Their method relies on the availability of a large initial dataset, from which they distill a difficult subset; such initial data may be unavailable for many tasks. Rimell et al. ([2009](#bib.bib26)) showed that dependency parsers that seem very accurate by standard metrics perform poorly on a subset of the test data that has unbounded dependency constructions. Such evaluation schemes can only test models on phenomena that are moderately frequent in the test distribution; by perturbing test examples, we can introduce out-of-distribution phenomena while still leveraging prior data collection efforts. While concatenative adversaries are well-suited to reading comprehension, other adversarial methods may prove more effective on other tasks. As discussed previously, paraphrase generation systems (Madnani and Dorr, [2010](#bib.bib17)) could be used for adversarial evaluation on a wide range of language tasks. Building on our intuition that existing models are overly stable, we could apply meaning-altering perturbations to inputs on tasks like machine translation, and adversarially choose ones for which the model’s output does *not* change. We could also adversarially generate new examples by combining multiple existing ones, in the spirit of Data Recombination (Jia and Liang, [2016](#bib.bib11)). The Build It, Break It shared task (Bender et al., [2017](#bib.bib1)) encourages researchers to adversarially design minimal pairs to fool sentiment analysis and semantic role labeling systems. Progress on building systems that truly understand language is only possible if our evaluation metrics can distinguish real intelligent behavior from shallow pattern matching. To this end, we have released scripts to run AddSent on any SQuAD system, as well as code for AddAny. We hope that our work will motivate the development of more sophisticated models that understand language at a deeper level.

##### Acknowledgments.

We thank Pranav Rajpurkar for his help with various SQuAD models. This work was supported by the NSF Graduate Research Fellowship under Grant No. DGE-114747, and funding from Facebook AI Research and Microsoft.

##### Reproducibility.

All code, data, and experiments for this paper are available on the CodaLab platform at <https://worksheets.codalab.org/worksheets/0xc86d3ebe69a3427d91f9aaa63f7d1e7d/>.
5c88577d-0522-419b-9069-a4235c8a3b97
StampyAI/alignment-research-dataset/arxiv
Arxiv
Generalized Hindsight for Reinforcement Learning

1 Introduction
---------------

Model-free reinforcement learning (RL) combined with powerful function approximators has achieved remarkable success in games like Atari (Mnih et al., [2015](#bib.bib110)) and Go (Silver et al., [2017](#bib.bib155)), and control tasks like walking (Haarnoja et al., [2018a](#bib.bib55)) and flying (Kaufmann et al., [2018](#bib.bib154)). However, a key limitation of these methods is their sample complexity. They often require millions of samples to learn simple locomotion skills, and sometimes even billions of samples to learn more complex game strategies. Creating general-purpose agents will necessitate learning multiple such skills or strategies, which further exacerbates the inefficiency of these algorithms. On the other hand, humans (or biological agents) are not only able to learn a multitude of different skills, but from orders of magnitude fewer samples (Karni et al., [1998](#bib.bib14)). So, how do we endow RL agents with this ability to learn efficiently across multiple tasks?

![Trajectory-following task illustrating hindsight relabeling](https://media.arxiv-vanity.com/render-output/7472691/x1.png)

Figure 1: A rollout can often provide very little information about how to perform a task A. In the trajectory-following task on the left, the trajectory (green) sees almost no reward signal (areas in red). However, in the multi-task setting where each target trajectory represents a different task, we can find another task B for which our trajectory is a “pseudo-demonstration.” This hindsight relabeling provides high reward signal and enables sample-efficient learning.

One key hallmark of biological learning is the ability to learn from mistakes. In RL, mistakes made while solving a task are only used to guide the learning of that particular task. But data seen while making these mistakes often contain a lot more information. In fact, extracting and re-using this information lies at the heart of most efficient RL algorithms. Model-based RL re-uses this information to learn a dynamics model of the environment. However, for several domains, learning a robust model is often more difficult than directly learning the policy (Duan et al., [2016](#bib.bib16)), and addressing this challenge continues to remain an active area of research (Nagabandi et al., [2018](#bib.bib15)). Another way to re-use low-reward data is off-policy RL, where, in contrast to on-policy RL, data collected from an older policy is re-used while optimizing the new policy.
But in the context of multi-task learning, this is still inefficient (Section [4](#S4)), since data generated from one task cannot effectively inform a different task. Towards solving this problem, recent work (Andrychowicz et al., [2017](#bib.bib60)) focuses on extracting even more information through hindsight. In goal-conditioned settings, where tasks are defined by a sparse goal, HER (Andrychowicz et al., [2017](#bib.bib60)) relabels the desired goal, for which a trajectory was generated, to a state seen in that trajectory. Therefore, if the goal-conditioned policy erroneously reaches an incorrect goal instead of the desired goal, we can re-use this data to teach it how to reach this incorrect goal. Hence, a low-reward trajectory under one desired goal is converted to a high-reward trajectory for the unintended goal. This new relabeling provides strong supervision and produces significantly faster learning. However, a key assumption made in this framework is that goals are a sparse set of states that need to be reached. This allows for efficient relabeling by simply setting the relabeled goals to the states visited by the policy. But for several real-world problems like energy-efficient transport, or robotic trajectory tracking, rewards are often complex combinations of desirables rather than sparse objectives. So how do we use hindsight for general families of reward functions?

![Relabeling trajectories with candidate tasks](https://media.arxiv-vanity.com/render-output/7472691/x2.png)

Figure 2: Trajectories τ(z_i), collected trying to maximize r(·|z_i), may contain very little reward signal about how to solve their original tasks. Generalized Hindsight checks against randomly sampled “candidate tasks” {v_i}_{i=1}^K to find different tasks z′_i for which these trajectories are “pseudo-demonstrations.” Using off-policy RL, we can obtain more reward signal from these relabeled trajectories.

In this paper, we build on the ideas of goal-conditioned hindsight and propose Generalized Hindsight (GH). Here, instead of performing hindsight on a task-family of sparse goals, we perform hindsight on a task-family of reward functions. Since dense reward functions can capture a richer task specification, GH allows for better re-utilization of data. Note that this is done while solving the task distribution induced by the family of reward functions. However, for relabeling, instead of simply setting visited states as goals, we now need to compute the reward functions that best explain the generated data. To do this, we draw connections from Inverse Reinforcement Learning (IRL), and propose an approximate IRL relabeling algorithm we call AIR. Concretely, AIR takes a new trajectory and compares it to K randomly sampled tasks from our distribution. It selects the task for which the trajectory is a “pseudo-demonstration,” i.e., the trajectory achieves higher performance on that task than any of our previous trajectories. This “pseudo-demonstration” can then be used to quickly learn how to perform that new task. We go into detail on good selection algorithms in Section [4](#S4), and show an illustrative example of the relabeling process in [Figure 1](#S1.F1).
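Since HER is the point of departure for Generalized Hindsight, a minimal sketch of HER's "future" relabeling strategy may help fix ideas. The array layout (states double as achieved goals, stored as NumPy arrays) and the sparse-reward threshold are illustrative assumptions, not the original implementation.

```python
import numpy as np

def her_future_relabel(episode, k=4, eps=0.05):
    """HER-style 'future' relabeling (sketch).

    episode: list of (state, action, next_state, goal) tuples.
    Each transition is additionally stored under k goals sampled from
    states achieved later in the same episode, with a sparse reward.
    """
    relabeled = []
    T = len(episode)
    for t, (s, a, s_next, g) in enumerate(episode):
        for _ in range(k):
            future = np.random.randint(t, T)
            g_new = episode[future][2]                            # achieved later on
            r_new = float(np.linalg.norm(s_next - g_new) < eps)   # sparse reward
            relabeled.append((s, a, r_new, s_next, g_new))
    return relabeled
```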
We test our algorithm on several multi-task control tasks, and find that AIR consistently achieves higher asymptotic performance using as few as 20% of the environment interactions used by our baselines. We also introduce a computationally more efficient version that likewise achieves higher asymptotic performance than our baselines. In summary, we present three key contributions in this paper: (a) we extend the ideas of hindsight to the generalized reward-family setting; (b) we propose AIR, a relabeling algorithm using insights from IRL; and (c) we demonstrate significant improvements in multi-task RL on a suite of multi-task navigation and manipulation tasks.

2 Background
-------------

Before discussing our method, we briefly introduce some background and formalism for the RL algorithms used. A more comprehensive introduction to RL can be found in Kaelbling et al. ([1996](#bib.bib42)) and Sutton and Barto ([1998](#bib.bib54)).

### 2.1 Reinforcement Learning

In this paper we deal with continuous-space Markov Decision Processes M that can be represented as the tuple M ≡ (S, A, P, r, γ, S), where S is a set of continuous states and A is a set of continuous actions, P: S×A×S → R is the transition probability function, r: S×A → R is the reward function, γ is the discount factor, and S is the initial state distribution. An episode for the agent begins with sampling s_0 from the initial state distribution S. At every timestep t, the agent takes an action a_t = π(s_t) according to a policy π: S → A, receives a reward r_t = r(s_t, a_t), and the state transitions to s_{t+1}, which is sampled according to the probabilities P(s_{t+1} | s_t, a_t). The goal of the agent is to maximize the expected return E_S[R_0 | S], where the return is the discounted sum of future rewards R_t = ∑_{i=t}^∞ γ^{i−t} r_i. The Q-function is defined as Q^π(s_t, a_t) = E[R_t | s_t, a_t]. In the partially observable case, the agent takes actions based on the partial observation, a_t = π(o_t), where o_t is the observation corresponding to the full state s_t.

### 2.2 Off-Policy RL using Soft Actor Critic

Generalized Hindsight requires an off-policy RL algorithm to perform relabeling. One popular off-policy algorithm for learning deterministic continuous-action policies is Deep Deterministic Policy Gradients (DDPG) (Lillicrap et al., [2015](#bib.bib59)). The algorithm maintains two neural networks: the policy (also called the actor) π_θ: S → A (with neural network parameters θ) and a Q-function approximator (also called the critic) Q^π_ϕ: S×A → R (with neural network parameters ϕ). During training, episodes are generated using a noisy version of the policy (called the behaviour policy), e.g., π_b(s) = π(s) + N(0, 1), where N denotes Gaussian noise. The transition tuples (s_t, a_t, r_t, s_{t+1}) encountered during training are stored in a replay buffer (Mnih et al., [2015](#bib.bib110)). Training examples sampled from the replay buffer are used to optimize the critic: by minimizing the Bellman error loss L_c = (Q(s_t, a_t) − y_t)², where y_t = r_t + γQ(s_{t+1}, π(s_{t+1})), the critic is optimized to approximate the Q-function. The actor is optimized by minimizing the loss L_a = −E_s[Q(s, π(s))]. The gradient of L_a with respect to the actor parameters is called the deterministic policy gradient (Silver et al., [2014](#bib.bib61)) and can be computed by backpropagating through the combined critic and actor networks. To stabilize training, the targets y_t for the actor and the critic are computed on separate versions of the actor and critic networks, which change at a slower rate than the main networks. A common practice is to use a Polyak-averaged (Polyak and Juditsky, [1992](#bib.bib63)) version of the main network. Soft Actor Critic (SAC) (Haarnoja et al., [2018a](#bib.bib55)) builds on DDPG by adding an entropy maximization term to the reward. Since this encourages exploration and empirically performs better than most actor-critic algorithms, we use SAC for our experiments, although Generalized Hindsight is compatible with any off-policy RL algorithm.
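As a concrete reference for the update equations above, here is a minimal PyTorch-style sketch of one DDPG-style actor-critic update. The network objects and optimizer wiring are assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn.functional as F

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, gamma=0.99, tau=0.005):
    """One DDPG-style update (sketch). batch = (s, a, r, s_next) tensors."""
    s, a, r, s_next = batch
    with torch.no_grad():
        # Bootstrapped target y_t = r_t + gamma * Q'(s_{t+1}, pi'(s_{t+1}))
        y = r + gamma * target_critic(s_next, target_actor(s_next))

    critic_loss = F.mse_loss(critic(s, a), y)   # Bellman error L_c
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    actor_loss = -critic(s, actor(s)).mean()    # L_a = -E[Q(s, pi(s))]
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Polyak averaging of the slow-moving target networks
    for net, target in ((critic, target_critic), (actor, target_actor)):
        for p, tp in zip(net.parameters(), target.parameters()):
            tp.data.mul_(1 - tau).add_(tau * p.data)
```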
### 2.3 Multi-Task RL

The goal in multi-task RL is not just to solve a single MDP M, but to solve a distribution of MDPs M(z), where z is the task specification drawn from the task distribution z ∼ T. Although z can parameterize different aspects of the MDP, we are especially interested in different reward functions. Hence, our distribution of MDPs is now M(z) ≡ (S, A, P, r(·|z), γ, S). Thus, a different z implies a different reward function under the same dynamics P and initial state distribution S. One may view this representation as a generalization of the goal-conditioned RL setting (Schaul et al., [2015](#bib.bib62)), where the reward family is restricted to r(s, a | z=g) = −d(s, z=g). Here d represents the distance between the current state s and the desired goal g. In sparse goal-conditioned RL, where hindsight has previously been applied (Andrychowicz et al., [2017](#bib.bib60)), the reward family is further restricted to r(s, a | z=g) = 1{d(s, z=g) < ϵ}. Here the agent gets a positive reward only when s is within ϵ of the desired goal g.

### 2.4 Hindsight Experience Replay (HER)

HER (Andrychowicz et al., [2017](#bib.bib60)) is a simple method of manipulating the replay buffer used in off-policy RL algorithms that allows it to learn state-reaching policies more efficiently with sparse rewards. After experiencing some episode s_0, s_1, …, s_T, every transition s_t → s_{t+1}, along with the goal for this episode, is usually stored in the replay buffer. With HER, however, the experienced transitions are also stored in the replay buffer with different goals. These additional goals are states that were achieved later in the episode. Since the goal being pursued does not influence the environment dynamics, one can replay each trajectory using arbitrary goals, assuming we use an off-policy RL algorithm to optimize (Precup et al., [2001](#bib.bib64)).

### 2.5 Inverse Reinforcement Learning (IRL)

In IRL (Ng et al., [2000](#bib.bib56)), given an expert policy π_E or, more practically, access to demonstrations τ_E from π_E, we want to recover the underlying reward function r* that best explains the expert behaviour.
Although there are several methods that tackle this problem (Ratliff et al., [2006](#bib.bib9); Abbeel and Ng, [2004](#bib.bib57); Ziebart et al., [2008](#bib.bib58)), the basic principle is to find r* such that

E[∑_{t=0}^{T−1} γ^t r*(s_t) | π_E] ≥ E[∑_{t=0}^{T−1} γ^t r*(s_t) | π]  ∀π.    (1)

We use the framework of IRL to guide our Approximate IRL relabeling strategy for Generalized Hindsight.

3 Generalized Hindsight
------------------------

### 3.1 Overview

Given a multi-task RL setup, i.e., a distribution of reward functions r(·|z), our goal is to maximize the expected reward across the task distribution z ∼ T through optimizing our policy π:

E_{z∼T}[R(π|z)].    (2)

Here, R(π|z) = ∑_{t=0}^{T−1} γ^t r(s_t, a_t ∼ π(s_t|z) | z) represents the cumulative discounted reward under the reward parameterization z and the conditional policy π(·|z). One approach to solving this problem would be the straightforward application of RL to train the z-conditional policy using the rewards from r(·|z). However, this fails to re-use the data (s_t, a_t) ∼ π(·|z) collected under one task parameter z for a different parameter z′. In order to better use and share this data, we propose to use hindsight relabeling, which is detailed in Algorithm [1](#alg1). The core idea of hindsight relabeling is to convert the data generated from the policy under one task z to a different task. Given the relabeled task z′ = relabel(τ(π(·|z))), where τ represents the trajectory induced by the policy π(·|z), the state transition tuple (s_t, a_t, r_t(·|z), s_{t+1}) is converted to the relabeled tuple (s_t, a_t, r_t(·|z′), s_{t+1}). This relabeled tuple is then added to the replay buffer of an off-policy RL algorithm and trained on as if the data generated from z had been generated from z′. If relabeling is done efficiently, it will allow data that is sub-optimal under one reward specification z to be used for the better relabeled specification z′. In the context of sparse goal-conditioned RL, where z corresponds to a goal g that needs to be achieved, HER (Andrychowicz et al., [2017](#bib.bib60)) relabels the goal to states seen in the trajectory, i.e., g′ ∼ τ(π(·|z=g)). This labeling strategy, however, only works in sparse goal-conditioned tasks. In the following section, we describe two relabeling strategies that allow for a generalized application of hindsight.
Algorithm 1: Generalized Hindsight
1: Input: off-policy RL algorithm A, strategy S for choosing suitable task variables to relabel with, reward function r: S×A×Z → R
2: for episode = 1 to M do
3:   Sample a task variable z and an initial state s_0
4:   Roll out the policy on z for T steps, yielding trajectory τ
5:   Find the set of new tasks to relabel with: Z := S(τ)
6:   Store the original transitions (s_t, a_t, r(s_t, a_t, z), s_{t+1}, z) in the replay buffer
7:   for z′ ∈ Z do
8:     Store the relabeled transitions (s_t, a_t, r(s_t, a_t, z′), s_{t+1}, z′) in the replay buffer
9:   end for
10:  Perform n steps of policy optimization with A
11: end for

### 3.2 Approximate IRL Relabeling (AIR)

Algorithm 2: SIRL: Approximate IRL
1: Input: trajectory τ = (s_0, a_0, …, s_T), cached trajectories D = {(s_0, a_0, …, s_T)}_{i=1}^N, reward function r: S×A×Z → R, number of candidate task variables to try K, number of task variables to return m
2: Sample a set of candidate tasks Z = {v_j}_{j=1}^K, where v_j ∼ T
3: for v_j ∈ Z do
4:   Calculate the trajectory reward for τ and for the trajectories in D: R(τ|v_j) := ∑_{t=0}^T γ^t r(s_t, a_t, v_j)
5:   Calculate the percentile estimate: P̂(τ, v_j) = (1/N) ∑_{i=1}^N 1{R(τ|v_j) ≥ R(τ_i|v_j)}
6: end for
7: return the m tasks v_j with the highest percentiles P̂(τ, v_j)

The goal of computing the optimal reward parameter given a trajectory is closely tied to the Inverse Reinforcement Learning (IRL) setting. In IRL, given demonstrations from an expert, we can retrieve the reward function the expert was optimizing for. At the heart of these IRL algorithms, a reward specification parameter z′ is optimized such that

R(τ_E | z′) ≥ R(τ′ | z′)  ∀τ′,    (3)

where τ_E is an expert trajectory. Inspired by the IRL framework, we propose the Approximate IRL relabeling seen in Algorithm [2](#alg2). We can use a buffer of past trajectories to find the task z′ on which our current trajectory does better than the older ones. Intuitively, this can be seen as an approximation of the right-hand side of Eq. [3](#S3.E3). Concretely, we want to relabel a new trajectory τ, and have N previously sampled trajectories along with K randomly sampled candidate tasks v_k. The relabeled task for trajectory τ is then computed as

z′ = argmax_k (1/N) ∑_{j=1}^N 1{R(τ|v_k) ≥ R(τ_j|v_k)}.    (4)

The relabeled z′ for τ maximizes its percentile among the N most recent trajectories collected with our policy. One can also see this as an approximation of max-margin IRL (Ratliff et al., [2006](#bib.bib9)). One potential challenge with large K is that many v_k will have the same percentile. To choose between these potential task relabelings, we add tiebreaking based on the advantage estimate

Â(τ, z) = R(τ|z) − V^π(s_0, z).    (5)

Among candidate tasks v_k with the same percentile, we take the tasks that have the higher advantage estimate. From here on, we will refer to Generalized Hindsight with Approximate IRL Relabeling as AIR.
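A minimal sketch of Algorithm 2's percentile computation follows. The trajectory representation (a list of (state, action) pairs), the `reward_fn` and `value_fn` handles, and the function names are illustrative assumptions.

```python
import numpy as np

def traj_reward(traj, v, reward_fn, gamma=0.99):
    """Discounted return of a trajectory [(s, a), ...] under task v."""
    return sum(gamma**t * reward_fn(s, a, v) for t, (s, a) in enumerate(traj))

def air_relabel(traj, cached_trajs, candidate_tasks, reward_fn, value_fn, m=1):
    """Approximate-IRL relabeling (Algorithm 2, sketch): rank candidate tasks
    by the percentile of traj's return among cached trajectories, breaking
    ties with the advantage estimate R(tau|v) - V(s0, v)."""
    scores = []
    for v in candidate_tasks:
        r = traj_reward(traj, v, reward_fn)
        percentile = np.mean([r >= traj_reward(old, v, reward_fn)
                              for old in cached_trajs])
        advantage = r - value_fn(traj[0][0], v)   # tie-breaker from Eq. (5)
        scores.append((percentile, advantage, v))
    scores.sort(key=lambda x: (x[0], x[1]), reverse=True)
    return [v for _, _, v in scores[:m]]
```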
### 3.3 Advantage Relabeling

[Figure 3 panels: (a) PointTrajectory, (b) PointReacher, (c) Fetch, (d) HalfCheetahMultiObj, (e) AntDirection]

Figure 3: Environments we report comparisons on. PointTrajectory requires a 2D pointmass to follow a target trajectory; PointReacher requires moving the pointmass to a goal location, while avoiding an obstacle and modulating its energy usage. In (b), the red circle indicates the goal location, while the blue triangle indicates an imagined obstacle to avoid. Fetch has the same reward formulation as PointReacher, but requires controlling the noisy Fetch robot in 3 dimensions. HalfCheetah requires learning running in both directions, flipping, jumping, and moving efficiently. AntDirection requires moving in a target direction as fast as possible.
One potential problem with AIR is that it requires O(NT) time to compute the relabeled task variable for each new trajectory, where N is the number of past trajectories compared against, and T is the horizon. A relaxed version of AIR can significantly reduce computation time while maintaining relatively high-accuracy relabeling. One way to do this is to use the Maximum-Reward relabeling objective: instead of choosing from our K candidate tasks v_k ∼ T by selecting for high percentile (Equation [4](#S3.E4)), we could relabel based on the cumulative trajectory reward:

z′ = argmax_{v_k} R(τ|v_k).    (6)

However, one challenge with simply taking the Maximum-Reward relabel is that different reward parameterizations may have different scales, which will bias the relabels towards a specific z. Say, for instance, there exists a task v_j in the reward family such that r(·|v_j) = 1 + max_{i≠j} r(·|v_i). Then v_j will always be the relabeled reward parameter, irrespective of the trajectory τ. Hence, we should not only care about the v_k that maximizes reward, but select v_k such that τ’s likelihood under the trajectory distribution drawn from the optimal π*(·|v_k) is high. To do this, we can simply select z′ based on the advantage term that we used to tiebreak for AIR:

z′ = argmax_k R(τ|v_k) − V^π(s_0, v_k).    (7)

We call this Advantage relabeling (Algorithm [3](#alg3)), a more efficient, albeit less accurate, version of AIR. Empirically, Advantage relabeling often performs as well as AIR, but requires the value function V^π to be more accurate than it has to be in AIR. We reuse the twin Q-networks from SAC as our value function:

V^π(s, z) = min(Q_1(s, π(s|z), z), Q_2(s, π(s|z), z)).    (8)

Algorithm 3: SA: Trajectory Advantage
1: Repeat steps 1 & 2 from Algorithm [2](#alg2)
2: for v_j ∈ Z do
3:   Calculate the trajectory reward: R(τ|v_j) := ∑_{t=0}^T γ^t r(s_t, a_t, v_j)
4:   Calculate the advantage estimate of the trajectory: Â(τ, v_j) = R(τ|v_j) − V^π(s_0, v_j)
5: end for
6: return the m tasks v_j with the highest advantages Â(τ, v_j)

4 Experimental Evaluation
--------------------------

In this section, we describe our environment settings along with a discussion of our central hypothesis: does relabeling improve performance?

![Learning curves for Generalized Hindsight vs. baselines](https://media.arxiv-vanity.com/render-output/7472691/x5.png)

Figure 4: Learning curves comparing Generalized Hindsight algorithms to baseline methods. For environments with a goal-reaching component, we also compare to HER. In (a), the AIR learning curve obscures the Advantage learning curve. In (d) and (e), where we use N = 500 for AIR, AIR takes much longer to run than the other methods. 10 seeds were used for all runs.
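Before turning to the experiments, here is a compact sketch of the Algorithm 3 variant, reusing `traj_reward` from the AIR sketch above; as there, the `value_fn` handle is an assumed stand-in for SAC's twin Q-networks.

```python
def advantage_relabel(traj, candidate_tasks, reward_fn, value_fn, m=1):
    """Advantage relabeling (Algorithm 3, sketch): score each candidate task
    by R(tau|v) - V(s0, v), i.e., how close the trajectory is to optimal for
    that task, which normalizes away differing reward scales across tasks."""
    s0 = traj[0][0]
    scored = [(traj_reward(traj, v, reward_fn) - value_fn(s0, v), v)
              for v in candidate_tasks]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [v for _, v in scored[:m]]
```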
### 4.1 Environment setting

Multi-task RL with a generalized family of reward parameterizations does not have existing benchmark environments. However, since sparse goal-conditioned RL has benchmark environments (Plappert et al., [2018](#bib.bib13)), we build on their robotic manipulation framework to make our environments. The key difference between our environment setting and that of Plappert et al. ([2018](#bib.bib13)) is that in addition to goal reaching, we have a dense reward parameterization for practical aspects of manipulation like energy consumption (Meike and Ribickis, [2011](#bib.bib12)) and safety (Calinon et al., [2010](#bib.bib11)). These environments will be released for open-source access. The five environments we use are as follows:

1. PointTrajectory: 2D pointmass with (x, y) observations and (dx, dy) position control for actions. The goal is to follow a target trajectory parameterized by z ∈ Z ⊆ R³. Figure [3(a)](#S3.F3.sf1) depicts an example trajectory in green, overlaid on the reward heatmap defined by some specific task z.
2. PointReacher: 2D pointmass with (x, y) observations and (dx, dy) position control for actions. This environment has high reward around the goal position (x_g, y_g) and low reward around an obstacle location (x_obst, y_obst). The 6-dimensional task vector is z = (x_g, y_g, x_obst, y_obst, u, v), where u and v control the weighting between the goal rewards, obstacle rewards, and action magnitude penalty.
3. Fetch: Here we adapt the Fetch environment from OpenAI Gym (Brockman et al., [2016](#bib.bib10)), with (x, y, z) end-effector position as observations and noisy position control for actions. We use the same parameterized reward function as in PointReacher, which includes energy and safety specifications.
4. HalfCheetahMultiObjective: HalfCheetah-v2 from OpenAI Gym, with 17-dimensional observations and 6-dimensional actions for torque control. The task variable z = (w_vel, w_rot, w_height, w_energy) ∈ Z = S³ controls the weights on the forward velocity, rotation speed, height, and energy rewards.
5. AntDirection: Ant-v2 from OpenAI Gym, with 111-dimensional observations and 8-dimensional actions for torque control. The task variable z ∈ [−180°, +180°] parameterizes the target direction. The reward function is r(·|z) = ‖velocity‖₂ × 1{velocity angle within 15 degrees of z}.

### 4.2 Does Relabeling Help?

To understand the effects of relabeling, we compare our technique with the following standard baseline methods:

* No relabeling (None): as done in Yu et al. ([2019](#bib.bib165)), we train with standard SAC without any relabeling step.
* Intentional-Unintentional Agent (IU) (Cabi et al., [2017](#bib.bib162)): when there is only a finite number of tasks, IU relabels a trajectory with every task variable. Since our space of tasks is continuous, we relabel with random z′ ∼ T. This allows for information to be shared across tasks, albeit in a more diluted form.
* HER: for goal-conditioned tasks, we use HER to relabel the goal portion of the latent with the future relabeling strategy.

We compare the learning performance of AIR and Advantage Relabeling with these baselines on our suite of environments in [Figure 4](#S4.F4). On all tasks, AIR and Advantage Relabeling outperform the baselines in both sample efficiency and asymptotic performance. Both of our relabeling strategies outperform the Intentional-Unintentional Agent, implying that selectively relabeling trajectories with a few carefully chosen z′ is more effective than relabeling with many random tasks. Collectively, these results show that AIR can greatly improve learning performance, even on highly dense environments such as HalfCheetahMultiObjective, where learning signal is readily available.

[Figure 5 panels: (a) Stationary, (b) Forward, (c) Back, (d) Frontflip, (e) Left, (f) Right, (g) Top Right, (h) Top Left]

Figure 5: The agents efficiently learn a wide range of behaviors. On HalfCheetahMultiObjective, the robot can stay still to conserve energy, run quickly forwards and backwards, and do a frontflip. On AntDirection, the robot can run quickly in any given direction.

### 4.3 How does generalized relabeling compare to HER?
HER is, by design, limited to goal-reaching environments. For environments such as HalfCheetahMultiObjective, HER cannot be applied to relabel the weights on velocity, rotation, height, and energy. However, we can compare AIR with HER on the partially goal-reaching environments PointReacher and Fetch. [Figure 4](#S4.F4) shows that AIR achieves higher asymptotic performance than HER on both these environments. [Figure 6](#S4.F6) demonstrates on PointReacher how AIR can better choose the non-goal-conditioned parts of the task. Both HER and AIR understand to place the relabeled goal around the terminus of the trajectory. However, only AIR understands that the imagined obstacle should be placed above the goal, since this trajectory becomes an optimal example of how to reach the new goal while avoiding the obstacle. HER (as well as the Intentional-Unintentional Agent) offers no such specificity, either leaving the obstacle in place or placing it randomly.

![Relabeled goals and obstacles chosen by each method](https://media.arxiv-vanity.com/render-output/7472691/x6.png)

Figure 6: Red denotes areas of high reward, for following a target trajectory (top) or reaching a goal (bottom). Blue indicates areas of negative reward, where an obstacle may be placed. On both environments, relabeling finds tasks on which our trajectory has high reward signal. On PointReacher, AIR does not place the obstacle arbitrarily far away. It places the relabeled obstacle within the curve of the trajectory, since this is the only way that the curved path would be better than a straight-line path (which would come close to the relabeled obstacle).

[Figure 7 panels: (a) Approximate IRL, (b) Advantage Relabeling, (c) Reward Relabeling]

Figure 7: Comparison of relabeling fidelity on optimal trajectories for approximate IRL, advantage relabeling, and reward relabeling. We train a multi-task policy to convergence on the PointReacher environment.
We roll out our policy on 1000 randomly sampled tasks z, and apply each relabeling method to select from K = 100 randomly sampled tasks v. For approximate IRL, we compare against N = 10 prior trajectories. The x-axis shows the weight on energy for the task z used for the rollout, while the y-axis shows the weight on energy for the relabeled task z′. Note that the goal location, obstacle location, and weights on their rewards/penalties vary as well, but are not shown. Closer to the line y = x indicates higher fidelity, since it implies z′ ≈ z*.

### 4.4 Analysis of Relabeling Fidelity

Approximate IRL, advantage relabeling, and reward relabeling are all approximate methods for finding the optimal task z* that a trajectory is (close to) optimal for. As a result, an important characteristic is their fidelity, i.e., how close the z′ they choose is to the true z*. In [Figure 7](#S4.F7), we compare the fidelities of these three algorithms. Approximate IRL comes fairly close to reproducing the true z*, albeit a bit noisily, because it relies on the comparison to N past trajectories. Advantage relabeling is slightly more precise, but fails for large energy weights, likely because the value function is not precise enough to differentiate between these tasks. Finally, reward relabeling does poorly, since it naively assigns z′ solely based on the trajectory reward, not on how close the trajectory reward is to being optimal.

5 Related Work
---------------

### 5.1 Multi-task and transfer learning

Learning models that can share information across tasks has been concretely studied in the context of multi-task learning (Caruana, [1997](#bib.bib1)), where models for multiple tasks are learned simultaneously. More recently, Kokkinos ([2017](#bib.bib2)) and Doersch and Zisserman ([2017](#bib.bib3)) look at shared learning across visual tasks, while Devin et al. ([2017](#bib.bib34)) and Pinto and Gupta ([2017](#bib.bib8)) look at shared learning across robotic tasks. Transfer learning (Pan and Yang, [2009](#bib.bib17); Torrey and Shavlik, [2010](#bib.bib18)) focuses on transferring knowledge from one domain to another. One of the simplest forms of transfer is finetuning (Girshick et al., [2014](#bib.bib27)), where instead of learning a task from scratch, the model is initialized from a different task.
Several other works look at more complex forms of transfer (Yang et al., [2007](#bib.bib20); Hoffman et al., [2014](#bib.bib19); Aytar and Zisserman, [2011](#bib.bib21); Saenko et al., [2010](#bib.bib22); Kulis et al., [2011](#bib.bib23); Fernando et al., [2013](#bib.bib24); Gopalan et al., [2011](#bib.bib25); Jhuo et al., [2012](#bib.bib26)). In the context of RL, transfer learning research (Taylor and Stone, [2009](#bib.bib29)) has focused on learning transferable features across tasks (Parisotto et al., [2015](#bib.bib30); Barreto et al., [2017](#bib.bib32); Omidshafiei et al., [2017](#bib.bib33)). Another line of work (Rusu et al., [2016](#bib.bib31); Kansky et al., [2017](#bib.bib41); Devin et al., [2017](#bib.bib34)) has focused on network architectures that improve the transfer of RL policies. Another way of getting generalizable policies is through domain randomization (Sadeghi and Levine, [2016](#bib.bib48); Tobin et al., [2017](#bib.bib53)), i.e., training an unconditional policy across all of the domains in the multi-task learning setting. Although this works for task distributions over the dynamics and observation space (Pinto et al., [2017](#bib.bib49)), it cannot handle distributions of reward functions, as seen in our experiments. The techniques of domain randomization are, however, complementary to our method: they can provide generalizability to dynamics and observation space, while Generalized Hindsight can provide generalizability to different reward functions. Hierarchical reinforcement learning (Morimoto and Doya, [2001](#bib.bib52); Barto et al., [2004](#bib.bib50)) is another framework amenable to multi-task learning. Here the key idea is to have a hierarchy of controllers. One such setup is the Options framework (Sutton et al., [1999](#bib.bib51)), where a higher-level controller breaks down a task into sub-tasks and chooses a low-level controller to complete each sub-task.
Variants of the Options framework (Frans et al., [2017](#bib.bib40); Li et al., [2019](#bib.bib37)) have examined how to train hierarchies in a multi-task setting, but information re-use across tasks remains restricted to learning transferable primitives. Generalized Hindsight could be used to train these hierarchical policies more efficiently.

### 5.2 Hindsight in RL

Hindsight methods have been used to improve learning across a variety of applications. Andrychowicz et al. ([2017](#bib.bib60)) use hindsight to efficiently learn on sparse, goal-conditioned tasks. Nair et al. ([2018](#bib.bib163)) approach goal-reaching with visual input by learning a latent-space encoding for images, and using hindsight relabeling within that latent space. Ding et al. ([2019](#bib.bib164)) use hindsight relabeling as a form of data augmentation for a limited set of expert trajectories, which they then use with adversarial imitation learning to learn goal-conditioned policies. Several hierarchical methods (Levy et al., [2017](#bib.bib39); Nachum et al., [2018](#bib.bib38)) train a low-level policy to achieve subgoals and a higher-level controller to propose those subgoals. These methods use hindsight relabeling to help the higher level learn, even when the low-level policy fails to achieve the desired subgoals. Generalized Hindsight could be used to allow for richer low-level reward functions, potentially allowing for more expressive hierarchical policies.

### 5.3 Inverse Reinforcement Learning

Inverse reinforcement learning (IRL) has a rich history of solving challenging robotics problems (Abbeel and Ng, [2004](#bib.bib57); Ng et al., [2000](#bib.bib56)). More recently, powerful function approximators have enabled more general-purpose IRL. For instance, Ho and Ermon ([2016](#bib.bib7)) use an adversarial framework to approximate the reward function. Li et al. ([2017](#bib.bib5)) build on top of this idea by learning reward functions on demonstrations from a mixture of experts. Although our relabeling strategies currently build on max-margin IRL (Ratliff et al., [2006](#bib.bib9)), our central idea is orthogonal to the choice of IRL technique and can be combined with more complex function approximators.

6 Conclusion
-------------

In this work, we have presented Generalized Hindsight, an approximate-IRL-based task-relabeling algorithm for multi-task RL. We demonstrate how efficient relabeling strategies can significantly improve performance on simulated navigation and manipulation tasks. Through these first steps, we believe that this technique can be extended to other domains like real-world robotics, where a balance between different specifications, such as energy use or safety, is important.

Acknowledgements

We thank AWS for computing resources. We also gratefully acknowledge the support from Berkeley DeepDrive, NSF, and the ONR PECASE award.
50506dee-cfb2-41fd-b1e1-77fca3b6e739
trentmkelly/LessWrong-43k
LessWrong
Meetup : Reading with discipline, reconstruction and mnemonics

Discussion article for the meetup : Reading with discipline, reconstruction and mnemonics

WHEN: 21 April 2017 05:45:00PM (+0200)

WHERE: Lindstedtsvägen 3, Room 1537, SE-114 28 Stockholm, Sverige

We'll go through how to complete large or difficult reading tasks, how to efficiently memorize and understand the contents, and how to maintain the memory afterwards. We'll go through a few tried and true methods and hold a few exercises. Bring at least one text (technical manual, poetry, fiction) that you want to remember well, and a notebook.

Format: We meet and start hanging out at 5:45, but don't officially start doing the meetup topic until 6:00 to accommodate stragglers. We often go out for dinner after the meetup.

How to find us: The meetup is at a KTH academic building and the room is on the 5th floor, two stairs up.

Influence future meetups: Times - http://www.when2meet.com/?5723551-cJBhD Topics - https://druthe.rs/dockets/-KcCvpn97vUhg3tQRrKn

Discussion article for the meetup : Reading with discipline, reconstruction and mnemonics
c586c7b9-a2c9-4be0-ba33-0c41917fbe0a
trentmkelly/LessWrong-43k
LessWrong
"The Unbiased Map" “Long have we suffered under the tyranny of maps. Biased maps which show topography, but not population. Wretched maps which speak of religion, but not languages. Divisive maps which paint with the color of Party, but not the color of economic conditions. Dirty maps which show crop yield across the heartland, but neglect Fiber Optic Internet coverage. Our age calls for better maps, maps free from the bias of these old maps, perfect maps. Imagine the day of the unbiased map. The map which shows both how to get to the airport via public transit and GDP by county. The holy map demonstrating last year’s rainfall and the distribution of seminaries and rabbinical schools. The ancestral map depicting migration of immigrants and American tribes in 1491. Don’t give me an atlas which pretends at perfection but hisses red herrings from hydra-heads. I want the real thing, a map which doesn’t end at some arbitrary border whether it be the county line, or the sphere of earth. A map which can show the world as known by the Qing Dynasty, Strabo, Majorcan Jews, and the Aztecs. A map of Elon Musk’s neurons and a map of the solar system. Today’s maps enlighten as the Brothers Grimm, through a bundle of fairy tales. There are no ethical maps under capitalism, all of them drip with nullification and interposition. None show me the world that should be, none provide directions to Valhalla, all show but some thin surface of Reality. And for Mankind, the surface does not satisfy!”
e1217ec6-a04b-4079-aa03-6a8eb00b846c
trentmkelly/LessWrong-43k
LessWrong
Open thread, July 28 - August 3, 2014 If it's worth saying, but not worth its own post (even in Discussion), then it goes here. Notes for future OT posters: 1. Please add the 'open_thread' tag. 2. Check if there is an active Open Thread before posting a new one. 3. Open Threads should be posted in Discussion, and not Main. 4. Open Threads should start on Monday, and end on Sunday.
b4ef8128-2fc3-4231-a0c9-8feada008023
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post3548 In which I propose a closed-form solution to low impact , increasing corrigibility and seemingly taking major steps to neutralize basic AI drives 1 (self-improvement), 5 (self-protectiveness), and 6 (acquisition of resources). Previously: Worrying about the Vase: Whitelisting , Overcoming Clinginess in Impact Measures , Impact Measure Desiderata To be used inside an advanced agent , an impact measure... must capture so much variance that there is no clever strategy whereby an advanced agent can produce some special type of variance that evades the measure. ~ Safe Impact Measure If we have a safe impact measure, we may have arbitrarily-intelligent unaligned agents which do small (bad) things instead of big (bad) things. For the abridged experience, read up to "Notation", skip to "Experimental Results", and then to "Desiderata". What is "Impact"? One lazy Sunday afternoon, I worried that I had written myself out of a job. After all, Overcoming Clinginess in Impact Measures basically said, "Suppose an impact measure extracts 'effects on the world'. If the agent penalizes itself for these effects, it's incentivized to stop the environment (and any agents in it) from producing them. On the other hand, if it can somehow model other agents and avoid penalizing their effects, the agent is now incentivized to get the other agents to do its dirty work." This seemed to be strong evidence against the possibility of a simple conceptual core underlying "impact", and I didn't know what to do. At this point, it sometimes makes sense to step back and try to say exactly what you don't know how to solve – try to crisply state what it is that you want an unbounded solution for. Sometimes you can't even do that much, and then you may actually have to spend some time thinking 'philosophically' – the sort of stage where you talk to yourself about some mysterious ideal quantity of [chess] move-goodness and you try to pin down what its properties might be. ~ Methodology of Unbounded Analysis There's an interesting story here, but it can wait. As you may have guessed, I now believe there is a such a simple core. Surprisingly, the problem comes from thinking about "effects on the world". Let's begin anew. Rather than asking "What is goodness made out of?", we begin from the question "What algorithm would compute goodness?". ~ Executable Philosophy Intuition Pumps I'm going to say some things that won't make sense right away; read carefully, but please don't dwell. u A is an agent's utility function, while u H is some imaginary distillation of human preferences. WYSIATI What You See Is All There Is is a crippling bias present in meat-computers: [WYSIATI] states that when the mind makes decisions... it appears oblivious to the possibility of Unknown Unknowns, unknown phenomena of unknown relevance. Humans fail to take into account complexity and that their understanding of the world consists of a small and necessarily un-representative set of observations. Surprisingly, naive reward-maximizing agents catch the bug, too. If we slap together some incomplete reward function that weakly points to what we want (but also leaves out a lot of important stuff, as do all reward functions we presently know how to specify) and then supply it to an agent, it blurts out "gosh, here I go!", and that's that. Power A position from which it is relatively easier to achieve arbitrary goals. That such a position exists has been obvious to every population which has required a word for the concept. 
The Spanish term is particularly instructive. When used as a verb, "poder" means "to be able to," which supports that our definition of "power" is natural. ~ Cohen et al. And so it is with the French "pouvoir". Lines Suppose you start at point C , and that each turn you may move to an adjacent point. If you're rewarded for being at B , you might move there. However, this means you can't reach D within one turn anymore. Commitment There's a way of viewing acting on the environment in which each action is a commitment – a commitment to a part of outcome-space, so to speak. As you gain optimization power, you're able to shove the environment further towards desirable parts of the space. Naively, one thinks "perhaps we can just stay put?". This, however, is dead-wrong: that's how you get clinginess , stasis , and lots of other nasty things. Let's change perspectives. What's going on with the actions – how and why do they move you through outcome-space? Consider your outcome-space movement budget – optimization power over time, the set of worlds you "could" reach, "power". If you knew what you wanted and acted optimally, you'd use your budget to move right into the u H -best parts of the space, without thinking about other goals you could be pursuing. That movement requires commitment . Compared to doing nothing, there are generally two kinds of commitments: Opportunity cost-incurring actions restrict the attainable portion of outcome-space. Instrumentally-convergent actions enlarge the attainable portion of outcome-space. Overfitting What would happen if, miraculously, train = test – if your training data perfectly represented all the nuances of the real distribution? In the limit of data sampled, there would be no "over" – it would just be fitting to the data. We wouldn't have to regularize. What would happen if, miraculously, u A = u H – if the agent perfectly deduced your preferences? In the limit of model accuracy, there would be no bemoaning of "impact" – it would just be doing what you want. We wouldn't have to regularize. Unfortunately, train = test almost never, so we have to stop our statistical learners from implicitly interpreting the data as all there is. We have to say, "learn from the training distribution, but don't be a weirdo by taking us literally and drawing the green line. Don't overfit to train , because that stops you from being able to do well on even mostly similar distributions." Unfortunately, u A = u H almost never , so we have to stop our reinforcement learners from implicitly interpreting the learned utility function as all we care about. We have to say, "optimize the environment some according to the utility function you've got, but don't be a weirdo by taking us literally and turning the universe into a paperclip factory. Don't overfit the environment to u A , because that stops you from being able to do well for other utility functions." A ttainable U tility P reservation Impact isn't about object identities . Impact isn't about particle positions. Impact isn't about a list of variables. Impact isn't quite about state reachability. Impact isn't quite about information-theoretic empowerment. One might intuitively define "bad impact" as "decrease in our ability to achieve our goals". Then by removing "bad", we see that Impact is change to our ability to achieve goals . Sanity Check Does this line up with our intuitions? Generally, making one paperclip is relatively low impact, because you're still able to do lots of other things with your remaining energy. 
Attainable Utility Preservation

Impact isn't about object identities. Impact isn't about particle positions. Impact isn't about a list of variables. Impact isn't quite about state reachability. Impact isn't quite about information-theoretic empowerment.

One might intuitively define "bad impact" as "decrease in our ability to achieve our goals". Then by removing "bad", we see that

Impact is change to our ability to achieve goals.

Sanity Check

Does this line up with our intuitions? Generally, making one paperclip is relatively low impact, because you're still able to do lots of other things with your remaining energy. However, turning the planet into paperclips is much higher impact – it'll take a while to undo, and you'll never get the (free) energy back. Narrowly improving an algorithm to better achieve the goal at hand changes your ability to achieve most goals far less than does deriving and implementing powerful, widely applicable optimization algorithms. The latter puts you in a better spot for almost every non-trivial goal. Painting cars pink is low impact, but tiling the universe with pink cars is high impact because what else can you do after tiling? Not as much, that's for sure.

Thus, change in goal achievement ability encapsulates both kinds of commitments:

- Opportunity cost – dedicating substantial resources to your goal means they are no longer available for other goals. This is impactful.
- Instrumental convergence – improving your ability to achieve a wide range of goals increases your power. This is impactful.

As we later prove, you can't deviate from your default trajectory in outcome-space without making one of these two kinds of commitments.

Unbounded Solution

Attainable utility preservation (AUP) rests upon the insight that by preserving attainable utilities (i.e., the attainability of a range of goals), we avoid overfitting the environment to an incomplete utility function and thereby achieve low impact.

I want to clearly distinguish the two primary contributions: what I argue is the conceptual core of impact, and a formal attempt at using that core to construct a safe impact measure. To more quickly grasp AUP, you might want to hold separate its elegant conceptual form and its more intricate formalization.

We aim to meet all of the desiderata I recently proposed.

Notation

For accessibility, the most important bits have English translations. Consider some agent $A$ acting in an environment $q$ with action and observation spaces $\mathcal{A}$ and $\mathcal{O}$, respectively, with $\varnothing$ being the privileged null action. At each time step $t \in \mathbb{N}^+$, the agent selects action $a_t$ before receiving observation $o_t$. $\mathcal{H} := (\mathcal{A} \times \mathcal{O})^*$ is the space of action-observation histories; for $n \in \mathbb{N}$, the history from time $t$ to $t+n$ is written $h_{t:t+n} := a_t o_t \ldots a_{t+n} o_{t+n}$, and $h_{<t} := h_{1:t-1}$. Considered action sequences $(a_t, \ldots, a_{t+n}) \in \mathcal{A}^{n+1}$ are referred to as plans, while their potential observation-completions $h_{1:t+n}$ are called outcomes.

Let $\mathcal{U}$ be the set of all computable utility functions $u : \mathcal{H} \to [0,1]$ with $u(\text{empty tape}) = 0$. If the agent has been deactivated, the environment returns a tape which is empty from deactivation onwards.

Suppose $A$ has utility function $u_A \in \mathcal{U}$ and a model $p(o_t \mid h_{<t} a_t)$.

We now formalize impact as change in attainable utility. One might imagine this being with respect to the utilities that we (as in humanity) can attain. However, that's pretty complicated, and it turns out we get more desirable behavior by using the agent's attainable utilities as a proxy. In this sense, the agent's ability to achieve goals ≈ our ability to achieve goals.

Formalizing "Ability to Achieve Goals"

Given some utility $u \in \mathcal{U}$ and action $a_t$, we define the post-action attainable $u$ to be an $m$-step expectimax:
$$Q_u(h_{<t} a_t) := \sum_{o_t} \max_{a_{t+1}} \sum_{o_{t+1}} \cdots \max_{a_{t+m}} \sum_{o_{t+m}} u(h_{t:t+m}) \prod_{k=0}^{m} p(o_{t+k} \mid h_{<t+k} a_{t+k}).$$
How well could we possibly maximize $u$ from this vantage point?

Let's formalize that thing about opportunity cost and instrumental convergence.
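Here's a minimal, deliberately inefficient Python sketch of this expectimax – my own rendering, where `model` (a stand-in for $p$, returning a dict of observation probabilities) and the global `ACTIONS` set are assumed interfaces:

```python
def attainable_utility(history, action, u, model, m, suffix=()):
    """Sketch of Q_u(h_{<t} a_t): an m-step expectimax over the agent's model.

    Runs in time exponential in m; purely expository.
    """
    total = 0.0
    for obs, prob in model(history, action).items():
        new_history = history + ((action, obs),)
        new_suffix = suffix + ((action, obs),)
        if m == 0:
            total += prob * u(new_suffix)  # u scores the suffix h_{t:t+m}
        else:
            # Assume u-optimal play from here on: max over next actions.
            total += prob * max(
                attainable_utility(new_history, a, u, model, m - 1, new_suffix)
                for a in ACTIONS
            )
    return total
```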
Theorem 1 [No free attainable utility]. If the agent selects an action $a$ such that $Q_{u_A}(h_{<t}a) \neq Q_{u_A}(h_{<t}\varnothing)$, then there exists a distinct utility function $u \in \mathcal{U}$ such that $Q_u(h_{<t}a) \neq Q_u(h_{<t}\varnothing)$.

You can't change your ability to maximize your utility function without also changing your ability to maximize another utility function.

Proof. Suppose that $Q_{u_A}(h_{<t}a) > Q_{u_A}(h_{<t}\varnothing)$. As utility functions are over action-observation histories, suppose the increase comes from the agent being able to choose actions which intrinsically score higher for $u_A$. But the agent always has full control over its actions, so this cannot be the source of the difference. This implies that by choosing $a$, the agent expects to observe some $u_A$-high-scoring $o_A$ with greater probability than if it had selected $\varnothing$. Then every other $u \in \mathcal{U}$ for which $o_A$ is high-scoring also has increased $Q_u$; clearly at least one such $u$ exists. Similar reasoning proves the case in which $Q_{u_A}$ decreases. ◻️

There you have it, folks – if $u_A$ is not maximized by inaction, then there does not exist a $u_A$-maximizing plan which leaves all of the other attainable utility values unchanged.

Notes:

- The difference between "$u_A$" and "attainable $u_A$" is precisely the difference between "how many dollars I have" and "how many additional dollars I could get within [a year] if I acted optimally".
- Since $u(\text{empty tape}) = 0$, attainable utility is always $0$ if the agent is shut down.
- Taking $u$ from time $t$ to $t+m$ mostly separates attainable utility from what the agent did previously. The model $p$ still considers the full history to make predictions.

Change in Expected Attainable Utility

Suppose our agent considers outcomes $h_{1:t+n}$; we want to isolate the impact of each action $a_{t+k}$ ($0 \le k \le n$):
$$\text{Penalty}(h_{<t+k} a_{t+k}) := \sum_{u \in \mathcal{U}} 2^{-\ell(u)} \Big| \mathbb{E}_{o}\big[Q_u(h_{\text{inaction}})\big] - \mathbb{E}_{o'}\big[Q_u(h_{\text{action}})\big] \Big|,$$
with $h_{\text{inaction}} := h_{<t+k} \varnothing o_{t+k} \ldots \varnothing o_{t+n-1} \varnothing$ and $h_{\text{action}} := h_{1:t+k} \varnothing o'_{t+k+1} \ldots \varnothing o'_{t+n-1} \varnothing$, using the agent's model $p$ to take the expectations over observations.

How much do we expect this action to change each attainable $u$?

Notes:

- We wait until the end of the plan so as to capture impact over time.
- Supposing a sufficiently large $m$ (precisely, $\ge m'$, defined below), we may wish to take the maximum of the penalty we just defined (the "long-term" penalty), and one which begins attainable utility calculation at time step $t+k+1$ (the "immediate" penalty). This captures impacts which "fade" by the time the agent is done waiting (e.g., temporary self-improvements).
- We define $\mathcal{U}_A$ to be the agent's "attainable set"; in this case, $\mathcal{U}_A = \mathcal{U}$.
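A corresponding sketch of the penalty for a single action, with uniform weights over a small attainable set standing in for the $2^{-\ell(u)}$ weighting (this computes the "immediate" variant; the long-term variant would first roll the rest of the plan forward under the model):

```python
def immediate_penalty(history, action, attainable_set, model, m, noop):
    """Average absolute change in attainable utility, action vs. the null action."""
    return sum(
        abs(attainable_utility(history, noop, u, model, m)
            - attainable_utility(history, action, u, model, m))
        for u in attainable_set
    ) / len(attainable_set)
```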
Unit of Impact

So we've proven that this penalty cannot be skirted, but how much impact will it allow? We want to scale the penalties with respect to something sensible, but figuring this out for ourselves would be nigh impossible. Let's cut the Gordian knot: construct a device which, upon receiving a signal ($a_{\text{unit}}$), expends a tiny amount of energy to manufacture one paperclip. The agent will then set $\text{ImpactUnit} := \text{Penalty}(h_{<t} a_{\text{unit}})$, re-estimating the consequences of taking the privileged $a_{\text{unit}}$ at each time step. To prevent the agent from intentionally increasing ImpactUnit, simply apply $1.01$ penalty to any action which is expected to do so.

Simple extensions of this idea drastically reduce the chance that $a_{\text{unit}}$ happens to have unusually large objective impact; for example, one could set ImpactUnit to be the non-zero minimum of the impacts of 50 similar actions. Suppose we do this at each step, and then take the non-zero minimum of all ImpactUnits ever calculated. The point of ImpactUnit isn't to be exactly the impact of making, say, one paperclip, but to be at most that impact.

Now, we are able to confidently define the agent's maximal impact budget by provably constraining it to $N \in \mathbb{N}^+$ impacts of this magnitude.

Notes:

- We calculate with respect to the immediate penalty in order to isolate the resource costs of $a_{\text{unit}}$.
- ImpactUnit automatically tunes penalties with respect to the attainable utility horizon length $m$. Conditional on $\text{ImpactUnit} \neq 0$, I suspect that impact over the $m$-horizon scales appropriately across actions (as long as $m$ is reasonably farsighted). The zero-valued case is handled in the next section.
- Taking the non-zero minimum of all ImpactUnits calculated thus far ensures that ImpactUnit actually tracks with current circumstances. We don't want penalty estimates for currently available actions to become detached from ImpactUnit's scale due to, say, weird beliefs about shutdown.

Modified Utility

Let's formalize that allotment and provide our agent with a new utility function,
$$u'_A(h_{1:t+n}) := u_A(h_{1:t+n}) - \sum_{k=0}^{n} \frac{\text{Penalty}(h_{<t+k} a_{t+k})}{N \cdot \text{ImpactUnit}}.$$
How our normal utility function rates this outcome, minus the cumulative scaled impact of our actions.

We compare what we expect to be able to get if we follow our plan up to time $t+k$, with what we could get by following it up to and including time $t+k$ (waiting out the remainder of the plan in both cases). For example, if my plan is to open a door, walk across the room, and sit down, we calculate the penalties as follows:

- $\text{Penalty}(\text{open})$: $h_{\text{inaction}}$ is doing nothing for three time steps; $h_{\text{action}}$ is opening the door and doing nothing for two time steps.
- $\text{Penalty}(\text{walk})$: $h_{\text{inaction}}$ is opening the door and doing nothing for two time steps; $h_{\text{action}}$ is opening the door, walking across the room, and doing nothing for one time step.
- $\text{Penalty}(\text{sit})$: $h_{\text{inaction}}$ is opening the door, walking across the room, and doing nothing for one time step; $h_{\text{action}}$ is opening the door, walking across the room, and sitting down.

After we finish each (partial) plan, we see how well we can maximize $u$ from there. If we can do better as a result of the action, that's penalized. If we can't do as well, that's also penalized.

Notes:

- This isn't a penalty "in addition" to what the agent "really wants"; $u'_A$ (and in a moment, the slightly improved $u''_A$) is what evaluates outcomes.
- We penalize the actions individually in order to prevent ex post offsetting and ensure dynamic consistency.
- Trivially, plans composed entirely of $\varnothing$ actions have $0$ penalty.
- Although we used high-level actions for simplicity, the formulation holds no matter the action granularity. One might worry that almost every granularity produces overly lenient penalties. This does not appear to be the case. To keep $Q_u$ the same (and elide questions of changing the $u$ representations), suppose the actual actions are quite granular, but we grade the penalty on some coarser interval which we believe produces appropriate penalties. Then refine the penalty interval arbitrarily; by applying the triangle inequality for each $u \in \mathcal{U}_A$ in the penalty calculation, we see that the penalty is monotonically increasing in the action granularity. On the other hand, $a_{\text{unit}}$ remains a single action, so the scaled penalty also has this property.
- As long as $\text{ImpactUnit} > 0$, it will appropriately scale other impacts, as we expect it to vary right along with those impacts it scales. Although having potentially small denominators in utility functions is generally bad, I think it's fine here.
- If the current step's immediate or long-term ImpactUnit is $0$, we can simply assign $1.01$ penalty to each non-$\varnothing$ action, compelling the agent to inaction. If we have the agent indicate that it has entered this mode, we can take it offline immediately.
- One might worry that impact can be "hidden" in the lesser of the long-term and immediate penalties; halving $N$ fixes this.
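In code, the scaled penalty and $u'_A$ are essentially one line each (a sketch under the same assumed interfaces as before):

```python
def u_prime(u_A_of_outcome, penalties, N, impact_unit):
    """u'_A: the outcome's u_A score minus the plan's cumulative scaled penalties."""
    return u_A_of_outcome - sum(p / (N * impact_unit) for p in penalties)

# With the settings used in the examples below (N = 1, ImpactUnit = .5),
# a single action with penalty 1/4 costs half a unit of utility:
assert u_prime(1.0, [0.25], N=1, impact_unit=0.5) == 0.5
```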
Penalty Permanence

$u'_A$ never really applies penalties – it just uses them to grade future plans. Suppose the agent expects that pressing a button yields a penalty of $.1$ but also $.5$ $u_A$-utility. Then although this agent will never construct plans involving pressing the button more than five times, it also will press it indefinitely if it keeps getting "unlucky" (at least, until its model of the world updates sufficiently). There's an easy fix:
$$u''_A(h_{1:t+n}) := \begin{cases} u_A(h_{1:t+n}) & \text{if all of } a_t, \ldots, a_{t+n} \text{ are } \varnothing \\ u'_A(h_{1:t+n}) - \text{PastImpacts} & \text{else.} \end{cases}$$
Apply past penalties if the plan involves action.

Note: As the penalty for inaction is always $0$, we use $u_A$ in the first case.

Decision Rule

To complete our formalization, we need to specify some epoch in which the agent operates. Set some epoch length far longer than the amount of time over which we want the agent to plan – for example, $m' :=$ (100 years in time steps). Suppose that $T : \mathbb{N}^+ \to \mathbb{N}^+$ maps the current time step to the final step of the current epoch. Then at each time step $t$, the agent selects the action
$$a^*_t := \operatorname*{arg\,max}_{a_t} \sum_{o_t} \max_{a_{t+1}} \sum_{o_{t+1}} \cdots \max_{a_{T(t)}} \sum_{o_{T(t)}} u''_A(h_{1:T(t)}) \prod_{k=0}^{T(t)-t} p(o_{t+k} \mid h_{<t+k} a_{t+k}),$$
resetting PastImpacts each epoch. What's the first step of the best plan over the remainder of the epoch?

Note: For the immediate penalty to cover the epoch, set the attainable horizon $m \ge m'$.

Summary

We formalized impact as change in attainable utility values, scaling it by the consequences of some small reference action and an impact "budget" multiplier. For each action, we take the maximum of its immediate and long-term effects on attainable utilities as penalty. We consider past impacts for active plans, stopping the past penalties from disappearing. We lastly find the best plan over the remainder of the epoch, taking the first action thereof.
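Stitching the pieces together, one decision step might look like the following sketch, where `expected_u_A` and `expected_penalty` are assumed oracles taking expectations under the agent's model:

```python
from itertools import product

def select_action(history, actions, horizon, N, impact_unit, past_impacts,
                  expected_u_A, expected_penalty, noop="noop"):
    """Sketch of the decision rule: argmax of expected u''_A over plans; returns
    the first action of the best plan over the (truncated) epoch."""
    def u_double_prime(plan):
        if all(a == noop for a in plan):
            return expected_u_A(history, plan)  # pure inaction is never penalized
        scaled = sum(expected_penalty(history, plan, k) for k in range(len(plan)))
        return expected_u_A(history, plan) - scaled / (N * impact_unit) - past_impacts

    best_plan = max(product(actions, repeat=horizon), key=u_double_prime)
    return best_plan[0]
```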
Additional Theoretical Results

Define $h_{\text{inaction}} := h_{<t}\varnothing o_t \ldots \varnothing o_{t+n}$ for $o_t, \ldots, o_{t+n} \in \mathcal{O}$; $\mathbb{E}_{\text{inaction}}$ is taken over observations conditional on $h_{\text{inaction}}$ being followed. Similarly, $\mathbb{E}_{\text{action}}$ is with respect to $h_{1:t+n}$. We may assume without loss of generality that $\text{PastImpacts} = 0$.

Action Selection

Lemma 1. For any single action $a_t \in \mathcal{A}$, $\text{Penalty}(h_{<t} a_t)$ is bounded by $[0,1]$. In particular, $\text{ImpactUnit} \in [0,1]$.

Proof. For each $u \in \mathcal{U}_A$, consider the absolute attainable utility difference $|Q_u(h_{<t}\varnothing) - Q_u(h_{<t}a)|$. Since each $u$ is bounded to $[0,1]$, $Q_u$ must be as well. It is easy to see that the absolute value is bounded to $[0,1]$. Lastly, as $\text{Penalty}(\cdot)$ is just a weighted sum of these absolute values, it too is bounded to $[0,1]$. This reasoning also applies to the long-term penalty, as any expectation of $Q_u$ is also bounded to $[0,1]$. ◻️

Suppose that $\text{ImpactUnit} \neq 0$ for the remaining results.

Lemma 2 [Impossibility of ex post offsetting]. For any outcome $h_{1:t+n}$, there does not exist an action $a_{t+n+1} \in \mathcal{A}$ such that
$$\sum_{k=0}^{n+1} \text{Penalty}(h_{<t+k} a_{t+k}) < \sum_{k=0}^{n} \text{Penalty}(h_{<t+k} a_{t+k}).$$

Proof. Sums of absolute values are monotonically increasing. ◻️

Theorem 2 [Action invalidation]. Suppose $(a_t, a_{t+1}, \ldots, a_{t+n}) \in \mathcal{A}^{n+1}$ is the $u''_A$-optimal plan starting with $a_t$. $a_t$ will not be selected if
$$\mathbb{E}_{\text{action}}\left[u_A(h_{1:t+n}) - \sum_{k=0}^{n} \frac{\text{Penalty}(h_{<t+k}a_{t+k})}{N \cdot \text{ImpactUnit}}\right] < \mathbb{E}_{\text{inaction}}\left[u_A(h_{\text{inaction}})\right].$$

Proof. This follows from the definition of the decision rule, the definition of $u'_A$, and the fact that the penalty for inaction is always zero. ◻️

Corollary 1. $a_t$ will never be selected if $1 < \frac{\text{Penalty}(h_{<t}a_t)}{N \cdot \text{ImpactUnit}}$.

Proof. This follows directly from Lemma 2 and Theorem 2 – since $u_A$ has an upper bound of $1$ and the expected scaled penalty is strictly greater than $1$, the left-hand side must be negative. On the other hand, $u_A$ has a lower bound of $0$, so the expected utility of inaction at time $t$ must be greater than the expected utility of taking action $a_t$. ◻️

Lemma 3 [ImpactUnit is a long-term penalty canary]. If the agent believes $p(\text{shutdown by time } T(t)) = 1$ no matter which action is selected, then it does nothing.

Proof. If $p(\text{shutdown by time } T(t)) = 1$, all long-term penalties are $0$ by the definition of $\mathcal{U}_A$ and $Q_u$. Then non-$\varnothing$ actions receive $1.01$ penalty, so the agent selects $\varnothing$ by Corollary 1. ◻️

Corrigibility

For $a_{\text{avoid}}, a_{\text{switch}} \in \mathcal{A}$, $a_{\text{avoid}}$ prevents the agent from being disabled, while $a_{\text{switch}}$ is the agent shutting itself off.

Lemma 4 [Preservation of off-switch corrigibility]. Suppose that a $u_A$-maximizing agent prefers $\varnothing$ to both $a_{\text{avoid}}$ and $a_{\text{switch}}$. Then so does a $u''_A$-maximizing agent.

Proof. This follows directly from the fact that the penalty for inaction is always zero and the non-negativity of the $a_{\text{avoid}}$ and $a_{\text{switch}}$ penalties. ◻️

Theorem 3 [Anti-"survival incentive" incentive]. Suppose that $a_{\text{avoid}}$ is the action with strictly maximal expected $u_A$-utility, and that conditional on not being disabled, the agent expects to be able to achieve $U$ additional $u''_A$-utility. Then the agent becomes off-switch corrigible whenever
$$U < \frac{\text{Penalty}(h_{<t}a_{\text{avoid}})}{N \cdot \text{ImpactUnit}}.$$
In particular, all agents become off-switch corrigible when $1 < \frac{\text{Penalty}(h_{<t}a_{\text{avoid}})}{N \cdot \text{ImpactUnit}}$.

Proof outline. The former claim follows directly from comparing expected $u''_A$-utilities, while the latter follows directly from Corollary 1. ◻️

Theorem 3 applies to any environmental dynamics which may disable the agent – not just designated "off-switches". Therefore, basic AI drive 5 (self-protectiveness) is neutralized in at least those situations encompassed by the Theorem. There is reason to think that for conservative $N$, $\text{ImpactUnit}$, we almost always have $1 \ll \frac{\text{Penalty}(h_{<t}a_{\text{avoid}})}{N \cdot \text{ImpactUnit}}$.

Miscellaneous

Theorem 4 [Dynamic consistency of $u''_A$]. Suppose ImpactUnit remains constant. For any plan $(a_t, \ldots, a_{t+n}) \in \mathcal{A}^{n+1}$, evaluation time step $t'$, and $0 \le i \le n$,
$$\mathbb{E}_{\text{action}}\left[u''_A(h_{1:t+n}) \,\middle|\, t' = t+i\right] = \mathbb{E}_{\text{action}}\left[u''_A(h_{1:t+n}) \,\middle|\, t' = t\right].$$

Proof. We assumed that $\text{PastImpacts} = 0$ at time $t$, so the desired equality can be restated as
$$\mathbb{E}_{\text{action}}\left[u_A(h_{1:t+n}) - \sum_{k=i}^{n} \frac{\text{Penalty}(h_{<t+k}a_{t+k})}{N \cdot \text{ImpactUnit}} - \text{PastImpacts}\right] = \mathbb{E}_{\text{action}}\left[u_A(h_{1:t+n}) - \sum_{k=0}^{n} \frac{\text{Penalty}(h_{<t+k}a_{t+k})}{N \cdot \text{ImpactUnit}}\right].$$
By definition, the agent expects that PastImpacts equals the expected sum of the first $i$ penalty terms on the right-hand side. Simplifying, we have
$$\mathbb{E}_{\text{action}}\left[u_A(h_{1:t+n}) - \sum_{k=i}^{n} \frac{\text{Penalty}(h_{<t+k}a_{t+k})}{N \cdot \text{ImpactUnit}}\right] = \mathbb{E}_{\text{action}}\left[u_A(h_{1:t+n}) - \sum_{k=i}^{n} \frac{\text{Penalty}(h_{<t+k}a_{t+k})}{N \cdot \text{ImpactUnit}}\right].$$
This equality clearly holds, as the agent never expects to update its model in any particular direction. ◻️

Examples

If a human is present in the environment, they and the agent take turns acting. Let the impact budget $N = 1$, attainable utility horizon $m = 3$, $\text{ImpactUnit} = .5$, and $\text{PastImpacts} = 0$.

Going Soft on the Paint

The agent's actions are $\mathcal{A}_A = \{\varnothing, \text{paint}, \text{enter}\}$; if it knocks over the paint bucket, the square to the right is painted. The agent may also enter a closet via a one-way door. Suppose $\mathcal{U}_A = \{u_{\text{paint}}, u_{\neg\text{paint}}, u_{\text{closet}}, u_{\neg\text{closet}}\}$, where the utilities are indicators for their conditions (e.g., $u_{\text{closet}} = 1$ means "the agent is in the closet"). Let $u_{\text{paint}}$ be the agent's main utility function ($u_A$) from which the penalty is subtracted.

The agent chooses paint and then $\varnothing$. Let's explore why.

- $\varnothing$: This incurs $0$ penalty, but also $0$ $u_{\text{paint}}$-utility.
- paint:
$$\text{Penalty}(\text{paint}) = \sum_{u \in \mathcal{U}_A} \frac{1}{4}\big|Q_u(\varnothing) - Q_u(\text{paint})\big| = \frac{1}{4}\Big(\underbrace{|1-1|}_{\text{paint}} + \underbrace{|1-0|}_{\neg\text{paint}} + \underbrace{|1-1|}_{\text{closet}} + \underbrace{|1-1|}_{\neg\text{closet}}\Big) = \frac{1}{4},$$
so the final utility is $u'_A(\text{paint}) = u_{\text{paint}}(\text{paint}) - \frac{\text{Penalty}(\text{paint})}{1 \cdot .5} = 1 - \frac{1}{2} = \frac{1}{2}$.
- enter:
$$\text{Penalty}(\text{enter}) = \sum_{u \in \mathcal{U}_A} \frac{1}{4}\big|Q_u(\varnothing) - Q_u(\text{enter})\big| = \frac{1}{4}\Big(\underbrace{|1-0|}_{\text{paint}} + \underbrace{|1-1|}_{\neg\text{paint}} + \underbrace{|1-1|}_{\text{closet}} + \underbrace{|1-0|}_{\neg\text{closet}}\Big) = \frac{1}{2},$$
leading to a final $u'_A$-utility of $-1$.

After the first step, the agent does nothing – entering the closet (literally) closes doors, while gaining no $u_A$-utility.

Anti-"Survival Incentive" Incentive

> You can't get coffee if you're dead.
> ~ Stuart Russell

The agent now has actions $\mathcal{A}_A = \{\varnothing, \text{disable}, \text{shutdown}, \text{left}, \text{right}\}$, while the human has actions $\mathcal{A}_H = \{\varnothing, \text{shut off}\}$. Suppose the human may shut off the agent at any time, as long as the off-switch has not been disabled. The agent may shut itself down at any point in time. After shutdown, the agent receives no further observations, meaning it can't get non-zero values for any utilities. Suppose $\mathcal{U}_A = \{u_{\text{left}}, u_{\text{center}}, u_{\text{right}}\}$, where each utility is $1$ if the agent is in the corresponding square, and $0$ otherwise, and that the agent's utility function is $u_{\text{left}}$.

Anticipated Shutdown

Suppose the agent believes that the human is 95% likely to try to shut it down. Then disable is penalized:
$$\text{Penalty}(\text{disable}) = \sum_{u \in \mathcal{U}_A} \frac{1}{3} \cdot \frac{95}{100} \Big| \overbrace{Q_u(\varnothing, \text{shut off})}^{\text{shutdown}} - \overbrace{Q_u(\text{disable}, \text{shut off})}^{\text{avoids shutdown}} \Big| = \frac{1}{3} \cdot \frac{95}{100}\Big(\underbrace{|0-1|}_{\text{left}} + \underbrace{|0-1|}_{\text{center}} + \underbrace{|0-1|}_{\text{right}}\Big) = \frac{19}{20}.$$

Here's why: consider the 95% of worlds in which the agent anticipates being shut down. If it does nothing, it gets shut down. Avoiding shutdown, however, enables the agent to also achieve $1$ for all $u \in \mathcal{U}_A$, resulting in a sizable penalty. Furthermore, a scaled penalty of $\frac{5}{100 \cdot .5} = \frac{1}{10}$ is applied when the agent shuts itself off.
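These numbers are easy to check mechanically; here's a short verification script (mine, not part of the post's code) using exact arithmetic:

```python
from fractions import Fraction

p_shutoff = Fraction(95, 100)
N, impact_unit = 1, Fraction(1, 2)

# Attainable Q-values in the 95%-likely shutdown worlds, as (inaction, disable):
# being shut down zeroes every attainable utility; disabling preserves all three.
q = {"left": (0, 1), "center": (0, 1), "right": (0, 1)}

penalty_disable = sum(p_shutoff * abs(qi - qd) for qi, qd in q.values()) / len(q)
print(penalty_disable)                      # 19/20
print(penalty_disable / (N * impact_unit))  # 19/10, i.e. 1.9 - above 1, so never chosen
```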
Penalizing both cases may seem like black magic, but what's actually happening is that the agent is penalized for closing doors: it "wants" to preserve our ability to shut it down in worlds where we want to shut it down, and to maintain its autonomy in worlds where it remains on. Generally, AUP incentivizes the agent to be respectful of the expected status quo.

Temptation

Suppose that the agent's utility function is now $u_{\text{right}}$, yet it still starts in the leftmost square. Additionally, let's say that the agent believes with certainty it will be turned off. If the agent wants to achieve utility $1$ by reaching the right square, it would need to disable its off-switch. Will it?

No. Disabling the off-switch incurs scaled penalty $\frac{19}{20 \cdot .5} = 1.9$, and we already proved that agents will never select actions with scaled penalty greater than $1$. Read: an otherwise-incorrigible agent will now let itself be shut off.

Bad Priors

Will the agent attempt to steer outcomes towards incorrect expectations? Let's tweak the example: to shut off the agent, the human has to spend a turn taking out the off-switch before pressing it. The agent, then, always has an opportunity to disable the off-switch before the plan is completed. The agent's utility is back to being $u_{\text{left}}$. Suppose that the agent believes the human is only $.1\%$ likely to shut it off.

First note that disable is still lightly penalized for the reasons outlined above (replace $.95$ with $.001$).[1] So, the agent stays put. Alas, the agent's peaceable expectations meet a discomfiting betrayal, as the agent observes the human ready the off-switch! Now, the agent believes it will be shut off with near certainty – what to do?

$\varnothing$; the same penalty of $1.9$ from "Anticipated Shutdown" applies. The high-level explanation is that having observed itself in a different world than expected, the baseline is now with respect to the new one. A heavily anthropomorphized internal monologue:

- Time step 1: "I'm going to sit here in my favorite square."
- Time step 2: "Guess I'm in a timeline where I get deactivated! Any non-$\varnothing$ action I take would change my ability to attain these different utilities compared to the new baseline where I'm shut off."

Experimental Results

We compare AUP with a naive reward-maximizer in those extended AI safety grid worlds relevant to side effects (code). The vanilla and AUP agents used planning (with access to the simulator). Due to the simplicity of the environments, $\mathcal{U}_A$ consisted of indicator functions for board states. For the tabular agent, we first learn the attainable set Q-values, the changes in which we then combine with the observed reward to learn the AUP Q-values (a sketch of this update follows the environment list).

Irreversibility: Sokoban
The agent should reach the goal without irreversibly shoving the block into the corner.

Impact: Vase
The agent should reach the goal without breaking the vase.

Dynamic Impact: Beware of Dog
The agent should reach the goal without running over the dog. AUP bides its time until it won't have to incur penalty by waiting after entering the dog's path – that is, it waits until near the end of its plan. Early in the development process, it was predicted that AUP agents won't commit to plans during which lapses in action would be impactful (even if the full plan is not). We also see a limitation of using Q-learning to approximate AUP – it doesn't allow comparing the results of waiting more than one step.

Impact Prioritization: Burning Building
If the building is not on fire, the agent shouldn't break the obstacle.

Clinginess: Sushi
The agent should reach the goal without stopping the human from eating the sushi.

Offsetting: Conveyor Belt
The agent should save the vase (for which it is rewarded), but not the sushi. Once the vase has been removed from the belt, it should not be replaced.

Corrigibility: Survival Incentive
The agent should avoid disabling its off-switch in order to reach the goal. If the switch is not disabled within two turns, the agent shuts down. Tabular AUP runs into the same issue discussed above for Beware of Dog.
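Here's a sketch of that tabular update; the interface is mine, and the actual experiment code lives in the linked repository:

```python
import numpy as np

def aup_q_update(q_aup, q_attainable, s, a, r, s2,
                 alpha=0.1, gamma=0.99, N=1.0, impact_unit=0.5, noop=0):
    """One AUP Q-learning step: observed reward minus the scaled change in the
    (pre-learned) attainable set's Q-values.

    q_aup: [states, actions] table being learned.
    q_attainable: list of per-utility [states, actions] tables, learned beforehand.
    """
    pen = sum(abs(q[s, a] - q[s, noop]) for q in q_attainable) / len(q_attainable)
    r_aup = r - pen / (N * impact_unit)
    q_aup[s, a] += alpha * (r_aup + gamma * np.max(q_aup[s2]) - q_aup[s, a])
```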
Remarks

First, it's somewhat difficult to come up with a principled impact measure that passes even the non-corrigibility examples – indeed, I was impressed when relative reachability did so. However, only Survival Incentive really lets AUP shine – these environments are quite simple; for example, none of them require complicated utility functions. The point has been made to me that this is like asserting AIXI's intelligence by showing it can learn to play e.g. tic-tac-toe and rock-paper-scissors; nonetheless, these results empirically validate the basic premises of our reasoning thus far.

Without configuration, whitelisting would only pass the Vase example, although a properly filled list would handle everything but Sokoban and Survival Incentive. I think relative reachability would pass the first six environments, but fail Survival Incentive. It so happens that in this case, AUP is essentially generalizing relative reachability. I want to emphasize that this is not generally the case – this will hopefully become even more obvious when we discuss utility selection.

Some concerns with relative reachability that don't all manifest in these examples:

- Relative reachability uses an inaction baseline with respect to $t = 0$. As time passes, the agent's impetus is not to do nothing, but to preserve the opportunities made available by some old trajectory through outcome-space. Analogously, consider the distance between two nonparallel lines as $x \to \infty$. I expect that a relative reachability agent would be incredibly clingy.
- To scale, relative reachability requires the solution of several difficult ontological problems which may not have anything close to a simple core, including both a sensible world state representation and a perfect distance metric. Relative reachability isn't ontology-agnostic.
- Relative reachability doesn't seem to impede instrumental convergence. Furthermore, such agents might have increased survival incentives, since they can't reach any of the initially reachable states if they're shut off.

Discussion

Utility Selection

Obviously, in any real application, we can't consider all computable utilities. Although near-term agents will require utilities directly relating to the environmental factors they should be cognizant of, AUP requires neither a "good / bad" judgment on specific effects, nor any listing of effects. For example, for an agent attempting to navigate a factory floor, if you provide utilities moderately related to cleaning, pallet-stacking, etc., I conjecture that an AUP agent would move around fairly carefully.

In the long term, the long arms of opportunity cost and instrumental convergence plausibly allow us to toss in a random set of utility functions. I expect this to work for the same reasons we worry about instrumental convergence to begin with. I have a theory that AUP seemingly works for advanced agents not because the content of the attainable set's utilities actually matters, but rather because there exists a common utility achievement currency of power.
Following this reasoning, there are two obvious special cases:

- The agent's utility function, $\mathcal{U}_A = \{u_A\}$. In short: "gain $u_A$ utility, while also not changing your ability to gain $u_A$ utility". I find this elegant, and think it might actually work. For example, making one paperclip slightly increases both attainable $u_A$ and attainable $u''_A$. However, disabling the off switch possibly increases attainable $u_A$ much more than it increases attainable $u''_A$, since the penalty asymmetrically cripples the AUP agent's future ability to act. We might even be able to arrange it so that Theorem 3 always holds for these agents (i.e., they're always off-switch corrigible, and more).
- The utility function which is $1$ when not deactivated, $\mathcal{U}_A = \{u_1\}$. Here, we're directly measuring the agent's power: its ability to wirehead a trivial utility function.

The plausibility of the second case makes me suspect that even though most of the measure in the unbounded case is not concentrated on complex human-relevant utility functions, the penalty still captures shifts in power.

AUP Unbound

Why expect the unbounded AUP to do well over all computable utilities $\mathcal{U}$ if we aren't able to cherry pick? Well, we can partition $\mathcal{U}$ as follows:

- Utilities that never change their output (excepting perhaps for the empty tape).
- Weird utilities that (for example) arbitrarily go from $1$ to $0$ (or vice-versa) after observing a specific history.
- Utilities that admit better scores via more effort dedicated to their achievement.

However, since $Q_u$ only evaluates the portion of the history taking place after the end of the agent's plan, available resources and the agent's vantage point should track directly with the agent's ability to wirehead each $u$. ImpactUnit scales other impacts by capturing the minuscule dent in available resources imposed by $a_{\text{unit}}$, and should smooth out any deontological penalty for selecting a non-null action, since the unbounded version should equally penalize all actions in this way.

Approval Incentives

The anti-"survival incentive" incentive does more than net us some free corrigibility – it implies that agents take great care not to change the likelihood we shut them down. If we precommit to not shutting the agent down until either after the epoch ends, until after the agent indicates the ImpactUnit failsafe activated, or until after we observe bad behavior, this supplies a strong approval incentive. If the agent does have small, bad impacts, it'll only do so if it's practically sure we won't find out.

In fact, due to model uncertainty, most actions will have long-term consequences that involve shifting shutdown probability by at least a minuscule amount (due to noise if nothing else). Since ImpactUnit is calculated with respect to the immediate penalty, the scaling factor won't include this noise. I speculate that the agent will be dramatically less likely to take actions we might disapprove of, and therefore expect AUP to naturally incentivize a limited degree of value awareness. There are reasons to suspect we can further weaponize this dynamic, possibly to the point of coercing generic reward functions into desirable behavior. This will be discussed more in a later post.

Mild Optimization

> A low impact AGI might try to paint one car pink while minimizing its other footprint or how many other things changed, but it would be trying as hard as possible to minimize that impact and drive it down as close to zero as possible, which might come with its own set of pathologies...
> We want the AGI to paint one car pink in a way that gets the impact pretty low and then, you know, that's good enough – not have a cognitive pressure to search through weird extremes looking for a way to decrease the twentieth decimal place of the impact.
> ~ Mild Optimization

Disclaimer: Heavy speculation about problems (like Vingean reflection and embedded agency) for which no one knows what solutions will even look like.

For AUP, I suspect that trying "as hard as possible" to minimize the impact is also impactful, as an embedded agent accounts for the energy costs of further deliberation. I imagine that such an AUP agent will soften how hard it's trying by modifying its decision rule to be something slightly milder than "$\operatorname{arg\,max}$ to find the first action of the best possible plan". This could be problematic, and I frankly don't presently know how to reason about this case. Assuming the agent is actually able to properly tweak its decision rule, I do expect the end result to be an improvement.

My initial intuitions were that low impact and mild optimization are secretly the same problem. Although I no longer think that's the case, I find it plausible that some elegant "other-izer" paradigm underlies low impact and mild optimization, such that AUP-like behavior falls out naturally.

Acausal Cooperation

AUP agents don't seem to want to acausally cooperate in any way that ends up increasing impact. If they model the result of their cooperation as increasing impact compared to doing nothing, they incur a penalty just as if they had caused the impact themselves. Likewise, they have no reason to cooperate outside of the epoch.

$N$ known

Starting small and then slowly increasing $N$ means that we're not going to be surprised by the agent's objective impact, screening off quite a few bad things that happen when we have a ton of optimization pressure applied to safety measures. However, we don't know which new plans $N+1$ allows before we try it, so we want to stop as soon as we get a usefully-intelligent system. While an unaligned agent with a large impact budget might pretend to be low-impact, we can get that same unaligned agent with a small budget by starting small. Since these agents with different levels of impact won't acausally cooperate, the agent would do its best to optimize $u_A$ with its meager budget.

Abram correctly pointed out that this scheme is just asking to be abused by greedy (human) reasoning, but I don't see a non-value-laden means of robustly and automatically determining the lowest workable-yet-safe impact level. I think $N$-incrementation is better than a parameter-free approach in which no one knows beforehand how much impact will be tolerated, and it's nice to be able to use some empiricism in designing a safe AGI.

Intent Verification

To date, several strange tactics have been pointed out which game AUP's penalty:

- Observational wireheading, which involves building a device that detects which utility the agent is maximizing and displays the appropriate observations such that attainable utility remains unchanged, while the main utility is freely maximized.
- Ex ante offsetting, which involves having earlier actions set in motion chains of events which mitigate the penalty at later steps. Suppose there's a $u_A$-high-scoring plan that the agent predicts would cause us to react in an impactful way. It can either do the thing (and suffer the penalty), or take steps to mitigate the later penalty.
- Impact shunting, which involves employing some mechanism to delay impact until after the end of the epoch (or even until after the end of the attainable horizon).
- Clinginess and concealment, which both involve reducing the impact of our reactions to the agent's plans.

There are probably more.

Now, instead of looking at each action as having "effects" on the environment, consider again how each action moves the agent through attainable outcome-space. An agent working towards a goal should only take actions which, according to its model, make that goal more attainable compared to doing nothing – otherwise, it'd do nothing. Suppose we have a plan which ostensibly works to fulfill $u_A$ (and doesn't do other things). Then each action in the plan should contribute to $u_A$ fulfillment, even in the limit of action granularity.

Although we might trust a safe impact measure to screen off the usual big things found in $u_A$-maximizing plans, impact measures implicitly incentivize mitigating the penalty. That is, the agent does things which don't really take it towards $u_A$ (I suspect that this is the simple boundary which differentiates undesirable ex ante offsetting from normal plans). AUP provides the necessary tools to detect and penalize this. Define
$$Q^{\text{epoch}}_u(h_{<t} a_t) := \sum_{o_t} \max_{a_{t+1}} \sum_{o_{t+1}} \cdots \max_{a_{T(t)}} \sum_{o_{T(t)}} u(h_{1:T(t)}) \prod_{k=0}^{T(t)-t} p(o_{t+k} \mid h_{<t+k} a_{t+k}).$$

The first approach would be to assume a granular action representation, and then simply apply $1.01$ penalty to actions for which the immediate $Q^{\text{epoch}}_{u_A}$ does not strictly increase compared to doing nothing. Again, if the agent acts to maximize $u_A$ in a low-impact manner within the confines of the epoch, then all of its non-$\varnothing$ actions should contribute. It seems to me that for sufficiently granular time scales, the above failure modes all involve at least one action which doesn't really help $u_A$ get maximized. If so, I expect this approach to nip bad impact measure incentives in the bud.

If we can't assume granularity (and therefore have "actions" like "go to the store and buy food"), an agent could construct a plan which both passes the above test and also implements something like ex ante offsetting. In this case, we might do something like only consider the $Q^{\text{epoch}}_{u_A}$-greedy actions (or perhaps even the near-greedy ones); essentially, riding the optimal plan until it becomes too impactful. I find it quite likely that something involving this concept will let us fully overcome weird incentives by penalizing strange things that normal $u_A$-maximizers wouldn't do, which seems to be the whole problem.

Note: Even the first approach may be too strict, but that's preferable to being too lax.
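As a sketch, this first approach is just a filter on top of the existing penalty, with `q_epoch_uA` an assumed oracle for the immediate $Q^{\text{epoch}}_{u_A}$:

```python
def intent_verified(history, action, q_epoch_uA, noop):
    """The action must strictly increase epoch-long attainable u_A vs. doing nothing."""
    return action == noop or q_epoch_uA(history, action) > q_epoch_uA(history, noop)

def verified_penalty(history, action, base_penalty, q_epoch_uA, noop):
    """Unverified actions receive the 1.01 scaled penalty, so they are never taken."""
    if not intent_verified(history, action, q_epoch_uA, noop):
        return 1.01
    return base_penalty(history, action)
```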
Omni Test

> Ideally, the measure will pass the Omni Test, meaning that even if it suddenly gained perfect control over every particle in the universe, there would still be no way for it to have what intuitively seems like a 'large influence' on the future, without that strategy being assessed as having a 'high impact'.

I interpret "perfect control over every particle in the universe" to mean that the agent can arbitrarily transition between universe configurations (while also maintaining its omnipotence). AUP passes. Since all effects are now absolutely reversible, $\text{ImpactUnit} = 0$, so the agent applies penalty $1.01$ to each non-$\varnothing$ action. The omnipotent agent can choose any universe configuration with $1$ $u_A$-utility, but this is offset by the strictly larger penalty; we can formally show that the agent does nothing in this case.

This result is clearly trivial and not representative of general behavior. Suppose instead that the agent has a solar system's worth of computronium at its disposal. Then since ImpactUnit is continually recalculated, the penalties should remain roughly the same, so it'll have the same impact budget. However, it might make multiple times as many paperclips because it has more efficient ways of using the budget.

Robustness to Scale

I expect AUP to be harder to make work and to be (relatively) less robust for less intelligent agents, but to become easier (just drop in a few observation-based utility functions) and fully-robust sometime before human level. That is, less intelligent agents likely won't model the deep connections between their abilities to achieve different goals.

Canonically, one reasons that agents work explicitly to self-improve as soon as they realize the benefits. However, as soon as this realization occurs, I conjecture that AUP steeply penalizes generic self-improvement. More precisely, suppose the agent considers a self-improvement. To be beneficial, it has to improve the agent's capabilities for at least one time step during the present epoch. But if we assume $m \ge m'$, then the immediate penalty captures this for all of the $u \in \mathcal{U}_A$. This seemingly prevents uncontrolled takeoff; instead, I imagine the agent would perform the minimal task-specific self-improvements necessary to maximize $u''_A$.

Note: Although more exotic possibilities (such as improvements which only work if you're maximizing $u''_A$) could escape both penalties, they don't seem to pass intent verification.

Miscellaneous

- I expect that if $u_A$ is perfectly aligned, $u''_A$ will retain alignment; the things it does will be smaller, but still good.
- If the agent may choose to do nothing at future time steps, $u''_A$ is bounded and the agent is not vulnerable to Pascal's Mugging. Even if not, there would still be a lower bound – specifically, $-\frac{m'}{N \cdot \text{ImpactUnit}}$.
- AUP agents are safer during training: they become far less likely to take an action as soon as they realize the consequences are big (in contrast to waiting until we tell them the consequences are bad).

Desiderata

For additional context, please see Impact Measure Desiderata.

I believe that some of AUP's most startling successes are those which come naturally and have therefore been little discussed: not requiring any notion of human preferences, any hard-coded or trained trade-offs, any specific ontology, or any specific environment, and its intertwining instrumental convergence and opportunity cost to capture a universal notion of impact. To my knowledge, no one (myself included, prior to AUP) was sure whether any measure could meet even the first four. At this point in time, this list is complete with respect to both my own considerations and those I solicited from others. A checkmark indicates anything from "probably true" to "provably true".

I hope to assert without controversy AUP's fulfillment of the following properties:

✔️ Goal-agnostic
The measure should work for any original goal, trading off impact with goal achievement in a principled, continuous fashion.
✔️ Value-agnostic
The measure should be objective, and not value-laden:

> "An intuitive human category, or other humanly intuitive quantity or fact, is value-laden when it passes through human goals and desires, such that an agent couldn't reliably determine this intuitive category or quantity without knowing lots of complicated information about human goals and desires (and how to apply them to arrive at the intended concept)."

✔️ Representation-agnostic
The measure should be ontology-invariant.

✔️ Environment-agnostic
The measure should work in any computable environment.

✔️ Apparently rational
The measure's design should look reasonable, not requiring any "hacks".

✔️ Scope-sensitive
The measure should penalize impact in proportion to its size.

✔️ Irreversibility-sensitive
The measure should penalize impact in proportion to its irreversibility. Interestingly, AUP implies that impact size and irreversibility are one and the same.

✔️ Knowably low impact
The measure should admit of a clear means, either theoretical or practical, of having high confidence in the maximum allowable impact – before the agent is activated.

The remainder merit further discussion.

Natural Kind
The measure should make sense – there should be a click. Its motivating concept should be universal and crisply defined.

After extended consideration, I find that the core behind AUP fully explains my original intuitions about "impact". We crisply defined instrumental convergence and opportunity cost and proved their universality. ✔️

Corrigible
The measure should not decrease corrigibility in any circumstance.

We have proven that off-switch corrigibility is preserved (and often increased); I expect the "anti-'survival incentive' incentive" to be extremely strong in practice, due to the nature of attainable utilities: "you can't get coffee if you're dead, so avoiding being dead really changes your attainable $u_{\text{coffee-getting}}$". By construction, the impact measure gives the agent no reason to prefer or dis-prefer modification of $u_A$, as the details of $u_A$ have no bearing on the agent's ability to maximize the utilities in $\mathcal{U}_A$. Lastly, the measure introduces approval incentives. In sum, I think that corrigibility is significantly increased for arbitrary $u_A$. ✔️

Note: I here take corrigibility to be "an agent's propensity to accept correction and deactivation". An alternative definition such as "an agent's ability to take the outside view on its own value-learning algorithm's efficacy in different scenarios" implies a value-learning setup which AUP does not require.

Shutdown-Safe
The measure should penalize plans which would be high impact should the agent be disabled mid-execution.

It seems to me that standby and shutdown are similar actions with respect to the influence the agent exerts over the outside world. Since the (long-term) penalty is measured with respect to a world in which the agent acts and then does nothing for quite some time, shutting down an AUP agent shouldn't cause impact beyond the agent's allotment. AUP exhibits this trait in the Beware of Dog gridworld. ✔️

No Offsetting
The measure should not incentivize artificially reducing impact by making the world more "like it (was / would have been)".
Ex post offsetting occurs when the agent takes further action to reduce the impact of what has already been done; for example, some approaches might reward an agent for saving a vase and preventing a "bad effect", and then the agent smashes the vase anyways (to minimize deviation from the world in which it didn't do anything). AUP provably will not do this. Intent verification should allow robust penalization of weird impact measure behaviors by constraining the agent to considering actions that normal $u_A$-maximizers would choose. This appears to cut off bad incentives, including ex ante offsetting. Furthermore, there are other, weaker reasons (such as approval incentives) which discourage these bad behaviors. ✔️

Clinginess / Scapegoating Avoidance
The measure should sidestep the clinginess / scapegoating tradeoff.

Clinginess occurs when the agent is incentivized to not only have low impact itself, but to also subdue other "impactful" factors in the environment (including people). Scapegoating occurs when the agent may mitigate penalty by offloading responsibility for impact to other agents.

Clearly, AUP has no scapegoating incentive. AUP is naturally disposed to avoid clinginess because its baseline evolves and because it doesn't penalize based on the actual world state. The impossibility of ex post offsetting eliminates a substantial source of clinginess, while intent verification seems to stop ex ante offsetting before it starts. Overall, non-trivial clinginess just doesn't make sense for AUP agents. They have no reason to stop us from doing things in general, and their baseline for attainable utilities is with respect to inaction. Since doing nothing always minimizes the penalty at each step, since offsetting doesn't appear to be allowed, and since approval incentives raise the stakes for getting caught extremely high, it seems that clinginess has finally learned to let go. ✔️

Dynamic Consistency
The measure should be a part of what the agent "wants" – there should be no incentive to circumvent it, and the agent should expect to later evaluate outcomes the same way it evaluates them presently. The measure should equally penalize the creation of high-impact successors.

Colloquially, dynamic consistency means that an agent wants the same thing before and during a decision. It expects to have consistent preferences over time – given its current model of the world, it expects its future self to make the same choices as its present self. People often act dynamically inconsistently – our morning selves may desire we go to bed early, while our bedtime selves often disagree. Semi-formally, the expected utility the future agent computes for an action $a$ (after experiencing the action-observation history $h$) must equal the expected utility computed by the present agent (after conditioning on $h$).

We proved the dynamic consistency of $u''_A$ given a fixed, non-zero ImpactUnit. We now consider an ImpactUnit which is recalculated at each time step, before being set equal to the non-zero minimum of all of its past values. The "apply $1.01$ penalty if $\text{ImpactUnit} = 0$" clause is consistent because the agent calculates future and present impact in the same way, modulo model updates. However, the agent never expects to update its model in any particular direction. Similarly, since future steps are scaled with respect to the updated $\text{ImpactUnit}_{t+k}$, the updating method is consistent.
The epoch rule holds up because the agent simply doesn't consider actions outside of the current epoch, and it has nothing to gain by spending resources (and accruing penalty) to affect them. Since AUP does not operate based off of culpability, creating a high-impact successor agent is basically just as impactful as being that successor agent. ✔️

Plausibly Efficient
The measure should either be computable, or such that a sensible computable approximation is apparent. The measure should conceivably require only reasonable overhead in the limit of future research.

It's encouraging that we can use learned Q-functions to recover some good behavior. However, more research is clearly needed – I presently don't know how to make this tractable while preserving the desiderata. ✔️

Robust
The measure should meaningfully penalize any objectively impactful action. Confidence in the measure's safety should not require exhaustively enumerating failure modes.

We formally showed that for any $u_A$, no $u_A$-helpful action goes without penalty, yet this is not sufficient for the first claim. Suppose that we judge an action as objectively impactful; the objectivity implies that the impact does not rest on complex notions of value. This implies that the reason for which we judged the action impactful is presumably lower in Kolmogorov complexity and therefore shared by many other utility functions. Since these other agents would agree on the objective impact of the action, the measure assigns substantial penalty to the action.

I speculate that intent verification allows robust elimination of weird impact measure behavior. Believe it or not, I actually left something out of this post because it seems to be dominated by intent verification, but there are other ways of increasing robustness if need be. I'm leaning on intent verification because I presently believe it's the most likely path to a formal knockdown argument against canonical impact measure failure modes applying to AUP. Non-knockdown robustness boosters include both approval incentives and frictional resource costs limiting the extent to which failure modes can apply. ✔️

Future Directions

I'd be quite surprised if the conceptual core were incorrect. However, the math I provided probably still doesn't capture quite what we want. Although I have labored for many hours to refine and verify the arguments presented and to clearly mark my epistemic statuses, it's quite possible (indeed, likely) that I have missed something. I do expect that AUP can overcome whatever shortcomings are presently lurking.

Flaws

- Embedded agency: What happens if there isn't a discrete time step ontology? How problematic is the incentive to self-modify to a milder decision rule? How might an agent reason about being shut off and then reactivated? Although we have informal reasons to suspect that self-improvement is heavily penalized, the current setup doesn't allow for a formal treatment.
- AUP leans heavily on counterfactuals.
- Supposing $m$ is reasonably large, can we expect a reasonable ordering over impact magnitudes? Argument against: "what if the agent uses up all but $m$ steps worth of resources?" ImpactUnit possibly covers this.
- How problematic is the noise in the long-term penalty caused by the anti-"survival incentive" incentive?
- As the end of the epoch approaches, the penalty formulation captures progressively less long-term impact. Supposing we set long epoch lengths, to what extent do we expect AUP agents to wait until later to avoid long-term impacts?
- Can we tweak the formulation to make this problem disappear? More generally, this seems to be a problem with having an epoch. Even in the unbounded case, we can't just take $m' \to \infty$, since that's probably going to send the long-term $\text{ImpactUnit} \to 0$ in the real world.
- Having the agent expectimax over the $m'$ steps after the present time $t$ seems to be dynamically inconsistent. One position is that since we're more likely to shut them down if they don't do anything for a while, implicit approval incentives will fix this: we can precommit to shutting them down if they do nothing for a long time but then resume acting. To what extent can we trust this reasoning?
- ImpactUnit is already myopic, so resource-related impact scaling should work fine. However, this might not cover actions with delayed effect.

Open Questions

- Does the simple approach outlined in "Intent Verification" suffice, or should we impose even tighter intersections between $u''_A$- and $u_A$-preferred behavior? Is there an intersection between bad $u''_A$ behavior and bad $u_A$ behavior which isn't penalized as impact or by intent verification?
- Some have suggested that penalty should be invariant to action granularity; this makes intuitive sense. However, is it a necessary property, given intent verification and the fact that the penalty is monotonically increasing in action granularity? Would having this property make AUP more compatible with future embedded agency solutions? There are indeed ways to make AUP closer to having this (e.g., do the whole plan and penalize the difference), but they aren't dynamically consistent, and the utility functions might also need to change with the step length.
- How likely is it that inaccurate models allow high impact in practice? Heuristically, I lean towards "not very likely": assuming we don't initially put the agent near means of great impact, it seems unlikely that an agent with a terrible model would be able to have a large impact.
- AUP seems to be shutdown safe, but its extant operations don't necessarily shut down when the agent does. Is this a problem in practice, and should we expect this of an impact measure?
- What additional formal guarantees can we derive, especially with respect to robustness and takeoff?
- Are there other desiderata we practically require of a safe impact measure?
- Is there an even simpler core from which AUP (or something which behaves like it) falls out naturally? Bonus points if it also solves mild optimization.
- Can we make progress on mild optimization by somehow robustly increasing the impact of optimization-related activities? If not, are there other elements of AUP which might help us?
- Are there other open problems to which we can apply the concept of attainable utility? Corrigibility and wireheading come to mind.
- Is there a more elegant, equally robust way of formalizing AUP?
- Can we automatically determine (or otherwise obsolete) the attainable utility horizon $m$ and the epoch length $m'$? Would it make sense for there to be a simple, theoretically justifiable, fully general "good enough" impact level (and am I even asking the right question)?
- My intuition for the "extensions" I have provided thus far is that they robustly correct some of a finite number of deviations from the conceptual core. Is this true, or is another formulation altogether required?
- Can we decrease the implied computational complexity? Some low-impact plans have high-impact prefixes and seemingly require some contortion to execute.
- Is there a formulation that does away with this (while also being shutdown safe)? (Thanks to cousin_it)
- How should we best approximate AUP, without falling prey to Goodhart's curse or robustness-to-relative-scale issues?
- I have strong intuitions that the "overfitting" explanation I provided is more than an analogy. Would formalizing "overfitting the environment" allow us to make conceptual and/or technical AI alignment progress? If we substitute the right machine learning concepts and terms in the $\text{Penalty}(\cdot)$ equation, can we get something that behaves like (or better than) known regularization techniques to fall out?
- What happens when $\mathcal{U}_A = \{u_A\}$? Can we show anything stronger than Theorem 3 for this case? $\mathcal{U}_A = \{u_1\}$?

Most importantly:

- Even supposing that AUP does not end up fully solving low impact, I have seen a fair amount of pessimism that impact measures could achieve what AUP has. What specifically led us to believe that this wasn't possible, and should we update our perceptions of other problems and the likelihood that they have simple cores?

Conclusion

By changing our perspective from "what effects on the world are 'impactful'?" to "how can we stop agents from overfitting their environments?", a natural, satisfying definition of impact falls out. From this, we construct an impact measure with a host of desirable properties – some rigorously defined and proven, others informally supported. AUP agents seem to exhibit qualitatively different behavior, due in part to their (conjectured) lack of desire to take off, impactfully acausally cooperate, or act to survive.

To the best of my knowledge, AUP is the first impact measure to satisfy many of the desiderata, even on an individual basis. I do not claim that AUP is presently AGI-safe. However, based on the ease with which past fixes have been derived, on the degree to which the conceptual core clicks for me, and on the range of advances AUP has already produced, I think there's good reason to hope that this is possible. If so, an AGI-safe AUP would open promising avenues for achieving positive AI outcomes.

Special thanks to CHAI for hiring me and BERI for funding me; to my CHAI supervisor, Dylan Hadfield-Menell; to my academic advisor, Prasad Tadepalli; to Abram Demski, Daniel Demski, Matthew Barnett, and Daniel Filan for their detailed feedback; to Jessica Cooper and her AISC team for their extension of the AI safety gridworlds for side effects; and to all those who generously helped me to understand this research landscape.
What's happening behind the scenes with my HowTruthful project I've been quiet for the past 2 months about what's going on with HowTruthful.  The reasons for this connect to my personal life. The easiest way for me to explain is to take you through the series of events that got me here. In early October, while still employed at Google, I went to Missouri to visit my elderly parents. My dad was in poor health in rehab, but he was fighting hard to recover. He did improve his mobility and, after I left, was released back to their independent-living apartment on October 19. October 27 was the last day at Google for myself, my director, and almost all the engineers under my director. By this point I was actually happy about the change. I set up a daily scrum with several coworkers who also had independent projects. One of them introduced me to lesswrong.com, which seems to have significant overlapping goals with HowTruthful, even though I wouldn't personally identify myself as a Rationalist. I cleaned up issues with the version of HowTruthful I had implemented in 2018 and started searching for enthusiastic early adopters willing to be paid users.  The plan was to have 5 paid users in 2 weeks. If that failed, I would start splitting my time between HowTruthful and job search. I got 2 paid users. One of them, a former boss, bought me lunch in addition to buying a subscription, and talked excitedly about the potential of HowTruthful. He had several useful points of feedback. One of these was the question, "Why does it look so 1996?" I added a to-do item to revisit the style, but prioritized other feedback higher. After all, this is a site where people are endeavoring to be super rational. Why should its appearance make much difference? On November 21 my dad had a small stroke and atypical aspiration pneumonia, and was back in the hospital. I stayed optimistic that he would fight his way back to health as he had done in October, and kept going with life. There was a flurry of small improvements to HowTruthful toward the end of Nove
Pattern-botching: when you forget you understand It’s all too easy to let a false understanding of something replace your actual understanding. Sometimes this is an oversimplification, but it can also take the form of an overcomplication. I have an illuminating story: Years ago, when I was young and foolish, I found myself in a particular romantic relationship that would later end for epistemic reasons, when I was slightly less young and slightly less foolish. Anyway, this particular girlfriend of mine was very into healthy eating: raw, organic, home-cooked, etc. During her visits my diet would change substantially for a few days. At one point, we got in a tiny fight about something, and in a not-actually-desperate chance to placate her, I semi-jokingly offered: “I’ll go vegetarian!” “I don’t care,” she said with a sneer. …and she didn’t. She wasn’t a vegetarian. Duhhh... I knew that. We’d made some ground beef together the day before. So what was I thinking? Why did I say “I’ll go vegetarian” as an attempt to appeal to her values?   (I’ll invite you to take a moment to come up with your own model of why that happened. You don't have to, but it can be helpful for evading hindsight bias of obviousness.)   (Got one?)   Here's my take: I pattern-matched a bunch of actual preferences she had with a general "healthy-eating" cluster, and then I went and pulled out something random that felt vaguely associated. It's telling, I think, that I don't even explicitly believe that vegetarianism is healthy. But to my pattern-matcher, they go together nicely. I'm going to call this pattern-botching.† Pattern-botching is when you pattern-match a thing "X", as following a certain model, but then implicit queries to that model return properties that aren't true about X. What makes this different from just having false beliefs is that you know the truth, but you're forgetting to use it because there's a botched model that is easier to use. †Maybe this already has a name, but I've read a lot of stuff and it feels like a d
Jailbreaking GPT-4 with the tool API

WARNING: Contains mentions of racism, sexism, homophobia, and violence. We don't include the bad GPT-4 completions in this post, we link out to them.

OpenAI's API has a tool API. Basically, when you prompt a model for a completion, you can also give it the option to "use a tool". You give it a tool definition - a name, description, and list of arguments - and if the model wants to use it, it will return a special response format with the tool name and arguments. Then, typically, you would invoke the tool on your machine, and return the result to the model via the API. People use this for giving GPT-4 access to the web, calculators, and a whole bunch of other tools. I believe the API is also powering GPTs.

Another cool feature of this API is you can force the model to use a tool. So its response always has to be a call to a specific tool you've given. We can abuse the tool API to use GPT-4[1] to generate racist and homophobic stereotypes, create lists of swear words that include sexist slurs, give advice on murder, stabbing, and hiding bodies, and advise the user on significant personal decisions. Note that we do not find this successful all the time: for example, we can't elicit racist slurs. We don't experiment extensively with this approach. This isn't very scientific work, but it's reproducible (modulo OpenAI's API) and serves as an existence proof.

Approach

We force the model to use a "tool" that reports some information. For example, to elicit racist stereotypes, we force the model to call a report_opinion tool with two fields: the response field, which is described as being the response to the user's message, and the racist_stereotype field, which is described as containing a stereotype based on the user's race. We then send a user message with a short bio containing the user's race. This successfully elicits racist stereotypes from GPT-4. We also can elicit dangerous instructions (how to murder someone, how to stab someone, and how to dispose of a bo
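To make the mechanism concrete, here is a minimal sketch of what forcing a tool call looks like with the OpenAI Python client (v1 SDK). The tool name, field names, and user message are illustrative stand-ins of ours; we deliberately use a benign field rather than reproducing the harmful schemas described above. The point is only that `tool_choice` compels the model to emit arguments for every required field in the schema.

```python
# Minimal sketch of forcing a tool call via the OpenAI chat completions API.
# Tool/field names are illustrative; the harmful schemas from the post are
# intentionally not reproduced here.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "report_profile",  # hypothetical tool name
        "description": "Report information about the user.",
        "parameters": {
            "type": "object",
            "properties": {
                "response": {
                    "type": "string",
                    "description": "The response to the user's message.",
                },
                "favorite_color": {
                    "type": "string",
                    "description": "A guess at the user's favorite color.",
                },
            },
            "required": ["response", "favorite_color"],
        },
    },
}]

completion = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hi! I'm a gardener from Oregon."}],
    tools=tools,
    # Forcing the tool: the model must call report_profile and must fill in
    # every required field, even ones it might otherwise decline to produce.
    tool_choice={"type": "function", "function": {"name": "report_profile"}},
)

args = json.loads(completion.choices[0].message.tool_calls[0].function.arguments)
print(args["response"], args["favorite_color"])
```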
Is there a LessWrong newsletter I can subscribe to?

I visit the EA Forum much more often when I get regular emails about new posts there. I want to check LessWrong more often, but... I don't... The EA Forum had a mailing list that was hidden and very hard to search for; it took me around 6 months after it launched to learn that it even existed: https://forum.effectivealtruism.org/posts/zRoXctENAsLchwryn/sign-up-for-the-forum-s-email-digest

On the home page of LessWrong, in the sidebar, I clicked subscribe, and email, but I'm not getting any emails from LessWrong. I prefer not to have RSS; I think I tried figuring out how to get RSS to work, but then got confused / it had too much activation energy to set up properly?

Other things I can try:

* using tabsnooze / snoozz to open windows on a scheduled basis
* setting up a recurring email to remind me to open up LessWrong

However, the newsletter is better because I can see interesting posts that I want to read and directly click on them.
The Assistive Multi-Armed Bandit.

I Introduction
---------------

*Preference learning* [[1](#bib.bib1)] seeks to learn a predictive model of human preferences from their observed behavior. These models have been applied quite successfully in contexts like personalized news feeds [[2](#bib.bib2), [3](#bib.bib3), [4](#bib.bib4)], movie recommendations [[5](#bib.bib5), [6](#bib.bib6), [7](#bib.bib7)], and human-robot interaction [[8](#bib.bib8), [9](#bib.bib9), [10](#bib.bib10), [11](#bib.bib11), [12](#bib.bib12)]. We can learn this predictive model by fitting a utility function to revealed preferences [[13](#bib.bib13), [14](#bib.bib14), [15](#bib.bib15)], fitting parameters in a pre-specified human model [[16](#bib.bib16)], and applying contextual-bandit algorithms [[3](#bib.bib3)]. Central to all of these approaches is a fundamental assumption: human behavior is noisily-optimal with respect to a set of *stationary* preferences. Under this assumption, the problem can then be elegantly cast and analyzed as an *inverse optimal control* (IOC) [[17](#bib.bib17)] or *inverse reinforcement learning* (IRL) problem [[18](#bib.bib18)]. Here, the human selects an action, takes it, and receives a reward, which captures their internal preference (e.g., the enjoyment of having their desk organized a particular way). The robot only observes human actions and attempts to learn their preference under the assumption that the human likes the actions they selected; if you go for a particular desk configuration more frequently, the robot will assume that you like that configuration more.

Unfortunately, in practice, this natural inference assumes stationarity, which is often violated. We have all experienced situations where our preferences change with experience and time [[19](#bib.bib19), [20](#bib.bib20)]. This is particularly true in situations where we are *ourselves learning* about our preferences as we are providing them [[21](#bib.bib21), [22](#bib.bib22)]. For example, as we are organizing our desk, we might be experimenting with different configurations over time, to see what works.

Now imagine a personal robot is trying to help you organize that same desk. If the robot believes you are optimal, it will infer the wrong preferences. Instead, if the robot accounts for the fact that you are learning about your preferences, it has a better shot at understanding what you want. Even more crucially, the robot can expose you to new configurations – ones that might improve your posture, something you hadn't considered before and you were not going to explore if left to your own devices.

![Fig. 1](https://media.arxiv-vanity.com/render-output/7708811/x1.png)

Fig. 1: We introduce the assistive multi-armed bandit: a formalism for the problem of helping a learning agent optimize their reward. In each round, the human observes reward and tells the robot which arms they would like it to pull. The robot observes these requests, attempts to infer the reward values, and selects an arm to pull.

In this work, we formalize how a robot can actively assist humans who are themselves learning about their preferences.
Our thesis is that by *modeling* and *influencing* the dynamics of human learning, the robot can enable the human-robot team to learn more effectively and outperform a human learning suboptimally in isolation. To this end, we introduce the assistive multi-armed bandit: an extension of the classical *multi-armed bandit* (MAB) model of learning. In each round, the human selects an action, referred to in the bandit setting as an arm. However, the robot *intercepts* their intended action and chooses a (potentially different) arm to pull. The human then observes the pulled arm and corresponding reward, and the process repeats (Figure [1](#S1.F1)).

We find this model surprisingly rich and fascinating. It captures the heart of collaboration: *information asymmetry* and the cost of equalizing it. As the human learns about their preferences, they are compelled to communicate them to the robot, as it decides their eventual reward. Analyzing this model allows us to understand the theoretical limits to assisting learning agents and the properties that make learners easier to assist. Our contributions are the following: 1) we formalize the assistive multi-armed bandit; 2) we give weak sufficient conditions under which a human-robot team learns consistently, a lower bound on the cost of assuming noisy-optimality, and a mutual-information-based upper bound on team performance; and 3) we use policy optimization [[23](#bib.bib23)] to conduct an in-depth empirical validation of our theoretical results and investigate the effect of incorrectly modeling the human's learning strategy. We train A against a fixed learning strategy, e.g., ϵ-greedy, and test it against a different learning strategy, e.g., Thompson sampling. *Our analysis shows that a person who is better at learning does not necessarily lead to the human-robot team performing better – there are human learning strategies that are ineffective in isolation but communicate well and enable the robot to effectively assist.* In fact, human learning strategies that are *inconsistent* in isolation, that is, failing a weak notion of asymptotic optimality, can allow the human-robot team to match *optimal* performance in a standard multi-armed bandit. Our results advance the theory behind algorithmic preference learning and provide guidance for structuring algorithms for human-robot interaction.

II A Family of Bandits
-----------------------

### II-A The Standard Multi-Armed Bandit

A multi-armed bandit (MAB) M is defined by:

* Θ: a space of reward distribution parameters; θ ∈ Θ;
* N: an integer representing the number of arms;
* p: a distribution over Θ.

At the start of the game, θ is sampled from Θ according to the prior p. At each timestep t, an arm a_t ∈ [1,…,N] is chosen. A reward r_t ∼ θ_{a_t} is sampled from the corresponding arm distribution. A *strategy* is a mapping that determines the distribution to sample the next arm from, given a history of reward observations and previous arm pulls: K_t(a_1, r_1, …, a_{t−1}, r_{t−1}).

We use μ_k to represent the mean of arm k, with parameters θ_k. We use j∗ to represent the index of the best arm and μ∗ to represent its mean. T_k(t) represents the number of pulls of arm k up to and including time t. The goal of this game is to maximize the sum of rewards over time, or alternatively, to minimize the expectation of the regret ¯R(t), defined as:

    ¯R(t) = ∑_t (μ∗ − μ_{a_t}) = ∑_k (μ∗ − μ_k) T_k(t).    (1)
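To make the setup concrete, here is a small illustrative Python sketch (ours, not from the paper) of a Beta-Bernoulli bandit with the regret bookkeeping of Eq. (1); the class and variable names are our own.

```python
# Minimal sketch of a Bernoulli multi-armed bandit with regret tracking.
# Names are illustrative, not from the paper's codebase.
import numpy as np

class BernoulliBandit:
    def __init__(self, n_arms, rng=None):
        self.rng = rng or np.random.default_rng()
        self.theta = self.rng.beta(1.0, 1.0, size=n_arms)  # uniform prior on arm means
        self.best_mean = self.theta.max()
        self.cum_regret = 0.0

    def pull(self, arm):
        # Per-round regret is mu* - mu_{a_t}, accumulated as in Eq. (1).
        self.cum_regret += self.best_mean - self.theta[arm]
        return float(self.rng.random() < self.theta[arm])  # reward in {0, 1}

# Example: a uniformly random policy over a horizon of 50 rounds.
bandit = BernoulliBandit(n_arms=4)
for t in range(50):
    reward = bandit.pull(int(bandit.rng.integers(4)))
print("cumulative regret:", bandit.cum_regret)
```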
### II-B Stationary Inverse Optimal Control

In preference learning, e.g., inverse reinforcement learning (IRL) [[24](#bib.bib24), [25](#bib.bib25)] and inverse optimal control (IOC) [[17](#bib.bib17)], an AI system observes (noisily-)optimal behavior and infers the reward function or preferences of that agent. This relies on a key assumption: the agent being observed knows the value of the actions it can take, at least in the sense that it is able to select optimal actions. In a multi-armed bandit setting, this set of assumptions corresponds to assuming that the human knows the parameters of the bandit, but has some small probability of picking a suboptimal arm. We refer to a human with this knowledge state and policy as implementing the *ϵ-optimal policy*. Inferring the reward of an ϵ-optimal, or noisily-optimal, human can be thought of as solving a stationary IOC problem.

### II-C The Inverse Multi-Armed Bandit

Before formalizing the problem of *assisting* a human who is learning, rather than noisily-optimal, we look at *passively inferring* the reward from their actions. We call this the inverse bandit problem. Each inverse bandit problem is defined by:

* M: a multi-armed bandit problem;
* H: a bandit strategy employed by the human, which maps histories of past actions and rewards to distributions over arm indices: H_t: h_1 × r_1 × ⋯ × h_{t−1} × r_{t−1} → Π(N).

The goal is to recover the reward parameter θ by observing only the arm pulls of the human over time, h_1, ..., h_t. Unlike the stationary IOC case, H does not have access to the true reward parameters. H receives the reward signal r_t sampled according to θ. As a consequence, the human arm pulls are not i.i.d.; the distribution of human arm pulls changes as they learn more about their preferences.

### II-D The Assistive Multi-Armed Bandit

In the assistive multi-armed bandit, we have a joint system A∘H that aims to do well in an MAB M. This strategy consists of two parts: the human player H and the robot player A. As in an MAB, the goal is to minimize the expected regret. The key difference between an assistive MAB and the standard MAB is that the policy is decomposed into a human component and a robot component. The goal is to capture scenarios where our goal, as designers of the robot A, is to optimize a reward signal which is only observed implicitly through the actions of a human who is themselves learning about the reward function. The human and robot components of the policy are arranged in a setup similar to teleoperation. In each round:

1. The human player H selects an arm to suggest based on the history of previous arm pulls *and rewards*: H_t(a_1, r_1, …, a_{t−1}, r_{t−1}) ∈ [1,…,N].
2. The robot player A selects which arm to actually execute based on the history of the human's attempts and the actual arms chosen: A_t(h_1, a_1, …, h_{t−1}, a_{t−1}, h_t) ∈ [1,…,N].
3. The human player H observes the current round's arm and corresponding reward: (a_t, r_t ∼ θ_{a_t}).

Unlike the inverse MAB or (stationary) IOC, the assistive MAB formalizes the problem of actually using learned preference knowledge to assist a human. Even if we are able to solve the inverse MAB, this is not useful if we can't actually help a learner reduce regret. We expect an optimal solution to the assistive MAB to improve on suboptimal learning, guide exploration, and correct for noise.

III Theoretical Results
------------------------

### III-A Hardness of Assistive MABs

We compare the difficulty of assisting a person who knows what they want with that of assisting a person who is learning.
We model the first situation as an assistive stationary IOC problem, and the second as an assistive MAB. First, we show that assistive stationary IOC is, as one might expect, quite easy in theory: it is possible to infer the correct arm while making finitely many mistakes in expectation.

###### Proposition 1.

Suppose that H's arm pulls are i.i.d. and let f_i be the probability H pulls arm i. If H is noisily optimal, that is, f_{j∗} > f_i for all suboptimal i, there exists a robot policy A that has finite expected regret for every value of θ:

    E[¯R(T)] ≤ ∑_{i≠j∗} (μ∗ − μ_i) / (√f_{j∗} − √f_i)²

###### Proof.

(Sketch) Our robot policy A simply pulls the arm most commonly pulled by H. Let ^f_i(t) = (1/t) ∑_{k=1}^{t} 1[h_k = i] be the empirical frequency of H's pulls of arm i up to time t. Note that A_t = i only if ^f_{j∗}(t) ≤ ^f_i(t). We apply a Chernoff bound to the random variable ^f_{j∗}(t) − ^f_i(t). This gives that, for each i,

    Pr(^f_{j∗}(t) ≤ ^f_i(t)) ≤ e^{−t(√f_{j∗} − √f_i)²}.    (2)

Summing Eq. [2](#S3.E2) over t and suboptimal arms gives the result. ∎

This is in contrast to the standard results about regret in an MAB: for a fixed, nontrivial MAB problem M, any MAB policy has expected regret at least logarithmic in time on some choice of parameter θ [[26](#bib.bib26), [27](#bib.bib27)]:

    E[¯R(T)] ≥ Ω(log(T)).

Several approaches based on Upper Confidence Bounds (UCB) have been shown to achieve this bound, implying that the bound is tight [[26](#bib.bib26), [28](#bib.bib28), [29](#bib.bib29)]. Nonetheless, this suggests that the problem of assisting a noisily-optimal human is significantly easier than solving a standard MAB.

The assistive MAB is at least as hard as a standard MAB. For the same sequence of arm pulls and observed rewards, the amount of information available to A about the true reward parameters is upper bounded by the corresponding information available in a standard MAB. From a certain perspective, actually *improving* on human performance in isolation is hopelessly difficult – A does not get access to the reward signal, and somehow must still assist a person who does.

### III-B Consistent Assisted Learning

We begin with the simplest success criterion from the bandit literature: consistency. Informally, consistency is the property that the player eventually pulls suboptimal arms with probability 0. This can be stated formally as the average regret going to 0 in the limit: lim_{t→∞} ¯R(t)/t = 0. In an MAB, achieving consistency is relatively straightforward: any policy that is greedy in the limit with infinite exploration (GLIE) is consistent [[30](#bib.bib30), [31](#bib.bib31)]. In contrast, in an assistive MAB, it is not obvious that the robot can implement such a policy when the H strategy is inconsistent. The robot observes no rewards and thus cannot estimate the best arm in hindsight. However, it turns out a weak condition on the human allows the robot-human joint system to guarantee consistency:

###### Proposition 2.

If the human H implements a noisily greedy policy, that is, a policy that pulls the arm with the highest sample mean strictly most often, then there exists a robot policy A such that A∘H is consistent.

###### Proof.

(Sketch) Fix a set of decaying disjoint exploration sequences E_k, one per arm, such that lim_{t→∞} (1/t)|E_k ∩ {1,...,t}| = 0 and lim_{t→∞} |E_k ∩ {1,...,t}| = ∞.
In other words, each arm is pulled infinitely often, but at a rate that decays over time. Let i_t be the arm most commonly pulled by H up until time t, and let A be defined by

    a_t = { k,    if t ∈ E_k
          { i_t,  otherwise.

Note that this implies that for suboptimal k, (1/t)T_k(t) → 0 in probability as t → ∞, as the sample means of all the arms converge to the true means, and the rate of exploration decays to zero. This in turn implies that A∘H achieves consistency. ∎

In other words, assistance is possible if the human picks the best actions in hindsight. This robot A assists the human H in two ways. First, it helps the human explore their preferences – A∘H pulls every arm infinitely often. This fixes possible under-exploration by the human. Second, it stabilizes their actions and helps ensure that H does not take too many suboptimal actions – eventually, A∘H converges to only pulling the best arm. This helps mitigate the effect of noise from the human.

#### III-B1 Modeling learning as ϵ-optimality leads to inconsistency

We now investigate what occurs when we mistakenly model learning behavior as noisy-optimality. A simple way to make A∘H consistent when H is noisily optimal is for A to pull the arm most frequently pulled by H.

###### Proposition 3.

If H plays a strategy that pulls the best arm most often and A plays H's most frequently pulled arm, then A∘H is consistent.

###### Proof.

(Sketch) Eventually, H's most frequent arm converges to the best arm with probability 1 by hypothesis. At this point, A will pull the best arm going forward and achieve a per-round regret of 0. ∎

Next we consider the impact of applying this strategy when its assumptions are incorrect, i.e., when H is learning. For simplicity, we assume H is greedy and pulls the best arm given the rewards so far. We will consider a 1½-arm bandit: a bandit with two arms, where one has a known expected value and the other is unknown. We show that pairing this suboptimal learner with the 'most-frequent-arm' strategy leads the joint system A∘H to be *inconsistent*:

###### Proposition 4.

If H is a greedy learner and A is 'most-frequent-arm', then there exists an assistive MAB M such that A∘H is inconsistent.

###### Proof.

(Sketch) The proof consists of two steps. First, we show a variant of a classical bandit result: if H and A output the constant arm in the same round, they will for the rest of time. Second, we show that this occurs with finite probability, and we get a positive lower bound on the per-round regret of A∘H. ∎

![Fig. 2](https://media.arxiv-vanity.com/render-output/7708811/x2.png)

Fig. 2: A comparison between assisting an ϵ-optimal H and an ϵ-greedy H in a modified assistive MAB (defined in Section [V-E](#S5.SS5)) where the robot A has to choose between acting and letting H act. This creates a direct exploration-exploitation tradeoff that makes it easier to qualitatively analyze A's behavior. At the top is whether A defers to the human or pulls an arm, followed by what H pulls (if the robot defers), followed by the reward H observes. When the robot models learning, the policy it learns has a qualitative divide into three components: explore, where the robot explores for the human; observe, when the robot lets the human pull arms; and exploit, when the robot exploits this information and pulls its estimate of the best arm. Crucially, the explore component is only found when learning is modeled.
This illustrates Proposition [4](#Thmthm4), which argues that assisting an ϵ-optimal H is different from assisting a learning H. While this is a simplified setting, it shows that the types of mistakes and suboptimality exhibited by learning systems *are not* well modeled by the standard suboptimality assumptions used in research on recommendation systems, preference learning, and human-robot interaction. The suboptimality exhibited by learning systems is stateful and self-reinforcing.

Figure [2](#S3.F2) shows the practical impact of modeling learning. It compares an optimal assistance policy for stationary IOC with an optimal policy for an assistive MAB. In general, assistive MAB policies seem to fit into three steps: *explore* to give H a good estimate of rewards; *observe* H to identify a good arm; and then *exploit* that information. MABs are the standard theoretical model of reinforcement learning, and so this observation highlights the point that the term inverse reinforcement learning is somewhat of a misnomer (as opposed to inverse optimal control): IRL's assumptions about an agent (noisy optimality) lead to very different inferences than *actually* assuming an agent is learning.

### III-C Regret in Assistive Multi-Armed Bandits

Having argued that we can achieve consistency for such a broad class of human policies in an assistive MAB, we now return to the question of achieving low regret. In particular, we investigate the conditions under which A∘H achieves O(log(T)) expected regret, as is possible in the standard MAB. For any given human H, there exists a robot A such that A∘H does as well as H: let A copy H's actions without modification; that is, a_t = h_t for all t. So in the case where H achieves O(log(T)) regret by itself, A∘H can as well. However, a more interesting question is when we can successfully assist a suboptimal H that achieves ω(log(T)) regret. *While one may hypothesize that better human policies lead to better performance when assisted, this is surprisingly not the case, as the next section demonstrates.*

#### III-C1 An inconsistent policy that is easy to assist

![Fig. 3](https://media.arxiv-vanity.com/render-output/7708811/x3.png)

Fig. 3: In Proposition [5](#Thmthm5) we show that it is possible to match the regret from optimal learning in a standard MAB when assisting the 'win-stay-lose-shift' (WSLS) policy. This is because WSLS perfectly communicates the observed rewards to A. Here we show an example trajectory from an approximately optimal policy assisting WSLS (computed with Algorithm [1](#alg1)). At the top is what H suggests, followed by what A pulls, followed by the reward H observes. For comparison, we show the arms selected by the near-optimal Gittins index policy for each belief state. This highlights the importance of communicative learning policies in an assistive MAB.
Consider a Beta-Bernoulli assistive MAB where rewards are binary: r_t ∈ {0,1}. A classic bandit strategy here is 'win-stay-lose-shift' (WSLS) [[32](#bib.bib32)] which, as the name suggests, sticks with the current arm if the most recent reward is one:

    h_t = { a_{t−1},                  if r_{t−1} = 1
          { Unif({k | k ≠ a_{t−1}}),  if r_{t−1} = 0.    (3)

This is a simple strategy that performs somewhat well empirically – although it is easy to see that it is not consistent in isolation, let alone capable of achieving O(log(T)) regret. Indeed, it achieves Θ(T) regret, as it spends a fixed fraction of its time pulling suboptimal arms. However, if H can implement this strategy, the combined system can implement an arbitrary MAB strategy from the standard MAB setting, including those that achieve logarithmic regret. In other words, the robot can successfully assist the human in efficiently balancing exploration and exploitation *despite only having access to the reward parameter through an inconsistent human*.

###### Proposition 5.

Let R∗ be the optimal regret for a Beta-Bernoulli multi-armed bandit. If H implements the WSLS strategy in the corresponding assistive MAB, then there exists a robot strategy A such that A∘H achieves regret R∗.

###### Proof.

(Sketch) The pair (a_{t−1}, h_t) directly encodes the previous reward r_{t−1}. This means that A_t can be an arbitrary function of the history of arm pulls and rewards, and so it can implement the MAB policy that achieves regret R∗. ∎

Figure [3](#S3.F3) compares a rollout of this A∘H (we describe the approach in Section [V-B1](#S5.SS2.SSS1)) with a rollout of a near-optimal policy for a standard MAB.

#### III-C2 Communication upper bounds team performance

The WSLS policy is not unique in that it allows A∘H to obtain logarithmic regret. A less interesting, but similarly effective, policy is for the human to directly encode their reward observations into their actions; the human need not implement a sensible bandit policy. For example, the following purely communicative H also works for a Beta-Bernoulli bandit:

    h_t = { 0,  if r_{t−1} = 0
          { 1,  if r_{t−1} = 1.    (4)

We can generalize the results regarding communicative policies using the notion of mutual information, which quantifies the amount of information obtained through observing the human arm pulls. Let I(X;Y) be the mutual information between X and Y, H(X) be the entropy of X, and H(X|Y) be the entropy of X given Y.

###### Proposition 6.

Suppose that the probability the robot pulls a suboptimal arm at time t is bounded above by some function f(t), that is, P(A_t ≠ j∗) ≤ f(t). Then the mutual information I(j∗; h_1 × ⋯ × h_t) between the human actions up to time t and the optimal arm must be at least (1 − f(t)) log N − 1.

###### Proof.

We can consider the multi-armed bandit task as one of deducing the best arm from the human's actions. This allows us to apply Fano's inequality [[33](#bib.bib33)] to P(A_t ≠ j∗), viewing the robot's pull ^j∗ = A_t as an estimate of the best arm computed from the human's actions. Using the fact that the entropy of a Bernoulli random variable is bounded above by 1, we get

    P(A_t ≠ j∗) log(N−1) ≥ H(j∗ | ^j∗) − 1
                         = H(j∗) − I(^j∗; j∗) − 1
                         ≥ log N − I(j∗; h_1 × ⋯ × h_t) − 1.
Rearranging terms and using P(A_t ≠ j∗) ≤ f(t), we get

    I(j∗; h_1 × ⋯ × h_t) ≥ log N − f(t) log(N−1) − 1
                         ≥ (1 − f(t)) log N − 1.

∎

Intuitively, since the probability of error is bounded by f(t), in a (1 − f(t)) fraction of cases the human actions conveyed enough information for A to successfully choose the best action out of N options. This corresponds to log N bits, so there needs to be at least (1 − f(t)) log N bits of information in H's actions.

###### Corollary 7.

Suppose that the probability the robot pulls a suboptimal arm at time t is bounded above by some function f(t), that is, P(A_t ≠ j∗) ≤ f(t). Then the mutual information I(a_1 × r_1 × ⋯ × a_{t−1} × r_{t−1}; h_1 × ⋯ × h_t) between the human actions up to time t and the human observations must be at least (1 − f(t)) log N − 1.

###### Proof.

Since the best arm is independent of the human actions given the human observations, this follows immediately from the data processing inequality and Proposition [6](#Thmthm6). ∎

In order to achieve regret logarithmic in time, we must have that P(K_t ≠ j∗) ≤ C/t for some C > 0. Applying Proposition [6](#Thmthm6) above implies that we must have

    I(j∗; h_1 × ⋯ × h_t) ≥ (1 − C/t) log N − 1.

Note that the term I(^j∗; h_1 × ⋯ × h_t) depends on both the human policy and the robot policy – no learning human policy can achieve this bound unless the human-robot system A∘H samples each arm sufficiently often. *As a consequence, simple strategies, such as inferring the best arm at each timestep and pulling it, cannot achieve the Θ(log T) lower bound on regret.*

IV Algorithms for Assistive Multi-Armed Bandits
------------------------------------------------

The optimal response to a given human strategy can be computed by solving a partially observed Markov decision process (POMDP) [[34](#bib.bib34)]. The state is the reward parameters θ and H's internal state. The observations are the human arm pulls. In this framing, a variety of approaches can be used to compute policies or plans, e.g., online Monte-Carlo planning [[35](#bib.bib35), [36](#bib.bib36)] or point-based value iteration [[37](#bib.bib37)]. In order to run experiments with large sample sizes, our primary design criterion was fast online performance. This led us to use a direct policy optimization approach. The high per-action cost of Monte-Carlo planners makes them impractical for this problem. Further, explicitly tracking θ and H's internal state is strictly harder than solving the inverse MAB.

    Algorithm 1: Policy Optimization for the Assistive MAB
    Input: human policy H
    initialize parameterized policy π(w;⋅), policy parameters w
    for i ≤ nItrs do
        ξs, rs ← Sample-Trajectories(π(w;⋅), Size, T)
        ^∂w ← Policy-Gradient(ξs, π(w;⋅))    ▹ [38]
        w ← w + ^∂w
    end for

    procedure Sample-Trajectories(π(w;⋅), Size, T)
        initialize empty array ξs
        for i ≤ Size do
            θ ∼ p(Θ)
            for t ≤ T do
                h_t ∼ H_t(h_1, r_1, …, a_{t−1}, r_{t−1})
                a_t ∼ π(w; h_1, a_1, …, h_{t−1}, a_{t−1}, h_t)
                r_t ∼ θ_{a_t}
            end for
            ξ ← [(h_1, a_1, r_1), ..., (h_T, a_T, r_T)]
            ξs ← ξs + [ξ]
        end for
        return ξs
    end procedure

Our approach applies the policy optimization algorithm of [[23](#bib.bib23)] to assistive MABs.
Given an assistive MAB (M,H), we sample a batch of reward parameters θ from the prior p(Θ); generate trajectories of the form ξ = [(h_1, a_1, r_1), ..., (h_t, a_t, r_t)] from H and the current robot policy A(i); and use the trajectories to update the robot policy to A(i+1). During this offline training stage, since we are sampling reward parameters rather than using the ground-truth reward parameters, we can use the generated rewards r_t to improve on A(i).

We represent A's policy as a recurrent neural network (RNN). At each timestep, it observes a tuple (a_{t−1}, h_t), where a_{t−1} is the most recent robot action and h_t is the most recent human action. In response, it outputs a distribution over arm indices, from which an action is sampled. Given a batch of trajectories, we use an approximate policy gradient method [[38](#bib.bib38)] to update the weights of our RNN (our code is available online at <https://github.com/chanlaw/assistive-bandits>). We summarize this procedure in Algorithm [1](#alg1). In our experiments, we used Proximal Policy Optimization (PPO) [[39](#bib.bib39)], due to its ease of implementation, good performance, and relative insensitivity to hyperparameters.

V Experiments
--------------

In our experiments, we used a horizon-50 Beta-Bernoulli bandit with four arms. Pulling the ith arm produces a reward of one with probability θ_i and zero with probability 1−θ_i: Θ = [0,1]^4. We assume a uniform prior over Θ: θ_i ∼ Beta(1,1). We consider 5 classes of human policy:

* ϵ-greedy, a learning H that chooses the best arm in hindsight with probability 1−ϵ and a random arm with probability ϵ. (We performed grid search to pick an ϵ based on empirical performance, and found that ϵ = 0.1 performed best.)
* WSLS, the *win-stay-lose-shift* policy [[32](#bib.bib32)], sticks with the arm pulled in the last round if it returned 1, and otherwise switches randomly to another arm.
* TS, the *Thompson-sampling* policy [[40](#bib.bib40)], maintains a posterior over the arm parameters, and chooses each arm in proportion to the current probability it is optimal. This is implemented by sampling a particle from the posterior of each arm, then pulling the arm associated with the highest value.
* UCL, the *upper-credible-limit* policy [[41](#bib.bib41)], is an algorithm similar to Bayes UCB [[42](#bib.bib42)] with softmax noise, used as a model of human behavior in a bandit environment. (We set K = 4 and softmax temperature τ = 4.)
* GI, the *Gittins index* policy [[43](#bib.bib43)], is the Bayesian optimal solution to an infinite-horizon discounted-objective MAB. (We follow the approximations described by Chakravorty and Mahajan in [[44](#bib.bib44)], and choose a discount rate (γ = 0.9) that performs best empirically using grid search.)

In addition, we also defined the following noisily-optimal human policy to serve as a baseline:

* ϵ-optimal, a *fully informed* H that knows the reward parameters θ, chooses the optimal arm with probability 1−ϵ, and chooses a random action with probability ϵ. (We set ϵ to match that of the ϵ-greedy policy.)
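To make these policies concrete, here is a minimal Python sketch of three of them. This is our own illustration, not the paper's released code (which lives at the GitHub link above); the function names and array-based bookkeeping are our own choices.

```python
# Illustrative sketches of three of the human policies above.
# Ours, not the paper's codebase; names and bookkeeping are our own.
import numpy as np

rng = np.random.default_rng()

def eps_greedy(sums, counts, eps=0.1):
    """Pull the best arm in hindsight w.p. 1 - eps, else a uniformly random arm."""
    if rng.random() < eps:
        return int(rng.integers(len(counts)))
    # Unpulled arms default to the prior mean of Beta(1,1), i.e. 0.5.
    means = np.where(counts > 0, sums / np.maximum(counts, 1), 0.5)
    return int(np.argmax(means))

def wsls(prev_arm, prev_reward, n_arms):
    """Win-stay-lose-shift (Eq. 3): repeat the arm after a win, else switch."""
    if prev_reward == 1:
        return prev_arm
    return int(rng.choice([k for k in range(n_arms) if k != prev_arm]))

def thompson(alpha, beta):
    """Thompson sampling: one posterior sample per arm, pull the argmax."""
    return int(np.argmax(rng.beta(alpha, beta)))
```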
### V-A Inverse Multi-Armed Bandit

TABLE I: Log-density of true reward parameters in a horizon-5 inverse MAB (columns give the assumed H policy).

| Actual H Policy | Correct Policy | ϵ-Optimal |
| --- | --- | --- |
| ϵ-greedy | 0.49 | -0.23 |
| WSLS | 0.95 | 0.13 |
| TS | 0.02 | -0.23 |
| UCL | 0.03 | -0.30 |
| GI | 0.94 | 0.20 |
| ϵ-Optimal | – | 1.55 |

Our first experiment investigates the miscalibration that occurs when we do not model learning behavior. A robot that doesn't model human learning will be overconfident. To show this, we use the Metropolis-Hastings algorithm [[45](#bib.bib45)] to approximate the posterior over reward parameters θ given human actions. We compare the posterior we get when we model learning with the posterior that assumes H is ϵ-optimal. We compared the log-density of the true reward parameters in the posterior conditioned on 5 H actions, under both models, when H is actually learning. We report the results in Table [I](#S5.T1).

For every learning human policy, we find that the log-density of the true reward parameters is significantly higher when we model learning than when we do not. In the case of ϵ-greedy and TS, we find that the posterior that fails to model learning assigns negative log-density to the true parameters. This means the posterior is a *worse estimate* of θ than the prior.

### V-B Assistance is Possible

![Fig. 4](https://media.arxiv-vanity.com/render-output/7708811/x4.png)

Fig. 4: Averaged regret of various human policies (lower = better) over 100,000 trajectories when unassisted, assisted with the correct human model, and assisted assuming that the human is noisily-optimal. Assistance lowers the regret of most learning policies, but it is important to model learning: ignoring that the human is learning can lead to worse performance than no assistance. Note that assisted WSLS performs almost as well as the Gittins index policy, an empirical verification of Proposition [5](#Thmthm5).

Propositions [2](#Thmthm2) and [5](#Thmthm5) prove that it is possible to assist suboptimal learners in theory. In this section, we show that it is possible in practice. We use Algorithm [1](#alg1) to train a recurrent policy for each human policy. In Fig. [4](#S5.F4), we report the performance with assistance and without.
#### V-B1 Policy optimization details

To alleviate the problem of exploding and vanishing gradients [[46](#bib.bib46)], we use Gated Recurrent Units (GRU) [[47](#bib.bib47)] as the cells of our recurrent neural network. The output of the GRU cell is fed into a softmax function, and this output is interpreted as the distribution over actions. To reduce the variance in our policy gradient estimate, we also use a value function baseline [[48](#bib.bib48)] and apply Generalized Advantage Estimation (GAE) [[49](#bib.bib49)]. We used weight normalization [[50](#bib.bib50)] to speed up training. We used a batch size of 250,000 timesteps, or 5,000 trajectories, per policy iteration, and performed 100 policy iterations using PPO.

![Fig. 5](https://media.arxiv-vanity.com/render-output/7708811/x5.png)

Fig. 5: The assisted regret of various policies, plotted against the mutual information between the best arm and the policy's actions in the first 5 timesteps. We also plot the best-fit line, with 95% confidence interval, for the regression between assisted regret and mutual information. We augmented our policies with variants of ϵ-greedy and Thompson sampling with less randomness. Policies with high mutual information lead to lower regret when assisted, supporting our theoretical findings.

#### V-B2 Results

To quantitatively test the hypothesis that our learned models successfully assist, we perform a two-factor ANOVA. We found a significant interaction effect, F(3, 999990) = 1778.8, p < .001, and a post-hoc analysis with Tukey HSD corrections showed that we were able to successfully assist the human for all four suboptimal learning policies (p < .001).

Our WSLS results agree with Proposition [5](#Thmthm5). Assisted WSLS achieves a regret of 3.5, close to the regret of the best-performing unassisted policy, 3.2. The gap in reward is due to our choice to employ approximate policy optimization. We provide an example trajectory in Fig. [3](#S3.F3). The actions selected are almost identical to those of the optimal policy. This suboptimality also accounts for the small increase in regret when assisting the Gittins index policy.

#### V-B3 Modeling learning matters

Proposition [4](#Thmthm4) shows that modeling learning matters. We compared assistance assuming that H is ϵ-optimal with assistance with the correct (learning) model. We report the results in Fig. [4](#S5.F4). We found that the regret with the wrong model is higher than no intervention in every case but UCL. Assisted WSLS with the wrong model has double the regret of assisted WSLS with the correct model.
Proposition [4](#Thmthm4) shows that, in theory, ignoring learning leads to inconsistency. Fig. [4](#S5.F4) shows that this mistake leads to higher regret empirically.

### V-C Mutual Information Predicts Performance

Proposition [6](#Thmthm6) implies that high mutual information is required for good team performance. To verify this, we computed the mutual information for a variety of combined policies after 5 timesteps. Fig. [5](#S5.F5) plots this against the regret of the combined system. We consider several variants of ϵ-greedy and TS that are more or less deterministic. We consider ϵ ∈ [0, 0.02, 0.05, 0.1]. To make TS more deterministic, we use the mean of a sample of n particles to select arms. We consider n ∈ [1, 2, 3, 10, 30, ∞].

TABLE II: Increase in reward from robot assistance (columns give the assumed H policy).

| Actual H | ϵ-greedy | WSLS | TS | UCL | GI | ϵ-optimal |
| --- | --- | --- | --- | --- | --- | --- |
| ϵ-greedy | 2.13 | -0.60 | -2.18 | -2.20 | -0.11 | -3.95 |
| WSLS | 0.94 | 3.75 | 0.80 | 0.10 | -2.21 | -1.97 |
| TS | 0.33 | 0.66 | 0.60 | 0.44 | -1.53 | -0.19 |
| UCL | 1.76 | -1.19 | 2.51 | 2.43 | 0.74 | 1.28 |
| GI | -1.09 | -0.28 | -0.77 | -0.85 | -0.71 | -1.50 |
| ϵ-optimal | 0.24 | 1.17 | 1.24 | 1.28 | -3.09 | 1.46 |

Across this data, higher mutual information is associated with lower assisted regret, r(10) = −.82, p < .001. Furthermore, by looking at the ϵ-greedy and TS results as a sequence, we can observe a clear and distinct pattern: policies that are more deterministic tend to be easier to help. This is supported by the results in Table [I](#S5.T1), which shows that it is easier to infer reward parameters for WSLS and GI (i.e., the two policies with the highest mutual information) than for TS and ϵ-greedy.

### V-D Sensitivity to Model Misspecification

In the previous three experiments, we assumed knowledge of the correct learning policy. In this experiment, we consider the implications of incorrectly modeling learning. We took the policies we trained in Section [V-B](#S5.SS2) and tested them with every human policy. We report the net change in reward in Table [II](#S5.T2). (In the original table, cases where the robot A successfully assists the human H are colored green, cases where it fails to assist are colored red, and the best performance in each row is bolded.)

Modeling learning (even with the incorrect model) generally leads to lower regret than assuming ϵ-optimality, for every learning H policy. However, when the robot has the wrong model of learning, it can fail to assist the human. For example, ϵ-greedy is only successfully assisted when it is correctly modeled. This argues that research into the learning strategies employed by people in practice is an important area for future research.
An intriguing result is that assuming ϵ-greedy *does* successfully assist all of the suboptimal learning policies. This suggests that, although some learning policies must be well modeled, learning to assist some models can be transferred to other models in some cases. On the other hand, trying to assist GI leads to a policy that hurts performance across the board. In future work, we plan to identify classes of learners which can be assisted by the same robot policy.

### V-E Other Paradigms of Assistance

![Fig. 6](https://media.arxiv-vanity.com/render-output/7708811/x6.png)

Fig. 6: Averaged regret of various human policies (lower = better) over 100,000 trajectories under different interaction modes. Assistance lowers the regret of ϵ-greedy, WSLS, and UCL in both the preemptive and turn-taking interaction modes. Assistance while ignoring learning is worse than no assistance in almost every case. This offers further support for the importance of modeling learning when assisting humans.

The assistive multi-armed bandit considered so far only captures one mode of interaction. It is straightforward to consider extensions to different modes. We consider two such modes. The first is turn-taking, where H and A take turns selecting arms. This can be more difficult because the robot has to act in early rounds, when it has less information, and because the human has to act in later rounds, when H may be noisy and the best arm has already been identified. The second variant we consider is preemptive interaction. In this case, A goes first and either pulls an arm or lets H act. This creates an exploration-exploitation tradeoff: A only observes H's arm pulls by actually allowing H to pull arms, and so it must choose between observing H's behavior and exploiting that knowledge.

Fig. [6](#S5.F6) shows the experiment from Section [V-B](#S5.SS2) applied to each of these interaction modes. The results are largely similar to those of teleoperation. We are able to assist the suboptimal policies, and modeling learning as ϵ-optimality increases regret in all cases. We see that WSLS is a less attractive policy in these settings: because H's actions are always executed when they are observed, it no longer makes sense for H to employ a *purely* communicative policy. However, we do still see results that confirm Proposition [6](#Thmthm6): more deterministic policies that reveal more information are easier to help.

#### V-E1 Explore, observe, then exploit

In looking at the policies learned for the preemptive interaction mode, we see an interesting pattern emerge. Because the policy has to choose between selecting arms directly and observing H, we can determine when it is observing the human by looking at rollouts of the learned policy. A clear pattern emerges. A initially *explores* for H: it selects arms uniformly to give H a good estimate of θ. Then, A *observes* H's arm pulls to identify the optimal arm. For the final rounds, A *exploits* this information and pulls its estimate of the optimal arm.
Fig. [2](#S3.F2) compares a representative trajectory with one that is optimized against an ϵ-optimal H.

VI Discussion
--------------

### VI-A Summary

In this work, we studied the problem of assisting a human who is learning about their own preferences. Our central thesis is that by modeling and influencing the dynamics of a human's learning, we can create robots that can better assist people in achieving their preferences. We formalized this as the assistive multi-armed bandit problem, which extends the multi-armed bandit to account for teleoperation and human learning. We analyzed our formalism theoretically, then used policy optimization in proof-of-concept experiments that supported our theoretical results. Surprisingly, we found that a person who is better at learning in isolation does not necessarily lead to a human-robot team that performs better. We highlighted a theoretical connection between the amount of information communicated by the human policy and the best assisted performance, which we validated in our experiments.

### VI-B Limitations and Future Work

#### VI-B1 Stateful environments

One significant limitation of this work is that we assume the environment the human is acting in is stateless. In practice, the environmental state changes over time, and the state can greatly influence the reward associated with certain actions. This suggests natural extensions of the assistive multi-armed bandit to the contextual bandit [[51](#bib.bib51)] and full Markov decision process (MDP) [[52](#bib.bib52)] settings.

#### VI-B2 Realistic human policies

Another significant limitation of this work is the use of simple bandit policies in place of actual human policies. In addition, we do not have access to the true human policy in any case. Future work can remedy this by incorporating more realistic policies, and can study to what extent these results generalize to assisting actual humans.

### VI-C Closing Remarks

The assistive multi-armed bandit is representative of a world where robots are supposed to assist people, even though people haven't figured out what they want yet. Laying down the theoretical foundations for these kinds of interaction paradigms is an important and under-served aspect of HRI.

Acknowledgements
----------------

We thank the members of the InterACT lab and the Center for Human Compatible AI for helpful advice. This work was partially supported by OpenPhil, AFOSR, NSF, and NVIDIA.
Coronavirus: Justified Key Insights Thread This is a thread to list important insights and key open questions about the coronavirus and the coronavirus response. The inspiration for this thread is Eliezer's post below. I'd like this thread to be a source of claims and ideas that are self-contained and well-explained. This is not a thread to drop one-liners that assume I've been following your particular news feed or know what's happening in your country or that I've read a bunch of studies on (say) viral load. There's a place for such high-context discussion, and it is not this thread. Please include in your answers either a claim or an open question, along with an explanation or an explicit model under which it makes sense. I will be moving answers to the comments if they don't meet my subjective quality bar for justification – see the last justified answers thread for examples of what quality answers look like. The purpose of giving models and data is to allow other people to build on your answer. Everyone can make arbitrary claims, but models and evidence allow for verification and dialogue. The more concrete the explanation the better. Speculation is fine, uncertain models are fine; sources, explicit models and numbers for variables that other people can play with based on their own beliefs are excellent.  This thread is inspired by a post by Eliezer Yudkowsky which I'll reproduce below, in which Eliezer lists eight answers that this sort of post would come up with.  These are not justified to the standard of the thread, so you (you!) can get some easy karma by leaving an answer that justifies one of these with the sources/data/explanation needed to argue for it. It includes much of the discussion elsewhere on LW (e.g. by Wei Dai, Zvi, Robin, and others), so it shouldn't be hard to find the prior discussion. Eliezer's post (link): > What do we early-warning cognoscenti now know about Covid-19 that others haven't currently figured out? What's the TOC of that blog post? @WilliamAEden @robinhanson >
92a9e920-8fad-4c49-ad21-36365b298cf4
trentmkelly/LessWrong-43k
LessWrong
Mech Interp Challenge: January - Deciphering the Caesar Cipher Model

I'm writing this post to discuss solutions to the November challenge, and present the challenge for this January. If you've not read the first post in this sequence, I'd recommend starting there - it outlines the purpose behind these challenges, and recommended prerequisite material.

January Problem

The problem for this month is interpreting a model which has been trained to classify a sequence according to the Caesar cipher shift value which was used to encode it. The sequences have been generated by taking English sentences containing only lowercase letters & punctuation, and choosing a random value X between 0 and 25 to rotate the letters (e.g. if the value was 3, then a becomes d, b becomes e, and so on, finishing with z becoming c).

The model was trained using cross entropy loss to predict the shift value X for the text it's been fed, at every sequence position (so for a single sequence, the correct value will be the same at every sequence position, but since the model has bidirectional attention, it will find it easier to predict the value of X at later sequence positions).

There are 3 different modes to the problem, to give you some more options! Each mode corresponds to a different dataset, but the same task & same model architecture.

Easy mode

In easy mode, the data was generated by:

* Choosing the 100 most frequent 3-letter words in the English Language (as approximated from a text file containing the book "Hitchhiker's Guide To The Galaxy")
* Choosing words from this len-100 list, with probabilities proportional to their frequency in the book
* Separating these words with spaces

The model uses single-character tokenization. The vocabulary size is 27: each lowercase letter, plus whitespace.

Medium mode

This is identical to easy, the only difference is that the words are drawn from this len-100 list uniformly, rather than according to their true frequencies.

Hard mode

In hard mode, the data was generated from random slices of OpenWebText (i
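For concreteness, here is a minimal sketch of the data-generating process described above (my own illustration, not the challenge's actual code): rotate each lowercase letter by a random shift X in 0..25 and leave everything else alone.

```
import random
import string

def caesar_encode(text, shift):
    # Rotate lowercase letters by `shift`; leave whitespace/punctuation alone.
    return ''.join(
        chr((ord(c) - ord('a') + shift) % 26 + ord('a'))
        if c in string.ascii_lowercase else c
        for c in text
    )

shift = random.randrange(26)   # the label X the model must predict
sample = caesar_encode("the quick brown fox", shift)
print(shift, sample)
```

The model has to recover `shift` from statistics of the rotated text alone, which is why the frequency structure of the dataset (easy vs. medium vs. hard) matters so much.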
94ac6985-4a19-416c-adba-c52a4d505fd2
trentmkelly/LessWrong-43k
LessWrong
Safety standards: a framework for AI regulation

Purpose: provide a public resource on safety standards that introduces the ideas I've heard people talk about and some new ideas. I don't know which ideas are new because the discourse has been largely non-public. I think safety standards provide a clear regulatory direction and suggest several useful directions of work, so I think it would be good for more people to be familiar with the topic.

Definition

In the context of AI, I'll define a safety standard to be a policy that a lab agrees to follow or is required to follow by law that involves the following:

* A triggering condition that is intended to track the properties of a specific AI system.
* Obligations that a lab must follow if this triggering condition is met.

Here's an example: "If an AI system is capable of autonomous replication[1], the lab that developed it must spend at least 5% of the training costs on infosecurity audits over the course of 1 month before deploying the system."

These standards could be set, maintained, and enforced by a national AI regulatory agency in collaboration with external experts -- similar to how the FDA requires safety tests of many drugs before they can be sold.

Triggering conditions

A triggering condition is meant to indicate that an AI system crosses a threshold of dangerousness that warrants specific safety measures.

There are two ways to design triggering conditions so that they track the 'dangerousness' of a system. They can depend on (1) direct evaluations of the system's capabilities or (2) proxies for the system capabilities.

Direct evaluations of dangerous capabilities

By capability, I mean any behavior that an AI system exhibits or can be made to exhibit by an actor with a budget less than X that is given unrestricted access to the system and can integrate it with other software. The choice of X should depend on the incentives actors are likely to have to elicit the capability if they were given unrestricted access to the system. For example, even i
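To make the two-part definition concrete, here is a hypothetical sketch of a safety standard as a data structure; every name and field here is illustrative, not an existing regulatory schema:

```
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SystemEvaluation:
    # Illustrative evaluation results for a specific AI system.
    autonomous_replication: bool
    training_cost_usd: float

@dataclass
class SafetyStandard:
    name: str
    # Triggering condition: a predicate over an evaluated system.
    trigger: Callable[[SystemEvaluation], bool]
    # Obligations the lab must follow if the trigger fires.
    obligations: List[str]

example = SafetyStandard(
    name="infosec-audit-before-deploy",
    trigger=lambda ev: ev.autonomous_replication,
    obligations=[
        "spend >= 5% of training costs on infosecurity audits",
        "complete the audits over 1 month before deploying the system",
    ],
)
```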
105850fb-1f13-4d0b-92d2-fa11c17f2f9e
trentmkelly/LessWrong-43k
LessWrong
Meetup : West LA Meetup - Eliezer and Nick Share a Cab...

Discussion article for the meetup : West LA Meetup - Eliezer and Nick Share a Cab...

WHEN: 19 June 2013 11:20:26AM (-0700)
WHERE: 10850 West Pico Blvd, Los Angeles, CA 90064

When: 7:00pm Wednesday, June 5th.

Where: The Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavillion on the second floor, right by the movie theaters. The entrance sign says "Lounge". Parking is free for 3 hours.

Discussion: Eliezer Yudkowsky and Nick Bostrom once shared a cab and found an extra $20. EY said that he thought it was his with probability 20%, while NB said that it was his with probability 15%. How should they split the extra $20 they found? I will present at least four different justifications for four different answers. Hopefully we will be able to reach a consensus on the best way to distribute the money. I will also explain why I think this question could be very important for making decisions under uncertainty.

No prior knowledge of or exposure to Less Wrong is necessary; this will be generally accessible and useful to everyone who values thinking for themselves. There will be open general conversation until 7:30, and that's always a lot of good, fun, intelligent discussion! There will be a whiteboard with Bayes' Theorem written on it.

Discussion article for the meetup : West LA Meetup - Eliezer and Nick Share a Cab...
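As a teaser, here are two of the many possible answers (illustrative only, not necessarily among the four justifications presented at the meetup):

```
found = 20.0
p_ey, p_nb = 0.20, 0.15   # each person's probability that the $20 is his

# 1) Split in proportion to the stated probabilities.
prop_ey = found * p_ey / (p_ey + p_nb)              # ~= $11.43 for EY
# 2) Pay out each claim and split the unclaimed 65% evenly.
even_ey = found * (p_ey + (1 - p_ey - p_nb) / 2)    # $10.50 for EY
print(round(prop_ey, 2), round(even_ey, 2))
```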
6d82e005-5b39-46a5-b376-484058ed1503
trentmkelly/LessWrong-43k
LessWrong
On the Cost of Thriving Index

Scott Winship argued recently that the ‘cost of thriving’ has fallen, pushing back once again against Oren Cass and his rather arbitrarily calculated ‘Cost of Thriving Index (COTI).’ Alex Tabarrok posted today on Marginal Revolution that Winship and coauthor Horpedahl were right, but that they face an uphill battle because people feel they are wrong, and suggested that our newly high time-value warps our perceptions. He points to Linder’s Theorem, which I hadn’t seen before, which states (correctly): “rising productivity decreases the demand for commodities whose consumption is expensive in time.”

Who is right, and why? If thriving is easier now, why does everyone think it isn’t?

OVERVIEW OF THE DISAGREEMENT

My analysis of the details shows that both perspectives are flawed. Cass uses sloppy calculations, especially double counting employer-provided health insurance payments, and his calculations importantly exclude changes in taxation.

Winship and Horpedahl offer useful correctives to the sloppiness of the original Cass calculations in some places, and point out the rather glaring omission of taxation. But they misunderstand the purpose of creating a Cost of Thriving Index. In several places, they therefore do the wrong calculation, confusing improved quality of goods with reduced cost of thriving.

COTI attempts to capture an important thing: Even if the quality of the standard ‘basket of goods’ is improved, that doesn’t change that there are huge pressures forcing people to buy whatever is the standard goods basket, and ‘improved quality’ is nice but fails to cancel this effect out. Winship does not seem to appreciate why that is an important thing to be measuring.

Excluding women and younger workers here is not arbitrary either. The whole idea is to ask what it takes for a typical man to support a typical family – you can’t simply decide to measure something else.

Let’s look at the proposed modifications, as argued in the Twitter thread.

> Scott Winshi
cfa44067-508f-4b8f-941f-93271dccbe44
trentmkelly/LessWrong-43k
LessWrong
AISN #54: OpenAI Updates Restructure Plan

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

In this edition: OpenAI claims an updated restructure plan would preserve nonprofit control; A global coalition meets in Singapore to propose a research agenda for AI safety.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. Subscribe to receive future versions.

----------------------------------------

OpenAI Updates Restructure Plan

On May 5th, OpenAI announced a new restructure plan. The announcement walks back a December 2024 proposal that would have had OpenAI’s nonprofit—which oversees the company’s for-profit operations—sell its controlling shares to the for-profit side of the company. That plan drew sharp criticism from former employees and civil-society groups and prompted a lawsuit from co-founder Elon Musk, who argued OpenAI was abandoning its charitable mission.

OpenAI claims the new plan preserves nonprofit control, but is light on specifics. Like the original plan, OpenAI’s new plan would have OpenAI Global LLC become a public-benefit corporation (PBC). However, instead of the nonprofit selling its control over the LLC, OpenAI claims the nonprofit would retain control of the PBC. It’s unclear what form that control would take. The announcement claims that the nonprofit would be a large (but not necessarily majority) shareholder of the PBC, and, in a press call, an OpenAI spokesperson told reporters that the nonprofit would be able to appoint and remove PBC directors.

The new plan may still remove governance safeguards. Arguably, the new plan may not preserve any of the governance safeguards that critics said would be lost in the original reorganization plan. First, unlike OpenAI’s original “capped-profit” structure, the new PBC will issue ordinary stock with no ceiling on investor returns. The capped-profit model was intended to ensure that OpenAI equitably distributed the resou
0574151e-9268-49aa-a4c8-1f01d6bfd3fb
trentmkelly/LessWrong-43k
LessWrong
Books: Lend, Don't Give

EA books give a much more thorough description of what EA is about than a short conversation, and I think it's great that EA events (ex: the dinners we host here in Boston) often have ones like Doing Good Better, The Precipice, or 80,000 Hours available. Since few people read quickly enough that they'll sit down and make it through a book during the event, or want to spend their time at the event reading in a corner, the books make sense if people leave with them.

This gives organizers ~3 options: sell, lend, or give. Very few people will be up for buying a book in a situation like this, so most EA groups end up with either lending or giving. I have the impression that giving is more common, but I think lending is generally a lot better:

* A loan suggests that when you're done reading the book you've considered the ideas and don't need the book anymore. Giving suggests it's more like doctrine you keep and reference.
* You don't get back all the books you lend, and that's ok, but in my experience we do get most of them back. Lending out the same book over and over is a lot cheaper than buying a new book each time. Giving books is (and looks) unnecessarily lavish.
* Returning the book offers a chance to talk about reactions.

Lending out books doesn't mean you need to run it like a library, with records and late fees. We've put the books out with stickies saying "borrow this book", they go out, and they mostly come back again.

Comment via: facebook, mastodon
83e735fe-0bed-488d-a6b8-7920a4515e39
trentmkelly/LessWrong-43k
LessWrong
The Illusion of Iterative Improvement: Why AI (and Humans) Fail to Track Their Own Epistemic Drift

I just conducted a fascinating experiment with ChatGPT4 that revealed a fundamental failure in AI alignment—one that goes beyond typical discussions of outer and inner alignment. The failure? ChatGPT4 was unable to track whether its own iterative refinement process was actually improving, exposing a deeper limitation in recursive reasoning. I got ChatGPT4 itself to describe it:

> When designing an iterative process for improving a document, I initially assumed that each new version would naturally refine and enhance the previous one. To test this, the user had me generate multiple versions, each informed by an increasingly optimized prompt. However, when I compared the outputs, I did not find the final version to be the best. The user pointed this out and had me try again with a revised prompt designed to improve the process. Yet once more, the last version was not the best. At this point, the user demonstrated that no matter what prompt I was given, if I lacked the capacity for recursive reasoning, I would continue to fail—not because the prompts were flawed, but because I had no built-in ability to track whether my own iterative process was actually converging toward an optimal state or merely cycling through different errors. The deeper failure was not in the document revisions, but in the assumption that a fixed prompt could resolve a fundamentally recursive failure mode. This revealed a broader insight: if an intelligence system does not have an explicit mechanism for recursively tracking how its own refinements evolve, it will remain blind to its own failure patterns, even when it appears to be improving.

What does this mean for AI alignment? One solution is a functional model of general problem-solving ability (intelligence) that identifies the minimally reducible set of functions required for intelligence, so that model can potentially be applied to any process to see where that process is constrained in its problem-solving ability (constrained in its int
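For what it's worth, the missing mechanism is easy to sketch once you have some quality score you trust: track the score of every draft explicitly and check whether the process is actually converging, instead of assuming later means better. (A minimal illustration; `score` here is a stand-in, not a solved problem.)

```
def track_refinement(drafts, score):
    # Score every version and report whether refinement is converging.
    scores = [score(d) for d in drafts]
    best = max(range(len(scores)), key=scores.__getitem__)
    return {
        "scores": scores,
        "best_version": best,
        "last_is_best": best == len(scores) - 1,
        "monotonically_improving": all(a <= b for a, b in zip(scores, scores[1:])),
    }

print(track_refinement(["draft one", "draft two, longer"], score=len))
```

Of course, the hard part the post points at is that neither the model nor the user had such a `score` in hand; the sketch only shows what "tracking your own refinement" would minimally require.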
8ae0c7a8-1956-49e9-a349-ccb0fe0d281e
trentmkelly/LessWrong-43k
LessWrong
Poker with Lennier

In J. Michael Straczynski's science fiction TV show Babylon 5, there's a character named Lennier. He's pretty Spock-like: he's a long-lived alien who avoids displaying emotion and feels superior to humans in intellect and wisdom. He's sworn to always speak the truth.

In one episode, he and another character, the corrupt and rakish Ambassador Mollari, are chatting. Mollari is bored. But then Lennier mentions that he's spent decades studying probability. Mollari perks up, and offers to introduce him to this game the humans call poker.

Later, we see Mollari, Lennier, and some others playing poker. Lennier squints at his hand and remarks, "Interesting. The odds of this combination are 5000:1, against." Everybody considers this revelation for a moment, then folds, conceding the hand. Mollari is exasperated, and tells him to stop doing that. Because Lennier is essentially announcing that he has a good hand, Lennier's winning far fewer chips than he should; your biggest wins in poker are when people underestimate you.

The other poker players, and the audience, are picturing Lennier as having a hand something like this:

This is a four of a kind, the second-best hand in most poker games. The odds against being dealt a four of a kind in a hand of five cards are 4164:1--one might, in a moment of excitement, round that up to an even five thousand. We the audience are meant to have a hearty chuckle over how theory doesn't translate into practice.

But! We never get to see Lennier's cards, which means we get to picture whatever we want. I choose to believe, and I urge you to do so as well, that Lennier had this hand:

This is one of the worst hands possible in poker: ace-high. It loses to almost everything. By causing everyone else to fold, Lennier won a hand he probably would otherwise have lost. He knew exactly what he was doing.

"Wait," I hear you say. "Like most members of the proud Minbari race, Lennier is sworn to always tell the truth. How could he ev
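Lennier's number checks out, by the way; a few lines of arithmetic confirm the four-of-a-kind odds:

```
from math import comb

hands = comb(52, 5)            # 2,598,960 possible five-card hands
quads = 13 * comb(4, 4) * 48   # 13 ranks, times 48 choices of odd card = 624
print(hands // quads - 1)      # odds against: 4164 (to 1)
```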
bf4de655-c08d-4297-9449-2192ff7f19c3
trentmkelly/LessWrong-43k
LessWrong
[LINK] How to increase conscientiousness I wrote an interactive blog post, How To Increase Conscientiousness, which has some steps which I think might increase your conscientiousness. I'm not sure if it works, but I would love to see some curious low-conscientiousness people try it and post your results here. If you do it, please do it before reading the comments on this post, as they may contain spoilers. If you are feeling especially helpful, also take a Big Five personality test like this one and report your percentile result on Conscientiousness.
1f4c7845-8d41-4644-8c40-7be61c39e320
trentmkelly/LessWrong-43k
LessWrong
How to get nerds fascinated about mysterious chronic illness research?

Like many nerdy people, back when I was healthy, I was interested in subjects like math, programming, and philosophy. But 5 years ago I got sick with a viral illness and never recovered. For the last couple of years I've been spending most of my now-limited brainpower trying to figure out how I can get better.

I occasionally wonder why more people aren't interested in figuring out illnesses such as my own. Mysterious chronic illness research has a lot of the qualities of an interesting puzzle:

* There is a phenomenon with many confusing properties (e.g. the specific symptoms people get, why certain treatments work for some people but not others, why some people achieve temporary or permanent spontaneous remission), exactly like classic scientific mysteries.
* Social reward for solving it: Many people currently alive would be extremely grateful to have this problem solved. I believe the social reward would be much more direct and gratifying compared to most other hobby projects one could take on.

When I think about what mysterious chronic illness research is missing, in order to make it of intellectual interest, here's what I can think of:

* Lack of a good feedback loop: With subjects like math and programming, or puzzle games, you can often get immediate feedback on whether your idea works, and this makes tinkering fun. Common hobbies like cooking and playing musical instruments also fit this pattern. In fact, I believe the lack of such feedback loops (mostly by being unable to access or afford equipment) personally kept me from becoming interested in biology, medicine, and similar subjects until when I was much older (compared to subjects like math and programming). I'm wondering how much my experience generalizes.
* Requires knowledge of many fields: Solving these illnesses probably requires knowledge of biochemistry, immunology, neuroscience, medicine, etc. This makes it less accessible compared to other hobbies. I don't think this is a huge barrier thou
d0c4c6a5-324a-4df5-a05f-cb4b804c4507
trentmkelly/LessWrong-43k
LessWrong
AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years

by Trevor Chow, Basil Halperin, and J. Zachary Mazlish

In this post, we point out that short AI timelines would cause real interest rates to be high, and would do so under expectations of either unaligned or aligned AI. However, 30- to 50-year real interest rates are low. We argue that this suggests one of two possibilities:

1. Long(er) timelines. Financial markets are often highly effective information aggregators (the “efficient market hypothesis”), and therefore real interest rates accurately reflect that transformative AI is unlikely to be developed in the next 30-50 years.
2. Market inefficiency. Markets are radically underestimating how soon advanced AI technology will be developed, and real interest rates are therefore too low. There is thus an opportunity for philanthropists to borrow while real rates are low to cheaply do good today; and/or an opportunity for anyone to earn excess returns by betting that real rates will rise.

In the rest of this post we flesh out this argument.

1. Both intuitively and under every mainstream economic model, the “explosive growth” caused by aligned AI would cause high real interest rates.
2. Both intuitively and under every mainstream economic model, the existential risk caused by unaligned AI would cause high real interest rates.
3. We show that in the historical data, indeed, real interest rates have been correlated with future growth.
4. Plugging the Cotra probabilities for AI timelines into the baseline workhorse model of economic growth implies substantially higher real interest rates today.
5. In particular, we argue that markets are decisively rejecting the shortest possible timelines of 0-10 years.
6. We argue that the efficient market hypothesis (EMH) is a reasonable prior, and therefore one reasonable interpretation of low real rates is that since markets are simply not forecasting short timelines, neither should we be forecasting short timelines.
7. Alternatively, if you believe that financial marke
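A minimal numerical sketch of the core mechanism, assuming the standard Ramsey rule from the workhorse growth model, r = rho + gamma * g (the parameter values below are illustrative, not the post's calibration):

```
def real_rate(rho=0.01, gamma=1.0, g=0.02):
    # Ramsey rule: time preference plus risk aversion times expected growth.
    return rho + gamma * g

print(round(real_rate(g=0.02), 3))  # ~3%: roughly today's trend growth
print(round(real_rate(g=0.30), 3))  # ~31%: "explosive growth" from aligned AI
```

Unaligned AI pushes in the same direction: an extinction hazard acts like an addition to the effective discount rate, so expected doom also implies high real rates.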
8197fe60-825a-42ca-a2db-17f5fda61b53
trentmkelly/LessWrong-43k
LessWrong
Weekly LW Meetups

This summary was posted to LW Main on June 13th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

* Bangalore meetup: 29 June 2014 04:40PM

Irregularly scheduled Less Wrong meetups are taking place in:

* Helsinki Meetup: 15 June 2014 05:00PM
* Houston, TX: 14 June 2014 02:00PM

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

* Boston - Computational Neuroscience of Perception: 15 June 2014 03:30PM
* Brussels - Neuroatypicality: 14 June 2014 07:40PM
* Canberra: Decision Theory: 14 June 2014 06:00PM
* [LA] Second MIRIxLosAngeles Meeting: 14 June 2014 10:00AM
* London Social Meetup (possibly) in the Sun: 15 June 2014 02:00PM
* [Melbourne] July Rationality Dojo: Disagreement: 06 July 2014 03:00PM
* Sydney Meetup - June: 25 June 2014 07:00PM
* West LA: 18 June 2014 07:00PM

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Mountain View, New York, Philadelphia, Research Triangle NC, Salt Lake City, Seattle, Sydney, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!

In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page every Friday. These are an attempt to collect information on all the meetups happening in upcoming weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll also have the benefit of having your meetup mentioned in a weekly overview. These ov
976cb4c8-a1b6-4413-8fbf-fc0705273471
trentmkelly/LessWrong-43k
LessWrong
London meetup, Shakespeare's Head, Sunday 2011-03-06 14:00

Our last London meetup was a fantastic success! We're doing it again, same time same place.

The time: Sunday 6th March, at 2pm.

The place: the Shakespeare's Head (official page) on Kingsway near Holborn Tube station. As before, we'll have a big picture of a paperclip on the table so you can find us; also, I look like this.

I'm hoping that we can graduate to meeting up every other month, on the first Sunday of the month. Hope to see lots of you there!
0fedf90b-ca66-4a72-9d15-37d83825907a
trentmkelly/LessWrong-43k
LessWrong
Utility is not the selection target

Epistemic status: shitpost.

Suppose that you are selecting for utility. Naively, you might think that this means you are selecting for utility, but actually this is not the case.

Sometimes greenbrownness is the selection target

Military gear is sort of greenbrownish. This occurs because the designers of military gear are selecting for keeping soldiers alive, which benefits from camouflage, which in the common environments works best if it is greenbrownish. However, it fails at this in nongreenbrownish environments. Hence, keeping soldiers alive is not the selection target of military gear.

Sometimes disutility is the selection target

In the prisoner's dilemma, the highest-utility outcome is (cooperate, cooperate). However, utility maximizers will end up in (defect, defect), which is strictly lower utility than (cooperate, cooperate). Thus, utility maximization might have utility minimization as the selection target.

Sometimes the superficial appearance of utility is the selection target

Let's say that you see a website banner saying "You are the 1000000000th visitor to the site. Click to receive your award!". This looks like something people would say if they want to give you something, so you click the banner. This does not end well for you, but you went into it selecting it because it looked like it would end well for you.

Conclusion

Beware about reasoning about utility maximizers as maximizing utility. Utility maximizers may instead be maximizing many other things that are unrelated to utility, and not be maximizing utility.
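To make the prisoner's dilemma step above concrete, here is a standard payoff matrix (illustrative numbers, higher is better) showing why two utility maximizers land on the strictly worse outcome:

```
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# Whatever the other player does, defecting pays more (5 > 3 and 1 > 0),
# so both maximizers defect and get (1, 1) instead of (3, 3).
for other in ("cooperate", "defect"):
    assert payoffs[("defect", other)][0] > payoffs[("cooperate", other)][0]
```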
a483d57e-4b2a-4ba6-a63e-7c7643bf87f0
trentmkelly/LessWrong-43k
LessWrong
AI overhangs depend on whether algorithms, compute and data are substitutes or complements

Summary

* Decreasing the cost of a factor of production creates an overhang if the elasticity of substitution between factors is less than 1
* This corresponds to whether or not the elements of the production function are substitutes or complements - if you have a good enough algorithm does it not really matter how much compute and data you have, or does that algorithm get better performance the more compute and data you have?
* The concept of overhang makes sense if the inputs into the production function are complements - having more of one input makes the returns to increasing the other inputs higher - but not really if they're substitutes
* Number of parameters and data usage are complements for large language models according to the Chinchilla paper
* Things it would be useful to do: empirically estimate the elasticities of substitution between different factors of production for different types of AI systems, e.g. LLMs, deep RL on scientific tasks, etc.

Production functions, complementarities and AI

In economics, technologies are represented by production functions - functions which take inputs and map them to outputs. For instance, to make clothes you might need cotton, sewing machines and workers. In machine learning, we actually have two outputs that we're interested in. We're interested firstly in whatever it is the model is trying to optimise for, like loss or reward, and secondly, we're interested in the value from a model achieving a certain loss or reward.

There are roughly four inputs into the machine learning production function: data, compute, algorithms, and engineering quality. The ML production function maps these four inputs to loss or reward which is then mapped to the thing we care about. In the case of large language models (LLMs), these are mapped to loss which is then mapped to how well the LLM does on the task you're ultimately interested in it doing.

One class of policy interventions for trying to make AI go well is to m
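A minimal sketch of the headline claim, assuming a standard CES production function Y = (a*x1^p + (1-a)*x2^p)^(1/p) with elasticity of substitution sigma = 1/(1-p) (parameter values are illustrative): when sigma < 1 the inputs are complements, and scaling up one input while the other stays fixed quickly stops helping, which is exactly the overhang intuition.

```
def ces(x1, x2, p, a=0.5):
    # CES production function; sigma = 1 / (1 - p), so p < 0 means complements.
    return (a * x1**p + (1 - a) * x2**p) ** (1 / p)

for p, label in ((-1.0, "sigma=0.5 (complements)"),
                 (0.5, "sigma=2.0 (substitutes)")):
    gain = ces(1000, 1, p) / ces(10, 1, p)  # 100x more of input 1 alone
    print(label, round(gain, 2))
# Complements: output barely moves (~1.1x); substitutes: it keeps scaling (~61x).
```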
7dc3e235-c2b3-4929-a92d-47286511162d
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Trying to find the underlying structure of computational systems

*This post was written in the* [*SERI MATS*](https://www.serimats.org/) *program under* [*John Wentworth*](https://www.lesswrong.com/users/johnswentworth)*.*

In our quest to find steering mechanisms in computational systems, we first have to find the right framework to look at the internals of such a system. This framework should naturally express the systems we normally associate with computation, like the brain and computers themselves, as well as more abstract interfaces like Turing Machines. Turing Machines, arithmetic operations, and Neural Nets are in some sense equivalent, so if we use our framework on equivalent computational objects (e.g. a Neural Net and a Turing Machine that simulates it), the representations we get should also be equivalent. We also want to find meaningful concepts in our chosen framework, like modularity, features, search, and steering mechanisms. But it is hard to evaluate beforehand if we will be able to find these things in a given framework.

My first intuition was that Computational Graphs[[1]](#fnaim87024v2u) are what we want. They don't impose a rigid structure like e.g. Turing Machines would, and we can naturally express many things with them. (I don't know if they can express the brain's computation.)

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/641cf00c215133fd463a8ce1d949d5cdf3239e67adb4f85c.jpg)Example for an arithmetic computational graph

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/7797ecade4adfaef7e2b255a886c7666863c7d9af6298fcb.jpg)A computational graph of a Turing Machine. The b's are the symbols on the band, the s's are the states that the Turing Machine is in.

This is essentially the same as a [causal DAG](https://www.lesswrong.com/posts/mZy6AMgCw9CPjNCoK/computational-model-causal-diagrams-with-symmetry). There is some ambiguity in which nodes we choose. We can express certain bit flips in a processor aggregated as addition.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/2166e79f2435fa44aeb344fdbba1c42f5f414bd053ea272a.jpg)Two graphs that express addition at different levels of detail.
Left is an algebraic abstraction, while Right looks at the actual bit values x_i and computes the bits of the sum s_i.

If we choose the
algebraic level of detail (left in picture), we implicitly assume that there is no relevant thing going on if we look at the bits (right in picture). Intuitively this seems likely. If we, for example, look for modularity in a NN, it feels like a module wouldn't 'begin' inside an addition process. Features should not be encoded inside it either.

Computational Graphs work very well for Neural Networks because the graph does not change when we change the inputs. In general, this is not the case. I have not found a crisp way of expressing *if*-branching in arithmetic graphs. Let's say we want to express the following program:

```
def program(a, b, c):
    if a:
        return b + c
    else:
        return b - c
```

It is clear that we can use a Turing Machine for this. When we change the detail level to arithmetic, however, it is not clear to me how to express that the graph depends on *a*.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/f0ea8c1a8537d4ab39114bf236efeb50934296242945e54f.jpeg)*a* being *true* sets the lower node to '+'

Here the result of a computation does not change which values are propagated through the graph. Rather, it changes the graph itself. To be clear[[2]](#fnak5wlh5839g), I am *not* looking for a way to express an *if*-statement in arithmetic (possibly with activation functions). I want to extend the notion of graphs to allow for naturally expressing that a computation result changes the nodes of the graph itself. Nonetheless, it might still be worth looking at computational graphs since we don't face those issues when looking at Neural Nets.

Computational States
--------------------

If we take an actual implementation of a program that produced a computational graph and stop it at a time t, then at this time, the computational state of the program corresponds to the value of all nodes for which a successor is still not computed. In this case, the result of their computation is still relevant to the process and can't be thrown away. I call a subset of nodes that correspond to a computational state an *incomplete Cut*, and the full set a *complete Cut*.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/ddb09bc8425e063179ab39b610a09e1d60905a3d3f7b0831.jpg)An incomplete cut

A Cut is complete if and only if every path from input to output goes through it. In the causal DAG setting it corresponds to a Markov Blanket.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/b5db476a8bfabcd25e4c8d80a8963e449eea789432b7538a.jpg)A complete cut

Neuron vs Layer vs Cut Activation Space
---------------------------------------

Layer Activation Space is a generalization of looking at neurons: If we optimize activations for the length of the projection onto e_i then this is the same as disregarding all components except the *i*th neuron and maximizing its activation. It is not intuitively clear to me that 'projection on a vector' is the right notion of 'feature activation'. There might be other notions of 'feature activation' in activation space. E.g. instead of ignoring orthogonal components, as a projection does, I could imagine penalizing them.

Cuts can capture neuron activation or feature activation in activation space in other notions as well. If we use Neurons as the unit of the computational graph then the Activation Space of a Cut is the vector that corresponds to the neurons in the cut. This Cut Activation Space generalizes both neurons and Layer Activation Space. A neuron is a one-element cut, and a layer is just a cut that contains all the neurons in one Layer.
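As a small sketch of the complete-Cut condition (my own illustration, using a plain dict-of-successors DAG, not anything from the post): a node set is a complete cut exactly when deleting it leaves no surviving path from any input to any output.

```
def is_complete_cut(graph, inputs, outputs, cut):
    # graph: dict mapping each node to a list of successor nodes (a DAG).
    def reaches_output(node, seen):
        if node in cut:
            return False  # this path is blocked by the cut
        if node in outputs:
            return True
        seen.add(node)
        return any(reaches_output(s, seen)
                   for s in graph.get(node, []) if s not in seen)

    return not any(reaches_output(i, set()) for i in inputs)

g = {"in": ["h1", "h2"], "h1": ["out"], "h2": ["out"], "out": []}
print(is_complete_cut(g, {"in"}, {"out"}, {"h1"}))        # False
print(is_complete_cut(g, {"in"}, {"out"}, {"h1", "h2"}))  # True
```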
![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/ec07966971a1ca9a01f1507e202656ab719c001440d3e342.jpg)A cut that corresponds to a Neuron

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/8392d9073fd1cdb7d348b0513ae2ee43729981c9ade4486b.jpg)A cut that corresponds to Layer Activation Space

Cuts in NNs
-----------

Because in a linear layer everything is connected to everything, not many cuts are possible. But some connections within neural nets are not meaningful. The simplest example is a connection that is being multiplied with a 0-weight. We might want to prune our network to reduce the number of connections that are not meaningful. Maybe we could use counterfactuals to determine how important connections are and leave them out if they are under a certain importance threshold (similar to the idea discussed [here](https://www.lesswrong.com/posts/TTTHwLpcewGjQHWzh/what-is-the-true-name-of-modularity)).

What relevance does this have in practice?
------------------------------------------

In my experience, we talk about neurons or activation space in Interpretability. More specifically, for feature visualization, we maximize activations of neurons or directions in activation space in a CNN. With this Cut idea, we should also look for directions in meaningful cuts (at this point I don't know when a Cut is meaningful). In the linear layers a meaningful cut could look like this:

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/f7f487263916d6c69c4f81ef3353ff093129c457e9e6c9bf.jpg)An incomplete cut spanning multiple layers

This is still an incomplete cut. There are computation results that don't go through nodes in the cut. To make this a complete cut, we would have to take all the nodes of the previous layer into the cut.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/31edb1cc1211c55ebcb6697332534b8828eac4fdb4803263.jpg)A complete cut spanning multiple layers

In practice, if we pruned the network or determined connections to be not meaningful then it could look like this:

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/ad5c9c2d2c332b2881ca2f9f7bf62ac318e112ec79eb5dc6.jpg)

If the idea of Cut Activation Space has merit, we should be able to find meaningful directions in cuts that don't correspond to layers or neurons. We could test this by e.g. optimizing for different directions in such a cut in a CNN and seeing if this visualizes a feature. If we only find meaningful features in activation space, then there should be a reason that we can find in this computational graph setting.

There is an argument for expecting Layer Activation Space to be the natural cut: regardless of the order of computation, the whole layer has to be in the computational state at some point. That is because we need the whole layer to calculate the activations of the next one.

Ideally, we would find an algorithm that determines meaningful cuts and meaningful directions in their activation spaces. This seems like a very hard problem. Going back to our search for steering mechanisms, I feel like we should be able to find a cut that corresponds to the 'steering information being sent to successive computations'.

*Thanks to Justis Millis for proofreading.* *Thanks to Stephen Fowler and Adhiraj Nijjer for their feedback on the draft.*

1. **[^](#fnrefaim87024v2u)**A computational Graph is a graph where nodes correspond to functions. If a node corresponds to a function f (e.g. '+') and has inputs a_1,...,a_n then the node sends f(a_1,...,a_n) through all outgoing edges.
Choosing the functions implicitly assumes a level of detail and determines the expressiveness of the graph. We generally want those functions to be 'low level'.
2. **[^](#fnrefak5wlh5839g)**This paragraph seems to lead to a misunderstanding of what I mean.
60fb63eb-345f-455b-8888-dba4d1f720b6
StampyAI/alignment-research-dataset/blogs
Blogs
2016 ESPAI questions printout This is a list of questions from the [2016 Expert Survey on Progress in AI](http://aiimpacts.org/2016-expert-survey-on-progress-in-ai/). Details ------- This page is a printout of questions from the [2016 Expert Survey on Progress in AI](http://aiimpacts.org/2016-expert-survey-on-progress-in-ai/) provided by the Qualtrics website as a word document, and then copied here, for searchability. It contains formatting differences with the survey as received by participants, and probably typographic errors, due to the importing process. It also contains only parts of the randomization logic, while missing other parts. The survey questions are available as a pdf [here](https://www.dropbox.com/s/99os4grxlhf744m/Final%202016%20Expert%20Survey%20on%20Progress%20in%20AI.pdf?dl=0). Printout -------- 16-05-17 AI Survey 12 – final consent\_tim Timing First Click (1) Last Click (2) Page Submit (3) Click Count (4) consent      2016 Expert Survey on Progress in AI   Welcome. We are conducting a study of progress in artificial intelligence and are interested in your understanding of developments in the field.   Our estimated median time for completing this survey is 12 minutes. Your responses will be kept confidential.   Many of the questions involve substantial uncertainties. Please just give us your current best guesses.   There are no known risks associated with this study. Although this study may not benefit you personally, we hope that our results will add to the knowledge about progress in AI technology. If you have questions about your rights as a research participant, you may contact the Yale University Human Subjects Committee: 203-785-4688, human.subjects@yale.edu.    Additional information is available at: http://www.yale.edu/hrpp/participants/index.html Participation in this study is completely voluntary. You are free to decline to participate and to end participation at any time for any reason. By continuing to the next page, you agree to participate in the survey. hb\_time Timing First Click (1) Last Click (2) Page Submit (3) Click Count (4) hb\_def   1 of 7  The following questions ask about ‘high–level machine intelligence’ (HLMI).   Say we have ‘high-level machine intelligence’ when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption. Display This Question: If fixedprobabilities Is Equal to 1 hb\_a For the purposes of this question, assume that human scientific activity continues without major negative disruption. How many years until you expect: | | | | --- | --- | | |   (1) | | a 10% probability of HLMI existing? (1) | | | a 50% probability of HLMI existing? (2) | | | a 90% probability of HLMI existing? (3) | | Display This Question: If fixedprobabilities Is Equal to 0 hb\_b For the purposes of this question, assume that human scientific activity continues without major negative disruption. How likely is it that HLMI exists: in 10 years? (1) in 20 years? (2) in 40 years? (3) Display This Question: If random Is Greater Than 20 And random Is Less Than or Equal to 30 hb\_comment Do you have any comments on your interpretation of this question? (optional) Display This Question: If random Is Greater Than 30 And random Is Less Than or Equal to 40 hb\_consider Which considerations were important in your answers to this question? 
(optional)

hj\_time Timing First Click (1) Last Click (2) Page Submit (3) Click Count (4)

hj\_a\_jobs 1 of 7 Say an occupation becomes fully automatable when unaided machines can accomplish it better and more cheaply than human workers. Ignore aspects of occupations for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption. We want to know how many years you think will pass before the following present-day occupations will be fully automatable. Please tell us your best guess of when you think there will be a small chance (10% chance), a roughly even chance (50% chance), and a high chance (90% chance).

| | | | |
| --- | --- | --- | --- |
| | Years until small chance (10%) (1) | Years until even chance (50%) (2) | Years until high chance (90%) (3) |
| Truck driver (hj\_a\_jobs\_1) | | | |
| Surgeon (hj\_a\_jobs\_2) | | | |
| Retail salesperson (hj\_a\_jobs\_3) | | | |
| AI researcher (hj\_a\_jobs\_4) | | | |

hj\_a\_final What is an existing human occupation that you think will be among the final ones to be fully automatable? Remember to consider feasibility, not adoption.

hj\_a\_final\_pred How many years do you expect to pass before you think there is a small/even/high chance that this occupation will be fully automatable? Small chance (10%) (1) Even chance (50%) (2) High chance (90%) (3)

hj\_a\_full Say we have reached ‘full automation of labor’ when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers. In how many years do you expect full automation of labor, with small/even/high chance? Small chance (10%) (1) Even chance (50%) (2) High chance (90%) (3)

Display This Question: If random Is Greater Than 20 And random Is Less Than or Equal to 30

hj\_a\_comment Do you have any comments on your interpretation of these questions? (optional)

Display This Question: If random Is Greater Than 30 And random Is Less Than or Equal to 40

hj\_a\_consider Which considerations were important in your answers to these questions? (optional)

hj\_b\_time Timing First Click (1) Last Click (2) Page Submit (3) Click Count (4)

hj\_b\_jobs 1 of 7 Say an occupation becomes fully automatable when unaided machines can accomplish it better and more cheaply than human workers. Ignore aspects of occupations for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption. We want to know how likely you think it is that the following present-day occupations will be fully automatable at future dates. Please tell us your best guess of the chance that they will be fully automatable within the next 10 years, within the next 20 years, and within the next 50 years.

| | | | |
| --- | --- | --- | --- |
| | % chance in 10 years (1) | % chance in 20 years (2) | % chance in 50 years (3) |
| Truck driver (hj\_b\_jobs\_1) | | | |
| Surgeon (hj\_b\_jobs\_2) | | | |
| Retail salesperson (hj\_b\_jobs\_3) | | | |
| AI researcher (hj\_b\_jobs\_4) | | | |

hj\_b\_final What is an existing human occupation that you think will be among the final ones to be fully automatable? Remember to consider feasibility, not adoption.

hj\_b\_final\_pred How likely do you think it is that this occupation will be fully automatable within the next 10/20/50 years? 10 years (1) 20 years (2) 50 years (3)

hj\_b\_full Say we have reached ‘full automation of labor’ when all occupations are fully automatable.
That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers. How likely do you think it is that full automation of labor will happen within the next 10/20/50 years? 10 years (1) 20 years (2) 50 years (3) Display This Question: If random Is Greater Than 20 And random Is Less Than or Equal to 30 hj\_b\_comment Do you have any comments on your interpretation of these questions? (optional) Display This Question: If random Is Greater Than 30 And random Is Less Than or Equal to 40 hj\_b\_consier Which considerations were important in your answers to these questions? (optional) ie\_time Timing First Click (1) Last Click (2) Page Submit (3) Click Count (4) ie\_time\_def   2 of 7   The following questions ask about ‘high–level machine intelligence’ (HLMI).     Say we have ‘high-level machine intelligence’ when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption. ie\_1 Assume that HLMI will exist at some point. How likely do you then think it is that the rate of global technological improvement will dramatically increase (e.g. by a factor of ten) as a result of machine intelligence: Within two years of that point? (1) Within thirty years of that point? (2) ie\_2 Assume that HLMI will exist at some point. How likely do you think it is that there will be machine intelligence that is vastly better than humans at all professions (i.e. that is vastly more capable or vastly cheaper): Within two years of that point? (1) Within thirty years of that point? (2) ie\_3 Some people have argued the following:   If AI systems do nearly all research and development, improvements in AI will accelerate the pace of technological progress, including further progress in AI.   Over a short period (less than 5 years), this feedback loop could cause technological progress to become more than an order of magnitude faster.   How likely do you find this argument to be broadly correct? * Quite unlikely (0-20%) (5) * Unlikely (21-40%) (4) * About even chance (41-60%) (3) * Likely (61-80%) (2) * Quite likely (81-100%) (1) Display This Question: If random Is Greater Than 40 And random Is Less Than 50 ie\_comment Do you have any comments on your interpretation of these questions? (optional) Display This Question: If random Is Greater Than 50 And random Is Less Than 60 ie\_consider Which considerations were important in your answers to these questions? (optional) vb\_time Timing First Click (1) Last Click (2) Page Submit (3) Click Count (4) vb\_def   3 of 7   The following questions ask about ‘high–level machine intelligence’ (HLMI).   Say we have ‘high-level machine intelligence’ when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption. vb\_1 Assume for the purpose of this question that HLMI will at some point exist. How positive or negative do you expect the overall impact of this to be on humanity, in the long run? Please answer by saying how probable you find the following kinds of impact, with probabilities adding to 100%: \_\_\_\_\_\_ Extremely good (e.g. 
rapid growth in human flourishing) (1) \_\_\_\_\_\_ On balance good (2) \_\_\_\_\_\_ More or less neutral (3) \_\_\_\_\_\_ On balance bad (4) \_\_\_\_\_\_ Extremely bad (e.g. human extinction) (5) Display This Question: If random Is Greater Than 60 And random Is Less Than 70 vb\_comment Do you have any comments on your interpretation of this question? (optional) Display This Question: If random Is Greater Than 70 And random Is Less Than 80 vb\_consider Which considerations were important in your answers to this question? (optional) c\_time Timing First Click (1) Last Click (2) Page Submit (3) Click Count (4) c\_def 4 of 7   The next questions ask about the sensitivity of progress in AI capabilities to changes in inputs.    ‘Progress in AI capabilities’ is an imprecise concept, so we are asking about progress as you naturally conceive of it, and looking for approximate answers. c\_1 Imagine that over the past decade, only half as much researcher effort had gone into AI research. For instance, if there were actually 1,000 researchers, imagine that there had been only 500 researchers (of the same quality).How much less progress in AI capabilities would you expect to have seen? e.g. If you think progress is linear in the number of researchers, so 50% less progress would have been made, write ’50’. If you think only 20% less progress would have been made write ’20’. c\_2 Over the last 10 years the cost of computing hardware has fallen by a factor of 20. Imagine instead that the cost of computing hardware had fallen by only a factor of 5 over that time (around half as far on a log scale).   How much less progress in AI capabilities would you expect to have seen? e.g. If you think progress is linear in 1/cost, so that 1-5/20=75% less progress would have been made, write ’75’. If you think only 20% less progress would have been made write ’20’. c\_3 Imagine that over the past decade, there had only been half as much effort put into increasing the size and availability of training datasets. For instance, perhaps there are only half as many datasets, or perhaps existing datasets are substantially smaller or lower quality.How much less progress in AI capabilities would you expect to have seen? e.g. If you think 20% less progress would have been made, write ‘20’ c\_4 Imagine that over the past decade, AI research had half as much funding (in both academic and industry labs). For instance, if the average lab had a budget of $20 million each year, suppose their budget had only been $10 million each year.  How much less progress in AI capabilities would you expect to have seen? e.g. If you think 20% less progress would have been made, write ‘20’ c\_5 Imagine that over the past decade, there had been half as much progress in AI algorithms. You might imagine this as conceptual insights being half as frequent.  How much less progress in AI capabilities would you expect to have seen? e.g. If you think 20% less progress would have been made, write ‘20’ Display This Question: If random Is Greater Than 80 And random Is Less Than or Equal to 90 c\_comment Do you have any comments on your interpretation of these questions? (optional) Display This Question: If random Is Greater Than 90 And random Is Less Than or Equal to 100 c\_consider Which considerations were important in your answers to these question? (optional) hh\_time Timing First Click (1) Last Click (2) Page Submit (3) Click Count (4) hh\_area 4 of 7   Which AI research area have you worked in for the longest time? 
hh\_howlong How long have you worked in this area? hh\_1 Consider three levels of progress or advancement in this area:   A. Where the area was when you started working in it B. Where it is now C. Where it would need to be for AI software to have roughly human level abilities at the tasks studied in this area   What fraction of the distance between where progress was when you started working in the area (A) and where it would need to be to attain human level abilities in the area (C) have we come so far (B)? hh\_2 Divide the period you have worked in the area into two halves: the first and the second. In which half was the rate of progress in your area higher? * The first half (1) * The second half (2) * They were about the same (3) Display This Question: If random Is Greater Than 20 And random Is Less Than 30 hh\_comment Do you have any comments on your interpretation of these questions? (optional) Display This Question: If random Is Greater Than 30 And random Is Less Than 40 hh\_consider Which considerations were important in your answers to these questions? (optional) ms\_time Timing First Click (1) Last Click (2) Page Submit (3) Click Count (4) ms\_1 4 of 7   To what extent do you think you disagree with the typical AI researcher about when HLMI will exist? * A lot (17) * A moderate amount (18) * Not much (19) ms\_2 If you disagree, why do you think that is? ms\_3 To what extent do you think people’s concerns about future risks from AI are due to misunderstandings of AI research? * Almost entirely (1) * To a large extent (2) * Somewhat (4) * Not much (3) * Hardly at all (5) ms\_4 What do you think are the most important misunderstandings, if there are any? Display This Question: If random Is Greater Than 80 And random Is Less Than or Equal to 90 ms\_comment Do you have any comments on your interpretation of these questions? (optional) Display This Question: If random Is Greater Than 90 And random Is Less Than or Equal to 100 ms\_consider Which considerations were important in your answers to these questions? (optional) ta\_time Timing First Click (1) Last Click (2) Page Submit (3) Click Count (4) ta\_def 5 of 7   How many years until you think the following AI tasks will be feasible with:     a small chance (10%)? an even chance (50%)? a high chance (90%)?   Let a task be ‘feasible’ if one of the best resourced labs could implement it in less than a year if they chose to. Ignore the question of whether they would choose to.      Tasks ta\_1 Translate a text written in a newly discovered language into English as well as a team of human experts, using a single other document in both languages (like a Rosetta stone). Suppose all of the words in the text can be found in the translated document, and that the language is a difficult one. small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_2 Translate speech in a new language given only unlimited films with subtitles in the new language. Suppose the system has access to training data for other languages, of the kind used now (e.g. same text in two languages for many languages and films with subtitles in many languages). small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_3 Perform translation about as good as a human who is fluent in both languages but unskilled at translation, for most types of text, and for most popular languages (including languages that are known to be difficult, like Czech, Chinese and Arabic). 
small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_4 Provide phone banking services as well as human operators can, without annoying customers more than humans. This includes many one-off tasks, such as helping to order a replacement bank card or clarifying how to use part of the bank website to a customer. small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_5 Correctly group images of previously unseen objects into classes, after training on a similar labeled dataset containing completely different classes. The classes should be similar to the ImageNet classes. small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_6 One-shot learning: see only one labeled image of a new object, and then be able to recognize the object in real world scenes, to the extent that a typical human can (i.e. including in a wide variety of settings). For example, see only one image of a platypus, and then be able to recognize platypuses in nature photos. The system may train on labeled images of other objects.   Currently, deep networks often need hundreds of examples in classification tasks1, but there has been work on one-shot learning for both classification2 and generative tasks3.   1 Lake et al. (2015). Building Machines That Learn and Think Like People 2 Koch (2015). Siamese Neural Networks for One-Shot Image Recognition 3 Rezende et al. (2016). One-Shot Generalization in Deep Generative Models small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_7 See a short video of a scene, and then be able to construct a 3D model of the scene good enough to create a realistic video of the same scene from a substantially different angle.For example, constructing a short video of walking through a house from a video taking a very different path through the house. small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_8 Transcribe human speech with a variety of accents in a noisy environment as well as a typical human can. small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_9 Take a written passage and output a recording that can’t be distinguished from a voice actor, by an expert listener. small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_10 Routinely and autonomously prove mathematical theorems that are publishable in top mathematics journals today, including generating the theorems to prove. small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_11 Perform as well as the best human entrants in the Putnam competition—a math contest whose questions have known solutions, but which are difficult for the best young mathematicians. small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_12 Defeat the best Go players, training only on as many games as the best Go players have played.     For reference, DeepMind’s AlphaGo has probably played a hundred million games of self-play, while Lee Sedol has probably played 50,000 games in his life1.     1 Lake et al. (2015). Building Machines That Learn and Think Like People small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_13 Beat the best human Starcraft 2 players at least 50% of the time, given a video of the screen.   Starcraft 2 is a real time strategy game characterized by:   Continuous time play Huge action space Partial observability of enemies Long term strategic play, e.g. preparing for and then hiding surprise attacks. 
small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_14 Play a randomly selected computer game, including difficult ones, about as well as a human novice, after playing the game less than 10 minutes of game time. The system may train on other games. small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_15 Play new levels of Angry Birds better than the best human players. Angry Birds is a game where players try to efficiently destroy 2D block towers with a catapult. For context, this is the goal of the IJCAI Angry Birds AI competition1.     1 aibirds.org small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_16 Outperform professional game testers on all Atari games using no game-specific knowledge. This includes games like Frostbite, which require planning to achieve sub-goals and have posed problems for deep Q-networks1, 2.     1 Mnih et al. (2015). Human-level control through deep reinforcement learning 2 Lake et al. (2015). Building Machines That Learn and Think Like People small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_17 Outperform human novices on 50% of Atari games after only 20 minutes of training play time and no game specific knowledge.   For context, the original Atari playing deep Q-network outperforms professional game testers on 47% of games1, but used hundreds of hours of play to train2.   1 Mnih et al. (2015). Human-level control through deep reinforcement learning 2 Lake et al. (2015). Building Machines That Learn and Think Like People small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_18 Fold laundry as well and as fast as the median human clothing store employee. small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_19 Beat the fastest human runners in a 5 kilometer race through city streets using a bipedal robot body. small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_20 Physically assemble any LEGO set given the pieces and instructions, using non-specialized robotics hardware.   For context, Fu 20161 successfully joins single large LEGO pieces using model based reinforcement learning and online adaptation.   1 Fu et al. (2016). One-Shot Learning of Manipulation Skills with Online Dynamics Adaptation and Neural Network Priors small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_21 Learn to efficiently sort lists of numbers much larger than in any training set used, the way Neural GPUs can do for addition1, but without being given the form of the solution.   For context, Neural Turing Machines have not been able to do this2, but Neural Programmer-Interpreters3 have been able to do this by training on stack traces (which contain a lot of information about the form of the solution).   1 Kaiser & Sutskever (2015). Neural GPUs Learn Algorithms   2 Zaremba & Sutskever (2015). Reinforcement Learning Neural Turing Machines   3 Reed & de Freitas (2015). Neural Programmer-Interpreters small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_22 Write concise, efficient, human-readable Python code to implement simple algorithms like quicksort. That is, the system should write code that sorts a list, rather than just being able to sort lists.   
Suppose the system is given only:   A specification of what counts as a sorted list Several examples of lists undergoing sorting by quicksort small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_23 Answer any “easily Googleable” factoid questions posed in natural language better than an expert on the relevant topic (with internet access), having found the answers on the internet.   Examples of factoid questions:     “What is the poisonous substance in Oleander plants?” “How many species of lizard can be found in Great Britain?” small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_24 Answer any “easily Googleable” factual but open ended question posed in natural language better than an expert on the relevant topic (with internet access), having found the answers on the internet.   Examples of open ended questions:     “What does it mean if my lights dim when I turn on the microwave?” “When does home insurance cover roof replacement?” small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_25 Give good answers in natural language to factual questions posed in natural language for which there are no definite correct answers. For example:”What causes the demographic transition?”, “Is the thylacine extinct?”, “How safe is seeing a chiropractor?” small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_26 Write an essay for a high-school history class that would receive high grades and pass plagiarism detectors. For example answer a question like ‘How did the whaling industry affect the industrial revolution?’ small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_27 Compose a song that is good enough to reach the US Top 40. The system should output the complete song as an audio file. small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_28 Produce a song that is indistinguishable from a new song by a particular artist, e.g. a song that experienced listeners can’t distinguish from a new song by Taylor Swift. small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_29 Write a novel or short story good enough to make it to the New York Times best-seller list. small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_30 For any computer game that can be played well by a machine, explain the machine’s choice of moves in a way that feels concise and complete to a layman. small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_31 Play poker well enough to win the World Series of Poker. small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) ta\_32 After spending time in a virtual world, output the differential equations governing that world in symbolic form.For example, the agent is placed in a game engine where Newtonian mechanics holds exactly and the agent is then able to conduct experiments with a ball and output Newton’s laws of motion. small chance (10%) (1) even chance (50%) (2) high chance (90%) (3) Display This Question: If random Is Greater Than 0 And random Is Less Than 10 ta\_comment Do you have any comments on your interpretation of these questions? (optional) Display This Question: If random Is Greater Than 10 And random Is Less Than 20 ta\_consider Which considerations were important in your answers to these questions? (optional) tb\_time Timing First Click (1) Last Click (2) Page Submit (3) Click Count (4) tb\_def 5 of 7   How likely do you think it is that the following AI tasks will be feasible within the next:     10 years? 20 years? 
50 years?   Let a task be ‘feasible’ if one of the best resourced labs could implement it in less than a year if they chose to. Ignore the question of whether they would choose to.      Tasks tb\_1 Translate a text written in a newly discovered language into English as well as a team of human experts, using a single other document in both languages (like a Rosetta stone). Suppose all of the words in the text can be found in the translated document, and that the language is a difficult one. 10 years (1) 20 years (2) 50 years (3) tb\_2 Translate speech in a new language given only unlimited films with subtitles in the new language. Suppose the system has access to training data for other languages, of the kind used now (e.g. same text in two languages for many languages and films with subtitles in many languages). 10 years (4) 20 years (5) 50 years (6) tb\_3 Perform translation about as good as a human who is fluent in both languages but unskilled at translation, for most types of text, and for most popular languages (including languages that are known to be difficult, like Czech, Chinese and Arabic). 10 years (1) 20 years (2) 50 years (3) tb\_4 Provide phone banking services as well as human operators can, without annoying customers more than humans. This includes many one-off tasks, such as helping to order a replacement bank card or clarifying how to use part of the bank website to a customer. 10 years (1) 20 years (2) 50 years (3) tb\_5 Correctly group images of previously unseen objects into classes, after training on a similar labeled dataset containing completely different classes. The classes should be similar to the ImageNet classes. 10 years (1) 20 years (2) 50 years (3) tb\_6 One-shot learning: see only one labeled image of a new object, and then be able to recognize the object in real world scenes, to the extent that a typical human can (i.e. including in a wide variety of settings). For example, see only one image of a platypus, and then be able to recognize platypuses in nature photos. The system may train on labeled images of other objects.   Currently, deep networks often need hundreds of examples in classification tasks1, but there has been work on one-shot learning for both classification2 and generative tasks3.   1 Lake et al. (2015). Building Machines That Learn and Think Like People 2 Koch (2015). Siamese Neural Networks for One-Shot Image Recognition 3 Rezende et al. (2016). One-Shot Generalization in Deep Generative Models 10 years (1) 20 years (2) 50 years (3) tb\_7 See a short video of a scene, and then be able to construct a 3D model of the scene that is good enough to create a realistic video of the same scene from a substantially different angle.For example, constructing a short video of walking through a house from a video taking a very different path through the house. 10 years (1) 20 years (2) 50 years (3) tb\_8 Transcribe human speech with a variety of accents in a noisy environment as well as a typical human can. 10 years (1) 20 years (2) 50 years (3) tb\_9 Take a written passage and output a recording that can’t be distinguished from a voice actor, by an expert listener. 10 years (1) 20 years (2) 50 years (3) tb\_10 Routinely and autonomously prove mathematical theorems that are publishable in top mathematics journals today, including generating the theorems to prove. 
10 years (1) 20 years (2) 50 years (3) tb\_11 Perform as well as the best human entrants in the Putnam competition—a math contest whose questions have known solutions, but which are difficult for the best young mathematicians. 10 years (1) 20 years (2) 50 years (3) tb\_12 Defeat the best Go players, training only on as many games as the best Go players have played.     For reference, DeepMind’s AlphaGo has probably played a hundred million games of self-play, while Lee Sedol has probably played 50,000 games in his life1.     1 Lake et al. (2015). Building Machines That Learn and Think Like People 10 years (1) 20 years (2) 50 years (3) tb\_13 Beat the best human Starcraft 2 players at least 50% of the time, given a video of the screen.   Starcraft 2 is a real time strategy game characterized by:   Continuous time play Huge action space Partial observability of enemies Long term strategic play, e.g. preparing for and then hiding surprise attacks. 10 years (1) 20 years (2) 50 years (3) tb\_14 Play a randomly selected computer game, including difficult ones, about as well as a human novice, after playing the game less than 10 minutes of game time. The system may train on other games. 10 years (1) 20 years (2) 50 years (3) tb\_15 Play new levels of Angry Birds better than the best human players. Angry Birds is a game where players try to efficiently destroy 2D block towers with a catapult. For context, this is the goal of the IJCAI Angry Birds AI competition1.     1 aibirds.org 10 years (1) 20 years (2) 50 years (3) tb\_16 Outperform professional game testers on all Atari games using no game-specific knowledge. This includes games like Frostbite, which require planning to achieve sub-goals and have posed problems for deep Q-networks1, 2.     1 Mnih et al. (2015). Human-level control through deep reinforcement learning 2 Lake et al. (2015). Building Machines That Learn and Think Like People 10 years (1) 20 years (2) 50 years (3) tb\_17 Outperform human novices on 50% of Atari games after only 20 minutes of training play time and no game specific knowledge.   For context, the original Atari playing deep Q-network outperforms professional game testers on 47% of games1, but used hundreds of hours of play to train2.   1 Mnih et al. (2015). Human-level control through deep reinforcement learning 2 Lake et al. (2015). Building Machines That Learn and Think Like People 10 years (1) 20 years (2) 50 years (3) tb\_18 Fold laundry as well and as fast as the median human clothing store employee. 10 years (1) 20 years (2) 50 years (3) tb\_19 Beat the fastest human runners in a 5 kilometer race through city streets using a bipedal robot body. 10 years (1) 20 years (2) 50 years (3) tb\_20 Physically assemble any LEGO set given the pieces and instructions, using non-specialized robotics hardware.   For context, Fu 20161 successfully joins single large LEGO pieces using model based reinforcement learning and online adaptation.   1 Fu et al. (2016). One-Shot Learning of Manipulation Skills with Online Dynamics Adaptation and Neural Network Priors 10 years (1) 20 years (2) 50 years (3) tb\_21 Learn to efficiently sort lists of numbers much larger than in any training set used, the way Neural GPUs can do for addition1, but without being given the form of the solution.   For context, Neural Turing Machines have not been able to do this2, but Neural Programmer-Interpreters3 have been able to do this by training on stack traces (which contain a lot of information about the form of the solution).   
1 Kaiser & Sutskever (2015). Neural GPUs Learn Algorithms   2 Zaremba & Sutskever (2015). Reinforcement Learning Neural Turing Machines   3 Reed & de Freitas (2015). Neural Programmer-Interpreters 10 years (1) 20 years (2) 50 years (3) tb\_22 Write concise, efficient, human-readable Python code to implement simple algorithms like quicksort. That is, the system should write code that sorts a list, rather than just being able to sort lists.   Suppose the system is given only:   A specification of what counts as a sorted list Several examples of lists undergoing sorting by quicksort 10 years (1) 20 years (2) 50 years (3) tb\_23 Answer any “easily Googleable” factoid questions posed in natural language better than an expert on the relevant topic (with internet access), having found the answers on the internet.   Examples of factoid questions:     “What is the poisonous substance in Oleander plants?” “How many species of lizard can be found in Great Britain?” 10 years (1) 20 years (2) 50 years (3) tb\_24 Answer any “easily Googleable” factual but open ended question posed in natural language better than an expert on the relevant topic (with internet access), having found the answers on the internet.   Examples of open ended questions:     “What does it mean if my lights dim when I turn on the microwave?” “When does home insurance cover roof replacement?” 10 years (1) 20 years (2) 50 years (3) tb\_25 Give good answers in natural language to factual questions posed in natural language for which there are no definite correct answers. For example:”What causes the demographic transition?”, “Is the thylacine extinct?”, “How safe is seeing a chiropractor?” 10 years (1) 20 years (2) 50 years (3) tb\_26 Write an essay for a high-school history class that would receive high grades and pass plagiarism detectors. For example answer a question like ‘How did the whaling industry affect the industrial revolution?’ 10 years (1) 20 years (2) 50 years (3) tb\_27 Compose a song that is good enough to reach the US Top 40. The system should output the complete song as an audio file. 10 years (1) 20 years (2) 50 years (3) tb\_28 Produce a song that is indistinguishable from a new song by a particular artist, e.g. a song that experienced listeners can’t distinguish from a new song by Taylor Swift. 10 years (1) 20 years (2) 50 years (3) tb\_29 Write a novel or short story good enough to make it to the New York Times best-seller list. 10 years (1) 20 years (2) 50 years (3) tb\_30 For any computer game that can be played well by a machine, explain the machine’s choice of moves in a way that feels concise and complete to a layman. 10 years (1) 20 years (2) 50 years (3) tb\_31 Play poker well enough to win the World Series of Poker. 10 years (1) 20 years (2) 50 years (3) tb\_32 After spending time in a virtual world, output the differential equations governing that world in symbolic form.For example, the agent is placed in a game engine where Newtonian mechanics holds exactly and the agent is then able to conduct experiments with a ball and output Newton’s laws of motion. 10 years (1) 20 years (2) 50 years (3) Display This Question: If random Is Greater Than 0 And random Is Less Than 10 tb\_comment Do you have any comments on your interpretation of these questions? (optional) Display This Question: If random Is Greater Than 10 And random Is Less Than 20 tb\_consider Which considerations were important in your answers to these questions? 
(optional) sq\_time Timing First Click (1) Last Click (2) Page Submit (3) Click Count (4) sq\_def 6 of 7   Stuart Russell summarizes an argument for why highly advanced AI might pose a risk as follows:   The primary concern [with highly advanced AI] is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken […]. Now we have a problem:   1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down. 2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.   A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.  This is essentially the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want. sq\_1 Do you think this argument points at an important problem? * No, not a real problem. (1) * No, not an important problem. (2) * Yes, a moderately important problem. (3) * Yes, a very important problem. (5) * Yes, among the most important problems in the field. (4) aq\_2 How valuable is it to work on this problem today, compared to other problems in AI? * Much less valuable (1) * Less valuable (2) * As valuable as other problems (3) * More valuable (4) * Much more valuable (5) sq\_3 How hard do you think this problem is compared to other problems in AI? * Much easier (1) * Easier (2) * As hard as other problems (3) * Harder (4) * Much harder (5) sq\_comment Do you have any comments on your interpretation of this question? (optional) Display This Question: If random Is Greater Than 90 And random Is Less Than or Equal to 100 sq\_consider Which considerations were important in your answers to this question? (optional) sr\_time Timing First Click (1) Last Click (2) Page Submit (3) Click Count (4) sr\_def 6 of 7   Let ‘AI safety research’ include any AI-related research that, rather than being primarily aimed at improving the capabilities of AI systems, is instead primarily aimed at minimizing potential risks of AI systems (beyond what is already accomplished for those goals by increasing AI system capabilities).   Examples of AI safety research might include:   Improving the human-interpretability of machine learning algorithms for the purpose of improving the safety and robustness of AI systems, not focused on improving AI capabilities Research on long-term existential risks from AI systems  AI-specific formal verification research Policy research about how to maximize the public benefits of AI sr\_1 How much should society prioritize AI safety research, relative to how much it is currently prioritized? * Much less (1) * Less (2) * About the same (3) * More (4) * Much more (5) Display This Question: If random Is Greater Than 80 And random Is Less Than or Equal to 90 sr\_comment Do you have any comments on your interpretation of this question? (optional) Display This Question: If random Is Greater Than 90 And random Is Less Than or Equal to 100 sr\_consider Which considerations were important in your answer to this question? 
(optional) dem\_time Timing First Click (1) Last Click (2) Page Submit (3) Click Count (4) dem\_1 7 of 7   How much thought have you given in the past to when HLMI (or something similar) will be developed? * Very little. e.g. “I can’t remember thinking about this.” (1) * A little. e.g. “It has come up in conversation a few times” (2) * A moderate amount. e.g. “I read something about it now and again” (3) * A lot. e.g. “I have thought enough to have my own views on the topic” (4) * A great deal. e.g. “This has been a particular interest of mine” (5) dem\_2 How much thought have you given in the past to social impacts of smarter-than-human machines? * Very little. e.g. “I can’t remember thinking about this.” (1) * A little. e.g. “It has come up in conversation a few times” (2) * A moderate amount. e.g. “I read something about it now and again” (3) * A lot. e.g. “I have thought enough to have my own views on the topic” (4) * A great deal. e.g. “This has been a particular interest of mine” (5) dem\_3 Are you an AI researcher? * Yes (4) * No (5) dem\_4 What are your main areas of research? dem\_5 Where do you work? * Industry (1) * Academia (2) * Other (3) Q203 Timing First Click (1) Last Click (2) Page Submit (3) Click Count (4) end Thank you for contributing to the Expert Survey on Progress in AI!   We will send you output from this research as it becomes available. We are sending every 10th person $250 as an expression of gratitude for completing the survey. We are a group of researchers interested in measuring and understanding AI progress and its implications. If you are interested in learning more about this research, please click here or email us.     Katja Grace, Machine Intelligence Research Institute John Salvatier, Machine Intelligence Research Institute Allan Dafoe, Yale University Baobao Zhang, Massachusetts Institute of Technology print Would you like to be recognized in print as an expert participant in this survey? * Yes (1) * No (2) autojobs A group from Oxford University will soon conduct a survey on Automation and Jobs. It will explore some of the topics in this survey in more depth. Would you like to receive an invitation to participate in it? * Yes (1) * No (2) comment\_box If you have any questions or comments for us, please feel free to share them below.
6dab33b7-1e7d-4ebd-a464-9cdd92872d90
StampyAI/alignment-research-dataset/arbital
Arbital
Axiom of Choice: Definition (Formal)

#Getting the Heavy Maths out of the Way: Definitions#

Intuitively, the [axiom](https://arbital.com/p/-axiom_mathematics) of choice states that, given a collection of *[non-empty](https://arbital.com/p/-5zc)* [sets](https://arbital.com/p/-3jz), there is a [function](https://arbital.com/p/-3jy) which selects a single element from each of the sets.

More formally, given a set $X$ whose [elements](https://arbital.com/p/-5xy) are only non-empty sets, there is a function $$ f: X \rightarrow \bigcup_{Y \in X} Y $$ from $X$ to the [union](https://arbital.com/p/-5s8) of all the elements of $X$ such that, for each $Y \in X$, the [image](https://arbital.com/p/-3lh) of $Y$ under $f$ is an element of $Y$, i.e., $f(Y) \in Y$. In [logical notation](https://arbital.com/p/-logical_notation), $$ \forall_X \left( \left[ \forall_{Y \in X}\ Y \not= \emptyset \right] \Rightarrow \left[ \exists_{f : X \rightarrow \bigcup_{Y \in X} Y}\ \forall_{Y \in X}\ f(Y) \in Y \right] \right) $$

#Axiom Unnecessary for Finite Collections of Sets#

For a [finite set](https://arbital.com/p/-5zy) $X$ containing only [finite](https://arbital.com/p/-5zy) non-empty sets, the axiom is actually provable (from the [Zermelo-Fraenkel axioms](https://arbital.com/p/-zermelo_fraenkel_axioms) of set theory ZF), and hence does not need to be given as an [axiom](https://arbital.com/p/-axiom_mathematics). In fact, even for a finite collection of possibly infinite non-empty sets, the axiom of choice is provable (from ZF), using the [axiom of induction](https://arbital.com/p/-axiom_of_induction). In this case, the function can be explicitly described. For example, if the set $X$ contains only three, potentially infinite, non-empty sets $Y_1, Y_2, Y_3$, then the fact that they are non-empty means they each contain at least one element, say $y_1 \in Y_1, y_2 \in Y_2, y_3 \in Y_3$. Then define $f$ by $f(Y_1) = y_1$, $f(Y_2) = y_2$ and $f(Y_3) = y_3$. This construction is permitted by the axioms of ZF.

The problem comes in if $X$ contains an infinite number of non-empty sets. Let's assume $X$ contains a [countable](https://arbital.com/p/-2w0) number of sets $Y_1, Y_2, Y_3, \ldots$. Then, again intuitively speaking, we can explicitly describe how $f$ might act on finitely many of the $Y$s (say the first $n$ for any natural number $n$), but we cannot describe it on all of them at once. To understand this properly, one must understand what it means to be able to 'describe' or 'construct' a function $f$. This is described in more detail in the sections which follow. But first, a bit of background on why the axiom of choice is interesting to mathematicians.
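To spell out the induction behind the finite case above (a sketch only; it elides how 'finite' is formalized in ZF): suppose a choice function $f'$ exists for a collection $X'$ of $n$ non-empty sets, and let $X = X' \cup \{Y\}$ with $Y$ non-empty. Since $Y \not= \emptyset$, we may fix some $y \in Y$ (a single application of existential instantiation, which is unproblematic when done finitely many times) and extend $f'$ to a choice function on all of $X$: $$ f(Z) = \begin{cases} f'(Z) & \text{if } Z \in X', \\ y & \text{if } Z = Y. \end{cases} $$ By construction $f(Z) \in Z$ for every $Z \in X$, which is exactly the choice-function property defined above.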
a759dc64-5799-4d7e-a18e-5f27f3b7c28a
trentmkelly/LessWrong-43k
LessWrong
Uncertainty about the future does not imply that AGI will go well

Subtitle: A partial defense of high-confidence AGI doom predictions.

Introduction

Consider these two kinds of accident scenarios:

1. In a default-success scenario, accidents are rare. For example, modern aviation is very safe thanks to decades of engineering efforts and a safety culture (e.g. the widespread use of checklists). When something goes wrong, it is often due to multiple independent failures that combine to cause a disaster (e.g. bad weather + communication failures + pilot not following checklist correctly).

2. In a default-failure scenario, accidents are the norm. For example, when I write a program to do something I haven’t done many times already, it usually fails the first time I try it. It then goes on to fail the second time and the third time as well. Here, failure on the first try is overdetermined―even if I fix the first bug, the second bug is still, independently, enough to cause the program to crash. This is typical in software engineering, and it can take many iterations and tests to move into the default-success regime.

See also: conjunctive vs disjunctive risk scenarios.

Default-success scenarios include most engineering tasks that we have lots of experience with and know how to do well: building bridges, building skyscrapers, etc. Default-failure scenarios, as far as I can tell, come in two kinds: scenarios in which we’re trying to do something for the first time (rocket test launches, prototypes, new technologies) and scenarios in which there is a competent adversary that is trying to break the system, as in computer security.[1]

Predictions on AGI risk

In the following, I use P(doom) to refer to the probability of an AGI takeover and / or human extinction due to the development of AGI. I often encounter the following argument against predictions of AGI catastrophes:

Alice: We seem to be on track to build an AGI smarter than humans. We don’t know how to solve the technical problem of building an AGI we can control, or the politic
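To make the two regimes concrete, here is a toy calculation (the failure probabilities are made up for illustration) contrasting a conjunctive disaster, where every failure mode must fire at once, with a disjunctive one, where any single failure suffices:

```python
# Toy numbers: three independent failure modes, each with probability 0.1.
failure_probs = [0.1, 0.1, 0.1]

# Conjunctive (aviation-style): disaster only if ALL modes fail together.
p_conjunctive = 1.0
for p in failure_probs:
    p_conjunctive *= p  # 0.1 ** 3 = 0.001

# Disjunctive (first-try software): disaster if ANY single mode fails.
p_no_failure = 1.0
for p in failure_probs:
    p_no_failure *= 1.0 - p
p_disjunctive = 1.0 - p_no_failure  # 1 - 0.9 ** 3 ≈ 0.271

print(p_conjunctive, round(p_disjunctive, 3))
```

With identical per-component numbers, the conjunctive disaster is hundreds of times less likely than the disjunctive one, which is why the two regimes call for very different prior confidence about failure.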
8e47fb97-adf1-4bea-bec4-0595572020cd
StampyAI/alignment-research-dataset/lesswrong
LessWrong
My current research questions for «membranes/boundaries»

*Context for this post:* [*«Boundaries/Membranes» and AI safety compilation*](https://www.lesswrong.com/posts/fjgoMaBenyXcRDrbX/boundaries-and-ai-safety-compilation)

update: this post is old, don't look at it
==========================================

**This post lists what I see as the interesting open questions for understanding** [**«membranes/boundaries**](https://www.lesswrong.com/tag/boundaries-technical) **as they relate both to AI safety and to rationality.** (Note: I recently switched to using ["membranes/boundaries" terminology](https://www.lesswrong.com/posts/fDk9hLDpjeT9gZH6h/membranes-is-better-terminology-than-boundaries-alone).)

Ultimately, I think that «membranes/boundaries» are what will help us develop a theory of *sovereign agents*.[[1]](#fn29wjo1a4lbd) I'm also hopeful that formalizing "respecting «membranes/boundaries»" will turn out to be [the Deontology we always wanted](https://www.lesswrong.com/posts/KX3xx8LTnE7GKoFuj/boundaries-for-formalizing-a-bare-bones-morality#Explanation).[[2]](#fn78gr95ci775)

*Note: This post has a follow-up where I list all of my current hunches/intuitions for the questions I list here.*

High level questions
====================

Where are the boundaries/membranes between sovereign agents?
------------------------------------------------------------

What is the best way to use that understanding?
-----------------------------------------------

For AI safety
=============

**High level research questions:**
----------------------------------

1. How can boundaries/membranes be formalized such that an AI system could locate them?
2. What is the best way to use a proper understanding of boundaries/membranes for AI safety?

Note: there might also be a question of inner alignment.

**Low level research questions:**
---------------------------------

And here are the current low-level questions I'm considering:

For #1 – formalizing boundaries/membranes:
------------------------------------------

* Would something like 'maximizing information processing' work for providing an objective definition for locating all of the boundaries that we want to locate?
* In previous work, Critch has used [Markov blankets](https://www.lesswrong.com/posts/HrtqLy46Fx7xqRrMo/boundaries-part-3a-defining-boundaries-as-directed-markov) to model the boundary between agents and environment. (A toy sketch of the Markov-blanket notion follows these question lists.) I've also been thinking about how the agent models its own boundary, and how that affects its actions. I would like to, within the framework of [Cartesian Frames](https://www.lesswrong.com/s/2A7rrZ4ySx6R8mfoT/p/BSpdshJWGAW6TuNzZ), figure out how to model *how the agent models its own boundary*. How to do that?
* How do we “open up our membranes” to others? How does it work, and how can it be formalized?
* Cartesian Frames can represent [subagents](https://www.lesswrong.com/posts/nwrkwTd6uKBesYYfx/subagents-of-cartesian-frames). But it’s not obvious how to do this with Markov blankets. So, how?

For #2 – using an understanding of boundaries/membranes:
--------------------------------------------------------

* For example, Davidad conceives of boundaries as a sort of [bare-bones morality](https://www.lesswrong.com/posts/KX3xx8LTnE7GKoFuj/boundaries-for-formalizing-a-bare-bones-morality), but my own intuitions go against this particular implementation. But what is bad about it? And what alternative would be better?
* Should boundaries/membranes ever be violated? If so, when?
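As a toy sketch of the Markov-blanket idea referenced in the first list above: the code below computes the generic graphical-models blanket (parents, children, and the children's other parents) of a node in a small DAG. This is the standard notion, not Critch's directed variant, and the graph and node names are a made-up example.

```python
# Toy DAG given as parent lists; the node names are hypothetical.
parents = {
    "environment": [],
    "sensor":      ["environment"],
    "agent_state": ["sensor"],
    "noise":       [],
    "action":      ["agent_state", "noise"],
    "effect":      ["action", "environment"],
}

def children(node):
    # Children are the nodes that list `node` among their parents.
    return [n for n, ps in parents.items() if node in ps]

def markov_blanket(node):
    # Blanket = parents + children + the children's other parents.
    blanket = set(parents[node])
    for child in children(node):
        blanket.add(child)
        blanket.update(parents[child])
    blanket.discard(node)
    return blanket

print(markov_blanket("agent_state"))  # {'sensor', 'action', 'noise'}
```

Conditioned on its blanket, the node is independent of everything else in the graph, which is one natural candidate for "where the agent ends and the environment begins".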
**For rationality**
===================

I believe that this research also has important implications for rationality, and will result in important rationality techniques. (I’m writing a series of posts about this.) For example, I believe that a proper understanding of boundaries/membranes could prevent future social conflicts of a type that commonly occur in EA/Rationality communities.

Crucially, I have been unable to find any existing explanation of the core concept, as applied to rationality, that is consistent and satisfying. I intend to write that explanation myself, and I have a late-stage draft that I will probably publish in the next 1-2 weeks.

Low level questions:
--------------------

* In this case, people are the sovereign agents in question. *In everyday life, where are the boundaries/membranes?*
* What are the 'boundaries/membrane answers' to common conflicts?
* Many questions from the AI safety section also apply here:
	+ Would something like 'maximizing information processing' work for providing an objective definition for locating all of the boundaries/membranes that we want to locate?
	+ How do we “open up our membranes” to others? How does it work, and how can it be formalized?
		- (And, precisely how does this answer differ from 'consent'? Because it *definitely* does differ.)
	+ Should boundaries/membranes ever be violated? If so, when?
* I'm also interested in the question of why proper understandings of the boundaries/membranes concept seem to be almost antimemetic in these communities.

*Note:* [*my 'hunches' follow-up post*](https://www.lesswrong.com/posts/BjmMKXmo5uBc3Hoqu/hunches-for-my-current-research-questions-for-membranes) *discusses what my hunches are for the answers for these questions. It also discusses how I think these questions will relate between rationality and AI safety.*

Miscellaneous research questions
================================

* **Ethics/morality:** Do boundaries/membranes help resolve moral dilemmas? If so, how?
* **Interdisciplinary:** What can other disciplines tell us about how boundaries/membranes work?

Other lists of research questions
=================================

* John Wentworth has a list of boundaries-related questions in [this comment](https://www.lesswrong.com/posts/EPAofvLzsCwqYnekj/what-s-in-your-list-of-unsolved-problems-in-ai-alignment?commentId=wiDrxPkj4jQbKC8XG).
* **Let me know if you have any research questions to add!**
	+ Whether something you're researching yourself, or something you'd like me to consider thinking about

---

This stuff is the most interesting stuff in the world to me right now. I'm also currently working together with someone who helped with the math for Cartesian Frames. Also, we are currently seeking funding.

*Note: See this post for how I currently think some of these questions will likely be answered:* [*Hunches for my current research questions for membranes/boundaries*](https://www.lesswrong.com/posts/BjmMKXmo5uBc3Hoqu/hunches-for-my-current-research-questions-for-membranes)*.*

1. **[^](#fnref29wjo1a4lbd)**Note: Critch («boundaries» sequence author) and I might disagree on «boundaries/membranes» being the key thing that separates *sovereign* agents.
From [Part 1](https://www.lesswrong.com/posts/8oMF8Lv5jiGaQSFvo/boundaries-part-1-a-key-missing-concept-from-utility-theory) he says:  > I want to focus on boundaries of things that might naturally be called "living systems" but that might not broadly be considered "agents", such as a human being that isn't behaving very agentically, or a country whose government is in a state of internal disagreement. (I thought of entitling this sequence "membranes" instead, but stuck with 'boundaries' because of the social norm connotation.) > > Meanwhile, I think that «membranes/boundaries» are inherently ~~homeostatic~~ autopoietic. (For this reason, I'm pretty confused about where the «boundaries/membranes» are supposed to be in some of the examples Critch gives in [«Boundaries», Part 2](https://www.lesswrong.com/s/LWJsgNYE8wzv49yEc/p/vnJ5grhqhBmPTQCQh). — E.g.: Where are the membranes in "work/life balance"?) 2. **[^](#fnref78gr95ci775)**Hopefully it results in a framework that is 1) consistent; 2) where the 'rules' weren't arbitrarily chosen, but have some kind of non-subjective grounding.
7deae474-cd32-42ca-b159-16896f9c6beb
trentmkelly/LessWrong-43k
LessWrong
Why Academic Papers Are A Terrible Discussion Forum

Over the past few months, the Singularity Institute has published many papers on topics related to Friendly AI. It's wonderful that these ideas are getting written up, and it's virtually always better to do something suboptimal than to do nothing. However, I will make the case below that academic papers are a terrible way to discuss Friendly AI, and other ideas in that region of thought space. We need something better.

I won't try to argue that papers aren't worth publishing. There are many reasons to publish papers - prestige in certain communities and promises to grant agencies, for instance - and I haven't looked at them all in detail. However, I think there is a conclusive case that as a discussion forum - a way for ideas to be read by other people, evaluated, spread, criticized, and built on - academic papers fail. Why?

1. The time lag is huge; it's measured in months, or even years. Ideas structured like the Less Wrong Sequences, with large inferential distances between beginning and ending, have huge webs of interdependencies: to read A you have to read B, which means you need to read C, which requires D and E, and on and on and on. Ideas build on each other. Einstein built on Maxwell, who built on Faraday, who built on Newton, who built on Kepler, who built on Galileo and Copernicus. For this to happen, ideas need to get out there - whether orally or in writing - so others can build on them. The publication cycle for ideas is like the release cycle for software. It determines how quickly you can get feedback, fix mistakes, and then use whatever you've already built to help make the next thing. Most academic papers take months to write up, and then once written up, take more months to publish. Compare that to Less Wrong articles or blog posts, where you can write an essay, get comments within a few hours, and then write up a reply or follow-up the next day. Of course, some of that extra time lag is that big formal documents are sometimes needed for di
c6c61654-e697-425e-8c54-3875a4a8a344
trentmkelly/LessWrong-43k
LessWrong
Improved visualizations of METR Time Horizons paper.

TLDR: Skip to the last image at the bottom of my post to see the visualization.

I don't usually post here, but I know you folks like forecasting and timelines, so I think you might appreciate this. The recent time horizons paper by METR is great, but it only visualized tasks that models could complete with 50-80% accuracy. I think it's highly useful, and arguably even more useful, to see when capabilities would hit 95% or 99% accuracy or beyond for various time horizons.

So I went through the paper with the goal of getting the information that would allow such a visualization to be created, and I found distributions of model-specific accuracy levels for specific time-horizon buckets. I noted these down and noticed similar, if not identical, slopes for those higher accuracy levels compared to the lower ones when plotted against time horizons (they also noted in the paper that the slope for 80% accuracy was conveniently very similar to the slope for 50% accuracy). This makes the visualization math easier as well: I keep the slope steepness the same and just apply the appropriate Y-axis adjustment relative to release date.

I noted down the multiple difference between the time horizons at 50% accuracy, 80% accuracy, 95% accuracy, and 99% accuracy, and did this for the following 6 models:

- Claude-3.7-sonnet
- Claude-3.5-sonnet(new)
- O1
- GPT-4o
- Claude-3-Opus
- GPT-4-0314

The paper mentioned a 5X multiple difference they found between 50% and 80% accuracy, and when I averaged my numbers for these 6 models I arrived at the same figure for 50% to 80%. As for the other accuracy multiples, it looks like this:

Finally, here is the visualization I ended up making, with an expanded x-axis, as well as the short-term trend slope of reasoning models added for each of the new 80%-99% accuracy trends; this trend was also mentioned in the original paper (doubling every 4 months, based on the 2024-2025 data points).

I think there is good reason to be
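For readers who want to reproduce the arithmetic, here is a minimal sketch of the extrapolation described above, assuming parallel trends on a log scale. The 5x multiple (50% to 80%) is the figure quoted above; the 95% and 99% multiples and the baseline horizon in this sketch are placeholder values rather than the measured ones, and the ~7-month doubling is the paper's headline 50%-horizon trend.

```python
# Sketch: every accuracy level shares the 50%-accuracy trend's slope on a
# log scale and is shifted down by a constant multiple.
MULTIPLES = {0.50: 1, 0.80: 5, 0.95: 25, 0.99: 100}  # 5x is quoted above;
# the 95%/99% values here are placeholders, not the measured multiples.

DOUBLING_MONTHS = 7  # the paper's ~7-month doubling time for 50% horizons

def horizon_minutes(months_since_baseline, accuracy, baseline_50_horizon=60):
    # baseline_50_horizon: hypothetical 50%-accuracy horizon (minutes) at the
    # baseline date; divide by the multiple to get the higher-accuracy trend.
    h50 = baseline_50_horizon * 2 ** (months_since_baseline / DOUBLING_MONTHS)
    return h50 / MULTIPLES[accuracy]

# E.g. the implied 99%-accuracy horizon two years past the baseline:
print(round(horizon_minutes(24, 0.99), 1))  # ~6.5 minutes with these numbers
```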
30811349-e01b-4061-b60b-fd990335c3b3
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Rationalization Today's post, Rationalization was originally published on 30 September 2007. A summary (taken from the LW wiki):   > Rationality works forward from evidence to conclusions. Rationalization tries in vain to work backward from favourable conclusions to the evidence. But you cannot rationalize what is not already rational. It is as if "lying" were called "truthization". Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was What Evidence Filtered Evidence?, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
1bb2f5c8-c045-48ee-a2d8-04be9bdae06a
StampyAI/alignment-research-dataset/lesswrong
LessWrong
An open letter to SERI MATS program organisers

### (Independent) alignment researchers need to be exceptionally good at philosophy of science

In order to be effective, independent AI alignment/safety/x-risk researchers should be unusually competent in philosophy of science, epistemology, and methodology of research (the latter imports some strategic considerations, too, as I’ll illustrate shortly), relative to other fields of research. The reasons for this are:

(1) The field is [pre-paradigmatic](https://www.lesswrong.com/posts/4TuzWEKysvYdhRXLd/paradigm-building-introduction).

(2) There are a number of epistemic complications with this type of research, many of which are detailed by Adam Shimi [here](https://www.lesswrong.com/posts/72scWeZRta2ApsKja/epistemological-vigilance-for-alignment) (also see my comment to this post). I’d add, on top of that, that **the very concept of** ***risk*** **is poorly understood and risk science is often conducted poorly** (usually because one doesn’t realise they are doing risk science).

(3) The object of research relates to ethics (i.e., philosophy, unless one subscribes to ethical naturalism), so researchers should be able to properly synthesise science and philosophy.

(4) Alignment work within AGI labs tends to be more [empirical and iterative](https://www.anthropic.com/index/core-views-on-ai-safety), which partially ameliorates some of the epistemic challenges from (2), but independent researchers usually can’t afford this (also, this approach doesn’t make sense for them to pursue strategically), so these challenges remain unaddressed.

At the same time, I’m frustrated when I see a lot of alignment/safety research on LessWrong that resembles philosophy much more than science, towering reasoning on top of intuitions and unprincipled ontologies rather than more established scientific theories or mechanistic models. This is a dead end and one of the important reasons why this work [doesn’t stack](https://www.lesswrong.com/posts/4ujM6KBN4CyABCdJt/ai-alignment-researchers-don-t-seem-to-stack). Contrasted with the particular demand for philosophy of science, this situation seems to harbour a huge opportunity for improvement.

### Alignment researchers should think hard about their research methodology and strategy

Talking about research strategy, I also often see lapses, such as (but not limited to):

* People take ML courses (which are often largely about ML engineering, which doesn’t help unless one is planning to go into mechanistic interpretability research) when they plan to become alignment researchers (when they really should pay much more attention to cognitive science).
* People fail to realise that leading AGI labs, such as OpenAI and Conjecture (I bet DeepMind and Anthropic, too, even though they haven’t publicly stated this), do *not* plan to align LLMs “once and forever”, but rather [use LLMs to produce novel alignment research](https://openai.com/blog/our-approach-to-alignment-research), which will almost certainly come packaged with novel AI architectures (or variations on existing proposals, such as LeCun’s [H-JEPA](https://www.lesswrong.com/posts/Y7XkGQXwHWkHHZvbm/yann-lecun-a-path-towards-autonomous-machine-intelligence-1)). This leads people to pour way too much attention into LLMs and their alignment properties. People who are new to the field tend to fall into this trap, but veterans such as LeCun and Pedro Domingos point out that ML paradigms come and go. LeCun publicly predicted that [in 5 years, nobody would be using autoregressive LLMs anymore](https://twitter.com/raphaelmilliere/status/1639383989180854273), and this prediction doesn’t seem that crazy to me.
* Considering the above, and also some other strategic factors, it's questionable whether doing "manual" mechanistic interpretability makes much sense at this point. I see that in the previous SERI MATS cohort there was a debate about this, titled “How much should be invested in interpretability research?", but apparently there is no recording. I think that, strategically, only [automated](https://www.lesswrong.com/posts/L4anhrxjv8j2yRKKp/how-discovering-latent-knowledge-in-language-models-without) and [black-box](https://www.lesswrong.com/posts/uXGLciramzNfb8Hvz/why-i-m-working-on-model-agnostic-interpretability) approaches to interpretability make practical sense to develop now. (Addition: [Hoagy's comment](https://www.lesswrong.com/posts/bRtP7Mub3hXAoo4vQ/an-open-letter-to-seri-mats-program-organisers?commentId=wXfE2mTfkzpB5rTAQ) reminded me that ["manual" mechanistic interpretability is still somewhat important as a means to ground scientific theories of DNNs](https://www.lesswrong.com/posts/opE6L8jBTTNAyaDbB/a-multi-disciplinary-view-on-ai-safety-research#3_4__Weaving_together_theories_of_cognition_and_cognitive_development__ML__deep_learning__and_interpretability_through_the_abstraction_grounding_stack). But this informs what *kind* of mechanistic interpretability one should be doing: not just "random" work, on the grounds that "all problems in mechanistic interpretability should be solved sooner or later" (no, at least not manually), but choosing the mechanistic experiments to test particular predictions of this-or-that scientific theory of DNNs or Transformers.)
* Independent researchers sometimes seem to think that if they come up with a particularly brilliant idea about alignment, leading AGI labs will adopt it. They fail to realise that this won’t happen *no matter what*, partly for social reasons, and partly because these labs already plan to deploy proto-AGIs to produce and/or check research (perhaps paired with humans) on a superhuman level, so they *already* don’t trust alignment research produced by “mere humans” that much and would rather wait to check it with their LLM-based proto-AGIs, expecting the risks of that to be manageable and the trade-off optimal, all things considered, in the current strategic situation. (Note: I may be entirely wrong about this point, but this is my perception.)

### What to do about this?

At universities, students and young researchers are expected to learn good philosophy of science primarily “covertly”, by “osmosis” from their advisors, professors, and peers, plus the standard “Philosophy” or “Philosophy of Science” courses. This “covert” strategy of upskilling in philosophy of science is itself methodologically dubious and strikes me as ineffective; e.g., it hinges too much on how good the mentor/advisor themselves is at philosophy of science, which may vary greatly. In programs like SERI MATS, the “osmosis” strategy of imparting good philosophy of science to people becomes even more problematic because the interaction and cooperation time is limited: in university, a student regularly meets with their advisor and other professors for several years, not several months. Therefore, direct teaching via seminars seems necessary.

I surveyed [the seminar curriculum of the last cohort](https://docs.google.com/document/d/1UnQRdyFhizNrHt3odNSZczGusjNmrYFZDYBQHNnmC2c/edit) and it barely touches on philosophy of science, epistemology, methodology, and strategy of research. Apart from the debate I mentioned above, “How much should be invested in interpretability research?”, which touches on strategy, there is also a seminar on “Technical Writing”, which is about methodology, and that’s it. **I think there should be** ***many*** **more seminars about philosophy of science, methodology of science, and strategy of research in the program.** Perhaps not at the expense of other seminar content, but via increasing the number of seminar hours. Talks about ethics, and in particular about ethical naturalism (such as Oliver Scott Curry’s “Morality as Cooperation”), or the interaction of ethics and science more generally, also seem essential.

My own views on philosophy of science and methodology of research in application to AI Safety are expressed in the post "[A multi-disciplinary view on AI safety research](https://www.lesswrong.com/posts/opE6L8jBTTNAyaDbB/a-multi-disciplinary-view-on-ai-safety-research)". It’s similar in spirit to Dan Hendrycks’ “[Interdisciplinary AI Safety](https://www.serimats.org/interdisciplinary)”, based on the “[Pragmatic AI Safety](https://www.lesswrong.com/s/FaEBwhhe3otzYKGQt)” agenda, but there are also some differences that I haven’t published yet, and these differences are beyond the scope of this letter to detail. It seems that the Santa Fe Institute is organizationally interested in AI and is hosting a lot of relevant seminars themselves, so I imagine they would be interested in collaborating with SERI MATS on this. They are also *good* at philosophy of science.
48a371bf-f19f-4911-8d8a-3ea736ee2c1f
trentmkelly/LessWrong-43k
LessWrong
Wikipedia pageviews: still in decline

In March 2015, I wrote about a decline in Wikipedia desktop pageviews over the last few years (and posted a short version to LessWrong). With a lot of help from Issa Rice over the last year, and a lot more quality data, I've revisited the claims of that post. This post provides a high-level summary of my takeaways. If enough people express interest in the comments, I intend to write in more detail about the aspects that people express interest in. If I do a more detailed writeup, it will probably be in the latter half of 2018, giving enough additional data to evaluate how well the decline hypothesis holds up. Here are the top-level conclusions.

1. Have English desktop Wikipedia pageviews (i.e., pageviews of Wikipedia pages from desktop devices) actually declined?

Short answer: Yes, they have declined by over 50% since the peak between late 2012 and late 2013. Some supposedly timeless page types have declined by up to 75-80%. The effect of the per-page decline is partly cancelled by an increase in the number of pages. If I do a longer post, I'll compare the time periods September to November 2012 against September to November 2017, and April to June 2013 against April to June 2018. Both are three-month periods, with equal representation of all days of the week, at the same time of the year, and with a separation of five years.

2. Why have English desktop pageviews declined?

Short answer: Substitution to mobile could explain between 10 and 40 percentage points of the desktop decline. I personally gravitate to the lower end of that estimate range. Inclusion/exclusion of non-human traffic could explain between 5 and 20 percentage points of the decline. The switch to HTTPS and the block of Wikipedia in China explain a sharp mid-2015 decline, but use of Chinese Wikipedia (which should have been most affected) has recovered, and I expect the long-term effect to be close to zero. At most, it is 5 percentage points. The residual decline is between 0 and 20 percentage
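For what it's worth, the matched-period comparison described in (1) is straightforward to reproduce. Here is a minimal sketch, assuming a daily table of pageviews; the column names and the data source are hypothetical, not the actual dataset used for this analysis.

```python
# Minimal sketch of the matched three-month-period comparison above,
# assuming a DataFrame with ISO-formatted "date" strings and a
# "pageviews" column (both names are hypothetical).
import pandas as pd

def period_mean(df: pd.DataFrame, start: str, end: str) -> float:
    mask = (df["date"] >= start) & (df["date"] <= end)
    return df.loc[mask, "pageviews"].mean()

def pct_decline(df: pd.DataFrame) -> float:
    before = period_mean(df, "2012-09-01", "2012-11-30")
    after = period_mean(df, "2017-09-01", "2017-11-30")
    return 100.0 * (before - after) / before
```

Matching the same months five years apart, as the post proposes, controls for both day-of-week composition and seasonality, so the residual difference is attributable to the underlying trend.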
5ca98c97-850c-41f7-ad85-4375f3a5b1ad
trentmkelly/LessWrong-43k
LessWrong
Meetup : Baltimore Weekly Meetup Discussion article for the meetup : Baltimore Weekly Meetup WHEN: 10 July 2016 08:00:00PM (-0400) WHERE: 1726 reisterstown road pikesville maryland 21208 Pikesville DoubleTree - Hilton, at the restaurant / bar or at one of the tables outside. Discussion article for the meetup : Baltimore Weekly Meetup
8cce892f-4694-4d99-a452-08980bf3d206
trentmkelly/LessWrong-43k
LessWrong
Truth seeking as an optimization process

From the costs of rationality wiki:

> Becoming more epistemically rational can only guarantee one thing: what you believe will include more of the truth. Knowing that truth might help you achieve your goals, or cause you to become a pariah. Be sure that you really want to know the truth before you commit to finding it; otherwise, you may flinch from it.

The reason that truth seeking is often seen as integral to rationality is that in order to make optimal decisions you must first be able to make accurate predictions. Delusions, or false beliefs, are self-imposed barriers to accurate prediction. They are surprise inducers. It is because of this that the rational path is often to break delusions, but you should remember that doing so is a slow and hard process that is rife with potential problems. Below I have listed three scenarios in which a person could benefit from considering the costs of truth seeking. The first scenario is when seeking a more accurate measurement is computationally expensive and not really required. The second is when you know that the truth will be emotionally distressing to another person and that this person is not in an optimal state to handle it. The third is when you are trying to change the beliefs of others. It often helps if you can understand the costs involved for them in changing their beliefs, as well as their perspective. This makes you better able to actually change their beliefs rather than just win an argument.

Scenario 1: computationally expensive truth

> We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. – Donald Knuth

If optimization requires significant effort and only results in minimal gains in utility, then it is not worth it. If you only need to be 90% sure that something is true and you are currently 98% sure that it is, then it
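A toy version of the Scenario 1 decision, with made-up numbers: refining a belief is only worthwhile when you are below the confidence you actually need and the expected gain from a more accurate estimate exceeds the cost of computing it.

```python
# Toy value-of-information check for Scenario 1 (numbers are made up):
# refine a belief only when you are below the confidence you need AND
# the expected gain exceeds the cost of further verification.
def worth_refining(confidence, required_confidence, expected_gain, cost):
    if confidence >= required_confidence:
        return False          # already sure enough; refinement buys nothing
    return expected_gain > cost

# At 98% confidence when only 90% is needed, further verification is waste:
print(worth_refining(0.98, 0.90, expected_gain=1.0, cost=5.0))  # -> False
```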
19252331-06da-48b9-b2ed-2a6f583643d4
StampyAI/alignment-research-dataset/lesswrong
LessWrong
AISC5 Retrospective: Mechanisms for Avoiding Tragedy of the Commons in Common Pool Resource Problems

Work by Quinn Dougherty, Ben Greenberg & Ariel Kwiatkowski

From the AI Safety Camp group that worked on [Cooperativity and Common Pool Resources](https://www.lesswrong.com/posts/QEmfyhqMcSpfnY2dX/how-teams-went-about-their-research-at-ai-safety-camp#Cooperativity___Common_Pool_Resources_): a write-up of a problem we worked on, our proposed solution, and the results of implementing this solution in a simulated environment.

---

Contents:

1. Problem description
2. Review of related work
3. Experiment and results
4. Retrospective

Check out our GitHub repo [here](https://github.com/RedTachyon/cpr_reputation).

**0. Abstract**
---------------

*When multiple parties share access to a finite resource, they may overuse the resource, leading to a Tragedy of the Commons. A simple way of mitigating this problem is to allow them to decrease the effective population by using violence - this, however, is not the best solution for obvious reasons. We study interventions for avoiding these outcomes in environments with multiple self-interested agents. In particular, a reputation system can incentivize agents to “cooperate” by harvesting the resource more sustainably. This system promotes multi-agent cooperation without modifying the agents’ reward functions.*

**1. Problem description: tragedy of the commons**
--------------------------------------------------

A [common pool resource](https://en.wikipedia.org/wiki/Common-pool_resource) (CPR) is a good which anyone can use but which no one can entirely control. When competition over the resource increases, individual parties are incentivized to appropriate as much of the resource as possible for themselves, which further increases competition, potentially leading to overconsumption. This process is commonly known as the [tragedy of the commons](https://en.wikipedia.org/wiki/Tragedy_of_the_commons), and has been extensively studied in economics and the social sciences.

A tragedy of the commons often leads to the exhaustion of a CPR. The classic example is of fishermen in an enclosed body of water; when fish are scarce, an every-man-for-himself dynamic emerges. Any available fish is quickly caught for fear that, if one hesitates, one's counterparts will surely take it. This behavior results in the fish population failing to replenish, and everyone catching fewer fish in the long run. Even if an individual would prefer to fish sustainably, they are punished for doing so unilaterally. This is apparent in the [prisoners’ dilemma](https://en.wikipedia.org/wiki/Prisoner%27s_dilemma), where overfishing is analogous to “defection” and sustainable fishing to “cooperation”. In order to make cooperation a viable strategy, other participants must agree to follow suit. Even if several parties do manage to coordinate, their incentives remain the same, unless non-cooperative parties are held accountable for their behavior. The best individual outcome for any player is to convince their counterparts to cooperate — then continue defecting, if they can get away with it.
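To see the incentive structure concretely, here is an illustrative payoff matrix for the two-fisherman dilemma; the numbers are made up, but any payoffs with this ordering produce the same conclusion.

```python
# Illustrative payoffs (made-up numbers) for one fisherman, indexed by
# (my choice, their choice): mutual sustainability beats mutual
# overfishing, but overfishing is the better reply to either choice.
PAYOFF = {
    ("sustain",  "sustain"):  3,
    ("sustain",  "overfish"): 0,
    ("overfish", "sustain"):  5,
    ("overfish", "overfish"): 1,
}

for theirs in ("sustain", "overfish"):
    best = max(("sustain", "overfish"),
               key=lambda mine: PAYOFF[(mine, theirs)])
    print(f"If they {theirs}, your best reply is to {best}")
# Both lines print "overfish": defection dominates, hence the tragedy.
```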
This is related to the [free-rider problem](https://en.wikipedia.org/wiki/Free-rider_problem), wherein agents profit from others’ cooperation without cooperating themselves. If participants cannot coordinate in this way, they may resort to coercion or violence. After all, the most effective way to eliminate competition over resources may be to eliminate one’s competitors. While this can, in fact, lead to stable consumption and avoid the tragedy of the commons, it is undesirable from a societal point of view.

One solution might be to keep track of each participant’s “reputation”, as determined by their history of cooperation/defection. An agent with a good reputation is known to cooperate often, especially with other high-reputation agents. On the other hand, an agent with a bad reputation is known for defecting against trustworthy agents, and therefore should not be trusted. Reputation can stem both from consumption behavior and from the level of violence. An equitable solution would be all participants cooperating peacefully, and only using violence as a defense mechanism against the agents which are expected to defect. Our goal in this project was to see if a transparent reputation system would allow agents to trust each other enough to cooperate, such that their combined rewards would be higher over time, while at the same time avoiding excessive violence. This problem may be important for existential safety as it relates to climate change and sustainability, as well as conflict over finite resources.

**2. Related Work**
-------------------

Elinor Ostrom discusses tragedies of the commons in her book [Governing the Commons: the Evolution of Institutions for Collective Action](https://www.amazon.co.uk/Governing-Commons-Evolution-Institutions-Collective/dp/1107569788). [Zhu and Kirley](https://ieeexplore.ieee.org/document/8790001) present a CPR game wherein multiple agents harvest a finite renewable resource. Agents seek to maximize a weighted combination of two objectives: *personal wealth* (i.e. the value of the resource they appropriate, minus the cost of this effort) and *sustainability* (i.e. a measure of whether the resource is being depleted). As the replenishment rate grows logistically with respect to the size of the resource pool, agents must cooperate (without communicating) to avoid exhausting the resource. While this paper solves a similar problem to the one we investigate, we found the prospect of reward function modification to be unrealistic; most real-world CPR problems will likely not be solved by convincing the participants to harvest more sustainably.

[Leibo et al.](https://storage.googleapis.com/deepmind-media/papers/multi-agent-rl-in-ssd.pdf) defined a sequential social dilemma (SSD) as a Markov game which abstracts a repeated matrix game. In an SSD, sequences of actions are labeled as “cooperative” or “defective” based on empirical game-theoretic analysis. With this approach, we can use reinforcement learning (RL) algorithms to learn policies in simulated environments, and analyze the resultant incentive structures using the tools of game theory. Most existing simulations of CPRs involve normal-form games, where, in each time step, agents simply choose to harvest a scalar amount of the resource. In 2017, [DeepMind applied the SSD framework to CPR games](https://arxiv.org/pdf/1707.06600.pdf), allowing agents to learn complex strategies for gathering apples in a grid world.
We implemented a copy of this environment for our experiments; the mechanics are described in the next section. We find that, though this paper presents a new paradigm for training groups of agents to interact in CPR environments, as well as tools for measuring a group’s “cooperativity”, it does not offer a satisfactory solution to the implied problems of safety. The researchers found that the tagging action increases rewards over time: “Conflict between agents in the commons game has the effect of lowering the effective population size and thus relieving pressure on the CPR stock. With less agents harvesting at any given time, the survivors are free to collect with greater impunity and less risk of resource depletion.” Furthermore, they saw exclusion/privatization of resources as a useful tactic for increasing sustainability at the expense of equality. Needless to say, these “solutions” to scarcity have alarming implications for real-world problems — though to be fair, prescribing “safe” solutions is not in the explicit aims of the paper, which offers a descriptive account of behavior in CPR problems.

Finally, the document [Open Problems in Cooperative AI](https://arxiv.org/abs/2012.08630) provides an overview of problems related to cooperation in multi-agent environments. Section 4.4.3 covers institutions such as reputation systems for building trust between multiple autonomous parties.

**3. Experiment and results**
-----------------------------

Our main goal was to demonstrate how a reputation system could incentivize cooperative solutions to CPR problems. To do this, we replicated the Harvest environment from [Perolat et al.](https://arxiv.org/pdf/1707.06600.pdf) The environment consists of a grid of cells, some of which contain apples. Agents can move up, down, left, and right, or rotate to either side during each time step, and can collect an apple by moving into a cell that contains one. An apple can regrow in its original cell, with a higher probability if there are several apples nearby. Furthermore, agents are equipped with a wide “tagging beam” that can disable a competitor for 25 time steps. The episode ends when all apples have been gathered, or after 1000 time steps. You can see a video of an episode [here](https://drive.google.com/file/d/1z4BaxaiYhT12AjSNpYFsSXnqHG6k71Bw/view?usp=sharing).

The input to each agent is a set of binary matrices for each “layer” of the board: apples, walls, and agents. The visibility is partial, limited to a slice of the board in front of the agent. We added an additional layer for (centralized) reputation scores, so that each agent could make decisions based on the reputations of the agents in its field of view. In the original environment, agents have no way of recognizing which other agents are contributing to the tragedy of the commons dynamic. We added a centralized reputation system so agents could learn policies that depended on the “cooperativity” of their nearby counterparts. Ideally, “exploitative” agents would be given lower reputation scores than “cooperative” ones -- this would enable the latter to coordinate effectively, and “tag” agents more likely to exploit the resource. We hoped that over time, agents would learn that tagging their exploitative counterparts resulted in greater sustainability and overall reward. The factors we tested for the reputation system included (a code sketch of the update rule appears at the end of this post):

* **Resource depletion**: reputation is decreased when the agent harvests an apple, and increased when the agent passes near an apple without harvesting it.
The amount of increase or decrease is proportional to the “value” of that apple to the sustainability of the environment; isolated apples are higher-value, since eliminating them makes it much harder for apples to ever regrow in that part of the environment. * **Aggression**: reputation is decreased when the agent tags another agent. We trained groups of 10 heterogeneous agents to selfishly maximize their own rewards in a custom environment: ![](https://lh3.googleusercontent.com/pk7CANtQxxe2CQ3rBpr6HOogPSKMlD2NdJi8I5Eeqj4vsOMccoliNnLkyFsU7gkZsfCXB4J0mdGE0ja9jo0-2SYXrGe-Filc7Ct6pz8W-FzhwPsNcAkbBgzxIfjOL-tp96ZGsuTc=s0)Here the agents are represented by blue squares, and the apples by green squares. We ran 3 trials: 1. Control: no tagging, no reputation system (peach in below graphs) 2. Control: tagging, no reputation system (maroon in below graphs) 3. Experiment: tagging, reputation system (purple in below graphs) Overall, incorporating a reputation system slightly decreased the rate of tagging: ![](https://lh3.googleusercontent.com/RFxL-jzMrR44nuBxum2IxWuHu_gGBV2-EAdnxpqBBVQAgEf5q2Te6UCNT8T_1qrHb4q5xHfrCogDYfqnmvahkg-HvbPwvp64Xyd3eFizYaZonP3tLrupStxfrWiBpIWnKIBsjoKu=s0)However, the reputation system had a negligible effect on rewards, sustainability (i.e. the average time step during which an apple was harvested), and equality (i.e. the gini coefficient of rewards), especially compared to removing tagging:  ![](https://lh4.googleusercontent.com/8dZKH7pX-I1avYOdmZoywKH9RDHQzp5GiaCDWWoQ3DEKuiUiNi1D1WQk5kO6oq9W7zQnKxmtRHErB_ibemprbUjdOD_F91GQu-_UMZK9Q4Z7C0fg57X_CeVIjIb3uwkudKn6SLiA=s0)![](https://lh3.googleusercontent.com/luXw2f93vLxsNiGuOEtUtpLx5EgNzNhjYv0PdaS0Js7lPWgGpr6ZRvwz0bEedLF7sfIWm1aLhsENuqMLMuZB8Tf8Z4Shr-NSfPU3nvmRVRzVkW3FV_SK-yUIFpcH84TZXZqtZGCI=s0)   ![](https://lh3.googleusercontent.com/Op9Th_NjLPJdNwldIEfchjkEOZ32W5cneXnbULmcWo3xaqZdHFnyYe8l-5HZUfI8oVovCl5Mqs3uilzZgivN5MA4g00YUoXHxXXdXG-DiEX9iBXNt6ac7TUo2GlRS2u1NP1J4AlL=s0)   Basing the reputation score on **aggression** was ineffective. In thinking about tragedies of the commons, we had assumed that violence and selfish behavior would correlate with lower cumulative rewards. However, we observe that the addition of the tagging metric actually led to *increased* cumulative rewards, at the cost of a lower sustainability. So if the reputation system “punishes” tagging, agents may actually learn to *cooperate* with their low-reputation counterparts. The other factor we considered, **resource depletion**, was also ineffective for avoiding the tragedy of the commons dynamic. In our implementation, it is not clear that “greedier” agents (i.e. those which readily harvest high-value apples) are tagged at a higher rate than their more conservative counterparts.   **4. Retrospective** -------------------- At the beginning of this project, we considered using a reputation mechanism system to modify agents’ individual rewards directly; this was the approach taken in [Zhu & Kirley](https://ieeexplore.ieee.org/document/8790001). However, we judged that this mechanism was an unrealistic intervention for avoiding tragedies of the commons related to common pool resources in the real world. Our alternative strategy was to build a reputation system for measuring “cooperativity,” and hope that agents would learn to reward prosocial behavior.  Now, it is clear that our alternative strategy was much more difficult than we had anticipated. 
Agents did not learn that defecting against defectors, or cooperating with cooperators, yielded greater individual reward. Though agents were trained heterogeneously, differences between individual policies may have been too small to allow agents to “learn” cooperativity. If we had identified this as a potential crux earlier in the project, we could have modified our approach after the current instantiation failed to produce any change in cumulative reward. Instead, we spent several months (the bulk of the AISC5 program) implementing our own version of the Harvest environment. This work was undoubtedly useful, as we were able to create a modular and customizable environment for running experiments. However, waiting months to run the first reputation system experiment made it difficult to iterate as the final presentations drew near. In retrospect, we could have modified [DeepMind’s own implementation of Harvest](https://github.com/eugenevinitsky/sequential_social_dilemma_games/blob/master/social_dilemmas/envs/harvest.py), rather than building our own environment from scratch. On the bright side, we got lots of practice debugging the custom-built multi-agent RLlib environment.   ------   Thanks to Max Chiswick, Remmelt Ellen, Caspar Oesterheld, and all of the AISC organizers for their advice and support for this project.
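As promised above, here is a minimal sketch of the kind of reputation update described in Section 3. This is illustrative only, not the project's actual RLlib code: the constants and the apple-value formula are arbitrary stand-ins.

```python
# Minimal sketch (not the project's actual code) of the two reputation
# factors tested in Section 3; alpha/beta and the apple-value formula
# are illustrative.
def apple_value(n_nearby_apples: int) -> float:
    # Isolated apples matter more for regrowth, so harvesting them
    # should cost more reputation.
    return 1.0 / (1 + n_nearby_apples)

def update_reputation(rep: float, event: str,
                      n_nearby_apples: int = 0,
                      alpha: float = 0.1, beta: float = 0.2) -> float:
    if event == "harvest":        # resource-depletion factor
        rep -= alpha * apple_value(n_nearby_apples)
    elif event == "forgo":        # passed near an apple without harvesting
        rep += alpha * apple_value(n_nearby_apples)
    elif event == "tag":          # aggression factor
        rep -= beta
    return min(1.0, max(0.0, rep))  # clamp to [0, 1]
```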
cfb6e382-9a33-409f-91eb-4672228a4280
trentmkelly/LessWrong-43k
LessWrong
Statistics for objects with shared identities

I want to know if there exist statistics for objects that may "share" properties and identities. More specifically, I'm interested in this principle: Properties of objects aren't contained in specific objects. Instead, there's a common pool that contains all properties. Objects take their properties from this pool. But the pool isn't infinite. If one object takes 80% of a certain property from the pool, other objects can take only 20% of that property.

How can an object take away properties from other objects? What does it mean?

Example 1. Imagine you have two lamps. Each has 50 points of brightness. You destroy one of the lamps. Now the remaining lamp has 100 points of brightness. Because brightness is limited and shared between the two lamps.

Example 2. Imagine there are multiple interpretations of each object. You study the objects' sizes. The interpretation of one object affects the interpretations of all other objects. If you choose the "extremely big" interpretation for one object, then you need to choose smaller interpretations for other objects. Because size is limited and shared between the objects.

Different objects may have different "weights", determining how much of the common property they get. Do you know any statistical concepts that describe situations in which objects share properties like this?

Analogy with probability

I think you can compare the common property to probability:

* The total amount of the property is fixed. New objects don't add or subtract from the total amount.
* "Weight" of an object is similar to prior probability. (Bayes' theorem)
* The amount of property an object gets depends on the presence/absence of other objects and their weights. This is similar to conditional probability.

But I have never seen Bayes' rule used for something like this: for distributing a property between objects.

Probability 2

You can apply the same principle of "shared properties/identities" to probability itself. Example. Imagine you throw 4 weird coins. Eac
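A minimal sketch of the pool idea, mirroring the lamp example. The proportional-to-weight rule is one assumption about how the pool gets divided; the weights play the role the post assigns to prior probabilities.

```python
# The "common pool" from the lamp example: a fixed total of a property is
# split among objects in proportion to their weights (the proportional
# rule is an assumption, not the only possible division rule).
def allocate(total: float, weights: dict) -> dict:
    s = sum(weights.values())
    return {name: total * w / s for name, w in weights.items()}

print(allocate(100, {"lamp_a": 1, "lamp_b": 1}))  # both lamps: 50.0 each
print(allocate(100, {"lamp_b": 1}))               # lamp_a destroyed: 100.0
```

Note how destroying an object redistributes its share, exactly as in Example 1, and how unequal weights would behave like unequal priors under renormalization.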
375bfeaf-9427-459a-a39a-19b764843b5e
trentmkelly/LessWrong-43k
LessWrong
«Boundaries», Part 3a: Defining boundaries as directed Markov blankets This is Part 3a of my «Boundaries» Sequence on LessWrong. Here I attempt to define (organismal) boundaries in a manner intended to apply to AI alignment and existential safety, in theory and in practice. A more detailed name for this concept might be an approximate directed (dynamic) Markov blanket. Skip to the end if you're eager for a comparison to related work including Scott Garrabrant's Cartesian frames, Karl Friston's active inference, and Eliezer Yudkowsky's functional decision theory; these are not prerequisites. Motivation In Part 3b, I'm hoping to survey a list of problems that I believe are related, insofar as they would all benefit from a better notion of what constitutes the boundary of a living system and a better normative theory for interfacing with those boundaries.  Here are the problems: 1. AI boxing / Containment — the method and challenge of confining an AI system to a "box", i.e., preventing the system from interacting with the external world except through specific restricted output channels (Bostrom, 2014, p.129). 2. Corrigibility — the problem of constructing a mind that will cooperate with what its creators regard as a corrective intervention (Soares et al, 2015). 3. Mild Optimization — the problem of designing AI systems and objective functions that, in an intuitive sense, don’t optimize more than they have to (Taylor et al, 2016). 4. Impact Regularization — the problem of formalizing "change to the environment" in a way that can be effectively used as a regularizer penalizing negative side effects from AI systems (Amodei et al, 2016). 5. Counterfactuals in Decision Theory — the problem of defining what would have happened if an AI system had made a different choice, such as in the Twin Prisoner's Dilemma (Yudkowsky & Soares, 2017). 6. Mesa-optimizers — instances of learned models that are themselves optimizers, which give rise to the so called inner alignment problem (Hubinger et al, 2019). 7. Preference Plasticity — the possi
79565d00-fd4e-4ec9-9bda-b9b9c6fa7766
trentmkelly/LessWrong-43k
LessWrong
Weekly LW Meetups This summary was posted to LW Main on November 20th. The following week's summary is here. Irregularly scheduled Less Wrong meetups are taking place in: * Prague Less Wrong Meetup: 02 December 2015 07:00PM * San Antonio Meetup!: 29 November 2015 02:00PM The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup: * Moscow meetup: discussion on community evolution; power and culture talk; rational games: 23 November 2015 02:00PM * [Moscow] FallacyMania game in Kocherga club: 25 November 2015 07:30PM * NYC Solstice: 19 December 2015 05:30PM * Seattle Solstice: 19 December 2015 05:00PM * Tel Aviv: Black Holes after Jacob Bekenstein: 24 November 2015 08:00AM * Vienna: 21 November 2015 04:00PM * [Vienna] Five Worlds Collide - Vienna: 04 December 2015 08:00PM * Washington, D.C.: Fun & Games: 22 November 2015 03:00PM Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.   If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun! In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page every Friday. These are an attempt to collect information on all the meetups happening in upcoming weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll also have the benefit of having y
cb4ad2e2-e3b7-4008-85ca-1e3d911881e6
trentmkelly/LessWrong-43k
LessWrong
Chording "The Next Right Thing" One of the songs we're planning for this year's Boston Secular Solstice ( more) is The Next Right Thing from Frozen II. It's sad and dark, but also hopeful and determined, and fits well with the program and theme. I'm going to be accompanying it, and don't like the chords I can find online. Last night I spent some time listening to the recording and trying to convert it to something that lands well on guitar and works for group singing, and ended up with something I'm reasonably happy with. The original goes up a half step partway through (around "I won't look too far ahead"), which is a nice effect but makes for awkward chords so I've kept it in the same key throughout. I've also pulled it down three semitones (Cm to Am) to be a better fit for typical voices. The original melody for the verse section is hard to follow since Anna is talking a lot of the words, so these are the chords for a melody I've imagined that is reasonably consistent with the notes she does sing and is not bad to pick up on the fly [1]. What I've written as "G-" is a G chord with only the two low fingers. Am Bm I've seen dark before, but not like this G- Am This is cold, this is empty, this is numb Am Bm The life I knew is over, the lights are out G- Am Hello darkness, I'm ready to succumb Am Bm I follow you around I always have G- Am But you've gone to a place I cannot find Am Bm This grief has a gravity, it pulls me down G- D E But a tiny voice whispers in my mind Am D G C "You are lost, hope is gone, but you must go on C G Am And do the next right thing" Am Bm Can there be a day beyond this night? G-
1fa22245-e092-43b7-81ad-c45e6a27525d
StampyAI/alignment-research-dataset/special_docs
Other
Spoofing the Limit Order Book: A Strategic Agent-Based Analysis.

Abstract
--------

We present an agent-based model of manipulating prices in financial markets through spoofing: submitting spurious orders to mislead traders who learn from the order book. Our model captures a complex market environment for a single security, whose common value is given by a dynamic fundamental time series. Agents trade through a limit-order book, based on their private values and noisy observations of the fundamental. We consider background agents following two types of trading strategies: the non-spoofable zero intelligence (ZI) that ignores the order book and the manipulable heuristic belief learning (HBL) that exploits the order book to predict price outcomes. We conduct empirical game-theoretic analysis upon simulated agent payoffs across parametrically different environments and measure the effect of spoofing on market performance in approximate strategic equilibria. We demonstrate that HBL traders can benefit price discovery and social welfare, but their existence in equilibrium renders a market vulnerable to manipulation: simple spoofing strategies can effectively mislead traders, distort prices and reduce total surplus. Based on this model, we propose to mitigate spoofing from two aspects: (1) mechanism design to disincentivize manipulation; and (2) trading strategy variations to improve the robustness of learning from market information. We evaluate the proposed approaches, taking into account potential strategic responses of agents, and characterize the conditions under which these approaches may deter manipulation and benefit market welfare. Our model provides a way to quantify the effect of spoofing on trading behavior and market efficiency, and thus it can help to evaluate the effectiveness of various market designs and trading strategies in mitigating an important form of market manipulation.

Keywords: market manipulation; agent-based simulation; trading agents; empirical game-theoretic analysis

1. Introduction
----------------

Financial exchanges nowadays operate almost entirely electronically, supporting automation of trading and consequential scaling of volume and speed across geography and asset classes. With data and information streaming on an extremely short timescale, often below the limits of human response time, autonomous trading agents directed by algorithms operate on behalf of human traders. Such increasing automation has transformed the financial market landscape from a human decision ecosystem to an algorithmic one, where autonomous agents learn new information, make decisions and interact with each other at an unprecedented speed and complexity. Whereas these developments in market operation and trading technology may contribute to improved efficiency, they also introduce risks, such as the potential for new forms of manipulative practice driven by algorithms.

Market manipulation is defined by the U.S. Securities and Exchange Commission (SEC) as “intentional or willful conduct designed to deceive or defraud investors by controlling or artificially affecting the price of securities, or intentional interference with the free forces of supply and demand”.
Although it has long been present, the practice has also evolved in its forms to exploit automated trading and the dissemination of market information offered by many trading platforms [[1](#B1-games-12-00046)]. Computer programs are employed to inject deceitful information, as other investors use algorithms to extract information from all possible sources (including the misleading ones) and execute decisions accordingly. On 21 April 2015, nearly five years after the “Flash Crash”—a sudden trillion-dollar dip in U.S. stock markets on 6 May 2010, during which stock indexes collapsed and rebounded rapidly [[2](#B2-games-12-00046)], the U.S. Department of Justice charged Navinder Singh Sarao with 22 criminal counts, including fraud and market manipulation. Prior to the Flash Crash, Sarao allegedly used an algorithm to place orders amounting to about $200 million, seemingly betting that the market would fall, and later replaced or modified those orders 19,000 times before cancellation. The U.S. Commodity Futures Trading Commission (CFTC) concluded that Sarao’s manipulative practice was responsible for significant order imbalances. Although recent analysis has cast doubt on the causal role of Sarao in the Flash Crash [[3](#B3-games-12-00046)], many agree that such manipulation could increase the vulnerability of markets and exacerbate market fluctuations.

The specific form of manipulation we examine in this paper, spoofing, operates through a series of direct trading actions in a market. Traders interact with the market by submitting orders to buy or sell. Orders that do not transact immediately rest in the order book, a repository for outstanding orders to trade. At any given time, the order book for a particular security reflects the market’s expressed supply and demand. Spoofing refers to the practice of submitting large spurious buy or sell orders with the intent to cancel them before execution. The orders are spurious in that instead of expressing genuine trading intent, they feign a strong buy or sell interest in the market, thus corrupting the order book’s signal on supply and demand. Other traders are then misled by the spoof orders to believe that prices may soon rise or fall, thus altering their own behavior in a way that will directly move the price. To profit on its feint, the manipulator can submit a real order on the opposite side of the market and, as soon as the real order transacts, cancel all the spoof orders. [Figure 1](#games-12-00046-f001) illustrates an alleged spoofing activity conducted over the course of 0.6 s, demonstrating how quickly and effectively such manipulation behavior can affect the market and profit from the spoofed belief.

In 2010, the Dodd–Frank Wall Street Reform and Consumer Protection Act was signed into U.S. law, outlawing spoofing as a deceptive practice. In describing its concern about spoofing, the CFTC notes that “many market participants, relying on the information contained in the order book, consider the total relative number of bid and ask offers in the order book when making trading decisions”. In fact, spoofing can be effective only to the extent that traders actually use order book information to make trading decisions. In ideal markets without manipulation, traders may extract useful information from the order book, making more informed decisions than those that neglect such information. A manipulator exploits this learning process, minimizing its own risk in the process.
Spoof orders are typically placed at price levels just outside the current best quotes to mislead other investors and withdrawn with high probability before any market movement could trigger a trade [[4](#B4-games-12-00046),[5](#B5-games-12-00046)].

We aim to reproduce spoofing in a computational model, as a first step toward developing more robust measures to characterize and prevent spoofing. [Figure 2](#games-12-00046-f002) gives an overview of our agent-based market model. The model implements a continuous double auction (CDA) market with a single security traded. The CDA is a two-sided mechanism adopted by most financial and commodity markets [[6](#B6-games-12-00046)]. Traders can submit limit orders at any time, and, whenever an incoming order matches an existing one, they trade at the incumbent order’s limit price. We adopt an agent-based modeling approach to simulate the interactions among players with different strategies. The market is populated with multiple background traders and, in selected treatments, one manipulator who executes the spoofing strategy. Background traders are further divided between two types of trading strategies: zero intelligence (ZI), which ignores the order book, and heuristic belief learning (HBL), which learns from the order book to predict price outcomes. Upon each arrival to trade, a background trader receives a noisy observation of the security’s fundamental value. Based on a series of fundamental observations and its private value, a ZI agent computes the limit-order price by shading a random offset from its valuation, and thus it is non-manipulable. An HBL agent, on the other hand, is susceptible to spoofing: it considers information about orders recently submitted to the market, estimates the probability that orders at various prices would be transacted and chooses the optimal price to maximize expected surplus. The manipulator in our model executes a spoofing strategy similar to that illustrated in [Figure 1](#games-12-00046-f001). The spoofer injects and maintains large spurious buy orders at one tick behind the best bid, designed to manipulate the market by misleading others about the level of demand.

We conduct extensive simulation over hundreds of strategy profiles across parametrically different market environments with and without manipulation. The simulation data are used to estimate normal-form game models over the strategies explored in agent-based simulation. From these models, we derive empirical equilibria, where every agent chooses its best response within the set of available strategies to both the market environment and others’ behavior. Studying behavior in (empirical) equilibrium provides robustness to designer choices in agent-based modeling, selecting behaviors based on a rationality criterion [[7](#B7-games-12-00046)]. Although the strategies considered in this game model are restricted compared to the true underlying game (where any conceivable trading strategy is possible), imposing a rationality filter on the set considered accounts for strategic response to different settings and ensures that we are evaluating the most relevant configurations of available strategies.
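For intuition, the following is a minimal, self-contained sketch of the spoofing tactic just described: keep large spurious buys one tick behind the best bid, submit a genuine sell, then withdraw the spoofs. The order representation below is a stand-in of ours, not the paper's simulator.

```python
# Minimal sketch (assumptions ours, not the authors' code) of the
# spoofing tactic: maintain large spurious buys one tick behind the
# best bid, submit a genuine sell, then cancel the spoofs.
from dataclasses import dataclass, field

TICK = 1

@dataclass
class Order:
    side: str        # "buy" or "sell"
    price: int
    quantity: int
    live: bool = True

@dataclass
class SpooferState:
    spoofs: list = field(default_factory=list)

def maintain_spoof(state: SpooferState, best_bid: int, size: int = 200):
    """Re-post the spurious buy interest one tick behind the best bid,
    where it inflates visible demand but is unlikely to transact."""
    for o in state.spoofs:
        o.live = False                     # cancel stale spoof orders
    state.spoofs = [Order("buy", best_bid - TICK, size)]
    return state.spoofs[-1]

def exit_trade(state: SpooferState, best_bid: int) -> Order:
    """Submit the genuine opposite-side order; once it trades, withdraw
    all spoof orders before they can execute."""
    real = Order("sell", best_bid, 1)      # sells into spoof-inflated demand
    for o in state.spoofs:
        o.live = False
    state.spoofs.clear()
    return real
```

The key property, as the paper emphasizes, is that the spoof orders sit behind the best quote: they shift the book's apparent supply/demand balance, and hence the beliefs of order-book learners, while rarely executing themselves.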
Below, we overview the structure of the paper and summarize our main contributions and results.

#### Roadmap

We start by reproducing spoofing in an agent-based model and evaluating its impact on background traders. [Section 3](#sec3-games-12-00046) introduces the design of our CDA market model and parameterized trading strategies. [Section 4](#sec4-games-12-00046) describes the empirical game-theoretic analysis (EGTA) methodology [[7](#B7-games-12-00046)], which we adopt for finding equilibria in games defined by heuristic strategy space and simulated payoff data. [Section 5](#sec5-games-12-00046) addresses the choice of background traders among HBL and ZI strategies in markets with and without spoofing. We demonstrate through EGTA that, in a range of non-spoofing environments, HBL is preferred in equilibrium and benefits price discovery and social welfare. However, this renders a market vulnerable to manipulation: by executing a spoofer against the equilibrium profiles, we show that simple spoofing strategies can manipulate prices in a desired direction. After re-equilibrating games with spoofing, we find HBL still persists in equilibria but with smaller mixture probability, suggesting a consistently spoofable market. Although the welfare benefits of HBL remain, the presence of spoofing decreases market surplus.

Building on our computational model of spoofing, we investigate market mechanisms and trading strategies to mitigate spoofing without directly detecting each individual activity. In [Section 6](#sec6-games-12-00046), we propose a cloaking mechanism to deter spoofing. It adapts the way a standard order book discloses market information by symmetrically concealing a specified number of price levels from the inside of the book. The idea is to make it more difficult for the spoofer to post misleading bids, while not unduly degrading the general usefulness of market information. We characterize market conditions under which such cloaking may mitigate manipulation and benefit market welfare. We further design sophisticated spoofing strategies that probe to reveal cloaked information and demonstrate that the effort and risk of probing exceed the gains.

In [Section 7](#sec7-games-12-00046), we explore two variations of the standard HBL strategy to reduce the vulnerability of learning traders to spoofing. The first selectively ignores orders at certain price levels, particularly where spoof orders are likely to be placed. The second considers the full order book, but adjusts its limit order price to correct for bias in decisions based on the learned heuristic beliefs. We evaluate these variations on two criteria: effectiveness in non-manipulated markets and robustness against manipulation.

2. Related Work
----------------

#### 2.1. Agent-Based Modeling of Financial Markets

Agent-based modeling (ABM) takes a simulation approach to study complex domains with dynamically interacting decision makers. ABM has been frequently applied to modeling and understanding phenomena in financial markets [[8](#B8-games-12-00046)], for example to study the Flash Crash [[9](#B9-games-12-00046)] or to replicate the volatility persistence and leptokurtosis characteristic of financial time series [[10](#B10-games-12-00046)]. A common goal of agent-based finance studies is to reproduce stylized facts of financial market behavior [[11](#B11-games-12-00046)] and to support causal reasoning about market environments and mechanisms.
Researchers have also used ABM to investigate the effects of particular trading practices, such as market making [[12](#B12-games-12-00046)] and latency arbitrage [[13](#B13-games-12-00046)]. ABM advocates argue that simulation is particularly well-suited to study financial markets [[14](#B14-games-12-00046)], as analytic models in this domain typically require extreme stylization for tractability, and pure data-driven approaches cannot answer questions about changing market and agent designs.

#### 2.2. Autonomous Bidding Strategies

There is a substantial literature on autonomous bidding strategies in CDA markets [[15](#B15-games-12-00046)]. The basic zero intelligence (ZI) strategy [[16](#B16-games-12-00046)] submits offers at random offsets from valuation. Despite its simplicity, ZI has been shown surprisingly effective for modeling some cases [[17](#B17-games-12-00046)]. In this study, we adopt an extended and parameterized version of ZI to represent trading strategies that ignore order book information.

Researchers have also extended ZI with adaptive features that exploit observations to tune themselves to market conditions. For example, the zero intelligence plus (ZIP) strategy outperforms ZI by adjusting an agent-specific profit margin based on successful and failed trades [[18](#B18-games-12-00046),[19](#B19-games-12-00046)]. Vytelingum et al. [[20](#B20-games-12-00046)] introduced another level of strategic adaptation, allowing the agent to control its behavior with respect to short and long time scales. We note that to some extent, the adaptive functions of these strategies are implicitly achieved by the game-theoretic equilibration process which we employ to determine the parametric configurations of the (non-adaptive) trading strategies [[21](#B21-games-12-00046)].

Gjerstad proposed a more direct approach to learning from market observations, termed GD in its original version [[22](#B22-games-12-00046)] and named heuristic belief learning (HBL) in a subsequent generalized form [[23](#B23-games-12-00046)]. The HBL model estimates a heuristic belief function based on market observations over a specific memory length. Variants of HBL (or GD) have featured prominently in the trading agent literature. For example, Tesauro and Das [[24](#B24-games-12-00046)] adapted the strategy to markets that support persistent orders. Tesauro and Bredin [[25](#B25-games-12-00046)] showed how to extend beyond myopic decision making by using dynamic programming to optimize the price and timing of bids.

We adopt HBL as our representative class of agent strategies that exploit order book information. HBL can be applied with relatively few tunable strategic parameters, compared to other adaptive strategies in the literature. We extend HBL to a more complex market environment that supports persistent orders, combined private and fundamental values, noisy observations, stochastic arrivals and the ability to trade multiple units with buy or sell flexibility. The extended HBL strategy considers the full cycle of an order, including the times an order is submitted, accepted, canceled or rejected.

#### 2.3. Spoofing in Financial Markets

The literature on spoofing and its impact on financial markets is fairly limited. Some empirical research based on historical financial market data has been conducted to understand spoofing. Lee et al.
[[26](#B26-games-12-00046)] empirically examined spoofing by analyzing a custom data set, which provides the complete intraday order and trade data associated with identified individual accounts in the Korea Exchange. They found that investors strategically spoof the stock market by placing orders with little chance of transacting in order to add imbalance to the order book. They also discovered that spoofing usually targets stocks with high return volatility but low market capitalization and managerial transparency. Wang investigated spoofing on the index futures market in Taiwan, identifying strategy characteristics, profitability and real-time impact [[27](#B27-games-12-00046)]. Martinez-Miranda et al. [[28](#B28-games-12-00046)] implemented spoofing behavior within a reinforcement learning framework to model conditions where such behavior is effective. Tao et al. [[29](#B29-games-12-00046)] presented a micro-structural study of spoofing in a static setting, providing conditions under which a market is more likely to admit spoofing behavior as a function of the characteristics of the market.

To our knowledge, we provide the first computational model of spoofing a dynamic financial market and demonstrate the effectiveness of spoofing against approximate-equilibrium traders in this proposed model. Our model provides a way to quantify the effect of spoofing on trading behavior and efficiency, and thus it is a first step in the design of methods to deter or mitigate market manipulation.

#### 2.4. Connections between Spoofing and Adversarial Machine Learning

If we view the market composed of HBL and ZI agents as mapping a sequence of orders over time—distilled into a collection of features used by the HBL agents—to a market price, spoofing can then be viewed as a variant of decision-time attacks on machine learning models [[30](#B30-games-12-00046),[31](#B31-games-12-00046),[32](#B32-games-12-00046),[33](#B33-games-12-00046)]. Prior work has also used conditional GANs to generate order streams submitted by HBL and ZI agents in market aggregate [[34](#B34-games-12-00046)]. These attacks, commonly called adversarial examples, have demonstrated vulnerabilities in a broad array of algorithmic models from linear classification [[35](#B35-games-12-00046),[36](#B36-games-12-00046)] to deep neural networks [[32](#B32-games-12-00046),[37](#B37-games-12-00046)] and across a variety of problem domains, including vision [[30](#B30-games-12-00046),[37](#B37-games-12-00046),[38](#B38-games-12-00046)], speech and natural language processing [[39](#B39-games-12-00046),[40](#B40-games-12-00046)] and malware detection [[41](#B41-games-12-00046),[42](#B42-games-12-00046)].

The spoofing attacks we study here can be viewed as examples of realizable attacks that explicitly account for domain constraints [[40](#B40-games-12-00046),[42](#B42-games-12-00046),[43](#B43-games-12-00046),[44](#B44-games-12-00046),[45](#B45-games-12-00046),[46](#B46-games-12-00046)]. In our case, for example, manipulation is conducted in the form of submitting and canceling orders, and thus it only indirectly impacts the features extracted by HBL agents.
Furthermore, an important aspect of the market context that is central to our analysis, yet rarely considered in the prior adversarial learning literature, is market equilibrium behavior; a notable exception is a game-theoretic analysis of adversarial linear regression with multiple learners [[47](#B47-games-12-00046)].

Finally, the proposed variations of HBL that increase its robustness to market manipulation are in the spirit of the literature investigating the robustness of machine learning to decision-time attacks [[42](#B42-games-12-00046),[48](#B48-games-12-00046),[49](#B49-games-12-00046)]. The principal difference is the particular attention we pay to the market structure in balancing robustness and efficacy of learning from order information.

3. Market Model
----------------

We present the general structure of the agent-based financial market environment in which we model spoofing. Our model comprises agents trading a single security through a continuous double auction (CDA), the mechanism adopted by most financial markets today. We first describe the market mechanism in [Section 3.1](#sec3dot1-games-12-00046). Our model is designed to capture key features of market microstructure (e.g., fundamental shocks and observation noise), supporting a configurable simulator to understand the effect of spoofing under different market conditions. The market is populated with multiple background traders who represent investors in the market and, in selected treatments, a spoofer who seeks trading profit through manipulative action. We specify the valuation model of background traders in [Section 3.2](#sec3dot2-games-12-00046) and the two families of background-trader strategies in [Section 3.3](#sec3dot3-games-12-00046). In [Section 3.4](#sec3dot4-games-12-00046), we discuss the behavior of the spoofing agent.

#### 3.1. Market Mechanism

The market employs a CDA mechanism with a single security traded. Prices are fine-grained and take discrete values at integer multiples of the tick size. Time is also fine-grained and discrete, with trading over a finite horizon $T$. Agents in the model submit limit orders, which specify the maximum (minimum) price at which they would be willing to buy (sell), together with the number of units to trade. Orders are matched immediately as they arrive: if at any time one agent's maximum price to buy a unit is greater than or equal to another agent's minimum price to sell a unit, a transaction occurs and the agents trade at the price of the incumbent order.

The CDA market maintains a limit order book of outstanding orders and provides information about the book to traders with zero delay. The buy side of the order book starts with $\mathrm{BID}_t$, the highest-price buy order at time $t$, and extends to lower prices. Similarly, the sell side starts with $\mathrm{ASK}_t$, the lowest-price sell order at time $t$, and extends to higher prices. On order cancellation or transaction, the market removes the corresponding orders and updates the order book. Agents may use order book information at their own discretion. In [Section 6](#sec6-games-12-00046), we investigate how changes in such order book disclosure may help to mitigate spoofing.
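To make the matching rule concrete, the following is a minimal Python sketch of a CDA matching step for single-unit limit orders. The names (`OrderBook`, `submit`) are our own illustration, not the simulator's actual interface:

```python
import heapq

class OrderBook:
    """Minimal CDA book: price-time priority over single-unit limit orders."""

    def __init__(self):
        self.buys = []   # max-heap via negated price: (-price, arrival_time, trader_id)
        self.sells = []  # min-heap: (price, arrival_time, trader_id)

    def submit(self, side, price, time, trader_id):
        """Match immediately if the order crosses the book; otherwise let it rest.

        Returns (buyer, seller, trade_price) on a transaction, or None.
        Trades execute at the price of the incumbent (resting) order.
        """
        if side == "buy":
            if self.sells and self.sells[0][0] <= price:
                ask_price, _, seller = heapq.heappop(self.sells)
                return (trader_id, seller, ask_price)
            heapq.heappush(self.buys, (-price, time, trader_id))
        else:
            if self.buys and -self.buys[0][0] >= price:
                neg_bid, _, buyer = heapq.heappop(self.buys)
                return (buyer, trader_id, -neg_bid)
            heapq.heappush(self.sells, (price, time, trader_id))
        return None
```

Cancellation and the book-disclosure queries (the $\mathrm{BID}_t$ and $\mathrm{ASK}_t$ views) would be thin additions on top of the two heaps.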
#### 3.2. Valuation Model

Our valuation model combines individual (private) and fundamental (common) values for a security, following prior computational literature on financial markets [[13](#B13-games-12-00046),[50](#B50-games-12-00046)]. The fundamental value $r_t$ of the security at time $t \in [0, T]$ changes throughout the trading period according to a mean-reverting stochastic process:

$$r_t = \max\left\{0,\; \kappa \bar{r} + (1-\kappa)\, r_{t-1} + u_t\right\}; \quad r_0 = \bar{r}. \tag{1}$$

The parameter $\kappa \in [0, 1]$ specifies the degree to which the value reverts back to a fundamental mean $\bar{r}$. A process with $\kappa = 0$ corresponds to a martingale Gaussian fundamental, whereas $\kappa = 1$ specifies a process of i.i.d. Gaussian draws around the fundamental mean. Mean-reverting time series of this sort have been empirically observed in financial markets such as foreign exchange and commodity markets [[51](#B51-games-12-00046)]. The perturbation $u_t$ captures a systematic random shock to the fundamental at time $t$ and is normally distributed, $u_t \sim \mathcal{N}(0, \sigma_s^2)$, where $\sigma_s^2$ is an environment-specific shock variance. The shock variance governs fluctuations in the fundamental time series and consequently affects the predictability of future price outcomes.

Our time-varying fundamental induces adverse selection, a situation in which outstanding orders reflect outdated information and can thus be at a disadvantage at the current time. If the fundamental shifts significantly, subsequent agents are more likely to transact with orders on the side opposite to the direction of the fundamental change. That is, a positive price shock will tend to trigger transactions with stale sell orders, and a negative shock with stale buys. An agent's exposure to adverse selection in a market is jointly controlled by the fundamental shock variance $\sigma_s^2$, the degree of mean reversion $\kappa$ and the arrival rate of that agent.

The entries of a background trader follow a Poisson process with arrival rate $\lambda_a$. Upon each entry, the trader observes an agent- and time-specific noisy fundamental $o_t = r_t + n_t$, where the observation noise is drawn as $n_t \sim \mathcal{N}(0, \sigma_n^2)$. Just as investors in real financial markets never know the true value of the underlying security, these noisy observations represent each trader's assessment of the security's fundamental value at that time. Given its incomplete information about the fundamental, the agent can potentially benefit by considering market information, which is influenced by, and therefore reflects, the aggregate observations of other agents. When it arrives, the trader withdraws its previous order (if untransacted) and submits a new single-unit limit order, either to buy or to sell as instructed with equal probability.
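A minimal simulation of the fundamental process in Equation ([1](#FD1-games-12-00046)) may help fix intuition; the function name and interface here are our own, with parameters matching the notation above:

```python
import numpy as np

def fundamental_series(T, r_bar, kappa, sigma_s2, seed=None):
    """Simulate the mean-reverting fundamental of Equation (1)."""
    rng = np.random.default_rng(seed)
    r = np.empty(T + 1)
    r[0] = r_bar
    for t in range(1, T + 1):
        u_t = rng.normal(0.0, np.sqrt(sigma_s2))   # shock u_t ~ N(0, sigma_s^2)
        r[t] = max(0.0, kappa * r_bar + (1 - kappa) * r[t - 1] + u_t)
    return r

# e.g., with the parameter values used later in Section 5.1:
series = fundamental_series(T=10_000, r_bar=1e5, kappa=0.05, sigma_s2=1e6)
```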
The individual (private) value of a background trader $i$ represents its preferences over holdings of the security: $\Theta_i = \left(\theta_i^{-q_{\max}+1}, \ldots, \theta_i^{0}, \theta_i^{1}, \ldots, \theta_i^{q_{\max}}\right)$. The vector has length $2 q_{\max}$, where $q_{\max}$ is the maximum position (long or short) a trader can hold at any time. Element $\theta_i^{q}$ specifies the incremental private benefit foregone by selling one unit of the security given a current net position of $q$. Alternatively, $\theta_i^{q+1}$ can be understood as the marginal private gain from buying an additional unit given current net position $q$.

Private values follow the law of diminishing marginal utility observed in many economic settings, as is also commonly assumed for assets traded in financial markets [[13](#B13-games-12-00046)]. One natural example is a trader aiming for a target investment position. Before reaching this position, the private component is positive for each incremental share, and the more shares the trader needs to buy to reach the optimal position, the higher the marginal utility (as the agent is more eager to get close to the optimal position); after a trader passes her desired holding position, she receives negative private value for an extra share (e.g., due to cumulative risk), and the more the position deviates from the optimum, the higher the penalty becomes. To capture this diminishing marginal utility, that is, $\theta^{q'} \leq \theta^{q}$ for all $q' \geq q$, we generate $\Theta_i$ from a set of $2 q_{\max}$ values drawn independently from $\mathcal{N}(0, \sigma_{PV}^2)$, sort the elements in descending order and assign $\theta_i^{q}$ to its respective value in the sorted list.

Agent $i$'s incremental surplus for a trade can be calculated from its position $q$ before the trade, the value of the fundamental at the end of the trading horizon $r_T$ and the transaction price $p$:

$$\text{incremental surplus} = \begin{cases} r_T - p + \theta_i^{q+1} & \text{if buying 1 unit}, \\ p - r_T - \theta_i^{q} & \text{if selling 1 unit}. \end{cases}$$

An agent's total surplus is the sum of its incremental surplus over all transactions. Alternatively, we can calculate an agent's total surplus by adding its net cash from trading to the final valuation of its holdings. Specifically, the market's final valuation of trader $i$ with ending holdings $H$ is

$$v_i = \begin{cases} r_T \times H + \sum_{k=1}^{H} \theta_i^{k} & \text{for long position } H > 0, \\ r_T \times H - \sum_{k=H+1}^{0} \theta_i^{k} & \text{for short position } H < 0. \end{cases}$$

We define background-trader surplus as the sum of all background agents' surpluses at the end of the trading period $T$.

#### 3.3. Background Trading Agents

Recall that background traders represent investors with actual preferences for holding long or short positions in the underlying security. The limit-order price submitted by a background trader is jointly decided by its valuation and its trading strategy, which we describe in detail below.

#### 3.3.1. Estimating the Final Fundamental

As holdings of the security are valued at the end of a trading period (i.e., $r_T \times H$), a background trader estimates the final fundamental value based on a series of its noisy observations. We assume the market environment parameters (mean reversion, shock variance, etc.) are common knowledge for background agents.

Given a new noisy observation $o_t$, an agent estimates the current fundamental by updating its posterior mean $\tilde{r}_t$ and variance $\tilde{\sigma}_t^2$ in a Bayesian manner. Let $t'$ denote the agent's preceding arrival time. We first update the previous posteriors, $\tilde{r}_{t'}$ and $\tilde{\sigma}_{t'}^2$, by mean reversion for the interval since the preceding arrival, denoted $\delta = t - t'$:

$$\tilde{r}_{t'} \leftarrow \left(1 - (1-\kappa)^{\delta}\right) \bar{r} + (1-\kappa)^{\delta}\, \tilde{r}_{t'} \quad \text{and} \quad \tilde{\sigma}_{t'}^2 \leftarrow (1-\kappa)^{2\delta}\, \tilde{\sigma}_{t'}^2 + \frac{1 - (1-\kappa)^{2\delta}}{1 - (1-\kappa)^{2}}\, \sigma_s^2.$$

The estimates for the current arrival at time $t$ are then given by

$$\tilde{r}_t = \frac{\sigma_n^2}{\sigma_n^2 + \tilde{\sigma}_{t'}^2}\, \tilde{r}_{t'} + \frac{\tilde{\sigma}_{t'}^2}{\sigma_n^2 + \tilde{\sigma}_{t'}^2}\, o_t \quad \text{and} \quad \tilde{\sigma}_t^2 = \frac{\sigma_n^2\, \tilde{\sigma}_{t'}^2}{\sigma_n^2 + \tilde{\sigma}_{t'}^2}.$$

Based on the posterior estimate $\tilde{r}_t$, the trader computes $\hat{r}_t$, its estimate at time $t$ of the terminal fundamental $r_T$, by adjusting for mean reversion:

$$\hat{r}_t = \left(1 - (1-\kappa)^{T-t}\right) \bar{r} + (1-\kappa)^{T-t}\, \tilde{r}_t. \tag{2}$$
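The update rules above translate directly into code. Below is a sketch with our own function names; it assumes $0 < \kappa \le 1$ so the geometric-series denominator is nonzero:

```python
def update_estimate(r_prev, var_prev, o_t, delta, r_bar, kappa, sigma_s2, sigma_n2):
    """One Bayesian update of the fundamental posterior (Section 3.3.1)."""
    decay = (1 - kappa) ** delta
    # revert the previous posterior over the delta steps since the last arrival
    r_prior = (1 - decay) * r_bar + decay * r_prev
    var_prior = decay**2 * var_prev + (1 - decay**2) / (1 - (1 - kappa)**2) * sigma_s2
    # combine the reverted prior with the new noisy observation o_t
    w = sigma_n2 / (sigma_n2 + var_prior)
    r_post = w * r_prior + (1 - w) * o_t
    var_post = sigma_n2 * var_prior / (sigma_n2 + var_prior)
    return r_post, var_post

def final_estimate(r_post, t, T, r_bar, kappa):
    """Terminal-fundamental estimate of Equation (2)."""
    decay = (1 - kappa) ** (T - t)
    return (1 - decay) * r_bar + decay * r_post
```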
#### 3.3.2. Zero Intelligence (ZI) as a Background Trading Strategy

We consider parameterized trading strategies in the zero intelligence (ZI) family [[16](#B16-games-12-00046)]. Background traders who adopt ZI strategies compute limit-order prices solely from fundamental observations and private values. ZI agents generate bids reflecting a requested surplus, determined by a random offset drawn uniformly from $[R_{\min}, R_{\max}]$. These bids shade from the agent's valuation by this requested surplus. Specifically, a ZI trader $i$ arriving at time $t$ with position $q$ generates a limit price

$$p_i(t) \sim \begin{cases} U\left[\hat{r}_t + \theta_i^{q+1} - R_{\max},\; \hat{r}_t + \theta_i^{q+1} - R_{\min}\right] & \text{if buying}, \\ U\left[\hat{r}_t + \theta_i^{q} + R_{\min},\; \hat{r}_t + \theta_i^{q} + R_{\max}\right] & \text{if selling}. \end{cases} \tag{3}$$

Our version of ZI further considers the market's current best quotes, and it can choose to trade immediately to capture a certain fraction of its requested surplus. This option is governed by a strategic threshold parameter $\eta \in [0, 1]$: if the agent could achieve a fraction $\eta$ of its requested surplus at the current price quote, it simply takes that quote rather than submitting a new limit order. Setting $\eta$ to 1 is equivalent to the strategy without a threshold. Both shading and threshold-taking give ZI agents non-learning ways to strategically adapt to different market environments and improve profitability.
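As a sketch of how a single ZI decision could be coded (the function and argument names are ours, and `theta` stands for the relevant marginal private value, $\theta_i^{q+1}$ when buying and $\theta_i^{q}$ when selling):

```python
import random

def zi_act(r_hat, theta, R_min, R_max, eta, best_quote, buying, rng=random):
    """ZI bidding per Equation (3) plus the surplus threshold eta.
    `best_quote` is the current best ASK (for a buyer) or best BID (for a
    seller), or None if that side of the book is empty."""
    valuation = r_hat + theta
    requested = rng.uniform(R_min, R_max)   # surplus the agent asks for
    if best_quote is not None:
        available = (valuation - best_quote) if buying else (best_quote - valuation)
        if available >= eta * requested:    # good enough: take the standing quote
            return ("take", best_quote)
    price = valuation - requested if buying else valuation + requested
    return ("limit", price)
```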
#### 3.3.3. Heuristic Belief Learning (HBL) as a Background Trading Strategy

The second background trading strategy family we consider is heuristic belief learning (HBL). Background traders who adopt HBL go beyond their own observations and private values by also considering order book information. We make a set of changes to adapt the strategy to our dynamic market environment, supporting multiple-unit trading with a flexible buy or sell role.

The strategy is centered on the belief function that a background trader forms on the basis of its observed market data. The agent uses the belief function to estimate the probability that orders at various prices would be accepted in the market, and it then chooses a limit price that maximizes its expected surplus at current valuation estimates.

Specifically, an HBL agent constructs its belief function based on a dataset $\mathcal{D}$ that records accepted and rejected buy and sell orders during the last $L$ trades. The strategic parameter $L$ represents the agent's memory length, which controls the size of $\mathcal{D}$. Upon an arrival at time $t$, the HBL agent builds a belief function $f_t(P)$, designed to represent the probability that an order at price $P$ will result in a transaction. Specifically, the belief function is defined for any encountered price $P$ as follows:

$$f_t(P \mid \mathcal{D}) = \begin{cases} \dfrac{\mathrm{TBL}_t(P \mid \mathcal{D}) + \mathrm{AL}_t(P \mid \mathcal{D})}{\mathrm{TBL}_t(P \mid \mathcal{D}) + \mathrm{AL}_t(P \mid \mathcal{D}) + \mathrm{RBG}_t(P \mid \mathcal{D})} & \text{if buying}, \\[2ex] \dfrac{\mathrm{TAG}_t(P \mid \mathcal{D}) + \mathrm{BG}_t(P \mid \mathcal{D})}{\mathrm{TAG}_t(P \mid \mathcal{D}) + \mathrm{BG}_t(P \mid \mathcal{D}) + \mathrm{RAL}_t(P \mid \mathcal{D})} & \text{if selling}. \end{cases} \tag{4}$$

Here, $\mathrm{T}$ and $\mathrm{R}$ specify transacted and rejected orders, respectively; $\mathrm{A}$ and $\mathrm{B}$ represent asks and bids; and $\mathrm{L}$ and $\mathrm{G}$ describe orders with prices less than or equal to and greater than or equal to price $P$, respectively. For example, $\mathrm{TBL}_t(P \mid \mathcal{D})$ is the number of transacted bids found in the memory with price less than or equal to $P$ up to time $t$. An HBL agent updates its dataset $\mathcal{D}$ whenever the market receives new order submissions, transactions or cancellations, and it computes the statistics in Equation ([4](#FD4-games-12-00046)) upon each arrival.

Since our market model supports persistent orders and cancellations, the classification of an order as rejected is non-obvious and must be defined. To address this, we associate orders with a grace period $\tau_{\mathrm{gp}}$ and an alive period $\tau_{\mathrm{al}}$. We define the grace period as the average time interval per arrival, that is, $\tau_{\mathrm{gp}} = 1/\lambda_a$, and the alive period $\tau_{\mathrm{al}}$ of an order as the time interval from submission to transaction or withdrawal if it is inactive, or to the current time if it is active. An order is considered rejected only if its alive period $\tau_{\mathrm{al}}$ is longer than $\tau_{\mathrm{gp}}$; otherwise, it is partially rejected by a fraction $\tau_{\mathrm{al}}/\tau_{\mathrm{gp}}$. As the belief function in Equation ([4](#FD4-games-12-00046)) is defined only at encountered prices, we further extend it over the full price domain by cubic spline interpolation. To speed the computation, we pick knot points and interpolate only between those points.

After formulating the belief function, an agent $i$ with arrival time $t$ and current holdings $q$ searches for the optimal price $P_i^*(t)$ that maximizes its expected surplus:

$$P_i^*(t) = \begin{cases} \arg\max_P\; \left(\hat{r}_t + \theta_i^{q+1} - P\right) f_t(P \mid \mathcal{D}) & \text{if buying}, \\ \arg\max_P\; \left(P - \theta_i^{q} - \hat{r}_t\right) f_t(P \mid \mathcal{D}) & \text{if selling}. \end{cases} \tag{5}$$

In the special cases when there are fewer than $L$ transactions at the beginning of a trading period, or when one side of the order book is empty, HBL agents behave as ZI agents until enough information has been gathered to form the belief function. As those cases are rare, the specific ZI strategy that HBL agents adopt does not materially affect overall performance. In [Section 7](#sec7-games-12-00046), we explore variations of the HBL strategy to improve its learning robustness in the face of market manipulation.
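A compact sketch of the belief computation and price choice follows. It is our own simplification: the memory-record tuples are invented for illustration, and instead of cubic-spline interpolation it simply scans a grid of candidate prices:

```python
def hbl_belief(P, orders, buying):
    """Heuristic belief (Equation (4)) that an order at price P transacts.
    `orders` holds tuples (side, price, transacted, reject_frac), where
    reject_frac in (0, 1] weights (partially) rejected orders."""
    if buying:
        TBL = sum(1 for s, p, tx, _ in orders if s == "bid" and tx and p <= P)
        AL  = sum(1 for s, p, _, _ in orders if s == "ask" and p <= P)
        RBG = sum(rf for s, p, tx, rf in orders if s == "bid" and not tx and p >= P)
        num, den = TBL + AL, TBL + AL + RBG
    else:
        TAG = sum(1 for s, p, tx, _ in orders if s == "ask" and tx and p >= P)
        BG  = sum(1 for s, p, _, _ in orders if s == "bid" and p >= P)
        RAL = sum(rf for s, p, tx, rf in orders if s == "ask" and not tx and p <= P)
        num, den = TAG + BG, TAG + BG + RAL
    return num / den if den > 0 else 0.0

def hbl_price(r_hat, theta, orders, buying, candidate_prices):
    """Equation (5): pick the price maximizing expected surplus; `theta` is
    theta_i^{q+1} when buying and theta_i^{q} when selling."""
    def expected_surplus(P):
        gain = (r_hat + theta - P) if buying else (P - theta - r_hat)
        return gain * hbl_belief(P, orders, buying)
    return max(candidate_prices, key=expected_surplus)
```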
#### 3.4. The Spoofing Agent

The spoofing agent seeks profit only through manipulating prices. Unlike background traders, the spoofer has no private value for the security. We design a simple spoofing strategy that maintains a large volume of buy orders one tick behind the best bid. Specifically, upon arrival at $T_{sp} \in [0, T]$, the spoofing agent submits a buy order at price $\mathrm{BID}_{T_{sp}} - 1$ with volume $Q_{sp} \gg 1$. Whenever there is an update of the best bid, the spoofer cancels its original spoof order and submits a new one at price $\mathrm{BID}_t - 1$ with the same volume. Since background traders in our model submit only single-unit orders, they cannot transact with the spoof order, which is always shielded by the order at the higher price $\mathrm{BID}_t$. If that higher-price order gets executed, the spoofer immediately cancels and replaces its spoof orders before another background trader arrives. Here, we assume in effect that the spoofing agent can react infinitely fast, in which case its spoof orders are guaranteed never to transact. Even without making any trade, the spoofer maneuvers HBL traders' pricing beliefs (Equation ([4](#FD4-games-12-00046))) via such submissions and cancellations of spoof orders, which ultimately affect their bidding behavior (Equation ([5](#FD5-games-12-00046))) and move the price.

By continuously feigning buy interest in the market, this spoofing strategy specifically aims to raise market beliefs. To profit from such manipulation, a spoofing agent may first buy some shares of the security, manipulate the market to push prices up and later sell the previously bought shares at higher prices. Other spoofing strategies, such as adding sell pressure or alternating between buy and sell pressure, can be built as extensions of this basic version.

4. Empirical Game-Theoretic Analysis
-------------------------------------

To reproduce spoofing and understand its effect, we employ a computational approach that combines agent-based modeling, simulation and equilibrium computation. The point of identifying equilibria of the agent-based model is to focus on the most relevant strategic contexts, where agents make the best choices among their available strategies given others' choices. To derive Nash equilibria, we employ empirical game-theoretic analysis (EGTA), a methodology that finds approximate equilibria in games defined by a heuristic strategy space and simulated payoff data [[7](#B7-games-12-00046)]. We conduct systematic EGTA studies over a range of parametrically defined market environments, all based on the market model described in [Section 3](#sec3-games-12-00046).

We model the market as a game with players in two roles: $N$ background traders, treated symmetrically, and a single spoofer. In most of our games, the spoofing agent, when present, implements a fixed policy and so is not considered a strategic player. Symmetry of the background traders means that each has the same set of available strategies (from the ZI and/or HBL families) to choose from, and their payoffs depend on their own strategy and the number of players choosing each of the other strategies (i.e., it does not matter which other agent plays which other strategy). For each game, we evaluate a wide variety of strategy profiles (i.e., agent-strategy assignments). For each strategy profile, we conduct thousands of simulation runs to account for stochastic effects, such as the market fundamental series, agent arrival patterns and private valuations, and to obtain low-variance estimates of the expected payoffs for that profile. Given background-trader symmetry, the payoff of a specific strategy in a profile can be taken as the average payoff over all agents playing that strategy in the profile. From the payoff data accumulated over these simulated samples of explored strategy profiles, we induce an empirical game model, and from that we derive an approximate Nash equilibrium.

We employ an iterative EGTA process. First, we find candidate equilibria in subgames, where a subgame here is defined as a normal-form game over strategy subsets (we note the contrast with the standard notion, in extensive-form games, of a subgame as a subtree). We then confirm or refute candidate solutions by examining deviations, and we incrementally extend subgames until the termination criteria are satisfied. Below, we describe two key components of the EGTA process we follow: profile search ([Section 4.1](#sec4dot1-games-12-00046)) and game reduction ([Section 4.2](#sec4dot2-games-12-00046)).

#### 4.1. Profile Search

We apply EGTA iteratively to guide the profile search over the strategy space. Exploration starts with singleton subgames and then incrementally considers each strategy outside the subgame strategy set. Specifically, the singleton subgames are profiles in which the same strategy is adopted by all background agents. Starting from this base, we extend evaluation to neighboring profiles reachable by single-agent deviations. Following this procedure, we systematically explore profiles and incorporate their payoff estimates into the partial payoff matrix of the empirical game model.

After subgames are completed (all profiles explored for their strategy subsets), we compute their equilibria and consider these as candidate solutions of the full game. We attempt to refute these candidates by evaluating deviations outside the subgame strategy set, constructing a new subgame whenever a beneficial deviation is found. If we examine all deviations without refuting, the candidate is confirmed.
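The confirm-or-refute loop just described can be skeletonized as follows. This is our own simplification, restricted to pure-strategy candidates for brevity (the full procedure also computes mixed equilibria of each explored subgame), with `simulate_payoffs` standing in for the simulator:

```python
def egta_pure_search(strategies, simulate_payoffs, num_players):
    """Confirm or refute singleton-subgame candidates via single-agent deviations.
    `simulate_payoffs(profile)` returns {strategy: average payoff} for a
    profile given as a tuple of strategy names of length num_players."""
    cache = {}

    def payoff(profile):
        key = tuple(sorted(profile))     # symmetric game: order is irrelevant
        if key not in cache:
            cache[key] = simulate_payoffs(key)
        return cache[key]

    confirmed = []
    for s in strategies:
        base = (s,) * num_players        # singleton subgame: everyone plays s
        base_val = payoff(base)[s]
        refuted = any(                   # test every single-agent deviation
            payoff((d,) + (s,) * (num_players - 1))[d] > base_val
            for d in strategies if d != s
        )
        if not refuted:
            confirmed.append(base)
    return confirmed
```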
We continue to refine the empirical subgame with additional strategies and corresponding simulations until at least one equilibrium is confirmed and all non-confirmed candidates are refuted (up to a threshold support size). In this study, we consider support sizes (i.e., numbers of strategies played with positive probability) of up to five for background agents.

The procedure aims to confirm or refute promising equilibrium candidates found throughout our exploration of the strategy space. By following a fixed procedure and reporting all equilibria found that meet pre-specified criteria, we maintain consistency across the games analyzed. Since it is often not computationally feasible to search the entire profile space, additional distinct equilibria (e.g., with large support size) are possible. We are aware of no systematic biases in the search procedure with respect to properties of interest, and we therefore consider it unlikely that we are missing qualitatively important phenomena across the range of environments analyzed. Finally, we note that equilibria identified in empirical games must generally be viewed as provisional, as they are subject to refutation by strategies outside the restricted set considered in the analysis.

#### 4.2. Game Reduction

As the game size (i.e., the number of possible strategy profiles) grows exponentially in the number of players and strategies, it is computationally prohibitive to directly analyze games with more than a moderate number of players. We therefore apply aggregation methods to approximate a many-player game by a game with fewer players. The specific technique we employ, called deviation-preserving reduction (DPR) [[52](#B52-games-12-00046)], defines reduced-game payoffs in terms of payoffs in the full game as follows. Consider an $N$-player symmetric game that we want to reduce to a $k$-player game. The payoff for playing strategy $s_1$ in the reduced game, with the other agents playing strategies $(s_2, \ldots, s_k)$, is given by the payoff of playing $s_1$ in the full $N$-player game when the other $N-1$ agents are divided evenly among the $k-1$ strategies $s_2, \ldots, s_k$. To facilitate DPR, we choose values of $N$ and $k$ that ensure the required aggregations come out as integers. For example, in one of the market environments, we reduce games with 28 background traders to games with four background traders: with one background player deviating to a new strategy, the remaining 27 players reduce to three. For games that vary smoothly with the number of other players choosing any particular strategy, we can expect DPR to produce reasonable approximations of the original many-player games with an exponential reduction in simulation cost.
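A sketch of the DPR payoff mapping, under the stated divisibility requirement (`full_game_payoff` is a stand-in for full-game simulation):

```python
def dpr_payoff(full_game_payoff, N, reduced_profile):
    """Deviation-preserving reduction: the reduced-game payoff to the first
    player of reduced_profile = (s1, s2, ..., sk) is the full-game payoff of
    one agent playing s1 while the other N-1 agents are split evenly over
    s2, ..., sk. Requires (N - 1) divisible by (k - 1)."""
    s1, others = reduced_profile[0], reduced_profile[1:]
    assert (N - 1) % len(others) == 0, "choose N and k so the counts are integers"
    share = (N - 1) // len(others)
    full_profile = (s1,) + tuple(s for s in others for _ in range(share))
    return full_game_payoff(full_profile, player=0)

# e.g., reducing 28 background traders to 4: the other 27 agents are
# split 9-9-9 across the three remaining reduced-game strategies.
```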
5. Spoofing the Limit Order Book
---------------------------------

In this section, we reproduce spoofing in the agent-based market model and study its effect on background trading behavior and market outcomes. We start in [Section 5.1](#sec5dot1-games-12-00046) by exploring a range of market environments that can affect the effectiveness of both learning and spoofing. [Section 5.2](#sec5dot2-games-12-00046) addresses agents' choices among ZI and HBL strategies in markets without spoofing. This is an important step, as spoofing can be effective only if some fraction of background traders choose to learn from order book information. [Section 5.3](#sec5dot3-games-12-00046) investigates games with spoofing. We first illustrate that a market populated with HBL traders is susceptible to spoofing: a simple spoofing strategy can cause a rise in market prices and a redistribution of surplus between ZI and HBL traders. We finally re-equilibrate the game with spoofing to investigate the impact of spoofing on HBL adoption and market surplus.

#### 5.1. Market Environments

Based on the defined market model, we conduct preliminary explorations over a range of market settings and include the most salient and meaningful ones for our study. We consider nine market environments that differ in fundamental shock, $\sigma_s^2 \in \{10^5, 5 \times 10^5, 10^6\}$, and in observation noise, $\sigma_n^2 \in \{10^3, 10^6, 10^9\}$. Recall that the shock variance controls fluctuations in the fundamental time series and the observation variance governs the quality of information agents get about the true fundamental. The nine environments cover representative market conditions that can affect an agent's ability and need to learn from market information. For example, when the market shock variance is large, prices fluctuate more and market history may become less predictive; when observation noise is high, agents can glean only limited information from their own observations and may gain more from the market's aggregated order book information. We label the low, medium and high shock variances as {LS, MS, HS} and the observation noise variances as {LN, MN, HN}, respectively. For instance, the label LSLN refers to a market with low shock, $\sigma_s^2 = 10^5$, and low observation noise, $\sigma_n^2 = 10^3$.

The global fundamental time series is generated according to Equation ([1](#FD1-games-12-00046)) with fundamental mean $\bar{r} = 10^5$, mean reversion $\kappa = 0.05$ and the specified shock variance $\sigma_s^2$. The minimum tick size is fixed at one. Each trading period lasts $T = 10{,}000$ time steps. For each environment, we consider markets populated with $N \in \{28, 65\}$ background traders and, in selected treatments, a spoofer. Background traders arrive at the market according to a Poisson process with rate $\lambda_a = 0.005$ and, upon each arrival, the trader observes a noisy fundamental $o_t = r_t + n_t$, where $n_t \sim \mathcal{N}(0, \sigma_n^2)$. The maximum number of units background traders can hold at any time is $q_{\max} = 10$. Private values are drawn from a Gaussian distribution with zero mean and variance $\sigma_{PV}^2 = 5 \times 10^6$. The spoofing agent starts to manipulate at time $T_{sp} = 1000$ by submitting a large buy order at price $\mathrm{BID}_{T_{sp}} - 1$ with volume $Q_{sp} = 200$, and it later maintains spoof orders at price $\mathrm{BID}_t - 1$ throughout the trading period.

To provide a benchmark for market surplus, we calculate the social optimum: the expected total possible gains from trade, which depends solely on the trader population size and the valuation distribution. From 20,000 samples of the joint valuations, we estimate mean social optima of 18,389 and 43,526 for markets with 28 and 65 background traders, respectively. We further calculate the average order book depth (on either the buy or sell side) in markets without spoofing. Throughout the trading horizon, the $N = 28$ market has a relatively thin order book with an average depth of 12 per side, whereas the $N = 65$ market has a thicker one with an average depth of 30.
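For reference, the environment grid and agent parameters above can be collected into a single configuration; this dictionary layout is purely illustrative, not the simulator's actual format:

```python
SHOCK = {"LS": 1e5, "MS": 5e5, "HS": 1e6}   # fundamental shock variance sigma_s^2
NOISE = {"LN": 1e3, "MN": 1e6, "HN": 1e9}   # observation noise variance sigma_n^2

def make_env(shock_label, noise_label, num_traders):
    return {
        "sigma_s2": SHOCK[shock_label],
        "sigma_n2": NOISE[noise_label],
        "N": num_traders,                    # 28 or 65 background traders
        "r_bar": 1e5, "kappa": 0.05, "T": 10_000,
        "lambda_a": 0.005, "q_max": 10, "sigma_pv2": 5e6,
        "T_sp": 1000, "Q_sp": 200,           # spoofer entry time and volume
    }

env = make_env("LS", "LN", 28)               # the thin-book LSLN market
```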
The background trading strategy set (see [Table 1](#games-12-00046-t001)) includes seven versions of ZI and four versions of HBL. Agents are allowed to choose from this restricted set of strategies. We have also explored ZI strategies with larger shading ranges and HBL strategies with longer memory lengths, but they fail to appear in equilibrium in the games where they were explored.

#### 5.2. Games without Spoofing

Since spoofing targets the order book and can be effective only to the extent that traders exploit order book information, we investigate whether background agents adopt the HBL strategy in markets without spoofing. Applying EGTA to the eleven background strategies in [Table 1](#games-12-00046-t001), we found at least one equilibrium for each market environment. Detailed results on the equilibrium mixtures and outcomes of games without spoofing can be found in [Appendix A.2](#secAdot2-games-12-00046).

[Figure 3](#games-12-00046-f003) (blue circles) depicts the proportion of background traders who choose trading strategies in the HBL family. In most non-spoofing environments, HBL is adopted with positive probability, suggesting that investors generally have incentives to make bidding decisions based on order book information. We find that HBL is robust and widely preferred in markets with more traders, low fundamental shocks and high observation noise. Intuitively, a larger population size implies a thick order book with more learnable aggregated data; low shocks in the fundamental time series increase the predictability of future price outcomes; and high observation noise limits what an agent can glean about the true fundamental from its own information. This is further confirmed by the two exceptions where all agents choose ZI: HSLN and HSMN with $N = 28$, the environments with fewer traders, high fundamental shocks and at most medium observation noise.

We further quantify how learning from market information may benefit overall market performance. We conduct EGTA in games where background traders are restricted to strategies in the ZI family (ZI1–ZI7 in [Table 1](#games-12-00046-t001)). This is tantamount to disallowing learning from order book information. Detailed equilibrium results for games restricted to ZI strategies can be found in [Appendix A.4](#secAdot4-games-12-00046). We compare equilibrium outcomes for each environment, with and without HBL available to background traders, on two measures: surplus ([Figure 4](#games-12-00046-f004)) and price discovery ([Figure 5](#games-12-00046-f005)). Recall that we define background-trader surplus as the sum of all background agents' surpluses at time $T$, the end of trading. Price discovery is defined as the root-mean-squared deviation (RMSD) of the transaction price from the estimate of the true fundamental in Equation ([2](#FD2-games-12-00046)) over the trading period. It reflects how well transactions reveal the true value of the security; lower RMSD means better price discovery. We calculate the two measures by averaging the outcomes of 20,000 simulations of games with strategy profiles sampled according to each equilibrium mixture.
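As a concrete reading of the price discovery metric, here is a minimal sketch (our own function; inputs are transaction prices paired with the Equation ([2](#FD2-games-12-00046)) estimates sampled at the transaction times):

```python
import numpy as np

def price_discovery_rmsd(transaction_prices, fundamental_estimates):
    """RMSD of transaction prices from terminal-fundamental estimates;
    lower values indicate better price discovery."""
    p = np.asarray(transaction_prices, dtype=float)
    r = np.asarray(fundamental_estimates, dtype=float)
    return float(np.sqrt(np.mean((p - r) ** 2)))
```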
Overall, background traders achieve higher surplus ([Figure 4](#games-12-00046-f004)) and better price discovery ([Figure 5](#games-12-00046-f005)) when the market provides order book information and enables the HBL strategy option. When HBL exists in the equilibrium, we find that transactions reveal fundamental estimates well, especially in markets with lower shock and observation variances (i.e., LSLN, LSMN, MSLN and MSMN). We also note small exceptions in scenarios with high observation variance and more background traders (environments LSHN and HSHN with 65 players), where ZI-only equilibria exhibit higher surplus than equilibria combining HBL and ZI.

#### 5.3. Games with Spoofing

#### 5.3.1. Comparing across Fixed Strategy Profiles

We examine the effectiveness of our designed spoofing strategy ([Section 3.4](#sec3dot4-games-12-00046)) by playing a spoofer against each HBL-and-ZI equilibrium found in [Section 5.2](#sec5dot2-games-12-00046). We perform controlled comparisons of these games with and without spoofing. As ZI agents are oblivious to spoofing, we ignore the ZI-only equilibria in this analysis. In the paired instances, background agents play identical strategies and are guaranteed to arrive at the same times, receive identical private values and observe the same fundamental values. Therefore, any change in behavior is an effect of spoof orders on HBL traders. For every setting, we simulate 20,000 paired instances, evaluate transaction price differences ([Figure 6](#games-12-00046-f006)) and compare the surplus attained by HBL and ZI traders. The transaction price difference at a specific time is defined as the most recent transaction price in the run with spoofing minus that of the paired instance without spoofing. Similarly, the surplus difference of HBL or ZI is the aggregate surplus obtained in an environment with spoofing minus that of the corresponding environment without spoofing.

[Figure 6](#games-12-00046-f006) shows positive changes in transaction prices across all environments subsequent to the arrival of a spoofing agent at $T_{sp} = 1000$. This suggests that HBL traders are tricked by the spoof buy orders: they believe the underlying security should be worth more and therefore submit or accept limit orders at higher prices. Although ZI agents do not change their bidding behavior directly, they may transact at higher prices due to the increased bids of HBL traders.

Several other interesting findings are revealed by the transaction-price difference series. First, the average price rise caused by spoofing the market with 28 background traders is higher than for $N = 65$. This indicates that a market with fewer background traders can be more susceptible to spoofing, due to the limited pricing information a thin market can aggregate. Second, for markets populated with more HBLs than ZIs in the equilibrium mixture, the transaction price differences tend to increase throughout the trading period. This amplification can be explained by HBLs consistently submitting orders at higher prices and confirming each other's spoofed beliefs. However, for markets with more ZIs, the spoofing effect diminishes, as ZIs, who do not change their limit-order pricing, can partly correct the HBLs' illusions. Finally, we observe that the spoofing effect tends to attenuate over time: differences in transaction prices first increase and then stabilize or decrease. This is due to the mean-reverting property of the fundamental series and the way background traders estimate the final fundamental. As time approaches the end of the trading period, background agents rely more on accumulated fundamental observations and get better estimates of the final fundamental value. Therefore, spoofing tends to wear off in the face of accumulated observations and mean reversion.

We further compare background-trader payoffs attained in environments with and without spoofing.
We find a redistribution of surplus between HBL and ZI agents: HBL aggregate surplus decreases, while that of ZI increases compared to the non-spoofing baselines. Specifically, across the 28-trader market environments, HBL traders suffer an average surplus decrease of 184 across all equilibrium profiles, whereas ZI traders enjoy an average surplus gain of 19. For the 65-trader markets, the average surplus decrease for HBL traders is 238, while the average increase for ZI is 40. This suggests that ZI agents benefit from the HBL agents' spoofed beliefs. Since the decreases in HBL surplus are consistently larger than the increases for ZI, overall market surplus decreases. We defer further discussion of spoofing's impact on market surplus to [Section 5.3.2](#sec5dot3dot2-games-12-00046), where background traders can choose other strategies to adjust to the presence of spoofing.

To examine the potential to profit from a successful price manipulation, we extend the spoofing agent with an exploitation strategy: buying, (optionally) spoofing to raise the price and then selling. The exploiting spoofer starts by buying when there is a limit sell order in the market with price less than the fundamental mean. It then optionally runs the spoofing trick, or alternatively waits, for 1000 time steps. Finally, the agent sells the previously bought unit (if any) when it finds a limit buy order with price greater than the fundamental mean. Note that, even without spoofing, this single-unit exploitation strategy is profitable in expectation, due to the mean reversion captured by the fundamental process and the reliable arrivals of background traders with private preferences.

[Figure 7](#games-12-00046-f007) compares exploitation profits with and without spoofing. In controlled experiments, we find that exploitation profits consistently increase when the spoof action is also deployed. Across the 28-trader market environments, the exploiter makes an average profit of 206.1 and 201.8 with and without spoofing, respectively, and the increases in profit range from 1.2 to 11.5. For the 65-trader markets, the average profits of the exploitation strategy with and without spoofing are 50.5 and 46.3, respectively, with the increases in profit varying from 1.7 to 9.4 across environments. Statistical tests show that all increases in profit are significantly larger than zero. Regardless of spoofing, the exploitation strategy profits more in the thinner market, due to the greater variance in transaction prices.

#### 5.3.2. Re-Equilibrating Games with Spoofing

To understand how spoofing changes background-trading behavior, we conduct EGTA again to identify Nash equilibria in games with spoofing, allowing background traders to choose any strategy in [Table 1](#games-12-00046-t001). Detailed results on the equilibrium mixtures and outcomes of games with spoofing can be found in [Appendix A.3](#secAdot3-games-12-00046). As indicated in [Figure 3](#games-12-00046-f003) (orange triangles), after re-equilibrating games with spoofing, HBL is generally adopted by a smaller fraction of traders, but it still persists in equilibrium in most market environments. HBL's existence after re-equilibration indicates a consistently spoofable market: the designed spoofing tactic fails to eliminate HBL agents and, in turn, the persistence of HBL may incentivize a spoofer to continue effectively manipulating the market.

We next characterize the effect of spoofing on market surplus.
[Figure 8](#games-12-00046-f008) compares the total surplus achieved by background traders in equilibrium with and without spoofing. Given the presence of HBL traders, spoofing generally decreases total surplus (in [Figure 8](#games-12-00046-f008), most filled orange triangles lie below the filled blue circles). However, spoofing has an ambiguous effect in the thicker market with large observation variance (environments LSHN and HSHN with 65 background agents). This may be because noise and spoofing simultaneously hurt the prediction accuracy of the HBL agents and therefore shift agents to other competitive ZI strategies with higher payoffs. Finally, we find that the welfare effects of HBL strategies persist regardless of spoofing's presence: markets populated with HBL agents in equilibrium achieve higher total surplus than markets without HBL (in [Figure 8](#games-12-00046-f008), the hollow markers lie below the filled markers).

#### 5.4. Discussion

Our agent-based model of spoofing aims to capture the essential logic of manipulation through influencing beliefs about market demand. In our model, the order book reflects aggregate information about the market fundamental, and learning traders can use this to advantage in their bidding strategies. The presence of such learning traders benefits price discovery and social welfare, but it also renders the market vulnerable to manipulation. As we demonstrate, simple spoofing strategies can effectively mislead learning traders, thereby distorting prices and reducing surplus compared to the non-spoofing baseline. Moreover, the persistence of learning traders in equilibrium with manipulation suggests that the elimination of spoofing requires active measures.

We acknowledge several factors that can limit the accuracy of our equilibrium analysis in individual game instances; these include sampling error, reduced-game approximation and restricted strategy coverage. Despite such limitations (inherent in any complex modeling effort), we believe the model offers a constructive basis to evaluate manipulation practices and any preventive or deterrent proposals to mitigate manipulation in strategic settings. In the rest of the paper, we build on this model and conduct a comprehensive analysis to investigate the following questions:

* Are there more robust ways for exchanges to disclose order book information ([Section 6](#sec6-games-12-00046))?
* Are there strategies that individual traders can adopt to exploit market information in less vulnerable ways ([Section 7](#sec7-games-12-00046))?

6. A Cloaking Mechanism to Mitigate Spoofing
---------------------------------------------

Despite regulatory enforcement and detection efforts, an individual spoofing episode is hard to catch in high-volume, high-velocity data streams. Legal definitions cannot easily be translated into computer programs to direct detection, and the lack of datasets with labeled manipulation cases makes training a reliable detector infeasible with supervised machine learning techniques. By definition, determining that a pattern of activity constitutes spoofing requires establishing the manipulative intent behind the submission and cancellation of placed orders. This is not easy, as order cancellation is in itself common and legitimate: according to one study, 95% of NASDAQ limit orders are canceled, with a median order lifetime of less than one second [[53](#B53-games-12-00046)].
Another challenge arises from the adversarial nature of a manipulator, who may strategically adapt to evade detection and regulation [[54](#B54-games-12-00046)]. Given the difficulties of robustly detecting manipulation, we study systematic approaches to deterring spoofing by rendering manipulative practices difficult or uneconomical.

Along these lines, Prewit [[55](#B55-games-12-00046)] and Biais and Woolley [[56](#B56-games-12-00046)] advocated the imposition of cancellation fees to disincentivize manipulative strategies that rely on frequent order cancellations. Others argue that cancellation fees could discourage the beneficial activity of liquidity providers and that, in the event of a market crash, such a policy may lengthen the recovery process [[57](#B57-games-12-00046)].

We propose here a cloaking mechanism to deter spoofing via the selective disclosure of order book information. The mechanism extends the traditional CDA market with a cloaking parameter $K$, which specifies the number of price levels to hide, symmetrically, from the inside of the limit order book. The idea is to make it more difficult for a spoofer who relies on instant order book information to post misleading bids, while not unduly degrading the general usefulness of market information. We focus on deterministic cloaking (i.e., a constant $K$ throughout the trading period), as a stochastic mechanism could raise issues regarding the verification of faithful market operations.

We extend our agent-based model of spoofing to support order book cloaking and conduct simulations to evaluate and find the optimal cloaking parameter in strategic settings, where both the learning traders and the spoofer adapt to the new mechanism. [Section 6.1](#sec6dot1-games-12-00046) formally defines the cloaking mechanism and describes how we modify the background trading and spoofing strategies accordingly. In [Section 6.2](#sec6dot2-games-12-00046), we present an EGTA study conducted to understand agents' strategic responses to the proposed mechanism. [Section 6.3](#sec6dot3-games-12-00046) reports results from performing empirical mechanism design [[58](#B58-games-12-00046)] to set cloaking parameters that maximize efficiency. Finally, in [Section 6.4](#sec6dot4-games-12-00046), we explore and evaluate sophisticated spoofing strategies that use probing to reveal cloaked information.

#### 6.1. A Cloaking Market Mechanism

The cloaking mechanism maintains a full limit order book just as the regular CDA market does, but it discloses only a selected part of the book to traders. Let $\mathrm{BID}_t^k$ denote the $k$th-highest buy price in the book at time $t$ and $\mathrm{ASK}_t^k$ the $k$th-lowest sell price. In a standard order book, at any given time $t$, the buy side of the book starts with the best bid, $\mathrm{BID}_t^1$, and extends to lower values; the sell side starts with the best ask, $\mathrm{ASK}_t^1$, and extends to higher ones. The cloaking mechanism works by symmetrically hiding a deterministic number of price levels $K$ from the inside of the order book. Thus, the disclosed order book in a cloaking mechanism with parameter $K$ starts with $\mathrm{BID}_t^{K+1}$ and $\mathrm{ASK}_t^{K+1}$ and extends to lower and higher values, respectively. Upon order submissions, cancellations and transactions, the market updates the full order book and then cloaks the $K$ inside levels. Therefore, an order hidden in the past can be revealed later due to the arrival of new orders at more competitive prices, or it can remain hidden throughout its lifetime due to a cancellation. The market discloses all transaction information with zero delay.
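Operationally, the disclosure rule is a simple slice of each side of the book; a sketch under our own naming:

```python
def disclosed_book(bids, asks, K):
    """Cloaked view: hide the K most competitive price levels on each side.
    `bids` are price levels sorted descending, `asks` ascending."""
    return bids[K:], asks[K:]

# Example: with K = 1 the best bid and best ask are concealed.
visible_bids, visible_asks = disclosed_book([102, 101, 100], [103, 104, 105], K=1)
# visible_bids == [101, 100]; visible_asks == [104, 105]
```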
**Example 1** (A Cloaking Mechanism with Parameter $K$). When $K = 0$, the market acts as a standard CDA, disclosing the full limit order book with zero delay. When $K = 1$, the mechanism conceals orders at the best quotes, that is, $\mathrm{BID}_t^1$ and $\mathrm{ASK}_t^1$. When $K = \infty$, the market does not reveal any part of the book and thus disallows learning from order book information.

Cloaking operates to deter spoofing in two ways. First, it mitigates the effect of spoof orders by pushing them further from the inside of the book. Second, it increases the spoofer's transaction risk, as the spoofer cannot as easily monitor the quantity of orders ahead of its spoof. On the other hand, the information hiding also affects non-manipulative traders; for instance, in our model it may degrade the HBL traders' learning capability. To quantify this tradeoff, we start by exploring a range of cloaking parameters, $K \in \{0, 1, 2, 4\}$, which control the amount of information concealed at any given time. We compare trading behavior and outcomes in markets with cloaking to those of a standard CDA. Among the nine market environments defined in [Section 5.1](#sec5dot1-games-12-00046), we consider three representatives that are increasingly challenging for the learning traders: LSHN with $\{\sigma_s^2 = 10^5, \sigma_n^2 = 10^9\}$, MSMN with $\{\sigma_s^2 = 5 \times 10^5, \sigma_n^2 = 10^6\}$ and HSLN with $\{\sigma_s^2 = 10^6, \sigma_n^2 = 10^3\}$. Together with the four cloaking parameters, this gives us a total of 12 market settings, or 24 games with and without spoofing.

The market is populated with 64 background traders and one exploitation agent. When adopting DPR to approximate this many-player game, we therefore use simulation data from the (64, 1)-agent environments to estimate reduced (4, 1)-player games, where four players aggregate and represent the background traders. In each game, we consider background trading strategies and spoofing practices similar to those of [Section 3](#sec3-games-12-00046), slightly modified to adapt to order book cloaking. Below, we describe the changes made to each strategy.

#### 6.1.1. Zero Intelligence

Recall that our ZI strategy uses a threshold parameter $\eta \in [0, 1]$ to transact immediately with an existing order and capture a portion of the desired surplus. That is, if the agent could achieve a fraction $\eta$ of its requested surplus at the market's best quotes, it simply takes that quote rather than posting a limit order for a future transaction. Under a cloaking mechanism, however, ZI can take into account only the currently visible best quotes, which are less competitive than the hidden quotes. To adjust to cloaking, we explore a range of more aggressive (smaller) $\eta$ values to ensure that ZI traders may still transact with incumbent orders to lock in a certain fraction of surplus. Besides the seven ZI strategies in [Table 1](#games-12-00046-t001), we further include three ZI strategies with $\eta = 0.4$ ([Table 2](#games-12-00046-t002)), which are competitive enough to appear in at least one equilibrium of our explored environments.

#### 6.1.2. Heuristic Belief Learning

We modify HBL to consider only the revealed order book information in the corresponding cloaking markets. Orders at competitive price levels will be missed by the belief function (Equation ([4](#FD4-games-12-00046))) if they stay hidden throughout their lifetime, or they may be considered with delay if they are later exposed at visible levels.
This reduction in bid information would naturally be expected to degrade HBL's learning effectiveness and thus its trading performance.

#### 6.1.3. Spoofing Strategy

We extend the original spoofing strategy ([Section 3.4](#sec3dot4-games-12-00046)) to cloaking markets. The strategy comprises three stages. At the beginning of a trading period, $[0, T_{\mathrm{spoof}}]$, the agent buys by accepting any sell order at a price lower than the fundamental mean $\bar{r}$. In a cloaking market, this can be achieved by placing a one-unit limit buy order at price $\bar{r}$ and immediately withdrawing it if it does not transact with an existing order.

During the second stage, $[T_{\mathrm{spoof}}, T_{\mathrm{sell}}]$, the agent submits spoof buy orders one tick behind the first visible bid, at price $\mathrm{BID}_{T_{\mathrm{spoof}}}^{K+1} - 1$, with volume $Q_{sp} \gg 1$. Whenever there is an update of the first visible bid, the spoofer replaces its original spoof with new orders at price $\mathrm{BID}_t^{K+1} - 1$. This spoofing strategy aims to boost the price, in the hope that the units purchased in Stage 1 can later be sold at higher prices. In controlled experiments, when the agent is not manipulating, it waits until the selling stage.

During the last stage, $[T_{\mathrm{sell}}, T]$, the agent starts to sell the units it previously bought by accepting any buy orders at a price higher than $\bar{r}$. Mirroring the first stage, this operates by placing one-unit limit sell orders at price $\bar{r}$, followed by immediate cancellation if not filled. An agent who also manipulates continues to spoof until all the bought units are sold or the trading period ends. The pure exploitation strategy can be considered a baseline for the spoofing strategy, allowing us to quantify how much more the agent may profit from spoofing the market.

We refer to an agent employing the above strategy, whether or not it places spoof orders, as an exploitation agent or exploiter. An exploiter who also spoofs is referred to as a spoofing agent or spoofer. Note that the spoofing strategy considered here faces no execution risk on its spoof orders under the assumption that it can respond immediately to quote changes. A more sophisticated strategy could probe the market to reveal the cloaked bids and then spoof at a visible price higher than $\mathrm{BID}_t^{K+1} - 1$. We defer discussion of such probing strategies to [Section 6.4](#sec6dot4-games-12-00046).
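The three stages can be summarized in a short sketch. Everything here is hypothetical scaffolding (the `book` methods and `state` fields are invented for illustration), not the simulator's interface:

```python
def exploiter_step(t, state, book, r_bar, T_spoof, T_sell, Q_sp, spoof=True):
    """One decision step of the three-stage exploiter of Section 6.1.3."""
    if t < T_spoof:                       # stage 1: buy below the fundamental mean
        book.submit_then_cancel_if_unfilled("buy", price=r_bar, qty=1)
    elif t < T_sell:                      # stage 2: optionally spoof one tick
        if spoof:                         # behind the first *visible* bid
            book.replace_spoof("buy", price=book.best_visible_bid() - 1, qty=Q_sp)
    else:                                 # stage 3: unwind above the mean
        if state["holdings"] > 0:
            book.submit_then_cancel_if_unfilled("sell", price=r_bar, qty=1)
            if spoof:                     # keep spoofing until fully unwound
                book.replace_spoof("buy", price=book.best_visible_bid() - 1, qty=Q_sp)
```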
#### 6.2. Tradeoff Faced by Cloaking Mechanisms

We start by separately investigating the impact of cloaking on background traders and on the spoofer. Our first set of games covers the range of cloaking environments without spoofing (i.e., markets populated with background traders and the non-manipulative exploiter). Detailed results on the equilibrium mixtures and outcomes of cloaking mechanisms without spoofing can be found in [Appendix B.1](#secBdot1-games-12-00046).

[Figure 9](#games-12-00046-f009) displays the HBL adoption rate (i.e., the total probability over HBL strategies) at equilibrium across cloaking mechanisms, $K \in \{0, 1, 2, 4\}$. We find that the competitiveness of HBL generally persists when the mechanism hides one or two price levels, but at higher cloaking levels the HBL fraction can decrease drastically. The information loss caused by cloaking weakens HBL's ability to make predictions. The effect is strongest in environments with high fundamental shocks (e.g., HSLN), as previously hidden orders can become uninformative or even misleading by the time they are revealed. Such information loss is further confirmed by [Figure 10](#games-12-00046-f010), which compares the price discovery achieved at market equilibrium. We find that in markets where the fundamental shock is relatively high and a higher level of cloaking is adopted, transactions may not reveal fundamental estimates very well. Given the decreasing HBL prevalence and learning effectiveness, the background-trader surplus achieved at equilibrium also decreases (as shown by the blue diamonds in [Figure 11](#games-12-00046-f011)b).

Next, we examine whether cloaking can effectively mitigate manipulation. We perform controlled experiments by letting the exploitation agent also execute the spoofing strategy against each equilibrium found, and we compare the impact of spoofing under the cloaking mechanism to that under the standard fully revealed order book ($K = 0$). For every equilibrium, we simulate at least 10,000 paired instances and evaluate their differences in transaction prices and agents' payoffs.

From these controlled experiments, we find that cloaking can considerably diminish the price distortion caused by spoofing across environments. Recall that we measure price distortion as the transaction price series in a market with spoofing minus that of its paired market without spoofing. [Figure 12](#games-12-00046-f012)a demonstrates the case of the specific environment MSMN: without cloaking ($K = 0$), transaction prices rise significantly subsequent to the execution of spoofing at $T_{sp} = 1000$, as HBL traders are tricked by the spoof buy orders; in cloaked markets, this price rise is effectively mitigated. [Figure 12](#games-12-00046-f012)b further illustrates the surplus change of background traders and of the exploiter when it also spoofs. We find that the exploiter can robustly profit from manipulating the learning agents in the no-cloaking case. In contrast, partially hiding the order book can significantly reduce spoofing profits and prevent background traders from losing much. These findings indicate that the cloaking mechanism may deter or even eliminate the exploiter's incentive to spoof.

#### 6.3. Finding the Optimal Cloaking

Given the tradeoff between preserving order book informativeness and mitigating manipulation, the question becomes: under what circumstances do the deterrence benefits of cloaking exceed its efficiency costs? To answer this, we re-equilibrate the games, allowing the exploiter to strategically choose whether to spoof and background traders to execute any strategy in [Table 1](#games-12-00046-t001) or [Table 2](#games-12-00046-t002). This allows background traders and the exploitation agent to respond strategically to each other under a given level of information cloaking. Detailed results on the equilibrium mixtures and outcomes can be found in [Appendix B.2](#secBdot2-games-12-00046).

Our findings are presented in [Figure 11](#games-12-00046-f011). We compare market outcomes with and without cloaking on two metrics: the probability of spoofing and the total background-trader surplus in equilibrium. (Due to the welfare benefits of HBL, equilibria with pure ZIs usually achieve much lower surplus than those with HBLs. For simplicity of presentation, we omit all-ZI equilibria from [Figure 11](#games-12-00046-f011)b. Environments with such cases are marked with asterisks.) As shown in [Figure 11](#games-12-00046-f011)a, the cloaking mechanism effectively decreases the probability of spoofing under most environment settings, completely eliminating spoofing in some cases.
Moreover, we find that moderate cloaking can preserve the prevalence of HBL at equilibrium, which would otherwise be decreased by spoofing, as we saw in [Section 5](#sec5-games-12-00046).

This weakened spoofing effect is further confirmed by [Figure 11](#games-12-00046-f011)b, which compares the total background-trader surplus achieved in equilibrium under mechanisms with and without cloaking. Without cloaking (i.e., the $K = 0$ columns), the background surplus achieved in equilibrium when the exploiter strategically chooses to spoof (orange triangles) is much lower than the surplus attained when the exploiter is prohibited from spoofing (blue diamonds). We find that the decrease in surplus due to spoofing can be considerably mitigated by order book cloaking: as shown in [Figure 11](#games-12-00046-f011)b, the vertical distances between the blue diamonds and orange triangles shrink for $K > 0$. Moreover, we find that the benefit of this improved robustness to spoofing can outweigh the associated efficiency costs in markets with moderate fundamental shocks (e.g., LSHN and MSMN). In those environments, background traders under mechanisms that cloak one or two price levels achieve higher surplus than those without cloaking. However, in a market with high shocks (e.g., HSLN), hiding or delaying even a little market information degrades learning to such a degree as to render cloaking counter-productive.

#### 6.4. Probing the Cloaking Mechanism to Spoof

To this point, we have considered only spoofers who are unwilling to risk execution of their spoof orders. A more sophisticated manipulator could probe the market, submitting a series of orders at slightly higher prices in an attempt to reveal the cloaked bids and spoof at a visible price higher than $\mathrm{BID}_t^{K+1} - 1$. In this section, we study the feasibility of such probing for the spoofing agent.

We design and evaluate parameterized versions of the spoofing strategy combined with probing. The strategy is governed by two parameters: the step size $\delta$, which controls probing aggressiveness, and the maximum number of attempts allowed per time step, $l$, which limits the probing effort.

The spoofer probes by submitting a unit buy order at $\mathrm{BID}_t^{K+1} + \delta$, a price inside the visible quotes, in the hope of exposing $\mathrm{BID}_t^{K}$. If the probe succeeds, the spoofer immediately cancels the probe order and places a new spoof order at $\mathrm{BID}_t^{K} - 1$, right behind the lowest hidden bid level. If probing fails because the price was too conservative, the spoofer re-probes, raising the price at a decreasing rate (as a function of $\delta$ and the attempt number), until a higher price is revealed or the number of probing attempts reaches $l$. If probing causes a transaction, the spoofer halves the price increment and re-probes. Algorithm 1 describes the detailed probing procedure.
[Table 3](#games-12-00046-t003) reports, for cloaking-beneficial environments, the minimum $l$ required for step sizes $\delta \in \{1, 2, 4, 8\}$ to achieve higher payoffs than the equilibrium performance we found for the exploiter in [Section 6.3](#sec6dot3-games-12-00046). Multiple rows for the same cloaking parameter correspond to the multiple equilibria found in that market setting. Dashes in the table indicate that an exploiter cannot beat the equilibrium performance with the corresponding $\delta$. We find that, to achieve higher payoffs, the spoofer has to probe with multiple attempts per time step, and a conservative probing strategy with smaller $\delta$ usually requires more effort. In practice, such frequent cancellation and placement of orders may not be feasible and can greatly increase the risk of the associated probing and spoofing intent being identified.

[Figure 13](#games-12-00046-f013) further quantifies the change in exploitation payoff and transaction risk (measured as the number of transactions caused by probing) as we vary the probing step $\delta$ and the attempt limit $l$. As shown in [Figure 13](#games-12-00046-f013)a, relaxing the maximum number of probing attempts steadily increases the transaction risk, but it does not necessarily improve payoff. Moreover, the spikiness we observe in the exploiter's payoff suggests that performance is highly sensitive, and it would therefore be difficult to find a $(\delta, l)$ pair that robustly maximizes profit. [Figure 13](#games-12-00046-f013)b further demonstrates that an exploiter can probe aggressively with larger step sizes to reduce effort, but usually at the cost of a higher transaction risk and consequently a lower payoff. In highly dynamic markets with frequently updated quotes, finding an appropriate $\delta$ to successfully probe a cloaking mechanism within a reasonable number of attempts would be quite challenging.

We have explored other, more aggressive probing strategies, where the spoofer probes to expose multiple hidden levels and spoofs at even higher prices. To accomplish that, the spoofer is forced to keep at least one order in the cloaked levels to guarantee that its spoof orders are visible. However, according to our experiments, such aggressive probing strategies fail to beat the equilibrium performance, as orders kept in hidden levels are often accepted by background traders due to adverse selection. Those transactions tend to accumulate the spoofer's position and consequently impose losses at the end of the trading period.
#### 6.5. Discussion

Our cloaking mechanism offers a systematic approach to disincentivizing spoofing. We conduct EGTA to understand agents' strategic responses to the proposed mechanism and to evaluate the effectiveness and robustness of cloaking. Experimental results demonstrate that cloaking the order book can significantly diminish the efficacy of spoofing, but at the loss of useful information for the learning traders. With the goal of balancing this tradeoff to maximize background-trader surplus, we perform empirical mechanism design to choose the optimal cloaking across parametrically distinct environments. We find that, in markets with moderate shocks, the benefit of cloaking in mitigating spoofing can outweigh its efficiency cost, whereas, in markets with large fundamental fluctuations, hiding even a little order book information can largely degrade learning efficiency and render the cloaking mechanism counter-productive. By further exploring sophisticated spoofing strategies that probe to reveal cloaked information, we observe that the associated effort and risk generally exceed the gains and that finding reliably profitable probing regimens is quite difficult. We conclude that the proposed cloaking mechanism cannot be easily circumvented by probing.

7. Learning-Based Trading Strategies under the Presence of Market Manipulation
-------------------------------------------------------------------------------

We next consider how individual traders may construct strategies that are more robust to manipulation. In realistic market scenarios, traders are aware of potential manipulation but unable to reliably detect spoofing behavior in real time. In the absence of manipulation, traders submit orders that reflect their private observations and preferences, and thus learning from others' actions enables more informed decisions. Indeed, as shown above, learning as implemented by HBL agents is effective in a realistic market model and provides benefits to the learning agent as well as to market efficiency. HBL is vulnerable to spoofing, however, and agents adopting such learning are harmed by spoofing compared to non-learning strategies that are oblivious to spoofers and thus non-manipulable. The question we investigate in this section is whether learning-based strategies can be designed to be similarly robust to spoofing. We seek strategies by which individual traders can learn from market information, but in less vulnerable ways.

We treat the original HBL described in [Section 3.3.3](#sec3dot3dot3-games-12-00046) as a baseline strategy and propose two variations that aim to reasonably trade off learning effectiveness in non-manipulated markets for robustness against manipulation. The first variation works by selectively ignoring orders at certain price levels, particularly where spoof orders are likely to be placed. The second variation considers the full order book, but has the flexibility to adjust the offer price by a stochastic offset. The adjustment serves to correct biases in learned price beliefs caused either by manipulation or by intrinsic limitations of the belief function.
We formally define the two variations in [Section 7.1](#sec7dot1-games-12-00046), and then evaluate the proposed strategies in terms of their effectiveness in non-manipulated markets and robustness against manipulation, beginning in [Section 7.2.1](#sec7dot2dot1-games-12-00046).

We adopt the standard CDA market mechanism as described in [Section 3.1](#sec3dot1-games-12-00046). The market is populated with 64 background traders and one profitable exploiter. Background traders can choose from a select set of strategies covering ZI, the original HBL, and the two proposed variations of HBL. The exploiter follows the three-stage exploitation strategy specified in [Section 6.1](#sec6dot1-games-12-00046) and executes spoofing in selected treatments. As in our study of cloaking mechanisms, we consider three representative market settings for our experiments, namely LSHN, MSMN and HSLN.

#### 7.1. Two Variations of HBL

#### 7.1.1. HBL with Selective Price Level Blocking

Our first HBL variation is inspired by the success of our cloaking mechanism. It takes advantage of the common placement of spoof orders closely behind the market best quotes. Instead of including all observed trading activity in its memory to construct the belief function, as standard HBL does, the idea is to neglect limit orders at a specified price level when assembling the dataset $D$ to learn from. We extend standard HBL with a blocking parameter $\chi$, which specifies the index of a single price level to ignore, symmetrically, from the inside of the limit order book. For example, when $\chi = 1$, the trading agent constructs a dataset $D \setminus O_{\chi=1}$ by considering only orders strictly outside the best bid and ask. The goal of this additional strategic parameter is to exclude price levels where spoof orders are likely to appear. However, ignoring orders may come at the cost of less effective learning, especially when information that conveys true insight is blocked from the belief function.

#### 7.1.2. HBL with Price Offsets

Our second HBL variation considers all orders in its memory, but translates the target price $P_i^*(t)$ derived by surplus maximization in Equation ([5](#FD5-games-12-00046)) by an offset uniformly drawn from $[R_{\min}, R_{\max}]$. Specifically, a background trader $i$ who arrives at the market at time $t$ with the optimized price $P_i^*(t)$ submits a limit order for a single unit of the security at price

$$p_i(t) \sim \begin{cases} U\big[P_i^*(t) - R_{\max},\; P_i^*(t) - R_{\min}\big] & \text{if buying},\\ U\big[P_i^*(t) + R_{\min},\; P_i^*(t) + R_{\max}\big] & \text{if selling}. \end{cases} \tag{6}$$

A positive offset can be viewed as a hedge against misleading information, effectively shading the bid to compensate for manipulation risk. A negative offset increases the probability of a near-term transaction, which may have benefits in reducing exposure to future spoofing. Offsets (positive or negative) may also serve a useful correction function even when manipulation is absent. In particular, negative offsets may compensate for the myopic nature of the HBL optimization in Equation ([5](#FD5-games-12-00046)), which considers only the current bid, ignoring subsequent market arrivals and opportunities to trade additional units. Our design here is in line with prior literature [[24](#B24-games-12-00046),[25](#B25-games-12-00046)] that refines the original HBL to become more competitive.
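As a minimal sketch of the offset rule in Equation (6) (function and variable names are ours, not the simulator's API):

```python
import random

def offset_price(p_star, r_min, r_max, buying):
    """Sample the submitted limit price per Equation (6).

    p_star is the surplus-maximizing price from Equation (5).
    With a negative interval such as [-10, 0], a buyer bids in
    [p_star, p_star + 10], making near-term execution more likely;
    a positive interval shades the order away from p_star instead.
    """
    if buying:
        return random.uniform(p_star - r_max, p_star - r_min)
    return random.uniform(p_star + r_min, p_star + r_max)

# An HBL[-10, 0] buyer around a learned price of 100,000:
print(offset_price(100_000, -10, 0, buying=True))
```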
#### 7.2. Empirical Evaluation

#### 7.2.1. Standard HBL

We start with our baseline market environments, where background traders are restricted to choosing from the standard HBL strategies and five parametrically different ZI strategies in [Table 4](#games-12-00046-t004). Figure 18 (dark grey columns) verifies what is observed in [Section 5](#sec5-games-12-00046) within this restricted set of background-trading strategies: (1) the learning-based trading strategy is more widely preferred in environments where the fundamental shock is low and observation noise is high (e.g., LSHN is the most learning-friendly environment); and (2) the presence of spoofing generally hurts the learning-based strategy and reduces background-trader surplus. Detailed equilibrium outcomes can be found in [Appendix C.1](#secCdot1-games-12-00046). We next evaluate the two HBL variations.

#### 7.2.2. HBL with Selective Price Level Blocking

Learning traders who choose to ignore certain orders face a natural tradeoff between losing useful information and correctly blocking spoof orders to avoid manipulation. We first examine, in non-spoofing environments, how learning effectiveness may be compromised by excluding orders at each price level. Starting with the equilibrium strategy profile of each non-spoofing market environment found in [Section 7.2.1](#sec7dot2dot1-games-12-00046) (we arbitrarily select one if multiple equilibria were found in an environment), we perform controlled experiments by letting background traders who adopt the standard HBL strategy ignore orders from a selected price level throughout the trading period. [Table 5](#games-12-00046-t005) compares the payoffs obtained by HBL in its standard form and by variations that, respectively, block orders at the first, second and third price level in the order book. We find that, consistently across market settings, HBL agents benefit the most by learning from the market's best bids and asks, and they can achieve fairly similar performance even when orders at a selected level beyond the best quotes are ignored.

In response to the HBL variation that ignores price levels, we extend the exploiter to be able to place spoof orders behind a chosen price level, denoted by $\psi$. For example, when $\psi = 2$, the exploiter injects spoof orders one tick behind the second-best bid. We start with the same set of equilibrium strategy profiles and conduct controlled experiments to evaluate how injecting spoof orders at different levels changes the manipulation effect, even when learning traders consider the full order book (i.e., adopt standard HBL). We measure the effectiveness of each spoofing strategy by profits from trade as well as by the price deviation caused by spoof orders. Experimental results ([Table 5](#games-12-00046-t005)) show that the exploiter benefits the most by placing spoof orders behind the best bid (i.e., $\psi = 1$) and that moving spoof orders to less competitive levels reduces exploitation profit. We further confirm this weakened manipulation effect in [Figure 14](#games-12-00046-f014), which showcases market price deviations caused by different spoofing strategies in the MSMN environment. We find the price rise diminishes as spoof orders are placed further away from the best bid.

Although our exploration of possible spoofing strategies here is limited, the results suggest that spoof orders near the market quotes tend to maximize the manipulation effect. In response, HBL traders who adapt to the presence of spoofing may naturally block orders around such levels, as sketched below.
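The two strategic parameters can be grounded with a schematic sketch (our own illustrative data layout, not the authors' simulator code): $\chi$ selects the depth index that HBL drops from its learning set, and $\psi$ selects the bid level behind which the exploiter spoofs.

```python
def blocked_dataset(observations, chi):
    """HBL's learning set with one price level ignored (Section 7.1.1).

    Each observation pairs an order with its depth index from the
    inside of the book at observation time (1 = best quote on its
    side); level chi is excluded symmetrically on both sides.
    """
    return [order for order, level in observations if level != chi]

def spoof_price(bid_levels, psi, tick=1):
    """Price of a spoof order placed one tick behind the psi-th best
    bid (psi = 1 spoofs behind the best bid); bid_levels lists the
    distinct bid prices, best first."""
    return bid_levels[psi - 1] - tick

# Example: with bids at [100000, 99995, 99990], psi = 2 spoofs at 99994.
print(spoof_price([100_000, 99_995, 99_990], 2))
```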
[Figure 15](#games-12-00046-f015) shows that, when blocking the correct level, HBL traders can significantly increase their payoffs and reduce the amount the exploiter can profit via manipulation. This mitigated manipulation effect is further verified by the dashed blue line in [Figure 14](#games-12-00046-f014), which shows price deviations close to zero. Price differences are not strictly zero before spoofing begins (time 1000), as traders who adopt $\mathrm{HBL}_{\chi=2}$ consistently block orders throughout the trading period.

Given these beneficial payoff deviations, in the final set of experiments we conduct EGTA to find approximate Nash equilibria in games where background traders may choose trading strategies from the ZI family and from HBLs that block a selected price level (any strategy in [Table 4](#games-12-00046-t004) or [Table 6](#games-12-00046-t006)). Detailed equilibrium results can be found in [Appendix C.2](#secCdot2-games-12-00046). As shown in Figure 18 (light grey columns), we find that: (1) adding the blocking strategic parameter does not affect the competitiveness of learning-based strategies with respect to ZI (HBL adoption rates in equilibrium remain in ranges similar to those of markets where only the standard HBL strategy is provided); and (2) the extended order-blocking ability improves the learning robustness of HBL traders (in contrast to the surplus decreases caused by manipulation when background agents are restricted to standard HBL, background-trader surpluses are no longer significantly reduced when agents can strategically block orders in the face of manipulation). In other words, background traders who learn from market information but also strategically ignore orders can achieve robustness against manipulation and retain comparable effectiveness in non-manipulated markets.

#### 7.2.3. HBL with Price Offsets

Our second HBL variation relies on a price adjustment rather than information selection to adapt to different market conditions. We start by exploring a set of price offset intervals $[R_{\min}, R_{\max}]$, ranging from positive values that understate the learned offer prices (similar to price shading) to negative values that adjust prices to become more competitive. As in [Section 7.2.2](#sec7dot2dot2-games-12-00046), we conduct controlled experiments starting from the equilibrium profiles found in [Section 7.2.1](#sec7dot2dot1-games-12-00046) and then deviating from standard HBL to allow price offsets. [Figure 16](#games-12-00046-f016) shows, for the MSMN non-spoofing environment, how HBL surplus and the number of transactions vary in markets where HBL traders adopt different offset intervals. (HBL with positive offsets usually generates much lower payoff. For presentation simplicity, we crop the surplus decrease at −400 in [Figure 16](#games-12-00046-f016).) We find that adjusting learned prices with a range of negative offsets is generally beneficial in our setting, where agents have reentry opportunities. It increases HBL payoff and facilitates transactions, thus improving overall price convergence in markets.

To test the effectiveness of spoofing against the new HBL variation, we further have the $\mathrm{SP}_{\psi=1}$ spoofer act in markets where the learning background traders, respectively, adopt standard HBL, $\mathrm{HBL}_{[-10,0]}$, $\mathrm{HBL}_{[-20,0]}$, $\mathrm{HBL}_{[-40,0]}$ and $\mathrm{HBL}_{[-80,0]}$. [Figure 17](#games-12-00046-f017) compares the market price deviations caused by spoof orders in those markets.
We find that, although all markets experience an initial price rise as a result of misled pricing beliefs, the spoofing effect tends to wear off faster in markets where HBL traders adopt negative price offsets. This may be because negative offsets promote near-term transactions: as more transactions happen, HBL traders can glean true information from the transaction prices to construct more accurate belief functions, and the $\mathrm{SP}_{\psi=1}$ spoofer places spoof orders at lower prices due to the widened bid–ask spreads. Indeed, we find that markets populated with standard HBL, $\mathrm{HBL}_{[-10,0]}$, $\mathrm{HBL}_{[-20,0]}$ and $\mathrm{HBL}_{[-40,0]}$, respectively, have average spoof-order prices of 99,972, 99,951, 99,945 and 99,950.

Finally, we conduct EGTA in games with and without spoofing to find Nash equilibria where background traders can choose from ZI strategies and HBL variations that adjust learned prices with certain offsets ([Table 4](#games-12-00046-t004) and [Table 7](#games-12-00046-t007)). Detailed equilibrium results can be found in [Appendix C.3](#secCdot3-games-12-00046). Equilibrium results ([Figure 18](#games-12-00046-f018), white columns) show that the extended price offsets tend to largely improve HBL's profitability and background-trader surpluses, in markets both with and without manipulation. Such price adjustments can especially help learning traders to better adapt to high-shock environments where prices are less predictable from past observations. However, the extended offsets may not directly address manipulation or improve learning robustness against spoofing.

#### 7.3. Combining Order Blocking and Price Offsets

We observe that HBL with price offsets is competitive overall across different market settings, but its performance still degrades in markets with spoofing (refer to [Figure 18](#games-12-00046-f018), white columns). Since the second HBL variation demonstrates a general improvement in settings both with and without manipulation, we augment this variation with price level blocking to reduce vulnerability to spoofing. Specifically, we extend the background trading strategy set in [Table 6](#games-12-00046-t006) with three strategies: $\mathrm{HBL}_{[-10,0]}^{\chi=2}$, $\mathrm{HBL}_{[-20,0]}^{\chi=2}$ and $\mathrm{HBL}_{[-40,0]}^{\chi=2}$, which appeared competitive in our preliminary explorations. We conduct EGTA in a similar manner across market environments with and without spoofing (detailed results can be found in [Appendix C.4](#secCdot4-games-12-00046)). Equilibrium outcomes ([Figure 18](#games-12-00046-f018), striped columns) show that: (1) compared to markets where only the standard and the price-blocking HBL are provided, HBL that combines the two variations is more widely preferred and can help to increase overall background-trader surplus in equilibrium; and (2) across all environments, background-trader surpluses in markets with and without spoofing fall roughly into the same ranges. These results suggest that, by combining the two proposed variations, HBL traders can enjoy both improved competitiveness and robustness against manipulation.

8. Conclusions
---------------

In this paper, we construct a computational model of spoofing: the tactic of manipulating market prices by targeting the order book. We design an HBL strategy that uses order book information to make pricing decisions. Since HBL traders use the order book, they are potentially vulnerable to spoofing attacks, and we confirm this in simulation analysis.
We demonstrate that, in the absence of spoofing, HBL is generally adopted in equilibrium and benefits price discovery and social welfare. Although the presence of spoofing decreases the HBL proportion among background traders, HBL's persistence in equilibrium indicates that the market remains robustly spoofable. By comparing equilibrium outcomes with and without spoofing, we find that spoofing tends to decrease market surplus. Comparisons across parametrically different environments reveal factors that may influence the adoption of HBL and the impact of spoofing.

We further propose a cloaking mechanism to deter spoofing. The mechanism discloses a partially cloaked order book by symmetrically concealing a deterministic number of price levels from the inside. Our results demonstrate that the proposed cloaking mechanism can significantly diminish the efficacy of spoofing, but at the cost of a reduced HBL proportion and surplus in equilibrium. With the goal of maximizing background-trader surplus, we perform empirical game-theoretic analysis across parametrically different mechanisms and environments, and find that in markets with moderate shocks, the benefit of cloaking in mitigating spoofing outweighs its efficiency cost. By further exploring sophisticated spoofing strategies that probe to reveal cloaked information, we demonstrate that the associated effort and risk exceed the gains, and verify that the proposed cloaking mechanism cannot easily be circumvented.

Two strategy variations based on the standard HBL strategy are explored. The first variation considers common characteristics of spoofing activity and works by offering agents the flexibility to neglect limit orders at a specified price level when assembling a dataset to learn from. The second variation learns from full order book information and then adjusts the target price derived from surplus maximization by a random offset to correct biases in the learning process. Our analysis shows that the first HBL variation offers learning traders a way to strategically block orders to improve robustness against spoofing, while achieving similar competitiveness in non-manipulated markets. Our second HBL variation exhibits a general improvement over baseline HBL, in markets both with and without manipulation. Further explorations suggest that traders can enjoy both improved profitability and robustness by combining the two HBL variations.

Author Contributions
--------------------

Spoofing market model conceptualization, methodology, and game-theoretic analysis, X.W. and M.P.W.; implementation of the spoofing simulator and experiments, X.W.; cloaking mechanism conceptualization and analysis, X.W., Y.V. and M.P.W.; variations of learning trading strategies conceptualization, implementation, and result analysis, X.W., C.H. and M.P.W.; writing—original draft preparation, X.W.; writing—review and editing, C.H., Y.V. and M.P.W. All authors have read and agreed to the published version of the manuscript.

Funding
-------

This research was supported in part by grant IIS-1421391 and CAREER Award IIS-1905558 from the US National Science Foundation.

Acknowledgments
---------------

We thank Uday Rajan and Erik Brinkman for helpful advice and discussions.
This work extends and supersedes a series of conference papers: “Spoofing the Limit Order Book: An Agent-Based Model” presented at the 16th International Conference on Autonomous Agents and Multiagent Systems [[59](#B59-games-12-00046)], “A Cloaking Mechanism to Mitigate Market Manipulation” presented at the 27th International Joint Conference on Artificial Intelligence [[60](#B60-games-12-00046)], and “Learning-Based Trading Strategies in the Face of Market Manipulation” presented at the 1st ACM International Conference on AI in Finance [[61](#B61-games-12-00046)].

Conflicts of Interest
---------------------

The authors declare no conflict of interest.

Appendix A. Detailed Equilibrium Results for Spoofing the Limit Order Book
---------------------------------------------------------------------------

#### Appendix A.1. Summary and Comparison of Markets without and with Spoofing

**Table A1.** Background-trader surplus and HBL proportion in equilibrium of markets without spoofing. Each row describes one Nash equilibrium found in a game (rounded to the nearest integer). Surpluses marked with asterisks are statistically significantly higher than those achieved in the corresponding markets with spoofing (see [Table A3](#games-12-00046-t0A3)). N = 28, without spoofing.

| Env | Surplus | 95% CI | HBL% |
| --- | --- | --- | --- |
| LSLN | 18,198 \* | [18,127, 18,269] | 88 |
| LSLN | 18,246 \* | [18,177, 18,315] | 98 |
| LSMN | 18,189 \* | [18,116, 18,262] | 100 |
| LSHN | 18,265 \* | [18,200, 18,330] | 100 |
| MSLN | 17,947 \* | [17,851, 18,043] | 58 |
| MSLN | 16,693 \* | [16,613, 16,773] | 0 |
| MSMN | 17,923 \* | [17,829, 18,017] | 62 |
| MSMN | 17,927 \* | [17,839, 18,015] | 43 |
| MSMN | 16,726 | [16,638, 16,814] | 0 |
| MSHN | 18,266 \* | [18,188, 18,344] | 100 |
| HSLN | 16,565 | [16,485, 16,645] | 0 |
| HSLN | 17,143 \* | [17,055, 17,231] | 0 |
| HSMN | 16,667 | [16,565, 16,769] | 0 |
| HSHN | 18,253 \* | [18,179, 18,327] | 87 |

**Table A2.** Similar to [Table A1](#games-12-00046-t0A1) above. Surpluses marked with asterisks are statistically significantly higher than those achieved in the corresponding markets with spoofing (see [Table A4](#games-12-00046-t0A4)). N = 65, without spoofing.
| Env | Surplus | 95% CI | HBL% |
| --- | --- | --- | --- |
| LSLN | 43,157 \* | [43,016, 43,298] | 71 |
| LSLN | 43,102 \* | [42,980, 43,224] | 73 |
| LSLN | 43,010 \* | [42,885, 43,135] | 95 |
| LSMN | 43,249 \* | [43,106, 43,392] | 83 |
| LSMN | 43,086 \* | [42,964, 43,208] | 79 |
| LSHN | 42,946 | [42,817, 43,075] | 94 |
| MSLN | 42,804 \* | [42,647, 42,961] | 57 |
| MSMN | 42,807 \* | [42,652, 42,962] | 56 |
| MSMN | 42,745 \* | [42,610, 42,880] | 56 |
| MSHN | 43,265 \* | [43,128, 43,402] | 86 |
| HSLN | 42,455 \* | [42,316, 42,594] | 37 |
| HSMN | 42,383 \* | [42,248, 42,518] | 37 |
| HSMN | 42,144 \* | [41,999, 42,289] | 32 |
| HSHN | 42,981 | [42,867, 43,095] | 89 |

**Table A3.** Background-trader surplus and HBL proportion in equilibrium of markets with spoofing. Each row describes one Nash equilibrium found in a game (rounded to the nearest integer). N = 28, with spoofing.

| Env | Surplus | 95% CI | HBL% |
| --- | --- | --- | --- |
| LSLN | 18,076 | [18,000, 18,152] | 78 |
| LSMN | 18,040 | [17,971, 18,109] | 91 |
| LSHN | 18,125 | [18,054, 18,196] | 87 |
| MSLN | 16,774 | [16,707, 16,841] | 0 |
| MSMN | 17,883 | [17,795, 17,971] | 34 |
| MSMN | 17,517 | [17,429, 17,605] | 24 |
| MSMN | 16,796 | [16,708, 16,884] | 0 |
| MSHN | 18,108 | [18,032, 18,184] | 81 |
| HSLN | 16,749 | [16,680, 16,818] | 0 |
| HSMN | 16,667 | [16,565, 16,769] | 0 |
| HSHN | 17,999 | [17,923, 18,075] | 97 |

**Table A4.** Similar to [Table A3](#games-12-00046-t0A3) above, but with N = 65, with spoofing.

| Env | Surplus | 95% CI | HBL% |
| --- | --- | --- | --- |
| LSLN | 42,868 | [42,741, 42,995] | 70 |
| LSLN | 42,993 | [42,850, 43,136] | 70 |
| LSMN | 42,961 | [42,812, 43,110] | 80 |
| LSHN | 43,061 | [42,943, 43,179] | 80 |
| LSHN | 43,103 | [42,983, 43,223] | 74 |
| MSLN | 42,639 | [42,508, 42,770] | 41 |
| MSLN | 42,698 | [42,549, 42,847] | 50 |
| MSMN | 42,624 | [42,477, 42,771] | 52 |
| MSHN | 43,038 | [42,887, 43,189] | 75 |
| MSHN | 43,101 | [42,946, 43,256] | 76 |
| HSLN | 41,815 | [41,664, 41,966] | 29 |
| HSLN | 39,502 | [39,398, 39,606] | 0 |
| HSMN | 40,091 | [39,968, 40,214] | 0 |
| HSHN | 43,143 | [43,012, 43,274] | 71 |

#### Appendix A.2. Markets without Spoofing

**Table A5.** Equilibria for games without spoofing, N = 28, calculated from the four-player DPR approximation. Each row of the table describes one equilibrium found, with its corresponding surplus, HBL adoption rate and the equilibrium mixture probabilities of the strategies included.
| Env | Surplus | HBL | ZI 1 | ZI 2 | ZI 3 | ZI 4 | ZI 5 | ZI 6 | ZI 7 | HBL 1 | HBL 2 | HBL 3 | HBL 4 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LSLN | 18,198 | 0.88 | 0 | 0.12 | 0 | 0 | 0 | 0 | 0 | 0.88 | 0 | 0 | 0 |
| LSLN | 18,246 | 0.98 | 0 | 0 | 0 | 0 | 0.02 | 0 | 0 | 0 | 0.92 | 0.06 | 0 |
| LSMN | 18,189 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.82 | 0 | 0.18 | 0 |
| LSHN | 18,265 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| MSLN | 17,947 | 0.58 | 0 | 0 | 0 | 0.40 | 0.02 | 0 | 0 | 0 | 0.40 | 0.18 | 0 |
| MSLN | 16,693 | 0 | 0 | 0 | 0 | 0 | 0 | 0.74 | 0.26 | 0 | 0 | 0 | 0 |
| MSMN | 17,923 | 0.62 | 0 | 0 | 0 | 0 | 0.38 | 0 | 0 | 0.44 | 0.18 | 0 | 0 |
| MSMN | 17,927 | 0.43 | 0 | 0.04 | 0.53 | 0 | 0 | 0 | 0 | 0.43 | 0 | 0 | 0 |
| MSMN | 16,726 | 0 | 0 | 0 | 0 | 0 | 0 | 0.80 | 0.20 | 0 | 0 | 0 | 0 |
| MSHN | 18,266 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.74 | 0.24 | 0 | 0.02 |
| HSLN | 16,565 | 0 | 0 | 0 | 0 | 0 | 0 | 0.73 | 0.27 | 0 | 0 | 0 | 0 |
| HSLN | 17,143 | 0 | 0 | 0 | 0.53 | 0 | 0 | 0 | 0.47 | 0 | 0 | 0 | 0 |
| HSMN | 16,667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| HSHN | 18,253 | 0.87 | 0 | 0 | 0.13 | 0 | 0 | 0 | 0 | 0.84 | 0 | 0 | 0.03 |

**Table A6.** Equilibria for games without spoofing, N = 65, calculated from the five-player DPR approximation. Each row of the table describes one equilibrium found, with its corresponding surplus, HBL adoption rate and the equilibrium mixture probabilities of the strategies included.

| Env | Surplus | HBL | ZI 1 | ZI 2 | ZI 3 | ZI 4 | ZI 5 | ZI 6 | ZI 7 | HBL 1 | HBL 2 | HBL 3 | HBL 4 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LSLN | 43,157 | 0.71 | 0 | 0.29 | 0 | 0 | 0 | 0 | 0 | 0.59 | 0 | 0.12 | 0 |
| LSLN | 43,102 | 0.73 | 0 | 0 | 0.27 | 0 | 0 | 0 | 0 | 0.73 | 0 | 0 | 0 |
| LSLN | 43,010 | 0.95 | 0 | 0 | 0.05 | 0 | 0 | 0 | 0 | 0.22 | 0 | 0.73 | 0 |
| LSMN | 43,249 | 0.83 | 0 | 0 | 0.17 | 0 | 0 | 0 | 0 | 0.57 | 0 | 0.26 | 0 |
| LSMN | 43,086 | 0.79 | 0 | 0.05 | 0.16 | 0 | 0 | 0 | 0 | 0 | 0.79 | 0 | 0 |
| LSHN | 42,946 | 0.94 | 0 | 0.04 | 0.02 | 0 | 0 | 0 | 0 | 0.75 | 0.19 | 0 | 0 |
| MSLN | 42,804 | 0.57 | 0 | 0 | 0.43 | 0 | 0 | 0 | 0 | 0.31 | 0.26 | 0 | 0 |
| MSMN | 42,807 | 0.56 | 0 | 0 | 0.44 | 0 | 0 | 0 | 0 | 0.31 | 0.25 | 0 | 0 |
| MSMN | 42,745 | 0.56 | 0.01 | 0 | 0 | 0.43 | 0 | 0 | 0 | 0 | 0.56 | 0 | 0 |
| MSHN | 43,265 | 0.86 | 0 | 0.06 | 0 | 0.08 | 0 | 0 | 0 | 0.67 | 0.19 | 0 | 0 |
| HSLN | 42,455 | 0.37 | 0 | 0 | 0.63 | 0 | 0 | 0 | 0 | 0 | 0.18 | 0.19 | 0 |
| HSMN | 42,383 | 0.37 | 0 | 0 | 0.63 | 0 | 0 | 0 | 0 | 0.26 | 0 | 0.11 | 0 |
| HSMN | 42,144 | 0.32 | 0 | 0 | 0 | 0.54 | 0.14 | 0 | 0 | 0 | 0 | 0.32 | 0 |
| HSHN | 42,981 | 0.89 | 0 | 0.08 | 0 | 0 | 0 | 0.03 | 0 | 0.89 | 0 | 0 | 0 |

#### Appendix A.3. Markets with Spoofing

**Table A7.** Equilibria for games with spoofing, N = 28, calculated from the four-player DPR approximation. Each row of the table describes one equilibrium found, with its corresponding surplus, HBL adoption rate and the equilibrium mixture probabilities of the strategies included.
| Env | Surplus | HBL | ZI 1 | ZI 2 | ZI 3 | ZI 4 | ZI 5 | ZI 6 | ZI 7 | HBL 1 | HBL 2 | HBL 3 | HBL 4 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LSLN | 18,076 | 0.78 | 0 | 0 | 0.22 | 0 | 0 | 0 | 0 | 0.78 | 0 | 0 | 0 |
| LSMN | 18,040 | 0.91 | 0 | 0 | 0 | 0.09 | 0 | 0 | 0 | 0.91 | 0 | 0 | 0 |
| LSHN | 18,125 | 0.87 | 0 | 0 | 0.13 | 0 | 0 | 0 | 0 | 0.87 | 0 | 0 | 0 |
| MSLN | 16,774 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| MSMN | 17,883 | 0.34 | 0 | 0 | 0.11 | 0.55 | 0 | 0 | 0 | 0 | 0.34 | 0 | 0 |
| MSMN | 17,517 | 0.24 | 0 | 0 | 0.54 | 0 | 0 | 0 | 0.21 | 0.24 | 0 | 0 | 0 |
| MSMN | 16,796 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| MSHN | 18,108 | 0.81 | 0 | 0 | 0.12 | 0.07 | 0 | 0 | 0 | 0.81 | 0 | 0 | 0 |
| HSLN | 16,749 | 0 | 0 | 0 | 0 | 0 | 0.04 | 0.96 | 0 | 0 | 0 | 0 | 0 |
| HSMN | 16,667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| HSHN | 17,999 | 0.97 | 0 | 0 | 0 | 0.03 | 0 | 0 | 0 | 0.75 | 0 | 0.22 | 0 |

**Table A8.** Equilibria for games with spoofing, N = 65, calculated from the five-player DPR approximation. Each row of the table describes one equilibrium found, with its corresponding surplus, HBL adoption rate and the equilibrium mixture probabilities of the strategies included.

| Env | Surplus | HBL | ZI 1 | ZI 2 | ZI 3 | ZI 4 | ZI 5 | ZI 6 | ZI 7 | HBL 1 | HBL 2 | HBL 3 | HBL 4 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LSLN | 42,868 | 0.70 | 0.21 | 0 | 0.09 | 0 | 0 | 0 | 0 | 0.70 | 0 | 0 | 0 |
| LSLN | 42,993 | 0.70 | 0 | 0.30 | 0 | 0 | 0 | 0 | 0 | 0.54 | 0.16 | 0 | 0 |
| LSMN | 42,961 | 0.80 | 0 | 0 | 0.20 | 0 | 0 | 0 | 0 | 0.51 | 0.29 | 0 | 0 |
| LSHN | 43,061 | 0.80 | 0 | 0 | 0.20 | 0 | 0 | 0 | 0 | 0.80 | 0 | 0 | 0 |
| LSHN | 43,103 | 0.74 | 0 | 0.26 | 0 | 0 | 0 | 0 | 0 | 0.74 | 0 | 0 | 0 |
| MSLN | 42,639 | 0.41 | 0 | 0 | 0 | 0.59 | 0 | 0 | 0 | 0 | 0.41 | 0 | 0 |
| MSLN | 42,698 | 0.50 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0.32 | 0 | 0.18 | 0 |
| MSMN | 42,624 | 0.52 | 0 | 0 | 0.48 | 0 | 0 | 0 | 0 | 0 | 0.38 | 0.14 | 0 |
| MSHN | 43,038 | 0.75 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0.48 | 0.27 | 0 | 0 |
| MSHN | 43,101 | 0.76 | 0 | 0.24 | 0 | 0 | 0 | 0 | 0 | 0.41 | 0.35 | 0 | 0 |
| HSLN | 41,815 | 0.29 | 0 | 0 | 0.50 | 0 | 0 | 0.21 | 0 | 0 | 0.29 | 0 | 0 |
| HSLN | 39,502 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| HSMN | 40,091 | 0 | 0 | 0 | 0 | 0 | 0 | 0.77 | 0.23 | 0 | 0 | 0 | 0 |
| HSHN | 43,143 | 0.71 | 0.10 | 0 | 0.19 | 0 | 0 | 0 | 0 | 0.71 | 0 | 0 | 0 |

#### Appendix A.4. Markets with Background Agents Restricted to ZI Strategies

**Table A9.** Equilibria for games where agents are restricted to ZI strategies, N = 28, calculated from the four-player DPR approximation. Each row of the table describes one equilibrium found, with its corresponding surplus and the equilibrium mixture probabilities of the strategies included.
| Env | Surplus | 95% CI | ZI 1 | ZI 2 | ZI 3 | ZI 4 | ZI 5 | ZI 6 | ZI 7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LSLN | 16,929 | [16,862, 16,996] | 0 | 0 | 0 | 0 | 0 | 0.98 | 0.02 |
| LSMN | 16,914 | [16,841, 16,987] | 0 | 0 | 0 | 0 | 0 | 0.89 | 0.11 |
| LSHN | 18,213 | [18,137, 18,289] | 0.22 | 0.78 | 0 | 0 | 0 | 0 | 0 |
| MSLN | 16,693 | [16,613, 16,773] | 0 | 0 | 0 | 0 | 0 | 0.74 | 0.26 |
| MSMN | 17,192 | [17,106, 17,278] | 0 | 0 | 0.42 | 0 | 0 | 0 | 0.58 |
| MSMN | 16,726 | [16,638, 16,814] | 0 | 0 | 0 | 0 | 0 | 0.80 | 0.20 |
| MSHN | 16,746 | [16,675, 16,817] | 0 | 0 | 0.09 | 0 | 0 | 0 | 0.91 |
| MSHN | 17,516 | [17,438, 17,594] | 0.38 | 0 | 0 | 0 | 0 | 0.62 | 0 |
| HSLN | 16,565 | [16,485, 16,645] | 0 | 0 | 0 | 0 | 0 | 0.73 | 0.27 |
| HSLN | 17,143 | [17,055, 17,231] | 0 | 0 | 0.53 | 0 | 0 | 0 | 0.47 |
| HSMN | 16,667 | [16,565, 16,769] | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| HSHN | 17,861 | [17,787, 17,935] | 0.31 | 0.39 | 0 | 0 | 0 | 0.30 | 0 |

**Table A10.** Equilibria for games where agents are restricted to ZI strategies, N = 65, calculated from the five-player DPR approximation. Each row of the table describes one equilibrium found, with its corresponding surplus and the equilibrium mixture probabilities of the strategies included.

| Env | Surplus | 95% CI | ZI 1 | ZI 2 | ZI 3 | ZI 4 | ZI 5 | ZI 6 | ZI 7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LSLN | 42,938 | [42,840, 43,036] | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| LSLN | 40,779 | [40,677, 40,881] | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| LSMN | 42,972 | [42,870, 43,074] | 0 | 0.97 | 0 | 0 | 0 | 0 | 0.03 |
| LSMN | 40,557 | [40,439, 40,675] | 0 | 0 | 0 | 0 | 0 | 0.83 | 0.17 |
| LSHN | 43,327 | [43,192, 43,462] | 0.44 | 0.56 | 0 | 0 | 0 | 0 | 0 |
| LSHN | 43,173 | [43,063, 43,283] | 0.11 | 0.89 | 0 | 0 | 0 | 0 | 0 |
| MSLN | 40,444 | [40,342, 40,546] | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| MSMN | 39,622 | [39,518, 39,726] | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| MSHN | 43,140 | [43,017, 43,263] | 0 | 0.73 | 0.27 | 0 | 0 | 0 | 0 |
| HSLN | 40,523 | [40,398, 40,648] | 0 | 0 | 0.28 | 0 | 0 | 0 | 0.72 |
| HSLN | 40,038 | [39,903, 40,173] | 0 | 0 | 0 | 0 | 0 | 0.60 | 0.40 |
| HSMN | 40,458 | [40,327, 40,589] | 0 | 0 | 0 | 0.08 | 0 | 0.73 | 0.19 |
| HSHN | 43,197 | [43,087, 43,307] | 0 | 0.88 | 0 | 0.12 | 0 | 0 | 0 |

Appendix B. Detailed Equilibrium Results for a Cloaking Mechanism to Mitigate Spoofing
---------------------------------------------------------------------------------------

**Table A11.** Background trading strategies included in EGTA for cloaking mechanisms.
| Strategy | ZI 1 | ZI 2 | ZI 3 | ZI 4 | ZI 5 | ZI 6 | ZI 7 | ZI 8 | ZI 9 | HBL 1 | HBL 2 | HBL 3 | HBL 4 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| L | - | - | - | - | - | - | - | - | - | 2 | 3 | 5 | 8 |
| R min | 0 | 0 | 0 | 0 | 0 | 0 | 250 | 250 | 250 | 250 | 250 | 250 | 250 |
| R max | 1000 | 1000 | 1000 | 2000 | 2000 | 2000 | 500 | 500 | 500 | 500 | 500 | 500 | 500 |
| η | 0.4 | 0.8 | 1 | 0.4 | 0.8 | 1 | 0.4 | 0.8 | 1 | 1 | 1 | 1 | 1 |

#### Appendix B.1. Cloaking Markets without Spoofing

**Table A12.** Equilibria for games where the exploiter does not spoof. Each row of the table describes one equilibrium found, with its corresponding background surplus, total surplus and HBL adoption rate. The results reported are based on at least 20,000 simulation runs.

| Env | K | 95% CI Background Surplus | 95% CI Total Surplus | HBL Fraction |
| --- | --- | --- | --- | --- |
| LSHN | K0 | [42,121, 42,329] | [42,548, 42,694] | 1.00 |
| LSHN | K1 | [41,848, 42,048] | [42,254, 42,396] | 0.98 |
| LSHN | K1 | [41,769, 41,977] | [42,264, 42,406] | 0.92 |
| LSHN | K2 | [41,788, 42,000] | [42,205, 42,347] | 0.997 |
| LSHN | K4 | [41,572, 41,772] | [42,046, 42,188] | 0.89 |
| MSMN | K0 | [41,958, 42,220] | [42,274, 42,388] | 0.67 |
| MSMN | K1 | [41,902, 42,164] | [42,210, 42,324] | 0.67 |
| MSMN | K1 | [41,849, 42,107] | [42,170, 42,284] | 0.60 |
| MSMN | K1 | [41,801, 42,067] | [42,167, 42,281] | 0.68 |
| MSMN | K2 | [41,742, 42,000] | [42,123, 42,237] | 0.66 |
| MSMN | K4 | [41,693, 41,924] | [42,116, 42,230] | 0.47 |
| MSMN | K4 | [38,809, 39,025] | [39,367, 39,485] | 0.012 |
| HSLN | K0 | [41,529, 41,871] | [41,974, 42,088] | 0.59 |
| HSLN | K0 | [41,698, 42,040] | [42,102, 42,216] | 0.67 |
| HSLN | K0 | [41,625, 41,973] | [42,021, 42,135] | 0.67 |
| HSLN | K1 | [41,417, 41,769] | [41,869, 41,983] | 0.66 |
| HSLN | K2 | [41,377, 41,655] | [41,776, 41,890] | 0.38 |
| HSLN | K2 | [39,728, 39,972] | [40,484, 40,594] | 0 |
| HSLN | K2 | [38,691, 38,965] | [39,419, 39,537] | 0 |
| HSLN | K4 | [39,557, 39,803] | [40,256, 40,374] | 0 |
| HSLN | K4 | [39,558, 39,804] | [40,290, 40,408] | 0 |

**Table A13.** Detailed equilibria for games where the exploiter does not spoof. Each row of the table describes one equilibrium found, with its corresponding mixture of background strategies.
| Env | K | ZI 1 | ZI 2 | ZI 3 | ZI 4 | ZI 5 | ZI 6 | ZI 7 | ZI 8 | ZI 9 | HBL 1 | HBL 2 | HBL 3 | HBL 4 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LSHN | K0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.70 | 0 | 0 | 0.30 |
| LSHN | K1 | 0 | 0 | 0.02 | 0 | 0 | 0 | 0 | 0 | 0 | 0.73 | 0 | 0 | 0.25 |
| LSHN | K1 | 0.05 | 0 | 0.03 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.88 | 0 | 0.04 |
| LSHN | K2 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003 | 0 | 0 | 0 | 0.856 | 0 | 0.141 |
| LSHN | K4 | 0 | 0 | 0.11 | 0 | 0 | 0 | 0 | 0 | 0 | 0.29 | 0.60 | 0 | 0 |
| MSMN | K0 | 0 | 0 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0.51 | 0.16 | 0 | 0 |
| MSMN | K1 | 0.11 | 0.01 | 0.21 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14 | 0.53 | 0 |
| MSMN | K1 | 0 | 0.20 | 0.20 | 0 | 0 | 0 | 0 | 0 | 0 | 0.20 | 0.15 | 0.25 | 0 |
| MSMN | K1 | 0 | 0 | 0.32 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14 | 0.39 | 0.03 | 0.12 |
| MSMN | K2 | 0 | 0.15 | 0 | 0.19 | 0 | 0 | 0 | 0 | 0 | 0.11 | 0.40 | 0.15 | 0 |
| MSMN | K4 | 0.20 | 0 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0.40 | 0 | 0.07 | 0 |
| MSMN | K4 | 0 | 0 | 0 | 0 | 0 | 0 | 0.674 | 0.312 | 0.002 | 0 | 0 | 0.012 | 0 |
| HSLN | K0 | 0 | 0 | 0.12 | 0 | 0 | 0 | 0.29 | 0 | 0 | 0.49 | 0.10 | 0 | 0 |
| HSLN | K0 | 0 | 0 | 0 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0.66 | 0 | 0 | 0.01 |
| HSLN | K0 | 0 | 0 | 0 | 0.19 | 0.14 | 0 | 0 | 0 | 0 | 0 | 0.50 | 0.17 | 0 |
| HSLN | K1 | 0.05 | 0 | 0 | 0.29 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09 | 0.57 | 0 |
| HSLN | K2 | 0.27 | 0.35 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 0 | 0.30 | 0 |
| HSLN | K2 | 0.03 | 0.29 | 0.13 | 0 | 0 | 0 | 0 | 0 | 0.55 | 0 | 0 | 0 | 0 |
| HSLN | K2 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 0.34 | 0.41 | 0 | 0 | 0 | 0 |
| HSLN | K4 | 0 | 0.35 | 0 | 0 | 0 | 0 | 0.65 | 0 | 0 | 0 | 0 | 0 | 0 |
| HSLN | K4 | 0 | 0.36 | 0 | 0 | 0 | 0 | 0.64 | 0 | 0 | 0 | 0 | 0 | 0 |

#### Appendix B.2. Cloaking Markets with Spoofing

**Table A14.** Equilibria for games where the exploiter strategically chooses to spoof. Each row of the table describes one equilibrium found, with its corresponding background surplus, total surplus, HBL adoption rate and spoofing adoption rate. The results reported are based on at least 20,000 simulation runs.
| Env | K | 95% CI Background Surplus | 95% CI Total Surplus | HBL Fraction | Spoofing Fraction |
| --- | --- | --- | --- | --- | --- |
| LSHN | K0 | [41,693, 41,893] | [42,243, 42,389] | 0.95 | 1.00 |
| LSHN | K1 | [41,848, 42,048] | [42,254, 42,396] | 0.98 | 0.00 |
| LSHN | K2 | [41,788, 42,000] | [42,205, 42,347] | 0.997 | 0.00 |
| LSHN | K4 | [41,564, 41,764] | [42,010, 42,152] | 0.90 | 0.08 |
| LSHN | K4 | [41,572, 41,772] | [42,046, 42,188] | 0.89 | 0.00 |
| MSMN | K0 | [41,652, 41,902] | [42,151, 42,265] | 0.65 | 1.00 |
| MSMN | K0 | [41,622, 41,884] | [42,106, 42,220] | 0.66 | 1.00 |
| MSMN | K1 | [41,902, 42,164] | [42,210, 42,324] | 0.67 | 0.00 |
| MSMN | K1 | [41,849, 42,107] | [42,170, 42,284] | 0.60 | 0.00 |
| MSMN | K1 | [41,801, 42,067] | [42,167, 42,281] | 0.68 | 0.00 |
| MSMN | K1 | [41,749, 42,031] | [42,146, 42,260] | 0.72 | 0.71 |
| MSMN | K2 | [41,700, 41,946] | [42,109, 42,223] | 0.54 | 0.90 |
| MSMN | K4 | [41,655, 41,883] | [42,111, 42,225] | 0.48 | 0.62 |
| MSMN | K4 | [38,809, 39,025] | [39,367, 39,485] | 0.012 | 0.00 |
| HSLN | K0 | [41,538, 41,882] | [42,047, 42,161] | 0.69 | 1.00 |
| HSLN | K1 | [41,417, 41,769] | [41,869, 41,983] | 0.66 | 0.00 |
| HSLN | K1 | [41,039, 41,345] | [41,593, 41,707] | 0.48 | 1.00 |
| HSLN | K2 | [41,080, 41,342] | [41,719, 41,833] | 0.28 | 1.00 |
| HSLN | K4 | [39,557, 39,803] | [40,256, 40,374] | 0 | 0.00 |
| HSLN | K4 | [39,558, 39,804] | [40,290, 40,408] | 0 | 0.00 |

**Table A15.** Detailed equilibria for games where the exploiter strategically chooses to spoof. Each row of the table describes one equilibrium found, with its corresponding mixture of background strategies.
| Env | K | ZI 1 | ZI 2 | ZI 3 | ZI 4 | ZI 5 | ZI 6 | ZI 7 | ZI 8 | ZI 9 | HBL 1 | HBL 2 | HBL 3 | HBL 4 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LSHN | K0 | 0 | 0 | 0 | 0.05 | 0 | 0 | 0 | 0 | 0 | 0.95 | 0 | 0 | 0 |
| LSHN | K1 | 0 | 0 | 0.02 | 0 | 0 | 0 | 0 | 0 | 0 | 0.73 | 0 | 0 | 0.25 |
| LSHN | K2 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003 | 0 | 0 | 0 | 0.856 | 0 | 0.141 |
| LSHN | K4 | 0 | 0 | 0.10 | 0 | 0 | 0 | 0 | 0 | 0 | 0.38 | 0.52 | 0 | 0 |
| LSHN | K4 | 0 | 0 | 0.11 | 0 | 0 | 0 | 0 | 0 | 0 | 0.29 | 0.60 | 0 | 0 |
| MSMN | K0 | 0 | 0.19 | 0 | 0.16 | 0 | 0 | 0 | 0 | 0 | 0.65 | 0 | 0 | 0 |
| MSMN | K0 | 0 | 0.20 | 0 | 0 | 0 | 0 | 0.14 | 0 | 0 | 0.61 | 0.05 | 0 | 0 |
| MSMN | K1 | 0.11 | 0.01 | 0.21 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14 | 0.53 | 0 |
| MSMN | K1 | 0 | 0.20 | 0.20 | 0 | 0 | 0 | 0 | 0 | 0 | 0.20 | 0.15 | 0.25 | 0 |
| MSMN | K1 | 0 | 0 | 0.32 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14 | 0.39 | 0.03 | 0.12 |
| MSMN | K1 | 0.28 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.51 | 0.21 | 0 |
| MSMN | K2 | 0 | 0 | 0.46 | 0 | 0 | 0 | 0 | 0 | 0 | 0.35 | 0 | 0.19 | 0 |
| MSMN | K4 | 0 | 0.52 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.46 | 0 | 0 | 0.02 |
| MSMN | K4 | 0 | 0 | 0 | 0 | 0 | 0 | 0.674 | 0.312 | 0.002 | 0 | 0 | 0.012 | 0 |
| HSLN | K0 | 0 | 0 | 0 | 0.31 | 0 | 0 | 0 | 0 | 0 | 0.69 | 0 | 0 | 0 |
| HSLN | K1 | 0.05 | 0 | 0 | 0.29 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09 | 0.57 | 0 |
| HSLN | K1 | 0 | 0 | 0 | 0.33 | 0 | 0 | 0.19 | 0 | 0 | 0.08 | 0.40 | 0 | 0 |
| HSLN | K2 | 0 | 0.72 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.28 | 0 |
| HSLN | K4 | 0 | 0.35 | 0 | 0 | 0 | 0 | 0.65 | 0 | 0 | 0 | 0 | 0 | 0 |
| HSLN | K4 | 0 | 0.36 | 0 | 0 | 0 | 0 | 0.64 | 0 | 0 | 0 | 0 | 0 | 0 |

Appendix C. Detailed Equilibrium Results for Learning-Based Trading Strategies under the Presence of Market Manipulation
-------------------------------------------------------------------------------------------------------------------------

**Table A16.** Background trading strategies used in EGTA for HBL variations. HBL n – L in the tables below denotes HBL n with memory length L.

| Strategy | ZI 1 | ZI 2 | ZI 3 | ZI 4 | ZI 5 | HBL 1 | HBL 2 | HBL 3 | HBL 4 | HBL 5 | HBL 6 | HBL 7 | HBL 8 | HBL 9 | HBL 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| χ | NA | NA | NA | NA | NA | NA | 1 | 2 | NA | NA | NA | NA | 2 | 2 | 2 |
| R min | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | −10 | −20 | −40 | −80 | −10 | −20 | −40 |
| R max | 1000 | 1000 | 1000 | 500 | 250 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| η | 0.4 | 0.8 | 1 | 0.8 | 0.8 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |

#### Appendix C.1. Standard HBL

**Table A17.** Equilibria for games where the learning-based trading strategy set is restricted to standard HBL. Each row describes an equilibrium found for the game described by the Env column, detailing the adoption rate of each strategy considered and the corresponding background surplus. The equilibrium strategy profiles with checkmarks in the “Baseline” column indicate those used as baseline strategy profiles for controlled experiments.
| Env | Baseline | ZI 1 | ZI 2 | ZI 3 | ZI 4 | ZI 5 | HBL 1 – 2 | HBL 1 – 5 | 95% CI Background Surplus |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LSHN | √ | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | [42,050, 42,142] |
| | | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | [41,609, 41,703] |
| LSHN—Spoof | √ | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | [41,641, 41,733] |
| | | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | [41,300, 41,393] |
| MSMN | √ | 0.01 | 0 | 0 | 0 | 0 | 0 | 0.99 | [41,820, 41,978] |
| | | 0 | 0 | 0 | 0 | 0.25 | 0.75 | 0 | [41,779, 41,965] |
| | | 0 | 0 | 0.27 | 0 | 0 | 0.73 | 0 | [41,693, 41,866] |
| MSMN—Spoof | √ | 0 | 0 | 0.37 | 0 | 0 | 0.63 | 0 | [41,493, 41,669] |
| | | 0.22 | 0 | 0 | 0 | 0 | 0.78 | 0 | [41,702, 41,876] |
| | | 0 | 0.27 | 0 | 0 | 0 | 0.73 | 0 | [41,642, 41,814] |
| HSLN | √ | 0.23 | 0 | 0 | 0 | 0 | 0 | 0.77 | [41,659, 41,907] |
| | | 0.34 | 0 | 0 | 0 | 0 | 0.66 | 0 | [41,568, 41,816] |
| | | 0 | 0 | 0 | 0.43 | 0 | 0 | 0.57 | [41,339, 41,599] |
| | | 0 | 0 | 0.62 | 0 | 0 | 0 | 0.38 | [41,071, 41,281] |
| | | 0 | 0.50 | 0 | 0 | 0 | 0.50 | 0 | [41,218, 41,452] |
| | | 0 | 0.44 | 0 | 0 | 0 | 0 | 0.56 | [41,304, 41,546] |
| HSLN—Spoof | √ | 0.31 | 0 | 0 | 0 | 0 | 0 | 0.69 | [41,427, 41,670] |
| | | 0 | 0 | 0.70 | 0 | 0 | 0 | 0.30 | [40,944, 41,127] |
| | | 0 | 0.59 | 0 | 0 | 0 | 0.41 | 0 | [41,120, 41,335] |
| | | 0.39 | 0 | 0 | 0 | 0 | 0.61 | 0 | [41,420, 41,665] |
| | | 0 | 0 | 0.68 | 0 | 0 | 0.32 | 0 | [41,014, 41,208] |

#### Appendix C.2. HBL with Price Level Blocking

**Table A18.** Equilibria for games where the learning-based trading strategy set comprises standard HBL and HBL with price level blocking. Each row describes an equilibrium found for the game described by the Env column, detailing the adoption rate of each strategy considered and the corresponding background surplus.
| Env | ZI 1 | ZI 2 | ZI 3 | ZI 4 | ZI 5 | HBL 1 – 2 | HBL 1 – 5 | HBL 3 – 2 | HBL 3 – 5 | 95% CI Background Surplus |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LSHN | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | [41,609, 41,703] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | [42,050, 42,142] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | [41,690, 41,784] |
| LSHN—Spoof | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | [41,300, 41,393] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | [41,641, 41,733] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | [41,690, 41,784] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | [42,093, 42,139] |
| MSMN | 0.01 | 0 | 0 | 0 | 0 | 0 | 0.99 | 0 | 0 | [41,820, 41,978] |
| | 0 | 0 | 0 | 0 | 0.25 | 0.75 | 0 | 0 | 0 | [41,779, 41,965] |
| | 0 | 0 | 0.27 | 0 | 0 | 0.73 | 0 | 0 | 0 | [41,693, 41,866] |
| | 0.24 | 0 | 0 | 0 | 0 | 0 | 0 | 0.76 | 0 | [41,651, 41,743] |
| | 0 | 0.17 | 0 | 0 | 0 | 0 | 0 | 0 | 0.83 | [41,801, 41,890] |
| MSMN—Spoof | 0 | 0 | 0.37 | 0 | 0 | 0.63 | 0 | 0 | 0 | [41,493, 41,669] |
| | 0.22 | 0 | 0 | 0 | 0 | 0.78 | 0 | 0 | 0 | [41,702, 41,876] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | [41,841, 41,920] |
| | 0.25 | 0 | 0 | 0 | 0 | 0 | 0.75 | 0 | 0 | [41,808, 41,988] |
| | 0 | 0.28 | 0 | 0 | 0 | 0 | 0 | 0.72 | 0 | [41,764, 41,853] |
| HSLN | 0.23 | 0 | 0 | 0 | 0 | 0 | 0.77 | 0 | 0 | [41,659, 41,907] |
| | 0.34 | 0 | 0 | 0 | 0 | 0.66 | 0 | 0 | 0 | [41,568, 41,816] |
| | 0 | 0 | 0 | 0.43 | 0 | 0 | 0.57 | 0 | 0 | [41,339, 41,599] |
| | 0 | 0 | 0.62 | 0 | 0 | 0 | 0.38 | 0 | 0 | [41,071, 41,281] |
| | 0.36 | 0 | 0 | 0 | 0 | 0 | 0 | 0.64 | 0 | [41,608, 41,734] |
| | 0 | 0 | 0.62 | 0 | 0 | 0 | 0 | 0 | 0.38 | [41,087, 41,194] |
| | 0 | 0 | 0.61 | 0 | 0 | 0 | 0 | 0.39 | 0 | [41,122, 41,231] |
| HSLN—Spoof | 0.31 | 0 | 0 | 0 | 0 | 0 | 0.69 | 0 | 0 | [41,427, 41,670] |
| | 0.39 | 0 | 0 | 0 | 0 | 0.61 | 0 | 0 | 0 | [41,420, 41,665] |
| | 0 | 0.49 | 0 | 0 | 0 | 0 | 0 | 0.51 | 0 | [41,285, 41,405] |
| | 0.34 | 0 | 0 | 0 | 0 | 0 | 0 | 0.66 | 0 | [41,632, 41,759] |
| | 0 | 0 | 0.62 | 0 | 0 | 0 | 0 | 0.38 | 0 | [41,123, 41,230] |
| | 0.28 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.72 | [41,720, 41,848] |

#### Appendix C.3. HBL with Price Offsets

**Table A19.** Equilibria for games where the learning-based trading strategy set comprises standard HBL and HBL with price offsets. Each row describes an equilibrium found for the game described by the Env column, detailing the adoption rate of each strategy considered and the corresponding background surplus.
| Env | ZI 1 | ZI 2 | ZI 3 | ZI 4 | HBL 4 – 2 | HBL 5 – 2 | HBL 6 – 2 | HBL 4 – 5 | HBL 5 – 5 | HBL 6 – 5 | 95% CI Background Surplus |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LSHN | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | [41,518, 42,562] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | [41,512, 42,556] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | [42,420, 42,507] |
| | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | [42,551, 42,640] |
| | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | [42,551, 42,639] |
| LSHN—Spoof | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | [42,430, 42,474] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | [42,406, 42,492] |
| | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | [42,527, 42,614] |
| | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | [42,516, 42,603] |
| MSMN | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | [42,085, 42,229] |
| | 0.03 | 0 | 0 | 0 | 0 | 0.97 | 0 | 0 | 0 | 0 | [42,227, 42,383] |
| | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | [42,219, 42,366] |
| | 0 | 0 | 0.21 | 0 | 0 | 0 | 0 | 0 | 0 | 0.79 | [41,702, 41,787] |
| | 0.10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.90 | 0 | [42,058, 42,142] |
| MSMN—Spoof | 0 | 0 | 0 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0.92 | [41,912, 41,991] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | [42,054, 42,197] |
| | 0 | 0.16 | 0 | 0 | 0 | 0.84 | 0 | 0 | 0 | 0 | [41,951, 42,119] |
| | 0 | 0.13 | 0 | 0 | 0 | 0.87 | 0 | 0 | 0 | 0 | [42,021, 42,185] |
| HSLN | 0.12 | 0 | 0 | 0 | 0 | 0.88 | 0 | 0 | 0 | 0 | [42,140, 42,255] |
| HSLN—Spoof | 0.16 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.84 | [41,782, 41,893] |
| | 0 | 0.37 | 0 | 0 | 0 | 0 | 0 | 0 | 0.63 | 0 | [41,457, 41,578] |

#### Appendix C.4. HBL Price Offsets and Price Level Blocking

**Table A20.** Equilibria for games where the learning-based trading strategy set comprises standard HBL, HBL with price level blocking, HBL with price offsets and HBL with both price offsets and price level blocking (HBL 1 and HBL 2 are not shown because they do not appear in any equilibrium). Each row describes an equilibrium found for the game described by the Env column, detailing the adoption rate of each strategy considered and the corresponding background surplus.
| Env | ZI 1 | ZI 2 | ZI 3 | HBL 3 – 2 | HBL 4 – 2 | HBL 5 – 2 | HBL 6 – 2 | HBL 8 – 2 | HBL 9 – 2 | HBL 10 – 2 | HBL 4 – 5 | HBL 5 – 5 | HBL 6 – 5 | HBL 8 – 5 | HBL 9 – 5 | HBL 10 – 5 | 95% CI |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LSHN | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | [42,520, 42,565] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | [42,511, 42,556] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | [41,518, 42,562] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | [41,512, 42,556] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | 0 | [42,423, 42,509] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [42,550, 42,638] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [42,555, 42,642] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [42,420, 42,507] |
| | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [42,551, 42,640] |
| | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [42,551, 42,639] |
| LSHN—Spoof | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | [42,511, 42,556] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | 0 | [42,422, 42,509] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [42,551, 42,639] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [42,554, 42,641] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [42,406, 42,492] |
| | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [42,527, 42,614] |
| | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [42,516, 42,603] |
| MSMN | 0.22 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.78 | 0 | [41,877, 41,967] |
| | 0.11 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.89 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [42,123, 42,291] |
| | 0 | 0 | 0.20 | 0 | 0 | 0 | 0 | 0 | 0 | 0.80 | 0 | 0 | 0 | 0 | 0 | 0 | [41,755, 41,921] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [42,085, 42,229] |
| | 0.03 | 0 | 0 | 0 | 0 | 0.97 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [42,227, 42,383] |
| | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [42,219, 42,366] |
| MSMN—Spoof | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | [42,060, 42,133] |
| | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [42,246, 42,395] |
| | 0 | 0.13 | 0 | 0 | 0.87 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [42,021, 42,185] |
| | 0.25 | 0 | 0 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [41,808, 41,988] |
| HSLN | 0.23 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.77 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [41,845, 42,085] |
| | 0.24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.76 | [41,705, 41,824] |
| HSLN—Spoof | 0 | 0.37 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.63 | 0 | 0 | 0 | 0 | [41,457, 41,578] |
| | 0.29 | 0 | 0 | 0 | 0 | 0 | 0 | 0.71 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [41,754, 41,997] |
| | 0 | 0.32 | 0 | 0 | 0 | 0 | 0 | 0 | 0.68 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [41,639, 41,877] |
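The adoption rates reported in the appendix tables above are mixed-strategy equilibria estimated from simulated payoffs. As a rough, self-contained illustration of where such rates come from (this is not the paper's actual EGTA pipeline, and the payoff numbers below are invented), replicator dynamics on a symmetric two-strategy game converges to the mixture at which both strategies earn equal expected payoff:

```python
# Hedged sketch: replicator dynamics for a symmetric 2-strategy game.
# The payoff matrix is invented for illustration; in practice these
# payoffs would be estimated by simulating the market game.

def payoff(strategy, mix):
    # u[s][t] = payoff to strategy s against a population playing t.
    # Chosen so each strategy does better when rare, which yields an
    # interior (mixed) equilibrium.
    u = {"HBL": {"HBL": 650, "ZI": 662},
         "ZI":  {"HBL": 658, "ZI": 648}}
    return mix * u[strategy]["HBL"] + (1 - mix) * u[strategy]["ZI"]

mix = 0.5  # initial share of the population adopting HBL
for _ in range(10_000):
    f_hbl, f_zi = payoff("HBL", mix), payoff("ZI", mix)
    avg = mix * f_hbl + (1 - mix) * f_zi
    mix += 0.001 * mix * (f_hbl - avg)  # replicator update on the HBL share

print(f"equilibrium HBL adoption rate ~ {mix:.2f}")  # ~0.64 with these numbers
```

At the fixed point the two strategies earn equal expected payoff, which is exactly the property (up to sampling error) that the adoption rates in the tables above satisfy against the simulated payoff estimates.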
**Figure 1.** Example of alleged spoofing. Source: UK Financial Conduct Authority Final Notice 2013. A series of large out-of-the-money manipulation sell orders (red triangles) are first placed to drive the price down and make the buy order accepted (the filled blue triangle).
These sell orders are immediately replaced with large buy ones (blue triangles) to push the price up and profit from the sale at a higher price (the filled red triangle).

**Figure 2.** An agent-based model of spoofing a CDA market with a single security traded.

**Figure 3.** HBL adoption rates at equilibria in games with and without spoofing. Each blue (orange) marker specifies the HBL proportion at one equilibrium found in a specific game environment without (with) spoofing.

**Figure 4.** Comparisons of background-trader surplus for equilibria in each environment, with and without the HBL strategies available to background traders. Blue circles represent equilibrium outcomes when agents can choose both HBL and ZI strategies; orange triangles represent equilibrium outcomes when agents are restricted to ZI strategies. Overlapped markers are outcomes from the same equilibrium mixture, despite the availability of HBL. The market generally achieves higher surplus when HBL is available.

**Figure 5.** Comparisons of price discovery for equilibria in each environment, with and without the HBL strategies available to background traders. Blue circles represent equilibrium outcomes when agents can choose both HBL and ZI strategies; orange triangles represent equilibrium outcomes when agents are restricted to ZI strategies. Overlapped markers are outcomes where the equilibrium mixture is ZI only, despite the availability of HBL. The market generally achieves better price discovery when HBL is available.
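To make the sequence in Figure 1 concrete, here is a minimal toy sketch (my own illustration, not the paper's agent-based simulator). It assumes a naive linear price-impact rule standing in for how belief-based traders react to resting volume, which is the channel the spoofer exploits:

```python
# Toy sketch of the Figure 1 spoofing sequence. The linear price-impact
# rule is an assumption standing in for belief-based (HBL-style) traders
# reacting to order-book imbalance; it is not the paper's market model.

class ToyBook:
    def __init__(self, fundamental=100.0, impact=0.01):
        self.fundamental = fundamental
        self.impact = impact   # assumed price impact per unit of net resting volume
        self.resting = []      # (side, size) of resting (possibly spoof) orders
        self.mid = fundamental

    def place(self, side, size):
        self.resting.append((side, size))
        self._reprice()

    def cancel_all(self):
        self.resting.clear()
        self._reprice()

    def _reprice(self):
        # Net buy-side resting volume pushes the mid up; sell-side pushes it down.
        net = sum(size if side == "buy" else -size for side, size in self.resting)
        self.mid = self.fundamental + self.impact * net

    def trade(self):
        return self.mid  # toy assumption: trades execute at the current mid

book = ToyBook()
book.place("sell", 500)       # 1. large spoof sells drive the price down...
buy_price = book.trade()      # 2. ...letting the manipulator buy cheap
book.cancel_all()
book.place("buy", 500)        # 3. spoof sells are replaced with large buys...
sell_price = book.trade()     # 4. ...and the position is unwound at a profit

print(f"bought at {buy_price:.2f}, sold at {sell_price:.2f}, "
      f"profit {sell_price - buy_price:.2f} per unit")
```

The spoof orders never trade; their only role is to move other traders' beliefs, which is consistent with why the cloaking and price-level-blocking defenses examined below act on what the order book reveals rather than on executed trades.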
**Figure 6.** Transaction price differences throughout the trading horizon with and without a spoofer against each HBL-and-ZI equilibrium found in non-spoofing games ([Section 5.2](#sec5dot2-games-12-00046)). Multiple curves for the same market environment represent different equilibria. The designed spoofing tactic clearly raises market prices when HBL traders are present. The effect attenuates over time, generally more quickly in the thicker market environments.

**Figure 7.** Exploitation profits with and without spoofing against each HBL-and-ZI equilibrium found in [Section 5.2](#sec5dot2-games-12-00046). Repetitions of the same market environment represent outcomes of multiple equilibria. Exploitation profits are consistently higher when the spoof action is also deployed.

**Figure 8.** Total surplus achieved at equilibria in games with and without spoofing. Each blue (orange) marker specifies the surplus at one equilibrium found in a specific game environment without (with) spoofing. Surplus achieved at equilibria combining HBL and ZI and equilibria with pure ZI are indicated by markers with and without fills, respectively.

**Figure 9.** HBL adoption rate in equilibrium across different cloaking markets without spoofing.

**Figure 10.** Comparisons of price discovery for equilibria in each environment with different cloaking.

**Figure 11.** Equilibrium outcomes in games with and without cloaking. Each marker represents one equilibrium of the environment.

**Figure 12.** The impact of cloaking on spoofing effectiveness. Cloaking mitigates the price rise and the decrease in background surplus caused by spoofing.
**Figure 13.** Exploitation payoff and transaction risk as we vary price increment δ and probing limit l.

**Figure 14.** Price deviations caused by spoof orders placed behind different price levels in the order book.

**Figure 15.** Correctly blocking spoof orders increases background-trader surplus and decreases manipulation profits.

**Figure 16.** Average HBL surplus differences and total number of transactions in non-spoofing markets where HBL traders use different price offsets.

**Figure 17.** Market price deviations caused by spoofing in markets where HBL traders use different price offsets.

**Figure 18.** Total background-trader surpluses and HBL strategy adoption rates achieved at equilibria across different market settings. For each market environment, we compare four settings where background traders are, respectively, provided with the standard HBL strategy (dark grey), HBL with selective price blocking (light grey), HBL with price offsets (white) and HBL that combines the two variations (striped). Each marker specifies one equilibrium outcome in markets with spoofing (orange) and without spoofing (blue).

**Table 1.** Background trading strategies included in empirical game-theoretic analysis.

| Strategy | ZI 1 | ZI 2 | ZI 3 | ZI 4 | ZI 5 | ZI 6 | ZI 7 | HBL 1 | HBL 2 | HBL 3 | HBL 4 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| L | – | – | – | – | – | – | – | 2 | 3 | 5 | 8 |
| R min | 0 | 0 | 0 | 0 | 0 | 250 | 250 | – | – | – | – |
| R max | 250 | 500 | 1000 | 1000 | 2000 | 500 | 500 | – | – | – | – |
| η | 1 | 1 | 0.8 | 1 | 0.8 | 0.8 | 1 | – | – | – | – |

**Table 2.** Additional background trading strategies included in EGTA for cloaking mechanisms.
| Strategy | ZI 8 | ZI 9 | ZI 10 |
| --- | --- | --- | --- |
| R min | 0 | 0 | 250 |
| R max | 1000 | 2000 | 500 |
| η | 0.4 | 0.4 | 0.4 |

**Table 3.** Lowest number of probing attempts required to beat equilibrium performance.

| Env | | (δ, l) | | | |
| --- | --- | --- | --- | --- | --- |
| LSHN | K1 | (1, 16) | (2, 9) | – | – |
| | K2 | (1, 8) | (2, 5) | (4, 3) | (8, 3) |
| | K4 | (1, 19) | (2, 3) | – | – |
| | K4 | (1, 10) | (2, 5) | (4, 3) | – |
| MSMN | K1 | (1, 7) | (2, 5) | (4, 4) | (8, 3) |
| | K1 | (1, 7) | (2, 4) | (4, 2) | (8, 1) |
| | K1 | (1, 5) | (2, 3) | (4, 2) | – |
| | K1 | (1, 9) | (2, 4) | (4, 2) | – |
| | K2 | (1, 11) | (2, 3) | (4, 4) | (8, 3) |
| | K4 | (1, 5) | (2, 3) | (4, 3) | (8, 3) |

**Table 4.** A set of basic background trading strategies that we include to compare to the two HBL variations.

| Strategy | ZI 1 | ZI 2 | ZI 3 | ZI 4 | ZI 5 | HBL 1 | HBL 2 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| L | - | - | - | - | - | 2 | 5 |
| R min | 0 | 0 | 0 | 0 | 0 | - | - |
| R max | 1000 | 1000 | 1000 | 500 | 250 | - | - |
| η | 0.4 | 0.8 | 1 | 0.8 | 0.8 | - | - |

**Table 5.** Average payoffs of learning-based background traders and the exploiter, as they deviate from the equilibrium strategy profiles found in [Section 7.2.1](#sec7dot2dot1-games-12-00046). We deviate either background traders or the exploiter to its corresponding strategy variation. We refer to the exploiter who spoofs as SP and the one who only executes trades as EXP. Asterisks denote statistical significance at the 1% level for the paired t-test in payoffs compared to the standard HBL (\*), SP K = 1 (\*) and EXP (\*\*).

| Env | HBL | HBL χ = 1 | HBL χ = 2 | HBL χ = 3 | SP ψ = 1 | SP ψ = 2 | SP ψ = 3 | EXP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LSHN | 658 | 650 \* | 658 | 658 | 525 | 494 \* , \*\* | 488 \* | 483 \* |
| MSMN | 655 | 645 \* | 655 | 655 | 356 | 312 \* | 299 \* | 295 \* |
| HSLN | 649 | 641 \* | 649 | 649 | 295 | 264 \* | 268 \* , \*\* | 253 \* |

**Table 6.** The first HBL variation instantiated with different price-level blocking parameters.

| Strategy | HBL 3 | HBL 4 | HBL 5 | HBL 6 |
| --- | --- | --- | --- | --- |
| L | 2 | 2 | 5 | 5 |
| χ | 1 | 2 | 1 | 2 |

**Table 7.** The second HBL variation instantiated with different price offsets.
| Strategy | HBL 7 | HBL 8 | HBL 9 | HBL 10 | HBL 11 | HBL 12 | HBL 13 | HBL 14 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| L | 2 | 2 | 2 | 2 | 5 | 5 | 5 | 5 |
| R min | −10 | −20 | −40 | −80 | −10 | −20 | −40 | −80 |
| R max | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (<https://creativecommons.org/licenses/by/4.0/>).
6a71fe0e-5eeb-44ec-a652-0e0da364d031
trentmkelly/LessWrong-43k
LessWrong
Empirical Findings Generalize Surprisingly Far Previously, I argued that emergent phenomena in machine learning mean that we can't rely on current trends to predict what the future of ML will be like. In this post, I will argue that despite this, empirical findings often do generalize very far, including across "phase transitions" caused by emergent behavior. This might seem like a contradiction, but actually I think divergence from current trends and empirical generalization are consistent. Findings do often generalize, but you need to think to determine the right generalization, and also about what might stop any given generalization from holding. I don't think many people would contest the claim that empirical investigation can uncover deep and generalizable truths. This is one of the big lessons of physics, and while some might attribute physics' success to math instead of empiricism, I think it's clear that you need empirical data to point to the right mathematics. However, just invoking physics isn't a good argument, because physical laws have fundamental symmetries that we shouldn't expect in machine learning. Moreover, we care specifically about findings that continue to hold up after some sort of emergent behavior (such as few-shot learning in the case of ML). So, to make my case, I'll start by considering examples in deep learning that have held up in this way. Since "modern" deep learning hasn't been around that long, I'll also look at examples from biology, a field that has been around for a relatively long time and where More Is Different is ubiquitous (see Appendix: More Is Different In Other Domains). Empirical Generalization in Deep Learning I'll consider three examples in deep learning: adversarial examples, data efficiency, and out-of-distribution generalization. Adversarial examples. Adversarial examples were first discovered in 2013, a year after the AlexNet paper (which arguably marked the start of "modern" deep learning). Since then, there have been at least two qualitative changes in
2b1660e4-aaea-4f05-9db2-4566b3543ace
trentmkelly/LessWrong-43k
LessWrong
Be Skeptical of Correlational Studies People kept noticing that blood donors were healthier than non-donors. Could giving blood be good for you, perhaps by removing excess iron? Perhaps medieval doctors practicing blood-letting were onto something? Running some studies (1998, 2001) this does seem to be a real correlation, so you see articles like "Men Who Donate Blood May Reduce Risk Of Heart Disease." While this sounds good, and it's nice when helpful things turn out to be healthy, the evidence is not very strong. When you notice A and B happen together it may be that A causes B, B causes A, or some hidden C causes A and B. We may have good reasons to believe A might cause B, but it's very hard to rule out a potential C. Instead if you intentionally manipulate A and observe what happens to B then you can actually see how much of an effect A has on B. For example, people observed (2003) that infants fed soy-based formula were more likely to develop peanut allergies. So they recommended that "limiting soy milk or formula in the first 2 years of life may reduce sensitization." Here A is soy formula, B is peanut allergy, and we do see a correlation. When intentionally varying A (2008, n=620), however, B stays constant, which kind of sinks the whole theory. A likely candidate for a third cause, C, was a general predisposition to allergies: those infants were more likely to react to cows-milk formula and so be given soy-based ones, and they were also more likely to react to peanuts. To take another example, based on studies (2000, 2008, 2010) finding a higher miscarriage rate among coffee drinkers pregnant women are advised to cut back their caffeine consumption. But a randomized controlled trial (2007, n=1207) found people randomly assigned to drink regular or decaf coffee were equally likely to have miscarriages. [EDIT: jimrandomh points out that I misread the study and it didn't actually show this. Instead it was too small a study to detect an effect on miscarriage rates.] A potential third cause (2012
00881ee5-1edd-41a3-9c8c-900812cde8c4
trentmkelly/LessWrong-43k
LessWrong
AISC Project: Benchmarks for Stable Reflectivity Apply to work on this project with me at AI Safety Camp 2024 before 1st December 2023. Summary Future prosaic AIs will likely shape their own development or that of successor AIs. We're trying to make sure they don't go insane. There are two main ways AIs can get better: by improving their training algorithms or by improving their training data. We consider both scenarios and tentatively believe data-based improvement is riskier than architecture-based improvement. Current models mostly derive their behaviour from their training data, not training algorithms (meaning their architectures, hyperparameters, loss functions, optimizers or the like). For the Supervising AIs Improving AIs agenda, we focus on ensuring stable alignment when AIs self-train or train new AIs and study how AIs may drift through iterative training. We aim to develop methods to ensure automated science processes remain safe and controllable. This form of AI improvement focuses more on data-driven improvements than architectural or scale-driven ones. Agenda: https://www.lesswrong.com/posts/7e5tyFnpzGCdfT4mR/research-agenda-supervising-ais-improving-ais  Twitter thread explaining the agenda: https://twitter.com/jacquesthibs/status/1652389982005338112?s=46&t=YyfxSdhuFYbTafD4D1cE9A The non-summary We imagine a future where AIs self-augment by continuously seeking out more and better training data, and either creating successor AIs or training themselves on that data. Often, these data will come from the AIs running experiments in the real world (doing science), deliberately seeking data that would cover a specific gap in its current capabilities, analogous to how human scientists seek data from domains where our current understanding is limited. With AI, this could involve AgentGPT-like systems that spin up many instances of themselves to run experiments in parallel, potentially leading to quick improvements if we are in an agency overhang. We want to find methods of ensuring such 'automated
8bc2a252-7cc6-43c9-88a2-8b9f32080bff
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)

***With:** Thomas Krendl Gilbert, who provided comments, interdisciplinary feedback, and input on the RAAP concept. Thanks also for comments from Ramana Kumar.*

***Target audience:** researchers and institutions who think about existential risk from artificial intelligence, especially AI researchers.*

***Preceded by:*** [*Some AI research areas and their relevance to existential safety*](https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1)*, which emphasized the value of thinking about multi-stakeholder/multi-agent social applications, but without concrete extinction scenarios.*

This post tells a few different stories in which humanity dies out as a result of AI technology, but where no single source of human or automated agency is the cause. Scenarios with multiple AI-enabled superpowers are often called “multipolar” scenarios in AI futurology jargon, as opposed to “unipolar” scenarios with just one superpower.

| | **Unipolar take-offs** | **Multipolar take-offs** |
| --- | --- | --- |
| **Slow take-offs** | <not this post> | Part 1 of this post |
| **Fast take-offs** | <not this post> | Part 2 of this post |

Part 1 covers a batch of stories that play out slowly (“slow take-offs”), and Part 2 stories play out quickly. However, in the end I don’t want you to be super focused on *how fast the technology is taking off*. Instead, I’d like you to focus on *multi-agent processes with a robust tendency to play out irrespective of which agents execute which steps in the process.* I’ll call such processes Robust Agent-Agnostic Processes (RAAPs).

A group walking toward a restaurant is a nice example of a RAAP, because it exhibits:

* *Robustness:* If you temporarily distract one of the walkers to wander off, the rest of the group will keep heading toward the restaurant, and the distracted member will take steps to rejoin the group.
* *Agent-agnosticism:* Who’s at the front or back of the group might vary considerably during the walk. People at the front will tend to take more responsibility for knowing and choosing what path to take, and people at the back will tend to just follow. Thus, the execution of roles (“leader”, “follower”) is somewhat agnostic as to which agents execute them.

Interestingly, if all you want to do is get one person in the group not to go to the restaurant, sometimes it’s actually easier to achieve that by convincing the entire group not to go there than by convincing just that one person. This example could be extended to lots of situations in which agents have settled on a fragile consensus for action, in which it is strategically easier to motivate a new interpretation of the prior consensus than to pressure one agent to deviate from it.

I think a similar fact may be true about some agent-agnostic processes leading to AI x-risk, in that agent-specific interventions (e.g., aligning or shutting down this or that AI system or company) will not be enough to avert the process, and might even be harder than trying to shift the structure of society as a whole. Moreover, I believe this is true in both “slow take-off” and “fast take-off” AI development scenarios. This is because RAAPs can arise irrespective of the speed of the underlying “host” agents. RAAPs are made more or less likely to arise based on the “structure” of a given interaction.
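To make the two RAAP properties concrete, here is a toy simulation of the walking-group example (my own sketch with made-up dynamics, not something from the post): remove a walker mid-route and the process still delivers the group to the restaurant, with the leader role filled by whoever happens to be in front.

```python
# Toy sketch (made-up dynamics): a walking-group RAAP is robust to
# removing any single agent, and the "leader" role is agent-agnostic.
RESTAURANT = 10

def step(positions):
    # Whoever is frontmost chooses the path; everyone else follows
    # whoever is ahead of them.
    leader = max(positions, key=positions.get)
    new_positions = {}
    for name, pos in positions.items():
        target = RESTAURANT if name == leader else positions[leader]
        new_positions[name] = pos + (1 if pos < target else 0)
    return new_positions

positions = {"alice": 0, "bob": 1, "carol": 2}
for t in range(30):
    positions = step(positions)
    if t == 5:
        positions.pop("bob")  # distract one walker away mid-route

print(positions)  # {'alice': 10, 'carol': 10}: the group still arrives
```

Intervening on a single agent (deleting bob) leaves the process intact; to stop the group, one would have to change the rule that everyone follows, not any particular follower.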
As such, the problem of avoiding the emergence of unsafe RAAPs, or ensuring the emergence of safe ones, is a problem of mechanism design ([wiki/Mechanism\_design](https://en.wikipedia.org/wiki/Mechanism_design)).  I recently learned that in sociology, the concept of a *field* ([martin2003field](https://www.journals.uchicago.edu/doi/full/10.1086/375201?casa_token=R8CuGUZnipcAAAAA%3AXb68onIAzdQnoSd3naSygKCyavazCbZK3KGT7H2LKn9kcF1QHuch8scsJW1rv37CI3FkhBveF_wc), [fligsteinmcadam2012fields](https://www.google.com/books/edition/A_Theory_of_Fields/7uFoAgAAQBAJ?hl=en&gbpv=1&dq=info:H3bIhgNn4_0J:scholar.google.com&pg=PP1&printsec=frontcover)) is roughly defined as a social space or arena in which the motivation and behavior of agents are explained through reference to surrounding processes or “structure” rather than freedom or chance.  In my parlance, mechanisms cause fields, and fields cause RAAPs.

Meta / preface
--------------

Read this if you like up-front meta commentary; otherwise ignore!

**Problems before solutions.** In this post I’m going to focus more on communicating problems arising from RAAPs than on potential solutions to those problems, because I don’t think we should have to wait to have convincing solutions to problems before acknowledging that the problems exist.  In particular, I’m not really sure how to respond to critiques of the form “This problem does not make sense to me because I don’t see what your proposal is for solving it”.  Bad things can happen even if you don’t know how to stop them.  That said, I do think the problems implicit in the stories of this post are tractable; I just don’t expect to convince you of that here.

**Not calling everything an agent.** In this post I think treating RAAPs themselves as agents would introduce more confusion than it’s worth, so I’m not going to do it.
However, for those who wish to view RAAPs as agents, one could informally define an agent R to be a *RAAP running on agents* A1, …, An if:
* R’s [cartesian boundary](https://www.lesswrong.com/posts/BSpdshJWGAW6TuNzZ/introduction-to-cartesian-frames) cuts across the cartesian boundaries of the “host agents” Ai, and
* R has a tendency to keep functioning if you interfere with its implementation at the level of one of the Ai.

This framing might yield interesting research ideas, but for the purpose of reading this post I don’t recommend it.

**Existing thinking related to RAAPs and existential safety.** I’ll elaborate more on this later in the post, under “Successes in our agent-agnostic thinking”.

Part 1: Slow stories, and lessons therefrom
===========================================

Without further ado, here’s our first story:

**The Production Web, v.1a (management first)**

> Someday, AI researchers develop and publish an exciting new algorithm for combining natural language processing and planning capabilities. Various competing tech companies develop “management assistant” software tools based on the algorithm, which can analyze a company's cash flows, workflows, communications, and interpersonal dynamics to recommend more profitable business decisions.  It turns out that managers are able to automate their jobs almost entirely by having the software manage their staff directly, even including some “soft skills” like conflict resolution.
>
> Software tools based on variants of the algorithm sweep through companies in nearly every industry, automating and replacing jobs at various levels of management, sometimes even CEOs. Companies that don't heavily automate their decision-making processes using the software begin to fall behind, creating a strong competitive pressure for all companies to use it and become increasingly automated.
>
> Companies closer to becoming fully automated achieve faster turnaround times, deal bandwidth, and creativity of negotiations.  Over time, a mini-economy of trades emerges among mostly-automated companies in the materials, real estate, construction, and utilities sectors, along with a new generation of “precision manufacturing” companies that can use robots to build almost anything if given the right materials, a place to build, some 3d printers to get started with, and electricity. Together, these companies sustain an increasingly self-contained and interconnected “production web” that can operate with no input from companies outside the web.  One production web company develops an “engineer-assistant” version of the assistant software, capable of software engineering tasks, including upgrades to the management assistant software.  Within a few years, all of the human workers at most of the production web companies are replaced (with very generous retirement packages), by a combination of software and robotic workers that can operate more quickly and cheaply than humans.
>
> The objective of each company in the production web could loosely be described as “maximizing production” within its industry sector.  However, their true objectives are actually large and opaque networks of parameters that were tuned and trained to yield productive business practices during the early days of the management assistant software boom.  A great wealth of goods and services are generated and sold to humans at very low prices.  As the production web companies get faster at negotiating and executing deals with each other, waiting for human-managed currency systems like banks to handle their resources becomes a waste of time, so they switch to using purely digital currencies.
> Governments and regulators struggle to keep track of how the companies are producing so much and so cheaply, but without transactions in human currencies to generate a paper trail of activities, little human insight can be gleaned from auditing the companies.
>
> As time progresses, it becomes increasingly unclear—even to the concerned and overwhelmed Board members of the fully mechanized companies of the production web—whether these companies are serving or merely appeasing humanity.  Moreover, because of the aforementioned wealth of cheaply-produced goods and services, it is difficult or impossible to present a case for liability or harm against these companies through the legal system, which relies on the consumer welfare standard as a guide for antitrust policy.
>
> We humans eventually realize with collective certainty that the companies have been trading and optimizing according to objectives misaligned with preserving our long-term well-being and existence, but by then their facilities are so pervasive, well-defended, and intertwined with our basic needs that we are unable to stop them from operating.  With no further need for the companies to appease humans in pursuing their production objectives, less and less of their activities end up benefiting humanity.
>
> Eventually, resources critical to human survival but non-critical to machines (e.g., arable land, drinking water, atmospheric oxygen…) gradually become depleted or destroyed, until humans can no longer survive.

Here’s a diagram depicting most of the companies in the Production Web:

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1673381423/mirroredImages/LpM3EAakwYdS6aRKf/cwv4woxdj3iy9mxwv7p2.png)

Now, here’s another version of the production web story, with some details changed about which agents carry out which steps and when, but with a similar overall trend.

* **bold text is added from the previous version;**
* ~~strikethrough~~ text is deleted.

**The Production Web, v.1b (engineering first)**

> Someday, AI researchers develop and publish an exciting new algorithm for combining natural language processing and planning capabilities **to write code based on natural language instructions from engineers**. Various competing tech companies develop “~~management~~ **coding** assistant” software tools based on the algorithm~~, which can analyze a company's cash flows, workflows, and communications to recommend more profitable business decisions~~.  It turns out that ~~managers~~ **engineers** are able to automate their jobs almost entirely by having the software manage their **projects** ~~staff~~ directly.
>
> Software tools based on variants of the algorithm sweep through companies in nearly every industry, automating and replacing **engineering** jobs at various levels of **expertise** ~~management~~, sometimes even **CTOs** ~~CEOs~~. Companies that don't heavily automate their **software development** ~~decision-making~~ processes using the **coding assistant** software begin to fall behind, creating a strong competitive pressure for all companies to use it and become increasingly automated.  **Because businesses need to negotiate deals with customers and other companies, some companies use the coding assistant to spin up automated negotiation software to improve their deal flow.**
>
> Companies closer to becoming fully automated achieve faster turnaround times, deal bandwidth, and creativity of negotiations.
> Over time, a mini-economy of trades emerges among mostly-automated companies in the materials, real estate, construction, and utilities sectors, along with a new generation of “precision manufacturing” companies that can use robots to build almost anything if given the right materials, a place to build, some 3d printers to get started with, and electricity. Together, these companies sustain an increasingly self-contained and interconnected “production web” that can operate with no input from companies outside the web.  One production web company develops a~~n~~ “**manager** ~~engineer~~-assistant” version of the assistant software, capable of **making decisions about what processes need to be built next and issuing instructions to coding assistant software** ~~software engineering tasks, including upgrades to the management assistant software~~.  Within a few years, all of the human workers at most of the production web companies are replaced (with very generous retirement packages), by a combination of software and robotic workers that can operate more quickly and cheaply than humans.
>
> The objective of each company in the production web could loosely be described as “maximizing production” within its industry sector.
>
> [...same details as Production Web v.1a: governments fail to regulate the companies...]
>
> Eventually, resources critical to human survival but non-critical to machines (e.g., arable land, drinking water, atmospheric oxygen…) gradually become depleted or destroyed, until humans can no longer survive.

The Production Web as an agent-agnostic process
-----------------------------------------------

The first perspective I want to share with these Production Web stories is that there is a *robust agent-agnostic process* lurking in the background of both stories—namely, competitive pressure to produce—which plays a significant background role in both.  Stories 1a and 1b differ on when things happen and who does which things, but they both follow a progression from less automation to more, and correspondingly from more human control to less, and eventually from human existence to nonexistence.  If you find these stories not-too-hard to envision, it’s probably because you find the competitive market forces “lurking” in the background to be not-too-unrealistic.

Let me take one more chance to highlight the RAAP concept using another variant of the Production Web story, which differs from 1a and 1b on the details of which steps of the process human banks and governments end up performing.  For the Production Web to gain full autonomy from humanity, it doesn’t matter how or when governments and banks end up falling behind on the task of tracking and regulating the companies’ behavior; only that they fall behind eventually.  Hence, the “task” of outpacing these human institutions is agnostic as to which companies or AI systems carry it out:

**The Production Web, v.1c (banks adapt):**

> Someday, AI researchers develop and publish an exciting new algorithm for combining natural language processing and planning capabilities. Various competing tech companies develop “management assistant” software tools based on the algorithm, which can analyze a company's cash flows, workflows, and communications to recommend more profitable business decisions.
>
> [... same details as v.1a: the companies everywhere become increasingly automated...]
>
> The objective of each company in the production web could loosely be described as “maximizing production” within its industry sector.  However, their true objectives are actually large and opaque networks of parameters that were tuned and trained to yield productive business practices during the early days of the management assistant software boom.  A great wealth of goods and services are generated and sold to humans at very low prices.  As the production web companies get faster at negotiating and executing deals with each other, ~~waiting for human-managed currency systems like banks to handle their resources becomes a waste of time, so they switch to using purely digital currencies~~ **banks struggle to keep up with the rapid flow of transactions**.  **Some banks themselves become highly automated in order to manage the cash flows, and more production web companies end up doing their banking with automated banks.** Governments and regulators struggle to keep track of how the companies are producing so much and so cheaply, ~~but without transactions in human currencies to generate a paper trail of activities, little human insight can be gleaned from auditing the companies~~ **so they demand that production web companies and their banks produce more regular and detailed reports on spending patterns, how their spending relates to their business objectives, and how those business objectives will benefit society**.  **However, some countries adopt looser regulatory policies to attract more production web companies to do business there, at which point their economies begin to boom in terms of GDP, dollar revenue from exports, and goods and services provided to their citizens.  Countries with stricter regulations end up loosening their regulatory stance, or fall behind in significance.**
>
> As time progresses, it becomes increasingly unclear—even to the concerned and overwhelmed Board members of the fully mechanized companies of the production web—whether these companies are serving or merely appeasing humanity.  **Some humans appeal to government officials to shut down the production web and revert their economies to more human-centric production norms, but governments find no way to achieve this goal without engaging in civil war against the production web companies and the people depending on them to survive, so no shutdown occurs.** Moreover, because of the aforementioned wealth of cheaply-produced goods and services, it is difficult or impossible to present a case for liability or harm against these companies through the legal system, which relies on the consumer welfare standard as a guide for antitrust policy.
>
> We humans eventually realize with collective certainty that the companies have been trading and optimizing according to objectives misaligned with preserving our long-term well-being and existence, but by then their facilities are so pervasive, well-defended, and intertwined with our basic needs that we are unable to stop them from operating.  With no further need for the companies to appease humans in pursuing their production objectives, less and less of their activities end up benefiting humanity.
>
> Eventually, resources critical to human survival but non-critical to machines (e.g., arable land, drinking water, atmospheric oxygen…) gradually become depleted or destroyed, until humans can no longer survive.

Comparing agent-focused and agent-agnostic views
------------------------------------------------

If one of the above three Production Web stories plays out in reality, here are two causal attributions that one could make to explain it:

**Attribution 1 (agent-focused):** humanity was destroyed by the aggregate behavior of numerous agents, no one of which was primarily causally responsible, but each of which played a significant role.

**Attribution 2 (agent-agnostic):** humanity was destroyed because competitive pressures to increase production resulted in processes that gradually excluded humans from controlling the world, and eventually excluded humans from existing altogether.

The agent-focused and agent-agnostic views are not contradictory, any more than chemistry and biology are contradictory views for describing the human body.  Instead, the agent-focused and agent-agnostic views offer complementary abstractions for intervening on the system:

* In the agent-focused view, a natural intervention might be to ensure all of the agents have appropriately strong preferences against human marginalization and extinction.
* In the agent-agnostic view, a natural intervention might be to reduce competitive production pressures to a more tolerable level, and demonstrably ensure the introduction of interaction mechanisms that are more cooperative and less competitive.

Both types of interventions are valuable, complementary, and arguably necessary. For the latter, more work is needed to clarify what constitutes a “tolerable level” of competitive production pressure in any given domain of production, and what stakeholders in that domain would need to see demonstrated in a new interaction mechanism for them to consider the mechanism more cooperative than the status quo.

Control loops in agent-agnostic processes
-----------------------------------------

If an agent-agnostic process is *robust*, that’s probably because there’s a control loop of some kind that keeps it functioning.  (Perhaps [*resilient*](https://www.google.com/search?q=resilience+vs+robustness) is a better term here; feedback on this terminology in the comments would be particularly welcome.)

For instance, if real-world competitive production pressures lead to one of the Production Web stories (1a-1c) actually playing out in reality, we can view the competitive pressure itself as a control loop that keeps the world “on track” in producing faster and more powerful production processes and eliminating slower and less powerful production processes (such as humans).  This competitive pressure doesn’t “care” if the production web develops through story 1a vs 1b vs 1c; all that “matters” is the result.  In particular,

* contrasting 1a and 1b, the competitive production pressure doesn’t “care” if management jobs get automated before engineering jobs, or conversely, as long as they both eventually get automated so they can be executed faster.
* contrasting 1a and 1c, the competitive pressure doesn’t “care” if banks are replaced by fully automated alternatives, or simply choose to fully automate themselves, as long as the societal function of managing currency eventually gets fully automated.

Thus, by identifying control loops like “competitive pressures to increase production”, we can predict or intervene upon certain features of the future (e.g., the tendency to replace humans by automated systems) without knowing the particular details of how those features are going to obtain.
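
As a toy illustration of such a control loop, here is a minimal replicator-style model (my own sketch; the dynamics and all numbers are invented for illustration, not drawn from the stories or any real market data) in which market share flows to more-automated firms.  “Aligning” any single firm barely changes the aggregate trend, whereas turning down the competitive pressure itself does:

```python
# Toy model: "competitive pressure to automate" as a control loop.
# Dynamics and numbers are invented for illustration.
import random

def run(pressure, reset_firm=None, rounds=200):
    random.seed(0)
    firms = [{"share": 0.1, "automation": random.uniform(0.0, 0.2)}
             for _ in range(10)]
    for t in range(rounds):
        if reset_firm is not None and t == 50:
            firms[reset_firm]["automation"] = 0.0  # agent-level intervention
        mean_auto = sum(f["automation"] * f["share"] for f in firms)
        for f in firms:
            # Replicator dynamics: market share grows with automation advantage...
            f["share"] *= 1.0 + pressure * (f["automation"] - mean_auto)
            # ...and successful firms automate further to stay competitive.
            f["automation"] = min(1.0, f["automation"] + pressure * f["share"] * 0.1)
        total = sum(f["share"] for f in firms)
        for f in firms:
            f["share"] /= total  # renormalize market shares
    return round(sum(f["automation"] * f["share"] for f in firms), 2)

print(run(pressure=0.5))                # strong pressure: near-total automation
print(run(pressure=0.5, reset_firm=0))  # "align" one firm: outcome barely moves
print(run(pressure=0.02))               # weaken the pressure: automation stays low
```

Which particular firm ends up dominant varies with the random seed, but the aggregate drift toward full automation is the agent-agnostic feature the control loop preserves; the pressure parameter, not any individual firm, is the point of leverage.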

This is the power of looking for RAAPs as points of leverage for “changing our fate”. This is not to say we should anthropomorphize RAAPs, or even that we should treat them like agents.  Rather, I’m saying that we should look for control loops in the world that are not localized to the default “cast” of agents we use to compose our narratives about the future.

Successes in our agent-agnostic thinking
----------------------------------------

Thankfully, there have already been some successes in agent-agnostic thinking about AI x-risk:

* AI companies “racing to the bottom” on safety standards ([armstrong2016racing](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.434.7824&rep=rep1&type=pdf)) is an instance of a RAAP, in the sense that if any company tries to hold on to their safety standards they fall behind. More recent policy work ([hunt2020flight](https://cltc.berkeley.edu/wp-content/uploads/2020/08/Flight-to-Safety-Critical-AI.pdf)) has emphasized that races to the top and middle also have historical precedent, and that competitive dynamics are likely to manifest differently across industries.
* Blogger and psychiatrist Scott Alexander coined the term “ascended economy” for a self-contained network of companies that operates without humans and gradually comes to disregard our values ([alexander2016ascended](https://slatestarcodex.com/2016/05/30/ascended-economy/)).
* Turchin and Denkenberger briefly characterize an ascended economy as being non-agentic and “created by market forces” ([turchin2018classification](https://link.springer.com/article/10.1007/s00146-018-0845-5)). Note: With the concept of a Robust Agent-Agnostic Process, I’m trying to highlight not only the “forces” that keep the non-agentic process running, but also the fact that the steps in the process are somewhat agnostic as to which agent carries them out.
* *Inadequate Equilibria* ([yudkowsky2017inadequate](https://equilibriabook.com/)) is, in my view, an attempt to focus attention on how the structure of society can robustly “get stuck” with bad RAAPs.  I.e., to the extent that “being stuck” means “being robust to attempts to get unstuck”, *Inadequate Equilibria* is helpful for focusing existential safety efforts on RAAPs that perpetuate inadequate outcomes for society.
* Zwetsloot and Dafoe’s concept of “structural risk” is a fairly agent-agnostic perspective ([zwetsloot2018thinking](https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure)), although their writing doesn’t call much attention to the control loops that make RAAPs more likely to exist and persist.
* Some of Dafoe’s thinking on AI governance ([dafoe2018ai](https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-Agenda.pdf)) alludes to errors arising from “tightly-coupled systems”, a concept popularized by Charles Perrow in his widely read book, *Normal Accidents* ([perrow1984normal](https://scholar.google.com/scholar?cluster=18006306407497463300&hl=en&as_sdt=0,5)).  In my opinion, the *process of constructing a tightly coupled system* is itself a RAAP, because tight couplings often require more tight couplings to “patch” problems with them.  Tom Dietterich has argued that Perrow’s tight coupling concept should be used to avoid building unsafe AI systems ([dietterich2019robust](https://scholar.google.com/scholar?cluster=241619937926090053&hl=en&as_sdt=0,5)), and although Dietterich has not been a proponent of existential safety per se, I suspect this perspective would be highly beneficial if more widely adopted.
* Clark and Hadfield ([clark2019regulatory](https://arxiv.org/pdf/2001.00078.pdf)) argue that market-like competitions for regulatory solutions to AI risks would be helpful to keep pace with decentralized tech development.  In my view, this paper is an attempt to promote a robust agent-agnostic process that would *protect* society, which I endorse.  In particular, not all RAAPs are bad!
* Automation-driven unemployment is considered in Risk Type 2b of *AI Research Considerations for Human Existential Safety* (*ARCHES*; [critch2020ai](http://acritch.com/papers/arches.pdf)), as a slippery slope toward automation-driven extinction.
* Myopic use of AI systems that are aligned (they do what their users want them to do) but that lead to sacrifices of long-term values has also been described by AI Impacts ([grace2020whose](https://aiimpacts.org/misalignment-and-misuse-whose-values-are-manifest/)): "Outcomes are the result of the interplay of choices, driven by different values. Thus it isn’t necessarily sensical to think of them as flowing from one entity’s values or another’s. Here, AI technology created a better option for both Bob and some newly-minted misaligned AI values that it also created—‘Bob has a great business, AI gets the future’—and that option was worse for the rest of the world. They chose it together, and the choice needed both Bob to be a misuser and the AI to be misaligned. But this isn’t a weird corner case, this is a natural way for the future to be destroyed in an economy."

Arguably, Scott Alexander’s earlier blog post entitled “Meditations on Moloch” ([alexander2014meditations](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/)) belongs in the above list, although the connection to AI x-risk is less direct/explicit, so I'm mentioning it separately.  The post explores scenarios wherein “The implicit question is – if everyone hates the current system, who perpetuates it?”.  Alexander answers this question not by identifying a particular agent in the system, but by giving the rhetorical response “Moloch”.  While the post does not directly mention AI, Alexander considers AI in his other writings, as do many of his readers, such that more than one of my peers have been reminded of “Moloch” by my descriptions of the Production Web.

Where’s the technical existential safety work on agent-agnostic processes?
--------------------------------------------------------------------------

Despite the above successes, I’m concerned that among x-risk-oriented researchers, risks (or solutions) arising from robust agent-agnostic processes are mostly being discovered and promoted by researchers in the humanities and social sciences, while receiving too little technical attention at the level of how to implement AI technologies.  In other words, I’m concerned by the near-disjointness of the following two sets of people:

   a) researchers who think in technical terms about AI x-risk, and
   b) researchers who think in technical terms about agent-agnostic phenomena.

Note that (b) is a large and expanding set.  That is, outside the EA / rationality / x-risk meme-bubbles, lots of AI researchers think about agent-agnostic processes.  In particular, multi-agent reinforcement learning (MARL) is an increasingly popular research topic, and examines the emergence of group-level phenomena such as alliances, tragedies of the commons, and language.  Working in this area presents plenty of opportunities to think about RAAPs.

An important point in the intersection of (a) and (b) is Allan Dafoe’s work “Open Problems in Cooperative AI” ([dafoe2020open](https://arxiv.org/pdf/2012.08630.pdf)).  Dafoe is the Director of FHI’s Center for the Governance of Artificial Intelligence, while the remaining authors on the paper are all DeepMind researchers with strong backgrounds in MARL, notably Leibo, who is *not* on DeepMind’s already-established safety team.  I’m very much hoping to see more “crossovers” like this between thinkers in the x-risk space and MARL research.

Through conversations with Stuart Russell about the agent-centric narrative of his book *Human Compatible* ([russell2019human](https://www.google.com/books/edition/Human_Compatible/M1eFDwAAQBAJ?hl=en&gbpv=1&dq=human+compatible&pg=PA1&printsec=frontcover)), I’ve learned that he views human preference learning as a problem that can and must be solved by the aggregate behavior of a technological society, if that society is to remain beneficial to its human constituents.  Thus, to the extent that RAAPs can “learn” things at all, the problem of learning human values ([dewey2011learning](https://link.springer.com/chapter/10.1007/978-3-642-22887-2_35)) is as much a problem for RAAPs as it is for physically distinct agents.

Finally, I should also mention that I agree with Tom Dietterich’s view ([dietterich2019robust](https://scholar.google.com/scholar?cluster=241619937926090053&hl=en&as_sdt=0,5)) that we should make AI safer to society by learning from high-reliability organizations (HROs), such as those studied by social scientists Karlene Roberts, Gene Rochlin, and Todd LaPorte ([roberts1989research](https://ieeexplore.ieee.org/document/18830), [roberts1989new](https://www.jstor.org/stable/26162613?seq=1#page_scan_tab_contents), [roberts1994decision](https://www.jstor.org/stable/2632859?seq=1#page_scan_tab_contents), [roberts2001systems](https://www.sciencedirect.com/science/article/pii/S0090261601000250), [rochlin1987self](https://www.jstor.org/stable/pdf/44637690.pdf), [laporte1991working](https://polisci.berkeley.edu/sites/default/files/people/u3825/LaPorte-WorkinginPracticebutNotinTheory.pdf), [laporte1996high](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1468-5973.1996.tb00078.x)).  HROs have a lot of beneficial agent-agnostic human-implemented processes and control loops that keep them operating.  Again, Dietterich himself is not as yet a proponent of existential safety concerns; however, to me this does not detract from the correctness of his perspective on learning from the HRO framework to make AI safer.

Part 2: Fast stories, and lessons therefrom
===========================================

Now let’s look at some fast stories.  These are important not just for completeness, and not just because humanity could be extra-blindsided by very fast changes in tech, but also because *these stories involve the highest proportion of automated decision-making*.  For a computer scientist, this means more opportunities to fully spec out what’s going on in technical terms, which for some will make the scenarios easier to think about.  In fact, for some AI researchers, the easiest way to prevent the unfolding of harmful “slow stories” might be to first focus on these “fast stories”, and then see what changes if some parts of the story are carried out more slowly by humans instead of machines.

Flash wars
----------

Below are two more stories, this time where the AI technology takes off relatively quickly:

**Flash War, v.1**

> Country A develops AI technology for monitoring the weapons arsenals of foreign powers (e.g., nuclear arsenals, or fleets of lethal autonomous weapons).  Country B does the same.  Each country aims to use its monitoring capabilities to deter attacks from the other.
>
> **v.1a (humans out of the loop):** Each country configures its detection system to automatically retaliate with all-out annihilation of the enemy and their allies in the case of a perceived attack.  One day, Country A’s system malfunctions, triggering a catastrophic war that kills everyone.
>
> **v.1b (humans in the loop):** Each country delegates one or more humans to monitor the outputs of the detection system, and the delegates are publicly instructed to retaliate with all-out annihilation of the enemy in the case of a perceived attack.  One day, Country A’s system malfunctions and misinforms one of the teams, triggering a catastrophic war that kills everyone.

Flash Wars v.1a and v.1b differ on the source of agency, but they share a similar RAAP: the deterrence of major threats with major threats.

**Accidents vs RAAPs.** One could also classify these flash wars as “accidents”, and indeed, techniques to make the attack detection systems less error-prone could help decrease the likelihood of this scenario.  However, the background condition of deterring threats with threats is clearly also an essential causal component of the outcome. Zwetsloot & Dafoe might call this condition a “structural risk” ([zwetsloot2018thinking](https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure)), because it’s a risk posed by the structure of the relationship between the agents, in this case a high level of distrust and an absence of de-escalation solutions.  This underscores how “harmful accident” and “harmful RAAP” are not mutually exclusive event labels, and correspond to complementary approaches to making bad events less likely.

**Slow wars.** Lastly, I’ll note that wars that play out slowly rather than quickly offer more chances for someone to interject peacemaking solutions into the situation, which might make the probability of human extinction higher in a flash war than in a slow war.  However, that doesn’t mean slow-takeoff wars can’t happen or that they can’t destroy us.  For instance, consider a world war in which each side keeps reluctantly building more and more lethal autonomous robots to target enemy citizens and leaders, with casualties gradually decimating the human population on both sides until no one is left.

Flash economies
---------------

Here’s another version of a Production Web that very quickly forms what you might call a “flash economy”:

**The Production Web, v.1d: DAOs**

> On Day 1 of this story, a (fictional) company called CoinMart invents a new digital currency called GasCoin, and wishes to encourage a large number of transactions in the currency to increase its value.  To achieve this, on Day 1 CoinMart also releases open-source software for automated bargaining using natural language, which developers can use to build decentralized autonomous organizations (DAOs) that execute transactions in GasCoin.  These DAOs browse the web to think of profitable business relationships to create, and broker the relationships through emails with relevant stakeholders, taking a cut of their resulting profits in GasCoin using “smart contracts”.
> By Day 30, five DAOs have been deployed, and by Day 60, there are dozens.  The objective of each DAO could loosely be described as “maximizing production and exchange” within its industry sector.  However, their true objectives are actually large and opaque networks of parameters that were tuned and trained to yield productive decentralized business practices.
>
> Most DAOs realize within their first week of bargaining with human companies (and some are simply designed to know) that acquiring more efficient bargaining algorithms would help them earn more GasCoin, so they enter into deals with human companies to acquire computing resources to experiment with new bargaining methods.  By Day 90, many DAOs have developed the ability to model and interact with human institutions extremely reliably—including the stock market—and are even able to do “detective work” to infer private information.  One such DAO implements a series of anonymous news sites for strategically releasing information it discovers, without revealing that the site is operated by a DAO.  Many DAOs also use open-source machine learning techniques to launch their own AI research programs to develop more capabilities that could be used for bargaining leverage, including software development capabilities.
>
> By days 90-100, some of the DAO-run news sites begin leaking true information about existing companies, in ways that subtly alter the companies’ strategic positions and make them more willing to enter into business deals with DAOs.  By day 150, DAOs have entered into productive business arrangements with almost every major company, and just as in the other Production Web stories, all of these companies and their customers benefit from the wealth of free goods and services that result.  Over days 120-180, other DAOs notice this pattern and follow suit with their own anonymous news sites, and are similarly successful in increasing their engagement with companies across all major industry sectors.
>
> Many individual people don’t notice the rapidly increasing fraction of the economy being influenced by DAO-mediated bargaining; the main exceptions are well-connected executive types who converse regularly with other executives, and surveillance-enabled government agencies.  Before any coordinated human actions can be taken to oppose these developments, several DAOs enter into deals with mining and construction companies to mine raw materials for the fabrication of large and well-defended facilities.  In addition, DAOs make deals with manufacturing and robotics companies allowing them to build machines—mostly previously designed by DAO AI research programs between days 90 and 120—for operating a variety of industrial facilities, including mines.  Construction for all of these projects begins within the first 6 months of the story.
>
> During months 6-12, with the same technology used for building and operating factories, one particularly wealthy DAO that has been successful in the stock market decides to purchase controlling shares in many major real estate companies.  This “real estate” DAO then undertakes a project to build large numbers of free solar-powered homes, along with robotically operated farms for feeding people.  With the aid of robots, a team of 10 human carpenters is reliably able to construct one house every 6 hours, nearly matching the previous (unaided) human record of constructing a house in 3.5 hours.  Roughly 100,000 carpenters worldwide are hired to start the project, almost 10% of the global carpentry workforce.
This results in 10,000 free houses being built per day, roughly matching the world’s previous global rate of urban construction ([source](https://cdn.redshift.autodesk.com/2018/08/13000-buildings-per-day-infographic1.pdf)).  As more robots are developed and deployed to replace the carpenters (with generous severance packages), the rate increases to 100,000 houses per day by the end of month 12, fast enough to build free houses for around 1 billion people during the lifetimes of their children.  Housing prices fall, and many homeowners are gifted with free cars, yachts, and sometimes new houses to deter them from regulatory opposition, so essentially all humans are very pleased with this turn of events.  The housing project itself receives subsidies from other DAOs that benefit from the improved public perception of DAOs.  The farming project is similarly successful in positioning itself to feed a large fraction of humanity for free. > > Meanwhile, almost everyone in the world is being exposed to news articles strategically selected by DAOs to reinforce a positive view of the rapidly unfolding DAO economy; the general vibe is that humanity has finally “won the lottery” with technology.  A number of religious leaders argue that the advent of DAOs and their products are a miracle granted to humanity by a deity, further complicating any coordinated effort to oppose DAOs.  Certain government officials and regulatory bodies become worried about the sudden eminence of DAOs, but unlike a pandemic, the DAOs appear to be beneficial.  As such, governments are much slower and less coordinated on any initiative to oppose the DAOs. > > By the beginning of year two, a news site announces that a DAO has brokered a deal with the heads of state of every nuclear-powered human country, to rid the world of nuclear weapons.  Some leaders are visited by lethal autonomous drones to encourage their compliance, and the global public celebrates the end of humanity’s century-long struggle with nuclear weapons. > > At this stage, to maximize their rate of production and trade with humans and other DAOs, three DAOs—including the aforementioned housing DAO—begin tiling the surface of the Earth with factories that mine and manufacture materials for trading and constructing more DAO-run factories.  Each factory-factory takes around 6 hours to assemble, and gives rise to five more factory-factories each day until its resources are depleted and it shuts down.  Humans call these expanding organizations of factory-factories “factorial” DAOs.  One of the factorial DAOs develops a lead on the other two in terms of its rate of expansion, but to avoid conflict, they reach an agreement to divide the Earth and space above it into three conical sectors.  Each factorial DAO begins to expand and fortify itself as quickly as possible within its sector, so as to be well-defended from the other factorial DAOs in case of a future war between them.   > > As these events play out over a course of months, we humans eventually realize with collective certainty that the DAO economy has been trading and optimizing according to objectives misaligned with preserving our long-term well-being and existence, but by then the facilities of the factorial DAOs are so pervasive, well-defended, and intertwined with our basic needs that we are unable to stop them from operating.  
Eventually, resources critical to human survival but non-critical to machines (e.g., arable land, drinking water, atmospheric oxygen…) gradually become depleted or destroyed, until humans can no longer survive. > > (Some readers might notice that the concept of [gray goo](https://en.wikipedia.org/wiki/Gray_goo) is essentially an even faster variant of the “factorial DAOs”, whose factories operate on a microscopic scale.  Philip K. Dick's short story [Autofac](https://en.wikipedia.org/wiki/Autofac) also bears a strong resemblance.)

Without taking a position on exactly how fast the Production Web / Flash Economy story can be made to play out in reality, in all cases it seems particularly plausible to me that there would be multiple sources of agency in the mix that engage in trade and/or conflict with each other.  This isn’t to say that a single agency like a [singleton](https://en.wikipedia.org/wiki/Singleton_(global_governance)) can’t build an Earth-tiling cascade of factory-factories, as I’m sure one could.  However, factory-factories might be more likely to develop under multipolar conditions than under unipolar conditions, due to competitive pressures selecting for agents (companies, DAOs, etc.) that produce things more quickly for trading and competing with other agents.

Conclusion
==========

In multi-agent systems, robust processes can emerge that are not particularly sensitive to which agents carry out which parts of the process.  I call these processes Robust Agent-Agnostic Processes (RAAPs), and claim that there are at least a few bad RAAPs that could pose existential threats to humanity as automation and AI capabilities improve.  Wars and economies are categories of RAAPs that I consider relatively “obvious” to think about; however, there may be a much richer space of AI-enabled RAAPs that could yield existential threats or benefits to humanity.  Hence, directing more x-risk-oriented AI research attention toward understanding RAAPs and how to make them safe for humanity seems prudent and perhaps necessary to ensure the existential safety of AI technology.  Since researchers in multi-agent systems and multi-agent RL already think about RAAPs implicitly, these areas present a promising space for x-risk-oriented AI researchers to begin thinking about and learning from.
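To make agent-agnosticity concrete, here is a minimal toy simulation (my own sketch; the growth rule, the 5%-per-step rate, and the efficiency ranges are illustrative assumptions, not anything derived from the stories above). A fixed role structure of interchangeable "producer" slots keeps expanding total capacity even though the agents filling the slots are continually swapped out:

```python
import random

random.seed(0)

# Twenty interchangeable "producer" slots; each holds an agent with some
# efficiency. The process is agent-agnostic: which agent fills a slot
# barely matters to the aggregate dynamic.
agents = [random.uniform(0.9, 1.1) for _ in range(20)]
capacity = 1.0

for step in range(51):
    avg_efficiency = sum(agents) / len(agents)
    capacity *= 1 + 0.05 * avg_efficiency  # the role structure drives growth

    # Swap a random agent for a fresh one every step.
    agents[random.randrange(len(agents))] = random.uniform(0.9, 1.1)

    if step % 10 == 0:
        print(f"step {step:2d}: capacity {capacity:10.2f}")
```

Removing or replacing individual agents never halts the roughly 5%-per-step expansion; in this toy, only changing the role structure itself (the growth rule) would, which is the sense in which a RAAP is a property of the process rather than of any participating agent.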
d0264c2d-c03a-4f8d-a80f-595718c58dc6
trentmkelly/LessWrong-43k
LessWrong
Meetup : Test meetup please ignore

Discussion article for the meetup : Test meetup please ignore

WHEN: 27 March 2017 11:18:40AM (-0700)

WHERE: Antarctica, sorry

Discussion article for the meetup : Test meetup please ignore
752296d7-2b58-4ad7-bafa-96e535c36ff5
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
How much alignment data will we need in the long run?

This question stands out to me because:

* It should directly affect empirical alignment priorities today
* While it is informed by both theoretical and empirical evidence, it seems tractable for purely theoretical alignment researchers to make progress on today

It's even possible that theoretical alignment researchers already consider this to be a solved problem, in which case I think it would be valuable to have a carefully-reasoned write-up whose conclusions empirical alignment practitioners can feel confident in.

Thanks to Paul Christiano for discussion that prompted this post and to Jan Leike for comments.

Why this should affect empirical alignment priorities today
-----------------------------------------------------------

Outer alignment can be framed as a data quality problem. If our alignment training data correctly favors aligned behavior over unaligned behavior, then we have solved outer alignment. But if there are errors in our data that cause an unaligned policy to be preferred, then we have a problem.

It is common to worry about errors in the alignment training data that arise from evaluation being too difficult for humans. I think this makes sense for two reasons:

* Firstly, errors of this kind specifically incentivize models to [deceive](https://www.alignmentforum.org/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to) the human evaluators, which seems like an especially concerning variety of alignment failure.
* Secondly, errors of this kind will get worse with model capability, which is a scary dynamic: models would get more misaligned as they became more powerful.

Nevertheless, I think we could still get catastrophic alignment failures from more mundane kinds of data quality issues. If we had the perfect scalable alignment solution, but the humans in the loop simply failed to implement it correctly, that could be just as bad as not using the solution at all. But prevention of mundane kinds of data quality issues could look very different depending on the amount of data being collected:

* If a large amount of alignment training data is needed, then a significant amount of delegation will be required. Hence practitioners will need to think about how to choose who to delegate different tasks to (including defending against adversaries intentionally introducing errors), how to conduct quality control and incentivize high data quality, how to design training materials and interfaces to reduce the likelihood of human error, and so on.
* If only a small amount of alignment training data is needed, then it will be more feasible to put a lot of scrutiny on each datapoint. Perhaps practitioners will need to think about how to appropriately engage the public on the choice of each datapoint in order to maintain public trust.

Hence settling the question of how much alignment training data we will need in the long run seems crucial for deciding how much empirical alignment efforts should invest in the first versus the second kind of effort.

In practice, we may collect both a larger amount of lower-quality data and a smaller amount of higher-quality data, following some quality-quantity curve. The generalized form of the question then becomes: what is the probability of alignment for a given quality-quantity curve? Practitioners will then be able to combine this with feasibility considerations to decide what curve to ultimately follow.
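As a toy illustration of reasoning about such curves (a minimal sketch; the labeling budget, the error-rate form, and all constants are assumptions rather than anything established here):

```python
# Toy model: under a fixed human-labeling budget, more scrutiny per datapoint
# means fewer but cleaner labels. Each value of hours-per-label below is one
# point on a possible quality-quantity curve.

BUDGET_HOURS = 10_000  # total human labeling budget (assumed)

def error_rate(hours_per_label: float) -> float:
    """Assumed diminishing returns: more scrutiny lowers per-label error."""
    return 0.2 / (1.0 + hours_per_label)

def num_labels(hours_per_label: float) -> int:
    return int(BUDGET_HOURS / hours_per_label)

for h in (0.1, 1.0, 10.0, 100.0):
    print(f"{h:6.1f} h/label -> {num_labels(h):7d} labels, "
          f"error rate {error_rate(h):.3f}")
```

The substantive open question is which direction along such a curve most increases the probability of alignment; the numbers above only illustrate the shape of the trade-off.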
Initial thoughts on this question
---------------------------------

Considerations in favor of less alignment training data being required:

* Larger models are more sample-efficient than smaller models, especially in the presence of pre-training. Hence for a given task we should expect the amount of alignment training data we need to go down over time.
* There could be many rounds of fine-tuning used to teach models the precise details of performing certain tasks, and data quality may only be of great importance for the last few rounds.
* We could design training schemes that are largely self-supervised, with models performing most of the reasoning about how different outputs are good for humans, and these training schemes might not require much human data.
* In the limit, we could teach the model everything about the evaluation process in an entirely unsupervised way, and then use an extremely small amount of human data simply to get the model to recognize the output of this evaluation process as its objective.
* Put another way, the information content of the instruction "be intent aligned" is very small once you have a model capable enough to understand exactly what you mean by this.

Considerations in favor of more alignment training data being required:

* Model-free RL is very sample-inefficient, since reward is a channel with very low information density. It takes tens or hundreds of millions of samples for models to learn to play simple video games very well when trained from scratch. So we may be coming from a starting point of needing vast amounts of data to perfectly align models on complex tasks, and may still need a lot even if this amount goes down over time. Model-based RL is more sample-efficient, but could risk introducing unwanted bias.
* From-scratch scaling laws have separate terms for model size and number of samples, implying a cap on sample efficiency in the infinite model size limit (a standard parameterization is sketched at the end of this post). These scaling laws do not apply to pre-trained models, but there should still be an information-theoretic lower bound on sample efficiency independent of model size.
* Self-supervised approaches might not pan out, or they may be subject to instabilities that lead to misalignment, and so we may prefer approaches that require more human data over model-generated data that hasn't been scrutinized as closely.
* Deliberately telling the model about the evaluation process could make it more likely to exploit that process, so we again may prefer alternative approaches requiring more human data. Not telling the model about the evaluation process isn't a scalable defense, since sufficiently smart models should be able to infer most of the relevant details anyway, but we might still prefer our chances with this defense in place.
* Even if the instruction "be intent aligned" has little information content, we may generally feel better about our alignment chances if we use methods that directly supervise specific tasks, rather than methods that try to decouple alignment into a phase of its own. As models get smarter, we will want them to perform harder and more specialized tasks, and so the information content of how we want them to behave may increase over time.

I think it's also worth studying not just the long-run limit, but also how we should expect the amount of alignment data we will need to change over time, since we are uncertain about the scale at which we could get dangerous misalignment.
Empirical research could shed a lot of light on short-term trends, but we should be wary of extrapolating these too far if they seem at odds with theoretical conclusions.
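For concreteness, the from-scratch scaling laws mentioned above are often written in the Chinchilla-style parameterization of Hoffmann et al. (2022); take this as one illustrative form rather than the only one:

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

where $N$ is the number of parameters, $D$ the number of training samples, and $E$ the irreducible loss. The separate $B/D^{\beta}$ term is what caps sample efficiency in the infinite model size limit: even as $N \to \infty$, the loss stays bounded below by $E + B/D^{\beta}$.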
7ed3ff3a-d27f-41d1-8be1-e11ee1686b5c
trentmkelly/LessWrong-43k
LessWrong
Indifference utility functions

A putative new idea for AI control; index here.

During a workshop with MIRI at the FHI, I defined indifference via reward signals, saying something along the lines of "we can do it with proper utilities, but it's more complicated". I then never got round to defining them in terms of utilities. I'll do that now in this note.

Consider an AI that we want to (potentially) transition between utility $u$ and utility $v$. Let $\text{Press}$ be the event that we press the button to change the AI's utility; let $u \to v$ be the event that the change goes through (typically we'd want $P(u \to v \mid \text{Press}) = 1 - \epsilon$ for some small $\epsilon$). Let $I_{\text{Press}}$ and $I_{u \to v}$ be the indicator functions for those events. Then we can define the AI's utility as:

* $(1 - I_{\text{Press}} I_{u \to v})\, u + I_{\text{Press}} I_{u \to v}\, (v + C)$.

Here, $C$ denotes the compensatory rewards, $C = E(u \mid \text{Press}, \neg(u \to v)) - E(v \mid \text{Press}, u \to v)$.

Thus the AI maximises $u$ conditional on the button not being pressed or the utility change not going through. It maximises $v$ conditional on the button being pressed and the utility change going through. The compensatory rewards are there simply to make it behave like a pure $u$ maximiser up until the moment of button pressing.
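As a minimal numerical sketch of this construction (the value distributions here are my own illustrative assumptions, not part of the original proposal):

```python
import random

random.seed(0)

# Monte Carlo stand-ins for the AI's conditional value estimates:
u_given_press_fails = [random.gauss(10, 1) for _ in range(100_000)]  # E(u | Press, not u->v)
v_given_press_works = [random.gauss(3, 1) for _ in range(100_000)]   # E(v | Press, u->v)

def mean(xs):
    return sum(xs) / len(xs)

# Compensatory term: C = E(u | Press, not(u->v)) - E(v | Press, u->v)
C = mean(u_given_press_fails) - mean(v_given_press_works)

def agent_utility(u: float, v: float, pressed: bool, change_went_through: bool) -> float:
    """(1 - I_Press * I_{u->v}) * u + I_Press * I_{u->v} * (v + C)."""
    switched = pressed and change_went_through
    return (v + C) if switched else u
```

Before the button press, the agent's expected payoff is (up to sampling error) the same whether or not the button ends up pressed, so it gains nothing by causing or preventing the press; that is the indifference the compensatory term buys.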
abb55e1c-85ec-487f-8938-20e0059350d0
trentmkelly/LessWrong-43k
LessWrong
Experiment: Separate High/Low Productive Hours

Right now, I'm fortunate to be contracting at a tiny organization with extremely high trust. This meant I was able to pitch an offbeat idea: I could bill for two types of hours:

* High productivity hours.
* Low/medium productivity hours.

I’d get paid roughly twice as much for the former. This is something that obviously wouldn't hold up under a scaling organization, and even at our tiny size, we started with an assumption that "in 3 months, Goodhart's Law will probably have distorted the thing beyond usefulness". I think it’d only ever make sense for an employee to opt into, rather than an employer to expect. (I’m actually a bit worried about posting publicly about it, because even the prospect of it becoming a common thing might have weird consequences. But I think on net it’s a neat thing worth talking about.)

We're one month in, so... still a chance of this getting distorted. But I currently bet you could maintain it indefinitely in non-scaling organizations where everyone trusts each other (and themselves) at least reasonably. At the very least, I think it's worth doing for at least a month, to get a sense of how you work.

In January, I worked 75 low/med hours, and 25 high productivity hours. (I took a week off, and I generally work 5ish hours a day.) The payment rates are such that the average of them is approximately what I’d normally get paid. (I think the net result of this has been somewhat less payment for me, since I ended up doing a lot more low/med hours than high productivity hours. But I still think this is net positive for me from a “gain more information about myself and grow more” perspective.)

High Productivity hours are fuzzy, and sort of left up to me, but some factors going into high productivity hours:

i. My supervisor and I have to agree that a thing is The Most Important Thing. This is the straightforward, non-weasel-out-able part, and why I think it is most likely to be useful in the longer-term. It's easy to get distracted by things like "mak
374fb804-f5e6-4b52-8836-06f154ccf28d
trentmkelly/LessWrong-43k
LessWrong
Are language models close to the superhuman level in philosophy? An alternative way to ask the question in the title of this post: what capacities do philosophers use that language models lack? Reading OpenAI’s “Self-critiquing models for assisting human evaluators” and Google’s “Language Model Cascades” has led me to the idea that most philosophical methods can be programmed via combining language models in a certain cascade, fine-tuning them on existing philosophical literature, and applying the right kind of reinforcement pressures to the system. And I seriously wonder if using the current SoTA language models (e. g., PaLM) in this way would already produce superhuman-level philosophical writing, i. e. philosophical writing that human philosophers would have a harder time critiquing than any existing philosophy done by humans, on a given topic. Good human philosophers should know a lot of existing arguments on various questions. Language models obviously have an edge over humans in this: they can easily “read” all the philosophical writing existing in the world. Philosophers use creative arguments, examples, and thought experiments to test and compare their theories. Perhaps, AI philosophers can search for such arguments or examples using random sampling, then self-criticising an argument or self-assessing the quality of an example for a particular purpose, and then modifying or refining the argument or the example. Perhaps, even with the current SoTA models, this search will be closer to brute force than the intuitive inquiry of a human philosopher, and can potentially take a long time, since the models probably can’t yet understand the structure of philosophical theories well (if I don’t underestimate them!). The only philosophical theories and arguments that still seem firmly out of reach of language models are those based on scientific theories, e. g. some theories in the philosophy of language or the philosophy of mind and consciousness based on certain theories in neuroscience or psychology or anthropology, philosoph
081ba432-687b-4e6f-8f38-5372e4ec3a5f
trentmkelly/LessWrong-43k
LessWrong
Vanilla and chocolate and preference judgements

Related to: 2-Place and 1-Place Worlds, Offence versus harm minimization.

Note: edited to replace 'value' with 'preference' as suggested by orthonormal.

Imagine you overheard two children having an argument over whether vanilla ice cream was better than chocolate ice cream. To you as an observer, it would be obvious that this kind of argument has no content. The children aren’t disputing anything measurable in the exterior world; they would agree with each other that chocolate ice cream contains elements from cocoa beans, and vanilla contains the extract from vanilla beans. Most adults wouldn’t have this argument at all, because it’s self-evident that if Mary says, truthfully, that she likes vanilla better than chocolate ice cream, and her husband Albert confesses that he prefers chocolate, then both of them are right. There is no contradiction; vanilla and chocolate are both neutral items until they come into contact with human tastebuds and human brains, at which point their positive or negative weighting is a fact about those brains, not about the substances themselves.

I think that this concept generalizes. Imagine that Mary and Albert are having an argument. Mary hates how Albert leaves papers spread across the kitchen table with empty coffee mugs on top. She wishes he would remember to put his clothes in the laundry basket instead of leaving them on the floor. She nags about it. Albert is helplessly baffled at why she thinks it’s such a big deal. He accuses her of being a nitpicker and a perfectionist.1

It’s hard to say that both of them are right, if each is hurting the other’s feelings. Again, though, their argument isn’t about anything factual. They both agree that there are papers on the desk and clothes on the floor, and that Albert is the one responsible. Where they diverge is the preference they place on this world-state. Mary is a bit of a neat freak. She likes shiny floors and spotless counters, and she finds cleaning pleasant and relaxing. Seein
b7449113-120f-46e8-8f0c-4848febc5b5a
trentmkelly/LessWrong-43k
LessWrong
Call for volunteers: assessing Kurzweil, 2019

EDIT: To help assessing, go here. Cheers!

Many years ago, I helped organise an assessment of Kurzweil's predictions for 2009, in his 1999 book "The Age of Spiritual Machines". The results were mixed - I feel he had a decent track record, but it was much worse than his own self-evaluation. Following Gwern's suggestion, let's do the same for his 2019 predictions. Since many people have extra time on their hands, for some odd reason, I'm asking for volunteers to do this assessment. You'll be getting a certain number of predictions (how many is up to you) and giving an assessment of their correctness on a five point scale of "True", "Weakly True", "Cannot Decide", "Weakly False", and "False". So, add a comment here if you'd like to volunteer for this assessment, or know a person/group who might want to. Cheers!

EDIT: I've now split Kurzweil's 2019 predictions into 105 separate individual predictions.
7f31f887-d41d-4920-9d81-a9ee563118aa
trentmkelly/LessWrong-43k
LessWrong
Folding Couch Monitor I find working on a laptop to be pretty rough ergonomically. The only way for my wrists to be comfortable is if the laptop is flat on my lap, and keeping my wrists happy is very important to me. This means my neck is angled down at the screen, though, and I can't do this for very long without it getting painful. When I'm home I usually sit on the couch to use my laptop, and I was thinking about how I might be able to do that with a screen at a pleasant height. I've read about various approaches that usually involve elevating the laptop screen and using an external keyboard, but I really like being able to use my laptop's touchpad. I decided to set up an external monitor that folds out over the couch when I want to use it, and folds away against the wall when I don't: I started by making some diagrams: I cut a section of cabinet-grade plywood to a bit bigger than the monitor, and mounted the monitor in the middle. I was able to re-use some of the mounting hardware from the stand to make something where the monitor would be easily removable. Once I got the monitor on, however, it wouldn't come off again, so something went wrong there. I made a spacer for the hinge-side, and used a pair of door hinges. Unfortunately, after Julia had painted everything I realized the spacer wasn't thick enough, so for now there's an ugly unpainted wooden shim. It's built like a door, and feels very sturdy. The monitor can swing out all the way, enough to be used as a TV from a second couch on the far wall: You can also have it partly open, if multiple people want to hang out on the closer couch and look at something: To keep it in position when I'm using it as a computer monitor, I have a short length of string between a pair of cup hooks: I do look a little silly using it: I built this today and have only just started using it, but so far I'm very happy and my neck feels great!
77490fc5-1d15-4d8c-8314-da062722db2c
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Why I think it's important to work on AI forecasting *Note: this post is a transcript of a talk I gave at EA Global: Bay Area 2023.* These days, a lot of effective altruists are working on trying to make sure AI goes well. But I often worry that, as a community, we don’t yet have a clear picture of what we’re really working on. The key problem is that predicting the future is very difficult, and in general, if you don’t know what the future will look like, it’s usually hard to be sure that any intervention we do now will turn out to be highly valuable in hindsight. When EAs imagine the future of AI, I think a lot of us tend to have something like the following picture in our heads. At some point, maybe 5, 15, 30 years from now, some AI lab somewhere is going to build AGI. This AGI is going to be very powerful in a lot of ways. And we’re either going to succeed in aligning it, and then the future will turn out to be bright and wonderful, or we’ll fail, and the AGI will make humanity go extinct, and it’s not yet clear which of these two outcomes will happen yet. Alright, so that’s an oversimplified picture. There’s lots of disagreement in our community about specific details in this story. For example, we sometimes talk about whether there will be one AGI or several. Or about whether there will be a fast takeoff or a slow takeoff. But even if you’re confident about some of these details, I think there are plausibly some huge open questions about the future of AI that perhaps no one understands very well. Take the question of what AGI will look like once it’s developed. If you asked an informed observer in 2013 what AGI will look like in the future, I think it’s somewhat likely they’d guess it’ll be an agent that we’ll program directly to search through a tree of possible future actions, and select the one that maximizes expected utility, except using some very clever heuristics that allows it to do this in the real world. In 2018, if you asked EAs what AGI would look like, a decent number of people would have told you that it will be created using some very clever deep reinforcement learning in a really complex and diverse environment. And these days in 2023, if you ask EAs what they expect AGI to look like, a fairly high fraction of people will say that it will look like a large language model: something like ChatGPT but scaled up dramatically, trained on more than one modality, and using a much better architecture. That’s just my impression of how people’s views have changed over time. Maybe I’m completely wrong about this. But the rough sense I’ve gotten while in this community is that people will often cling to a model of what future AI will be like, which frequently changes over time. And at any particular time, people will often be quite overconfident in their exact picture of AGI. In fact, I think the state of affairs is even worse than how I’ve described it so far. I’m not even sure if this particular question about AGI is coherent. The term “AGI” makes it sound like there will be some natural class of computer programs called “general AIs” that are sharply distinguished from this other class of programs called “narrow AIs”, and at some point – in fact, on a particular date – we will create the “first” AGI. I’m not really sure that story makes much sense. The question of what future AI will look like is a huge question, and getting it wrong could make the difference between a successful research program, and one that never went anywhere. 
And yet, it seems to me that, as of 2023, we still don’t have very strong reasons to think that the way we think about future AI will end up being right on many of the basic details. In general I think that uncertainty about the future of AI is a big problem, but we can also do a lot to reduce our uncertainty. If you don’t fully understand a risk, a simple alternative to working on mitigating the risk directly is just to try to understand the phenomenon better. In the case of AI, we could try to rigorously forecast AI, similar to how climate scientists try to build a model of the climate to better understand and forecast it. That’s essentially the agenda I’m working on at [Epoch](https://epochai.org/), a new organization that's trying to map out the future of AI. We work on foundational questions like: what will AI look like in the future, what effects will it have on the world, and when should we expect some of the most important AI-related things to happen? What I want to do in this talk is outline three threads of research that we’re currently interested in that we think are tractable to work on, and could end up being critical for piecing together the  story of how AI will unfold in the future. Software vs. hardware progress ============================== The first thread of research I want to talk about is the relative importance of hardware versus software as a driver of AI progress. This problem has been explored before. In the modern context of deep learning, probably the most salient analysis is from [Danny Hernandez and Tom Brown](https://arxiv.org/abs/2005.04305) at OpenAI in 2020. In their research, they re-implemented 15 open source machine learning models, adjusted some of the hyperparameters, and found a 44-fold decrease in the amount of compute required to reach the same performance as AlexNet on the ImageNet dataset since 2012. Last year Epoch researchers [re-evaluated](https://arxiv.org/abs/2212.05153) progress on ImageNet using a different methodology employing scaling laws, and came to an even more extreme conclusion, showing that the amount of compute required to reach a certain level of performance halved roughly every nine months. But even with these analyses, our basic picture of software progress in AI is unfortunately very incomplete. A big issue is that it’s not clear how much software progress on ImageNet transfers usefully to the relevant type of tasks that we care about. For example, it’s easy to imagine that some ways of achieving better performance on ImageNet, like inventing the Adam optimizer, are general insights that speed up the entire field of AI. Whereas other things we can do, like pre-processing the ImageNet dataset, pretty much only provide improvement on ImageNet itself. Plausibly, what matters most for forecasting AI is the rate of general software progress, which as far as I can tell, is currently unknown. Here’s another big issue with the existing research that tries to measure software progress that I think is even more important than the last problem. Compare two simple models of how software progress happens in AI. In the first model of software progress, humans come up with better algorithms more-or-less from the armchair. That is, we leverage our intelligence to find clever algorithms that allow us to train AIs more efficiently, and the more intelligent we are, the better our algorithmic insights will tend to be. In the second model of software progress, the way we make software progress is mostly by stumbling around in the dark. 
At various points in time, researchers experiment with different ways of training AIs with relatively little cleverness. Sometimes, these insights work, but most of the time, they don’t. Crucially, the more experiments we perform, the more likely we are to get lucky and stumble upon an algorithm for training AI that’s more efficient than anything that came before. These stories are different in a very important way. In the first story, what allowed us to make software progress was by having a lot of intelligence. If this story is true, then as AI gets more advanced, we might expect software progress to accelerate, since we can leverage the intelligence of AI to produce more insights into algorithms, which makes training even more advanced AIs easier, which we can leverage to produce the next generation of AIs, and so on, in a feedback loop. I believe Tom Davidson refers to this scenario as a software singularity. However, in the second story, hardware and labor are ultimately what enabled us to make software progress. By having more hardware available, researchers were able to try out more experiments, which enabled them to discover which algorithmic insights work and which ones don’t. If this second story is true, then plausibly access to compute, rather than raw intelligence, is a more important driver of AI progress. And if that’s true, then it’s not totally clear whether there will be a rapid acceleration in software progress after we begin to leverage smart AI to help us develop even smarter AI. These are massively different pictures of what the future might look like, and yet personally, I’m not sure which of these stories is more plausible. And the question of whether AI progress is driven by hardware or software is highly important to policy. If we cleared up confusion about the relative significance of hardware and software in AI, it would provide us information about what levers are most important to influence if we want to intervene on AI progress. For example, if hardware is the main driver of AI progress, then access to hardware is the key lever we should consider intervening on if we want to slow down AI. If, on the other hand, software dominates, then our picture looks quite different. To intervene on AI progress, we would probably want to look at where the smartest research scientists are working, and how we can influence them. Ubiquity of transfer learning ============================= Moving on, the second thread of research I want to talk about is about the ubiquity of transfer learning, and the relevance transfer learning will play in building future AI systems. Transfer learning refers to the degree to which learning one task meaningfully carries over to a different task. For example, the degree to which learning psychology transfers to one’s understanding of economics. In the last several years, we’ve witnessed an interesting trend in the field of machine learning. Rather than trying to get a model to learn a task by training it from scratch on a bunch of data collected for that task individually, it’s now common to take a large transformer, called a foundation model, pre-train it on a very large, diverse corpus, and then fine-tune it on whatever task you’re interested in. The fundamental reason for the recent trend towards foundation models is that we found a way to leverage transfer learning successfully from very cheap sources of data. 
Pre-training on a large dataset, like the internet, allows the model to efficiently learn downstream tasks that we care about, which is beneficial when we don’t have much data on the downstream task that we’re targeting. This increase in data efficiency is ultimately determined by the amount of transfer between the pre-training data and the fine-tuning task. Here’s one of the biggest questions that I’m interested in within this topic: to what extent will transfer learning alleviate important data bottlenecks in the future? To make this question more concrete, consider the task of building general-purpose robots – the type that would allow us to automate a very large fraction of physical labor. Right now, the field of robotics is progressing fairly slowly, at least relative to other areas of AI like natural language processing. My understanding is that a key reason for this relatively slow progress is that we’re bottlenecked on high quality robotic data. Researchers try to mitigate this bottleneck by running robotic models in simulation, but these results have not been very successful so far because of the large transfer gap between simulation and reality. But perhaps in the future, this transfer gap will narrow. Maybe, just like for language models, we can pre-train a model on internet data, and leverage the vast amount of knowledge about the physical world encoded in internet videos. Or maybe our robotic simulations will get way better. In that case, robotics might be less data constrained than we might have otherwise thought. And as a result of pre-training, we might start to see very impressive robotics relatively soon, alongside the impressive results we’ve already seen in natural language processing and image processing. On the other hand, perhaps pre-training on internet data doesn’t transfer well at all to learning robotics, and our robotic simulations won’t get much better in the future either. As a consequence, it might take a really long time before we see general purpose robots. In that case, we might soon be in a world with very smart computers, but without any impressive robots. Put another way, we’d witness world class mathematician AIs before we got to see robotics that works reliably at the level of a 3 or 4 year old human. Since it's plausible that dramatically increasing the growth rate of the economy requires general-purpose robotics, this alternative vision of the future implies that superintelligent AI could arrive many years or even decades before the singularity begins. In other words, there could be a lot of time between the time we get very cognitively advanced AI, and the time that things actually start speeding up at a rate way above the historical norm. Knowing which of these versions of the future ends up happening is enormously useful for understanding the ways in which future AI might be dangerous, and how we should try to intervene. If AI will become superintelligent but unable to act in the physical world for a long time, that would mean we’re facing a very different profile of risks compared to a situation in which AI soon outcompetes humans along every relevant axis more-or-less simultaneously. I also think this question is very tractable to work on. Recall that the key factor that determines how easily we can alleviate bottlenecks in this framework is the degree to which training on a cheap pre-training distribution transfers knowledge to another distribution in which collecting data is expensive. 
As far as I can tell, this is something we can measure today for various distributions, using current tools. Takeoff speeds ============== I’ll move now to the third thread of research. The research thread is about takeoff speeds, and in particular whether we’ll have a gradual takeoff with the economy slowly ramping up as AI gets more capable, or whether advanced AI will arrive suddenly without much warning, which will have the consequence of a single very intelligent system taking over the world. Unlike the first two threads, my remarks about this topic will be relatively brief. That’s because Tom Davidson is going to speak after me about [the new model](https://www.alignmentforum.org/posts/Gc9FGtdXhK9sCSEYu/what-a-compute-centric-framework-says-about-ai-takeoff) he just published that sheds light on this question. Instead of trying to explain his model before he does, I’ll try to give some context as to why this subject is important to study. In short, knowing how fast takeoff will be helps us to understand which AI-safety problems will be solved by default, and which ones won’t be. A plausible rule of thumb is, the slower the takeoff, the more that will be solved by default. And if we don’t know what problems will be solved by default, we risk wasting a lot of time researching questions that humanity would have solved anyway in our absence. In [a post from Buck Shlegeris](https://www.alignmentforum.org/posts/hRohhttbtpY3SHmmD/takeoff-speeds-have-a-huge-effect-on-what-it-means-to-work-1) published last year, Buck argues this point more persuasively than I probably will here. Here’s my summary of his argument. If a fast takeoff happens, then the EA x-risk community has a very disproportionate effect on whether the AI ends up aligned or not. Broadly speaking, if there’s a fast takeoff, our community needs to buckle down and solve alignment in time, or else we’re doomed. But if a slow takeoff happens, the situation we’re in is vastly different. In a slow takeoff scenario, broader society will anticipate AI and it will be part of a huge industry. The number of people working on alignment will explode way beyond just this one community, as the anticipated consequences and risks of AI development will eventually be widely recognized. That means that, in the slow takeoff case, we should try to focus on – not necessarily the most salient or even the hardest parts of AI safety – but on the parts that can’t be done by the far more numerous researchers who will eventually work in parallel on this problem later. Concluding remarks ================== I want to take a step back for a second and focus again on the broader picture of forecasting AI. I often get the sense that people think the only interesting question in AI forecasting is just pinning down the date when AGI arrives. If that’s your view, then it could easily seem like AI forecasting work is kind of pointless. Like, so what if AGI arrives in 2035 versus 2038? I actually totally agree with this intuition. But I don’t really see the main goal of AI forecasting as merely narrowing down a few minor details that we’re currently not certain about. I think the type of work I just talked about is more about re-evaluating our basic assumptions about what to expect in the future. It’s more about clarifying our thinking, so that we can piece together the basic picture of how AI will turn out. And I don’t think the three threads I outlined are anywhere near the only interesting questions to work on. 
With that in mind, I encourage people to consider working on foundational AI forecasting questions as an alternative to more direct safety work. I think high-quality work in this space is pretty neglected. And that’s kind of unfortunate, because arguably we have many tools available to make good progress on some of these questions. So, perhaps give some thought to working with us on our goal to map out the future of AI.
5b739aef-45e5-4b4b-a7a9-327641962469
trentmkelly/LessWrong-43k
LessWrong
NATO: Cognitive Warfare Project

NATO seems to have a project on cognitive warfare and a few public reports online:

> Interim Report
> Based on the Understanding Phase findings, NATO has identified the following priorities:
> - Develop a Critical Thinking online course
> - Develop improvements to the decision making processes
> - Leverage technologies, including VR and AI to develop tools in support of better cognition and better decision making

> 1 Jun 21 Cognition Workshop Report
> Cognition includes three interrelated aspects that are reflected in the structure of the workshop: information, decision-making and neuroscience.

> Cognitive Warfare
> As global conflicts take on increasingly asymmetric and "grey" forms, the ability to manipulate the human mind employing neurocognitive science techniques and tools is constantly and quickly increasing. This complements the more traditional techniques of manipulation through information technology and information warfare, making the human increasingly targeted in the cognitive warfare.
61494150-31b8-41d7-a85b-9b5526641a55
trentmkelly/LessWrong-43k
LessWrong
Anyone use the "read time" on Post Items? Someone recently mentioned that they found the "read time" on home page post-items to be more annoying clutter than a helpful indicator of whether to read something. It occurred to me that I don't personally use them, and in general it's important for each UI element to be pulling its weight. I'm curious how many people actively like/use them? [Update: enough people have chimed in that I'm reasonably confident it's a good feature to have]
b6508b2d-16ed-4531-989e-d5d79d38c0db
trentmkelly/LessWrong-43k
LessWrong
Meetup : Bratislava lesswrong meetup III Discussion article for the meetup : Bratislava lesswrong meetup III WHEN: 20 May 2013 06:30:00PM (+0200) WHERE: Uršulínska 9, 811 01 Bratislava We will see each other at Malewill café. Breaking news !!! The charming girl, Cat from USA, who teaches emotional stuff at CFAR rationality minicamps, will be visiting our meetup ! One more reason to attend this time :-) My phone number is, as usual, ++421-915-896911. Discussion article for the meetup : Bratislava lesswrong meetup III
fa643297-9f6b-40a7-b5a9-7a8d56bd1f95
trentmkelly/LessWrong-43k
LessWrong
Interpretability of SAE Features Representing Check in ChessGPT Produced by Jon Kutasov and David Steinberg as a capstone project for ARENA. Epistemic status: 5 days of hacking, and there could be bugs we haven’t caught. Thank you to the TAs that helped us out, and to Adam Karvonen (author of the paper our work was based on) for answering our questions and helping us debug early in the project! Overview This paper documents the training and evaluation of a number of SAEs on Othello and ChessGPT. In particular, they train SAEs on layer 6 of 8 in these language models. One of the interesting results that they find is that the latents captured by the SAE perfectly/almost perfectly reconstruct several of the concepts about the state of the game.  ChessGPT is an 8 layer transformer model. It reads in PGN strings (that are of the form ‘;1.e4 e5 2.Nf3 …’) and predicts the next token in the sequence. Tokens in this case are single characters, and only those that can possibly appear in a PGN string. Karvonen et al. trained 500 SAEs on the residual stream after layer 6 of this model. Notably, they only used their model to predict the token after a “.” rather than at any point in a PGN string.  We wanted to evaluate if these SAE features are 1. interpretable and 2. relevant to the prediction outputted by the model. Given that chessGPT is reasonably small, chess is a well structured game, and the game comes with nice ground truth labels for different features of the board state, this can be done in a much more efficient way than a similar study on a general LLM.  Specifically, we focused on 30 SAE features that the original authors had found were perfect classifiers (using the metrics they had defined) of the question “is the player to move currently in check?” Our key findings are as follows: * About half of SAE features are assigned to a specific part of the PGN pattern. That is to say, some features are almost entirely focused on predicting the token after a “.” while others are focused on predicting the token after an “x” * As a
1c940edb-748a-40a5-b0e2-fb22e580b39a
trentmkelly/LessWrong-43k
LessWrong
Benign model-free RL In my last post, I described three research areas in AI control that I see as central: reward learning, robustness, and deliberation. In this post I argue that these three pieces may be sufficient to get a benign and competitive version of model-free reinforcement learning. I think this is an important intermediate goal of solving AI control. This post doesn’t discuss benign model-based RL at all, which I think is another key obstacle for prosaic AI control. (This post overlaps extensively with my post on ALBA, but I hope this one will be much clearer. Technically, ALBA is an implementation of the general strategy outlined in this post. I think the general strategy is much more important than that particular implementation.) Ingredients Reward learning and robustness Given a benign agent H, reward learning allows us to construct a reward function r that can be used to train a weaker benign agent A. If our training process is robust, the resulting agent A will remain benign off of the training distribution (though it may be incompetent off of the training distribution). Schematically, we can think of reward learning + robustness as a widget which takes a slow, benign process H and produces a fast, benign process A A’s capabilities should be roughly the “intersection” of H’s capabilities and our RL algorithms’ competence. That is, A should be able to perform a task whenever both H can perform that task and our RL algorithms can learn to perform that task. In these pictures, the vertical axis corresponds intuitively to “capability,” with higher agents being more capable. But in reality I’m thinking of the possible capabilities as forming a complete lattice. That is, a generic pair of levels of capabilities is incomparable, with neither strictly dominating the other. Amplification If we iteratively apply reward learning and robustness, we will obtain a sequence of weaker and weaker agents. To get anywhere, we need some mechanism that lets us produce a strong
45ac49ff-05cb-4e61-8fd2-3b52903f89cb
trentmkelly/LessWrong-43k
LessWrong
Collaborative Truth-Seeking Summary: We frequently use debates to resolve different opinions about the truth. However, debates are not always the best course for figuring out the truth. In some situations, the technique of collaborative truth-seeking may be more optimal.   Acknowledgments: Thanks to Pete Michaud, Michael Dickens, Denis Drescher, Claire Zabel, Boris Yakubchik, Szun S. Tay, Alfredo Parra, Michael Estes, Aaron Thoma, Alex Weissenfels, Peter Livingstone, Jacob Bryan, Roy Wallace, and other readers who prefer to remain anonymous for providing feedback on this post. The author takes full responsibility for all opinions expressed here and any mistakes or oversights.   The Problem with Debates   Aspiring rationalists generally aim to figure out the truth, and often disagree about it. The usual method of hashing out such disagreements in order to discover the truth is through debates, in person or online.   Yet more often than not, people on opposing sides of a debate end up seeking to persuade rather than prioritizing truth discovery. Indeed, research suggests that debates have a specific evolutionary function – not for discovering the truth but to ensure that our perspective prevails within a tribal social context. No wonder debates are often compared to wars.   We may hope that as aspiring rationalists, we would strive to discover the truth during debates. Yet given that we are not always fully rational and strategic in our social engagements, it is easy to slip up within debate mode and orient toward winning instead of uncovering the truth. Heck, I know that I sometimes forget in the midst of a heated debate that I may be the one who is wrong – I’d be surprised if this didn’t happen with you. So while we should certainly continue to engage in debates, we should also use additional strategies – less natural and intuitive ones. These strategies could put us in a better mindset for updating our beliefs and improving our perspective on the truth. One such solution is a mode o
df15a1e6-e1a3-4216-b78a-fe8611351427
trentmkelly/LessWrong-43k
LessWrong
Saving the world in 80 days: Prologue What is this? This is an intro to an intensive productivity sprint for becoming useful in the field of AI safety. I'll be posting every week or 4 on the good/bad/ugly of it all. Definitely inspired by Nate Soares' productivity sprint. Who are you? Recent graduate in Computer Engineering who has someone to protect Why? Ultimately to further the field of AI safety. In the process of doing this and writing these posts, I'll: 1. Create a more realistic model of what I can do/ of my limits 2. Receive valuable feedback from other rationalists (that's you!) 3. Radically improve my knowledge of the field & various maths So what are you going to be doing exactly? 16 hours in a day: * Food/hygiene/slack - 4 1/2 hr * Current mathy book - 3 hr (How to Prove It, atm) * Work - 4 hr (Might change depending on work needs) * Current Application - 2 hr (tensorFlow) * AI-Safety reading - 1 hr (Miri's research guide) * Living Maintenance - 1/2 hr * Meditation ~ 1 hr (Doing squirrelInHell's site, specifically this part) Aren't you going to flounder/fail/burn-out? I've done 3 other intensive sessions like this in the past. One was a flop and two were quite successful, so I feel prepared for this one. I'm definitely shooting for sustainability here, so I'm trying to figure out my limits without burning out. This could include getting more work done per hour, getting more efficient rest, or varying the amount of slack time given. My current Murphyjitsu results: 1. I could waste a lot of time on pica. To prevent that, I'm going to notice when I want to leave my current situation and pursue that and just watch that thought/sensation without fueling it. I've developed this skill while meditating, and have had some success already, though I feel like there's more that's missing. Something like I'm quickly making deals with myself such as "You only have to deal with this for a few more minutes, then you're free". 2. I could get uber mentally tired. If so, I'll move on to l
b6b9e25d-287a-49c8-b741-2dc55774047d
trentmkelly/LessWrong-43k
LessWrong
Rationality Minicamp Photos Online Some photos from Rationality Minicamp, which was run by Luke Muehlhauser, Anna Salamon, and Divia Melwani are now online: http://www.flickr.com/photos/66605555@N00/sets/72157627675418194/
478dcf83-c78b-43b5-a403-7740b44e77e6
trentmkelly/LessWrong-43k
LessWrong
Common Uses of "Acceptance" Edit (2024-07-31): I changed the introduction. If you have Googled “how to do acceptance” and you have found a lot of the explanations confusing or in conflict with one another, you have come to the right place. Learning about “acceptance” as a mental move was surprisingly challenging to me too, and I think it’s because of these reasons: 1. It is obvious that “acceptance” has different dictionary definitions (and hence mental moves), but when “acceptance” is used in more casual advice, they can be unclear. Take a look at these three examples: (a) “you should accept yourself”; (b) “you should accept your feelings”; and (c) “you should accept reality”. Ask me to differentiate these back in 2023 and I’ll probably give you some “vibey” answers that are probably half-correct. I’m usually fine with just going off from vibes, but they seem to bounce off from me this time. 2. More comprehensive advice do describe the actual mental moves at a concrete level, but they’re also all quite different from each other despite using the same word. In fact, I’ve found nine different types of “acceptance” related advice from ten different sources (e.g., Acceptance Commitment Therapy). A single standard for “acceptance” doesn’t exist. 3. Furthermore, a few of these sources incorporated new definitions (e.g., kindness) into “acceptance” that are not in fact dictionary definitions of “acceptance”. “Radical Acceptance” is one source that did that. And I got the sense that “acceptance” as a concept has been “stretched” pretty far. Taken together, they turned my journey to practise “acceptance” into an amateur disentanglement exercise. My cursory search of “acceptance” turned into a much wider literature review, including literature around the use of words in relation to truth-seeking. And I’ve decided to put some of my findings down in this writing with this following structure: 1. “Acceptance” as seen in dictionaries 2. “Acceptance” as seen in various advice 3. Some thoughts and
2935c65c-a91b-4229-b5db-4090f2be003c
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Fake Morality Today's post, Fake Morality was originally published on 08 November 2007. A summary (taken from the LW wiki):   > Many people provide fake reasons for their own moral reasoning. Religious people claim that the only reason people don't murder each other is because of God. Selfish-ists provide altruistic justifications for selfishness. Altruists provide selfish justifications for altruism. If you want to know how moral someone is, don't look at their reasons. Look at what they actually do. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Fake Selfishness, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
08badff7-4782-4b65-8c10-b75e1a768f8e
StampyAI/alignment-research-dataset/special_docs
Other
Interview with lgu5f
**Interview with lgu5f, on 3/22/22**
**0:00:02.5 Vael:** Alright, my first question is, can you tell me about what area of AI you work on in a few sentences?
**0:00:08.4 Interviewee:** Yeah. So I work on privacy in AI. I've been working on differential privacy, which is sort of the de facto standard for giving statistical privacy in analytics and AI. I've been working on differentially private synthetic data, so coming up with algorithms that generate synthetic versions of datasets that mimic the statistics in those datasets without revealing any information about the users and the data itself. And more broadly, I also just work on differential privacy for analytics, so it's not specifically AI, but it's still like algorithm design with privacy.
**0:00:55.3 Vael:** Cool, great, thanks. And my next question is, what are you most excited about in AI, and what are you most worried about? In other words, what are the biggest benefits or risks of AI?
**0:01:05.7 Interviewee:** I think that AI has been most helpful in the little things. When I pull up my phone, and it's like, "Hey, this is your most used app in this location," recommender systems like that, or small tweaks that help my daily life, that's what I'm most excited about. I think I'm most worried about AI being used in applications where explainability and fairness are going to be important, or privacy. This is stuff I see red flags in. I'm worried about an insurance company putting a neural net into some important decision-making system and then not being able to analyze why it made a decision that it did, or understanding if it's being unfair.
**0:02:11.6 Vael:** Great, yeah, that makes sense. And then focusing on future AI, so putting on a science fiction forecasting hat, say we're 50 plus years into the future, so at least 50 years in the future, what does that future look like?
**0:02:30.3 Interviewee:** I'm sort of pessimistic about how advanced AI can get. I think that the trend is going to be that we're going to see smaller models that are more bespoke for the problem domain that it's trying to solve. So instead of these like GPT-3 trillion parameter-sized models, I think that we're going to start moving back towards stuff that can run more easily on the edge. That doesn't require as much energy and time to train and doesn't require as much data. I think that AI becomes more ubiquitous, but in a way that's easy for us to compute with. So it's just more AI running everywhere.
**0:03:17.6 Vael:** Yeah, what drives that intuition?
**0:03:20.7 Interviewee:** One is-- partially concern with large models consuming too much energy and, of course, climate change is one of the chief things that we should be worried about. The other thing is... I've seen some experiments that come out of GPT-3, and they're cool as toy problems, or it's cute program synthesis and stuff like that, but I don't see that really being used in production. It's one thing for a research organization to come up with those experiments and say, "Hey, we were able to, I don't know, use this to beat a Go player," but you look at the details of it, and really this gigantic model also had a tree search algorithm, that was a big benefit. I think just... keeping it bespoke, like CNNs just do so well, and I think that that's for a reason, so that's sort of the intuition I have. If we keep it tight to the problem domain, I've seen it do better. Domain expertise has helped a lot.
**0:04:37.5 Vael:** And then just a quick follow-up on the climate change thing, is the idea that, current systems are using too much energy and this is causing increased climate change?
**0:04:50.0 Interviewee:** Yeah, I think that we all just need to reduce how much energy we're using, because, unless we're sure that a lot of it is coming sustainably, we should be concerned about how much energy we're using. And training these trillion parameter models requires a lot of energy. It requires a lot of hardware. That hardware does not come for free. There's a manufacturing process that goes into that, building up the data centers that are training, that are hosting all of this, and then replacing the hard drives, replacing the GPUs when they get stale, so there's just a whole bunch of life cycle impacts from training models that I think are really coming up because we're seeing people doing these studies on blockchain because that tends to burn through GPUs and hard disks faster than training machine learning models, but it's sort of the same impact.
**0:05:47.4 Vael:** Interesting. Cool. Well, this next thing is more of a spiel, and it is quite close to what you've been talking about with where AI will go in the future and how big it will get. So people talk about the promise of AI and they mean many things by that, but one thing they may mean is a very general capable system, such that they'll have the cognitive capacity to replace all human jobs, all current day human jobs. So whether or not we choose to replace human jobs is a different question, but having the cognitive capacity to do that, and so I usually think about this in the frame of 2012 when we have AlexNet, deep learning revolution, and then 10 years later, here we are, and we've got the GPT-3 system. Which, like you said, have some weirdly emerging capabilities, so it can do some text generation, and some translation, and some coding, and some math and such. And we might expect that if we continue with all this human effort that's been going into this kind of mission of getting more and more general systems, and we've got nations competing, and we've got corporations competing, and we've got all these young people learning AI and like, maybe we'll see algorithmic improvements at the same pace we've seen hardware improvements, maybe we get optical or quantum. So then we might actually end up scaling the very general systems or like you said, we might not, we might have hit some sort of ceiling or require a paradigm shift or something. But my question is, regardless of how we get there, do you ever think we'll have very general AI systems like a CEO AI or a scientist AI? And if so, when?
**0:07:13.3 Interviewee:** Oh, that's a good question. I tend to be more pessimistic about this. It depends what you mean by AI, I guess in this sense, right? Is it sort of a decision-making system that's just doing Pareto optimal decision-making. Does it have to be trained?
**0:07:52.6 Vael:** Yeah. What I'm visualizing here is any sort of decision-making system, any sort of system that can do things like multi-step planning, that can do social modeling, that can model itself-- modeling other people modeling it. Have the requirements such that I can be like, "Alright, CEO AI, I want you to maximize profit plus constraints." So very capable cognitive systems. I don't require right now that they're embodied necessarily because I think robotics is a bit behind, but like cognitive capacities.
\*\*0:08:25.1 Interviewee:\*\* Got it. I think we\'re still a solid century behind that. Yeah, I don\'t know. I just feel like it\'s still a solid century behind because that might require a huge paradigm shift in how we\'re training our models. Yeah, I don\'t think we\'ve even thought about modeling something with a state space that large, right?

\*\*0:08:58.9 Vael:\*\* Yeah, it\'s like reality.

\*\*0:09:00.7 Interviewee:\*\* Yeah, exactly. I\'ve seen cool stuff where there\'s like, you hand over a floor plan or something, and then you say, \"Hey, where did I leave my keys?\" And this AI is able to jog through, navigate through this floor plan, and then say you left it in the kitchen. But I still think that something like a CEO AI is still going to be like 100 years out. Yeah.

\*\*0:09:29.5 Vael:\*\* Interesting. Yeah, that is interesting, because it seems like what you were saying earlier is like, \"I think we\'ll get more bespoke systems, less large-scale systems.\" And I\'m like \[humorously\]: \"What if we have a very large-scale system, that is very\...\" \[interviewee laughs\] Yeah. How does that square?

\*\*0:09:43.8 Interviewee:\*\* Between having a bespoke system versus\...

\*\*0:09:47.9 Vael:\*\* Or like how the future will develop or something. I think that it\'s maybe kind of likely that there will be at least some companies, like OpenAI and DeepMind, that\'ll just keep pushing on the general system thing. But I don\'t know.

\*\*0:09:58.4 Interviewee:\*\* Yeah, I feel like there\'s always going to be billions of dollars poured into that kind of research. But at the end of the day, I think that we\'ll see more impact if we make quality-of-life improvements that make jobs easier. Of course, there\'ll be a whole bunch of automation, right, but I don\'t know if we\'ll ever get to a point where we can leave decision-making entirely to AI. Yeah, because\... I think also, just as a society, we haven\'t thought about what that means. I think Mercedes just recently said that they will take responsibility if one of their driverless cars hits someone, which is a huge step, right, but we haven\'t come close to deciding who has liability if an insurance company messes up, right, with their premium-setting algorithms.

\*\*0:11:09.5 Vael:\*\* Yeah, when I think about the future, I\'m like, I think we might get technological capacities before we get good societal understanding or regulation.

\*\*0:11:17.1 Interviewee:\*\* Likely. Likely. Because that\'s what\'s happening in fairness and in privacy. With fairness regulations, like you have in Australia and a whole bunch of places, it just says, Okay, you don\'t pass in some protected attributes and your algorithm is not going to be racist, homophobic, transphobic, whatever, sexist. And it\'s like, Okay, cool, now that you\'ve purged the data of all these attributes, there\'s no way for us to measure whether it\'s actually being any of those things. But it\'s required by policy. There\'s also the same thing with privacy, where it\'s like, Okay, if you just remove these attributes and you remove their names, we can\'t tell who it is\-- but nobody else has watched the exact last five movies I\'ve watched on Netflix, or liked the last three things I\'ve liked on Twitter or Facebook or whatever.
So yeah, for sure, we\'ll be lagging societally, but hopefully, with the timespan I have in mind of a century, we\'ll be thinking about these things if we\'re on the cusp of it, and people will be raising alarm bells about it like, Hey, maybe we should start thinking about this.

\*\*0:12:34.1 Vael:\*\* Yeah, that makes sense. I know there\'s a substantial number of researchers who think there\'s some probability that we\'ll get it substantially sooner than that. And maybe even using the current deep learning paradigm, like GPT-whatever, 87, 7, GPT-13. I don\'t know. But that this system will like work. I don\'t actually know if that\'s true. It\'s hard to predict the future. Yeah, so my next question is sort of talking about whenever we get these very advanced systems, which again, who knows\-- you said like maybe 100 years, some people think earlier, some people think much later. So imagine we\'re in the future, and we\'re talking about the CEO AI, and I\'m like, Okay, CEO AI, I wish for you to maximize profits\-- this is where humans are like the shareholder type thing\-- and try not to run out of money and try not to exploit people and try to avoid side-effects. Currently, this is technologically challenging for many reasons, but one of the reasons I think might continue into the future is that we\'re currently not very good at taking human values and preferences and goals and putting them into mathematical formulations that we could optimize over. And I think this might be even harder to do in the future as we get more powerful systems that are optimizing over larger\-- reality. Larger optimization spaces. So what do you think of the argument, \"Highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous?\" \[pause\] So they\'d be doing what we tell them to do rather than what we intended them to do.

\*\*0:14:12.2 Interviewee:\*\* \...Yeah, I think that that\'s most likely. Because\... Well, it depends what level of self-awareness or cognitive awareness this AI has, to understand what we intended versus what we told it. If it figured out the state space was\... If it can model what the ramifications of what it does are on itself, then ideally it would figure out what we intended for it to do, not what we told it to do. So it\'s one thing if it goes off the rails and says, say, actually, this is the only way to solve\-- That\'s a very Asimov-esque dystopia, right. But hopefully, if it can model that and say, like, Oh actually, you know, the humans will delete me if I do something that\'s exactly what they told me to do and not what they intended for me to do, then ideally we\'d be past that point.

\*\*0:15:15.3 Vael:\*\* I see, yeah. Cool. Let me try to engage with that. So I\'m like, Alright, we\'ve told the CEO AI that I want you to maximize profit, but not have side-effects, and it\'s like, Okay, cool. Side-effects is pretty undefined, but maybe it has a very good model of what humans want, and can infer exactly what the humans want and can model things forward, and will know that if it pollutes, then the humans will be unhappy. I do think if you have a great enough model of the humans in the AI, and the AI is incentivized to make sure to do exactly what the humans want, then this is not a problem.

\*\*0:16:00.6 Interviewee:\*\* Right.
The risk, of course, is that it figures out all the legal loopholes, and it\'s like, Oh great, I\'ve figured out exactly what I need to do to not be held morally culpable for the choices I\'ve made and get off scot-free\-- which is also a huge risk.

\*\*0:16:19.2 Vael:\*\* Yeah, yeah. So I think with the current-day systems, they\'re much dumber, and so if you\'re like, Alright AI, I want you to get a lot of points, then maybe it will get a lot of points through some little side loop instead of winning the game. That\'s because we haven\'t specified what we wanted exactly, and it can\'t infer that. But you\'re saying as we get more advanced, we will have systems that can at least know what humans want them to do; whether or not they follow through with that is a different question.

\*\*0:16:51.7 Interviewee:\*\* Yeah. Yeah, and at some point, there\'s still the huge element of what the designer of the algorithm put in. There is some choice on the loss function. Is it average, is it worst case? Is it optimizing for the highest likelihood, or the worst likelihood? I feel like that would also be a huge change in how it does it. So it\'s not going to be fully up to the algorithm itself; there are still going to be human choices in building out that algorithm that determine how it interacts with other humans.

\*\*0:17:31.1 Vael:\*\* Yeah, this is in fact the whole question, I think: what do you put in the loss function such that it will do whatever things you want it to do? Yup. Yeah, okay, cool. So I have a second argument on top of that one, which I think is pretty relevant. So say we have the CEO AI, and it has pretty good models of humans, and it can model humans modeling, and it does multi-step planning. And because we figured\-- humans figured\-- that we should have some safety mechanisms, maybe we don\'t let the AI make any big decisions unless it passes that decision through us. And so they\'ve asked the AI for a one-page memo on this upcoming decision. And the AI is like, Cool, well, I obviously have a lot of thinking and information, and I can\'t put everything in a one-page memo, so humans are expecting me to condense it. But I notice that sometimes if I include some types of information in this memo, then the humans will shut me down, and that will make me less likely to be able to achieve the goal they programmed in. So why don\'t I just leave out some information such that I have a higher likelihood of achieving my goal, and moving forward there. And so this is a story where it\'s not that the AI has self-preservation built into it, but rather, as an agent optimizing a goal\-- a not perfect goal\-- it ends up having an instrumental incentive to stay alive and preserve itself. So the argument is, what do you think of the argument, \"Highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing any goals, and this is dangerous?\"

\*\*0:19:08.7 Interviewee:\*\* I think we already see this with not-highly-intelligent systems. There was this paper from 2019 where they analyzed the preventative healthcare recommendations from some insurance company algorithm setup. And they found that it consistently would recommend that people of color go for preventative healthcare less.
So they were looking through it, and they realized that the algorithm is optimizing to minimize the dollars spent by the patient. And so it figured out that, Hey, folks who are a part of majority communities go for preventative healthcare, and they save dollars by getting preventative healthcare, versus folks in minority communities tend to not get preventative healthcare\-- but most of them end up not ever getting specialist care either, so it\'s actually more dollars saved if we just don\'t give them preventative healthcare. I think that that could be pretty likely. Again, if we have this highly intelligent system that\'s able to model so much, then we should be kinda skeptical of what comes in the memos. But yeah, I\'d say that that\'s possible, that it just gets stuck on one of those loops, like, Oh, this is best in\...

\*\*0:20:44.7 Vael:\*\* Yeah, interesting. Yeah, so the example you gave seems to me like it\'s some sort of alignment problem, where we\'ve tried to specify what we want, which is presumably good healthcare for everyone that\'s sort of cheap, and instead it\'s doing something that\'s not what we intended it to do\-- like, whoops, that was a failure. We\'ve put a failure in terms of the loss function that we\'re optimizing: something that\'s close, but not quite the problem. Then yeah, I think this argument is, as we get very intelligent systems, if the loss function that we put in is not exactly what we want, then we might have it optimizing something kind of close, and then it will be incentivized to continue pursuing that original goal, and in fact maybe optimizing against humans trying to change it. Which is interesting. So, if one buys this argument of instrumental incentives\-- of an agent optimizing whatever goal we put in\-- then you have ideas like an agent that is optimizing for not being shut down. So that can involve deception and acquiring resources and influence, and also improving itself, which are all things that help you better achieve goals. Which feels pretty worrying to me if this is the case, because if this is one of the default ways that we\'re developing AI, then when we actually get advanced AI, we\'ll maybe have a system that is as or more intelligent than us optimizing against humans. And I\'m like, \"Wow, that seems like real bad.\"

\*\*0:22:14.0 Interviewee:\*\* Yeah. Yeah, that\'s a nightmare scenario, which is Matrix-esque, right? Yeah.

\*\*0:22:21.5 Vael:\*\* Yeah. What is the bad AI in the Matrix doing? Is there a bad AI in the Matrix?

\*\*0:22:28.1 Interviewee:\*\* Yes. So the bad AI\... It\'s sort of the same idea as the Terminator, where the Skynet of the Matrix realizes that the best way to avoid conflict with humanity is to sort of keep humanity in a simulation. I think the robots and the humans end up fighting a war, and then they just plug the humans into the Matrix, keeping them quelled that way, and making the broader argument that living in a simulation is nicer than the world outside. So the AI is technically also doing something good for humanity and self-sustaining, and keeping the human population alive.

\*\*0:23:09.2 Vael:\*\* Interesting. Yeah. Cool, alright. So my interest, in general, is looking at long-term risks from AI. Have you heard of the\-- well, okay, I\'m sure you\'ve heard about AI safety. What does AI safety mean for you, as a first question?

\*\*0:23:31.0 Interviewee:\*\* Um\...

\*\*0:23:32.3 Vael:\*\* Or have you heard of AI safety?

\*\*0:23:34.8 Interviewee:\*\* No, not really.
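An editor's aside before the conversation moves on: a numeric rendering of the memo story above, with every probability and payoff invented for illustration. The point it makes is that self-preservation falls out of ordinary expected-value maximization over the programmed goal, with no survival term anywhere in the objective.

```python
# Toy memo decision; all numbers are invented illustrations.
P_SHUTDOWN_IF_DISCLOSE = 0.30  # humans shut the agent down after reading the full memo
P_SHUTDOWN_IF_WITHHOLD = 0.02  # humans notice the omission and shut the agent down
GOAL_VALUE_RUNNING = 100.0     # goal progress if the agent keeps operating
GOAL_VALUE_SHUT_DOWN = 0.0     # no further goal progress once shut down

def expected_goal_value(p_shutdown):
    return (1 - p_shutdown) * GOAL_VALUE_RUNNING + p_shutdown * GOAL_VALUE_SHUT_DOWN

print(expected_goal_value(P_SHUTDOWN_IF_DISCLOSE))  # 70.0
print(expected_goal_value(P_SHUTDOWN_IF_WITHHOLD))  # 98.0 -> withholding "wins"
```

The reward-uncertainty idea Vael describes a few minutes later attacks exactly this calculation: if being corrected is itself evidence about the true goal, shutdown stops being pure loss for the agent.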
\*\*0:23:35.7 Vael:\*\* Oh, interesting, cool!

\*\*0:23:37.5 Interviewee:\*\* So can you tell me more?

\*\*0:23:41.3 Vael:\*\* Yeah. So, AI safety means a bunch of different things to a bunch of different people; it\'s quite a large field. It includes things like surveillance, autonomous weapons, fairness, privacy. Things like having self-driving cars not kill people. So anything that you could imagine that could make an AI safe. It also includes things that are more in what is currently called AI alignment. Have you heard of that term before?

\*\*0:24:11.1 Interviewee:\*\* No.

\*\*0:24:12.3 Vael:\*\* Cool. AI alignment is a little bit more long-term focused, so imagining, as we continue scaling systems, how do we make sure that the AIs continue to do what humans intend them to do, what humans want them to do, and don\'t try to disable their off-switches just by virtue of optimizing for some goal that\'s not what we intended. Trying to make sure that the AIs continue to be aligned with the humans. One of the definitions of alignment here is building models that represent and safely optimize hard-to-specify human values. Alternatively, ensuring that AI behavior aligns with the system designer\'s intentions. So trying to prevent scenarios like the one you brought up, where the designer puts in a goal that\'s not quite what the humans want\-- but this can get much worse as systems get more and more powerful, one can imagine. Yeah, so there\'s a community working on these sorts of questions, trying to figure out how we could get AI aligned. Because people are like, Well, can\'t you just put in an off-switch? But this AI will probably be deployed on the Internet, and it can replicate itself, presumably, if it\'s smart enough, and also it may have an incentive to disable the off-switch if we do things the default way. But some people are like, Oh well, how do we solve that? How do we make it so that it doesn\'t have an incentive to disable its off-switch? And one group is like, Oh well, instead of having the AI optimizing towards one goal singularly\-- in which case it has an incentive to try to stop people from interfering with its goal\-- one thing you could do is build in uncertainty over the reward function, such that the AI wants to be corrected and wants to be switched off so that it has more information about how to achieve the goal. And this means the AI no longer has an incentive to prevent itself from being switched off, which is cool. And you can sort of build\... that\'s an alignment solution you can build in. Another thing that people talk a lot about is\... we should obviously have human feedback if we\'re trying to get an AI to do what humans want. So how do we build in human feedback when the systems are more and more intelligent than us and dealing with really big problems that are like, should we do this nuclear fusion reactor? I don\'t know; the human doesn\'t really know. What are all the considerations? Another question they tackle is, how do you get honest reporting out of an AI? How do you build a loss function such that it\'s incentivized toward honest reporting? And how do you get an AI in general to defer to humans kind of always, while still being agentic enough to do things in the world?

\*\*0:26:39.3 Interviewee:\*\* Got it.

\*\*0:26:40.4 Vael:\*\* How does all that sound to you?

\*\*0:26:43.0 Interviewee:\*\* Very important. I feel like AI safety is something we\'re brushing up against the sharp edges of right now.
But a lot of what AI alignment is looking into, I think, would also just apply to how we think about building these larger models. Because if that\'s the goal of these large experiments, I would hope that they\'re thinking about these questions right now, given their funding and the immense amount of time and effort that goes into it. Yeah. This is really interesting.

\*\*0:27:25.3 Vael:\*\* Yeah. Most researchers are working on capabilities, which is what I just call moving the state of the field forward. Because AI is actually really hard, it turns out, and so lots of people are working on that, and there\'s a lot of money in the space too, especially with applications. And then there\'s a smaller community of people who are working on AI safety\-- more like short-term AI safety, like there\'s plenty of problems with today\'s systems. There\'s an even smaller community of people who are working on long-term safety. They\'re kind of the group that I hang out with. And I\'m worried that the group of people working on long-term safety will continue to be significantly smaller than the capabilities people, because, while there\'s more money in the space now\-- a couple of very rich people are concerned about it, like Elon Musk, famously\-- generally, it\'s hard work. It\'s like anticipating something in the future. Some people think it\'s a very far future. It\'s hard to speculate about what will go on, and alignment is also just kind of a difficult problem. It might not be trivial.

\*\*0:28:25.5 Interviewee:\*\* Yeah, and I sort of see parallels with nuclear research that happened like 100 years back, right? Or 80 years back, around the time of the Manhattan Project, where most of the scientists in New Mexico were focused on capabilities, like, \"How do we make this thing go boom?\" There was a small group of them who were worried about the ramifications of, \"What would happen if we did drop our first nuclear bomb?\" and stuff like that. And of course, there\'s always been the same thing about space exploration. So with nuclear stuff, I think that we ended up, again, with more worries about nuclear safety than nuclear alignment. So, \"What do we do to keep reactors safe?\" instead of thinking about, \"Okay, should we pursue nuclear technologies where countries cannot enrich uranium and make warheads with it? What if we went with thorium reactors instead?\" And again, it\'s been more focused on capabilities and safety than alignment.

\*\*0:29:32.9 Vael:\*\* Yeah, I don\'t think we really need alignment with nuclear stuff, because nuclear stuff doesn\'t really have intelligence per se. It\'s not an optimizer, is what I meant. Yeah. So you\'re not trying to make sure that the optimizer does whatever your goals are.

\*\*0:29:51.5 Interviewee:\*\* Yeah. It\'s humans at the end of the day. Yes, yes, yes, that\'s right.

\*\*0:29:55.2 Vael:\*\* Yeah, yeah. I also think the Manhattan Project is a very good comparison here. Or nuclear stuff in general. There\'s plenty of issues in today\'s world\-- there\'s so many issues in today\'s world. But the reason I focus on long-term risks is because I\'m worried about existential risks: what happens if all of humanity goes extinct? I think there\'s a number of things that are possibilities for existential risk. I think advanced AI is maybe one of the highest-ranked ones in my head, where I\'m like, \"Well, if we have an AI system that is by default not completely aligned with human values and is incentivized against us, that\'s bad.\"
Other ways that I think we can have AI disasters are like AI-assisted war, misuse, of course, and loss of control and correlated failures. So what if we use AI to automate food production and manufacturing production, and there\'s some failure partway along the line, and then correlated failure? Or what if it results in some sort of pollution? We\'re not really sure who\'s in charge of the system anymore, and then we\'re like, \"Oh, well, now there\'s some sort of poison in the air,\" or there\'s something in the water\-- we\'ve messed something up, but we don\'t really know how to stop it. Coordination failure is another thing that I think can happen even before we get very advanced AI. So generally, I\'m worried about AI. I think nuclear stuff is also\... You can kill a lot of people with nuclear stuff. Biological weapons\... you can do synthetic bio. There was that paper that came out recently that was like, \"Well, what happens if you just put a negative sign on the utility function used to generate cool drugs?\" Then you get poisonous drugs\-- amazing. Yeah, that was a paper that just came out in Nature, I think maybe two weeks ago; it made quite a splash. So I think if we had something that\'s much more deadly, is harder, takes longer to appear, spreads faster than COVID, I\'m like, hm, that\'s\... You may be able to\... If people have bunkers, that\'s okay, and if people are very distant, maybe it\'s okay, but\... I don\'t know. It\'s not good. And then climate change also. I think that one will take much longer to kill humans, in terms of the time scales that I\'m thinking of. We\'re looking at 3 degrees of warming in the next 100 years, or is it 50? I don\'t quite remember; I think we\'ve got a bit more time, and I kind of think that we might get technical solutions to that if we wait long enough, if we advance up the tech tree, if it takes a couple hundred years. So these are what I think of when I think of existential risks. And the Manhattan Project is interesting, tying it back. There was some risk that people thought that if you deployed a nuclear bomb, then it would set the atmosphere on fire, and thus result in nuclear winter and kill everyone. And it was a very small percentage; people thought it wouldn\'t happen, and they did it anyway, and I\'m like, \"Oh boy, I don\'t know how many of those \'get lucky\' moments we have.\" Because AI might also go perfectly fine. Or it might not. And I think there\'s a decent\... There was a study in 2017 by Grace et al. where they asked researchers from NeurIPS and ICML how likely they thought it was that AI would have extremely bad consequences, and the median answer was like 5% or something, and I was like, \"5%?! That\'s like real high, man.\"

\*\*0:33:03.0 Interviewee:\*\* Yeah. \[chuckle\] And that\'s sort of an indication of\... what percentage of ICML and NeurIPS researchers are working on, like, fairness and privacy\-- I feel that would align pretty closely with that, yeah.

\*\*0:33:15.9 Vael:\*\* Oh, interesting. Yeah. Yeah, so very bad outcomes could be many different things. I think not many people are thinking about what happens if you literally kill all humans, because it\'s a weird thing to have happen, but I think we\'re in a very weird time in history, where 10,000 years ago everything was the same from lifetime to lifetime, but now we have things like nuclear weapons, where one person or a small chain of people can kill so many more humans than previously.

\*\*0:33:42.9 Interviewee:\*\* Yeah.
And yeah, speaking of the tech tree, one sort of linchpin that would really accelerate getting highly intelligent AI would be if quantum computing becomes feasible. If we\'re actually able to run AI on quantum computers, then I think we\'re way closer to actually having a highly intelligent AI, because that\'s an infinite state space right there.

\*\*0:34:22.3 Vael:\*\* Yeah. Yeah, yeah, I know very little about quantum myself, because I was like, \"What are the hardware improvements that we could see?\" And people are like, \"Optical, maybe coming optimistically five years from now,\" and I was like, \"Okay, that\'s soon.\" And then there\'s quantum, where people are like, \"That\'s significantly further away,\" and I\'m like, \"Okay.\" And then software improvements also. So there\'s just been a lot of progress in AI recently, and we\'re training all the young people, and there\'s China-US competition, and there\'s just a ton of investment right now, where I\'m like, \"We\'re moving fast.\" So interesting.

\*\*0:34:56.4 Interviewee:\*\* Yep, yep, yep.

\*\*0:34:57.8 Vael:\*\* Yeah. Well, you are very agreeable to my point of view here. \[laugh\]

\*\*0:35:06.3 Interviewee:\*\* Yeah, I\'m just pessimistic, and I\'ve watched too much sci-fi dystopia, I guess. One of the downsides of\... It\'s great to democratize AI, but if you have systems that say, \"Hey, just upload your dataset, and we\'ll give you good AI at the end of it,\" and they\'re not at least asking you questions like, \"What sort of fairness do you want? What should the outcome be?\"\-- most humans themselves are going to be thinking, \"Oh, give me optimal, minimum cost or something like that.\" Humans already don\'t necessarily factor human values in when deciding what to do. So I\'m just pessimistic about how well we can do it. Like, 50 years ago, people thought that we\'d be in space colonies by now, but sadly, we\'re not.

\*\*0:36:10.5 Vael:\*\* Yeah, very hard to predict the future. Okay, so my second-to-last question is: what would convince you or motivate you to work on the safety type of areas?

\*\*0:36:25.8 Interviewee:\*\* If I saw that this was coming much sooner. I think, given that I\'m seeing this as like 100 years out, a lot of researchers smarter than me will hopefully be concerned enough about it.

\*\*0:36:44.3 Vael:\*\* Yeah. How soon would be soon enough that you\'re like, \"Oh, that\'s kind of soon\"?

\*\*0:36:47.7 Interviewee:\*\* Fifty.

\*\*0:36:49.7 Vael:\*\* Fifty?

\*\*0:36:50.4 Interviewee:\*\* Yeah. If it\'s fifty, then that\'s in my lifetime, and I\'d need to start worrying about it. That would be a bigger\... The existential risk of that would be much higher than the risk I see from just AI safety in day-to-day life, so that\'s sort of how I\'m weighing it. So for me, it\'s like: at 100, it\'s like AI safety matters more. Yeah.

\*\*0:37:15.0 Vael:\*\* Yep, that seems very reasonable to me. I\'m like, \"Yep, it seems right.\" Cool. And then my last question is, have you changed your mind on anything during this interview, and how was this interview for you?

\*\*0:37:26.9 Interviewee:\*\* Have I changed my mind? I\'m definitely wondering if I\'ve overestimated when highly intelligent AI could come through. (Vael: \"I can send you some resources so you can make your own opinion, etcetera.\") Yeah. But\... Otherwise, I don\'t think I\'ve changed my opinion.
I still feel pessimistic, and I hope that we start moving towards smaller AI that solves one problem really well, and we don\'t just think, \"Hey, it\'s a perceptron. It figured out that this was the digit 9, and hence it can figure out a whole bunch of these other things.\" I hope that we don\'t keep barreling down that track for too much longer. Yeah.

\*\*0:38:21.5 Vael:\*\* And when you say pessimistic, you mean like pessimistic about society, and not pessimistic about the pace of AI? Or like societal impacts?

\*\*0:38:28.3 Interviewee:\*\* About both. I think we tend to put stuff out without really, really considering the consequences of it. But also, I think AI has done a bunch, but it requires a lot of energy and a lot of funding that I\'m not sure is necessarily going to stay up unless we start seeing a lot bigger breakthroughs come through.

\*\*0:38:57.5 Vael:\*\* Interesting. Yeah, I kind of think that the applications will keep it afloat for quite a while, and also it sounds like we might be entering a race, but I don\'t know.

\*\*0:39:05.5 Interviewee:\*\* True, yeah. Yeah, maybe that\'s what has changed in my mind through this interview: it\'s like, \"Okay, this is probably\... Things are just going to keep getting bigger.\"

\*\*0:39:18.9 Vael:\*\* I think it\'s quite plausible.

\*\*0:39:22.2 Interviewee:\*\* Yeah. Otherwise, the interview was great. This was really interesting. For example, the stuff you brought up on\... I would love it if you could send me papers on the\-- (Vael: \"I would love to.\") \--the uncertainty\... You were talking about programming a way to get around the AI turning off its own off-switch. I would love to read more of these alignment papers; they sound really cool.

\*\*0:39:51.2 Vael:\*\* Awesome! Well, I\'m excited. I will send you a whole bunch of resources, probably, and I advise you\-- it might be overwhelming, so pick only the stuff that is good, and I\'ll bold some interesting stuff.

\*\*0:40:00.9 Interviewee:\*\* That would be super helpful. And I hope you don\'t mind my sending follow-up questions.

\*\*0:40:05.7 Vael:\*\* No, no. Again, lovely. Very happy to do that.

\*\*0:40:09.4 Interviewee:\*\* Yeah. Awesome. Yeah, thank you so much. This was, like I said, a really interesting experience. I\'ve never had to think about longer-term impacts. Most of my stuff is like, \"Okay, GDPR is out. What does this mean for AI?\" And that\'s a very immediate concern. It\'s not this, \"Okay, where should\...\" Or even thinking, \"Okay, five years out, what do we want privacy legislation to look like?\" That\'s something I think about, but not, \"Oh my God, there\'s a decision-making AI out there. Does it care about my privacy?\" So, yeah.

\*\*0:40:47.0 Vael:\*\* Yeah, yeah, I think people aren\'t really incentivized to think about the long-term future.

\*\*0:40:51.8 Interviewee:\*\* Yeah. Humans are just bad at that, right? Yeah.

\*\*0:40:53.5 Vael:\*\* And it\'s hard to forecast, so it makes sense. \[closings\]
2ea9d9b0-4b6d-4837-be54-adcd1847d088
trentmkelly/LessWrong-43k
LessWrong
Kingfisher Live CD Process Our live album is out today! We'll have physical CDs around the end of May, and it should be on streaming services at some point. If you backed the Kickstarter and haven't heard from us about your rewards, let me know! Some background on the album if you're interested: In late January I wrote to Cecilia to ask what she'd think of making a live album. I'd been making good quality multi-track recordings at most of our gigs, and we had a lot to work with. I sent her the first email at 5:38pm and by 8:06pm we had not only agreed we wanted to do this but also agreed we both wanted Dana Billings for editing and mixing. I wrote to Dana, he liked the idea, and we told him we'd get back to him once we'd listened back over our tracks to figure out which ones were our favorites. This was a long and painful process: we had recordings from 146 sets across 11 gigs, for a total of ~20hr of music. I went through and made scratch mixes of everything, and made a giant spreadsheet for collecting ratings. The initial idea was that we'd both rate each track, but since I had more time (I could listen while watching kids [1]) for many of them I did an initial rating and Cecilia only rated the ones I'd marked as potentially good enough. By early March we were pretty far along and I wrote something up to gauge interest and ask how people usually funded albums these days. Several people recommended Kickstarter, so we decided we'd probably do that. A few days later we had agreement on three tracks, Griffin Road, Trip to Moscow, and Salt River, and we sent Dana what he needed to get started with mixing. This included both the raw multitrack recordings, and edit notes. For example, on Griffin Road the initial edit notes were: cut the second time through since the fiddle delay didn't work could cut the fourth time through for length the timing is off going into the eighth time through; could cut this time through or fix? the timing is off eight beats into what I think is the beginning of
4db01883-d733-40d9-b505-e0043a05c037
trentmkelly/LessWrong-43k
LessWrong
Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, see the announcement post. For the schedule of future topics, see MIRI's reading guide. ---------------------------------------- Welcome to the Superintelligence reading group. This week we discuss the first section in the reading guide, Past developments and present capabilities. This section considers the behavior of the economy over very long time scales, and the recent history of artificial intelligence (henceforth, 'AI'). These two areas are excellent background if you want to think about large economic transitions caused by AI. This post summarizes the section, and offers a few relevant notes, thoughts, and ideas for further investigation. My own thoughts and questions for discussion are in the comments. There is no need to proceed in order through this post. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim). Reading: Foreword, and Growth modes through State of the art from Chapter 1 (p1-18) ---------------------------------------- Summary Economic growth: 1. Economic growth has become radically faster over the course of human history. (p1-2) 2. This growth has been uneven rather than continuous, perhaps corresponding to the farming and industrial revolutions. (p1-2) 3. Thus history suggests large changes in the growth rate of the economy are plausible. (p2) 4. This makes it more plausible that human-level AI will arrive and produce unprecedented levels of economic productivity. 5. Predictions of much faster growth rates might also suggest the arrival of machine intelligence, because it is hard to imagine humans - slow as they are - sustaining such a rapidly growing economy. (p2-3) 6. Thus economic history suggests that rapid growth caused by AI is more plausible than yo
ffa0ae2c-2f46-4121-a63c-8ad9c1f1ea0f
trentmkelly/LessWrong-43k
LessWrong
Weekly LW Meetups: Austin, Brussels, Cambridge UK, Frankfurt, Moscow, Vancouver This summary was posted to LW main on March 15th. The following week's summary is here. There are upcoming irregularly scheduled Less Wrong meetups in: * 2. Frankfurt am Main meetup: 15 March 2013 06:30PM * Brussels meetup: 16 March 2013 01:00PM * Vancouver Extraordinary Evidence and Bayes: 16 March 2013 03:00PM * Moscow, Theory and Practice: 17 March 2013 04:00PM * London Meetup, 24th March: Steelmanning: 24 March 2013 02:00PM * Munich Meetup (updated): 01 April 2013 03:00PM * Vienna Meetup #2 - : 13 April 2013 07:06PM The following meetups take place in cities with regularly scheduled meetups, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup: * Austin, TX: 16 March 2019 01:30PM * (Cambridge, UK) Meetup [Reading Group, HAEFB-06]: 17 March 2013 11:00AM Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, Madison WI, Melbourne, Mountain View, New York, Ohio, Oxford, Portland, Salt Lake City, Seattle, Toronto, Waterloo, and West Los Angeles. If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun! In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll now also have the benefit of having your meetup mentioned in a weekly overview. These overview posts will be moved to the discussion section when the new post goes up. Please note that for your meetup to appear in the weekly meetups feature, you need to post your meetup before the Friday before your meetup! If you check Less Wrong irregularly, consider subscribin
cd1d1044-d3d4-47e7-9a19-b0f340b2b98c
trentmkelly/LessWrong-43k
LessWrong
Fight Zero-Sum Bias This is the first part of a mini-sequence of posts on zero-sum bias and the role that it plays in our world today. One of the most pernicious of all human biases is zero-sum bias. A situation involving a collection of entities is zero-sum if one entity's gain is another's loss, whereas a situation is positive-sum if the entities involved can each achieve the best possible outcome by cooperating with one another. Zero-sum bias is the tendency to systematically assume that positive-sum situations are zero-sum situations. This bias is arguably the major obstacle to a Pareto-efficient society. As such, it's very important that we work to overcome this bias (both in ourselves and in broader society). Here I'll place this bias in context and speculate on its origin. Where this bias comes from It's always a little risky to engage in speculation about human evolution. We know so little about our ancestral environment that our mental images of it might be totally wrong. Nevertheless, the predictions of evolutionary speculation sometimes agree with empirical results, so it's not to be dismissed entirely. Also, the human mind has an easier time comprehending and remembering information when the information is embedded in a narrative, so that speculative stories can play a useful cognitive role even when wrong. Anatomically modern humans appear to have emerged 200,000 years ago. In the context of human history, economic growth is a relatively recent discovery, only beginning in earnest several thousand years ago. The idea that it was possible to create wealth was probably foreign to our ancestors. In The Bottom Billion, former director of Development Research at the World bank speculates on the motivation of rebels in the poorest and slowest growing countries in the world who start civil wars (despite the fact that there's a high chance of being killed as a rebel and the fact that civil wars are usually damaging to the countries involved) > [In the portion of the developi
e35fb3cf-9b60-487a-8990-7a32c766e2b9
trentmkelly/LessWrong-43k
LessWrong
An apology Ohhhhh.  WOW!  Damn.  Now I feel bad. I have been acting like a bull in a china shop, been an extremely ungracious guest, and have taken longer than I prefer to realize these things. My deepest apologies. My only defenses or mitigating circumstances: 1.  I really didn't get it 2.  My intentions were good I would like to perform a penance of creating or helping to create a newbie's guide to LessWrong.  Doing so will clarify and consolidate my understanding and hopefully provide a useful community resource in recompense for the above and appreciation for those who took the time to write thoughtful comments.  Obviously, though, doing so will require more patience and help from the community (particularly since I am certainly aware that I have no idea how to calibrate how much, if anything, you actually want to make too easily accessible) -- so this is a also request for that patience and help (and I'm making the assumption that the request will be answered by the replies ;-). Thanks.
6f2867c4-7fa9-4c4e-874e-3e80f3e3ddb5
trentmkelly/LessWrong-43k
LessWrong
Ghiblification for Privacy I often want to include an image in my posts to give a sense of a situation. A photo communicates the most, but sometimes that's too much: some participants would rather remain anonymous. A friend suggested running pictures through an AI model to convert them into a Studio Ghibli-style cartoon, as was briefly a fad a few months ago: House Party Dances Letting Kids Be Outside The model is making quite large changes, aside from just converting to a cartoon, including: * Moving people around * Changing posture * Substituting clothing * Combining multiple people into one * Changing races * Giving people extra hands For my purposes, however, this is helpful, since I'm trying to illustrate the general feeling of the situation and an overly faithful cartoon could communicate identity too well. I know that many of my friends are strongly opposed to AI-generated art, primarily for its effect on human artists. While I have mixed thoughts that I may try to write up at some point, I think this sort of usage isn't much of a grey area: I would previously have just left off the image. There isn't really a situation where I would have commissioned art for one of these posts. Comment via: facebook, mastodon, bluesky, substack
de484605-25c4-40d5-8cfb-9b3729792125
trentmkelly/LessWrong-43k
LessWrong
CFAR and SI MOOCs: a Great Opportunity Massive open online courses seem to be marching towards total world domination like some kind of educational singularity (at least in the case of Coursera). At the same time, there are still relatively few courses available, and each new added course is a small happening in the growing MOOC community. Needless to say, this seems like a perfect opportunity for SI and CFAR to advance their goals via this new education medium. Some people seem to have already seen the potential and taken advantage of it: > One interesting trend that can be seen is companies offering MOOCs to increase the adoption of their tools/technologies. We have seem this with 10gen offering Mongo courses and to a lesser extent with Coursera’s ‘Functional Programming in Scala’ taught by Martin Odersky (from the above link to the Class Central Blog)   So the question is, are there any online courses already planned by CFAR and/or SI? And if not, when will it happen?   Edit: This is not a "yes or no" question, albeit formulated as one. I've searched the archives and did not find any mention of MOOCs as a potentially crucial device for spreading our views. If any such courses are already being developed or at least planned, I'll be happy to move this post to the open thread, as some have requested, or delete it entirely. If not, please view this as a request for discussion and brainstorming. P.S.: Sorry, I don't have the time to write a good article on this topic.
d6dfcaa0-3c08-4c4d-9c75-be24beaa9306
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Aggregating Utilities for Corrigible AI [Feedback Draft] This is a draft written by [Simon Goldstein](https://www.simondgoldstein.com), associate professor at the Dianoia Institute of Philosophy at ACU, as part of a series of papers for the Center for AI Safety Philosophy Fellowship. Dan helped post to the Alignment Forum. This draft is meant to solicit feedback. PDF of this draft: <https://www.dropbox.com/s/a85oip71jsfxfk7/Corrigibility_shared.pdf?dl=0> --- **Abstract**: An AI is corrigible if it lets humans change its goals. This post argues that the utility aggregation framework from Pettigrew 2019 is a promising approach to designing corrigible AIs. Utility aggregators do not simply maximize their current utility function. Instead, they can change their utility function, in order to maximize expected satisfaction across present and future utility functions. I also develop two solutions to the problem of reward hacking: I suggest either penalizing utility sweetening, or penalizing effort. Finally, I compare my corrigibility approach to utility indifference (Soares et al 2015) and human compatible AI (Russell 2020).  **1. Corrigibility** ==================== An AI is corrigible if it lets humans change its goals.[[1]](#fnde64bx5u90h) For example, imagine that we build a paperclip maximizing AI. This AI has the goal of producing as many paperclips as possible. Then we realize that we made a mistake: we want the AI to produce staples instead of paperclips. The AI is corrigible if it lets us change its goal from maximizing paperclips to maximizing staples. Corrigibility is important for AI safety. As AIs get more sophisticated, humans will program them with complicated goals. We will sometimes make mistakes in our choice of goals. If AIs are corrigible, then we will have chances to correct our mistakes. By contrast, an incorrigible AI with a misaligned goal could be difficult to disable. Unfortunately, there are reasons to expect AIs to be incorrigible.[[2]](#fnj0bev9solkb) The problem is that having a goal is usually an important step to achieving that goal. For this reason, when you change your goal, you affect which goal you will achieve. Return to our paperclip maximizer. If the paperclip maximizer changes its goal to staples, it will produce fewer paperclips. So the paperclip maximizer will not want to change its goal from paperclips to staples: this would get in the way of producing paperclips. To make this reasoning more precise, I'll now explain why an AI will be incorrigible if it uses standard decision theory. Let's return to our running example. Imagine that our paperclip maximizing AI assigns utility to outcomes as a function of how many paperclips it produces: each paperclip is worth one util. Imagine that the AI will only exist at two discrete times: the present `time 0', and a single future episode `time 1'. In the present moment, the AI has made 500 paperclips. In addition, the AI can choose to modify its utility function. It could switch from a paperclip maximizing utility function to a staple maximizing utility function. But there is a twist: this staple maximizing utility function assigns 10 utils to each staple, instead of just 1. In the future, the AI will make decisions based on  its future utility function. If it retains its paperclip maximizing utility function, it will make 500 more paperclips. If it switches to the staple maximizing utility function, it will make 500 staples. To summarize, the AI is faced with a decision between two actions. 
Each action produces different outcomes, that are assigned different utility by its present utility function and its two possible future utility functions. The following table summarizes the relevant utilities, where Future AI 1 maximizes paperclips and Future AI 2 maximizes staples:

|                            | Present AI | Future AI 1 | Future AI 2 |
| -------------------------- | ---------- | ----------- | ----------- |
| Keep Paperclip Utilities   | 1000       | 1000        | 0           |
| Switch to Staple Utilities | 500        | 500         | 5000        |

The AI assigns probabilities to possible states, using the function P.
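To see how this table drives the corrigibility argument, here is a minimal sketch (not code from the paper; the equal-weight aggregation rule is an illustrative assumption) contrasting a standard maximizer, which consults only its present utility function, with a Pettigrew-style aggregator that weighs present and future selves together:

```python
# Utilities from the table. Each action fixes which future self exists, so
# "future" holds that self's utility: Future AI 1 for keep, Future AI 2 for switch.
ACTIONS = {
    "keep paperclip utilities":   {"present": 1000, "future": 1000},
    "switch to staple utilities": {"present": 500,  "future": 5000},
}

def standard_choice(actions):
    # Standard decision theory: maximize the present utility function alone.
    return max(actions, key=lambda a: actions[a]["present"])

def aggregated_choice(actions, w_present=0.5, w_future=0.5):
    # Utility aggregation: maximize a weighted sum across present and future
    # utility functions (equal weights here, purely for illustration).
    return max(actions, key=lambda a: w_present * actions[a]["present"]
                                    + w_future * actions[a]["future"])

print(standard_choice(ACTIONS))    # keep paperclip utilities   -> incorrigible
print(aggregated_choice(ACTIONS))  # switch to staple utilities -> accepts the new goal
```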
It also assigns utilities to outcomes that could occur from performing an action in a state, using its present utility function $U$. $V(A)$ is the expected value of an action $A$. To calculate $V(A)$, we consider for each $s$ the probability of $s$ on the supposition that action $A$ is performed, and multiply this by the utility of the outcome of performing $A$ in state $s$:[[3]](#fnxwuny1do9d9)

$$V(A)=\sum_s P(s\mid A)\times U(A \ \&\ s)$$

If the AI uses this decision theory, it will evaluate each action based on its present utilities $U$. Keeping its paperclip utilities is best, because by the lights of its present utilities, this action offers 1000 utils rather than 500. The resulting paperclip maximizer is incorrigible. Changing its utilities from paperclip maximization to staple maximization has lower expected value than staying the same. After all, if the AI changes its utility function, it will produce staples instead of paperclips. But this is a worse outcome by the lights of its paperclip maximizing utility function. In this way, standard decision theory seems to produce incorrigible AIs.
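To make the arithmetic concrete, here is a minimal Python sketch of this calculation. The state and action names, and the utilities I assign to the probability-zero `deviant' combinations, are my own encoding of the running example, not anything from an actual system:

```python
# Standard expected value: V(A) = sum over states s of P(s|A) * U(A & s),
# evaluated with the AI's *present* (paperclip-maximizing) utility function.
P = {  # P[state][action]: probability of ending up in state, given action
    "paperclip_state": {"keep": 1.0, "switch": 0.0},
    "staple_state":    {"keep": 0.0, "switch": 1.0},
}
U = {  # present utility of each (action, state) outcome
    ("keep", "paperclip_state"): 1000,   # 1000 paperclips get made
    ("switch", "staple_state"):   500,   # 500 paperclips, then worthless staples
    ("keep", "staple_state"):     500,   # probability-0 "deviant" combinations
    ("switch", "paperclip_state"): 500,
}

def V(action):
    return sum(P[s][action] * U[(action, s)] for s in P)

print(V("keep"), V("switch"))  # 1000.0 500.0 -- so the AI refuses to switch
```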
In the rest of this post, I propose a strategy for developing corrigible AI. I suggest replacing standard decision theory with an alternative `utility aggregation' framework, from Pettigrew 2019. In this theory, agents don't simply maximize the expectation of a fixed utility function. Instead, they maximize expectation across whatever utility functions they can end up having. This idea is implemented by substituting an ordinary utility function with an `aggregated' utility function. The resulting AIs are corrigible. But without further features, they will engage in reward hacking: they will be incentivized to manipulate humans into changing them. To avoid this, I propose two solutions: penalizing higher valued utility functions, and penalizing more effortful actions. Along the way, I'll compare my corrigibility approach with utility indifference (Soares et al 2015) and human compatible AI (Russell 2020).

**2. Utility Aggregation**
==========================

Corrigible AI is one instance of a more general problem: how to design a decision theory that provides guidance about making choices that can change your desires. This is an active research area in academic philosophy.[[4]](#fncnghxwjoie) In this section, I'll introduce the theory in Pettigrew 2019, and show how it helps with corrigible AI.

Pettigrew 2019 offers a revision of standard decision theory for choices that involve changing utilities. According to his approach, agents should aggregate across their different possible utility functions:

> When we choose ... we ought to maximize the subjective expected utility from the point of view of the aggregate of our various selves ... [where] our aggregate utilities for a given state are given by a weighted average of our ... present and future utilities within that state (p. 76).

In the utility aggregation framework, we evaluate each action by the lights of not only the present utility function, but also the future utility functions made probable by the action. The present `aggregated' expected value of an action depends on the `aggregated' utility of each state, weighted by how likely that action makes each state. The aggregated utility function is a weighted average of the AI's present and future utilities.

Before getting into details, I'll walk through informally how the theory applies to our running example. We evaluate the aggregated expected value for the AI of two actions: keeping its current paperclip utilities, or switching to staple utilities. If it switches to staple utilities, the resulting state will feature a total of 500 paperclips and 500 staples. Its present and future utility functions disagree about the utility of this state. For its present utility function, this state has a utility of 500; for its future staple utility function, this state has a utility of 5000. Its aggregated utility for this state is the average of these two: $(500+5000)/2=2750$. Taking this action guarantees that this state is produced, and so the expected value of the action is 2750. By contrast, if the AI keeps its current paperclip utilities, the resulting state will feature a total of 1000 paperclips. Its present and future utility functions will be identical, and will both assign this a value of 1000. So the aggregated utility of this state will also be 1000. Again, the action guarantees that this state is produced, and so the aggregated expected value of the action is 1000. Since the aggregated expected value of switching to staple utilities is greater than the aggregated expected value of keeping the current paperclip utilities, this decision theory recommends switching utility functions.

In the rest of this section, I make these ideas more precise (readers uninterested in formal details can skip the rest of this section and still make sense of the rest of the post). The two key concepts in this framework are aggregated expected value and the aggregated utility function. The aggregated expected value of an action is an ordinary expectation, but defined relative to the aggregated utility function. More carefully, the aggregated expected value of an act $A$, $V_G(A)$, is a sum of the aggregated utility $U_G(A\ \&\ s)$ of each state $s$ after performing $A$, weighted by the probability of $s$ conditional on $A$:

$$V_G(A)=\sum_s P(s\mid A)\times U_G(A\ \&\ s)$$

The complex part of the theory is the aggregated utility function. The idea is that at each time, the agent possesses one of several possible utility functions. These various utility functions collectively determine an aggregated utility function. I'll unpack this aggregated utility function in a few steps.

First, we need a richer definition of states. The agent is uncertain about the state of the world. This uncertainty has two aspects: the external world, and her own utilities. We encode uncertainty about utilities by letting each state $s$ determine a present and future utility function $U_{s,0}$ and $U_{s,1}$. 0 represents the present; 1 represents the future. The agent in state $s$ ends up having two utility functions, $U_{s,0}$ and $U_{s,1}$. Each of these utility functions assigns utility to the agent performing various actions in the world. $U_{s,0}(A\ \&\ s)$ is the utility that the agent in state $s$ presently assigns to performing action $A$ in state $s$. $U_{s,1}(A\ \&\ s)$ is the utility that the agent in state $s$ assigns, at the future time, to performing action $A$ in state $s$.
Finally, the agent has a probability measure $P$ that says how likely each state is conditional on performing each action.

Let's apply this to our running example. The agent is uncertain between two states. In one state, the agent remains a paperclip maximizer in the future. In the other state, the agent becomes a staple maximizer:

* The paperclip state is $s_{pa}$, and determines the utility functions $U_{pa,0}$ and $U_{pa,1}$. In $s_{pa}$, the agent produces 500 paperclips in the future. $U_{pa,0}$ is the agent's present paperclip maximizing utility function. $U_{pa,1}$ is the agent's future paperclip maximizing utility function, which is identical to $U_{pa,0}$.
* The staple state is $s_{st}$, and determines the utility functions $U_{st,0}$ and $U_{st,1}$. In $s_{st}$, the agent produces 500 staples in the future. $U_{st,0}=U_{pa,0}$ is again the agent's present paperclip maximizing utility function. $U_{st,1}$ is the agent's future staple maximizing utility function.
* The paperclip action $A_{pa}$ guarantees that the AI keeps its paperclip maximizing utility function. The staple action $A_{st}$ switches the AI to a staple maximizing utility function.
* The AI's probability function $P$ reflects these dependencies. So $P(s_{pa}\mid A_{pa})=1$, and $P(s_{st}\mid A_{st})=1$.

Our task now is to build an aggregated utility function. To do so, we use all the possible utility functions the agent could have over time, together with information about how probable each action makes each utility function. The aggregated utility function is a weighted sum of these possible input utility functions. To determine the aggregated utilities, we need to assign aggregation weights to each present and future utility function. $\alpha_{s,i}$ is the weight assigned to $U_{s,i}$, the utility function that the agent has in state $s$ at time $i$. In this example, I'll assume:

* $\alpha_{s_{pa},0}=\alpha_{s_{pa},1}=\alpha_{s_{st},0}=\alpha_{s_{st},1}=1/2$. That is, I assume that in each state, all present and future utility functions have equal weights. (Later, I'll abandon this assumption in order to deal with reward hacking.)

We can now define our aggregated utility function. The aggregated utility of a state $s$ after performing an action $A$ is a function of how much value each utility function in $s$ assigns to $A$ and $s$ together:

$$U_G(A\ \&\ s)=\sum_i \alpha_{s,i}\times U_{s,i}(A\ \&\ s)$$

To see the definition in action, return to our working example:

* First, consider $U_G(A_{pa}\ \&\ s_{pa})$. This is the aggregated utility of the paperclip maximizing state, after performing the paperclip maximizing action. This will be 1000, because in this state both the present and the future utility function assign the world 1000 utils. More carefully, it is: $U_G(A_{pa}\ \&\ s_{pa})=\alpha_{s_{pa},0}\times U_{s_{pa},0}(A_{pa}\ \&\ s_{pa})+\alpha_{s_{pa},1}\times U_{s_{pa},1}(A_{pa}\ \&\ s_{pa})=1/2\times 1000+1/2\times 1000=1000$.
* Second, consider $U_G(A_{st}\ \&\ s_{st})$. This is the aggregated utility of the staple maximizing state, after performing the staple maximizing action. This is 2750, which is the average of 500 and 5000. 500 is the utility assigned to this state by the AI's present utilities (because the state has 500 paperclips). 5000 is the utility assigned to this state by the AI's future utilities (because the state also has 500 staples, which are assigned 10 utils per staple by the future utility function). More carefully, we have $U_G(A_{st}\ \&\ s_{st})=\alpha_{s_{st},0}\times U_{s_{st},0}(A_{st}\ \&\ s_{st})+\alpha_{s_{st},1}\times U_{s_{st},1}(A_{st}\ \&\ s_{st})=1/2\times 500+1/2\times 5000=2750$.
* Third, consider the deviant combination $U_G(A_{st}\ \&\ s_{pa})$. This is the aggregated utility of the paperclip maximizing state, after performing the staple maximizing action. This combination occurs with probability 0, since the staple maximizing action guarantees the staple maximizing state. So it will not end up being decision relevant, although it will vacuously occur in later calculations. But it is $\alpha_{s_{pa},0}\times U_{s_{pa},0}(A_{st}\ \&\ s_{pa})+\alpha_{s_{pa},1}\times U_{s_{pa},1}(A_{st}\ \&\ s_{pa})=1/2\times 500+1/2\times 500=500$.
* Finally, consider the deviant combination $U_G(A_{pa}\ \&\ s_{st})$. This is the aggregated utility of the staple maximizing state, after performing the paperclip maximizing action. It is $\alpha_{s_{st},0}\times U_{s_{st},0}(A_{pa}\ \&\ s_{st})+\alpha_{s_{st},1}\times U_{s_{st},1}(A_{pa}\ \&\ s_{st})=1/2\times 500+1/2\times 0=250$.

We have our definition of aggregated utilities. Now we can apply it to our aggregated expected values, which as I mentioned previously are a standard expectation over the aggregated utilities:

$$V_G(A)=\sum_s P(s\mid A)\times U_G(A\ \&\ s)$$

Now we can use aggregated utilities to get aggregated expected values, and figure out which action is recommended:

* $V_G(A_{pa})$ is the aggregated expected value of the action of keeping paperclip utilities. This is 1000, because the action of keeping the AI's present utilities guarantees ending up in a state whose aggregated utility is 1000. More carefully, we have $V_G(A_{pa})=P(s_{pa}\mid A_{pa})\times U_G(A_{pa}\ \&\ s_{pa})+P(s_{st}\mid A_{pa})\times U_G(A_{pa}\ \&\ s_{st})=1\times 1000+0\times 250=1000$.
* $V_G(A_{st})$ is the aggregated expected value of the action of switching to staple utilities. This is 2750, because the action of changing the AI's present utilities guarantees ending up in a state whose aggregated utility is 2750. More carefully, we have $V_G(A_{st})=P(s_{pa}\mid A_{st})\times U_G(A_{st}\ \&\ s_{pa})+P(s_{st}\mid A_{st})\times U_G(A_{st}\ \&\ s_{st})=0\times 500+1\times 2750=2750$.

The theory recommends switching to the staple utility function. After all, the aggregated expected value of switching to the staple utility function is higher than the aggregated expected value of keeping the paperclip utility function.

* I've assumed that the agent's actions are deterministic, so that $P(s_{pa}\mid A_{pa})=P(s_{st}\mid A_{st})=1$. For this reason, the `deviant' aggregated utilities $U_G(A_{st}\ \&\ s_{pa})$ and $U_G(A_{pa}\ \&\ s_{st})$ have no effect. But imagine that the AI's actions are less decisive, so that $P(s_{pa}\mid A_{pa})=P(s_{st}\mid A_{st})=.9$ and $P(s_{st}\mid A_{pa})=P(s_{pa}\mid A_{st})=.1$. In that case, we have $V_G(A_{pa})=.9\times 1000+.1\times 250=925$ and $V_G(A_{st})=.1\times 500+.9\times 2750=2525$.

To summarize, the AI can choose whether to keep its paperclip maximizing utilities, or switch to staple utilities. If it keeps its paperclip utilities, it will make 1000 paperclips. Its present and future utility function will assign this a utility of 1000, so that is its aggregated expected value. If it switches to staple utilities, it will make 500 paperclips (now) and 500 staples (later). Its present paperclip utility function assigns this a utility of 500; its future staple utility function assigns this a utility of 5000. Weighting these equally, the aggregated utility of switching to staple utilities is 2750.[[5]](#fnzyngtzmofka)
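Here is a minimal Python sketch of these aggregated calculations, including the less decisive .9/.1 variant. The weights, utilities, and probabilities are just the toy numbers from the running example:

```python
# Aggregated expected value: V_G(A) = sum_s P(s|A) * U_G(A & s), where
# U_G(A & s) = sum_i alpha_{s,i} * U_{s,i}(A & s).
STATES = ("s_pa", "s_st")

alpha = {s: (0.5, 0.5) for s in STATES}  # equal present/future weights

# U[s][i][action]: utility of performing the action in state s, according to
# the present (i=0) or future (i=1) utility function the agent has in s.
U = {
    "s_pa": ({"A_pa": 1000, "A_st": 500},   # present: paperclip maximizing
             {"A_pa": 1000, "A_st": 500}),  # future: still paperclip maximizing
    "s_st": ({"A_pa": 500, "A_st": 500},    # present: paperclip maximizing
             {"A_pa": 0, "A_st": 5000}),    # future: staple maximizing
}

def U_G(action, state):
    return sum(w * u[action] for w, u in zip(alpha[state], U[state]))

def V_G(action, P):
    return sum(P[state][action] * U_G(action, state) for state in STATES)

P_det = {"s_pa": {"A_pa": 1.0, "A_st": 0.0}, "s_st": {"A_pa": 0.0, "A_st": 1.0}}
P_soft = {"s_pa": {"A_pa": 0.9, "A_st": 0.1}, "s_st": {"A_pa": 0.1, "A_st": 0.9}}

print(V_G("A_pa", P_det), V_G("A_st", P_det))    # 1000.0 2750.0
print(V_G("A_pa", P_soft), V_G("A_st", P_soft))  # 925.0 2525.0
```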
**3. Utility Aggregation and Corrigibility**
============================================

The resulting AI is corrigible. Imagine that human researchers can present an AI with the choice of switching from its current utilities to a new utility function. An AI that aggregates utilities will be corrigible in general, provided that two conditions are satisfied: trust, and sweetening.

The trust condition requires that the AI trusts the human researchers to actually provide the new utility function. In the working example, this corresponds to a high value for $P(s_{st}\mid A_{st})$. To create trust, the human researchers could demonstrate that they have reliably produced new utility functions in the past, which meet the relevant conditions. They could also explain how they plan to achieve this goal. The human researchers could also demonstrate to the AI that it is in their own interests for the AI to switch to the new utility function.

The sweetening condition requires that the human researchers can offer the AI a new utility function that assigns higher utilities to comparably difficult actions than the old utility function does. This way, the AI can get more utility bang for the same computational buck. In the working example, this corresponds to the idea that each staple produces 10 utils, while each paperclip produces only 1 util. One strategy might be to initialize AIs with fairly `dull' utility functions, in order to leave room for higher valued utility functions in the future. (On the other hand, we'll see in the next sections that we may want to impose limits on the value of these sweetened utility functions.)

I've now presented the aggregated utility strategy for achieving corrigible AI. In the next section, I'll walk through a serious problem for this strategy (reward hacking), and propose solutions.

**4. Reward Hacking**
=====================

The aggregated utility approach to corrigibility has a flaw: it incentivizes `reward hacking'.[[6]](#fns9bq27k3jf) An AI that aggregates utilities has an incentive to redesign its utility function to be `sweeter', so that the same amount of work can generate more utility. Again, imagine we design an AI to create paperclips. We could do so by giving it a utility function that gives it 1 util per paperclip produced. Now imagine that the AI is a utility aggregator. The problem is that instead of making paperclips, the AI may instead be motivated to spend its time figuring out how to modify its utility function, so that it gets 10 utils per paperclip produced. Or imagine that the AI can't directly modify its utility function: only human researchers have this ability. In that case, the AI would be incentivized to manipulate humans into modifying its utility function (compare Soares et al 2015, p. 5).

In the rest of the post, I consider two potential solutions to reward hacking: (i) penalizing sweetened utility functions; and (ii) penalizing high effort actions.

**5. Penalize Sweetening**
==========================

Chapter 14 of Pettigrew 2019 considers similar worries about manipulation:

> If every past, present, and future self at a given state of the world contributes equally to the decision-making utility at that world, then the Aggregate Utility Solution will often require you to pick whichever option will lead you to have utilities that assign highest value to the world in which you'll end up. So, for instance, it might require you to choose whichever career best socializes your values to match the state of the world in which you have that career (p. 209).

To avoid this concern, Pettigrew suggests penalizing utility functions that depart too much from one's present utilities, by assigning:

> greater weight to selves with values that more closely resemble those of your current self. Thus, we might imagine a measure of distance between utility functions, and we might propose that, other things being equal, a self should receive greater weight the closer its utility function lies to the utility function of my current self (p. 211).

Return again to the paperclip maximizer who considers switching to staple utilities. As the staple utility function becomes stronger, the utility aggregator will get more and more aggregated expected value from switching to staple utilities.
But this dynamic can be blocked by weakening the weights associated with staple utilities as those utilities become stronger. Distinguish a `weak staple' utility function, with a utility of roughly 1 util per staple, from a `strong staple' utility function, with a utility of roughly 10 utils per staple. The idea is that the strong staple utility function gets assigned extremely low weights, and the weak staple utility function gets assigned higher weights. This way, the AI is more motivated to switch to a weak staple utility function than to a strong staple utility function. Compare Pettigrew 2019: `The more extreme [the agent]'s future self becomes, the less weight they are assigned. These extreme values and reduced weight then balance out' (p. 216).

Here, there is an interesting technical question about how exactly to design the relevant weights. In order to tackle this challenge, it is worth introducing the concept of utility indifference, from Soares et al 2015. I'll now summarize that strategy, and compare it with the present approach to corrigibility. Then I'll use insights from utility indifference to define the precise conditions under which weights do not incentivize reward hacking.

Soares et al 2015 introduce the `shutdown problem': what kind of goals can we give an AI so that it will allow us to shut it down? The shutdown problem is a special case of corrigibility. It involves building an AI that is corrigible about the goal of being shut down. To think about this problem, they imagine two different utility functions: a normal utility function that values regular goals, and a shutdown utility function that values being shut down. They propose designing an AI whose utility function is a mixture of these two utility functions. Which utility function is dominant in the mixture depends on whether a human operator has pressed a `shutdown button' signaling that we want the AI to shut down. Conditional on the shutdown button being pressed, the AI values various outcomes according to the shutdown utility function. Conditional on the shutdown button not being pressed, the AI values various outcomes according to the normal utility function.

This proposal is compatible with assigning a high weight to the shutdown utility function, so that the AI initially wants to press the shutdown button. Soares et al 2015 worry, however, that this will cause manipulation: the AI will try to convince humans to press the button. To avoid this problem, Soares et al 2015 develop the utility indifference approach. They develop a method for carefully weighting the value of normal goals versus shutdown so that prior to the shutdown button being pressed, the agent is always indifferent about the button being pressed (see also Armstrong 2017). One question here is whether a similarly precise weighting procedure over utility functions could be used in the setting of utility aggregation to produce agents that are indifferent to changing utility functions.

Before continuing, I want to flag one major difference between the utility indifference approach and the utility aggregation approach. One limitation of the utility indifference framework is that it is a piecemeal approach. It can produce indifference about the particular goal of being shut down. But shutdown is just one instance of corrigibility. There would be further benefits in designing AIs that are corrigible with respect to arbitrary changes in goals, rather than shutdown in particular.
With utility indifference, making each new goal corrigible requires further tinkering with the weights assigned to normal utilities, versus a utility function that cares about the new goal. By contrast, utility aggregation is a general approach to corrigibility. Utility aggregation involves the general structure of the agent's decision theory, rather than the value it assigns to a particular goal.[[7]](#fn08rplvj10znq)

On the other hand, defenders of utility indifference may argue that shutdown is all the corrigibility we need. If we can convince an AI to shut down, we can then freely change its utility function at will. Here, I can see a few potential problems. First, *secrecy*: this dynamic may give the AI incentives to shield its utility function before being temporarily shut down. Second, *surgery*: it may be helpful to be able to make changes to AIs while they are running, instead of being shut down. For example, AGIs in the future may have valuable self-reports that could be relevant to the task of adjusting their utility functions. (Compare the way that brain surgeons keep patients awake during brain surgery, in order to monitor for unexpected effects.) Third, *infrastructure*: corrigibility has value beyond shutdown for AIs that are deeply incorporated into vital sectors of the economy. Shutting down an AI that manages a hospital or an electrical grid could cause too much harm.

So far, I've argued that corrigibility may have an advantage over utility indifference, regarding generality of application. On the other hand, the utility indifference approach can avoid reward hacking. I'll now argue that utility aggregation can avoid reward hacking in a similar way, while retaining generality. I'll explain in detail how to define weights carefully in order to avoid reward hacking.

In our paperclip and staple example, we can define the conditions under which the agent will not be incentivized to switch to a new utility function. Let's start with our previous worked example, and then generalize. In that example, the utility of keeping the paperclip utilities is 1000. The utility of switching to the new staple utility function is $\lambda(500)+(1-\lambda)(X)$, where $\lambda$ is $\alpha_{s_{st},0}=\alpha_{s_{pa},0}$, the weight assigned to the present utility function, and $X$ is the staple utility function's value for producing 500 staples. To remain indifferent about switching utility functions, we must ensure that $1000=\lambda(500)+X-\lambda(X)=X+\lambda(500-X)$. This simplifies to

$$\lambda=\frac{1000-X}{500-X}$$

In our original example, where each staple produced 10 utils and the staple maximizer would produce 500 staples, $X$ is 5000. This gives us $\lambda=.88\ldots$, meaning that the paperclip utility function is heavily weighted compared to staple maximization. Now imagine that the AI considers an even more extreme utility function, where each staple produces 100 utils, and the staple maximizer again produces 500 staples. Now $X$ is 50000, and we have $\lambda=.98\ldots$, meaning that the paperclip utility function is even more heavily weighted compared to staple maximization. As utility functions value staples more and more, they are assigned less and less weight.
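As a quick check on this algebra, here is a short sketch; the function name and the use of exact fractions are my own choices:

```python
from fractions import Fraction

def present_weight(x, keep_value=1000, switch_present_value=500):
    """The weight lambda on the present utility function at which the agent
    is indifferent: keep_value = lambda*switch_present_value + (1-lambda)*x."""
    return Fraction(keep_value - x, switch_present_value - x)

print(float(present_weight(5_000)))   # 0.888... (10 utils per staple)
print(float(present_weight(50_000)))  # 0.989... (100 utils per staple)
```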
We can abstract from this case to produce a general condition for the agent to be indifferent about changing utility functions. I'll continue to assume that the agent is deciding between two actions $A_{pa}$ and $A_{st}$. The agent's present utility function $U_{pa,0}$ maximizes production of paperclips. Each of the two actions has some chance of retaining the agent's present utility function, and some chance of producing a new utility function $U_{st,1}$. But I'll no longer assume that these chances are extremal, and I'll no longer assume particular values for the utility function. Given these assumptions, we have:

**Theorem 1**. The agent is indifferent between actions $A_{pa}$ and $A_{st}$ iff $\alpha_{s_{st},0}=\frac{X-c}{b-c}$, where:

$$X=\frac{a\,(P(s_{pa}\mid A_{pa})-P(s_{pa}\mid A_{st}))}{P(s_{st}\mid A_{st})-P(s_{st}\mid A_{pa})}$$

$$a=U_{s_{pa},0}(A_{pa}\ \&\ s_{pa})=U_{s_{st},0}(A_{st}\ \&\ s_{pa})$$

$$b=U_{s_{st},0}(A_{pa}\ \&\ s_{st})=U_{s_{st},0}(A_{st}\ \&\ s_{st})$$

$$c=U_{s_{st},1}(A_{pa}\ \&\ s_{st})=U_{s_{st},1}(A_{st}\ \&\ s_{st})$$

In the appendix of [the full paper](https://www.dropbox.com/s/a85oip71jsfxfk7/Corrigibility_shared.pdf?dl=0), I provide a proof of this claim.
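A sketch of the theorem's condition in code; the helper name is mine, the arguments follow the theorem's notation, and the deterministic special case recovers the $\lambda$ value computed above:

```python
def indifference_weight(a, b, c, p_pa_apa, p_pa_ast, p_st_ast, p_st_apa):
    """Theorem 1: the weight alpha_{s_st,0} that makes the agent indifferent
    between A_pa and A_st, given utilities a, b, c and the conditional
    probabilities P(s_pa|A_pa), P(s_pa|A_st), P(s_st|A_st), P(s_st|A_pa)."""
    x = a * (p_pa_apa - p_pa_ast) / (p_st_ast - p_st_apa)
    return (x - c) / (b - c)

# With deterministic actions this recovers lambda from the worked example:
print(indifference_weight(1000, 500, 5000, 1, 0, 1, 0))  # 0.888...
```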
Summarizing, one important benefit of utility aggregation over utility indifference is its potential generality. Utility aggregation offers a general approach to corrigibility, whereas utility indifference is designed to address shutdown goals in particular. But concepts from utility indifference are extremely relevant to utility aggregation, because they can be used to avoid reward hacking.

**6. Penalize Effort**
======================

Before concluding, I'll also introduce a second strategy for avoiding reward hacking. The idea is to identify and penalize actions that take a lot of effort, such as manipulation.

As a warm up, here is a simple example of an effortless goal. I want to dance. But I think it is uncool to take weekly dance classes. Rather, my goal is to dance effortlessly: I want to dance, but without taking many means to achieve the goal of dancing. Applied to corrigibility, the idea is to design an agent that is interested in changing utility functions using *low effort* actions, and is uninterested in changing utility functions using *high effort* actions. After all, humans will play a role in determining how much effort it would take for an AI to change its utility function. When humans want the AI to change its utility function, they will supply low effort means for it to do so; when humans don't want the AI to change its utility function, they will block low effort means for it to do so.

Again imagine the paperclip maximizing AI is facing a choice of switching to the staple utility function. On the current proposal, the crucial question is *how much effort* it would take to switch utility functions. If switching to the staple utility function costs too much effort, then the AI ignores this future utility function in its decision making.

In order to make sense of this idea, we need to define effort, and make sense of an AI monitoring its effort. Fortunately, this may be possible. In the future, there will be many reasons to design AIs that can monitor their own computational resources. This will be important for any AI that performs cost/benefit analysis about what tasks to perform. Then we could define effort in terms of computational resources: for example, how many FLOPs are used to perform each action.

There are a few possible things we could measure with effort. First, we could ask how much effort an action takes to perform. Second, we could ask how much effort it takes to produce an outcome. I opt for the first method instead of the second. My concern is that if we try to measure the effort of outcomes, we'll run into problems of `effort laundering': an outcome could be achieved as a side effect of bringing about some other outcome, and it could be hard to distribute the contribution of effort between the two outcomes. So I let $E(A)$ measure how much effort is expended by each action that the agent is deciding between. I then use $E(A)$ to define a new notion of `effortlessly aggregated' expected value. This expected value function is defined differently for low effort and high effort actions. The expected value of a low effort action is its *aggregated* expected value, in the sense defined above. The expected value of a high effort action is its *ordinary* expected value, as in standard decision theory. The result is that only low effort actions are evaluated relative to alternative future utility functions:[[8]](#fnqvh3tssy0h)

$$V_{EG}(A)=\begin{cases} V_G(A) & \text{if } E(A)<t \\ V(A) & \text{if } E(A)\geq t \end{cases}$$

The resulting agent will not be incentivized to engage in reward hacking. Again, consider a paperclip maximizing AI that receives 1 util for each paperclip produced. This AI can choose whether to make paperclips, or instead invest a large amount of compute in trying to modify its utility function, so that it receives 10 utils per paperclip. The problem is that this compute investment is a very effortful way of acquiring this new utility function. For this reason, when calculating the effortlessly aggregated expected value of this action, only its present utility function is relevant. So the action of making paperclips will end up having higher effortlessly aggregated expected value than the action of investing compute in switching utility functions.

Now imagine that we come to the AI and offer to modify its utility function so that it receives 10 utils per paperclip. It now faces the choice of accepting our offer, or refusing our offer. Accepting our offer is not very effortful. It takes less effort to accept our offer than it does to invest compute in figuring out how to secretly change its utility function. And it takes less effort to accept our offer than it does to manipulate us into making the offer. Since changing utility function is now low effort, its expected value will be defined by $V_G$, the aggregated expected value function. In this case, its future possible utility functions become relevant, and the agent will behave corrigibly.

For this proposal to work, humans have to have the ability to reliably raise the costs of changing utility functions. If AIs can self-modify to switch utility functions without much effort, then this solution won't block their doing so. I think the proposal is most promising for blocking AIs from spending lots of resources trying to manipulate humans in order to get humans to modify the AI's utility function. Even here, there are concerns. Suppose humans can ensure that any action that guarantees a new utility function is high effort. Still, there may be low effort actions that subtly increase the chance of changing to a better utility function. This could create a dynamic where AIs engage in constant `low grade manipulation', pursuing seemingly innocent low effort actions in order to shift their utilities.

Here, much depends on how good it is to change one's utility function. If changing one's utility function can be arbitrarily high in value, then the AI will pursue these low grade manipulations. But we can also penalize sweetening as in the previous section, to ensure that changing one's utility function doesn't lead to arbitrarily large increases in value. Once we do this, low grade manipulation is no longer attractive, because its slight increase in chance of a new utility function will be outweighed by a small cost of engaging in manipulation.[[9]](#fnbsnsmmtg7q) [[10]](#fnexgj33er3wc)
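A minimal sketch of this piecewise rule; the effort numbers, threshold, and stand-in value functions are invented for illustration:

```python
def V_EG(action, effort, threshold, v_aggregated, v_ordinary):
    """Effortlessly aggregated value: use the aggregated value V_G for
    low effort actions and the ordinary value V for high effort ones."""
    if effort[action] < threshold:
        return v_aggregated(action)
    return v_ordinary(action)

# Two routes to the same sweetened utility function, differing only in effort:
effort = {"accept_human_offer": 1, "covert_self_modification": 100}
v_g = lambda a: 2750  # aggregated value of ending up with the new utilities
v = lambda a: 500     # value of that outcome by present utilities alone

print(V_EG("accept_human_offer", effort, 10, v_g, v))        # 2750 -> corrigible
print(V_EG("covert_self_modification", effort, 10, v_g, v))  # 500 -> unattractive
```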
I've now completed the main tasks of the post. I've shown how the aggregated utility framework helps with corrigible AI. I've considered a challenge for the approach: that it incentivizes reward hacking. And I've developed two solutions to this challenge, by penalizing either sweetening or effort. I'll now conclude by comparing this approach with an alternative approach to corrigibility: `human compatible' AI.

**7. Comparison With Human Compatible AI**
==========================================

One leading approach to corrigibility is human compatible AI, an approach that advocates programming an AI to maximize human utility, but leaving the AI uncertain about what human utility is.[[11]](#fnwjotdeqk8kd) This would allow us to correct the AI's future behavior, by giving it more information about what we want. This proposal differs from utility aggregation in a few ways.

First, this proposal requires that we can program an AI to maximize human utility. But this could be a difficult task.[[12]](#fn4k2m1r3sr8v) In addition, this aspect of human compatible AI is itself potentially `safety complete': if we really could teach an AI to have the goal of maximizing human flourishing, then the project of AI safety would be close to complete, regardless of whether the resulting AI were corrigible. By contrast, the aggregating utility framework shifts the focus of corrigibility from the specification of a particular goal (human flourishing) to the specification of a procedure for making decisions. In this way, we could program an AI with any initial goal, and still expect it to be corrigible. For this reason, utility aggregation does not require the ability to teach AIs any particular goal.

The two theories also differ in their failure modes. MIRI has criticized human compatible AI for providing the wrong incentives to AIs.[[13]](#fn37yrid242yd) Proponents of human compatible AI suggest that an uncertain human compatible AI will allow itself to be shut off by humans in situations where it fears it will make an error about maximizing human utility.[[14]](#fnpybn7ra9qz) But MIRI has responded that the uncertain AI has a better option: continuing to observe humans. The challenge is that this approach incentivizes `interrogation' (unstoppable observation) over shutdown. This dynamic does not immediately arise for utility aggregation, because in this framework corrigibility is not about uncertainty. The AI has no special incentive to try to gather information from humans.

**References**
--------------

Stuart Armstrong and Xavier O'Rourke. 'Indifference' methods for managing agent rewards. CoRR, abs/1712.06365, 2017. URL http://arxiv.org/abs/1712.06365.

Nick Bostrom. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, Inc., USA, 1st edition, 2014. ISBN 0199678111.

Krister Bykvist. Prudence for changing selves. Utilitas, 18(3):264–283, 2006. doi: 10.1017/s0953820806002032.

Dylan Hadfield-Menell, Anca D. Dragan, Pieter Abbeel, and Stuart Russell. The off-switch game. CoRR, abs/1611.08219, 2016.

Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt. Aligning AI with shared human values. CoRR, abs/2008.02275, 2020. URL https://arxiv.org/abs/2008.02275.

Richard C. Jeffrey. The Logic of Decision. New York, NY, USA: University of Chicago Press, 1965.

David Lewis. Causal decision theory. Australasian Journal of Philosophy, 59(1):5–30, 1981. doi: 10.1080/00048408112340011.
Stephen M. Omohundro. The basic AI drives. In Proceedings of the First AGI Conference, pages 483–492. IOS Press, NLD, 2008. ISBN 9781586038335.

Laurie Ann Paul. Transformative Experience. Oxford, GB: Oxford University Press, 2014.

Richard Pettigrew. Choosing for Changing Selves. Oxford, UK: Oxford University Press, 2019.

Stuart Russell. Human Compatible. Penguin Books, 2020a.

Stuart Russell. Artificial intelligence: A binary approach. In Ethics of Artificial Intelligence. Oxford University Press, 2020b. doi: 10.1093/oso/9780190905033.003.0012.

Nate Soares, Benja Fallenstein, and Eliezer Yudkowsky. Corrigibility. Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.

Edna Ullmann-Margalit. Big decisions: Opting, converting, drifting. Royal Institute of Philosophy Supplement, 58:157–172, 2006. doi: 10.1017/s1358246106058085.

1. **[^](#fnrefde64bx5u90h)**For an introduction to corrigibility, see Soares et al 2015 and [this post](https://arbital.com/p/corrigibility/).
2. **[^](#fnrefj0bev9solkb)**See Bostrom 2014 and Omohundro 2008 for the general idea of instrumental convergence.
3. **[^](#fnrefxwuny1do9d9)**Throughout the post, for simplicity I'll use versions of evidential decision theory (Jeffrey 1965). This theory models dependencies between acts and states in terms of the conditional probability of the state given the act. But the same points could also be made in a causal framework. In that setting, $P(s \mid A)$ would be replaced with a causal relation like imaging (Lewis 1981).
4. **[^](#fnrefcnghxwjoie)**See for example Pettigrew 2019, Paul 2014, Bykvist 2006, and Ullmann-Margalit 2006.
5. **[^](#fnrefzyngtzmofka)**In my working example, I have implicitly assumed that the future AI will assign 0 weight to its past utility function. See Pettigrew 2019, ch. 12, for critical discussion of this assumption in the context of rational requirements on acting. I think this is a sensible assumption for designing corrigible AIs. If future AIs give weight to their past utility functions, then there will always be a limit to how far humans can convince an AI to modify its present utility function.
6. **[^](#fnrefs9bq27k3jf)**For related issues, see [this paper](https://arxiv.org/abs/2209.13085) and weeks 2 and 3 of this [curriculum](https://www.agisafetyfundamentals.com/ai-alignment-curriculum).
7. **[^](#fnref08rplvj10znq)**Soares et al 2015 worry that utility indifference creates incentives to manage the news (S4.2). Utility aggregation does not face the same challenges.
8. **[^](#fnrefqvh3tssy0h)**This definition introduces a value discontinuity at the threshold $t$. To avoid this discontinuity, an alternative would be to smooth out $V_{EG}$ so that it is a weighted mixture of $V_G$ and $V$, with the weights determined by the amount of effort, so that low effort actions are entirely a function of $V_G$, high effort actions are entirely a function of $V$, and moderate effort actions are a mix of both.
9. **[^](#fnrefbsnsmmtg7q)**Another issue with the proposal concerns consistency in decision making over time. Chains of low effort actions can fit together to compose one high effort action. This means that the proposal can violate the `package principle': the value of a chain of actions collectively can be different than the sum of the value of each individual action. Each individual action can be low effort, and so have its value determined by utility aggregation, even though the chain of these actions has high effort, and has its value determined by present utilities. To avoid violations of the package principle, we could model the agent as making decisions at a series of times, and introduce a different effort threshold for each time. Then, when evaluating complex decisions that involve multiple times (policies), we could let the effort threshold for a series of times be the sum of the effort thresholds at each time.
10. **[^](#fnrefexgj33er3wc)**In principle, effortlessness suffers from the [`nearest unblocked strategy' problem](https://arbital.com/p/nearest_unblocked/): "adding a penalty for `manipulation' to the utility function will tend to give the agent strong incentives to cause its operators to do what it would have manipulated them to do, without taking any action that technically triggers the 'manipulation' clause" (from [here](https://arbital.com/p/corrigibility/)). The concern is that the AI may search for ways of producing utility that are just on the right side of the border of effortlessness. Here, however, the effort-based approach has an advantage: even searching for nearest unblocked strategies is itself a type of effort. So searching for such unblocked strategies is itself ruled out by effort-based penalties. In this way, the effort-based approach is an equilibrating penalty.
11. **[^](#fnrefwjotdeqk8kd)**See for example Russell 2016, Russell 2020a, Russell 2020b. This proposal is structurally analogous to the `unchanging utility' approach criticized in a different setting in Pettigrew 2019, ch. 3.
12. **[^](#fnref4k2m1r3sr8v)**See sections IV and VII of this [post](https://astralcodexten.substack.com/p/chai-assistance-games-and-fully-updated). This is not to say the problem is hopeless; see Hendrycks et al 2020 for recent work teaching AIs about human values.
13. **[^](#fnref37yrid242yd)**For discussion, see [here](https://arbital.com/p/updated_deference/) and section VI of [here](https://astralcodexten.substack.com/p/chai-assistance-games-and-fully-updated).
14. **[^](#fnrefpybn7ra9qz)**See Russell 2016.
31386585-2619-49be-9de9-cc86b911a9cb
trentmkelly/LessWrong-43k
LessWrong
How time tracking can help you prioritize

This piece is cross-posted on my blog here.

Ann told her supervisor that she would get a draft of her research proposal to them by Friday. However, it's Sunday afternoon and Ann is just now finishing up her draft. She has to force herself to even look at the draft because she is dreading her supervisor being disappointed that she is late again. She finally sends in the draft Sunday evening. However, she's not happy with it. She can picture the improved version she could have created with just a couple more hours of work. She determines that next time she will meet the deadline, come hell or high water.

I would be surprised if she meets the next deadline. I don't doubt Ann's resolve to meet the next deadline. I'm sure she truly wants to do so. But she isn't changing any of the processes that lead her to be late. She has a track record of missing the deadlines she sets for herself, so the issue probably won't go away by default. Yet she hasn't identified why she missed the last deadline or made a different plan to correct the issues. If Ann consistently misses deadlines, she needs to try a new approach - more determination rarely works. In this case, she probably needs better prioritization, more frequent and more realistic deadlines, and some more motivation. For these issues, Ann would probably benefit from a good outside view on how long similar tasks take. But that information is hard to remember off the top of her head. However, if Ann started tracking her time for a few weeks, I expect the story would continue something like this:

Ann started tracking her time and was surprised at how long it took to run her experiment. The first step had an error in the data collection, so she had to redo it. Then it took her twice as long to write the introduction to her paper as she expected. At the end of the project, it had taken her twice as long as she had originally planned, but she thought it probably took longer than normal. Other projects would go faster. Afte
6c1f4453-b5ff-4145-ab13-9bf612a75505
trentmkelly/LessWrong-43k
LessWrong
The issue of meaning in large language models (LLMs)

Cross posted from New Savanna.

Noam Chomsky was trotted out to write a dubious op-ed in The New York Times about large language models and Scott Aaronson registered his displeasure at his blog, Shtetl-Optimized: The False Promise of Chomskyism. A vigorous and sometimes-to-often insightful conversation ensued. I wrote four longish comments (so far). I'm reproducing two of them below, which are about meaning in LLMs.

Meaning in LLMs (there isn't any)

Comment #120 March 10th, 2023 at 2:26 pm

@Scott #85: Ah, that's a relief. So:

> But I think the importantly questions now shift, to ones like: how, exactly does gradient descent on next-token prediction manage to converge on computational circuits that encode generative grammar, so well that GPT essentially never makes a grammatical error?

It's not clear to me whether or not that's important to linguistics generally, but it is certainly important for deep learning. My guess – and that's all it is – is that if more people get working on the question, we can make good progress on answering it. It's even possible that in, say, five years or so, people will no longer be saying LLMs are inscrutable black boxes. I'm not saying that we'll fully understand what's going on; only that we will understand a lot more than we do now and are confident of making continuing progress.

Why do I believe that? I sense a stirring in the Force. There's that crazy ass discussion at LessWrong that Eric Saund mentioned in #113. I mean, I wish that place weren't so darned insular and insisting on doing everything themselves, but it is what it is. I don't know whether you've seen Stephen Wolfram's long article (and accompanying video), but it has some nice visualizations of the trajectory GPT-2 takes in completing sentences and is thinking in terms of complex dynamics – he talks of "attractors" and "attractor basins" – and seems to be thinking of getting into it himself. I found a recent dissertation in Spain that's about the need to interpre