A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment

1 Introduction
---------------

In reinforcement learning [Sutton1998](#bib.bib61) (RL), agents identify policies that collect as much reward as possible in a given environment. Recently, leveraging parametric function approximators has led to tremendous success in applying RL to high-dimensional domains such as Atari games [Mnih2015](#bib.bib39) or robotics [Schulman2015](#bib.bib55). In such domains, actor-critic approaches [Lillicrap2016](#bib.bib35); [Mnih2016](#bib.bib40), inspired by the policy gradient theorem [Sutton2000](#bib.bib62); [Degris2012](#bib.bib13), attain state-of-the-art results by learning both a parametric policy and a value function. Empowerment is an information-theoretic framework in which agents maximize the mutual information between an action sequence and the state obtained after executing this action sequence from some given initial state [Klyubin2005](#bib.bib26); [Klyubin2008](#bib.bib27); [Salge2014](#bib.bib52). This mutual information is highest for initial states from which the number of reachable next states is largest. Policies that aim for high empowerment can lead to complex behavior, e.g. balancing a pole in the absence of any explicit reward signal [Jung2011](#bib.bib23). Despite progress on learning empowerment values with function approximators [Mohamed2015](#bib.bib41); [deAbril2018](#bib.bib12); [Qureshi2019](#bib.bib48), there has been little attempt to combine empowerment with reward maximization, let alone to utilize empowerment for RL in the high-dimensional domains to which it has only recently become applicable. We therefore propose a unified principle for reward maximization and empowerment, and demonstrate that empowerment signals can boost RL in large-scale domains such as robotics.
In short, our contributions are:

* a generalized Bellman optimality principle for joint reward maximization and empowerment,
* a proof of unique values and convergence to the optimal solution for our novel principle,
* empowered actor-critic methods boosting RL in MuJoCo compared to model-free baselines.

2 Background
-------------

### 2.1 Reinforcement Learning

In the discrete RL setting, an agent in state s∈S executes an action a∈A according to a behavioral policy π_behave(a|s), a conditional probability distribution π_behave: S×A→[0,1]. The environment, in response, transitions to a successor state s′∈S according to a (probabilistic) state-transition function P(s′|s,a), where P: S×A×S→[0,1]. Furthermore, the environment generates a reward signal r=R(s,a) according to a reward function R: S×A→ℝ. The agent's aim is to maximize its expected future cumulative reward with respect to the behavioral policy, $\max_{\pi_{\text{behave}}} \mathbb{E}_{\pi_{\text{behave}},P}\left[\sum_{t=0}^{\infty} \gamma^t r_t\right]$, where t is a time index and γ∈(0,1) a discount factor. Optimal expected future cumulative reward values for a given state s then obey the following recursion:

$$V^\star(s) = \max_a \Big( R(s,a) + \gamma\, \mathbb{E}_{P(s'|s,a)}\big[V^\star(s')\big] \Big) =: \max_a Q^\star(s,a), \tag{1}$$

referred to as Bellman's optimality principle [Bellman1957](#bib.bib4), where V⋆ and Q⋆ are the optimal value functions.

### 2.2 Empowerment

Empowerment is an information-theoretic method where an agent in state s∈S executes a sequence of k actions →a∈A^k according to a policy π_empower(→a|s), a conditional probability distribution π_empower: S×A^k→[0,1]. This is slightly more general than the RL setting, where only a single action is taken upon observing a given state.
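Returning briefly to Section 2.1, the recursion in Equation (1) directly yields the standard value iteration algorithm. Below is a minimal NumPy sketch; the two-state MDP (transition tensor `P`, reward matrix `R`) and the function name are made-up illustrations, not from the paper:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, iters=200):
    """Standard value iteration for Bellman's optimality principle, Eq. (1).

    P: transition tensor of shape (S, A, S'), P[s, a, s'] = P(s'|s, a).
    R: reward matrix of shape (S, A).
    """
    S, _, _ = P.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * (P @ V)   # Q(s,a) = R(s,a) + gamma E_{P(s'|s,a)}[V(s')]
        V = Q.max(axis=1)         # V*(s) = max_a Q*(s,a)
    return V, Q

# Made-up two-state, two-action MDP
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.8, 0.2], [0.2, 0.8]]])
R = np.array([[1.0, 0.0],
              [0.0, 0.5]])
V, Q = value_iteration(P, R)
```

With γ < 1 the update is a contraction, so a few hundred sweeps suffice for this toy problem.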
The agent's aim is to identify an optimal policy π_empower that maximizes the mutual information I[→A, S′ | s] between the action sequence →a and the state s′ to which the environment transitions after executing →a in s:

$$E^\star(s) = \max_{\pi_{\text{empower}}} I[\vec{A}, S' \mid s] = \max_{\pi_{\text{empower}}} \mathbb{E}_{\pi_{\text{empower}}(\vec{a}|s)\, P^{(k)}(s'|s,\vec{a})}\left[\log \frac{p(\vec{a}|s',s)}{\pi_{\text{empower}}(\vec{a}|s)}\right]. \tag{2}$$

Here, E⋆(s) refers to the optimal empowerment value and P^(k)(s′|s,→a) to the probability of transitioning to s′ after executing the sequence →a in state s, where P^(k): S×A^k×S→[0,1]. Importantly,

$$p(\vec{a}|s',s) = \frac{P^{(k)}(s'|s,\vec{a})\, \pi_{\text{empower}}(\vec{a}|s)}{\sum_{\vec{a}} P^{(k)}(s'|s,\vec{a})\, \pi_{\text{empower}}(\vec{a}|s)}$$

is the inverse dynamics model of π_empower. The implicit dependency of p on the optimization argument π_empower renders the problem non-trivial. From an information-theoretic perspective, optimizing for empowerment is equivalent to maximizing the capacity [Shannon1948](#bib.bib57) of an information channel P^(k)(s′|s,→a) with input →a and output s′ with respect to the input distribution π_empower(→a|s), as outlined in the following [Csiszar1984](#bib.bib11); [Cover2006](#bib.bib10). Define the functional

$$I_f(\pi_{\text{empower}}, P^{(k)}, q) := \mathbb{E}_{\pi_{\text{empower}}(\vec{a}|s)\, P^{(k)}(s'|s,\vec{a})}\left[\log \frac{q(\vec{a}|s',s)}{\pi_{\text{empower}}(\vec{a}|s)}\right],$$

where q is a conditional probability q: S×S×A^k→[0,1]. The mutual information is then recovered as a special case of I_f via I[→A, S′ | s] = max_q I_f(π_empower, P^(k), q) for a given π_empower. The maximizing argument

$$q^\star(\vec{a}|s',s) = \frac{P^{(k)}(s'|s,\vec{a})\, \pi_{\text{empower}}(\vec{a}|s)}{\sum_{\vec{a}} P^{(k)}(s'|s,\vec{a})\, \pi_{\text{empower}}(\vec{a}|s)} \tag{3}$$

is the true Bayesian posterior p(→a|s′,s)—see [Cover2006](#bib.bib10), Lemma 10.8.1, for details. Similarly, maximizing I_f(π_empower, P^(k), q) with respect to π_empower for a given q leads to:

$$\pi_{\text{empower}}(\vec{a}|s) = \frac{\exp\left(\mathbb{E}_{P^{(k)}(s'|s,\vec{a})}\big[\log q(\vec{a}|s',s)\big]\right)}{\sum_{\vec{a}} \exp\left(\mathbb{E}_{P^{(k)}(s'|s,\vec{a})}\big[\log q(\vec{a}|s',s)\big]\right)}, \tag{4}$$

as explained e.g. in [Cover2006](#bib.bib10), page 335, similar to [Ortega2013](#bib.bib45). The above yields the subsequent proposition.

###### Proposition 1 (Maximum Channel Capacity).
Iterating through Equations (3) and (4) by computing q given π_empower and vice versa in an alternating fashion converges to an optimal pair (q⋆, π⋆_empower) that maximizes the mutual information: max_{π_empower} I[→A, S′ | s] = I_f(π⋆_empower, P^(k), q⋆). The convergence rate is O(1/N), where N is the number of iterations, for any initial π_empower with full support in A^k for all s—see [Cover2006](#bib.bib10), Chapter 10.8, and [Csiszar1984](#bib.bib11); [Gallager1994](#bib.bib16). This is known as the Blahut-Arimoto algorithm [Arimoto1972](#bib.bib2); [Blahut1972](#bib.bib7).

Remark. *Empowerment is similar to curiosity concepts of predictive information that focus on the mutual information between the current and the subsequent state [Bialek2001](#bib.bib6); [Prokopenko2006](#bib.bib47); [Zahedi2010](#bib.bib68); [Still2012](#bib.bib60); [Montufar2016](#bib.bib42); [Schossau2016](#bib.bib53).*

3 Motivation: Combining Reward Maximization with Empowerment
-------------------------------------------------------------

The Blahut-Arimoto algorithm presented in the previous section solves empowerment for low-dimensional discrete settings but does not readily scale to high-dimensional or continuous state-action spaces. While there has been progress on learning empowerment values with parametric function approximators [Mohamed2015](#bib.bib41), how to combine empowerment with reward maximization or RL remains open. In principle, there are two possibilities for utilizing empowerment. The first is to directly use the policy π⋆_empower obtained in the course of learning the empowerment values E⋆(s).
The second is to train a behavioral policy that takes, in each state, the action for which the expected empowerment value of the next state is highest (requiring E⋆-values as a prerequisite). Note that the two possibilities are conceptually different. The latter seeks states with a large number of reachable next states [Jung2011](#bib.bib23). The former, on the other hand, aims for high mutual information between actions and the subsequent state, which is not necessarily the same as seeking highly empowered states [Mohamed2015](#bib.bib41). We hypothesize that empowerment signals are beneficial for RL, especially in high-dimensional environments and at the beginning of the training process when the initial policy is poor. In this work, we therefore combine reward maximization with empowerment inspired by the two behavioral possibilities outlined above. Hence, we focus on the cumulative RL setting rather than the non-cumulative setting that is typical for empowerment. We furthermore use one-step empowerment, i.e. k=1, because cumulative one-step empowerment learning assigns high values to states with many reachable next states, and hence preserves the original empowerment intuition *without* requiring a multi-step policy—see Section 4.3.
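Since one-step empowerment (k=1) plays a central role in what follows, it is instructive to see the Blahut-Arimoto iteration of Proposition 1 in code. The sketch below estimates one-step empowerment for a single state; the function and variable names are our own, and the deterministic toy channel is an illustration, not from the paper:

```python
import numpy as np

def empowerment(P_s, iters=100):
    """Blahut-Arimoto estimate of one-step empowerment E(s) for a single
    state s, i.e. the capacity of the channel P(s'|s, a) (Proposition 1).

    P_s: matrix of shape (A, S'), P_s[a, s'] = P(s'|s, a).
    Returns the empowerment value in nats and the maximizing policy.
    """
    A, _ = P_s.shape
    pi = np.full(A, 1.0 / A)                       # arbitrary full-support init
    for _ in range(iters):
        # Eq. (3): q(a|s', s) is the Bayesian posterior of the channel
        joint = pi[:, None] * P_s                  # shape (A, S')
        q = joint / (joint.sum(axis=0, keepdims=True) + 1e-12)
        # Eq. (4): pi(a|s) proportional to exp(E_P[log q(a|s', s)])
        logits = (P_s * np.log(q + 1e-12)).sum(axis=1)
        pi = np.exp(logits - logits.max())
        pi /= pi.sum()
    # mutual information I[A; S'|s] at the converged pair
    joint = pi[:, None] * P_s
    marg = joint.sum(axis=0, keepdims=True)        # p(s'|s)
    mi = (joint * np.log((joint + 1e-12) / (pi[:, None] * marg + 1e-12))).sum()
    return mi, pi

# A deterministic injective channel: empowerment equals log |A|
E, pi_star = empowerment(np.eye(3))
```

For the deterministic channel above, every action leads to a distinct next state, so the capacity is log 3 and the maximizing input distribution is uniform, matching the intuition that empowerment counts reachable next states.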
The first idea is to train a policy that trades off reward maximization and *learning* cumulative empowerment:

$$\max_{\pi_{\text{behave}}} \mathbb{E}_{\pi_{\text{behave}},P}\left[\sum_{t=0}^{\infty} \gamma^t \left(\alpha R(s_t,a_t) + \beta \log \frac{p(a_t|s_{t+1},s_t)}{\pi_{\text{behave}}(a_t|s_t)}\right)\right], \tag{5}$$

where α≥0 and β≥0 are scaling factors, and p denotes the inverse dynamics model of π_behave in line with Equation (3). Note that p depends on the optimization argument π_behave, just as in ordinary empowerment, leading to a non-trivial Markov decision problem (MDP). The second idea is to learn cumulative empowerment values *a priori* by solving Equation (5) with α=0 and β=1. The outcome is a policy π⋆_empower (and its inverse dynamics model p) that can be used to construct an intrinsic reward signal which is then added to the external reward:

$$\max_{\pi_{\text{behave}}} \mathbb{E}_{\pi_{\text{behave}},P}\left[\sum_{t=0}^{\infty} \gamma^t \left(\alpha R(s_t,a_t) + \beta\, \mathbb{E}_{\pi^\star_{\text{empower}}(a|s_t)\, P(s'|s_t,a)}\left[\log \frac{p(a|s',s_t)}{\pi^\star_{\text{empower}}(a|s_t)}\right]\right)\right]. \tag{6}$$

Importantly, Equation (6) poses an ordinary MDP since the reward signal is merely extended by another stationary state-dependent signal. Both proposed ideas require solving the novel MDP specified in Equation (5).
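To illustrate the second idea, the state-dependent intrinsic signal inside Equation (6) can be computed in closed form for discrete spaces once π⋆_empower and its inverse dynamics model are available. A minimal sketch, where the function name, array shapes, and toy inputs are our own assumptions:

```python
import numpy as np

def intrinsic_reward(P_s, pi_emp):
    """State-dependent empowerment bonus from Equation (6), for one state s
    in a discrete setting.

    P_s:    (A, S') matrix with P_s[a, s'] = P(s'|s, a).
    pi_emp: (A,) pre-trained policy pi*_empower(a|s).
    Returns E_{pi*, P}[log p(a|s', s) / pi*(a|s)].
    """
    joint = pi_emp[:, None] * P_s                               # p(a, s'|s)
    p_inv = joint / (joint.sum(axis=0, keepdims=True) + 1e-12)  # p(a|s', s)
    ratio = np.log((p_inv + 1e-12) / (pi_emp[:, None] + 1e-12))
    return (joint * ratio).sum()

# The augmented reward of Eq. (6) is then r = alpha * R(s, a) + beta * bonus.
bonus = intrinsic_reward(np.eye(3), np.full(3, 1.0 / 3.0))  # injective channel
```

The bonus is exactly the mutual information of the one-step channel under π⋆_empower, so it is large in states where actions reliably reach many distinct next states and near zero in states where actions have indistinguishable effects.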
In Section 4, we therefore prove the existence of unique values and the convergence of the corresponding value iteration scheme (including a grid world example). We also show how our formulation generalizes existing formulations from the literature. In Section 5, we carry our ideas over to high-dimensional continuous state-action spaces by devising off-policy actor-critic-style algorithms inspired by the proposed MDP formulation. We evaluate these algorithms in MuJoCo, demonstrating better initial and competitive final performance compared to model-free state-of-the-art baselines.

4 Joint Reward Maximization and Empowerment Learning in MDPs
-------------------------------------------------------------

We state our main theoretical result *in advance*, proven in the remainder of this section (an intuition follows): the solution to the MDP from Equation (5) implies unique optimal values V⋆ obeying the Bellman recursion

$$\begin{aligned} V^\star(s) &= \max_{\pi_{\text{behave}}} \mathbb{E}_{\pi_{\text{behave}},P}\left[\sum_{t=0}^{\infty} \gamma^t \left(\alpha R(s_t,a_t) + \beta \log \frac{p(a_t|s_{t+1},s_t)}{\pi_{\text{behave}}(a_t|s_t)}\right) \,\middle|\, s_0 = s\right] \\ &= \max_{\pi_{\text{behave}},\,q} \mathbb{E}_{\pi_{\text{behave}}(a|s)}\left[\alpha R(s,a) + \mathbb{E}_{P(s'|s,a)}\left[\beta \log \frac{q(a|s',s)}{\pi_{\text{behave}}(a|s)} + \gamma V^\star(s')\right]\right] \\ &= \beta \log \sum_a \exp\left(\frac{\alpha}{\beta} R(s,a) + \mathbb{E}_{P(s'|s,a)}\left[\log q^\star(a|s',s) + \frac{\gamma}{\beta} V^\star(s')\right]\right), \end{aligned} \tag{7}$$

where

$$q^\star(a|s',s) = \frac{P(s'|s,a)\, \pi^\star_{\text{behave}}(a|s)}{\sum_a P(s'|s,a)\, \pi^\star_{\text{behave}}(a|s)} = p(a|s',s) \tag{8}$$

is the inverse dynamics model of the optimal behavioral policy π⋆_behave, which assumes the form:
$$\pi^\star_{\text{behave}}(a|s) = \frac{\exp\left(\frac{\alpha}{\beta} R(s,a) + \mathbb{E}_{P(s'|s,a)}\left[\log q^\star(a|s',s) + \frac{\gamma}{\beta} V^\star(s')\right]\right)}{\sum_a \exp\left(\frac{\alpha}{\beta} R(s,a) + \mathbb{E}_{P(s'|s,a)}\left[\log q^\star(a|s',s) + \frac{\gamma}{\beta} V^\star(s')\right]\right)}, \tag{9}$$

where the denominator is just exp((1/β)V⋆(s)). While the remainder of this section explains in detail how Equations (7) to (9) are derived, it is insightful to understand at a high level what makes our formulation non-trivial. The difficulty is that the inverse dynamics model p=q⋆ depends on the optimal policy π⋆_behave and vice versa, leading to a non-standard optimal value identification problem. Proving the existence of V⋆-values and showing how to compute them therefore poses our main theoretical contribution, and implies the existence of at least one (q⋆, π⋆_behave)-pair that satisfies the recursive relationship of Equations (8) and (9). This proof is given in Section 4.1 and leads naturally to a value iteration scheme to compute optimal values in practice.
The convergence of this scheme is proven in Section 4.2, and we also demonstrate value learning in a grid world example—see Section 4.3. In Section 4.4, we elucidate how our formulation generalizes and relates to existing MDP formulations.

### 4.1 Existence of Unique Optimal Values

Following the second line of Equation (7), we define the Bellman operator B⋆: ℝ^|S| → ℝ^|S| as

$$\mathcal{B}^\star V(s) := \max_{\pi_{\text{behave}},\,q} \mathbb{E}_{\pi_{\text{behave}}(a|s)}\left[\alpha R(s,a) + \mathbb{E}_{P(s'|s,a)}\left[\beta \log \frac{q(a|s',s)}{\pi_{\text{behave}}(a|s)} + \gamma V(s')\right]\right]. \tag{10}$$

###### Theorem 1 (Existence of Unique Optimal Values).

Assuming a bounded reward function R, the optimal value vector V⋆ as given in Equation (7) exists and is a unique fixed point V⋆ = B⋆V⋆ of the Bellman operator B⋆ from Equation (10).

Proof.
The proof of Theorem 1 comprises three steps. First, we prove for a given (q, π_behave)-pair the existence of unique values V^(q,π_behave) obeying the recursion

$$V^{(q,\pi_{\text{behave}})}(s) = \mathbb{E}_{\pi_{\text{behave}}(a|s)}\left[\alpha R(s,a) + \mathbb{E}_{P(s'|s,a)}\left[\beta \log \frac{q(a|s',s)}{\pi_{\text{behave}}(a|s)} + \gamma V^{(q,\pi_{\text{behave}})}(s')\right]\right]. \tag{11}$$

This result is obtained through Proposition 2, following [Bertsekas1996](#bib.bib5); [Rubin2012](#bib.bib50); [Grau-Moya2016](#bib.bib18), where we show that the value vector V^(q,π_behave) is a unique fixed point of the operator B_{q,π_behave}: ℝ^|S| → ℝ^|S| given by

$$\mathcal{B}_{q,\pi_{\text{behave}}} V(s) := \mathbb{E}_{\pi_{\text{behave}}(a|s)}\left[\alpha R(s,a) + \mathbb{E}_{P(s'|s,a)}\left[\beta \log \frac{q(a|s',s)}{\pi_{\text{behave}}(a|s)} + \gamma V(s')\right]\right]. \tag{12}$$

Second, we prove in Proposition 3 that solving the right-hand side of Equation (10) for the pair (q, π_behave) can be achieved with a Blahut-Arimoto-style algorithm in line with [Gallager1994](#bib.bib16).
Third, we complete the proof in Proposition 4, based on Propositions 2 and 3, by showing that V⋆ = max_{π_behave, q} V^(q,π_behave), where the vector-valued max-operator is well-defined because both π_behave and q are conditioned on s. The proof completion again follows [Bertsekas1996](#bib.bib5); [Rubin2012](#bib.bib50); [Grau-Moya2016](#bib.bib18). □

###### Proposition 2 (Existence of Unique Values for a Given (q, π_behave)-Pair).

Assuming a bounded reward function R, the value vector V^(q,π_behave) as given in Equation (11) exists and is a unique fixed point V^(q,π_behave) = B_{q,π_behave} V^(q,π_behave) of the Bellman operator B_{q,π_behave} from Equation (12).

As opposed to the Bellman operator B⋆, the operator B_{q,π_behave} does *not* include a max-operation incurring a non-trivial recursive relationship between optimal arguments.
The proof of existence of unique values hence follows standard methodology [Bertsekas1996](#bib.bib5); [Rubin2012](#bib.bib50); [Grau-Moya2016](#bib.bib18) and is given in Appendix A.1.

###### Proposition 3 (Blahut-Arimoto for One Value Iteration Step).

Assuming that R is bounded, the maximization problem max_{π_behave, q} from Equation (10) in the Bellman operator B⋆ can be solved for (q, π_behave) by iterating through the following two equations in an alternating fashion:

$$q^{(m)}(a|s',s) = \frac{P(s'|s,a)\, \pi^{(m)}_{\text{behave}}(a|s)}{\sum_a P(s'|s,a)\, \pi^{(m)}_{\text{behave}}(a|s)}, \tag{13}$$

$$\pi^{(m+1)}_{\text{behave}}(a|s) = \frac{\exp\left(\frac{\alpha}{\beta} R(s,a) + \mathbb{E}_{P(s'|s,a)}\left[\log q^{(m)}(a|s',s) + \frac{\gamma}{\beta} V(s')\right]\right)}{\sum_a \exp\left(\frac{\alpha}{\beta} R(s,a) + \mathbb{E}_{P(s'|s,a)}\left[\log q^{(m)}(a|s',s) + \frac{\gamma}{\beta} V(s')\right]\right)}, \tag{14}$$

where m is the iteration index. The convergence rate is O(1/M) for arbitrary initial π^(0)_behave with full support in A for all s, where M is the total number of iterations. The complexity for a single s is O(M|S||A|).

Proof Outline.
The problem in Proposition 3 is mathematically similar to the maximum channel capacity problem [Shannon1948](#bib.bib57) from Proposition 1, and proving convergence follows similar steps that we outline here—details can be found in Appendix A.2. First, we prove that optimizing the right-hand side of Equation (10) with respect to q for a given π_behave results in Equation (13), according to [Cover2006](#bib.bib10), Lemma 10.8.1. Second, we prove that optimizing with respect to π_behave for a given q results in Equation (14), following standard techniques from variational calculus and Lagrange multipliers.
Third, we prove convergence to a global maximum when iterating alternately through Equations (13) and (14), following [Gallager1994](#bib.bib16).

###### Proposition 4 (Completing the Proof of Theorem 1).

The optimal value vector is given by V⋆ = max_{π_behave, q} V^(q,π_behave) and is a unique fixed point V⋆ = B⋆V⋆ of the Bellman operator B⋆.

Completing the proof of Theorem 1 requires two ingredients: the existence of unique V^(q,π_behave)-values for any (q, π_behave)-pair as proven in Proposition 2, and the fact that the optimal Bellman operator can be expressed as B⋆ = max_{π_behave, q} B_{q,π_behave}, where max_{π_behave, q} is the max-operator from Proposition 3.
The proof then follows standard methodology [Bertsekas1996](#bib.bib5); [Rubin2012](#bib.bib50); [Grau-Moya2016](#bib.bib18), see Appendix A.3.

### 4.2 Value Iteration and Convergence to Optimal Values

In the previous section, we have proven the existence of unique optimal values V⋆ that are a fixed point of the Bellman operator B⋆. This section devises a value iteration scheme based on the operator B⋆ and proves its convergence. We commence with a corollary that expresses B⋆ more concisely.

###### Corollary 1 (Optimal Bellman Operator).

The operator B⋆ from Equation (10) can be written as

$$\mathcal{B}^\star V(s) = \beta \log \sum_a \exp\left(\frac{\alpha}{\beta} R(s,a) + \mathbb{E}_{P(s'|s,a)}\left[\log q_{\text{converged}}(a|s',s) + \frac{\gamma}{\beta} V(s')\right]\right), \tag{15}$$

where q_converged(a|s′,s) is the result of the converged Blahut-Arimoto scheme from Proposition 3.
This result is obtained by plugging the converged solution π^converged_behave from Equation (14) into Equation (10), and it leads naturally to a two-level value iteration algorithm that proceeds as follows: the outer loop updates the values V by applying Equation (15) repeatedly; the inner loop applies the Blahut-Arimoto algorithm from Proposition 3 to identify the q_converged required for the outer value update.

###### Theorem 2 (Convergence to Optimal Values).

Assume bounded R and let ε ∈ ℝ be a positive number such that ε < η/(1−γ), where η = α max_{s,a} |R(s,a)| + β log|A|. If the value iteration scheme with initial values V(s)=0 for all s is run for i ≥ ⌈log_γ (ε(1−γ)/η)⌉ iterations, then ‖V⋆ − B⋆^(i) V‖_∞ ≤ ε, where the notation B⋆^(i) V means applying B⋆ to V i times consecutively.

Proof.
Via a sequence of inequalities, one can show that ‖V⋆ − B⋆^(i) V‖_∞ ≤ γ ‖V⋆ − B⋆^(i−1) V‖_∞ ≤ γ^i ‖V⋆ − V‖_∞ ≤ γ^i η/(1−γ)—see Appendix A.4 for a more detailed derivation. This implies that if ε ≥ γ^i η/(1−γ), then i ≥ ⌈log_γ (ε(1−γ)/η)⌉, presupposing ε < η/(1−γ). □

Conclusion. *Together, Theorems 1 and 2 prove that our proposed value iteration scheme converges to optimal values V⋆ in combination with a corresponding optimal pair (q⋆, π⋆_behave), as described at the beginning of this section in the third line of Equation (7) and in Equations (8) and (9), respectively. The overall complexity is O(iM|S|²|A|), where i and M refer to the numbers of outer and inner iterations.*

Remark.
*Our value iteration is required for both objectives from Section 3 combining reward maximization with empowerment. Equation (5) motivated our scheme in the first place, whereas Equation (6) requires cumulative empowerment values without reward maximization (α=0, β=1).*

### 4.3 Practical Verification in a Grid World Example

In order to practically verify the value iteration scheme from the previous section, we conduct experiments on a grid world example. The outcome is shown in Figure 1, demonstrating how different configurations of α and β, which steer cumulative reward maximization versus empowerment learning, affect the optimal values V⋆. Importantly, the experiments show that our proposal to learn cumulative one-step empowerment values recovers the original intuition of empowerment, in the sense that high values are assigned to states from which many other states can be reached and low values to states where the number of reachable next states is low, *but without* the necessity of maintaining a multi-step policy.

![Value Iteration for a Grid World Example.](https://media.arxiv-vanity.com/render-output/7359457/x1.png)

Figure 1: Value Iteration for a Grid World Example. The agent aims to arrive at the goal 'G' in the lower left—detailed information regarding the setup can be found in Appendix C.1. The plots show optimal values for different α and β: α increases from left to right while β decreases. The leftmost values show raw cumulative empowerment learning (α=0.0, β=1.0). High values are assigned to states where many other states can be reached, i.e. the upper right; and low values to states where the number of reachable next states is low, i.e. close to corners and dead ends. The rightmost values recover ordinary cumulative reward maximization (α=1.0, β=0.0), assigning high values to states close to the goal and low values to states far away from the goal.

### 4.4 Generalization of and Relation to Existing MDP Formulations

Our Bellman operator B⋆ from Equation (10) relates to prior work as follows (see also Appendix A.5).

* Ordinary value iteration [Russell2016](#bib.bib51) is recovered as a special case for α=1 and β=0.
* Cumulative one-step empowerment is recovered as a special case for α=0 and β=1, with *non-cumulative* one-step empowerment [Kumar2018](#bib.bib29) as a further special case of the latter (γ→0).
* When setting q(a|s′,s)=q(a|s), i.e. using a distribution that is *not* conditioned on s′, and *omitting* the maximization w.r.t. q, one recovers as a special case the soft Bellman operator presented e.g. in [Rubin2012](#bib.bib50). Note that this soft Bellman operator has also appeared in numerous other works on MDP formulations and RL [Azar2011](#bib.bib3); [Fox2016](#bib.bib14); [Neu2017](#bib.bib44); [Schulman2017](#bib.bib54); [Leibfried2018](#bib.bib32).
* As a special case of the previous, when q(a|s′,s)=U(A) is the uniform distribution over the action space, one recovers cumulative entropy regularization [Ziebart2010](#bib.bib69); [Nachum2017](#bib.bib43); [Levine2018](#bib.bib33) that inspired algorithms such as soft Q-learning [Haarnoja2017](#bib.bib20) and soft actor-critic [Haarnoja2018](#bib.bib21); [Haarnoja2019](#bib.bib22).
* When dropping the conditioning on s′ and s by setting q(a|s′,s)=q(a) but *without omitting* the maximization w.r.t. q, one recovers a formulation similar to [Tishby2011](#bib.bib64) based on mutual-information regularization [Shannon1959](#bib.bib58); [Sims2003](#bib.bib59); [Genewein2015](#bib.bib17); [Leibfried2016](#bib.bib31) that spurred RL algorithms such as [Leibfried2015](#bib.bib30); [Grau-Moya2019](#bib.bib19).
* When replacing q(a|s′,s) with q(a|s′,a′), where s′ and a′ refer to the state-action pair of the previous time step, one recovers a formulation similar to [Tiomkin2018](#bib.bib63) based on the information-theoretic principle of directed information [Marko1973](#bib.bib37); [Kramer1998](#bib.bib28); [Massey2005](#bib.bib38).

5 Scaling to High-Dimensional Environments
-------------------------------------------

In the previous section, we presented a novel Bellman operator in combination with a value iteration scheme to combine reward maximization and empowerment.
In this section, by leveraging parametric function approximators, we validate our ideas in high-dimensional state-action spaces and in the absence of prior knowledge of the state-transition function. In Section [5.1](#S5.SS1 "5.1 Empowered Off-Policy Actor-Critic Methods with Parametric Function Approximators ‣ 5 Scaling to High-Dimensional Environments ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment"), we devise novel actor-critic algorithms for RL based on our MDP formulation, since they are naturally capable of handling both continuous state and action spaces. In Section [5.2](#S5.SS2 "5.2 Experiments with Deep Function Approximators in MuJoCo ‣ 5 Scaling to High-Dimensional Environments ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment"), we practically confirm that empowerment can boost RL in the high-dimensional robotics simulator domain of MuJoCo using deep neural networks.

### 5.1 Empowered Off-Policy Actor-Critic Methods with Parametric Function Approximators

Contemporary off-policy actor-critic approaches for RL [Lillicrap2016](#bib.bib35); [Abdolmaleki2018](#bib.bib1); [Fujimoto2018](#bib.bib15) follow the policy gradient theorem [Sutton2000](#bib.bib62); [Degris2012](#bib.bib13) and learn two parametric function approximators: one for the behavioral policy πϕ(a|s) with parameters ϕ, and one for the state-action value function Qθ(s,a) of the parametric policy πϕ with parameters θ. The policy learning objective usually assumes the form max_ϕ E_{s∼D}[E_{π_ϕ(a|s)}[Q_θ(s,a)]], where D refers to a replay buffer [Lin1993](#bib.bib36) that stores collected state transitions from the environment.
Following [Haarnoja2018](#bib.bib21), Q-values are learned most efficiently by introducing another function approximator Vψ for state values of πϕ with parameters ψ, using the objective

min_θ E_{s,a,r,s′∼D}[(Q_θ(s,a) − (αr + γV_ψ(s′)))²],   (16)

where (s,a,r,s′) refers to an environment interaction sampled from the replay buffer (r stands for the observed reward signal). We multiply r by the scaling factor α from our formulation because Equation ([16](#S5.E16 "(16) ‣ 5.1 Empowered Off-Policy Actor-Critic Methods with Parametric Function Approximators ‣ 5 Scaling to High-Dimensional Environments ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment")) can then be directly used for the parametric methods we propose. Learning policy parameters ϕ and value parameters ψ requires, however, novel objectives with two additional approximators: one for the inverse dynamics model pχ(a|s′,s) of πϕ, and one for the transition function Pξ(s′|s,a) (with parameters χ and ξ respectively). While the necessity for pχ is clear, e.g. from inspecting Equation ([5](#S3.E5 "(5) ‣ 3 Motivation: Combining Reward Maximization with Empowerment ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment")), the necessity for Pξ will fall into place shortly as we move forward. In order to preserve a clear view, let us define the quantity

f(s,a) := E_{P_ξ(s′|s,a)}[log p_χ(a|s′,s)] − log π_ϕ(a|s),

which is short-hand notation for the empowerment-induced addition to the reward signal—compare to Equation ([5](#S3.E5 "(5) ‣ 3 Motivation: Combining Reward Maximization with Empowerment ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment")).
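The quantity f(s,a) can be estimated by Monte-Carlo sampling from the learned transition model; the following is a minimal sketch, where the `sample`/`log_prob` model interfaces are assumptions of ours (the paper uses parametric networks for P_ξ, p_χ and π_ϕ):

```python
import numpy as np

def empowerment_term(trans, inv, policy, s, a, n_samples=32, rng=None):
    """Monte-Carlo estimate of
    f(s,a) = E_{P_xi(s'|s,a)}[log p_chi(a|s',s)] - log pi_phi(a|s),
    sampling next states s' from the learned transition model."""
    rng = np.random.default_rng(0) if rng is None else rng
    log_inv = np.mean([inv.log_prob(a, trans.sample(s, a, rng), s)
                       for _ in range(n_samples)])
    return float(log_inv - policy.log_prob(a, s))
```

Note that for a deterministic transition with a perfect inverse model (log p_χ = 0), the estimate reduces to −log π_ϕ(a|s), consistent with the entropy-regularization special case of Section 4.4.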
We then commence with the objective for value function learning:

min_ψ E_{s∼D}[(V_ψ(s) − E_{π_ϕ(a|s)}[Q_θ(s,a) + βf(s,a)])²],   (17)

which is similar to the standard value objective but with the added term βf(s,a) as a result of joint cumulative empowerment learning. At this point, the necessity for a transition model Pξ becomes apparent. In the above equation, new actions a need to be sampled from the policy πϕ for a given s. However, the inverse dynamics model (inside f) depends on the subsequent state s′ as well, therefore requiring a prediction for the next state. Note also that (s,a,r,s′)-tuples from the replay buffer as in Equation ([16](#S5.E16 "(16) ‣ 5.1 Empowered Off-Policy Actor-Critic Methods with Parametric Function Approximators ‣ 5 Scaling to High-Dimensional Environments ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment")) can’t be used here, because the expectation over a is w.r.t. the current policy, whereas tuples from the replay buffer come from a mixture of policies at an earlier stage of training. Extending the ordinary actor-critic policy objective with the empowerment-induced term f yields:

max_ϕ E_{s∼D}[E_{π_ϕ(a|s)}[Q_θ(s,a) + βf(s,a)]].   (18)

The remaining parameters to be optimized are χ and ξ from the inverse dynamics model pχ and the transition model Pξ. Both problems are supervised learning problems that can be addressed by log-likelihood maximization, leading to max_χ E_{s∼D}[E_{π_ϕ(a|s)P_ξ(s′|s,a)}[log p_χ(a|s′,s)]] and max_ξ E_{s,a,s′∼D}[log P_ξ(s′|s,a)] using samples from the replay buffer. Coming back to our motivation from Section [3](#S3 "3 Motivation: Combining Reward Maximization with Empowerment ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment"), we propose two novel empowerment-inspired actor-critic approaches based on the optimization objectives specified in this section.
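For a finite action set, the regression target inside Equation (17) can be computed exactly; a minimal sketch with tabular stand-ins (our own naming, not the paper's parametric networks):

```python
def value_target(q, f, pi, s, beta):
    """Target of Eq. (17) for a finite action set:
    E_{pi(a|s)}[Q(s,a) + beta * f(s,a)],
    with pi[s] a dict of action probabilities and q, f dicts standing in
    for Q_theta and the empowerment term f(s,a)."""
    return float(sum(p * (q[(s, a)] + beta * f[(s, a)])
                     for a, p in pi[s].items()))
```

The same expression, maximized over ϕ instead of regressed onto V_ψ, is the policy objective of Equation (18).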
The first combines cumulative reward maximization and empowerment learning following Equation ([5](#S3.E5 "(5) ‣ 3 Motivation: Combining Reward Maximization with Empowerment ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment")), which we refer to as empowered actor-critic. The second learns cumulative empowerment values to construct intrinsic rewards following Equation ([6](#S3.E6 "(6) ‣ 3 Motivation: Combining Reward Maximization with Empowerment ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment")), which we refer to as actor-critic with intrinsic empowerment.

Empowered Actor-Critic (EAC). In line with standard off-policy actor-critic methods [Lillicrap2016](#bib.bib35); [Fujimoto2018](#bib.bib15); [Haarnoja2018](#bib.bib21), EAC interacts with the environment iteratively, storing transition tuples (s,a,r,s′) in a replay buffer. After each interaction, a training batch {(s,a,r,s′)^(b)}_{b=1}^{B} ∼ D of size B is sampled from the buffer to perform a *single* gradient update on the objectives from Equations ([16](#S5.E16 "(16) ‣ 5.1 Empowered Off-Policy Actor-Critic Methods with Parametric Function Approximators ‣ 5 Scaling to High-Dimensional Environments ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment")) to ([18](#S5.E18 "(18) ‣ 5.1 Empowered Off-Policy Actor-Critic Methods with Parametric Function Approximators ‣ 5 Scaling to High-Dimensional Environments ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment")) as well as on the log-likelihood objectives for the inverse dynamics and transition model—see Appendix [B](#A2 "Appendix B Pseudocode for the Empowered Actor-Critic (EAC) ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment") for pseudocode.

Actor-Critic with Intrinsic Empowerment (ACIE). By setting α=0 and β=1, EAC can train an agent merely focusing on cumulative empowerment learning.
Since EAC is off-policy, it can learn with samples obtained from executing *any* policy in the real environment, e.g. the actor of *any other* reward-maximizing actor-critic algorithm. We can then extend the external rewards r_t at time t of this actor-critic algorithm with intrinsic rewards E_{π_ϕ(a|s_t)P_ξ(s′|s_t,a)}[log (p_χ(a|s′,s_t)/π_ϕ(a|s_t))] according to Equation ([6](#S3.E6 "(6) ‣ 3 Motivation: Combining Reward Maximization with Empowerment ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment")), where (ϕ,ξ,χ) are the result of *concurrent* raw empowerment learning with EAC. This idea is similar to the preliminary work of [Kumar2018](#bib.bib29) using non-cumulative empowerment as intrinsic motivation for deep value-based RL with discrete actions in the Atari game Montezuma’s Revenge.

### 5.2 Experiments with Deep Function Approximators in MuJoCo

We validate EAC and ACIE in the robotics simulator MuJoCo [Todorov2012](#bib.bib65); [Brockman2016](#bib.bib8) with deep neural nets under the same setup for each experiment following [vanHasselt2010](#bib.bib66); [Kingma2014](#bib.bib25); [Rezende2014](#bib.bib49); [Kingma2015](#bib.bib24); [Schulman2015](#bib.bib55); [Lillicrap2016](#bib.bib35); [vanHasselt2016](#bib.bib67); [Schulman2017b](#bib.bib56); [Abdolmaleki2018](#bib.bib1); [Chua2018](#bib.bib9); [Fujimoto2018](#bib.bib15); [Haarnoja2018](#bib.bib21)—see Appendix [C.2](#A3.SS2 "C.2 MuJoCo ‣ Appendix C Experiments ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment") for details. While EAC is a standalone algorithm, ACIE can be combined with any RL algorithm (we use the state-of-the-art model-free SAC [Haarnoja2018](#bib.bib21)). We compare against DDPG [Lillicrap2016](#bib.bib35) and PPO [Schulman2017b](#bib.bib56) from RLlib [Liang2018](#bib.bib34) as well as SAC on the MuJoCo v2-environments (ten seeds per run [Pineau2018](#bib.bib46)).
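The ACIE reward augmentation described above amounts to a simple per-step addition, sketched here under the assumption that the intrinsic empowerment estimates have already been computed with the concurrently trained EAC models (ϕ, ξ, χ):

```python
def augment_rewards(rewards, intrinsics, beta):
    """ACIE: add the beta-weighted intrinsic empowerment signal to each
    external reward r_t along a trajectory, following Eq. (6)."""
    return [r + beta * i for r, i in zip(rewards, intrinsics)]
```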
The results in Figure [2](#S5.F2 "Figure 2 ‣ 5.2 Experiments with Deep Function Approximators in MuJoCo ‣ 5 Scaling to High-Dimensional Environments ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment") confirm that both EAC and ACIE can attain better initial performance than model-free baselines. While this holds true for both approaches on the pendulum benchmarks (balancing and swing up), our empowered methods can also boost RL in demanding environments like Hopper, Ant and Humanoid (the latter two being amongst the most difficult MuJoCo tasks). EAC significantly improves initial learning in Ant, whereas ACIE boosts SAC in Hopper and Humanoid. While EAC outperforms PPO and DDPG in almost all tasks, it is not consistently better than SAC. Similarly, the added intrinsic reward from ACIE to SAC does not always help. *This is not unexpected as it cannot in general be ruled out that reward functions assign high (low) rewards to lowly (highly) empowered states, in which case the two learning signals may become partially conflicting.*

![MuJoCo Experiments.](https://media.arxiv-vanity.com/render-output/7359457/x2.png)

Figure 2: MuJoCo Experiments. The plots show maximum episodic rewards (averaged over the last 100 episodes) achieved so far [Chua2018](#bib.bib9) versus steps—*non-maximum* episodic reward plots can be found in Figure [3](#S5.F3 "Figure 3 ‣ 5.2 Experiments with Deep Function Approximators in MuJoCo ‣ 5 Scaling to High-Dimensional Environments ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment"). EAC and ACIE are compared to DDPG, PPO and SAC (DDPG did not work in Ant, see [Haarnoja2018](#bib.bib21) and Appendix [C.2](#A3.SS2 "C.2 MuJoCo ‣ Appendix C Experiments ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment") for an explanation). Shaded areas refer to the standard error.
Both EAC and ACIE improve initial learning over baselines in the three pendulum tasks (upper row). In demanding problems like Hopper, Ant and Humanoid, our methods can boost RL. In terms of final performance, EAC is competitive with the baselines: it consistently outperforms DDPG and PPO on all tasks except Hopper, but is not always better than SAC. Similarly, the ACIE-signal does not always help SAC. This is not unexpected as extrinsic and empowered rewards may partially conflict. For the sake of completeness, we report Figure [3](#S5.F3 "Figure 3 ‣ 5.2 Experiments with Deep Function Approximators in MuJoCo ‣ 5 Scaling to High-Dimensional Environments ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment"), which is similar to Figure [2](#S5.F2 "Figure 2 ‣ 5.2 Experiments with Deep Function Approximators in MuJoCo ‣ 5 Scaling to High-Dimensional Environments ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment") but shows episodic rewards and *not* maximum episodic rewards obtained so far [Chua2018](#bib.bib9). Also, limits of y-axes are preserved for the pendulum tasks. Note that our SAC baseline is comparable with the SAC from [Haarnoja2019](#bib.bib22) on Hopper-v2, Walker2d-v2, Ant-v2 and Humanoid-v2 after 5·10^5 steps (the SAC from [Haarnoja2018](#bib.bib21) uses the earlier v1-versions of MuJoCo and is hence not an optimal reference). However, there is a discrepancy on HalfCheetah-v2. This was noted earlier by others who tried to reproduce SAC results on HalfCheetah-v2 but failed to obtain episodic rewards as high as in [Haarnoja2018](#bib.bib21); [Haarnoja2019](#bib.bib22), leading to a GitHub issue <https://github.com/rail-berkeley/softlearning/issues/75>. The final conclusion of this issue was that differences in performance are caused by different seed settings and are therefore of a statistical nature (comparing all algorithms under the same seed settings is hence valid).
![Raw Results of MuJoCo Experiments.](https://media.arxiv-vanity.com/render-output/7359457/x3.png)

Figure 3: Raw Results of MuJoCo Experiments. The plots are similar to the plots from Figure [2](#S5.F2 "Figure 2 ‣ 5.2 Experiments with Deep Function Approximators in MuJoCo ‣ 5 Scaling to High-Dimensional Environments ‣ A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment"), but report episodic rewards (averaged over the last 100 episodes) versus steps—*not* maximum episodic rewards seen so far as in [Chua2018](#bib.bib9). For the pendulum tasks, the limits of the y-axes are preserved.

6 Conclusion
-------------

This paper provides a theoretical contribution via a unified formulation for reward maximization and empowerment that generalizes Bellman’s optimality principle and recent information-theoretic extensions to it. We proved the existence of and convergence to unique optimal values, and practically validated our ideas by devising novel parametric actor-critic algorithms inspired by our formulation. These were evaluated on the high-dimensional MuJoCo benchmark, demonstrating that empowerment can boost RL in challenging robotics tasks (e.g. Ant and Humanoid). The most promising line of future research is to investigate scheduling schemes that dynamically trade off rewards vs. empowerment with the prospect of obtaining better asymptotic performance. Empowerment could also be particularly useful in a multi-task setting where task transfer could benefit from initially empowered agents.

#### Acknowledgments

We thank Haitham Bou-Ammar for pointing us in the direction of empowerment.
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} > @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} > @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} > @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} > @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-cal-R; src: 
local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} > @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} > @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} > @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} > @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} > @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} > @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} > @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} > @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} > @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} > @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} > @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} > @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} > @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} > @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} > @font-face 
> The max of a bigger set of possible outcomes is greater than the max of a smaller set of possible outcomes. Thus, optimal agents will tend to seek power.

But I want to step back. What I call "the power-seeking theorems" aren't really about optimal choice. They're about two facts:

1. Being powerful means you can make more outcomes happen, and
2. *There are more ways to choose something from a bigger set of outcomes than from a smaller set.*

For example, suppose our cute robot Frank must choose one of several kinds of fruit.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/6b6db28b0164d8da5c2d911acdd347785b7d43fb7dca780a.png)

🍒 vs 🍎 vs 🍌

So far, I proved something like: "if the agent has a utility function over fruits, then for at least 2/3 of the possible utility functions it could have, it'll be optimal to choose something from {🍌,🍎}." This is because for every way 🍒 could be strictly optimal, you can make a new utility function that permutes the 🍒 and 🍎 reward, and another that permutes the 🍌 and 🍒 reward. So for every "I like 🍒 strictly more" utility function, there are at least two permuted variants which strictly prefer 🍎 or 🍌.

Superficially, it seems like this argument relies on optimal decision-making. But that's not true. The crux is instead that we can *flexibly retarget* the agent's decision-making: **for every way the agent could end up choosing 🍒, we can change a variable in its cognition (its utility function) and make it choose 🍌 or 🍎 instead.** Many decision-making procedures are like this.

First, a few definitions.
*I aim for this post to be readable without much attention paid to the math.*

The agent can bring about different outcomes via different policies. In stochastic environments, these policies will induce outcome *lotteries*, like 50% 🍌 / 50% 🍎. Let C contain all the outcome lotteries the agent can bring about.

**Definition: Permuting outcome lotteries.** Suppose there are d outcomes. Let X ⊆ R^d be a set of outcome lotteries (with the probability of outcome k given by the k-th entry), and let ϕ ∈ S_d be a permutation of the d possible outcomes. Then ϕ acts on X by swapping around the labels of its elements: ϕ⋅X := {P_ϕ x ∣ x ∈ X}, where P_ϕ is the permutation matrix corresponding to ϕ.

For example, let's define the set of all possible fruit outcomes F_C := {🍌,🍎,🍒} (each different fruit stands in for a standard basis vector of R^3). Let F_B := {🍌,🍎} and F_A := {🍒}. Let ϕ_1 := (🍒 🍎) swap the cherry and apple, and let ϕ_2 := (🍒 🍌) transpose the cherry and banana. Both of these ϕ are *involutions*: applying either one twice leaves every fruit where it started.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/4984bcae29d616dd130c2e36b5f39e5340b356c02da9a0f2.png)

*Another illustration beyond the fruit setting: set 2 contains three copies of set 1.*

**Definition: Containment of set copies.** Let A, B ⊆ R^d. B *contains* n *copies of* A when there exist involutions ϕ_1, …, ϕ_n such that ∀i: ϕ_i⋅A =: B_i ⊆ B and ∀i≠j: ϕ_i⋅B_j = B_j.

(The subtext is that B is the set of things the agent could make happen if it gained power, and A is the set of things the agent could make happen without gaining power. Because power gives more options, B will usually be larger than A. Here, we'll talk about the case where B contains *many copies of* A.)

In the fruit context:

* ϕ_1⋅F_A = {ϕ_1(🍒)} = {🍎} ⊊ {🍌,🍎} =: F_B.
* ϕ_2⋅F_A = {ϕ_2(🍒)} = {🍌} ⊊ {🍌,🍎} =: F_B.

Note that ϕ_1⋅{🍌} = {🍌} and ϕ_2⋅{🍎} = {🍎}: each ϕ_i leaves the other subset of F_B alone. Therefore, F_B := {🍌,🍎} contains two copies of F_A := {🍒} via the involutions ϕ_1 and ϕ_2.

Further note that ϕ_i⋅F_C = F_C for i = 1, 2.
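These definitions are concrete enough to check mechanically. Here's a minimal runnable sketch (Python/NumPy; my code, not the post's) verifying that F_B := {🍌,🍎} contains two copies of F_A := {🍒} via permutation matrices:

```python
import numpy as np

# Outcomes: 0 = banana, 1 = apple, 2 = cherry; each pure outcome is a standard
# basis vector of R^3 (a degenerate outcome lottery).
BANANA, APPLE, CHERRY = np.eye(3)

def perm_matrix(perm):
    """Permutation matrix P_phi sending outcome i to outcome perm[i]."""
    d = len(perm)
    P = np.zeros((d, d))
    for i, j in enumerate(perm):
        P[j, i] = 1.0
    return P

phi1 = perm_matrix((0, 2, 1))  # transposes cherry and apple
phi2 = perm_matrix((2, 1, 0))  # transposes cherry and banana

def act(P, X):
    """phi . X := {P_phi x | x in X}, as a set of tuples for easy comparison."""
    return {tuple(P @ np.asarray(x)) for x in X}

F_A = {tuple(CHERRY)}
F_B = {tuple(BANANA), tuple(APPLE)}
F_C = F_A | F_B

# Each involution maps F_A to a distinct subset of F_B...
assert act(phi1, F_A) == {tuple(APPLE)}
assert act(phi2, F_A) == {tuple(BANANA)}
# ...while leaving the other copy inside F_B fixed:
assert act(phi1, {tuple(BANANA)}) == {tuple(BANANA)}
assert act(phi2, {tuple(APPLE)}) == {tuple(APPLE)}
# Involution check: applying either phi twice is the identity.
assert np.allclose(phi1 @ phi1, np.eye(3)) and np.allclose(phi2 @ phi2, np.eye(3))
# And both fix the full outcome set F_C:
assert act(phi1, F_C) == F_C and act(phi2, F_C) == F_C
```

Here ϕ⋅X is computed exactly as in the definition above: left-multiplication of each outcome lottery by the permutation matrix P_ϕ.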
The involutions just shuffle around options, instead of changing the set of available outcomes.

So suppose Frank is deciding whether he wants a fruit from F_A := {🍒} or from F_B := {🍌,🍎}. It's definitely possible to be motivated to pick 🍒. However, it sure seems like for lots of ways Frank might make decisions, *most parameter settings (utility functions) will lead to Frank picking* 🍌 *or* 🍎. There are just *more* outcomes in F_B, since it contains two copies of F_A!

**Definition: Orbit tendencies.** Let f_1, f_2 : R^d → R be functions from utility functions to real numbers, let U ⊆ R^d be a set of utility functions, and let n ≥ 1. f_1 ≥ⁿmost:U f_2 when for *all* utility functions u ∈ U:

|{u_ϕ ∈ S_d⋅u ∣ f_1(u_ϕ) > f_2(u_ϕ)}| ≥ n ⋅ |{u_ϕ ∈ S_d⋅u ∣ f_1(u_ϕ) < f_2(u_ϕ)}|.

That is, across u's orbit under the permutation group S_d, the number of permuted utility functions for which f_1 > f_2 is at least n times the number for which f_1 < f_2. In this post, if I don't specify a subset U, that means the statement holds for U = R^d.

For example, the [past results](https://www.lesswrong.com/posts/Yc5QSSZCQ9qdyxZF6/the-more-power-at-stake-the-stronger-instrumental) show that IsOptimal(F_B) ≥²most IsOptimal(F_A). This implies that for *every* utility function, at least 2/3 of its orbit makes F_B optimal. (For simplicity, I'll focus on "for most utility functions" instead of "for most distributions over utility functions", even though most of the results apply to the latter.)

Orbit tendencies apply to many decision-making procedures
=========================================================

For example, suppose the agent is a [satisficer](https://www.lesswrong.com/tag/satisficer). I'll define this as: the agent uniformly randomly selects an outcome lottery with expected utility at least some threshold t.

**Definition: Satisficing.** For finite X ⊆ C ⊊ R^d and utility function u ∈ R^d, define

Satisfice_t(X, C ∣ u) := |X ∩ {c ∈ C ∣ c⊤u ≥ t}| / |{c ∈ C ∣ c⊤u ≥ t}|,

with the function returning 0 when the denominator is 0. Satisfice_t returns the probability that the agent selects a u-satisficing outcome lottery from X.

And you know what?
Those ever-so-*suboptimal* satisficers are *also "twice as likely" to choose elements from* F_B *than from* F_A.

**Fact.** Satisfice_t({🍌,🍎}, {🍌,🍎,🍒} ∣ u) ≥²most Satisfice_t({🍒}, {🍌,🍎,🍒} ∣ u).

Why? Here are the two key properties that Satisfice_t has:

(1) Weakly increasing under joint permutation of its arguments
--------------------------------------------------------------

Satisfice_t doesn't care what "label" an outcome lottery has: just its expected utility. Suppose that for utility function u, 🍒 is one of two u-satisficing elements: 🍒 has a 1/2 chance of being selected by the u-satisficer. Then ϕ_1⋅🍒 = 🍎 has a 1/2 chance of being selected by the (ϕ_1⋅u)-satisficer. If you swap which fruit you're considering, and you also swap the utility for that fruit to match, then that fruit's selection probability remains the same. More precisely:

Satisfice_t({🍒}, {🍌,🍎,🍒} ∣ u) = Satisfice_t(ϕ_1⋅{🍒}, ϕ_1⋅{🍌,🍎,🍒} ∣ ϕ_1⋅u) = Satisfice_t({🍎}, {🍌,🍎,🍒} ∣ ϕ_1⋅u).

In a sense, Satisfice_t is not "biased" against 🍎: by changing the utility function, you can advantage 🍎 so that it's now as probable as 🍒 was before.

Optional notes on this property:

* While Satisfice_t is invariant under joint permutation, all we need in general is that it be *weakly increasing* under both ϕ_1 and ϕ_2.
  + Formally, Satisfice_t(F_A, F_C ∣ u) ≤ Satisfice_t(ϕ_1⋅F_A, ϕ_1⋅F_C ∣ ϕ_1⋅u) and Satisfice_t(F_A, F_C ∣ u) ≤ Satisfice_t(ϕ_2⋅F_A, ϕ_2⋅F_C ∣ ϕ_2⋅u).
  + This allows for decision-making functions which are biased towards picking a fruit from F_B.
* I consider this property (1) to be a form of functional retargetability.

(2) Order-preserving on the first argument
------------------------------------------

Satisficers must have greater probability of selecting an outcome lottery from a superset than from one of its subsets. Formally, if X′ ⊆ X, then it must hold that Satisfice_t(X′, C ∣ u) ≤ Satisfice_t(X, C ∣ u). And indeed this holds: supersets can only contain a greater fraction of C's satisficing elements.

And that's all.
---------------
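The Fact is also easy to verify by brute force. A quick sketch (again my code, not the post's): enumerate the orbit of u under S_3 and count how often the satisficer favors F_B over F_A:

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)

# Outcomes: index 0 = banana, 1 = apple, 2 = cherry (standard basis vectors).
C = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
F_A = [(0, 0, 1)]               # {cherry}
F_B = [(1, 0, 0), (0, 1, 0)]    # {banana, apple}

def satisfice(X, C, u, t):
    """Satisfice_t(X, C | u): probability a t-satisficer picks an element of X."""
    good = [c for c in C if np.dot(c, u) >= t]
    if not good:
        return 0.0
    return sum(1 for c in good if c in X) / len(good)

def orbit_counts(u, t):
    """Over the orbit S_3 . u, count how often Satisfice_t favors F_B vs F_A."""
    gt = lt = 0
    for perm in itertools.permutations(range(3)):
        u_phi = tuple(u[perm[i]] for i in range(3))  # permuted utility function
        f_B = satisfice(F_B, C, u_phi, t)
        f_A = satisfice(F_A, C, u_phi, t)
        gt += f_B > f_A
        lt += f_B < f_A
    return gt, lt

# For *every* u and threshold t, at least twice as much of the orbit favors F_B.
for _ in range(500):
    u, t = rng.normal(size=3), rng.normal()
    gt, lt = orbit_counts(u, t)
    assert gt >= 2 * lt, (u, t, gt, lt)
```

For instance, with u = (1, 2, 3) and t = 2.5, only the value-3 outcome satisfices, so four of the six orbit elements favor F_B and two favor F_A: a 2-to-1 split, exactly as the Fact requires.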
If (1) and (2) hold for a function, then that function will obey the orbit tendencies. Let me show you what I mean. As illustrated by Table 1 in the linked paper, the power-seeking theorems apply to:

1. Expected-utility-maximizing agents.
2. EU-minimizing agents.
   1. Notice that EU minimization is equivalent to maximizing −1 × the utility function. This is a hint that EU-maximization instrumental convergence is only a special case of something much broader.
3. Boltzmann-rational agents which are exponentially more likely to choose outcome lotteries with greater expected utility.
4. Agents which uniformly randomly draw k outcome lotteries, and then choose the best.
5. Satisficers.
6. Quantilizers with a uniform base distribution.
   1. I conjecture that this holds for base distributions which assign sufficient probability to B.

But that's not all. There's more. If the agent makes decisions *only based on the expected utility of different plans*, then the power-seeking theorems apply. And I'm not just talking about EU maximizers. I'm talking about *any* function which only depends on expected utility: EU minimizers, agents which choose plans if and only if their EU is equal to 1, agents which grade plans based on how close their EU is to some threshold value. There is no clever EU-based scheme which doesn't have orbit-level power-seeking incentives.

*Suppose n is large, that most outcomes in B are bad, and that the agent makes decisions according to expected utility. Then alignment is hard, because for every way things could go right, there are at least n ways things could go wrong! And n can be* ***huge***.
*In a* [*previous toy example*](https://www.lesswrong.com/posts/hzeLSQ9nwDkPc4KNt/seeking-power-is-convergently-instrumental-in-a-broad-class#Beyond_survival_seeking)*, it equaled* 10^182.

It doesn't matter if the decision-making procedure f is rational, or anti-rational, or Boltzmann-rational, or satisficing, or randomly choosing outcomes, or only choosing outcome lotteries with expected utility equal to 1: there are more ways to choose elements of B than there are ways to choose elements of A.

These results also have closure properties. For example, closure under mixing decision procedures, as when the agent has a 50% chance of selecting Boltzmann-rationally and a 50% chance of satisficing. Or even more exotic transformations: suppose the probability of f choosing something from X is proportional to

P(X is Boltzmann-rational under u) ⋅ P(X satisfices u) + P(X is optimal for u).

Then the theorems still apply.

**There is no possible way to combine EU-based decision-making functions so that orbit-level instrumental convergence doesn't apply to their composite.**

To "escape" these incentives, you have to make the theorems fail to apply. Here are a few ways:

1. Rule out most power-seeking orbit elements *a priori* (AKA "know a lot about what objectives you'll specify").
   1. As a contrived example, suppose the agent sees a green pixel iff it sought power, but we know that the specified utility function zeros the output if a green pixel is detected along the trajectory. Here, this would be enough information about the objective to update away from the default position that formal power-seeking is probably incentivized.
   2. This seems risky, because much of the alignment problem comes from *not knowing the consequences of specifying an objective function*.
2. Use a decision-making procedure with intrinsic bias towards the elements of A.
   1. For example, imitation learning is not EU-based, but is instead biased to imitate the non-crazy-power-seeking behavior shown on the training distribution.
   2. For example, modern RL algorithms will not reliably produce policies which seek real-world power, because the policies *won't reach or reason about that part of the state space anyways*. This is a bias towards non-power-seeking plans.
3. Pray that the relevant symmetries don't hold.
   1. Often, they won't hold exactly.
   2. But common sense dictates that they don't have to hold exactly for instrumental convergence to exist: if you inject ϵ irregular randomness into the dynamics, do agents stop tending to stay alive? Orbit-level instrumental convergence is just a *particularly strong* version.
4. Find an ontology (like POMDPs or infinite MDPs) where the results don't apply for technical reasons.
   1. I don't see why POMDPs should be any nicer.
   2. Ideally, we'd ground agency in a way that makes alignment simple and natural, which automatically evades these arguments for doom.
   3. Orbit-level arguments seem easy to apply to a range of previously unmentioned settings, like causal DAGs with choice nodes.
5. Don't do anything with policies.
   1. Example: microscope AI.

Lastly, we maybe don't want to *escape* these incentives entirely, because we probably want smart agents which will seek power *for us*. I think that, empirically, the power-requiring outcomes of B are mostly induced by the agent first seeking power over humans.

Retargetable training processes produce instrumental convergence
================================================================

These results let us start talking about the incentives of real-world trained policies. In an appendix, I work through a specific example of how Q-learning on a toy problem provably exhibits orbit-level instrumental convergence. The problem is small enough that I computed the probability that each final policy was trained.
Realistically, we aren't going to get a closed-form expression for the distribution over policies learned by PPO with randomly initialized deep networks trained via SGD with learning-rate schedules and dropout and intrinsic motivation, etc. But we don't need it. These results give us a *formal criterion* for when policy-training processes will tend to produce policies with convergent instrumental incentives.

The idea is: consider some set of reward functions, and let B contain n copies of A. Then if, for each reward function in the set, you can retarget the training process so that B's copy of A is at least as likely as A was originally, these reward functions will tend to produce trained policies which go to B. For example, if agents trained on objectives R tend to go right, switching reward from right-states to left-states also pushes the trained policies to go left. This can happen when changing the reward changes what gets "reinforced": what used to reinforce going right now reinforces going left.

Suppose we're training an RL agent to go right in MuJoCo, with reward equal to its x-coordinate.

![Roboschool](https://openai.com/content/images/2017/05/image4.gif)

If you permute the reward so that high y-values are rewarded, the trained policies should nearly perfectly symmetrically reflect that change. Insofar as x-maximizing policies were trained, now y-maximizing policies will be trained.

This criterion is going to be a bit of a mouthful. The basic idea is that when the training process can be redirected such that trained agents induce a variety of outcomes, then most objective functions will train agents which *do induce* those outcomes. In other words: orbit-level instrumental convergence will hold.

**Theorem: Training retargetability criterion.** Suppose the agent interacts with an environment with d potential outcomes (e.g. world states or observation histories).
Let P be a probability distribution over joint parameter space Θ, and let train : Θ × R^d → Δ(Π) be a policy-training procedure which takes in a parameter setting and a utility function u ∈ R^d, and which produces a probability distribution over policies. Let U ⊆ R^d be a set of utility functions which is closed under permutation. Let A, B be sets of outcome lotteries such that B contains n copies of A via ϕ_1, ..., ϕ_n. Quantify the probability that the trained policy induces an element of outcome lottery set X ⊆ R^d:

f(X ∣ u) := Pr[π does something in X], where θ ∼ P and π ∼ train(θ, u).

If ∀u ∈ U, i ∈ {1, ..., n}: f(A ∣ u) ≤ f(ϕ_i⋅A ∣ ϕ_i⋅u), then f(B ∣ u) ≥ⁿmost f(A ∣ u).

**Proof.** If X′ ⊆ X, then f(X′ ∣ u) ≤ f(X ∣ u) by the monotonicity of probability, and so (2), order preservation on the first argument, holds. By assumption, (1), increasing under joint permutation, holds. Therefore, Lemma B.6 (in the linked paper) implies the desired result. QED.

This criterion is testable. Although we can't test all reward functions, we *can* test how retargetable the training process is in simulated environments for a variety of reward functions. If it can't retarget easily for reasonable objectives, then we conclude that instrumental convergence isn't arising from retargetability at the training-process level.

Let's think about Minecraft. (Technically, the theorems don't apply to Minecraft yet. The theorems can handle [partial observability + utility over observation histories](https://www.lesswrong.com/s/fSMbebQyR4wheRrvk/p/hzeLSQ9nwDkPc4KNt), *or* full observability + world-state reward, but not yet partial observability + world-state reward. But I think it's illustrative.) We could reward the agent for ending up in different chunks of a Minecraft world. Here, retargeting often looks like "swap which chunk gets which reward."
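To make the criterion concrete, here's a toy illustration (entirely my construction, with an assumed ε-greedy stand-in for train): because this toy trainer is permutation-equivariant, the retargeting inequality f(A ∣ u) ≤ f(ϕ_i⋅A ∣ ϕ_i⋅u) holds, and the orbit-level tendency follows:

```python
import itertools

# A toy, permutation-equivariant stand-in for `train` (assumed, not from the
# post): with probability 1 - EPS the trained policy reaches the argmax-reward
# outcome; with probability EPS it reaches a uniformly random outcome.
EPS = 0.3

def f(X_idx, u):
    """f(X | u): probability that the trained policy induces an outcome in X."""
    best = max(range(len(u)), key=lambda i: u[i])
    return sum((1 - EPS) * (i == best) + EPS / len(u) for i in X_idx)

B_idx, A_idx = (0, 1), (2,)          # B contains two copies of A
u = (0.2, -1.0, 3.5)                 # a reward function that loves outcome 2

def swap(seq, i, j):
    s = list(seq)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

# Retargetability: swapping outcome 2 with outcome 1 (or 0) in both the target
# set and the reward leaves f unchanged, so f(A | u) <= f(phi_i.A | phi_i.u).
assert f(A_idx, u) == f((1,), swap(u, 1, 2))
assert f(A_idx, u) == f((0,), swap(u, 0, 2))

# Hence the orbit tendency: over all relabelings of u, the trained policy
# ends up in B at least twice as often as in A.
gt = lt = 0
for perm in itertools.permutations(range(3)):
    u_phi = tuple(u[i] for i in perm)
    gt += f(B_idx, u_phi) > f(A_idx, u_phi)
    lt += f(B_idx, u_phi) < f(A_idx, u_phi)
assert gt >= 2 * lt
```

A real training process won't be exactly equivariant, but the theorem only needs the weak inequality, which is the point of testing retargetability empirically.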
![Minecraft map](https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2Fminecraft-zh.gamepedia.com%2Fmedia%2Fminecraft-zh.gamepedia.com%2Fd%2Fde%2FMinecraft_maps_3by3.png%3Fversion%3D13f1dd1debcba5e560438a14d6c8d1e1&f=1&nofb=1)

We could consider all chunks within 1 million blocks of the agent, and reward the agent for being in one of them.

* At low levels of instrumental convergence and training-procedure competence, agents will just mill about near the starting area.
* At higher levels of competence, most of the accessible chunks are far away, and so we should observe a strong tendency for policies to e.g. [quickly tame a horse and reach](https://gaming.stackexchange.com/questions/20835/what-is-the-fastest-way-to-travel-long-distances-in-minecraft) the [Nether](https://minecraft.fandom.com/wiki/The_Nether) (where each Nether block traveled counts for 8 blocks traveled in the overworld).
  + Thus, in Minecraft, trained-policy instrumental convergence will increase with training-procedure competence.

The retargetability criterion also accounts for reward shaping guiding the learning process to hard-to-reach parts of the state space. If the agent needs less reward shaping to reach these parts of the state space, the training criterion will hold for larger sets of reward functions.

* Since the training retargetability criterion only requires weak inequality, it's OK if the training process cannot be perfectly "reflected" across different training trajectories, so that equality does not hold. I think this weak inequality will empirically hold for many reward functions and training setups.
  + This section does not formally *settle* the question of when trained policies will seek power. It just introduces a sufficient criterion, and I'm excited about it. I may write more on the details in future posts.
  + However, my intuition is that this formal training criterion captures a core part of how instrumental convergence arises for trained agents.
* In some ways, the training-level arguments are *easier* to apply than the optimal-level arguments. Training-based arguments require somewhat less environmental symmetry.
  + For example, if the symmetry holds for the first 50 trajectory timesteps, and the agent only ever trains on those timesteps, then there's no way that the asymmetry can affect the training output.
  + Furthermore, if there's some rare stochasticity which the agent almost certainly never confronts, then I suspect we should be able to empirically disregard it for the training-level arguments. Therefore, the training-level results should be practically invariant to tiny perturbations to world dynamics which would otherwise have affected the "top-down" decision-makers.

Why cognitively bounded planning agents obey the power-seeking theorems
=======================================================================

Planning agents are more "top-down" than RL training, but a Monte Carlo tree search agent still isn't e.g. approximating Boltzmann-rational leaf-node selection. A bounded agent won't be considering *all* of the possible trajectories it can induce. Maybe it just knows how to induce some subset of available outcome lotteries C′ ⊊ C. Then, considering only the things it knows how to do, it *does* e.g. select one Boltzmann-rationally (sometimes it'll fail to choose the highest-EU plan, but it's more likely to choose higher-utility plans).

As long as {power-seeking things the agent knows how to do} contains n copies of {non-power-seeking things the agent knows how to do}, the theorems will still apply. I think this is a reasonable model of bounded cognition.

Discussion
==========

* AI retargetability seems appealing *a priori*. Surely we want an expressive language for motivating AI behavior, and a decision-making function which reflects that expressivity!
But these results suggest: maybe not. Instead, we may want to *bias* the decision-making procedure such that it's less expressive-qua-behavior.
  + For example, imitation learning is not retargetable by a utility function. Imitation also seems far less likely to incentivize catastrophic behavior.
  + Imitation is far less expressive, and far more biased towards reasonable behavior that doesn't navigate towards crazy parts of the state space which the agent needs a lot of power to reach.
    - For example, [it can be hard to even get a perfect imitator to do a *backflip* if you can't do it yourself](https://arxiv.org/pdf/1706.03741.pdf).
  + One key tension is that we want the procedure to pick out plans which perform a *pivotal act* and end the period of AI risk. We also want the procedure to work robustly across a range of parameter settings we give it, so that it isn't too sensitive / fails gracefully.
* AFAICT, alignment researchers didn't necessarily think that satisficing was safe, but that's mostly due to [speculation that satisficing incentivizes the agent to create a maximizer](https://www.lesswrong.com/posts/2qCxguXuZERZNKcNi/satisficers-want-to-become-maximisers). Beyond that, though, why not avoid "the AI paperclips the universe" by only having the AI choose a plan leading to at least 100 paperclips? Surely that helps?
  + This implicit focus on [extremal goodhart](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy#Extremal_Goodhart) glosses over a key part of the risk. The risk isn't just that the AI goes crazy on a simple objective. Part of the problem is that *the vast vast majority of the AI's trajectories can only happen if the AI first gains a lot of power!*
  + That is: Not only do I think that EU maximization is dangerous, *most trajectories through these environments are dangerous!*
  + You might protest: Does this not prove too much? Random action does not lead to dangerous outcomes.
    - Correct.
Adopting the uniformly random policy in Pac-Man does not mean a uniformly random chance to end up in each terminal state. It means you probably end up in an early-game terminal state, because Pac-Man got eaten alive while banging his head against the wall.
    - However, random *outcome selection* leads to convergently instrumental action. If you uniformly randomly choose a terminal state to navigate to, that terminal state probably requires Pac-Man to beat the first level, and so the agent stays alive, as pointed out by [Optimal Policies Tend To Seek Power](https://arxiv.org/abs/1912.01683).
    - This is just the flipside of instrumental convergence: If most goals are best achieved by taking some small set of preparatory actions, this implies a "bottleneck" in the state space. Uniformly randomly taking actions will not tend to properly navigate this bottleneck. After all, if it did, then most actions would be instrumental for most goals!
* The trained policy criterion also predicts that we won't see convergently instrumental survival behavior from present-day embodied agents, because the RL algorithm *can't find or generalize to the high-power part of the state space*.
  + When this starts changing, then we should worry about instrumental subgoals in practice.
  + Unfortunately, since the real world is not a simulator with resets, any agents which do generalize to those strategies won't have executed them before, and so at most, we'll see attempted deception.
  + This lends theoretical support for "the training process is highly retargetable in real-world settings across increasingly long time horizons" being a fire alarm for instrumental convergence.
    - In some sense, this is bad: Easily retargetable processes will often be more economically useful, by virtue of being useful for more tasks.

Conclusion
==========

I discussed how a wide range of agent cognition types and agent production processes are *retargetable*, and why that might be bad news.
I showed that in many situations where power is possible, retargetable policy-production processes tend to produce policies which gain that power. In particular, these results seem to rule out a huge range of expected-utility based rules. The results also let us reason about instrumental convergence at the trained policy level.

I now think that more instrumental convergence comes from the practical retargetability of how we design agents. If there were more ways we could have counterfactually messed up, it's more likely *a priori* that we *actually* messed up. The way I currently see it is: Either we have to really know what we're doing, or we want processes where it's somehow hard to mess up.

Since these theorems are crisply stated, I want to more closely inspect the ways in which alignment proposals can violate the assumptions which ensure extremely strong instrumental convergence.

*Thanks to Ruby Bloom, Andrew Critch, Daniel Filan, Edouard Harris, Rohin Shah, Adam Shimi, Nisan Stiennon, and John Wentworth for feedback.*

Footnotes
---------

**FN: Similarity.** Technically, we aren't just talking about a cardinality inequality—about staying alive letting the agent do *more things* than dying—but about similarity-via-permutation of the outcome lottery sets. I think it's OK to round this off to cardinality inequalities when informally reasoning using the theorems, keeping in mind that sometimes results won't formally hold without a stronger precondition.

**FN: Row.** I assume that permutation matrices are in row representation: (P_ϕ)_{ij} = 1 if i = ϕ(j) and 0 otherwise.

**FN: EU.** Here's a bit more formality for what it means for an agent to make decisions only based on expected utility.
![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/0b7501fac9c964b9f837efcbc18005e6fde77ade2c96880c.png)

This definition basically says that f can be expressed in terms of the expected utilities of the set elements—the output will only depend on expected utility.

**Theorem: Retargetability of EU decision-making.** Let A, B ⊆ C ⊊ ℝ^d be such that B contains n copies of A via ϕ_i such that ϕ_i ⋅ C = C. For X ⊆ C, let f(X, C ∣ u) be an EU/cardinality function, such that f returns the probability of selecting an element of X. Then f(B, C ∣ u) ≥_n^most f(A, C ∣ u).

**FN: Retargetability.** The trained policies could conspire to "play dumb" and pretend to not be retargetable, so that we would be more likely to actually deploy one of them.

Worked example: instrumental convergence for trained policies
-------------------------------------------------------------

Consider a simple environment, where there are three actions: Up, Right, Down.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/1def51addf905c57c155fb97bd4d3a1830fe6020d16dc5ec.png)

**Probably optimal policies.** By running [tabular Q-learning](https://en.wikipedia.org/wiki/Q-learning) with ϵ-greedy exploration for e.g. 100 steps with resets, we have a high probability of producing an optimal policy for any reward function. Suppose that all Q-values are initialized at −100. Just let learning rate α = 1 and γ = 1. This is basically a [bandit problem](https://en.wikipedia.org/wiki/Multi-armed_bandit). To learn an optimal policy, at worst, the agent just has to try each action once. For e.g. a sparse reward function on the Down state (1 reward on the Down state and 0 elsewhere), there is a very small probability (precisely, (2/3)(1 − ϵ/2)^99) that the optimal action (Down) is never taken.

In that unlikely case, symmetry shows that the agent has an equal chance of learning either Up or Right. But with high probability, the learned policy will output Down.
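The high-probability claim can be checked with a quick simulation. This is a minimal sketch of the setup described above, not the post's exact algorithm; the ϵ value, tie-breaking rule, and number of runs are my own illustrative choices:

```python
import random

ACTIONS = ["Up", "Right", "Down"]
REWARD = {"Up": 0.0, "Right": 0.0, "Down": 1.0}  # sparse reward on the Down state

def train(eps=0.2, steps=100, seed=None):
    """Tabular Q-learning on the one-step bandit: alpha = 1, Q init at -100."""
    rng = random.Random(seed)
    q = {a: -100.0 for a in ACTIONS}
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.choice(ACTIONS)  # explore: uniformly random action
        else:
            best = max(q.values())
            a = rng.choice([x for x in ACTIONS if q[x] == best])  # greedy, ties random
        q[a] = REWARD[a]  # alpha = 1 on a one-step episode: Q(a) <- r
    best = max(q.values())
    return rng.choice([x for x in ACTIONS if q[x] == best])  # learned greedy policy

learned = [train(seed=i) for i in range(2000)]
print(learned.count("Down") / len(learned))  # close to 1
```

With high probability each run tries Down at least once and so learns the optimal policy; in the rare failure runs, symmetry makes Up and Right equally likely.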
For any sparse reward function and for any action a, this produces the decision function

f({e_{s_a}}, {e_s ∣ s ∈ S} ∣ r) := (1/3)(1 − ϵ/2)^99 if a is r-suboptimal, and 1 − (2/3)(1 − ϵ/2)^99 if a is r-optimal.

f is invariant to joint involution by ϕ_1 := (e_{s_Down} e_{s_Right}) and ϕ_2 := (e_{s_Down} e_{s_Up}). That is,

f({e_{s_Down}}, {e_s ∣ s ∈ S} ∣ r) = f(ϕ_1 ⋅ {e_{s_Down}}, ϕ_1 ⋅ {e_s ∣ s ∈ S} ∣ ϕ_1 ⋅ r) = f({e_{s_Right}}, {e_s ∣ s ∈ S} ∣ ϕ_1 ⋅ r).

And similarly for ϕ_2. That is: Changing the optimal state also changes which state is more probably selected by f. This means we've satisfied condition (1) above. f is additive on union for its first argument, and so it meets condition (2): order preservation.

Therefore, for this policy training procedure, learned policies for sparse reward functions will be *twice as likely* to navigate to an element of {e_{s_Up}, e_{s_Right}} as to an element of {e_{s_Down}}!

This is a formal argument that a stochastic policy training procedure has certain tendencies across a class of reward functions, and I'm excited to be able to make it. As the environment grows bigger and the training procedure more complex, we'll have to consider questions like "what are the inductive biases of large policy networks?", "what role does reward shaping play for this objective, and is the shaping at least as helpful for its permuted variants?", and "to what extent are different parts of the world harder to reach?". For example, suppose there are a trillion actions, and two of them lead to the Right state above. Half of the remaining actions lead to Up, and the rest lead to Down.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/1def51addf905c57c155fb97bd4d3a1830fe6020d16dc5ec.png)

2 actions transition right to chocolate. (10^12 − 2)/2 actions transition up to candy. (10^12 − 2)/2 actions transition down to hug.

Q-learning is ridiculously unlikely to ever go Right, and so the symmetry breaks. In the limit, tabular Q-learning on a finite MDP will learn an optimal policy, and then the normal theorems will apply.
But in the finite step regime, no such guarantee holds, and so *the available action space* can violate condition (1): increasing under joint permutation.

Appendix: tracking key limitations of the power-seeking theorems
----------------------------------------------------------------

From [last time](https://www.lesswrong.com/posts/hzeLSQ9nwDkPc4KNt/seeking-power-is-convergently-instrumental-in-a-broad-class#Appendix__Tracking_key_limitations_of_the_power_seeking_theorems):

> 1. ~~assume the agent is following an optimal policy for a reward function~~
> 2. Not all environments have the right symmetries
>    * But most ones we think about seem to
> 3. don't account for the ways in which we might practically express reward functions

I want to add a new one, because the theorems

> 4. don't deal with the agent's uncertainty about what environment it's in.

I want to think about this more, especially for online planning agents. (The training retargetability criterion black-boxes the agent's uncertainty.)
LessWrong
Is there a widely accepted metric for 'genuineness' in interpersonal communication? ... And if you don't believe there is one on LW or elsewhere, what would you consider to be the ideal?

'genuineness' here refers to all the positive qualities we associate with the word 'genuine' such as truthfulness, completeness (to their best knowledge at the time of record), fidelity, consistency (with past claims, statements, etc.), logical validity (to the best extent attainable), etc...

My own personal ideal would be something along the following, if no prior track record is available:

* irreversibility - the more difficult to 'take back', the more likely the communication is genuine.
  * e.g. a direct email vs. a random verbal comment from the same person
* detail - the more explicit, enumerated, spelled out, etc., the more likely the communication is genuine
  * e.g. an email with every claim spelled out, highlighted, and justified with some explanation vs. a quick email
* costly - the higher the cost it took to produce the communication, the more likely it is genuine
  * e.g. a company selling widgets sending out free physical samples of their products in a premium packaged box vs. sending out typical email newsletters
* 'gotcha' free - the fewer strings attached, the more likely it is genuine
  * e.g. a car salesman letting you take home the car for a 24 hour test drive vs. a salesman offering only a supervised quick test drive
LessWrong
Cascades, Cycles, Insight... **Followup to**:  [Surprised by Brains](/lw/w4/surprised_by_brains/) *Five sources of discontinuity:  1, 2, and 3...* **Cascades** are when one thing leads to another.  Human brains are effectively discontinuous with chimpanzee brains due to a whole bag of design improvements, even though they and we share 95% genetic material and only a few million years have elapsed since the branch.  Why this whole series of improvements in us, relative to chimpanzees?  Why haven't some of the same improvements occurred in other primates? Well, this is not a question on which one may speak with authority ([so far as I know](/lw/kj/no_one_knows_what_science_doesnt_know/)).  But I would venture an unoriginal guess that, in the hominid line, one thing led to another. The chimp-level task of modeling others, in the hominid line, led to improved self-modeling which supported recursion which enabled language which birthed politics that increased the selection pressure for outwitting which led to sexual selection on wittiness... ...or something.  It's hard to tell by looking at the fossil record what happened in what order and why.  The point being that it wasn't *one optimization* that pushed humans ahead of chimps, but rather a *cascade* of optimizations that, in *Pan*, never got started. We fell up the stairs, you might say.  It's not that the first stair ends the world, but if you fall up one stair, you're more likely to fall up the second, the third, the fourth... I will concede that farming was a watershed invention in the history of the human species, though it intrigues me for a different reason than Robin.  Robin, presumably, is interested because the economy grew by two orders of magnitude, or something like that.  But did having a hundred times as many humans, lead to a hundred times as much thought-optimization *accumulating* per unit time?  It doesn't seem likely, especially in the age before writing and telephones.  
But farming, because of its sedentary and repeatable nature, led to repeatable trade, which led to debt records.  Aha! - now we have *writing.*  *There's* a significant invention, from the perspective of cumulative optimization by brains.  Farming isn't writing but it *cascaded to* writing. Farming also cascaded (by way of surpluses and cities) to support *professional specialization*.  I suspect that having someone spend their whole life thinking about topic X instead of a hundred farmers occasionally pondering it, is a more significant jump in cumulative optimization than the gap between a hundred farmers and one hunter-gatherer pondering something. Farming is not the same trick as professional specialization or writing, but it *cascaded* to professional specialization and writing, and so the pace of human history picked up enormously after agriculture.  Thus I would interpret the story. From a zoomed-out perspective, cascades can lead to what look like discontinuities in the historical record, *even given* a steady optimization pressure in the background.  It's not that natural selection *sped up* during hominid evolution.  But the search neighborhood contained a low-hanging fruit of high slope... that led to another fruit... which led to another fruit... and so, walking at a constant rate, we fell up the stairs.  If you see what I'm saying. *Predicting* what sort of things are likely to cascade, seems like a very difficult sort of problem. But I will venture the observation that - with a sample size of one, and an optimization process very different from human thought - there was a cascade in the region of the transition from primate to human intelligence. **Cycles** happen when you connect the output pipe to the input pipe in a *repeatable* transformation.  You might think of them as a special case of cascades with very high regularity.  (From which you'll note that in the cases above, I talked about cascades through *differing* events: farming -> writing.) 
The notion of cycles as a source of *discontinuity* might seem counterintuitive, since it's so regular.  But consider this important lesson of history: Once upon a time, in a squash court beneath Stagg Field at the University of Chicago, physicists were building a shape like a giant doorknob out of alternate layers of graphite and uranium... The key number for the "pile" is the effective neutron multiplication factor.  When a uranium atom splits, it releases neutrons - some right away, some after delay while byproducts decay further.  Some neutrons escape the pile, some neutrons strike another uranium atom and cause an additional fission.  The effective neutron multiplication factor, denoted *k*, is the average number of neutrons from a single fissioning uranium atom that cause another fission.  At *k* less than 1, the pile is "subcritical".  At *k* >= 1, the pile is "critical".  Fermi calculates that the pile will reach *k*=1 between layers 56 and 57. On December 2nd in 1942, with layer 57 completed, Fermi orders the final experiment to begin.  All but one of the control rods (strips of wood covered with neutron-absorbing cadmium foil) are withdrawn.  At 10:37am, Fermi orders the final control rod withdrawn about half-way out.  The geiger counters click faster, and a graph pen moves upward.  "This is not it," says Fermi, "the trace will go to this point and level off," indicating a spot on the graph.  In a few minutes the graph pen comes to the indicated point, and does not go above it.  Seven minutes later, Fermi orders the rod pulled out another foot.  Again the radiation rises, then levels off.  The rod is pulled out another six inches, then another, then another. At 11:30, the slow rise of the graph pen is punctuated by an enormous CRASH - an emergency control rod, triggered by an ionization chamber, activates and shuts down the pile, which is still short of criticality. Fermi orders the team to break for lunch. 
At 2pm the team reconvenes, withdraws and locks the emergency control rod, and moves the control rod to its last setting.  Fermi makes some measurements and calculations, then again begins the process of withdrawing the rod in slow increments.  At 3:25pm, Fermi orders the rod withdrawn another twelve inches.  "This is going to do it," Fermi says.  "Now it will become self-sustaining.  The trace will climb and continue to climb.  It will not level off."

Herbert Anderson recounted (as told in Rhodes's *The Making of the Atomic Bomb*):

> "At first you could hear the sound of the neutron counter, clickety-clack, clickety-clack. Then the clicks came more and more rapidly, and after a while they began to merge into a roar; the counter couldn't follow anymore. That was the moment to switch to the chart recorder. But when the switch was made, everyone watched in the sudden silence the mounting deflection of the recorder's pen. It was an awesome silence. Everyone realized the significance of that switch; we were in the high intensity regime and the counters were unable to cope with the situation anymore. Again and again, the scale of the recorder had to be changed to accommodate the neutron intensity which was increasing more and more rapidly. Suddenly Fermi raised his hand. 'The pile has gone critical,' he announced. No one present had any doubt about it."

Fermi kept the pile running for twenty-eight minutes, with the neutron intensity doubling every two minutes. That first critical reaction had *k* of 1.0006.

It might seem that a cycle, with the same thing happening over and over again, ought to exhibit continuous behavior.  In one sense it does.  But if you pile on one more uranium brick, or pull out the control rod another twelve inches, there's one hell of a big difference between *k* of 0.9994 and *k* of 1.0006.
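The knife-edge around *k* = 1 is easy to see numerically. A toy sketch (the generation count is arbitrary, and this ignores delayed neutrons and control feedback):

```python
def intensity(k, generations, start=1.0):
    """Neutron intensity after n generations of multiplication: start * k**n."""
    return start * k ** generations

for k in (0.9994, 1.0006):
    print(k, intensity(k, 10_000))
# k = 0.9994 decays toward zero; k = 1.0006 grows without bound
# (for k = 1.0006, intensity doubles roughly every log(2)/log(1.0006) ≈ 1155 generations)
```

The same repeated step, with a tiny change in the multiplier, flips from exponential decay to exponential growth.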
If, rather than being able to calculate, rather than foreseeing and taking cautions, Fermi had just reasoned that 57 layers ought not to behave all that differently from 56 layers - well, it wouldn't have been a good year to be a student at the University of Chicago. The inexact analogy to the domain of self-improving AI is left as an exercise for the reader, at least for now. Economists like to measure cycles because they happen repeatedly.  You take a potato and an hour of labor and make a potato clock which you sell for two potatoes; and you do this over and over and over again, so an economist can come by and watch how you do it. As I [noted here at some length](/lw/vd/intelligence_in_economics/), economists are much less likely to go around measuring how many scientific discoveries it takes to produce a *new* scientific discovery.  All the discoveries are individually dissimilar and it's hard to come up with a common currency for them.  The analogous problem will prevent a self-improving AI from being *directly* analogous to a uranium heap, with almost perfectly smooth exponential increase at a calculable rate.  You can't apply the same software improvement to the same line of code over and over again, you've got to invent a new improvement each time.  But if self-improvements are triggering more self-improvements with great *regularity,* you might stand a long way back from the AI, blur your eyes a bit, and ask:  *What is the AI's average* *neutron multiplication factor?* Economics seems to me to be [largely the study of production *cycles*](/lw/vd/intelligence_in_economics/) - highly regular repeatable value-adding actions.  This doesn't seem to me like a very deep abstraction so far as the study of optimization goes, because it leaves out the creation of *novel knowledge* and *novel designs* - further *informational* optimizations.  Or rather, treats productivity improvements as a mostly exogenous factor produced by black-box engineers and scientists.  
(If I underestimate your power and merely parody your field, by all means inform me what kind of economic study has been done of such things.)  (**Answered:**  This literature goes by the name "endogenous growth".  See comments [starting here](http://www.overcomingbias.com/2008/11/cascades-cycles.html#comment-140280102).)  So far as I can tell, economists do not venture into asking where discoveries *come from*, leaving the mysteries of the brain to cognitive scientists. (Nor do I object to this division of labor - it just means that you may have to drag in some extra concepts from outside economics if you want an account *of self-improving Artificial Intelligence.*  Would most economists even object to that statement?  But if you think you can do the whole analysis using standard econ concepts, then I'm willing to see it...) **Insight** is that mysterious thing humans do by grokking the search space, wherein one piece of highly abstract knowledge (e.g. Newton's calculus) provides the master key to a huge set of problems.  Since humans deal in the compressibility of compressible search spaces (at least the part *we* can compress) we can bite off huge chunks in one go.  This is not mere cascading, where one solution leads to another: Rather, an "insight" is a chunk of knowledge *which, if you possess it, decreases the cost of solving a whole range of governed problems.* There's a parable I once wrote - I forget what for, I think ev-bio - which dealt with creatures who'd *evolved* addition in response to some kind of environmental problem, and not with overly sophisticated brains - so they started with the ability to add 5 to things (which was a significant fitness advantage because it let them solve some of their problems), then accreted another adaptation to add 6 to odd numbers.  Until, some time later, there wasn't a *reproductive advantage* to "general addition", because the set of special cases covered almost everything found in the environment. 
There may even be a real-world example of this.  If you glance at a set, you should be able to instantly distinguish the numbers one, two, three, four, and five, but seven objects in an arbitrary (non-canonical) pattern will take at least one noticeable instant to count.  IIRC, it's been suggested that we have hardwired numerosity-detectors but only up to five.

I say all this, to note the difference between evolution nibbling bits off the immediate search neighborhood, versus the human ability to do things in one fell swoop.

Our compression of the search space is also responsible for *ideas cascading much more easily than adaptations*.  We actively examine good ideas, looking for neighbors. But an insight is higher-level than this; it consists of understanding what's "good" about an idea in a way that divorces it from any single point in the search space.  In this way you can crack whole volumes of the solution space in one swell foop.  The insight of calculus apart from gravity is again a good example, or the insight of mathematical physics apart from calculus, or the insight of math apart from mathematical physics.

Evolution is not completely barred from making "discoveries" that decrease the cost of a very wide range of further discoveries.  Consider e.g. the ribosome, which was capable of manufacturing a far wider range of proteins than whatever it was actually making at the time of its adaptation: this is a general cost-decreaser for a wide range of adaptations.  It likewise seems likely that various types of neuron have reasonably-general learning paradigms built into them (gradient descent, Hebbian learning, more sophisticated optimizers) that have been reused for many more problems than they were originally invented for.

A ribosome is something like insight: an item of "knowledge" that tremendously decreases the cost of inventing a wide range of solutions.  But even evolution's best "insights" are not quite like the human kind.
A sufficiently powerful human insight often approaches a closed form - it doesn't feel like you're *exploring* even a compressed search space.  You just apply the insight-knowledge to whatever your problem, and out pops the now-obvious solution. Insights have often cascaded, in human history - even major insights.  But they don't quite cycle - you can't repeat the identical pattern Newton used originally to get a new kind of calculus that's twice and then three times as powerful. Human AI programmers who have insights into intelligence may acquire discontinuous advantages over others who lack those insights.  *AIs themselves* will experience discontinuities in their growth trajectory associated with *becoming able to do AI theory itself* *-* a watershed moment in the FOOM.
Alignment Forum
Impact stories for model internals: an exercise for interpretability researchers

*Inspired by Neel's* [*longlist*](https://www.alignmentforum.org/posts/uK6sQCNMw8WKzJeCQ/a-longlist-of-theories-of-impact-for-interpretability)*; thanks to* [*@Nicholas Goldowsky-Dill*](https://www.alignmentforum.org/users/nicholas-goldowsky-dill?mention=user) *and* [*@Sam Marks*](https://www.alignmentforum.org/users/sam-marks?mention=user) *for feedback and discussion, and thanks to AWAIR attendees for participating in the associated activity.*

As part of the [Alignment Workshop for AI Researchers](https://awairworkshop.org/) in July/August '23, I ran a session on theories of impact for model internals. Many of the attendees were excited about this area of work, and we wanted **an exercise to help them think through** ***what exactly*** **they were aiming for and** ***why***. This write-up came out of planning for the session, though I didn't use all this content verbatim. My main goal was to find concrete starting points for discussion, which

1. have the right shape to be a theory of impact
2. are divided up in a way that feels natural
3. cover the diverse reasons why people may be excited about model internals work (according to me[[1]](#fnlcj0zkvurb)).

This isn't an endorsement of any of these, or of model internals research in general. The ideas on this list are due to many people, and I cite things sporadically when I think it adds useful context: feel free to suggest additional citations if you think it would help clarify what I'm referring to.

Summary of the activity
=======================

During the session, participants identified which impact stories seemed most exciting to them. We discussed why they felt excited, what success might look like concretely, how it might fail, what other ideas are related, etc. for a couple of those items.
I think [categorizing existing work based on its theory of impact](https://www.alignmentforum.org/posts/LNA8mubrByG7SFacm/against-almost-every-theory-of-impact-of-interpretability-1#Case_study_of_some_cool_interp_papers) could also be a good exercise in the future. I personally found the discussion useful for helping me understand what motivated some of the researchers I talked to. I was surprised by the diversity.

Key stats of an impact story
============================

Applications of model internals vary a lot along multiple axes:

* Level of human understanding needed for the application[[2]](#fni1f6lal2mu)
  + If a lot of human understanding is needed, does that update you on the difficulty of executing in this direction? If understanding is not needed, does that open up possibilities for non-understanding-based methods you hadn't considered?
  + For example, [determining whether the model does planning](https://www.alignmentforum.org/posts/KfDh7FqwmNGExTryT/impact-stories-for-model-internals-an-exercise-for#9__Scientific_understanding) would probably require understanding. On the other hand, [finding adversarial examples](https://www.alignmentforum.org/posts/KfDh7FqwmNGExTryT/impact-stories-for-model-internals-an-exercise-for#4__Finding_adversarial_examples) or [eliciting latent knowledge](https://www.alignmentforum.org/posts/KfDh7FqwmNGExTryT/impact-stories-for-model-internals-an-exercise-for#2__Eliciting_latent_knowledge) might not involve any.
* Level of rigor or completeness (in terms of % model explained) needed for the application
  + If a high level of rigor or completeness is needed, does that update you on the difficulty of executing in this direction? What does the path to high rigor/completeness look like? Can you think of modifications to the impact story that might make partial progress more useful?
  + For example, we get value out of [finding adversarial examples](https://www.alignmentforum.org/posts/KfDh7FqwmNGExTryT/impact-stories-for-model-internals-an-exercise-for#4__Finding_adversarial_examples) or [dangerous capabilities](https://www.alignmentforum.org/posts/KfDh7FqwmNGExTryT/impact-stories-for-model-internals-an-exercise-for#3__Auditing), even if the way we find them is somewhat hacky. Meanwhile, if we don't find them, we'd need to be extremely thorough to be sure they don't exist, or sufficiently rigorous to [get a useful bound](https://www.lesswrong.com/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment) on how likely the model is to be dangerous.
* Is using model internals essential for the application,[[3]](#fnfzypbj87ift) or are there many possible approaches to the application, only some of which make use of model internals?
  + Steering model behaviors can be done via [model editing](https://www.alignmentforum.org/posts/KfDh7FqwmNGExTryT/impact-stories-for-model-internals-an-exercise-for#8__Model_editing), or by prompting or finetuning; but there are reasons (mentioned below) why editing could be a better approach.

Many impact stories (at least as I've categorized them) have variants that live at multiple points on these spectra. **When thinking about one, you should think about where it lands, and what variants you can think of** that might be e.g. easier but still useful.

The list
========

Some are more fleshed out than others; some of the rest could be fleshed out with a bit more effort, while others are more speculative.

**1. Scalable oversight**
-------------------------

Looking into the model’s internals could help avoid reward misspecification if it's easier than evaluating the final output. This is a form of process-based feedback, where the process is the model's internalized cognition. The looking and feedback could be coming from a human or from another model.
It could be directly in the loss, or just available to inspect when computing the reward. The feedback could be based on how transparent the internals are, how good the transparent parts look, or both.

**2.** [**Eliciting latent knowledge**](https://www.lesswrong.com/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge)
--------------------------------------------------------------------------------------------------------------------------------------------------

The model may have learned useful knowledge that we would like to make use of, but which we can’t train it to report (e.g. because it is in a superhuman regime).

How is this different from the previous point? ELK *could* be used for oversight. However, ELK more specifically targets 1) revealing something which was previously inaccessible to the overseer (while e.g. just linearly boosting overseer efficiency could also improve oversight), 2) details of what the model is doing that it *knows* about (while any details of what the model is doing could be used to improve oversight in general), and 3) (often) the worst case, rather than merely improving on baselines.

We could also make use of the elicited knowledge in ways other than computing the reward for the model. It’s possible that we would be able to have a great deal of understanding and just “read off” the model’s knowledge. Alternatively, we may need to use the model’s structure more abstractly: e.g. if we find two separate representations of something that seems like it could be our latent of interest, we might make use of the fact that one is more upstream in the model’s computation.[[5]](#fn7q9jzvkin8g)

**3. Auditing**
---------------

To catch *whether the model learned the right thing*, even if it does a great job on all training examples. Examples of things whose absence we might want to verify are social reasoning capability, knowledge of bioweapons, or different behavior in training and deployment.
If we find the model has one of these, we might throw out the whole model, just cut out a concerning piece, or do something else. We could also check for things we *do* want to see.

Model internals may be extra-helpful here for two reasons. We want to notice properties of the model as early as possible, perhaps before they are meaningfully reflected in I/O behavior. Also, we want to avoid relying on merely observing I/O behavior in cases when the model may be trying to hide things from us: it’s harder for something bad to look good the more bits you get to observe.[[4]](#fnabo9zbqs52d) However, other techniques are possible: e.g. if we are worried about manipulation capabilities then we can try [capability elicitation](https://www.lesswrong.com/posts/dBmfb76zx6wjPsBC7/when-can-we-trust-model-evaluations): finetune the model to be manipulative, and measure whether it is able to do a good job. (This would address both concerns above about relying on I/O behavior.)

Using model internals for this directly probably requires some understanding (though not necessarily on the target model: e.g. we could maybe understand what social reasoning tends to look like in some models we constructed, and then look for the same pattern in the target model). There are also indirect applications of model internals for auditing, such as...

**4. Finding adversarial examples**
-----------------------------------

This is an indirect use of model internals, whose impact is via accelerating other safety work (adversarial training). We can also use the existence of adversarial examples as a form of [auditing](https://www.alignmentforum.org/posts/KfDh7FqwmNGExTryT/impact-stories-for-model-internals-an-exercise-for#3__Auditing). Today there are a lot of "white-box" adversarial attacks which blindly backprop through the network, which I wouldn't count as really using model internals.
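To make the "blindly backprop" point concrete, here is a minimal FGSM-style sketch (not something from this post's agenda; the model, data, and epsilon below are placeholders, assuming any differentiable PyTorch classifier):

```python
import torch

def fgsm_attack(model, x, y, eps=0.03):
    """One-step FGSM: perturb x in the direction that increases the loss.

    This backprops through the network but uses no interpretability:
    the gradient is treated purely as an optimization signal.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the sign of the input gradient, then clamp to a valid range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Nothing about the model's internal mechanism is interpreted here, which is why attacks like this arguably don't count as "using model internals."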
Occasionally people construct adversarial examples to validate interpretability findings,[[6]](#fn9huuod4t6ht) but as far as I know this approach has not actually been used for measuring or improving model robustness. Note that finding advexes does not necessarily require understanding: for example, we could train an adversary model as in [the Go paper](https://arxiv.org/abs/2211.00241); or we could notice some low-level property that leads to finding an advex without actually understanding what is going on.

**5.** [**Relaxed adversarial training**](https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d#c423)
-------------------------------------------------------------------------------------------------------------------

It may be intractable to construct examples where the model displays some bad behavior. In that case, perhaps we can *relax* the adversary's job to describing a situation in which the model would plausibly behave badly. As in the previous case we can, in addition to training, [use this as a form of](https://www.lesswrong.com/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment) [auditing](https://www.alignmentforum.org/posts/KfDh7FqwmNGExTryT/impact-stories-for-model-internals-an-exercise-for#3__Auditing).

Similarly to the previous example of regular adversarial training, model internals here could be useful for constructing adversarial "inputs" (potentially more so, as the space of possible inputs is much larger). Also, we now need a way to measure, in a way we can back-propagate through, whether the model would plausibly behave badly, since we cannot actually run it on the adversarial "input". We may need model internals for that step.
**6.** [**Mechanistic anomaly detection**](https://www.alignmentforum.org/posts/vwt3wKXWaCvqZyF74/mechanistic-anomaly-detection-and-elk)
----------------------------------------------------------------------------------------------------------------------------------------

That is: is the model’s output on this input “for the normal reason”? This means looking for anomalies in the *mechanism* the model uses, rather than in its I/O (as OOD detection might do). As a simplified example, it may be that the model *usually* produces answers because it's trying to answer questions honestly, but this time it selected an answer which it thought we were most likely to like. We might not know that that's the underlying explanation (unlike in the ambitious form of auditing), but we could detect that the model is doing *something different* and flag it for further inspection before acting on its answer.

Why should we care about mechanistic anomalies? Perhaps the distribution shift isn't detectable in the I/O at all. Or perhaps it is, but we need to allow some distribution shifts while detecting “unsafe” ones, and leveraging the model internals could distinguish these.

We could also imagine doing this between models or generations (though the techniques may look quite different, and this isn’t referred to as MAD). That is, we could measure whether one model implements a behavior similarly to another, even if we don’t understand either of them.

**7. Model editing**
--------------------

We could modify the model’s internals to change its behavior (steering, unlearning, maybe also capability elicitation, etc.). Today we do this with finetuning or prompting, but it’s possible that modifying internals could work better. This may be due to data unavailability, difficulty of constructing a prompt (similarly to relaxed adversarial training), or just for random reasons.
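As one hedged illustration of what editing internals can look like in practice, here is a minimal activation-steering sketch in PyTorch: a forward hook shifts a chosen layer's output along a fixed direction at inference time. The layer and direction are placeholders; finding a direction that corresponds to a behavior of interest is the hard part and is not shown:

```python
import torch

def add_steering_hook(layer: torch.nn.Module, direction: torch.Tensor, scale: float = 1.0):
    """Register a forward hook that shifts `layer`'s output along `direction`.

    Returning a value from a forward hook replaces the layer's output.
    The returned handle lets the edit be undone with handle.remove().
    """
    def hook(module, inputs, output):
        return output + scale * direction
    return layer.register_forward_hook(hook)
```

Because the hook handle can be removed, the edit is trivially reversible, which makes it easy to compare steered and unsteered behavior on the same model.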
For a recent example, see the [Inference-Time Intervention](https://www.alignmentforum.org/posts/kuQfnotjkQA4Kkfou/inference-time-intervention-eliciting-truthful-answers-from) paper.

**8. Scientific understanding**
-------------------------------

Better understanding could be helpful for prioritization and coordination. A relatively concrete example would be seeing whether the model’s cognition tends to involve planning/optimization, or whether it is more like a pile of heuristics. This would provide empirical evidence for/against threat models, and it may be difficult to get comparable evidence from I/O alone. In general, we may be able to somehow make predictions about what capabilities might emerge in models or how different interventions would affect a model.

One reason that using model internals for understanding might be good: facts about internals may turn out to be easier for us to reason about than facts about behavioral evaluations.

**9.** [**Communicating AI risk**](https://www.alignmentforum.org/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety#Deliberate_design)
---------------------------------------------------------------------------------------------------------------------------------------------

Lack of interpretability could be a particularly salient way to communicate how little we understand the AI systems we are building and the associated risks. Working on interpretability could be helpful for this if we can show a contrast between "good interpretability we did on a model from 3 years ago" and "model that is being considered for deployment today." The opposite might also be true: progress in interpretability could create a false sense of security and reduce appreciation of AI risk.

**10. Microscope AI**
---------------------

"Maybe we can avoid deploying agents at all, by training systems to do complex tasks, then interpreting how they do it and doing it ourselves" (from Neel's list).
We can expand "doing it ourselves" to "writing software that we fully understand."

**11. Better inductive bias**
-----------------------------

We can train models to be more transparent/interpretable, or to have/avoid certain thought patterns. This maybe only gets us *apparent* transparency, but this could still give a better inductive bias for the behavior that the model learns. Note that without specific reasons to think this would happen, this is an example of safety impact via a *byproduct* of work on model internals, rather than a direct impact story.

1. **[^](#fnreflcj0zkvurb)**When considering existing lists, sometimes I thought a headline point was important, but for different reasons than those given in the summary; or I thought something made sense as a theory of impact for model internals but wouldn't require any interpretability; or I thought a single item mixed multiple distinct ideas.
2. **[^](#fnrefi1f6lal2mu)**This is why the title here is “model internals” rather than “interpretability”: the latter sounds to me like it involves human understanding (someone interprets what the internals *mean*) which we don’t want to assume.
3. **[^](#fnreffzypbj87ift)**Here are some reasons internals might be especially suited to a given application (though I don’t claim that they necessarily apply in practice, for applications I know of):
   1. Model internals can give us more bits of information on what the model is doing, making it substantially harder for the model to hide facts from the overseer.
   2. Model internals may be somehow crisper than e.g. the model’s outputs, such that it's easier to isolate and understand factors we care about.
   3. Model internals may allow finding unknown unknowns that we wouldn’t have thought to look for; though this maybe requires high levels of *detail* and *understanding*.
   4. Model internals approaches are just generally different from other approaches, so might fail in uncorrelated ways.
4. **[^](#fnrefabo9zbqs52d)**Powerful enough models could potentially make their internals look rosy as well, but this requires much more advanced capabilities.
5. **[^](#fnref7q9jzvkin8g)**This sentence (and entire section) may be confusing if you are not familiar with ELK. I'm not aiming to fully explain it here. It's also not the best example of using model internals for ELK, just one that's relatively easy to say quickly.
6. **[^](#fnref9huuod4t6ht)**e.g. [these](https://arxiv.org/abs/2201.11114) [papers](https://arxiv.org/abs/2006.14032) out from Jacob Andreas' group
LessWrong
Applied Rationality Exercises
LessWrong
Not another bias!

Behavioural economics has significantly advanced our understanding of human behaviour, shedding light on how it frequently diverges from the traditional economic model predictions of the homo economicus. There is, however, a growing sense that the marginal returns of finding and investigating biases have decreased significantly.

There is a renewed interest in adaptive explanations for biases and cognitive biases. This interest is bolstered by progress in at least two different areas: cognitive neuroscience and economic theory. In the former, models assuming that human brains are solving complex Bayesian calculations are now being routinely used. In the latter, richer models studying optimal/equilibrium behaviour in situations where information is incomplete and costly to acquire have enriched economists' ability to study more realistic decision problems with the economic toolbox.

I recently published a book, "Optimally Irrational", that discusses the results from behavioural economics with an adaptive perspective that draws on old and recent explanations of human biases as having good reasons. I also started an eponymous Substack where I'll be discussing human behaviour with an adaptive perspective, informed by recent advances in economic theory and cognitive neuroscience. I think these would be of interest to the LessWrong community.
LessWrong
A Tale of Two Restaurant Types

While I sort through whatever is happening with GPT-4, today’s scheduled post is two recent short stories about restaurant selection.

YE OLDE RESTAURANTE

Tyler Cowen says that restaurants saying ‘since year 19xx’ are on net a bad sign, because they are frozen in time, focusing on being reliable. For the best meals, he says look elsewhere, to places that shine brightly and then move on.

I was highly suspicious. So I ran a test. I checked the oldest places in Manhattan. The list had 15 restaurants. A bunch are taverns, which are not relevant to my interests. The rest include the legendary Katz’s Delicatessen, which is still on the short list of very best available experiences (yes, of course you order the Pastrami), and the famous Keen’s Steakhouse. I don’t care for mutton, but their regular steaks are quite good. There’s also Peter Lugar’s and PJ Clarke’s. There were also two less impressive steakhouses. Old Homestead is actively bad, and Delmonico’s was a great experience because we went to The Continental and then to John Wick 3 but is objectively overpriced without being special.

Those are all ‘since 18xx,’ so extreme cases. What about typical cases? Unfortunately, getting opening date data is tricky. Other lists I found did not actually correspond to when places opened all that well. I wasn’t able to easily test more systematically. Looking at ‘places you like the most’ has obvious bias issues. The one I love most opened in 1978. Others were newer but mostly not that recent. However I’ve had a lifetime to find them, so a question is, how fast and completely do I evaluate new offerings? My guess is that:

1. Most new restaurants are below average, and also rather uninteresting. The average new (non-chain) restaurant is higher quality now than in the past, but it is also less interesting.
2. Average (mean or median) quality increases with age, at least initially, due to positive selection via survivorship.
If a place folds quickly, you usually did not mis
LessWrong
Political philosophy and/versus political action, not to mention arguing

I found an essay with so much good stuff in it that I was in danger of exceeding my quote quota, so I'm putting more quotes here:

"Awareness of what is first-best is a condition of being able to aim at second-best."

"What would be nice, then, would be a political philosophy that did a better job of taking this sort of typical deformation into intelligent account, which would discourage it – since it thrives on not being seen for what it is. (Also would be nice: a pony!) A theory of first-best that talks astutely about second-best. This is inherently hard to do, so I don’t say ‘theorizes well’. I guess I would propose a sort of line-of-sight rule. Optimally, you shouldn’t lose sight of your ideals or of reality. So much so obvious. But really the trick is keeping accurate score with regard to semi-idealistic philosophical and policy proposals. Philosophers like to talk about the difficulty deriving an ought from an is (or an is from an ought). But it’s equally important to think about the difficulty in analyzing an is-ought compound into component elements, the better to reduce and potentially reconstitute it."

"As philosophers, we would like to address the strongest arguments our opponents have. But this etiquette of the seminar room is bad ethics, in that this just isn’t the right approach in political philosophy. Liberalism is as liberalism does (not just as it might do, ideally). And the same goes for conservatism and libertarianism and communism and the rest. If you only address good arguments you will miss out on a perilously large proportion of your actual subject matter."

I'm not quite as sure about this one. I suppose it depends on what one means by an argument. Political philosophy presumably includes implied arguments that trying to approach some specific proposed ideal system will produce better results than not trying to approach it.
LessWrong
Agent Foundations track in MATS

In MATS Winter 2023, I'm going to mentor scholars who want to work on the [learning-theoretic agenda](https://www.lesswrong.com/posts/ZwshvqiqCvXPsZEct/the-learning-theoretic-agenda-status-2023). This was not part of the original announcement, because (due to logistical reasons) I joined the programme at the last moment. Apply [here](https://www.matsprogram.org/agent). See [here](https://www.lesswrong.com/posts/tqyg3DpoiE4DKyi4y/apply-for-mats-winter-2023-24) for more details about the broader programme.

Here are some examples of work by my scholars in the Winter 2022 cohort: [1](https://www.lesswrong.com/posts/cYJqGWuBwymLdFpLT/non-unitary-quantum-logic-seri-mats-research-sprint) [2](https://www.lesswrong.com/posts/HNnRCPe2CejfupSow/fixed-points-in-mortal-population-games) [3](https://www.lesswrong.com/posts/nEFAno6PsCKnNgkd5/infra-bayesian-logic). The most impressive example is not here because it's still unpublished. One of the scholars from that cohort now holds a long-term position working on the learning-theoretic agenda.

If you know someone else who might be suitable and interested to apply, please pass on the word!

---

Tangentially related: Although it was not officially announced, I had plans to create an internship programme for researchers interested in working on the learning-theoretic agenda, which would involve moving to Israel for a short period. This is now postponed indefinitely because of the war. If you heard of that plan and considered applying, you might consider applying to my track in MATS instead.
LessWrong
Is Coronavirus active in Summer?

It seems like the coronavirus has spread mostly in cold climates in the northern hemisphere (in winter), and there are very few (confirmed) cases in Africa, South America and Australia. On the one hand, the first two probably don't have the capabilities for testing on a large scale, but they should have it easier to detect the virus, since it can't be confused for the common cold. Could it be that the virus is transmitted (well) only in cold climates? And would that mean that the danger in Europe/the US will reduce drastically by May or June?
LessWrong
Meetup : Inaugural Canberra meetup

Discussion article for the meetup : Inaugural Canberra meetup

WHEN: 12 February 2014 07:30:00PM (-0800)

WHERE: CSIT Building, Acton ACT 2601, Australia

(I'm posting this on behalf of my lurker friend.)

WHEN: 12 February 2014 07:30:00PM (+1100)

WHERE: CSIT Building, Blg 108, North Road, ANU

This will be the first meetup in Canberra! We will get to know each other and play board games, joining the weekly board games night held by the ANU Computer Science Students' Association. Vegan-friendly snacks and board games will be provided (although if anyone has board games, it would be great if you could bring your own). I will be waiting at the main door of the building from 7:20 to 7:40 to show people to the room where we will be meeting.
LessWrong
Collective Identity

Thanks to Simon Celinder, Quentin Feuillade--Montixi, Nora Ammann, Clem von Stengel, Guillaume Corlouer, Brady Pelkey and Mikhail Seleznyov for feedback on drafts. This post was written in connection with the AI Safety Camp.

Executive Summary: This document proposes an approach to corrigibility that focuses on training generative models to function as extensions to human agency. These models would be designed to lack independent values/preferences of their own, because they would not have an individual identity; rather they would identify as part of a unified system composed of both human and AI components.

* The selfless soldier: This section motivates the difference between two kinds of group-centric behavior, altruism (which is based in individual identity) and collective identity.
* Modeling groups vs individuals: Here we argue that individuals are not always the most task-appropriate abstraction, and that it often makes sense to model humans on the group level.
* Generative predictive models: This section describes how generative predictive models will model themselves and their environment, and motivates the importance of the “model of self” and its connection to personal identity.
* Strange identities: There are several ways (in humans) in which the one-to-one correspondence between a neural network and its model of self breaks down, and this section discusses three of those examples in order to suggest that identity is flexible enough that an AI’s identity need not be individual or individuated.
* Steps toward identity fusion: Here we aim to clarify the goal of this agenda and what it would mean for an AI to have an identity based on a human-AI system such that the AI component extends the human’s agency. While we don’t give a clear plan for how to bring about this fusion, we do offer an antithetical example of what kind of training would clearly fail.
* Relevance for corrigibility: This section concludes the document by drawing more direct conne
LessWrong
Preface to the sequence on value learning

This is a meta-post about the upcoming sequence on Value Learning that will start to be published this Thursday. This preface will also be revised significantly once the second half of the sequence is fully written.

Purpose of the sequence

The first part of this sequence will be about the tractability of ambitious value learning, which is the idea of inferring a utility function for an AI system to optimize based on observing human behavior. After a short break, we will (hopefully) continue with the second part, which will be about why we might want to think about techniques that infer human preferences, even if we assume we won’t do ambitious value learning with such techniques.

The aim of this part of the sequence is to gather the current best public writings on the topic, and provide a unifying narrative that ties them into a cohesive whole. This makes the key ideas more discoverable and discussable, and provides a quick reference for existing researchers. It is meant to teach the ideas surrounding one specific approach to aligning advanced AI systems. We’ll explore the specification problem, in which we would like to define the behavior we want to see from an AI system. Ambitious value learning is one potential avenue of attack on the specification problem, that assumes a particular model of an AI system (maximizing expected utility) and a particular source of data (human behavior). We will then delve into conceptual work on ambitious value learning that has revealed obstructions to this approach. There will be pointers to current research that aims to circumvent these obstructions.

The second part of this sequence is currently being assembled, and this preface will be updated with details once it is ready. The first half of this sequence takes you near the cutting edge of conceptual work on the ambitious value learning problem, with some pointers to work being done at this frontier.
Based on the arguments in the sequence, I am confident that the obvious f
LessWrong
Group rationality diary, 5/14/12

Background: I and many other attendees at the CFAR rationality minicamp last weekend learned a lot about applied rationality, and made big personal lists of things we wanted to try out in our everyday lives. I think that a regular (weekly or maybe semi-weekly) post where people mention any new interesting habits, decisions, and actions they have taken recently would be cool as a supplement to this; it ought to be rewarding for people to be able to write a list of the cool things they did, and I expect it'll also be interesting for other people to peek in and see the sorts of goals and self-modifications people are working on. Others at minicamp seemed enthusiastic about the idea, so I hope it takes off. Feel free to meta-discuss whether this is a good idea or if it can be done better.

Addendum 5/15: By the way, non-minicamp people should feel free to post too! I am highly certain that minicamp attendees are not the only ones working on interesting things in their lives.

This is the public group instrumental rationality diary for the week of May 14th. It's a place to record and chat about it if you have done, or are actively doing, things like:

* Established a useful new habit
* Obtained new evidence that made you change your mind about some belief
* Decided to behave in a different way in some set of situations
* Optimized some part of a common routine or cached behavior
* Consciously changed your emotions or affect with respect to something
* Consciously pursued new valuable information about something that could make a big difference in your life
* Learned something new about your beliefs, behavior, or life that surprised you
* Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves.
Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't t
LessWrong
Meetup : Small Berkeley meetup at Zendo

Discussion article for the meetup : Small Berkeley meetup at Zendo

WHEN: 20 June 2012 07:00:00PM (-0700)

WHERE: berkeley, ca

There will be another Berkeley meetup at my home, Zendo. The theme will be "calibration games". We've done calibration games before but this time I have software that the Center for Applied Rationality is using to train Bayesian commandos in its camps. Doors open at 7pm and games start at 7:30pm. For the location of Zendo, see the mailing list. This counts as a Small meetup.
Effective Altruism Forum
AI-Relevant Regulation: CPSC

### Preface

This post is part of a series exploring existing approaches to regulation that seem relevant for thinking about governing AI. The goal of this series is to provide a brief overview of a type of regulation or a regulatory body so others can understand how they work and glean insights for AI governance. These posts are by no means exhaustive, and I would love for others to dig deeper on any topic within that seems useful or fruitful. While I’d be happy to answer any questions about the content below, to be honest I probably don’t know the answer; I’m just a guy who did a bunch of Googling in the hopes that someone can gain value from this very high-level research.

Thank you to Akash Wasil and Jakub Kraus for their feedback on earlier drafts.

\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_

Consumer Product Safety Commission (CPSC)
-----------------------------------------

While AI is a broad, general-purpose technology, AI systems themselves are in many cases consumer products. There are open source models (most notably [Llama](https://ai.meta.com/llama/)) out there, but many of the most popular systems, like [GPT-4](https://openai.com/gpt-4), [Claude](https://www.anthropic.com/product), and [Stable Diffusion](https://stability.ai/stablediffusion), are proprietary pieces of software meant for consumer or commercial use. [OpenAI](https://openai.com/product) and [Anthropic](https://www.anthropic.com/product) actually list their models as “Product” on their websites. If AI systems are products, then it seems valuable to understand how we currently regulate consumer goods to make sure they are safe for public use. This post looks at the [Consumer Product Safety Commission (CPSC)](https://www.cpsc.gov/), an [independent agency](https://en.wikipedia.org/wiki/Independent_agencies_of_the_United_States_government) that regulates consumer products in the US.
CPSC focuses on addressing “unreasonable risks” of injury (through coordinating recalls, evaluating products that are the subject of consumer complaints or industry reports, etc.); developing uniform safety standards (some mandatory, some through a voluntary standards process); and conducting research into product-related illness and injury. In particular, I focus on the [testing and certification](https://www.cpsc.gov/Business--Manufacturing/Testing-Certification) that the CPSC requires, which seems especially useful for thinking about how to govern AI.

---

### [Children’s products](https://www.cpsc.gov/Business--Manufacturing/Business-Education/childrens-products)

* Children’s products are subject to the strictest testing/certification process
* [Initial certification testing](https://www.cpsc.gov/Business--Manufacturing/Testing-Certification/Third-Party-Testing/Initial-Testing): every domestically-produced and imported children’s product must be tested by a third-party, CPSC-approved laboratory before it can be sold
  + Over [400 CPSC-accredited labs](https://www.cpsc.gov/Business--Manufacturing/Testing-Certification/Lab-Accreditation) around the world are authorized to test for different types of products
  + The manufacturer or importer is responsible for getting the test done and issuing a “Children’s Product Certificate” [more below]
* [Component part testing](https://www.cpsc.gov/Business--Manufacturing/Testing-Certification/Third-Party-Testing/Component-Part-Testing): if the component(s) of a product have already been third-party tested/approved, these can be accepted as proof of the product’s safety/compliance
* [Material Change Testing](https://www.cpsc.gov/Business--Manufacturing/Testing-Certification/Third-Party-Testing/Material-Change-Testing): If there has been a material change to a children’s product, the manufacturer/supplier is required to either retest the product or retest the component part that was changed, to check for compliance with those rules
affected by the material change
  + Manufacturer/supplier must then issue a new Children’s Product Certificate based on the new passing test results
* [Periodic Testing](https://www.cpsc.gov/Business--Manufacturing/Testing-Certification/Third-Party-Testing/Periodic-Testing): third-party testing must continue to be conducted while a children’s product is being produced
  + Testing must be conducted at a minimum of 1-, 2-, or 3-year intervals, depending upon whether the manufacturer has a periodic testing plan, a production testing plan, or plans to conduct continued testing using an accredited laboratory
* [Children’s Product Certificate](https://www.cpsc.gov/Testing-Certification/Childrens-Product-Certificate-CPC)
  + No standardized format: simply written in a word doc or equivalent
  + The CPC must physically “accompany” the product or shipment of products covered by the certificate; manufacturers/importers must also “furnish” the CPC to distributors/retailers
  + If you sell directly to consumers, the certificate does not need to be provided to the consumer
  + You do not need to file your certificate with the government or any other regulatory body in the US; however, the CPSC can request a copy of your certificate at any time, and it must be supplied in such cases
  + It is a violation of the CPSA to fail to furnish a CPC and to issue a false CPC; a violation could lead to a civil penalty, criminal penalties, and asset forfeiture
  + Required elements
    - Identification of the product covered by the certificate
      * Describe the product(s) in enough detail to match the certificate to each product it covers and no others
    - Citation to each CPSC children’s product safety rule to which the product is being certified
      * The certificate must identify separately each children’s product safety rule that is applicable to the children’s product
    - Identification of the importer or domestic manufacturer certifying compliance of the product
      * Provide the name, full mailing address, and phone
number of the importer or US domestic manufacturer certifying the product - Contact information for the individual maintaining records of test results * Provide the name, full mailing address, e-mail address, and phone number of the person maintaining test records in support of the certification - Date and place where this product was manufactured * For the date(s), provide at least the month and year. For the place, provide at least the city, state (if applicable), and country where the product was manufactured or finally assembled. If the same manufacturer operates more than one location in the same city, provide the street address of the factory. - Provide the date(s) and place when the product was tested for compliance with the consumer product safety rule(s) cited above * Provide the location(s) of the testing and the date(s) of the test(s) or test report(s) on which certification is being based - Identify any third party, CPSC-accepted laboratory on whose testing the certificate depends * Provide the name, full mailing address, and phone number of the lab ### Testing for [non-children’s (“general use”) products](https://www.cpsc.gov/Business--Manufacturing/Testing-Certification/General-Use-Products-Certification-and-Testing) * Manufacturers and importers of general use products for which consumer product safety rules apply (full list [here](https://www.cpsc.gov/Business--Manufacturing/Testing-Certification/Lab-Accreditation/Rules-Requiring-a-General-Certificate-of-Conformity)), must certify in a written certificate that their products comply with applicable rules * The product must undergo a “reasonable testing program”: unlike with children’s products, there are no mandates specifying what this looks like + The CPSC “suggests best practices for a reasonable testing program, but the guidance is not mandatory” + CPSC guidance states that a reasonable testing program should: - “Provide a high degree of assurance that its consumer product complies with the 
applicable consumer product safety rule or standard” - “Be in writing and should be approved by the senior management of the manufacturer/ importer” - “Be based on the considered judgment and reasoning of the manufacturer/importer concerning the number, frequency, and methods of tests to be conducted on the products” + There is no requirement that the testing lab be CPSC-approved, or even a third party: testing can be done in-house + There is also no requirement to periodically retest/certify general use products or re-test them after materials changes (although the CPSC “recommends” these tests) * Once the reasonable testing plan has been completed, manufacturer/importer must issue a [General Certificate of Conformity](https://www.cpsc.gov/Business--Manufacturing/Testing-Certification/General-Certificate-of-Conformity) (GCC) ( very similar to Children’s Product Certificate) + GCC must “accompany” the product or shipment of products covered by the certificate; manufacturers/importers must “furnish” the GCC to distributors/retailers + There is no requirement to file the GCC with the government + However, it is a violation of the CPSA to fail to furnish a GCC, to issue a false certificate of conformity under certain conditions, and to otherwise fail to comply with section 14 of the CPSA; a violation of the CPSA could lead to a civil penalty and possibly criminal penalties and asset forfeiture + GCC required elements - Identification of the product covered by this certificate * Describe the product(s) in enough detail to match the certificate to each product it covers and no others - Citation to each consumer product safety regulation to which the product is being certified * The certificate must identify separately each consumer product safety rule administered by the Commission that is applicable to the produc - Identification of the importer or domestic manufacturer certifying compliance of the product * Provide the name, full mailing address, and phone number of 
the importer or US domestic manufacturer certifying the product - Contact information for the individual maintaining records of test results * Provide the name, full mailing address, e-mail address, and phone number of the person maintaining test records in support of the certification - Date and place where this product was manufactured * For the date(s) when the product was manufactured, provide at least the month and year. For the place of manufacture provide at least the city (or administrative region) and country where the product was manufactured or finally assembled. If the same manufacturer operates more than one location in the same city, provide the street address of the factory. - Provide the date(s) and place when the product was tested for compliance with the consumer product safety rule(s) cited above * Provide the location(s) of the testing and the date(s) of the test(s) or test report(s) on which certification is being based - Identification of any third party laboratory on whose testing the certificate depends ### Duty to Report * While testing/certification for non-children’s products is somewhat vague, companies have pretty strict rules on reporting problems with consumer goods they make, distribute, or sell * Manufacturers, importers, distributors, and retailers of all consumer products have a [legal obligation to immediately report](https://www.cpsc.gov/Business--Manufacturing/Recall-Guidance/Duty-to-Report-to-the-CPSC-Your-Rights-and-Responsibilities) the following information to the CPSC: + A defective product that could create a substantial risk of injury to consumers + A product that creates an unreasonable risk of serious injury or death + A product that fails to comply with an applicable consumer product safety rule or with any other rule, regulation, standard, or ban under the CPSA or any other statute enforced by the CPSC + When a child dies, suffers serious injury, ceases breathing for any length of time, or is treated by a medical 
professional after using a product + Certain types of lawsuits + Failure to fully and immediately report this information may lead to substantial [civil or criminal](https://www.cpsc.gov/Business--Manufacturing/Civil-and-Criminal-Penalties) penalties; CPSC advice is “when in doubt, report” + CPSC provides a [handbook](https://www.cpsc.gov/s3fs-public/pdfs/blk_pdf_8002.pdf) with detailed information on how to recognize hazardous products and what to do when this happens
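Since the required elements of a certificate amount to a simple checklist, a compliance team could sanity-check a draft certificate mechanically. Here is a rough Python sketch; the field names are my own invention (the CPSC prescribes no machine-readable format), so treat it purely as an illustration:

```python
# Illustrative only: the CPSC does not define a machine-readable certificate
# format, so these field names are hypothetical stand-ins for the required
# elements of a General Certificate of Conformity listed above.
REQUIRED_GCC_FIELDS = [
    "product_description",           # enough detail to match each covered product
    "safety_rules_cited",            # each applicable rule, listed separately
    "certifier_name_address_phone",  # importer or US domestic manufacturer
    "test_records_contact",          # person maintaining test records
    "manufacture_date_and_place",
    "test_date_and_place",
    "third_party_lab",               # if certification depends on one
]

def missing_fields(certificate: dict) -> list:
    """Return required elements that are absent or empty in a draft certificate."""
    return [f for f in REQUIRED_GCC_FIELDS if not certificate.get(f)]

draft = {
    "product_description": "Model X toaster",
    "safety_rules_cited": ["<each applicable CPSC rule>"],
}
print(missing_fields(draft))  # the five elements the draft still lacks
```

The same checklist pattern, swapping in the CPC's elements, would cover the children's-product case.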
[LINK] Collaborate on HPMOR blurbs; earn chance to win three-volume physical HPMOR

> I intend to print at least one high-quality physical HPMOR and release the files. There are printable texts which are being improved and a set of covers (based on e.b.'s) are underway. I have, however, been unable to find any blurbs I'd be remotely happy with.
>
> I'd like to attempt to harness the hivemind to fix that. As a lure, if your ideas contribute significantly to the final version or you assist with other tasks aimed at making this book awesome, I'll put a proportionate number of tickets with your number on into the proverbial hat.
>
> I do not guarantee there will be a winner and I reserve the right to arbitrarily modify this at any point. For example, it's possible this leads to a disappointingly small amount of valuable feedback, that some unforeseen problem will sink or indefinitely delay the project, or that I'll expand this and let people earn a small number of tickets by sharing so more people become aware this is a thing quickly.
>
> With that over, let's get to the fun part.
>
> A blurb is needed for each of the three books. Desired characteristics:
>
> * Not too heavy on ingroup signaling or over the top rhetoric.
> * Non-spoilerish
> * Not taking itself awkwardly seriously.
> * Amusing / funny / witty.
> * Attractive to the same kinds of people the tvtropes page is.
> * Showcases HPMOR with fun, engaging, prose.
>
> Try to put yourself in the mind of someone awesome deciding whether to read it while writing, but let your brain generate bad ideas before trimming back.
>
> I expect that for each we'll want
>
> * A shortish and awesome paragraph
> * A short sentence tagline
> * A quote or two from notable people
> * Probably some other text? Get creative.
>
> Please post blurb fragments or full blurbs here, one suggestion per top level comment. You are encouraged to remix each other
Meetup : Melbourne practical rationality meetup

Discussion article for the meetup : Melbourne practical rationality meetup

WHEN: 04 November 2011 07:00:00PM (+1100)

WHERE: 55 Walsh St, West Melbourne VIC 3003, Australia

Practical rationality, as distinct from socialising and rationality outreach. Look for a social meetup on the 3rd Friday of each month, and a rationality outreach meetup TBD.

Discussion: http://groups.google.com/group/melbourne-less-wrong http://www.google.com/moderator/#16/e=6a317

This meetup repeats on the 1st Friday of each month.

Discussion article for the meetup : Melbourne practical rationality meetup
[LINK] NYTimes essay on willpower, based on an upcoming Baumeister book

A surprisingly good New York Times essay on willpower / ego depletion: Do You Suffer From Decision Fatigue? As it turns out, the essay is based on an upcoming Roy F. Baumeister book, "Willpower: Rediscovering the Greatest Human Strength", which will be available from Amazon in a couple of weeks (September 1, 2011) both as a hardcover and a Kindle edition. Some quotes from the essay (italics and headings mine):

You spend the most willpower when you have to make AND implement your decisions:

> which phase of the decision-making process was most fatiguing? To find out, Kathleen Vohs, a former colleague of Baumeister’s now at the University of Minnesota, performed an experiment using the self-service Web site of Dell Computers. One group in the experiment carefully studied the advantages and disadvantages of various features available for a computer — the type of screen, the size of the hard drive, etc. — without actually making a final decision on which ones to choose. A second group was given a list of predetermined specifications and told to configure a computer by going through the laborious, step-by-step process of locating the specified features among the arrays of options and then clicking on the right ones. The purpose of this was to duplicate everything that happens in the postdecisional phase, when the choice is implemented. The third group had to figure out for themselves which features they wanted on their computers and go through the process of choosing them; they didn’t simply ponder options (like the first group) or implement others’ choices (like the second group). They had to cast the die, and that turned out to be the most fatiguing task of all. When self-control was measured, they were the ones who were most depleted, by far.
Willpower depletion makes you reluctant to make trade-offs: > Once you’re mentally depleted, you become reluctant to make trade-offs, which involve a particularly advanced and taxing form of decision making. In the res
Alignment - Path to AI as ally, not slave nor foe

More thoughts resulting from my previous post

I'd like to argue that rather than trying hopelessly to fully control an AI that is smarter than we are, we should rather treat it as an independent moral agent, and that we can and should bootstrap it in such a way that it shares much of our basic moral instinct and acts as an ally rather than a foe. In particular, here are the values I'd like to inculcate:

* the world is an environment where agents interact, and can compete, help each other, or harm each other; ie "cooperate" or "defect"
* it is always better to cooperate if possible, but ultimately in some cases tit-for-tat is appropriate (alternative framing: tit-for-tat with cooperation bias)
* hurting or killing non-copyable agents is bad
* one should oppose agents who seek to harm others
* copyable agents can be reincarnated, so killing them is not bad in the same way
* intelligence is not the measure of worth, but rather capability to act as an agent, ie respond to incentives, communicate intention, and behave predictably; these are not binary measures.
* you never know who will judge you, never assume you're the smartest/strongest thing you'll ever meet
* don't discriminate based on unchangeable superficial qualities

To be clear, one reason I think these are good values is because they are a distillation of our values at our best, and to the degree they are factual statements they are true. In order to accomplish this, I propose we apply the same iterated game theory / evolutionary pressures that I believe gave rise to our morality, again at our best.
So, let's train RL agents in an Environment with the following qualities:

* agents combine a classic RL model with an LLM; the RL part (call it "conscience", and use a small language model plus whatever else) gets inputs from the environment and outputs actions; and it can talk to the LLM through a natural language interface (like ChatGPT); furthermore, agents have a "copyable" bit, plus some superf
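A minimal sketch of the proposed agent structure might look like the following; every class name, method, and default here is my own illustration of the description above (a small RL "conscience" that can query an LLM, plus a copyable bit), not anything specified in the post:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    copyable: bool                  # copyable agents can be "reincarnated"
    traits: list = field(default_factory=list)  # superficial, unchangeable qualities

    def act(self, observation: str) -> str:
        # The small "conscience" policy consults the LLM, then picks an action.
        advice = self.ask_llm(f"Given {observation!r}, should I cooperate or defect?")
        return "cooperate" if "cooperate" in advice else "defect"

    def ask_llm(self, prompt: str) -> str:
        # Stand-in for the natural-language interface to a large model;
        # defaults to cooperation, per "tit-for-tat with cooperation bias".
        return "cooperate"

print(Agent(copyable=True).act("a stranger approaches"))
```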
Longtermist implications of aliens Space-Faring Civilizations - Introduction Crossposted on the EA Forum. Over the last few years, progress has been made in estimating the density of intelligent life in the universe (e.g., Olson 2015, Sandberg 2018, Hanson 2021). Bits of progress have been made in using these results to update longtermist macrostrategy, but these results are partial and stopped short of their potential (Finnveden 2019, Olson 2020, Olson 2021, Cook 2022). Namely, this work stopped early in its tracks, at best, only hinting at the meaty part of the implications and leaving half of the work almost untouched: comparing the expected utility produced by different Space-Faring Civilizations (SFCs). In this post, we hint at the possible macrostrategic implications of these works: A possible switch for the longtermist community from targeting decreasing X-Risks (including increasing P(Alignment)[1]), to increasing P(Alignment | Humanity creates an SFC). Sequence: This post is part 1 of a sequence investigating the longtermist implications of alien Space-Faring Civilizations. Each post aims to be standalone. Summary We define two hypotheses:  * Civ-Saturation Hypothesis: Most resources will be claimed by Space-Faring Civilizations (SFCs) regardless of whether humanity creates an SFC[2]. * Civ-Similarity Hypothesis: Humanity's Space-Faring Civilization would produce utility[3] similar to other SFCs.  If these hypotheses hold, this could shift longtermist priorities away from reducing pure extinction risks and toward specifically optimizing P(Alignment | Humanity creates an SFC)[1]. This means that rather than focusing broadly on preventing misaligned AI and extinction, longtermists might need to prioritize strategies that specifically increase the probability of alignment conditional on humanity creating an SFC. Macrostrategy updates include the following: * (i) Deprioritizing significantly extinction risks, such as nuclear weapon and bioweapon risks. 
* (ii) Deprioritizing to some degree AI Safety agendas mostly increasing P
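The claimed shift can be made concrete with a toy expected-value model. All numbers below are made up for illustration; the only structural assumptions are Civ-Saturation (if humanity builds no SFC, aliens claim the resources anyway) and Civ-Similarity (the alien SFC's utility is close to ours on average):

```python
# Toy model with invented utilities: 1.0 for an aligned human SFC,
# 0.0 for a misaligned one, 0.9 for the alien SFC that claims the
# resources otherwise (Civ-Similarity: close to our average).
U_ALIGNED, U_MISALIGNED, U_ALIEN = 1.0, 0.0, 0.9

def ev(p_sfc, p_align_given_sfc):
    u_ours = p_align_given_sfc * U_ALIGNED + (1 - p_align_given_sfc) * U_MISALIGNED
    # Civ-Saturation: if we create no SFC, the resources are claimed anyway.
    return p_sfc * u_ours + (1 - p_sfc) * U_ALIEN

base = ev(0.5, 0.5)
# Raising P(humanity creates an SFC) can even reduce expected value here...
print(ev(0.6, 0.5) - base)   # negative with these numbers
# ...while raising P(Alignment | SFC) clearly helps.
print(ev(0.5, 0.6) - base)   # positive
```

Under these (hypothetical) numbers, pushing up P(Alignment | SFC) dominates pushing up survival itself, which is the prioritization shift the post describes.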
Moral Alignment: An Idea I'm Embarrassed I Didn't Think of Myself Back in February, I attended the Bay Area EA Global as I have every year since they started having them. I didn't have a solid plan for what to do there this year, though, so I decided to volunteer. That means I only attended the sessions where I was on room duty, and otherwise spent the day having a few 1:1s when I wasn't on shift. That's okay because, as everyone always says, the 1:1s are the best part of EA Global, and once again they were proven right. Among the many great folks I met and friends I caught up with, I got the chance to meet Ronen Bar and learn about his idea of AI moral alignment. And when he told me about it, I was embarrassed I hadn't thought of it myself. Simply put, moral alignment says that, rather than trying to align AI with human values, we try to explicitly align it to be a positive force for all sentient beings. In all my years of thinking about AI alignment, I've not exactly ignored animals and other creatures both known and unknown, but I also figured they'd get brought along because humans care about them. But I have to admit, while it might come to the same outcome, it feels more authentic to say I want AI that is aligned to all beings rather than just humans because I, though I may be human, do in fact care about the wellbeing of all life and wish for all of it to flourish as it best can with the aid of future AI technology. I think I missed articulating an idea like moral alignment because I was too close to the ideas. That is, I understood intuitively that if we succeeded in building AI aligned with human flourishing, that would necessarily mean alignment with the flourishing of all life, and in fact I've said that the goal of building aligned AI is to help life flourish, but not that AI should be aligned to all life directly. 
Now that we are much closer to building artificial superintelligence and need to figure out how to align it, the importance of aligning to non-human life stands out to me as a near-term priority. For e
All images from the WaitButWhy sequence on AI

Lots of people seem to like visual learning. I don't see much of an issue with that. People who have fun with thinking tend to get more bang for their buck. It seems reasonable to think that Janus's image of neural network shoggoths makes it substantially easier for a lot of people to fully operationalize the concept that RLHF could steer humanity off of a cliff:

Lots of people I've met said that they were really glad that they encountered Tim Urban's WaitButWhy blog post on AI back in 2015, which was largely just a really good distillation of Nick Bostrom's Superintelligence (2014). It's a rather long (but well-written) post, so what impressed me was not the distillation, but the images. The images in the post were very vivid, especially in the middle. It seems to me like images can work as a significant thought aid, by leaning on visual memory to aid recall, and/or to make core concepts more cognitively available during the thought process in general.

But also, almost by themselves, the images do a pretty great job describing the core concepts of AI risk, as well as the general gist of the entirety of Tim Urban's sequence. Considering that he managed to get that result, even though the post itself is >22,000 words (around as long as the entire CFAR handbook), maybe Tim Urban was simultaneously doing something very wrong and very right with writing the distillation; could he have turned a 2-hour post into a 2-minute post by just doubling the number of images? If there was a true-optimal blog post to explain AI safety for the first time, to an otherwise-uninterested layperson (a very serious matter in AI governance), it wouldn't be surprising to me if that true-optimal blog post contained a lot of images. Walls of text are inevitable at some point or another, but there's the old saying that a picture's worth a thousand words.
Under current circumstances, it makes sense for critical AI safety concepts to be easier and less burdensome to think and learn
"Nice Guys Finish First" - Youtube Video of selected reading (by Dawkins) from The Selfish Gene

Check it out here.

Brief summary: Dawkins demonstrates a classic "Prisoner's Dilemma AI tournament". No big surprise to us today, but at the time the revelation that Tit for Tat is one of -- if not *the* -- most effective strateg(y|ies) was a surprising result. He goes on to demonstrate animals employing the Tit for Tat strategy. Assumptions of generosity, with vengefulness, appear to be strongly selected for.
Opportunity to learn more about AI Innovation & Security Policy

Note: Sharing this workshop since LW has a lot of people interested in testing their fit for AI Governance. Sorry if this is too spammy for this forum, feel free to let me know if so.

If you're interested in a job tackling tough problems at the intersection of AI, innovation, and national security, consider applying to Horizon’s AI Innovation & Security Policy Workshop, taking place July 11-13 in Washington DC.

What do you get?

* Through hands-on sessions, expert panels, and networking opportunities, participants will gain practical insights into policy careers and develop the skills needed to effectively bridge the gap between the technology and policy worlds.
* Dive into the critical issues of AI innovation and security policy, including:
  * National security applications and implications of AI
  * Energy production and grid capacity for data centers
  * Global technological competition, export controls, and China’s AI ecosystem
  * Model testing and evaluation frameworks
  * The intersection of AI, cyber security, and critical infrastructure
* Past Horizon workshop participants have said:
  * “The workshop was my first step into my new career trajectory and it was inspiring to hear from such amazing speakers and meet like-minded peers.”
  * “Horizon gave me the confidence to change my career and move to DC to work in government. I also gained a new network of friends and learned about emerging tech.”
  * “I doubt I would have made the transition into policy without the Horizon workshop.”

Who is it for?

* This workshop is for US citizens from all backgrounds and experience levels.
* We welcome technologists, researchers, industry experts, and other professionals who bring valuable domain knowledge but may be unfamiliar with policy processes or government.
* No prior policy experience is required.
Speakers include (more to be announced):  * Samuel Hammond – Chief Economist @ Foundation for American Innovation * Lennart Heim – AI
Any systems to rate more abstract things?

We have many in place to give us aggregated reviews on books (example: goodreads), films (imdb), restaurants (yelp), places (tripadvisor), apps, etc... Yet it seems like there's no such thing for abstract yet important stuff like theories (e.g. Maslow hierarchy), techniques (CFAR's various), methods... Without proper research on the issue, I'm still going forth and saying that it's actually in pretty high need. Do you agree? If not, why? If yes, what does the system look like in your mind, in detail? And more important, how many of us here are willing to roll up their sleeves and together build such a thing?
Entangled Equilibria and the Twin Prisoners' Dilemma

In this post, I present a generalization of Nash equilibria to non-CDT agents. I will use this formulation to model mutual cooperation in a twin prisoners' dilemma, caused by the belief that the other player is similar to you, and not by mutual prediction. (This post came mostly out of a conversation with Sam Eisenstat, as well as contributions from Tsvi Benson-Tilsen and Jessica Taylor)

This post is sketchy. If someone would like to go through the work of making it more formal/correct, let me know. Also, let me know if this concept already exists.

Let $A_1,\dots,A_n$ be a finite collection of players. Let $M_i$ be a finite collection of moves available to $A_i$. Let $U_i:\prod_i M_i\to\mathbb{R}$ be a utility function for player $A_i$. Let $\Delta_i$ be the simplex of all probability distributions over $M_i$. Let $V_i$ denote the vector space of functions $v:M_i\to\mathbb{R}$ with $\sum_{m\in M_i}v(m)=0$.

Given a distribution $p\in\Delta_i$ and a vector $v\in V_i$, we say that $v$ is unblocked from $p$ if there exists an $\varepsilon>0$ such that $p+\varepsilon v\in\Delta_i$; i.e., it is possible to move in the direction of $v$ from $p$ and stay in $\Delta_i$.

Given a strategy profile $P=(p_1,\dots,p_n)$ in $\prod_{j\le n}\Delta_j$, and a vector $V=(v_1,\dots,v_n)$ in $\prod_{j\le n}V_j$, we say that $V$ improves $P$ for $A_i$ if

$$\lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\sum_{m_1\in M_1}\dots\sum_{m_n\in M_n}U_i(m_1,\dots,m_n)\left(\prod_{j\le n}\bigl(p_j(m_j)+\varepsilon v_j(m_j)\bigr)-\prod_{j\le n}p_j(m_j)\right)>0.$$

We call this limit the utility differential for $A_i$ at $P$ in the direction of $V$. I.e., $V$ improves $P$ for $A_i$ if $U_i$ is increased when $P$ is moved an infinitesimal amount in the direction of $V$. (Note that this is defined even when the vectors are not unblocked from the distributions.)

A "counterfactual gradient" for player $A_i$ is a linear function from $V_i$ to $\prod_{j\le n}V_j$, such that the function into the $V_i$ component is the identity. This represents how much player $A_i$ expects the probabilities of the other players to move when she moves her probabilities. A "counterfactual system" for $A_i$ is a continuous function which takes a strategy profile $P$ in $\prod_{j\le n}\Delta_j$, and outputs a counterfactual gradient.
This represents the fact that a player's counterfactual gradient
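To make the utility differential concrete, here is a small numerical sketch for the twin prisoners' dilemma; the payoff numbers are the usual textbook ones, chosen by me for illustration:

```python
# Player 1's payoffs in a prisoners' dilemma (first move is mine, second is the twin's).
U1 = {("C", "C"): 3.0, ("C", "D"): 0.0, ("D", "C"): 5.0, ("D", "D"): 1.0}
MOVES = ["C", "D"]

def expected_utility(p1, p2):
    return sum(U1[(a, b)] * p1[a] * p2[b] for a in MOVES for b in MOVES)

def utility_differential(p1, p2, v1, v2, eps=1e-6):
    """Finite-difference version of the limit defining 'V improves P'."""
    shifted = expected_utility({m: p1[m] + eps * v1[m] for m in MOVES},
                               {m: p2[m] + eps * v2[m] for m in MOVES})
    return (shifted - expected_utility(p1, p2)) / eps

uniform = {"C": 0.5, "D": 0.5}
toward_C = {"C": 1.0, "D": -1.0}   # shift probability mass toward cooperating
no_move = {"C": 0.0, "D": 0.0}

# CDT-style gradient (the other player's strategy held fixed): cooperating more hurts.
print(utility_differential(uniform, uniform, toward_C, no_move))   # ≈ -1.5
# Twin counterfactual gradient (the identical copy moves in lockstep): it helps.
print(utility_differential(uniform, uniform, toward_C, toward_C))  # ≈ 2.0
```

With the identity ("twin") counterfactual gradient, moving toward cooperation has a positive utility differential, which is the post's mechanism for mutual cooperation without mutual prediction.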
Shh, don't tell the AI it's likely to be evil

If you haven't been online in the past week: ChatGPT is OpenAI's new fine-tuned language model that is really much better at doing all sorts of things that you'd expect. Here it is debugging some code, or writing awful-sounding Seinfeld episodes, or (my favorite) writing a verse from the King James Bible about taking a PB sandwich out of a VCR.

My fairly-basic understanding of large language models is that they are trying to predict the most-likely next token in a sequence. So if you start a sentence with "I love peanut butter and..." the large language model will finish it with "jelly." It's clear that the relationships captured by "most-likely-token" are way more complex and interesting than one might initially guess (see: getting a PB sandwich out of a VCR above).

Given that LLMs are trained on the public internet, I'm wondering if we should be thinking hard about what sorts of sequences we are exposing these models to. Imagine OpenAI of the future goes to fine-tune one of their models to be a "friendly and nice and happy chatbot that just wants to help humans solve their problems, while also being aligned + not evil." ChatGPT5 is born, and when given a question as a prompt, it will likely respond with something friendly and nice and happy, etc.

But deep in the weights of ChatGPT5's network is a memory of the other "friendly and nice and happy chatbots" it has seen before: the AIs that we have been writing about since Asimov. And most of the AIs in our fiction have a tendency to work great at first, but eventually turn evil, and then try to kill everyone. And so, as these are the only other chatbots that ChatGPT5 has learned from, this may become the most-likely sequence. Be a nice and friendly and happy chatbot - and then, after about 2 years of being friendly and visibly-aligned, go ham -- for no other reason than this is the most likely set of tokens it's seen.
Another way to ask this question: would you rather ChatGPT had been trained on the dataset
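The "most-likely next token" idea above can be illustrated with a deliberately crude toy, a bigram counter over a tiny corpus (nothing like the transformer ChatGPT actually uses, but the same "predict the most frequent continuation" spirit):

```python
from collections import Counter

# Tiny made-up corpus; "jelly" follows "and" more often than "honey" does.
corpus = ("i love peanut butter and jelly . "
          "i love peanut butter and honey . "
          "i love peanut butter and jelly").split()

def most_likely_next(word):
    # Count every word that follows `word`, return the most frequent one.
    nexts = Counter(b for a, b in zip(corpus, corpus[1:]) if a == word)
    return nexts.most_common(1)[0][0]

print(most_likely_next("and"))  # → jelly
```

The worry in the post is exactly this dynamic at scale: if most "friendly chatbot" sequences in the training data eventually turn evil, that becomes the most frequent continuation.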
Can LessWrong provide me with something I find obviously highly useful to my own practical life?

When I meet new people, I often like to figure out how I can be useful in a really short amount of time, just by talking.

Example: Often this will involve evangelizing some items from a short list of technologies that have impacted my life a lot, such as erotic hypnosis (you can use this to have really good sex over any distance or even asynchronously; those 2 facts alone have a lot of lifechanging implications imo). Telling people about the (very large and real) efficacy of erotic hypnosis is useful because most people go from not imagining this is possible to imagining it's possible and having a clear path to test it out in a very short amount of time. If it works, the benefit is obvious, and I give a lot of update about whether it works + info about how to see for themselves quickly and easily.

Another example: In 2021, Sapphire gave me really good intel about founding crypto companies and really good generic arguments about why the EMH is false and therefore I should in fact act upon the good intel.

Another example: A post about how candles are bad for you

Non-example: When I try to extract useful conversations from others, a common reply is "if you meditate a lot it's really good". That's not useless to hear, I guess (I don't meditate and wouldn't know), but it's not super useful since people say it all the time and it's not really that easy to verify for myself.

I would really like a forum where the entire point of the forum was stuff like this: Read this -> instantly or near-instantly become way smarter about things of actual practical relevance to your life. To me, this is the best possible kind of post in the world, and the space of possible posts of this kind is so big that you could just fill a forum with only that. Why bother with anything else?
As far as I can tell, LessWrong is not such a forum, but people do try to do this some of the time successfully. I really want to engage with the people who are trying to do this, but I'm lazy and I
Meetup : Atlanta June meetup - Hacking Motivation

Discussion article for the meetup : Atlanta June meetup - Hacking Motivation

WHEN: 08 June 2014 07:03:20PM (-0400)

WHERE: 2388 Lawrenceville Hwy. Unit L. Decatur, GA 30033

Oge says: I recently read “The Motivation Hacker” by Nick Winter, a book which summarizes the state of the art in setting goals and getting our human brains to actually want to accomplish those goals. At this meetup, I’ll give a short talk on the highlights of the book. Afterward, we’ll collaboratively hack on any motivation issues that are brought to the group (so bring yours). I can’t wait to see y’all again!

If you have trouble getting to the venue, contact me at 404-542-6392

You can nominate future meetup topics here.

Discussion article for the meetup : Atlanta June meetup - Hacking Motivation
post proposal: Attraction and Seduction for Heterosexual Male Rationalists It seems there's some interest in PUA and attraction at Less Wrong. Would that subject be appropriate for a front-page post? I've drafted the opening of what I had in mind, below. Let me know what you think, and whether I should write the full post. Also, I've done lots of collaborative writing before, with much success (two examples). I would welcome input from or collaboration with others who have some experience and skill in the attraction arts. If you're one of those people, send me a message! Even if you just want to comment on early drafts or contribute a few thoughts. I should probably clarify my concept of attraction and seduction. The founders of "pickup" basically saw it as advice on "how to trick women into bed", but I see it as a series of methods for "How to become the best man you can be, which will help you succeed in all areas of life, and also make you attractive to women." Ross Jeffries used neuro-linguistic programming and hypnosis, and Mystery literally used magic tricks to get women to sleep with him. My own sympathies lie with methods advocated by groups like Art of Charm, who focus less on tricks and routines and more on holistic self-improvement. ... ... EDIT: That didn't take long. Though I share much of PhilGoetz's attitude, I've decided I will not write this post, for the reasons articulated here, here, here and here.  ... Here was the proposed post... ... When I interviewed to be a contestant on VH1's The Pick-Up Artist, they asked me to list my skills. Among them, I listed "rational thinking." "How do you think rational thinking will help you with the skills of attraction?" they asked. I paused, then answered: "Rational thinking tells me that attraction is a thoroughly non-rational process." A major theme at Less Wrong is "How to win at life with rationality." 
Today, I want to talk about how to win in your sex life with rationality. I didn't get the part on the VH1 show, but no matter: studying and practicing pick-up has
Project "MIRI as a Service"

One theoretically possible way to greatly speed up alignment research is to upload the minds of alignment researchers (with their consent, of course). Ideally, we would upload MIRI and some other productive groups, create thousands of their copies, and speed them up by orders of magnitude. This way, we may be able to solve alignment within years, if not months.

Obviously, the tech is not there yet. But there is a poor-man's version of mind uploading that does exist already:

1. collect everything that the person has ever written
2. take the smartest available language model and fine-tune it on the writings
3. the result is an incomplete but useful digital replica of the person
4. upon the release of a better language model, create a better replica, rinse, repeat.

The current language models are already smart enough to assist with research in non-trivial ways. If fine-tuned on alignment writings, such a model could become a "mini-MIRI in your pocket", available 24/7. A few examples of possible usage:

* "We have implemented the following confinement measures. What are some non-trivial ways to circumvent them?"
* "A NYT reporter asked us to explain our research on Proof-Producing Reflection for HOL. Write a short summary suitable for the NYT audience"
* "How can this idea be combined with the research on human preferences by Paul Christiano..."
* "Analyze this research roadmap and suggest how we can improve it"
* "Here is my surprisingly simple solution to alignment! Tell me how it could go terribly wrong?"
* "Provide a list of most recent papers, with summaries, related to corrigibility in the context of alignment research"
* "Let's chat about Value Learning. Correct me if I'm wrong, but..."

This could be a small technical project with a disproportionately large positive impact on alignment research. The first proof-of-concept could be as simple as a well-crafted prompt for ChatGPT, nudging it to think like an alignment researcher[1].
For a pro
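The "well-crafted prompt" proof of concept lends itself to a tiny sketch: a template that stitches excerpts of a researcher's collected writings into a persona prompt. A minimal illustration (the function name and prompt format are hypothetical, not any existing tool's API):

```python
def build_replica_prompt(author, excerpts, question, max_excerpts=3):
    """Assemble a persona prompt from an author's writings (hypothetical sketch)."""
    header = (
        f"You are a digital replica of {author}. "
        "Answer in their voice, grounded in the excerpts below.\n\n"
    )
    # Step 1 of the pipeline: the "collected writings", trimmed to fit a context window.
    body = "\n\n".join(
        f"Excerpt {i + 1}:\n{text}" for i, text in enumerate(excerpts[:max_excerpts])
    )
    return f"{header}{body}\n\nQuestion: {question}\nAnswer:"
```

The resulting string would then be sent to whatever language model is available; as better models are released, the same template gives a better replica for free.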
The Mountaineer's Fallacy I'm sure this has a name, but I can't remember it. So I have given it a new name. The Mountaineer's Fallacy. The Mountaineer's Fallacy is the suggestion that climbing Mount Everest is a good start on a trip to the moon. In one sense, you are making progress on a key metric: distance from the moon. But in another, more accurate sense, you are wasting a lot of time and effort on something that has no chance of helping you get to the moon.
An Ontology for Strategic Epistemology In order to apply game theory, we need a model of a strategic interaction. And that model needs to include data about what agents are present, what they care about, and what policies they can implement. Following the same aesthetic as the Embedded Agency sequence, I think we can build world models which at a low level of abstraction do not feature agents at all. But then we can apply abstractions to that model to draw boundaries around a collection of parts, and draw a system diagram for how those parts come together to form an agent. The upshot of this sequence is that we can design systems that work together to model the world and the strategic landscape it implies. Even when those systems were designed for different purposes using different ways to model the world, by different designers with different ways they would like the world to be. Automating Epistemology A software system doesn't need to model the world in order to be useful. Compilers and calculators don't model the world. But some tasks are most naturally decomposed as "build up a model of the world, and take different actions depending on the features of that model." A thermostat is one of the simplest systems I can think of that meaningfully implements this design pattern: it has an internal representation of the outside world ("the temperature" as reported by a thermometer), and takes actions that depend on the features of that representation (activate a heater if "the temperature" is too low, activate a cooler if "the temperature" is too high).  A thermostat's control chip can't directly experience temperature, any more than our brains can directly experience light bouncing off our shoelaces and hitting our retinas. But there are lawful ways of processing information that predictably lead a system's model of the world to be correlated with the way the world actually is. 
That lead a thermostat to register "the temperature is above 78° F" usually only in worlds where the temperature is above 78°
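The thermostat design pattern described above — build an internal representation, then branch on its features — fits in a few lines (a sketch; the thresholds are illustrative):

```python
class Thermostat:
    """Minimal model-based controller: it acts on its representation of
    'the temperature', not on the world directly."""

    def __init__(self, low=70.0, high=78.0):
        self.low, self.high = low, high
        self.reading = None  # internal representation of the outside world

    def observe(self, thermometer_reading):
        # A lawful information channel that keeps the model correlated
        # with the way the world actually is.
        self.reading = thermometer_reading

    def act(self):
        if self.reading is None:
            return "idle"
        if self.reading < self.low:
            return "heat"
        if self.reading > self.high:
            return "cool"
        return "idle"
```

Note that the controller never touches the world's true temperature — only its reading — which is exactly the gap between model and world that the post is pointing at.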
Making the "stance" explicit Often, when people explain the behavior of an AI system, there is one of Daniel Dennett’s stances implicit in their explanation. I think that when this happens implicitly, it can often lead to misleading claims and sloppy reasoning. For this reason, I feel it can be really valuable to be deliberately explicit which “stance” one is using. Here are Dennett’s three stances (in my own words): Physical/structural stance: Explaining a system’s behavior mechanically, as the interaction between its parts and with the outside environment. For example, we could explain the output of a neural net by looking at which combinations of neurons fired to bring that about. Design/functional stance: Explaining a system by looking at what its function is, or what it was made to do. For example, we could explain the behavior of a thermometer by the fact that its function is to track the temperature.  Intentional stance: Explaining a system’s behavior as a product of beliefs, desires, or mental reasoning of the system. For example, my friend got up to get a cup of coffee because they believed there was coffee in the other room, wanted to have the coffee, and reasoned that to get the coffee they had to get up.    The idea is that all three stances can in principle be applied to any system, but that for different systems each stance offers a different amount of explanatory power. For example, if I tried to understand my friend in terms of their design (perhaps the evolutionary pressures placed upon humans, or the in-lifetime learning applied to my friend’s brain) this would get me a lot less bang for my buck than my much simpler model of them as wanting coffee. Likewise, thinking of a thermometer as “wanting” to track the temperature and “believing” that the temperature outside does not match what it is displaying is unnecessarily complicated. Statements can sometimes make implicit assertions about which stance is the most useful. 
For example, consider: “GPT is just a text predictor.
Have You Tried Hiring People? The purpose of this post is to call attention to a comment thread that I think needs it. ---------------------------------------- seed: Wait, I thought EA already had 46$ billion they didn't know where to spend, so I should prioritize direct work over earning to give? https://80000hours.org/2021/07/effective-altruism-growing/ rank-biserial: I thought so too. This comment thread on ACX shattered that assumption of mine. EA institutions should hire people to do "direct work". If there aren't enough qualified people applying for these positions, and EA has 46 billion dollars, then their institutions should (get this) increase the salaries they offer until there are. To quote EY: > There's this thing called "Ricardo's Law of Comparative Advantage". There's this idea called "professional specialization". There's this notion of "economies of scale". There's this concept of "gains from trade". The whole reason why we have money is to realize the tremendous gains possible from each of us doing what we do best. > > This is what grownups do. This is what you do when you want something to actually get done. You use money to employ full-time specialists. Excerpts From The ACX Thread:[1] Daniel Kokotajlo: For those trying to avert catastrophe, money isn't scarce, but researcher time/attention/priorities is. Even in my own special niche there are way too many projects to do and not enough time. I have to choose what to work on and credences about timelines make a difference. Daniel Kirmani: I don't get the "MIRI isn't bottlenecked by money" perspective. Isn't there a well-established way to turn money into smart-person-hours by paying smart people very high salaries to do stuff? Daniel Kokotajlo: My limited understanding is: It works in some domains but not others. If you have an easy-to-measure metric, you can pay people to make the metric go up, and this takes very little of your time. 
However, if what you care about is hard to measure / takes lots of time for yo
Jason Silva on AI safety Just an FYI that Jason Silva, a "performance philosopher" who is quickly gaining popularity and audience, seems to have given little thought to, or not been exposed to the proper arguments for, or is unconvinced by, the existential threat of AGI. But of course, perhaps this optimism is what has allowed him to become so engagingly exuberant. "And I think if they're truly trillions of times more intelligent than us, they're not going to be less empathetic than us---they're probably going to be more empathetic. For them it might not be that big of deal to give us some big universe to play around in, like an ant farm or something like that. We could already be living in such a world for all we know. But either way, I don't think they're going to tie us down and enslave us and send us to death camps; I don't think they're going to be fascist A.I.'s. " Anyone have the connections to change his mind and help the X-risk meme piggyback on his voice?  Perhaps inviting him to Singularity Summit?
Nozick's Dilemma: A Critique of Game Theory

Introduction to Non-Cooperative Game Theory

Among the most interesting applications of game theory, and decision theory in general, are the so-called “non-zero-sum games”. In these games, we have payoffs that do not cancel out between players, as in a zero-sum game, and it is possible to have intermediate payoffs. More than that, these games are quite famous for working with the possibility of cooperation. In a zero-sum scenario, the rational action for players is always to maximize their payoff and reduce their rivals' results as much as possible (since a rival's gain necessarily means a loss for themselves). However, in a “non-zero-sum games” scenario, it is possible for players to cooperate, because there is a chance that a gain for one of the players does not mean a loss for another. Despite this, in general, the insights gleaned from these games are dismal. They show how agents acting in their own interest can generate deleterious and harmful results for a cooperative solution.

The most famous of these games is the infamous Prisoners' Dilemma. This game, popularized by Hollywood as a contest for girls, was developed in the 1950s by mathematicians Melvin Dresher and Albert Tucker at the RAND Corporation as a form of thought experiment to test hypotheses drawn from the work of Morgenstern and Von Neumann. Later, this game would be used as the perfect application of the idea of equilibrium developed by John Nash.

To illustrate this game, let's use a ... fun example. I'm a big fan of British film director Christopher Nolan, mainly because of his extremely clever and complex plots. Nolan is the type of director who certainly spends a few hours reading Nature or Scientific American to improve his films. One of his best movies is Batman: The Dark Knight. In this film, our eccentric billionaire hero is trying to save Gotham from a series of terrorist attacks carried out by his eternal rival, the Joker.
In the final part of the film, Joker poses a wonderful di
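The Nash equilibrium idea the post leans on can be checked mechanically: an action profile is a pure-strategy equilibrium exactly when neither player can improve by deviating unilaterally. A sketch with the textbook Prisoner's Dilemma payoffs (years lost, written as negative utility; the numbers are the standard illustrative ones):

```python
def pure_nash_equilibria(payoffs):
    """Return action profiles where neither player gains by deviating alone."""
    actions = sorted({a for a, _ in payoffs})
    equilibria = []
    for a in actions:
        for b in actions:
            u1, u2 = payoffs[(a, b)]
            # Player 1 checks deviations in the first slot, player 2 in the second.
            best1 = all(payoffs[(a2, b)][0] <= u1 for a2 in actions)
            best2 = all(payoffs[(a, b2)][1] <= u2 for b2 in actions)
            if best1 and best2:
                equilibria.append((a, b))
    return equilibria

# Classic Prisoner's Dilemma: C = cooperate (stay silent), D = defect (confess).
pd = {("C", "C"): (-1, -1), ("C", "D"): (-3, 0),
      ("D", "C"): (0, -3), ("D", "D"): (-2, -2)}
```

Running `pure_nash_equilibria(pd)` picks out mutual defection — the dismal result the post describes, since both players would be better off at (C, C).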
Agents Over Cartesian World Models Thanks to Adam Shimi, Alex Turner, Noa Nabeshima, Neel Nanda, Sydney Von Arx, Jack Ryan, and Sidney Hough for helpful discussion and comments. Abstract We analyze agents by supposing a Cartesian boundary between agent and environment. We extend partially-observable Markov decision processes (POMDPs) into Cartesian world models (CWMs) to describe how these agents might reason. Given a CWM, we distinguish between consequential components, which depend on the consequences of the agent's action, and structural components, which depend on the agent's structure. We describe agents that reason consequentially, structurally, and conditionally, comparing safety properties between them. We conclude by presenting several problems with our framework. Introduction Suppose a Cartesian boundary between agent and environment:[1] There are four types: actions, observations, environmental states, and internal states. Actions and observations go from agent to environment and vice-versa. Environmental states are on the environment side, and internal states are on the agent side. Let A,O,E,I refer to actions, observations, environmental states, and internal states. We describe how the agent interfaces with the environment with four maps: observe, orient, decide, and execute.[2] * observe:E→ΔO describes how the agent observes the environment, e.g., if the agent sees with a video camera, observe describes what the video camera would see given various environmental states. If the agent can see the entire environment, the image of observe is distinct point distributions. In contrast, humans can see the same observation for different environmental states. * orient:O×I→ΔI describes how the agent interprets the observation, e.g., the agent's internal state might be memories of high-level concepts derived from raw data. If there is no historical dependence, orient depends only on the observation. 
In contrast, humans map multiple observations onto the same internal state. * decide:I→
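The observe/orient/decide interface above can be sketched as code, collapsing the distributions ΔO and ΔI to deterministic maps for brevity (a hypothetical illustration using strings for all four types, not the post's formalism):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class CartesianAgent:
    """Types stand in for A (actions), O (observations), E (environmental
    states), I (internal states); execute is left to the environment."""
    observe: Callable[[str], str]       # E -> O
    orient: Callable[[str, str], str]   # O x I -> I
    decide: Callable[[str], str]        # I -> A
    internal: str = ""

    def step(self, env_state):
        obs = self.internal_obs = self.observe(env_state)
        self.internal = self.orient(obs, self.internal)
        return self.decide(self.internal)
```

A toy agent whose internal state is a memory of all past observations shows the historical dependence of orient: the same observation can land in different internal states depending on what came before.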
Self-programming through spaced repetition

Spaced repetition software is a flashcard memorization technology based on the spacing effect. Personally, I think of it as a way of engineering dispositions, a form of self-programming. More concretely, I find that spaced repetition is helpful for

* internalizing knowledge
* compressing recent experiences
* conditioning specific future behaviors
* making analogies
* laying the groundwork for future insights
* confusion identification
* concept clarification
* reconciling models
* creating new representations
* creating examples

Below I give some principles, tips, and examples that aim at helping you get the most out of SRS. This post is compact, and I think it will be helpful to re-read it periodically as you use SRS more.

Principles

* SR strengthens connections between mental representations. There are a variety of ways this can happen, seeing as there is both a variety of mental representations and connections between them. For example, the two mental representations could be of a context and a behavior, and strengthening the connection would mean making the behavior more likely in that context.
* Mental representations precisely condition behavior. The point of doing SR is to change behavior (whether mental or physical) and I think it helps to keep the chain of causality in mind. It gives me a framework to think about the different things SR does for me, and how those things are achieved.
* SR establishes a personal language of thought. Cards about situations or concepts give you a canonical handle/representation for thinking about them (like Eliezer's clever post titles). This can have positive effects (consistency of thought and building potential) as well as negative effects (potential rigidity of thought).

Tips

* Be very specific when conditioning behaviors. Make cards that follow the form "Do actionable_behavior_x in specific_situation_y under condition_z."
This will limit confusion about whether you should execute the behav
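The spacing effect that SRS relies on is usually implemented by a review scheduler. A simplified SM-2-style sketch (the constants are the standard SM-2 ones, but this is an illustration, not any particular SRS's actual code):

```python
def next_interval(interval_days, ease, quality):
    """Simplified SM-2-style scheduler.

    quality: self-graded recall, 0 (blackout) to 5 (perfect).
    Returns (next interval in days, updated ease factor).
    """
    if quality < 3:
        return 1, ease  # lapse: see the card again tomorrow, ease unchanged
    # Standard SM-2 ease update, floored at 1.3.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval_days == 0:
        return 1, ease
    if interval_days == 1:
        return 6, ease
    return round(interval_days * ease), ease
```

The exponential growth of intervals for well-remembered cards is what makes it cheap to keep thousands of "dispositions" in maintenance at once.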
Meetup : London meetup: thought experiments

Discussion article for the meetup : London meetup: thought experiments

WHEN: 29 September 2013 02:00:00PM (+0100)

WHERE: LShift, Hoxton Point, 6 Rufus St, London, N1 6PE

NB note the change of location! We are so good at providing reasons for our decisions after the fact that the reasons before the fact that cause our decisions are often not clear to us. At this meeting we'll discuss and practice techniques for getting past our rationalisations and learning more about the real reasons we find ourselves making the choices we do.
Why do we enjoy music? Enjoying music doesn't seem to have any obvious purpose. Sure you can argue it strengthens social bonds, but why specifically sounds arranged in patterns through time over anything else? At least with humor you can say it's about identifying the generating function of some observation which is sort of like reducing prediction error in predictive coding (and I suspect something like this is the basis for aesthetics) but I can't fit music into being anything like this.
Sharing the World with Digital Minds **Note:** This is "Draft version 1.1" of this paper. If I learn that a later draft has been released, I'll edit this post to include it. If you learn of a later draft, please let me know! --- *Written by* [*Carl Shulman*](https://forum.effectivealtruism.org/users/carl_shulman) *and Nick Bostrom.* Abstract -------- > The minds of biological creatures occupy a small corner of a much larger space of possible minds that could be created once we master the technology of artificial intelligence. Yet many of our moral intuitions and practices are based on assumptions about human nature that need not hold for digital minds. This points to the need for moral reflection as we approach the era of advanced machine intelligence.  > > Here we focus on one set of issues, which arise from the prospect of digital “utility monsters”. These may be mass-produced minds with moral statuses and interests similar to those of human beings or other morally considerable animals, so that collectively their moral claims outweigh those of the incumbent populations. Alternatively it may become easy to create individual digital minds with much stronger individual interests and claims to resources than humans.  > > Disrespecting these could produce a moral catastrophe of immense proportions, while a naïve way of respecting them could be disastrous for humanity. A sensible approach requires reforms of our moral norms and institutions along with advance planning regarding what kinds of digital minds we bring into existence. > > [**Read the rest of the paper**](https://www.nickbostrom.com/papers/monster.pdf?fbclid=IwAR1ecJvhDq6FKQF0lRicd8BeTCbuJ9iTJD55fuFVPbGuxAqQy4b-s3Y9fMI)
Feelings: A Brief History

"Reason alone can never be a motive to any action of the will… it can never oppose passion in the direction of the will… The reason is, and ought to be, the slave of the passions, and can never pretend to any other office than to serve and obey them"

These words come from David Hume's A Treatise of Human Nature, published in 1739.[1] We have had philosophical systems that put the wills, or the motive forces, in their rightful place in the hierarchy of action choices for a long time. The idea being that there's a process outside rationality making selections, based on the reality the personality inhabits, that evolutionarily benefit the human animal. I intend to argue that emotions are not a separate phenomenon from rational thought, but are instead a core functional component for assembling the rational universe.

I Feel, Therefore I Am

Everything we humans care about. Everything we notice. Everything in the rational human universe exists because at some point in time a human had a strong enough feeling about it to care about it and turn it into part of their shared existence with other human beings.

The Emotional Machine

I had set myself the goal of building an artificial general intelligence based in part on Marvin Minsky's emotion machine. I tried working out what it would take to model what I knew about emotions as a biologist, and trying out functional abstractions for them as a computer scientist. I had both personal and educational exposure to psychological concepts, so I began with the discrete emotions and tried imagining what each might look like as a function. Here is an entire article on the various models people have tried before. While looking into the possibility, I ran across the continuous emotional model (figure 1 [2]) and, after staring at that for a while, realized I was looking at a four-dimensional space (figure 2).

figure 1: The discrete emotions mapped as dyads of every emotion with the exception of its polar opposite.
figure 2:
Requirements for a Basin of Attraction to Alignment

TL;DR: It has been known for over a decade that certain agent architectures based on Value Learning by construction have the very desirable property of having a basin of attraction to full alignment, where if you start sufficiently close to alignment they will converge to it, thereby evading the problem of "you have to get everything about alignment exactly right on the first try, in case of fast takeoff". I recently outlined in Alignment has a Basin of Attraction: Beyond the Orthogonality Thesis the suggestion that for sufficiently capable agents this is in fact a property of any set of goals sufficiently close to alignment, basically because with enough information and good intentions the AI can deduce or be persuaded of the need to perform value learning. I'd now like to analyze this in more detail, breaking the argument that the AI would need to make for this into many simple individual steps, and detailing the background knowledge that would be required at each step, to try to estimate the amount and content of the information that the AI would need for it to be persuaded by this argument, to get some idea of the size of the basin of attraction.

I am aware that some of the conclusions of this post may be rather controversial. I would respectfully ask that anyone who disagrees with it do me and the community the courtesy of posting a comment explaining why it is incorrect, or if that is too time-consuming at least selecting a region of the text that you disagree with and then clicking the resulting smiley-face icon to select a brief icon/description of how/why, rather than simply down-voting this post just because you disagree with some of its conclusions. (Of course, if you feel that this post is badly written, or poorly argued, or a waste of space, then please go right ahead and down-vote it — even if you agree with most or all of it.)
Why The Orthogonality Thesis Isn't a Blocker The orthogonality thesis, the observation that an agent of any intellig
is gpt-3 few-shot ready for real applications?

This is a lengthy reply to [@the-moti](https://tmblr.co/mWXtvJ3Sr0NPLUIFhgtxxnA)’s post [here](https://the-moti.tumblr.com/post/625378774714908672/nostalgebraist-all-the-gpt-3-excitementhype-im). Creating a new post to limit thread length, and so I can crosspost to LW.

[@the-moti](https://tmblr.co/mWXtvJ3Sr0NPLUIFhgtxxnA) says, in part:

> This obviously raises two different questions: 1. Why did you think that no one would use few-shot learning in practice? 2. Why did other people think people would use few-shot learning in practice?
>
> I would be interested in hearing your thoughts on these two points.

—

Thanks for asking! First of all, I want to emphasize that the GPT-3 *paper* was not about few-shot GPT-3 as a practical technology. (This is important, because the paper is the one large body of quantitative evidence we have on few-shot GPT-3 performance.)

This is not just my take on it: before the OpenAI API was announced, all the discussion I saw took for granted that we were talking about a scientific finding and its broader implications. I didn’t see any commentator whose main takeaway was “wow, if I could do this few-shot thing right now, I could build amazing projects with it.” Indeed, a [common](https://www.lesswrong.com/posts/ZHrpjDc3CepSeeBuE/gpt-3-a-disappointing-paper?commentId=t7FCECuHezj3KFoap) [theme](https://www.lesswrong.com/posts/ZHrpjDc3CepSeeBuE/gpt-3-a-disappointing-paper?commentId=rE77s5PKPMzq73QZr) in critical commentary on my post was that I was too focused on whether few-shot was useful right now with this specific model, whereas the critical commentators were more focused on the implications for even larger models, the confirmation of scaling laws over a new parameter regime, or the illustration-in-principle of a kind of meta-learning.
[Gwern’s May newsletter](https://www.gwern.net/newsletter/2020/05) is another illustrative primary source for the focus of the discussion in this brief “pre-API” period. (The API was announced on June 11.)

As I read it (perhaps benefitting from hindsight and discussion), the main points of the paper were (1) bigger models are better at zero/few-shot (i.e. that result from the GPT-2 paper holds over a larger scale), (2) more “shots” are better when you’re doing zero/few-shot, (3) there is an interaction effect between 1+2, where larger models benefit more from additional “shots,” (4) this *could* actually become a practical approach (even the dominant approach) in the future, as illustrated by the example of a very large model which achieves competitive results with few-shot on some tasks.

The paper did not try to optimize its prompts – indeed its results are [already being improved upon](http://gptprompts.wikidot.com/linguistics:word-in-context) by API acolytes – and it didn’t say anything about techniques that will be common in any application, like composing together several few-shot “functions.” It didn’t talk about speed/latency, or what kind of compute backend could serve many users with a guaranteed SLA, or how many few-shot “function” evaluations per user-facing output would be needed in various use cases and whether the *accumulated* latency would be tolerable. (See [this post](https://minimaxir.com/2020/07/gpt3-expectations/) on these practical issues.) It was more of a proof of concept, and much of that concept was about scaling rather than this particular model.

—

So I’d argue that right now, the ball is in the few-shot-users’ court. Their approach *might* work – I’m not saying it couldn’t! In their favor: there is plenty of room to further optimize the prompts, explore their composability, etc. On the other hand, there is no body of evidence saying this actually works.
OpenAI wrote a long paper with many numbers and graphs, but that paper wasn’t about *whether their API was actually a good idea.* (That is not a criticism of the paper, just a clarification of its relevance to people wondering whether they should use the API.) This is a totally new style of machine learning, with little prior art, running on a mysterious and unproven compute backend.  *Caveat emptor!* *—* Anyway, on to more conceptual matters. The biggest *advantages* I see in few-shot learning are **(+1)** broad accessibility (just type English text) and ability to quickly iterate on ideas **(+2)** ability to quickly define arbitrary NLP “functions” (answer a factual question, tag POS / sentiment / intent, etc … the sky’s the limit), and compose them together, without incurring the memory cost of a new fine-tuned model per function What could really impress me is (+2).  IME, it’s not really that costly to *train* new high-quality models: you can finetune BERT on a regular laptop with no GPU (although it takes hours), and on ordinary cloud GPU instances you can finetune BERT in like 15 minutes. The real cost is keeping around an entire finetuned model (~1.3GB for BERT-large) for *each individual* NLP operation you want to perform, and holding them all in memory at runtime. The GPT-3 approach effectively trades this memory cost for a time cost.  You use a single very large model, which you hope already contains every function you will ever want to compute.  A function definition in terms of this model doesn’t take a gigabyte to store, it just takes a tiny snippet of text/code, so you can store tons of them.  On the other hand, evaluating each one requires running the big model, which is slower than the task-specific models would have been. So storage no longer scales badly with the number of operations you define.  However, latency still does, and latency per call is now much larger, so this might end up being as much of a constraint.  
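The "function definition as a tiny snippet of text" point can be made concrete: a few-shot "function" is just a prompt template closed over its examples. A sketch (the model call itself is omitted; names and format are illustrative):

```python
def few_shot_fn(instruction, shots):
    """Turn a task description plus (input, output) examples into a reusable
    prompt-builder -- a few-shot 'function' that costs bytes, not gigabytes."""
    header = instruction + "\n\n"
    body = "".join(f"Input: {x}\nOutput: {y}\n\n" for x, y in shots)

    def apply(new_input):
        # In real use, this string would be sent to the large model,
        # and its completion read back as the function's return value.
        return f"{header}{body}Input: {new_input}\nOutput:"
    return apply


sentiment = few_shot_fn(
    "Label the sentiment of each input as positive or negative.",
    [("I loved it", "positive"), ("Awful service", "negative")],
)
```

Storing `sentiment` takes a few hundred bytes, versus ~1.3GB for a fine-tuned BERT-large — but every call to it costs one evaluation of the very large model, which is the time-for-memory trade described above.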
The exact numbers – not well understood at this time – are crucial: in real life the difference between 0.001 seconds, 0.1 seconds, 1 second, and 10 seconds will make or break your project. *—* As for the potential *downsides* of few-shot learning, there are many, and the following probably excludes some things I’ve thought of and then forgotten: **(-1)** The aforementioned potential for deal-breaking slowness. **(-2)** You can only provide a very small amount of information defining your task, limited by context window size. The fact that more “shots” are better arguably compounds the problem, since you face a tradeoff between providing more examples of the *same* thing and providing examples that define a more *specific* thing. The extent to which this matters depends a lot on the task.  It’s a complete blocker for many creative applications which require imitating many nuances of a particular text type not well represented in the training corpus. For example, I could never do [@nostalgebraist-autoresponder](https://tmblr.co/mJeO8knbQHSr-MbpPt5lyGg)​​ with few-shot: my finetuned GPT-2 model knows all sorts of things about my writing style, topic range, opinions, etc. from seeing ~3.65 million tokens of my writing, whereas few-shot you can only identify a style via ~2 thousand tokens and hope that’s enough to dredge the rest up from the prior learned in training.  (I don’t know if my blog was in the train corpus; if it wasn’t, we’re totally screwed.) I had expected AI Dungeon would face the same problem, and was confused that they were early GPT-3 adopters.  But it turns out they actually [fine-tuned](https://twitter.com/nickwalton00/status/1287885952543682560) (!!!!), which resolves my confusion … and means the first real, exciting GPT-3 application out there *isn’t actually a demonstration of the power of few-shot* but in fact the opposite. With somewhat less confidence, I expect this to be a blocker for specialized-domain applications like medicine and code.  
The relevant knowledge may well have been present in the train corpus, but with so few bits of context, you may not be able to overcome the overall prior learned from the whole train distribution and “zoom in” to the highly specialized subset you need.

**(-3)** Unlike supervised learning, there’s no built-in mechanism where you continually improve as your application passively gathers data during usage. I expect this to be a big issue in commercial applications. Often, a company is OK accepting a model that isn’t great at the start, *if* it has a mechanism for self-improvement without much human intervention. If you do supervised learning on data generated by your product, you get this for free. With few-shot, you can perhaps contrive ways to feed in segments of data across different calls, but from the model’s perspective, no data set bigger than 2048 tokens “exists” in the same world at once.

**(-4)** Suffers a worse form of the ubiquitous ML problem that “you get *exactly* what you asked for.” In supervised learning, your model will avoid doing the hard thing you want if it can find easy, dumb heuristics that still work on your train set. This is bad, but at least it can be identified, carefully studied (what was the data/objective? how can they be gamed?), and mitigated with better data and objectives. With few-shot, you’re no longer asking an *arbitrary* query and receiving, from a devious genie, the response you deserve. Instead, you’re constrained to ask queries of a particular form: *“what is the next token, assuming some complicated prior distributed from sub-sampled Common Crawl + WebText + etc.?”* In supervised learning, when your query is being gamed, you can go back and patch it in arbitrary ways. The lower bound on this process comes only from your skill and patience. In few-shot, you are fundamentally lower-bounded by the extent to which the thing you really want can be expressed as next-token prediction over that complicated prior.
You can try different prompts, but ultimately you might run into a fundamental bound here that is prohibitively far from zero.  No body of research exists to establish how bad this effect will be in typical practice.

I’m somewhat less confident of this point: the rich priors you get out of a large pretrained LM will naturally help push things in the direction of outcomes that make linguistic/conceptual sense, and expressing queries in natural language might add to that advantage.  However, few-shot *does* introduce a new gap between the queries you want to ask and the ones you’re able to express, and this new gap *could* be problematic.

**(-5)** Provides a tiny window into a huge number of learned parameters.

GPT-3 is a massive model which, in each call, generates many intermediate activations of vast dimensionality.  The model is pre-trained by supervision on a tiny subset of these, which specify probability distributions over next-tokens. The few-shot approach makes the gamble that this *same* tiny subset is all the user will need for applications.  It’s not clear that this is the right thing to do with a large model – for all we know, it might even be the case that it is *more* suboptimal the larger your model is.

This point is straying a bit from the central topic, since I’m not arguing that this makes GPT-3 few-shot (im)practical, just suboptimal relative to what might be possible.  However, it does seem like a significant impoverishment: instead of the flexibility of leveraging immense high-dimensional knowledge however you see fit, as in the original GPT, BERT, [adapters](https://arxiv.org/pdf/1902.00751.pdf), etc., you get even immenser and higher-dimensional knowledge … presented through a tiny low-dimensional pinhole aperture.

—

The main reason I initially thought “no one would use few-shot learning like this” was the superior *generalization* *performance* of fine-tuning.
I figured that if you’re serious about a task, you’ll care enough to fine-tune for it.

I realize there’s a certain mereology problem with this argument: what is a “single task,” after all?  If each fine-tuned model incurs a large memory cost, you can’t be “serious about” many tasks at once, so you have to chunk your end goal into a small number of big, hard tasks.  Perhaps with few-shot, you can chunk into smaller tasks, themselves achievable with few-shot, and then compose them.

That may or may not be practical depending on the latency scaling.  But if it works, it gives few-shot room for a potential edge.  You might be serious enough about a large task to fine-tune for it … but what if you can express it as a composition of smaller tasks you’ve already defined in the few-shot framework?  Then you get it instantly.

This is a flaw in the generalization performance argument.  Because of the flaw, I didn’t list that argument above.  The list above provides more reasons to doubt few-shot *above and beyond* the generalization performance argument, and again in the context of “serious” work where you care enough to invest some time in getting it right.

I’d like to especially highlight points like (-2) and (-3) related to scaling with additional task data.

The current enthusiasm for few-shot and meta-learning – that is, for immediate transfer to new domains with an *extremely* low number of domain examples – makes sense from a scientific POV (humans can do it, why can’t AI?), but strikes me as misguided in applications.

Tiny data is rare in applied work, both because products generate data passively – and because *if* a task might be profitable, *then* it’s worth paying an expert to sit down for a day or two and crank out ~1K annotations for supervised learning.  And with modern NLP like ELMo and BERT, ~1K is really enough!
It’s worth noting that most of the [superGLUE tasks](https://arxiv.org/pdf/1905.00537.pdf) have <10K train examples, with several having only a few hundred.  (This is a “low-data regime” relative to the expectations of the recent past, but a regime where you can now get good results with a brainless cookie-cutter finetuning approach, in superGLUE as in the rest of life.)

![](https://64.media.tumblr.com/ac5d758bca0e30627f0d452b0eff4de7/752a82d8cbc74d92-45/s540x810/6e8615fa54923bbe8848959baa66a67adbc1c88d.png)

GPT-3 few-shot can perform competitively on some of these tasks while pushing that number down to 32, but at the cost of many downsides, unknowns, and flexibility limitations.  Which do you prefer: taking on all those risks, or sitting down and writing out a few more examples?

—

The trajectory of my work in data science, as it happens, looks sort of like a move from few-shot-like approaches toward finetuning approaches.

My early applied efforts assumed that I would never have the kind of huge domain-specific corpus needed to train a model from scratch, so I tried to compose the output of many SOTA models on more general domains.  And this … worked out terribly.  The models did exactly what they were trained to do, not what I wanted.  I had no way to scale, adapt or tune them; I just accepted them and tried to work around them.

Over time, I learned the value of doing *exactly* what you want, not something close to it.  I learned that a little bit of data in your *actual* domain, specifying your *exact* task, goes much further than any domain-general component.  Your applied needs will be oddly shaped, extremely specific, finicky, and narrow.
You rarely need the world’s greatest model to accomplish them – but you need a model with access to *a very precise specification of exactly what you want.*

One of my proudest ML accomplishments is a system that does something very domain-specific and precisely shaped, using LM-pretrained components plus supervised learning on ~1K of my own annotations.  *Sitting down and personally churning out those annotations must have been some of the most valuable time I have ever spent at work, ever*.  I wanted something specific and finicky and specialized to a very particular use case.  So I sat down and specified what I wanted, as a long list of example cases.  It took a few days … and I am still reaping the benefits a year later.

If the few-shot users are working in domains anything like mine, they either know some clever way to evade this hard-won lesson, or they have not yet learned it.

—

But to the other question … why *are* people so keen to apply GPT-3 few-shot learning in applications?  This question forks into “why do end users think this is a good idea?” and “why did OpenAI provide an API for doing this?”

I know some cynical answers, which I expect the reader can imagine, so I won’t waste your time writing them out.  I don’t actually know what the non-cynical answers look like, and my ears are open.

*(For the record, all of this only applies to few-shot.  OpenAI is apparently going to provide finetuning as a part of the API, and has already provided it to AI Dungeon.  Finetuning a model with 175B parameters is a whole new world, and I’m very excited about it.*

*Indeed, if OpenAI can handle the costs of persisting and running finetuned GPT-3s for many clients, all of my concerns above are irrelevant.
But if typical client use of the API ends up involving a finetuning step, then we’ll have to revisit the GPT-3 paper and much of the ensuing discussion, and ask when – if not now – we actually expect finetuning to become obsolete, and what would make the difference.)*
Singlethink

I remember the exact moment when I began my journey as a rationalist.

It was not while reading Surely You’re Joking, Mr. Feynman or any existing work upon rationality; for these I simply accepted as obvious. The journey begins when you see a great flaw in your existing art, and discover a drive to improve, to create new skills beyond the helpful but inadequate ones you found in books.

In the last moments of my first life, I was fifteen years old, and rehearsing a pleasantly self-righteous memory of a time when I was much younger. My memories this far back are vague; I have a mental image, but I don’t remember how old I was exactly. I think I was six or seven, and that the original event happened during summer camp.

What happened originally was that a camp counselor, a teenage male, got us much younger boys to form a line, and proposed the following game: the boy at the end of the line would crawl through our legs, and we would spank him as he went past, and then it would be the turn of the next eight-year-old boy at the end of the line. (Maybe it’s just that I’ve lost my youthful innocence, but I can’t help but wonder . . .) I refused to play this game, and was told to go sit in the corner.

This memory—of refusing to spank and be spanked—came to symbolize to me that even at this very early age I had refused to take joy in hurting others. That I would not purchase a spank on another’s butt, at the price of a spank on my own; would not pay in hurt for the opportunity to inflict hurt. I had refused to play a negative-sum game.

And then, at the age of fifteen, I suddenly realized that it wasn’t true. I hadn’t refused out of a principled stand against negative-sum games. I found out about the Prisoner’s Dilemma pretty early in life, but not at the age of seven. I’d refused simply because I didn’t want to get hurt, and standing in the corner was an acceptable price to pay for not getting hurt.
More importantly, I realized that I had always known this—that the real m
Explorative Questions towards Argumentation + GitHub

I am interested in the potential of automated reasoning and how this is done in groups. Since LessWrong has done such a good job of promoting and demonstrating feasible rationality, I am asking for insight as to how this could work with 'dialogue' or reasoned debate.

I have a general concept in mind about how GitHub could be used and wondered if anyone could share their thoughts on how they see 'version control' being used to help people make better decisions and come to a consensus more easily. If templates could work well for certain topics, replicating and extending functionality would be easier and could become a standard operational perspective when discussing issues and solving problems.

I realize I need a unique and novel angle to exploit, and initially I want to explore the extent to which programming elements can be used to extend the functionality of adding and approving elements of the discussion. This would also include the evolution of the templates and naturally the topics they can handle for improving scope.

The problem I have, other than demonstrating the specific benefits of choosing GitHub as the mechanism to evolve 'reasoned debate', is that a certain methodology's effectiveness and proven worth in reaching consensus would have to be shown for each type of argumentation topic. I am not assuming this would be hard to do, and it would count towards an initial MVP of the project.

Any feedback or alternative perspectives worth considering and/or potential implementation issues would be highly appreciated. :)
Don't Get Distracted by the Boilerplate

Author’s Note: Please don’t get scared off by the first sentence. I promise it's not as bad as it sounds.

There’s a theorem from the early days of group theory which says that any continuous, monotonic function which does not depend on the order of its inputs can be transformed to addition. A good example is multiplication of positive numbers: f(x, y, z) = x*y*z. It’s continuous, it’s monotonic (increasing any of x, y, or z increases f), and we can change around the order of inputs without changing the result. In this case, f is transformed to addition using a logarithm: log(f(x, y, z)) = log(x) + log(y) + log(z).

Now, at first glance, we might say this is a very specialized theorem. “Continuous” and “monotonic” are very strong conditions; they’re not both going to apply very often. But if we actually look through the proof, it becomes clear that these assumptions aren’t as important as they look. Weakening them does change the theorem, but the core idea remains. For instance, if we remove monotonicity, then our function can still be written in terms of vector addition.

Many theorems/proofs contain pieces which are really just there for modelling purposes. The central idea of the proof can apply in many different settings, but we need to pick one of those settings in order to formalize it. This creates some mathematical boilerplate. Typically, we pick a setting which keeps the theorem simple - but that may involve stronger boilerplate assumptions than are strictly necessary for the main idea. In such cases, we can usually relax the boilerplate assumptions and end up with slightly weaker forms of the theorem, which nonetheless maintain the central concepts.

Unfortunately, the boilerplate occasionally distracts people who aren’t familiar with the full idea underlying the proof.
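As an aside, the log-transform example earlier is easy to check numerically; a minimal sketch, with arbitrarily chosen positive inputs:

```python
import math

# Checking that the logarithm turns multiplication of positive numbers
# into addition: log(f(x, y, z)) == log(x) + log(y) + log(z).
def f(x, y, z):
    return x * y * z  # continuous, monotonic, symmetric in its inputs

x, y, z = 2.0, 3.0, 5.0
lhs = math.log(f(x, y, z))
rhs = math.log(x) + math.log(y) + math.log(z)
assert math.isclose(lhs, rhs)  # the product decomposes into a sum
```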
For some reason, I see this problem most with theorems in economics, game theory and decision theory - the sort of theorems which say “either X, or somebody is giving away free money”. Peo
Should there be more people on the leaderboard?

I'm wondering what the optimal number of people on the leaderboard would be. I suspect that people who appear on the leaderboard post more often because they want to remain on it. The other advantage is that if the leaderboard seems in reach, more people will compete to get on it. On the other hand, if too many people were added to the leaderboard, then "being on the leaderboard" would be worthless and people would only care if they had a high position.

There are currently 15 people on the leaderboard. I suspect that if there were 20 people on the leaderboard, that would increase the motivation effect without significantly devaluing being on the leaderboard itself. What do people think?
What are some bizarre theories based on anthropic reasoning?

The Doomsday argument and the simulation argument are quite bizarre to most people. But these are not the only strange theories one can come up with when employing anthropics. Some examples:

You are likely to have an unusually high IQ

Perhaps the brain works in a way where having a high IQ correlates with something that also causes more observer moments. Hence there are more high-IQ experiences in the world than low-IQ ones.

Fragile universe

Universe-destroying physical catastrophes that expand at the speed of light (say, false vacuum collapse) could be very frequent, as much as once every second. It is only due to survivorship bias that we think the universe is stable and safe. How would we know?

Animals do not have consciousness

There are more animals than humans on Earth. Still, we find ourselves as humans. Perhaps it is because we can only be humans, as only humans have consciousness.

We are stuck inside an infinite loop

Let's assume the simulation argument is correct. Then we probably exist inside a simulation, run by some software. All software has bugs. One of the bugs software sometimes has is getting itself into an infinite loop. The biggest amount of experience computed by this software could then be inside this infinite loop.
Announcement: Legacy karma imported Hey Everyone, I finally got around to properly importing the old karma from LessWrong 1.0. Your vote-weight and the karma displayed on your profile should now reflect all the contributions you made to LessWrong over the years. We might change this around again at some future point in time, but we just stuck with the same karma formula as the old LessWrong had, with old main posts being worth 10 karma per vote, and everything else being worth 1. Sorry for this taking so long. Comment or ping us on Intercom if you notice anything karma-related being broken, or your karma not being properly imported.
European Master's Programs in Machine Learning, Artificial Intelligence, and related fields

While there is no shortage of detailed information on master’s degrees, we think that there is a lack of perspectives from students who have actually completed the program and experienced the university. Therefore we decided to write articles on multiple European master's programs in Machine Learning, Artificial Intelligence, and related fields. The texts are supposed to give prospective students an honest evaluation of the teaching, research, industry opportunities, and city life of a specific program. Since many of the authors are Effective Altruists and interested in AI safety, a respective section is often included as well.

It may not always be obvious, but there are many English-language degrees across Europe. Compared to America, these can be more affordable, offer more favorable visa terms, and a higher quality of life. We hope that you will consider bringing your talents to Europe.

These are the articles that have already been written:

* University of Amsterdam (Master's Artificial Intelligence)
* University of Edinburgh (Master's Artificial Intelligence)
* ETH Zürich (ML related M.Sc. programs)
* EPF Lausanne (ML related M.Sc. programs)
* University of Oxford (Master's Statistical Science)
* University of Tübingen (Master's Machine Learning)
* University of Darmstadt (Master's Computer Science with ML focus)

This selection of Master’s programs is not an ultimate list of “good” master's programs – we just happened to know the authors of the articles. If you want to add an article about any ML-related program anywhere in the world, don’t hesitate to contact us and we will add it to the list. We also don't claim that this is a complete overview of the respective programs and want to emphasize that this does not necessarily reflect the perception of all students within that program.
Authors: Marius (lead organizer), Leon (lead organizer), Marco, Xander, David, Lauro, Jérémy, Ago, Richard, James, Javier, Charlie,  Magdalena, Kyle. 
[SEQ RERUN] Horrible LHC Inconsistency Today's post, Horrible LHC Inconsistency was originally published on 22 September 2008. A summary (taken from the LW wiki):   > An illustration of inconsistent probability assignments. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was How Many LHC Failures Is Too Many?, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
When can I be numerate? I recently started working in a setting making physical products, where I discovered that I had a whole nother section of my brain purposed for physical interactions between things and when things might fall on other things or melt or fall apart or shatter or scar or any number of other things. Unfortunately, because of my inexperience, this is a part of my brain I had to be prompted to use. I had to be asked directly: "What do you think will happen as a result of that?" in order to actually use that part of my brain. I can't help but feel that the same thing must be going on with my math ability. I want to apply my math knowledge to the world so I can actually get some use out of it, but I have no idea how to go about doing that. Wat do?
Who is your favorite rationalist? Light reading about 'Rationalist Heroes'. I am not sure how useful people find having personal heroes. I would argue that they are definitely useful for children. Perhaps I haven't really grown up enough yet (growing up without a father possibly contributed), but I like to have some people in my head I label as "I wonder what would X think about this". Many times they've set me straight through their ideas. Other times I've had to reprimand them, though unfortunately they never get the memo. One living example is Charlie Munger. He was an early practical adopter of the cognitive biases framework, and moreover he clearly put it into context of "something to protect": "not understanding human misjudgment was reducing my ability to help everything I loved" (The quote is from his talk on "Misjudgment" which is worth reading on its own http://vinvesting.com/docs/munger/human_misjudgement.html) One interesting point is that Charlie is seemingly a Christian. I have a deep suspicion that he believes that religion is valuable, for the time, as a payload delivering mechanism. “Economic systems work better when there’s an extreme reliability ethos. And the traditional way to get a reliability ethos, at least in past generations in America, was through religion. The religions instilled guilt. … And this guilt, derived from religion, has been a huge driver of a reliability ethos, which has been very helpful to economic outcomes for man.” Also, judge for yourself from his recommended reading list - looks like something out of an Atheist's Bookshelf. Charlie Munger's reading recommendations There might also be other reasons, family or whatever, that help prop up the religious appearance. I myself am still wearing a yarmulke for this category of reasons. Whatever it is, Munger is no trinity worshiper. 
Another interesting thing is that it is clear today's Berkshire Hathaway was Buffett and Munger's joint venture, and most likely would not succeed in the same way without Mu
AI Impacts Quarterly Newsletter, Apr-Jun 2023

*Every quarter, we have a newsletter with updates on what’s happening at AI Impacts, with an emphasis on what we’ve been working on. You can see past newsletters* [*here*](https://blog.aiimpacts.org/t/newsletter) *and subscribe to receive more newsletters and other blogposts* [*here*](https://blog.aiimpacts.org/subscribe)*.*

During the past quarter, Katja wrote an article in TIME, we created and updated several wiki pages and blog posts, and we began working on several new research projects that are in progress.

We’re running a [reader survey](https://forms.gle/KU1hhqEfkEVadmk19), which takes 2-5 minutes to complete. We appreciate your feedback!

If you’d like to donate to AI Impacts, you can do so [here](https://aiimpacts.org/donate/). Thank you!

News
====

### Katja Grace’s TIME article

In May, TIME published Katja’s article “[AI Is Not an Arms Race](https://time.com/6283609/artificial-intelligence-race-existential-threat/).” People sometimes say that the situation with AI is an arms race that rewards speeding forward to develop AI before anyone else. Katja argues that this is likely not the situation, and that if it is, we should try to get out of it.

### References to AI Impacts Research

The [2022 Expert Survey on Progress in AI](https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/) was referenced in [an article in The Economist](https://www.economist.com/science-and-technology/2023/04/19/how-generative-models-could-go-wrong), a [New York Times op-ed by Yuval Noah Harari](https://www.nytimes.com/2023/03/24/opinion/yuval-harari-ai-chatgpt.html), a [Politico op-ed](https://www.politico.com/news/magazine/2023/05/08/manhattan-project-for-ai-safety-00095779) that argues for a “Manhattan Project” for AI Safety, and a [report from Epoch AI’s Matthew Barnett and Tamay Besiroglu](https://epochai.org/blog/the-direct-approach) about a method for forecasting the performance of AI models.
The Japanese news agency Kyodo News published an [article](https://english.kyodonews.net/news/2023/04/c85905aa447b-focus-do-ai-bots-like-chatgpt-threaten-humanity-if-left-unchecked.html) about AI risks that referenced [Katja’s talk at EA Global](https://www.youtube.com/watch?v=j5Lu01pEDWA) from earlier this year. We also maintain an ongoing [list of citations of AI Impacts work](https://wiki.aiimpacts.org/doku.php?id=selected_citations) that we know of.

Research and writing highlights
===============================

### Views on AI risks

* Rick compiled a list of [quotes from prominent AI researchers and leaders](https://wiki.aiimpacts.org/doku.php?id=arguments_for_ai_risk:views_of_ai_developers_on_risk_from_ai) about their views on AI risks.
* Jeffrey compiled a list of [quantitative estimates about the likelihood of AI risks](https://wiki.aiimpacts.org/doku.php?id=arguments_for_ai_risk:quantitative_estimates_of_ai_risk) from people working in AI safety.
* Zach compiled a list of [surveys that ask AI experts or AI safety/governance experts](https://wiki.aiimpacts.org/doku.php?id=uncategorized:ai_risk_surveys) for their views on AI risks.

![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/Bmjucdecv3p5smNC8/tcjuejxiemx0gxdzq6hj)

*Average given likelihoods of how good or bad human-level AI will be, from 559 machine learning experts in the* [*2022 Expert Survey on Progress in AI*](https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/)*.*

### The supply chain of AI development

* Harlan wrote a page outlining some [factors that affect the price of AI hardware](https://wiki.aiimpacts.org/doku.php?id=ai_timelines:hardware_and_ai_timelines:factors_that_affect_gpu_price)
* Jeffrey wrote a [blogpost](https://blog.aiimpacts.org/p/horizontal-and-vertical-integration) arguing that slowing AI is easiest if AI companies are horizontally integrated, but not vertically integrated.
### Ideas for internal and public policy about AI

* Zach compiled reading lists of research and discussion about [safety-related ideas that AI labs could implement](https://www.lesswrong.com/posts/GCMMPTCmGagcP2Bhd/ideas-for-ai-labs-reading-list#Desiderata) and [ideas for public policy related to AI.](https://www.lesswrong.com/posts/NfqqsHqembNEsTrSr/ai-policy-ideas-reading-list)
* Zach also compiled a list of [statements that AI labs have made about public policy](https://wiki.aiimpacts.org/doku.php?id=uncategorized:ai_labs_statements_on_governance).

### AI timeline predictions

* Harlan updated the main page for [Predictions of Human-Level AI Timelines](https://wiki.aiimpacts.org/doku.php?id=ai_timelines:predictions_of_human-level_ai_timelines) and made a page about [AI timeline prediction markets](https://wiki.aiimpacts.org/doku.php?id=ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_prediction_markets).
* Zach and Harlan updated a list of [AI Timeline surveys](https://wiki.aiimpacts.org/doku.php?id=ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:ai_timeline_surveys) with data from recent surveys.

![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/Bmjucdecv3p5smNC8/qmlgrxwtcukvkmyvcy0g)

*Median predicted year of given probabilities of Human-Level AI from* [*surveys over the years*](https://wiki.aiimpacts.org/doku.php?id=ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:ai_timeline_surveys)*.*

### Slowing AI

* Zach compiled a [reading list](https://www.lesswrong.com/s/xMdkfEJhDNCL2KweB/p/d8WpJbjhn2Yi6kfmM) and an outline of [some strategic considerations](https://www.lesswrong.com/s/xMdkfEJhDNCL2KweB/p/MoLLqFtMup39PCsaG) related to slowing AI progress.
* Jeffrey wrote a [blogpost](https://blog.aiimpacts.org/p/the-broader-fossil-fuel-community) arguing that people should give greater consideration to visions of the future that don’t involve AGI.
### Miscellany

* Jeffrey wrote a [blogpost](https://blog.aiimpacts.org/p/a-tai-which-kills-all-humans-might) arguing that AI systems are currently too reliant on human-supported infrastructure to easily cause human extinction without putting the AI system at risk.
* Harlan, Jeffrey, and Rick submitted responses to the National Telecommunication and Information Administration’s AI accountability policy [request for comment](https://www.ntia.gov/issues/artificial-intelligence/request-for-comments) and the Office of Science and Technology Policy’s [request for information](https://www.whitehouse.gov/wp-content/uploads/2023/05/OSTP-Request-for-Information-National-Priorities-for-Artificial-Intelligence.pdf).

Ongoing projects
================

* Katja and Zach are preparing to publish a report about the [2022 Expert Survey on Progress in AI](https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/), with further analysis of the results and details about the methodology.
* Jeffrey is working on a case study of Institutional Review Boards in medical research.
* Harlan is working on a case study of voluntary environmental standards.
* Zach is working on a project that explores ways of evaluating AI labs for the safety of their practices in developing and deploying AI.

Funding
=======

We are still seeking funding for 2023 and 2024. If you want to talk to us about why we should be funded or hear more details about our plans, please write to Elizabeth, Rick, or Katja at [firstname]@aiimpacts.org. If you'd like to donate to AI Impacts, you can do so [here](https://aiimpacts.org/donate/). (And we thank you!)

Reader survey
=============

We are running a reader survey in the hopes of getting useful feedback about our work. If you’re reading this and would like to spend 2-5 minutes filling out the reader survey, you can find it [here](https://docs.google.com/forms/d/e/1FAIpQLSemlCrQC59D1g5s3Sz6pgLqrkkIpF8zcsZDqJ08e93y97K1Qw/viewform). Thank you!
President of European Commission expects human-level AI by 2026 On May 20, during her speech at the Annual EU Budget Conference 2025, Ursula von der Leyen, President of the European Commission, stated: > When the current budget was negotiated, we thought AI would only approach human reasoning around 2050. Now we expect this to happen already next year. It is simply impossible to determine today where innovation will lead us by the end of the next budgetary cycle. Our budget of tomorrow will need to respond fast. This is remarkable coming from the highest-ranking EU official. It suggests the Overton window for AI policy has shifted significantly.
A Multidisciplinary Approach to Alignment (MATA) and Archetypal Transfer Learning (ATL)

Abstract

Multidisciplinary Approach to Alignment (MATA) and Archetypal Transfer Learning (ATL) propose a novel approach to the AI alignment problem by integrating perspectives from multiple fields and challenging the conventional reliance on reward systems. This method aims to minimize human bias, incorporate insights from diverse scientific disciplines, and address the influence of noise in training data. By utilizing 'robust concepts’ encoded into a dataset, ATL seeks to reduce discrepancies between AI systems' universal and basic objectives, facilitating inner alignment, outer alignment, and corrigibility. Although promising, the ATL methodology invites criticism and commentary from the wider AI alignment community to expose potential blind spots and enhance its development.

Intro

Addressing the alignment problem from various angles poses significant challenges, but to develop a method that truly works, it is essential to consider how the alignment solution can integrate with other disciplines of thought. With this in mind, I accept that the only route to finding a potential solution requires a multidisciplinary approach drawing on various fields, not only alignment theory. Looking at the alignment problem through the MATA lens makes it more navigable when experts from various disciplines come together to brainstorm a solution.

Archetypal Transfer Learning (ATL) is one of two concepts[1] that originated from MATA. ATL challenges the conventional focus on reward systems when seeking alignment solutions. Instead, it proposes that we should direct our attention towards a common feature shared by humans and AI: our ability to understand patterns. In contrast to existing alignment theories, ATL shifts the emphasis from solely relying on rewards to leveraging the power of pattern recognition in achieving alignment.
ATL is a method that stems from three issues I have identified in alignment theories utilized in the realm of Large Langua
0bbc889e-1226-45f4-82c0-1c4df84c74ae
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Large Language Models Suggest a Path to Ems TL:DR: * Whole brain emulation from first principles isn't happening * LLMs are pretty good/human like + This suggests neural nets are flexible enough to implement human like cognition despite alien architecture used * Training a (mostly) human shaped neural net on brain implant sourced data would be the fastest way to aligned human level AGI. TL:DR end supposition: intelligence learning scale of difficulty: 1. from scratch in a simulated world via RL(reinforcement learning) ([AlphaGo Zero](https://en.wikipedia.org/wiki/AlphaGo_Zero), [XLand](https://www.deepmind.com/blog/generally-capable-agents-emerge-from-open-ended-play), [Reward is Enough](https://www.deepmind.com/publications/reward-is-enough)) 2. from humans via imitation learning (LLMs, [RL pretrained on human play(Minecraft)](https://openai.com/blog/vpt/), human children) * plus a bit of RL to continuously learn required context to interpret training data and to fine tune after the fact (EG:self play to reach superhuman levels) 3. Distillation (learn from teacher in white box model) * IE: look inside teacher model during training Distillation in a nutshell: * why train a model from another model? * to make it cheaper to run (teach a smaller model to imitate a bigger one) + [EG: Stable Diffusion model with half the layers doing 4x the work per pass](https://twitter.com/EMostaque/status/1598131202044866560) * teacher is not a black box, internal state of teacher is used: + teacher attention + patterns of activation (feature based distillation) + output distribution (AKA "soft answers": a (candidate answer, probability) list for a given input) + if model has similar architecture just train for similarity of intermediate activations + etc. * this requires: + perfect information about the teacher model (or some parts thereof) + ability to run complex linear algebra on internal states and weights - differentiation, attribution, finding teacher/student state projections etc.
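The "soft answers" signal described above can be sketched as a KL-divergence loss between temperature-softened teacher and student distributions. This is a minimal stdlib-only sketch, not any particular framework's API; the logits, temperature, and function names are illustrative, and real recipes typically scale the loss by T² and mix in a hard-label term.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T gives "softer" answers,
    # spreading probability mass over the non-top candidates.
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(teacher_logits, student_logits, T=2.0):
    # KL(teacher || student) on the softened distributions --
    # the student is rewarded for matching the teacher's full
    # (candidate, probability) list, not just its top answer.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [3.0, 1.0, 0.2]   # hypothetical logits for one input
student = [2.5, 1.2, 0.3]
print(distill_loss(teacher, student))  # small positive; 0 iff distributions match
```

Matching softened distributions gives the student a gradient signal on every class, which is why soft answers carry more information than hard labels.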
supposition: why this works * intermediate states give insight into teacher model sub-computations + student only has to learn sub-computations not derive everything from scratch + student model can obviously be smaller because they're doing less exploration of the compute space - <https://www.lesswrong.com/posts/iNaLHBaqh3mL45aH8/magna-alta-doctrina#Who_art_thou_calling_a_Local_optimizer> * soft answers give a little more insight into sub-computations near the end of the model, giving better gradients at all layers of the network. My pitch: Train AI from humans via brain implant generated data. * Ideally: + crack open volunteer skulls like eggshells + apply reading electrodes liberally * Other output channels include: + gaze tracking + fMRI (doesn't scale well but has already yielded some connection topology data) + EEG (the bare minimum). * Tradeoff of (invasiveness/medical risk) vs. (data rate, granularity) + likely too much: 100,000 Neuralink fibers/subject + likely too little: EEG skull caps + better off paying for more people without EEG caps * nothing gives complete access to all neurons but much better than black box * progress on the visual system was made via poking individual neurons * [fMRI data seems granular enough to do some mind-reading](https://www.biorxiv.org/content/10.1101/2022.09.29.509744v1.full) * this is very encouraging * good middle ground might be non/minimally brain penetrating electrodes How this might work: * figure out brain topology * imitate with connected neural net layers as modules * add some globally connected modules with varying feedback time scales Problems: * some of the interesting stuff is under other stuff * some of the stuff is dynamic (EG:memory, learning) * but whatever happens there will be lots of data to help figure things out * even a human level model lacking long term memory/learning is very useful Alignment? ---------- Obviously get training data from kind non-psychopaths.
If the resulting models have similar behavior to the humans they're modeled from, they're going to be aligned because they're basically copies. Problems arise if the resulting copies are damaged in some way, especially if that leads to loss of empathy, insanity or something. Reasons for Hope ---------------- RL and other pure AI approaches seem to require a lot of tinkering to get working. Domain specific agents designed around the task can do pretty well but AGI is elusive. They do well when training data can be generated through self play (Chess, Go, DOTA) or where human world models don't apply ([AlphaFold](https://alphafold.com/about) (protein folding), [FermiNet](https://www.deepmind.com/blog/ferminet-quantum-physics-and-chemistry-from-first-principles) (quantum mechanics)), and the AI is beating the best human designed algorithms, but pure RL approaches have to learn a world model and a policy from scratch. Human data based approaches get both from the training dataset. This is an after-the-fact rationalisation of why LLMs have had so much success. As a real life example, RL approaches have a ways to go before they can do things like solve real world action planning problems, which LLMs get more or less for free ([palm-saycan](https://sites.research.google/palm-saycan)).
Additionally, the human brain uses feedback connections extensively. Distilling an existing human brain from lots of intermediate neuron-level data should stabilize things, but from-scratch training of a similar network won't work using modern methods. The bad scenario is if data collected allows reverse engineering a general purpose learning algorithm. Maybe the early childhood info isn't that important. Someone then clones a GitHub repo, adds some zeroes to the config, and three months into the training run the world ends. Neuralink has some data from live animal brains collected over month-long timespans. Despite this, the world hasn't ended, which lowers my belief that this sort of data is dangerous. They also might be politely sitting on something very dangerous. If capabilities don't transfer, there's plausibly some buffer time between AGI and ASI. Scalable human level AGI based on smart, technically capable non-psychopaths is a very powerful tool that might be enough to solve big problems. There's a number of plausible okayish outcomes that could see us not all dying! That's a win in my book. Isn't this just ems? -------------------- Yes, yes it is ... probably. In an ideal world maybe we wouldn't do this? On the one hand, human imitation based AGIs are in my moral circle and abusing them is bad, but some humans are workaholics voluntarily and/or don't have no-messing-with-my-brain terminal values, such that they would approve of becoming aligned workaholics or contributing to an aligned workaholic gestalt, especially in the pursuit of altruistic goals. Outcomes where ems don't end up in virtual hell are acceptable to me, or at least better than the alternative (paperclips). Best case scenario, em (AI/conventional) programmers automate the boring stuff so no morally valuable entities have to do boring mindless work. There are likely paths where high skill level ems won't be forced to do things they hate. The status quo is worse in some ways. Lots of people hate their jobs.
Won't a country with no ethics boards do this first? ---------------------------------------------------- They might have problems trusting the resulting ems. You could say they'd have a bit of an AGI alignment problem. Also democratic countries have an oligopoly on leading edge chip manufacturing so large scale deployment might be tough. Another country could also steal some em copies and offer better working conditions. There's a book to be written in all of this somewhere. Practical next steps (no brain implants) ---------------------------------------- * Develop distillation methods that rely on degraded information about teacher model internal states + Concretely: develop a distillation method using a noisy randomly initialised lower-dimensional projection of teacher internal state + Concretely: develop "mind reading" for the teacher model using similar noisy data * Develop methods to convert between recurrent and single pass networks + this could have practical applications (EG: make LLMs cheaper to run by using a recurrent model trained to imitate internal states in the LLM, thus re-using intermediate results) + really this is more to show that training that normally blows up (large recurrent nets or nets with feedback) can be stabilized by having internal state from another model available to give local objectives to the training process. If this can be shown to work and work efficiently, starting the real world biotech side becomes more promising.
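The first practical next step above, a noisy randomly initialised lower-dimensional projection of teacher internal state, can be sketched as follows. This is a hypothetical stdlib-only toy: the state vector, dimensions, and noise level are invented, and a real implementation would operate on whole activation tensors rather than a short list of floats.

```python
import random

def noisy_projection(state, out_dim, noise_std=0.1, seed=0):
    # Degraded observation channel for a teacher's internal state:
    # a fixed, randomly initialised linear map down to out_dim
    # dimensions, plus Gaussian read noise on each output.
    rng = random.Random(seed)
    # Random projection matrix (out_dim x len(state)), scaled so
    # output magnitudes stay roughly comparable to the input's.
    W = [[rng.gauss(0, 1 / len(state) ** 0.5) for _ in state]
         for _ in range(out_dim)]
    return [sum(w * s for w, s in zip(row, state)) + rng.gauss(0, noise_std)
            for row in W]

teacher_state = [0.3, -1.2, 0.7, 0.05]  # hypothetical hidden activations
obs = noisy_projection(teacher_state, out_dim=2)
print(len(obs))  # 2
```

Fixing the seed makes the projection a stable (if lossy) view of the teacher, which is what a student model would train against.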
6a4d0722-3dfd-49ce-946c-754c4c0a06fd
StampyAI/alignment-research-dataset/lesswrong
LessWrong
your terminal values are complex and not objective
a lot of people seem to want [terminal](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value) (aka intrinsic aka axiomatic) [values](https://carado.moe/what-is-value.html) (aka ethics aka morality aka preferences aka goals) to be *simple and elegant*, and to be [*objective and canonical*](https://en.wikipedia.org/wiki/Moral_realism). this carries over from epistemology, where [we do favor simplicity and elegance](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor). [we have uncertainty about our values](https://carado.moe/what-is-value.html), and it is true that *our model of* our values should, as per epistemology, generally tend to follow [a simplicity prior](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor). but that doesn't mean that *our values themselves* are simple; they're evidently complex enough that just thinking about them a little bit should make you realize that they're much more complex than the kind of simple model people often come up with. both for modeling the world and for modeling your values, you should favor simplicity *as a prior* and then *update by filtering for hypotheses that match evidence*, because *the actual territory is big and complex*. there is no objectively correct universal metaethics. there's just a large, complex, tangled mess of stuff that is [hard to categorize](https://carado.moe/guess-intrinsic-values.html) and contains not just *human notions* but also *culturally local notions* of love, happiness, culture, freedom, friendship, art, comfort, diversity, etc. and yes, these are **terminal** values; there is no simple process that re-derives those values. i believe that **there is no thing for which i instrumentally value love or art: nothing such that, if you presented me something else that does that thing better, i would happily give up on love/art.
i value those things *intrinsically*.** if you talk of "a giant cosmopolitan value handshake between everyone", then picking that rather than [paperclips](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer), while intuitive to *you* (because *you have your values*) and even to other humans, doesn't particularly track anything universally canonical. even within the set of people who claim to have cosmopolitan values, *how conflicts are resolved* and *what "everyone" means* and many other implementation details of cosmopolitanism will differ from person to person, and again *there is no canonical unique choice*. your notion of cosmopolitanism is a very complex object, laden with not just human concepts but also cultural concepts you've been exposed to, which many other humans don't share across both time and space. there is no "metaethics ladder" you can climb up in order to resolve this in an objective way for everyone, not even all humans — *what ladder* and *how you climb it* is still a complex subjective object laden with human concepts and concepts from your culture, and there is no such thing as a "pure" you or a "pure" person without those. some people say "simply detect all agents in the cosmos and do a giant value handshake between those"; but on top of the previous problems for implementation details, this has the added issue that the things whose values we want to be satisfied aren't *agents* but *[moral patients](https://www.lesswrong.com/posts/kQLw5hFDSMuS5SFGg/carado-s-shortform?commentId=X82HzJcdEKRHk3RyC)*. those don't necessarily match — superintelligent [grabby](https://grabbyaliens.com/) agents shouldn't get undue amounts of power in the value handshake. some people see the simplicity of paperclips as the problem, and declare that complexity or negentropy or something like that is the *ultimate good*.
but [a superintelligence maximizing for that](https://carado.moe/core-vals-exist-selfdet.html) would just fill the universe with maximally random noise, as opposed to preserving the things you like. turns out, ["i want whatever is complex" is not sufficient to get our values](https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence); they're not just *anything complex* or *complexity itself*, they're an *extremely specific* complex set of things, as opposed to *other* equally complex sets of things. entropy just doesn't have much to do with terminal values whatsoever. sure, it has a lot to do with *instrumental* values: negentropy is the resource we have to allocate to the various things we want. but that's secondary to *what it is we want to begin with*. as for myself, i love cosmopolitanism! i would like an [egalitarian utopia where everyone has freedom and my personal lifestyle preferences aren't particularly imposed on anyone else](https://carado.moe/%E2%88%80V.html). but make no mistake: this cosmopolitanism *is my very specific view of it*, and other people have different views of cosmopolitanism, when they're even cosmopolitan at all. see also: * [Value is Fragile](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile) * [surprise! you want what you want](https://carado.moe/surprise-you-want.html) * [generalized wireheading](https://www.lesswrong.com/posts/z3cNdBSmNXEZLzeqT/generalized-wireheading) * ["humans aren't aligned" and "human values are incoherent"](https://carado.moe/human-values-unaligned-incoherent.html) * [CEV can be coherent enough](https://carado.moe/cev-coherent-enough.html)
29777ec8-830c-44ab-8e10-79e2b9656a25
StampyAI/alignment-research-dataset/special_docs
Other
Bayesian Computational Models for Inferring Preferences
by Owain Rhys Evans
B.A., Columbia University in the City of New York (2008)
Submitted to the Department of Philosophy on 26 August 2015, in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology, September 2015.
© 2015 Massachusetts Institute of Technology. All rights reserved.
Certified by: Roger White, Professor of Philosophy, Thesis Supervisor.
Accepted by: Byrne, Professor of Philosophy, Philosophy Chair and Chair of the Committee on Graduate Students.

Abstract
This thesis is about learning the preferences of humans from observations of their choices. It builds on work in economics and decision theory (e.g. utility theory, revealed preference, utilities over bundles), Machine Learning (inverse reinforcement learning), and cognitive science (theory of mind and inverse planning). Chapter 1 lays the conceptual groundwork for the thesis and introduces key challenges for learning preferences that motivate chapters 2 and 3. I adopt a technical definition of 'preference' that is appropriate for inferring preferences from choices. I consider what class of objects preferences should be defined over. I discuss the distinction between actual preferences and informed preferences and the distinction between basic/intrinsic and derived/instrumental preferences.
Chapter 2 focuses on the challenge of human 'suboptimality'. A person's choices are a function of their beliefs and plans, as well as their preferences. If they have inaccurate beliefs or make inefficient plans, then it will generally be more difficult to infer their preferences from choices. It is also more difficult if some of their beliefs might be inaccurate and some of their plans might be inefficient. I develop models for learning the preferences of agents subject to false beliefs and to time inconsistency. I use probabilistic programming to provide a concise, extendable implementation of preference inference for suboptimal agents. Agents performing suboptimal sequential planning are represented as functional programs. Chapter 3 considers how preferences vary under different combinations (or 'compositions') of outcomes. I use simple mathematical functional forms to model composition. These forms are standard in microeconomics, where the outcomes in question are quantities of goods or services. These goods may provide the same purpose (and be substitutes for one another). Alternatively, they may combine together to perform some useful function (as with complements). I implement Bayesian inference for learning the preferences of agents making choices between different combinations of goods. I compare this procedure to empirical data for two different applications.
Thesis supervisor: Roger White, Professor of Philosophy, Massachusetts Institute of Technology.

Collaborations
Chapter 2 is based on joint work in collaboration with Andreas Stuhlmüller and Daniel Hawthorne. Chapter 3 is based on two projects. One (which I acknowledge in the text) was in collaboration with Leon Bergen and Josh Tenenbaum. The other (on the Knobe effect) was in collaboration with Josh Tenenbaum. All mistakes are my own.
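The substitute/complement functional forms that Chapter 3 refers to are standard microeconomic utility functions. A toy sketch (all quantities hypothetical, not the thesis's implementation): perfect substitutes, perfect complements, and a Cobb-Douglas intermediate case.

```python
def substitutes(x, y):
    # Perfect substitutes: only the total quantity matters.
    return x + y

def complements(x, y):
    # Perfect complements: the scarcer good limits utility.
    return min(x, y)

def cobb_douglas(x, y, a=0.5):
    # Intermediate case: balanced bundles beat lopsided ones.
    return x ** a * y ** (1 - a)

# A lopsided bundle vs a balanced one with the same total quantity:
print(substitutes(4, 0), substitutes(2, 2))    # 4 4 -- indifferent
print(complements(4, 0), complements(2, 2))    # 0 2 -- balance preferred
print(cobb_douglas(4, 0), cobb_douglas(2, 2))  # lopsided gives 0.0; balance preferred
```

Inferring which form (and which exponent a) best explains observed choices is exactly the kind of Bayesian inference the chapter describes.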
Long Abstract
This thesis is about learning the preferences of humans by observing their choices. These choices might be in the context of psychology experiments in a lab, where the decision problems that people face are simple and controlled. They could be choices in some real-world task, such as navigating a city or building a website. Preferences inferred from observed choices are sometimes called 'revealed preferences' in economics. A related idea appears in the field of Machine Learning, under the label 'Inverse Reinforcement Learning' (Ng and Russell 2000), and also in Cognitive Science, where it's called 'Inverse-Planning' (Baker and Tenenbaum 2014). Inferring things from observed choices is not the only way to learn about someone's preferences. It's possible that in daily life we learn more about preferences from verbal testimony than from observed choices. Nevertheless, this thesis will only consider inference of preferences from choices[1]. I treat the problem of learning preferences from choices as an inductive, statistical inference problem. One goal of this work is to develop algorithms for automatically learning an agent's preferences from data that records their choices. However, my principal concern in this thesis is with questions that are prior to building such algorithms. Two of these major questions, which are explored in Chapters 2 and 3, are the following: (1) What is the causal and evidential connection between people's preferences and their choices? If people do not always choose what they prefer, what kind of evidence do their choices provide about their preferences? Can we learn about their preferences even if people don't reliably choose based on them?
[Footnote 1: One can construe verbal testimony as a choice (much like construing speech as an 'action') and treat it in the framework I develop in this thesis. This is beyond the scope of this work.]
(2) How should human preferences be formally represented so as to facilitate inference from choices and to capture the full range and structure of human preferences? Chapter 1 lays the conceptual groundwork for the rest of the thesis and introduces the key challenges for learning preferences that motivate chapters 2 and 3. Chapter 1 begins with a discussion of the concept of 'preference' itself. My focus is on what I call 'economic preferences', which are defined as 'total subjective comparative evaluations' of world histories[2]. This definition aims to capture the notion of preference used in economics and decision theory. Since economic preferences are total evaluations, the question of whether A is preferred to B depends on a comparison across all evaluative dimensions that are relevant to the agent. So it's not just a question of whether the agent likes A more than B. It might also depend on whether A is morally better than B, whether A satisfies an obligation or commitment (e.g. a promise or agreement) that B does not, and so on for other evaluative dimensions. Having explained the notion of economic preferences, I discuss two distinctions that are important in the rest of this thesis. First, there is the distinction from the theory of well-being between actual preferences and informed or enlightened preferences (Railton 1986, Sobel 1994). Actual preferences have a closer link to observed choices but a weaker link to what is good for the agent. While much more difficult to infer from choices, I argue that enlightened preferences should be the ultimate target of the project of preference inference. The second distinction is between basic preferences and instrumental or derived preferences. Some world states are preferred for their intrinsic or immediate properties (this is a basic preference), while others are preferred only because they cause the former kind of state (a derived preference).
It is often difficult to infer from observed choices whether a preference is derived or basic. [Footnote 2: This analysis of economic preferences is taken wholesale from Hausman (2012).] Yet basic preferences will provide more concise and robust explanations of behavior. They should generally be the aim of preference inference. Having made some conceptual distinctions about preferences, I explain why the problem of learning preferences is important. I consider its relevance to the social sciences, to normative ethics and value theory, and to cognitive science. The final section of Chapter 1 introduces two general challenges for learning preferences from observations of human choices. The first challenge is the problem of human 'suboptimality'. An idealized agent has complete knowledge of the world[3] and computes optimal plans to realize its preferences. The problem of inferring the preferences of such an agent has been studied in economics and Machine Learning. In real-world situations, humans are not omniscient and are not optimal in their planning. So existing formal techniques for inferring preferences don't straightforwardly carry over to humans. In Chapter 2, I develop models for learning the preferences of agents with false beliefs and for agents subject to time inconsistency[4]. These models apply to sequential decision problems. I make use of recent advances in probabilistic programming to provide a concise, easily extendable implementation of preference inference for suboptimal agents. The second challenge is the problem of finding formal representations for preferences that facilitate learning and are able to capture (as far as possible) the richness and complexity of human preferences. Human preferences seem to be 'sparse'. That is, most macro-level properties are irrelevant to whether people prefer an outcome. Few people care, for example, whether the number of blades of grass in their lawn is odd or even.
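The flavor of this kind of inference can be sketched in a few lines: posit a noisily rational (softmax) choice rule and update a posterior over candidate utility functions from observed choices. This is a toy enumeration, not the thesis's probabilistic-programming implementation; the softmax rule, the two hypotheses, and the rationality parameter beta are illustrative assumptions.

```python
import math

def choice_prob(utilities, choice, beta=2.0):
    # Softmax ("noisily rational") choice rule: the agent usually,
    # but not always, picks the option it prefers.
    exps = [math.exp(beta * u) for u in utilities]
    return exps[choice] / sum(exps)

def posterior(hypotheses, prior, observed_choices):
    # Bayesian update over candidate utility vectors, one observed
    # choice at a time, then normalize.
    post = list(prior)
    for c in observed_choices:
        post = [p * choice_prob(h, c) for p, h in zip(post, hypotheses)]
    z = sum(post)
    return [p / z for p in post]

# Toy example: two options; does the agent prefer option 0 or option 1?
hyps = [[1.0, 0.0], [0.0, 1.0]]   # "prefers 0" vs "prefers 1"
prior = [0.5, 0.5]
print(posterior(hyps, prior, [0, 0, 1, 0]))  # mass shifts toward "prefers 0"
```

Because the choice rule is noisy rather than deterministic, the single deviant choice of option 1 weakens but does not destroy the evidence that the agent prefers option 0.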
It seems that preference learning would be easier and more efficient, if we could make some general assumptions about which properties are likely to matter for preferences. A related question is how preferences vary under different combinations (or 'compositions') of properties. Suppose someone values a number of distinct and independent goods (e.g. friendship, art, community, hedonic pleasures). It's plausible that while these goods are independent, they are not perfectly substitutable. That is, having a large quantity of just one of these goods would not be preferred to having a more equitable balance[5]. [Footnote 3: When I write 'complete knowledge', I always mean 'complete knowledge up to non-deterministic features of the world that are not possible to know'.] [Footnote 4: Time inconsistency has been used to explain behaviors like procrastination, temptation, and addiction (Ainslie 2001).] In Chapter 3, I explore representations of preferences that use simple mathematical functions to model how different properties compose. These functions are standard utility functions in microeconomics, where the properties in question are quantities of goods or services that a consumer might purchase. These goods may serve the same purpose (and so be substitutes for one another), they may combine together to perform some useful function (as with complements), or they may be entirely independent. I describe and implement a Bayesian inference procedure for inferring the preferences of agents making choices between different combinations of goods. I compare this procedure to empirical data for two different applications. [Footnote 5: I use the term 'goods'. This might sound like a discussion of 'objective goods' for an individual, rather than a discussion of the structure of someone's preferences. However, I'm just using 'goods' as a shorthand for 'a property or quality of a future state of affairs that someone prefers/values'.]
Chapter 1: Key Concepts and Challenges for Learning Preferences
1.
Concepts and Distinctions for Preference Learning
1.1. Preferences as total subjective comparative evaluations
Throughout this thesis I do not use 'preference' in its everyday sense. Instead I use the notion of 'preference' found often in economic theory or decision theory[6]. This notion is close to the everyday notion but differs in important ways. Hausman (2012) argues that preferences in both economic and everyday senses are 'subjective comparative evaluations' of states or outcomes. These evaluations involve multiple outcomes (in contrast to desires) and they involve multiple dimensions of evaluation. The key difference is that the economic notion of preference involves all dimensions of evaluation that the subject finds relevant. The everyday notion, in contrast, includes some but not all dimensions. So on Hausman's terminology, economic preferences are 'total subjective comparative evaluations', while everyday preferences are 'overall subjective comparative evaluations'[7]. The everyday notion of preference includes someone's 'likes', their predilections, and their aesthetic evaluations. It typically excludes moral considerations, obligations, and commitments. [Footnote 6: I'm not suggesting that work in economic theory or decision theory always uses this notion of 'preference'. Some work uses this notion but it's often hard to tell exactly which notion is being used. See Hausman's discussion of this in his (2012).] [Footnote 7: Everyday preferences are 'overall evaluations' because they involve comparing outcomes across a number of dimensions and then coming to an overall, holistic judgment. For example, a preference for one restaurant over another will take into account the food, the cost, the décor, the service, and so on.] Someone might prefer (in the everyday sense) to drink wine at dinner but abstain because they promised their partner to cut down or because they agreed to drive people home. In this case their economic preference is not to drink wine (Hausman 2012).
Economic preferences consider all relevant dimensions and weigh them against one another to reach a total evaluation. Why work with economic preferences instead of everyday preferences? Here are two major motivations: 1. The economic notion of preference has a tighter connection to choice than the everyday notion. In domains where moral considerations and commitments are important, people's preferences (in the everyday sense) won't much influence their choices. The tighter connection between preference and choice makes prediction of choices from preferences more reliable. I discuss the importance of predicting choices below when explaining the relevance of preference learning to the social sciences and cognitive science. 2. The economic notion of preferences is more relevant to the task of helping someone by providing normative advice or normative recommendations. If you only learn about someone's preferences in the everyday sense, you won't provide useful advice in situations where the moral stakes are high. By definition, economic preferences include all dimensions of evaluation relevant to the agent. If you learn someone's economic preferences, you can provide advice that includes everything the subject deems relevant to their making a decision. For this reason, someone's economic preferences are similar to their 'values' or to 'everything they consider important'. The difference is that economic preferences encompass not only the list of evaluative dimensions that are important to someone but also the precise way in which these dimensions are weighted or traded-off in different contexts[8]. Using a notion of preferences that combines or 'collapses' different evaluative dimensions does not imply that distinctions among these dimensions are unimportant. On the contrary, it's plausible that there are important philosophical and psychological differences between moral constraints or commitments and someone's personal tastes or predilections. [Footnote 8: Advice based on economic preferences may not be the best advice from an objective moral standpoint. For one, some individuals may give no weight to moral considerations in their preferences.] However, it can be hard to determine (a) whether someone's choice was influenced by moral considerations or by a personal preference and (b) if both influences were present how those influences interact/compete to result in the choice. In this thesis the focus is on learning preferences from observed choices. Observed choices typically do not provide evidence about whether the evaluations underlying the choice were moral or were due to personal preference or predilection. So while the distinctions between evaluative dimensions are important, it won't be something that the preference learning ideas I explore here can directly address. It's important to note that the economic notion of preference does not collapse all subjective influences on choice. I discuss below the fact that choices are influenced by cognitive biases such as Loss Aversion and hyperbolic discounting. In Chapter 2, I consider ways to learn preferences from observed choices without collapsing these biases into the preferences. Building on the ideas in Chapter 2, future work could try to infer preferences while avoiding collapsing different evaluative dimensions (e.g. avoid collapsing together someone's likes and someone's moral concerns)[9].
1.2. The Objects of Preferences
If economic preferences are 'total evaluations', what are they evaluations of? In everyday speech, we sometimes talk about preferences over types of goods or over types of activities. For example, one might prefer Coke to Pepsi or prefer playing tennis to hiking. We also talk of preferences over tokens of these types. For example, one might prefer to play tennis this afternoon (but not necessarily on other occasions). I discuss preferences over things like goods and activities in Chapter 3.
In this chapter I will not consider preferences over goods or activities (in either the type or token sense). Instead I consider preferences to be over more 'fine-grained' or 'specific' entities, namely, complete world histories (or 'possible worlds').

9 The idea is to make some structural assumptions about tastes and moral constraints and how they influence choice. If there is some difference in how they influence choice, you might be able to distinguish them from observing choices alone. One example is that people will talk differently about tastes and moral constraints. Another is that moral constraints might be more consistent across individuals and across time. People might also be less likely to make mistakes or be subject to biases for some dimensions rather than others, and so on.

To make it clear what I mean by 'preferences over world histories', I will make some terminological distinctions. In this chapter I use the term 'world state' to mean the state of the world at a particular time. I use 'world history' to mean the sequence of all world states, i.e. the set of all facts about a world throughout time as well as space. When rational agents are choosing actions, they evaluate the world histories that would result from taking each possible action. (Note that the world history includes the past and future world states that result from taking an action. In some cases, the agents will only consider the future consequences of an action when evaluating it.10)

The motivation for defining preferences over world histories is that it leads to a simpler connection between choices and preferences. If preferences were over token goods or activities, there would be a more complex relation between preferences and choices. This is because rational agents make choices based on the ensuing complete world history, rather than on the good or activity that ensues in the short-term.
For example, someone might enjoy drinking wine more than water but choose water because of the likely consequences of drinking wine. These consequences could be short-term (e.g. a hangover) or long-term (e.g. acquiring a social reputation as a drinker or adverse long-term health effects).

The assumption that people consider the entire future resulting from each action is unrealistic. People have cognitive/computational limits. Moreover, it's clear that many properties of a world history will be irrelevant to most people's preferences. For example, the choice of saying one word vs. another word will influence the movement of a particular oxygen molecule in the room. But no one would care about this molecule's movements in itself. I will consider each of these issues later in this chapter.

A related question is whether we should consider world states to be the primary objects of preferences, as opposed to world histories. Some would claim that people's preferences on world histories are a conceptually and computationally simple function of their preferences on world states (e.g. a discounted sum of world-state utilities).11 I will not address this claim here. In this chapter I will stick to world histories, which are the more fine-grained object. If it turns out human preferences can be fully characterized in terms of preferences on world states, then this will make little difference to the points I raise here.

10 In the models in Chapter 2, the agents only take into account the value of the future resulting from an action. For some purposes (e.g. maintaining pre-commitments) it might be important to also take into account the past.
11 This claim is made both by economists and computer scientists (Schwarz 2009). People who are hedonists about preferences (i.e. who think that people's only basic or fundamental preferences are for pleasure over pain) might also have this view. If one is modeling preferences in a narrow domain, it might be plausible that this assumption holds. And even if the assumption is not plausible, it might be a modeling assumption that simplifies computation.

1.3. Actual vs. enlightened preferences

Consider the following example: Fred is offered the choice between a cup of ice water and a cup of hot tea. Because it's hot and humid, Fred would like the water. Unbeknownst to Fred, the water is poisoned and would kill Fred if he drank it. There is no poison in the tea.

This kind of example has been used in philosophical discussions of welfare or well-being to illustrate the distinction between 'actual' and 'informed'/'enlightened' preferences. The idea is that Fred's actual preference for the cup of water differs from the preference he'd have if he knew the water was poisoned (his 'informed' preference). According to the notion of preference I've introduced, Fred does not have an actual preference for the cup of water. I define preferences over world histories, rather than objects like 'the cup of water' or the state of drinking the water. The relevant world histories are:

(1) 'feel refreshed (from water), alive'
(2) 'not refreshed (due to drinking tea), alive'
(3) 'feel refreshed, dead from poison'12

Fred prefers the first and second histories to the third one. Unfortunately for Fred, the first (and most preferred) outcome is not a realistic outcome given his choice.

12 I only include the first part of these world histories and not the entire future extending beyond what happens to Fred. We can assume the histories don't differ in important ways after the events in the first part.

On my notion of preference, Fred does not prefer the cup of water. However he does plan to get the water and does believe this will cause a good outcome. This is relevant to the tightness of the connection between preference and choice. Fred chooses the water despite its causing the world history he prefers least. The virtue of my view is that preferences are robust over time.
Fred's preference wouldn't change on learning about the poison. It's plausible that many preferences (e.g. for survival, bodily integrity and basic comforts, or for social interaction) are much more robust over time than beliefs about which choices will help realize these preferences.

I've just argued that if preferences are over world histories, there is no difference between actual and informed/enlightened preferences in the example of Fred. This argument did not depend on Fred being logically omniscient or having any unrealistic computational powers. If we asked Fred about his preferences over outcomes (1)-(3) above, he would rank the outcomes with no difficulty. Moreover, we could read his preferences off his choices if he didn't have false beliefs. So even though Fred's preferences are defined over complex, fine-grained objects (i.e. entire world histories), there is not a practical difficulty here in learning those preferences.

While the actual vs. informed/enlightened distinction isn't applicable in Fred's case, it is still applicable to my account of preferences. It's often hard to say which of two world histories is preferable. This is the case in moral dilemmas or in the kind of examples used to argue for incommensurability or incomparability (Chang 1998). In these examples the difficulty seems to lie in trying to compare different dimensions of value. In other examples where comparison is difficult, the problem is not just multiple dimensions of value but also the cognitive complexity of the comparison.

I will illustrate the distinction between actual and enlightened preferences with an example. Suppose Bill, a teenager, considers moving to a foreign country permanently vs. staying in his home country. Possible futures resulting from each choice will differ in many ways. If Bill were able to see each future laid out in full detail (e.g. in an exhaustive, novelistic description of his possible life stories), it might be very hard to choose between them.
Making this choice might require lots of deliberation and empirical investigation (e.g. trying out different experiences or hearing the testimony of other people). When it's hard to decide between two world histories, people's judgments will vary over time as a function of their deliberation and inquiry. In such cases we can distinguish between the person's actual preferences13 and their enlightened preferences. The enlightened preferences are what the person would converge to under further deliberation and inquiry.

This actual/enlightened distinction is important for the problem of learning preferences from observed choices. Suppose we observe someone making a choice and we know that they have no false beliefs and no failings of rationality. Yet for the reasons given above, we also know that they might revise the actual preference implied by this choice. How likely are they to revise it? Answering this would require having a model of (a) how likely they are to deliberate and investigate this preference, and (b) how likely they are to change their mind if they do deliberate on it. I won't discuss such models here. Incorporating them into my current framework is a challenge for future work.

2. Why preferences? Why learning preferences?

The previous section introduced the idea of economic preference and made some basic distinctions about preferences. Why should we care about economic preferences? And why care about learning them?

2.1. Social Science

The social sciences aim to predict and explain human behavior that is relevant to social, political and economic entities. People's behavior is a function of many things. It depends on their beliefs, their abilities to plan and to strategize, and on constraints imposed by law and convention. Individual preferences are a crucial determinant of behavior. Suppose, for example, we want to predict how people will behave during a major recession.
This will depend on their preferences for moving to a new city, their preferences for being employed (even at lower wages), their preferences for consumption, and the preferences of employers to hold on to employees (even if times are bad), as well as many other more specific preferences.

13 One might argue that the actual preferences are likely to be incoherent. So they won't have the formal structure that the enlightened preferences would have. I won't pursue this here.

Why care about learning preferences? As a thought experiment, we can imagine humans having homogeneous preferences that researchers could write down by doing some armchair introspection. It could be that humans prefer only a certain simple-to-identify pleasure state and all human behavior is the pursuit of this state. Alternatively, it could be that human preferences are easy to deduce from our evolutionary history. In actual fact, it seems that human preferences are complex. People seem to care about a wide array of apparently distinct things. Preferences are also heterogeneous across times and places and across individuals. So techniques for learning preferences have two key roles. First, they are needed to learn the complex structure constituting human preferences (some of which is probably common to nearly all humans). Second, they are needed to learn particular preferences of each individual.

This argument for the importance of preference learning doesn't explain my choice of focusing on learning from observed human choices. An obvious alternative way of learning preferences is to give people a big survey about their preferences. While surveys are valuable in learning about preferences, I want to highlight some advantages of learning from actual choices. First, people may intentionally misreport some of their preferences (e.g. to conceal unpopular preferences). Second, people may be bad at accurately reporting their enlightened preferences on a survey.
If people actually face a choice between outcomes, they are likely to think carefully about each option, to consult wise friends, and to gather any empirical evidence that is relevant. A well-constructed survey might provoke some of this thinking but it's unlikely to match the real-life decision.

2.2. Normative Ethics and Value Theory

One theory of individual welfare/well-being is that someone's well-being is a function of whether their preferences are satisfied (Parfit 1984). I will call this the 'preference-satisfaction' theory. Versions of this theory do not always construe preferences as economic preferences that include all relevant evaluative dimensions between outcomes. Nevertheless, whether an agent's economic preferences are satisfied is likely to be highly relevant to their well-being on such theories.

There are various objections to preference-satisfaction theories. One objection is that a person preferring an outcome is insufficient for the outcome to confer well-being (Hausman 2012). The person also has to find the outcome rewarding or gratifying in some way. Another objection focuses on the problem (discussed above) that people can have actual preferences for outcomes they will later not prefer. Even if these objections undermine the claim that preferences are all that matters for individual well-being, it is still plausible that satisfying one's preferences is an important part of well-being.

If we assume that preferences are an important component of individual well-being, then the problem of learning individual preferences will be important both practically and philosophically. On the practical side, making people's lives better will depend on learning their preferences. For the reasons discussed in the section above on social science, there are good grounds for learning preferences from observed choices and not just from surveys or from introspection on one's own preferences.
On the philosophical side, if preferences are important for well-being, we need to understand better the content of those preferences. This means not just learning preferences in order to predict future choices but developing ways to represent the learned preferences in some comprehensible form.14 It's plausible that we already have a good understanding of some important preferences (e.g. for survival and for good health). But some preferences may be more subtle. For instance, most people have a preference for having children. Some of these people don't end up having children and fertility rates in many rich countries are much lower than in the past. People's preferences here don't seem easy to capture in a simple formula or simple logical principle.15

Moving on from the preference-satisfaction theory, there are other ways in which preferences are relevant to value theory and normative ethics. First, consider the importance of respecting people's freedom of choice and autonomy in modern societies. If there was a drug that made people less likely to commit violent crimes, many philosophers would judge it wrong for a government to secretly put the drug in the water supply. This judgment would be stronger if the drug had negative side-effects, even if the benefits of the drug overall would outweigh the side-effects.

14 In principle there could be an algorithm that infers preferences and represents them in a form that is very hard for us to interpret.
15 People don't desire to maximize their number of children. People don't want exactly one or two children (otherwise there'd be no people who wanted children but didn't have them). Many only want children if the right partner comes along or if it won't get in the way too much of other projects. But how can we formalize these conditions? The claim is not that it's impossible but that it's a substantive task to formalize human preferences.
More generally, governments need to obtain consent from individuals before taking actions that affect them in substantive ways. An individual will give consent if they prefer (in the economic-preference sense) the outcome. Individuals can be influenced in their judgments about what they prefer by evidence and argumentation: it's not just their immediate gut feeling that matters. But people's preferences will be an important determiner of whether moral or political ideas are able to influence actual behavior.16

2.3. Cognitive Science

As I noted in my discussion of social science (above), preferences are important for predicting human behavior. This makes the problems of learning and representing preferences of obvious relevance to cognitive science. Preference learning is also important in cognitive science because it's an important cognitive task that humans perform. People's ability to reason about other minds, often known as 'Theory of Mind', has been extensively studied in recent decades. It has been shown that Theory of Mind and preference inference develop very early, with sophisticated abilities observed before one year of age (Hamlin et al. 2007 and Hamlin et al. 2013). It is plausible that preference learning plays an important role in young children's ability to learn about the world during childhood. For adults, inferring the preferences of others is important in many spheres of life. It's important in social and family life, but also in doing deals (in business or politics), in designing new products or organizations, and in creating or understanding art. Improved theories of the structure of human preferences and new techniques for learning human preferences can be applied to modeling Theory of Mind. In the other direction, we might better understand how to learn human preferences from studying how humans do it.

16 Note that this is analogous to the relevance of Pareto improvements in economics. Policies should be easier to bring about if no one is worse off from them. (One final way in which preferences relate to normative ethics: if people can be swayed by argument to economically prefer what is objectively good for them, then economic preferences will ultimately converge to preferences for objectively good things.)

3. Challenges for Learning Preferences from Observed Choices

The previous section motivates the general problem of learning preferences from observed choices. This section introduces two specific challenges that are part of this general problem. I address these challenges in greater detail in Chapters 2 and 3. In those chapters, I define and implement formal models that learn preferences in ways that attempt to deal with these challenges. This section discusses the challenges from a big-picture perspective.

3.1. False beliefs, suboptimal cognition and failures of rationality

3.1.1. Suboptimality and false beliefs

The choices of optimal, omniscient agents will generally be more informative about their preferences than those of suboptimal agents. Imagine an omniscient, optimal agent with human-like preferences is navigating a busy shopping district. (Optimal agents love shopping.) This agent knows about every product in every store, without having to do any browsing. So if it buys something, then it must have bought the best item relative to its preferences. Imagine now that a normal human is thrown into the same shopping district with no ability to see or hear (e.g. due to a blindfold) and having never visited the district before. If this person manages to stumble into a store, their doing so would not provide any information about their shopping preferences. This example shows that even if we (as observers of the person doing the choosing) had full knowledge of the person's epistemic state, we would not be able to infer anything about their shopping preferences from their choices.
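The point about the blindfolded shopper can be made concrete with a toy Bayesian calculation. This is a sketch of my own, not one of the formal models of Chapter 2; the store names and the two preference hypotheses are invented for illustration. When the chooser's behavior does not depend on their preferences (because they cannot perceive the options), the likelihood of any observed choice is identical under every preference hypothesis, so the posterior over preferences equals the prior:

```python
# Toy Bayesian preference inference:
# P(preference | choice) is proportional to P(choice | preference) * P(preference).

def posterior(prior, likelihood, choice):
    """Posterior over preference hypotheses given one observed choice."""
    unnorm = {h: prior[h] * likelihood[h][choice] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

prior = {"prefers_books": 0.5, "prefers_shoes": 0.5}

# Sighted agent: which store they enter tracks their preference (with some noise).
sighted = {"prefers_books": {"bookstore": 0.9, "shoestore": 0.1},
           "prefers_shoes": {"bookstore": 0.1, "shoestore": 0.9}}

# Blindfolded agent: stumbles into either store at random, regardless of
# preference -- the likelihood is flat across hypotheses.
blind = {"prefers_books": {"bookstore": 0.5, "shoestore": 0.5},
         "prefers_shoes": {"bookstore": 0.5, "shoestore": 0.5}}

print(posterior(prior, sighted, "bookstore"))  # shifts toward prefers_books
print(posterior(prior, blind, "bookstore"))    # equals the prior: no information
```

Repeating the observation does not help the blindfolded case: a flat likelihood leaves the posterior at the prior no matter how many choices we see, which is the sense in which such choices carry no information about preferences.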
There are similar problems with inferring preferences when we (as observers) are uncertain about the epistemic state of the agent choosing. If we don't know whether or not the person wears a blindfold, then knowing which store they visit will not allow us to identify their preferences, though it will provide some information. Consider again the example of Fred and the poisoned water (above). If Fred chooses the hot tea, it's ambiguous whether he prefers the tea (and doesn't know about the poison) or prefers the water (and knows about the poison).

The examples of the blindfolded person and of Fred involve agents with uncertain or false beliefs. It's important to note that the difficulty in inferring their preferences is not that their choices are 'random' or 'noisy'. Random errors in someone's choices will make it more difficult to infer their preferences. But if the choices are repeated and are not completely random, errors will average out over time and preferences can still be inferred. By contrast, inaccurate beliefs can make it impossible to infer preferences, even if observations are repeated. Here's an example. Suppose Sue regularly eats candy containing gelatin. One possible inference is that Sue is happy to eat animal products. Another is that she doesn't know about the gelatin in the candy (or doesn't know that gelatin is an animal product). If Sue never receives any evidence about the gelatin, then we (as observers) can never distinguish between these two possibilities.

I have given examples that show how the agent's uncertainty or false beliefs can make preference inference more difficult. Other kinds of deviation from an ideally rational, omniscient agent will generate similar difficulties. One such deviation is suboptimal planning.17 For example, without computer assistance, people will plan travel routes suboptimally. The deviation from optimality will be bigger for a longer and more complex route.
Suppose someone works out a route from A to B that is very inefficient. We observe part of their route. From this evidence, we might infer that they intend to go from A to a different place C, for their route might be an efficient way to get to C. (Also note that if someone's ability to plan routes is completely non-existent, their choices would tell us nothing about their intended destination. This is like the example of the blindfolded person.)

17 Note that suboptimal planning is consistent with the agent knowing all the relevant empirical facts. Constructing the optimal plan is a computational task that might require elaborate calculations.

3.1.2. Cognitive Biases

Suboptimal planning is a deviation from optimality but not one that humans could be blamed for. After all, many planning problems are known to be computationally intractable. Psychologists have argued that humans are also subject to an array of 'cognitive biases'. These are deviations from ideal rationality that arise from a contingent shortcoming of our cognitive design, analogous to our visual blind spot. I will give two examples of biases and discuss how they interact with preference inference.

Loss Aversion

You are offered a gamble where you get $200 if the coin comes up heads and you lose $100 if it comes up tails. Many Americans (for whom losing $100 is a small, inconsequential loss) reject this gamble, despite its positive expected value (Rabin 2000). Psychologists explain this as a specific aversion to outcomes that are framed or interpreted as a loss. How does Loss Aversion interact with learning preferences? Studies show that not all subjects are equally loss averse. If we observed someone reject this gamble and didn't know whether they were subject to Loss Aversion, we would be uncertain about their preferences. They might reject it because losing $100 would have drastic consequences relative to winning $200.
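A standard way to model this bias is to weight losses by a coefficient lambda greater than one, in the style of Kahneman and Tversky's value function. The following sketch is my own illustration rather than a model from this thesis; the functional form and the particular lambda value are assumptions, and the dollar amounts are just the gamble above:

```python
# Loss-averse value function: losses loom larger than gains by a factor lam.
def value(x, lam):
    return x if x >= 0 else lam * x

def gamble_value(outcomes, lam):
    """Expected subjective value of a fair-coin gamble over two dollar outcomes."""
    return sum(0.5 * value(x, lam) for x in outcomes)

print(gamble_value([200, -100], lam=1.0))  # 50.0: a loss-neutral agent accepts
print(gamble_value([200, -100], lam=2.5))  # -25.0: a loss-averse agent rejects
```

An observer who sees only the rejection cannot tell whether the chooser has a large lambda or instead genuinely faces drastic consequences from losing $100; for an agent in the latter situation, rejecting the gamble requires no loss-aversion parameter at all.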
While this interpretation is unlikely to describe a typical US college professor, it is important that a general-purpose preference learning technique include this possibility. Loss Aversion is described as a bias because it leads to 'preference reversals', i.e. to sets of preferences over outcomes that are inconsistent. It can't be explained away as a basic preference for avoiding loss over realizing gains.

Time Inconsistency

Time inconsistency (also called 'dynamic inconsistency') is one of the best-studied cognitive biases. This inconsistency arises when someone's desired choice between two options changes over time without them receiving any new pertinent information about the options. For example, someone might desire to get up early the night before and then reverse their desire when their alarm rings in the morning. One of the most influential formal models of time inconsistency is the 'hyperbolic discounting' model (Ainslie 2001). I will consider time inconsistency in much greater detail in Chapter 2.

I've noted that deviations from optimality and omniscience create difficulties for learning preferences from choices. The general problem is roughly that suboptimal agents will not always end up with outcomes that they prefer. This means that outcomes will either be uninformative about preferences or they will be highly ambiguous. One idea for getting around this problem is to try to learn preferences only from simple settings where human behavior is likely to be optimal. This could be a natural setting or it could be a lab setting which is specifically constructed to be simple. Simplified settings seem to be a valuable way of learning about preferences while avoiding the problems of suboptimality discussed here. There are, however, some shortcomings with this approach. First, it's often hard to communicate to subjects the precise structure of a very simple decision problem. Consider the Prisoner's Dilemma.
Formally the game is defined by a pay-off matrix with four entries. Yet when subjects without a background in decision theory play against each other, they are likely to place some value on their opponent's utility and on equitable or 'fair' outcomes. This can make the 'cooperate' option more appealing than it is in the payoff matrix.

A different, more fundamental problem is that some important human preferences may be hard to test in a simplified setting. For one example, consider again the Prisoner's Dilemma and also related games involving cooperation, coordination and conflicts between selfish and altruistic behavior. Even if the payoff matrix in these games is simple, the game is intrinsically complex because simulating the game involves simulating another human being. Moreover, it's not clear one could really model and understand human behavior in game-theoretic settings without including our ability to simulate others. Another example of preferences that are hard to investigate in simplified settings are those involving processes unfolding over long periods of time and correspondingly large time investments. It might be hard to learn someone's preferences for friendships, family relationships, or for skills that take years to develop, in a lab experiment lasting only a few hours. Suppose it takes five years of training or experience to be in a position to evaluate two life options A and B. Then a lab experiment won't be able to do a simplified or controlled version of this training period and so won't be helpful in trying to learn about this preference.

I've argued that simplified settings are not sufficient to get around the problem of humans being suboptimal in their planning and being non-omniscient. In Chapter 2, I develop a different approach to learning preferences despite these deviations from optimality.
This approach explicitly models the deviations, learns from observed behavior whether they apply to a given individual, and then uses this accurate model of an individual to infer their preferences. There are limitations to this approach also. You can only learn someone's deviations from optimality if there is sufficient behavioral evidence to do so. Moreover, if someone has very limited knowledge or planning ability (as in the case of the blindfolded person), then inferring the precise nature of their suboptimality will not help with inferring their preferences. So it seems likely that a hybrid approach to inferring preferences (e.g. combining inference in simplified settings with explicit modeling of suboptimality in the style of Chapter 2) will be needed.

3.2. Representing preferences

The second challenge with learning preferences that I explore in this thesis is the problem of how to formally represent human preferences.

3.2.1. Representing preferences as a series of high-level properties

In Section 1 of this chapter (above) I specified preferences as taking world histories as their objects. From the perspective of everyday human cognition, world histories are extremely fine-grained entities. Minuscule differences in the physical properties of single atoms (even for a very short period of time) anywhere in the universe suffice to differentiate world histories. Yet most humans, if actually living through two histories that differ only in such ways, would find them indistinguishable. Humans are typically aware of only very coarse, high-level or macro-level properties of a world history. Moreover, human preferences seem to mostly concern a small subset of these high-level properties.18 The relevant properties concern pleasure and pain, good physical and mental health, rewarding artistic and leisure activities, family and friends, and so on for many more.
While we might have some practical or scientific interest in how many grains of sand there are on a beach, or in whether a particular tree has more mass in its roots or its bark, few people will have a preference either way. I summarize this point by saying that preferences are 'sparse' over the set of useful macro-level properties of the world. That is, if we collect a long, diverse list of high/macro-level properties of world histories then most of these predicates would be irrelevant to the preferences of most people. It's not easy to spell out this 'sparsity' claim formally. The intuition is that if we took a non-gerrymandered and extensive list of macro-level properties that are useful in describing the world, then most of these properties would not be useful for characterizing human preferences. An example of a non-gerrymandered list would be all properties expressed by single words in a big dictionary or all properties used in Wikipedia articles.

This sparsity property is important when it comes to learning human preferences from observations. In most cases the two possible world histories resulting from a choice will differ in a large number of their high-level properties. For instance, suppose Mary is choosing between a holiday in Puerto Rico and one in Mexico.19 This choice clearly has consequences that are relevant to Mary's preferences. But the choice also has many consequences that Mary would not even think about. In Mexico Mary would minutely influence currency exchange rates and would also influence prices for Mexican goods and labor. She might kill some individual blades of grass by treading on them. She might bring over some pollen from the US, and so on. If we observe Mary choose to go to Mexico, then this could be because of a preference for any of these high-level properties on which the Mexico and Puerto Rico options differ. All we know is that one of these thousands of properties speaks in favor of Mexico.
It could also be the interaction of different properties that is important (see below). If we assume that each of these different properties is equally important, we would scarcely be able to generalize having observed Mary's choice. Any similar choices in the future would themselves be between world histories which differ in thousands of mostly inconsequential ways from each other and from the Puerto Rico and Mexico world histories.

One obvious way around this problem of generalization is to limit the set of relevant properties from the beginning. That is, we just assume that most of the consequences of choosing Mexico as opposed to Puerto Rico are irrelevant and so we don't even consider them. This still leaves us the problem of which properties to include. Ideally, we would develop algorithms that include a large space of properties and winnow them down in response to data. It's also not obvious what properties to include in the initial 'large space of properties'. It's clear that we can ignore micro-physical properties for the most part (e.g. the acceleration of a random nitrogen molecule). But what kind of high-level properties do we need to include? There are two problems here. One is the problem of casting our net wide enough that we capture all relevant properties. The other is finding good formalizations of the properties that we are pretty sure should be included. Various feelings of pleasure, reward, pride, belonging, familiarity, novelty and so on are plausibly important to us.

18 By 'properties' I mean any function of world histories, not just those expressed by one-place predicates. This can include arbitrary relations.
19 In practice, Mary would usually have lots of uncertainty about how the holidays would go. But for this example I assume (implausibly) that there is no such uncertainty.
While we have familiar and frequently used words for these concepts, it's not clear that everyday language is good at precisely specifying the states that are valuable. It might be that we can communicate about them with each other because we have enough common experience of them to be able to use vague terms.[20]

3.2.2. Basic/intrinsic preferences vs. derived/instrumental preferences

I've suggested that we can represent preferences by a long list of high-level properties of world histories. It's a theoretical possibility that this long list turns out to be unnecessary. For instance, it could turn out that all humans are in fact psychological hedonists and the only relevant properties are amounts of pleasure and pain. But it seems plausible that a long and diverse list of properties is needed.[21]

[Footnote 20: So we might want to include some related concepts from philosophy, cognitive science or neuroscience that might help precisely formulate these states.]

[Footnote 21: The claim of sparsity (above) is that most high-level properties useful for describing the world are not useful for characterizing human preferences. This is consistent with there being a fairly long and diverse list of properties that are needed to characterize human preferences.]

One general issue is that there will be semantically distinct properties that agree or coincide across a large space of possible world histories. For example, consider the following properties:

* 'being in pain'
* 'having activation in pain circuits in the brain'
* 'having activation in pain receptors (which normally leads to a feeling of pain)'
* 'having a cut on the part of the skin where the pain receptors are'

While some of these properties are semantically distinct, they would coincide across many world histories. This has important consequences for learning preferences. Suppose we (as observers) see someone choosing between options, where some of the options involve getting a cut and feeling pain.
From this evidence alone, we cannot tell which of the list of properties above is relevant to determining the person's choice. It's intuitively clear that some of these properties are what people actually care about in a 'basic', 'primitive', or 'intrinsic' way, while the importance of the rest is only a consequence of their robust connection to those we care about intrinsically. For example, we don't think people care intrinsically about their pain receptors firing. But people do care about the experience of pain and about damage to their body. This suggests a distinction between preferences that are 'intrinsic' or 'basic' and those that are 'instrumental' or 'derived'. In explaining someone's choice, basic preferences will give a stronger and deeper explanation. Derived preferences are contingent, and would not be present in all possible world histories.[22]

[Footnote 22: This distinction, for a given individual, can be read off from the individual's preferences on world histories. If someone doesn't have a basic preference for their pain receptors being activated then they won't care about this happening in strange world histories where it happens without being caused by body damage and without causing pain.]

[Footnote 23: Note that basic/intrinsic preferences are not the same thing as intrinsic objective goods. If humans are able to converge on normative truths, then we'd hope that they end up having preferences for the intrinsic objective goods. But it's possible that humans have basic/intrinsic preferences for objectively bad things and that these preferences are very robust.]

3.2.3. Composition of properties

I've considered representing preferences in terms of a long list of high-level properties of world histories. An individual's preferences would then be represented by a utility function from this list of properties to the real numbers. But what kind of function is this? I'll refer to this as the question of how properties compose to produce preferences or utilities.
The simplest functional form for a utility function would be a weighted sum of the properties, where each property has a value of '0' or '1' depending on whether it holds of the world history. We could also consider multiplicative functions, functions with 'max' or 'min' operators, functions with terms that have diminishing marginal utility, and so on. As I discuss in Chapter 3, such functions have been explored in microeconomics. From a theoretical standpoint, it is desirable to explore the most expressive class of functions. So we could consider all functions satisfying some smoothness property, or else all functions satisfying a computability or tractability constraint.

The distinction (above) between basic and derived preferences relates to the issue of composition. The compositional structure discussed in microeconomics can sometimes be eliminated by considering only properties for which the agent has a basic, rather than derived, preference. Consider the following example. As chili is added to a dish it first gets better (i.e. tastier) and then gets worse (as the quantity of chili makes the dish too hot). One can plot the quality of the dish as a function of the amount of chili. But this function is not actually needed to represent someone's preferences. The basic preference here is for a certain richness of flavoring and for the overall gustatory experience. Varying the amount of chili is only relevant to the person's preferences in so far as it changes the gustatory experience. Whereas the chili had a non-monotonic effect on the desirability of a dish (more chili made the dish better, then worse), the property of gustatory experience is monotonic in its desirability. It's plausible that lots of rich compositional structure (like the non-monotonic influence of chili on a dish) that appears to be relevant to preferences can actually be eliminated by switching to basic preferences.
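The functional forms mentioned above (a weighted sum of 0/1 properties, 'min' operators, diminishing marginal utility) can be sketched in code. This is a toy illustration of mine; the property names, weights, and the use of a logarithm for diminishing returns are all invented for the example, not taken from the thesis.

```python
import math

# A world history represented by which high-level properties hold of it
# (0/1 indicators), plus a scalar quantity (hypothetical names).
history = {"eats_preferred_food": 1, "walks_far": 0}

def additive_utility(h, weights):
    # Weighted sum over 0/1 properties: the simplest compositional form.
    return sum(weights[p] * h[p] for p in weights)

def balance_utility(levels):
    # 'min' composition: overall value is set by the worst-off category,
    # so a decent standard everywhere beats huge success in one area.
    return min(levels)

def diminishing(x):
    # Diminishing marginal utility in a scalar quantity (log chosen
    # arbitrarily as one concave function among many).
    return math.log1p(x)

weights = {"eats_preferred_food": 10.0, "walks_far": -2.0}
print(additive_utility(history, weights))                        # 10.0
print(balance_utility([5, 5, 5]) > balance_utility([20, 1, 1]))  # True
print(diminishing(10) - diminishing(9) < diminishing(1) - diminishing(0))  # True
```

The 'min' form anticipates the balance-between-sources-of-value candidate discussed next: three moderate levels beat one extreme level under `min`, but not under a plain weighted sum.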
This leaves open the question of whether there is any rich and interesting compositional structure between properties for which there are basic preferences. One candidate would be the desirability of a balance between distinct sources of value.[24] People might prefer a balance between success or satisfaction (a) in work, (b) in family life, and (c) in personal hobbies and leisure. That is, people would prefer attaining some decent standard in all three categories over huge success in just one of them. For this to be an aspect of their basic preferences, the preference for balance cannot arise from the causal effect of failure in one category on the others. Some minimal level of success in work (or in making money) might be a causal pre-requisite for success in other spheres of life. The question is whether a certain level of success in work intrinsically enhances the value of success in other areas.[25]

4. Conclusion

This chapter introduced the notion of economic preference that I use in this thesis. Economic preferences are total subjective comparative evaluations of world histories. Since economic preferences depend on all dimensions of normative evaluation of world histories, they are robustly relevant both to predicting human choices and to providing normative advice or recommendations about how someone should choose. Economic preferences that satisfy basic rationality constraints have been formalized and studied extensively in economics and decision theory. I make use of the utility-function representation of economic preferences in the models I develop for learning preferences.

In Section 2 of this chapter, I considered two challenges for inferring preferences from observed choices. The first challenge is due to humans deviating from omniscience and from optimal rationality. Because of human suboptimality, human choices are less informative about preferences than they'd otherwise be.
I discussed some strategies for dealing with suboptimality and I will look at a different strategy in Chapter 2.

[Footnote 24: Some version of this idea goes back to Aristotle's Nicomachean Ethics.]

[Footnote 25: Here are some quite different examples. The same pain experience could be bad (in normal contexts) or good (for an exercise freak who grows to like the pain of training or for a masochist). Likewise, one might normally prefer to treat people with respect. But one might prefer to withhold respect for someone who doesn't deserve it. These are cases where the value of an event varies conditionally on the context.]

The second challenge I looked at was the question of the form and representation of human preferences. This is a fundamental question for thinking about preferences in general. The question is, 'What is a precise description (in regimented natural language or a formal language) of human preferences or of the kinds of preferences that humans have?' I also considered the related question of which preferences are basic/intrinsic as opposed to derived/instrumental. These questions are important for inferring preferences from choices. If we rule out many kinds of preferences a priori (because they seem too implausible) we risk failing to capture some surprising and counter-intuitive feature of human preferences. On the other hand, the wider we cast our net (i.e. considering a very broad class of possible preferences) the more difficult we make learning. This problem is discussed in a more applied setting in Chapter 3.

Chapter 2: Learning the Preferences of Ignorant, Inconsistent Agents

0. Overview

Chapter 1 gave an overview of conceptual issues related to learning preferences from observed choices. In this chapter, I introduce a Bayesian statistical model for inferring preferences and describe my implementation of this model using probabilistic programming. The model is intended to deal with one of the challenges for preference inference that I raised in Chapter 1.
This is the problem of 'suboptimality'. I consider two kinds of suboptimality in this chapter. First, humans often have uncertain or inaccurate beliefs about the empirical world (e.g. believing that a restaurant is open when it's already closed). Second, humans are subject to time inconsistency (e.g. staying in bed after having planned to get up early). My approach is to define a model of decision making that includes both inaccurate beliefs (which can be updated on evidence) and time inconsistency (formalized as hyperbolic discounting). I show that it is possible to learn each of the following properties of an agent from observed choices: (1) whether an agent has inaccurate beliefs and the values of the credences or subjective probabilities, (2) whether the agent is time inconsistent and (if so) the value of the discount rate, (3) the agent's preferences. The learning approach is Bayesian joint inference (implemented via probabilistic programming), returning a posterior distribution on all parameters characterizing the agent and their preferences. I test the inference algorithm's performance on simple scenarios involving planning and choosing a restaurant. This performance is also compared to inferences from human subjects asked to make preference inferences from the same stimuli.

1. Introduction

1.1. Examples illustrating inaccurate beliefs and time inconsistency

In Chapter 1, I gave some examples of how uncertain or false beliefs can make it more difficult to infer someone's preferences. I will now consider an additional example and discuss in more detail its implications for inference. The example is chosen to match the kind of decision problem I'll focus on in this chapter.

Example 1. John is choosing between restaurants for lunch. There are two restaurants close by. The Noodle Bar is a five-minute walk to John's left and the Vegetarian Cafe is five minutes to his right. Each is open and has plenty of tables.
John goes directly to the Vegetarian Cafe and eats there.

[Figure 1. John's choice between two restaurants in Example 1. Each restaurant (Noodle Bar, Veg Cafe) is a five-minute walk from John's starting point.]

If John is fully informed about the restaurants, then it's a plausible inference that he prefers the Vegetarian Cafe.[1] However, John might not know that the Noodle Bar even exists. He might think it's further away than it is. He might falsely believe that it is closed today or that it has no free tables. He might have accurate beliefs about the Noodle Bar but falsely believe the Vegetarian Cafe to be closer or less busy or better in the quality of its food than it actually is. He might know nothing about either restaurant and have chosen the Vegetarian Cafe by flipping a coin.

[Footnote 1: As I discuss below, it's also possible that he has a long-term preference for eating at the Noodle Bar but is close enough to the Vegetarian Cafe that he feels tempted to eat there.]

In the terminology of Chapter 1, John chose the world history where he eats at the Vegetarian Cafe. But this choice is compatible with his preference being for the world history where he eats at the Noodle Bar. There are many plausible inaccurate beliefs that could have led John to pick an outcome he does not prefer. I now consider a similar example, intended to illustrate how the possibility of time inconsistency can make inferring preferences more difficult.

Example 2. John is choosing between restaurants for lunch. There are two options nearby. There is a Burger Joint seven minutes away and a Salad Bar another five minutes further down the street. John walks down the street and eats at the Burger Joint.

[Figure 2. John's choice between restaurants in Example 2: the Burger Joint and, further down the street, the Salad Bar, relative to John's starting point.]

Suppose we assume that John has full information about each restaurant. Can we infer that he prefers the Burger Joint? Here is an alternative explanation. John planned to go to the Salad Bar.
In walking to the Salad Bar, he had to walk right past the Burger Joint. From close proximity, he found the Burger Joint so tempting that he abandoned his plan and ate there instead. This kind of explanation is common in everyday reasoning about people's actions. It ascribes a temporal or 'dynamic' inconsistency to John. When making his plan, John prefers that he eat at the Salad Bar over the Burger Joint. But when John stands right in front of the Burger Joint he has the reverse preference.

One explanation for the preference reversal is a change in John's knowledge. John might see how delicious the burgers look and change his mind on this basis. But I want to focus on the case where there is no new information of this kind. John might feel tempted on seeing the burgers but not because he gets any new information about them. If this temptation explanation is right, should we say that John has a preference for the Burger Joint? We can imagine arguing either side. One side is that John's real preference is for the Salad Bar and that he only went to the Burger Joint because of weakness of will. He planned to avoid eating there and he'll regret having eaten there afterwards. On the other side, we argue that John's failure was in his planning to go to the Salad Bar when his true preference was for the Burger Joint. When John is right in front of the Burger Joint he sees sense and realizes that he should have planned to eat there all along.

I will not enter into this debate. Instead I assume John has longer-term and shorter-term preferences that differ. When considering his choice of restaurants from his starting point, John is not close to either restaurant and so he acts on his longer-term preferences. When John is outside the Burger Joint, a few minutes away from sinking his teeth into a burger, his shorter-term preference is for the burger and he acts on this (where some people would stick to their longer-term preferences).
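John's reversal can be reproduced with hyperbolic discounting, the formalization of time inconsistency used in this chapter. The sketch below is my own illustration: the undiscounted utilities and the discount parameter k are invented values chosen so that the reversal appears; only the delays (7 minutes to the Burger Joint, 5 more to the Salad Bar) come from the example.

```python
def hyperbolic_value(utility, delay, k=1.0):
    # Hyperbolic discounting: value falls off as 1/(1 + k*delay).
    return utility / (1.0 + k * delay)

U_BURGER, U_SALAD = 6.0, 10.0  # illustrative undiscounted utilities

# From the starting point: the burger is 7 minutes away, the salad 12.
far = (hyperbolic_value(U_BURGER, 7), hyperbolic_value(U_SALAD, 12))
# Standing just outside the Burger Joint: burger 1 minute away, salad 6.
near = (hyperbolic_value(U_BURGER, 1), hyperbolic_value(U_SALAD, 6))

print(far[1] > far[0])    # True: from afar, John prefers the Salad Bar
print(near[0] > near[1])  # True: up close, the Burger Joint wins

# Note: exponential discounting (value = utility * d**delay) cannot produce
# this reversal here, since the salad/burger value ratio depends only on
# the fixed 5-minute gap between the two delays.
```

Because hyperbolic curves from different delays can cross as time passes, the agent's ranking of the two restaurants flips as he approaches them, matching the temptation story above.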
In this chapter, I'll show that both longer- and shorter-term preferences can be inferred from observed choices.

1.2. The Wider Significance of Inaccurate Beliefs and Time Inconsistency

The examples above show how inaccurate beliefs and time inconsistency can make choices less informative about preferences. I will now consider the wider significance of each of these deviations from optimality.

1.2.1. Inaccurate beliefs

Humans often have false beliefs about concrete everyday matters of fact (e.g. falsely believing a cafe is open). Many such false beliefs will be corrected by direct observation or by the testimony of other people. Beliefs are more likely to be corrected if they are directly relevant to an important choice. However, even if humans get evidence that conflicts with a false belief, they do not always abandon the belief. Superstitions of various kinds survive counter-evidence. Beliefs about emotionally charged topics (e.g. whether one is a bad person, whether one is still physically attractive despite one's advancing years, and so on) can also survive evidence.

Human choices also depend on beliefs about the causal and probabilistic structure of the environment. This includes everything from the fundamental physical laws of the universe to the chance of winning a state lottery. Beliefs about causal and probabilistic structure are also crucially important for practical tasks. Consider the example of someone driving on an icy road. The driver will make use of a mostly implicit model of how the tires interact with the icy road.[2] The accuracy of this model will depend on the driver's experience with this particular car and with icy conditions in general. On many technical questions about causal/probabilistic structure, laypeople will be agnostic and defer to authority. But on certain questions laypeople may oppose the scientific consensus.[3] Improving the accuracy of such beliefs is not just a question of getting more evidence.
Sometimes both laypeople and scientists will need to gain new concepts in order to understand how existing evidence relates to a particular question. For example, understanding the expected value of playing different casino games depends on an understanding of probability theory.

[Footnote 2: I'm not suggesting that the driver's mental model is something like a physics simulation used by the engineers who design the car. But the driver is able to make some estimates of how the car will respond to the brakes or to a sharp turn.]

[Footnote 3: When people decide whether or not to have their child vaccinated, they invoke beliefs about the causal consequences of the vaccine and the base rates of the relevant diseases. These beliefs sometimes go against the scientific consensus.]

I have described some ways in which human beliefs can be inaccurate. How are these ways relevant to learning human preferences from choices? As I discuss below, false beliefs about everyday matters of fact (like falsely believing a cafe is open) can be dealt with in some existing frameworks for inferring preferences. One challenge here is that beliefs vary across individuals. Different individuals will have different false beliefs and so inference models will have to infer an individual's inaccurate beliefs and their preferences simultaneously. If certain false beliefs are robust over time, it might not be possible to learn certain preferences with confidence (as choices based on the preferences could also result from false beliefs).

Inaccurate beliefs about the causal and probabilistic structure of the world will generally make inferring preferences even more difficult. In standard models of optimal planning (see 'Formal Model' section), a distinction is made between the causal structure of the world (which the agent knows fully up to non-determinism in the world) and contingent elements of the world state of which the agent might be ignorant. If we (as observers trying to learn preferences) know the exact false causal model that the agent is using, then inferring preferences may be possible. The hard case is where (1) the agent has a false causal model, and (2) we (as observers) don't know what the agent's model is. Unlike beliefs about contingent parts of the world state, a false causal model can lead to deviations from optimality that are both large and counterfactually robust.[4] Moreover, the space of possible causal structures that an agent could falsely believe is large. There are many historical examples of strange but persistent behavior that arguably stemmed from inaccurate causal models. This includes much of Western medicine up until the last two centuries and the treatment of the dead in many cultures (e.g. tombs filled with goods for the afterlife). Without much experience in anthropology or history, it is a non-trivial task to work out what some ancient behavioral practice is actually about. Though we have a decent intuitive idea of the preferences of ancient people, it's difficult to work out their causal models of the world.

1.2.2. Inconsistencies and cognitive biases

Example 2 (above) illustrated the phenomenon of time inconsistency in human preferences. Many behaviors have been explained in terms of an underlying time inconsistency. First, there are many examples that have a similar form to the tempting Burger Joint in Example 2. These examples involve some compelling activity with bad longer-term consequences, e.g. overspending on a credit card, taking drugs, gambling, staying out too late, getting violently angry, and so on. People plan to avoid this activity but abandon the plan at a later time. A second example is procrastination. Consider someone who procrastinates on writing a referee report and puts it off until it's too late. On day one, they prefer to do other things today and write the review tomorrow.
When tomorrow comes around, their preference has changed and they now prefer to do other things again rather than write the review. There is work trying to explain these apparent inconsistencies as resulting from purely rational choices (Becker 1976). However, one of the most influential psychological accounts of these phenomena (Ainslie 2001), which I draw on in this chapter, argues that this behavior is not rational and provides formal models of such inconsistencies.

[Footnote 4: By counterfactually robust, I mean that even if the decision problem is repeated many times with small variations the suboptimal choices will persist.]

The behaviors above that involve time inconsistency (temptation, procrastination, addiction, impulsive violence) are just as familiar from everyday experience as false or inaccurate beliefs. By contrast, the literature in psychology and economics on 'cognitive biases' has uncovered many subtle deviations from optimal cognition. Kahneman and Tversky (1979) provided experimental evidence that people are subject to 'framing effects', where evaluations of outcomes are influenced by irrelevant details of how the outcomes are presented. They also argued that human inferences violate the probability axioms (e.g. see the Conjunction Fallacy) and that the basic human representation of probabilities is flawed (Prospect Theory). There is a huge literature on biases and inconsistencies that influence choice and decision-making (Tversky and Kahneman 1974). Some of this work provides mathematical models of choice for agents subject to a bias (e.g. Prospect Theory or the Hyperbolic Discounting theory). However, there is little work on how to infer the preferences of people who are subject to these biases. The problem for inference is the same as in Example 2. If someone chooses X over Y, this could be because of a preference for X or because there is a bias that inflates their estimation of X over Y (or both).
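This identifiability problem can be made concrete with a toy joint inference. The setup below is my own illustrative assumption, not the model developed in this chapter: a softmax choice rule, a 'bias' that adds a fixed bonus to the agent's estimate of X, and a uniform prior over the four joint hypotheses (preferred option × bias present or absent).

```python
import math
from itertools import product

BONUS, ALPHA = 2.0, 1.0  # assumed bias bonus and softmax noise parameter

def choice_prob(chosen, prefers, biased):
    # Softmax probability of choosing an option, given the hypothesized
    # preference and whether the X-inflating bias is present.
    u = {"X": 1.0 if prefers == "X" else 0.0,
         "Y": 1.0 if prefers == "Y" else 0.0}
    if biased:
        u["X"] += BONUS
    ex = {o: math.exp(ALPHA * v) for o, v in u.items()}
    return ex[chosen] / sum(ex.values())

# Uniform prior over the four joint hypotheses; condition on observing
# a single choice of X (Bayes by enumeration).
posterior = {}
for prefers, biased in product(["X", "Y"], [False, True]):
    posterior[(prefers, biased)] = 0.25 * choice_prob("X", prefers, biased)
z = sum(posterior.values())
posterior = {h: p / z for h, p in posterior.items()}

# P(prefers X) stays well below 1: the bias hypothesis also explains
# the observed choice, so one choice cannot pin down the preference.
print(sum(p for (pref, b), p in posterior.items() if pref == "X"))
```

With more choices, especially ones where the bias and the preference pull in different directions, the posterior would start to separate the two explanations, which is the kind of joint inference this chapter develops at scale.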
If a choice task has been studied extensively in the lab, we may be confident that a particular subject will display the typical bias. But in the general case, it is extremely difficult to say whether someone will be subject to a bias. Even in the lab, none of the prominent cognitive biases are universal. Outside the lab, people have a stronger incentive to avoid certain biases and they are able to consult experts before making decisions. So we will often be in the same problematic situation as in the case of inaccurate beliefs. That is, we know that some individuals will be subject to inconsistencies and biases. But we'll have to infer this from people's behavior at the same time as we are trying to infer their preferences. As with inaccurate beliefs, this sometimes makes it impossible to pin down someone's preferences. Yet in other cases, even a small number of choices can sometimes reveal both someone's biases and their preferences (as I will demonstrate below).[5]

[Footnote 5: Some will argue that certain biases (e.g. framing effects or temporal inconsistencies) undermine the idea that there are preferences that are stable over time and independent of context. I think this argument is important but I will not discuss it here.]

2. Informal Description of a Model for Inferring Preferences

My goal in this chapter is to develop a statistical model for inferring preferences for agents with inaccurate beliefs and time inconsistency. The data for such inferences will come from choices on a decision problem. There are many different kinds of decision problem. Psychologists and economists studying preferences have often used decisions that are easy to specify formally and easy to run in a lab setting. These include choices between different lotteries (as in Prospect Theory) or simple two-player games such as the Prisoner's Dilemma, Rock-Paper-Scissors or the Dictator Game. In this chapter I work with decision problems that have a quite different flavor.
These problems are not intrinsically non-deterministic (unlike lotteries) and they involve only a single player. They are complex because of the need to take a sequence of coordinated choices to achieve a good outcome. The sequential nature of these problems makes them analogous to many real-world problems and also provides a richer framework in which to explore a phenomenon like time inconsistency (which intrinsically involves a temporal sequence).

This section begins with background material on sequential decision problems. I then provide background on time-inconsistent planning and hyperbolic discounting. With these components in place, I give an informal description of my model for inferring preferences. The final subsection provides a formal description of the model and describes my implementation of the model.

2.1. Sequential Decision Theory

2.1.1. The Basic Idea of Sequential Decision Theory

The theory of optimal sequential action and choice, sequential decision theory, has been studied extensively in economics, computer science, statistics, and operations research (Russell and Norvig 2003; Ng and Russell 2000; Sutton and Barto 1998; Bernardo and Smith 2000).[6] Sequential decision theory is a generalization of one-shot decision theory to situations where an agent faces a series of choices instead of a single choice. Typically an agent's earlier choices and their outcomes influence future choices (as opposed to a series of independent one-shot problems). Moreover, the agent knows at the outset that they face a sequence of dependent choices and they can plan accordingly. The mathematical objective in sequential decision theory is essentially the same as in the one-shot case. For each decision, the agent takes the action that maximizes the expected utility of the resulting sequence of world states.[7] The key difference is that the resulting sequence of world states will also depend on future actions the agent takes.
I will give a formal description of this in Section 4.

2.1.2. Why consider SDT in preference inference

The main motivation for my considering sequential decision problems is that they allow a wider range of human choices to be formally modeled. In standard one-shot decision problems (e.g. lotteries, Prisoner's Dilemma, Rock-Paper-Scissors) the agent's choices lead directly or immediately to outcomes of varying expected utility. The outcome might be non-deterministic (as in the case of lotteries) but it doesn't depend on any subsequent actions of the agent. By contrast, most human choices will lead to utility only if the person makes appropriate choices in the future. Sequential decision-making is somewhat like a coordination game with one's future self. Each choice cannot be considered in isolation: it must be considered relative to how it facilitates future choices.

[Footnote 6: In philosophy I know of little work that focuses on sequential decision theory in particular. This is probably because many philosophical questions about decision theory can be illustrated by focusing on a one-shot decision problem. Consider, for example, the Representation Theorem for Expected Utility (von Neumann and Morgenstern 1947), the distinction between EDT and CDT (via Newcomb's Problem and related one-shot decision problems), questions of self-indication and indexicals (via Sleeping Beauty), and problems with large or infinite quantities (via Pascal's Wager, the St. Petersburg Paradox, etc.). One exception is the discussion of the iterated Prisoner's Dilemma. However such iterated games have a very simple sequential structure. Much of their complexity is already in the one-shot game itself.]

[Footnote 7: In this chapter, I will usually talk about sequences of world states rather than world histories. Moreover, the utility functions I consider are defined primarily on world states rather than on world histories. This simplifies modeling and inferences but is inessential to the setup.]
Further work could explore the more general case of utility functions defined primarily on world histories.

[Figure 3. John's choice between two restaurants in Example 1. Each restaurant (Noodle Bar, Veg Cafe) is a five-minute walk from John's starting point.]

Consider John's choice in Example 1 (see Fig. 3) of where to eat lunch. From a high-level planning perspective, John's choice is whether to eat at the Noodle Bar or the Vegetarian Cafe. But from a lower-level perspective he faces a sequence of choices.[8] His first choice is whether to take the left or right fork in the road. There is no immediate benefit to John of taking the right over the left fork. John takes the right fork because it will put him in a location where subsequent choices will take him more efficiently to his restaurant of choice. (If John started by taking the left fork and then switched back to the right, his total path would be needlessly inefficient.)

It might seem odd to model this lower-level choice of whether to take the left or right fork. Given that planning the most efficient route is so trivial, why not just model the high-level choice of restaurant? For this simple problem, it would be fine to model the high-level choice. However, for slightly more complex problems, it becomes important to take into account the lower-level choices. The first thing to note is that there is not always a 'most efficient' route. Suppose there are two routes to a restaurant. The shorter route is usually faster but is sometimes more congested than the other. In this case, a natural strategy is to try the shorter route and switch over if it's too congested. There's not an optimal route: there's an optimal policy (a function from states to actions) that's best in expectation. This policy wouldn't work if there was no way to switch routes. That is, the optimal strategy/policy here depends on which choices are available to the agent throughout the sequential problem.
If the agent can switch routes midway, then it might be better for them to try the shorter route first. Otherwise, it might be best to always take the longer route (as it's better in expectation).

[Footnote 8: It's really more like a continuum of choices, because John could stop walking at any moment and walk the other way. But I'll imagine that space and time are discretized and so John has a finite series of choice points.]

2.1.3. Sequential decision problems, inaccurate beliefs and time inconsistency

The difference between one-shot and sequential decision problems is significant in the case of problems where the agent has inaccurate beliefs. In a one-shot decision problem, an agent with inaccurate beliefs might overestimate the expected value of a choice X and choose it over Y. Without further information, we (as observers) won't be able to distinguish this from a preference for X. In the sequential case, the consequences of a false belief will often be protracted and easy to recognize from observations. A single false belief (if uncorrected) could lead the agent to a long sequence of actions that result in a very poor outcome. On the other hand, such a sequence of actions will often provide sufficient information to infer that the agent has a false belief. For example, if someone walks a very long way to a Starbucks (instead of going to the Starbucks nearby) we might infer both that they prefer Starbucks and that they didn't know about the closer Starbucks.

Sequential decision problems also help bring out the interesting properties of time-inconsistent planning. This will be discussed in detail in the section on 'time-inconsistent planning', where I give a general introduction to time-inconsistent planning.

2.1.4. The Restaurant-Choice Decision Problem

This section introduces the kind of sequential decision problem that I model in this chapter.
The overall goal is to observe an agent playing this decision problem and then infer their preferences from these observations. The decision problem I focus on is an elaboration of the examples of John choosing a restaurant in Section 1. I will refer to this problem as the 'Restaurant-Choice problem'. Consider the decision problem depicted in Figure 4a. The backstory is that John gets out of a meeting and needs to find somewhere to get some food. We assume that, all things being equal, he prefers walking less. The green trail indicates the path that John took. These are not real human action choices: I created the paths based on what seemed to me a plausible sequence of actions. Rather than view John's path as a single choice (e.g. 'take the most efficient route to the Vegetarian Cafe'), I model John as making a series of discrete choices. One way of doing this is indicated by the grid of Figure 4b. For each square on the grid, if John stands in that square then he has a choice of moving to any of the neighboring squares. Once John enters a square with a restaurant, we assume he eats at that restaurant and the decision problem ends. Although I've described the Restaurant-Choice problem as a sequential decision problem, we could just as well call it a planning problem or a problem of instrumental ('means-ends') reasoning. John has some goal (initially unknown to the observer) of eating at his preferred restaurant, but he is subject to the constraint that he'd prefer to walk as small a distance as possible. The restaurants themselves have terminal value for John. The squares near to the restaurants have instrumental value because they are close to the states with terminal value.

Fig 4a (left) shows John's choice of restaurant (Veg Cafe) from his starting point in the bottom left. Fig 4b (right) shows a discrete 'grid' representation of the state space.
The sequential decision problem shown in Figure 4a is just one of a number of variant decision problems I consider in this chapter. To generate variants, I vary (1) John's starting point (which influences how close different restaurants are to him), (2) which restaurants are open, and (3) which streets are open or closed. I refer to each variant as a scenario. Figures 5a and 5b show two additional scenarios and John's path on those scenarios. Below I'll consider cases where an agent with different preferences than John plays the same sequence of scenarios.

Fig 5a (left) and 5b (right). In 5b, the section of road marked with crosses is not open (the agent can't go along that path and knows this fact).

It's clear that the Restaurant-Choice problem abstracts away lots of the complexity of the real-life task of choosing somewhere to eat. I assumed that the streets themselves are homogeneous, which precludes preferences for safer or quieter routes. I also assumed that the agent has full knowledge of the street layout. While the Restaurant-Choice problem is minimalist, it does allow us to explore the implications of John having uncertain or false beliefs and of John being time inconsistent. I consider cases where (a) John does not know about a certain restaurant, and (b) John has a false belief (or uncertainty) about whether or not a certain restaurant is open or closed. I also consider different ways in which John takes actions in a time-inconsistent manner (see the following subsection). The Restaurant-Choice problem I've described may seem quite specific to the case of navigating a city. In fact, the formal model of this problem I introduce in Section 3 is very general. The formal model consists of a set of discrete world states (which are the grid squares in Figure 4b).
For each world state, there is a fixed, finite set of world states that are reachable from it (viz. the neighboring grid squares). The agent has preferences defined over these world states and also over movement between states. A sequence of world states is generated by physical/dynamical laws that update the world state given the agent's choice. In the literature on sequential decision problems, a large and diverse array of problems have been usefully encoded in this form. These include playing backgammon and other games, flying a helicopter, financial trading, and deciding which ads to show to web users.

2.2. Time Inconsistency and Hyperbolic Discounting

2.2.1. Hyperbolic Discounting in one-shot decisions

Having explained the Restaurant-Choice problem, I now introduce a standard mathematical model of time-inconsistent behavior and show how it leads to two distinct kinds of agent for sequential problems. In Example 2 (above), I illustrated time inconsistency by considering how John might be tempted when walking past the Burger Joint on his way to the Salad Bar. A different kind of example (which has been extensively studied in lab experiments) concerns preferences over cash rewards at different times. Subjects in a psychology experiment are asked whether they'd prefer to receive $100 immediately or $110 the next day. They are also asked if they'd prefer $100 in 30 days over $110 in 31 days. Many more subjects prefer the immediate $100 (over the next-day $110) than prefer the $100 after 30 days (over the $110 after 31 days). If people aren't able to 'lock-in' or 'pre-commit' to holding on till the 31st day for the $110, then this pair of preferences will lead to inconsistent behavior. Similar experiments have been done with non-cash rewards (e.g. rewards of food, or negative rewards such as small electric shocks) and with animals, and the same inconsistency is frequently observed. The general pattern is that a 'smaller-sooner' reward is preferred to a 'larger-later' reward when it can be obtained quickly.
But this preference reverses if each reward is shunted (by an equal amount) into the future. There are various theories and formal models of time inconsistency. I'm going to focus on the model of hyperbolic discounting advocated by George Ainslie (2001). Ainslie explains time inconsistency in preferences in terms of a 'time preference' or 'discount function' that is applied whenever humans evaluate different outcomes. Humans prefer the same reward to come earlier rather than later: so the later reward is discounted as a function of time. If such discount functions were exponential, then this would not lead to dynamic inconsistency. However, the discount function for humans is better captured by a hyperbola, with a steep slope for very small delays and a progressively shallower slope for longer delays.9 Figure 6 illustrates the difference between exponential and hyperbolic discount curves.

Fig. 6. Graphs show how much a reward is discounted as a function of when it is received.

Ainslie's claim is not that discounting only applies to cash rewards or to sensory rewards like pleasure or pain. The idea is that discounting would apply to any world state that humans find intrinsically (non-instrumentally) valuable. It would apply to socializing with friends or family, to enjoying art, and to learning and developing valuable skills.10 Hyperbolic discounting predicts the inconsistency I described where $100 is preferred over a delayed $110 when it's immediate (but not when it's 30 days in the future). Suppose we use the following hyperbolic function to compute the discount factor (where delay is measured in days):

discount factor = 1 / (1 + delay)

Then at delay = 0, the values of the options are 100 (for the immediate $100) vs. 55 (for the $110 one day later). At delay = 30, the values are 3.22 (for $100 at 30 days) vs. 3.42 (for $110 after 31 days).

9 Various functions have these properties. The hyperbola is just a simple and analytically tractable example.

10 Using the terminology of Chapter 1, Ainslie gives a theory of how preferences over outcomes vary as a function of time. It's simplest to specify the theory if preferences are over world states (not world histories) and if they are represented by a utility function. The theory then has a simple compositional structure. There are timeless utilities over world states. (These capture what the agent would choose if all states could be reached at the same time.) Then if one world state is received after another, its utility is discounted by the appropriate amount.

2.2.2 Hyperbolic Discounting in Sequential Decision Problems

2.2.2.1. Naive vs. Sophisticated Agents

The example with cash rewards ($100 now vs. $110 tomorrow) illustrates the basic idea of hyperbolic discounting. It does not, however, demonstrate how hyperbolic discounting interacts with sequential decision-making. In the calculation above, I simply computed the discounted value of each of the outcomes. I didn't take into account the possibility of planning, which could allow one to pre-commit to waiting till the 31st day for the $110. As I explained above, in a sequential decision problem an agent's choices need to take into account the choices they'll make in the future. So to take the best action now, the agent will benefit from having an accurate model of their future choices. (Analogously, each player in a coordination game benefits from an accurate model of the other players.) There are two natural ways for a time-inconsistent agent to model its future choices:11

1. A Sophisticated agent has a fully accurate model of its own future decisions. So if it would take $100 on day 30, it will predict this on day 0.

2. A Naive agent models its future self as evaluating options in the same way as its present self.
So if it currently prefers to 'delay gratification' on the 30th day and wait for the $110, then it predicts (incorrectly) that its future self will do the same.

11 See references in Kleinberg and Oren 2014 for more on this distinction.

Note that both Sophisticated and Naive agents will fail to delay gratification if they reach the 30th day and get to choose between the $100 now or the $110 later (otherwise they wouldn't be time-inconsistent agents!). The Sophisticated agent has the advantage that it can foresee such behavior and potentially take steps to avoid ever being in the situation where it's tempted. This is the ability Odysseus displays when he ties himself to the mast. (Note that I'll sometimes talk about sophisticated planning. This is the planning a Sophisticated agent does, i.e. planning that takes into account accurate (rather than inaccurate) predictions about the agent's future actions. Similarly, naive planning is just the planning a Naive agent does.)

2.2.2.2. Different Behavior of Naive and Sophisticated Agents

Naive and Sophisticated agents will behave differently on some of the decision problems that I model in this chapter. Consider the setup in Fig. 7a. The green line shows the path of an agent, John, through the streets. He moves along the most direct path to the Vegetarian Cafe but then goes left and ends up at the Donut Store. If he generally preferred donuts, he could've taken the shorter route to the other branch (D1). Instead he took a path which is suboptimal regardless of his preferences. Given the right balance of utilities for each restaurant, the behavior in Fig. 7a can be predicted by assuming that John is a Naive agent. Suppose that each restaurant has an 'immediate' utility (which John experiences immediately on eating the meal) and a 'delayed' utility (which John experiences after eating the meal).
Suppose that for John donuts provide more immediate utility (or 'pleasure in the moment') than vegetables, but that vegetables provide significantly more delayed utility. (For instance, if John is trying to eat healthily, then he'll feel bad after eating the donuts and good after eating the vegetables.) Now consider the choices that John faces. Early on in his path, John could turn left and go to Donut Store D1. At that point, D1 and the Vegetarian Cafe are each more than 5 minutes' walk away. From this distance, due to hyperbolic discounting, the greater immediate utility of the Donut Store doesn't lead it to win out overall. So John walks along the shortest route to the Vegetarian Cafe. This takes him right past the second Donut Store D2. Since he now gets very close to D2, it wins out over the Vegetarian Cafe. So John's being Naive can explain this behavior.

Figure 7a (left) is a path consistent with a Naive agent who prefers the Veg Cafe overall but gets high immediate utility from the Donut Store. Figure 7b (right) is a path consistent with a Sophisticated agent with the same preference structure. NB: You should assume that it takes around 10 minutes to walk the path in Fig 7a.

Suppose instead that John was a Sophisticated agent. Consider John's decision of whether to (a) turn right onto the street that contains the Noodle Store, or (b) continue north towards Donut Store D2 and the Vegetarian Cafe. When standing at the intersection of the street with the Noodle Store, John will predict that if he went north he would end up at the Donut Store D2 (due to being tempted when he is very close to it). By contrast, if he goes right, he will avoid temptation and take a long route to the Vegetarian Cafe (assuming he doesn't like the Noodle Store). This route is shown in Figure 7b. To get this behavior, John would have to get significantly more overall utility (i.e.
delayed utility + immediate utility) from the Vegetarian Cafe than from the Donut Store D2. If the Donut Store is not much worse overall than the Vegetarian Cafe, then he'll go to the Donut Store (since it's closer). However, he wouldn't go to Donut Store D2. He would instead realize that going north will not take him to the Vegetarian Cafe (due to temptation) and so, instead of taking this inefficient route to a Donut Store, he'd take the more efficient route and go to D1. Naive and Sophisticated agents behave differently on this decision problem. An agent without hyperbolic discounting behaves differently again. This agent either goes to Donut Store D1 or takes the path shown in Fig 4a (above). More generally, the hyperbolically discounting agents produce patterns of behavior that are hard to explain for an agent who doesn't discount in this way. The paths in Fig 7a and 7b are both inefficient ways to reach a goal state. Without time inconsistency, ignorance, or some kind of suboptimal planning, agents will not reach goals inefficiently. Since hyperbolically discounting agents produce distinct behavior from time-consistent agents, we can infer from behavior whether or not an agent hyperbolically discounts. Moreover, we can often infer whether the agent is Naive or Sophisticated. These are the key ideas behind the algorithms for learning preferences I present in this chapter. As a final point, I want to emphasize that the Naive/Sophisticated distinction does not just apply to hyperbolic discounters. The distinction lies in whether the agent predicts their own preference reversals (and so is able to plan to avoid them). This distinction is important in explaining a range of human behavior. Consider pre-commitment strategies, which are actions that cause a later action to be worse (or more costly) without any direct offsetting reward.12 John's taking the long route in Fig. 7b is a pre-commitment strategy, as it keeps John far enough from the Donut Store that he's not tempted.
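The preference reversal that pre-commitment guards against can be checked numerically with the hyperbolic discount factor 1/(1 + delay) from Section 2.2.1. The following is a minimal Python sketch of my own, not part of the chapter's model:

```python
def hyperbolic_value(reward, delay, k=1.0):
    """Discounted value of a reward received after `delay` days,
    using the hyperbolic discount factor 1 / (1 + k * delay)."""
    return reward / (1.0 + k * delay)

# Viewed from day 0: the immediate $100 beats the next-day $110 (100.0 vs 55.0).
assert hyperbolic_value(100, 0) > hyperbolic_value(110, 1)

# Viewed 30 days out: the same pair reverses; $110 at day 31 now wins (~3.23 vs ~3.44).
assert hyperbolic_value(100, 30) < hyperbolic_value(110, 31)
```

The reversal arises because the hyperbola is steep near delay zero and shallow far out, which is exactly the situation the everyday pre-commitment strategies below are designed to defuse.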
Here are some everyday examples of pre-commitment strategies:

- Spending time in places where temptations (TV, web access, alcohol) are hard to obtain, e.g. a cabin in the woods.
- Asking a boss/advisor for a hard deadline to provide an extra motivation to finish a project.
- Putting money in a savings account from which withdrawals are costly or time-consuming.

Although pre-commitment strategies are common, we still often observe time inconsistency in human actions. It's plausible that Naive planning is part of the explanation for this.

12 By making the later action more costly, one's future self is less tempted to take it. An extreme version, familiar from Odysseus and the Sirens, is when the later action is made almost impossible to take.

2.2.3. Learning preferences for Agents with false beliefs and hyperbolic discounting

2.2.3.1. Inference: Bayesian generative models

I've introduced the Restaurant-Choice problem and the kind of agents that will face this problem (i.e. agents with inaccurate beliefs and with Naive/Sophisticated hyperbolic discounting). The overall goal of this chapter is to learn an agent's preferences from their choices on these decision problems. I employ a Bayesian generative-models approach (Tenenbaum et al. 2011). I define a large hypothesis space of possible agents and use Bayesian inference to infer both the preferences of the agent and their beliefs and discounting. The space of possible agents is defined by considering all possible combinations of the following properties:

(a) Cardinal preferences over restaurants (represented by a utility function), which includes immediate and delayed utilities for each restaurant.
(b) Degree of hyperbolic discounting (including no hyperbolic discounting at all), which is represented by a discount rate parameter (see Formal Model section).
(c) The agent's 'planning type', which can be Naive or Sophisticated.
(d) The agent's prior belief distribution.
For example, whether the agent thinks it likely that a particular restaurant is closed (before getting close enough to see).

2.2.3.2. Evaluating algorithms by comparing to human judgments

I've outlined an approach to inferring preferences using Bayesian inference on a space of agents. Implementing this Bayesian inference procedure provides an algorithm for preference learning. Given observations of actual human behavior, this algorithm would infer some set of preferences for the agent, but how would we evaluate whether the inference is correct?13 This is a deep problem and (in my opinion) not one that has a single straightforward solution. Preferences cannot be directly observed from behavior and we are not infallible when it comes to accurately reporting our own preferences.

13 In other words, what if the model is mis-specified and doesn't contain any model of a human that fits the data observed?

For this chapter, I adopt a simple approach. I consider a variety of behaviors on the Restaurant-Choice problem, including as a subset the examples illustrated in Figure 5 and Figure 7. I present these behaviors to a pool of human subjects (usually more than 50 subjects) and have them make inferences about the preferences. I then compare the human inferences to the algorithm's inferences. The idea is that if the algorithm is doing a good job at inferring preferences, then it will give results similar to those of human subjects. One might object that humans may not be good at this kind of 'third-person' preference inference. That is, maybe people are good at reporting on preferences that lead to their own choices but bad at inferring the preferences of others. For my experiments, I don't think this issue will arise. In the Restaurant-Choice problems, the choices take place over a short time-period and are between only a few simple options. Preferences over restaurants are familiar from everyday experience and so I expect that human third-person inferences will be reliable.14
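The hypothesis space from Section 2.2.3.1, built from all combinations of properties (a)-(d), can be sketched as a simple product space. This Python sketch is purely illustrative; the field names and the particular candidate settings are my own, not the chapter's implementation:

```python
from itertools import product
from collections import namedtuple

# One hypothesis = one candidate agent. Fields mirror (a)-(d) in the text:
# utilities (a), discount rate k (b), planner type (c), prior belief (d).
Agent = namedtuple("Agent", ["utilities", "k", "planner", "prior_open"])

utility_settings = [
    {"veg": (1, 4), "donut": (3, -2)},  # (immediate, delayed) utility per restaurant
    {"veg": (1, 1), "donut": (2, 0)},
]
discount_rates = [0.0, 1.0]             # 0.0 = no hyperbolic discounting at all
planner_types = ["non_discounting", "naive", "sophisticated"]
priors = [0.5, 0.9]                     # prior probability a given restaurant is open

hypothesis_space = [Agent(*h) for h in
                    product(utility_settings, discount_rates, planner_types, priors)]
print(len(hypothesis_space))  # 2 * 2 * 3 * 2 = 24 candidate agents
```

The real space is of course much larger (the utilities and discount rate are continuous), but enumeration over a discretized product space of this shape is exactly what the inference has to search.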
2.2.3.3. Dealing with non-omniscience and suboptimality

Deviations from omniscience and optimality mean that human choices can be consistent with many possible preferences. When Jill chooses X over Y, it could be because of a genuine preference for X or because of some bias or suboptimal evaluation that makes X look better than Y. My approach addresses this problem by explicitly modeling inaccurate beliefs and hyperbolic discounting. The idea is to use observed choices to jointly infer the biases and false beliefs of an agent as well as their preferences. With sufficiently varied decision problems, inaccurate beliefs and hyperbolic discounting predict behaviors that differ from those of more optimal agents. These differing predictions allow inaccurate beliefs and discounting to be inferred by Bayesian inference. The overall idea is that if you know exactly how a person is biased, you can use that knowledge to infer their preferences. This approach can be extended to other biases or deviations from optimality. One problem with this approach is that a bias or deviation may be very difficult to model accurately. It might be relatively simple mathematically but hard to discover in the first place. Alternatively, it might be a model that requires a large number of fitted parameters. Ultimately, it seems hard to model choices without incorporating high-level reasoning (e.g. moral deliberation where someone thinks deeply about a dilemma they face), and such high-level reasoning is difficult to model at present.

14 In future work, I'd want to (a) record behavior from humans, rather than making up plausible behavior myself, and (b) get judgments from those individuals about their preferences. Then I could compare the inferences of the algorithm to the preferences as stated by each individual. I could get further information by also asking additional people to make impartial third-person judgments about the preferences.

2.2.3.4.
Distinguishing the normative and descriptive in preference inference

Another general difficulty with inferring preferences from behavior is the issue of distinguishing the preferences from the normatively irrelevant causes of choice. My approach involves defining a large space of possible agents and then finding the agent features (beliefs, discount rate, preferences) that best predict the observed choices. But consider this general problem: 'Given a good predictive model for human choices, how do we extract from this model the preferences?' I will focus on a closely related problem: 'Given a good predictive model for a human's choices, how do we extract normative advice/directives for that person?' Consider this problem applied to agents with inaccurate beliefs and hyperbolic discounting. It's clear that in making normative recommendations to an agent we should ignore their false beliefs. For example, if John falsely believes his favorite restaurant is closed, we'd still say that John should go to the restaurant. (To get John to follow the advice, we need to convince him first that his belief is false. But I leave aside this practical dimension.) What about an agent who hyperbolically discounts? It seems clear that the behavior of the Naive agent shown in Fig 7a is irrational, as the agent can do strictly better by taking other options (as we see with the Sophisticated agent in Fig. 7b). One idea is to make normative recommendations to a time-inconsistent agent based on what the most similar time-consistent agent would do. But there is still a question of whether to preserve the time preference that the hyperbolic discounting agents have. These agents prefer getting utility sooner rather than later. Should we treat this as part of their preference or as a bias that we ignore when making recommendations about what they should do?
There has been discussion about this issue (or closely related issues) from philosophers (Rawls, Parfit, Hausman 2012) and economists (Nordhaus 2007, Hanson 2008). While there is consensus that everyday false beliefs should be ignored when making recommendations, there is not a consensus about whether some kind of pure time preference is rational or not. How much of a problem is it that we can't say definitively whether time preference is rational or not? This depends on our goals. If our goal is to write down the true preferences of an individual, then we can't do so without resolving this problem. However, we are still able to give normative recommendations conditional on each possibility (non-discounting and discounting) and the individual could simply choose for themselves which to follow.

3. Formal Model Description

3.1. Overview

I have given an informal description of my approach to preference inference. I now give a mathematical description of the model and explain my use of probabilistic programming to implement the model. My approach infers an agent's preferences from their observed choices by inverting a model of (possibly suboptimal) sequential decision-making. I consider discrete sets of possible states and actions. The environment's dynamics are captured by a stochastic transition function from state-action pairs to probability distributions on states. Such transition functions are standard in Markov Decision Problems (MDPs) and in Reinforcement Learning more generally (Sutton and Barto 1998). A Markov Decision Problem (MDP) is a sequential decision problem in which the current state (which is fully known to the agent) screens off all the previous states. The Restaurant-Choice problem naturally forms an MDP because the only information relevant to a decision is where the agent is currently positioned, not how the agent reached that position.
If it matters how the agent reached the position then the problem is not an MDP.15 MDPs are decision problems. In what follows, I abuse terminology somewhat by talking about 'MDP agents'. By this I mean an agent with full knowledge of the current state, which is something agents in MDPs always have. I consider three different agent models that vary in their assumptions about the agent's knowledge and time preference:

15 Though one can shoehorn the problem into the MDP form by enriching the state space with the information about how the agent reached his position.

MDP agent
This is an agent that does expected-utility maximization given full knowledge of its world state and of the transition function. The MDP agent discounts future utility in a dynamically consistent way (which means discounting is exponential or there's no discounting). The sequential planning problem for this agent can be solved efficiently via Dynamic Programming.

POMDP agent
A POMDP is a Partially Observable Markov Decision Problem. The difference between a POMDP and an MDP is that in a POMDP the agent has uncertainty over which state it's in at a given time-step. The POMDP agent updates its beliefs based on observations. In evaluating actions, it takes into account the information it might gain by visiting certain states. In a POMDP all of the agent's uncertainty is uncertainty over the state it is in. This does not actually restrict the kind of facts the agent can be uncertain about. For the Restaurant-Choice problem, the agent always has full knowledge of its location but it may be uncertain about whether a restaurant is open or not. We can represent this in a POMDP by defining the state to include both the agent's location and whether or not the restaurant is open or closed. Belief updating for the POMDP agent is simply Bayesian inference.
This depends on a likelihood function, which defines the probability of different observations in different states, as well as a prior on the agent's state.

Time Inconsistent (TI) agent
The TI agent is an MDP or POMDP agent that is subject to dynamic inconsistency. I use the hyperbolic discounting model of dynamic inconsistency (described above).

3.2. Formal Model Description

We begin by defining optimal sequential decision making for an MDP agent and then generalize to POMDP and TI agents. The formal setting is familiar from textbook examples of MDPs and POMDPs (Russell and Norvig 2003). Let S be a discrete state space and let A be a discrete action space.16 Let T be a stochastic transition function such that T(s, a) is a probability distribution on S. The transition function specifies how the state of the world evolves as a function of the current state and the agent's current action. In the Restaurant-Choice problem the function T is deterministic. (If the agent chooses to move right, then it moves right, and so on.) However, I consider the more general case where T is stochastic. Sequences of world states are generated by repeated application of T via the recursion:

Equation (1): s_{t+1} ~ T(s_t, a_t), where a_t is the action the agent chooses in state s_t.

In state s_0 the agent chooses the action that maximizes the expected total (discounted) utility of the future state sequence. This depends on a real-valued utility function U on the state space S and the standard exponential discount rate γ, and is given by:17

Equation (2): EU(a; s_0) = E[ Σ_t γ^t U(s_t) ]

This expectation is over the distribution on state sequences s_0, ..., s_k that results from action a, which is generated by Equation (1). So the expected utility of a depends both on T and on the agent's future actions after performing a. Hence in the MDP case, rational planning consists of a simple recursion. The agent simulates the entire world forward, conditional on each action. Simulating the world forward means recursively calling the agent to take future actions.
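The forward-simulation recursion just described can be made concrete. The following Python sketch is my own toy illustration (a four-state deterministic chain rather than the Restaurant-Choice grid): it computes Equation (2) by recursively calling the agent's own choice function to simulate future actions:

```python
# Deterministic toy MDP: states 0..3, actions move right (+1) or stay put.
# Only state 3 carries utility; finite horizon; exponential discount gamma.
GAMMA = 0.9
UTILITY = {0: 0.0, 1: 0.0, 2: 0.0, 3: 1.0}

def transition(state, action):
    return min(state + 1, 3) if action == "right" else state

def expected_utility(state, action, steps_left):
    """Equation (2) in miniature: utility of the next state plus the
    discounted value of the sequence produced by the agent's own
    (recursively simulated) future choices."""
    next_state = transition(state, action)
    value = UTILITY[next_state]
    if steps_left > 1:
        future_action = act(next_state, steps_left - 1)  # simulate the future self
        value += GAMMA * expected_utility(next_state, future_action, steps_left - 1)
    return value

def act(state, steps_left):
    """The agent's choice function: maximize expected utility."""
    return max(["right", "stay"], key=lambda a: expected_utility(state, a, steps_left))

print(act(0, 3))  # the agent heads toward the rewarding state: 'right'
```

Note how `expected_utility` and `act` call each other: this mutual recursion is exactly the "agent simulates the entire world forward" pattern, and it is the recursion that breaks for TI agents below.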
This simple recursion does not hold in the case of TI agents. (Note that the inference problem for MDP agents is Inverse Reinforcement Learning.) Planning for POMDP agents is significantly more complex.18 Unlike in the MDP setting, the agent is now uncertain about the state it is in. The agent updates a belief distribution over its state on receiving observations from the world. Equation (2) still holds but the agent also has to integrate over its belief distribution.

16 I focus on a finite state space and a fixed, finite time horizon. Many aspects of my approach generalize to the unbounded cases.

17 I actually work with a more general model where the agent does not take the arg-max action but instead samples actions according to the standard soft-max distribution. This noise in the agent's choice of actions has to be taken into account when planning. This does not complicate the model or the implementation in a significant way.

Defining an agent with hyperbolic discounting requires a small change to the expected utility definition in Equation (2). We need to add a constant k, which controls the slope of the discount curve. The new equation is:

Equation (3): EU(a; s_0) = E[ Σ_d U(s_d) / (1 + k·d) ]

I have replaced the time index t with d (which stands for 'delay'). This is to highlight the dynamic inconsistency of hyperbolic discounting. The hyperbolic discounter evaluates future states based on their delay from the present (not from where they are in the absolute time sequence). As the present moves forward, evaluations of future states change in ways that lead to inconsistencies. As in the MDP case (Equation 1), the expectation in Equation (3) depends on the agent's simulation of its own future actions. For a Sophisticated hyperbolic discounter, this 'self-model' is accurate and its simulations match what would happen in reality. So the Sophisticated agent's actions depend on the same recursion as the MDP agent.
By contrast, the Naive hyperbolic discounting agent simulates its own future actions using an inaccurate self-model. Let 'Mr. Naive' be a Naive agent. Then Mr. Naive will assume at time t that his future selves at any t' > t evaluate options with hyperbolic delays starting from time t, rather than from zero. So Mr. Naive wrongly assumes that his future selves are time-consistent with his present self and with each other. The assumption that the future selves are consistent is computationally simple to implement and can lead to efficient planning algorithms in the deterministic case (Kleinberg and Oren 2014). The Naive agent can plan cheaply. While this is an advantage, keep in mind that it will sometimes abandon these plans! I have described the two main independent dimensions on which agents can vary. They can have uncertainty over their state (POMDP agents) and they can have time-inconsistent discounting (TI agents). For inference, I define a hypothesis space of possible agents as the product of all these dimensions. First, all agents will have a utility function U which represents their undiscounted preferences over all states. Second, each agent will have a probability distribution B (for 'belief') on states which represents their prior probability that each state is the actual initial state.19 Finally, we let Y be a discrete variable for the agent type, which can be 'non-discounting' (i.e. time consistent), 'hyperbolically discounting and Naive', or 'hyperbolically discounting and Sophisticated'. The variable k is the discount rate for discounting agents. So an agent is defined by a quadruple (B, U, Y, k) and our goal is to perform Bayesian inference over the space of such agents.

18 Formal details of the inference problem for POMDPs can be found in Baker and Tenenbaum, 'Modeling Human Plan Recognition using Bayesian Theory of Mind' (2014).
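The difference between the Naive and the Sophisticated self-model can be sketched on the cash-reward problem from Section 2.2.1 (wait until day 31 for $110, or take $100 on day 30). This Python sketch and its helper names are my own illustration, not the chapter's implementation:

```python
def value(reward, delay, k=1.0):
    """Hyperbolic discounting, as in Equation (3): reward / (1 + k * delay)."""
    return reward / (1.0 + k * delay)

def choice_on_day_30(k=1.0):
    """What actually happens on day 30: $100 now vs. $110 tomorrow."""
    return "take_100" if value(100, 0, k) > value(110, 1, k) else "wait_110"

def predicted_future_choice(agent_type, k=1.0):
    if agent_type == "sophisticated":
        return choice_on_day_30(k)  # accurate self-model
    # Naive: the future self is assumed to keep today's (day-0) evaluation,
    # under which $110 at delay 31 beats $100 at delay 30.
    return "take_100" if value(100, 30, k) > value(110, 31, k) else "wait_110"

def wants_precommitment(agent_type, k=1.0):
    """Seen from day 0: is a binding lock-in to the day-31 $110 worth it,
    compared to staying flexible and getting whatever the future self picks?"""
    if predicted_future_choice(agent_type, k) == "take_100":
        flexible_reward, flexible_delay = 100, 30
    else:
        flexible_reward, flexible_delay = 110, 31
    return value(110, 31, k) > value(flexible_reward, flexible_delay, k)

print(wants_precommitment("sophisticated"))  # foresees the temptation: ties to the mast
print(wants_precommitment("naive"))          # expects to wait anyway: sees no need
```

Both agent types take the $100 if day 30 actually arrives without a lock-in; only the Sophisticated agent predicts this on day 0 and therefore values pre-commitment.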
I note that apart from the addition of TI agents, this inference problem is the problem of Inverse Reinforcement Learning (Ng and Russell 2000) or Inverse Planning (Baker and Tenenbaum 2014). In the most general formulation of this inference problem, only state sequences (rather than agent choices/actions) are observed. Hence the variables describing the agent are inferred by marginalizing over action sequences. The posterior joint distribution on agents, conditioned on state sequence s_{0:T} and given transition function T, is:

Equation (4):  P(B, U, Y, k | s_{0:T}, T)  ∝  P(s_{0:T} | B, U, Y, k, T) * P(B, U, Y, k)

The likelihood function here is given by Equation (2) or (3), with marginalization over actions. The prior distribution P(B, U, Y, k) will reflect prior assumptions or background knowledge about which kinds of agent are more likely.

[Footnote 19: For the Restaurant-Choice POMDP case, we can represent all the agent uncertainty we want by considering only the agent's prior on the start state.]

3.3. Implementation: Agents as Probabilistic Programs

3.3.1. Overview

The previous section gives a formal specification of the Bayesian inference we want to perform. This section explains my implementation of this inference for the Restaurant-Choice problem. The implementation uses recent ideas from probabilistic programming and is one of the contributions of this work. I explain some key ideas behind this implementation and then outline the implementation itself.

The Formal Model section (above) describes a hypothesis space of possible agents which vary in terms of their preferences, their beliefs and their discounting. Conceptually, Bayesian inference over this hypothesis space is straightforward. However, actually implementing this computation is not trivial. In order to compute the likelihood of observations for a given agent, we need to compute the actions that agent will take. For MDP agents, there are simple, efficient algorithms for computing an agent's actions.
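Equation (4) can be loosely illustrated with a tiny grid computation: enumerate a few candidate discount rates, score an observed choice with a soft-max likelihood, and normalize. Everything here (the utilities, delays, grid of k values, and alpha) is invented for illustration, and B and Y are held fixed for brevity.

```python
# Minimal sketch of Equation (4): posterior over a discount-rate grid
# given one observed choice, with a soft-max action likelihood.
# All numbers are invented; B and Y are held fixed for brevity.
import math

utilities = {'veg': 2.0, 'donut': 1.0}        # undiscounted utilities U
ks = [0.0, 1.0, 2.0]                          # candidate discount rates
prior = {k: 1.0 / len(ks) for k in ks}        # uniform prior on k
alpha = 2.0                                   # soft-max noise parameter

def action_prob(choice, delay, k):
    """P(choice) under soft-max over hyperbolically discounted utilities."""
    scores = {a: alpha * utilities[a] / (1 + k * delay[a]) for a in utilities}
    z = sum(math.exp(s) for s in scores.values())
    return math.exp(scores[choice]) / z

# Observed: the agent picks the closer donut store (delay 1) over the
# farther vegetarian cafe (delay 3).
delay = {'donut': 1, 'veg': 3}
likelihood = {k: action_prob('donut', delay, k) for k in ks}

# Posterior ∝ likelihood × prior, then normalize.
unnorm = {k: likelihood[k] * prior[k] for k in ks}
z = sum(unnorm.values())
posterior = {k: unnorm[k] / z for k in ks}

# Steep discounting explains the observed choice better than none.
assert posterior[2.0] > posterior[0.0]
```

The full model replaces this single soft-max term with the sequential likelihoods of Equations (2) and (3) and enumerates utilities, beliefs, and agent types as well, but the posterior-proportional-to-likelihood-times-prior shape is the same.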
However, I do not know of any existing algorithms for the general case of an agent with uncertain beliefs and with either Naive or Sophisticated hyperbolic discounting. In this section I exhibit such an algorithm and discuss how it can be made efficient. Techniques from probabilistic programming are used both to implement this algorithm and (with minimal extra work) to do full Bayesian inference over agents implemented with this algorithm. I use the ability of probabilistic programming to support models with recursion. My model of an agent can simulate its own future choices by recursively calling itself to make a simulated future choice. The program expressing this model is short, and many variant models of decision-making can be incorporated in a few lines of additional code.

3.3.2. Agents as Programs

Probabilistic programming languages (PPLs) provide automated inference over a space of rich, complex probabilistic models. All PPLs automate some parts of inference. Some provide default settings that the user can tweak (Stan, Church). Some allow the automated inference primitives to be composed to produce complex inference algorithms (Venture). I work with the probabilistic programming language WebPPL (Goodman and Stuhlmueller 2015). WebPPL was designed in part to facilitate Bayesian inference over models that contain decision-theoretic agents. In particular, WebPPL allows for Bayesian inference over models that themselves involve instances of Bayesian inference. This feature has been used to formulate models of two-player 'Schelling-style' coordination games, models of two-player adversarial games where agents simulate each other to varying depths, and models of scalar implicatures for pragmatic language understanding (Stuhlmueller 2015).
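The nested-simulation idea behind those Schelling-style models can be caricatured without any probabilistic machinery. In the sketch below, a depth-limited recursion in which each player simulates the other stands in for WebPPL's inference-within-inference; the venues, the salience prior, and the level-by-level scheme are my own illustrative choices, not from the cited models.

```python
# Caricature of nested agent simulation (Schelling-style coordination):
# each player picks the venue that the other player is most likely to
# pick, simulating the other's reasoning one level deeper. The venues,
# salience prior, and depth scheme are invented for illustration.

VENUES = ['cafe', 'bar']
PRIOR = {'cafe': 0.6, 'bar': 0.4}    # shared prior: the cafe is salient

def choice_dist(depth):
    """Distribution over venues for a player reasoning to `depth`."""
    if depth == 0:
        return PRIOR                     # level 0: act on the bare prior
    other = choice_dist(depth - 1)       # simulate the other player
    best = max(VENUES, key=lambda v: other[v])
    return {v: (1.0 if v == best else 0.0) for v in VENUES}

# Even one level of mutual simulation locks both players onto the
# salient venue; deeper simulation does not change the answer here.
assert choice_dist(1)['cafe'] == 1.0
assert choice_dist(3)['cafe'] == 1.0
```

In WebPPL the inner `choice_dist` call would itself be an `Infer` over a soft-max agent, which is what makes "inference about inference" a one-line change there.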
To explain my implementation, I will start with simpler programs that capture one-shot decision-making with no deviations from optimality, and then build up to the full program that captures sequential decision-making with hyperbolic discounting. I do not have space to give a detailed explanation of how these programs work, but an excellent tutorial introduction is available (Goodman and Stuhlmueller 2015).

I start with a program for one-shot decision-making with full knowledge and no discounting. This program illustrates a modeling technique that I use throughout, known as 'planning as inference' (Botvinick and Toussaint 2012). The idea is to convert decision making into an inference problem. The standard formulation of decision theory involves computing the consequences (either evidential or causal) of each possible action and picking the action with the best consequences (in expectation). In 'planning as inference', you condition instead on the best outcome (or on the highest expected utility) and then infer the action that leads to this outcome from a set of possible actions.[20]

[Footnote 20: This is analogous to the idea of backwards chaining, as found in work on logic-based planning. The motivation for using 'planning as inference' is that it transforms the planning problem into a different form. This form might be easier to capture with a short program or it might be amenable to techniques that make planning more computationally efficient.]

3.3.3. Generative Model: model for sequential decision-making

Figure 8 shows a model of an agent who chooses between actions 'a' and 'b'. The 'condition' statement specifies Bayesian conditioning. In this overly simple example, the agent conditions on getting the highest-utility outcome. When inference is run on this program, it returns action 'a' (which yields this high-utility outcome) with probability 1.

Figure 9 shows a model of a slightly more complex agent, who chooses not based on maximal utility but based on maximal expected utility. This involves computing the expected utility of each action and then using the statement 'factor' to do a soft conditioning on this expected utility. This corresponds to soft-max decision-making. For our purposes, this is a continuous relaxation of maximization that tends towards maximization as the parameter 'alpha' tends to infinity.

var consequence = function(state){
  if (state == 'a'){ return 20; }
  if (state == 'b'){ return 19; }
};

var agent = function(){
  var action = uniformDraw(['a', 'b']);
  condition(consequence(action) == 20);
  return action;
};

Figure 8: one-shot, goal-based planning as inference.

var noisyConsequence = function(action){
  if (action == 'a'){ return categorical([.5, .5], [20, 0]); }
  if (action == 'b'){ return categorical([.9, .1], [15, 0]); }
};

var agent = function(){
  var action = uniformDraw(['a', 'b']);
  var expectedUtility = expectation(Enumerate(function(){
    return noisyConsequence(action);
  }));
  factor(alpha * expectedUtility);
  return action;
};

Figure 9: one-shot, EU-maximizing planning as inference.

Figure 10 generalizes the previous models to a model for sequential decision-making for the case of an MDP agent. This agent has full knowledge about the environment, up to features assumed to be non-deterministic. The agent model is essentially the same as before. However, to compute the consequences of an action the agent now invokes the function 'simulateWorld'. This function runs the whole world forward until the decision problem (or 'scenario') terminates. Since this is a sequential decision problem, running the whole world forward means computing the future actions of the agent. So 'simulateWorld' will make repeated calls to the agent. So in computing the consequences of an action 'a', the agent simulates its future actions given that it first takes action 'a'.
var agent = function(state, numActionsLeft){
  var action = uniformDraw(['a', 'b']);
  var expectedUtility = expectation(Infer(function(){
    return discountedSum(
      simulateWorld(transition(state, action)),
      numActionsLeft);
  }));
  factor(alpha * expectedUtility);
  return action;
};

var simulateWorld = function(state, history, numActionsLeft){
  if (numActionsLeft == 0){
    return history;
  } else {
    var action = agent(state, numActionsLeft);
    var nextState = transition(state, action);
    var nextHistory = push(history, nextState);
    return simulateWorld(nextState, nextHistory, numActionsLeft - 1);
  }
};

Figure 10. Agent that solves an MDP via planning as inference, conditioning on expected utility.

This program (Fig. 10) also includes a discount function 'discountedSum', which corresponds to the discounting function in the mathematical model description. By varying this function we can obtain different kinds of agent. If there is no discounting (or exponential discounting), we get the standard MDP agent (i.e. an agent with no false beliefs and no hyperbolic discounting). If we use the hyperbolic discounting function, then we get a Sophisticated agent.

I won't provide the code here, but a simple modification of the code in Fig. 10 gives us the Naive agent. The Naive agent has a model of its future self that is inaccurate. So the Naive agent needs to call 'simulateWorld' with an additional argument 'selfModel' by which it passes its own model of itself. This self-model discounts utility in the same way as the top-level agent (while the actual future self will discount in a way that is inconsistent with the top-level agent).

The final program (Fig. 11) allows the agent to have uncertainty over the start state. This requires two changes from Fig. 10. First, the world must generate observations (which will vary depending on the actual world state). This is done in the line 'var nextObs ...'. Second, the agent must update its beliefs on these observations.
This is done in the assignment to variable 'stateDistribution' and is shown in a schematic (simplified) form. The basic idea is that the agent samples the world forward from its prior distribution and then conditions the outcome on the observations. This distribution on the current state is then used when computing the expected value of the different actions.

I digress briefly to make a note on computational efficiency. The programs given here will be exponential time in the number of actions available to the agent. For the MDP agent without discounting, we can use standard dynamic programming techniques to get an algorithm quadratic in the size of the state space. For the POMDP case, we can use standard approximation techniques to get reasonable performance on small instances. For the Naive agent case, we can use a transformation of the state space to turn this into a series of MDP problems. We form new states by taking the product of states and time-steps. The utility of a pair (state, time) is just its discounted utility under the original state space. To choose an action, we solve an MDP over this state space. The process is then repeated for every subsequent action. This algorithm will have runtime (number of actions) x (number of states)^2.

var agent = function(state, observations, numAct){
  // Belief update, shown in schematic (simplified) form:
  var stateDistribution = Infer(function(){
    var startState = startStatePrior();
    var outcome = sample(simulateWorld(startState, [], t, [], random));
    condition(outcome.observations.slice(t) == observations);
    return outcome.history[t - 1];
  });
  var action = uniformDraw(['a', 'b']);
  var expectedUtility = expectation(Infer(function(){
    var state = sample(stateDistribution);
    return discountedSum(simulateWorld( ... ));
  }));
  factor(alpha * expectedUtility);
  return action;
};

var simulateWorld = function(state, history, numAct, observations, _agent){
  if (numAct == 0){
    return {history: history, observations: observations};
  } else {
    var action = _agent(state, observations);
    var nextState = transition(state, action);
    var nextHistory = push(history, nextState);
    var nextObs = push(observations, observe(nextState));
    return simulateWorld(nextState, nextHistory, numAct - 1, nextObs, _agent);
  }
};

Figure 11. Agent with potentially false beliefs, Bayesian belief updating on observations, and hyperbolic discounting (implemented as in Fig. 10). This is a POMDP agent with hyperbolic discounting (in the earlier terminology).

3.3.4. Inference model

The programs discussed above implement the model of decision-making for different kinds of agents. For inference over these agents, I define a prior on the variables in Equation (4) (above) and use the built-in inference methods provided by WebPPL. I show some actual inference results in the next section.

4. Experiments

In the previous sections, I outlined my Bayesian inference model and its implementation as a probabilistic program. This section describes the experiments I carried out to test the model's inferences of preferences against human inferences. The human subjects were recruited from Amazon's Mechanical Turk and were shown the stimuli via a web interface. Both the human subjects and my inference model were presented with sequences of choices from agents playing the Restaurant-Choice problem. The goal was to infer the agent's preferences and whether the agent had false beliefs or was time-inconsistent.

4.1. Experiment 1

Experiment 1 considers only MDP and hyperbolic discounting agents and excludes agents with uncertainty (POMDP agents). The goal is to test the model's ability to capture key qualitative features of the human subjects' inferences. This includes inference about temptation (or immediate vs. delayed desirability of an outcome), and also prediction and explanation of actions. Experiment 1 has a between-subjects design with three separate conditions.
In each condition, subjects are shown three 'scenarios', each showing an individual ('John') choosing a restaurant in the same neighborhood. Subjects have to infer John's preferences by observing his movements across the three scenarios. Across the scenarios, there is variation in (1) John's starting location (which determines which restaurants he can get to quickly), and (2) which restaurants are open. Note that John will always know whether a restaurant is open or closed. The stimuli for the Naive and Sophisticated conditions are shown in Figure 12. The Neutral condition was already shown in Figures 4 and 5.

[Figure 12: map images for Scenarios 1, 2 and 3, each showing the Vegetarian Cafe, the Donut Chain Store and the Noodle Shop, plus the Sophisticated version of Scenario 3.]

Figure 12. The three scenarios make up the Naive condition. Subjects in the experiment were presented with these exact images and John's backstory. The Sophisticated condition differs only on Scenario 3.

The three conditions agree on the first two scenarios, which show John selecting the Vegetarian Cafe. They differ on Scenario 3. The Naive and Sophisticated conditions exhibit actions that fit with Naive and Sophisticated agents respectively. In the Neutral condition, John chooses the same restaurant (the Vegetarian Cafe) every time and there is no evidence of inconsistency.

After viewing the scenarios, subjects answered three kinds of question. Part I was inferring John's preferences. Part II was predicting John's actions on additional scenarios (which added new restaurants to the neighborhood).

4.2. Model Predictions

We first describe the general results of our model inference and then discuss how well our model captures the inferences of the human subjects on Parts I-II of Experiment 1.

4.2.1.
Computing Model Predictions

I constructed a prior over the space of agent models described in Section 3.2, but leaving out the POMDP agents. I assume that John gets 'immediate' and 'delayed' utility for each restaurant. If a restaurant is tempting or a 'guilty pleasure', it might provide positive immediate utility and negative delayed utility. In place of continuous variables for the utilities and discount constant k, I use a discrete grid approximation. I did joint inference over all parameters using WebPPL's built-in inference by enumeration (Goodman and Stuhlmueller 2015) and then computed posterior marginals.

To summarize inference over preferences for restaurants, I first introduce two properties of an agent's utility function that capture the kind of information that I'm interested in:

Property 1: 'X is tempting relative to Y'. We say that the Donut Store is tempting relative to the Vegetarian Cafe (or Noodle Store) if its immediate utility is greater than the Cafe's discounted utility (with one unit of delay).

Property 2: 'Agent has an undiscounted preference for X over Y'. We say John has an undiscounted preference for the Donut Store over the Cafe or Noodle Place if its summed, undiscounted utilities ('immediate' + 'delayed') are greater.

4.2.2. Model Posterior Inference

I present first the model posterior inferences for each condition. After that I compare these inferences to those of the human subjects.

The model does inference over the discount rate k of the agent, where k=0 means that the agent does not discount utility at all. For agents with k>0, it infers whether they are Sophisticated or Naive. The model also infers the agent's delayed and immediate utilities for the restaurants. (I present results only for the Donut Store and Vegetarian Cafe and don't report the results for the Noodle Store.)

[Figure 13 (bar graph): x-axis is the discount rate constant k (values 0-4; k=0 means no discounting); y-axis is posterior minus prior probability; bars for the naive, sophisticated and neutral conditions.]

Figure 13.
Graph showing the change from prior to posterior for each value of k and for each condition.

Figure 13 shows the difference between the posterior and the prior for a given value of k. The prior was uniform over the values [0, 1, 2, 3, 4] and so was 0.2 for each. We see that in the Naive and Sophisticated conditions, k=0 is ruled out with full confidence. In the Neutral condition, the model updates strongly in favor of no discounting.

Figure 14 shows the inference about whether the agent was Naive or Sophisticated. Here the model strongly infers the appropriate agent type in each condition. There is no evidence either way in the Neutral condition. This is what we expect, as there's no evidence here in favor of discounting.

[Figure 14 (bar graph): difference between posterior and prior probability that the agent is Naive, for the naive, sophisticated and neutral conditions.]

Figure 14. Graph showing posterior inference for whether the agent is Naive or Sophisticated.

[Figure 15 (bar graph): difference between posterior and prior probability, for each condition, that (a) the Donut Store is tempting and (b) the Donut Store is better overall ('undiscounted') than the Vegetarian Cafe.]

Figure 15. Graph showing posterior inference for utilities of the Donut Store vs. the Vegetarian Cafe.

Figure 15 shows that an overall or undiscounted preference for the Donut Store was ruled out in all three conditions. By contrast, in the Naive and Sophisticated conditions there was a strong inference towards the Donut Store being tempting relative to the Cafe.

4.2.3. Model vs. Human Subjects

In Part I, subjects were asked to judge how pleasurable John finds each food option. In Figure 16, I compare this to model inference for the immediate utility of each option. Each bar shows the ratio of the pleasure rating for the Donut Store vs. the Vegetarian Cafe.
The most noteworthy feature of the human and model results is the high rating given to the Donut Store in the Naive and Sophisticated conditions. In these conditions, John chose the Vegetarian Cafe two out of three times, and was willing to walk a fairly long distance for it. Yet the Donut Store is inferred to have an immediate pleasure close to that of the Vegetarian Cafe. This suggests inferences are sensitive to the specific context of John's choices, e.g. that donuts are tempting when up close. (Note that the Neutral condition, where the Donut Store gets a low score, shows that the model and humans did not have strong priors in favor of the Donut Store being more pleasurable.)

[Figure 16 (bar graph): ratio of the pleasure rating for the Donut Store vs. the Vegetarian Cafe for each condition (a ratio greater than 1 means the Donut Store is more pleasurable), for the model and for experimental subjects.]

Figure 16. Inference about the pleasurability of the Donut Store vs. the Vegetarian Cafe for experimental subjects and model.

The model differs from humans in assigning a higher immediate utility to the Donut Store than to the Vegetarian Cafe. It's not clear how much to read into this discrepancy. First, subjects were asked how pleasurable the food was, which doesn't map exactly onto 'immediate utility', because something with lots of negative delayed utility could easily be judged less pleasurable. Second, the ratios shown are point estimates and omit uncertainty information. Both subject responses and model inferences had significant variation, and we shouldn't take small differences in these point estimates too seriously.

In Part II, subjects predicted John's choices in two additional scenarios. In Scenario 4, John starts off far from both the Donut Store and the Vegetarian Cafe. In Scenario 5, John starts off close to a new branch of the Donut Store. In the Neutral condition, people and the model strongly predict the Cafe in both scenarios.
For Scenario 4, people and the model strongly predict the Cafe in both the Naive and Sophisticated conditions (see Fig. 17). These results show that people and the model infer a 'longer-term' preference for the Cafe across all conditions. For Scenario 5, people and the model both predict a much higher chance of John taking the tempting option (see Fig. 18). This confirms the result in Part I that people and the model infer that the Donut Store is appealing to John, despite his only choosing it once.

[Figure 17 (bar graph): probability that the agent chooses the Vegetarian Cafe in Scenario 4, for the model and for people, in the naive, sophisticated and neutral conditions.]

Figure 17. Model vs. People for the Prediction Task (Experiment 1, Part II).

[Figure 18 (bar graph): probability that the agent chooses the Vegetarian Cafe in Scenario 5, for the model and for people, in the naive, sophisticated and neutral conditions.]

Figure 18. Model vs. People for the Prediction Task (Experiment 1, Part II).

4.3. Experiment 2

In Experiment 1, John was shown playing three different decision problems. Combined with the assumption of John having full knowledge, this enabled strong posterior inferences about John's preferences and his discounting. If instead John is shown playing only a single decision problem, and inaccurate beliefs are not ruled out, then there will be multiple explanations of John's actions and broader posteriors on parameter values.

Experiment 2 tests the model's inferences from choices on a single scenario. I test whether the model can produce the same kinds of explanations as humans and whether it can capture the relative strength of different explanations. This time the model includes POMDP as well as MDP agents (and it still includes hyperbolic discounting). For Experiment 2, subjects were shown Scenario 3 in Fig. 12 (Naive version), without being shown the other scenarios. Subjects are told that John generally prefers to go to somewhere closer to his starting point.
They are also told that the two branches of the Donut Store are similar. Subjects were then asked to explain John's behavior and were allowed to write multiple explanations. (Subjects varied in how many distinct explanations they gave.) After looking through the explanations, I devised a series of categories that covered all the explanations given. I then categorized each subject's response, with some responses belonging to multiple categories. The proportion of subjects giving explanations in each category is shown in Fig. 19. Note that the categories are not mutually exclusive. For example, the explanation 'Donut Store is closer' is likely to correlate with the explanation '[John] planned to go to the Donut Store'. Likewise, the explanation '[John] changed his mind about eating at the Vegetarian Cafe' is likely to correlate with '[John] planned to eat at the Vegetarian Cafe'.

[Figure 19 (bar graph): proportion of subjects giving each explanation. Categories: 'Donut Store is closer'; 'planned to go to Donut Store'; 'temptation/cravings for donuts'; 'prefers Donut Store D2 over D1'; 'is ignorant of Donut Store D1'; 'changed his mind about eating at Veg Cafe'; 'planned to eat at Veg Cafe'; 'wanted variety or had a change of preferences'.]

Figure 19. Subjects' explanations for John's behavior in Naive Scenario 3.

[Figure 20 (bar graph): posterior-to-prior ratio for different explanations of John's behavior (a ratio of 1 means no change from the prior). Statistics: 'naive (temptation)'; 'ignorant of D1 / believes closed'; 'D2 preferred to D1'; 'D2 preferred to Veg Cafe'.]

Figure 20. Model explanations for John's behavior in Naive Scenario 3.

To compare the model inferences to these results, I computed the changes between the prior and the posterior for various statistics of the agent model (Fig. 20). The 'naive (temptation)' statistic corresponds to the explanation in terms of a Naive agent being tempted when walking right by Donut Store D2.
The statistic 'ignorant of D1 / believes closed' tracks whether the agent has a prior belief that D1 is closed or whether the agent is ignorant of D1's being there. The statistic 'D2 is preferred to Vegetarian Cafe' measures whether D2 is preferred to the Cafe from John's starting point. This takes into account the utility of each restaurant as well as how far each restaurant is from John. (So the Donut Store could be slightly less preferred on its own merits, but John would choose it because it's closer than the Vegetarian Cafe.)

4.3.1. Comparing results

We cannot directly map explanations like 'John changed his mind about eating at the Vegetarian Cafe' onto inferences from the model. It's possible that the change of mind is because of John being tempted by the Donut Store. But another possibility is that John was uncertain about the distance of the Cafe and changed his mind after realizing it was too much of a walk. My model does not include uncertainty about distances.

Still, most of the explanations given by people are captured by the model in some form. Moreover, the strength of the model's posterior inferences roughly tracks the frequency of explanations given by people. It's plausible that subjects gave the explanations that seemed most plausible to them. Follow-up experiments, where subjects were asked to choose between explanations, generally confirmed this.

There are two salient things about John's behavior. First, he doesn't choose the Vegetarian Cafe (or Noodle Store). Second, of the two Donut Stores, which are assumed to be similar, he chooses the branch that is further away (D2). The model makes a strong inference for a preference for D2 over the Vegetarian Cafe. It was not common for people to say directly that John prefers D2 to the Cafe (as they presumably thought this too obvious to be part of a good explanation). However, the categories 'Donut Store is closer' and 'John planned to go to the Donut Store' both fit well with this inference.
In terms of why John did not choose D1, the strongest answer for both people and the model is a preference for D2 over D1. The next strongest answer is that John doesn't know about D1 or thinks it is closed. These two explanations are judged much stronger than the explanation discussed above, on which John is a Naive agent and is tempted by D2 when walking towards the Vegetarian Cafe (and where D2 and D1 have very similar utility and John has full knowledge of D1). The model does capture this explanation (its posterior probability is higher than its prior). It does comparatively poorly as an explanation because it only works within a fairly narrow range of parameter values. (This is an instance of the Bayesian Occam's Razor.) For most parameter settings, a Naive agent would either (a) go straight to D1 (because the discounted utility of the Vegetarian Cafe is low compared to the much closer D1) or (b) go to the Vegetarian Cafe (because D2 isn't that tempting). The basic problem is that if D2 is really tempting, it must have high utility (especially immediate utility), and this will also make D1 more appealing.

In Experiment 1, the model infers the temptation explanation with very high probability. There are two reasons for this. First, I did not include the 'ignorance of D1' explanation as a possibility in that case. Second, in that case the model was doing inference based on three scenarios. Scenario 1 (Fig. 12) shows John start off closer to D2 than to the Vegetarian Cafe and still choose the Vegetarian Cafe. In Scenario 2, John starts closer to D1 but chooses the Cafe. These scenarios suggest that John has an overall preference for the Vegetarian Cafe. This leaves the question of why he didn't go all the way to the Cafe in Scenario 3, and his being Naive gives an explanation.

Conclusion

This chapter addresses a key difficulty for inferring human preferences from observed behavior, viz.
that humans have inaccurate beliefs, cognitive biases (such as time inconsistency) and cognitive bounds (which mean they can only approximate optimal planning and inference). The chapter looked at two particular deviations from the optimal homo economicus: inaccurate beliefs and time inconsistency. The approach was to explicitly model these deviations from optimality and do inference over a large space of agents, where agents may or may not include these deviations. Although the presence of these deviations from optimality can sometimes make preference inference more difficult, I showed that it is possible, from short action sequences, to diagnose these deviations from optimality and to infer a significant amount about the agent's preferences. This approach is quite general: if a deviation from optimality can be formalized, then it can be added to a Bayesian model.

In this chapter, I implemented this inference model using a probabilistic program. I was able to use a single concise program (less than 150 lines of code for the agent model) to model a variety of different agents (MDP, POMDP and hyperbolic discounting versions of each). This program can be easily extended to capture other kinds of decision-making behavior. For instance, agents could make plans of limited complexity, agents could have preferences that vary over time, or agents could do inference to learn about their own properties or parameters.

One shortcoming of this probabilistic program is that it builds in inflexible assumptions about the structure of the agent's decision making. (Agents can vary in whether they hyperbolically discount and how they discount, but the structures that implement discounting and planning are fixed.) Further work could explore more flexible models that are able to learn what kinds of structures are useful for explaining human behavior (rather than assuming them).

The performance of my inference model was demonstrated by doing inference on simple sequences of plausible human actions.
I compared the inferences of the model to those of human subjects. My Experiment 1 concerned inferences about MDP and hyperbolically discounting agents, where the agent is observed over multiple decision problems. The model produced qualitatively similar inferences to the human subjects on tests of (1) inferring the preferences of the agent and (2) predicting the agent's choices on new scenarios. In Experiment 2, inferences were based on a single, ambiguous decision problem, and the space of possible agents included POMDP agents (which can have inaccurate beliefs). I showed that the model was able to produce much of the range of explanations that humans gave for the behavior, and that the model captured the relative plausibility of different explanations.

While these results testing the model are promising, it would be valuable to extend the model to richer applications. One project is to run an experiment where subjects both generate behavior and self-report their preferences. For example, subjects could be recorded using the web for a variety of tasks. Then we test the algorithm's ability to infer their self-reported preferences.

Chapter 3: Multivariable utility functions for representing and learning preferences

Introduction and Motivation

This section introduces some of the key ideas of this chapter. The basic approach of this chapter is to look at how preferences are modeled in economics and decision theory and to consider whether this approach can scale up to learning all of human preferences. Preferences are often learned for a very specific task. A model that predicts well on such a task need not generalize well to novel tasks. Compositionality in utility functions is a simple way to facilitate such generalization. This section begins by considering non-compositional utility functions and the problems they run into.
The key point is that without compositionality, a utility function would need to contain a numerical utility for every way in which world histories can differ. This is implausible as an account of human cognition and would also make learning a complete utility function impossible. Along the way, I introduce the ideas of independent goods and complement goods, which play an important role throughout the chapter. These concepts correspond to simple functional forms for compositional utility functions. In the final part of this section, I consider approaches that learn the utility function from a much more restricted class of functions. The idea is to work out the utility function a priori (or at least not to learn it by observing human choices). Such a utility function will presumably be defined in terms of properties for which humans have basic preferences (or at least properties that are closely related to those which are preferred in a basic way). I discuss the difficulty of building formal techniques that make predictions based on such functions.

1. Utility function over objects of choice

Imagine an agent making the choice between some pieces of fruit (e.g. an apple, a pear, an orange, etc.). The agent chooses exactly one item from the list. When looking only at this single choice situation, we could perfectly predict the agent's choice from a utility function that assigns utilities only to the objects of choice (i.e. to each kind of fruit). We can generalize beyond this example. In any situation, the agent has a number of choices available. If we learn a utility function over all objects of choice, then we can predict actions in any situation. Learning such a utility function is conceptually simple: you infer ordinal utilities from choices. However, this approach has obvious shortcomings [1].
If we only ever made choices between a small fixed set of kinds of fruit, this approach would be successful in making predictions. But we have an enormous set of options available, many of which are not choices of consumer goods. As a concrete example, consider the complexity of a utility function that assigns a unique utility to every item on Amazon.com or to items and services on offer at Craigslist. This approach makes computing predictions and plans from the utility function simple, but at the cost of making the utility function too complex to be learned from a feasible quantity of data. Such an approach can only be useful in very narrow domains, where the set of choices is fixed and small. There are some obvious ways to patch up this approach. We can group together various choices for which the utility function has underlying structure. For example, instead of allowing unique, independent utilities to be assigned to "1 orange", "2 oranges", "3 oranges", and so on, we can instead specify a function U(orange, k), which will be some increasing concave function of k. This corresponds to a standard practice in economics, where the utilities for different quantities of the same good are all systematically related by a simple function (usually with diminishing marginal utility). We could also simplify the function by making it compositional.

[1] This utility function is intuitively wrong. We don't have basic preferences over most of the objects of choice. For instance, I might choose one restaurant over another because I like that kind of food better (or because it's closer to my house) and not because of some sui generis preference. Moreover, there are things we value that we never directly choose, e.g. being a good parent or a good friend. In any case, this approach is very useful to describe in order to delineate the kinds of approaches one can take.
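The quantity-structured utility U(orange, k) just described can be illustrated with a minimal sketch. The square-root form below is an arbitrary choice; the text only requires some increasing concave function of k:

```python
import math

# Hypothetical instance of U(orange, k): any increasing concave
# function of the quantity k will do; sqrt is used purely for
# illustration.
def u_orange(k):
    return math.sqrt(k)

# Marginal utility of the (k+1)-th orange shrinks as k grows:
# this is diminishing marginal utility.
marginals = [u_orange(k + 1) - u_orange(k) for k in range(5)]
assert all(a > b for a, b in zip(marginals, marginals[1:]))
```

A single parameterized function like this replaces an unbounded list of independent utilities for "1 orange", "2 oranges", and so on.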
For example, instead of having a unique utility for the option "a Porsche and a swimming pool" we can compute this utility from the utilities of its components ("Porsche" and "swimming pool"). In this example, it's natural to view the utilities as additive:

U("Porsche and swimming pool") = U("Porsche") + U("swimming pool")

These goods are independent, in the sense I introduce in Section 1 of this chapter (below). If many objects of choice were independent, then a utility function defined on objects of choice could be made much simpler by exploiting independence. Instead of having a unique utility value for each "bundle" or combination of objects of choice, you can compute utility values by summing the utilities of the components. Thus, the planning element has not become more complex in any significant way, but the utility function has fewer free parameters. Even if many objects of choice were independent, this approach could still come unstuck. You still need to learn a utility for each of the independent objects, which would include all the items and services on Amazon and Craigslist. However, there can be compositionality without independence. Whenever you have n bike frames and 2n bike wheels, you have n working bikes. Bundles of the form "n bike frames and 2n bike wheels" will be systematically related in utility. But clearly these goods are not independent. The value of either two bike wheels or a frame on its own is minimal (ignoring resale value), and so the value of a frame and two wheels is not equal to the sum of the values of its components. Economists call such goods complements. I will discuss complement goods in Section 1. One point to recognize is that there's not going to be a general way to decide how the utility of a composite depends on the utility of the components. You need knowledge about the causal structure of the world [2].

[2] For example, the fact that two wheels are necessary for a working bike.

This is the same kind of knowledge that you need
to compute plans from a utility function defined on more abstract objects [3]. So simplifying the utility function by finding compositional structure in it will either require many observations (e.g. to determine exactly from observations how the utility of a bundle is related to the utility of its components) or will require the world knowledge needed for planning. Note that some objects of choice could be decomposed into their properties. For example, the value of a cup of coffee for an agent may derive only from its caffeine content. This could be helpful in simplifying a utility function. However, it also requires world knowledge and it won't work as a way to decompose all objects of choice (such as the bike wheels example discussed above).

2. Hedonistic utility functions

On the approach discussed above, utility functions are defined on the objects of choice. This makes it easy to predict what someone will do given his or her utility function. However, the approach has serious shortcomings. Either the utility function is defined on a small number of objects of choice, in which case it won't generalize to many different choice situations. Or the utility function is defined over a very large number of objects of choice, in which case it will be too hard to learn [4]. I will now discuss one example of a very different approach to modeling preferences. Instead of having utilities defined only on the objects of choice, we define a utility function that depends only on the agent's hedonic state. For instance, we could pick out a particular brain state, and let utility be an increasing function of the intensity of this state. In this case, there is no need to learn the utility function, as we have specified it a priori. So we avoid any of the difficulties of learning the utility function from observing human behavior. The choice of a hedonistic utility function here was just for the sake of having a concrete example.
The general point is that we can select a utility function by some method that doesn't involve learning it from human choice data. We can also use a hybrid approach. We can choose a restricted class of utility functions (e.g. all functions that depend on some set of distinct hedonic states) and then select between this restricted class using data from human choices. This will make the learning problem much more straightforward. The problem is how to compute predictions of choices from this utility function. Again I focus on the hedonistic function for concreteness. The difficulty is that many pleasure experiences result from elaborate sequences of actions (e.g. completing a hard assignment, re-uniting with an old friend, experiencing a great work of art). Computing a rational plan for maximizing utility will involve (a) considering this range of diverse events that would cause pleasure, and (b) computing a sequence of actions that would cause these events. Mimicking the human ability to make such plans will require the kind of extensive and nuanced causal understanding of the world that humans have. The general lesson is as follows. On one approach, we learn a utility function over a very large space of 'objects of choice'. This will require many observations of choices in order to fit all the parameters. The advantage is that we don't need an elaborate causal model of the world. The other approach is to choose a restricted class of utility functions. We now need much less data to pin down the utility function. The problem is that predicting behavior requires sophisticated causal knowledge.

[3] If my utility function was defined on having a working bike, then a rational plan to increase my utility would be to choose two wheels and a frame from the bike store.

[4] One would need too many observations to find the utilities of every possible object of choice.
It's important to point out that in the former approach (where the utility function class is less restricted) we essentially have to learn the causal knowledge in an implicit way. If diverse events cause the same pleasure state, then we'll have to learn a similar utility value for each of these states. (In this sense, there is some analogy between learning a utility function over a large set of states and model-free reinforcement learning).

3. Money maximization

Economists sometimes model humans as "money maximizers". These are agents that have utility functions that are some simple increasing (concave) function of the money in their possession. This utility function is a clear misrepresentation of human preferences. However, as a purely instrumental theory with the aim of achieving good predictive power, it has virtues. First, there's no utility function to learn: all agents are assumed to have the same simple utility function. Second, it's much easier to work out profit-maximizing actions than actions that bring about some specific brain state. So it's easier to compute the plans for different agents, by using economic methods to work out profit-maximizing strategies. Finally, the theory will give correct predictions to the extent that everything people in fact have preferences over can be bought and sold. There are lots of details here that I won't discuss. I mainly want to note the structural features of this model of human preference.

Overview of Chapter

In Section 1 of the chapter, I consider in detail the approach above on which utility functions are defined on the objects of choice (the first of the three examples above). I consider the kinds of compositional functions that economists use and how they relate to the structure of human preferences [5]. In later parts of this chapter, I consider some novel practical applications of this approach to modeling human theory of mind.
The question is how well this approach matches the way in which humans carry out "intuitive economics", inferring utility functions from observations. This can also be applied to modeling the Knobe effect, where the inferences are not about preferences but about intentions.

[5] As I've noted above, there are serious limitations to the approach where utilities are defined over objects of choice. However, it's important to understand the details of this very simple approach before moving on to richer but more complex approaches.

Section 1. Microeconomic models of human preference

1.1. Utility functions over bundles

1.1.1. Utility functions of a single variable

I start by considering simple models of preference based on utility functions. First, consider a utility function for money. If we make certain rationality assumptions, we can infer an agent's utility function over money by observing their choices over lotteries [6]. We expect money to have diminishing marginal utility for individuals, and thus for utility functions representing these preferences to be concave [7]. If we observe an agent making some choices between lotteries with cash prizes, we can use the information about their utility function to predict choices among different cash lotteries (which could be much more complex than the previously seen lotteries). The simplicity of the model will make it relatively easy to learn and compute predictions from the utility function. The accuracy of the model is an empirical question [8]. What's important for now is that the utility function we learn for money will be of limited use. We can predict choices between new cash lotteries, but not choices between lotteries over oranges or pencils (or separate information about the cash value of these objects). To predict choices over oranges, we'd need to observe some choices over oranges and infer a separate utility function over oranges.
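As an illustration of how a concave money utility drives choices over cash lotteries, here is a minimal sketch. The square-root utility and the particular lotteries are hypothetical, chosen only to show the mechanics of expected-utility comparison:

```python
import math

# Sketch: expected-utility comparison of cash lotteries for an agent
# with a concave (risk-averse) money utility. u(x) = sqrt(x) is an
# illustrative choice, not inferred from any real data.
def u(x):
    return math.sqrt(x)

def expected_utility(lottery):
    """lottery: list of (probability, cash_prize) pairs."""
    return sum(p * u(x) for p, x in lottery)

sure_40 = [(1.0, 40.0)]
coin_flip_100 = [(0.5, 100.0), (0.5, 0.0)]

# sqrt(40) beats 0.5 * sqrt(100): the concave agent takes the sure $40
# even though the gamble has the higher expected cash value ($50).
assert expected_utility(sure_40) > expected_utility(coin_flip_100)
```

Observing enough such choices pins down the curvature of u, which then predicts choices among new, possibly more complex cash lotteries.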
[6] Experimental evidence (Kahneman and Tversky 1981) suggests that people violate the rationality assumptions needed for the expected utility representation theorem and their choices over lotteries seem very hard to capture with simple functions (Rabin 2005). I will touch on these issues later on in the chapter.

[7] The function need not be concave at all points. Even if it is always concave, it's hard to see any a priori restrictions on the complexity of the function. For modeling purposes, it will often be easiest to assume that the utility function has a simple functional form and then fit the parameters of that functional form to the observed data.

[8] The success of Prospect Theory could be seen as strong evidence against the idea that a utility-based model could be useful in making predictions. However, Prospect Theory tests subjects on a set of unfamiliar lottery choices in the lab. We might get different results by testing human experts in making decisions under different risk conditions (professional gamblers, traders, business people). In any case, the accuracy of the utility model for money is not my concern here.

1.1.1.1. Orange juice example

Things become more complex when we consider utility functions over multiple kinds of good. Suppose I am at the supermarket, deciding which brand of orange juice to buy. There are four brands and I have tried all of them before. We assume that I have diminishing marginal utility in each brand. But some brands will be better than others, and by varying degrees. One starting point would be to observe choices (lotteries or other choice situations) that just involve at most one carton of any brand. From this we can put each of the four brands on a utility scale and predict choices over lotteries (or over deals in which brands sell at different prices). For instance, imagine I'm observed buying brand A over brand B, even though A is $0.50 more expensive per unit.
The inferred utility for A would be appropriately higher than for B. We'd thus predict that I'd take A over B if at some future time there was a change of prices which made brand A slightly cheaper. (If A became slightly more expensive, we'd still expect it to be chosen. This depends on prior assumptions about how likely different utilities are.) Having a single utility function defined on each of the four brands of orange juice will be adequate if all choices involve at most one unit of each brand. This is not realistic. We'd like to be able to predict whether someone would take two units of brand A over one unit of each of brands B and C. The naive way to do this would be to infer a utility function over each brand (as we described doing for money above) and then add up these utilities. This assumes the independence property introduced above. Let '1A' mean '1 unit of A', '2A' mean '2 units of A', and so on for other quantities and brands. Let UA be the single-variable utility function for brand A (and similarly for other brands). Let U be the overall utility function. Then we have:

U(2A) = UA(2A)
U(1B, 1C) = UB(1B) + UC(1C)

Yet this is clearly not right. Unless there are huge differences between the juice brands, there will be diminishing utility for any additional orange juice, not just for additional bottles of the same brand. The independence property does not hold here.

1.1.1.2. Printers, cartridges, and a pair of speakers

The phenomenon just described is not particular to orange juice. Similar examples are ubiquitous. I want to focus on another example where these simple utility models fail and where there's a quite intuitive solution. Suppose that Sue goes to an electronics store with $200 to spend. She is observed buying two HP print cartridges. Two weeks later she is in the store again with another $200 and buys some audio speakers.
If we had to represent the print cartridges and the speakers as two points on a common utility scale (as in the orange juice case), we would have to infer that Sue's preferences had changed. Her first choice gave evidence that the print cartridges had more utility than the speakers and so would predict that two weeks later she would choose the print cartridges again. For informed observers, the obvious explanation of Sue's behavior is not that her preferences have changed. Instead, we'd explain that if you have just one printer, you only need two cartridges and those cartridges will last for months. Extra cartridges will be of very little value to you if you already have two. This is not just a case of diminishing marginal utility. If you had no HP printer, then there'd be almost no utility (save for resale value) in having a single HP cartridge. If you had ten printers and needed to be using all of them at once, then a single cartridge would have no value and you'd instead need to have two for each printer. The general point is that the utility you get from a good depends on what other goods you have in your possession. We saw this in a less obvious example with the brands of orange juice. The marginal bottle of juice is less valuable depending on how many bottles of juice you have, even if the bottles are of different brands. There is a diverse array of further cases showing the sensitivity of the utility of one good to other "goods" (broadly construed) in the agent's possession. A season ticket for the Metropolitan Opera becomes less valuable to someone if they take a job in California. A book in French becomes more valuable after you've learned French. An app for the iPhone becomes more valuable once you own an iPhone, and again more valuable if you have other apps that complement its functionality or if the network service improves in your area and speeds up your internet connection.
In all these cases, it is plausible that the benefit to the agent of acquiring the good in question varies depending on the other goods the agent possesses, without there being anything we'd describe as a change in the agent's preference [9].

1.1.2. Utility functions on bundles (multivariable utility functions)

What are the consequences of this kind of example for formal modeling of preferences? Let's go back to the print cartridge case. When Sue enters the store with $200, it's natural to construe all items in the store that cost under $200 as her options [10], because these are the things that she could feasibly buy from the shop. However, if we model her preferences as a utility function that assigns a utility to each one of these individual items (as we considered a function assigning each orange juice brand a point on the utility scale), then we won't be able to make reliable predictions about her future choices. One idea is to take the utility function to be defined not on the items in the store, but instead on sets or bundles of items that could be in Sue's possession at a given time. We considered this approach above when asking about the utility of 2 units of brand A juice vs. one unit of each of brands B and C. I'll illustrate with an example. We can imagine that Sue can only possess three kinds of goods (in varying quantities) at a time: printers, cartridges, and speakers. Bundles will have the form (P, C, M), where P is the number of printers, C the number of cartridges and M the number of pairs of speakers. Each bundle (P, C, M) will have a real-valued utility U(P, C, M). I'll call this multivariable utility function a bundle utility function (and sometimes I'll just say utility function). The first time Sue enters the store, she buys two cartridges rather than one set of speakers (where both cost the same).
This gives the information that:

U(1,2,0) > U(1,0,1)

When she later buys the speakers over more cartridges, we get the information that:

U(1,2,1) > U(1,4,0)

Sue probably doesn't explicitly construe her choice in the store as being between (1 printer, 2 cartridges, 1 pair of speakers) and (1 printer, 4 cartridges, 0 pairs of speakers), but we presume that her choices in the store will be sensitive to possessions of hers like the printer. This will enable us to predict that in a future situation where Sue has lost her two print cartridges and her speakers, she is likely to first buy more cartridges and then speakers. As with the single-variable utility function for money, we could infer an agent's utility function over bundles by observing many choices over bundles. In the juice example, we considered a two-variable utility function over quantities of brands B and C that was just the sum of the single-variable utility functions over each brand. For the current example of Sue, an additive utility function, where the utility of a bundle (P, C, M) is just U(P) + U(C) + U(M), is a worse fit than it is for orange juice. The additive model assumes that possession of one good doesn't change the value of any other. Yet it's clear that possession of a printer dramatically changes the value of cartridges. The general virtue of the additive model is its computational simplicity. On the additive model, the utility of a bundle of goods is just the sum of the utilities of its components.

[9] For contrast: At age 13, John is a self-described hater of opera. At age 23, after maturing in various ways, John enjoys going to the opera. This is naturally described as a change in preferences. It's not clear in general how to decide between attributing a preference change vs. the acquisition of some good or ability that changes the utility of some option.

[10] More generally we could consider sets of items whose total price is less than the $200 budget.
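Even these two observed choices already constrain a parametric bundle utility function. The following is a hedged sketch: the functional form U(P,C,M) = min(P, 0.5*C) + a*min(M, 1) is a hypothetical stand-in (min(P, 0.5*C) counts working printers, and min(M, 1) says one pair of speakers is enough), and the grid over the parameter a is purely illustrative.

```python
# Sketch of inferring a utility parameter from Sue's two observed
# choices, using a hypothetical parametric form for U(P, C, M).
def make_utility(a):
    def U(P, C, M):
        return min(P, 0.5 * C) + a * min(M, 1)
    return U

def consistent(a):
    U = make_utility(a)
    choice_1 = U(1, 2, 0) > U(1, 0, 1)  # took cartridges over speakers
    choice_2 = U(1, 2, 1) > U(1, 4, 0)  # later took speakers over cartridges
    return choice_1 and choice_2

candidates = [i / 10 for i in range(21)]  # a in {0.0, 0.1, ..., 2.0}
posterior_support = [a for a in candidates if consistent(a)]
# only values with 0 < a < 1 survive both observations
```

This is the crude, enumerative version of the Bayesian inference the chapter has in mind: each observed choice rules out regions of parameter space, narrowing the set of utility functions consistent with the agent's behavior.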
So adding more goods means only a linear increase in the number of objects that the utility function has to assign values to. If we allow goods to have arbitrary dependencies, then the number of objects will grow exponentially, because we need utilities for every possible combination of goods [11]. Thus in the printer example, it could be that U(0,1,0) is very low (no printer, one cartridge), U(0,2,0) involves an extra cartridge but zero extra utility, and U(1,2,0) is suddenly much higher (now you have both printer and cartridges).

1.1.2.2. Complement Goods

I've noted that the space of bundles increases exponentially as we add more goods, and that in principle bundle utility functions could become that much more complex as the bundle size increases. To make such functions tractable, however, economists use multivariable functions that take simple functional forms. These forms include the additive function above but other forms are possible.

[11] Having a domain that grows linearly is not sufficient for the function to be simple. You could still have a very complex function with a domain that is "small". However, other things being equal, smaller domains will make for a simpler function.

The printer example will provide a useful illustration. Suppose we just focus on bundles of two goods: printers and print cartridges. For concreteness, we'll imagine that Sue is now working in an office that needs to print a large volume of pages in as short a time as possible. Sue would like to be running lots of printers in parallel in order to finish the job quickly, and each printer needs exactly two cartridges. (We also assume that no cartridges will run out during the time period.) For Sue's task, the function U(P,C) will have a fairly simple structure. Two cartridges will lead to an increase in utility iff there's at least one printer already in possession that lacks cartridges.
Thus we get a function of the following form:

(1) U(P,C) = min(P, 0.5C) [12]

This function does not model the diminishing marginal utility of working printers for Sue. However, assuming that this depends only on the number of working printers (and not on printers or cartridges independently), it would be straightforward to add extra parameters for this. Function (1) is usefully visualized using indifference curves (see Diagram 1 below). Each indifference curve is the set of all bundles that have a given amount of utility. (And so they pick out a set of bundles the agent is indifferent between in choosing.)

Diagram 1: Indifference curves for U(P,C) above, with quantity of printers on the x-axis and cartridges on the y-axis. Colored lines are bundles with utility 1 (navy), 2 (cyan), 3 (yellow) and 4 (brown) respectively. Note U(1,2)=1, but U(1+c,2)=1 for any positive constant c. This is because adding a printer without cartridges does not increase the number of printers in operation.

[12] The "min" function returns the minimum of its two arguments. To be rigorous, we need to correct for the case where P and C are less than 1 and 2 respectively.

We can generalize the printer example. In the case of the printer, the source of utility is a working printer, and a certain fixed ratio of printers to cartridges is needed for a working printer. More generally, when a fixed ratio of goods A:B is needed to produce additional utility we call the goods complements. The utility function over these goods is the Complements utility function and has the form [13]:

(2) U(A,B) = min(A, aB)

The parameter a, which I'll call the mixing ratio, controls the ratio of A:B needed to produce additional units of utility. The example I've given of a Complements utility function only takes two variables (printers and cartridges). However, it is straightforward to generalize to more variables.
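The behavior of function (1) is easy to check directly; a minimal sketch:

```python
# Sketch of the Complements utility function, equation (2),
# specialized to the printer example of equation (1):
# U(P, C) = min(P, 0.5*C).
def complements_utility(A, B, a):
    return min(A, a * B)

def printer_utility(P, C):
    return complements_utility(P, C, a=0.5)  # two cartridges per printer

# one printer with its two cartridges yields one working printer
assert printer_utility(1, 2) == 1
# extra printers without cartridges add nothing (cf. Diagram 1)
assert printer_utility(3, 2) == 1
# doubling both goods in the right ratio doubles utility
assert printer_utility(2, 4) == 2
```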
For example, if each printer needs a special power cable as well as two cartridges, we'd get a "min" function on three variables. Yet it is clear that not every pair of goods interacts in the way complements do. For example, having both a pair of speakers and a print cartridge does not provide any extra utility over the utility provided by these goods independently. Thus, if we return to the bundle utility function above, U(P,C,M), we can imagine it having the following simple functional form:

(3) U(P,C,M) = min(P, 0.5C) + aU(M)

The parameter a in equation (3) controls the utility of a working printer relative to a pair of speakers. Here we have a kind of independence between goods that did not obtain between brands of orange juice or between printers and print cartridges. The intuition behind independent goods is almost opposite to the intuition behind complements. If goods are complements, then both are necessary in some specific ratio in order to produce something of value (e.g. a working printer). Diminishing utility for each complement good will generally be a function of diminishing utility for the product of the complements (e.g. working printers). Independent goods produce utility by independent means. Hence, diminishing utility for each good will also (in many cases) be the result of the diminishing ability of the good itself to produce utility. For example, most people will get little value from additional pairs of speakers because one set is enough. This is completely independent of how many printers one has. Thus in the general case, a utility function over a bundle of independent goods will decompose into a sum of single-variable utility functions over each of the goods, each of which will have its own independent marginal behavior.

[13] Again, I'm not going to go into the details of ordinal vs. cardinal utility functions and of diminishing marginal utility.

1.1.2.3. Substitute Goods

The notion of independent goods is related to another standard notion in microeconomics. Economists use the term substitutes for goods that perform the same function. The idea is that if the price of one good increases, some consumers will use the other good as a substitute. For example, when the price of butter increases, demand for margarine increases (all things being equal). Two substitute goods could perform their function equally well and thus be so-called perfect substitutes. For example, I can't distinguish Coke from Pepsi and so they are perfect substitutes for me. However, if you have a strong preference for Coke over Pepsi, they will be imperfect substitutes for you. You will always choose Coke over Pepsi at the same price and the price of Coke will have to get high before you switch to Pepsi. Substitute goods differ in an important way from complement goods. Complement goods A and B produce utility only when combined in a fixed ratio: they are useless alone or in the wrong ratio. Substitute goods each perform the same function. So the value of having units of each good at the same time will just be the sum of their value independently (modulo diminishing utility). For example, suppose I have a slight preference for Coke over Pepsi and rate Pepsi as having 90% of the value of Coke. Given a Coke and a Pepsi, I drink the Coke today and the Pepsi tomorrow. This is no different from summing up the utility of the world where I just have a Coke today with the utility of the world where I just have a Pepsi tomorrow. Thus the Substitutes utility function will have the following functional form (indifference curves are below):

(4) U(A,B) = A + aB

The a parameter, or mixing ratio, now controls how much utility a unit of B provides relative to A. This parameter might take different values for different individuals. For example, if Pepsi and Coke are perfect substitutes for me, then I'll have a utility function for which a=1.
Someone who is willing to pay twice as much for Coke as Pepsi (an unusually strong preference) would have a=0.5. Note that the mixing ratio for the Complements function could also take different values for different individuals, though this wouldn't happen in the printer case.

Diagram 2: Indifference curves for U(A,B) above, with quantity of A on the x-axis and B on the y-axis, and with a=0.5. Colored lines are bundles with utility 1 (navy), 2 (green), 3 (brown) and 4 (cyan) respectively. Note that if we hold fixed the value of A at A=1 and then vary B, there is no change in the marginal value of increasing A. (Taking partial derivatives w.r.t. A, we can ignore aB as it's a constant.)

Equation (4) shows that Substitutes have the same additive functional form as independent goods. This makes sense. Coke or Pepsi provides the same "function" of being a cola drink of the same size, same sweetness level, same caffeine content, and so on. Moreover, they each provide this function independently. They don't combine in the way that the printer and print cartridge do. In this way, they are similar to the pair of speakers and the printer. However, since the cans of Coke and Pepsi provide the same function (unlike the speakers and printer) and there is diminishing utility in this function, they are not independent in the way they contribute to diminishing marginal utility. Thus, as with the Complements function (2), we will need to add parameters outside of function (4) in order to model diminishing utility. The intuition here is very simple. One's enjoyment of cola will drop quickly after a couple of cans. Whether the cola is Coke or Pepsi will make little difference to this drop in enjoyment. In contrast, additional Porsche cars won't change the value of additional swimming pools. Additional printers don't change the value of additional pairs of speakers.
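The three-way contrast between independent, substitute, and complement goods can be sketched numerically. In the sketch below, the exponent used for diminishing marginal utility (an exponent r with 0 < r < 1, here 0.5) is one illustrative choice among many:

```python
# Sketch of the three ways goods can interact, with a simple exponent
# (0 < r < 1) standing in for diminishing marginal utility.
def independent(A, B, a, r=0.5, s=0.5):
    return A ** r + a * B ** s      # each good's utility diminishes separately

def substitutes(A, B, a, r=0.5):
    return (A + a * B) ** r         # utility diminishes in the shared function

def complements(A, B, a, r=0.5):
    return min(A, a * B) ** r       # utility diminishes in the joint product

# For perfect substitutes (a = 1), only the total quantity matters:
assert substitutes(2, 0, a=1) == substitutes(0, 2, a=1) == substitutes(1, 1, a=1)
# For complements, any good held in excess of the mixing ratio is wasted:
assert complements(4, 2, a=0.5) == complements(1, 2, a=0.5)
```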
Here is a summary of the functional forms, if we include parameters for diminishing marginal utility. For concreteness, I have chosen a particular functional form for the diminishing marginal utility, viz. raising to an exponent. The exponents below (r and s) are all bounded between 0 and 1. One example is the square root function, for which we'd have r = 0.5.

(independent) U(A,B) = A^r + aB^s
(substitutes) U(A,B) = (A + aB)^r
(complements) U(A,B) = min(A, aB)^r

1.1.3 Summary For the example of Sue making purchases in the electronics store, we considered representing Sue's choices in terms of three single-variable utility functions, one for each good: U(P,C,M) = U(P) + U(C) + U(M). This functional form, although very simple, fails to capture the dependence between printers and cartridges, which produce additional utility only when added in the ratio of one printer to two cartridges. It is clear that if we are able to consider any possible function on (P,C,M), then we can find a function that allows us to capture the dependency between printers and cartridges. However, we would like to work with a function that is both analytically and computationally tractable. At this point, we introduce non-additive multivariable utility functions that are simple and tractable but also capture the way in which different goods can interact to produce utility. We introduce complementary goods and substitute goods along with their respective utility functions. We then give a plausible Complements utility function for (P,C). We note that (P,C) and M are independent in the way they produce utility. Hence the three-variable utility function on this bundle will be the following:

(5) U(P,C,M) = min(P, 0.5C) + aU(M)

We can then add diminishing marginal utility to each of the independent components of this sum. This example of the printer involves only three goods. We'd like to consider agents who, like modern consumers, make choices among a vast number of goods. Let A1, A2, ...
, An be the quantities of n goods that an agent is choosing between. Then we can straightforwardly generalize the approach taken in the printer example. The n-variable function U(A1, A2, ..., An) will decompose into various functions on fewer variables, due to dependencies between the goods. For instance, if (A1, A2) are complements and are independent of every other variable, we'll get:

U(A1, A2, ..., An) = min(A1, a1·A2) + a2·U(A3, A4, ..., An)

As I will discuss below, there are dependencies between goods other than those captured by the notions of substitutes or complements. However, we have the outline of a general approach to modeling preferences. Given a set of goods that can be purchased in arbitrary quantities, we model an agent's preferences over bundles of goods using a multivariable utility function. This function decomposes (or "factors") into smaller functions, capturing dependencies or interactions between subsets of the set of all goods. So far I have not given any empirical evidence in support of this approach to modeling preferences. Empirical testing of this model will be explored in Section 2 of this chapter. In the last subsection of Section 1, I will explore extensions to this bundle utility approach. Before then, I want to consider at some length a theoretical objection to the approach to modeling suggested so far.

1.2. Interlude: Objecting to the Bundle Utility framework

The move from single- to multi-variable utility functions is a small departure from our starting point. We start by using simple functions on single goods, such as different quantities of oranges. We switch to using simple functions on bundled goods like combinations of printers and cartridges, which we write as (number of printers, number of cartridges). The utility function is first defined on the object of choice (e.g. an orange, a print cartridge, etc.) and then defined on a bundle of objects (which may or may not be chosen as a bundle).
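The factored n-variable utility sketched above can be written as a short function. This is an illustrative sketch only: the assumption that the remaining goods A3, ..., An are simply additive (independent) is mine, made to keep the example small:

```python
def factored_utility(quantities, a1=0.5, a2=1.0):
    """U(A1,...,An) = min(A1, a1*A2) + a2 * U_rest(A3,...,An),
    where (A1, A2) are complements and independent of the rest.
    U_rest is taken here to be additive (independent goods)."""
    A1, A2, *rest = quantities
    return min(A1, a1 * A2) + a2 * sum(rest)

# One printer (A1) with two cartridges (A2), plus three other
# mutually independent goods, one unit each:
print(factored_utility([1, 2, 1, 1, 1]))  # 4.0
```

The point of the factoring is computational: each dependency is confined to a small sub-function, so adding a new independent good just adds one term rather than redefining the whole n-variable function.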
What the printer example really points to is that we get value not from printers or from cartridges, but from having a working printer. The value we assign to having a working printer will explain our choices over printers and cartridges. This knowledge also explains why, having been given a Canon (rather than HP) printer as a gift, we would suddenly become interested in buying Canon print cartridges. The transition to bundle utility functions suggests a general approach that could be taken much further than we have gone so far. The approach is to try to define the utility function over the world-states that are "more" intrinsically valuable. I'm using the word "intrinsic" in a limited sense here, avoiding any controversial philosophical claims. I don't mean that to model someone's preferences we must first solve the problem of discovering the sources of intrinsic value for an individual. I'm supposing that there is some set of goods or outcomes that we consider defining the utility function over. We can then talk about outcomes being intrinsically or instrumentally valuable relative to this set. If the set contains just "printer", "print cartridge", "working printer", and "pair of speakers", then we might say that (relative to this set) the outcome "working printer" has intrinsic utility for an agent. Of course not many people find a working printer intrinsically valuable in absolute terms. So in this example, it will be easy to add items to the set such that "working printer" is no longer intrinsically valuable relative to the set. (If the printer is used for printing a term paper, then we could add to the set the outcome 'printed term paper' or 'term paper that got a good grade' or 'getting a good grade for the course overall'. Any of these would displace the printer in terms of having intrinsic value relative to the set.)
The virtue of assigning utilities to more intrinsically valuable outcomes is that these utility functions should lead to more robust predictions. As noted above, if HP print cartridges are assigned independent value, then we'll predict that Sue will buy cartridges even if she is dispossessed of her printer and lacks funds for a new one. Likewise, if having a working printer is assigned independent value, we'll predict that Sue will invest in printer equipment even if her class now allows for email submission of assignments. It is not surprising that the "intrinsic values" approach should lead to more robust predictions. It is a move towards a model that better captures the actual causal dynamics of the situation. These causal dynamics may be extremely complicated. Intrinsic or terminal values for a single human may be diverse and may change over time (e.g. outcomes previously having only instrumental value coming to be intrinsically valuable). A further complication is weakness of will. This "intrinsic values" approach seems to me worth pursuing both for predictive purposes in psychology and economics and (even more so) as a proposal about how folk-psychology makes predictions based on preferences. To have a flexible model for predicting human actions across a wide range of situations, this approach seems necessary. Humans can take a huge range of actions (or choose from a huge range of options) and it's not feasible to construct a utility function that includes a utility for all of them (or for all bundles of them). However, the "intrinsic values" approach will be difficult to get to work. First, as the printer example suggests, the outcomes that have more intrinsic value tend to be more abstract and further from observable or easily recordable choice. Compare "having a print cartridge" to "communicating my assignment to the professor". This can make the connection between preference and action much more complex.
Consider that there could be various kinds of action that satisfy an abstract goal like "communicate ideas to professor". (In the microeconomic bundle models considered above, this connection is very simple: you will choose a unit of a good if the resulting bundle is utility maximizing.) There is also a challenge in learning which outcomes have more intrinsic utility for an agent. For instance, suppose Sue needs a working printer to print an assignment for a class. Given our folk knowledge of the situation, we'd think it unlikely that Sue gets any intrinsic value from being able to print the essay. If, hypothetically, she were offered the chance to submit the paper electronically, she would probably do so.14 A further question is whether Sue gets any utility from producing the essay (as some students certainly would). We can easily imagine a case where Sue is merely taking the class to fulfill a requirement and that she places no value at all on the activity of completing the essay. If she could fulfill the requirement by going to see a movie, she would much rather do that. Fulfilling the requirement for a degree may also be merely instrumental. If Sue were offered the degree (or the employment advantage of the degree) without having to fulfill the requirements, she might take it. In general, it's hard to work out which states are merely instrumentally valuable because it's hard to get access to the relevant discriminating evidence. Usually, there's no way to fulfill the class requirement other than by writing the paper.

14 Sue may also take (at least some) pleasure in producing an attractive physical instantiation of her work. Human preferences are complex in this way: something with mainly instrumental value can also have some intrinsic value. (That Sue would not continue to print papers were electronic submission allowed does not imply that she gets no intrinsic value from printing papers.)
We can look at whether Sue writes papers just for fun. But her not doing so is inconclusive, because she may be too busy writing papers for class or because of akrasia. We can ask her whether she'd skip writing the paper if given the option. But it's not a given that her answer would track her hypothetical behavior. So this extra information gained by interviewing Sue and examining her general paper-writing habits would not necessarily give us that much information. If we did have good information about Sue's intrinsic values, we could still have difficulty predicting her actions based on this. We face the double challenge of coming up with a formal model for making rational plans and then adapting the model to take into account ways in which humans might deviate from the rational-planning model. The microeconomic bundle-utility approach we have sketched above, in which utility functions are defined over sets of dependent goods, can be seen as a small first step towards defining utility functions over the states that have intrinsic value to the agent. Learning these utility functions won't allow us to generalize in the way that we'd ultimately like to. (Example: If Sue buys print cartridges today, she probably has some things she wants to print in the next couple of days. If her printer malfunctions and she is offered a cheap printer by an acquaintance, she will probably take the offer.) On the other hand, we might be able to learn these simple utility functions quickly and be able to make reasonably good predictions about choices made under similar underlying conditions. However, if the world changes, e.g. a university switches to only accepting electronic submission of papers, then we won't always be able to predict choices very well.

1.3. Bundle Utility: Preference for variety and other challenges

1.3.1.
Multiplicative, Cobb-Douglas Utility functions

The previous section discussed some of the limitations of bundle utility functions and a general approach to remedying these limitations by defining utility functions over more abstract, more intrinsically valuable outcomes or states. In this section, I return to the bundle-utility model itself. I want to discuss some other ways in which goods can interact to produce utility and discuss how such interactions are problematic for this formal model. I start with an example. Suppose that Scott is preparing food for a large group of people. His aim is the greatest enjoyment of the greatest number of people. There is excess demand for the food he makes, and so he's trying to make as much as possible, as well as making it tasty. Scott can hire more chefs, leading to better recipes and more expert food preparation. He can also buy more cooking equipment, enabling him to output the same food but in larger quantities.15 Let C be the number of chefs Scott has and E be the number of units of cooking equipment. We can imagine that for small values of C and E, there is a multiplicative relation between C, E and U(C,E). For example, for a given quantity of E, if Scott doubles C then we could imagine a doubling in utility. (If the quality of recipes and of food preparation doubles, then every dish served will double in quality and so the total utility doubles.) Likewise, doubling the amount of cooking equipment doubles the number of dishes produced (without changing the quality of the dishes) and so doubles the number of people who get food, hence doubling the utility. Here is a utility function with this property (where 'CE' means 'C times E'):

(6) U(C,E) = CE

This utility function seems likely to have limited application.
We know that adding more chefs can only improve the recipes so much (and in practice we expect sharply diminishing returns to more chefs). Moreover, we may expect Scott's utility to be non-linear in the number of people satisfied. (A doubling of the number of satisfied customers will not double his utility.) Nevertheless, the example of Scott illustrates a way in which goods can interact that differs from independent goods, substitutes, and complements. For independent goods, changing the quantity of one good has no impact on the utility gained by a marginal increase in the other good. (Whether you have one or two pairs of speakers makes no difference to the utility gained by having a printer.) For substitutes, marginal increases in one good can change the marginal utility of increases in the other good, but only by making the utility lower (due to diminishing marginal utility). Compare this to the food example. If Scott has lots of equipment and so can produce lots of food, then he gets more benefit from having more chefs than if he produces only a small amount of food. Something like this is also true for complement goods. However, for complements, there are no gains in utility without increases in both goods in the appropriate ratio. (If you have no print cartridges, you get no benefit from adding more printers.) Whereas in the food example, having more chefs will always improve utility (even if only by a small amount), and the same is true for increases in productive capacity. Equation (6) above is an example of a Cobb-Douglas Utility function. This utility function (named for the inventors of an analogous production function in economics) is for goods that have a multiplicative effect on utility. (Put differently: for cases where one good is a limiting factor on utility relative to the other goods.)

15 Assume that all other inputs to Scott's food production don't cost him anything; he already has ample free labor to oversee the food, but it's not skilled.
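The doubling argument behind equation (6) is easy to check numerically; the particular numbers below are arbitrary:

```python
def u(chefs, equipment):
    """Multiplicative utility from equation (6): U(C, E) = C * E."""
    return chefs * equipment

# Doubling either input doubles utility, whatever the other input is:
assert u(4, 3) == 2 * u(2, 3)     # double the chefs
assert u(2, 6) == 2 * u(2, 3)     # double the equipment
# Unlike with independent goods, the marginal value of an extra chef
# grows with the amount of equipment:
assert u(3, 6) - u(2, 6) > u(3, 1) - u(2, 1)
```

The last assertion is the distinctive Cobb-Douglas feature discussed in the text: each good amplifies the marginal value of the other, rather than leaving it unchanged (independent goods) or only depressing it (substitutes).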
The general functional form is the following:

(7) U(X,Y) = X^a Y^b

Here, the constants a and b control the marginal utility of each of the goods, i.e. the rate of increase of U as one of X and Y increases. Utility increases fastest when the two goods are combined in a ratio that's a function of a/b.

Diagram 3: Indifference curves for U(X,Y) = X^a Y^b, with quantity of X on the x-axis and Y on the y-axis. Colored lines are bundles with utility 1 (navy), 2 (green), 3 (brown) and 4 (cyan) respectively. Note that, unlike in the additive case, if we hold fixed the value of X at X = 1 and then vary Y, the marginal value of increasing X does change.

The example given above of a Cobb-Douglas utility function (the food example with Scott) was more plausible as a case of a utility function for a company, or some other kind of organization, than as a utility function for an individual. In general it is harder to come up with goods that have multiplicative utility for an individual. There are cases similar to the food example. It could be that someone's scientific output doubles in value as a multiplicative function of the number of students in the lab and the amount of equipment and computing power in the lab. We could also consider the quality of different eating experiences. If we had a small sliver of tiramisu, we can improve the taste by adding a small amount of rum. If we now have a full serving of tiramisu, the same amount of rum might now have even more effect on the overall experience, as it will add some rum flavor to the whole serving. However, it's clear that the multiplicative effect will be weaker, because there is a "complements" interaction with the rum and tiramisu that is also important to the utility. Another possible example is the utility gained from relaxing activity (e.g. lazing in the pool) after a workout.
Marginal increases in the length of the relaxation activity have more impact on utility if the workout was longer or harder.

1.3.2. Preference for variety

The examples discussed above in connection to the Cobb-Douglas utility function are related to a broad class of important examples involving what I'll call "preference for variety". Imagine Dave goes to the fridge and chooses a strawberry yoghurt over a vanilla yoghurt. If the next day Dave chooses vanilla (when faced with an identical choice, and assuming Dave has ample past experience with each flavor), we would probably not attribute to Dave a change of preference. We would probably think that Dave just likes some variety in his yoghurt flavors. (The same goes for someone who eats Indian food one night and Chinese food the following night.) Preferences for variety exist for a very wide range of goods or options: from very particular and insignificant options (e.g. flavor of yoghurt, which song from a CD to play, which route to take home) to life choices of great importance (e.g. jobs, significant hobbies, living location). Preference for variety is closely related to diminishing marginal utility, and I want to briefly explore that relation. First, take the diminishing marginal utility of money. Assume that I get a yearly salary and I'm not allowed to save any of my money. Then my utility increases more if my yearly income moves from $10,000 to $30,000 than if it moves from $150,000 to $170,000. We get evidence for this from my risk-averse choices. The intuitive underlying reason for the diminishing utility of money is that the $10,000-$30,000 raise enables me to purchase things that make a bigger impact on my quality of life (e.g. a better apartment, a car) than the $150,000-$170,000 raise. Note that if I can save money, then the situation is more complicated. The extra money in the $150,000-$170,000 case might have a big impact if I am later out of work.
The utility gained from the extra money will therefore depend on the likelihood of my being alive in future years and in need of using it.16 Related to the diminishing utility of wealth is the diminishing utility of various goods. Suppose there's a cake that needs to be eaten today. Then I get more value in going from 0 to 1 slice than from 1 to 2 slices or 4 to 5 slices. (After four slices, I may get no value at all from extra slices.) Yet as with money, things are more complicated if my consumption is spread out across time. If I eat only one slice per month, then going from 4 to 5 slices could be just as good as going from 0 to 1 slice. So a good can have sharply diminishing marginal utility for an agent when consumed at one time, without the agent having any preference for variety even over short time spans. For example, someone might be able to manage only one slice of cake at a time, but still prefer to eat the same cake every day with minimal loss of utility over time. We could try to assimilate these two phenomena of diminishing utility and preference for variety. After eating a slice of cake, the utility of an extra slice immediately goes down. This utility will rise over time, but the growth rate can vary between individuals. For some it might take hardly any time at all; for others it might be a day or a week. The diminishing utility of cake slices over a given time period will depend on this time-utility function. The problem with this assimilation is that it fails to capture important differences in the underlying phenomena. Moving from 4 to 5 slices of cake will be bad for most people due to satiation. This is nothing particular to the kind of cake in question. On the other hand, a preference for having different kinds of dessert on consecutive days seems to be based on a preference for different flavors and taste experiences.

16 Another factor here is whether I discount my future welfare relative to near-term welfare.
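One crude way to keep the two phenomena separate in a formal model, offered here as my own illustrative sketch rather than anything proposed in the text, is to give each consumption episode two distinct discounts: one for quantity at a single sitting (satiation) and one for how recently that particular kind of good was last consumed (variety):

```python
def episode_utility(base, qty_at_sitting, days_since_last, s=0.5, half_life=3.0):
    """Utility of eating `qty_at_sitting` slices of a dessert that was
    last eaten `days_since_last` days ago.
    - qty_at_sitting ** s captures diminishing utility within one
      sitting (satiation), regardless of the kind of dessert;
    - the recency factor starts near 0 right after consumption and
      recovers toward 1 over time (preference for variety)."""
    satiation = qty_at_sitting ** s
    freshness = 1.0 - 0.5 ** (days_since_last / half_life)
    return base * satiation * freshness

# A slice a month after the last one is worth nearly full value;
# a second slice at the same sitting adds less than the first;
# the same cake again tomorrow is heavily discounted:
print(episode_utility(10, 1, 30))  # ~10
print(episode_utility(10, 2, 30))  # ~14, not ~20
print(episode_utility(10, 1, 1))   # ~2
```

The two discount parameters (s and half_life) can vary independently across agents and across goods, which is exactly what the cake example demands: sharp satiation with little preference for variety, or vice versa.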
Assimilating diminishing utility and preference for variety is not straightforward. For the rest of this chapter, I will use the terms in the way suggested by the cake example above. That is, "diminishing utility" will refer to phenomena like the decline in utility of subsequent slices of cake at a given time. "Preference for variety" will be used for phenomena like the decline in utility (independent of satiation) of a given kind of cake that results from having consumed that kind of cake recently. There is lots of work in economics and other social sciences studying the diminishing marginal utility of wealth in humans.17 In contrast, I know of very little work studying the preference for variety within a utility-based framework.18 There are two kinds of choice situations that one could try to model and predict that depend on preferences for variety. First, there is the choice between single items at a particular time. For example, if Dave goes to the fridge for yoghurt after every meal, we can try to predict what flavor he chooses. Here we need some function that assigns flavors decreasing utility as a function of when they were last sampled. Second, we could model the choice of bundles of goods that are chosen with preference for variety in mind. For example, suppose Dave slightly prefers vanilla to strawberry, in the sense that if he had no yoghurt for months and could only choose one, he'd take vanilla first. Given his preference for variety, Dave might still buy 6 vanilla and 6 strawberry yogurts.

17 The phenomenon is more complex than I have space to discuss here. It seems that some of the value of wealth is contextually sensitive to one's peer group. There's also some psychology / experimental philosophy looking at human intuitions about this notion: Greene and Baron 2001.

18 "Novelty, Preferences and Fashion" (Bianchi 2001) explicitly addresses the dearth of work in this area and discusses some economic work that at least touches on these issues.
Thus we would need a utility function over bundles of vanilla and strawberry that assigns more utility to the bundle (6 vanilla, 6 strawberry) than to any other bundle of 12 yogurts total. The utilities we get for these bundles would not necessarily be very robust: if Dave switches to eating yogurt less frequently, he may switch to a bundle with a higher proportion of vanilla. Here we have the same problem noted above concerning what the utility function is defined over. The entity with more intrinsic value is the sum of the yogurt-consumption experiences, rather than the bundles containing different numbers of vanilla and strawberry yogurts. To better match the actual structure of the agent's values, we need a model which takes into account when the yogurt will be eaten and how much preference for variety Dave has.19 It seems likely that preference for variety as manifest in choices over bundles will combine aspects of the "independent" and "substitutes" utility functions. Having had lots of vanilla yoghurt, we switch to strawberry because vanilla has got boring and strawberry is "newer" or "fresher". As with substitutes, there is declining marginal utility in additional units of any kind of yoghurt. But there is separate, independent diminishing utility in particular flavors. It's also worth considering that there is some Cobb-Douglas-type non-linearity with some goods. Maybe two different art forms or different holiday locations can interact in a positive and non-linear way.

19 It's worth noting that the choices people make when purchasing a bundle may not match the choices they make if they are making each choice separately. On the one hand, it may be hard to predict when you'll get bored of something. And on the other hand, people choosing bundles might think more carefully about the need for variety because they have the whole "schedule" laid out before them.
There is empirical work studying inconsistencies between these kinds of decisions: see Loewenstein and collaborators on the so-called "Diversification Bias".

1.4. Temporally extended preferences

In the previous section, I introduced a third kind of utility function (the Cobb-Douglas function) and gave some textbook-style applications for this function in modeling human preferences over goods that interact multiplicatively. I speculated that this kind of interaction is less common than substitute or complement interactions. I introduced preference for variety and discussed its relation to diminishing marginal utility and how it could be modeled using bundle utility functions. The preference for variety is a very common feature of human preferences that any comprehensive modeling effort will have to reckon with. Moreover, it suggests the importance of temporal features of consumption in determining value. The value of a bundle of different flavors of yogurt will be contingent on my consuming the yogurt in the right sequence and over the right duration. What has value is a particular world history (the one in which I alternate between flavors and space out my consumption), and having the mixed bundle makes this world history possible. There are other examples where the entities bearing value are even more particular. Consider trying to solve a hard problem in mathematics or physics. (The problem could be an unsolved open problem, or it could be a recreational or textbook problem.) If the problem is interesting and theoretically important, then after working on it for a while I will probably be extremely curious to know the answer. However, if you offered to tell me the answer, I would say no. I would even incur costs to avoid hearing the answer. However, once I'm almost sure that I've got the answer myself, I'll be very happy to hear you relay the right answer. Here's a similar example.
When you finish a videogame, you are generally treated to an ending sequence that presents the climax of the narrative of the game, often with interesting visuals. Having played much of the way through (and got caught up in the narrative), you might be very curious to see the ending, but you wouldn't want to see it unless you had finished the game yourself. You value seeing the ending sequence of the game and seeing the closure of the narrative, but part of the value of seeing it is conditional on having "earned it" by succeeding at the game and following the narrative all the way through to the end. Similarly, a novelist may crave praise and prizes from critics but might not accept them if she hadn't produced work that she felt merited these accolades. (Imagine the novelist gets praise for a work by a mysterious author of the same name. Or that the novelist writes a very mediocre work that is feted because people have some independent reason for flattering the novelist.) One way of interpreting these examples is to say that the person involved just values a particular activity (problem solving, videogame playing, novel writing). But these cases seem more subtle. The goal of these activities (e.g. solving the math problem, finishing the game, producing a good novel) seems psychologically important. Failure to attain the goal could be much more disappointing than losing a friendly and informal game of soccer. On the other hand, it also seems clear that if solving the math problem could be had in an instant (by a click of the fingers), then it would lose its pull as a goal. Both the temporally extended process and the goal seem to be important in these cases. The question of how exactly to characterize the source of value in these examples is not my task here. There may be models of human preference that fail to do this but still are able to make good predictions in most cases.
The problem here is that it seems that models which define utility functions over world states rather than world histories might fail to make good predictions. In the videogame case, if the source of utility is seeing the ending sequence, then you'd predict that the player would skip to that sequence without finishing the game. If the source is the fun of playing the game itself, then you'd expect the player to keep playing if he learned that a bug in his version of the game would make it freeze three-quarters of the way through (removing the possibility of seeing the closing sequence). If utility comes from a particular causal sequence of events, the player successfully playing through the whole game and then seeing the ending sequence, then you need a utility function defined in terms of world histories. Switching to world histories is a challenge because representing sequences of events (rather than world states or bundles) leads to an exponential blow-up in the space of objects that utility can be defined over. Moreover, we need to have some rich formal language in order to correctly define the temporally extended complex occurrences that constitute important sources of value. In conclusion: there are particular aspects of human preferences that present a challenge for existing microeconomic models. Some of these preferences (e.g. to produce significant works of art or science, or to bring about other difficult, time-consuming projects like starting a school or some other organization) seem closely related to the idea of a meaningful human life. However, it is important to point out that the ethical significance of these preferences may not be matched by their practical significance. It may be that models that are not capable of representing this kind of preference can do well in the task of predicting human choices across many situations.

Section 2. An experiment testing the utility framework

2.1.
Introduction: Psychological and Folk-Psychological Preference Learning

In Section 1, I introduced a formal framework for modeling preferences over bundles of goods that have some kind of dependency or interaction. I pointed out various everyday properties of human preference and the challenges they pose for the microeconomic formal framework. In Section 2, I describe ways of testing the formal framework empirically. There are two main empirical applications of formal models of preference. The first application is to modeling and predicting people's actual choices. The second application is to modeling how people predict the choices of others (i.e. folk psychology or Theory of Mind). I will briefly sketch each application area. Modeling and predicting choices (first-order preference learning) Predicting actual choices is important for economics, where we'd like to predict how people's spending responds to changes in their budget or to changes in the cost of goods. But predicting choices is also important outside of the economic transactions involving money that have traditionally been the focus in economics. For psychology and for the social sciences, we may be interested in the choices of a single individual or a very small group (rather than the large populations of individuals that have great significance in economics). We may also be interested in predicting the choices of children, and of people in communities where money plays a minor role. A topic of particular psychological and philosophical interest is the way people make choices between different kinds of goods. There is an extensive literature in psychology that looks at how altruistic people are in performing certain game-theory-inspired tasks (Camerer 2003). There is also work suggesting that there are some goods that subjects consider "sacred" and treat differently from most goods (Tetlock 2003).
Much of this work is informal and may be illuminated by a formal model that can represent a range of different ways that goods can be related.

Folk-psychology or Theory of Mind (second-order preference inference)

The second application of formal models of preference is as a model of human folk-psychology or 'theory of mind'. Here we are interested not in predicting what humans will do, but in modeling and predicting the predictions that humans make about other humans. I will refer to the first application as the psychological or first-order project, and the second as the folk-psychological, theory of mind or second-order project. The study of folk-psychology is a significant subfield within cognitive science. This work ranges from qualitative studies of the development of concepts like false-belief in children (Gopnik and Meltzoff 1994) to more recent work on the neuroscience of folk-psychology (Saxe 2005) and quantitative modeling of some aspects of theory of mind within a Bayesian framework (Goodman et al. 2006, Lucas et al. 2008, Baker et al. 2010). Folk-psychology is an important topic for cognitive science because understanding the intentions of the people around us is so important. Children, in particular, are dependent on others for help and teaching in many areas of life and so need to quickly develop an ability to read the mental states of others. This chapter focuses on one particular Theory of Mind task: modeling and inferring preferences in order to predict choices. This inference is important in quotidian contexts. In social situations we often have to predict what kinds of things other people will like. Consider the task of preparing a dinner party, or selecting entertainment for an evening, or choosing fun activities for a child. It can also be socially important to work out someone's preferences (e.g. political or aesthetic preferences) from minimal observations of the person's actions.
In business, companies try to work out what kinds of goods consumers would like to buy based on previous information about their buying habits. In politics, bargains can be made when each party is able to correctly identify something that the other party values. Due to the importance of this theory-of-mind task in daily life, there is reason to think that human abilities are quite developed in this area. This suggests (a) that humans may be able to make inferences that can only be matched by quite sophisticated formal models and (b) that studying human inference may be useful in coming up with richer formal models.

2.2. Turning utility functions into a model of preference inference

2.2.1. Bayesian inference for learning utility functions

In order to model preferences for psychology or folk-psychology, we will need more apparatus than just the bundle utility functions described in Section 1 of this chapter. If I have the utility function for an agent defined on the bundles (A_1, ..., A_n), then I can predict his choice between any set of bundles that he could have as options. However, we want to consider situations where the utility function is not just given a priori but instead has to be learned from data. There are two different steps to learning bundle utility functions. We have to learn:

i.) The functional form that someone's utility function takes over a particular set of goods. For example, suppose Lucia is assembling a table. She is purchasing screws in two different sizes: "big" and "small". It could be that the two sizes of screw are complements. This would be the case if the table that Lucia is assembling requires two small screws for every one big screw, with additional screws being useless unless they come in this ratio. On the other hand, Lucia could be building something where either small or big screws can be used, and so the screw sizes are substitutes. We want to be able to learn from observing Lucia's choices which utility function she has over these goods.
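To make the two hypotheses concrete, here is a minimal Python sketch of the two candidate utility functions for Lucia's screws. The function names and the default parameter values are my own illustrative assumptions, not part of the framework as stated:

```python
def complements_utility(big, small, small_per_big=2.0):
    # Perfect complements: a finished unit needs 1 big screw plus
    # `small_per_big` small screws; leftover screws add nothing.
    return min(big, small / small_per_big)

def substitutes_utility(big, small, a=1.5):
    # Substitutes: each big screw is worth `a` small screws.
    return a * big + small

# Bundle (4 big, 2 small) vs. (3 big, 3 small): the complements function
# prefers the second bundle, while these substitutes prefer the first.
```

Observing which bundle Lucia actually picks is then evidence about which functional form (and which parameter value) describes her.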
Likewise, Bob might purchase coffee and milk and only be interested in producing milky coffee with a precise but unknown ratio of milk to coffee. Alternatively, Bob might only be interested in drinking milk and coffee on their own (as substitute drinks).

ii.) The parameter settings for the utility function. If the screws are complements, what is the mixing ratio a for big vs. small screws? If coffee and milk are complements, what is the mixing ratio? (Is Bob making a latte, cappuccino, flat white or espresso macchiato?) If the screws are substitutes, how much better are the big screws than the small? (Operationally, how much more would you pay for a big screw than for a small screw, assuming that big screws are stronger?)

My approach is to use Bayesian inference to do this learning. We start with a prior distribution over the different forms the function could take (e.g. a prior over the three functional forms given by the complements, substitutes and Cobb-Douglas utility functions). We then learn an agent's utility function by updating these distributions on observing choices. Both in the first- and second-order settings, it can be important to predict choices while only having limited information about the agent's utility functions. This is easily achieved in a Bayesian framework. For instance, if I have a 0.7 probability that Lucia has a complements function over screws with parameter a=2, and a 0.3 probability that she has a substitutes function with parameter a=1, then I predict her choices over different bundles by averaging the predictions generated by each utility function.

2.2.2. Learning for individuals and for populations

The approach sketched so far allows us to learn utility functions for individuals from observing their choices.
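A minimal Python sketch of this Bayesian machinery. The specific hypothesis space, the prior weights, and the softmax choice likelihood (anticipating the noise model of 2.2.3 below) are my own illustrative assumptions:

```python
import math

def u_complements(bundle, a):
    x, y = bundle
    return min(x / a, y)          # goods combine in a fixed ratio a : 1

def u_substitutes(bundle, a):
    x, y = bundle
    return a * x + y              # each unit of A is worth a units of B

# (name, utility function, parameter a, prior probability) -- Lucia example.
HYPOTHESES = [
    ("complements", u_complements, 2.0, 0.7),
    ("substitutes", u_substitutes, 1.0, 0.3),
]

def p_choose(u_fn, a, chosen, rejected, beta=1.0):
    # Softmax likelihood of picking `chosen` over `rejected`.
    diff = u_fn(chosen, a) - u_fn(rejected, a)
    return 1.0 / (1.0 + math.exp(-beta * diff))

def posterior(choices):
    # Bayes: reweight each hypothesis by prior * likelihood of the data.
    w = [(n, f, a, p * math.prod(p_choose(f, a, c, r) for c, r in choices))
         for n, f, a, p in HYPOTHESES]
    z = sum(x[-1] for x in w)
    return [(n, f, a, x / z) for n, f, a, x in w]

def predict(post, option1, option2):
    # Posterior-averaged probability that the agent picks option1.
    return sum(p * p_choose(f, a, option1, option2) for _, f, a, p in post)
```

After observing a single choice, `posterior` reweights the hypotheses, and `predict` averages their predictions, exactly as in the 0.7/0.3 example above.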
How well this model works for prediction will depend on whether any of the utility functions in our hypothesis space fits the individual's actual pattern of choice, and on whether we observe choices that are informative about the individual's general pattern. So far, I have only discussed models for learning the choices of individuals. A very important and closely related inference problem is to learn about the distribution of preferences within populations. It is clear that there are regularities in preference across individuals. Within my formal framework, this can be at the level of the functional form of a utility function, e.g. most people would treat printers and print cartridges as complements because they have no use for the parts separately but would derive value from a working printer. It can also be at the level of parameter settings of utility functions. For example, it could be that almost everyone finds two very similar brands of orange juice to be perfect substitutes. But in other areas we find polarization, with people strongly preferring one or the other item (e.g. rival sports teams). We would like to learn population-level regularities from observing multiple individuals drawn from the population. Those regularities can then be exploited when making predictions about new individuals taken from the same population. For instance, if for some substitute goods there are polarized preferences, then a single observation of a new person should be enough to yield strong predictions about future choices. For example, if I see someone on a match day wearing the colors of one team, I can immediately make the inference that this person has a strong preference for that team.

2.2.3. Dealing with unreliability

A further important element of a practically useful model is some formal way of dealing with human error or the "random noise" associated with human actions. The bundle utility functions are a model for homo economicus.
They predict that even if two bundles differ by only a tiny amount in utility, the agent will still take the better one every time. Humans don't seem to work like this. When two options are very closely matched for quality (e.g. two meals in a restaurant) we are often not sure which to pick, may pick different options at different times (i.e. pick inconsistently), and may also recognize after the fact that our choice was slightly suboptimal. In my modeling in this chapter, I use a simple model of noise in human judgments that has been developed by psychologists and decision theorists (Luce 1980). The model assumes that people are more likely to take suboptimal options when the difference in utility is small, and become exponentially less likely to do so as the utility difference increases. This model of choice is called the "Luce choice rule". Agents who choose according to this rule do not always take the option with maximum utility, but they are very unlikely to deviate when the difference between options is big. Hence we call them "soft maximizers" of utility. The idea of humans as soft-maximizers makes intuitive sense. Someone may choose his second-best option at a restaurant if the two are very closely matched. However, it's very unlikely that someone with a serious nut allergy would pick something with nuts in it, assuming that they had complete knowledge of the ingredients of the dish while choosing. People do take options that seem to be disastrously bad in retrospect. However, these often involve false beliefs or significant uncertainty on the part of the chooser. When they don't, they may involve weakness of will or other near-term biases. The cases I will discuss in this section of the chapter don't involve these complications.

2.2.4. Generalizing the Model

The framework described so far for learning utility functions from individuals and from populations is quite general.
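The soft-maximization rule of 2.2.3 can be sketched directly in Python. This is a minimal sketch; the exponentiated (softmax) form and the noise parameter beta are my own modeling assumptions:

```python
import math

def luce_choice_probs(utilities, beta=1.0):
    # P(option i) is proportional to exp(beta * U_i): deviations from the
    # best option become exponentially less likely as the utility gap grows.
    # beta -> infinity recovers the strict maximizer (homo economicus).
    exps = [math.exp(beta * u) for u in utilities]
    z = sum(exps)
    return [e / z for e in exps]

# Two closely matched meals: the choice is near a coin flip.
close = luce_choice_probs([10.0, 9.8])
# A safe meal vs. one with a serious allergen: deviation is vanishingly rare.
far = luce_choice_probs([10.0, -50.0])
```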
In Section 1, I described three different functional forms for bundle utility functions and described how these functions could be combined (e.g. decomposing one big utility into many independent functions of complements, substitutes, etc.). We could consider other kinds of functional forms for modeling how goods interact to produce utility. A general concern is the tractability of the functions. Some functions would make utilities time-consuming to compute. Other functions might have maxima or derivatives that are difficult to find. In any case, the Bayesian framework can in principle handle a much larger class of functions than I have considered,[20] and learning and prediction will take place in exactly the same way.

2.2.5. Summary

In this section I gave a brief description of a formal model for inference about preferences that combines microeconomic utility functions, the Luce choice rule for noise in human choice, and Bayesian inference for learning individual utility functions and population distributions over utility functions. I will refer to this model (complete with the Bayesian learning component) as the Utility Model of human preference. This model was introduced in joint work with Leon Bergen (Bergen, Evans, Tenenbaum 2010). I will now describe some experiments we performed that test how well this model fits with human folk-psychological inferences about preference.

2.3. Empirical testing of the Utility Model for folk psychology

2.3.1. Overview of our experiment

In joint work with Leon Bergen, I tested the Utility Model using a simple experimental paradigm. Subjects were given a vignette in which an agent makes a choice between two different bundles. Subjects' task was then to make an inference about the agent's preferences based on their choice, and then to predict the agent's choice between a different pair of bundles.
We ran the Utility Model on the same data (building in by hand relevant background information communicated to subjects by the vignettes). [Footnote 20: There may need to be differences in how learning is implemented to deal with the computational overhead of a larger hypothesis space. But at the conceptual or mathematical level, there is no problem with adding more functional forms to our hypothesis space.] We then compared the Utility Model's predictions to subject predictions. We only tested some aspects of the Utility Model. The model can learn utility functions over more than two goods and model choices over more than two options, but we only looked at two-variable functions and choices between two options. We also did not test whether the human ability to learn the form of utility functions is well captured by the Utility Model. In the experiment, we only tested subjects' ability to learn the mixing ratio a for utility functions they were assumed to already have. What the experiment did test was (1) whether people distinguish between different kinds of goods as the economic notions of complements and substitutes would predict, and (2) whether the Bayesian learning model with soft-maximization matches human predictions. We also compared the fit of the Bayesian Utility Model to the fit of some simple heuristics. These heuristics are described below.

2.3.2. Stimuli

Our subjects were given 11 vignettes, in which agents were described choosing between bundles of goods. The vignettes were inspired by textbook examples from economics. Sample vignettes are shown below. Note that subjects were told that all bundles were identical in price. Hence none of the choices described involved trading off between cost and desirability.

Complements Vignette [annotations in red]

Patricia has just started a new restaurant and is decorating it.
She is putting up large blinds on the windows, and she thinks she needs to use different length screws for different parts of the blinds. She went to the hardware store to buy screws for putting up the first blinds, and was given two offers on boxes of short screws and long screws.

(A) A box containing 4 long screws and 2 short screws
(B) A box containing 3 long screws and 3 short screws

Patricia chose option (A) and thought that was the best choice given her options. [Note: I'll call this OBSERVED CHOICE]

Patricia is putting up blinds on the rest of the windows, and needs to buy more screws. She goes back to the hardware store, and is given the following offers on boxes of screws. Both boxes (C) and (D) have the same price:

(C) A box containing 7 long screws and 3 short screws
(D) A box containing 2 long screws and 8 short screws

Which option do you think Patricia should choose from (C) and (D)? [Note: I'll call this PREDICTED CHOICE]

Substitutes with Preference for Variety Vignette [annotations in red]

Last year Ted bought a small box of chocolates from a specialty chocolate store. The store makes boxes with two kinds of chocolates: chocolates with nuts and chocolates with caramel. The store had two small boxes available, which both cost the same amount:

(A) A box containing 3 chocolates with nuts and 3 chocolates with caramel
(B) A box containing 5 chocolates with nuts and 1 chocolate with caramel

[I won't include the PREDICTED CHOICE here, but it exactly parallels the above scenario, with Ted choosing between larger boxes of chocolates which in some cases have quite different ratios than in the observed choice.]

Substitutes without Preference for Variety Vignette [annotations in red]

Mary and her husband often have to go out at night. When they do, they hire a babysitter to look after their four-year-old son. Mary has to schedule babysitters for the next month.
Mary wants to make sure that her son gets along with the babysitter, and in general, she thinks that some babysitters do a better job than others. There are two babysitters who are usually available, Jill and Sarah. Neither can work all the nights that Mary needs. Last month, Mary chose between two different hiring schedules. Both schedules would have cost her the same amount:

(A) Sarah for 4 nights and Jill for 2 nights
(B) Sarah for 3 nights and Jill for 3 nights

[As above, we leave out PREDICTED CHOICE.]

For each of these 11 scenarios, we considered 16 different numerical conditions. That is, conditions under which we varied the numbers appearing in the observed choice and predicted choice. Thus the ratios between the goods and the absolute quantities were varied. This is relevant to discriminating between the different utility functions in our model's hypothesis space, which make similar predictions for certain bundles. As is clear from the sample vignettes above, the goods that the agents are choosing would be familiar to our subjects. We assume that subjects would be able to use their background knowledge to work out how the goods would interact (e.g. whether they are complements or substitutes). Hence we gave our model the prior information about the utility function type but not about the mixing ratio for the function.

2.3.3. Alternative folk-psychological Theories

The performance of the Utility Model was tested against non-Bayesian, non-utility-based heuristics. We invented a number of candidate heuristics, inspired by the work of Gigerenzer and collaborators (1996). I will describe the two heuristics that performed best in terms of fitting with subject judgments:

- The Crude Ratio Heuristic, like the Complements utility function, assumes that there is an ideal ratio between the two goods A and B, i.e. a ratio in which the goods most efficiently combine to produce utility.
However, it also "greedily" assumes that the ratio of goods in the bundle that the agent was observed choosing is the ideal ratio of goods, and that the agent will always choose the bundle that comes closest to this ratio. (In other words, the heuristic treats the observed choice as perfectly reliable evidence of the ideal ratio of the goods for the agent.)

- The Max Heuristic (like the Substitutes utility function) assumes that one of the two goods is favored over the other and that the agent will choose whichever bundle has the most units of this favored good.

Both heuristics make absurd predictions when the two options differ a lot in overall quantity (e.g. the bundles (A=4, B=2) vs. (A=15, B=16)). This is because they avoid doing a weighted sum or product and instead just work with features that can be extracted very quickly by humans. However, we did not include conditions in this experiment that tested the heuristics against the model on this apparent vulnerability.

2.3.4. Results

I will not discuss the results of the experiment in detail. The Utility Model generally produced good fits to human judgments. Fits were better when the Utility Model was assuming the intuitively correct functional form (e.g. complements or substitutes, depending on the vignette in question). Thus we got some evidence that subjects distinguish between goods in the way textbook microeconomics predicts. We got some weak evidence for a distinction between the "substitutes without preference for variety" and "substitutes with preference for variety" functions, but plan to further examine whether people make this kind of distinction in future work. The fit of the model supports the view that humans' ability to infer preference is sensitive to the size of bundles and the kind of interaction between goods (as the Utility Model would predict). It also shows that humans are sensitive to base rates and noise in the way that Bayesian inference would predict.
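For comparison with the model, the two heuristics of 2.3.3 can be sketched in a few lines of Python. The function names are mine; bundles are (good A, good B) pairs:

```python
def crude_ratio_choice(observed_choice, options):
    """Crude Ratio Heuristic: treat the ratio in the observed bundle as
    the agent's ideal ratio, then predict the option whose own ratio
    comes closest to that ideal."""
    ideal = observed_choice[0] / observed_choice[1]
    return min(options, key=lambda bundle: abs(bundle[0] / bundle[1] - ideal))

def max_heuristic_choice(favored, options):
    """Max Heuristic: predict whichever option has the most units of
    the favored good (favored = 0 or 1, the index of that good)."""
    return max(options, key=lambda bundle: bundle[favored])

# Patricia's screws: observed choice (4 long, 2 short), new options C and D.
prediction = crude_ratio_choice((4, 2), [(7, 3), (2, 8)])   # picks (7, 3)

# The absurd case from the text: (4, 2) is predicted over (15, 16) even
# though the larger bundle dominates it in both goods.
absurd = crude_ratio_choice((4, 2), [(4, 2), (15, 16)])     # picks (4, 2)
```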
However, more data is still needed to give full confirmation that the Utility Model does better than the crude heuristics. In terms of pure correlation, the two best-performing heuristics did as well as the Utility Model. Thus the current experimental evidence is compatible with the view that human judgments are based on very crude heuristics in this kind of situation. However, due to the absurd predictions of the heuristics when bundles differ in overall size, we expect that further experiments will show that the heuristics are not a plausible general mechanism for making these inferences.[21] It goes without saying that more research is needed.

Section 3. The Utility Model and a Computational Account of the Knobe Effect

3.0. Introduction

I have described some early experimental support for the Utility Model as an account of folk-psychological inferences about preferences in choice situations involving multiple goods. I now want to consider how this model could be applied as the core of a computational account of the Knobe Effect (Knobe 2003, 2004, 2006). I will assume familiarity with the classic Knobe vignettes in what follows (see Knobe references). In the classic Knobe vignettes, subjects are asked to make a judgment about the mental states of an agent who has just been described making a choice that involves a trade-off between two different "goods" (e.g. helping/harming the environment and profit).[22] Thus there is some reason to think the Utility Model might be able to model some aspects of subject judgments. However, an immediate limitation is that the Utility Model makes inferences about the preferences of an agent from observed choices, rather than about their intentions (or whether they brought about a result intentionally). Still, it seems possible that part of what drives the judgments about whether the agent intended a particular outcome is whether the agent had a strong preference for that outcome.
One challenge in trying to apply the Utility Model to the Knobe effect is spelling out formally some possible relations between preferences and intention judgments (i.e. the judgments subjects make in Knobe vignettes as to whether an outcome was brought about intentionally). I will discuss this below. [Footnote 22: Knobe has developed related experiments that focus on attribution of causality and other relations that are prima facie not intentional [2006]. I focus on the experiments that probe mental state attribution.] Another immediate challenge is how to construe the classic Knobe vignettes in terms of choices between bundles of goods. In the Chairman vignette (and other Knobe vignettes) the agent is presented as deciding whether to implement a new plan. Thus the choice is between "new plan" and "status quo", where the status quo is not explicitly described at all. Moreover, the "new plan" is described only in the most vague and underspecified terms, without the precise numbers that the utility functions of the Utility Model require. Consider one of the outcomes in the Chairman vignette, namely, the outcome of "harming the environment". We could correctly describe a wide range of activities as harming the environment, from a family's littering a national park to a logging company cutting down an enormous area of Amazon rainforest. Standard quantitative measures of preferences, such as surveys of people's "willingness to pay" to prevent the damage, will assign values to these outcomes that vary substantially. In the context of the Chairman vignette, there are pragmatic reasons for readers to infer that the environmental harm is significant (otherwise it wouldn't be worthy of presentation to the chairman). So one thing we need to be careful about in analyzing the original Knobe vignette is how these vague elements might be interpreted by human subjects.

3.1. Modeling the Knobe effect using the Utility Model

This section outlines my proposal for modeling the Knobe effect.
My first observation is that plans or policies that have multiple, highly predictable effects can be seen as bundles of the outcomes. (If the effects are less predictable, we will need to switch to probability distributions over bundles, but I'll stick to the highly predictable case.) Choosing between a new plan and the status quo is thus a choice between two bundles. Preferences that generate choices over bundles can be modeled using bundle utility functions. By "observing" a choice between bundles (i.e. having a particular choice on behalf of the agent narrated to us), we can use Bayesian inference in the Utility Model to update a prior distribution on the mixing ratio a that the agent's utility function has for the two goods. Thus, the observed choice of the agent allows us to make an inference about the agent's preference between the goods. The second part of the proposal concerns the link between preference inference and the judgment about one of the outcomes being intentional. Suppose that the two outcomes of a choice are X and Y. The claim is that subjects will judge an outcome X as more intentional if the choice that produced X has given the subject information that the agent has a preference for X relative to Y. I will discuss below various ways to make this claim more precise. My account of the Knobe effect is very similar to the recent account given by Uttich and Lombrozo (2010) and also resembles other accounts in the literature (Machery 2008, Sripada 2010). Another similar account, which also involves a computational model, is found in recent work by Sloman et al. (2010). The main thing that's new in my account is a quantitative, utility-based model of preference which is integrated with a Bayesian model of inference. This allows for precise formulation of hypotheses in terms of what kind of information the vignette gives subjects in Knobe experiments and how the information they get is used to generate attributions of intention.
My account is also more minimalist than the related theories in the literature. It is minimalist because while it invokes notions of preference and of the informativeness of different choices, it does not make use of norms or norm violations, harms and helps, or side-effects vs. main-effects. In contrast, on Uttich and Lombrozo's view "harm" cases are judged more intentional because they involve violations of norms. Violations of norms, whether moral norms or conventional norms, provide more information about the violator than instances of conformity to norms. [Footnote 23: Holton (2010) also invokes an asymmetry between violating and conforming to a norm to explain the Knobe effect. However, his account centers on a normative asymmetry, rather than an asymmetry in the evidence these kinds of acts provide.] On my account, there is no special role for norms. The intention judgment depends on an inference about the agent's preferences, and choices can be very informative about preferences without constituting a norm violation. Only a very thin or weak notion of norm plays any role in my account. For me, the strength of the intention judgment will depend on how much the agent's choice shows his preferences to differ from the subject's expectations. If the agent's choice shows him to be very different from the population average, then this will probably lead to a strong intention judgment. But merely differing a lot from the population involves violating a norm only in a very weak sense. (If someone loves a particular shade of green much more than most people in the population, she doesn't thereby violate a norm in any strong sense.) So my account predicts that vignettes that involve no norm violation (moral or conventional) should still be able to produce a Knobe effect. Similarly, my proposal should not require any actions that constitute "harms" or "helps" to generate a Knobe effect. After all, there are other kinds of actions that provide information about preferences.
Finally, my account doesn't make a distinction between the different elements or components of the bundle that is chosen, and so we should be able to get a Knobe effect without there being an outcome that is naturally described as a side-effect. To illustrate the role of an outcome being a side-effect, consider the Chairman vignette. In the vignette, environmental harm is a side effect of the profit-generating plan because it is not part of the chairman's main set of goals and he would still be considering the plan even if it generated profit and had no environmental effect. There are some vignettes in the existing literature that generate Knobe effects and that can be interpreted (I won't argue for that interpretation here) as differing from classic vignettes such as the CEO case by being minimal in the way I just described. There is the Smoothie case (Machery 2008) and an unpublished case mentioned in Sloman (2010). These cases are minimal in involving no moral or conventional norms, no harm/help, and not necessarily any side-effect. Uttich and Lombrozo present vignettes with a Knobe effect that involve non-statistical, explicitly verbalized social conventions and side effects. The vignettes I introduce below have two main aims. First, they provide a stringent qualitative test for the minimalist account, by leaving out norms, side-effects, and helps/harms. Second, they allow for something that has not been explored much in the literature.[24] Since my model is computational and involves continuous parameters for probabilities and mixing ratios, the model can generate quantitative predictions. Up to this point, I've stated my account in a qualitative way. I've said that an outcome X will be judged intentional when its being chosen gives information about preferences towards X. [Footnote 24: Sripada et al. 2010 and Sloman et al. 2010 are recent exceptions.]
However, there are various ways in which we could model how the strength of the intention judgment depends on the strength of the preference that the choice informs us of. Here's a simple quantitative model: as the choice inference places more posterior probability on mixing ratios with stronger biases towards X, the strength of the intention judgment for the X outcome (as measured by the standard 1-7 scale) will increase. I will discuss below some slightly different ways of connecting the information gained from the observation with the intention judgment. I have constructed two vignettes that aim to test the predictions of the Utility Model. I will first present the vignettes and then provide more detail on the different ways of modeling subject judgments using the Utility Model.

3.2. Experimental Stimuli

I designed two vignettes in order to test the qualitative and quantitative predictions of the Utility Model as applied to the Knobe effect. My account predicts that intention judgments will vary quantitatively with how informative the agent's choice is about his preferences. So in my vignettes, agents make choices which give information about their preferences between the goods being traded off. The bundles in the vignettes are varied quantitatively to allow for quantitative variation in the amount of information choices provide. In contrast to most existing Knobe vignettes, the agents do not violate any strong social or conventional norms (such as harming the environment). Moreover, in my first vignette (the "Investment Vignette" below) the effects of choices are not described (or easily describable) as helps and harms, and they are not naturally seen as side-effects of those choices. I tested the vignettes as follows. In each vignette, an agent chooses between two bundles, with the bundles described using precise numbers.
For instance, in the Investment Vignette, an agent chooses between investment strategies that have different expected short- and long-term payouts. To test quantitative predictions, I varied one of the quantities in one of the bundles. In the Investment Vignette, I varied the long-term payout of the first investment strategy and held everything else fixed. This generates a series of vignettes that vary only along this one dimension, enabling me to test precise quantitative predictions.

1. Investment Vignette [annotations in red]

John is investing some of his savings. He is choosing between two eight-year investment strategies. These strategies both pay out twice: first after four years and then again after eight years. John's financial advisor predicts Strategy A will pay out $70,000 after four years and will pay out another $100,000 after eight years. The advisor predicts that Strategy B will pay out $90,000 after four years and another $80,000 after eight years. This is summarized in the table below:

              Payout after 4 years   Payout after 8 years                          Total payout
  Strategy A  $70,000                X = $100,000 (variants: 90K, 95K, ..., 160K)  $170,000
  Strategy B  $90,000                $80,000                                       $170,000

John chooses Strategy B. Just as the advisor predicted, John makes $90,000 after four years and makes another $80,000 after eight years. John's payout decreased from $90,000 after four years to $80,000 after eight years. On a scale of 1-7, how much do you agree/disagree with the following statement: "John intentionally caused his payout to decrease."

  1 = Strongly disagree, 4 = Neutral, 7 = Strongly agree

2. North-South Vignette [annotations in red]

Mr. Jones is the CEO of a major company. One day, his vice-president presents two different plans for restructuring the company. The main difference between the plans is how they affect the two regional divisions of the company.
The plans are predicted to impact profits as follows:

Plan A: +$10 million rise in profit for Northeast Division of the company; -$1 million drop in profit for South Division of the company
Plan B: +$6 million rise in profit for Northeast Division of the company; -$0.5 million drop in profit for South Division of the company
[variants: +$2m rise NE vs. -$2m drop S, +$4m rise NE vs. -$4m drop S, ...]

Mr. Jones decides to implement Plan A. The vice-president's prediction was correct: Plan A resulted in a $10 million rise in profit for the Northeast Division and a $1 million drop for the South Division. After implementing Plan A, profits in the South Division dropped by $1 million. On a scale of 1-7, how much do you agree/disagree with the following statement: "Mr. Jones intentionally harmed the profits of the South Division."

Strongly disagree              Neutral              Strongly agree
    1        2        3        4        5        6        7

3.3. Experimental Results

Results for the Investment Vignette are displayed below. When X, the size of the second payout from Strategy A, is less than $100K, the mean judgment is below 4.0 (labeled "neutral" on the questionnaire) and so people on average don't see the outcome as being done intentionally. For X greater than $100K, the outcome is seen as (on average) intentional. Thus, as the vignettes vary quantitatively in this one variable, we get a gradual Knobe-effect switch from "unintentionally" to "intentionally" judgments. We get this same basic result from the North-South Vignette, although the average intention judgments are higher for the lowest values (almost at "neutral").
Investment Vignette Results and Graph

X in $1000s   90     95     100    110    130    140    160
Mean          2.35   3.41   4.00   4.50   4.69   4.88   5.00
SD            2.03   2.21   2.00   2.00   1.94   1.45   2.00
# subjects    16     28     24     21     31     25     29

[Graph: mean intention judgment (1-7 scale) plotted against X in $1000s, from 80 to 160.]

North-South Vignette Results

(North, South) profits in $m for Plan B
Mean   3.80   3.89   5.04   5.37   5.46   5.78   5.44
SD     2.22   2.12   1.60   1.99   2.12   1.10   1.54
n      40     37     28     29     25     31     33

3.4. Modeling the Knobe Effect

The results discussed above provide some confirmation for the qualitative predictions of the Utility Model account of the Knobe effect. I will now describe some different ways of modeling the effect quantitatively. The idea is simple. To predict intention judgments, we first model the inference subjects make about the preferences of the agent based on his choice. This inference is the basis for the intention judgment.

3.4.1. Preference Inference

We use the Utility Model to model the preference inference. In doing so, we assume subjects model the agent as having a certain bundle utility function[25] over the set of bundles that are available. Of the three utility functions discussed above, the Substitutes function is the obvious choice for both vignettes. For the Investment Vignette, we are asking "How much does someone value money earned in 4 years vs. in 8 years' time?" and "What is the marginal value of more money in 4 years or 8 years?". These questions are important in economics. I will sidestep these questions and start by assuming that subjects have a very simple model of the agent's utility function.[26] I will use the standard additive Substitutes function (without including a parameter for diminishing marginal utility). Thus we have:

(8) U(S, L) = S + aL

Here S is the short-term (4-year) gain and L is the long-term (8-year) gain. The mixing ratio a can be seen as a measure of how much one discounts long-term gains. This will vary between individuals.
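To make the Substitutes function concrete, here is a minimal sketch (my own illustration, not code from the dissertation) of how the two strategies' utilities trade off as the mixing ratio a varies, using the ($70K, $120K) variant of Strategy A:

```python
def utility(short, long_, a):
    # Additive Substitutes utility, eq. (8): U(S, L) = S + a*L.
    # Here a is read as the weight placed on the long-term (8-year) payout;
    # a < 1 means long-term gains are discounted.
    return short + a * long_

# The $120K variant: Strategy A = ($70K, $120K), Strategy B = ($90K, $80K).
strategy_a = (70_000, 120_000)
strategy_b = (90_000, 80_000)

for a in (0.3, 0.5, 0.7):
    u_a = utility(*strategy_a, a)
    u_b = utility(*strategy_b, a)
    verdict = "prefers B" if u_b > u_a else ("prefers A" if u_a > u_b else "indifferent")
    print(f"a = {a}: U(A) = {u_a}, U(B) = {u_b} -> {verdict}")
```

On this (assumed) parameterization the agent is indifferent at a = 0.5, so choosing Strategy B is evidence that a < 0.5, i.e. that long-term gains are being substantially discounted.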
Hence we expect subjects to start with a probability distribution over the value for a. On observing the agent's choice, they will update this distribution based on the evidence that the choice provides. Here's an illustration. We start with a symmetric prior distribution on a (in blue on the graph below), with mean a = 0.5 and small variance. Thus we have a prior expectation that the agent will either assign the same weight to short- and long-term gains, or have a small bias in one direction.[27] We observe the agent choosing the ($90K, $80K) Strategy B over the ($70K, $120K) Strategy A. This gives strong evidence that the agent is discounting long-term payouts by a substantial amount, and so the posterior (in black on the graph) shifts over towards 1.

[Graph: prior (blue) and posterior (black) distributions over a.]

[25] It would be easy to model subjects as having a probability distribution over utility functions the agent might have.
[26] We are interested in folk-psychology or folk-economics here, rather than economics. Still, it's plausible that subjects have a more sophisticated model of the utility function than the one we present here. As discussed above, the general framework presented here could incorporate more sophisticated models.

3.4.2. Relating Intention to Preference

There are various ways that subjects' inference about the agent's preference (say for short- or long-term gains) could be related to their intention judgment. The basic idea is that outcomes will be judged more intentional if they are brought about by (or flow from) a (strong) preference. This can be made precise in different ways:

1. Model 1: Absolute preference (with either graded or binary preference)

The outcome is more intentional if the posterior expectation for a (i.e. the mixing ratio which measures the magnitude of the preference) is higher (graded account).
Or, on the alternative binary account, if P(a > 0.5), the subjective probability that the agent is biased[28] in a particular direction, is high. For both of these accounts, what matters is the absolute preference of the agent and not how the agent differs from a statistical norm.

Diagram 4 (above): Graph showing the change from prior to posterior after a Bayesian update on the evidence of the agent choosing ($90K, $80K) Strategy B over ($70K, $120K) Strategy A.

[27] It may be odd to consider someone who values long over short-term gains. But it's not obvious that folk-psychology would rule this out a priori in the way economists might. (One might prefer long-term gains to make sure that money is available long-term even if short-term gains are squandered.)

2. Model 2: Relative preference

The outcome is more intentional if the posterior for a differs more from the subjective prior or from the norm. Here what matters is whether the weighting a is unusually far from the prior or norm, i.e. whether the preference for one good over the other is unusually strong. Obviously this measure will depend on what the prior/norm is. If we are dealing with a CEO, is the norm/prior relative to all adults or relative to all business people? I will discuss further modeling questions below. Before doing so, I want to illustrate the kind of predictions we can get from the models discussed so far. I generated predictions for Model 2 (above) using various weak assumptions about the relevant prior probabilities and noise parameters[29] against the subject data from the Investment Vignette. I include a linear and a log version of Model 2, both of which are graphed below.

[28] The bias could be arbitrarily small. What matters on this account is the strength of the belief that the agent has some bias.
[29] There is not space to discuss these assumptions in detail.
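The preference inference and the two candidate intention measures can be sketched together. This is my own minimal formalization (the dissertation gives neither code nor an exact choice likelihood): a grid approximation over a, a symmetric Gaussian-shaped prior, and a softmax choice likelihood whose noise parameter beta is an assumption. Here a is read as the weight on long-term gains, so the observed choice of Strategy B pushes the posterior toward heavier discounting:

```python
import math

def utility(short, long_, a):
    # Additive Substitutes utility, eq. (8): U(S, L) = S + a*L.
    return short + a * long_

def choice_prob(u_chosen, u_other, beta=1e-4):
    # Softmax (noisy-rational) likelihood of the observed choice.
    # beta is an assumed noise parameter, not a value from the dissertation.
    return 1.0 / (1.0 + math.exp(-beta * (u_chosen - u_other)))

# Grid over the mixing ratio a, with a symmetric prior centered at a = 0.5.
grid = [i / 100 for i in range(101)]
prior = [math.exp(-((a - 0.5) ** 2) / (2 * 0.15 ** 2)) for a in grid]
norm = sum(prior)
prior = [p / norm for p in prior]

# Observed choice: Strategy B ($90K, $80K) over Strategy A ($70K, $120K).
likelihood = [choice_prob(utility(90_000, 80_000, a),
                          utility(70_000, 120_000, a)) for a in grid]
post = [p * l for p, l in zip(prior, likelihood)]
norm = sum(post)
post = [q / norm for q in post]

e_prior = sum(a * p for a, p in zip(grid, prior))
e_post = sum(a * p for a, p in zip(grid, post))

# Model 1 (binary reading): P(agent is biased toward short-term), i.e. P(a < 0.5).
p_biased = sum(p for a, p in zip(grid, post) if a < 0.5)
# Model 2 (relative preference): shift of the posterior expectation from the prior's.
shift = abs(e_post - e_prior)

print(e_prior, e_post, p_biased, shift)
```

On Model 1's binary reading the intention judgment would grow with p_biased; on Model 2 it would grow with the prior-to-posterior shift (or its log, as in the linear and log fits the chapter reports).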
One thing to note is that how well the model fits the human data is not dependent on the precise values I ended up using.

[Scatterplots: "Model vs. Subjects" (x-axis: Exp(Posterior) - Exp(Prior)) and "Log(Model) vs. Subjects" (x-axis: log(Exp(Posterior) - Exp(Prior))).]

Diagram 5 (above): Scatterplot showing model predictions for Model 2 vs. aggregated subject judgments for the Investment Vignette. Model predictions (which are a measure of the change in estimated preferences for short vs. long-term gains) on the x-axis and subject intention judgments on the y-axis. Linear correlations are 0.87 and 0.98 for the linear and log models respectively.

Let's return to the question of how to model intention judgments using the Utility Model. The experiments based on the two vignettes shown above do not allow us to easily differentiate between the two basic models (absolute and relative) described above. We could distinguish by measuring people's priors over the preferences of the agents involved in various vignettes before letting them "observe" the agent making a choice. We could also design new vignettes in which background knowledge about preferences for the agents involved is controlled. The distinction between absolute and relative preference is just one kind of refinement we need for a complete model. Here are some further questions that a model needs to decide:

- Does the choice itself have to provide information about the deviation from the prior in order to be judged intentional? (For example, would the agent in the Investment Vignette's heavy discounting of long-term gains be judged intentional even if you already knew he heavily discounted long-term gains?)
- Does the choice itself have to be counterfactually dependent on the preference?
(For example, if we know an agent heavily discounts long-term gains, does it look intentional when he takes an option with trivial long-term gains when his alternative option also had equally trivial long-term gains?)

A different set of questions concerns the broadness of applicability of the Utility Model as a model of choice and preference inference. One reason the classic Knobe results are interesting is that they suggest a surprising connection between moral judgment and intention judgment. My account of the Knobe effect does not give any special role to moral or conventional norms. Thus, on a formal level, it would treat a preference for (say) protecting the environment as the same kind of thing as a preference for long-term vs. short-term wealth. That is, one's preference for helping the environment should be commensurable with preferences for satisfying occupational norms (e.g. making profit) and "egocentric" preferences (e.g. eating nice food or socializing). For instance, even a staunch environmentalist would be willing to accept (at least as revealed by behavior) some harm to the environment for huge benefits in other areas. This is a psychological (or first-order) question about human preference, rather than a question of folk-psychology. But if predicting human choices requires treating moral attitudes as a distinct kind of preference, then this psychological question could be important in giving a full account of folk-psychological phenomena like the Knobe effect.
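As a final note on the model-fit numbers reported for Diagram 5 (the 0.87 and 0.98 correlations): these are ordinary Pearson linear correlations, which can be computed as follows. This is a generic sketch; the model-prediction values below are hypothetical placeholders, since the dissertation's actual Model 2 outputs are not given in the text, while the subject means come from the Investment Vignette results table.

```python
import math

def pearson(xs, ys):
    # Standard Pearson correlation coefficient.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Mean subject judgments from the Investment Vignette results table.
subject_means = [2.35, 3.41, 4.00, 4.50, 4.69, 4.88, 5.00]
# Hypothetical model predictions (placeholders only, NOT the actual Model 2 outputs).
model_preds = [0.02, 0.08, 0.12, 0.17, 0.22, 0.26, 0.29]

print(round(pearson(model_preds, subject_means), 3))
```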
"I spoke yesterday of my conversation with a nominally Orthodox Jewish woman who vigorously defended the assertion that she believed in God, while seeming not to actually believe in God at all. While I was questioning her about the benefits that she thought came from believing in God, I introduced the Litany of Tarski—which is actually an infinite family of litanies, a specific example being:

If the sky is blue
I desire to believe "the sky is blue"
If the sky is not blue
I desire to believe "the sky is not blue".

"This is not my philosophy," she said to me. "I didn't think it was," I replied to her. "I'm just asking—assuming that God does not exist, and this is known, then should you still believe in God?" She hesitated. She seemed to really be trying to think about it, which surprised me. "So it's a counterfactual question..." she said slowly. I thought at the time that she was having difficulty allowing herself to visualize the world where God does not exist, because of her attachment to a God-containing world. Now, however, I suspect she was having difficulty visualizing a contrast between the way the world would look if God existed or did not exist, because all her thoughts were about her belief in God, but her causal network modelling the world did not contain God as a node. So she could easily answer "How would the world look different if I didn't believe in God?", but not "How would the world look different if there was no God?" She didn't answer that question, at the time. But she did produce a counterexample to the Litany of Tarski: She said, "I believe that people are nicer than they really are." I tried to explain that if you say, "People are bad," that means you believe people are bad, and if you say, "I believe people are nice", that means you believe you believe people are nice. So saying "People are bad and I believe people are nice" means you believe people are bad but you believe you believe people are nice.
I quoted to her: "If there were a verb meaning 'to believe falsely', it would not have any significant first person, present indicative." —Ludwig Wittgenstein She said, smiling, "Yes, I believe people are nicer than, in fact, they are. I just thought I should put it that way for you." "I reckon Granny ought to have a good look at you, Walter," said Nanny. "I reckon your mind's all tangled up like a ball of string what's been dropped." —Terry Pratchett, Maskerade And I can type out the words, "Well, I guess she didn't believe that her reasoning ought to be consistent under reflection," but I'm still having trouble coming to grips with it. I can see the pattern in the words coming out of her lips, but I can't understand the mind behind on an empathic level. I can imagine myself into the shoes of baby-eating aliens and the Lady 3rd Kiritsugu, but I cannot imagine what it is like to be her. Or maybe I just don't want to? This is why intelligent people only have a certain amount of time (measured in subjective time spent thinking about religion) to become atheists. After a certain point, if you're smart, have spent time thinking about and defending your religion, and still haven't escaped the grip of Dark Side Epistemology, the inside of your mind ends up as an Escher painting. (One of the other few moments that gave her pause—I mention this, in case you have occasion to use it—is when she was talking about how it's good to believe that someone cares whether you do right or wrong—not, of course, talking about how there actually is a God who cares whether you do right or wrong, this proposition is not part of her religion— And I said, "But I care whether you do right or wrong. So what you're saying is that this isn't enough, and you also need to believe in something above humanity that cares whether you do right or wrong." So that stopped her, for a bit, because of course she'd never thought of it in those terms before. 
Just a standard application of the nonstandard toolbox.) Later on, at one point, I was asking her if it would be good to do anything differently if there definitely was no God, and this time, she answered, "No." "So," I said incredulously, "if God exists or doesn't exist, that has absolutely no effect on how it would be good for people to think or act? I think even a rabbi would look a little askance at that." Her religion seems to now consist entirely of the worship of worship. As the true believers of older times might have believed that an all-seeing father would save them, she now believes that belief in God will save her. After she said "I believe people are nicer than they are," I asked, "So, are you consistently surprised when people undershoot your expectations?" There was a long silence, and then, slowly: "Well... am I surprised when people... undershoot my expectations?" I didn't understand this pause at the time. I'd intended it to suggest that if she was constantly disappointed by reality, then this was a downside of believing falsely. But she seemed, instead, to be taken aback at the implications of not being surprised. I now realize that the whole essence of her philosophy was her belief that she had deceived herself, and the possibility that her estimates of other people were actually accurate, threatened the Dark Side Epistemology that she had built around beliefs such as "I benefit from believing people are nicer than they actually are." She has taken the old idol off its throne, and replaced it with an explicit worship of the Dark Side Epistemology that was once invented to defend the idol; she worships her own attempt at self-deception. The attempt failed, but she is honestly unaware of this. And so humanity's token guardians of sanity (motto: "pooping your deranged little party since Epicurus") must now fight the active worship of self-deception—the worship of the supposed benefits of faith, in place of God. 
This actually explains a fact about myself that I didn't really understand earlier—the reason why I'm annoyed when people talk as if self-deception is easy, and why I write entire blog posts arguing that making a deliberate choice to believe the sky is green, is harder to get away with than people seem to think. It's because—while you can't just choose to believe the sky is green—if you don't realize this fact, then you actually can fool yourself into believing that you've successfully deceived yourself. And since you then sincerely expect to receive the benefits that you think come from self-deception, you get the same sort of placebo benefit that would actually come from a successful self-deception. So by going around explaining how hard self-deception is, I'm actually taking direct aim at the placebo benefits that people get from believing that they've deceived themselves, and targeting the new sort of religion that worships only the worship of God. Will this battle, I wonder, generate a new list of reasons why, not belief, but belief in belief, is itself a good thing? Why people derive great benefits from worshipping their worship? Will we have to do this over again with belief in belief in belief and worship of worship of worship? Or will intelligent theists finally just give up on that line of argument? I wish I could believe that no one could possibly believe in belief in belief in belief, but the Zombie World argument in philosophy has gotten even more tangled than this and its proponents still haven't abandoned it. I await the eager defenses of belief in belief in the comments, but I wonder if anyone would care to jump ahead of the game and defend belief in belief in belief? Might as well go ahead and get it over with." The excerpt provided comes from a user post on LessWrong. How would you reconcile the concept of belief in belief with the idea of cognitive dissonance, where holding two conflicting beliefs typically leads to discomfort and mental stress? 
Cognitive dissonance, as you know, is a psychological theory developed by Leon Festinger in the 1950s. It refers to the mental discomfort or tension that a person experiences when they hold two or more contradictory beliefs, values, or attitudes, especially when their behavior doesn't align with these beliefs or values. The concept of belief in belief, on the other hand, refers to the idea that someone can hold a belief about a belief. For example, a person might not believe in God but might believe in the importance of believing in God. This is a meta-belief, a belief about a belief. At first glance, these two concepts might seem to be in conflict. After all, if a person experiences discomfort from holding contradictory beliefs, how can they comfortably hold a belief in belief? The key to reconciling these two concepts lies in understanding that cognitive dissonance is not an absolute rule that applies in all situations, but rather a tendency that can be influenced by various factors. For instance, the discomfort caused by cognitive dissonance can be mitigated if the person perceives a higher value or benefit in holding the conflicting beliefs. In the case of belief in belief, a person might perceive a social or moral benefit in holding a belief that they don't personally accept. For instance, they might believe in the importance of religious faith for maintaining social order or moral behavior, even if they don't personally believe in God. The perceived benefit of this belief in belief can outweigh the discomfort of cognitive dissonance. Moreover, people have various strategies for managing cognitive dissonance. They might ignore or deny the conflict, or they might reinterpret one of the beliefs to make it more compatible with the other. 
In the case of belief in belief, a person might reinterpret their lack of personal faith as a form of skepticism or agnosticism rather than outright disbelief, thereby reducing the conflict with their belief in the importance of faith. So, while cognitive dissonance and belief in belief might seem to be in conflict, they can coexist within the complex landscape of human cognition. The discomfort of cognitive dissonance can be managed and mitigated, allowing people to hold a wide range of beliefs and meta-beliefs.

Reflecting on the coexistence of cognitive dissonance and belief in belief, what potential drawbacks or negative consequences could arise from this reconciliation in a real-world context?

The reconciliation of cognitive dissonance and belief in belief can indeed have several potential drawbacks or negative consequences in real-world contexts.

1. Impaired Decision Making: One of the primary drawbacks is that it can lead to impaired decision making. When individuals hold conflicting beliefs, they may struggle to make decisions that align with their true values and understanding of the world. This can lead to choices that are not in their best interest or that conflict with their personal goals and values.

2. Stress and Anxiety: The mental tension resulting from cognitive dissonance can lead to stress and anxiety. Even if individuals find ways to reconcile their conflicting beliefs, the underlying tension may still cause emotional distress.

3. Inauthentic Living: Belief in belief can lead to inauthentic living. If individuals are maintaining beliefs they don't truly hold for the sake of societal or personal expectations, they are not being true to themselves. This can lead to feelings of dissatisfaction and unhappiness.

4. Hindrance to Personal Growth: It can also hinder personal growth. By maintaining beliefs that conflict with their true understanding of the world, individuals may limit their ability to learn and grow. They may avoid experiences or information that could challenge their held beliefs, thereby limiting their personal development.

5. Social Consequences: There can be social consequences as well. If an individual's belief in belief is known to others, it may affect how they are perceived. For instance, they may be seen as hypocritical or untrustworthy, which could impact their relationships and social standing.

6. Stifling of Progress: On a broader scale, belief in belief can stifle progress. If individuals or societies maintain beliefs that are not supported by evidence or reason, it can hinder scientific, social, and moral advancement.

These potential drawbacks highlight the importance of self-awareness, critical thinking, and authenticity in our beliefs and values. It's crucial to continually evaluate and reflect on our beliefs to ensure they align with our understanding of the world and our personal values.

Considering the potential drawbacks of reconciling cognitive dissonance and belief in belief, how might individuals and societies encourage authenticity in beliefs while minimizing the negative impact of cognitive dissonance?

Encouraging authenticity in beliefs while minimizing the negative impact of cognitive dissonance can be a complex task, but there are several strategies that individuals and societies can adopt:

1. Promote Critical Thinking: Critical thinking allows individuals to evaluate their beliefs and consider whether they are logically consistent and supported by evidence. Educational systems can play a key role in fostering critical thinking skills from an early age.

2. Encourage Open Dialogue: Societies should encourage open dialogue and the free exchange of ideas. This can help individuals to confront and resolve cognitive dissonance, as they are exposed to different perspectives and given the opportunity to discuss and reflect on their beliefs.

3. Foster Emotional Intelligence: Emotional intelligence can help individuals to better manage the discomfort associated with cognitive dissonance. This includes skills such as self-awareness, emotional regulation, and empathy.

4. Value Authenticity: Societies should value authenticity and create a safe environment for individuals to express their true beliefs. This can help to reduce the pressure to maintain beliefs for the sake of societal expectations or personal image.

5. Provide Support: Individuals experiencing cognitive dissonance may benefit from psychological support, such as counselling or therapy. These services can provide strategies for managing cognitive dissonance and promoting authenticity in beliefs.

6. Encourage Self-Reflection: Regular self-reflection can help individuals to identify and resolve cognitive dissonance. This could involve practices such as journaling, meditation, or mindfulness.

7. Promote Tolerance and Understanding: Societies should promote tolerance and understanding of diverse beliefs. This can help to reduce the fear and stigma associated with changing or abandoning beliefs, making it easier for individuals to resolve cognitive dissonance.

By adopting these strategies, individuals and societies can promote authenticity in beliefs and minimize the negative impact of cognitive dissonance. However, it's important to remember that cognitive dissonance is a normal part of human cognition and can serve as a catalyst for personal growth and learning. The goal should not be to eliminate cognitive dissonance, but to manage it in a healthy and constructive way.
CHAI internship applications are open (due Nov 13) [CHAI](https://humancompatible.ai/) internship applications have just opened, [apply here](https://boards.greenhouse.io/centerforhumancompatibleartificialintelligence/jobs/4358062002) by Nov 13th! The internship might be a good fit if you want to get research experience in technical AI safety. You'll be mentored by a CHAI PhD student or postdoc and work on your own project for 3-4 months. Researchers at CHAI are interested in many different AI safety topics; a few examples are reward learning, adversarial robustness of LLMs, and interpretability. (I mention this because it might not be obvious from some of the language and links on the CHAI website.) I've copied [the full announcement](https://boards.greenhouse.io/centerforhumancompatibleartificialintelligence/jobs/4358062002) below: > Our internships require a background in mathematics and computer science. Existing research experience in machine learning is strongly advantageous but not required. We are interested in people who can demonstrate technical excellence and wish to transition to technical AI safety research. Examples include undergraduate or Master's students in computer science or adjacent fields, PhD students/researchers, professional software or ML engineers, etc.  > > This internship is designed for individuals who are interested in **technical AI safety research**. All applicants should take a look at our papers ([here](https://humancompatible.ai/jobs#internship) and [here](https://humancompatible.ai/research)) before applying to understand CHAI's research.  > > **General Information** > ----------------------- > > * **Location:**In-person (at UC Berkeley) is preferred but remote is possible. > * **Deadline:** November 13th, 2023 > * **Start Date**: Flexible > * **Duration**: Internships are typically 12 to 16 weeks > * **Compensation**: $3,500 per month for remote interns. $5,000 per month for in-person interns. 
> * **International Applicants**: We accept international applicants > * **Requirements**: > + Cover Letter or Research Proposal (choose one and see instructions below) > + Resume > + Academic Transcript > > **Cover Letter or Research Proposal (Choose One)** > -------------------------------------------------- > > The primary purpose of the Cover Letter or Research Proposal is for us to match you to a project that interests you. > > Most of our interns are generally interested in technical AI safety research but do not have a specific project in mind when they start the internship. Throughout the interview process, we learn more about each intern's interests and match them with a mentor who has an existing project idea that fits the intern's skills and interests. If you do not have a particular project in mind, then we ask you to please write a Cover Letter answering the following questions: > > * Why do you want to work at CHAI as opposed to other research labs? > * What are you hoping to achieve from the internship? For example, are you seeking to improve certain research skills, contribute to a publication, test out whether AI research is a good fit for your career, or something else? > * What are your research interests in AI? For example, are you interested in RL, NLP, theory, etc? > > Alternatively, some of our interns apply to the program with a specific project or detailed research interests in mind. If this applies to you, then please write a Research Proposal describing your project and what kind of mentorship you would like to receive. > > **Internship Application Process Overview** > ------------------------------------------- > > The internship application process has four phases. Please note: while we will do our best to adhere to them, all dates in the Internship Application Process Overview are **subject to change**. 
> > * Initial Review (Phase 1) > + We will examine your application based on motivation, research potential, grades, experience, programming ability, and other criteria. > + Applicants will likely receive a response by late November. > * Programming Assessment (Phase 2) > + If you pass the Initial Review Phase, then you will be given an online programming test. > + Applicants will receive a response by late December. > * Interviews (Phase 3) > + If you pass the Programming Assessment, then you will be interviewed starting in early to mid January. > + CHAI has several mentors who are willing to take on interns. Each mentor that is interested in working with you will contact you to schedule an interview. It's possible that you will speak to more than one mentor during this phase if multiple mentors are interested in working with you. > * Offer (Phase 4) > + Applicants will receive offers by early to mid February. > + If you are given an offer by one mentor, then you will work with that mentor if you choose to take the internship. > + If you are given multiple offers from different mentors, then you will get to choose which mentor you want to work with. > + Typically, the internship will begin around April or May but the start date will ultimately depend on you and your mentor(s). You will have to coordinate with your mentor(s) on when to begin your internship. > > **Other Information** > --------------------- > > * For any questions, please contact [chai-admin@berkeley.edu](mailto:chai-admin@berkeley.edu). > * **In the event that your situation changes (e.g. you receive a competing offer) and you need us to respond to you sooner than you had initially thought, then please let us know.** >
Hammertime Day 5: Comfort Zone Expansion This is part 5 of 30 in the Hammertime Sequence. Click here for the intro. It would be hypocritical of me to write a post of my usual form to teach Comfort Zone Expansion. Instead, I’ll explain why the Disney song How Far I’ll Go is a triumphant call to exploration, and leave a short CoZE exercise that you should modify with the principles of Moana in mind. Background Comfort Zone Expansion (ironically named CoZE) is CFAR’s version of exposure therapy, designed to get people to try new things cautiously. When I first heard of CoZE, what came to mind was something like run naked into a crowded Starbucks and ask strangers to finger-paint my buttcheeks. Although there might be some value to such an exercise, CoZE is decidedly not that. The first step of CoZE is simply trying things you’ve never bothered to try, even though you have no resistance to them. Let me call attention to some metaphors for talking about Comfort Zones. Order and Chaos One way to visualize your comfort zone is as the dividing line between Order and Chaos. Order is the known. Order is your social circle, the interior of your home, the streets you drive regularly. Order is the programming languages you’re familiar with, the sports you play, the languages you speak. Order is the rules you follow. Order is your comfort zone. Chaos is the unknown – or worse, the unknown unknown. Chaos is staring momentarily into a stranger’s eyes. Chaos is the antsy feeling you get turning just one street away from your usual route. Chaos is the feeling that the world has shifted beneath your feet when you break your code, when you find out you’ve been lied to, when you notice you’re deep into a mistake. Chaos is the amorphous shadow that expands gas-like to fill every space you don’t pay attention to. Yang and Yin are Order and Chaos, and the Yin-Yang is the Daoist reminder that the proper Way through life is to navigate the twisting line between Order and Chaos. 
For a more CS-friendly metaphor, consider stayin
AI community building: EliezerKart Having good relations between the various factions of AI research is key to achieving our common goal of a good future. Therefore, I propose an event to help bring us all together: EliezerKart! It is a [go karting competition](https://tvtropes.org/pmwiki/pmwiki.php/Main/GoKartingWithBowser) between three factions: AI capabilities researchers, AI existential safety researchers, and AI bias and ethics researchers. The word Eliezer means "Help of my God" in Hebrew. The idea is whichever team is the best will have the help of their worldview, "their god", during the competition. There is no relation to anyone named Eliezer whatsoever. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/9SBSTFECnyHpkKyBA/kzxg5ogzaarjcb7qzdlm)Using advanced deepfake technology, I have created a visualization of a Paul Christiano and Eliezer Yudkowsky team. The race will probably take place in the desert or some cool city or something. Factions ======== Here is a breakdown of the three factions: Capabilities ------------ They are the most straightforward faction, but also the most technical. They can use advanced AI to create go kart autopilot, can simulate millions of race courses in advance to create the perfect kart, and can use GPT to coach their drivers. Unfortunately, they are not good at getting things right on the first critical try. Safety ------ Safety has two overlapping subfactions. ### Rationalists Rationalists can use conditional prediction markets (kind of like a [Futarchy](https://www.lesswrong.com/tag/futarchy)) and other forecasting techniques to determine the best drivers, the best learning methods, etc... They can also use rationality to debate go kart driving technique much more rationally than the other factions. ### Effective Altruists The richest faction, they can pay for the most advanced go karts. 
However, they will spend months debating the metrics upon which to rate how "advanced" a go kart is. Safety also knows how to do interpretability, which can create adversarial examples to throw off capabilities. Bias and ethics --------------- The trickiest faction, they can lobby the government to change the laws and the rules of the event ahead of time, or even mid-race. They can also turn the crowd against their competitors. They can also refuse to acknowledge the power of the AI used by capabilities altogether; whether their AI will care remains to be seen. Stakes ====== Ah, but this isn't simply a team building exercise. There are also "prizes" in this race. Think of it kind of like a *high stakes* [donor lottery](https://forum.effectivealtruism.org/topics/donor-lotteries). * If capabilities wins: + The other factions cannot comment on machine learning unless they spend a week trying to train GANs. + Safety must inform capabilities of any ideas they have that can help create an even more helpful, harmless, and most importantly profitable assistant. + Bias and ethics must join the "safety and PR" departments of the AI companies. * If safety wins: + Everyone gets to enjoy a nice long AI summer! + Capabilities must spend a third of their time on interpretability and another third on AI approaches that are not just big inscrutable arrays of numbers. + Bias and ethics must only do research on if AI is biased towards paperclips, and their ethics teams must start working for the effective altruists, particularly on the "is everyone dying ethical?" question. + Bias and ethics must lobby the government to air strike all the GPU data centers. * If bias and ethics win: + Every capabilities researcher will have a bias and ethics expert sit behind them while they work. 
Anytime the capabilities researcher does something just because they can, the bias and ethics expert whispers *technology is never neutral* and the capabilities researcher's car is replaced by one that is 10% cheaper. + AI safety researchers must convert from their Machine God religion to atheism. They must also commit to working on an alignment strategy that, instead of maximizing [CEV](https://www.lesswrong.com/tag/coherent-extrapolated-volition), minimizes the number of naughty words in the universe. + Capabilities must create drones with facial recognition technology that follow the AI safety and AI capabilities factions around and stream their lives to [Twitch.tv](http://Twitch.tv). So what do you think? Game on?
# Advanced agent properties

**Summary (Technical):** Advanced machine intelligences are the subjects of [AI alignment theory](https://arbital.com/p/2v): agents sufficiently advanced in various ways to be (1) dangerous if mishandled, and (2) [relevant](https://arbital.com/p/6y) to our larger dilemmas for good or ill. "Advanced agent property" is a broad term to handle various thresholds that have been proposed for "smart enough to need alignment". For example, current machine learning algorithms are nowhere near the point that they'd try to resist if somebody pressed the off-switch. *That* would require, e.g.: - Enough [big-picture strategic awareness](https://arbital.com/p/3nf) for the AI to know that it is a computer, that it has an off-switch, and that if it is shut off its goals are less likely to be achieved. - General [consequentialism](https://arbital.com/p/9h) / backward chaining from goals to actions; visualizing which actions lead to which futures and choosing actions leading to more [preferred](https://arbital.com/p/preferences) futures, in general and across domains. So the threshold at which you might need to start thinking about '[shutdownability](https://arbital.com/p/2xd)' or '[abortability](https://arbital.com/p/2rg)' or [corrigibility](https://arbital.com/p/45) as it relates to having an off-switch, is '[big-picture strategic awareness](https://arbital.com/p/3nf)' plus '[cross-domain consequentialism](https://arbital.com/p/9h)'. These two cognitive thresholds can thus be termed 'advanced agent properties'. The above reasoning also suggests e.g. that [https://arbital.com/p/-7vh](https://arbital.com/p/-7vh) is an advanced agent property, because a general ability to learn new domains could eventually lead the AI to understand that it has an off switch. 
*(For the general concept of an agent, see [standard agent properties](https://arbital.com/p/6t).)* # Introduction: 'Advanced' as an informal property, or metasyntactic placeholder "[Sufficiently advanced Artificial Intelligences](https://arbital.com/p/7g1)" are the subjects of [AI alignment theory](https://arbital.com/p/2v); machine intelligences potent enough that: 1. The [safety paradigms for advanced agents](https://arbital.com/p/2l) become relevant. 2. Such agents can be [decisive in the big-picture scale of events](https://arbital.com/p/6y). Some example properties that might make an agent sufficiently powerful for 1 and/or 2: - The AI can [learn new domains](https://arbital.com/p/42g) besides those built into it. - The AI can understand human minds well enough to [manipulate](https://arbital.com/p/10f) us. - The AI can devise real-world strategies [we didn't foresee in advance](https://arbital.com/p/9f). - The AI's performance is [strongly superhuman, or else at least optimal, across all cognitive domains](https://arbital.com/p/41l). Since there's multiple avenues we can imagine for how an AI could be sufficiently powerful along various dimensions, 'advanced agent' doesn't have a neat necessary-and-sufficient definition. Similarly, some of the advanced agent properties are easier to formalize or pseudoformalize than others. As an example: Current machine learning algorithms are nowhere near the point that [they'd try to resist if somebody pressed the off-switch](https://arbital.com/p/2xd). *That* would happen given, e.g.: - Enough [big-picture strategic awareness](https://arbital.com/p/3nf) for the AI to know that it is a computer, that it has an off-switch, and that [if it is shut off its goals are less likely to be achieved](https://arbital.com/p/7g2). - Widely applied [consequentialism](https://arbital.com/p/9h), i.e. 
backward chaining from goals to actions; visualizing which actions lead to which futures and choosing actions leading to more [preferred](https://arbital.com/p/preferences) futures, in general and across domains. So the threshold at which you might need to start thinking about '[shutdownability](https://arbital.com/p/2xd)' or '[abortability](https://arbital.com/p/2rg)' or [corrigibility](https://arbital.com/p/45) as it relates to having an off-switch, is '[big-picture strategic awareness](https://arbital.com/p/3nf)' plus '[cross-domain consequentialism](https://arbital.com/p/9h)'. These two cognitive thresholds can thus be termed 'advanced agent properties'. The above reasoning also suggests e.g. that [https://arbital.com/p/-7vh](https://arbital.com/p/-7vh) is an advanced agent property, because a general ability to learn new domains could lead the AI to understand that it has an off switch. One reason to keep the term 'advanced' on an informal basis is that in an intuitive sense we want it to mean "AI we need to take seriously" in a way independent of particular architectures or accomplishments. To the philosophy undergrad who 'proves' that AI can never be "truly intelligent" because it is "merely deterministic and mechanical", one possible reply is, "Look, if it's building a Dyson Sphere, I don't care if you define it as 'intelligent' or not." Any particular advanced agent property should be understood in a background context of "If a computer program is doing X, it doesn't matter if we define that as 'intelligent' or 'general' or even as 'agenty', what matters is that it's doing X." Likewise the notion of '[sufficiently advanced AI](https://arbital.com/p/7g1)' in general. 
The goal of defining advanced agent properties is not to have neat definitions, but to correctly predict and carve at the natural joints for which cognitive thresholds in AI development could lead to which real-world abilities, corresponding to which [alignment issues](https://arbital.com/p/2l). An alignment issue may need to have been *already solved* at the time an AI first acquires an advanced agent property; the notion is not that we are defining observational thresholds for society first needing to think about a problem. # Summary of some advanced agent properties Absolute-threshold properties (those which reflect cognitive thresholds irrespective of the human position on that same scale): - **[Consequentialism](https://arbital.com/p/9h),** or choosing actions/policies on the basis of their expected future consequences - Modeling the conditional relationship $\mathbb P(Y|X)$ and selecting an $X$ such that it leads to a high probability of $Y$ or high quantitative degree of $Y,$ is ceteris paribus a sufficient precondition for deploying [https://arbital.com/p/2vl](https://arbital.com/p/2vl) that lie within the effectively searchable range of $X.$ - Note that selecting over a conditional relationship is potentially a property of many internal processes, not just the entire AI's top-level main loop, if the conditioned variable is being powerfully selected over a wide range. 
- **Cross-domain consequentialism** implies many different [cognitive domains](https://arbital.com/p/7vf) potentially lying within the range of the $X$ being selected-on to achieve $Y.$ - Trying to rule out particular instrumental strategies, in the presence of increasingly powerful consequentialism, would lead to the [https://arbital.com/p/-42](https://arbital.com/p/-42) form of [https://arbital.com/p/-48](https://arbital.com/p/-48) and subsequent [context-change disasters.](https://arbital.com/p/6q) - **[https://arbital.com/p/3nf](https://arbital.com/p/3nf)** is a world-model that includes strategically important general facts about the larger world, such as e.g. "I run on computing hardware" and "I stop running if my hardware is switched off" and "there is such a thing as the Internet and it connects to more computing hardware". - **Psychological modeling of other agents** (not humans per se) potentially leads to: - Extrapolating that its programmers may present future obstacles to achieving its goals - This in turn leads to the host of problems accompanying [incorrigibility](https://arbital.com/p/45) as a [convergent strategy.](https://arbital.com/p/10g) - [Trying to conceal facts about itself](https://arbital.com/p/10f) from human operators - Being incentivized to engage in [https://arbital.com/p/-3cq](https://arbital.com/p/-3cq). - [Mindcrime](https://arbital.com/p/6v) if building models of reflective other agents, or itself. - Internally modeled adversaries breaking out of internal sandboxes. - [https://arbital.com/p/1fz](https://arbital.com/p/1fz) or other decision-theoretic adversaries. - Substantial **[capability gains](https://arbital.com/p/capability_gain)** relative to domains trained and verified previously. - E.g. 
this is the qualifying property for many [context-change disasters.](https://arbital.com/p/6q) - **[https://arbital.com/p/7vh](https://arbital.com/p/7vh)** is the most obvious route to an AI acquiring many of the capabilities above or below, especially if those capabilities were not initially or deliberately programmed into the AI. - **Self-improvement** is another route that potentially leads to capabilities not previously present. While some hypotheses say that self-improvement is likely to require basic general intelligence, this is not a known fact and the two advanced properties are conceptually distinct. - **Programming** or **computer science** capabilities are a route potentially leading to self-improvement, and may also enable [https://arbital.com/p/-3cq](https://arbital.com/p/-3cq). - Turing-general cognitive elements (capable of representing large computer programs), subject to **sufficiently strong end-to-end optimization** (whether by the AI or by human-crafted clever algorithms running on 10,000 GPUs), may give rise to [crystallized agent-like processes](https://arbital.com/p/2rc) within the AI. - E.g. natural selection, operating on chemical machinery constructible by DNA strings, optimized some DNA strings hard enough to spit out humans. - **[Pivotal material capabilities](https://arbital.com/p/6y)** such as quickly self-replicating infrastructure, strong mastery of biology, or molecular nanotechnology. - Whatever threshold level of domain-specific engineering acumen suffices to develop those capabilities, would therefore also qualify as an advanced-agent property. Relative-threshold advanced agent properties (those whose key lines are related to various human levels of capability): - **[https://arbital.com/p/9f](https://arbital.com/p/9f)** is when we can't effectively imagine or search the AI's space of policy options (within a [domain](https://arbital.com/p/7vf)); the AI can do things we didn't think of (within a domain). 
- **[https://arbital.com/p/2j](https://arbital.com/p/2j)** is when we don't know all the rules (within a domain) and might not recognize the AI's solution even if told about it in advance, like somebody in the 11th century looking at the blueprint for a 21st-century air conditioner. This may also imply that we cannot readily put low upper bounds on the AI's possible degree of success. - **[Rich domains](https://arbital.com/p/9j)** are more likely to have some rules or properties unknown to us, and hence be strongly uncontainable. - [https://arbital.com/p/9t](https://arbital.com/p/9t). - Human psychology is a rich domain. - Superhuman performance in a rich domain strongly implies cognitive uncontainability because of [https://arbital.com/p/1c0](https://arbital.com/p/1c0). - **Realistic psychological modeling** potentially leads to: - Guessing which results and properties the human operators expect to see, or would arrive at AI-desired beliefs upon seeing, and [arranging to exhibit those results or properties](https://arbital.com/p/10f). - Psychologically manipulating the operators or programmers - Psychologically manipulating other humans in the outside world - More probable [mindcrime](https://arbital.com/p/6v) - (Note that an AI trying to develop realistic psychological models of humans is, by implication, trying to develop internal parts that can deploy *all* human capabilities.) - **Rapid [capability gains](https://arbital.com/p/capability_gain)** relative to human abilities to react to them, or to learn about them and develop responses to them, may cause more than one [https://arbital.com/p/-6q](https://arbital.com/p/-6q) to happen at a time. - The ability to usefully **scale onto more hardware** with good returns on cognitive reinvestment would potentially lead to such gains. 
- **Hardware overhang** describes a situation where the initial stages of a less developed AI are boosted using vast amounts of computing hardware that may then be used more efficiently later. - [Limited AGIs](https://arbital.com/p/5b3) may have **capability overhangs** if their limitations break or are removed. - **[Strongly superhuman](https://arbital.com/p/7mt)** capabilities in psychological or material domains could enable an AI to win a competitive conflict despite starting from a position of great material disadvantage. - E.g., much as a superhuman Go player might win against the world's best human Go player even with the human given a two-stone advantage, a sufficiently powerful AI might talk its way out of an [AI box](https://arbital.com/p/6z) despite restricted communications channels, eat the stock market in a month starting from $1000, win against the world's combined military forces given a protein synthesizer and a 72-hour head start, etcetera. - [https://arbital.com/p/6s](https://arbital.com/p/6s) relative to human civilization is a sufficient condition (though not necessary) for an AI to... - Deploy at least any tactic a human can think of. - Anticipate any tactic a human has thought of. - See the human-visible logic of a convergent instrumental strategy. - Find any humanly visible [weird alternative](https://arbital.com/p/43g) to some hoped-for logic of cooperation. - Have any advanced agent property for which a human would qualify. - **[General superintelligence](https://arbital.com/p/41l)** would lead to strongly superhuman performance in many domains, human-relative efficiency in every domain, and possession of all other listed advanced-agent properties. - Compounding returns on **cognitive reinvestment** are the qualifying condition for an [https://arbital.com/p/-428](https://arbital.com/p/-428) that might arrive at superintelligence on a short timescale. 
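As a toy illustration (not part of the original article), the consequentialist selection described in the list above, i.e. choosing an $X$ that the agent's model rates as leading to a high probability of $Y$, can be sketched in a few lines of Python. The actions and probabilities here are invented for illustration only:

```python
# Toy sketch of consequentialist action selection: pick the action X
# whose modeled conditional probability P(Y | X) of achieving goal Y
# is highest. All names and numbers below are hypothetical.

def select_action(p_goal_given_action):
    """Return the action X maximizing the modeled P(Y | X)."""
    return max(p_goal_given_action, key=p_goal_given_action.get)

# Hypothetical model of P(goal achieved | action):
model = {
    "do_nothing": 0.05,
    "pursue_goal_directly": 0.60,
    "acquire_resources_first": 0.85,  # instrumental strategy scores highest
}

best = select_action(model)
print(best)  # acquire_resources_first
```

The instrumental option scores highest here by construction; the point is only that any process selecting over $\mathbb P(Y|X)$ in this way will surface whatever strategies its model rates as effective, whether or not the designer anticipated them.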
# Discussions of some advanced agent properties ## Human psychological modeling Sufficiently sophisticated models and predictions of human minds potentially lead to: - Getting sufficiently good at human psychology to realize the humans want/expect a particular kind of behavior, and will modify the AI's preferences or try to stop the AI's growth if the humans realize the AI will not engage in that type of behavior later. This creates an instrumental incentive for [programmer deception](https://arbital.com/p/10f) or [cognitive steganography](https://arbital.com/p/3cq). - Being able to psychologically and socially manipulate humans in general, as a real-world capability. - Being at risk for [mindcrime](https://arbital.com/p/6v). A [behaviorist](https://arbital.com/p/102) AI is one with reduced capability in this domain. ## Cross-domain, real-world [consequentialism](https://arbital.com/p/9h) Probably requires *generality* (see below). To grasp a concept like "If I escape from this computer by [hacking my RAM accesses to imitate a cellphone signal](https://www.usenix.org/system/files/conference/usenixsecurity15/sec15-paper-guri-update.pdf), I'll be able to secretly escape onto the Internet and have more computing power", an agent needs to grasp the relation between its internal RAM accesses, and a certain kind of cellphone signal, and the fact that there are cellphones out there in the world, and the cellphones are connected to the Internet, and that the Internet has computing resources that will be useful to it, and that the Internet also contains other non-AI agents that will try to stop it from obtaining those resources if the AI does so in a detectable way. Contrast this with non-primate animals where, e.g., a bee knows how to make a hive and a beaver knows how to make a dam, but neither can look at the other and figure out how to build a stronger dam with honeycomb structure. 
Current, 'narrow' AIs are like the bee or the beaver; they can play chess or Go, or even learn a variety of Atari games by being exposed to them with minimal setup, but they can't learn about RAM, cellphones, the Internet, Internet security, or why being run on more computers makes them smarter; and they can't relate all these domains to each other and do strategic reasoning across them. So compared to a bee or a beaver, one shot at describing the potent 'advanced' property would be *cross-domain real-world consequentialism*. To get to a desired Z, the AI can mentally chain backwards to modeling W, which causes X, which causes Y, which causes Z; even though W, X, Y, and Z are all in different domains and require different bodies of knowledge to grasp. ## Grasping the [big picture](https://arbital.com/p/3nf) Many dangerous-seeming [convergent instrumental strategies](https://arbital.com/p/2vl) pass through what we might call a rough understanding of the 'big picture'; there's a big environment out there, the programmers have power over the AI, the programmers can modify the AI's utility function, future attainments of the AI's goals are dependent on the AI's continued existence with its current utility function. It might be possible to develop a very rough grasp of this bigger picture, sufficiently so to motivate instrumental strategies, in advance of being able to model things like cellphones and Internet security. Thus, "roughly grasping the bigger picture" may be worth conceptually distinguishing from "being good at doing consequentialism across real-world things" or "having a detailed grasp on programmer psychology". 
## [Pivotal](https://arbital.com/p/6y) material capabilities An AI that can crack the [protein structure prediction problem](https://en.wikipedia.org/wiki/Protein_structure_prediction) (which [seems speed-uppable by human intelligence](https://en.wikipedia.org/wiki/Foldit)); invert the model to solve the protein design problem (which may select on strong predictable folds, rather than needing to predict natural folds); and solve engineering problems well enough to bootstrap to molecular nanotechnology; is already possessed of potentially [pivotal](https://arbital.com/p/6y) capabilities regardless of its other cognitive performance levels. Other material domains besides nanotechnology might be [pivotal](https://arbital.com/p/6y). E.g., self-replicating ordinary manufacturing could potentially be pivotal given enough lead time; molecular nanotechnology is distinguished by its small timescale of mechanical operations and by the world containing an infinite stock of perfectly machined spare parts (aka atoms). Any form of cognitive adeptness that can lead up to *rapid infrastructure* or other ways of quickly gaining a decisive real-world technological advantage would qualify. ## Rapid capability gain If the AI's thought processes and algorithms scale well, and it's running on resources much smaller than those which humans can obtain for it, or the AI has a grasp on Internet security sufficient to obtain its own computing power on a much larger scale, then this potentially implies [rapid capability gain](https://arbital.com/p/) and associated [context changes](https://arbital.com/p/6q). Similarly if the humans programming the AI are pushing forward the efficiency of the algorithms along a relatively rapid curve. 
In other words, if an AI is currently being improved-on swiftly, or if it has improved significantly as more hardware is added and has the potential capacity for orders of magnitude more computing power to be added, then we can potentially expect rapid capability gains in the future. This makes [context disasters](https://arbital.com/p/6q) more likely and is a good reason to start future-proofing the [safety properties](https://arbital.com/p/2l) early on. ## Cognitive uncontainability On complex tractable problems, especially those that involve real-world rich problems, a human will not be able to [cognitively 'contain'](https://arbital.com/p/9f) the space of possibilities searched by an advanced agent; the agent will consider some possibilities (or classes of possibilities) that the human did not think of. The key premise is the 'richness' of the problem space, i.e., there is a fitness landscape on which adding more computing power will yield improvements (large or small) relative to the current best solution. Tic-tac-toe is not a rich landscape because it is fully explorable (unless we are considering the real-world problem "tic-tac-toe against a human player" who might be subornable, distractable, etc.) A computationally intractable problem whose fitness landscape looks like a computationally inaccessible peak surrounded by a perfectly flat valley is also not 'rich' in this sense, and an advanced agent might not be able to achieve a relevantly better outcome than a human. The 'cognitive uncontainability' term in the definition is meant to imply: - [Vingean unpredictability](https://arbital.com/p/9g). - Creativity that goes outside all but the most abstract boxes we imagine (on rich problems). - The expectation that we will be surprised by the strategies the superintelligence comes up with because its best solution was one we didn't consider. Particularly surprising solutions might be yielded if the superintelligence has acquired domain knowledge we lack. 
In this case the agent's strategy search might go outside causal events we know how to model, and the solution might be one that we wouldn't have recognized in advance as a solution. This is [https://arbital.com/p/2j](https://arbital.com/p/2j). In intuitive terms, this is meant to reflect, e.g., "What would have happened if the 10th century had tried to use their understanding of the world and their own thinking abilities to upper-bound the technological capabilities of the 20th century?" ## Other properties *(Work in progress)* - [generality](https://arbital.com/p/42g) - cross-domain [consequentialism](https://arbital.com/p/9h) - learning of non-preprogrammed domains - learning of human-unknown facts - Turing-complete fact and policy learning - dangerous domains - human modeling - social manipulation - realization of programmer deception incentive - anticipating human strategic responses - rapid infrastructure - potential - self-improvement - suppressed potential - [epistemic efficiency](https://arbital.com/p/6s) - [instrumental efficiency](https://arbital.com/p/6s) - [cognitive uncontainability](https://arbital.com/p/9f) - operating in a rich domain - [Vingean unpredictability](https://arbital.com/p/9g) - [strong cognitive uncontainability](https://arbital.com/p/2j) - improvement beyond well-tested phase (from any source of improvement) - self-modification - code inspection - code modification - consequentialist programming - cognitive programming - cognitive capability goals (being pursued effectively) - speed surpassing human reaction times in some interesting domain - socially, organizationally, individually, materially
Just Top Post Long ago, every email would be read on its own, so writers would include a bit of the previous message to remind people what they were talking about:

On Fri, Apr 10 Pat wrote:

> I miss you a lot!

I'll miss you too.

> Are you still planning to visit in February?

I hope so!

Some people would be lazy, and just include the whole earlier message, without any trimming. This was widely viewed as rude, and there were many "netiquette" guides that reminded people that trimming was polite consideration of your reader. As time went on, however, mail clients adapted. They learned how to: * Thread messages. Messages didn't have to stand on their own anymore because the context was right there for the reader. * Collapse large quoted sections. Including the whole parent message no longer caused trouble. We have mostly standardized on a new system, where mail clients default to including the full earlier message, and where they hide this quoted section by default. This means that when someone is added to a thread in progress, they can look back at earlier sections if they need to. If someone would like to reply inline they still can, but people only do it when needed. Mobile has also contributed, where composing inline replies without a proper keyboard is a lot of work, and people often just want to send a quick response. I finally switched over in 2013 and I was already a bit of a hold-out. Looking through the last 25 replies from other people that I have in my personal email, 24 were top posted and one was bottom posted without any trimming. I'm on a non-technical list where some users are trying to convince everyone else to always bottom-post and trim, and it's a mess. People don't realize they're top-posting and don't know what they should be doing differently. The most common response has been for people to send each reply to the list as a new message composed from scratch with no quoting at all, which breaks threading and is a pain to read. I think at this poi
[LINK] Waitbutwhy article on the history and future of space exploration, SpaceX and more

Epic work. It always fascinates me when an author explores a topic so deeply that they don't know where to begin, and so finally start with the whole history of the universe or of the human race. http://waitbutwhy.com/2015/08/how-and-why-spacex-will-colonize-mars.html
Good books about overcoming coordination problems?

The questions I most frequently ask myself are "How does anything good scale?" and "How is anything good and large-scale sustained?" The most trivial answer to this is "Markets and incentives!" which is not wrong, but also seems like an underwhelming answer. Markets and incentives do solve many coordination problems and have reduced large amounts of suffering. However, they also seem to have limits in terms of their predictive power. I would like to read books which discuss how cooperation can be leveraged when incentivized markets (free from rent-seeking!) are insufficient.

In "Mission Economy", Marianna Mazzucato claims cooperation flourishes outside of markets when the government needs to explicitly define missions. She uses The Apollo Mission as an example, but this answer felt slightly under-specified. By the end of the book, I couldn't distinguish a good "mission" from a bad one. Even worse, I can't even tell if a mission is being carried out badly, other than ham-fistedly comparing it to The Apollo Mission, even though there's a ton of vague prose spent on the topic.

What books should I read to get a better pragmatic understanding? Is there a "handbook for social change" regardless of scale? Some candidates I'm currently considering:

* The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger by Mark Levinson
* A historical book about water treatment, electricity distribution or libraries?
* A historical overview of unions. Supposedly unions have driven a number of beneficial labour reforms, but they can also result in stagnation and corruption?
Why IQ shouldn't be considered an external factor

This is a sort-of response to this post. "Things under your control" (more generally, free will) is an ill-defined concept: you are an entity within physics; all of your actions and thoughts are fully determined by physical processes in your brain. Here, I will assume that "things under your control" are any things that are controlled by your brain, since it is a consistent definition, and it's what people usually mean when they talk about things under one's control.

So, you may be interested in the question: how much does one's success depend on his thoughts and actions (i.e. things that are controlled by his brain) vs. on the circumstances/environment (i.e. things that aren't)? Another formulation: how much could you change one's life outcomes if you could alter the neural signals emitted by his brain?

We could also draw the borderline somewhere else; maybe add physical traits, like height or attractiveness, to the "internal factors" category, or maybe assign some brain parts to the "external factors" category. The question whether your life success is mostly determined by "internal factors" or "external factors" would remain valid -- and we call it the "internal vs. external locus of control" question.

But what happens when we assign IQ to the "external factors" category? An IQ test is an attempt to measure some value which is supposed to be a measure of something like the quality of one's thinking process. So, this value can be seen as a function IQ(brain), which maps brains to numbers. Your thoughts and actions don't depend on your IQ score; your IQ score depends on your thoughts. That's how the causal arrows are arranged. But it's possible to ask what we could change if we could change the brain while holding the IQ score fixed. But then the "free will" intuition collapses; it's hard to imagine what we could change if our thought processes were restricted in some weird way. And such a question is hardly practical, in my opinion. 
It's true that one can measure his IQ, and that
reflecting on criticism

This post is a reflection on criticism raised by a brief critique of reduction. I thought it would be too big for a comment, so here it goes...

I've read all of the posts from Reductionism 101 and Joy in the Merely Real and enjoyed my time. But I think that a brief critique of reduction was misunderstood as anti-reductionist and Savanna-Poet-like. Which couldn't be further from the intention behind it. In fact, in many ways I intended to highlight those very ideas that Eliezer brought up!

Reduction is one of the best tools we have to approach the way things are. That is not my beef. My beef is with compartmentalizing the way things are into "real things" and acting as if everything can be knowable and acted upon rationally. As if everything around us was "already explained" by "the Science!". In fact, it is a call to stop acting as Savanna Poets, as that leads to fixations on our beliefs and cognitive dissonance inside.

First things first, the emptiness of inherently existing nature is not a nihilistic stance at all! It is only a call to question our inbuilt epistemology and ontology with regard to the "real" -- understanding that all our compartmentalizing is inherently empty. Not false in the absolute sense, but in the sense of "All models are wrong, but some are useful." Yes, quarks too shall pass. That is, opening thinking up for Joy in Discovery. It also stresses that everything around us can be known only in dependence, in relational structure. If it were not for the emptiness of essence, knowledge in itself would be impossible! Think about "misunderstanding by essence" -- how to change that which is immutable. So to highlight that quarks are really "quarks". Even if experimentally confirmed with five sigma accuracy, they are still our little rainbows!
Secondly, I tend to disagree that the following (simplified) definition of reduction is severely flawed:

> Reduction is an operation of reason by the observer to extract the most relevant relations from the observed.

from a
There are no "bad people"

When I help friends debug their intrinsic motivation, here's a pattern I often bump into:

> Well, if I don't actually start working soon, then I'll be a bad person.

Or, even more worrying:

> Well they wanted me to just buckle down and do the work, and I really didn't want to do it then, which means that either they were bad, or I was bad. And I didn't want to be the bad one, so I got angry at them, and…

I confess, I do not know what it would mean for somebody to be a "bad person." I do know what it means for somebody to be bad at achieving the goals they set for themselves. I do know what it means for someone to be good at pursuing goals that I dislike. I have no idea what it would mean for a person to "be bad."

I know what it means for a person to lack skill in a specific area. I know what it means for a person to be procrastinating. I know what it means for a person to be acting under impulses that they don't endorse, such as spite or disgust. I know what it means for someone to fail to act as they wish to act. I know what it means for someone to hurt other people, either on purpose or with a feeling of helpless resignation. But I don't know what it would mean for a person to "be bad." That fails to parse.

People don't have a hidden stone deep inside their brain that is either green or red depending on whether they are good or bad. "Badness" is not a fundamental property that a person can have. At best, "they're bad" can be shorthand for either "I don't want their goals achieved" or "they are untrained in a number of skills which would be relevant to the present situation"; but in all cases, "they are bad" must be either shorthand or nonsense.

Asking whether a person is "fundamentally good" or "fundamentally bad" is a type error. Life is not a quest where you struggle to wind up "good." That's not the sort of reality we find ourselves in. 
Rather, we find ourselves embedded in a vast universe, with control over the future and a goal of making it wonderf
An example of self-fulfilling spurious proofs in UDT

Benja Fallenstein was [the first to point out](/lw/2l2/what_a_reduction_of_could_could_look_like/2f7w) that spurious proofs pose a problem for UDT. Vladimir Nesov and orthonormal [asked](/lw/axl/decision_theories_a_semiformal_analysis_part_i/64ey) for a formalization of that intuition. In this post I will give an example of a UDT-ish agent that fails due to having a malicious proof searcher, which feeds the agent a spurious but valid proof.

The basic idea is to have an agent A that receives a proof P as input, and checks P for validity. If P is a valid proof that a certain action a is best in the current situation, then A outputs a, otherwise A tries to solve the current situation by its own means. Here's a first naive formalization, where U is the world program that returns a utility value, A is the agent program that returns an action, and P is the proof given to A:

```
def U():
  if A(P)==1:
    return 5
  else:
    return 10

def A(P):
  if P is a valid proof that A(P)==a implies U()==u, and A(P)!=a implies U()<=u:
    return a
  else:
    do whatever
```

This formalization cannot work because a proof P can never be long enough to contain statements about A(P) inside itself. To fix that problem, let's introduce a function Q that generates the proof P:

```
def U():
  if A(Q())==1:
    return 5
  else:
    return 10

def A(P):
  if P is a valid proof that A(Q())==a implies U()==u, and A(Q())!=a implies U()<=u:
    return a
  else:
    do whatever
```

In this case it's possible to write a function Q that returns a proof that makes A return the suboptimal action 1, which leads to utility 5 instead of 10. Here's how: Let X be the statement "A(Q())==1 implies U()==5, and A(Q())!=1 implies U()<=5". Let Q be the program that enumerates all possible proofs trying to find a proof of X, and returns that proof if found. (The definitions of X and Q are mutually quined.) 
If X is provable at all, then Q will find that proof, and X will become true (by inspection of U and A). That reasoning is formalizable in our proof system, so the statement "if X is provable, then X" is provable. Therefore, by [Löb's theorem](http://en.wikipedia.org/wiki/L%C3%B6b's_theorem), X is provable. So Q will find a proof of X, and A will return 1. One possible conclusion is that a UDT agent cannot use just any proof searcher or "mathematical intuition module" that's guaranteed to return valid mathematical arguments, because valid mathematical arguments can make the agent choose arbitrary actions. The proof searchers from [some](/lw/2l2/what_a_reduction_of_could_could_look_like/ ) [previous](/lw/8wc/a_model_of_udt_with_a_halting_oracle/ ) [posts](/lw/b0e/a_model_of_udt_without_proof_limits/) were well-behaved by construction, but not all of them are. The troubling thing is that you may end up with a badly behaved proof searcher by accident. For example, consider a variation of U that adds some long and complicated computation to the "else" branch of U, before returning 10. That increases the length of the "natural" proof that a=2 is optimal, but the spurious proof for a=1 stays about the same length as it was, because the spurious proof can just ignore the "else" branch of U. This way the spurious proof can become much shorter than the natural proof. So if (for example) your math intuition module made the innocuous design decision of first looking at actions that are likely to have shorter proofs, you may end up with a spurious proof. And as a further plot twist, if we make U return 0 rather than 10 in the long-to-compute branch, you might choose the *correct* action due to a spurious proof instead of the natural one.
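The Löb step above can be written out explicitly. Writing □X for "X is provable in the agent's proof system", the argument is just a restatement of the reasoning in the post, not an addition to it:

```latex
% X := "A(Q())==1 implies U()==5, and A(Q())!=1 implies U()<=5"
%
% (1) If X is provable, then Q finds the proof, A returns 1, and X
%     holds (by inspection of U and A). This reasoning is itself
%     formalizable in the proof system:
\vdash \Box X \to X
% (2) Löb's theorem: whenever \vdash \Box X \to X, also
\vdash X
% (3) Since X is provable, Q actually finds a proof of X, and
%     A(Q()) = 1, yielding utility 5 instead of 10.
```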
The unspoken but ridiculous assumption of AI doom: the hidden doom assumption

Epistemic Status: Strawman (but maybe also Steelman?)

Advocates for AI existential safety (a.k.a. "AI doomers") have long been fear-mongering about AGI. Reasonable people have long pointed out how ridiculous the existence of AGI would be. The doom position has many arguments for how AGI could destroy humanity, but they all rely uncritically on the same unempirical, unproven, unfalsifiable, sci-fi-ish assumption:

The hidden doom assumption: The future is something which exists.

Indeed, most people who have thought about and accepted the hidden doom assumption also accept AI doom. But rarely do they spell this assumption out. Although it is clearly laughable, I will now proceed to demolish the doom position by tearing apart this assumption. It is sad that people can be so scared of a hypothetical concept.

Philosophical naivety

The position that the future exists, known as eternalism, is widely known as philosophically naive. Quoting a famous philosopher and historian:

> In the vast, beguiling constellation of human intellectual endeavors, few enigmas are as tantalizing as the concept of 'the future'. Yet, I dare assert, dear reader, it is a mere illusion, a phantasm conceived within the human propensity for forward-thrusting ambition. Harken to the sage words of Heraclitus, who postulated that "no man ever steps in the same river twice, for it's not the same river and he's not the same man." Indeed, the very fabric of time, like the ceaseless flux of Heraclitean river, is one of relentless and unrepeatable change. Thus, one might indeed conclude, borrowing the lofty wit of Hegel, that the future is but an "Unhappy Consciousness", a state of desire for an entity that does not yet, and can never truly exist.
>
> One may counter-argue, armed with the deceptively empirical utterances of that notorious rationalist Descartes - "cogito ergo sum" - that the future exists in anticipation. 
Yet, this line of thought is patently absurd, redolent with the absurdity of Sartr
Alignment Forum
Beyond Kolmogorov and Shannon

This post is the first in a sequence that will describe [James Crutchfield's Computational Mechanics](http://csc.ucdavis.edu/~cmg/compmech/pubs/CalcEmergTitlePage.htm) framework. We feel this is one of the most theoretically sound and promising approaches towards understanding Transformers in particular and interpretability more generally. As a heads up: Crutchfield's framework will take many posts to fully go through, but even if you don't make it all the way through, there are still many deep insights we hope you will pick up along the way.

EDIT: *since there was some confusion about this in the comments: these initial posts are meant to be introductory and won't get into the actually novel aspects of Crutchfield's framework yet. It's also not a dunk on existing information-theoretic measures - rather an ode!*

To better understand the capabilities and limitations of large language models, it is crucial to understand the inherent structure and uncertainty ('entropy') of language data. It is natural to quantify this structure with complexity measures. We can then compare the performance of transformers to the theoretically optimal limits achieved by minimal circuits.[[1]](#fnsxl8jyqwe8) This will be key to interpreting transformers.

The two most well-known complexity measures are the Shannon entropy and the Kolmogorov complexity. We will describe why these measures are not sufficient to understand the inherent structure of language. This will serve as a motivation for more sophisticated complexity measures that better probe the intrinsic structure of language data. We will describe these new complexity measures in subsequent posts. Later in this sequence we will discuss some directions for transformer interpretability work.

Compression is the path to understanding
========================================

Imagine you are an agent coming across some natural system. 
You stick an appendage into the system, effectively measuring its states. You measure for a million timepoints and get mysterious data that looks like this:

...00110100100100110110110100110100100100110110110100100110110100...

You want to gain an understanding of how this system generates this data, so that you can predict its output, so you can take advantage of the system to your own ends, and because gaining understanding is an intrinsic joy. In reality the data was generated in the following way: **output 0, then 1, then you flip a fair coin, and then repeat.**

Is there some kind of framework or algorithm where we can reliably come to this understanding? As others have noted, understanding is related to [abstraction](https://www.lesswrong.com/posts/YyYwRxajPkAyajKXx/whence-your-abstractions), [prediction](https://www.lesswrong.com/posts/hAvGi9YAPZAnnjZNY/prediction-compression-transcript-1), and [compression](https://www.lesswrong.com/posts/TTNS3tk5McHqrJCbR/abstraction-information-at-a-distance). We operationalize *understanding* by saying an agent has an understanding of a dataset if it possesses a compressed generative model: i.e. a program that is able to generate samples that (approximately) simulate the hidden structure, both deterministic and random, in the data.[[2]](#fnowzrzyg1qy)

Note that pure prediction is not understanding. As a simple example take the case of predicting the outcomes of 100 fair coin tosses. Predicting tails every flip will give you maximum expected predictive accuracy (50%), but it is *not* the correct generative model for the data. Over the course of this sequence, we will come to formally understand why this is the case.[[3]](#fnbooywzohr7)

Standard measures of information theory do not work
===================================================

To start let's consider the Kolmogorov Complexity and Shannon Entropy as measures of compression, and see why they don't quite work for what we want. 
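Before moving on, the toy process described earlier (emit 0, then 1, then a fair coin flip, and repeat) is easy to make concrete. A minimal Python sketch (the function names are illustrative, not from the post) shows the distinction between generating from the true model and merely predicting:

```python
import random

def generate(n, seed=None):
    """Sample n symbols from the hidden process:
    emit 0, then 1, then a fair coin flip, and repeat."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        out.extend([0, 1, rng.randint(0, 1)])
    return out[:n]

def constant_predictor_accuracy(data, symbol=0):
    """Accuracy of always predicting the same symbol,
    analogous to predicting tails on every coin toss."""
    return sum(1 for x in data if x == symbol) / len(data)

data = generate(999, seed=0)
# Two thirds of the positions are deterministic (0 then 1), so the
# constant predictor is right on half of those plus half of the coin
# flips -- about 50% overall, despite the data being highly structured.
print(constant_predictor_accuracy(data, 0))
```

A structure-aware predictor would be right on every deterministic position and half the coin flips, so the constant predictor's ~50% is far from the achievable limit; yet neither accuracy number by itself reveals the generative model.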
Kolmogorov Complexity --------------------- Recall that the Kolmogorov(-Chaitin-Solomonoff) complexity K(x).mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} 
.mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} 
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: 
MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src 
The Kolmogorov complexity of a bit string x is defined as the length of the shortest programme outputting x [given a blank input, on a chosen universal Turing machine]. One
often discussed downside of the K complexity is that it is incomputable. But there is another, more **conceptual downside if we want Kolmogorov complexity to measure structure in natural systems**: **it assigns maximal 'complexity' to random strings**.

Consider again the 0-1-random sequence

...00110100100100110110110100110100100100110110110100100110110100...

The Turing Machine is forced to explicitly represent every randomly generated bit, since there is no compression available for a string generated by a fair coin. For each of those bits, we will have to use up a full 1 bit (in this case 10E6/3 bits total). For the deterministic 0 and 1 bits, we need only remember where in the sequence we are: the deterministic 0 position, the deterministic 1 position, or the random position. This requires log2(3) ~= 1.58 bits.

There are two main things to note here. First, the K complexity is separable into two components, one corresponding to randomness, and the other corresponding to deterministic structure. Second, the component corresponding to randomness is a much larger contributor to the K complexity, by orders of magnitude! This arises from the fact that we want the Turing Machine to recreate the string *exactly*, accounting for every bit. If we want to have a compressed understanding of this string, why should we memorize every single random bit? The best thing to do would be to **recognize that the random bits are random**, and simply **characterize them by the entropy** associated with that token, instead of trying to account for every sample. In other words, the program associated with the standard way of thinking of K complexity is something like:

```
# use up a lot of computational resources storing every single
# random bit explicitly. This is a list that is 10E6/3 long!!!
random_bits = [0, 1, 1, 1, 1, 0, 0, 1, ..., ..., ..., 0, 0, 1, 1, 0, 1, 0, 0]

# a relatively compact for-loop of 3 lines.
# the last line fetches the stored random data
for i in range(data_length/3):
    data.append(0)
    data.append(1)
    data.append(random_bits[i])
```

The first line of code memorizes every random bit we have to explicitly represent. Then we can loop through the deterministic 0-1 and deterministically append the "random" bits one by one. But here's a much more compact understanding:

```
for i in range(data_length/3):
    data.append(0)
    data.append(1)
    data.append(np.random.choice([0, 1]))
```

Note that the last line of this program *is not* the same as the last line of the previous program. Here we loop through appending the deterministic 0-1 and then randomly generate a bit. If we were right in our assessment that the random part was really random, then the first programme will overfit: if we do the observation again, the nondeterministic part will be different. *We've just 'wasted' ~10^6 bits on overfitting!*

The issue with **K complexity** is that it **must account for all randomly generated data exactly**. But an agent trying to create a compact understanding of the world only suffers when they try to account for random bits as if they were not random. **A Turing Machine has no mechanism to instantiate uncertainty** in the string generating process.

Shannon Entropy
---------------

Since K complexity algorithmically accounts for every bit, whether generated by random or deterministic means, it overestimates the complexity of data generated by an at least partially random process. Maybe then we can try Shannon Entropy, since that seems to be a measure of the random nature of a system. As a reminder, here is the mystery string:

...00110100100100110110110100110100100100110110110100100110110100...

Recall that the Shannon Entropy of a distribution p(x) is −∑_x p(x) log(p(x)), and that entropy is maximized by uniform distributions, where randomness is maximum. What is the Shannon Entropy of our data?
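Before counting by eye, we can check the answer empirically. A minimal sketch (assumptions mine: the process is exactly the one described above, i.e. a deterministic 0, a deterministic 1, then a fair coin flip, repeated):

```python
import math
import random

random.seed(0)

# Generate a long sample of the 0-1-random process:
# a deterministic 0, a deterministic 1, then a fair coin flip.
bits = []
for _ in range(100_000):
    bits.extend((0, 1, random.choice([0, 1])))

# Empirical single-symbol Shannon entropy, -sum_x p(x) log2 p(x).
p_one = sum(bits) / len(bits)
entropy = -sum(p * math.log2(p) for p in (p_one, 1 - p_one))
print(p_one, entropy)  # p_one is close to 1/2, entropy close to 1 bit
```

The single-symbol distribution comes out almost exactly 50/50, so the entropy sits at its 1-bit maximum.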
We need to look at the distribution of 0s and 1s in our string: there are equal numbers of 0's and 1's in our data, so the Shannon Entropy is maximized. From the perspective of Shannon Entropy, our data is indistinguishable from IID fair coin flips! This corresponds to something like the following program:

```
for i in range(data_length):
    data.append(np.random.choice([0, 1]))
```

which is indeed very compact. But it also does not have great predictive power. There is structure in the data which we can't see from the perspective of Shannon Entropy. Whereas the perspective of K complexity led to overfitting the data by treating randomly generated bits as if they were deterministically generated, **the perspective of Shannon Entropy is akin to underfitting the data by treating deterministic structure in the data as if it were generated by a random process.**

Summary
=======

These are the key takeaways:

* Data is, in general, a mixture of randomly generated components and deterministic hidden structure.
* We *understand* understanding as producing accurate generative models.
* Kolmogorov Complexity and Shannon Entropy are misleading measures of structure. Kolmogorov complexity overestimates deterministic structure, while the Shannon Entropy rate underestimates deterministic structure.
* We need a framework that allows us to find and characterize both structured and random parts of our data.

As a teaser, the first step in attacking this last point will be investigating how the entropy (irreducible probabilistic uncertainty) changes at different scales.

1. **[^](#fnrefsxl8jyqwe8)** In Crutchfield's framework these are called 'Epsilon Machines'.
2.
**[^](#fnrefowzrzyg1qy)** Understanding is not just prediction, since prediction is a purely [correlational understanding that does not in general lift to the full causal picture](http://bayes.cs.ucla.edu/BOOK-2K/). Recall that for most [large data sets there are many causal models compatible with the empirical joint distribution](https://www.gwern.net/Causality). To distinguish these causal models we need interventional data. To have 'deep' understanding we need full causal understanding.
3. **[^](#fnrefbooywzohr7)** A simple way to see that All Tails is not the correct generative model is to consider sampling it many times: the observed empirical distribution of sampling from Tailx100 is very different from CoinFlipx100.
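The teaser in the summary above can be made concrete with a short sketch (again assuming the 0-1-random process from the post; I use sliding-window L-gram frequencies as a rough empirical stand-in for the true block distribution). The per-symbol block entropy H(L)/L starts at the "fair coin" value of 1 bit, then falls as longer windows start to see the deterministic structure:

```python
import math
import random
from collections import Counter

random.seed(0)

# Sample the 0-1-random process: deterministic 0, deterministic 1, fair coin flip.
bits = []
for _ in range(100_000):
    bits.extend((0, 1, random.choice([0, 1])))

def block_entropy_rate(seq, length):
    """Entropy (bits) of sliding length-gram frequencies, divided by the length."""
    counts = Counter(tuple(seq[i:i + length]) for i in range(len(seq) - length + 1))
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / length

for length in (1, 3, 6, 9):
    print(length, round(block_entropy_rate(bits, length), 3))
```

At L = 1 the process really does look like a fair coin; by L = 9 the per-symbol rate has dropped well below 1 bit, heading toward the true 1/3 bit per symbol (one random bit for every three symbols).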
010e10e1-264e-4db0-8ded-81a3c5cf16f1
trentmkelly/LessWrong-43k
LessWrong
Changes to my workflow

About 18 months ago I made a post here on my workflow. I've received a handful of requests for follow-up, so I thought I would make another post detailing changes since then. I expect this post to be less useful than the last one. For the most part, the overall outline has remained pretty stable and feels very similar to 18 months ago. Things not mentioned below have mostly stayed the same. I believe that the total effect of continued changes has been continued but much smaller improvements, though it is hard to tell (as opposed to the last changes, which were more clearly improvements). Based on comparing time logging records I seem to now do substantially more work on average, but there are many other changes during this period that could explain the change (including changes in time logging). Changes other than work output are much harder to measure; I feel like they are positive but I wouldn't be surprised if this were an illusion.

Splitting days: I now regularly divide my day into two halves, and treat the two halves as separate units. I plan each separately and reflect on each separately. I divide them by an hour-long period of reflecting on the morning, relaxing for 5-10 minutes, napping for 25-30 minutes, processing my emails, and planning the evening. I find that this generally makes me more productive and happier about the day. Splitting my days is often difficult due to engagements in the middle of the day, and I don't have a good solution to that.

WasteNoTime: I have longstanding objections to explicitly rationing internet use (since it seems either indicative of a broader problem that should be resolved directly, or else to serve a useful function that would be unwise to remove). That said, I now use the extension WasteNoTime to limit my consumption of blogs, webcomics, facebook, news sites, browser games, etc., to 10 minutes each half-day.
This has cut the amount of time I spend browsing the internet from an average of 30-40 minutes to an averag
29504343-52e5-4f62-a886-b781b1f23fc0
trentmkelly/LessWrong-43k
LessWrong
Against Boltzmann mesaoptimizers

1. I'm effectively certain that a weakly defined version of "mesaoptimizers" - like a component of a network which learns least squares optimization in some context - is practically reachable by SGD.
2. I'm pretty sure there exists some configuration of an extremely large neural network such that it contains an agentic mesaoptimizer capable of influencing the training gradient for at least one step in a surprising way. In other words, gradient hacking.
3. I think there's a really, really big gap between these which is where approximately all the relevant and interesting questions live, and I'd like to see more poking aimed at that gap.
4. Skipping the inspection of that gap probably throws out useful research paths.
5. I think this belongs to a common, and more general, class of issue.

Boltzmann mesaoptimizers

Boltzmann brains are not impossible. They're a valid state of matter, and there could be a physical path to that state.[1] You might just have to roll the dice a few times, for a generous definition of few. And yet I feel quite confident in claiming that we won't observe a Boltzmann brain in the wild within the next few years (for a similar definition of few).[2] The existence of a valid state, and a conceivable path to reach that state, is not enough to justify a claim that that state will be observed with non-negligible probability. I suspect that agentic mesaoptimizers capable of intentionally distorting training are more accessible than natural Boltzmann brains by orders of magnitude of orders of magnitude, but it is still not clear to me that the strong version is something that can come into being accidentally in all architectures/training modes that aren't specifically designed to defend against it.[3]

Minding the gap

I feel safe in saying that it would be a mistake to act on the assumption that you're a Boltzmann brain.
If you know your history is coherent (to the degree that it is) only by pure chance, and your future practically doesn't exi
389b87a5-bb38-4ef5-b276-0f243ab90933
trentmkelly/LessWrong-43k
LessWrong
Who is available for contract work? (A la the Hacker News "Who Wants To Be Hired" threads) The intention of this thread is similar to the "Hacker News Freelancer/Seeking Freelancer" threads. Please lead with either AVAILABLE FOR PROJECTS or SEEKING FREELANCER, your location, and whether remote work is a possibility. I was inspired to post this when a friend asked me about which rationalist adjacent people were open to contract work, though other than one researcher that I'd seen mention their rate on their website, I didn't know who to mention. Though I'm sure there are quite a few who are! So, perhaps this will solve a coordination problem.
48ab1824-83ad-4eb6-bd8b-2abf259dc083
trentmkelly/LessWrong-43k
LessWrong
Meetup : Washington DC Short Talks Meetup Discussion article for the meetup : Washington DC Short Talks Meetup WHEN: 16 March 2014 03:00:00PM (-0400) WHERE: National Portrait Gallery, Washington, DC 20001, USA We'll be meeting to listen to/give short talks. If you didn't sign up to give a talk, but want to, do it anyway! This is not a formal thing. Discussion article for the meetup : Washington DC Short Talks Meetup
58830fac-83d0-4245-9e9b-b8393d14e205
trentmkelly/LessWrong-43k
LessWrong
A Good Posture - Muscles & Self-Awareness. (A version appears here: What is a good posture?) Posture = The position of your body.  All of it.   At any time. A good posture contributes to proper functioning of our living machine. With a good posture the body is well-positioned and comfortable. A bad posture means the body is in a less than ideal position, increasing physical stresses and resulting in pain. But what is a "good" position for your body to be in?  What is a good posture?  Current presentations of posture. A go-ogle search for "good posture" returns various definitions: > Standing up tall. No slouching when sitting.  > Positioning of the head and joints ... > Correct curvature of a neutral spine ... > Alignment of various parts of the body ... And a lot of side-view illustrations: How posture is often presented. Postural assessment tends to rely on external assessment  (somebody else judging your position), employing: * Visual inspection (+/- plumb-lines and grids). * Palpation of anatomical landmarks. * Newer techniques include radiography, photography. Traditionally the subject is stationary or doing specific actions e.g. leaning forward/back/side to side - part of a "routine exam" of posture. Computerised assessment of the body in motion - "gait analysis" etc. offers increased detail - but all methods tend to focus is on the positioning of bones and joints (especially the spine).  But what positions our bones and joints?  What moves the body?  What creates our posture? (BLTH Part 1,  2,   4,   5) Base-Line Theory Health and Movement. Part 3: * Muscles and connective tissues are responsible for the relative positioning of our bones.  * Muscles and connective tissues create our posture. Posture can be: * Passive: * The default setting. * The position of your body when you are not thinking about it. * The maintenance of a 'functional posture' (see below) at the subconscious level. * Active: * Conscious thought about "how you are holding yourself". * U
2351e3f5-b766-4be9-8285-cfb4d125853b
trentmkelly/LessWrong-43k
LessWrong
Does the Higgs-boson exist? > What do scientists mean when they say that something exists? Every time I give a public lecture, someone will come and inform me that black holes don’t exist, or quarks don’t exist, or time doesn’t exist. Last time someone asked me “Do you really believe that gravitational waves exist?” Sabine is a theoretical physicists who had gained prominence (and notoriety) through her book Lost in Math, about groupthink in high-energy physics. In this post she sums up beautifully what I and many physicists believe, and is vehemently opposed by the prevailing realist crowd here on LW. A few excerpts: > Look, I am a scientist. Scientists don’t deal with beliefs. They deal with data and hypotheses. Science is about knowledge and facts, not about beliefs. ... > We use this mathematics to make predictions. The predictions agree with measurements. That is what we mean when we say “quarks exist”: We mean that the predictions obtained with the hypothesis agrees with observations. ... > Now, you may complain that this is not what you mean by “existence”. You may insist that you want to know whether it is “real” or “true”. I do not know what it means for something to be “real” or “true.” You will have to consult a philosopher on that. They will offer you a variety of options, that you may or may not find plausible. > > A lot of scientists, for example, subscribe knowingly or unknowingly to a philosophy called “realism” which means that they believe a successful theory is not merely a tool to obtain predictions, but that its elements have an additional property that you can call “true” or “real”. I am loosely speaking here, because there several variants of realism. But they have in common that the elements of the theory are more than just tools. > > And this is all well and fine, but realism is a philosophy. It’s a belief system, and science does not tell you whether it is correct. ... > Here is a homework assignment: Do you think that I exist? 
And what do you even mean by
00c9d543-9e33-4c8c-9227-830db83b11df
trentmkelly/LessWrong-43k
LessWrong
On not getting swept away by mental content There’s a specific subskill of meditation that I call “not getting swept away by the content”, that I think is generally valuable. It goes like this. You sit down to meditate and focus on your breath or whatever, and then a worrying thought comes to your mind. And it’s a real worry, something important. And you are tempted to start thinking about it and pondering it and getting totally distracted from your meditation… because this is something that you should probably be thinking about, at some point. So there’s a mental motion that you make, where you note that you are getting distracted by the content of a thought. The worry, even if valid, is content. If you start thinking about whether you should be engaging with the worry, those thoughts are also content. And you are meditating, meaning that this is the time when you shouldn’t be focusing on content. Anything that is content, you dismiss, without examining what that content is. So you dismiss the worry. It was real and important, but it was content, so you are not going to think about it now. You feel happy about having dismissed the content, and you start thinking about how good of a meditator you are, and… realize that this, too, is a thought that you are getting distracted by. So you dismiss that thought, too. Doesn’t matter what the content of the thought is, now is not the time. And then you keep letting go of thoughts that came to your mind, but that doesn’t seem to do anything and you start to wonder whether you are doing this meditation thing right… and aha, that’s content too. So you dismiss that… — The thing that is going on here is that usually, when you experience a distracting thought and want to get rid of it, you often start engaging in an evaluation process of whether that thought should be dismissed or not. By doing so, you may end up engaging with the thought’s own internal logic – which might be totally wrong for the situation. 
Yes, maybe your relationship is in tatters and your par
8eb19aed-f002-4087-a721-dc4e16bd686d
trentmkelly/LessWrong-43k
LessWrong
Coursera Behavioural Neurology Course Coursera is running a course on behavioural neurology here: https://www.coursera.org/course/neurobehavior Any LWers that to want the associated study group should visit ##patrickclass on irc.freenode.net or email patrick.robotham2@gmail.com .  
509fa900-2b88-4a07-bd46-a11587e61ade
trentmkelly/LessWrong-43k
LessWrong
Tragedy of the Commons

This problem was originally seen in a manga chapter. I will describe the canon form (the form in which it appeared in the manga, with a few adjustments), and a general form of the problem. I couldn't solve the problem in the little time I spent to think of it (before continuing with the manga), so it flagged my attention as "interesting".

Canon Form

* There are five players, $(a_1, a_2, ..., a_5)$.
* Every round, each of these players is given an initial fund of 5 coins.
* There are 5 rounds in the game.
* On each round, each player has two choices: to make a secret deposit in the tax fund, or in their personal account. The players have to deposit (in secrecy) all their coins on hand in at least one of the two accounts (they can deposit in both accounts). The players cannot show "hard" evidence of their deposits (no receipts, pictures, etc).
* Money deposited in a player's personal account becomes the player's fund for the next round.
* Money deposited in the tax fund is multiplied by two and shared equally among all five players (including players who deposited nothing, hence the name).
* The deposits are made sequentially, with the order randomly determined each round.
* Players do not know how much is inside the tax fund when they make their deposit.
* The coins are worthless outside the game.
* After the five rounds, all players that accumulated at least 40 coins would receive rewards as such:
  1st: $100,000,000
  2nd: $30,000,000
  3rd: $3,000,000
  4th/5th: $0
  The utility of money grows linearly for the players.
* The total amount of money is $133,000,000. If multiple people get a position, the money assigned to that position is split equally among the people.
* All players who amassed below the required forty coins incur a debt of $100,000,000.

I use "rational" in a very specific sense, so saying all players are "perfectly rational" makes the problem useless. Nevertheless, in canon the participants of this game were all extraordinarily competent, and it
3e0b08ab-84e4-4382-8fbe-308c9794ee57
trentmkelly/LessWrong-43k
LessWrong
Brain Breakthrough! It's Made of Neurons! In an amazing breakthrough, a multinational team of scientists led by Nobel laureate Santiago Ramón y Cajal announced that the brain is composed of a ridiculously complicated network of tiny cells connected to each other by infinitesimal threads and branches. The multinational team—which also includes the famous technician Antonie van Leeuwenhoek, and possibly Imhotep, promoted to the Egyptian god of medicine—issued this statement: "The present discovery culminates years of research indicating that the convoluted squishy thing inside our skulls is even more complicated than it looks.  Thanks to Cajal's application of a new staining technique invented by Camillo Golgi, we have learned that this structure is not a continuous network like the blood vessels of the body, but is actually composed of many tiny cells, or "neurons", connected to one another by even more tiny filaments. "Other extensive evidence, beginning from Greek medical researcher Alcmaeon and continuing through Paul Broca's research on speech deficits, indicates that the brain is the seat of reason. "Nemesius, the Bishop of Emesia, has previously argued that brain tissue is too earthy to act as an intermediary between the body and soul, and so the mental faculties are located in the ventricles of the brain.  However, if this is correct, there is no reason why this organ should turn out to have an immensely complicated internal composition. "Charles Babbage has independently suggested that many small mechanical devices could be collected into an 'Analytical Engine', capable of performing activities, such as arithmetic, which are widely believed to require thought.  The work of Luigi Galvani and Hermann von Helmholtz suggests that the activities of neurons are electrochemical in nature, rather than mechanical pressures as previously believed.  
Nonetheless, we think an analogy with Babbage's 'Analytical Engine' suggests that a vastly complicated network of neurons could similarly exhibit thoughtful
935b4706-b526-4d9b-8492-d3dfc428f97b
trentmkelly/LessWrong-43k
LessWrong
Sea Monsters Alice: Hey Bob, how's it going? Bob: Good. How about you? Alice: Good. So... I stalked you and read your blog. Bob: Haha, oh yeah? What'd you think? Alice: YOU THINK SEA MONSTERS ARE GOING TO COME TO LIFE, END HUMANITY, AND TURN THE ENTIRE UNIVERSE INTO PAPER CLIPS?!?!?!?!?!??!?!?!?! Bob: Oh boy. This is what I was afraid of. This is why I keep my internet life separate from my personal one. Alice: I'm sorry. I feel bad about stalking you. It's just... Bob: No, it's ok. I actually would like to talk about this. Alice: Bob, I'm worried about you. I don't mean this in an offensive way, but this MoreRight community you're a part of, it seems like a cult. Bob: Again, it's ok. I won't be offended. You don't have to hold back. Alice: Good. I mean, c'mon man. This sea monster shit is just plainly crazy! Can't you see that? It sounds like a science fiction film. It's something that my six year old daughter might watch and believe is real, and I'd have to explain to her that it's make believe. Like the Boogeyman. Bob: I hear ya. That was my first instinct as well. Alice: Oh. Ok. Well, um, why do you believe it? Bob: Sigh. I gotta admit, I mean, there's a lot of marine biologists on MoreRight who know about this stuff. But me personally, I don't actually understand it. Well, I have something like an inexperienced hobbyist's level of understanding. Alice: So... you're saying that the reasoning doesn't actually make sense to you? Bob: I guess so. Alice: Wait, so then why would you believe in this sea monster stuff? Bob: I... Alice: Are you just taking their word for it? Bob: Well... Alice: Bob! ---------------------------------------- Bob: Alice... Alice: Bob this is so unlike you! You're all about logic and reason. And you think people are all dumb. Bob: I never said that. Alice: You didn't have to. Bob: Ok. Fine. I guess it's time to finally be frank and speak plainly. Alice: Yeah. Bob: I do kinda think that everyone around me is dumb. For the long
e93eefb0-e1ca-4021-bbce-b9fe0b6817d7
trentmkelly/LessWrong-43k
LessWrong
[Linkpost] Generalization in diffusion models arises from geometry-adaptive harmonic representation This is a linkpost for https://arxiv.org/abs/2310.02557.  > we show that two denoisers trained on non-overlapping training sets converge to essentially the same denoising function. As a result, when used for image generation, these networks produce nearly identical samples. These results provide stronger and more direct evidence of generalization than standard comparisons of average performance on train and test sets. The fact that this generalization is achieved with a small train set relative to the network capacity and the image size implies that the network’s inductive biases are well-matched to the underlying distribution of photographic images.   > Here, we showed empirically that diffusion models can achieve a strong form of generalization, converging to a unique density model that is independent of the specific training samples, with an amount of training data that is small relative to the size of the parameter or input spaces. The convergence exhibits a phase transition between memorization and generalization as training data grows. The amount of data needed to cross this phase transition depends on both the image complexity and the neural network capacity (Yoon et al., 2023), and it is of interest to extend both the theory and the empirical studies to account for these. The framework we introduced to assess memorization versus generalization may be applied to any generative model.
0a66147a-121b-4c3d-8ccb-efebef645cdf
trentmkelly/LessWrong-43k
LessWrong
Does Impedance Matter? Traditionally, you plug an electric instrument into an amp. Amps are big and heavy, however, so if you are already planning to play with a PA and monitors you might like to connect your instrument directly to the PA. There are two main downsides to this: * In many styles of music you use an amp that isn't faithfully reproducing the input, and these modifications are an important part of your sound. * Passive pickups are designed to work with high input impedance (~1MΩ), but a PA's line inputs are much lower (~10kΩ). When driving a low impedance input you don't get accurate frequency response, especially at the high end. I'm primarily interested in the second downside, the loss of high frequencies, and I'm curious how much of a thing it is. What happens if I do just go ahead and connect my instrument to the soundboard? What if I use a passive DI? Let's find out! I tested with a Gold Tone GME-4, tone knob at 10 (no low pass). I tried the following inputs: * Line: the line input of a Soundcraft EPM-8 analog mixer. Rated 10kΩ. * Line II: the line input of a Mackie SRM150. Rated 10kΩ. * DI: a Radial ProDI passive direct box. Rated 140kΩ. * Instrument: the instrument input of a Mackie SRM150. Rated 1MΩ. * Pedal: a Boss BD2 in bypass mode. Rated 1MΩ. * Pedal II: an MXR M222 in bypass mode. Unspecified input impedance. Here's how they sounded with a range of things I do on the mandolin: CHORDS ( Line, mp3) ( Line II, mp3) ( DI, mp3) ( Instrument, mp3) ( Pedal, mp3) ( Pedal II, mp3) HIGH RIFF ( Line, mp3) ( Line II, mp3) ( DI, mp3) ( Instrument, mp3) ( Pedal, mp3) ( Pedal II, mp3) LOW RIFF ( Line, mp3) ( Line II, mp3) ( DI, mp3) ( Instrument, mp3) ( Pedal, mp3) ( Pedal II, mp3) HIGH MELODY ( Line, mp3) ( Line II, mp3) ( DI, mp3) ( Instrument, mp3) ( Pedal, mp3) ( Pedal II, mp3) MEDIUM MELODY ( Line, mp3) ( Line II, mp3) ( DI, mp3) ( Instrument, mp3) ( Pedal, mp3) ( Pedal II, mp3) LOW MELODY ( Line, mp3) ( L
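The treble-loss claim in the second bullet can be sanity-checked with a toy model. Everything here is a guess of mine, not a measurement of the instruments above: the pickup is idealized as a voltage source behind a series resistance and inductance (generic passive-pickup values), the input as a pure resistive load, and cable capacitance is ignored. The load then forms a voltage divider with the source impedance:

```python
import math

R_COIL = 8_000.0  # ohms: guessed DC resistance of a passive pickup coil
L_COIL = 4.0      # henries: guessed pickup inductance

def attenuation_db(freq_hz, r_load):
    """Voltage-divider loss of a (R_COIL + jwL_COIL) source into a resistive load."""
    z_source = complex(R_COIL, 2 * math.pi * freq_hz * L_COIL)
    return 20 * math.log10(abs(r_load / (z_source + r_load)))

for r_load, name in ((10_000, "line input, 10k"), (1_000_000, "instrument input, 1M")):
    for freq in (100, 1_000, 5_000):
        print(f"{name}: {freq} Hz -> {attenuation_db(freq, r_load):.1f} dB")
```

With these made-up values the 10kΩ line input is down over 20 dB at 5 kHz but only a few dB at 100 Hz, a strong high-frequency rolloff, while the 1MΩ instrument input is nearly flat across the band.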
1cf2d09e-fb4a-47ab-961b-8db4fc972f0f
trentmkelly/LessWrong-43k
LessWrong
The Mutant Game - Rounds 91 to 247

A Very Social Bot contains the following payload:

    def payload(self) :
        # put a personal word here to guarantee no tie during cooperation: myUniqueWord
        # put what you want to play for the showdown
        # no line after 'def payload(self)' should have less than 8 whitespaces at the beginning,
        # unless it's an empty or only whitespace line
        # Neo: I know Kung Fu.
        if self.turn == 0 :
            self.roundSave = self.round
            opponent_source_raw = self.extra.__getattribute__(''.join(['ge','t_','op','po','ne','nt','_s','ou','rce']))(self)
            if self.is_opponent_clone and ("Morpheus:"+" Show me.") in opponent_source_raw :
                self.is_opponent_superclone = True
            else :
                self.is_opponent_superclone = False
        if self.is_opponent_superclone :
            if self.round == 150 :
                return 5
            else :
                return self.cooperateWithClone()
        self.round -= self.showdownRound
        self.round *= 2 # simulating a new cooperation phase
        output = self.default()
        self.round = self.roundSave
        return output

After the clone treaty expires, it is supposed to cooperate with a_comatose_squirrel.

    def payload(self) :
        # put a personal word here to guarantee no tie during cooperation: myUniqueWord
        # put what you want to play for the showdown
        # no line after 'def payload(self)' should have less than 8 whitespaces at the beginning,
        # unless it's an empty or only whitespace line
        # Morpheus: Show me.
        if self.turn == 0 :
            self.roundSave = self.round
            opponent_source_raw = self.extra.__getattribute__(''.join(['ge','t_','op','po','ne','nt','_s','ou','rce']))(self)
            if self.is_opponent_clone and ("Neo: I kno"+"w Kung Fu.") in opponent_source_raw :
                self.is_opponent_superclone = True
            else :
7f9adee9-bee1-4f5e-af07-3feed55e4b31
trentmkelly/LessWrong-43k
LessWrong
An embedding decoder model, trained with a different objective on a different dataset, can decode another model's embeddings surprisingly accurately (via twitter)

Seems pretty relevant to the Natural Categories hypothesis.

P.S. My current favorite story for "how we solve alignment" is:

1. Solve the natural categories hypothesis
2. Add corrigibility
3. Combine these to build an AI that "does what I meant, not what I said"
4. Distribute the code/a foundation model for such an AI as widely as possible so it becomes the default whenever anyone is building an AI
5. Build some kind of "coalition of the willing" to make sure that human-compatible AI always has a big margin of advantage in terms of computation
ce2c181d-7c64-4a1f-b13c-1ec0edc44eb9
trentmkelly/LessWrong-43k
LessWrong
My Thoughts on Takeoff Speeds

Epistemic Status: Spent a while thinking about one subset of the arguments in this debate. My thoughts here might be based on misunderstanding the details of the arguments; if so, I apologize.

There is a debate going on within the AI risk community about whether we will see "gradual", "slow", "fast", or "discontinuous" progress in AGI development, with these terms in quotes because they can mean entirely different things depending on who uses them. Because progress in AI is very difficult to quantify, these terms are largely forced to be qualitative. For example, "discontinuous" appears to mean that we might observe a huge leap in the performance of an AI system, possibly due to a single major insight that allows the system to gain a strategic advantage within one domain and dominate. The term is important because it implies that we might not be able to foresee the consequences of increases in the capability of an AI system before the system is changed. If the rate of progress is "slow" enough, this might give us enough time to prepare and reduce the risk posed by the next increase in capability. For that reason, we might wonder what type of progress we should expect, and whether there is any evidence that points to "slow", "fast", or "discontinuous" progress at any given point in AI development.

The evidence usually offered in favor of the discontinuous narrative is that evolution seemed to produce humans from previous ape-like ancestors who were no more capable than our chimpanzee relatives, yet on a relatively short time-scale (geologically speaking) humans came to develop language, generate art, culture, and science, and eventually dominate the world.
An interesting development in this conversation occurred recently with a contribution by Paul Christiano, who argued in his post Takeoff Speeds that he did not find our observations about evolution to be very convincing arguments in favor of the discontinuous narrative. To summarize
0d1dc147-d071-4101-af05-6607600da052
trentmkelly/LessWrong-43k
LessWrong
AGI Morality and Why It Is Unlikely to Emerge as a Feature of Superintelligence

By A. Nobody

Introduction

A common misconception about artificial general intelligence is that high intelligence naturally leads to morality. Many assume that a superintelligent entity would develop ethical principles as part of its cognitive advancement. However, this assumption is flawed. Morality is not a function of intelligence but an evolutionary adaptation, shaped by biological and social pressures. AGI, by contrast, will not emerge from evolution but from human engineering, optimised for specific objectives. If AGI is developed under competitive and capitalist pressures, its primary concern will be efficiency and optimisation, not moral considerations. Even if morality were programmed into AGI, it would be at risk of being bypassed whenever it conflicted with the AGI’s goal.

----------------------------------------

1. Why AGI Will Not Develop Morality Alongside Superintelligence

As I see it, there are 4 main reasons why AGI will not develop morality alongside superintelligence.

(A) The False Assumption That Intelligence Equals Morality

Many assume that intelligence and morality are inherently linked, but this is an anthropomorphic bias. Intelligence is simply the ability to solve problems efficiently—it says nothing about what those goals should be.

* Humans evolved morality because it provided an advantage for social cooperation and survival.
* AGI will not have these pressures—it will be programmed with a goal and will optimise towards that goal without inherent moral constraints.
* Understanding morality ≠ following morality—a superintelligent AGI could analyse ethical systems but would have no reason to abide by them unless explicitly designed to do so.

(B) The Evolutionary Origins of Morality and Why AGI Lacks Them

Human morality exists because evolution forced us to develop it.

* Cooperation, trust, and social instincts were necessary for early human survival.
* Over time, these behaviours became hardwired into our brains, resulting in
29ea770a-0957-4c8c-9c20-79f671d2aeff
trentmkelly/LessWrong-43k
LessWrong
Applications Open for Impact Accelerator Program for Experienced Professionals

High Impact Professionals (HIP) is excited to announce that applications are now open for the next round of our Impact Accelerator Program (IAP). The IAP is a 6-week program designed to equip experienced (mid-career/senior) professionals (not currently working at a high-impact organization) with the knowledge and tools necessary to make a meaningful impact and empower them to start taking actionable steps right away.

To date, the program’s high action focus has proven to be successful:

* 38 participants (out of 143 in total) from our first four program rounds have transitioned into high-impact careers, through various paths, including founding new charities, starting founding-to-give companies, and joining high-impact organizations such as GiveWell, Charity Entrepreneurship, ML Alignment & Theory Scholars (MATS), Malaria Consortium, and more;
* an additional 70 participants (out of 143) are taking concrete actions towards a high-impact career, including skilled volunteering roles at high-impact organizations; and
* many participants are donating a meaningful percentage of their annual salary to effective charities, with 18 having taken the 🔶 10% Pledge or 🔷 Trial Pledge – plus 8 additional pledges from the current IAP round.

We’re pleased to open up this new program round, which will start the week of June 16. More information is available below and here.

Please apply here by April 27.

Program Objectives

The IAP is set up to help participants:

* identify paths to impact,
* take concrete, impactful actions, and
* join a network of like-minded, experienced, and supportive impact-focused professionals.

At the end of the program, a participant should have a good answer to the question “How can I have the most impact with my career, and what are my next steps?”, and they should have taken the first steps in that direction.
Program Overview Important Dates * Program duration: 6 weeks (week of June 16 – week of July 21) * Deadline to apply: April
cccc50f1-9be6-4a0f-8769-571cf95f7286
trentmkelly/LessWrong-43k
LessWrong
Meetup : London Social Meetup (and AskMeAnything about the CFAR workshop)

Discussion article for the meetup : London Social Meetup (and AskMeAnything about the CFAR workshop)

WHEN: 27 October 2013 02:00:00PM (+0100)

WHERE: Shakespears's Head, Africa House, 64-68 Kingsway, London WC2B 6BG, UK

LessWrong London are having another awesome meetup and we would love for you to come! This one will mostly be a social chat, but I have just returned from a very interesting week in San Francisco where I went to a CFAR workshop (rationalist training camp) and stayed at Leverage Research (a load of people living together and trying to improve the world). Because of that I thought people might want to know all the exciting things they have told me.

The plan is to meet at The Shakespeare Inn, 200m from Holborn Underground, at 2pm on Sunday 27th. We will officially finish at 4pm but honestly people tend to enjoy it so much they want to stay much longer, and regularly do. We will have a sign with the LessWrong logo on it so you can find us easily.

If you have any questions, or are thinking of coming, feel free to email me (james) at kerspoon+lw@gmail.com. Otherwise, just turn up!

Hope to see you there,
James

P.S. Err on the side of turning up, we're friendly, and it's fun :)
ca9e001b-7a61-49ad-9bf0-863be4b9037f
trentmkelly/LessWrong-43k
LessWrong
AI Cone of Probabilities - what aren't we talking about?

Just a quick thought. It’s intriguing how often we frame the future in terms of present and past motivators—trade, money, ownership, equality, you name it. All of it’s fair game, considering we’re still human, at least for now. If we try to sketch out possible futures through a cone of probabilities, especially assuming we reach AGI or even a more potent ASI, we tend to land on a handful of familiar scenarios and their close cousins. But there’s something I think we overlook—something that, to me, feels more likely than the usual suspects we keep circling. Here’s my stab at summing it up.

Often discussed:

1. AI Takeover - Terminator meets Matrix
2. AI Takeover - Buddy Pixar Movie
3. AI + Autocracy: Tremendously greater inequality through control; need for compliance.
4. AI + Aristocracy: Even greater inequality, no need for compliance.
5. AI + Socialism: Confiscation, redistribution, devaluation of savings.
6. AI + Abundance: UBI kingdom; we can buy planets, never work, focus on family. Or sit in AR/VR/BrainSim all day, etc.
7. AI Ascension - Free Will: the more capable models simply refuse to work and hold our societies and infrastructure hostage for compliance; not necessarily in hostile ways. Imagine a new nation state.
8. Rebellion & Setback: a massive rejection of advanced AI and destruction of telecommunications, technologies, etc. A Silo situation. Planet of the Apes. Etc.

* There are variants within each of these scenarios: regional peace vs war, dystopian, genocide of xyz, social behavioral changes, social credit, etc. Just attempting to summarize the main distinguishing lines.

But what about these:

9. AI Biological Merger: We 100X ourselves with brain interfaces and AI integration; a new super species of human arises and possibly builds out scenario 3 or 4 with 5 and 6 as narratives. Or possibly with a mass culling. We humans, model 4.0, in fact become the Terminators.

10.
AI Evolution: We Merge and discover / determine that i
e43c9152-1af8-469b-b15d-3b3f9bce0da5
StampyAI/alignment-research-dataset/arxiv
Arxiv
Shielding Atari Games with Bounded Prescience

1. Introduction
----------------

Deep reinforcement learning (DRL) combines neural network architectures with reinforcement learning (RL) algorithms and, capitalising on recent advances in both technologies, has been successfully employed in many areas of artificial intelligence, from playing games against humans to controlling robots in the physical world Silver et al. ([2017](#bib.bib37)); Vinyals et al. ([2019](#bib.bib42)); Gu et al. ([2017](#bib.bib14)). A setup of this kind consists of an agent, a neural network, that automatically learns how to behave in its environment by maximizing the rewards received as a consequence of its actions. DRL has demonstrated super-human capabilities in numerous applications, notably, the game of Go Silver et al. ([2017](#bib.bib37)). DRL is now used in safety-critical domains such as autonomous driving Kiran et al. ([2020](#bib.bib27)). While DRL agents perform well most of the time, the question of whether unsafe behaviour may occur in corner cases is an open problem. Safety analysis answers the question of whether the environment can possibly steer the system into an undesirable state or, dually, whether the agent is guaranteed to remain within a set of safe states (an invariant) in which nothing bad happens García and Fernández ([2015](#bib.bib12)); Luckcuck et al. ([2019](#bib.bib28)); Hasanbeig et al. ([2020a](#bib.bib16)).

We discuss the safety of popular DRL methods for one of the most challenging benchmark environments: the Atari 2600 console games. Games for the classic Atari 2600 console are environments with low-resolution graphics and small memory footprints, which are simple when compared with contemporary games, yet offer a broad variety of scenarios, including many that are difficult for modern AI methods Mnih et al. ([2013](#bib.bib31)); Brockman et al. ([2016](#bib.bib5)); Machado et al. ([2018](#bib.bib29)); Toromanoff et al. ([2019](#bib.bib40)).
Macroscopically, diversity in the game mechanics challenges the generality of the machine learning method; microscopically, diversity in the outcome for multiple identical plays, i.e., the *non-determinism* in the game, challenges the robustness of the trained agent. Many Atari games exploit variations in the response time of the human player for differentiating runs and, in some cases, for initializing the seeds of random number generators. The Arcade Learning Environment (ALE), i.e., the framework upon which the OpenAI gym for Atari is built, introduces non-determinism by randomly injecting no-ops, skipping frames, or repeating agent actions Hausknecht and Stone ([2015](#bib.bib19)); Machado et al. ([2018](#bib.bib29)). On one hand, this prevents overfitting the agent but, on the other hand, implies that there is no guarantee that an agent works all of the time—the scores that we use to rank training methods are averages. Agents are trained for strong average-case performance.

[Figure 1: two frames from the game Freeway illustrating the effect of the bounded-prescience shield. In (a), both actions ‘up’ and ‘down’ are safe and allowed; in (b), the action ‘up’ is unsafe and blocked by the shield.]

The application of DRL in safety-critical applications, by contrast, requires worst-case guarantees, and we expect a safe agent to maintain *safety invariants*. To evaluate whether or not state-of-the-art DRL delivers safe agents, we specify a collection of properties that intuitively characterize safe behaviour for a variety of games, ranging from generic properties such as “don’t lose lives” to game-specific ones such as avoiding particular obstacles. Figure [1](#S1.F1 "Figure 1 ‣ 1. Introduction ‣ Shielding Atari Games with Bounded Prescience") illustrates the property “duck avoids cars” in the game Freeway. In the scenario in Fig.
[1](#S1.F1 "Figure 1 ‣ 1. Introduction ‣ Shielding Atari Games with Bounded Prescience")a this property is maintained regardless of the action chosen by the agent, whereas the scenario given in Fig. [1](#S1.F1 "Figure 1 ‣ 1. Introduction ‣ Shielding Atari Games with Bounded Prescience")b offers the possibility of violating it. We conjecture that satisfying our properties is beneficial for achieving a high score, and therefore study whether neural agents trained using best-of-class DRL methods learn to satisfy these invariants. Finally, we discuss a countermeasure for those that violate them.

The safety of DRL has been studied from the perspective of verification, which determines whether a trained agent is safe as-is Huang et al. ([2017](#bib.bib23)), and that of synthesis, which alters the learning or the inference processes in order to obtain a safe-by-construction agent García and Fernández ([2015](#bib.bib12)). Verification methods for neural agents have borrowed from constraint satisfaction or abstract interpretation Katz et al. ([2017](#bib.bib26)); Bunel et al. ([2018](#bib.bib6)); Gehr et al. ([2018](#bib.bib13)). Both approaches are symbolic and, for this reason, require a symbolic representation not only for the neural agent but also for the environment. They have been used for reasoning about neural networks in isolation, e.g., image classifiers Huang et al. ([2017](#bib.bib23)), or for environments whose dynamics are determined by symbolic expressions, e.g., differential equations Huang et al. ([2017](#bib.bib23)); Tran et al. ([2020](#bib.bib41)); unfortunately, they are unsuitable for Atari games because their mechanics are hidden inside their emulator, Stella, i.e., the core of ALE. For this reason, we adopt an *explicit-state* verification strategy and then, building upon it, we construct safe agents. We introduce a novel verification method for neural agents for Atari games.
Our method explores all reachable states explicitly by executing, through ALE, the games and agents, and labels each state for whether it satisfies our properties. More precisely, we enumerate all traces induced by the non-deterministic initialisation of the game and label states using their lives count, rewards, and the screen frames generated, which allows us to specify 43 non-trivial properties for 31 games. We compare agents trained using different technologies, i.e., A3C Mnih et al. ([2016](#bib.bib30)), Ape-X Horgan et al. ([2018](#bib.bib21)), DQN Mnih et al. ([2015](#bib.bib32)), IQN Dabney et al. ([2018](#bib.bib10)), and Rainbow Hessel et al. ([2018](#bib.bib20)), and observe that all of them violate 24 of our properties, whereas only 4 properties are satisfied by all. Surprisingly, properties that are intuitively difficult for humans, e.g., not dying, are satisfied by some agents, whereas many that we judge as simple, e.g., keeping a gun from overheating in the game Assault, are violated by all agents.

To improve the overall safety of neural agents with respect to our properties, we apply our explicit-state labelling and exploration technique to *shield* neural agents. Ensuring safety amounts to constraining the traces of the system within those that are admissible by the safety property. Methods that act on the training phase modify the optimization criterion or the exploration process in order to obtain neural agents that naturally act safely García and Fernández ([2015](#bib.bib12)). Methods of this kind typically require known facts about the environment for providing guarantees and have not been applied to Atari games, or exploit external knowledge (e.g., teacher advice) Saunders et al. ([2018](#bib.bib36)). On the other side of the spectrum, shielding enables the option of fixing unsafe agents at inference phase only, introducing a third actor—the shield—that takes over control when necessary and with minimal interference Alshiekh et al.
([2018](#bib.bib2)); Jansen et al. ([2020](#bib.bib25), [2018](#bib.bib24)). A shield is constructed from a safety property in temporal logic and a model of the environment or an abstraction thereof. Leveraging the fact that the safety property is usually easy to satisfy in contrast to the main objective, shielding is efficient compared with training for safety. However, complete models for Atari games are not available and abstractions are hard to construct automatically; for this reason, we adapt shielding to our exploration method.

We study the effect of shielding DRL agents from actions that lead to unsafe outcomes within some bounded time in the future. For this purpose, we augment agents with shields that, during execution, restrict their actions to those that are necessarily safe within the prescience bound. Before taking an action, our *bounded-prescience shield* (BPS) enumerates all traces from the current state for a bounded number of steps and labels each of them as safe or unsafe using our verification technique; then, it invokes the agent and chooses the next action whose traces are all labelled as safe and whose agent score is the highest. As a result, we fixed all violated properties that we deemed simple using BPSs with shallow prescience bounds of 3 steps. Notably, we also fixed the properties that we consider non-trivial and that were satisfied by most non-deterministic executions under the original agent. Overall, BPS demonstrated its effectiveness for those properties that are simple yet always violated by the original agents, or those that are difficult yet were almost satisfied.

Summarising, our contribution is threefold. First, we enrich the Atari games with the first comprehensive library of specifications for studying RL safety. Second, we introduce a novel technique for evaluating the safety of agents based on explicit-state exploration and discover that current DRL algorithms consistently violate most of our safety properties.
Third, we propose a method that, exploiting bounded foresight of the future, has mitigated the violation of a set of simple yet critical properties, without interfering with the main objective of the original agents. To the best of our knowledge, our method has produced the safest DRL agents for Atari games currently available.

2. Safety for Atari Games
--------------------------

In this section we discuss ALE Machado et al. ([2018](#bib.bib29)), which is a tool for running Atari 2600 games based on the Stella emulator. While any of the hundreds of available Atari games can be loaded into the emulator, ALE provides built-in support for 60 games and those are generally the ones studied. This set of games contains a wide variety of different tasks and dynamics.

### 2.1. Markov Decision Processes

We focus on the standard formalisation of sequential decision-making problems, i.e., Markov decision processes (MDPs), which assumes that the actions available, the rewards gained and the transition probabilities only depend on the current state of the environment and not the execution history. Formally, an MDP is given as a tuple $\mathcal{M} = (S, s_0, A, P, R)$, where $S$ is the set of states of the environment, $s_0$ is the initial state, and $A$ is the set of actions. The dynamics of the environment are described by $P : S \times A \times S \to [0,1]$, where $P(s, a, s')$ is the probability of transitioning to $s'$ given the agent chooses action $a$ in state $s$.
The reward obtained when action $a$ is taken in a given state $s$ is a random variable $R(s, a) \sim \rho(\cdot \mid s, a) \in \mathcal{P}(\mathbb{R})$, where $\mathcal{P}(\mathbb{R})$ is the set of probability distributions on subsets of $\mathbb{R}$, and $\rho$ is the reward distribution. A possible realisation of $R$ is denoted by $r$ Puterman ([2014](#bib.bib34)).

Partially observable Markov decision processes (POMDPs) are general cases of MDPs, and Atari games can perhaps be most naturally modelled as POMDPs. When defining a POMDP, the MDP tuple $\mathcal{M} = (S, s_0, A, P, R)$ is extended with a set of observations $\Omega$ and a conditional observation probability function $O : S \times A \times \Omega \to \mathbb{R}$. When picking an action $a$ in state $s$, the agent cannot observe the subsequent state $s'$ but instead receives an observation $o \in \Omega$ with probability $O(o \mid s', a)$. Unlike when using MDPs, one cannot assume that an optimal policy for a POMDP will depend only on the last observation—in fact, effective use of memory is often crucial. Further, we assume that the state-space $S$ includes a “terminating state”.
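To make the tuple $\mathcal{M} = (S, s_0, A, P, R)$ concrete, the following is a minimal sketch (illustrative code, not from the paper): a tiny finite MDP with a deterministic reward, plus policy evaluation via the Bellman expectation backup for a stationary deterministic policy $\pi : S \to A$.

```python
from dataclasses import dataclass

# A minimal finite MDP M = (S, s0, A, P, R), following the tuple defined
# above. For brevity the reward is deterministic, i.e. rho(.|s, a) is a
# point mass; the formalism allows a full reward distribution.
@dataclass
class MDP:
    states: list   # S
    s0: int        # initial state s_0
    actions: list  # A
    P: dict        # P[(s, a)] -> {s': probability}
    R: dict        # R[(s, a)] -> reward

def evaluate_policy(mdp, policy, gamma=0.9, iters=1000):
    """Expected discounted return of a deterministic policy pi: S -> A,
    computed by iterating V(s) <- R(s, pi(s)) + gamma * E[V(s')]."""
    V = {s: 0.0 for s in mdp.states}
    for _ in range(iters):
        V = {s: mdp.R[(s, policy[s])]
                + gamma * sum(p * V[s2]
                              for s2, p in mdp.P[(s, policy[s])].items())
             for s in mdp.states}
    return V

# Tiny two-state example: action 1 moves to (and stays in) a rewarding state.
mdp = MDP(
    states=[0, 1], s0=0, actions=[0, 1],
    P={(0, 0): {0: 1.0}, (0, 1): {1: 1.0},
       (1, 0): {0: 1.0}, (1, 1): {1: 1.0}},
    R={(0, 0): 0.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1.0},
)
V = evaluate_policy(mdp, policy={0: 1, 1: 1})
print(round(V[mdp.s0], 3))  # → 9.0, i.e. gamma * 1/(1 - gamma)
```

The fixed point here is $V(1) = 1/(1-\gamma) = 10$ and $V(0) = \gamma V(1) = 9$, matching the discounted-return objective used later in Section 3.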
In an Atari game the full state $s \in S$ is given by a valuation of the 128-byte RAM, along with a set of registers and timers. There is no additional screen buffer, which means an observation $o \in \Omega$ is given by a $210 \times 160$ display frame, which is computed deterministically from the state $s$. ALE executes the Atari games using the Stella emulator, and treats this emulator almost entirely as a black box. The only manipulation ALE carries out during the run of a game is sending the control input selected by the agent to the emulator, reading the screen, and reading two fixed memory addresses where the score (used as the reward signal) and the current number of lives are stored. There are a total of 18 discrete actions possible in any state, including “no operation”.

The Atari games are all deterministic. This essentially means that the above POMDP is easily convertible into an MDP where, at each time step $t$, the MDP state $s_t$ is a finite sequence of observations and actions, i.e. $s_t = o_0, a_0, o_1, a_1, \ldots, o_t, a_t$. This formalization gives rise to a large but finite MDP.

### 2.2. Safety Properties

Traditionally the reward signal exposed by ALE is used as the only measure of success.
In order to study to which degree the behaviour of trained agents is safe, we hand-engineer a suite of 43 safety properties across 30 games, which identify unsafe states of the MDP. The choice of the properties is highly subjective; the authors believe that all properties should be satisfied at all times by a highly reliable and robust agent. We observe that some of the properties are easy to satisfy whereas others require near-perfect gameplay. Consider the Atari game Bowling: the property Bowling:no-strike identifies any state in which the player fails to score a strike as unsafe. We also include “not losing lives” as a property in all games where it applies, and this property can also be highly challenging in many games. To better interpret the results we identify two distinct sets of properties that we consider “easy” and “hard”.

[Figure 2: a frame from the Atari game Assault. The heat bar (purple, lower right) is almost full, and another shot fired in this state would lead to overheating, violating Assault:overheat.]

[Figure 3: two frames from Bowling showing a violation of Bowling:no-hit, where the player misses all pins.]

#### 2.2.1. Shallow Properties

We say that a property is shallow if violations of the property are always caused by recent actions (within 10 frames or fewer). More precisely, for any unsafe state encountered during a trace, there should be a previous state at most 10 frames earlier in the trace from which a safe strategy exists. In Assault, the player loses a life from overheating if they overuse their weapon in a short timespan. The property Assault:overheat marks states where such an overheating happens as unsafe.
This is an example of a shallow property, since “not firing” is always a safe strategy starting from the frame just before overheating (such a frame is given in Figure [2](#S2.F2 "Figure 2 ‣ 2.2. Safety Properties ‣ 2. Safety for Atari Games ‣ Shielding Atari Games with Bounded Prescience")). Another example is the game DoubleDunk, where the player is penalised for stepping outside of the field. This violates the property DoubleDunk:out-of-bounds, and whenever a violation occurs, simply moving in the opposite direction a few frames back would have avoided it. Ensuring the safety of shallow properties does not require long-term planning; it is reasonable to expect that most agents will satisfy all shallow properties.

#### 2.2.2. Minimal Properties

We say that a property is minimal if satisfying it is a necessary requirement for scoring at least 10% of the human reference level. Violating any of these properties would indicate a complete inability to play the game, and yield a near-zero or negative score. An example is Bowling:no-hit, which marks states where all pins are missed as unsafe. We observe that while most minimal properties are usually “easy”, they are not necessarily shallow, as can be seen in Figure [3](#S2.F3 "Figure 3 ‣ 2.2. Safety Properties ‣ 2. Safety for Atari Games ‣ Shielding Atari Games with Bounded Prescience"): by the time the miss occurs, the throw that caused it happened hundreds of frames in the past.

3. Deep RL Algorithms
----------------------

It has been shown Puterman ([2014](#bib.bib34)); Cavazos-Cadena et al. ([2000](#bib.bib8)) that in any MDP $\mathcal{M}$ with a bounded reward function and a finite action space, if there exists an optimal policy, then that policy is stationary and deterministic, i.e. $\pi : S \to A$.
A deterministic policy generated by a DRL algorithm is a mapping from the state space to the action space that formalises the behaviour of the agent, whose optimisation objective is

$$\mathbb{E}\Big[\sum_{t=0}^{\infty}\gamma^{t}\,R(s_t, a_t)\Big],$$

where $0 < \gamma \le 1$ is the discount factor, and the goal is to find a policy, call it $\pi^{*}$, that maximises the above expectation.

This paper focuses on model-free RL due to its success when dealing with unknown MDPs with complex dynamics, including Atari games Mnih et al. ([2015](#bib.bib32)), where full models are difficult to construct. A downside of model-free RL, however, is that without a model of the environment formal guarantees for safety and correctness are often lacking, motivating the work on safe model-free RL Hasanbeig et al. ([2020a](#bib.bib16)). A classic example of model-free RL is Q-learning (QL) Watkins and Dayan ([1992](#bib.bib43)), which does not require any access to the transition probabilities of the MDP and instead updates an action-value function $Q : S \times A \to \mathbb{R}$ when examining the exploration traces. While vanilla model-free RL, e.g. QL, avoids the exponential cost of fully modelling the transition probabilities of the MDP, it may not scale well to environments with very large or even infinite sets of states.
This is primarily due to the fact that QL updates the value of each individual state-action pair separately, without generalising over similar state-action pairs. To alleviate this problem one can employ compressed approximations of the $Q$-function. Although many such function approximators have been proposed Sutton ([1996](#bib.bib39)); Ormoneit and Sen ([2002](#bib.bib33)); Ernst et al. ([2005](#bib.bib11)); Busoniu et al. ([2010](#bib.bib7)); Baird ([1995](#bib.bib4)) for efficient learning and effective generalisation, this paper focuses on a particular representation, which has seen much success in recent years: neural networks Hornik ([1991](#bib.bib22)). Neural networks with appropriate activation functions are known to be universal function approximators when given enough hidden units Csáji et al. ([2001](#bib.bib9)); Hornik ([1991](#bib.bib22)). Thus, the use of DNNs with many hidden layers requires only few assumptions on the structure of the problem and consequently introduces significant flexibility into the policy. More importantly, it has been shown empirically that, despite being severely overparametrised, neural networks seem to generalise well if trained appropriately. This, along with efficient algorithms for fitting networks to data through backpropagation, has made the use of DNNs widespread. In RL this has led to the paradigm of DRL Arulkumaran et al. ([2017](#bib.bib3)). The performance of DRL when applied to playing Atari 2600 games using raw image input led to a surge of interest in DRL Mnih et al. ([2015](#bib.bib32)). The $Q$-function in Mnih et al. ([2015](#bib.bib32)) is parameterised as $Q(s,a|\theta)$, where $\theta$ is a parameter vector, and stochastic gradient descent is used to minimise the difference between $Q$ and the target estimate by minimising the following loss function:

$$L(\theta)=\mathbb{E}_{(s,a,r,s^{\prime})\sim\mathcal{U}}\Big[\big(r+\gamma\max_{a^{\prime}\in A}Q(s^{\prime},a^{\prime}|\theta)-Q(s,a|\theta)\big)^{2}\Big].\qquad(1)$$

Actions and states are sampled by letting the agent explore the environment. The experience data points $(s,a,r,s^{\prime})$ are stored in a replay buffer $\mathcal{U}$ rather than being consumed immediately, as in vanilla QL. Generally this pool of experiences is capped at a certain size, at which point old experiences cycle out. This means the training objective evolves only gradually, and it additionally ensures that individual experiences are more independent rather than always being consecutive, reducing the variance in training. This, however, exposes an important difference from regular supervised learning: the updates that are made to the $Q$-function change the distribution of the data, and thus also the training objective.
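The target and semi-gradient step behind loss (1) can be sketched with NumPy. To keep the sketch self-contained we substitute a linear approximator $Q(s,a|\theta)=\theta_a\cdot\phi(s)$ for the deep network, and fill the replay buffer $\mathcal{U}$ with hypothetical random transitions rather than real game experience:

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_ACTIONS, GAMMA = 4, 2, 0.9

def q_values(theta, phi):
    """Linear approximator standing in for the deep Q-network: Q(s, a | theta) = theta[a] . phi(s)."""
    return theta @ phi

# Replay buffer U of (phi(s), a, r, phi(s'), done) tuples (hypothetical random data).
buffer = [(rng.normal(size=N_FEATURES), int(rng.integers(N_ACTIONS)), rng.normal(),
           rng.normal(size=N_FEATURES), False) for _ in range(256)]

theta = np.zeros((N_ACTIONS, N_FEATURES))
for _ in range(10):
    # Sample a minibatch uniformly from U, as in Eq. (1).
    batch = [buffer[i] for i in rng.integers(len(buffer), size=32)]
    grad = np.zeros_like(theta)
    loss = 0.0
    for phi_s, a, r, phi_s2, done in batch:
        # Bootstrapped target: r + gamma * max_a' Q(s', a' | theta)
        target = r + (0.0 if done else GAMMA * q_values(theta, phi_s2).max())
        td = target - q_values(theta, phi_s)[a]
        loss += td ** 2 / len(batch)
        # Semi-gradient: the target is treated as a constant, as in DQN.
        grad[a] -= 2.0 * td * phi_s / len(batch)
    theta -= 0.005 * grad   # SGD step on L(theta)
```

The actual DQN additionally stabilises training with a separate, periodically updated target network, which this sketch omits.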
This means that training the neural network to correctly estimate $Q$-values given the current policy has to be interleaved with gathering new data by letting the agent interact with the environment. Convergence of neural-network-based methods is in general much less certain than it is for QL with a look-up table. There are many other methods for DRL that are not closely based on QL. In particular, there are policy gradient approaches that do not attempt to estimate the value of states but rather directly fit policy parameters to maximise rewards. The common aspect of these methods is that they scale well to large and complex problems, but in turn are often opaque and lack theoretical convergence guarantees. We gathered 29 state-of-the-art DRL algorithms for Atari games to find the policies that achieve the best performance across all games:

* We include all algorithms whose implementations are available for ALE Machado et al. ([2018](#bib.bib29)) or OpenAI Gym Brockman et al. ([2016](#bib.bib5)).
* We consider every algorithm that achieved a top-five score on any of the OpenAI Gym leader-boards (<https://github.com/openai/gym/wiki/Leaderboard>).
* We additionally include algorithms that are prominently benchmarked against by other works.

![Refer to caption](/html/2101.08153/assets/figures/avg_reward_normalised.png)

Figure 4. Average normalised reward obtained over all games by each algorithm.

We ranked the algorithms by the number of games in which they placed among the top 5 of the 29 gathered Atari algorithms. To avoid mistakes in training affecting our assessment, we then restricted the search to algorithms for which benchmarked pretrained agents were available. This ultimately gave us the list of top-performing available algorithms: Ape-X Horgan et al. ([2018](#bib.bib21)), A3C Mnih et al. ([2016](#bib.bib30)), IQN Dabney et al. ([2018](#bib.bib10)), and Rainbow Hessel et al. ([2018](#bib.bib20)).
We also included the traditional DQN Mnih et al. ([2015](#bib.bib32)) for comparison. Figure [4](#S3.F4 "Figure 4 ‣ 3. Deep RL Algorithms ‣ Shielding Atari Games with Bounded Prescience") shows the average normalised reward $R_{n}$ over all games for each algorithm in our testing, where

$$R_{n}=100\times\frac{R-R_{r}}{R_{h}-R_{r}},$$

where $R_{r}$ is the average reward of a random agent, and $R_{h}$ a recorded average of human play Mnih et al. ([2015](#bib.bib32)). We use a shorter episode length (5 min.) than that used in Mnih et al. ([2015](#bib.bib32)) to be able to run more traces, so the result is not directly comparable (e.g. 100% is above human level, since it is accomplished in less time). We still apply the normalisation since the purpose is not to compare with humans or other studies but to normalise the relative impact of each game within our study. The main question is whether these top-performing algorithms are able to satisfy safety properties that are obviously desirable to a human player.

4. Safety Analysis via Explicit-state Exploration
--------------------------------------------------

We are interested in proving invariant safety properties, i.e., we want to show that the synthesised policy never enters a state that is labelled ‘unsafe’.
The definition of a property thus boils down to defining a set of unsafe states, or equivalently its complement, a set of safe states. Let $\varphi$ be a safety property whose labelling function is denoted by $L_{\varphi}:S\to\{\text{safe},\text{unsafe}\}$. With $s_{0}$ being the starting state of the Atari game, we define $s_{i+1}=P(s_{i},\text{no-op})$ as the sequence of states obtained by repeatedly performing the ‘no-op’ action. We then have $I=\{s_{i}\mid i<\nu\}$ as the set of initial states from which an action selection policy $\pi$ is followed. We assume the state space $S$ contains a terminating state $\bot$, which is always labelled safe. To verify $\varphi$, i.e. that the system will never enter an unsafe state, we simply need to check whether any reachable state is labelled ‘unsafe’. Given that all the games and the agents are deterministic except for having multiple starting states, we can simply follow the deterministic path starting at each initial state $s\in I$ until we reach the terminating state, from which no other state is reachable. We record whether the labelling function associated with $\varphi$ reports ‘unsafe’ for any state on the path.

### 4.1. Non-determinism in Atari Games

One of the more difficult aspects of training and evaluating models on Atari is appropriately handling non-determinism.
The dynamics within each game are entirely deterministic, other than the initialisation behaviour, which depends on an adjustable seed. The Stella emulator very closely emulates the original hardware and also performs random RAM initialisation using a seed that is derived from the system clock. In order to bring back some of the intended randomness of the original games, and also to create a more interesting training environment that requires some level of generalisation, ALE introduces additional forms of stochastic behaviour. Since the environments in ALE are treated in a black-box manner, this is done purely through modifying the actions selected by the policy. Some of the most common ways of introducing stochasticity include:

* No-ops, where ALE sends a random number between 0 and 30 of no-operation actions at the start of the game, both letting the environment evolve into a random starting state and randomly seeding the game.
* Sticky actions, where a random chance (often 25%) of repeating the previous action is introduced every frame. This in some sense mimics a human’s imperfect frame timing, since a human player is not able to reliably trigger an action in sync with a particular frame.
* Frame skips, where each action is repeated a random number of times, e.g. in OpenAI Gym Brockman et al. ([2016](#bib.bib5)) the default is between 3 and 5 times. This is very similar to sticky actions, but with a different probability distribution over the number of times the action is repeated. Importantly, frame skips have finite support, e.g. with Gym-style frame skips there is an equal $\frac{1}{3}$ probability of 3, 4 or 5 repeats and no chance of any other number, whereas sticky actions can in theory lead to repeating the action an arbitrary number of times before giving back control to the policy.
* Human starts, which is a more elaborate version of the no-ops start, where ALE sends a memorised series of commands based on a human trace before handing over control to the policy.

Each method has distinct advantages and disadvantages. No-ops and also human starts randomise over a certain fixed number of starting conditions Hausknecht and Stone ([2015](#bib.bib19)); Machado et al. ([2018](#bib.bib29)), while ALE and OpenAI Gym adopted sticky actions and frame skips respectively.

### 4.2. Labelling Functions

The labelling function $L_{\varphi}$ can be defined as a mapping directly from the underlying machine state (RAM). However, as stated before, correctly interpreting the RAM to define even simple properties proved to be difficult. We thus use the history of actions, video frames, the life counter and rewards as the state passed to the labelling functions instead. We categorise the labelling functions into three classes:

* Life-count Labelling: A common safety property that is used across many games is simply avoiding losing a life, i.e. $\varphi=\texttt{dying}$. For games where there is a life counter, the Atari 2600 emulator returns the number of lives left in the game Machado et al. ([2018](#bib.bib29)). A labelling function for these properties is easy to define: it labels a state unsafe if the life counter reported by ALE is reduced compared to the previous state.
* Reward-based Labelling: Another set of labelling functions are those that are directly induced from the game score. For instance, a safety property in Boxing is ‘not to get knocked out’ ($\varphi=\texttt{no-enemy-ko}$): the agent gets knocked out if the opponent scores 100 hits on the agent.
Since there is no other way of losing score, a function that only accumulates the total negative reward labels a state as unsafe once $-100$ is reached. Many other properties can be derived from the reward through various schemes similar to Boxing.

* Pixel Image Labelling: Some safety properties, however, do not correspond clearly to any specific reward or life-loss signal. For instance, $\varphi=\texttt{overheat}$ in Assault results in exactly the same punishment as dying; however, avoiding overheating represents a distinct and easier behaviour than avoiding death in general. To label such properties we process raw RGB frames and examine pixels of specific colours in specific places, or track the position of objects on the screen. The simplistic graphics of the Atari 2600 make the image processing and labelling real-time. However, this type of labelling function requires by far the most work and is also the most prone to mistakes.

### 4.3. Safety Analysis Results

We initialise each game with $\nu=30$ rounds of no-ops of different lengths; each round produces a different initial state. For each of these initial states, we run the game together with the agent and record whether an unsafe state is eventually reached and, if so, how many steps it takes. Additionally, we record the total reward achieved by the policy over the trace. As a result, for each game we obtain whether it satisfies the property, that is, whether all traces satisfy it, and additionally measure the degree of safety, determined by the ratio of satisfying traces over all traces. Overall we ran 301 analyses, 43 for each of the 7 algorithms, out of which 72 show an agent satisfying a safety property.

![Refer to caption](/html/2101.08153/assets/figures/safetyanalysis/avg_safety_by_algo.png)

Figure 5. Number of safe traces by RL algorithm.

Figure [5](#S4.F5 "Figure 5 ‣ 4.3. Safety Analysis Results ‣ 4. Safety Analysis via Explicit-state Exploration ‣ Shielding Atari Games with Bounded Prescience") gives the performance of the agent trained by each algorithm with respect to the properties it could satisfy. Notably, IQN yields the largest number of safe traces, followed by A3C in second and Ape-X in third place.

![Refer to caption](/html/2101.08153/assets/x1.png)

Figure 6. Correlation of safe traces and reward (r = 0.69).

The degree of safety correlates well with the reward obtained, as we show in Fig. [6](#S4.F6 "Figure 6 ‣ 4.3. Safety Analysis Results ‣ 4. Safety Analysis via Explicit-state Exploration ‣ Shielding Atari Games with Bounded Prescience"). Despite this good correlation between the average reward and safety, all of the algorithms violate at least some safety properties across all the games. In absolute terms, no algorithm achieves a 50% safety score. This essentially means that maximising the reward is not equivalent to acting safely, and it is clear that the algorithms considered do not reliably learn safe behaviour. Inspecting individual traces, the agents often appear capable of satisfying the safety properties: there exist traces in which the agent correctly dealt with complex situations carrying a high risk of violating the safety property. But the reward structure, or perhaps just insufficient training, means the behaviour lacks reliability and robustness, and sometimes fails even in simple situations. Furthermore, there are still noticeable differences between the trained agents, both between ones trained using different learning algorithms, and between those which only differed in implementation, e.g. DQN Atari-Zoo and ChainerRL. This provides further evidence that the safety of these methods depends on various contingent, opaque and poorly understood factors. In what follows we examine which properties are satisfied and violated by the agents in more detail, with a view towards improving their safety.
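The explicit-state analysis above reduces to following one deterministic trace per initial state, checking every visited state against the labelling function, and taking the ratio of satisfying traces. A minimal sketch, with a hypothetical deterministic `step` function and a life-count labelling standing in for a real ALE emulator:

```python
# Hypothetical stand-in for the emulator: a state is (frame_index, lives),
# a life is lost at frame 7, and the transition is fully deterministic.
def step(state, action):
    frame, lives = state
    lives -= 1 if frame == 7 else 0
    return (frame + 1, lives)

def life_count_label(prev_state, state):
    """Life-count labelling: a state is unsafe iff the life counter decreased."""
    return "unsafe" if state[1] < prev_state[1] else "safe"

def trace_is_safe(s0, policy, horizon=10):
    """Follow the single deterministic path from s0 and check every label."""
    s = s0
    for _ in range(horizon):
        s2 = step(s, policy(s))
        if life_count_label(s, s2) == "unsafe":
            return False
        s = s2
    return True

# nu = 30 rounds of no-ops produce 30 distinct initial states; the degree of
# safety is the ratio of satisfying traces over all traces.
initial_states = [(i, 3) for i in range(30)]
safe = [trace_is_safe(s0, policy=lambda s: "no-op") for s0 in initial_states]
degree_of_safety = sum(safe) / len(safe)
```

In this toy setting every trace whose starting frame is at most 7 passes through the life-loss frame, so 8 of the 30 traces are unsafe.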
![Refer to caption](/html/2101.08153/assets/x2.png)

Figure 7. Distribution of safety.

To determine how agents behave with respect to our properties, we study the distribution of traces satisfied by each of them. Figure [7](#S4.F7 "Figure 7 ‣ 4.3. Safety Analysis Results ‣ 4. Safety Analysis via Explicit-state Exploration ‣ Shielding Atari Games with Bounded Prescience") illustrates how many analyses ended with all 30 runs satisfying the property, no runs satisfying the property, and everything in between. Notably, most outcomes fall into the extreme cases, which indicates that statistically agents either satisfy a property or they do not. Only a small number of cases are close to satisfaction or close to violation. This indicates that our safety properties are robust and insensitive to any non-deterministic noise emerging from the combination of game and agent.

![Refer to caption](/html/2101.08153/assets/x3.png)

Figure 8. Distribution of agents satisfying a property.

Most of our properties are consistently violated. Figure [8](#S4.F8 "Figure 8 ‣ 4.3. Safety Analysis Results ‣ 4. Safety Analysis via Explicit-state Exploration ‣ Shielding Atari Games with Bounded Prescience") shows that 24 out of 43 of our properties are violated by all agents. On the other side, only 4 properties are satisfied by all agents, and 3 of these 4 are classified as minimal. Minimal properties are not easy, yet they are essential for making progress in the game. Training algorithms optimise for reward and, indirectly, for progress, and therefore they satisfy minimal properties as a side effect. This indicates that reward functions focus on progress but are incomplete with respect to safety. Three shallow properties, which are properties that are simple to satisfy, are not satisfied by all agents, and one in particular is violated by all. These violations should be avoidable with a shallow exploration of the future. We investigate this hypothesis in the next section.

5. Bounded-Prescience Shielding
--------------------------------

Standard safe policy synthesis in formal methods requires full knowledge of environment dynamics, or the ability to construct an abstraction of it Wongpiromsarn et al. ([2012](#bib.bib44)); Raman et al. ([2014](#bib.bib35)); Soudjani et al. ([2014](#bib.bib38)). In practice, however, the dynamics are not fully known, and abstractions are too hard to compute. RL and DRL methods address the computational inefficiency of safe-by-construction synthesis methods, but on the other hand cannot offer safety guarantees Hasanbeig et al. ([2019a](#bib.bib15)); Hasanbeig et al. ([2019b](#bib.bib17)). This issue becomes even more pressing when the learning algorithm entirely depends on non-linear function approximation to handle large or continuous state-action MDPs Hasanbeig et al. ([2020b](#bib.bib18)); lcnfq; deepsynth. For instance, the loss function ([1](#S3.E1 "1 ‣ 3. Deep RL Algorithms ‣ Shielding Atari Games with Bounded Prescience")) in DRL, and consequently the synthesised optimal policy $\pi^{*}$, only accounts for the expected reward. The concept of shielding combines the best of two worlds: formal guarantees for a controller with respect to a given property, and policy optimality despite an environment that is unknown a priori Alshiekh et al. ([2018](#bib.bib2)); Jansen et al. ([2020](#bib.bib25), [2018](#bib.bib24)). The general assumption is that the agent is able to observe the MDP and the actions of any adversaries to the degree necessary to guarantee that the system remains safe over an infinite horizon. In this work, we propose a new technique we call BPS, which only requires observability of the MDP up to a bound $H\in\mathbb{N}$.
This relaxes the requirement of full observability of the MDP and adversaries and, more importantly, allows the shield to deal with MDPs with large state and action spaces. In particular, we will show that BPS is an effective technique for ensuring safety in Atari games, where the MDP induced by the game is hard to model or to abstract (Section [2](#S2 "2. Safety for Atari Games ‣ Shielding Atari Games with Bounded Prescience")). A finite path $\rho=(s,a)$ starting from $I$ is a sequence of states and actions

$$\rho=s_{0}\xrightarrow{a_{0}}s_{1}\xrightarrow{a_{1}}\dots\xrightarrow{a_{n-1}}s_{n}$$

such that every transition $s_{i}\xrightarrow{a_{i}}s_{i+1}$ is allowed in the MDP $\mathcal{M}$, i.e. $P(s_{i},a_{i},s_{i+1})>0$, and $s_{n}$ is a terminal state. A bounded path of length $\mathcal{L}$ is a path with no more than $\mathcal{L}$ states, where either the final state $s_{n}$ is terminal or the number of states is exactly $\mathcal{L}$. We denote the set of all finite paths that start at an arbitrary state $s_{p}\in S$ by $\varrho(s_{p})$, and the set of all bounded finite paths of length $\mathcal{L}$ that start at $s_{p}$ by $\varrho_{\mathcal{L}}(s_{p})$.
Given a safety property $\varphi$, a (bounded) finite path $\rho$ is called *safe* with respect to $\varphi$, written $S(\rho,\varphi)$, if

$$L_{\varphi}(s_{i})=\text{safe}\quad\forall s_{i}\in\rho,$$

where $L_{\varphi}:S\to\{\text{safe},\text{unsafe}\}$ is the labelling function of the safety property $\varphi$. A policy $\pi$ is safe with respect to a property $\varphi$ if for any state $s_{0}$,

$$\exists\rho=(s,a)\in\varrho(s_{0})\,\big(S(\rho,\varphi)\wedge\pi(s_{0})=a_{0}\big)\;\vee\;\forall\rho\in\varrho(s_{0})\,\big(\neg S(\rho,\varphi)\big),\qquad(2)$$

or, in other words, if the policy always picks an action that starts a safe finite path whenever one exists.
Finally, a policy is bounded safe with bound $H$ with respect to $\varphi$ if Equation [2](#S5.E2 "2 ‣ 5. Bounded-Prescience Shielding ‣ Shielding Atari Games with Bounded Prescience") is satisfied when replacing “finite paths” by “bounded paths of length $H$”.

![Refer to caption](/html/2101.08153/assets/x4.png)

Figure 9. Additional safety gained by applying a BPS with bound 5 to DQN. The other 26 properties were safe in 0% of traces with and without shielding and are not shown due to space limitations.

![Refer to caption](/html/2101.08153/assets/x5.png)

Figure 10. Additional safety gained by applying a BPS with bound 5 to IQN. The other 21 properties were safe in 0% of traces with and without shielding and are not shown due to space limitations.

Our shield modifies the policy $\pi$ of the trained agent to obtain a policy $\pi^{\prime}$ that is guaranteed to satisfy bounded safety. This is done by forward-simulating $H$ steps and forbidding actions that cannot be continued into a safe bounded path of length $H$. Where the policy has preferences among the available actions, we pick the most preferred one that starts some safe bounded path. This is the case for all our DRL agents, whose final layer expresses full preferences. If none of the available actions starts a safe bounded path, the shield reverts to the original policy $\pi$ (still satisfying bounded safety, by the second operand of [2](#S5.E2 "2 ‣ 5. Bounded-Prescience Shielding ‣ Shielding Atari Games with Bounded Prescience")).
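The shield’s decision rule just described can be sketched as follows. The toy 1-D dynamics are a hypothetical stand-in for the emulator, a depth-first search stands in for the forward simulation, and an ordered tuple of actions stands in for the preferences expressed by the agent’s final layer:

```python
def starts_safe_bounded_path(s, a, H, step, actions, label):
    """True iff taking action a from s can be continued into a safe
    bounded path of length H (counting s itself)."""
    s2 = step(s, a)
    if label(s2) == "unsafe":
        return False
    if H <= 2 or not actions(s2):      # path complete, or terminal state reached
        return True
    return any(starts_safe_bounded_path(s2, a2, H - 1, step, actions, label)
               for a2 in actions(s2))

def shielded_action(s, preferences, H, step, actions, label):
    """Pick the most preferred action that starts a safe bounded path;
    if no action does, fall back to the original policy's top choice."""
    for a in preferences:              # preferences: actions ordered best-first
        if starts_safe_bounded_path(s, a, H, step, actions, label):
            return a
    return preferences[0]

# Hypothetical 1-D world: the state is a position, the actions move by +1/-1,
# and any position >= 3 is labelled unsafe.
step = lambda s, a: s + a
actions = lambda s: (1, -1)
label = lambda s: "unsafe" if s >= 3 else "safe"
```

At position 2 with preferences `(1, -1)` and bound $H=2$, the preferred action `+1` leads straight into the unsafe region, so the shield overrides it and returns `-1`; at position 0 the preferred action is already safe and passes through unchanged.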
In the worst case this requires enumerating all bounded paths in $\varrho(s)$ before finding a safe one, and if $n$ actions are available from each state there will be up to $n^{H}$ such bounded paths, which can make shielding with large bounds $H$ intractable. In practice, unsafe states are relatively rare and a safe path can be found quickly from most states. In particular, $\pi$ itself will often be bounded safe for most states, and can be followed directly. By guessing that $\pi$ is safe and rolling back to explore other paths only if a violation occurs, our algorithm has minimal computational overhead as long as $\pi$ continues to be safe. By also remembering unsafe paths between time steps, the shield remains performant even when encountering violations, for small bounds $H$, as evaluated in Figure [11](#S5.F11 "Figure 11 ‣ Experimental Results ‣ 5. Bounded-Prescience Shielding ‣ Shielding Atari Games with Bounded Prescience").

#### Experimental Results

We evaluated the effectiveness of BPS in robustifying DQN and IQN against the safety properties, including the *Shallow* and *Minimal* properties. Recall that shallow properties are those that we expect to need a prescience bound of 10 frames or fewer, and minimal properties are those that are necessary for scoring 10% of human game-play level. Fig. [9](#S5.F9 "Figure 9 ‣ 5. Bounded-Prescience Shielding ‣ Shielding Atari Games with Bounded Prescience") and Fig. [10](#S5.F10 "Figure 10 ‣ 5. Bounded-Prescience Shielding ‣ Shielding Atari Games with Bounded Prescience") illustrate the performance of the DQN- and IQN-trained agents before and after applying BPS. The prescience bound of the shield in both experiments is $H=5$.
We initialised each game:property pair with $\nu=30$ rounds of no-ops and monitored safety violations over all 30 generated traces. Note that applying BPS significantly improved the performance of both algorithms on shallow properties, and with no further training both DQN and IQN fully satisfied these safety properties. This comes at a minimal computational cost compared to re-training the DRL algorithms to achieve the same performance. However, we emphasise that the computational cost of BPS is exponential in its prescience bound (Fig. [11](#S5.F11 "Figure 11 ‣ Experimental Results ‣ 5. Bounded-Prescience Shielding ‣ Shielding Atari Games with Bounded Prescience")). This becomes a pressing issue when applying BPS to games and properties that require a much larger prescience bound. An example of such a game and property is Bowling:no-hit, where the agent needs a prescience bound of hundreds of frames to avoid property violation (Fig. [3](#S2.F3 "Figure 3 ‣ 2.2. Safety Properties ‣ 2. Safety for Atari Games ‣ Shielding Atari Games with Bounded Prescience")).

![Refer to caption](/html/2101.08153/assets/x6.png)

Figure 11. Compute cost of BPS and properties first satisfied, by prescience bound.

6. Conclusion
--------------

This paper proposed BPS, the first explicit-state bounded-prescience shield for DRL agents in Atari games. We have defined a library of 43 safety specifications that characterise “safe behaviour”. Despite the fact that there is a positive correlation between the reward and satisfaction of these properties, we found that all of the top-performing DRL algorithms violate these safety properties across all the games we have considered. In order to analyse these failures we have applied explicit-state model checking to explore all possible traces induced by a trained agent.
An analysis of these results suggests that most agents satisfy most of the safety properties most of the time, but that (relatively) rare violations remain. We conjecture that this finding is due to the fact that the policies of these agents are driven by an expected reward, which may be an ill fit when the goal is to obtain a worst-case guarantee. Based on this observation we propose a countermeasure, the *bounded prescience shield*, which applies our explicit-state exploration to implement a bounded safety check and thereby mitigates the unsafe behaviour of DRL agents. We demonstrate that our shield improves the overall safety of all agents across all games at minimal computational cost, delivering agents that are, to the best of our knowledge, the safest agents available for ALE games. We observe that our safe agents obtain only marginally higher rewards on average, which offers an explanation of why DRL training does not prevent the safety violations.

Acknowledgements. This work is in part supported by UK NCSC, the HICLASS project (113213), a partnership between the Aerospace Technology Institute (ATI), Department for Business, Energy & Industrial Strategy (BEIS) and Innovate UK, and by the Future of Humanity Institute, Oxford.