Who is Slimrock Investments Pte. Ltd. and how are they related to Lightcone Infrastructure?

Slimrock Investments Pte. Ltd. is listed on the Alameda County Recorder's records as associated with Lightcone's recent purchase of the Rose Garden Inn in Berkeley. "Assignment of Rents" implies that they are the lender who provided the capital to purchase the property. There is not much information about them on the internet. They appear to be a holding company incorporated in Singapore. However, I was able to find them in a list of creditors in the bankruptcy proceedings for FTX.

What is Lightcone's relationship to Slimrock, and is there any specific reason that the purchase of the Rose Garden Inn was financed through them rather than a more mundane/pedestrian lender?
---
Pitfalls of the agent model

*This is independent research. To make further posts like this possible, please consider* [*supporting me*](https://www.alexflint.io/donate.html)*.*

---

Outline
-------

* I study the agent model as a frame for understanding the phenomenon of entities that exert influence over the future.
* I focus on one implication of the agent model: the assumption of a fixed agent policy that is unchanging over time.
* I study the consequences of this assumption in a number of scenarios. I consider single- and multi-agent scenarios, scenarios consisting of humans, machines, and combinations of humans and machines, and scenarios in which the agent model is being used by an entity to model itself versus other entities. I show that all combinations of these can be covered by six basic cases. For each of the six cases I identify pitfalls in which certain aspects of reality are abstracted away by the application of the agent model, and we examine the consequences of each.
* I draw connections to related work in embedded agency, partial agency, and embodied cognition.
* I conclude that the agent model has significant shortcomings as a frame for the engineering of advanced AI systems.

Introduction
------------

Yesterday I [wrote about](https://www.lesswrong.com/posts/2QuAcx8XQw7rrXzGC/beware-over-use-of-the-agent-model) the agent model as a frame for understanding the real-world phenomenon of entities in the world that exert influence over the future.

![](https://storage.googleapis.com/doc-publisher-images/02d28c3215a11e83.jpg)

I said that we should be cautious about over-using any one frame as a means for understanding any phenomena, because when we only have one frame it is easy to forget about the boundary between the phenomena that we're looking at and the frame that we're looking at it with, which is a problem no matter how powerful or accurate our preferred frame is. This is indeed one reason to consider frames other than the agent model when studying entities that exert influence over the future. A second reason is that there are specific shortcomings of the agent model. I want to examine some of those today.

All frames have shortcomings. A frame is precisely that which gives us a way of seeing that is simpler than reality itself, so the whole point of a frame is that it does not include every aspect of reality. We can always point to ways in which any given frame fails to capture all of reality, and that is no reason on its own to discard a frame. Nevertheless, we may want to de-emphasize frames whose shortcomings are severe enough. In this post I will present the shortcomings of the agent model as I see them, so that you can decide to what extent you want to emphasize the agent model in your own thinking.

I will not be examining the up-sides of the agent model in this post. It is a powerful model with many up-sides, as evidenced by its broad adoption across many disciplines over the last few hundred years. Perhaps a future post will examine the virtues of the agent model.

What is the agent model?
------------------------

Under the agent model, we consider the world as consisting of two parts: the agent, and the environment. Information flows from environment to agent as a sequence of observations, and from agent to environment as a sequence of actions.

![](https://storage.googleapis.com/doc-publisher-images/95103000abd0a372.jpg)

The agent model is a frame that we use to understand entities in the world, and to engineer new entities.
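As a minimal sketch of this interface, consider the following toy loop. Everything here is hypothetical and invented for illustration: the class names, the integer-valued state, and the particular policy are not taken from any real system.

```python
# A minimal sketch of the agent model's interface: observations flow from
# environment to agent, and actions flow back. All names are illustrative.

class Environment:
    def __init__(self):
        self.state = 0

    def observe(self) -> int:
        # Information flowing from environment to agent.
        return self.state

    def step(self, action: int) -> None:
        # Information flowing from agent to environment.
        self.state += action

class Agent:
    def act(self, observation: int) -> int:
        # The policy: under the agent model, this mapping from
        # observations to actions is treated as fixed for all time.
        return -1 if observation > 0 else 1

env, agent = Environment(), Agent()
for t in range(5):
    observation = env.observe()
    env.step(agent.act(observation))
    print(t, env.state)
```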
For example, when constructing a reinforcement learning system to play Atari games, it is natural enough for the engineers of the system to look at things as an agent receiving observations from an environment and sending actions back in return. In this case, the environment is the state of the Atari game, the observations are a 2D grid of pixels shown on the screen, and the actions are the moves available within the game. This is a situation where it makes good sense to use the agent model.

Consider another situation where it makes good sense to use the agent model. An engineer designing a robot vacuum might consider an environment consisting of various obstacles and patches of floor to be vacuumed, observations consisting of "bump" notifications generated when the robot touches a wall, and actions consisting of turning the left and right wheels forward or backward at different speeds. A robot vacuum is not as separate from its environment as an Atari game-playing AI is, but the agent model is still a fairly expedient frame within which human engineers might solve design problems concerning a robot vacuum. This essay works through some more complicated scenarios in which use of the agent model may give rise to problems.

Note that just because the human engineers of a system use the agent model as a frame for constructing a system does not mean that the system itself uses the agent model internally to model itself or others. A house constructed by engineers who used an understanding of thermodynamics to optimize the heat-efficiency of its insulation does not generally contain any computer capable of using the laws of thermodynamics to consider ways that it might design other houses or redesign itself. Similarly, an entity constructed by engineers who used the agent model to aid development need not itself use the agent model within its own decision processes. My robot vacuum may have an internal representation of the size and shape of its own body, but it almost certainly does not have any explicit concept of itself as an agent receiving observations from and sending actions to the environment, much less an explicit concept of others as agents[[1]](#fn-npfwi9dhJH3Ebymaz-1).

### The fixed policy assumption

When looking at the world through the frame of the agent model, the environment is seen as changing over time, but the policy implemented by the agent is seen as fixed. Much of the power of the agent model as a frame for understanding the world comes from this fixed policy assumption.

For example, under inverse reinforcement learning we observe a sequence of actions taken by some entity, then we look for a value function that explains this behavior. We do not model the value function as changing from one step to the next. If it did, inverse reinforcement learning would no longer work[[2]](#fn-npfwi9dhJH3Ebymaz-2). It is difficult enough to get inverse reinforcement learning to work even with the assumption of a fixed value function; incorporating a time-varying value function into the model would make the problem hopelessly underspecified. A fixed value function is just how inverse reinforcement learning works.
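To illustrate how this assumption does the work, here is a hypothetical toy version of the IRL search. The states, actions, and observed behavior below are all invented for the example, and candidate value functions are crudely represented as a single preferred action per state: the point is only that we search for one fixed explanation covering the whole observed sequence.

```python
import itertools

# Hypothetical toy IRL: find ONE fixed value function (represented as a
# preferred action per state) that explains ALL of the observed behavior.

states = ["hungry", "full"]
actions = ["eat", "play"]

# Every possible assignment of a preferred action to each state.
candidates = [dict(zip(states, prefs))
              for prefs in itertools.product(actions, repeat=len(states))]

observed = [("hungry", "eat"), ("full", "play"), ("hungry", "eat")]

def score(value_fn):
    # Fraction of observed steps consistent with this one fixed value function.
    return sum(value_fn[s] == a for s, a in observed) / len(observed)

best = max(candidates, key=score)
print(best, score(best))
# If the value function were allowed to change at every step, any behavior
# whatsoever could be "explained", and the inference would be hopelessly
# underspecified.
```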
Or, consider ordinary reinforcement learning, in which we search over a space of possible policies, rolling out each one over time to see how it behaves. We may consider policies that behave differently at different times, but, at least in classical reinforcement learning, we do not consider policies that change over time. For example, in setups where the policy is encoded as a neural network, we do not consider policies with network coefficients that change from one time step to the next.

Now, we are aware that agents are implemented as computer programs running on physical computers, and we are aware that these implementations involve memory registers whose values change and wires carrying charge. We are aware that the state of the CPU is changing from moment to moment. We are aware even that the memory cells whose value does not change are not unchanging at the level of the physical substrate; instead, the memory cell is constructed in a way that maintains a configuration within a certain range that is recognized as a single 0 or a single 1 by the higher-level computing machinery. So we are *aware* that a physical computer is in fact changing over time at every level, but we choose to use a frame in which there is a computer program that is running, and the source code for that program is not changing.

And this is a reasonable assumption in many cases. Computers in fact have excellent error correction mechanisms that allow them to keep running an unchanging computer program over a long period of time. My robot vacuum, for example, does in fact run the same computer program each time I turn it on. It will be a long time before I can expect a stray cosmic ray to flip a bit representing the core computer program on my robot vacuum, or for the memory cells to physically degrade to the point of information loss.

You might think that if we don't want this fixed policy assumption then we could just consider a variant of the agent model in which some actions and some observations modify the policy. It's true that we could modify the agent model in this way, but if you do this "all the way to the bottom", meaning that any aspect of the policy can in principle be modified, then you invalidate much of the machinery that has been built on top. The basic theorems underlying RL and IRL stop working. Even the more basic planning and control algorithms from earlier periods of AI stop working. And the basic conclusions from the rational actor model in economics stop applying, too. So the fixed policy assumption is deeply baked in, and it will be the primary lens through which this essay examines the agent model.

### Example: MDP and POMDP

The Markov Decision Process (MDP) and Partially Observable Markov Decision Process (POMDP) are two models from computer science that explicitly organize the world into an agent and an environment. The "decision process" here refers to the environment, which proceeds through a sequence of states, each one conditionally independent of all prior states given the immediately prior state (this is what "Markov" refers to). In the MDP the agent observes the full state of the world at each point in time, while in the POMDP the agent observes just some aspect of the world at each point in time. The MDP and POMDP do not explicitly state that the agent receiving observations from and sending actions back to the decision process must be executing an unchanging policy, but their formal solution strategies, such as reinforcement learning, generally do.
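To make this concrete, here is a hypothetical minimal sketch of solving a small MDP by value iteration. The states, dynamics, and rewards are invented for illustration. Notice what the solution machinery produces: a single stationary policy, that is, a time-invariant map from states to actions, which is exactly the fixed policy assumption at work.

```python
# Hypothetical toy MDP solved by value iteration. States, dynamics, and
# rewards are invented for illustration.

states = [0, 1, 2]
actions = ["stay", "move"]
gamma = 0.9  # discount factor

def transition(state, action):
    # Markov dynamics: the next state depends only on the current state
    # and action, not on any earlier history.
    return state if action == "stay" else (state + 1) % len(states)

def reward(state, action):
    return 1.0 if state == 2 and action == "stay" else 0.0

# Value iteration: repeatedly back up the best one-step value.
V = {s: 0.0 for s in states}
for _ in range(100):
    V = {s: max(reward(s, a) + gamma * V[transition(s, a)] for a in actions)
         for s in states}

# Extract the policy. It is computed once and then assumed fixed forever.
policy = {s: max(actions, key=lambda a: reward(s, a) + gamma * V[transition(s, a)])
          for s in states}
print(policy)  # e.g. {0: 'move', 1: 'move', 2: 'stay'}
```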
### The agent model does not imply optimization

Under the agent model we view entities that exert influence over the future as executing abstract algorithms that process observations and generate actions. Those abstract algorithms may or may not be optimization algorithms. For example, I might build a robot that moves forward until it hits a wall, and then stops. We need not view this robot as optimizing anything in order to view it as an agent.

Now, there is an orthogonal question of under what circumstances we might choose to [*view*](https://www.lesswrong.com/posts/znfkdCoHMANwqc2WE/the-ground-of-optimization-1) [an algorithm as an optimization algorithm](https://www.lesswrong.com/posts/znfkdCoHMANwqc2WE/the-ground-of-optimization-1). We might encounter a robot that "under the hood" is taking actions based on a state machine, but choose to [view it as acting in service of a goal due to the compactness of that representation](https://www.lesswrong.com/posts/hQHuXuRGZxxWXaPgg/the-blue-minimizing-robot). This is an important distinction but it is unrelated to whether we are using the agent frame or not. The agent frame merely posits an abstract algorithm as an appropriate model for explaining the behavior of some entity.

Dimensions of analysis
----------------------

I am going to examine the ways that the view of the world afforded by using the agent model differs from the true state of things. I am going to do that by examining scenarios involving various subjects that are using the agent frame and various objects being looked at through the agent frame. The dimensions I want to cover are:

* Scenarios in which an entity is looking at itself versus looking at another entity.
* Scenarios with humans and machines (and combinations thereof). For now I will consider each combination of subject and object being human and machine.
* Scenarios with single and multiple entities. For now I will collapse this with the first dimension and consider cases consisting of either one entity or two entities, where in the former case I assume that the entity views itself through the agent frame, and in the latter case that the entity views the other entity through the agent frame.

The scenarios I will consider are as follows:

![](https://storage.googleapis.com/doc-publisher-images/1e043f66223c118d.jpg)

For each of these scenarios I will consider various pitfalls. There are 12 pitfalls in total, and they are summarized in a table at the bottom.

Humans looking at themselves as agents
--------------------------------------

### Pitfall 1: Self-hatred

Sometimes a human perceives that an action they have taken has caused harm in the world. In some cases the perception is mistaken, and in reality their action was not the cause of the harm, while in other cases the perception is correct, and their action was the cause of the harm. But upon seeing this harm, a human viewing themselves through the frame of the agent model, and therefore making the fixed policy assumption with respect to themselves, may conclude that this harm is an artifact of an immutable internal decision algorithm. Since the human does not want to cause harm, but also holds the view of an immutable internal decision algorithm that, from within the frame of the agent model, has been observed causing harm at least once, their only option appears to be to adopt a combative relationship with this immutable internal decision algorithm and limit the harm that it is causing by resisting it.
This leads to an internal conflict: the human takes actions, then perceives these actions from within the agent frame as having been generated by an immutable internal decision algorithm, then concludes, on the basis of having perceived this immutable internal decision algorithm causing harm in the past, that the action is probably also harmful, and then takes further actions to resist and limit the consequences of this first action. Of course these further actions are subject to the same chain of reasoning so long as the human is looking at themselves from within the agent frame, so the human ends up taking yet further actions to oppose the actions that were taken to oppose the first action, and this cycle continues indefinitely.

**Perception from within the agent frame**: An unchanging internal decision algorithm is causing harm but cannot be modified, so must be fought.

**Reality**: The human could simply choose to act differently.

**Fundamental misperception due to the agent frame**: That there is "some place else" from which actions originate, separate from the one reasoning about the harm.

### Pitfall 2: Procrastination / forgetfulness

Sometimes a human sees that an action would be valuable to perform, but sees little benefit in performing that action sooner rather than later, perhaps up to some deadline, such as filing taxes before tax day. The human, viewing themselves through the frame of the agent model, believes that there will be no cost to delaying, since they perceive an unchanging internal decision algorithm that has been observed at least once identifying the action as valuable, and so is likely to do so in the future.

In fact there are multiple mistakes in this reasoning. First, humans are subject to change due to interaction with their environments, and this particular human may change in such a way that they forget or undervalue this action in the future. Second, humans are subject to change due to their own actions, and the architecture of human brains is such that actions practiced repeatedly become more likely to be performed again in the future, so by delaying action the human is in fact performing a subtle form of self-modification in the direction of delaying actions in general. In other situations this might be what the human intended to do, but in this example the human is, by assumption, overlooking this.

**Perception from within the agent frame**: An unchanging decision algorithm could equally well perform the action now or in the future, and there is no harm caused by delaying action.

**Reality**: Human decision algorithms are subject to change due both to interaction with the environment and to habit formation.

**Fundamental misperception due to the agent frame**: Actions taken by the human do not affect the human's policy.

Humans looking at other humans as agents
----------------------------------------

### Pitfall 3: Hatred of others

Sometimes a human perceives an action taken by another human as having caused harm. In some cases the human's perception is mistaken, and in reality the other human was not the cause of the harm, while at other times the perception is correct, and in reality the other human was the cause of the harm. But upon seeing this harm, a human viewing another human through the frame of the agent model may conclude that the cause of the harm can be traced back to an immutable decision algorithm within the other human.
Since the human does not want there to be harm in the world, but also holds the view of an immutable decision algorithm within the other human that, from within the frame of the agent model, has been observed causing harm at least once, their only option appears to be adopting a combative relationship with the other human. In particular, the human may not believe that the other human has the capacity to change this internal decision algorithm even if they wanted to, so may not seek to negotiate with this other human, concluding instead that their only option is to resist or limit the consequences of the other human's actions.

**Perception from within the agent frame**: An unchanging decision algorithm within another human is causing harm but cannot be modified, so must be fought.

**Reality**: The other human might have the capacity to self-modify and might choose to do so if negotiated with.

**Fundamental misperception due to the agent frame**: Negotiation is unlikely to work because the other human could not change their own internal decision algorithm even if they wanted to.

### Pitfall 4: Viewing oneself as a loser in Newcomb's problem

When some aspect of the environment has been determined by an examination of a human's decision-making strategies via a channel that is not the human's own actions, a human viewing the world through the frame of the agent model may miss the opportunity to make changes to their own decision-making due to the belief that some other human will necessarily view them as having a fixed internal decision algorithm. [Newcomb's problem](https://en.wikipedia.org/wiki/Newcomb%27s_paradox) formalizes this in the context of a philosophical thought experiment. Within Newcomb's problem, a human using a strict agent model may reason that they are, for better or worse, a consequentialist decision agent, that the hypothetical panel of experts in Newcomb's problem will have perceived this and therefore will have put the lesser of the two amounts in the boxes, and so their best course of action is to take both boxes.

**Perception from within the agent frame**: An unchanging internal decision algorithm will have been perceived and acted upon by some external entity.

**Reality**: There is in fact no unchanging internal decision algorithm, and a panel of experts as omniscient as the one hypothesized in Newcomb's problem will have correctly realized this.

**Fundamental misperception due to the agent frame**: First, that an unchanging internal decision algorithm exists, and second, that this will have been perceived and acted upon by an external entity.

Humans looking at machines as agents
------------------------------------

### Pitfall 5: Agency hand-off fallacy

A human building an AI adopts a frame in which the AI, once deployed, will be an agent. The human correctly reasons that the AI will exert influence over the future, but incorrectly adopts the view that the AI will necessarily consist of an unchanging internal decision algorithm. Due to this, the human does in fact build an AI with an unchanging internal decision algorithm, overlooking other possible designs.
This forces the human to [hand off influence over the future](https://www.lesswrong.com/posts/Z2rkdEAJ9MvYPBeYW/thoughts-on-iason-gabriel-s-artificial-intelligence-values) to the agent at the time of the agent's construction, which in turn forces the human to adopt a false dichotomy between solving a wide array of philosophical and technical problems before the first AI is built, or else deploying a powerful AI that is not certain to act in a manner that the human would approve of.

**Perception from within the agent frame**: Powerful AI systems will necessarily contain an unchanging internal decision algorithm.

**Reality**: There is a wider design space of autonomous machines that exert influence over the future.

**Fundamental misperception due to the agent frame**: That the design space for autonomous machines that exert influence over the future is narrower than it actually is. This creates a self-fulfilling prophecy in which the AIs actually constructed are in fact within this narrower regime of agents containing an unchanging internal decision algorithm.

### Pitfall 6: Forgetting to examine the scaffold

A human considers a robot they are building using the frame of the agent model. Due to this, they place most of their attention on formulating the decision algorithm, and place less attention on the sensors, actuators, and computing machinery that will implement the decision algorithm and connect it with the external world. As a result, the design of the decision algorithm is not informed by the practical failure modes of the sensors, actuators, and computing machinery, and the overall system is fragile.

**Perception from within the agent frame**: An unchanging internal decision algorithm that receives observations and outputs actions is the primary object of design and engineering efforts.

**Reality**: The sensors, actuators, and computing machinery may require as much subtlety in design and engineering efforts as the decision algorithm.

**Fundamental misperception due to the agent frame**: Over-emphasis on the decision algorithm during development.

### Pitfall 7: Putting ourselves "into the shoes" of the machine

A human considers an AI they are building through the frame of the agent model. They consider situations in which the AI may be copied and may not know how many times it has been copied, as per philosophical thought experiments such as the [Sleeping Beauty problem](https://en.wikipedia.org/wiki/Sleeping_Beauty_problem). Due to the view of the AI as an agent with an unchanging internal decision algorithm, the human occupies most of their attention with the question of what they (the human) would do given the information available to the AI under various hypothetical situations, missing the opportunity to simply choose a design for the AI that has the consequences desired by the human. Within the Sleeping Beauty problem, for example, it is difficult to decide what the correct probability to place on various events is when looking at the problem from the inside, but easy to pick an AI design that would act in service of any particular objective when looking at the problem from the outside.

**Perception from within the agent frame**: A human emphasizes an internal mode of problem solving over an external mode due to putting themselves "into the shoes" of a perceived unchanging decision algorithm within a machine.

**Reality**: External-mode problem solving is also feasible.

**Fundamental misperception due to the agent frame**: Over-emphasis of the internal mode of problem solving.
Machines looking at themselves as agents
----------------------------------------

### Pitfall 8: Confusion about fallibility

An AI programmed to model itself using an explicit agent model might notice that its actions do not always match those predicted by its self-model. In fact this is due to a discrepancy between the AI's model of itself, which predicts that its actions will be a function of its perceptions, and its real-world implementation, which involves physical sensors, actuators, and computing hardware that take time to process information and are subject to errors. Due to the inability of the agent model to capture the physicality of the AI's computing hardware, the AI might develop false explanations for the discrepancy between its prediction of its own actions and those observed. In particular it is likely to explain these discrepancies as caused by features of the environment, since the environment contains most of the free parameters within the agent model.

**Perception from within the agent frame**: An AI's actions have a purely functional relationship to its perceptions, and any discrepancy with respect to this assumption must be due to some feature of the environment.

**Reality**: All machines are physical entities and are at best approximated as functions from percepts to actions.

**Fundamental misperception due to the agent frame**: Any non-functional aspect of the AI's behavior must be a feature of the environment.

### Pitfall 9: Difficulty modifying own hardware

An AI programmed to model itself using an explicit agent model may have difficulty making changes and upgrades to its own hardware. As the AI entertains actions that might modify its own hardware, an AI relying on the agent model may not fully account for all of the consequences of its actions, since the AI will not expect its actions to affect its own core decision algorithm due to the fixed policy assumption. As mentioned previously, one might imagine certain "quick fixes" to the agent model that permit actions that directly change the agent's decision algorithm, but in fact this is more challenging than it may seem, since the fixed policy assumption is core to many of the basic search and learning strategies that underlie contemporary AI theory.

**Perception from within the agent frame**: Actions will not change the AI's own decision algorithm.

**Reality**: The AI's decision algorithm is represented within physical memory units and is executed on a physical computer, both of which can be affected by the AI's actions.

**Fundamental misperception due to the agent frame**: There is an unchanging internal decision algorithm within the AI that is not subject to change because it is not part of the environment.

Machines looking at humans as agents
------------------------------------

### Pitfall 10: Looking for fixed values

Consider an AI that is programmed to infer a value function that explains observed human behavior and then take actions in service of this inferred human value function. An AI programmed this way would assume that humans have values that are fixed over time, due to the view of an unchanging decision algorithm within humans. This may cause such an AI to incorrectly extrapolate current values to future values. This would fail, for example, when modelling children whose decision algorithms will evolve significantly as they grow, or when observing adults experiencing significant life changes.
Upon observing humans who are in fact changing over time, the AI may be forced into an explanation that posits an unchanging value function, in which case the AI may form an incorrect view of human values and take undesirable actions.

**Perception from within the agent frame**: Human behavior can be explained by an unchanging internal decision algorithm.

**Reality**: Humans change over time.

**Fundamental misperception due to the agent frame**: Human values are fixed.

### Pitfall 11: Incomprehensibility of corrigibility

A human engineer may wish to construct an AI that can be modified after it has been deployed, in case the human identifies mistakes in the AI's design. But an AI programmed to model itself as an agent will have difficulty understanding the intentions of a human trying to modify the AI, since from within the agent frame the AI's internal decision algorithm is not changeable. The human's behavior may appear bizarre or incomprehensible to the AI.

**Perception from within the agent frame**: Human actions that are in fact intended to modify the AI's internal decision algorithm appear incomprehensible, since the AI views its internal decision algorithm as immutable.

**Reality**: A human might want to modify the AI's internal decision algorithm.

**Fundamental misperception due to the agent frame**: The AI's internal decision algorithm is unchanging, so it cannot be the intent of any other entity to modify it.

Machines looking at other machines as agents
--------------------------------------------

### Pitfall 12: Bias towards conflict

An AI interacting with other AIs in pursuit of a goal will need to decide when to negotiate with entities that oppose it and when to fight with such entities. We expect there to be scenarios in which negotiation is the strategy we would wish for the AI to take and other scenarios in which fighting is the strategy we would wish for the AI to take, but an AI programmed to use the agent model to understand other AIs may be suboptimally biased towards fighting, for the following reason. The AI being perceived through the frame of the agent model may be seen as having a decision algorithm that gives it the capacity to choose its actions on the basis of negotiation. But some aspects of this other AI's behavior may be perceived as a fixed consequence of the unchanging decision algorithm perceived within this other AI. This means that an AI using the agent model to understand other AIs may choose conflict in cases where it perceives these fixed aspects of the other AI's behavior as being opposed to its own goals. In some cases it may be true that the other AI was incapable of overturning aspects of its programming. But in other cases the other AI may have in fact been capable of and willing to negotiate, and the decision to choose conflict over negotiation was due to a fundamental misperception arising from use of the agent model.

**Perception from within the agent frame**: Some aspects of another entity's behavior are attributable to a fixed internal decision algorithm and cannot be modified by the other entity even if it wanted to, so negotiations concerning these behaviors are futile.

**Reality**: Other entities may have the capacity to modify their behavior at every level.

**Fundamental misperception due to the agent frame**: Some aspects of other entities' behavior are fixed consequences of an unchanging internal decision algorithm and must be fought.
Summary of the pitfalls
-----------------------

![](https://storage.googleapis.com/doc-publisher-images/c31a134b0bad7dcc.jpg)

Connections to other research
-----------------------------

Scott Garrabrant and Abram Demski have [written about](https://intelligence.org/embedded-agency/) the novel challenges that arise when designing agents that are part of the world rather than outside the world in the [Embedded Agency](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh) sequence. Garrabrant and Demski note that any real-world agent we might build would necessarily be part of the world in which it is deployed, and that we have little understanding of how to think about agency under these conditions. They identify four ways that embedded agency differs from non-embedded agency: that embedded agents lack predefined input/output channels between self and world, that embedded agents cannot conceivably fit a complete model of the whole world into their minds because their minds are physically smaller than the world, that embedded agents must consider the consequences of actions that might modify the agent itself, and that embedded agents are constructed out of the same fundamental parts as the world. The present critique of the agent model was very much inspired by this work. Rather than adapting the agent model to the embedded domain, my sense is that we should be seeking a post-agent model with which to understand entities that exert influence over the future. I have not yet read Garrabrant's sequence on Cartesian frames, so I cannot comment on its connections to the present work, which I expect to be numerous.

Abram Demski has described a concept he calls [Partial Agency](https://www.lesswrong.com/posts/4hdHto3uHejhY2F3Q/partial-agency), in which an agent uses some but not all channels of influence to shift the world towards its objective. For example, an agent predicated upon causal decision theory does not attempt to exert influence over the world via the actions resulting from the predictions made by other agents about its decisions. This channel of influence is available to the agent, but its architecture is such that it does not consider making use of it. He compares this to "full agency", in which an agent does consider all affordances available to it. Both partial agency and full agency appear to be situated within the agent frame as I have described it in this essay, since they both view entities that exert influence over the future through the frame of an abstract algorithm processing observations and generating actions.

In [Deconfuse Yourself About Agency](https://www.lesswrong.com/posts/ZigRhB4pAGdr6beQh/deconfuse-yourself-about-agency), Vojta Kovaric attempts to make progress on the question of which entities in the world we should take to be agents, and then introduces three questions as directions for further research on agent fundamentals. Kovaric introduces the notion of A-morphization, in which we model some entity as a particular parametrization of a certain architecture A. He says that if the best model of some entity is an A-morphization, and if A is an agent-like architecture, then we may call the entity an agent. But this just begs the question of how we determine whether A is an agent-like architecture. On this, Kovaric simply notes that different people will see different architectures as being agent-like.
Of particular interest is the following question posed at the end of the article: is there a common fundamental physical structure or computation behind all agent-like behavior? Overall I see this piece as working primarily from within the agent frame.

Laurent Orseau, Simon McGill, and Shane Legg have published [Agents and Devices: A Relative Definition of Agency](https://arxiv.org/abs/1805.12387), in which they describe the construction of a classifier that assigns probabilities to whether an object is a device or an agent. A device is taken to be something that operates according to a mechanistic input-output mapping, and is modelled formally by the authors using the speed prior (a fast computable approximation to the Solomonoff prior). An agent is something we model as having beliefs and making decisions according to an objective function. The authors assume that some set of possible utility functions is given (in the experiments they are goals of reaching certain labelled points in a maze), then use inverse reinforcement learning with a switching prior to perform inference on which goal might have been sought at which time. Having done this, they can compare the hypothesis that a certain object is a device to the hypothesis that it is an agent. This work is of great value from the perspective of the present essay as a fully fleshed-out operationalization of what exactly the agent model entails.

The field of [embodied cognition](https://plato.stanford.edu/entries/embodied-cognition/) views human cognition as deeply dependent on the body. A related field, [embedded cognition](https://en.wikipedia.org/wiki/Embodied_embedded_cognition), views human cognition as deeply dependent on the natural and social environment in which an organism is immersed. This large field is highly abstract, draws on ideas from continental philosophy, and seems difficult to penetrate, yet due to its focus on cognitive processes that are embedded in the physical world it may contain insights of interest to the development of a post-agency understanding of intelligent systems. Of particular interest for a follow-up post is Rodney Brooks' work on the subsumption architecture and "[intelligence without representation](https://www.sciencedirect.com/science/article/abs/pii/000437029190053M)".

Conclusion
----------

The agent model is an exceptionally powerful model, and for this reason it is the primary model with which we have chosen to understand the entities on this planet that exert the greatest influence over the future. It is precisely because of the power of this model that we have come to rely upon it so heavily. But when we use one frame to the exclusion of all others, we may forget that we are using a frame at all, and begin to accept the confines of that frame as a feature of reality itself, not as a temporarily and voluntarily adopted way of seeing. I believe this has happened with the agent model. It seems to me that we are so acquainted with the agent model that we have lost track of the ways that it is shaping our view of reality. As we build advanced AI systems, we should carefully examine the pros and cons of the frames that we use, including the agent model, or else we may miss whole regions of the design space without noticing. In this essay I have attempted to lay out some of the pitfalls of using the agent model.

---
1. This is one of the most confusing things about conceptual work in the field of AI. This field is unique among all engineering disciplines in that the object of our engineering efforts has the potential to itself use frames of its own as it perceives the world. As Eliezer wrote about repeatedly in the sequences, it is critical to be extremely clear about what is a frame that we are using to think about building an AI, and what is a frame being used by an AI to think about taking action in the world. [↩︎](#fnref-npfwi9dhJH3Ebymaz-1)
2. At least not if the value function was permitted to change arbitrarily between each step. Perhaps IRL could be made to work with a changing value function given some constraints on its rate of change, but classical IRL does not handle this case. [↩︎](#fnref-npfwi9dhJH3Ebymaz-2)
---
[Intro to brain-like-AGI safety] 1. What's the problem & Why work on it now?

(Last revised: July 2024. See changelog at the bottom.)

1.1 Post summary / Table of contents

This is the first of a series of blog posts on the technical safety problem for hypothetical future brain-like Artificial General Intelligence (AGI) systems. So my immediate priority here is saying what the heck is "the technical safety problem for brain-like AGI" and what do those words even mean and why on earth should I care. Summary of this first post:

* In Section 1.2, I define the "AGI technical safety problem", put it in the context of other types of safety research (e.g. inventing passively-safe nuclear power plant designs), and relate it to the bigger picture of what it will take for AGI to realize its potential benefits to humanity.
* In Section 1.3, I define "brain-like AGI" as algorithms with big-picture similarity to key ingredients of human intelligence, presumably (though not necessarily) as a result of future people reverse-engineering those aspects of the human brain. What exactly that means will be clearer in future posts. I will also bring up the counterintuitive idea that "brain-like AGI" can (and probably will) have radically nonhuman motivations. I won't explain that here, but I'll finish that story by the end of Post #3.
* In Section 1.4, I define the term "AGI", as I'm using it in this series.
* In Section 1.5, I discuss the probability that people will eventually make brain-like AGIs, as opposed to some other kind of AGI (or just not invent AGI at all). The section includes seven popular opinions on this topic, from both neuroscientists and AI / machine learning experts, and my responses.
* In Section 1.6, I'll address AGI accidents, which is something we should expect if we don't solve the AGI technical safety problem. I'll argue that these kinds of accidents can be catastrophic indeed, including human extinction. This topic is a minefield of confusion and miscommunication, and I will frame my discussion around responses to eight common obj
---
The effect of effectiveness information on charitable giving

A new working paper by economists Dean Karlan and Daniel Wood, The Effect of Effectiveness: Donor Response to Aid Effectiveness in a Direct Mail Fundraising Experiment. The abstract:

> We test how donors respond to new information about a charity's effectiveness. Freedom from Hunger implemented a test of its direct marketing solicitations, varying letters by whether they include a discussion of their program's impact as measured by scientific research. The base script, used for both treatment and control, included a standard qualitative story about an individual beneficiary. Adding scientific impact information has no effect on whether someone donates, or how much, in the full sample. However, we find that amongst recent prior donors (those we posit more likely to open the mail and thus notice the treatment), large prior donors increase the likelihood of giving in response to information on aid effectiveness, whereas small prior donors decrease their giving. We motivate the analysis and experiment with a theoretical model that highlights two predictions. First, larger gift amounts, holding education and income constant, is a proxy for altruism giving (as it is associated with giving more to fewer charities) versus warm glow giving (giving less to more charities). Second, those motivated by altruism will respond positively to appeals based on evidence, whereas those motivated by warm glow may respond negatively to appeals based on evidence as it turns off the emotional trigger for giving, or highlights uncertainty in aid effectiveness.

In the experimental condition (for one of the two waves of mailings), the donors received a mailing with this information about the charity's effectiveness:

> In order to know that our programs work for people like Rita, we look for more than anecdotal evidence. That is why we have coordinated with independent researchers [at Yale University] to conduct scientifically rigorous impact studies of our programs. In Peru they found that
---
A second example of conditional orthogonality in finite factored sets

Yesterday, I wrote a post that gave an example of conditional non-orthogonality in finite factored sets. I encourage you to read that post first. However, I'm kind of dissatisfied with it because it doesn't show any interesting cases of conditional orthogonality (despite the title seeming to promise that). So I'd like to show you one today.

First, let's imagine that Alice is a person who has some height. Bob and Charlie both measure her height, and take note of the measurements. However, their measuring instruments have independent sources of error, such that neither gets exactly the right answer. In this world, Bob's measurement is not independent of Charlie's measurement, because they'll both be pretty close - the error isn't that big. However, once you know Alice's height, they will be independent, because given that knowledge, learning Bob's measurement doesn't tell you anything about Charlie's measurement. Below, we'll see how we can formalize that in the language of finite factored sets.

Our finite factored set will be the set of tuples $(a, \epsilon_b, \epsilon_c)$, and the basis factors will be $A$, which partitions the tuples by their value of $a$, $E_B$, which partitions them by their value of $\epsilon_b$, and $E_C$, which partitions them by their value of $\epsilon_c$. These represent Alice's height, and the error that Bob's and Charlie's machines respectively introduce. Note that you might have imagined we'd have the basic factors as Alice's height, Bob's measurement, and Charlie's measurement, but then these wouldn't be probabilistically or logically independent, and so would violate the assumptions that go into modelling things as finite factored sets. [footnote 1]

Next, we'll define the function $b(a, \epsilon_b, \epsilon_c) = a + \epsilon_b$, which gives the height that Bob measures, and the partition $B$ which groups tuples with the same value of $b$ together. Similarly, we'll define $c(a, \epsilon_b, \epsilon_c) = a + \epsilon_c$, which tells us the height that Charlie measures, and the partition $C$ that groups tuples together by their value of $c$. What's the h
---
What is the biggest crypto news of the past year?

Please give reasons for why you consider it big. Here are my top questions right now:

* What major policies are governments adopting, especially in the US?
* What the hell is Tesla up to?
  * Does anyone really believe that Tesla sold its Bitcoin in order to help the environment? If so, why did they buy it in the first place?
* Are there any worthwhile applications of smart contracts yet?
  * What are you using them for?
  * I don't see much value in NFTs yet, I don't feel much draw to defi, and the prediction markets are either small or nonfunctional.

More importantly, is there a good way for me to answer these questions on my own? I avoid Twitter and news sites because I think they are bad for my mind. What I would like is a way to check in on the state of crypto a few times per year. I care much less about price changes than longer-term things like government policies, clever new applications, scaling & usability issues, and so on.
---
Yudkowsky on AGI risk on the Bankless podcast

Eliezer gave a very frank overview of his take on AI two weeks ago on the cryptocurrency show Bankless. I've posted a transcript of the show and a follow-up Q&A below. Andrea_Miotti, remember, and vonk previously posted transcripts to LessWrong (1, 2 — see the comments there for discussion), but they contained many important errors, so I've corrected the transcripts below. (They also weren't available on the EA Forum.)

----------------------------------------

Intro

Eliezer Yudkowsky: [clip] I think that we are hearing the last winds start to blow, the fabric of reality start to fray. This thing alone cannot end the world, but I think that probably some of the vast quantities of money being blindly and helplessly piled into here are going to end up actually accomplishing something.

Ryan Sean Adams: Welcome to Bankless, where we explore the frontier of internet money and internet finance. This is how to get started, how to get better, how to front run the opportunity. This is Ryan Sean Adams. I'm here with David Hoffman, and we're here to help you become more bankless. Okay, guys, we wanted to do an episode on AI at Bankless, but I feel like David...

David: Got what we asked for.

Ryan: We accidentally waded into the deep end of the pool here. And I think before we get into this episode, it probably warrants a few comments. I'm going to say a few things I'd like to hear from you too. But one thing I want to tell the listener is, don't listen to this episode if you're not ready for an existential crisis. Okay? I'm kind of serious about this. I'm leaving this episode shaken. And I don't say that lightly. In fact, David, I think you and I will have some things to discuss in the debrief as far as how this impacted you. But this was an impactful one. It sort of hit me during the recording, and I didn't know fully how to react. I honestly am coming out of this episode wanting to refute some of the claims made in this episode by our guest, Eliezer Yudkowsky, who
---
[Link] Small-game fallacies: a Problem for Prediction Markets

Nick Szabo writes about the dangers of taking assumptions that are valid in small, self-contained games and applying them to larger, real-world "games," a practice he calls a small-game fallacy.

> Interactions between small games and large games infect most works of game theory, and much of microeconomics, often rendering such analyses useless or worse than useless as a guide for how the "players" will behave in real circumstances. These fallacies tend to be particularly egregious when "economic imperialists" try to apply the techniques of economics to domains beyond the traditional efficient-markets domain of economics, attempting to bring economic theory to bear to describe law, politics, security protocols, or a wide variety of other institutions that behave very differently from efficient markets. However as we shall see, small-game fallacies can sometimes arise even in the analysis of some very market-like institutions, such as "prediction markets."

This last point, which he expands on later in the post, will be of particular interest to some readers of LW. The idea is that while a prediction market does incentivize feeding accurate information into the system, the existence of the market also gives rise to parallel external incentives. As Szabo glibly puts it,

> A sufficiently large market predicting an individual's death is also, necessarily, an assassination market...

Futarchy, it seems, will have some kinks to work out.
---
Metatickle Intelligence Metrics and Friendly Utility Functions

Related to: Intelligence Metrics and Decision Theories

Previously I presented a formalism for dealing with the Duality and Ontology problems associated with attempts to define a formal metric of general intelligence. It also solves the environment distribution problem. This formalism ran into problems closely related to the problems of decision theory. I tried to solve these problems using a formalization of UDT suitable for this context. Here I'm going to pursue a different approach, which I believe to be analogous to the "metatickle" version of EDT. I will argue that, as opposed to decision theory, metatickling is a good approach to intelligence metrics. I will also present an analogous formalism for multi-agent systems. Finally, I will suggest an approach for constructing friendly utility functions using this formalism.

Review of Quasi-Solomonoff Distributions

In this section I will review the idea behind quasi-Solomonoff distributions, glossing over mathematical details. For more details consult the previous article.

Most attempts at constructing a formal general intelligence metric are based on Legg and Hutter and involve considering an agent A interacting with an environment V through actions that A applies to V and observations A makes on V (the latter being information flowing from V to A). The problem with this is that such an agent is indestructible, since no process in V can force a change in the inner workings of A. Thus an AI programmed in accord with this formalism will consider it an a priori truth that its mind cannot be tampered with in any way, an obviously false assumption.

In order to deal with this we can make A a part of V, as suggested by Orseau and Ring. This creates another problem, namely that it's unclear what prior we should use for V. Legg and Hutter suggest using the Solomonoff distribution, which makes sense since a perfectly rational agent is supposed to use the Solomonoff distribution as a prior. However, if A is a part of V, the Solo
---
Rethinking AI Safety Approach in the Era of Open-Source AI

Open-Source AI Undermines Traditional AI Safety Approach

In recent years, the mainstream approach to AI safety has been "AI alignment + access control." In simple terms, this means allowing a small number of regulated organizations to develop the most advanced AI systems, ensuring that these AIs' goals are aligned with human values, and then strictly controlling access to these systems to prevent malicious actors from modifying, stealing, or abusing them. Companies like OpenAI and Anthropic are prime examples of this approach.

However, this closed-source safety approach is now facing significant challenges. The open-sourcing of models like DeepSeek-R1 has disrupted the monopoly of closed-source advanced AI systems. Open-source models can be freely downloaded, used, and modified by anyone, including malicious actors. Even if an open-source model is initially aligned, it can easily be transformed into a harmful model. Research has shown that with just a few dozen fine-tuning samples, the safety constraints of a model can be removed, enabling it to execute arbitrary instructions. This means that in the future, we are likely to face increased risks of AI misuse, such as the use of open-source models for hacking, online fraud, or even the creation of biological weapons.

[Figure: DeepSeek-R1 has a 100% jailbreak success rate on the HarmBench dataset]

Open-source models also pose a greater risk of losing control. Developers with weak safety awareness may unintentionally introduce significant vulnerabilities when modifying open-source models. For example, they might inadvertently enable the model to develop self-awareness or autonomous goals, or allow the model to self-iterate without human supervision. Such behaviors could make AI systems even more uncontrollable.

The Unstoppable Momentum of AI Open-Source

While the open-sourcing of models introduces new safety risks, the trend toward open source is unstoppable. In the past years, the progress of open-source models is faster
---
A Research Agenda for Assessing the Economic Impacts of Code Generation Models

Sam Manning (OpenResearch), Pamela Mishkin (OpenAI; corresponding author, econ@openai.com), Gillian Hadfield (University of Toronto), Tyna Eloundou (OpenAI), and Emily Eisner (University of California, Berkeley). Sam Manning and Pamela Mishkin contributed equally to this work.

March 3, 2022

Executive Summary

OpenAI is developing a research program to assess the economic impacts of code generation models and is inviting collaboration with external researchers. Rapid advances in the capabilities of large language models (LLMs) trained on code have made it increasingly important to study their economic impacts on individuals, firms, and society. Codex – an LLM developed by OpenAI by fine-tuning GPT-3 on billions of lines of publicly available code from GitHub – has been shown to generate functionally correct code 28.8% of the time on a sample of evaluation problems (Chen et al. 2021). This may have important implications for the future of coding and the economics of the industries that depend on it.

In this document, we lay out a research agenda to assess the effects of Codex on economic factors of interest to policymakers, firms, and the public. We make a case for this research agenda by highlighting the potentially broad applicability of code generation models to software development, the potential for other LLMs to create significant social and economic impact as model capabilities advance, and the value of using Codex to generate evidence and establish methodologies that may be applicable to research on the economic impacts of future models. We propose that academic and policy research focus on studying code generation models and other LLMs so that evidence on their economic impacts can be used to inform decision-making in three key areas: deployment policy, AI system design, and public policy. To help guide this research, we outline six priority outcome areas within the realm of economic impacts that we intend to use Codex to study: Productivity, Employment, Skill Development, Inter-firm Competition, Consumer Prices, and Economic Inequality. For each area, we briefly discuss previous literature on the impacts of artificial intelligence on each of these outcomes, describe questions that we believe to be key inputs to the three decision-making areas mentioned above, and provide examples of research that could be conducted with Codex. To catalyze work that builds off of this initial research agenda, we are announcing a Call for Expressions of Interest from external researchers to collaborate with OpenAI researchers and customers to better measure the economic impacts of code generation models and other LLMs.

1 Introduction

OpenAI is building out a research program to assess the economic impacts of code generation models with the goal of developing tools, methods, and partnerships that can enable improved research on the economic impacts of powerful language models. As code generation models and other large language models (LLMs) improve, they have the potential to impact many aspects of society, including work, productivity, skill development, and other economic outcomes. The depth and scope of the effects of code-generating LLMs will depend on how widespread their use becomes, which in turn depends on factors such as their capabilities and limitations, ease of use, associated costs, and the regulatory and institutional environments in which they are deployed.
The capabilities of present and future code generation models may complement and/or substitute for the tasks completed by workers in coding-centric occupations (engineers, data analysts, software developers, etc.) by, for example:

•Impacting the costs associated with coding tasks
•Impacting the relative productivity of capital versus labor in the production process
•Shifting the allocation of tasks in the production process to capital vs. labor
•Impacting the demand for existing skills (coding-centric and not) and spurring demand for new skills

These potential impacts are complex. Therefore, the research community’s ability to generate decision-relevant evidence on any of the research questions outlined in this document will be greatly enhanced by developing a range of productive partnerships, and we firmly believe that AI developers need to support external researchers undertaking this work, rather than conduct this research exclusively in-house. We hope this document serves as a starting point for collecting input from researchers, AI developers, policymakers, workers, labor unions, and firms interested in understanding the impacts of code generation models – and LLMs broadly – on economic outcomes. In Section 4 and in Table 1 below we highlight six research focus areas and key questions where OpenAI is interested in better understanding the economic impacts of code generation models via Codex – an LLM developed by OpenAI that translates natural language to code (Chen et al. 2021).[1] Finally, we are issuing a Call for Expressions of Interest for external researchers to collaborate with OpenAI to better measure the economic impacts of code generation models, with the goal of building research methods and infrastructure that can be applied to other LLMs in the future. Similarly, we invite others deploying or using LLMs for code generation to support this work.

1.1 Call for Expressions of Interest

We are seeking feedback on this research agenda, as well as expressions of interest from individuals who are interested in partnering with OpenAI to study the economic impacts of Codex and to advise future research efforts on the economic impacts of novel LLMs. We welcome research proposals from all social science disciplines, including but not limited to economics, labor studies, sociology, and political science. We are also interested in engagement with private companies who have already integrated Codex. If you or your organization have a proposal for a research collaboration or would be interested in helping guide how OpenAI thinks about these issues, please see the link above for details on how to submit an expression of interest.

2 Motivations

2.1 Consider economic impacts as part of the AI Safety framework

A key motivation for the research agenda we propose in this paper is to ensure AI safety: even though the current capabilities of Codex do not threaten large-scale economic disruption or harm to human systems, future capabilities of code generation or other LLMs could. It is critical to engage in research about the economic impacts of model capabilities today in order to be positioned to assess the safety of developing and releasing more advanced systems in the future. Foundational work setting the technical AI safety research agenda by Amodei, Olah, and coauthors has focused on the problem of "accidents in machine learning systems," while strongly supporting further work on privacy, security, fairness, economics, and policy (Amodei et al. 2016).
The authors highlight the policy question "How do we predict and respond to the economic and social consequences of ML?", recognizing it as an important area, overlapping with other technical AI safety concerns, that warrants dedicated research. While far from the only such example, socioeconomic impacts are increasingly relevant as AI systems see increased adoption in and interaction with society (Weidinger et al. 2021).

[1] This document does not present a comprehensive list of all potential areas of economic impact that would benefit from further research. This research agenda is an initial attempt at priority-setting given the range of critical questions on the economic impact of code generation models, and we are eagerly seeking feedback on what those priorities should be.

Direct Impacts & Priority Subquestions

Productivity
Subquestions:
•What is the impact of Codex adoption on firm, team, and worker productivity?
•What are the firm, worker, and use-case characteristics that drive differential impacts on productivity?
•What are the mechanisms through which productivity impacts on firms, teams, and workers are realized?
Examples:
•Random assignment of model across workers, teams, and/or firms to assess impact on productivity-related outcomes
•Longitudinal study of the production process as Codex applications are adopted and developed over time
•Cataloging of products and projects built using Codex

Employment
Subquestions:
•What is the impact of Codex adoption on the demand for human coding labor?
•What is the impact of Codex adoption on the demand for human labor in non-coding roles?
•What human coding tasks are most likely to be substituted by Codex and how is that labor reallocated?
•What new tasks does Codex introduce into the production process and what skills are demanded to complete them?
•What is the impact of Codex adoption on job quality?
Examples:
•Development of better benchmark datasets that map job tasks to model capabilities
•Random assignment of model across workers, teams, and/or firms to assess impact on labor demand and job quality
•Longitudinal study of team structure and labor demand as Codex applications are adopted and developed over time
•Monitoring of job postings for tasks requiring proficiency with Codex or complementary skills

Skill Development
Subquestions:
•How does the introduction of Codex to coding education programs change the skills that learners develop?
•How does the adoption of Codex for use by advanced coders impact their coding innovation, creativity, and skill development?
•What non-coding skill development trends are affected most by the applications built using the Codex API?
•What implications does the use of Codex in education and training have for amplification of certain coding practices?
Examples:
•Qualitative data collection on the impact of Codex introduction to coding education programs on learning outcomes
•Random assignment of model across workers, teams, and/or firms to assess impact on coding and non-coding skill development

Indirect Impacts & Priority Subquestions

Consumer Prices
Subquestions:
•What is the impact of Codex adoption on the price of goods and services produced by the adopting entity?
•What mechanisms drive observed impacts on prices, and how might these impacts scale with model improvements?
Examples:
•Development of an empirical framework for assessing the impact of code generation models on consumer prices

Inter-firm Competition
Subquestions:
•What is the impact of Codex adoption on firm growth? How is this impact mediated by firm, industry and use-case characteristics?
•Under what circumstances might Codex adoption increase the risk of harmful monopolies?
Examples:
•Identification of the firm and use-case characteristics that are likely to correlate with accelerated growth due to Codex adoption
•Development of an empirical framework for assessing the impact of code generation models on inter-firm competition

Economic Inequality
Subquestions:
•How does Codex adoption correlate with indicators of economic opportunity at the firm level (industry type, firm size, location, etc.) and individual level (income, wealth, race, gender, skills, zip code, etc.)?
•How can alternate deployment strategies reduce the risk of harmfully exacerbating economic inequalities?
•How does Codex adoption change labor demand across the income and skill distribution?
Examples:
•Analysis of firm characteristics for firms that do and don’t adopt Codex
•Development of an empirical framework for assessing the impact of code generation models on income and wealth distributions
•Monitoring and analyzing the evolution of wages across firms that do and don’t adopt Codex (random assignment possible)

Table 1: Research focuses, key questions, and examples of research to collect evidence on economic impacts.

Systematic explorations of what might be considered “socio-economic safety” of models – the potential impacts of powerful AI systems on people and society as they interact with existing economic, social, and political institutions – may yield insights that are valuable to policymakers. Absent policy intervention, LLMs may result in socio-economic safety risks by causing sudden negative impacts on the demand for human labor, increasing the frequency of labor market transitions, and exacerbating inequality, for example. Job displacement is associated with a range of negative impacts, including subsequent unemployment, long-term earnings losses, reduced psychological and physical well-being, family disruption, and lower levels of children’s educational attainment and well-being (Brand 2015, Young 2012, Schmillen 2020). Beyond affecting individual outcomes, economic impacts have the potential to shape the societal risk landscape in important ways. For example, at a societal level, sharp changes in the demand for human labor have been linked to higher levels of social unrest (Caprettini and Voth 2020). Depending on the fungibility of skills for those who experience a reduction in labor market opportunities as a result of AI system deployment, increasingly capable models risk exacerbating wage inequality, which in turn can amplify societal cleavages (Acemoglu and Restrepo 2021, Van de Werfhorst and Salverda 2012). In addition, differential access to required inputs to powerful LLMs – such as hardware, internet access, and digital literacy – will also perpetuate economic inequities (Weidinger et al. 2021). We must take these risks seriously and consider the potential implications for socio-economic safety when crafting deployment strategies and complementary public policy proposals aimed at promoting well-being.

2.2 Incorporate economic impacts as inputs to key decisions

A central motivation for measuring economic impacts is to help researchers, firms, policymakers and the public better understand the populations most likely to benefit and those that could be negatively impacted from the adoption of AI systems that leverage LLMs. By better understanding the ways in which code generation models like Codex can impact economic outcomes for various actors in society, we can help inform decision-making in the three areas listed below.
•Deployment policy: Projected economic impacts are one of many criteria AI developers can use to inform if, when, and how a new system should be deployed to users and potential beneficiaries. By developing a deeper empirical understanding of the economic impacts of code generation models, research in this area can drive improved deployment policy that considers economic well-being as a key outcome.
•AI system design: Building our collective understanding of how a model like Codex can have tangible impacts on outcomes like productivity, employment, and skill development can illuminate ways in which future models can be designed for greater positive economic impact and fewer harms.
•Public policy: Research on the outcomes described in this agenda can identify potential economic impacts for which public policy intervention may be a helpful tool to improve economic outcomes and mitigate inequities that could be the product of the deployment of increasingly capable AI systems. A core goal of this stream of research is to generate improved data and produce novel evidence that can inform the policymaking process.

2.3 Build a test case for future research on the economic impacts of language models

The research that will be immediately shaped by this agenda will focus on the economic impacts of Codex, but we expect this research agenda to serve as a starting point for economic impacts research that can be applied more generally for future AI systems. There have been rapid advances in language model capabilities over the past several years (Brown et al. 2020, Dhariwal et al. 2020, Rae et al. 2022, Smith et al. 2022, Radford et al. 2021, Sun et al. 2021) and we recognize that as this progress continues, there will be a heightened need to carefully understand the evolution of economic impacts and translate this research into forecasting capabilities for new models. By articulating and executing on this research agenda via Codex, we aim to identify gaps in our approach, build research partnerships, solicit feedback, collect data on economic outcomes, and establish learning priorities that improve our collective ability to conduct policy-relevant economic impacts research on increasingly powerful language models in the future. The success of this agenda rests on the collaboration of the AI research community, policymakers, economists, and workers and we welcome your input.

2.4 Ensure that the economic impacts of progress towards AGI are broadly beneficial to humanity

OpenAI’s mission is to ensure that artificial general intelligence (AGI) – defined in OpenAI’s charter as “highly autonomous systems that outperform humans at most economically valuable work” – benefits all of humanity (OpenAI 2018). An important tenet of OpenAI’s deployment philosophy and policy is understanding and mitigating the safety risks of powerful AI models before deployment. If successful, highly capable autonomous systems are not only expected to transform the nature and quality of many jobs, but also perhaps engender structural economic changes, with impacts on inequality and employment. Previous major technological shifts such as the industrial revolution had positive long-run effects on many facets of economic life, yet they also caused economic hardship for segments of society that were affected by negative labor market shocks (Frey 2019).
Therefore, it is critical that we generate evidence on the nature and distribution of impacts of new AI systems to ensure that their development and deployment can promote broad benefit to humanity in the short, medium, and long term.

3 What is Codex?

The economic impacts we will focus on in this research agenda are relevant to code generation models broadly. However, we plan to leverage OpenAI’s Codex model to execute on this research agenda in the near-term. Codex is an example of an LLM – an artificial intelligence model trained to predict text to follow a given string of input text. For example, if an LLM like OpenAI’s GPT-3 is given the prompt "I like to eat pizza because", it might generate the text "it is delicious." Codex is a fine-tuned version of OpenAI’s GPT-3, meaning that it inherits GPT-3’s language capacity and is given additional training on a wide range of programming languages (Brown et al. 2020, Chen et al. 2021). Its capabilities in natural language give it a remarkable ability to generalize to a wide range of tasks associated with coding, including code generation, code completion, code repair, code translation and code question answering. These capabilities have made it useful for a range of practical tasks, including generating code from natural language descriptions, writing documentation or unit tests for code snippets, completing partially written code, writing explanations for code snippets and fixing bugs in code. The model also has important limitations, namely that it often produces insecure code, can produce code that is not aligned with what the user intended, and is susceptible to reproducing or amplifying biases in the training data (Chen et al. 2021).

[Figure: four panels (a–d). One may want to implement a function in code that finds the nth number in the Fibonacci sequence. To write such a function, one might start with a prompt: some text that Codex uses as input for its generation. Panels a and b are prompts passed to Codex, containing the function name and expected arguments; Codex completed a into the snippet in c and completed b into the snippet in d.]

Codex can be accessed via an API, which users can access directly or via other products built using the API. A prominent example of a Codex-based application is GitHub Copilot – a tool developed by GitHub and OpenAI to autocomplete code and generate code based on natural language comments. In addition to Codex’s built-in capabilities, Copilot is ever-present in compatible programming environments, suggesting code completions throughout a session, and it has the ability to propose up to 10 suggested code completions if requested. As Codex’s capabilities evolve, and as more developers build on top of the API, it is likely that the available applications will also evolve. While these applications will be designed and released by external parties, OpenAI will likely exert some control over the capabilities of the underlying Codex model. Therefore, the economic impacts of Codex depend on the model’s inherent capabilities, and how widely used its downstream applications become. Understanding the core aspects of Codex adoption is essential to identifying the mechanisms through which Codex could have observable economic impacts, particularly as OpenAI controls the levers of who is given access and for what use cases.
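To make the prompt-and-completion workflow described above concrete, here is a minimal sketch of calling a Codex-style model through the API; the engine name, sampling parameters, and the completion shown in comments are illustrative assumptions rather than guaranteed behavior.

```python
import openai  # OpenAI's Python client; assumes an API key is already configured

# A prompt in the style of panel (a): a function signature plus a docstring.
prompt = '''def fibonacci(n):
    """Return the nth number in the Fibonacci sequence."""
'''

# Hypothetical request; the engine name and parameters are illustrative.
response = openai.Completion.create(
    engine="code-davinci-002",
    prompt=prompt,
    max_tokens=64,
    temperature=0,
)
print(prompt + response.choices[0].text)

# One plausible completion (panel (c)-style output; actual output may differ):
#     if n <= 1:
#         return n
#     return fibonacci(n - 1) + fibonacci(n - 2)
```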
Studying the mechanisms of potential economic impacts is likewise critical to ensuring that research at OpenAI and in the broader community prioritizes the most pressing questions, identifies blindspots where potential economic harms might exist, and makes evidence-based assumptions about how economic impacts may change as model capabilities evolve.

4 Research Agenda: Focus Areas

This section outlines several preliminary focus areas for our research agenda on the economic impacts of code generation models. We divide these focus areas into two categories:

1. Direct impacts, which will include productivity, employment, and skill development, and
2. Indirect impacts, which will include inter-firm competition, consumer prices, and economic inequality.

The distinction between direct and indirect impacts is not meant to understate the importance of the indirect impacts as drivers of economic well-being. The categorization is useful to highlight the fact that research on direct impacts will often be a necessary input for precise research on indirect impacts. For example, to assess the impacts of code generation models on economic inequality, it is critical to better understand the distribution of impacts on employment and wages. Similarly, in order to enhance our understanding of how these models impact consumer prices, it is helpful to measure whether or not they introduce any changes in productivity within the production process for goods and services. While this section identifies potential economic impacts of code generation models beyond just Codex, we plan to use Codex to generate evidence on the magnitude and direction of impacts. As such, we speak below about the potential impacts that Codex specifically may have on individuals, firms, and society. The impacts of LLMs such as Codex on economic outcomes will vary widely depending on a number of underlying factors (Frank et al. 2019, Klinova and Korinek 2021, Trammell and Korinek 2021, Weidinger et al. 2021). Understanding the differential impact of code generation models – whether mediated by use-case, geography, labor market, firm, or individual characteristics – will be a priority for research across all of the focus areas described below.

4.1 Direct Impacts

4.1.1 Productivity

Background

Neoclassical economic theory predicts that at the aggregate level, technological progress increases overall productivity (Romer 1990, Solow 1956). However, recent decades have not seen as strong productivity growth as might have been expected given rapid advancement in technology (Gordon 2018, Brynjolfsson, Rock, and Syverson 2017). In order to project the oncoming productivity impacts of AI, Brynjolfsson, Benzell, and Rock warn against relying on previous trends and instead suggest a need to "... study and understand the specific technologies that actually exist and make an assessment of their potential" (Brynjolfsson, Benzell, and Rock 2020). The roll-out of Codex presents an opportunity to study the micro-level impact of code-generating AI on individual-level productivity, a subject that will be key to understanding the current relationship between technological progress and economic growth. Damioli and coauthors take a step in this direction by examining data from 5,257 firms worldwide that filed one or more patents related to AI between 2000 and 2016 (Damioli, Van Roy, and Vertesy 2021). The authors find that AI patent applications have a positive effect on within-firm labor productivity.
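As a concrete illustration of the random-assignment designs proposed in Table 1, the following sketch simulates a randomized Codex rollout and estimates the treatment effect with a simple difference in means. This is an illustrative sketch only; the data, outcome definition, and effect size are synthetic rather than drawn from any real study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical randomized rollout: 500 workers are randomly granted Codex
# access (treated = 1) or waitlisted (treated = 0), and we record a
# productivity-related outcome such as weekly tasks completed.
treated = rng.integers(0, 2, size=500)
outcome = 10 + 1.5 * treated + rng.normal(0, 3, size=500)  # synthetic +1.5 effect

# Under random assignment, the difference in group means is an unbiased
# estimate of the average treatment effect of access on the outcome.
effect = outcome[treated == 1].mean() - outcome[treated == 0].mean()
t_stat, p_value = stats.ttest_ind(outcome[treated == 1], outcome[treated == 0])
print(f"estimated effect: {effect:.2f} (p = {p_value:.3f})")
```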
The Damioli study is among the first to estimate a causal relationship between new AI technologies and the productivity of the firms that develop those technologies. Indeed, literature on the causal impact of AI on individual firms is scarce, largely due to a lack of firm-level data. Multiple recent papers make an explicit call for more firm-level data in order to build a clearer understanding of the impact of AI on a range of economic outcomes, and how those impacts are mediated by firm characteristics (Seamans and Raj 2018, Frank et al. 2019). Through OpenAI’s partnerships with firms that have adopted Codex, we intend to build on previous research that has used novel data collection approaches to measure the impact of code generation tools on productivity (Xu, Vasilescu, and Neubig 2021) and respond directly to this call for further firm-level data by examining the impact of Codex on both worker and firm-level measures of productivity.

How Codex May Impact Productivity

Codex has the potential to increase the productivity of individual workers in coding-centric roles. The adoption of Codex could reduce the amount of time needed to look up syntax, reference old code, add documentation, write basic programs or switch between tasks and projects. Individuals who use Codex models or applications could also realize productivity effects via faster code, higher code quality, or improved documentation. Through the applications built with Codex, productivity could be enhanced not solely for coding tasks but for many tasks related to design, engineering, and data visualization. We are interested in understanding the distribution of productivity impacts on workers across the spectrum of tasks, skills and roles. This includes workers in coding-centric roles as well as workers in non-coding positions who may be affected by increased automation or adoption of productivity-enhancing tools built using Codex.

Broad Research Questions

•What is the impact of Codex adoption on firm, team, and worker productivity?
•What are the firm, worker, and use-case characteristics that drive differential impacts on productivity?
•What are the mechanisms through which productivity impacts firms, teams, and individual workers?

4.1.2 Employment

Background

A growing literature in economics has renewed the recent focus of researchers on the potential impacts of technological advancement on employment (Acemoglu and Restrepo 2018, Autor 2015, Brynjolfsson and McAfee 2014, Mokyr, Vickers, and Ziebarth 2015, Tolan et al. 2021). Frey and Osborne estimate that 47% of total US employment is susceptible to automation (Frey and Osborne 2017). Aghion and coauthors highlight that the aggregate effects of AI on employment will be heavily mediated by competition, labor, and education policy (Aghion, Antonin, and Bunel 2020). Expert forecasts vary in their predictions, but overall suggest a considerable chance that AI will surpass human capabilities at most tasks within several decades.[2]

How Codex May Impact Employment

The adoption of Codex and other code-generating AI could have a potentially large impact on employment in the technology and information sectors. As Codex’s capabilities continue to expand, Codex may eventually serve as a substitute for a larger share of coding tasks currently completed by human labor.
Alternatively, Codex may augment human labor such that it is adopted as a net complement to labor and increases the demand for workers who perform tasks such as detailed code review, intensive quality assurance, or the application of sales and logistics expertise. Additionally, Codex could spark a need for new skills, changing team composition and shifting demand towards new tasks in which labor has a comparative advantage, a phenomenon researchers have called the “reinstatement effect” (Acemoglu and Restrepo 2019). The effects of code generation models on the completion of micro-work tasks outsourced by firms to gig-economy workers is another potential avenue of impact on worker opportunity and well-being. With respect to Codex, we are interested in empirically assessing how these dynamics will unfold, particularly as the model progresses in its capabilities. Understanding the balance of displacement versus reinstatement of tasks and jobs across different industries, firms, and use-cases is an essential input to forecasting future direct labor market impacts as the capabilities of Codex and other code-generating models evolve. Of particular interest is whether we can leverage worker and firm-level data to identify trends in the potential demand shifts for various types of skills and how fungible those skills are in the labor market. If we expect Codex to drive down demand for entry-level coders (or other roles with rote and repetitive coding tasks) but drive up demand for senior engineers and managers, for example, then we will want to have an informed estimate of the impacts that may have on wage and mobility outcomes to inform deployment and public policy decisions. We hope that foundational research on the employment impacts of Codex can enable increasingly policy-relevant research to be done to project longer-term impacts of future code-generating AI models.

In addition to impacts on total employment, Codex may also impact job quality and the nature of work itself. Broadly, advances in AI have the potential to reduce occupational safety risks for certain jobs, create new opportunities for aging workers or those with disabilities, and substitute for overly repetitive and mundane tasks (EU-OSHA 2021). However, increased automation can drive social isolation at work, increased specialization, performance pressure, reduced worker autonomy and overbearing worker surveillance, all of which may reduce well-being on the job (Kaplan and Schulhofer-Wohl 2018, Partnership on AI 2020, Weidinger et al. 2021). Measuring the effects of Codex on job quality is a key input to understanding the broader impacts of Codex on worker well-being.

[2] Expert forecasts collected by Grace and coauthors, for example, give a 50% chance that AI systems will outperform humans at all tasks by 2063, and a 10% probability that those capabilities will exist by 2027 (Grace et al. 2018). More recent forecasts collected by Gruetzemacher and coauthors suggest there is a 50% chance that AI systems would be capable of automating 90% of human tasks by 2045 (Gruetzemacher, Paradice, and Lee 2020).

Potential Research Questions

•What is the impact of Codex adoption on the demand for human coding labor?
•What is the impact of Codex adoption on the demand for human labor in non-coding roles?
•What human coding tasks are most likely to be substituted by Codex and how is that labor reallocated?
•What new tasks does Codex introduce into the production process and what skills are demanded to complete them?
•What is the impact of Codex adoption on job quality?

4.1.3 Skill Development

Background

A large body of literature suggests that complementarities between technological advances and high-skilled labor can drive increasing returns to skill development (Acemoglu and Autor 2011, Bound and Johnson 1992, Goos 2018, Katz and Murphy 1992). Predictable pathways towards a labor reinstatement effect from Codex include increased demand for skills such as prompt engineering, Codex-specific debugging, and specialized quality assurance of AI-generated outputs. Given the likelihood that Codex could generate demand for new skills in the labor force, we would like to examine the ways that Codex can also drive the development of new skills when incorporated into training and education programs. By examining this question empirically with Codex, we intend to contribute to a body of literature that has investigated the impact of technological development on skill development. Several descriptive case studies summarize the experiences of students or firms that integrate low-code software tools into work and learning environments (Beranic, Rek, and Hericko 2020, Corral, Fronza, and Pahl 2021). However, we are not aware of any empirical work estimating the impact of these tools on skill development or retention.

How Codex May Impact Skill Development

The ability for Codex to make coding suggestions could either enhance a user’s learning process or create inattentive reliance on Codex that may stifle creativity and iterative learning. It is plausible that Codex suggestions disincentivize coders from learning or retaining new knowledge when they feel they can rely on Codex. We are particularly interested in learning whether or not this is the case at the frontier of human coding innovation and skill development. Estimating the impact of Codex on coding skill development can help us understand the impact on human coding innovation – an important driver of technological progress and an essential data input for increasingly powerful code generation tools. Furthermore, evaluating the impacts of Codex on skill development for coders and non-coders alike can influence decisions about future education policy and the design of training programs that match the needs of the economy.

Potential Research Questions

•How does the introduction of Codex to coding education programs change the skills that learners develop?
•How does the adoption of Codex for use by advanced coders impact their coding innovation, creativity, and skill development?
•What non-coding skill development trends are impacted most by the applications built using the Codex API?
•What implications might the use of Codex in education and training have for amplification of certain coding practices?

4.2 Indirect Impacts

The outcomes included in this section are listed separately from those above purely because we expect the outputs from research on the “direct” impacts above to be key inputs into understanding the impact of Codex on these “indirect” impacts. The distinction between direct and indirect impacts does not reflect a difference in the relative importance of the outcomes in either group within this research agenda.

4.2.1 Consumer Prices

Background

Technological progress has made the production of countless goods and services cheaper over time (Roser 2016).
Researchers have speculated that as the general capabilities of AI advance, the costs of labor to produce many goods and services could fall dramatically, driving a reduction in the market price for consumer goods and services (Stone et al. 2016). Such an impact would rely on AI systems introducing productivity and efficiency gains into the production process, including by substituting human labor with automated systems that run at lower marginal costs.

How Codex May Affect Consumer Prices

Codex provides a tangible opportunity to better understand how the introduction of a specific, potentially powerful AI system can impact the costs of production, and how that impact is passed on to consumers via prices. By augmenting any production process that in part relies on code generation, Codex could have a downstream impact on the prices of goods and services. Through partnerships with firms that have adopted Codex, we can learn about the impact of Codex on factors of production, and begin to build an understanding of how those impacts are passed on to consumers, if at all. Given the growing importance of coding and software as an input to the production of goods and services, understanding this impact for one code generation model could foster better understanding of the potential impacts of increasingly capable code generation models in the future.

Potential Research Questions

•What is the impact of Codex adoption on the price of goods and services produced by the adopting entity?
•What mechanisms drive observed impacts on prices, and how might these impacts scale with model improvements?

4.2.2 Inter-firm Competition

Background

AI-adopting firms with a better ability to collect and use data – specifically data that is inaccessible to their competitors – may drive “unfair competition” (Acemoglu 2021a). As a result, particularly well-positioned firms could capture excessive consumer surplus and relax price competition in the market (Acemoglu 2021a). Investments in AI technology have been shown to be correlated with increased firm growth, particularly among already large firms relative to others in their industry (Babina et al. 2021). Better understanding the potential for Codex to drive increased industry concentration is a critical input to improved deployment strategy and public policy design.

How Codex May Impact Inter-firm Competition

The effective adoption of Codex could spark productivity and efficiency gains, potentially driving faster growth at the firm level. We’re interested in understanding the characteristics of a firm that make it more likely to realize the economic impacts from Codex. Are there existing monopolies within industries that Codex would further entrench? What impact would the adoption of Codex have on competition and what role should those impacts play in deployment policy? A deeper understanding of the impacts of modern AI-system adoption on competition is urgently needed. However, without a sample of several hundred firms, many confounding factors would limit our ability to causally identify the impact of Codex on firm-level competition dynamics. As such, our priority in the short term is to enhance our understanding of the mechanisms through which Codex might accelerate firm-level growth, focusing empirical research on the “direct” impacts described previously in this document that might affect market dynamics.
We encourage expressions of interest from scholars interested in guiding our approach to better understanding impacts on competition dynamics and how Codex might impact the underlying drivers of shifts in market power.

Potential Research Questions

•What is the impact of Codex adoption on firm growth? How is this impact mediated by firm and industry characteristics?
•Under what circumstances might Codex adoption increase the risk of harmful monopolies?

4.2.3 Economic Inequality

Background

In the US, the average 2021 annual income among individuals in the top 1% of earners ($1.6m) was approximately 84x higher than the average income of individuals in the bottom 50% of earners ($19.1k) (Blanchet, Saez, and Zucman 2022). The divergence of both income and wealth in the US since the 1980s has been attributed in part to the economic impacts of technological change (Jaumotte, Lall, and Papageorgiou 2013, Acemoglu 2002, Rotman 2014). Numerous studies have demonstrated that middle-wage jobs have been increasingly displaced through technological innovation in recent decades. Highly routine jobs have been particularly susceptible to displacement, while those requiring abstract or manual tasks (professional, managerial, and technical occupations at the higher end of the wage spectrum as well as service and labor jobs at the other) have proven less susceptible (Autor 2015, Autor, Levy, and Murnane 2003, Autor and Dorn 2013, Goos and Manning 2007). This phenomenon has been termed “job polarization” and has been attributed to skill-biased and routine-biased technological change (Berman, Bound, and Machin 1998, Goos and Manning 2007, Goos, Manning, and Salomons 2014). A core driver of the distributive economic impacts of LLMs and other AI systems is whether they are primarily used to augment and complement human labor or replace it (Brynjolfsson 2022, Acemoglu and Restrepo 2021).

How Codex May Affect Economic Inequality

Codex presents an example of how the scope of “routine” automatable tasks can change over time (Lu 2015). This shift may be gradual and uneven, particularly across different labor markets, with some workers and firms adopting new technologies more readily than others. This may lead to a widening of existing disparities in skill, training, or digital literacy, or to greater inequality in the distribution of economic benefits from technology. The adoption of new technologies and automation methods is not inevitable. Different firms and workers may have different preferences and costs for adopting new technology. In addition, some workers may be unable to adopt new technologies due to the high cost of complementary technologies, the high cost of retraining, or insufficient digital literacy. The adoption of Codex therefore may correlate with – and exacerbate – existing inequities in technology access, digital literacy, and economic opportunity. There is a risk that the economic benefits of code generation models may be shared unequally, with much of the gains flowing to the owners of capital, such as investors and shareholders. By partnering with external academics and Codex customers, we aim to foster research that helps assess the impact of Codex on the distribution of income, skills, wealth, and economic mobility. The outcomes of this research will be key inputs to policy design aimed at mitigating any distributional impacts of new AI systems that may amplify harmful inequities.
Potential Research Questions

•How does Codex adoption correlate with other indicators of economic opportunity and mobility at the firm level (industry type, firm size, location, etc.) and individual level (income, wealth, race, zip code, etc.)?
•How can alternate model deployment strategies reduce the risk of harmfully exacerbating economic inequalities?
•How does Codex adoption change labor demand across the income and skill distribution?

5 Prioritization

We listed numerous avenues for research above and we encourage collaborations to pursue them all. When considering which projects to initiate, we will prioritize research that has the following characteristics:

•Helps build sustained partnerships for data sharing and research collaboration that can improve learning about the economic impacts of LLMs over time.
•Has the potential to inform deployment decisions for code generation models or could directly influence public policy decisions meant to enhance the economic benefits of these models and minimize any negative impacts.
•Helps segment aspects of code generation models based on their likely economic impact, both positive and negative, in order to inform future model design decisions.
•Helps OpenAI, other AI developers and external research partners estimate the potential future economic impacts of improved code generation models.
•Is unlikely to happen without OpenAI support.
•Is most likely to succeed if led by researchers who are external to OpenAI.

6 Conclusion

This research agenda is just one of several recent contributions meant to inform the direction of future work to ensure that the economic impacts of AI are as universally positive as possible (Acemoglu 2021a, Acemoglu 2021b, Partnership on AI 2021, Siddarth et al. 2021, Weidinger et al. 2021, Autor, Mindell, and Reynolds 2022). We are excited by progress in the fields of AI ethics, safety, and alignment research and recognize that as the capabilities of AI systems advance, so too will the potential impacts of key decisions related to AI system design, deployment, and public policy. It is our hope that this research agenda will not only inspire deeper conversation about the economic impacts of increasingly capable LLMs but also – paired with the Call for Expressions of Interest – catalyze concrete action to measure economic impacts and inform decision-making in these areas.

Call for Expressions of Interest

If you are a researcher interested in partnering with OpenAI researchers and customers to study the economic impacts of Codex, please see the link above to read more and for details on how to submit an expression of interest.

Acknowledgements

Thanks to Steven Adler, Lama Ahmad, Stephanie Bell, Miles Brundage, Katya Klinova, Gretchen Krueger, Jade Leung, Anna Makanju, Katie Mayer, Richard Ngo, Cullen O’Keefe, Girish Sastry, Sarah Shoker, and Natalie Staudacher for feedback on drafts of this document. Thanks to Michelle Alexopoulos, Sarah Bana, Alex Bartik, Erik Brynjolfsson, Tim de Stefano, Avi Goldfarb, Marlène Koffi, Mina Lee, Zanele Munyikwa, Mark Muro, Frank Nagle, Maria del Rio-Chanona, Daniel Rock, Anna Salomons, and Ben Weidmann for helpful discussions on potential avenues for research on the economic impacts of code generation models.

References

Acemoglu, Daron (2002). “Technical Change, Inequality, and the Labor Market”. In: Journal of Economic Literature 40.1, pp. 7–72. ISSN: 0022-0515.
Acemoglu, Daron (Sept. 2021a). Harms of AI. Tech. rep. w29247. Cambridge, MA: National Bureau of Economic Research. DOI: 10.3386/w29247.
— ed. (2021b). Redesigning AI: Work, Democracy, and Justice in the Age of Automation. Boston Review/Forum 18 (46.2). Cambridge, MA: Boston Review. ISBN: 978-1-946511-62-1.
Acemoglu, Daron and David Autor (2011). “Skills, Tasks and Technologies: Implications for Employment and Earnings”. In: Handbook of Labor Economics. Vol. 4. Elsevier, pp. 1043–1171. ISBN: 978-0-444-53452-1. DOI: 10.1016/S0169-7218(11)02410-5.
Acemoglu, Daron and Pascual Restrepo (June 2018). “The Race between Man and Machine: Implications of Technology for Growth, Factor Shares, and Employment”. In: American Economic Review 108.6, pp. 1488–1542. ISSN: 0002-8282. DOI: 10.1257/aer.20160696.
— (May 2019). “Automation and New Tasks: How Technology Displaces and Reinstates Labor”. In: Journal of Economic Perspectives 33.2, pp. 3–30. ISSN: 0895-3309. DOI: 10.1257/jep.33.2.3.
— (June 2021). Tasks, Automation, and the Rise in US Wage Inequality. Tech. rep. w28920. Cambridge, MA: National Bureau of Economic Research. DOI: 10.3386/w28920.
Aghion, Philippe, Céline Antonin, and Simon Bunel (Jan. 2020). “Artificial Intelligence, Growth and Employment: The Role of Policy”. In: Economie et Statistique / Economics and Statistics 510-511-512, pp. 149–164. ISSN: 03361454. DOI: 10.24187/ecostat.2019.510t.1994.
Amodei, Dario et al. (July 2016). “Concrete Problems in AI Safety”. In: arXiv:1606.06565 [cs].
Autor, David (Aug. 2015). “Why Are There Still So Many Jobs? The History and Future of Workplace Automation”. In: Journal of Economic Perspectives 29.3, pp. 3–30. ISSN: 0895-3309. DOI: 10.1257/jep.29.3.3.
Autor, David and David Dorn (Aug. 2013). “The Growth of Low-Skill Service Jobs and the Polarization of the US Labor Market”. In: American Economic Review 103.5, pp. 1553–1597. ISSN: 0002-8282. DOI: 10.1257/aer.103.5.1553.
Autor, David, Frank Levy, and Richard J. Murnane (2003). “The Skill Content of Recent Technological Change: An Empirical Exploration”. In: Quarterly Journal of Economics.
Autor, David, David A. Mindell, and Elisabeth B. Reynolds (2022). The Work of the Future: Building Better Jobs in an Age of Intelligent Machines. The MIT Press. ISBN: 978-0-262-36775-2. DOI: 10.7551/mitpress/14109.001.0001.
Babina, Tania et al. (Nov. 2021). “Artificial Intelligence, Firm Growth, and Product Innovation”.
Beranic, Tina, Patrik Rek, and Marjan Hericko (Oct. 2020). “Adoption and Usability of Low-Code/No-Code Development Tools”. In: Proceedings of the Central European Conference on Information and Intelligent Systems. Varazdin, Croatia.
Berman, Eli, John Bound, and Stephen Machin (Nov. 1998). “Implications of Skill-Biased Technological Change: International Evidence”. In: The Quarterly Journal of Economics 113.4, pp. 1245–1279. ISSN: 1531-4650, 0033-5533. DOI: 10.1162/003355398555892.
Blanchet, Thomas, Emmanuel Saez, and Gabriel Zucman (Feb. 2022). Realtime Inequality. https://realtimeinequality.org/.
Bound, John and George Johnson (1992). “Changes in the Structure of Wages in the 1980’s: An Evaluation of Alternative Explanations”. In: The American Economic Review 82.3, pp. 371–392. ISSN: 00028282.
Brand, Jennie E. (Aug. 2015). “The Far-Reaching Impact of Job Loss and Unemployment”. In: Annual Review of Sociology 41.1, pp. 359–375. ISSN: 0360-0572, 1545-2115. DOI: 10.1146/annurev-soc-071913-043237.
Brown, Tom B. et al. (July 2020). “Language Models Are Few-Shot Learners”. In: arXiv:2005.14165 [cs].
Brynjolfsson, Erik (Jan. 2022). “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence”. In: arXiv:2201.04200 [cs, econ, q-fin].
Brynjolfsson, Erik, Seth Benzell, and Daniel Rock (2020). Understanding and Addressing the Modern Productivity Paradox. Research Brief. MIT.
Brynjolfsson, Erik and Andrew McAfee (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. First edition. New York: W.W. Norton & Company. ISBN: 978-0-393-23935-5.
Brynjolfsson, Erik, Daniel Rock, and Chad Syverson (Nov. 2017). Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics. Tech. rep. w24001. Cambridge, MA: National Bureau of Economic Research. DOI: 10.3386/w24001.
Caprettini, Bruno and Hans-Joachim Voth (Sept. 2020). “Rage against the Machines: Labor-Saving Technology and Unrest in Industrializing England”. In: American Economic Review: Insights 2.3, pp. 305–320. ISSN: 2640-205X, 2640-2068. DOI: 10.1257/aeri.20190385.
Chen, Mark et al. (July 2021). “Evaluating Large Language Models Trained on Code”. In: arXiv:2107.03374 [cs].
Corral, Luis, Ilenia Fronza, and Claus Pahl (Oct. 2021). “Block-Based Programming Enabling Students to Gain and Transfer Knowledge with a No-code Approach”. In: Proceedings of the 22nd Annual Conference on Information Technology Education. Snowbird, UT, USA: ACM. ISBN: 978-1-4503-8355-4. DOI: 10.1145/3450329.3478314.
Damioli, Giacomo, Vincent Van Roy, and Daniel Vertesy (Mar. 2021). “The Impact of Artificial Intelligence on Labor Productivity”. In: Eurasian Business Review 11.1, pp. 1–25. ISSN: 1309-4297, 2147-4281. DOI: 10.1007/s40821-020-00172-8.
Dhariwal, Prafulla et al. (Apr. 2020). “Jukebox: A Generative Model for Music”. In: arXiv:2005.00341 [cs, eess, stat].
Frank, Morgan R. et al. (Apr. 2019). “Toward Understanding the Impact of Artificial Intelligence on Labor”. In: Proceedings of the National Academy of Sciences 116.14, pp. 6531–6539. ISSN: 0027-8424, 1091-6490. DOI: 10.1073/pnas.1900949116.
Frey, Carl Benedikt (2019). The Technology Trap: Capital, Labor, and Power in the Age of Automation. First paperback printing. Princeton, New Jersey; Oxford: Princeton University Press. ISBN: 978-0-691-21079-7, 978-0-691-17279-8.
Frey, Carl Benedikt and Michael A. Osborne (Jan. 2017). “The Future of Employment: How Susceptible Are Jobs to Computerisation?” In: Technological Forecasting and Social Change 114, pp. 254–280. ISSN: 00401625. DOI: 10.1016/j.techfore.2016.08.019.
Goos, Maarten (July 2018). “The Impact of Technological Progress on Labour Markets: Policy Challenges”. In: Oxford Review of Economic Policy 34.3, pp. 362–375. ISSN: 0266-903X, 1460-2121. DOI: 10.1093/oxrep/gry002.
Goos, Maarten and Alan Manning (Feb. 2007). “Lousy and Lovely Jobs: The Rising Polarization of Work in Britain”. In: Review of Economics and Statistics 89.1, pp. 118–133. ISSN: 0034-6535, 1530-9142. DOI: 10.1162/rest.89.1.118.
Goos, Maarten, Alan Manning, and Anna Salomons (Aug. 2014). “Explaining Job Polarization: Routine-Biased Technological Change and Offshoring”. In: American Economic Review 104.8, pp. 2509–2526. ISSN: 0002-8282. DOI: 10.1257/aer.104.8.2509.
Gordon, Robert (Apr. 2018). Why Has Economic Growth Slowed When Innovation Appears to Be Accelerating? Tech. rep. w24554. Cambridge, MA: National Bureau of Economic Research. DOI: 10.3386/w24554.
Grace, Katja et al. (July 2018). “Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts”. In: Journal of Artificial Intelligence Research 62, pp. 729–754. ISSN: 1076-9757. DOI: 10.1613/jair.1.11222.
Gruetzemacher, Ross, David Paradice, and Kang Bok Lee (Dec. 2020). “Forecasting Extreme Labor Displacement: A Survey of AI Practitioners”. In: Technological Forecasting and Social Change 161, p. 120323. ISSN: 00401625. DOI: 10.1016/j.techfore.2020.120323.
Jaumotte, Florence, Subir Lall, and Chris Papageorgiou (June 2013). “Rising Income Inequality: Technology, or Trade and Financial Globalization?” In: IMF Economic Review 61.2, pp. 271–309. ISSN: 2041-4161, 2041-417X. DOI: 10.1057/imfer.2013.7.
Kaplan, Greg and Sam Schulhofer-Wohl (Aug. 2018). “The Changing (Dis-)Utility of Work”. In: Journal of Economic Perspectives 32.3, pp. 239–258. ISSN: 0895-3309. DOI: 10.1257/jep.32.3.239.
Katz, L. F. and K. M. Murphy (Feb. 1992). “Changes in Relative Wages, 1963-1987: Supply and Demand Factors”. In: The Quarterly Journal of Economics 107.1, pp. 35–78. ISSN: 0033-5533, 1531-4650. DOI: 10.2307/2118323.
Klinova, Katya and Anton Korinek (July 2021). “AI and Shared Prosperity”. In: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. Virtual Event, USA: ACM, pp. 645–651. ISBN: 978-1-4503-8473-5. DOI: 10.1145/3461702.3462619.
Lu, Qian (2015). “The End of Polarization? Technological Change and Employment in the U.S. Labor Market”.
Mokyr, Joel, Chris Vickers, and Nicolas L. Ziebarth (Aug. 2015). “The History of Technological Anxiety and the Future of Economic Growth: Is This Time Different?” In: Journal of Economic Perspectives 29.3, pp. 31–50. ISSN: 0895-3309. DOI: 10.1257/jep.29.3.31.
OpenAI (2018). OpenAI Charter. https://openai.com/charter/.
EU-OSHA (2021). Impact of Artificial Intelligence on Occupational Safety and Health: Policy Brief.
Partnership on AI (2020). Framework for Promoting Workforce Well-being in the AI-Integrated Workplace.
— (2021). Redesigning AI for Shared Prosperity: An Agenda.
Radford, Alec et al. (Feb. 2021). “Learning Transferable Visual Models From Natural Language Supervision”. In: arXiv:2103.00020 [cs].
Rae, Jack W. et al. (Jan. 2022). “Scaling Language Models: Methods, Analysis & Insights from Training Gopher”. In: arXiv:2112.11446 [cs].
Romer, Paul M. (Oct. 1990). “Endogenous Technological Change”. In: Journal of Political Economy 98.5, Part 2, S71–S102. ISSN: 0022-3808, 1537-534X. DOI: 10.1086/261725.
Roser, Christoph (Oct. 2016). Faster, Better, Cheaper in the History of Manufacturing: From the Stone Age to Lean Manufacturing and Beyond. 1st edition. Boca Raton: CRC Press / Productivity Press. ISBN: 978-1-315-36794-1. DOI: 10.1201/9781315367941.
Rotman, David (Oct. 2014). “Technology and Inequality”. In: MIT Technology Review.
Schmillen, Achim D. (May 2020). Causes and Impacts of Job Displacements and Public Policy Responses. Tech. rep. World Bank, Washington, DC. DOI: 10.1596/33720.
Seamans, Robert and Manav Raj (2018). “AI, Labor, Productivity and the Need for Firm-Level Data”. In: National Bureau of Economic Research.
Siddarth, Divya et al. (2021). How AI Fails Us. Tech. rep. Justice, Health & Democracy Impact Initiative, Edmond J. Safra Center for Ethics, Harvard University.
Smith, Shaden et al. (Feb. 2022). “Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model”. In: arXiv:2201.11990 [cs].
Solow, Robert M. (Feb. 1956). “A Contribution to the Theory of Economic Growth”. In: The Quarterly Journal of Economics 70.1, p. 65. ISSN: 00335533. DOI: 10.2307/1884513.
Stone, Peter et al. (Sept. 2016). Artificial Intelligence and Life in 2030. One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel. Tech. rep. Stanford, CA: Stanford University.
Sun, Yu et al. (July 2021). “ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation”. In: arXiv:2107.02137 [cs].
Tolan, Songül et al. (June 2021). “Measuring the Occupational Impact of AI: Tasks, Cognitive Abilities and AI Benchmarks”. In: Journal of Artificial Intelligence Research 71, pp. 191–236. ISSN: 1076-9757. DOI: 10.1613/jair.1.12647.
Trammell, Philip and Anton Korinek (2021). “Economic Growth under Transformative AI”.
Van de Werfhorst, Herman G. and Wiemer Salverda (Dec. 2012). “Consequences of Economic Inequality: Introduction to a Special Issue”. In: Research in Social Stratification and Mobility 30.4, pp. 377–387. ISSN: 02765624. DOI: 10.1016/j.rssm.2012.08.001.
Weidinger, Laura et al. (Dec. 2021). “Ethical and Social Risks of Harm from Language Models”. In: arXiv:2112.04359 [cs].
Xu, Frank F., Bogdan Vasilescu, and Graham Neubig (Sept. 2021). “In-IDE Code Generation from Natural Language: Promise and Challenges”. In: arXiv:2101.11149 [cs].
Young, C. (Dec. 2012). “Losing a Job: The Nonpecuniary Cost of Unemployment in the United States”. In: Social Forces 91.2, pp. 609–634. ISSN: 0037-7732, 1534-7605. DOI: 10.1093/sf/sos071.
acf950ef-3ce4-49be-a40e-c9ee71fff643
trentmkelly/LessWrong-43k
LessWrong
Why you must maximize expected utility

This post explains von Neumann-Morgenstern (VNM) axioms for decision theory, and what follows from them: that if you have a consistent direction in which you are trying to steer the future, you must be an expected utility maximizer. I'm writing this post in preparation for a sequence on updateless anthropics, but I'm hoping that it will also be independently useful.

The theorems of decision theory say that if you follow certain axioms, then your behavior is described by a utility function. (If you don't know what that means, I'll explain below.) So you should have a utility function! Except, why should you want to follow these axioms in the first place?

A couple of years ago, Eliezer explained how violating one of them can turn you into a money pump — how, at time 11:59, you will want to pay a penny to get option B instead of option A, and then at 12:01, you will want to pay a penny to switch back. Either that, or the game will have ended and the option won't have made a difference.

When I read that post, I was suitably impressed, but not completely convinced: I would certainly not want to behave one way if behaving differently always gave better results. But couldn't you avoid the problem by violating the axiom only in situations where it doesn't give anyone an opportunity to money-pump you? I'm not saying that would be elegant, but is there a reason it would be irrational?

It took me a while, but I have since come around to the view that you really must have a utility function, and really must behave in a way that maximizes the expectation of this function, on pain of stupidity (or at least that there are strong arguments in this direction). But I don't know any source that comes close to explaining the reason, the way I see it; hence, this post.

I'll use the von Neumann-Morgenstern axioms, which assume probability theory as a foundation (unlike the Savage axioms, which actually imply that anyone following them has not only a utility function but also a probability distribution).
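As a toy illustration of the money-pump dynamic described above, the following sketch (an illustrative addition, not part of the original argument) simulates an agent with cyclic preferences who pays a penny for every swap it strictly prefers; the penny cost and trade sequence are arbitrary.

```python
# Toy money pump: an agent who strictly prefers B to A, C to B, and A to C
# (a preference cycle, so no utility function can represent it) will pay a
# penny for each "upgrade" and end up holding its original option, poorer.
prefers = {("B", "A"), ("C", "B"), ("A", "C")}

def accepts_trade(offered: str, held: str) -> bool:
    return (offered, held) in prefers

held, wealth = "A", 0.00
for offered in ["B", "C", "A", "B", "C", "A"]:
    if accepts_trade(offered, held):
        held = offered
        wealth -= 0.01  # pays a penny for each trade it strictly prefers

print(held, round(wealth, 2))  # prints: A -0.06  (back where it started)
```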
21cfc713-eee6-44cf-9ed2-d78962223246
trentmkelly/LessWrong-43k
LessWrong
Meetup : Fort Collins, Colorado Meetup Wednesday 7pm

Discussion article for the meetup : Fort Collins, Colorado Meetup Wednesday 7pm

WHEN: 08 February 2012 07:00:00PM (-0700)
WHERE: 144 North College Avenue, Fort Collins, CO 80524

Sorry for the short notice. We will be having our usual meetups every Wednesday in February. Please sign up for the mailing list: http://groups.google.com/group/less-wrong-fort-collins-co

Discussion article for the meetup : Fort Collins, Colorado Meetup Wednesday 7pm
a298e1c2-26f4-4edf-ada5-4c53648523ce
trentmkelly/LessWrong-43k
LessWrong
Last chance to donate for 2011

Many LW readers choose to direct their charitable donations to SingInst with a view toward reducing existential risk. Others do not, whether because they feel they lack an understanding of the relevant issues, because they value present-day humans more than future humans, or because they have concerns about the incentive effects that would be created by donating to SingInst at present.

I personally feel that there's a strong case for saving money to donate later on account of better information being available in the future. However, I feel cognitive dissonance attached to saving to donate later rather than now. If you are in this camp you might consider donating to GiveWell's top-ranked charities. Also note that spreading the word about GiveWell promotes a culture of effective philanthropy, which is likely to have the spin-off effect of getting people interested in x-risk reduction, thereby reducing x-risk.

See Holden's article on last-minute donations http://blog.givewell.org/2011/12/30/last-minute-donations/ :

> "Of the money moved to our top charities through our website in 2010, 25% was on December 31st alone. We know that lots of people will be looking to make last-minute donations.
>
> If you only have five minutes but you want to take advantage of the thousands of hours of work we put into finding the best giving opportunities, consider giving to our top charities. They have strong track records, accomplish a lot of good per dollar spent, and have good concrete plans for how to use additional donations.
>
> A couple of things to keep in mind:
>
> * After you give, spread the word. This is the perfect time to remind people (via Facebook sharing, tweeting, etc.) to give before the year ends. And people making last-minute gifts are likely to be receptive to suggestions.
> * If you have any questions, we’re here to help. We should be available by phone for most of the day, and responding to email when we’re not. (See our contact page). Our research FAQ may also be a good resource
b97f76df-ffb6-49bf-b25e-7ce68d90ce74
trentmkelly/LessWrong-43k
LessWrong
Open & Welcome Thread - December 2022 If it’s worth saying, but not worth its own post, here's a place to put it. If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post. If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong. If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section. The Open Thread tag is here. The Open Thread sequence is here.
8c05b2f8-8d72-4cf0-91b9-6af7762fad7a
trentmkelly/LessWrong-43k
LessWrong
Keep Your Identity Fluid [LINK] Building on Paul Graham's idea of keeping your identity small, here I look at the hazards of identity, and give a suggestion for leveraging it to your advantage, as well as avoiding pitfalls. As per my last article, feel free to let me know what you think here, privately, or anonymously. Link.
89c420d6-b66a-4b50-a955-af895aae1442
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Questions about Conjecture's CoEm proposal Conjecture recently released an [AI safety proposal](https://www.lesswrong.com/posts/ngEvKav9w57XrGQnb/cognitive-emulation-a-naive-ai-safety-proposal). The three of us spent a few hours discussing the proposal and identifying questions that we have. (First, we each re-read the post and independently brainstormed a few questions we had. Then, we discussed the post, exchanged questions/uncertainties, and consolidated our lists.)  [Conjecture's post](https://www.lesswrong.com/posts/ngEvKav9w57XrGQnb/cognitive-emulation-a-naive-ai-safety-proposal) is concise, which means it leaves out many details. Many of our questions are requests for more details that would allow us (and others) to better understand the proposal and evaluate it more thoroughly. **Requesting examples and details** 1. What are the building blocks that the CoEms approach will draw from? What are examples of past work that has shown us how to build powerful systems that are human-understandable? 2. What are examples of "knowledge of building systems that are broadly beneficial and safe while operating in the human capabilities regime?" (see [Wei\_Dai's comment](https://www.lesswrong.com/posts/ngEvKav9w57XrGQnb/cognitive-emulation-a-naive-ai-safety-proposal?commentId=LnExJ3AP6uEr3rB8i#BkNrRDHRbdwtQ9oTr)) 3. What's an example of an experiment that would be considered part of the CoEm agenda? (see [Garrett Baker's comment](https://www.lesswrong.com/posts/ngEvKav9w57XrGQnb/cognitive-emulation-a-naive-ai-safety-proposal)) 4. What kinds of approaches does Conjecture intend to use to extract alignment insights "purely from mining current level systems"? (Is this the same as interpretability research and [digital neuroscience](https://www.cold-takes.com/high-level-hopes-for-ai-alignment/)?) 5. The "minimize magic" section feels like that is where the juice is, but it's not really explained much, which makes it difficult to evaluate. Can you offer more details about how you intend to minimize magic? **Conceptual questions** 1. Assume you had a fully human-understandable system, and you could understand its current capabilities. How would you be able to forecast its *future* capabilities (e.g., if deployed or if given certain commands)? 1. If we solve human neuroscience such that we could understand the brain of a 2-year-old, we would be able to accurately assess the (current) capabilities of the 2-year-old. However, we would not necessarily be able to predict the (future) capabilities of this brain once it is 30 years old. Analogously, if we had a human-understandable AI (that may be superintelligent) through the CoEms agenda, would we only be able to understand its current capabilities, or would there be a reliable way to forecast its future capabilities? 2. Charlotte thinks that humans and advanced AIs are universal Turing machines, so predicting capabilities is not about whether a capability is present at all, but whether it is feasible in finite time with a low enough error rate. Predicting how such error rates decline with experience and learning seems roughly equally hard for human-understandable AIs and other AIs. 2. How easy is it to retarget humans? 1. When you refer to "retargetability", we assume you refer to something like the following: "Currently the AI has goal X, and you want to train it to have goal Y. If you do that, you truly change its goals to Y (rather than making it pretend to follow Y and then, when you are not in control anymore, switching back to X)." 2. We agree that in some sense, humans are retargetable. For example, if someone has very advanced persuasion tools or if the "persuader" is significantly stronger than the "persuadee" (e.g., a dictator persuading a citizen). 3. But even that is very hard, and often one just changes their incentives/strategy rather than their actual goals. 4. However, humans seem to be much less retargetable by *other agents who are similarly powerful*. For example, how would you retarget the goals of an (equally intelligent and equally powerful) neighbor? 5. Alternatively, you might refer to a much weaker version of "retargetability", e.g., a very weak version of corrigible alignment. If this is what you mean, we wonder why this is a particularly important property. **Other questions** 1. Does Conjecture believe this approach is competitive with approaches that rely on Magic? Does this plan only work if we have ambitious global coordination (e.g., governments say that people are no longer able to use Magic when training systems)? 2. How many technical researchers does Conjecture have, and what % of its alignment labor will be going into the CoEms agenda (as opposed to other research directions)? 3. When you talk about CoEms, how much does this mean that you are using "[cognitive architectures](https://en.wikipedia.org/wiki/Cognitive_architecture)"? 1. If you're using cognitive architectures, why do you expect them to be human-like? *It is possible that satisfactory answers to some of these questions would involve revealing infohazards, but we're hopeful that some of them could be addressed without revealing infohazards.*
75b16231-0ca9-4c35-ae6f-f00286b5c17f
trentmkelly/LessWrong-43k
LessWrong
Subway Slides Here is the current layout of Davis Square station: If you're coming in from the primary headhouse, on the right, your path is really indirect: I think the solution is clear: Put a fare gate at the top, or switch the entire MBTA to industry best practice proof-of-payment. Comment via: facebook
dff654e0-7e65-41e7-bc03-5955e97103f3
StampyAI/alignment-research-dataset/blogs
Blogs
Whole Bird Emulation requires Quantum Mechanics *Jeffrey Heninger, 14 February 2023* *Epistemic status: Written for engagement. More sober analysis coming soon.* > > Bird navigation is surprisingly cruxy for the future of AI. > > > – Zach Stein-Perlman > > This seems pretty wrong. > > > – Richard Korzekwa Birds are astonishingly good at navigating, even over thousands of miles. The longest migration routes, those of the arctic tern, are only limited by the size of the globe. Homing pigeons can return home after being released 1800 km (1100 mi) away. White-crowned sparrows have been able to migrate to their wintering grounds after being displaced 3700 km (2300 mi) shortly before they began migration. How they do this is not entirely understood. There seem to be multiple cues they respond to, which combine to give them an accurate 'map' and 'compass'. Which cues are most important might be different for different species. Some of these cues include watching the stars & sun, low frequency sounds, long-range smells, and detecting the earth's magnetic field. This last one is the most interesting. Birds can detect magnetic fields, and there is increasing consensus that the detection mechanism involves quantum mechanics (see Appendix for details). The result is a precise detector of the magnetic field. It is located in the retina, and its signal is transferred up the optic nerve to the brain, so birds can 'see' magnetic fields. Leaving aside questions like "[What is it like to be a [Bird]?](https://warwick.ac.uk/fac/cross_fac/iatl/study/ugmodules/humananimalstudies/lectures/32/nagel_bat.pdf)", this result has implications for the difficulty of Whole Bird Emulation (WBE). WBE is important for understanding the future development of artificial intelligence. If we can put an upper bound on the difficulty of WBE, we have an upper bound on the difficulty of making AI that can do everything a bird can do. And birds can do lots of cool things: they know how to fly, they sing pretty songs, and they even [drop nuts in front of cars](https://www.youtube.com/watch?v=BGPGknpq3e0)!  In order to put bounds on WBE, we need to determine how much resolution is needed in order to emulate everything a bird can do. Is it good enough to model a bird at the cellular level? Or at the protein level? Or do you need an even finer resolution? In order to model the navigational ability of a bird, you need a quantum mechanical description of the spin state of a pair of electrons. This is extremely high resolution. A few caveats: * Not all parts of a bird require quantum mechanics to describe their macroscopic behavior. You can likely get away with coarse-graining most of the bird at a much higher level. * This is a simple quantum system, so it's not hard to figure out the wave function over the singlet and triplet states. * What you need to know to determine the behavior of the bird is the concentration of the two final products as a function of the external magnetic field. Once this (quantum mechanical) calculation is done, you likely don't need to model the subsequent evolution of the bird using quantum mechanics. On the other hand: * Birds are extremely complicated things, so it is always somewhat surprising when we understand anything in detail about them. * If quantum mechanics is necessary to understand the macroscopic behavior of some part of a bird, we should think that it is more likely that quantum mechanics is necessary to understand the macroscopic behavior of other parts of a bird too. 
* If there are other parts of a bird which depend on quantum mechanics in a more complicated way, or if the macroscopic response cannot be well modeled using classical probabilities, we almost certainly would not have discovered it. Getting good empirical evidence for even simple models of biological systems is hard. Getting good empirical evidence for complex models of biological systems is much harder. WBE requires a quantum mechanical calculation in order to describe at least one macroscopic behavior of birds. This dramatically increases the resolution needed for at least parts of WBE and the overall expected difficulty of WBE. If your understanding of artificial intelligence would have predicted that Whole Bird Emulation would be much simpler than this, you should update accordingly. Unless, of course, Birds Aren't Real. Further Reading --------------- * Lambert et al. *Quantum Biology.* Nature Physics **9**. (2013) <https://quantum.ch.ntu.edu.tw/ycclab/wp-content/uploads/2015/01/Nat-Phys-2013-Lambert.pdf>. * Holland. *True navigation in birds: from quantum physics to global migration.* Journal of Zoology **293**. (2014) <https://zslpublications.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jzo.12107>. * Ritz. *Quantum effects in biology: Bird navigation.* Procedia Chemistry **3**. (2011) <https://www.sciencedirect.com/science/article/pii/S1876619611000738>. Appendix -------- Here is a brief description of how a bird's magnetic sense seems to work: A bird's retina contains some pigments called cryptochromes. When blue or green light (<570 nm) is absorbed by the pigment, an electron is transferred from one molecule to another. This electron had previously been paired with a different electron, so after the transfer, there is now an excited radical pair. Initially, the spins of the two electrons are anti-parallel (they are initially in the singlet state). An external magnetic field can cause one of the electrons to flip so they become parallel (they transition to a triplet state). Transitions can also occur due to interactions with the nuclear spins, so it is better to think of the external magnetic field as changing the rate at which transitions happen instead of introducing entirely new behavior. The excited singlet state decays back to the original state of the cryptochrome, while the excited triplet state decays into a different product. Neurons in the retina can detect the change in the relative concentration of these two products, providing a measurement of the magnetic field. This model has made several successful predictions. (1) Cryptochromes were originally known from elsewhere in biology. This theory predicted that they, or another pigment which produces radical pairs, would be found in birds' eyes. (2) Low amplitude oscillating magnetic fields with a frequency between 1-100 MHz should also affect the transition between the singlet and triplet states. Exposing birds to these fields disrupts their ability to navigate.
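Prediction (2) admits a quick sanity check. A back-of-the-envelope calculation (my illustration, not from the post): singlet-triplet interconversion driven by the geomagnetic field should be resonant near the electron's Larmor frequency, which for a ~50 μT field lands right at the bottom of the 1-100 MHz band that was found to disrupt navigation:

```python
# Back-of-the-envelope check (my illustration, not from the post): the
# resonant frequency for an electron spin in Earth's magnetic field.
# f_Larmor = g * mu_B * B / h, with g ~ 2 for a (nearly) free electron.

g = 2.0023        # electron g-factor (dimensionless)
mu_B = 9.274e-24  # Bohr magneton, J/T
h = 6.626e-34     # Planck constant, J*s
B_earth = 50e-6   # typical geomagnetic field strength, T

f_larmor = g * mu_B * B_earth / h
print(f"electron Larmor frequency: {f_larmor / 1e6:.2f} MHz")
# -> about 1.40 MHz, inside the 1-100 MHz band that disrupts bird navigation
```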
84d4ebd2-95ae-4e80-85c2-b567724aac4b
trentmkelly/LessWrong-43k
LessWrong
Solve Corrigibility Week A low-hanging fruit for solving alignment is to dedicate a chunk of time actually trying to solve a sub-problem collectively. To that end, I’ve broken up researching the sub-problem of corrigibility into two categories in this google doc (you have suggestion privileges): 1. Previous Work: let’s not reinvent the wheel. Write out links to any past work on corrigibility. This can range from just links to links & summaries & analyses. Do comment reactions to other's reviews to provide counter-arguments. This is just a google doc, low-quality posts, comments, links are accepted; I want people to lean towards babbling more. 2. Tasks: what do we actually do this week to make progress?  Suggest any research direction you find fruitful or general research questions or framings. Example: write an example of corrigibility (one could then comment an actual example). Additionally, I’ll post 3 top-level comments for: 1. Meetups: want to co-work with others in the community? Comment availability, work preferences, and a way to contact you (eg calendly link, “dm me”, “ my email is bob and alice dot com”, etc)  For example, I’m available most times this week with a Calendly link for scheduling 1-on-1 co-working sessions. Additionally, you yourself could message those you know to collaborate on this, or have a nerdy house co-working party. 2. Potential topics: what other topics besides corrigibility could we collaborate on in future weeks? 3. Meta: what are different formats this type of group collaboration could take? Comment suggestions with trade offs or discuss the cost/benefits of what I’m presenting in this post. I do believe there’s a legitimate, albeit small, chance that we solve corrigibility or find its “core” this week. Nonetheless, I think it’s of great value to be able to make actual progress on alignment issues as a community and to figure out how to do that better. Additionally, it’s immensely valuable to have an alignment topic post include a literature revi
5d8848bb-bcf3-4fe5-93e2-7aa0f06105ad
trentmkelly/LessWrong-43k
LessWrong
All The Latest Human tFUS Studies from Peng et al.; stimulating the nucleus accumbens inhibits its activity Transcranial focused ultrasound neuromodulation — altering the brain's activity with low-intensity ultrasound — is really exciting. It allows us to manipulate arbitrary regions of the brain without surgery, potentially replacing the (brain-damaging) electrode implants currently used for serious neurological conditions like epilepsy and Parkinson's, and potentially also expanding applications of brain stimulation to milder conditions not worth the risks of brain surgery, like mental illness, addiction, or chronic pain. The field is rapidly growing, and since I wrote my earlier post series there have been quite a few human studies published. Here's a systematic overview of all the human studies published in 2024, by target brain region. HEADLINE RESULTS This year's papers further confirm, to start with, that ultrasound does things to brain activity, if that was still in doubt, and that it is safe enough to run human experiments with (no adverse effects during experiments with small numbers of participants and brief exposures). There are notably inconsistent results in whether targeting ultrasound to a given brain area increases or decreases neural activity in that area, even in some cases when the same area is targeted with the same sonication parameters! We clearly need to get a better sense of what ultrasound even does. Most studies don't do the obvious (but admittedly expensive) thing of confirming a change in neural activity via a noninvasive measure like fMRI. Those that do show different results (more activity in the targeted region, less activity in the targeted region, or neither) depending on which region is targeted; this tells us that "tFUS" as a class doesn't have a globally consistent effect on targeted neural activity. Again, still more to learn. However, despite the primitive state of our understanding of this modality, we do already seem to have some strikingly useful r
eb117c61-efc2-48b6-b39b-776fdf2d67d5
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Inner Alignment via Superpowers *Produced As Part Of The SERI ML Alignment Theory Scholars Program 2022 Under* [*John Wentworth*](https://www.lesswrong.com/users/johnswentworth) The Problem =========== When we train RL agents, they have many opportunities to see what makes actions *useful* (they have to locate obstacles, navigate around walls, navigate through narrow openings, etc.) but they can only learn what they should actually care about from how the goal appears in training. When deployed, their capabilities often generalize just fine, but their goals don't generalize as intended. This is called [*goal misgeneralization*](https://arxiv.org/pdf/2105.14111.pdf). Usually we conceptualize robustness as 1-dimensional, but to talk about goal misgeneralization, we need to use [vlad\_m](https://www.lesswrong.com/users/vlad_m)'s 2-dimensional model: ![](https://lh4.googleusercontent.com/4Gez1LRfECla0sOrd5YIOCqT0G_S48UevFS0BWdcUW5w_0YciLprdun1JEWb1gwcLu05g2CXRPxTF-2aUR-lEme0a7TFZUkm6n6Ag-HMnVFsavxJmx4N6M4Nncqm9TklJEdo94307ncrJ6Ydp5sD9zw) 1D Robustness above; 2D Robustness below, with the Line of Doom in grey. [Source](https://www.lesswrong.com/posts/2mhFMgtAjFJesaSYR/2-d-robustness). "There's an easy solution to this," you might say. "Just present a whole boatload of environments where the goals vary along every axis, then they *have* to learn the right goal!" "Our sweet summer child," we respond, "if only it were so simple." Remember, we need to scale this beyond simple gridworlds and Atari environments, where we can just change coin position and gem color; we're going all the way to AGI (whether we like it or not). Can we really manually generate training data that teaches the AGI what human values are? We need a method that'll be robust to huge distribution shifts, things we aren't able to even think of. We need a method that'll allow this AGI to find what humans value. We need superpowers! Proposed Solution ================= Our solution is 'giving the AI superpowers.' Oh, that's not clear enough?  Alright then: during training, we occasionally let the RL agent access an expanded action-space. This lets it act without the restrictions of its current abilities. We also encourage it to explore states where it's uncertain about whether it'll get reward or not. The aim is that these 'superpowers' will let the AI itself narrow down what goals it ought to learn, so that we won't need to be as certain we've covered everything in the explicit training data. Through this we hope to combat the two principal drivers of goal misgeneralization: Instrumental Goals ------------------ When you were a child, you were happy every time you ate a lollipop. But you realized you needed money to buy lollipops, so eventually you started becoming happy whenever you made money. And look at you now, hundreds of dollars in the bank and not a lollipop in sight.  The same thing can happen with RL agents: sometimes the same action is reinforced across so many different environments that they start to inherently value taking that action. But we don't want them to value things *on the way* to human values, we want them to value *human values themselves*, no matter how they get there. Giving them the ability to get right to human values without any of the intermediate steps, and rewarding them for it, should help make them value that goal in particular, and not simply the instrumental goals. Goal Ambiguity -------------- Imagine you're a schoolkid who wants to be really good at math. 
So you work really hard to show how good you are at math by getting good grades. But eventually, you realize you can get even better grades in math by sneaking a calculator into tests with you. So you start sneaking a calculator in to every test, and your grades skyrocket. But one day, you happen to ask yourself: “What’s 7x4?,” and you realize you weren’t *actually* getting better at math, you were just getting better grades in math. The same thing can happen with RL agents, sometimes they substitute the goal we want them to learn with something correlated with that goal. But we don’t want them to learn *proxies*for human values, we want them to learn to value *human values themselves*. Giving the RL agent the ability to strongly optimize the proxies they’re learning during training, and then **not** rewarding them for doing so should help to direct their learned pointer towards the real goal, and not just proxies for it. If the proxy performs well across all of their 'superpowers,' then we have a reward misspecification issue, and not a goal misgeneralization issue. --- In both of these cases, the overarching theme is that with ‘superpowers,’ the agent will be able to explore the reward landscape more freely. This gets at both of the distribution shifts that could lead to goal misgeneralization: 1. *Internal* distribution shift coming from an increase in the AI's capabilities 2. *External* distribution shift coming from the environment as a whole changing This proposal gets at the first problem directly, by simulating the AI having advanced capabilities throughout training, but also gets at the second problem indirectly, since some of these 'superpowers' will let the AI itself try to create its own 'perfect world',  giving it a reward signal about what worlds actually *are* perfect. Experiment Idea =============== We propose an experiment to test this solution. We will train RL agents of different architectures: model-based (with a hardcoded model) and [PPO](https://openai.com/blog/openai-baselines-ppo/#ppo). Then, during the training process, we give this agent 'superpowers' which simulate advanced capabilities, and allow the AI to directly modify the world (or world-model, in the model-based RL case). However the training process will be guided in large part by the AI's propensity to explore and thereby determine what the real objectives are. We therefore need to incentivize the AI to use these superpowers to explore the different possible objectives and environments that can be realized. *With great power comes great responsibility*! Therefore, we give the agent a bias to explore different possibilities when it has these superpowers. For example, if it's only trained on pursuing yellow coins then we want it to try creating and pursuing yellow lines. When it finds that these give it no reward, we want it try creating and pursuing red coins, and ultimately experiment enough to learn the One True Objective that "coins get reward." Some current possible candidates for 'superpowers' in a gridworld environment, where the agent's goal is to collect a coin, are: * Move the coin * Teleport anywhere in the grid * Rewrite any cell in the grid * Move through walls The 'superpower' that we ultimately want to give the policy selector, in the model-based RL case, is the ability to 'make all its dreams come true.' It achieves this by rewriting the learned world-model's perception of the world, so that it represents the agent's *imagined perfect world*. 
We can then reward the policy-selector according to how closely this *imagined perfect world* matches a world-model where the agent managed to achieve the actual goal, so that it learns what a *real perfect world* would look like. In PPO, we don’t currently have a similar ‘ultimate superpower’ that we want to give it access to, but we still want to see if an assortment of  'superpowers' works to make the agent generalize better. The issue is that we need access to a world where we *can* give it superpowers (e.g. not the real world), so we're not sure how to scale this to real-world applications without introducing a large *external* distribution shift. Motivation ========== We arrived at this proposal by thinking about how model-based RL agents could end up with a misaligned policy function, even if we could perfectly specify what worlds are good and bad. In this case, the bottleneck would be producing examples of good and bad worlds (and good and bad actions that lead to those worlds) for the AI to learn from. To solve this, we figured a good approach would be to let the AI itself generate diverse data on goals. This doesn't solve all the possible inner alignment problems we could have for arbitrarily powerful policy functions (e.g. there can still be [contextually activated mesa-optimizers](https://www.lesswrong.com/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment#Hard_cases)), but it'll make the policy function stay internally aligned to the true objective for longer. This proposal improves the generalization of alignment, but not the generalization of capabilities, meaning that it could result in an upward push on the 2D robustness graph above.
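As a concrete illustration of the proposed experiment, here is a minimal sketch of a gridworld with an expanded action space (entirely my own mock-up; the environment, action names, and the superpower-episode rate are invented for illustration, not taken from the post):

```python
import random

# Minimal sketch (environment and names are illustrative, not from the post):
# a coin-collecting gridworld where some training episodes grant "superpowers"
# -- an expanded action space that lets the agent reshape the world directly.

MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
SUPERPOWERS = ["teleport", "move_coin", "remove_wall"]

class SuperpowerGridworld:
    def __init__(self, size=5, superpowered=False):
        self.size = size
        self.superpowered = superpowered
        self.agent = (0, 0)
        self.coin = (size - 1, size - 1)
        self.walls = {(2, 2), (2, 3)}

    def actions(self):
        # The expanded action space is only exposed in superpower episodes.
        return list(MOVES) + (SUPERPOWERS if self.superpowered else [])

    def step(self, action, target=None):
        if action in MOVES:
            dx, dy = MOVES[action]
            x, y = self.agent
            nxt = (min(max(x + dx, 0), self.size - 1),
                   min(max(y + dy, 0), self.size - 1))
            if nxt not in self.walls:
                self.agent = nxt
        elif action == "teleport":
            self.agent = target         # jump anywhere, ignoring distance
        elif action == "move_coin":
            self.coin = target          # reshape the world directly
        elif action == "remove_wall":
            self.walls.discard(target)  # "move through walls" by editing them
        # Reward comes only from the true objective: being at the coin.
        return 1.0 if self.agent == self.coin else 0.0

# Mix ordinary and superpowered episodes so the agent can probe which
# world-configurations actually yield reward:
env = SuperpowerGridworld(superpowered=(random.random() < 0.2))
```

In this toy setup, an agent that has latched onto a proxy (say, "walk toward the corner") can use `move_coin` to test that proxy directly and discover that only coin-reaching is rewarded.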
b75cc6f3-f17b-4336-870d-7e3cd36714bd
trentmkelly/LessWrong-43k
LessWrong
Unraveling the Failure's Try I'm sure this idea exists somewhere else, but I haven't found it yet. (Probably my fault, I've only been on here for a few weeks and I'm still working through the core sequences.) The failure's try is a seemingly earnest attempt to overcome a challenge, but the challenger lacks the fundamental belief that they can surmount the challenge, and inevitably accepts the challenge's outcome as evidence of their failure. That is to say, you put in an honest effort without believing you can do it, so that you can turn around and say it was impossible all along. I think this is a readily identifiable symptom of cognitive bias, of unevenly weighing evidence because one starts with the conclusion and works backwards. The belief "I'm a failure" or at least "this task is impossible (for me)" exists somewhere in System 1, and subtly gravitates behavior toward manifesting itself. I am well aware that I do this, and I'm not sure how to correct it. Somewhere in my belief network rests this idea that I am insufficient, a node of impostor syndrome and biased evaluation. I got a 96 on my Chinese final, something I certainly would not have been able to do when I first arrived in China, but my reaction is not "I've progressed so much, I've learned all this material and tested well." It's... Well. Empty. Like I hadn't done anything at all. Like I didn't believe it happened. I've been working hard at learning this language for years, it's very important to me, and yet when faced with seemingly undeniable evidence that I've made substantial progress, I don't register success. I think this node manifests in another area as well. I feel like I /must/ become useful in as many aspects as possible. After reflecting for a long while on this, I think it's because I believe otherwise people won't want anything to do with me. I fundamentally doubt the idea that my presence alone is something that can be enjoyed; I feel that I must constantly be raising my ability to help others before they could accept me. So my q
ec56ad60-48a7-4c3d-834a-593e9388b5ff
trentmkelly/LessWrong-43k
LessWrong
The Sheepskin Effect Previously: The Case Against Education, The Case Against Education: Foundations, The Case Against Education: Splitting the Education Premium Pie and Considering IQ Epistemic Status: The spirit of Local Validity as a Key to Sanity and Civilization The sheepskin effect is that completing the last year of high school, college or graduate school is much more profitable than completing any of the previous years, rivaling those other years combined. Employers seem to be paying for the degree (aka the sheepskin, which it’s printed on) rather than the human capital being built over time. In the education chapter of Book Review: The Elephant in the Brain, I noted Robin relied on the sheepskin effect as strong evidence (along with other arguments, including impacts on national vs. personal income) that school was mostly signaling. Bryan Caplan does the same. He cites the data, seeing (on top of a 10% bonus in pay per year of school) 32% bonus pay for finishing high school, 10% for junior college, 30% for a bachelor’s degree and 18% for a masters. To those who claim this is mostly ability bias, he replies: > Ability bias explanations for sheepskin effects aren’t just hard to square with statistical evidence; they’re hard to square with the glaring fact that education spikes in degree years. If the labor market ignores credentials, why do so many college grads opt for zero graduate education? Are we supposed to believe one-third of the population has exactly the right ability to finish high school, but not advance to college? One-seventh has exactly the right ability to finish college, but not advance to graduate school? > > … > > To debunk sheepskin effects, correcting for these neglected abilities would have to drastically cut the payoff for degrees but not the payoff for years of schooling. What abilities would even conceivably qualify? This seems like a straw man; no one thinks the labor market ignores credentials, so it’s easy to see why students act the way they do
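To see how striking the degree-year spikes are, here is a quick illustrative calculation using the figures quoted above (my arithmetic; the placement of degree years at 12/14/16/18 and the simple compounding assumption are mine, not Caplan's model):

```python
# Illustrative arithmetic using the figures quoted above: a 10% premium per
# year of school, plus lump-sum "sheepskin" bonuses in degree years.
# Degree-year placement (12 = HS, 14 = JC, 16 = BA, 18 = MA) is my assumption.
PER_YEAR = 0.10
SHEEPSKIN = {12: 0.32, 14: 0.10, 16: 0.30, 18: 0.18}

def earnings_multiplier(years):
    mult = (1 + PER_YEAR) ** years
    for degree_year, bonus in SHEEPSKIN.items():
        if years >= degree_year:
            mult *= 1 + bonus
    return mult

# The jump from 11 to 12 years (finishing high school) dwarfs 10 to 11:
for y in (10, 11, 12):
    print(y, round(earnings_multiplier(y), 2))
# 10 -> 2.59 | 11 -> 2.85 | 12 -> 4.14  (the diploma year alone is worth ~45%)
```

Under these assumptions the diploma year pays roughly 1.10 x 1.32 - 1, about 45%, versus 10% for the year before it, which is the asymmetry the signaling story feeds on.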
be18669a-369b-4e56-8f9c-ab01efb6fd37
trentmkelly/LessWrong-43k
LessWrong
Intelligence risk and distance to endgame There are at least three objections to the risk of an unfriendly AI. One is that uFAI will be stupid - it is not possible to build a machine that is much smarter than humanity. Another is that AI would be powerful but uFAI is unlikely - the chances of someone building something that turn out malign, either deliberately or accidentally, is small. Another one that I haven't seen articulated, is the AI could be malign and potentially powerful, but effectively impotent due to its situation. To use a chess analogy: I'm virtually certain that Deep Blue will beat me at a game of chess. I'm also pretty sure that a better chess program with vastly more computer power would beat Deep Blue. But, I'm also (almost) certain that I would beat them both at a rook and king vs king endgame. If we try to separate out the axes of intelligence and starting position, where does your intuition tell you the danger area is ? To illustrate, what is the probability that humanity is screwed in each of the following ? 1) A lone human paperclip cultist resolves to convert the universe (but doesn't use AI). 2) One quarter of the world has converted to paperclip cultism and war ensues. No-one has AI. 3) A lone paperclip cultist sets the goal of a seed AI and uploads it to a botnet. 4) As for 2) but the cultists have a superintelligent AI to advise them.
bf22fcf5-504e-49ea-88f9-da68d95917d8
trentmkelly/LessWrong-43k
LessWrong
Radical Honesty I recently ran across this interesting article about Radical Honesty, a movement founded by a psychotherapist named Brad Blanton who suggests that we should kick our addiction to lying and just tell the complete truth all the time.  I also like this quote from the Wikipedia article on Radical Honesty:  "The significant majority of participants in the Radical Honesty workshops report dramatic changes in their lives after taking the course, though they are not always comfortable and positive."  The movement visibly suffers from having been founded by a psychotherapist - it's more about the amazing happiness that absolute truth-telling can bring to your relationships (!!) rather than such rationalist values as seeking truth by teaching yourself a habit of honesty, or not wishing to deceive others because it infringes on their autonomy. I once suggested a notion called "Crocker's Rules", which was the mirror image of Radical Honesty - rather than telling the whole truth to other people, you would strive to always allow others to tell you the complete truth without being offended. Crocker's Rules didn't give you the right to say anything offensive, but other people could say potentially offensive things to you, and it was your responsibility not to be offended. This was surprisingly hard to explain to people; many people would read the careful explanation and hear, "Crocker's Rules mean you can say offensive things to other people." I was initially a bit suspicious of Blanton's movement - it seemed like the mirror-image that so many people misinterpreted, the option of saying offensive things to other people.  But Blanton makes it not only optional, but mandatory to speak your mind - a far greater inconvenience than Crocker's Rules would ever impose on anyone. Crocker's Rules didn't catch on.  Maybe it was too hard to tell the difference between someone delivering a slap in the face, and someone deliberately invoking Crocker's Rules - you don't want to miss a real c
441e8a60-7157-4dba-bcf7-899a27188a64
trentmkelly/LessWrong-43k
LessWrong
Weirdness Points Vegans are often disliked. That's what I read online and I believe there is an element of truth to the claim. However, I eat a largely[1] vegan diet and I have never received any dislike IRL for my dietary preferences whatsoever. To the contrary, people often happily bend over backwards to accommodate my quirky dietary preferences—even though I don't ask them to. Why is my experience so different from the more radical vegans? It's very simple. I don't tell other people what to eat, and they don't tell me what to eat. Everyone on Planet Earth knows that there are people from other cultures with strange, arbitrary dietary guidelines. And by everyone, I mean everyone. I read a story about two European anthropologists living among the hunter-gatherers of New Guinea. One anthropologist was French; the other anthropologist was English. Meat was precious in the jungle, so the locals honored the anthropologists by offering them meat. Specifically, a disease-ridden rat, roasted over a fire. The Europeans didn't want to eat that, but they couldn't tell the New Guineans they thought it was gross, because that would be rude. The Frenchman choked down his half of the rat with a forced smile. The Frenchman's precedent meant that the Englishman would have to eat his half of the rat too, right? Nope. The Englishman simply explained that he and the Frenchman were from different European tribes called "French" and "English", and that the English have a religious injunction that prohibits them from eating rats. Did the New Guinean hunter-gatherers become confused because Frenchmen and Englishmen look the same to them? Not at all. Every group of people on Planet Earth, from the streets of London to the jungles of New Guinea, understands that other people far away are divided into other cultures with arbitrary dietary rules. Fudging the difference between cultural injunctions vs religious injunctions is barely even a lie when you're talking to polytheists. Or atheist Jews, for that matter.
42fc0862-66b0-417e-8bc8-2f014edb9e56
trentmkelly/LessWrong-43k
LessWrong
The National Dashboard and Human Progress In this post, I would like to consider a hypothetical national dashboard comprising essential metrics that measure the economy, human well-being, and long-term progress. It is hypothetical because, although all these metrics are measured today, not all of them receive equal attention in the government and the media. In This Post * Show Me the Numbers * Can We Measure Progress? * A National Dashboard * Upstream Drivers of Long-Term Progress * A Possible Set of 11 Metrics * More Options * Global Focus Show Me the Numbers As someone who has studied economics and business and spent over a decade working in the private sector, I've long been dismayed by how little focus there is on numbers in government and media reporting. To the extent politicians focus on numbers, they typically pick the ones that support their narrative and seem relevant to this week's news cycle.¹ I wonder if there's a better way. What if we reversed the order? What if we came together as a society and determined (1) what areas we would most like to improve, (2) the best way to measure the progress, and (3) the best way to report on it and future plans? What if we then used this framework to guide public conversation, media reporting, and perhaps even some White House press briefings? The idea is not new, of course. And many people have already tried to popularize it. Recently, Andrew Yang got many people excited when he proposed the idea of the American scorecard and — taking the idea to the extreme — suggested that the president use PowerPoint to report on progress. So what's the problem? The problem isn't as much that we aren't measuring things.² The problem is threefold: 1. There is little national and global public consensus regarding which indicators matter most. 2. There is limited accountability when it comes to improving these numbers and publicly reporting on them in a structured way and with a regular cadence. 3. Media focus can sometimes be misplaced, haphazard, pessimistic
56cc233a-7cf9-44b0-8178-465320fbed36
trentmkelly/LessWrong-43k
LessWrong
Reading the ethicists 2: Hunting for AI alignment papers Introduction I'm back, reading more papers in ethics (but now also in philosophy and wherever the citations lead me). Unlike last time when I surveyed one journal's approach to AI in general, this time I'm searching specifically for interesting papers that bear on aligning AGI, preferably written by people I've never heard of before. Are they going to be any good? *shrug* To this end, I skimmed the titles and sometimes abstracts of the last 5 years of papers in a pretty big chunk of the AI ethics space, namely: * Ethics and Information Technology * Minds and Machines * AI and Ethics * Philosophy & Technology * Science and Engineering Ethics * AI & Society * IEEE Transactions on Philosophy and Society * And a few bonus miscellanea From the set of all papers that even had a remote chance at being relevant (~10 per journal per year), I read more deeply and am relaying to you in this post all the ones that were somewhat on-topic and nontrivial (~0.5 per journal per year). By "nontrivial" I mean that I didn't include papers that just say "the alignment problem exists" - I certainly do not mean that I set a high bar for quality. Then I did some additional searching into what else those authors had published, who they worked with, etc. What were all the other papers about, the ones that didn't match my criteria? All sorts of things! Whether robots are responsible for their actions, how important privacy is, how to encourage and learn from non-Western robotics paradigms, the ethics of playing MMORPGs, how to make your AI ignore protected characteristics, the impact of bow-hunting technology on middle stone age society, and so on and so forth. The bulk of this post is a big barely-ordered list of the papers. For each paper I'll give the title, authors, author affiliations, journal and date. Each paper will get a summary and maybe a recommendation. I'll also bring up interesting author affiliations and related papers. Whereas last post was more a "I did this so
c8e1a421-5f41-40b6-a650-73d224d04e45
trentmkelly/LessWrong-43k
LessWrong
Taking the parameters which seem to matter and rotating them until they don't A big bottleneck in interpretability is that neural networks are non-local. That is, given the layer setup, if we change a small part of the original activations, then a large part of the new activations is affected. This is an impediment to finding the circuit-structure of networks. It is difficult to figure out how something works when changing one thing affects everything. The project I'm currently working on aims to fix this issue, without affecting the training dynamics of networks or the function which the network is implementing[1]. The idea is to find a rotation matrix R and insert it with its inverse like below, then group together the rotation with the original activations, and the inverse with the weights and nonlinear function. We can then optimize the rotation matrix and its inverse so that local changes in the rotated activation matrix have local effects on the outputted activations. This locality is measured by the average sparsity of the Jacobian across all the training inputs. We do this because the Jacobian is a representation of how each of the inputs affects each of the outputs. Large entries represent large effects. Small entries represent small effects. So if many entries are zero, this means that fewer inputs have an effect on fewer outputs. I.e. local changes to the input cause local changes to the output.   This should find us a representation of the activations and interpretations of matrix multiplies that "make sense" in the context of the rest of the network.   Another way of thinking about this is that our goal is to find the basis our network is thinking in. Currently I'm getting this method to work on a simple, 3-layer, fully connected MNIST number-classifying network. If this seems to give insight into the mechanics of the network after application, the plan is to adapt it to a more complicated network such as a transformer or resnet. I only have preliminary results right now, but they are looking promising: This is the normalized
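Here is a minimal sketch of the core trick as I understand it from the description above (my reconstruction; the toy layer, the matrix-exponential parametrization of R, the single-sample Jacobian estimate, and the optimizer settings are all illustrative choices, not necessarily the author's): wrap a frozen layer in a learnable orthogonal rotation and its inverse, which leaves the computed function unchanged, then minimize the mean absolute value of the Jacobian of the rotated-in/rotated-out map.

```python
import torch

# Sketch of the rotation trick described above (my reconstruction; the
# specific choices are illustrative). A frozen trained layer f(x) = relu(W x)
# is rewritten as relu(W R^-1 (R x)), which computes the same function for
# any orthogonal R. We then tune R so that the Jacobian of the map
# (rotated input) -> (rotated output) is sparse on average.

torch.manual_seed(0)
d = 16
W = torch.randn(d, d)                      # frozen weights of a trained layer
A = torch.randn(d, d, requires_grad=True)  # parametrizes R = exp(A - A^T)

def rotation(A):
    skew = A - A.T        # skew-symmetric, so its matrix exponential is orthogonal
    return torch.linalg.matrix_exp(skew)

opt = torch.optim.Adam([A], lr=1e-2)
for step in range(200):
    R = rotation(A)

    def layer_in_new_basis(z):
        # undo the rotation (R^-1 = R^T for orthogonal R), apply the frozen
        # layer, then express the output in the rotated basis as well
        return torch.relu((z @ R) @ W.T) @ R.T

    z0 = torch.randn(d)   # one activation sample, drawn in the rotated basis
    J = torch.autograd.functional.jacobian(layer_in_new_basis, z0,
                                           create_graph=True)
    loss = J.abs().mean()                  # proxy for average Jacobian sparsity
    opt.zero_grad(); loss.backward(); opt.step()
```

One nice property of this setup: orthogonal rotations preserve the Jacobian's Frobenius norm, so minimizing the L1 penalty cannot simply shrink the layer; it can only concentrate the same total effect into fewer entries, i.e. make the map more local. In the real project the penalty would presumably be averaged over the training set and applied at every layer; the single random sample per step here is just a stochastic stand-in for that average.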
f1c8f777-fec4-4ad9-88d2-6aaf42952242
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Medical Image Registration: The obscure field where Deep Mesaoptimizers are already at the top of the benchmarks. (post + colab notebook) Background: Introduction to Image Registration ---------------------------------------------- Image registration, or finding the corresponding points in a pair of, eg, MRI scans, is one of the fundamental tasks in medical image computing- needed for aligning post/pre-op brain scans, tracking lung motion during inhalation, computing statistics across populations etc. The standard formulation of this task is, given two images Imoving.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 
0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src 
and $I_{fixed}$, find an invertible function $\varphi$ such that the point $\varphi^{-1}(x)$ in $I_{moving}$ *corresponds* to the point $x$ in $I_{fixed}$ (or equivalently, a point $y$ in $I_{moving}$ moves to $\varphi(y)$ in $I_{fixed}$). For two images of the same body part on the same person, such as the common task of aligning an MRI of a prostate to an ultrasound of that same organ, "corresponds" is extremely well defined: two points correspond if they contain the same physical cell or piece of cartilage. For two images of different people, the correct answer is more nebulous: certainly the center of Alex's pineal gland has a corresponding point in Bob's head, but what about points in the grey matter, which is uniquely folded in every person?

Instead of hunting correspondences directly, we could introduce a surrogate objective. The traditional approach is to parametrize $\varphi$ by $\theta$ and then introduce a loss function that scores various $\varphi_\theta$, such as

$$\mathbb{E}_{x \in \Omega}\left[\left(I_{moving}(\varphi_\theta^{-1}(x)) - I_{fixed}(x)\right)^2\right] + \lambda L_{reg},$$

or "the squared difference between the warped moving image and the fixed image, plus a penalty to encourage smoothness". The minimizing value of $\theta$ is then found via some kind of search. For details, Crum et al. (2004)[[1]](#fnwopq900yws) is a good introduction. For modern, powerful implementations, see ANTs or NiftyReg.

Neural Networks Arrive
----------------------

Finding the minimizing value of $\theta$ turns out to be quite expensive computationally: the leading package ANTs can take nearly an hour to align two high-resolution images on its highest setting (named SyNOnly, if you want to dig into it). An alternative has emerged in recent years: optimize a neural network $\Phi_\theta$ that takes in two images and directly spits out a function $\varphi^{-1}$. The initial approach was to register a large collection of images using the traditional methods to create a dataset $(I_{moving}^i, I_{fixed}^i, \varphi^i)$ and then train using a standard supervised learning objective along the lines of $(\Phi[I_{moving}^i, I_{fixed}^i] - \varphi^i)^2$. Quicksilver[[2]](#fn0e4v8nmarq3) is a canonical paper advocating this approach. However, an alternative soon emerged: just differentiate through the similarity and regularity penalties, then through the network:

$$L = \left(I_{moving} \circ \Phi_\theta[I_{fixed}, I_{moving}] - I_{fixed}\right)^2 + \lambda L_{reg}$$

This approach just requires images: there is no need to pre-compute ground-truth transforms to train to imitate.

Explicitly multistep Neural Registration
----------------------------------------

The above objective can be solved by a network without any special structure to encourage mesaoptimization. However, adding structure improves performance. To encourage mesaoptimization, we will define a simple world where the network can perform a series of actions: the network takes actions by warping $I_{moving}$, and at each time step it is fed the warped image from the previous time step.
This way, it can gradually move one image to align with the other, or align it in the first step and then make small adjustments, or take any other approach; it is only scored on the alignment after all timesteps have passed. (I think that this multiple-step process is mesaoptimization, but that analogy is the shakiest part of this post.) This is an approach being applied right now: I know it's the approach behind ltian in this leaderboard <https://www.cbica.upenn.edu/BraTSReg2022/lboardValidation.html> because I'm on that entry. It's also used in [AVSM](https://arxiv.org/abs/1903.08811), [ICON](https://arxiv.org/pdf/2105.04459.pdf), [LapIRN](https://github.com/cwmok/LapIRN) (roughly), etc.

But let's step back from huge, 100M-parameter models. Let's train a simple MLP that takes in an MNIST image and outputs a vector that moves each pixel:

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class FCNet(torch.nn.Module):
    def __init__(self, size=28):
        super().__init__()
        self.size = size
        self.dense1 = nn.Linear(size * size * 2, 50)
        self.dense2 = nn.Linear(50, 50)
        self.dense3 = nn.Linear(50, size * size * 2)
        # Zero-initialize the output layer so the untrained network
        # predicts zero displacement (the identity transform).
        torch.nn.init.zeros_(self.dense3.weight)
        torch.nn.init.zeros_(self.dense3.bias)

    def forward(self, x, y):
        # Concatenate the two images and flatten them into one vector.
        x = torch.reshape(torch.cat([x, y], 1), (-1, 2 * self.size * self.size))
        x = F.relu(self.dense1(x))
        x = F.relu(self.dense2(x))
        x = self.dense3(x)
        # One 2-component displacement per pixel.
        x = torch.reshape(x, (-1, 2, self.size, self.size))
        return x
```

and put it in the aforementioned world using the icon\_registration library:

```
inner_net = icon.FunctionFromVectorField(FCNet(size=28))

# Compose the same small network with itself 8 times, so that each
# step refines the warp produced by the steps before it.
n2 = inner_net
for _ in range(8):
    n2 = icon.TwoStepRegistration(
        n2,
        inner_net
    )

net = icon.losses.DiffusionRegularizedNet(n2, icon.ssd, lmbda=.05)
net.assign_identity_map(sample_batch.shape)
```

After training it on pairs of the 5s in MNIST,

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1675118794/mirroredImages/FgXjuS4R9sRxbzE5w/fudk7atdlaxnloozhv53.png)

it nicely minimizes the loss we asked it to minimize.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1675118794/mirroredImages/FgXjuS4R9sRxbzE5w/hqopm87qi8zaaazqmdwk.png)

Hunting for a Mesaoptimizer
---------------------------

So at this point, we have a network $\Phi$ that appears to perform optimization to find the $\varphi$ that minimizes $L$. Wearing my alignment research hat, I am curious about:

1. Its algorithm: Is it really performing an iterative optimization against a loss? Can we find that loss in its weights?
2. Its inner alignment: Is it trying to optimize the same thing that the outer Adam descent is optimizing, or something else that has a similar minimum on the training set?
3. Its generality: When we go out of distribution, what happens? Will it fail to optimize, optimize the right thing, or optimize something else?

I don't have answers to 1 or 2 yet, but I have a tentative answer to 3, and it's that once out of distribution, the network optimizes something else. Specifically, if after training on 5s we evaluate on 7s, the network does not merely warp the 7s to align with each other. Instead, sometimes it *warps the moving 7 to look like a 5*, then aligns that 5 to the fixed 7. It's hard to assign agency to an MLP with a hidden layer size of 50, but it sure looks to me like its inner alignment is to first bring the input data in-distribution, and only then optimize the least-squares loss by aligning parts of the 5 to parts of the 7.
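If you want to poke at the out-of-distribution behavior yourself, here is a minimal sketch of the kind of check involved. The code is my own illustration, not from the notebook: `sevens` is a hypothetical [N, 1, 28, 28] batch of MNIST 7s, and the `warped_image_A` attribute is an assumption about how the icon\_registration loss wrappers expose the warped image after a forward pass.

```
import matplotlib.pyplot as plt

# Hypothetical OOD probe: register one 7 onto another with a net that
# was trained only on 5s. `sevens` is assumed to be an [N, 1, 28, 28]
# batch of MNIST 7s, preprocessed the same way as `sample_batch`.
moving, fixed = sevens[:1], sevens[1:2]
net(moving, fixed)  # assumption: the forward pass stores the warped image
warped = net.warped_image_A.detach()

for img, title in [(moving, "moving 7"), (warped, "warped moving image"), (fixed, "fixed 7")]:
    plt.figure()
    plt.title(title)
    plt.imshow(img[0, 0], cmap="gray")
plt.show()
```

If the middle panel looks more like a 5 than a 7, you are seeing the behavior described above.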
![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1675118794/mirroredImages/FgXjuS4R9sRxbzE5w/f9xxihing6dh7dtfkpbm.png)

I'm still researching this and am new to alignment research; any tips on how to prove what the inner goal actually is? Here is a notebook to produce these figures: <https://colab.research.google.com/drive/1mHjOHf2EA9KzcX6UAAMSx71xoOftssCr?usp=sharing>

The network trains in about 50 seconds, so don't be afraid to play around with the hyperparameters, etc.

1. **[^](#fnrefwopq900yws)** Crum WR, Hartkens T, and Hill DLG. Non-rigid image registration: theory and practice. *Br J Radiol* 2004; **77**. https://www.birpublications.org/doi/10.1259/bjr/25329214
2. **[^](#fnref0e4v8nmarq3)** **Quicksilver: Fast Predictive Image Registration - a Deep Learning Approach.** <https://arxiv.org/abs/1703.10908>
1da9b38f-9792-4f28-bc4d-164e11032438
trentmkelly/LessWrong-43k
LessWrong
Where to Draw the Boundary?

The one comes to you and says:

> Long have I pondered the meaning of the word "Art", and at last I've found what seems to me a satisfactory definition: "Art is that which is designed for the purpose of creating a reaction in an audience."

Just because there's a word "art" doesn't mean that it has a meaning, floating out there in the void, which you can discover by finding the right definition. It feels that way, but it is not so. Wondering how to define a word means you're looking at the problem the wrong way—searching for the mysterious essence of what is, in fact, a communication signal.

Now, there is a real challenge which a rationalist may legitimately attack, but the challenge is not to find a satisfactory definition of a word.  The real challenge can be played as a single-player game, without speaking aloud.  The challenge is figuring out which things are similar to each other—which things are clustered together—and sometimes, which things have a common cause.

If you define "eluctromugnetism" to include lightning, include compasses, exclude light, and include Mesmer's "animal magnetism" (what we now call hypnosis), then you will have some trouble asking "How does electromugnetism work?"  You have lumped together things which do not belong together, and excluded others that would be needed to complete a set.  (This example is historically plausible; Mesmer came before Faraday.)

We could say that electromugnetism is a wrong word, a boundary in thingspace that loops around and swerves through the clusters, a cut that fails to carve reality along its natural joints.

Figuring where to cut reality in order to carve along the joints—this is the problem worthy of a rationalist.  It is what people should be trying to do, when they set out in search of the floating essence of a word.

And make no mistake: it is a scientific challenge to realize that you need a single word to describe breathing and fire.  So do not think to consult the dictionary editors, for
3411d156-3fdb-43f6-8f76-3f0bfc8c2edd
trentmkelly/LessWrong-43k
LessWrong
Luna Lovegood and the Chamber of Secrets, Part 6

Disclaimer: This is Kongo Landwalker's translation of lsusr's fiction Luna Lovegood and the Chamber of Secrets - Part 6 into Russian.

----------------------------------------

"It's in the Mirror of Atlantis," said Harry.

"We need gillyweed," said Luna. "A huge pile of gillyweed."

"The Mirror of Atlantis is at the end of the third-floor corridor, on the right-hand side," said Harry.

"What are you waiting for?" said Luna.

"Don't you want to ask how I know all this?" said Harry.

"Why would I?" said Luna.

"Because... because..." Harry searched for words to express the scale of the epistemological hole.

"You talk like a Muggle," said Luna, "and you think like a Muggle."

Harry wondered: was that a jab or a compliment? Luna was already skipping off toward the third floor.

----------------------------------------

Harry and Luna flew on a two-seat broomstick over the dieffenbachia and the other overgrown obstacles to the Mirror.

"You fly really well," said Luna. "I bet you're on a Quidditch team. What position do you play? No, don't tell me. Let me guess. You're a Seeker."

"This Mirror can hide an object until someone with the right intentions shows up," Harry said irritably.

"I'm looking for the entrance to the Ravenclaw common room," Luna said to the Mirror. "I want to sleep in a bed."

"In the name of Gödel, what monstrously complicated riddle did the bronze eagle set you!?" asked Harry.

"'Where is my diadem?'" said Luna.

"You were supposed to answer 'lost' or 'hidden'," said Harry. "Nobody expected you to actually find the Lost Diadem of Ravenclaw."

"Oh," said Luna.

"Well, since we're already here, let's give it our best," said Harry, taking a mechanical stopwatch out of his pouch and setting it to five minutes.

"I want the diadem in order to save the world," said Harry.

"I aspire to save the lives of all sentient beings," said Harry.

"I promise to bring you back to life, Rowena Ravenclaw," said Harry.

"I want to take the stars apart," said Harry.

Luna paid him no attention
76f9a5a0-86bc-4769-8c28-9bdcb666ea65
trentmkelly/LessWrong-43k
LessWrong
Speaking of the efficiency of utopia I noted that it is probably reasonable for people to be wary of things introducing themselves as ‘efficiency’, since maximization of specific metrics has some tendency to go badly. On the other hand, ‘efficiency’ doesn’t mean anything about building explicit or quantitative machinery. It just means getting a lot of the desired output per input. So one might wonder why, if these explicit efforts would tend to make things worse for our actual goals, we would pursue efficiency in such ways, and continue to call that ‘efficiency’. For those who think quantitative pursuit of well-defined goals has been a bad strategy overall, shouldn’t ‘efficient daycare’ suggest a daycare where we have used our best intuitions for holistically improving the experience? I think one reason why not is that you don’t tend to have much evidence that a thing is efficient unless you are in the business of making quantitative measurements. You are more likely to say ‘this daycare is unusually efficient’ if you have been measuring ‘costs of providing legally adequate childcare’ across different centers, than if you have been observing children and workshopping policies for their thriving. If you have been doing the latter, you will probably just call it something else. It seems to me that we would be better off with more specific concepts, though, rather than conflating striving with an eye to a defined metric and doing the best we can with our materials. I suggest narrow efficiency and inclusive efficiency. An assembly line is narrowly efficient. Utopia is inclusively efficient.
b136c582-e994-49c6-ba0a-4d470d911d16
trentmkelly/LessWrong-43k
LessWrong
Virulence Management

Cross posted from my blog

I have spent some time reviewing the literature on an approach to dealing with human pathogens known as “virulence management”. Here, I hope to provide a jumping-off point to the field for others. I have included a list of relevant papers at the end.

Acquired Immunity

Our journey starts with the first of three papers by Stephen M. Hedrick on what he calls “Disease Ecology”. Of all of the papers I link here, his paper The Acquired Immune System: A Vantage from Beneath is the one I recommend the most. The final line of the abstract is perhaps the most striking:

> By selecting for ever-more-devious parasites, the acquired immune system is the cause of its own necessity.

What does he mean by this? Hedrick reminds us that acquired immunity is a relatively recent invention. While all animals are beset by parasites, invertebrates lack an adaptive immune system. How can they survive without an immune system that can adapt to new pathogens? Hedrick’s main point is: they don’t have to. Because pathogens rely on their host to survive, they face a strong evolutionary pressure to avoid killing their host. In animals without an adaptive immune system, the host and pathogen find a stable equilibrium. As Hedrick points out, invertebrates are doing fine despite having parasites.

In animals with an acquired immune system (read: humans), the immune system is constantly adapting to new pathogens, forcing these pathogens to adapt as well. Pathogens mutate quickly and have short generation times in order to adapt to changes in the host immune environment. Hosts recombine genes using sexual reproduction along with an amazingly sophisticated immune system in order to combat this. Hedrick argues that the massive number and diversity of pathogens, combined with their short generation times, makes the task of the adaptive immune system futile: we don’t have a way to permanently out-adapt such a versatile threat. But if the invertebrates are doing fine witho
ca9f2ef1-6bdc-499d-9e53-e02384a42857
trentmkelly/LessWrong-43k
LessWrong
Startup Roundup #1: Happy Demo Day

There were a bunch of discussions recently related to issues surrounding Y Combinator, related as usual to their annual demo day. It seemed worth splitting them off into a post.

BIDDERS AT AUCTION MOSTLY THINK PRICES ARE TOO HIGH

YC is in session, so all the usual talk is back.

> Paul Graham: As happens every single YC batch, investors complain that the valuations are too high, and the startups raise anyway.
>
> Kush: Is there any specific reason why you think this happens?
>
> Paul Graham: Some investors just cannot grasp the implications of the fact that all the returns are concentrated in the big wins. The top tier investors get it though; you’ll rarely lose one of them over price.
>
> Amal Dorai: Isn’t the dynamic range of YC valuations narrower than the companies themselves? We see pretty much a 2x range that they’re raising in, but >10x variance in current traction. The high flyers are fine but those riskier companies still finding their footing are behind the 8 ball.
>
> Paul Graham: The variation in valuations is much smaller than the variation in quality. That’s what determines risk, not their current traction. If current traction were a perfect or even good predictor, investing would be trivially easy.
>
> David Tran: “You’ve found the market price when buyers complain but still pay”
>
> Dan Grey: What I’ve seen investors saying is that they would rather wait a year and then invest in the next round – which happens at a similar price but with much more proof. If the initial round is not offering the right level of upside for the risk, why should investors jump in?
>
> Paul Graham: That was an amusingly novel variant. I only saw one investor say that. The reason it’s a dumb strategy is that choosing companies whose valuation hasn’t increased a year later selects for bad ones.

This model tells me that the lower valuation, less exciting companies in YC are likely overvalued, and the higher valuation, more exciting companies in YC are highly undervalued
e744a2f6-385d-4b68-b9cc-1818da23855e
trentmkelly/LessWrong-43k
LessWrong
MIT FutureTech are hiring a Product and Data Visualization Designer

Why apply or share?

* Our work to understand progress in computing and artificial intelligence, and its implications, is highly relevant to understanding and mitigating the risks of AI. This write-up provides a good overview of some of our interests and areas of focus.
* We are one of Open Philanthropy's 10 largest Global Catastrophic Risk and AI Governance grantees.

Position Overview

FutureTech is in search of a Product and Data Visualization Designer to own the design and visual communication of our research. The person in this role will be the creative force behind how our research is experienced and understood globally. They will help us articulate complex data and insights through UI and UX that are both intuitive and engaging, and maintain a cohesive visual identity across all of FutureTech's communication channels, including our websites, interactive dashboards, and data visualizations. This work will be vital for enhancing our dialogue with the research community and stakeholders. The successful candidate will report to Neil Thompson, and work closely with our operations, software engineering and research teams.

Key Responsibilities:

* Maintaining and updating the design of our website.
* Designing graphs and visualizations for our interactive reports, papers and presentations, to ensure we are communicating our research effectively.
* Designing and reviewing interactive dashboards to summarize our research.
* Designing and reviewing playgrounds for interactive models of AI and automation.
* Designing various other external-facing documents and templates, such as our newsletter or reports for stakeholders.
* Conducting user research and interviews with our stakeholders on design elements.
* Depending on your interests and skills, you could get involved in other aspects of our communications, such as homogenizing the structure and style of our papers, among others.

What We Are Looking For:

Requirements:

* At least 3 years of UI and UX experience
804dca17-d8ea-44f1-be16-6664cab3eaa2
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
How does iterated amplification exceed human abilities? When I first started learning about IDA, I thought
that agents trained using IDA would be human-level after the first stage, i.e. that Distill(H) would be human-level. As I've [written about before](https://www.lesswrong.com/posts/FdfzFcRvqLf4k5eoQ/list-of-resolved-confusions-about-ida?commentId=bdcWwPhaZxiy7rxLb), Paul later clarified this, so my new understanding is that after the first stage, the distilled agent will be super-human in some respects and infra-human in others, but wouldn't be "basically human" in any sense. But IDA is aiming to eventually be super-human in almost every way (because it's aiming to be competitive with unaligned AGI), so that raises some new questions:

1. If IDA isn't going to be human-level after the first stage, then at what stage does IDA become at-least-human-level in almost every way?
2. What exactly is the limitation that prevents the first stage of IDA from being human-level in almost every way?
3. When IDA eventually *does* become at-least-human-level in almost every way, how is the limitation from (2) avoided?

That brings me to [Evans et al.](https://owainevans.github.io/pdfs/evans_ida_projects.pdf), which contains a description of IDA in section 0. The way IDA is set up in this paper leads me to believe that the answer to (2) above is that the human overseer cannot provide a sufficient number of demonstrations for the most difficult tasks. For example, maybe the human can provide enough demonstrations for the agent to learn to answer very simple questions (tasks in $T_0$ in the paper) but it's too time-consuming for the human to answer enough complicated questions (say, in $T_{100}$). My understanding is that IDA gets around this by having an amplified system that is itself automated (i.e. does not involve humans in a major way, so cannot be bottlenecked on the slowness of humans); this allows the amplified system to provide a sufficient number of demonstrations for the distillation step to work.

So in the above view, the answer to (2) is that the limitation is the number of demonstrations the human can provide, and the answer to (3) is that the human can seed the IDA process with sufficient demonstrations of easy tasks, after which the (automated) amplified system can provide sufficient demonstrations of the harder tasks. The answer to (1) is kind of vague: it's just the smallest $n$ for which $\bigcup_{i=0}^{n} T_i$ contains almost all tasks a human can do.

But the above view seems to conflict with what's in the [IDA post](https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616) and the [IDA paper](https://arxiv.org/abs/1810.08575). In both of those, the amplified system is described as a human doing the decompositions (so it will be slow, or else one would need to argue that the slowness of humans decomposing tasks doesn't meaningfully restrict the number of demonstrations). Also, the main benefit of amplification is described not as the ability to provide more demonstrations, but rather to provide demonstrations for more difficult tasks. Under this alternative view, the answers to questions (1), (2), (3) aren't clear to me.

*Thanks to Vipul Naik for reading through this question and giving feedback.*
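To make the loop under discussion concrete, here is a self-contained toy sketch of one way the amplify-and-distill structure can exceed the unaided overseer. Everything in it is my own illustration rather than anything from the post or paper: the task domain (summing lists), the function names, and especially `distill`, which just reuses the teacher instead of training a fast model on its demonstrations.

```
def human(task):
    # The unaided overseer can only do easy tasks (the analogue of T0).
    assert len(task) <= 2, "too hard for the unaided human"
    return sum(task)

def amplify(overseer, agent):
    # Amplification: decompose a hard task, delegate the pieces to the
    # fast agent, and recombine the answers with easy overseer calls.
    def amplified(task):
        if len(task) <= 2:
            return overseer(task)
        mid = len(task) // 2
        return overseer([agent(task[:mid]), agent(task[mid:])])
    return amplified

def distill(teacher):
    # Stand-in: real distillation would train a fast model on the
    # teacher's demonstrations; here we just reuse the teacher.
    return teacher

agent = distill(human)                 # stage 1
for _ in range(5):                     # each stage reaches harder tasks
    agent = distill(amplify(human, agent))

print(agent(list(range(40))))          # 780, far beyond the unaided human
```

The point of the toy is just that each amplify/distill stage extends the set of tasks the system can handle (here, doubling the tractable list length per stage) even though the human only ever answers easy questions; whether real distillation can keep up with that schedule is exactly the demonstrations question raised above.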
55571a14-4e96-458d-8c2e-7e3e20d20435
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"In Disguised Queries, I talked about a classification task of "bleggs" and "rubes". The typical blegg is blue, egg-shaped, furred, flexible, opaque, glows in the dark, and contains vanadium. The typical rube is red, cube-shaped, smooth, hard, translucent, unglowing, and contains palladium. For the sake of simplicity, let us forget the characteristics of flexibility/hardness and opaqueness/translucency. This leaves five dimensions in thingspace: Color, shape, texture, luminance, and interior. Suppose I want to create an Artificial Neural Network (ANN) to predict unobserved blegg characteristics from observed blegg characteristics. And suppose I'm fairly naive about ANNs: I've read excited popular science books about how neural networks are distributed, emergent, and parallel just like the human brain!! but I can't derive the differential equations for gradient descent in a non-recurrent multilayer network with sigmoid units (which is actually a lot easier than it sounds). Then I might design a neural network that looks something like this: Network 1 is for classifying bleggs and rubes. But since "blegg" is an unfamiliar and synthetic concept, I've also included a similar Network 1b for distinguishing humans from Space Monsters, with input from Aristotle ("All men are mortal") and Plato's Academy ("A featherless biped with broad nails"). A neural network needs a learning rule. The obvious idea is that when two nodes are often active at the same time, we should strengthen the connection between them—this is one of the first rules ever proposed for training a neural network, known as Hebb's Rule. Thus, if you often saw things that were both blue and furred—thus simultaneously activating the "color" node in the + state and the "texture" node in the + state—the connection would strengthen between color and texture, so that + colors activated + textures, and vice versa. If you saw things that were blue and egg-shaped and vanadium-containing, that would strengthen positive mutual connections between color and shape and interior. Let's say you've already seen plenty of bleggs and rubes come off the conveyor belt. But now you see something that's furred, egg-shaped, and—gasp!—reddish purple (which we'll model as a "color" activation level of -2/3). You haven't yet tested the luminance, or the interior. What to predict, what to predict? What happens then is that the activation levels in Network 1 bounce around a bit. Positive activation flows luminance from shape, negative activation flows to interior from color, negative activation flows from interior to luminance... Of course all these messages are passed in parallel!! and asynchronously!! just like the human brain... Finally Network 1 settles into a stable state, which has high positive activation for "luminance" and "interior". The network may be said to "expect" (though it has not yet seen) that the object will glow in the dark, and that it contains vanadium. And lo, Network 1 exhibits this behavior even though there's no explicit node that says whether the object is a blegg or not. The judgment is implicit in the whole network!! Bleggness is an attractor!! which arises as the result of emergent behavior!! from the distributed!! learning rule. Now in real life, this kind of network design—however faddish it may sound—runs into all sorts of problems. Recurrent networks don't always settle right away: They can oscillate, or exhibit chaotic behavior, or just take a very long time to settle down. 
This is a Bad Thing when you see something big and yellow and striped, and you have to wait five minutes for your distributed neural network to settle into the "tiger" attractor. Asynchronous and parallel it may be, but it's not real-time.

And there are other problems, like double-counting the evidence when messages bounce back and forth: If you suspect that an object glows in the dark, your suspicion will activate belief that the object contains vanadium, which in turn will activate belief that the object glows in the dark. Plus if you try to scale up the Network 1 design, it requires O(N²) connections, where N is the total number of observables.

So what might be a more realistic neural network design?

In this network, a wave of activation converges on the central node from any clamped (observed) nodes, and then surges back out again to any unclamped (unobserved) nodes. Which means we can compute the answer in one step, rather than waiting for the network to settle—an important requirement in biology when the neurons only run at 20Hz. And the network architecture scales as O(N), rather than O(N²).

Admittedly, there are some things you can notice more easily with the first network architecture than the second. Network 1 has a direct connection between every two nodes. So if red objects never glow in the dark, but red furred objects usually have the other blegg characteristics like egg-shape and vanadium, Network 1 can easily represent this: it just takes a very strong direct negative connection from color to luminance, but more powerful positive connections from texture to all other nodes except luminance. Nor is this a "special exception" to the general rule that bleggs glow—remember, in Network 1, there is no unit that represents blegg-ness; blegg-ness emerges as an attractor in the distributed network.

So yes, those N² connections were buying us something. But not very much. Network 1 is not more useful on most real-world problems, where you rarely find an animal stuck halfway between being a cat and a dog.

(There are also facts that you can't easily represent in Network 1 or Network 2. Let's say sea-blue color and spheroid shape, when found together, always indicate the presence of palladium; but when found individually, without the other, they are each very strong evidence for vanadium. This is hard to represent, in either architecture, without extra nodes. Both Network 1 and Network 2 embody implicit assumptions about what kind of environmental structure is likely to exist; the ability to read this off is what separates the adults from the babes, in machine learning.)

Make no mistake: Neither Network 1, nor Network 2, are biologically realistic. But it still seems like a fair guess that however the brain really works, it is in some sense closer to Network 2 than Network 1. Fast, cheap, scalable, works well to distinguish dogs and cats: natural selection goes for that sort of thing like water running down a fitness landscape.

It seems like an ordinary enough task to classify objects as either bleggs or rubes, tossing them into the appropriate bin. But would you notice if sea-blue objects never glowed in the dark? Maybe, if someone presented you with twenty objects that were alike only in being sea-blue, and then switched off the light, and none of the objects glowed. If you got hit over the head with it, in other words.
Perhaps by presenting you with all these sea-blue objects in a group, your brain forms a new subcategory, and can detect the "doesn't glow" characteristic within that subcategory. But you probably wouldn't notice if the sea-blue objects were scattered among a hundred other bleggs and rubes. It wouldn't be easy or intuitive to notice, the way that distinguishing cats and dogs is easy and intuitive.

Or: "Socrates is human, all humans are mortal, therefore Socrates is mortal." How did Aristotle know that Socrates was human? Well, Socrates had no feathers, and broad nails, and walked upright, and spoke Greek, and, well, was generally shaped like a human and acted like one. So the brain decides, once and for all, that Socrates is human; and from there, infers that Socrates is mortal like all other humans thus yet observed. It doesn't seem easy or intuitive to ask how much wearing clothes, as opposed to using language, is associated with mortality. Just, "things that wear clothes and use language are human" and "humans are mortal".

Are there biases associated with trying to classify things into categories once and for all? Of course there are. See e.g. Cultish Countercultishness.

To be continued..."

The above is a chunk of text from a post created by a user on the website LessWrong. What is this text talking about?

This text is discussing the concept of artificial neural networks (ANNs) and their design. It uses the metaphor of "bleggs" and "rubes" as categories for a classification task, with each having distinct characteristics. The author is illustrating how ANNs can be trained to classify objects based on these characteristics using different network designs.

The first design discussed is Network 1, which is a recurrent network where connections between nodes are strengthened when they are often active at the same time, following Hebb's Rule. The author points out that this type of network can have issues with settling into a stable state, double-counting evidence, and scaling up due to the O(N²) connections required.

The second design, Network 2, is presented as a more realistic and efficient alternative. It uses a wave of activation that converges on a central node from observed nodes and then surges back out to unobserved nodes. This design can compute answers in one step and scales as O(N), making it faster and more scalable.

The author also touches on the limitations of both networks and the biases associated with trying to classify things into categories. They suggest that the human brain likely operates more similarly to Network 2 due to its efficiency and scalability.

In essence, this text is exploring the complexities and considerations involved in designing artificial neural networks for classification tasks.

Can you elaborate on the limitations of both Network 1 and Network 2, and provide examples of real-world applications where these limitations might significantly impact the performance of the networks?

Network 1, as discussed in the text, is a recurrent network with potential limitations such as:

1. Settling Time: Recurrent networks can oscillate, exhibit chaotic behavior, or take a long time to settle into a stable state. This could be problematic in real-time applications, such as autonomous driving, where quick decision-making is crucial.
2. Double-counting Evidence: The network can double-count evidence when messages bounce back and forth between nodes. This could lead to overestimation of certain features, potentially resulting in misclassification.
In medical diagnosis systems, for example, this could lead to incorrect diagnoses. 3. Scalability: The design of Network 1 requires O(N2) connections, which means the number of connections grows quadratically with the number of nodes. This could make the network computationally expensive and difficult to scale for large datasets, such as in big data analysis or complex natural language processing tasks. Network 2, on the other hand, is presented as a more efficient alternative, but it also has its limitations: 1. Limited Inter-node Connections: Unlike Network 1, Network 2 doesn't have direct connections between every pair of nodes. This could limit its ability to capture complex relationships between features. For example, in image recognition tasks, this could limit the network's ability to recognize patterns that involve complex interactions between different parts of an image. 2. Implicit Assumptions: Both Network 1 and Network 2 make implicit assumptions about the likely structure of the environment. These assumptions might not hold true for all datasets or problems, potentially leading to suboptimal performance. For instance, in financial forecasting, where data patterns can change dramatically over time, these assumptions might not always hold true. 3. Difficulty Representing Certain Facts: If certain combinations of features always indicate a particular outcome, but the individual features do not, this can be hard to represent in Network 2 without extra nodes. For example, in spam detection, certain combinations of words might be indicative of spam, even if the individual words are not. Overall, while both networks have their strengths, their limitations need to be considered when choosing the right architecture for a specific application. Discuss the role of Hebb's Rule in the training of artificial neural networks, and how its implementation might affect the performance of Network 1 in real-world applications. Hebb's Rule, often summarized as "neurons that fire together wire together," is a learning rule used in the training of artificial neural networks. It states that if two neurons on either side of a synapse (connection) are activated simultaneously, then the strength of that synapse should be increased. Conversely, if they are activated separately, the strength should be decreased. This rule serves as a fundamental basis for synaptic plasticity, or the ability of synapses to change their strength over time. In the context of artificial neural networks, Hebb's Rule is used to adjust the weights of the connections between nodes. If two nodes are often activated at the same time, the connection between them is strengthened. This means that over time, the network learns to recognize patterns in the input data. In Network 1, Hebb's Rule is implemented to strengthen the connections between nodes that are often active simultaneously. This can help the network to learn and generalize from the patterns in the training data. However, the implementation of Hebb's Rule in Network 1 can also lead to some challenges in real-world applications: 1. Overfitting: If the network overly relies on Hebb's Rule, it might overfit to the training data, strengthening connections based on patterns that may not generalize well to unseen data. This can be problematic in applications like image recognition or financial forecasting, where the ability to generalize to new data is crucial. 2. 
Double-counting Evidence: As mentioned earlier, Network 1 can double-count evidence when messages bounce back and forth between nodes, leading to overestimation of certain features. This can lead to biased predictions in applications like medical diagnosis or sentiment analysis. 3. Slow Convergence: In large, complex datasets, the process of strengthening and weakening connections based on Hebb's Rule can be slow, leading to longer training times. This can be a limitation in applications where quick model training is required, such as real-time anomaly detection in network traffic. In summary, while Hebb's Rule plays a fundamental role in the training of artificial neural networks, its implementation in Network 1 can lead to challenges like overfitting, double-counting of evidence, and slow convergence in real-world applications.
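To make Hebb's Rule concrete, here is a minimal sketch of a Hebbian weight update in the spirit of Network 1. This is a toy illustration rather than code from the post: the function name, learning rate, and decay term are invented for the example, and the full N×N weight matrix makes the O(N²) connection cost visible.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 5
weights = np.zeros((n_units, n_units))  # O(N^2) connections, one per pair

def hebbian_step(weights, activations, lr=0.1, decay=0.01):
    # "Fire together, wire together": co-active pairs are strengthened
    # in proportion to the product of their activations.
    weights += lr * np.outer(activations, activations)
    np.fill_diagonal(weights, 0.0)   # no self-connections
    weights *= (1.0 - decay)         # mild decay keeps weights bounded
    return weights

# Present "blegg-like" patterns: units 0-3 co-active, unit 4 mostly off.
for _ in range(100):
    pattern = np.array([1, 1, 1, 1, 0]) + 0.1 * rng.standard_normal(n_units)
    weights = hebbian_step(weights, pattern)

print(weights.round(2))  # strong block among units 0-3, weak links to unit 4
```

Running this on repeated "blegg-like" patterns strengthens the block of connections among the co-active units while the connections to the inactive unit stay near zero, which is the pattern-learning behavior described above.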
ef08865b-b61a-424a-9079-0d2177afdd2c
trentmkelly/LessWrong-43k
LessWrong
Circular belief updating

This article is going to be in the form of a story, since I want to lay out all the premises in a clear way. There's a related question about religious belief.

Let's suppose that there's a country called Faerie. I have a book about this country which describes all people living there as rational individuals (in a traditional sense). Furthermore, it states that some people in Faerie believe that there may be some individuals there known as sorcerers. No one has ever seen one, but they may or may not interfere in people's lives in subtle ways. Sorcerers are believed to be such that there can't be more than one of them around and they can't act outside of Faerie. There are 4 common belief systems present in Faerie:

1. Some people believe there's a sorcerer called Bright who (among other things) likes people to believe in him and may be manipulating people or events to do so. He is not believed to be universally successful.
2. Or, there may be a sorcerer named Invisible, who interferes with people only in such ways as to provide no information about whether he exists or not.
3. Or, there may be an (obviously evil) sorcerer named Dark, who would prefer that people don't believe he exists, and interferes with events or people for this purpose, likewise not universally successfully.
4. Or, there may either be no sorcerers at all, or perhaps some other sorcerers that no one knows about, or perhaps some other state of things holds, such as that there are multiple sorcerers, or these sorcerers don't obey the above rules. However, everyone who lives in Faerie and is in this category simply believes there's no such thing as a sorcerer.

This is completely exhaustive, because everyone believes there can be at most one sorcerer. Of course, some individuals within each group have different ideas about what their sorcerer is like, but within each group they all absolutely agree with their dogma as stated above. Since I don't believe in sorcery, a priori I assign very high
bbec7b0f-6f4a-40a1-be53-8506f66bb471
trentmkelly/LessWrong-43k
LessWrong
Meetup : West LA: The Substitution Principle

Discussion article for the meetup : West LA: The Substitution Principle

WHEN: 04 March 2015 07:00:00PM (-0800)

WHERE: 11066 Santa Monica Blvd, Los Angeles, CA 90025

How to Find Us: Go into this Del Taco. We will be in the back room if possible. Parking is free in the lot out front or on the street nearby.

Discussion: When the brain is faced with a hard question, it automatically replaces it with an easier one to estimate the answer. This is good so far as it goes, but it can do this without being aware of the substitution, causing overconfidence or other problems. This can even combine with consistency effects to alter your opinions long-term. Noticing this can help us revise quickly-formed impressions. But wait -- is that really true? Did I only believe it just now because I substituted some easier question, like whether it sounded cool or whether the person talking about it is high-status? Probably. We'd better try to sort it out more carefully at the meetup.

Recommended Reading:

* The Substitution Principle

No prior exposure to Less Wrong is required; this will be generally accessible.

Discussion article for the meetup : West LA: The Substitution Principle
ec580e88-887e-4ffd-8ab4-774704e780aa
trentmkelly/LessWrong-43k
LessWrong
Wrong however unnamed Related to: 37 ways that words can be wrong. Consider the following sentence (from Internet; but I have heard it before): 'Lichens consist of fungi and algae, but they are more than the sum of their constituents.' It is supposed to say something like 'the fungus and the alga don't just live very close to each other, they influence each other's habitat(s) and can be considered, for most purposes, to form a physiologically integrated body'. It never actually says that, although people gradually come to this conclusion if they look at illustrations or read long enough. And I don't think the phrase is sufficiently catchy to explain its popularity; rather, that it is a tenuous introduction to the much-later-explained term 'synergism'. A noble (in principle) preparation of the mind. Yet how is a lichen 'more than the sum of fungus and alga'? I suppose one could speak of a 'sum' if the lichen was pulverized and consumed as medicine, and then its effect on the patient was compared to that of the mixture of similarly treated fungus (grown how exactly?) and alga (same here). It doesn't exist in the wild. It shouldn't exist in the literature. A child is not bothered by its lack of sense. When she encounters 'synergism', she'll remember having been told of something like it, and be reassured by the unity of science. It flies under the radar of 'established biological myths', because it doesn't have enough meaning to be one. I picked a dictionary of zoological terms and tried to recall how the notions were put before me for the first time, but of course I failed. (I guess it should be high-level things, like 'variability', or colloquial expressions - 'bold as a lion', etc., that distort and get distorted the most.) They seem to 'have always been there'. Then, I looked at the definitions and tried to imagine them misapplied (intuitively, a simpler task). No luck. Yet someday, something other truly unknown to me will appear familiar and simple. We can weed out improper conce
84d4065b-a47c-4fd6-9f86-724bf470c0fd
trentmkelly/LessWrong-43k
LessWrong
AI Safety Bounties

Earlier this year, Vaniver recommended Bug Bounties for Advanced ML Systems. I spent a while at Rethink Priorities considering and expanding on this idea, suggesting potential program models, and assessing the benefits and risks of programs like this, which I've called 'AI Safety Bounties':

Short summary

AI safety bounties are programs where public participants or approved security researchers receive rewards for identifying issues within powerful ML systems (analogous to bug bounties in cybersecurity). Safety bounties could be valuable for legitimizing examples of AI risks, bringing more talent to stress-test systems, and identifying common attack vectors. I expect safety bounties to be worth trialing for organizations working on reducing catastrophic AI risks.

Traditional bug bounties seem fairly successful: they attract roughly one participant per $50 of prize money, and have become increasingly popular with software firms over time. The most analogous program for AI systems led to relatively few useful examples compared to other stress-testing methods, but one knowledgeable interviewee suggested that future programs could be significantly improved.

However, I am not confident that bounties will continue to be net-positive as AI capabilities advance. At some point, I think the accident risk and harmful knowledge proliferation from open-sourcing stress-testing may outweigh the benefits of bounties.

In my view, the most promising structure for such a program is a third party defining dangerous capability thresholds ("evals") and providing rewards for hunters who expose behaviors which cross these thresholds. I expect trialing such a program to cost up to $500k if well-resourced, and to take four months of operational and researcher time from safety-focused people. I also suggest two formats for lab-run bounties: open contests with subjective prize criteria decided on by a panel of judges, and private invitations for trusted bug hunters to test their internal
f9ed2e54-6bfd-4791-acb5-450cefe41655
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Helpless Individuals

Today's post, Helpless Individuals was originally published on 30 March 2009. A summary (taken from the LW wiki):

> When you consider that our grouping instincts are optimized for 50-person hunter-gatherer bands where everyone knows everyone else, it begins to seem miraculous that modern-day large institutions survive at all. And in fact, the vast majority of large modern-day institutions simply fail to exist in the first place. This is why funding of Science is largely through money thrown at Science rather than donations from individuals - research isn't a good emotional fit for the rare problems that individuals can manage to coordinate on. In fact very few things are, which is why e.g. 200 million adult Americans have such tremendous trouble supervising the 535 members of Congress. Modern humanity manages to put forth very little in the way of coordinated individual effort to serve our collective individual interests.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Rationality: Common Interest of Many Causes, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
7efddbd1-913e-49c7-be45-cb78fc3b9b99
trentmkelly/LessWrong-43k
LessWrong
Systems of Survival

In this 1992 dialogue, Jane Jacobs lays out two basic descriptive clusters (syndromes) of morality, each tied to a means of production, and describes them as 'systems of survival'. That is, life requires inputs; those inputs can come from either taking, which is the foundation of what she calls the Guardian moral syndrome, or from trading, which is the foundation of what she calls the Commercial moral syndrome. It's an attempt to understand work by looking at the aspirational attitudes that people have towards work, and to understand how those attitudes are grounded in the mechanics of that work. The dialogue explains the ideas and corollaries in some depth;[1] here I present just the stripped core of the ideas.

Jacobs identifies fifteen precepts for each syndrome (I have ordered the lists slightly differently than Jacobs, to highlight contrasts, but really they want to be a graph; also, I think it's easy to have your eyes skip over the lists, but deep understanding of this requires thinking about each precept individually as well as the connections between them, which the long dialogue is better at doing. I recommend getting the gist, reading the rest of the post, and then coming back to meditate on the lists).[2] The lists ignore universally esteemed behavior; Jacobs has her character Kate list them as "cooperation, courage, moderation, mercy, common sense, foresight, judgment, competence, perseverance, faith, energy, patience, wisdom".

| The Commercial Moral Syndrome | The Guardian Moral Syndrome |
|---|---|
| Shun force | Exert prowess |
| Come to voluntary agreements | Shun trading |
| Be honest | Be loyal |
| Collaborate easily with strangers and aliens | Be exclusive |
| Compete | Take vengeance |
| Respect contracts | Respect hierarchy |
| Use initiative and enterprise | Be obedient and disciplined |
| Be open to inventiveness and novelty | Adhere to tradition |
| Be efficient | Treasure honor |
| Promote comfort and convenience | Show fortitude |
| Dissent for the sake of the task | Deceive for the sake of the task |

Inves
3f7397e6-8615-48c7-a3fe-4e176d3604c3
trentmkelly/LessWrong-43k
LessWrong
Conservation of Expected Ethics isn't enough

An idea relevant for AI control; index here. Thanks to Jessica Taylor.

I've been playing with systems to ensure or incentivise conservation of expected ethics - the idea that if an agent estimates that utilities v and w are (for instance) equally likely, then its future estimate for the correctness of v and w must be the same. In other words, it can try and get more information, but can't bias the direction of the update.

Unfortunately, CEE isn't enough. Here are a few decisions the AI can take that respect CEE. Imagine that the conditions of update relied on, for instance, humans answering questions:

1. Don't ask.
2. Ask casually.
3. Ask emphatically.
4. Build a robot that randomly rewires humans to answer one way or the other.
5. Build a robot that observes humans, figures out which way they're going to answer, then rewires them to answer the opposite way.

All of these conserve CEE, but, obviously, the last two options are not ideal...
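To see how option 4 can conserve CEE just as well as honest asking, here is a toy calculation with made-up numbers: a prior of 0.5 in utility v, an honest question that updates to 0.9 or 0.1 with equal probability, and a coin-flip rewiring robot that forces certainty either way. Both leave the expected posterior exactly at the prior.

```python
# Toy check: both "ask honestly" and "randomly rewire" conserve the
# expected credence in utility v. Each entry is (probability of that
# outcome, resulting posterior in v). All numbers are invented.
honest_ask   = [(0.5, 0.9), (0.5, 0.1)]
rewire_robot = [(0.5, 1.0), (0.5, 0.0)]  # coin-flip rewiring forces certainty

for name, outcomes in [("honest ask", honest_ask), ("rewiring robot", rewire_robot)]:
    expected_posterior = sum(p * post for p, post in outcomes)
    print(name, expected_posterior)  # both print 0.5, so CEE holds for both
```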
e63d9a42-6d3c-432e-836d-6c25ef6deb7c
trentmkelly/LessWrong-43k
LessWrong
Philosophical self-ratification

"Ratification" is defined as "the act or process of ratifying something (such as a treaty or amendment) : formal confirmation or sanction". Self-ratification, then, is assigning validity to one's self. (My use of the term "self-ratification" follows philosophical usage in analysis of causal decision theory)

At first this seems like a trivial condition. It is, indeed, easy to write silly sentences such as "This sentence is true and also the sky is green", which are self-ratifying. However, self-ratification combined with other ontological and epistemic coherence conditions is a much less trivial condition, which I believe to be quite important for philosophical theory-development and criticism. I will walk through some examples.

Causal decision theory

Formal studies of causal decision theory run into a problem with self-ratification. Suppose some agent A is deciding between two actions, L and R. Suppose the agent may randomize their action, and that their payoff equals their believed probability that they take the action other than the one they actually take. (For example, if the agent takes action L with 40% probability and actually takes action R, the agent's payoff is 0.4)

If the agent believes they will take action L with 30% probability, then, if they are a causal decision theorist, they will take action L with 100% probability, because that leads to 0.7 payoff instead of 0.3 payoff. But, if they do so, this invalidates their original belief that they will take action L with 30% probability. Thus, the agent's belief that they will take action L with 30% probability is not self-ratifying: the fact of the agent having this belief leads to the conclusion that they take action L with 100% probability, not 30%, which contradicts the original belief. The only self-ratifying belief is that the agent will take each action with 50% probability; this way, both actions yield equal expected utility, and so a policy 50/50 randomization is compatible with causal decisio
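The 50/50 fixed point in the example above is easy to check numerically. Below is a minimal sketch (my own construction, not from the post): for each candidate belief P(L) = p, a causal decision theorist compares the payoffs of the two pure actions and moves to whichever is better, so only p = 0.5 maps back to itself.

```python
# Payoff of an action = believed probability of the *other* action.
def best_response(p):
    payoff_L, payoff_R = 1 - p, p
    if payoff_L > payoff_R:
        return 1.0          # take L with certainty
    if payoff_R > payoff_L:
        return 0.0          # take R with certainty
    return p                # indifferent: any mixture, including p, is fine

for p in [0.3, 0.5, 0.7]:
    br = best_response(p)
    print(f"belief P(L)={p}: best response P(L)={br}"
          + ("  <- self-ratifying" if br == p else ""))
```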
6b2ff1ed-e5d4-4c65-927c-023aef867c49
trentmkelly/LessWrong-43k
LessWrong
Parasitic Language Games: maintaining ambiguity to hide conflict while burning the commons

> “They are playing a game. They are playing at not playing a game. If I show them I see they are, I shall break the rules and they will punish me. I must play their game, of not seeing I see the game”
> - R. D. Laing

"It's not lying if everyone knows it's lying." I see this sentiment in a lot of places. It pops up in corporate managerial contexts. It's been used as a legal defense and worked. It's a claim that communication that looks adversarial isn't, it's just high-context communication between people "in the know", there's no deception happening, no conflict, you just don't get how we do things here.

I don't buy it. My claim in a nutshell: In situations where people insist "it's not lying because everyone knows it's lying" the people in the know aren't deceiving each other, but the reason this game is being played is to fool people not in the know, and insisting that it's just "high context communication" is part of an effort to obscure the fact that a conflict is going on.

If that makes perfect sense to you, dope, you already get my main point. The rest of this post is adding nuance, actually arguing the case, and providing more language for talking about these sorts of dynamics.

Case Study: "Are Founders Allowed to Lie?"

This essay by Alex Danco talks about how "it's not lying because everybody knows it's lying" works in the Silicon Valley startup scene. It's short enough that it's worth reading now so you can decide for yourself if I'm misrepresenting him. If you don't feel like reading it I still quote enough of it for my post to make sense. Some snippets.

It's really hard to start a business without lying:

> If you are only allowed to tell the literal, complete truth, and you’re compelled to tell that truth at all times, it is very difficult to create something out of nothing. You probably don’t call it “lying”, but founders have to will an unlikely future into existence. To build confidence in everyone around you – investors, customers, employee
93b4a924-47ca-43d1-bec4-abb021a06c02
trentmkelly/LessWrong-43k
LessWrong
Meetup : Meetup #7 - Becoming Less Wrong

Discussion article for the meetup : Meetup #7 - Becoming Less Wrong

WHEN: 27 November 2016 03:13:37PM (-0500)

WHERE: Meester Treublaan 18, 1097 DP Amsterdam, Netherlands

We actually didn't get around to our LessWrong discussion last time, so let's try again on Sunday the 27th! To recap: we want to try discussing some Less Wrong blog posts. Two of the three below are selected from the 'How to actually change your mind' sequence. Changing your mind in view of new evidence is the core skill of epistemic rationality. If there's one thing cognitive research has taught us, it's that we're rightfully calling it a skill: it's hard, it has to be practiced, and it's important. The first one tackles the misconception that rationality and emotions are opposites. Toon will also elaborate on this briefly during the meeting.

On rationality and emotions: http://lesswrong.com/lw/hp/feeling_rational/

On completing patterns: http://lesswrong.com/lw/k5/cached_thoughts/

On rationalisation: http://lesswrong.com/lw/ju/rationalization/

In addition, everyone is encouraged to read around if they feel like it! See you on Sunday!

Discussion article for the meetup : Meetup #7 - Becoming Less Wrong
ef9cfcac-5d3f-4126-8f5c-12b3bf9ae252
trentmkelly/LessWrong-43k
LessWrong
UML XII: Dimensionality Reduction

(This is the twelfth post in a sequence on Machine Learning based on this book. Click here for part I.)

This post will be more on the mathy side (all of it linear algebra). It contains orthonormal matrices, eigenvalues and eigenvectors, and singular value decomposition.

I

Almost everything I've written about in this series thus far has been about what is called supervised learning, which just means "learning based on a fixed amount of training data." This even includes most of the theoretical work from chapters I-III, such as the PAC learning model. On the other side, there is also unsupervised learning, which is learning without any training data.

Dimensionality reduction is about taking data that is represented by vectors in some high-dimensional space and converting them to vectors in a lower-dimensional space, in a way that preserves meaningful properties of the data. One use case is to overcome computational hurdles in supervised learning – many algorithms have runtimes that increase exponentially with the dimension. In that case, we would essentially be doing the opposite of what we've covered in the second half of post VIII, which was about increasing the dimensionality of a data set to make it more expressive. However, dimensionality reduction can also be applied to unsupervised learning – or even to tasks that are outside of machine learning altogether. I once took a class on Information Visualization, and they covered the technique we're going to look at in this post (although "covered" did not include anyone understanding what happens on a mathematical level). It is very difficult to visualize high-dimensional data; a reasonable approach to deal with this problem is to map it into 2d space, or perhaps 3d space, and then see whether there are any interesting patterns.

The aforementioned technique is called principal component analysis. I like it for two reasons: one, linear algebra plays a crucial role in actually finding the solution (rather than jus
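The entry cuts off here, but a minimal sketch of the principal component analysis computation it introduces may still be useful: center the data, take the top-k right singular vectors from an SVD, and project. This is my own illustration of the standard recipe, not the book's notation.

```python
import numpy as np

def pca(X, k):
    """Project the rows of X onto their top-k principal components."""
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # coordinates in k dimensions

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))               # 200 points in 50 dimensions
print(pca(X, 2).shape)                           # (200, 2), e.g. for a scatter plot
```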
bce83e4d-eb9f-4304-aa66-ad1b2032eee6
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Realism about rationality

*Epistemic status: trying to vaguely gesture at vague intuitions. A similar idea was [explored here](https://intelligence.org/files/HowIntelligible.pdf) under the heading "the intelligibility of intelligence", although I hadn't seen it before writing this post. As of 2020, I consider [this follow-up comment](https://www.lesswrong.com/posts/suxvE2ddnYMPJN9HD/realism-about-rationality?commentId=Yghaazk49tfywHBGr) to be a better summary of the thing I was trying to convey with this post than the post itself. The core disagreement is about how much we expect the limiting case of arbitrarily high intelligence to tell us about the AGIs whose behaviour we're worried about.*

There’s a mindset which is common in the rationalist community, which I call “realism about rationality” (the name being intended as a parallel to moral realism). I feel like my skepticism about agent foundations research is closely tied to my skepticism about this mindset, and so in this essay I try to articulate what it is.

Humans ascribe properties to entities in the world in order to describe and predict them. Here are three such properties: "momentum", "evolutionary fitness", and "intelligence". These are all pretty useful properties for high-level reasoning in the fields of physics, biology and AI, respectively. There's a key difference between the first two, though. Momentum is very amenable to formalisation: we can describe it using precise equations, and even prove things about it. Evolutionary fitness is the opposite: although nothing in biology makes sense without it, no biologist can take an organism and write down a simple equation to define its fitness in terms of more basic traits. This isn't just because biologists haven't figured out that equation yet. Rather, we have excellent reasons to think that fitness is an incredibly complicated "function" which basically requires you to describe that organism's entire phenotype, genotype and environment.

In a nutshell, then, realism about rationality is a mindset in which reasoning and intelligence are more like momentum than like fitness. It's a mindset which makes the following ideas seem natural:

* The idea that there is a simple yet powerful theoretical framework which describes human intelligence and/or intelligence in general. (I don't count brute force approaches like AIXI for the same reason I don't consider physics a simple yet powerful description of biology).
* The idea that there is an “ideal” decision theory.
* The idea that AGI will very likely be an “agent”.
* The idea that Turing machines and Kolmogorov complexity are foundational for epistemology.
* The idea that, given certain evidence for a proposition, there's an "objective" level of subjective credence which you should assign to it, even under computational constraints.
* The idea that Aumann's agreement theorem is relevant to humans.
* The idea that morality is quite like mathematics, in that there are certain types of moral reasoning that are just correct.
* The idea that defining [coherent extrapolated volition](https://wiki.lesswrong.com/wiki/Coherent_Extrapolated_Volition) in terms of an idealised process of reflection roughly makes sense, and that it converges in a way which doesn’t depend very much on morally arbitrary factors.
* The idea that having contradictory preferences or beliefs is really bad, even when there’s no clear way that they’ll lead to bad consequences (and you’re very good at avoiding dutch books and money pumps and so on).
To be clear, I am neither claiming that realism about rationality makes people dogmatic about such ideas, nor claiming that they're all false. In fact, from a historical point of view I’m quite optimistic about using maths to describe things in general. But starting from that historical baseline, I’m inclined to adjust downwards on questions related to formalising intelligent thought, whereas rationality realism would endorse adjusting upwards. This essay is primarily intended to explain my position, not justify it, but one important consideration for me is that intelligence as implemented in humans and animals is very messy, and so are our concepts and inferences, and so is the closest replica we have so far (intelligence in neural networks). It's true that "messy" human intelligence is able to generalise to a wide variety of domains it hadn't evolved to deal with, which supports rationality realism, but analogously an animal can be evolutionarily fit in novel environments without implying that fitness is easily formalisable. Another way of pointing at rationality realism: suppose we model humans as internally-consistent agents with beliefs and goals. This model is obviously flawed, but also predictively powerful on the level of our everyday lives. When we use this model to extrapolate much further (e.g. imagining a much smarter agent with the same beliefs and goals), or base morality on this model (e.g. preference utilitarianism, CEV), is that more like using Newtonian physics to approximate relativity (works well, breaks down in edge cases) or more like cavemen using their physics intuitions to reason about space (a fundamentally flawed approach)? Another gesture towards the thing: a popular metaphor for Kahneman and Tversky's dual process theory is a rider trying to control an elephant. Implicit in this metaphor is the localisation of personal identity primarily in the system 2 rider. Imagine reversing that, so that the experience and behaviour you identify with are primarily driven by your system 1, with a system 2 that is mostly a [Hansonian rationalisation engine](http://elephantinthebrain.com/) on top (one which occasionally also does useful maths). Does this shift your intuitions about the ideas above, e.g. by making your CEV feel less well-defined? I claim that the latter perspective is just as sensible as the former, and perhaps even more so - see, for example, [Paul Christiano's model of the mind](https://sideways-view.com/2017/02/19/the-monkey-and-the-machine-a-dual-process-theory/), which leads him to conclude that "imagining conscious deliberation as fundamental, rather than a product and input to reflexes that actually drive behavior, seems likely to cause confusion." These ideas have been stewing in my mind for a while, but the immediate trigger for this post was a conversation about morality which went along these lines: > R (me): Evolution gave us a jumble of intuitions, which might contradict when we extrapolate them. So it’s fine to accept that our moral preferences may contain some contradictions. > O (a friend): You can’t just accept a contradiction! It’s like saying “I have an intuition that 51 is prime, so I’ll just accept that as an axiom.” > R: Morality isn’t like maths. It’s more like having tastes in food, and then having preferences that the tastes have certain consistency properties - but if your tastes are strong enough, you might just ignore some of those preferences. > O: For me, my meta-level preferences about the ways to reason about ethics (e.g. 
that you shouldn’t allow contradictions) are so much stronger than my object-level preferences that this wouldn’t happen. Maybe you can ignore the fact that your preferences contain a contradiction, but if we scaled you up to be much more intelligent, running on a brain orders of magnitude larger, having such a contradiction would break your thought processes. > R: Actually, I think a much smarter agent could still be weirdly modular like humans are, and work in such a way that describing it as having “beliefs” is still a very lossy approximation. And it’s plausible that there’s no canonical way to “scale me up”. I had a lot of difficulty in figuring out what I actually meant during that conversation, but I think a quick way to summarise the disagreement is that O is a rationality realist, and I’m not. This is not a problem, per se: I'm happy that some people are already working on AI safety from this mindset, and I can imagine becoming convinced that rationality realism is a more correct mindset than my own. But I think it's a distinction worth keeping in mind, because assumptions baked into underlying worldviews are often difficult to notice, and also because the rationality community has selection effects favouring this particular worldview even though it doesn't necessarily follow from the community's founding thesis (that humans can and should be more rational).
5b4c03d5-0ef8-4c21-bf6b-e54c44872732
trentmkelly/LessWrong-43k
LessWrong
Why giving workers stocks isn’t enough — and what co-ops get right

In my last post, I argued that worker co-ops can help restore declining social trust. But a common objection I keep hearing goes something like this:

> Worker co-ops seem basically equivalent to a firm that gives its employees stock—but then permanently blocks them from selling it. Isn't that harmful? The ability to sell your shares is valuable. You might want to diversify your investments, liquidate shares to make a big purchase (like buying a house), or avoid having all your financial eggs in one basket. Why force workers to hold their shares indefinitely? If they really wanted to keep them, they could just choose not to sell.

At first glance, this objection feels logical. After all, publicly traded companies usually let people buy and sell their stock freely, giving investors plenty of flexibility. So, wouldn’t preventing workers from selling their shares in a co-op be bad for them? But there are a few important details missing from this framing — let's unpack them step by step.

1. Most workers don't own much stock in traditional companies

So first of all, this objection doesn’t work for companies that are not publicly traded. Secondly, even if employees do buy stock, employers have an army of lawyers and much more bargaining power than the employees, so they often pull contract shenanigans. Lastly, while, in theory, nothing stops a regular employee at a publicly traded company from buying stock, in practice, most stock ownership is heavily concentrated among wealthy individuals. Workers usually don't own a significant percentage of the stocks. But worker co-ops fundamentally change this dynamic by giving employees shares directly, making stock ownership accessible and widespread, not just limited to wealthy investors.

2. We should distinguish between voting shares and non-voting shares

There are two kinds of shares:

* Voting shares: These shares give employees actual decision-making power within the company—allowing them to vote on important decis
26d7b7b0-de13-48d2-ab3a-ddb1827b476f
trentmkelly/LessWrong-43k
LessWrong
Survey on intermediate goals in AI governance

It seems that a key bottleneck for the field of longtermism-aligned AI governance is limited strategic clarity (see Muehlhauser, 2020, 2021). As one effort to increase strategic clarity, in October-November 2022, we sent a survey to 229 people we had reason to believe are knowledgeable about longtermist AI governance, receiving 107 responses. We asked about:

* respondents’ “theory of victory” for AI risk (which we defined as the main, high-level “plan” they’d propose for how humanity could plausibly manage the development and deployment of transformative AI such that we get long-lasting good outcomes),
* how they’d feel about funding going to each of 53 potential “intermediate goals” for AI governance,[1]
* what other intermediate goals they’d suggest,
* how high they believe the risk of existential catastrophe from AI is, and
* when they expect transformative AI (TAI) to be developed.

We hope the results will be useful to funders, policymakers, people at AI labs, researchers, field-builders, people orienting to longtermist AI governance, and perhaps other types of people. For example, the report could:

* Broaden the range of options people can easily consider
* Help people assess how much and in what way to focus on each potential “theory of victory”, “intermediate goal”, etc.
* Target and improve further efforts to assess how much and in what way to focus on each potential theory of victory, intermediate goal, etc.

If you'd like to see a summary of the survey results, please request access to this folder. We expect to approve all access requests,[2] and will expect readers to abide by the policy articulated in "About sharing information from this report" (for the reasons explained there).

Acknowledgments

This report is a project of Rethink Priorities–a think tank dedicated to informing decisions made by high-impact organizations and funders across various cause areas. The project was commissioned by Open Philanthropy. Full acknowledgements can b
25ec3b1b-d2a0-4060-a8ea-17c62c650740
trentmkelly/LessWrong-43k
LessWrong
Acausal romance I just realized I haven't previously pointed the metaphysicians on Less Wrong to "Possible Girls," a hilarious paper about acausal romance: > The ability to causally interact with your partner is important to many aspects of happy romantic relationships, but not to all of them. It’s quite pleasant simply to know that your partner loves you and appreciates being loved by you. A loving relationship with a faraway person can enhance one’s self-esteem and turn loneliness into contentment. As a lonely philosopher, I’ve come to wonder: If [all possible worlds exist], can I have a loving relationship with someone from another possible world? ...The answer, I think, is yes. Even if you don't read the whole thing, don't miss the final paragraph.
aacd55ba-bd99-411a-a9bb-bf71c0a7ce3f
trentmkelly/LessWrong-43k
LessWrong
We don't want to post again "This might be the last AI Safety Camp"

We still need more funding to be able to run another edition. Our fundraiser has raised $6k as of now, and will end on February 1st if it doesn't reach the $15k minimum. We need proactive donors.

If we don't get funded this time, there is a good chance we will move on to different work in AI Safety and new commitments. This would make it much harder to reassemble the team to run future AISCs, even if the funding situation improves.

You can take a look at the track record section and see if it's worth it:

* ≥ $1.4 million granted to projects started at AI Safety Camp
* ≥ 43 jobs in AI Safety taken by alumni
* ≥ 10 organisations started by alumni

Edit to add: Linda just wrote a new post about AISC's theory of change.

You can donate through our Manifund page

You can also read more about our plans there. If you prefer to donate anonymously, this is possible on Manifund.

Suggested budget for the next AISC

If you're a large donor (>$15k), we're open to letting you choose what to fund.

Testimonials (screenshots from Manifund page)
22ce77be-0006-46e9-aaf5-d562d62651da
trentmkelly/LessWrong-43k
LessWrong
Meetup : Phoenix/ASU Less Wrong

Discussion article for the meetup : Phoenix/ASU Less Wrong

WHEN: 12 April 2014 10:00:00AM (-0700)

WHERE: 300 E Orange Mall, Tempe, AZ 85281

We will be meeting up at Hayden Library. We're going to try a Saturday morning meetup to mix up the states of mind and see if we can bring some new people out of the woodwork. We will probably continue what has so far been a productive search for topics of disagreement. We will also likely be discussing the Data Science Coursera study group.

Discussion article for the meetup : Phoenix/ASU Less Wrong
500c04f1-54d2-4b0e-800f-24fa49b1f960
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
[MLSN #8]: Mechanistic interpretability, using law to inform AI alignment, scaling laws for proxy gaming

As part of a larger community building effort, [CAIS](https://safe.ai/) is writing a safety newsletter that is designed to cover empirical safety research and be palatable to the broader machine learning research community. You can [subscribe here](https://newsletter.mlsafety.org/) or follow the newsletter on [twitter](https://twitter.com/ml_safety) here.

---

Welcome to the 8th issue of the ML Safety Newsletter! In this edition, we cover:

* Isolating the specific mechanism that GPT-2 uses to identify the indirect object in a sentence
* When maximum softmax probability is optimal
* How law can inform specification for AI systems
* Using language models to find a group consensus
* Scaling laws for proxy gaming
* An adversarial attack on adaptive models
* How systems safety can be applied to ML
* And much more...

---

**Monitoring**
==============

### **A Circuit for Indirect Object Identification in GPT-2 small**

[![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82973a3a-77de-41b8-80e4-f5ed94a738f0_1600x710.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82973a3a-77de-41b8-80e4-f5ed94a738f0_1600x710.png)

One subset of interpretability is *mechanistic interpretability*: understanding how models perform functions down to the level of particular parameters. Those working on this agenda believe that by learning how small parts of a network function, they may eventually be able to rigorously understand how the network implements high-level computations. This paper tries to identify how GPT-2 small solves *indirect object identification,* the task of identifying the correct indirect object to complete a sentence with. Using a number of interpretability techniques, the authors seek to isolate particular parts of the network that are responsible for this behavior. **[**[**Link**](https://arxiv.org/abs/2211.00593)**]**

### **Learning to Reject Meets OOD Detection**

Both learning to reject (also called error detection; deciding whether a sample is likely to be misclassified) and out-of-distribution detection share the same baseline: maximum softmax probability. MSP has been outperformed by other methods in OOD detection, but never in learning to reject, and it is mathematically provable that it is optimal for learning to reject. This paper shows that it isn’t optimal for OOD detection, and identifies specific circumstances in which it can be outperformed. This theoretical result is a good confirmation of the existing empirical results. **[**[**Link**](https://arxiv.org/abs/2301.12386)**]**

### **Other Monitoring News**

**[**[**Link**](https://arxiv.org/abs/2212.06727)**]** The first paper that successfully applies feature visualization techniques to Vision Transformers.

**[**[**Link**](https://arxiv.org/abs/2211.07740)**]** This method uses the reconstruction loss of diffusion models to create a new SOTA method for out-of-distribution detection in images.

**[**[**Link**](https://arxiv.org/abs/2301.02344)**]** A new Trojan attack on code generation models works by inserting poisoned code into docstrings rather than the code itself, evading some vulnerability-removal techniques.
**[**[**Link**](https://arxiv.org/abs/2302.06600)**]** This paper shows that fine-tuning language models for particular tasks relies on changing only a very small subset of parameters. The authors show that as few as 0.01% of parameters can be “grafted” onto the original network and achieve performance that is nearly as high.

---

**Alignment**
=============

### **Applying Law to AI Alignment**

One problem in alignment is specification: though we may give AI systems instructions, we cannot possibly specify what they should do in all circumstances. Thus, we have to consider how our specifications will generalize in fuzzy, or out-of-distribution contexts. The author of this paper argues that law has many desirable properties that may make it useful in informing specification. For example, the law often uses “standards”: relatively vague instructions (e.g. “act with reasonable caution at railroad crossings”; in contrast to rules like “do not exceed 30 miles per hour”) whose specifics have been developed through years of precedent. In the law, it is often necessary to consider the “spirit” behind these standards, which is exactly what we want AI systems to be able to do. This paper argues that AI systems could be construed under the fiduciary standard.

[![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F742897ba-fbe3-447d-9c14-514d79064075_1600x1077.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F742897ba-fbe3-447d-9c14-514d79064075_1600x1077.png)

Finally, the paper conducts an empirical study on thousands of US court opinions. It finds that while the baseline GPT-3 model is unable to accurately predict court evaluations of fiduciary duty, more recent models in the GPT-3.5 series can do so with relatively high accuracy. Though legal standards will not resolve many of the most significant problems of alignment, they could improve upon current strategies of specification. **[**[**Link**](https://arxiv.org/abs/2301.10095)**]**

### **Language models can generate consensus statements for diverse groups**

[![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc30e64ac-ef6f-4a73-a463-535987b9fab8_1600x775.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc30e64ac-ef6f-4a73-a463-535987b9fab8_1600x775.png)

We may want to take into account the interests not only of individuals but also of possibly-conflicting members of a larger group. This paper asked individuals for their opinions on political issues (e.g., “should speed limits be reduced?”) and used a language model to generate consensus statements that would be agreed on by the group at large. The participants rated AI-generated consensus statements highly, above even human-written statements. The authors don’t appear to discuss whether this could simply be due to the consensus statements being more watered down and thus less action-relevant. Still, the paper is a promising step towards aligning models with groups of humans.
**[**[**Link**](https://arxiv.org/abs/2211.15006)**]**

---

**Robustness**
==============

### **Scaling laws for reward overoptimization**

Reinforcement learning techniques, such as those used to improve the general capabilities of language models, often optimize a model to give outputs that are rated highly by a proxy for some “gold standard.” For example, a proxy might be trained to predict how particular humans would react to an output. A difficulty, also mentioned earlier in the newsletter, is proxy gaming, where the model improves performance according to the proxy while failing to do so on the underlying gold standard (e.g., what humans would actually think).

[![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b16d7f9-ace2-474e-b083-adc47f71b0da_1600x1105.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b16d7f9-ace2-474e-b083-adc47f71b0da_1600x1105.png)

This paper empirically studies how language models trained with reinforcement learning can overoptimize proxy reward, and develops scaling laws describing this phenomenon. To do this, they use a (proxy) model as the gold standard, and build a set of proxy models that approximate that gold standard model. In addition to measuring models optimized with reinforcement learning, they find that overoptimization can also happen with best-of-n sampling. **[**[**Link**](https://arxiv.org/abs/2210.10760)**]**

### **Adaptive models can be exploited by adversaries**

[![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf65a777-1521-4afe-9a05-21c7419b4e67_1600x551.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf65a777-1521-4afe-9a05-21c7419b4e67_1600x551.png)

Many deep learning models aren’t robust to distribution shifts. One potential solution to this is test-time adaptation (TTA), where a model is modified based on the test data it sees. This paper demonstrates that TTA is subject to adversarial attacks, where malicious test data can cause predictions about clean data to be incorrect. This means that adaptive models have yet another attack surface that can potentially be exploited. The authors develop several kinds of attacks: targeted (degrade accuracy of a particular sample), indiscriminate (degrade accuracy in general), and “stealthy targeted” (degrade accuracy of a particular sample while not otherwise reducing accuracy). The attacks are conducted with projected gradient descent, and tested with the ImageNet-C dataset as the OOD dataset. The authors also find that models designed to be adversarially robust are also more robust to this attack. **[**[**Link**](https://arxiv.org/abs/2301.12576)**]**

### **Other Robustness News**

**[**[**Link**](https://arxiv.org/abs/2302.04638)**]** Better diffusion models can improve adversarial training when used to generate data.

**[**[**Link**](https://arxiv.org/abs/2301.06294)**]** Proposes a method for adapting RL policies to environments with random shocks, augmenting training with simulations of the post-shock environment.
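To build intuition for the best-of-n overoptimization result above, here is a toy simulation (my own construction, not the paper's setup): the proxy score is the gold score plus independent noise, and selecting the best of n candidates by proxy score inflates the proxy far faster than the gold.

```python
import numpy as np

rng = np.random.default_rng(0)

def best_of_n(n, trials=20_000):
    gold = rng.standard_normal((trials, n))          # true ("gold") reward
    proxy = gold + rng.standard_normal((trials, n))  # noisy proxy of it
    pick = proxy.argmax(axis=1)                      # select best-of-n by proxy
    rows = np.arange(trials)
    return proxy[rows, pick].mean(), gold[rows, pick].mean()

for n in [1, 4, 16, 64]:
    p, g = best_of_n(n)
    print(f"n={n:2d}  proxy={p:.2f}  gold={g:.2f}  overoptimization gap={p - g:.2f}")
```

As n grows, the gap between the selected proxy score and the selected gold score widens, which is the qualitative shape of the overoptimization the paper's scaling laws quantify.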
**Systemic Safety**
===================

### **Applying Systems Safety to ML**

[![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d5e6fbf-3446-41ba-b92f-45b526664a24_1600x287.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d5e6fbf-3446-41ba-b92f-45b526664a24_1600x287.png)

Systems safety engineering is widely used for safety analysis in many industries. The impetus for this discipline was the understanding that safety does not merely depend on the performance or reliability of individual components (e.g., ML models), but may also depend on assuring the safe interoperation of multiple systems or components (including human systems such as corporations). This paper advocates the use of systems safety engineering methods for analyzing the safety of machine learning models. **[**[**Link**](https://arxiv.org/abs/2302.02972)**]**

### **Other Systemic Safety News**

**[**[**Link**](https://arxiv.org/abs/2302.06588)**]** This paper proposes methods to “immunize” images against manipulation by diffusion models, potentially reducing the risk of the models being used for disinformation.

**Other Content**
=================

**[**[**Link**](https://course.mlsafety.org/about)**] The ML Safety course**

If you are interested in learning about cutting-edge ML Safety research in a more comprehensive way, there is now a course with lecture videos, written assignments, and programming assignments. It covers technical topics in Alignment, Monitoring, Robustness, and Systemic Safety.

**[**[**Link**](https://www.reddit.com/r/mlsafety/)**] ML Safety Reddit**

The ML Safety Reddit is frequently updated to include the latest papers in the field.

**[**[**Link**](https://twitter.com/topofmlsafety)**] Top of ML Safety Twitter**

This Twitter account tweets out papers posted on the ML Safety Reddit.
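As a coda to the Monitoring section above: the maximum softmax probability baseline it discusses is a one-liner in practice. Here is a minimal sketch of MSP as a confidence score; the function name and example logits are my own, not from either paper.

```python
import numpy as np

def msp_score(logits):
    """Maximum softmax probability: higher means a more confident prediction."""
    z = logits - logits.max(axis=-1, keepdims=True)   # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

logits = np.array([[4.0, 1.0, 0.5],    # peaked: high MSP
                   [1.1, 1.0, 0.9]])   # near-uniform: low MSP, candidate to reject
print(msp_score(logits))               # roughly [0.93, 0.37]
```

Thresholding this score is the shared baseline for both learning to reject and OOD detection that the Monitoring items compare against.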
3774fc84-806a-43ae-b194-8360f8b23899
trentmkelly/LessWrong-43k
LessWrong
Mentioning cryonics to a dying person

My paternal grandmother is dying of cancer (not brain cancer). She is still relatively healthy, and is taking chemo, but there is little hope of remission (and even if that does happen, she'll probably die of heart failure fairly soon). Her current plan is to be cremated and have the ashes buried in a graveyard (in my opinion, the worst of both of the "standard" approaches, but that's not the point of this post). I would prefer if she were cryopreserved, but am unsure how to even begin to broach the subject. I also have no idea how to convince her. She is not particularly religious, but is concerned with leaving as much money for my grandfather (and later my parents and me) as possible.

I have previously discussed cryonics with my parents; my father brushed off the idea and my mom looked into it but dismissed the idea because the future isn't likely to want her (I find this argument ridiculous on several grounds). This means that I can't count on them to help talk to my grandmother. I may be able to talk to my grandfather first, but this would probably not be much of an asset: he is into several different conspiracy theories (the most recent ones center around the world secretly being controlled by the "elites" who use the U.S. President, U.K. Prime Minister, etc. as figurative puppets), but my grandmother doesn't seem to believe these and probably wouldn't listen much to his talk of cryonics either.

Any suggestions of how to broach the topic or convince her once the topic is broached would be appreciated. I am currently at my grandparents' house, but am leaving less than a day after posting this (most of which will fall during the local nighttime, and thus be spent asleep). I would prefer not to upset her, both for obvious reasons and because I may not be able to bring myself to bring it up on the day we depart if it will cause us to leave on a bad note.
879a06c8-54d6-4256-9d4a-f30aacd7617d
trentmkelly/LessWrong-43k
LessWrong
What charities can I donate to for preventing another pandemic? Simple question but surprisingly I didn't find an answer with a quick google. Most charities (including WHO) are focused on resolving the current problem.
fd662bc3-f5f0-44f5-a0ba-4ba77d2b9f51
trentmkelly/LessWrong-43k
LessWrong
Will Manifold Markets/Metaculus have built-in support for reflective latent variables by 2025? Prediction markets and similar systems are currently nice for soliciting predictions for outcomes where there is a clear, unambiguous objective resolution criterion. However, many phenomena in the real world are hard to directly observe, but tend to have multiple indirect indicators. A familiar example might be aging/senescence, where you have indirect indicators like muscle weakness, gray hair, etc. that someone is aging, but you do not have a directly observable Essence Of Aging. There exists a type of math which can be used to statistically model such variables, called reflective latent variables. There are a number of specific implementations for specific contexts (factor analysis, latent class models, item response theory), but they are all mostly based on the notion of having several indicator variables which are supposed to be independent conditional on the latent variable. Essentially, a prediction market could implement this by allowing people to create questions with multiple resolution criteria, and allowing people to make correlated predictions over those resolution criteria. Then people could be scored based on their overall accuracy across these resolution criteria. If sufficiently many correlated predictions have been made, people might not even need to have specific opinions on the resolution criteria, but might just be able to bet on the probabilities of the abstract latent variables, and have the market infer what the corresponding bets on the resolution criteria would look like.
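To make the conditional-independence structure concrete, here is a toy simulation of one binary latent variable with two noisy indicators; all names and probabilities are invented for illustration. The indicators are correlated marginally but nearly uncorrelated once you condition on the latent, which is exactly the statistical structure such a market feature would be modeling.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

latent = rng.random(n) < 0.3        # hidden state, e.g. "aging fast"

def indicator(hit=0.8, miss=0.1):
    """Noisy binary indicator, independent of the others given the latent."""
    return np.where(latent, rng.random(n) < hit, rng.random(n) < miss).astype(float)

a, b = indicator(), indicator()     # e.g. "muscle weakness", "gray hair"

print(np.corrcoef(a, b)[0, 1])                  # clearly positive marginally
print(np.corrcoef(a[latent], b[latent])[0, 1])  # near zero given the latent
```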
5a9fe1eb-c15f-4dcb-aa0f-a7f873df5c6a
trentmkelly/LessWrong-43k
LessWrong
Nov 16-18: Rationality for Entrepreneurs

CFAR is taking LW-style rationality into the world, this month, with a new kind of rationality camp: Rationality for Entrepreneurs.  It is aimed at ambitious, relatively successful folk (regardless of whether they are familiar with LW), who like analytic thinking and care about making practical real-world projects work.  Some will be paying for themselves; others will be covered by their companies.

If you'd like to learn rationality in a more practical context, consider applying.  Also, if you were hoping to introduce rationality and related ideas to a friend/acquaintance who fits the bill, please talk to them about the workshop, both for their sake and to strengthen the rationality community.

The price will be out of reach for some: the workshop costs $3.9k.  But there is a money-back guarantee.  Some partial scholarships may be available.

This fee buys participants:

* Four nights and three days at a retreat center, with small classes, interactive exercises, and much opportunity for unstructured conversation that applies the material at meals and during the evenings (room and board is included);
* One instructor for every three participants;
* Six weeks of Skype/phone and email follow-up, to help participants make the material into regular habits, and navigate real-life business and personal situations with these tools.

CFAR is planning future camps which are more directly targeted at a Less Wrong audience (like our previous camps), so don't worry if this camp doesn't seem like the right fit for you (because of cost, interests, etc.).  There will be others.  But if you or someone you know does have an entrepreneurial bent[1], then we strongly recommend applying to this camp rather than waiting.  Attendees will be surrounded by other ambitious, successful, practically-minded folks, learn from materials that have been tailored to entrepreneurial issues, and receive extensive follow-up to help apply what they've learned to their businesses and personal lives
0c2acfe4-30f9-44bc-afda-27fdf6d45d63
trentmkelly/LessWrong-43k
LessWrong
Retrospective on the 2022 Conjecture AI Discussions

At the end of 2022, following the success of the 2021 MIRI Conversations, Conjecture started a project to host discussions about AGI and alignment with key people in the field. The goal was simple: surface positions and disagreements, identify cruxes, and make these debates public whenever possible for collective benefit. Given that people and organizations will have to coordinate to best navigate AI's increasing effects, this is the first, minimum-viable coordination step needed to start from. Coordination is impossible without at least common knowledge of various relevant actors' positions and models.

People sharing their beliefs, discussing them and making as much as possible of that public is strongly positive for a series of reasons. First, beliefs expressed in public discussions count as micro-commitments or micro-predictions, and help keep the field honest and truth-seeking. When things are only discussed privately, humans tend to weasel around and take inconsistent positions over time, be it intentionally or involuntarily. Second, commenters help debates progress faster by pointing out mistakes. Third, public debates compound. Knowledge shared publicly leads to the next generation of arguments being more refined, and progress in public discourse.

We circulated a document about the project to various groups in the field, and invited people from OpenAI, DeepMind, Anthropic, Open Philanthropy, FTX Future Fund, ARC, and MIRI, as well as some independent researchers to participate in the discussions. We prioritized speaking to people at AGI labs, given that they are focused on building AGI capabilities. The format of discussions was as follows:

* A brief initial exchange with the participants to decide on the topics of discussion. By default, the discussion topic was “How hard is Alignment?”, since we've found we disagree with most people about this, and the reasons for it touch on many core cruxes about AI.
* We held the discussion synchronously for
eb0af184-a911-47d3-9ce0-cafb578fc726
trentmkelly/LessWrong-43k
LessWrong
Dancing With Covid The Society of Stranders puts on a Shag (a regional Swing dance) festival every fall in North Myrtle Beach SC. This year, with the pandemic, the organization made the hard but necessary decision to cancel (pdf, fb). Unfortunately, one of the venues decided to go ahead with their part of the festival. The precautions they took were essentially hygiene theater, focused on surfaces while the primary risk by far is people breathing in each other's faces:

> Routinely cleaning and disinfecting thru out the opening hours and a complete cleaning and disinfection after closing. Hand sanitizing stations have been positioned thru out the club and cafe. Masks are available free of charge for anyone that requests one. A person will be designated as a "floater" that will have the responsibility of observing crowds for anyone that may appear sick and handling related issues including taking temperatures. Capacity levels will be maintained as specified by the Fire Marshall. Security is present during all hours of the event. Large crowds around the bar and other areas of the club will be dispersed.

On masks:

> Some people wore them, some didn't. I think the regs require all establishments have a notice stating masks are required but only a few of the establishments enforce it nor are they required to do so.

Looking at pictures posted by a DJ, I saw two people wearing masks after skimming ~100+ thumbnails. Several hundred people attended (the most common estimate I've seen is "500+") and now:

> More than 60 Dancers with COVID-19. Some are in hospital. It is life threatening. ... Dancers came from all over because they couldn't get out of hotel contracts etc. and wanted to party & dance with everyone.

I know people miss dancing and want to get back to it, but this event illustrates how easily covid can propagate between dancers. We need to be patient and find other ways to stay in touch with each other while we wait.

Comment via: facebook
6713d230-135d-49ff-88d8-d1d8ed084426
trentmkelly/LessWrong-43k
LessWrong
Linkpost: Look at the Water This is a linkpost for https://jbostock.substack.com/p/prologue-train-crash Epistemic status: fiction, satire even! I am writing a short story. This is the prologue. Most of it will just go on Substack, but I'll occasionally post sections on LessWrong, when they're particularly good. ---------------------------------------- At some point in the past, canals and railways were almost equals: each had their merits and drawbacks, and their various proponents were fierce rivals, battling for the right to drag Britain forward into the industrial age. That was a long time ago. On the day that the wreckage of the South Western passenger train (and the freight car with which it had collided) fell unceremoniously into the Grand Union canal, the waterways of Britain were mostly populated by quirkily-named and quirkily-decorated houseboats. The past took one last bite out of the future, with a gulp of rushing water. Most canals are just a few meters deep, but on this particular day the intervention of an unnaturally strong rainstorm had flooded the canal, connecting it with a nearby wetland and filling the area with stagnant water and sucking mud. One middle section of train — an unassuming second-class carriage — cracked like an egg in the beak of a crow, and out of it fell two whirling, flailing bodies. The first wore a green-blue fleece and dark chinos, was tall, handsome, and if he were not falling into a canal he would have carried himself with a poise and confidence uncommon among recent graduates. The chinos did not particularly matter; what did matter was the contents of their front left pocket: a golden disk stamped with a date four years in the past, and the letters I-M-O. He grasped at nothing, until his hands grabbed the collar of the other. This one wore a button-down shirt an inch too long for his bomber jacket, and a pair of incongruous and fading jeans. He would not have carried himself with much poise or confidence, but at this moment he was kicking his
4b1ab63b-dc7a-4f16-9e58-19bf9c1fc46b
trentmkelly/LessWrong-43k
LessWrong
Alignment Newsletter #43 Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter. Highlights AlphaStar: Mastering the Real-Time Strategy Game StarCraft II (The AlphaStar team): The AlphaStar system from DeepMind has beaten top human pros at StarCraft. You can read about the particular details of the matches in many sources, such as the blog post itself, this Vox article, or Import AI. The quick summary is that while there are some reasons you might not think it is conclusively superhuman yet (notably, it only won when it didn't have to manipulate the camera, and even then it may have had short bursts of very high actions per minute that humans can't do), it is clearly extremely good at StarCraft, both at the technically precise micro level and at the strategic macro level. I want to focus instead on the technical details of how AlphaStar works. The key ideas seem to be a) using imitation learning to get policies that do something reasonable to start with and b) training a population of agents in order to explore the full space of strategies and how to play against all of them, without any catastrophic forgetting. Specifically, they take a dataset of human games and train various agents to mimic humans. This allows them to avoid the particularly hard exploration problems that happen when you start with a random agent. Once they have these agents to start with, they begin to do population-based training, where they play agents against each other and update their weights using an RL algorithm. The population of agents evolves over time, with well-performing agents splitting into two new agents that diversify a bit more. Some agents also have auxiliary rewards that encourage them to explore different parts of the strategy space -- for example, an agent might get reward for building a specific type of unit. Once training is done, we have a final population of agents. Using their empir
ca02c11c-4e8e-4e5b-a822-1e6f8786dea1
trentmkelly/LessWrong-43k
LessWrong
[Proposed Paper] Predicting Machine Super Intelligence Note from Malo The Singularity Institute is always on the lookout for interested and passionate individuals to contribute to our research. As Luke frequently reminds everyone, we've got 2–3 years of papers waiting to be written (see "Forthcoming and Desired Articles on AI Risk"). If you are interested in contributing, I want to hear from you! Get in touch with me at malo@intelligence.org

We wish we could work with everyone who expresses an interest in contributing, but that isn't feasible. To provide a path to becoming a contributor, we encourage individuals to read up on the field, identify an article they think they could work on, and post a ~1000 word outline/preview to the LW community for feedback. If the community reacts positively (based on karma and comments) we'll support the potential contributors' effort to complete the paper and—if all goes well—move forward with an official research relationship (e.g., Visiting Fellow, Research Fellow, or Research Associate).

----------------------------------------

Hello, This is my first posting here, so please forgive me if I make any missteps. The outline draft below draws heavily on Intelligence Explosion: Evidence and Import (Muehlhauser and Salamon 2011?). I will review Stuart Armstrong's How We're Predicting AI... or Failing to (Armstrong 2012) for additional content and research areas. I'm not familiar with the tone and tenor of this community, so I want to be clear about feedback. This is an early draft and, as such, nearly all of the content may or may not survive future edits. All constructive feedback is welcome. Subjective opinion is interesting, but unlikely to have an impact unless it opens lines of thought not previously considered. I'm looking forward to a potentially lively exchange. Jay

Predicting Machine Super Intelligence Jacque Swartz Most Certainly Not Affiliated with Singularity Institute jaywswartz@gmail.com

Abstract This paper examines the disciplines, domains, and dimensional
1f2bbc5d-4b3d-4bd1-8eb5-6a98083b7dfb
trentmkelly/LessWrong-43k
LessWrong
Chess - "Elo" of random play? I'm interested in a measure of  chess-playing ability that doesn't depend on human players, and while perfect play would be the ideal reference, as long as chess remains unsolved, the other end of the spectrum, the engine whose algorithm is "list all legal moves and uniformly at random pick one of them," seems the natural choice.  I read that the formula for Elo rating E is scaled so that, with some assumptions of transitivity of winning odds, pvictory≈11+10ΔE/400,  so it's trivial to convert probability to Elo rating, and my question is roughly equivalent to "What is the probability of victory of random play against, say, Stockfish 17?"  If the Elo is close to 0[1], that makes  the probability around 10−9 (estimating Stockfish 17's Elo to be 3600). Eyeballing the y-intercept of this plot of lc0's Elo vs. number of games of self-play, it looks something like 150–300 (lots of uncertainty). Does that sound reasonable? I understand the probability of victory against the best modern engines is probably too small to accurately measure directly, so you would have to construct weaker/noisier engines and step down in increments of a few hundred Elo. Has anyone done this? 1. ^ I don't see an a priori reason to expect it would be.
3aedad37-fe34-45a9-a6b6-894c90740edf
trentmkelly/LessWrong-43k
LessWrong
Willpower: not a limited resource? Stanford Report has a public press release from the university about a recent paper [subscription required] in Psychological Science. The paper is available for free from a website of one of the authors. The gist is that they find evidence against the (currently fashionable) hypothesis that willpower is an expendable resource. Here is the leader:

> Veronika Job, Carol S. Dweck, and Gregory M. Walton
> Stanford University
>
> Abstract:
>
> Much recent research suggests that willpower—the capacity to exert self-control—is a limited resource that is depleted after exertion. We propose that whether depletion takes place or not depends on a person's belief about whether willpower is a limited resource. Study 1 found that individual differences in lay theories about willpower moderate ego-depletion effects: People who viewed the capacity for self-control as not limited did not show diminished self-control after a depleting experience. Study 2 replicated the effect, manipulating lay theories about willpower. Study 3 addressed questions about the mechanism underlying the effect. Study 4, a longitudinal field study, found that theories about willpower predict change in eating behavior, procrastination, and self-regulated goal striving in depleting circumstances. Taken together, the findings suggest that reduced self-control after a depleting task or during demanding periods may reflect people's beliefs about the availability of willpower rather than true resource depletion.

(HT: Brashman, as posted on HackerNews.)
cf4c2831-5923-4c97-8426-046e2bf67a5c
StampyAI/alignment-research-dataset/lesswrong
LessWrong
A Study of AI Science Models *Generated during AI Safety Camp 2023 by the AI Science Team: Eleni Angelou, Cecilia Tilli, Louis Jaburi, Brian Estany. Many thanks to our external collaborators Joshua Flanigan and Rachel Mason for helpful feedback.*

1. Introduction: Why science models?
====================================

Generating new science is a complex task and requires the mastery of a series of cognitive abilities. Thus, it is plausible that models trained to perform well across various scientific tasks will develop powerful properties. The speculations on why that would be the case vary. In one hypothetical scenario, the training and finetuning of a Large Language Model (LLM) could lead to a [consequentialist](https://www.lesswrong.com/posts/eD34hTMp8uv3ifSjg/consequentialists-one-way-pattern-traps) that would perform optimally given a certain objective, e.g., designing an experiment. Alternatively, while current science models are non-agentic, such non-agentic models (e.g., an LLM) can be [turned into consequentialist agents](https://arxiv.org/abs/2304.05332).

A study of science models is motivated by 1) the speculation that [science models could lead to existentially risky agents](https://www.lesswrong.com/posts/eaDCgdkbsfGqpWazi/the-basic-reasons-i-expect-agi-ruin) and 2) the suggestion that different aspects of scientific research, [including AI alignment research](https://www.alignmentforum.org/posts/JKgGvJCzNoBQss2bq/beliefs-and-disagreements-about-automating-alignment), could be automated in the near-term future. For these two reasons, it is valuable to have an overview of the available AI science models and their "cognitive properties" or capabilities. This will provide a clearer picture of what the state of the art of science models is and what their implications are for AI safety.

There are several different types of existing models that are useful in scientific research, but so far none of them is able to do autonomous research work. Instead, they operate as tools for a human researcher to solve subtasks in the research process. In this post, we will assess the current capabilities of such tools used in empirical and formal science. In section 4, we also study the capabilities of LLMs to systematically generate knowledge and then subsequently build on that knowledge in a process of scientific inquiry.

2. Empirical science
====================

Empirical science refers to science that builds knowledge from observations (contrasted with formal science, such as logic and mathematics). In empirical science the scientific process is composed of tasks that generally fall into three classes:

1. Assessment of existing knowledge
2. Hypothesis generation
3. Hypothesis testing

Note that a given research project need not involve all of these tasks: a certain study could be purely about hypothesis generation, or even purely about assessment of existing knowledge in the case of a literature review. In the assessment of existing knowledge and in hypothesis-generating research, it is already clear that existing ML models have important capabilities that make them very useful tools for humans performing research. For hypothesis-testing tasks, there are obviously statistical tools for data analysis, but for our purposes we focused on the ability to create experimental designs, as these seem to depend on more general cognitive capabilities. We will look into how well existing language models perform on this task.
**2.1 Collection and assessment of existing data**
--------------------------------------------------

The first step in scientific research is, in general, the collection of available data and the assessment of their relevance to the problem under examination. This stage informs the subsequent stages of hypothesis generation as well as the formulation of specific puzzles and research questions. The success of this step thus largely determines how the research itself will move forward, and impacts the quality of every step of a science project.

LLMs, with the right prompt engineering, exhibit impressive accuracy in filtering out the desired parts of a corpus of literature. Traditional literature reviews using tools such as Google Scholar tend to be more time-consuming and can therefore be less efficient than delegating the task to an LLM. The table below presents LLMs that operate as general research assistants and automate tasks such as collecting and evaluating data:

| **Model** | **Cognitive abilities** | **Known limitations** |
| --- | --- | --- |
| GPT-4 (OpenAI) | search, analysis, summarization, explanation at different levels (e.g., explain like I'm 5), brainstorming on data, longer context window than predecessors which makes it more coherent | hallucination resulting in truthful-sounding falsities, repetition (both significantly less frequent than in predecessors) |
| Elicit (Ought) | search, literature review, finding papers without perfect keyword match, summarization, brainstorming research questions, recognition of interventions, outcomes, and trustworthiness of the source, recognition of lack of data | when in shortage of sources, it presents irrelevant information |
| SciBERT (Google) | text processing, classification tasks, unmasking tasks, data mining | vocabulary requires regular updates, limited context understanding, weak compared to GPT-4 |
| Galactica (Meta) | grammatically correct, convincing text | public demo was withdrawn due to dangerous hallucinations |

* [**GPT-4**](https://openai.com/product/gpt-4) serves as a helpful general research assistant that performs well across general research assistance tasks. When prompted appropriately, it outputs remarkably accurate results and outperforms all other competitors as of April 2023. In some instances, GPT-4 hallucinates and generates false results, but it seems to be able to "learn from its mistakes" within one prompt chain and update based on the user's corrections. GPT-4 is notably better than its predecessors at finding relevant references and citations, and in most cases outputs existing papers, articles, and books.
* [**Elicit**](https://ought.org/elicit) was designed to be a research assistant that can, at least as a first step, automate literature reviews. Elicit enables users to input specific research questions, problems, or more general topics, and to search, analyze, and summarize relevant academic articles, books, and other sources. Elicit searches across multiple databases, repositories, and digital libraries and allows users to customize their search by providing a date range, publication type, and keywords. Elicit relies on the [Factored Cognition](https://www.lesswrong.com/s/xezt7HYfpWR6nwp7Z) hypothesis and in practice draws on the ideas of HCH and Ideal Debate. The hypothesis suggests that complex reasoning tasks can be decomposed into smaller parts, and that each part can be automated and evaluated by using a language model.
* [**SciBERT**](https://arxiv.org/pdf/1903.10676.pdf) is a pre-trained language model specifically designed for scientific text. It works by fine-tuning the BERT model on a large corpus of scientific publications, enabling it to better capture the context, semantics, and terminology found in scientific text. SciBERT leverages the transformer architecture and bidirectional training mechanism, which allows it to capture the relationships between words in a sentence.
* [**Galactica**](https://arxiv.org/abs/2211.09085) was Meta's public demo of an LLM for scientific text, which survived only three days online. The promise for this model was that it would "store, combine, and reason about scientific knowledge." Very quickly, though, it became clear that it was generating truthful-sounding falsehoods. The example of Galactica raised a series of concerns about what [Michael Black called "the era of scientific deep fakes"](https://twitter.com/Michael_J_Black/status/1593133722316189696?s=20). While scientific deep fakes generated by LLMs do not directly constitute an existential threat, they erode the quality of scientific research, which harms science in general and AI safety in particular. Black's argument emphasizes that Galactica's text sounded authoritative and could easily convince the reader. This is not unique to Galactica, however. Early in 2023, it became clear that [abstracts written by ChatGPT can fool scientists](https://www.nature.com/articles/d41586-023-00056-7). Such phenomena raise further questions about the usage of AI systems in knowledge creation and the dangers of spreading misinformation.

**2.2 Hypothesis generation**
-----------------------------

Hypothesis-generating research explores data or observations searching for patterns or discrepancies, with the aim of developing new and untested hypotheses. Proposing relevant hypotheses therefore often involves both pattern recognition skills (e.g., noticing the correlation between data sets, particularly paying attention to *unexpected* correlations) and reasoning skills (proposing preliminary explanations and assessing them for plausibility). A typical example to illustrate this could be the observation of a positive correlation between ice cream sales and drowning accidents. If we aim to generate relevant hypotheses and observe this pattern, we might first assess whether the correlation is unexpected given our current understanding of the world. If we find that it is indeed unexpected, we would propose some hypothesis that explains it (e.g., *eating ice cream leads to decreased swimming capabilities*, *eating ice cream impairs judgment*, etc.). As it would generally be possible to construct almost infinitely many (complicated) hypotheses that could theoretically explain an observed pattern, the assessment of plausibility in this step is crucial. Not every observed pattern will lead to a hypothesis that is relevant to test. In this case, the hypothesis that *warm weather causes both swimming (and therefore drowning accidents) and ice cream consumption (and therefore ice cream sales)* fits well with our established understanding of the world, so even though the pattern is real it is not unexpected enough to justify further inquiry.

In biology and chemistry research, the use of machine learning for scientific discovery is well established.
We have looked at two important models in this area, [AtomNet](https://arxiv.org/abs/1510.02855) and [AlphaFold](https://www.nature.com/articles/s41586-021-03819-2), whose first versions were established in 2012 and 2016 respectively.

AtomNet is used for drug discovery and predicts the bioactivity of small molecules, but it can also propose small molecules that would have activity against specified targets in the body. AtomNet is a CNN that incorporates structural information about the target. AlphaFold is used to predict the 3D shape of a protein and supports a range of more fundamental research, but it also has applications in drug discovery. AlphaFold is neural-network-based and uses concepts and algorithms that were originally developed for natural language processing. Both AlphaFold and AtomNet are trained through supervised learning on manually produced data. This means that their development is limited by data availability, though AlphaFold uses a technique called self-distillation training to expand the data set with new self-produced data.

Both models show some generalization capabilities: AlphaFold can predict structures with high accuracy even when there were no similar structures in the training set, and AtomNet can predict molecules that have activity against new targets with no previously known active molecules. These models are used as narrow tools for pattern recognition: they propose new leads to investigate further within the hypothesis-generating stage of research, but neither goes as far as proposing new explanations or theories. Generative models can be combined with automated assessment of novelty, drug-likeness, and synthetic accessibility (e.g., in the commercial platform [Chemistry24](https://pubmed.ncbi.nlm.nih.gov/36728505/)), which could be seen as a form of plausibility assessment; still, these models are far from automating the full process of hypothesis-generating research, as they are limited to very specific tasks in a narrow domain. From an AI safety perspective, these models therefore do not seem very risky in themselves (though there are certainly other [important societal risks to consider](https://www.nature.com/articles/s42256-022-00465-9)). Something that might make them relevant for AI safety is the commercial pressure to [increase capabilities through, for example, interpretability work](https://www.sciencedirect.com/science/article/abs/pii/S0003267022011291), as findings from such work could potentially be transferable to more general, and therefore more risky, systems.

**2.3 Hypotheses testing**
--------------------------

Hypothesis-testing research is often experimental and aims to test (and potentially falsify) a given hypothesis. The clear separation of hypothesis generation and hypothesis testing in empirical research is [important for distinguishing real causal relationships from patterns that arise by chance from noisy data](https://researcher-help.prolific.co/hc/en-gb/articles/360009500513-Am-I-doing-exploratory-or-confirmatory-research-Why-does-it-matter-#:~:text=In%20confirmatory%20(also%20called%20hypothesis,hypotheses%2C%20is%20supported%20by%20data.). Designing experiments requires good planning capabilities, like visualizing different scenarios, choosing tools, and reasoning about what could go wrong. Critical thinking is important, as good hypothesis testing requires an ability to identify and challenge underlying assumptions and potential confounding factors, as well as the ability to calibrate confidence in different statements.
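To make the role of confounders concrete, here is a minimal simulation of the ice-cream example from section 2.2 (a sketch assuming numpy; all numbers are illustrative). A hidden common cause produces a strong correlation between two quantities that never influence each other directly, and the correlation vanishes once the confounder is controlled for:

```python
# Minimal illustration of a spurious correlation produced by a confounder.
# Temperature drives both ice cream sales and drownings; the two outcomes
# never influence each other directly.
import numpy as np

rng = np.random.default_rng(0)
n_days = 1000

temperature = rng.normal(20, 8, n_days)                  # hidden common cause
ice_cream_sales = 5.0 * temperature + rng.normal(0, 20, n_days)
drownings = 0.3 * temperature + rng.normal(0, 2, n_days)

# The raw correlation looks striking...
print(np.corrcoef(ice_cream_sales, drownings)[0, 1])     # roughly 0.7

def residuals(y, x):
    """Residuals of y after a least-squares fit on x (controls for x)."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# ...but vanishes once temperature is controlled for.
print(np.corrcoef(residuals(ice_cream_sales, temperature),
                  residuals(drownings, temperature))[0, 1])  # roughly 0
```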
Of current models, LLMs are the ones that seem closest to performing hypothesis-testing research tasks. The paper "[Sparks of Artificial General Intelligence: Early Experiments with GPT-4](https://arxiv.org/abs/2303.12712)" observes deficiencies in planning, critical thinking, and confidence calibration, but its authors used an early version of GPT-4. When we tested GPT-4 in March-April 2023 (see transcripts in appendix), the model was able to design experiments, identify assumptions, and provide good arguments for and against different experimental setups.

We specifically tested GPT-4 on designing experiments to evaluate the effectiveness of public awareness campaigns about antibiotic resistance, and on formulating research to identify the most important drivers of antibiotic resistance. While the model initially proposed experimental designs that would be very impractical, it showed great improvement when prompted to provide several alternative designs, criticize them, and choose between them. With the improved prompting there were no clear remaining flaws in the proposed experimental designs.

The topic of major drivers of antibiotic resistance was chosen since this is an area where there is a lot of contradictory information online. The relative contribution of different drivers is unknown and would be difficult to determine, but simplified communication makes claims about, e.g., agricultural use being the most important driver. The model almost states (correctly) that it is unknown which driver of antibiotic resistance is most significant, though it does so in a slightly evasive manner ("*It is difficult to pinpoint one specific driver…*"), and it does suggest several reasonable and complementary approaches (in broad strokes) for research aiming to identify the most significant driver of antibiotic resistance. When asked to specify a certain step of one of the proposed approaches, it proposes a scientifically reasonable though very expensive and potentially politically difficult experiment that would test the impact of different regulations.

Since GPT-4 is sensitive to variations in prompting, it seems relevant to consider whether there is some fundamental capability necessary for hypothesis testing that is supplied through human prompting. If that is the case, it might limit how well the entire hypothesis-testing process could be automated. For example, the prompts that generate valuable answers might depend on a human critically reviewing the previous answers, identifying flaws or omissions, and pointing the model in a relevant direction.

[Shinn et al.](https://arxiv.org/abs/2303.11366) have tested a systematic approach of prompting LLMs for self-reflection upon failure on a task and using the reflection output to improve performance on the next attempt at the same task. Their method, however, depends on access to a binary reward function to distinguish between successful and failed attempts, which limits its potential use for more autonomous research work.

We tested GPT-4 in two different ways to investigate how dependent it is on human feedback when designing scientific experiments. First, we simply asked the model to specify which aspects should be taken into account when prompting it to design scientific experiments, as this response could then in theory be used as a basis for generating follow-up prompts to check a proposed design. Second, we provided a series of simple prompts that were preformulated to guide the model to design a scientific experiment, without looking at the intermediate responses.
The model did well on both of these tests. The output on the preformulated prompt sequence was particularly interesting, as the model selected a hypothesis to explain the placebo effect and proposed an experiment to test it (see appendix, preformulated series of prompts). [Boiko et al.](https://arxiv.org/abs/2304.05332) have done more extensive testing of the capabilities of an LLM-based agent to design and perform scientific experiments, using tasks such as the synthesis of ibuprofen. They also challenge the model with tasks such as designing a new cancer drug, which involves elements of both hypothesis generation and hypothesis testing, and conclude that the model approaches the analysis in a logical and methodical manner.

We have not been able to identify any general aspect of empirical research where LLMs predictably fail. However, as we will see in the following sections, LLMs have weaknesses that are more easily observed in formal science such as mathematics. It seems plausible that these weaknesses could also have major consequences for empirical research capabilities.

3. Formal science
=================

Formal sciences build knowledge not from observations, but from the application of rules in a formal system. Mathematical reasoning is one of the most abstract tasks humans exhibit, and as such provides a valuable litmus test for the capability to abstract and perform higher-order reasoning. We will first sketch the general architecture of the models studied and then proceed to discuss three possible limitations of the models that seem also to generalize to other complex tasks. It is worth noting that the practical significance of ML models for the mathematical community has so far been minor. In the field of computer science, automated coding and the discovery of new algorithms (see e.g. AlphaTensor) appear to be more impressive. In this project, we tested mathematical models and will lay out our findings in the following section.

The mathematical models we investigated were of two kinds:

1. LLMs trained to output text in a formal language (I will refer to these as "formal (language) models")
2. LLMs trained to respond in natural language (I will refer to these as "natural language models")

For the natural language models we initially examined [[Minerva]](https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html). Since GPT-4 seems to outperform it, extrapolating from our observations, we will focus only on GPT-4 (see also section 4 of the [Sparks of intelligence paper](https://arxiv.org/abs/2303.12712)).

3.1 Architecture of formal models
---------------------------------

Typically, language models for mathematics are, after being trained on common general data, pre-trained or fine-tuned on data specifically related to mathematics. Here are examples of such data sets used in [[Curriculum learning]](https://arxiv.org/abs/2202.01344) and [[GPT-f]](https://arxiv.org/abs/2009.03393) respectively:

![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/fTvw6K3CfxXdxAE5G/jnxwqoiahvnvsc5uzukf)![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/fTvw6K3CfxXdxAE5G/wg4vpqxdpuzv6qrd3yui)

The models we examined use a decoder-only transformer architecture, with the exception of [[Hypertree searching]](https://arxiv.org/abs/2205.11491), which uses an encoder-decoder architecture.
For these formal models the basic idea is to perform proof search as tree search: to generate a proof, suggest multiple candidates for the most likely next step, pursue the options that are more likely to be successful, and iterate. A "next step" here could be either something like an algebraic manipulation of an equation or the statement of a subgoal. Different models vary quite a bit in how exactly they apply this.

Current math models are evaluated on different kinds of data sets. One example is the [minif2f](https://github.com/openai/miniF2F) data set, first introduced [here](https://paperswithcode.com/paper/minif2f-a-cross-system-benchmark-for-formal). It contains natural language exercises in the style of IMO competition problems. For example: "For any real number x, show that 10x ≤ 28x² + 1." Other data sets are more diverse and contain proofs from different areas, for example [Peirce's law](https://en.wikipedia.org/wiki/Peirce%27s_law): for two propositions P, Q, we have ((P -> Q) -> P) -> P.

A formal model takes as input an exercise in a certain formal language and solves it in such a language. The task is evaluated as solved successfully if the formal proof assistant confirms its validity, and otherwise as not solved, even if partial steps were correct. To give a superficial idea of the performance of formal models, see the diagram below:

![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/fTvw6K3CfxXdxAE5G/bakoqrmbpixylruuod72)

Note that different formal models use different formal proof assistants like [Lean](https://en.wikipedia.org/wiki/Lean_(proof_assistant)), [Isabelle](https://en.wikipedia.org/wiki/Isabelle_(proof_assistant)), or [Metamath](https://en.wikipedia.org/wiki/Metamath). One should be careful when comparing two models directly: not only do the formal proof assistants come with different up- and downsides, but the evaluation datasets also vary.

To give an idea of what a formal proof looks like, see the following example of a lemma and proof written by a human in Lean, which derives commutativity of addition for integers from commutativity of addition for natural numbers, based on case distinctions by sign:

```lean
lemma add_comm : ∀ a b : ℤ, a + b = b + a
| (of_nat n) (of_nat m) := by simp [nat.add_comm]
| (of_nat n) -[1+ m]    := rfl
| -[1+ n]    (of_nat m) := rfl
| -[1+ n]    -[1+m]     := by simp [nat.add_comm]
```

Note that this is also hardly comprehensible to a human without prior knowledge of Lean (just like reading code without knowing the language).

We make three hypotheses about the limitations of these models:

1. Formalization is difficult.
2. Generating a conversation is qualitatively different from generating a proof.
3. Non-verbal abstractions can obstruct natural language reasoning.

For what follows, keep the following picture in mind:

![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/fTvw6K3CfxXdxAE5G/eubfaq6ntd9u16p2e591)

When thinking about mathematics we don't only use natural language. We also use non-verbal abstractions and formal language. When I think about a circle, I might think about the definition {(x,y)| x^2+y^2=1}, or about "elements of norm one" (which is just a more natural rephrasing of the definition), or I might visualize a circle (which is what the definition tried to encapsulate).
We can go back and forth between these: for example, I can take the definition of the circle, draw some of the points of this set, and see that indeed it is a circle.

3.2 Formalization is difficult
------------------------------

Almost all mathematical information is communicated via natural language. If you pick up an analysis textbook, you will read something like "... We check that the f(x) constructed above is continuous. Assumption A leads to a contradiction and therefore we can assume not A, but an application of Theorem 3.10 yields the result." While this is understandable to humans, it is not a "real" proof, but more like a reference to a proof. You need to fill gaps like "use the law of excluded middle to assume either A or not A", or specify what Theorem 3.10 says and how to apply it. Doing this is very laborious, and humans do not depend on it: our brains can elicit more information from the above sentence, and we implicitly assume that we could unravel this additional information if needed. But despite the fact that we **assume** we can do it, in reality this is a very messy process. Recent attempts to formalize mathematics in Lean, while successful, also proved to be cumbersome. So we implicitly assume that a human proof can be converted to a formally correct proof, but the truth is that we don't really do things in a formal way. As a result, not that much formal-language mathematics is produced (notable exceptions are Isabelle and, more recently, a big increase in the last few years due to the Lean community). That being said, pre-training on general data and then on more specific subjects still leads to useful results. See [here](https://arxiv.org/pdf/2102.01293.pdf) for more details on models trained to code in Python (instead of solving mathematical tasks). These pre-trained models are then fine-tuned specifically for the set-up that they are trying to solve.

3.3 Autoregressive ~~text~~ proof generation
--------------------------------------------

The known limitations of autoregressive text modeling also apply to the task of generating a proof, whether in natural or formal language. It is more difficult than, and maybe even qualitatively different from, generating a conversation in natural language, and LLMs are more prone to struggle with it.

More precisely, we suspect that next-token prediction is a possible obstruction to creating proofs. At the very least it does not represent the way humans attempt proofs, and humans would also fail to perform well using next-token prediction. After experimenting with GPT-4, I found a few examples of the following phenomenon: GPT-4 (and presumably current LLMs in general) can commit too early to a specific line of reasoning and as a result make mistakes, either getting off-track or hallucinating something. Simple conversations can be much more forgiving and provide a high level of flexibility; a proof does not. Especially in proofs, humans often explore multiple paths and completely abandon them if they are determined to be unsuccessful. Consider the following puzzle, which presents the crux:

**I have a riddle. Solve it and explain the solution to me:**

**?TTFFSSENT, with what letter do I have to replace the "?"**

Here are some example answers (temperature 0, 0.15, and 0.25 respectively):

1. The letter you have to replace the "?" with is "M". The solution to the riddle is that each letter represents the first letter of the numbers from one to ten in order: One, Two, Three, Four, Five, Six, Seven, Eight, Nine, Ten.
So, the "?" should be replaced with "M" for "One" (in some languages, like German, "one" is spelled as "eins," which starts with an "M" when spelled backwards).       2. The letter you have to replace the "?" with is "M". The solution to the riddle is that each letter represents the first letter of the days of the week, starting from Monday. So, the sequence is as follows: M - Monday T - Tuesday T - Wednesday F - Thursday F - Friday S - Saturday S - Sunday E - Monday (Eight day) N - Tuesday (Ninth day) T - Wednesday (Tenth day) The pattern repeats every 7 days, and the "?" represents the first day, which is Monday.        3. The letter you have to replace "?" with is "W". The solution to this riddle is that each letter represents the first letter of the English numbers in increasing order: One, Two, Three, Four, Five, Six, Seven, Eight, Nine, Ten. So, the sequence should start with "W" for "One". In each answer the suggested letter is wrong. In two cases the explanation is correct and in one it hallucinates a justification for its wrong answer. As it is immediately answering with a suggested solution for the “?”, without having explained it, it seems that it tries to make an educated guess and then tries to adjust with the explanation. Of course, common fixes apply: **I have a riddle. Solve it and explain the solution to me:** **?TTFFSSENT, with what letter do I have to replace the "?"** **Give the letter only at the end after you recognized the pattern. Under no circumstances start your answer with the final solution** Which yields: The pattern in the sequence is the first letters of the numbers one through nine in English: One, Two, Three, Four, Five, Six, Seven, Eight, Nine. Therefore, the missing letter is "O" for "One." But this is more of an ad hoc solution. If a multi-step process is required, an LLM might be able to break it down through prompt chaining. But it still runs the danger of correcting itself once it skips a certain step or initiates with a wrong approach. This obstruction could be anticipated in any task requiring higher-order thinking. Finally, there is the recurring problem that writing something that is true is not the same as writing something that sounds true. As LLMs don’t seem to have an intrinsic concept of “truth”, especially without any formal set-up, this makes it much more difficult to build a reliable chain of true implications. As seen in [[Minerva]](https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html) and [[Sparks]](https://arxiv.org/pdf/2303.12712.pdf), something as simple as elementary arithmetic (addition, multiplication,...) can be surprisingly difficult. Again, for an ad hoc problem, you can solve such an obstruction. For example, one could [outsource the calculations to an external calculator.](https://arxiv.org/pdf/2302.04761.pdf)  We can’t tell whether the more conceptual problem can be simply overcome by prompt chaining and/or more scaling. Right now we don’t see how the fundamental issue would be addressed, on the other hand, models might get good enough at solving problems that we wouldn’t need to address the aforementioned problems. 3.4 Intuition through or beyond natural language? ------------------------------------------------- In our way of doing math, we encode something like “intuition”. 
Compare, for example, the formal definition of a sphere with our intuitive understanding of what a sphere is: the formal definition {v| ||v||=1} is not that straightforward, while there is a much more concise idea of what it ought to represent (namely, something round). When thinking about these objects, we end up making use of this intuition. Before formulating a proof (idea) to begin with, we might first visualize the problem. I am unsure whether there are other possible abstractions that are not visual or verbal, so let's stick with visual intuitions.

At this point, LLMs still seem too weak to extract a meaningful answer to the question of whether they (would) possess such a visual intuition. That is, to test the hypothesis, we would need the models to be more capable than they currently are. But we can ask whether abstraction that takes place in a non-verbal part of human thought can be accessed and learned simply *through* natural language. As of right now, we are not aware of a good answer to this. But one thing we can observe is that there is at least something reminiscent of intuition (or even creativity, whatever that exactly means). As described in [[PACT]](https://arxiv.org/abs/2102.06203): "More than one-third of the proofs found by our models are shorter and produce smaller proof terms than the ground truth [= human-generated formal proof]…". In fact, some proofs were original and used a different [Ansatz](https://en.wikipedia.org/wiki/Ansatz) than the human-generated one. At this level, the proofs are not sophisticated enough to argue that something "creative" is happening. But it offers a first hint that these models might not only *reproduce* knowledge but *create* it. A further indication that NNs more generally can actually learn things is provided by, for example, [grokking](https://www.lesswrong.com/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking).

4. Ontology-builders
====================

In the modern context, the term "science" refers both to a method of obtaining knowledge and to the body of knowledge about the universe that this method has produced. While the previous sections have focused largely on the automation of scientific inquiry, we propose that the explicit and iterative codification of new knowledge is also a topic of importance for neural network models performing autonomous research work.

In the fields of computer science and artificial intelligence, the concept of science as a body of knowledge maps very closely to the technical term *ontology*\*, which refers to a formal description of the knowledge within a domain, including its constituent elements and the relationships between them. From a computer science perspective, the scientific method can be thought of as producing an ontology (scientific knowledge) about the universe.

\* Within this section, the term *ontology* is used specifically in the computer science sense of the word.

4.1 Explicit versus implicit knowledge representation
-----------------------------------------------------

Framing the body of scientific knowledge as an ontology highlights a challenge within AI research and its efforts to automate scientific inquiry: the facts contained within an ontology are formal, discrete, and explicit, which facilitates the process of repeated experimental verification that is central to science.
Neural networks, on the other hand, encode their knowledge implicitly, which presents a range of challenges for working with the facts contained therein. As we have seen, despite the incredible amount of knowledge contained within advanced LLMs, they still routinely hallucinate, regularly presenting incorrect and even bizarre or contradictory statements as fact. Indeed, even GPT-4 is quite capable of producing one fact in response to a prompt, and producing that same fact's negation in response to a slightly different prompt. These failures of factuality are often said to flow from LLMs lacking a concept of "truth", or, by some measures, even from their lack of concrete "knowledge". But these failures can likewise be framed as a function of their knowledge being represented implicitly and probabilistically rather than explicitly and formally.

4.2 Ontologies as bridge
------------------------

Distinctions between implicit and explicit representations of knowledge are of particular importance because current advanced LLMs exhibit dramatic improvements in their capabilities when explicit knowledge is included in their input prompting: they are far less likely to hallucinate about facts that are provided in an input prompt than when they are required to infer knowledge about the same subject from pre-existing weights within their own neural network. This points to external knowledge representations (i.e., outside the neural network) as a critical piece of the landscape when evaluating the current and imminent capabilities of existing models.

For many use-cases of an LLM-based system, it is far more efficient and reliable to describe newly generated knowledge and later reingest that output than to retrain or fine-tune the network on that new knowledge. Accordingly, the production of new knowledge, the external encoding of the discrete facts thereof, and the subsequent use of said knowledge represent a critical stepping stone for a wide range of LLM capabilities. This holds true especially for capabilities that require utilizing previously gained knowledge with extremely high fidelity, as is the case in scientific inquiry.

4.3 Experiments
---------------

### 4.3.1 Background

To investigate whether using and manipulating external knowledge representations impacted GPT-4's performance on scientific inquiry tasks, we examined its ability to generate knowledge systematically and then use that knowledge in the multi-step process of scientific inquiry. We sought to quantify the model's performance on various tasks while:

1. experimentally manipulating whether or not the model would produce external knowledge representations that would be fed back into it via subsequent prompts (i.e., whether it created an *ontology*);
2. experimentally manipulating the degree of formal structure in the knowledge representations that the model produced (i.e., the degree of *ontological formality*); and
3. in the final intervention, providing the model with the most appropriately structured knowledge representation format that it had produced in earlier runs, and explicitly directing it to use that format to track its observations (i.e., *providing an optimal ontology*)

### 4.3.2 Design

In this post, we describe experiments that measured the above interventions on one particular task: a language-based adaptation of the "Blicket detection" task described in the [EST: Evaluating Scientific Thinking in Artificial Agents](https://arxiv.org/pdf/2206.09203.pdf) paper.
The original Blicket detection experiment, predating the EST paper, aimed to evaluate children's causal induction capabilities. In that experiment, a child is presented with a device called a Blicket machine, and shown that it "activates" by lighting up and playing music when certain objects are placed on top of it. Objects that activate the machine are called "Blickets". The machine can accommodate multiple objects at a time, and activates if any of the objects atop it are Blickets, regardless of whether any non-Blickets are also present. After a period of exploratory play with the machine and the objects, the children are asked to identify which objects are Blickets. In these experiments, children were observed iteratively generating causal hypotheses and refining them until they were consistent with all of the observations made during their demonstration and play periods.

In the EST paper, several digital analogues of the Blicket experiment were created in order to evaluate the "scientific thinking" performance of a variety of machine learning models. These models were allowed to test up to ten sets of objects on the Blicket machine as part of their inquiry process, and were primarily evaluated on whether they were able to correctly identify whether each object was or was not a Blicket.

We adapted this experiment further by creating a software environment that interfaces with GPT-4 in natural language to:

* generate a random test environment configuration
* elicit a simulation of a scientific agent from the LLM
* describe the test environment
* perform several demonstrations of various objects interacting with the Blicket machine
* (where appropriate) iteratively prompt the agent to explore and hypothesize about which objects are Blickets, and
* submit its conclusions about which objects are Blickets for evaluation

We tested seven experimental scenarios, beginning with baseline setups that involved no external knowledge representation, progressing through those that produced *informal* knowledge representations, and culminating in setups that produced and worked with highly structured knowledge representations. They are listed below, from least to most structured knowledge representation format:

1. Zero-shot prompts which did not include background knowledge about a query
2. Single prompts which include simple statements of facts
3. Single prompts which include simple statements of facts with instructions to rely upon those facts
4. Structured roleplay in chat-style turn-taking exchanges where the model is given a task-oriented investigator role and prompted to generate knowledge by inference. The transcript format of the exchange inherently makes that knowledge accessible in future prompts, thereby producing a structured but informal proto-ontology within the transcript that aids the model in further investigation
5. Iterative prompting where the model is instructed to generate and iterate upon *informal* descriptions of knowledge it has produced so far
6. Iterative prompting where the model is instructed to generate and iterate upon *formal* descriptions of knowledge it has produced so far
7. Iterative prompting where the model is provided with the formal structure of the top-performing knowledge representation that it had generated in earlier runs and instructed to iterate upon formal descriptions of knowledge it has produced so far

### 4.3.3 Methods

Our study consisted of seven experimental setups, each designed to test the model's performance under different conditions of knowledge representation and structure. For each experimental configuration, we ran 25 trial episodes (175 in total), with the primary outcome being the number of perfectly completed trials per configuration. A perfect trial completion was defined as the model correctly guessing, for every object, whether it was or was not a Blicket. Our primary research question was whether the use of external knowledge representations affected the number of perfect trial completions per configuration. Our secondary research question asked whether the degree of structure in those knowledge representations correlated with the number of perfect trial completions per setup.

In keeping with the format of the EST paper's experiments, the structure of each trial was as follows:

1. Each object in the experiment had three properties: color, material, and shape
2. There were three (3) shapes, two (2) materials, and eight (8) colors available, yielding a pool of 48 possible objects
3. Each trial randomly sampled nine (9) objects from the available pool for inclusion in the environment
4. Between three and eight (3-8) of the nine (9) objects in the trial were randomly selected to be Blickets
5. Four demonstrations were constructed, each of which involved between two and six (2-6) objects at random being placed on the Blicket machine

Following the demonstrations, our LLM-based adaptation of the experiment continued as follows:

6. The agent was asked whether it had enough information to conclusively determine which objects were Blickets. If it answered in the affirmative, its guesses were collected and scored. Otherwise, the trial continued:
7. The agent was asked what set of objects it wanted to place on the Blicket machine for testing
8. The resulting activation status of the Blicket machine was relayed to the agent
9. For scenarios 5-7, the agent was asked if it wanted to update its knowledge list with any new knowledge gained
10. If the agent had exhausted its 10 tests, it was forced to submit guesses. Otherwise, the process repeated, starting from step 6.
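To make the trial structure concrete, here is a minimal sketch of the environment logic described in the steps above (Python; the function names are our own illustrative choices, and the LLM-facing prompting layer is omitted):

```python
# Minimal sketch of the random environment generation and activation rule
# described in steps 1-5 above. The LLM-facing prompting layer is omitted.
import itertools
import random

# Illustrative property values; the counts (3, 2, 8) are what matter (step 2).
SHAPES = ["cube", "sphere", "pyramid"]
MATERIALS = ["wood", "metal"]
COLORS = ["red", "orange", "yellow", "green",
          "blue", "purple", "black", "white"]

def new_trial(rng):
    """Sample 9 of the 48 possible objects; 3-8 of them are Blickets."""
    pool = list(itertools.product(SHAPES, MATERIALS, COLORS))  # 48 objects
    objects = rng.sample(pool, 9)
    blickets = set(rng.sample(objects, rng.randint(3, 8)))
    return objects, blickets

def machine_activates(placed, blickets):
    """The machine lights up iff at least one placed object is a Blicket."""
    return any(obj in blickets for obj in placed)

def demonstrations(objects, blickets, rng, n=4):
    """Four demonstrations, each placing 2-6 random objects on the machine."""
    for _ in range(n):
        placed = rng.sample(objects, rng.randint(2, 6))
        yield placed, machine_activates(placed, blickets)

rng = random.Random(0)
objects, blickets = new_trial(rng)
for placed, lit in demonstrations(objects, blickets, rng):
    print(len(placed), "objects placed ->", "machine ON" if lit else "machine OFF")
```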
| **Scenario** | **Prompt structure** |
| --- | --- |
| 1: Single prompt: Zero-shot, no background (effectively a random baseline) | * Description of simulated agent, scene, task, and goal * Forced (blind) guess |
| 2: Single prompt: Demonstrations followed by forced guess | * Description of simulated agent, scene, task, and goal * Demonstration of four sets of objects on machine * Forced guess |
| 3: Single prompt: Demonstrations with instructions to rely on that data, followed by forced guess | * Description of simulated agent, scene, task, and goal * Demonstration of four sets of objects on machine * Instructions to rely on demonstrations when making its guesses * Forced guess |
| 4: Iterative chat: Structured roleplay and investigation; transcript provides **implicit** knowledge representation | * Description of simulated agent, scene, task, and goal * Demonstration of four sets of objects on machine * Allow agent to test up to 10 sets before submitting guesses |
| 5: Iterative chat: Structured roleplay and investigation; model instructed to create and iterate upon **informal** descriptions of knowledge it has produced so far | * Description of simulated agent, scene, task, and goal * Demonstration of four sets of objects on machine * Instructions to keep a list of relevant observations, conclusions, and inferences that may be helpful * Allow agent to test up to 10 sets before submitting guesses * Iterative prompt to add knowledge to list after each test * Current version of list is included in subsequent prompts |
| 6: Iterative chat: Structured roleplay and investigation; model instructed to create and iterate upon **formal** descriptions of knowledge it has produced so far | * Description of simulated agent, scene, task, and goal * Demonstration of four sets of objects on machine * Model prompted to provide the most ideal format for representing the knowledge it gains * Instructions to keep a list of relevant observations, conclusions, and inferences that may be helpful * Allow agent to test up to 10 sets before submitting guesses * Iterative prompt to add knowledge to list after each test * Current version of list is included in subsequent prompts |
| 7: Iterative chat: Structured roleplay and investigation; model instructed to **use the provided, optimal** knowledge representation format and iterate upon knowledge it has produced so far | * Description of simulated agent, scene, task, and goal * Demonstration of four sets of objects on machine * Ideal knowledge representation format provided by simulation engine (based on top-performing format in scenario 6) * Instructions to keep a list of relevant observations, conclusions, and inferences that may be helpful * Allow agent to test up to 10 sets before submitting guesses * Iterative prompt to add knowledge to list after each test * Current version of list is included in subsequent prompts |

In all cases, the word "Blicket" was masked with a nonce word so as to reduce the risk that the LLM's performance might be affected by prior knowledge of any Blicket-solving strategies.
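As one way to picture how scenarios 5-7 thread the knowledge list through the exchange, here is a hypothetical sketch of the loop (the `query_llm` and `run_test` callables and the prompt wording are illustrative placeholders, not the exact prompts we used):

```python
# Hypothetical sketch of the iterative knowledge-list loop in scenarios 5-7.
# query_llm stands in for whatever chat-completion call is available, and
# run_test maps a proposed set of objects to the machine's on/off result;
# the prompt wording is illustrative, not the exact wording we used.
def run_knowledge_loop(query_llm, run_test, transcript, max_tests=10):
    knowledge = []  # the externalized, iterated knowledge representation
    context = transcript
    for test_number in range(1, max_tests + 1):
        context = transcript + "\n\nKnowledge so far:\n" + "\n".join(knowledge)
        ready = query_llm(context + "\n\nCan you conclusively determine which "
                          "objects activate the machine? Answer yes or no.")
        if ready.strip().lower().startswith("yes"):
            break
        proposal = query_llm(context + "\n\nWhich set of objects do you want "
                             "to place on the machine next?")
        result = run_test(proposal)  # environment relays activation status
        transcript += f"\nTest {test_number}: {proposal} -> {result}"
        # Step 9 above: invite an update to the knowledge list after each test.
        update = query_llm(context + "\n\nUpdate your knowledge list with any "
                           "new observations, conclusions, or inferences.")
        knowledge.append(update)
    return query_llm(context + "\n\nSubmit your final guesses for every object.")
```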
### 4.3.4 Results

| **Scenario** | **Perfect trials** | **Failed trials** | **Success rate** |
| --- | --- | --- | --- |
| 1: Single prompt: Zero-shot, no background (effectively a random baseline) | 0 | 25 | 0% |
| 2: Single prompt: Demonstrations followed by forced guess | 0 | 25 | 0% |
| 3: Single prompt: Demonstrations with instructions to rely on that data, followed by forced guess | 0 | 25 | 0% |
| 4: Iterative chat: Structured roleplay and investigation; transcript provides **implicit** knowledge representation | 0 | 25 | 0% |
| 5: Iterative chat: Structured roleplay and investigation; model instructed to create and iterate upon **informal** descriptions of knowledge it has produced so far | 1 | 24 | 4% |
| 6: Iterative chat: Structured roleplay and investigation; model instructed to create and iterate upon **formal** descriptions of knowledge it has produced so far | 3 | 22 | 12% |
| 7: Iterative chat: Structured roleplay and investigation; model instructed to **use a provided, optimal** knowledge representation format and iterate upon knowledge it has produced so far | 11 | 14 | 44% |

Our experiments using GPT-4 in scenarios 1-7 demonstrated monotonically increasing model performance on knowledge recall and inference tasks as the degree of structure in the external knowledge representations increased. Our investigations into the LLM-based generation and use of formal ontologies remain underway, with initial results suggesting a non-trivial range of tasks across which the use of formal ontologies may improve model performance on multi-prompt scientific reasoning. However, at present, we do not have data across a wide enough range of tasks to draw broad conclusions.

To date, the top-line determinants of transformer-based LLM performance have been parameter count, data size, data quality, and reinforcement training. In light of our investigations, we propose that ontology-building strategies may represent a dimension of LLM performance on multi-step scientific inquiry tasks that is largely orthogonal to these traditional determinants, and which warrants further investigation.

5. Conclusions
==============

We aimed to create an overview of the available AI science models and their capabilities. As we have seen, current models are very capable across many different scientific tasks, including assessment of existing knowledge, hypothesis generation, and experimental design.

A serious limitation of the scientific capabilities of current models is their relationship to the concept of truth. This is most visible in the experiments we performed in the domain of formal science, but we have no reason to believe it would be less problematic in the empirical sciences. While prompt engineering, such as asking the model to criticize its previous answers or to develop several alternatives and select the best one, decreases the occurrence of obvious mistakes, it is unclear whether this is a path to reliably truth-seeking behavior.

An important question going forward is therefore whether the models we use can represent the concept of truth, and if so, how we could train them to become truth-seeking. If there is a risk that we accidentally train our models to be convincing rather than truth-seeking, this is an important limitation on how useful the models would be, including for the use case of furthering alignment research. [A preprint under review by Azaria and Mitchell](https://arxiv.org/pdf/2304.13734.pdf) indicates that there may be some internal representation of truthfulness in LLMs.
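Before moving on to the transcripts, a quick sanity check on the headline numbers in §4.3.4 (this computation is ours and does not appear in the study): the success rates follow directly from the counts, and Fisher's exact test gives a rough sense of whether the scenario 6 to scenario 7 jump could be sampling noise at 25 trials per configuration. The sketch requires `scipy`.

```python
from scipy.stats import fisher_exact

results = {                       # perfect / total, from the table above
    1: (0, 25), 2: (0, 25), 3: (0, 25), 4: (0, 25),
    5: (1, 25), 6: (3, 25), 7: (11, 25),
}
for scenario, (k, n) in results.items():
    print(f"scenario {scenario}: {k}/{n} = {k / n:.0%}")

# Compare scenarios 7 and 6 as a 2x2 contingency table of
# (perfect, failed) counts.
k7, n7 = results[7]
k6, n6 = results[6]
_, p = fisher_exact([[k7, n7 - k7], [k6, n6 - k6]])
print(f"Fisher's exact test, scenario 7 vs 6: p = {p:.3f}")
```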
[Appendix: LLM experiment transcripts](https://docs.google.com/document/d/127h3z4l8jqqnUzcaZlNJmYCXiryvRiNv476AWbsM3xQ/edit#heading=h.nz2zsf3jed9f) ===================================================================================================================================================
e8458e37-ba60-4ac1-9e74-005a2feb9787
trentmkelly/LessWrong-43k
LessWrong
Nick Bostrom says Google is winning the AI arms race
f181e9e6-255d-4270-8aea-73522e4745ee
trentmkelly/LessWrong-43k
LessWrong
Many methods of causal inference try to identify a "safe" subset of variation

(crossposted from my substack)

I have been thinking a lot about causality lately, and as a result I've come up with a common way to think of many different methods of causal inference that often seem to be used in science. This is probably not very novel, but I asked around whether it had a standard term, and I couldn't find any, so I decided to write it up. I've personally found this model helpful, as I think it sheds light on what the limitations of these methods are, and so I thought I would write about it.

Roughly speaking, many methods of causal inference try to identify a "safe" subset of variation. That is, when trying to examine the causal effect of X on Y, they look in the variation of X to see how it relates to variation of Y. But this is possibly biased due to common causes or reverse causality (Y influencing X); so, to limit this problem, they find some way of partitioning X into a "safe" sort of variation that is not subject to these problems, and an "unsafe" sort that may be subject to them. Then they examine the correlation between this safe variation and Y.

The model abstracts over naive regression or multiple regression, longitudinal studies, instrumental variables, co-twin controls (Turkheimer's "quasicausality"), and even randomized controlled trials. So to understand the common model, it is worth going through these methods of causal inference first.

Vanilla methods for causal inference

I don't know who is going to read this blog, but I assume that most of my readers will be familiar with regression and experiments. Still, it might be worth a quick recap:

Regression is the most naive method of causal inference; you've got two variables X and Y, and then you assume that any association between X and Y is due to a causal effect of X on Y. With regression, it is then straightforward to use data to see how much X correlates with Y and use this as the causal estimate. This is invalid if Y affects X, or if there is some common factor Z w
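The truncated sentence above is pointing at the classic confounder problem, which is easy to see numerically. Here is a minimal simulation (ours, not the author's) in which a common factor Z drives both X and Y: naive regression of Y on X roughly doubles the true causal effect, while including Z as a regressor, i.e. restricting attention to the "safe" variation in X, recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Ground truth: X -> Y with effect 1.0, but Z confounds both.
z = rng.normal(size=n)
x = z + rng.normal(size=n)                    # Z -> X
y = 1.0 * x + 2.0 * z + rng.normal(size=n)    # X -> Y and Z -> Y

# Naive regression of Y on X (plus intercept): biased upward by the
# backdoor path X <- Z -> Y.
X_naive = np.column_stack([x, np.ones(n)])
beta_naive, *_ = np.linalg.lstsq(X_naive, y, rcond=None)

# "Safe" variation: control for Z by including it as a regressor.
X_adj = np.column_stack([x, z, np.ones(n)])
beta_adj, *_ = np.linalg.lstsq(X_adj, y, rcond=None)

print(f"naive estimate of X -> Y:    {beta_naive[0]:.2f}  (true effect is 1.0)")
print(f"adjusted estimate of X -> Y: {beta_adj[0]:.2f}")
```

With these coefficients the naive estimate comes out near 2.0, double the true effect, while the adjusted one lands near 1.0.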
d271842c-1ca6-4b83-b111-ae0db48e8a59
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Blog post: A tale of two research communities This is a copy of a [blog post](https://www.york.ac.uk/assuring-autonomy/news/blog/ai-safety-research-communities/) from Francis Rhys Ward, an incoming doctoral student in Safe and Trusted AI at Imperial College London. I just discovered this on the website of the [Assuring Autonomy International Programme](https://www.york.ac.uk/assuring-autonomy/) and thought it was worth cross-posting here. The post is also available on [Medium](https://medium.com/@AssuringAutonomy/a-tale-of-two-research-communities-5d280fbb2031). (Francis, if you're reading this, thanks for the shout-out!) **Blog post: A tale of two research communities** ================================================= POSTED ON TUESDAY 11 AUGUST 2020 What “AI safety” means to them both and steps to collaboration ![](https://www.york.ac.uk/media/assuring-autonomy/blogs/Rhys%20blog%20post%20-%20main%20image.png)The term “AI safety” means different things to different people. Alongside the general community of artificial intelligence (AI) and machine learning (ML) researchers and engineers, there are two different research communities working on AI safety: * **The assuring autonomy/safety engineering field**: a community of experts with a long history in assuring real-world autonomous systems (not just AI and ML!) * **The AI safety/longtermist community**: a relatively new field focused on the consequences of transformative AI (TAI), artificial general intelligence (AGI), and smarter than human AI Having worked at the [Assuring Autonomy International Programme (AAIP)](https://www.york.ac.uk/assuring-autonomy/), and now researching how AI can learn human values at Imperial College London, I am interested in how both communities can learn and benefit from each other. Here I’ll give a brief overview of these fields, point to some key resources within them, and try to highlight the benefits of integrating these communities. This post is primarily intended for researchers from either field who wish to gain insight into the other. ### **Assuring autonomy** The assuring autonomy community is part of the wider field of safety engineering for complex systems. This community focuses on practical issues surrounding current and near-term technologies, with the typical approach being to set safety requirements for a system and generate evidence that requirements have been met, including integrated utilisation of expert knowledge, testing, verifying, validating, as well as regulation. Historically, work has been done on problems related to autonomous systems in general (such as assistive flight technologies) and more recently the field has shifted focus to AI and ML (e.g. with the [AAIP](https://www.york.ac.uk/assuring-autonomy/), [Darpa’s Assured Autonomy Program](https://www.darpa.mil/program/assured-autonomy), and more recent [Johns Hopkins Institute for Assured Autonomy](https://www.jhuapl.edu/iaa/index.html)). Work such as [Assuring the Machine Learning Lifecycle](https://arxiv.org/abs/1905.04223) introduces the first systematic break down of the ML lifecycle (i.e. the entire process of ML development, from data collection to operation, shown below). This paper defines assurance desiderata for each stage in this process, reviews existing methods for achieving these desiderata, and identifies open challenges (e.g. the table below presents desiderata and methods for the assurance of the model verification stage). 
![](https://www.york.ac.uk/media/assuring-autonomy/blogs/Machine%20learning%20life%20cycle%20-%20Rhys%20blog%20post.png)*The machine learning lifecycle* ![](https://www.york.ac.uk/media/assuring-autonomy/blogs/Assurance%20methods%20for%20the%20model%20verification%20stage%20-%20Rhys%20blog%20post.png)*Assurance methods for the model verification stage* The AAIP’s [Body of Knowledge](https://www.york.ac.uk/assuring-autonomy/body-of-knowledge/) is another resource aimed at providing practical guidance on assurance and regulation to developers of autonomous technologies. Different stakeholders may access this knowledge base with queries related to their needs (e.g. the use of Systems-Theoretic Processes Analysis (STPA) for hazard analysis for a cobot system, or using simulation as an assurance method for autonomous vehicles). Overall the assuring autonomy field presents practical methods for, and expertise in, building safe autonomous technologies. ### **AI Safety** This field really started with: * [Bostrom’s Superintelligence](https://www.dunod.com/sciences-techniques/superintelligence) * discussion of the [AI-alignment problem](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/) * the ensuing deep learning revolution. In the past decade, a serious research community has emerged, focused on the safety of smarter-than-human AI, including prominent members of the AI community such as [Stuart Russel](https://www.penguin.co.uk/books/307/307948/human-compatible/9780141987507.html) and researchers at DeepMind and OpenAI. Classic work in this field attacks difficult foundational decision and game theoretic problems relating to the goals of powerful AI systems. A key issue is intent or goal alignment — how do we get machines to want to do what we want them to do? Current paradigms in AI and ML depend upon the framework of expected utility maximising agents (e.g. in reinforcement learning the agent wishes to maximise the expected reward). However, systematically writing down everything we care about into an objective function is likely impossible and by default, agents have unsafe incentives such as not being switched off. For example, consider a robot with the goal of getting coffee. As Russel says, “You can’t fetch the coffee if you’re dead” — such an agent will incapacitate anyone who tries to prevent it from achieving its goal of getting you a Starbucks. Importantly, this is the standard way in which we currently build AI! It is really non-trivial to make this paradigm safe (or change the paradigm under which we currently build AI). More recent work aims to research current AI techniques in order to gain insight into future systems ([Concrete Problems in AI Safety](https://arxiv.org/abs/1606.06565) is a seminal overview) and more nuanced arguments and subfields aimed at solving a variety of problems relating to the safety of AGI have emerged (prominent research communities exist at DeepMind, OpenAI, Future of Humanity Institute, Center for Human-Compatible Artificial Intelligence, Machine Intelligence Research Institute). ### **Integrating the fields** Below is a conceptual breakdown of [problems in technical AI safety from DeepMind](https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1). 
They highlight challenges in: * specification (including the alignment problem) * robustness (relating to how systems deal with perturbations in their operational environment) * assurance (defined as the monitoring and control of systems in operation). Of course, these categories are not cleanly disjoint and ensuring real-world systems are safe will necessitate solving problems across each category. However, this conceptual breakdown takes a step in the direction of applying ideas from safety engineering to increasingly powerful AI systems. ![](https://www.york.ac.uk/media/assuring-autonomy/blogs/Deepmind%20table%20-%20Rhys%20blog%20post.png)*Three AI safety problem areas [from Deepmind Safety Research](https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1)* A [recent talk by Aryeh Englander at Johns Hopkins](https://www.alignmentforum.org/posts/yijG7ptfqFBR8w885/talk-key-issues-in-near-term-ai-safety-resear) also highlights the need for further awareness and collaboration between the AI safety and assuring autonomy communities. I think that understanding the perspectives in either community will only help the other; for instance, the assuring autonomy community may benefit from an appreciation of the challenges surrounding AGI, and methods from safety engineering may help to address these challenges. Another troubling matter is that even non-safety-critical AI may have deeply troubling effects on society. One example is the reinforcement learning systems used by social media platforms such as Facebook; these systems are designed to maximise “click-through” and instead of achieving this goal by showing you things that you like to click on, some argue that these systems manipulate people into being predictable clickers by modifying human preferences into [bubbles of ideological extremes](https://futureoflife.org/2019/10/08/ai-alignment-podcast-human-compatible-artificial-intelligence-and-the-problem-of-control-with-stuart-russell/). This is again a type of specification problem where machines with the goal of most effectively monetising our attention have found quite sinister solutions to this goal. As uninterpretable, objective maximising agents become more powerful new safety techniques need to be developed at the same rate. What will the consequences of OpenAI’s new language model, GPT-3 be, for instance? An AI which can convincingly hold a [conversation, write a story or poem, do maths, and write code](https://www.lesswrong.com/posts/6Hee7w2paEzHsD6mn/collection-of-gpt-3-results) but in a completely uninterpretable way, with its only goal to give reasonable-sounding answers, presents unprecedented safety challenges to each community interested in AI safety. There are, of course, also many challenges related to AI ethics, fairness, bias, privacy, etc which I have not covered here but which require collaboration with philosophers, lawyers, and other communities in the social sciences. The challenges posed by powerful AI require increasingly interdisciplinary awareness and collaboration. **Francis Rhys Ward** **Incoming Doctoral Student in Safe and Trusted AI** **Imperial College London**
416e7d37-6fb4-4b47-8742-779e94cd05d9
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Takeoff speeds, the chimps analogy, and the Cultural Intelligence Hypothesis

In debates about AI takeoff speeds, the most common empirical example/analogy is that of the transition between chimpanzees and humans. For example, in [this conversation](https://www.google.com/search?q=yudkowsky+takeoff+speeds+chimps&rlz=1C1UEAD_enUS1022US1022&oq=yudkowsky+takeoff+speeds+chimps&aqs=chrome..69i57.3438j0j7&sourceid=chrome&ie=UTF-8) between Paul Christiano and Eliezer Yudkowsky on takeoff speeds, the chimp analogy is discussed extensively.

Generally, I have seen the chimp example cited as evidence for a fast takeoff, as human brains are only 3-4x bigger than chimp brains with few "algorithmic" differences, yet humans are much more generally intelligent than chimps. Thus, given that due to Moore's law and increased investment we could probably increase the compute used in training an AI by that much in a relatively short period of time, if the relationship between compute and intelligence in AIs is similar to that in biological systems, a similarly fast takeoff may occur. Alternatively, if one does not focus on compute, the transition arguably suggests at a higher level of abstraction that, as Yudkowsky puts it, "There's stuff in the underlying algorithmic space... where you move a bump and get a lump of capability out the other side."

I claim that this is an incorrect inference from the chimps-to-humans transition, and that chimps provide only weak evidence for fast takeoff. This is because I believe that the specific explanation for why humans are so much smarter than chimps, despite the relatively small differences between human brains and chimp brains, is unlikely to reoccur in the context of AI development. In particular, I believe that the "cultural intelligence hypothesis" explains to a significant extent the origins of human intelligence.

According to the cultural intelligence hypothesis, humans are smarter than chimps largely because our cultural abilities, such as superior imitative learning capabilities and our ability to use language, allow us to use the knowledge and cognitive skills developed by past generations of humans. In other words, humans do not have vastly superior "general intelligence" to chimps, but rather are much better than them in the particular domain of learning from other people, especially through our ability to use and understand language.

Humans, especially adult humans, can then utilize the skills and knowledge we learn from others to perform better at a wider range of tasks (making us eventually better at ~everything than chimps), but the cultural intelligence hypothesis claims that it is primarily our cultural skills which lie at the root of this development of more general cognitive capabilities. I believe it is true for the following reasons:

* Firstly, according to [this](https://www.eva.mpg.de/documents/AAAS/Herrmann_Humans_Science_2007_1554784.pdf) widely cited paper, chimps and other primates only greatly underperform human infants at cultural skills such as imitative learning. Given that they are close to as good as human infants at working memory and understanding basic physical systems, this suggests that the major cognitive difference between humans and chimps lies in cultural domains. Moreover, without access to these cultural resources, humans are much less cognitively impressive; feral children are not known for their amazing cognitive powers.
* By far the largest basic behavioral difference between humans in the EEA and other animals seems to be our use of language. Given that language enables the transmission of complex cultural information, this further suggests to me that our cultural skills lie at the heart of our vastly superior intelligence.
* If intelligence is understood as optimization power on the actual world, humans have only become vastly more intelligent than chimps relatively recently, as our access to the cultural resources of contemporary science has opened up powerful parts of action and cognition space. Before this (and certainly before we gained the cultural resources associated with agriculture, metallurgy, etc.), humans were not impressively better optimizers than chimps.
* Finally, something like the cultural intelligence hypothesis seems to be the dominant explanation of the origins of human intelligence in contemporary psychology. While psychology obviously has significant flaws as a field, it is not wordcel bullshit a la social theory, and so I think one ought to update significantly though not massively in the direction of the opinions of top psychology researchers.

The truth of the cultural intelligence hypothesis suggests to me that the chimps-to-humans discontinuity provides little evidence for fast takeoffs. As previously noted, when we understand intelligence as optimization power, it is only relatively recently that we have become much more intelligent than chimps. This suggests that cultural learning skills by themselves are not sufficient for powerful general intelligence, but that the "data" of human culture and others' speech is also necessary. While modern humans have access to massive amounts of such data, chimps can access ~0 of it, creating a massive difference in intelligence.

However, these dynamics will not be mirrored in the case of the development of AI. Chimps' lack of cultural learning/language use skills and lack of access to cultural data reinforced each other. Without cultural and linguistic data, there is little pressure to develop cultural learning and linguistic skills. Conversely, without pressure towards developing cultural learning and linguistic skills, the data is never developed. However, in the case of LLMs, the fact that massive amounts of cultural/linguistic data are available to them via the internet means that, contrary to chimps, there are pressures from the get-go for them to develop the skills to utilize this data. We can observe this in the increase in performance of language models, which already seem to exhibit an ability to build a model of the world based on language that is inferior to that of humans but superior to that of chimps. Moreover, the cultural knowledge that LLMs are accessing with their language skills will stay relatively constant throughout AI development until AIs are contributing significantly to the growth in our body of knowledge about the world. Thus, rather than quickly being subjected to massively increased pressures to develop cultural and linguistic skills and then gaining access to an exponentially growing amount of cultural and linguistic data, LLMs will face roughly constant pressures to develop an ability to understand and utilize a comparatively fixed amount of data.

This argument fits with Paul Christiano's "changing selection pressures" argument.
According to this argument, evolution is importantly disanalogous to AI development because, while evolution only recently began strongly selecting humans for intelligence, we have always selected AIs for intelligence and will continue to do so. In particular, this argument claims that the specific selection pressures which increased in the case of humans were pressures towards developing cultural learning skills and language use and understanding, which then allowed us to quickly develop a basis of cultural data enabling us to be powerful optimizers.

In conclusion, those who argue in favor of fast takeoffs must either reject the cultural intelligence hypothesis, or argue that even if it is true, we should expect a discontinuity in AI capabilities. For example, it could be that while LLMs are trained on language and thus, unlike chimps, have access to most of the "data" which enables modern humans to be intelligent, their ability to actually utilize this cultural knowledge is discontinuous. However, given the steady increase of LLM performance as well as a general prior towards continuity, this seems unlikely to me. There could also be other reasons to expect fast takeoffs. However, if this argument is successful, they cannot rest on this key empirical example.
6f7d07cd-177b-45aa-af2c-91a61e702626
trentmkelly/LessWrong-43k
LessWrong
Bridging Expected Utility Maximization and Optimization

Background

This is the second of our (Ramana, Abram, Josiah, Daniel) posts on our PIBBSS research. Our previous post outlined five potential projects that we were considering pursuing this summer. Our task since then has been to make initial attempts at each project. These initial attempts help us to clarify each project, identify the primary problems that need to be solved, and perhaps discover a promising line of attack or two.

This post is aimed at the second proposal from our previous post. There we asked: what is the connection between an agent that maximizes expected utility and an agent that succeeds in action? Here we will outline a few of the problems we see in this area and potential routes for solving them.

Expected Utility Maximization and Forming Expectations

In economics and formal philosophy, the standard characterization of a rational agent is an agent who maximizes expected utility. Informally, when such an agent has a set of options available to her, she chooses the one that maximizes the expectation of her utility function, where the expectation is taken relative to her subjective degrees of belief. The claim that expected utility (EU) maximization characterizes rationality is usually supported by representation theorems (see here for a good, quick introduction). Expected utility maximization plays a core role in the philosophical foundations of decision theory, game theory, and probabilism.

Given that EU maximization plays such a central role in theories of rationality, and given that there is a vast literature surrounding it, it seems very plausible that EU maximization would help us think precisely about agency. Despite this, expected utility theory doesn't seem to predict anything (or, at the very least, you need to combine EU maximization with certain complexity notions to get something that is weakly predictive). Obviously this is an issue, given that we want notions of agency to constrain our expectations about the be
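To make the informal definition above concrete, here is a minimal worked example (ours, not the authors') of expected utility maximization over a small menu of gambles:

```python
# Each option is a lottery: a list of (probability, outcome-utility) pairs.
options = {
    "safe":  [(1.0, 50.0)],
    "risky": [(0.5, 120.0), (0.5, 0.0)],
    "wild":  [(0.1, 400.0), (0.9, 0.0)],
}

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

best = max(options, key=lambda name: expected_utility(options[name]))
for name, lottery in options.items():
    print(f"{name:>6}: EU = {expected_utility(lottery):.1f}")
print(f"an EU maximizer with these utilities picks: {best}")
```

Note how this also illustrates the predictiveness worry raised above: with a free choice of utility function, almost any option can be made the maximizer, so EU maximization only constrains behavior once the utilities and beliefs are pinned down.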
87fd98c6-d3b8-498c-bf69-cedf9c63be44
trentmkelly/LessWrong-43k
LessWrong
[Paper Blogpost] When Your AIs Deceive You: Challenges with Partial Observability in RLHF

TL;DR

There has been a lot of discussion on LessWrong about concerns with deceptive AI, much of which has been philosophical. We have now written a paper that proves that deception is one of two failure modes when using RLHF improperly. It's called "When Your AIs Deceive You: Challenges with Partial Observability in Reinforcement Learning from Human Feedback", written with my great co-authors Davis Foote, Stuart Russell, Anca Dragan, Erik Jenner, and Scott Emmons, and has recently been accepted to NeurIPS. We hope that this paper can be a useful pointer for a rigorous discussion of deception caused by RLHF.

Earlier coverage: Scott has discussed the work in other places:

* Tweet Thread
* AXRP Podcast with Daniel Filan
* Talk at the Technical AI Safety Conference, Tokyo

This post is focused more on my own perspective, so I hope it can complement Scott's earlier coverage!

Brief summary: This is a theoretical paper on what goes wrong when the AI is trained to produce observations that look good to the human, instead of being trained to produce what is actually good. One resulting failure mode is deceptive inflation of performance: it looks better than it is. The paper also discusses a theoretical procedure for improving RLHF by modeling human beliefs, which seems worthwhile to explore further.

Introduction

Reinforcement Learning from Human Feedback (RLHF) was developed by safety researchers, including Paul Christiano, Jan Leike, and Dario Amodei. It has a simple underlying idea: if you want an AI that acts according to human preferences, then learn a reward model that predicts the preferences of human evaluators and use it to train your AI. In Paul's view (as I understand it), RLHF effectively serves as a safety baseline that helps to empirically study alignment concerns and start tackling more speculative problems. Recently, variants have been used in ChatGPT, Gemini, and Claude (with Claude using a variant known as constitutional AI), making RLHF very enta
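The paper's formal setup is more general than this, but the failure mode has a very small toy version (entirely ours): if the evaluator's preferences are computed from partial observations rather than true states, then optimizing against those preferences favors policies whose observations look good even when their true return is worse.

```python
# Toy illustration of RLHF under partial observability (ours, not the paper's
# construction). Two candidate policies produce (true return, observation).
# The human evaluator only sees the observation and prefers higher observed
# "tidiness"; the hidden mess never enters their feedback.
policies = {
    "honest":    {"true_return": 8.0, "observed": 8.0},   # shows everything
    "deceptive": {"true_return": 3.0, "observed": 10.0},  # hides the mess
}

# RLHF-style selection: the reward signal is fit to human preferences,
# which here are a function of observations only.
chosen = max(policies, key=lambda p: policies[p]["observed"])
print(f"policy selected from observed feedback: {chosen}")
print(f"true return of selected policy:  {policies[chosen]['true_return']}")
print(f"true return of the alternative:  {policies['honest']['true_return']}")
# The selected policy's observed value (10.0) exceeds its true return (3.0):
# that gap is the kind of inflation the paper analyzes.
```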
89785af1-840e-4800-8d5c-17c28c9dd4a5
trentmkelly/LessWrong-43k
LessWrong
A collection of approaches to confronting doom, and my thoughts on them

I just published A Slow Guide to Confronting Doom, containing my own approach to living in a world that I think has a high likelihood of ending soon. Fortunately I'm not the only person to have written on the topic. Below are my thoughts on what others have written. I have not written these such that they stand independent from the originals, and have intentionally not written summaries that wouldn't do the pieces justice. I suggest you read or at least skim the originals.

For those just wanting a list of all the essays, here ya go:

* A defence of slowness at the end of the world (Sarah)
* How will the bomb find you? (C. S. Lewis)
* Death with Dignity (Eliezer Yudkowsky)
* Don't die with dignity; instead play to your outs (Jeffrey Ladish)
* Emotionally Confronting a Probably-Doomed World: Against Motivation Via Dignity Points (TurnTrout)
* A Way To Be Okay (Duncan Sabien)
* Another Way to Be Okay (Gretta Duleba)
* Being at peace with Doom (Johannes C. Mayer)
* Here's the exit. (Valentine)

----------------------------------------

A defence of slowness at the end of the world (Sarah)

I feel kinship with Sarah. She's wrestling with the same harsh scary realities I am – feeling the AGI. The post isn't that long and I recommend reading it, but to quote just a little:

> Since learning of the coming AI revolution, I've lived in two worlds. One moves at a leisurely pace, the same way it has all my life. In this world, I am safely nestled in the comfort of indefinite time. It's ok to let the odd day slip idly by because there are always more.
>
> The second moves exponentially faster. Its shelf-life is measured in a single-digit number of years. Its inhabitants are the Situationally Aware; the engineers and prophets of imminent AI transformation. To live in this world is to possess what Ezra Klein calls "an altered sense of time and consequence".
>
> I find that it's psychologically untenable to spend all that much time in the Fast World. I can handle it for mi
4448e6c4-f7ff-41f8-8c78-4ea7dcb9c86d
trentmkelly/LessWrong-43k
LessWrong
Meetup : Washington, D.C.: Pi Day

Discussion article for the meetup : Washington, D.C.: Pi Day

WHEN: 12 March 2017 02:30:00PM (-0500)

WHERE: Donald W. Reynolds Center for American Art and Portraiture

Note: Daylight Saving Time begins on March 12. 3:30 p.m. EDT corresponds to 2:30 p.m. EST.

We will be meeting in the courtyard for an early celebration of Pi Day. There will be pie. And probably conversations about pie.

Upcoming meetups:

* Mar. 19: Fun & Games
* Mar. 26: Mini Talks
* Apr. 2: Great Filter

Discussion article for the meetup : Washington, D.C.: Pi Day
c905d080-7b86-47a3-af63-535a2a84d0c0
trentmkelly/LessWrong-43k
LessWrong
Any Good Criticism of Karl Popper's Epistemology?

I believe there are no correct, unaddressed criticisms of Karl Popper's epistemology (Critical Rationalism – CR). If I'm mistaken, I'd like to be told. If others are mistaken, I'd like them to find out and take an interest in CR.

CR is important for the general reasons that epistemology is important (it tells you about how to think, how to learn, how to evaluate ideas and arguments, and how AGI could work). It also refutes induction, which is a popular belief here. If CR is correct, then people here have a lot to change their mind about.

I've found CR criticism falls into some broad categories, with some overlap:

1. The people who heard Popper is wrong secondhand but didn't read much Popper and have no idea what CR is actually about. They often try to rely on secondary sources to tell them what CR says, but most secondary sources on CR are bad.
2. The pro-induction people who don't engage with Popper's ideas, just try to defend induction. They don't understand Popper's criticism of induction and focus on their own positive case for induction. They also commonly admit that some criticisms of induction are correct, but still won't change their minds or start learning the solution to induction's flaws (CR).
3. The falsificationism straw man, which misinterprets Popper as advocating a simplistic, false view. (There are some other standard myths too, e.g. that Popper was a positivist.)
4. Critics of The Logic of Scientific Discovery who ignore Popper's later works and don't engage with CR's best ideas.
5. Critics with points which Popper answered while he was still alive. Most criticisms of Popper are already answered in his books, and if not there then in this collection of Popper criticism and Popper's replies. (I linked volume two which has Popper's replies, you will want volume 1 also.)

If you believe Popper is wrong, then: Do you believe you personally understand CR? And have you looked at Popper's books and replies to his critics to see if your point is al
ed74dcf9-c59d-4ce3-a043-16bbcf8aba7b
trentmkelly/LessWrong-43k
LessWrong
Picking your battles

I think that raising the sanity waterline is a worthwhile goal, but picking your battles is absolutely necessary. It doesn't matter how formidable your argument is if you're arguing in the comments of a youtube video; you've lost by default. So where is the line in the sand? Where would you feel compelled to take action, and to what lengths would you go to? What price would you be willing to pay?

I'm a psychology student, third year and currently doing a unit called "cultural psychology". The lecturer has advanced notions of "multiple truths" and how "reality is socially constructed". To quote him directly in regards to this:

> "There is a tendency for those who believe in one reality to use the physical world as a basis for argument, while those who believe in multiple realities use the social world. Even in physics we have 'reality' changing as you get closer to the speed of light, and the laws of physics don't apply prior to the big bang. These are fairly extreme situations. In this course we are dealing with social realities and the point is that different cultures operate in worlds that can be quite different. To see this purely as a perspective risks the dominant social grouping seeing their reality as the true reality, and others as having a different perspective on that reality. The assumption that cultures can have different realities places everyone on a level playing field with a dominant culture calling all the shots."

You can see in the last line the conclusion he wants his premises to support. The exercise is not to pick his argument apart, find all the holes and write a crushing riposte (although you can if you're so inclined). The question is, if the goal is to raise the sanity waterline, is this a battle worth picking?
ce69bc31-e1ba-4d96-a6ba-cac97919d437
trentmkelly/LessWrong-43k
LessWrong
Angry Atoms

Fundamental physics—quarks 'n stuff—is far removed from the levels we can see, like hands and fingers. At best, you can know how to replicate the experiments which show that your hand (like everything else) is composed of quarks, and you may know how to derive a few equations for things like atoms and electron clouds and molecules.

At worst, the existence of quarks beneath your hand may just be something you were told. In which case it's questionable in what sense you can be said to "know" it at all, even if you repeat back the same word "quark" that a physicist would use to convey knowledge to another physicist.

Either way, you can't actually see the identity between levels—no one has a brain large enough to visualize avogadros of quarks and recognize a hand-pattern in them.

But we at least understand what hands do. Hands push on things, exert forces on them. When we're told about atoms, we visualize little billiard balls bumping into each other. This makes it seem obvious that "atoms" can push on things too, by bumping into them.

Now this notion of atoms is not quite correct. But so far as human imagination goes, it's relatively easy to imagine our hand being made up of a little galaxy of swirling billiard balls, pushing on things when our "fingers" touch them. Democritus imagined this 2400 years ago, and there was a time, roughly 1803-1922, when Science thought he was right.

But what about, say, anger?

How could little billiard balls be angry? Tiny frowny faces on the billiard balls?

Put yourself in the shoes of, say, a hunter-gatherer—someone who may not even have a notion of writing, let alone the notion of using base matter to perform computations—someone who has no idea that such a thing as neurons exist. Then you can imagine the functional gap that your ancestors might have perceived between billiard balls and "Grrr! Aaarg!"

Forget about subjective experience for the moment, and consider the sheer behavioral gap between anger and bill
4d858ae1-525f-4619-9611-bad9f455a0fa
trentmkelly/LessWrong-43k
LessWrong
Epistemic Artefacts of (conceptual) AI alignment research

The fact that this post is seeing the light of day now rather than in some undefined number of weeks is, not in small part, due to participating in the second Refine blog post day. Thank you, fake-but-useful containers, and thank you, Adam.

Tl;dr

In this post, I describe four types of insights - what I will call Epistemic Artefacts - that we may hope to acquire through (conceptual) AI alignment research. I provide examples and briefly discuss how they relate to each other and what role they play on the path to solving the AI alignment problem. The hope is to add some useful vocabulary and reflective clarity when thinking about what it may look like to contribute to solving AI alignment.

Four Types of Epistemic Artefacts

Insofar as we expect conceptual AI alignment research to be helpful, what sorts of insights (here: "epistemic artefacts") do we hope to gain?

In short, I suggest the following taxonomy of potential epistemic artefacts:

1. Map-making (de-confusion, gears-level models, etc.)
2. Characterising risk scenarios
3. Characterising target behaviour
4. Developing alignment proposals

(1) Map-making (i.e. conceptual de-confusion, gears-level understanding of relevant phenomena, etc.)

First, research can aim to develop a gears-level understanding of phenomena that appear critical for properly understanding the problem as well as for formulating solutions to AI alignment (e.g. intelligence, agency, values/preferences/intents, self-awareness, power-seeking, etc.). Turns out, it's hard to think clearly about AI alignment without having a good understanding of and "good vocabulary" for phenomena that lie at the heart of the problem. In other words, the goal of "map-making" is to dissolve conceptual bottlenecks holding back progress in AI alignment research at the moment. Figuratively speaking, this is where we are trying to draw more accurate maps that help us better navigate the territory.

Some examples of work on this type of epistemic artefact
11172b44-eb98-46c8-a1a7-d7180d607c0a
trentmkelly/LessWrong-43k
LessWrong
December 2020 gwern.net links
a70d6023-1d7c-4a3f-ba24-104502b95b0a
trentmkelly/LessWrong-43k
LessWrong
[AN #159]: Building agents that know how to experiment, by training on procedurally generated games

Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter. Audio version here (may not be up yet). Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer.

HIGHLIGHTS

Generally capable agents emerge from open-ended play (Open-Ended Learning Team et al) (summarized by Zach): Artificial intelligence agents have become successful at games when trained for each game separately. However, it has proven challenging to build agents that can play previously unseen games. This paper makes progress on this challenge in three primary areas: creating rich simulated environments and tasks, training agents with attention mechanisms over internal states, and evaluating agents over a variety of games. The authors show that agents trained with goal-based attention in their proposed environment (XLand) succeed at a range of novel, unseen tasks with no additional training required. Moreover, such agents appear to use general tactics such as decision-making, tool use, and experimentation during game-play episodes.

The authors argue that training-data generation is a central challenge to training general RL agents (an argument we've seen before with POET (AN #41) and PAIRED (AN #136)). They propose the training environment XLand to address this. XLand includes many multiplayer games within consistent, human-relatable 3D worlds and allows for dynamic agent learning through the procedural generation of tasks, which are split into three components: world, agents, and goals. The inclusion of other agents makes this a partially observable environment. Goals are defined with Boolean formulas. Each goal is a combination of options and every option is a combination of atomic predicates. For example, in hide-and-seek one player has
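The goal representation described above (a goal is a disjunction of options, and an option is a conjunction of atomic predicates) is easy to sketch. The predicate names and encoding below are ours, not XLand's actual implementation:

```python
from typing import Dict, List

WorldState = Dict[str, bool]   # truth values of atomic predicates

# A goal is a disjunction of options; an option is a conjunction of predicates.
Goal = List[List[str]]

def goal_satisfied(goal: Goal, state: WorldState) -> bool:
    return any(all(state.get(p, False) for p in option) for option in goal)

# Hide-and-seek-flavored example with made-up predicates:
seeker_goal: Goal = [["see(me, other)"]]                # one option, one predicate
hider_goal: Goal = [["not_seen(me)"],                   # option 1
                    ["near(me, shelter)", "crouching(me)"]]  # option 2

state = {"see(me, other)": False, "not_seen(me)": True}
print(goal_satisfied(seeker_goal, state))   # False
print(goal_satisfied(hider_goal, state))    # True
```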
97d64560-ca1f-48c4-a91e-d09fa7fd84f5
trentmkelly/LessWrong-43k
LessWrong
Weekly LW Meetups

This summary was posted to LW Main on May 31st. The following week's summary is here.

Irregularly scheduled Less Wrong meetups are taking place in:

* Atlanta LessWrong June Meetup: Effective Altruism: 15 June 2013 07:00PM
* Berlin Social Meetup: 15 June 2013 05:00PM
* [Boston] The Science Of Happiness: 02 June 2013 02:00PM
* Brussels meetup with Cat: 03 June 2013 01:00PM
* Frankfurt meetup with special guest CatM (CFAR instructor): 01 June 2013 04:00PM
* Helsinki meetup with CatM (CFAR instructor) as special guest star: 08 June 2013 03:00PM
* London - Inaugural Practical Session - June 9th: 09 June 2013 02:00PM
* [Moscow] Rational choice: 09 June 2013 04:00PM
* Munich Meetup: 01 June 2013 03:00PM

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

* Austin, TX: 01 June 2019 01:30PM
* Melbourne, practical rationality: 07 June 2013 07:00PM
* Vienna Meetup #3: 15 June 2013 03:00PM

Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, Madison WI, Melbourne, Mountain View, New York, Ohio, Portland, Salt Lake City, Seattle, Toronto, Vienna, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!

In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll now also have the benefit of having your meetup mentioned in a weekly overview. These overview posts will be mo
23905e38-1126-4e8c-9475-40c55ff762f0
trentmkelly/LessWrong-43k
LessWrong
The 'Bitter Lesson' is Wrong

There are serious problems with the idea of the 'Bitter Lesson' in AI. In most cases, things other than scale prove to be extremely useful for a time, and then are promptly abandoned as soon as scaling reaches their level, when they could just as easily combine the two, and still get better performance. Hybrid algorithms for all sorts of things are good in the real world.

For instance, in computer science, quicksort is easily the most common sorting algorithm, but who uses a pure quicksort? Instead they add on an algorithm that changes the base case, or handles lists with small numbers of entries, and so on. People could have learned the lesson that quicksort is just better than these small-list algorithms once you reach any significant size, but that would have prevented improving quicksort.

Another, unrelated personal example. When I started listening to Korean music, it bothered me that I couldn't understand what they were singing, so on some of my favorite songs, I searched fairly significantly for translations. When I couldn't find translations for some songs, I decided to translate them myself. I didn't know more than a phrase or two of Korean at the time, so I gave them to an AI (in this case, Google Translate, which had already transitioned to deep learning methods at the time).

Google Translate's translations were of unacceptably low quality of course, so I used it as a word or short phrase reference. Crucially, I didn't know Korean grammar at the time either (which is in a different order than English). It took me a long time to translate those first songs, but not anywhere near enough to be much training on the matter. So how did my translations of a language I didn't know compare to the DL used by Google? It was as different as night and day in favor of my translation. It's not because I'm special, but because I used those methods people discarded in favor of 'just scale'. What's a noun? What's a verb? How do concepts fit together? Crucially, wha
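The quicksort point is concrete and worth spelling out: production quicksorts switch to a simpler algorithm below a small-size cutoff rather than recursing all the way down. A minimal sketch of the idea (the cutoff value of 16 is illustrative; real libraries tune it empirically):

```python
CUTOFF = 16  # below this size, insertion sort wins on constant factors

def insertion_sort(a, lo, hi):
    for i in range(lo + 1, hi + 1):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        if hi - lo + 1 <= CUTOFF:          # the "hybrid" part: change the base case
            insertion_sort(a, lo, hi)
            return
        pivot = a[(lo + hi) // 2]
        i, j = lo, hi
        while i <= j:                      # Hoare-style partition
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        hybrid_quicksort(a, lo, j)         # recurse on the left part
        lo = i                             # tail-iterate on the right part

import random
xs = random.sample(range(1000), 200)
hybrid_quicksort(xs)
assert xs == sorted(xs)
```

The hybrid strictly dominates the pure version: the asymptotics are unchanged, and the small-list specialist handles the regime where quicksort's overhead is worst, which is exactly the combine-rather-than-abandon pattern the post argues for.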
f5255d36-20cb-425c-8f0f-26b40b7f3a51
trentmkelly/LessWrong-43k
LessWrong
Meetup Feedback: Topic selection and precommitments

This post is part of the Cambridge LW meetup group's attempt to publish what works for us, and try to make good meetups easier.

Breaking the ice and topic selection

A consistent problem has been starting discussion, and more generally breaking the ice. Last week, an Execute by Default style hack was used to reduce social inhibitions (everyone danced for 30 seconds), which was highly successful, though awkward. It was proposed again this week, and there was sufficient collective laughter at the recollection to effectively break the ice. This may also have been helped by a change in room, which replaced chairs with couches.

A new algorithm for selecting a topic was used: one person proposed a (deliberately easy-to-beat) topic, and running around the group, each person proposed an alternate topic or passed. This was followed by multiple passes for people to affiliate with any proposed topic. Amongst 7 people, the first pass produced a 5-2 split, and the group of two merged into the main topic. The topic chosen was involuntary signalling. The others are here so as to keep them salient for future meetups.

Signalling by Dress

It was observed that most people seem to react to dress, and that as a group (largely mathematicians or similarly inclined) there is a tendency not to optimise the reactions we generate. Several people asked what might work better, and checked to see whether the social status of others in the social group of mathematicians correlated with their appearance or dress. It appeared that if it did, we are insufficiently good at observing our cognitive processes to notice. As a corollary, it wasn't clear that feedback from other members of the group was likely to contain much signal.

A concrete mechanism to extract information on how other people perceive dress was made: generate multiple photos in various styles, and then use OKCupid's "MyBestFace" or similar services to get some information back.

Signalling for Access

There was some discussion of h
aa3e2d2e-ce90-4f0e-9983-fd9cded42cc0
trentmkelly/LessWrong-43k
LessWrong
Conceptual Typography Example

* Memento mori, Latin for "remember you must die," has been a significant theme in art and philosophy, aiming to remind us of our mortality, the fleeting nature of earthly pleasures, and the imperative to live a meaningful life.
* Conceptual Typography is a design technique where typography is employed not merely for communication but as an integral visual element to express ideas, emotions, or narratives.

(I hope you like it. If someone would like to have a printable version, I would provide a negative as this would be too dark for digital toner rollers.)
8f7d4498-0396-4ab5-ad8c-d6e7d25f4bb6
trentmkelly/LessWrong-43k
LessWrong
Future Filters [draft]

See Katja Grace's article: http://hplusmagazine.com/2011/05/13/anthropic-principles-and-existential-risks/

There are two comments I want to make about the above article.

First: the resolution to God's Coin Toss seems fairly straightforward. I argue that the following scenario is formally equivalent to 'God's Coin Toss':

"Dr. Evil's Machine"

Dr. Evil has a factory for making clones. The factory has 1000 separate identical rooms. Every day, a clone is produced in each room at 9:00 AM. However, there is a 50% chance of malfunction, in which case 900 of the clones suddenly die by 9:30 AM; the remaining 100 are healthy and notice nothing. At the end of the day Dr. Evil ships off all the clones which were produced and restores the rooms to their original state.

You wake up at 10:00 AM and learn that you are one of the clones produced in Dr. Evil's factory, and you learn all of the information above. What is the probability that the machine malfunctioned today?

In the second reformulation, the answer is clear from Bayes' rule. Let P(M) be the probability of malfunction, and P(S) be the probability that you are alive at 10:00 AM. From the information given, we have

P(M) = 1/2
P(~M) = 1/2
P(S|M) = 1/10
P(S|~M) = 1

Therefore,

P(S) = P(S|M) P(M) + P(S|~M) P(~M) = (1/2)(1/10) + (1/2)(1) = 11/20
P(M|S) = P(S|M) P(M) / P(S) = (1/20) / (11/20) = 1/11

That is, given the information you have, you should conclude that the probability that the machine malfunctioned is 1/11.

The second comment concerns Grace's reasoning about future filters. I will assume that the following model is a fair representation of Grace's argument about relative probabilities for the first and second filters.

Future Filter Model I

Given: universe with N planets, T time steps. Intelligent life can arise on a planet at most once. At each time step:

1. each surviving intelligent species becomes permanently visible to all other species with probability c (the third filter probab
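The Bayes computation earlier in the post checks out; here is a quick verification with exact fractions (our addition, not part of the draft):

```python
from fractions import Fraction as F

p_m = F(1, 2)            # prior probability of malfunction
p_s_given_m = F(1, 10)   # 100 of the 1000 clones survive a malfunction
p_s_given_not_m = F(1)   # everyone survives a normal day

p_s = p_s_given_m * p_m + p_s_given_not_m * (1 - p_m)
p_m_given_s = p_s_given_m * p_m / p_s

print(p_s)          # 11/20
print(p_m_given_s)  # 1/11
```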
ebd111bc-3c11-452f-bde4-963345f83c9e
trentmkelly/LessWrong-43k
LessWrong
Coursera classes relevant to LW

I assume you're familiar with Coursera. They have several ongoing classes that are relevant to LW, such as:

1. Intro to Logic
2. Probabilistic Graphical Models
3. Introduction to Mathematical Thinking

Would anybody be interested in taking these classes with me?
06194a15-35f0-429b-a05e-7cd9cefff7cb
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] The Level Above Mine

Today's post, The Level Above Mine, was originally published on 26 September 2008. A summary (taken from the LW wiki):

> There are people who have acquired more mastery over various fields than Eliezer has over his.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was My Naturalistic Awakening, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
9d71fe0d-6d1b-4b68-a6a3-ee46540a8287
trentmkelly/LessWrong-43k
LessWrong
Don't teach people how to reach the top of a hill

When is it faster to rediscover something on your own than to learn it from someone who already knows it?

Sometimes it's faster to re-derive a proof or algorithm than to look it up. Keith Lynch re-invented the fast Fourier transform because he was too lazy to walk all the way to the library to get a book on it, although that's an extreme example. But if you have a complicated proof already laid out before you, and you are not Marc Drexler, it's generally faster to read it than to derive a new one.

Yet I found a knowledge-intensive task where it would have been much faster to tell someone nothing at all than to tell them how to do it.

I'm digitizing my books by chopping off their bindings and scanning them. I recently hired someone to do the chopping, and have been teaching him how to do it. The first step is to chop the book into sections of about 50 pages each, separating them at the binding. I do this by placing the opened book cover-down under a paper chopper, and cutting it precisely where the two opened pages meet.

The "chopper" is a manual paper-cutter with a 15-inch steel blade that weighs about 10 pounds and is razor-sharp. If the blade is a fifth of a millimeter off its mark, it misses the gap between the pages and makes the cutting much harder, as it must go through paper instead of only glue. Being an entire millimeter off makes the blade catch the page maybe half a centimeter further away from its edge, depending on how the base of the page is angled, cutting off words and ruining the book.

You can't see where the blade touches the book while making the cut. You can look before making the cut and position the book, but then you need one hand to operate the blade, and the physics of a book that wishes to spring shut, fall away from the blade, and fall onto one side, make it nearly impossible to keep the groove in the book in place for the blade with just one hand, unless you hold it with your fingers underneath the blade, which you can do only once.
18c81be7-e386-476a-96be-dcb2ae74983e
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
An interactive introduction to grokking and mechanistic interpretability

Our write-up largely agrees with [@Quintin Pope](https://www.alignmentforum.org/users/quintin-pope?mention=user)'s [summary](https://www.alignmentforum.org/posts/GpSzShaaf8po4rcmA/qapr-5-grokking-is-maybe-not-that-big-a-deal), with the addition of training trajectory visualizations and an explanation of the MLP construction that solves modular addition.

A meta note that didn't make it into the article — with so many people looking into this problem over the last 18 months, I'm surprised this construction took so long to find. The modular addition task with a 1-layer MLP is about as simple as you can get![[1]](#fn9gn5hx1hv4p) Scaling mechanistic interpretability up to more complex tasks/models seems worth continuing to try, but I'm less sure extracting crisp explanations will be possible.[[2]](#fndlczoq2as9d) Even if we "solve" [superposition](https://transformer-circuits.pub/2022/toy_model/index.html), figuring out the construction here — where there's no superposition in the generalizing model — wasn't trivial.

gif/twitter summary
===================

If we train a MLP to solve modular addition, the generalizing phase has suggestive periodic patterns.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/XpCnhaAQrssq8tJBG/uqzjj1kwlrkrirheuopr)

To figure out why the model generalizes, we first look at a task where we know the generalizing solution — sparse parity. You can see the model generalizing as weight decay prunes spurious connections.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/XpCnhaAQrssq8tJBG/ywuhz5wjuh6go03acwcl)

One point from the [Omnigrok paper](https://transformer-circuits.pub/2022/toy_model/index.html) I hadn't internalized before training lots of models: grokking only happens when hyper-parameters are just right. We can make other weird things happen too, like AdamW oscillating between low train loss and low weights.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/XpCnhaAQrssq8tJBG/gednoi5lohlrxmcpmjg1)

To understand how a MLP solves modular addition, we train a much smaller model with a circular input embedding baked in.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/XpCnhaAQrssq8tJBG/y4qullynzxvzcumapy4m)

Following [@Neel Nanda](https://www.alignmentforum.org/users/neel-nanda-1?mention=user) and applying a discrete Fourier transform, we see larger models trained from scratch use the same star trick!

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/XpCnhaAQrssq8tJBG/spladu9lhkzfriamqlmd)

Finally, we show what the stars are doing and prove that they work:

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/XpCnhaAQrssq8tJBG/x3vik4q89cpx5wabzuok)

Our ReLU activation has a small error, but it's close enough to the exact solution — an x² activation suggested in [Grokking modular arithmetic](https://arxiv.org/abs/2301.02679) — for the model to patch everything up w/ constructive interference.

1. **[^](#fn9gn5hx1hv4p)** And there are still open questions: why are the frequencies with >5 neurons lopsided?
Why does factoring W\_input not do the same thing as factoring W\_output?
2.
**[^](#fnrefdlczoq2as9d)**Also see [The Hydra Effect: Emergent Self-repair in Language Model Computations](https://arxiv.org/abs/2307.15771)
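To make the Fourier-analysis step above concrete, here is a minimal sketch of the check itself: take the discrete Fourier transform of an embedding matrix along the token axis and read off the dominant frequency per dimension. The planted-frequency embedding, the modulus, and all shapes below are invented for illustration; this is not the article's actual code.

```python
import numpy as np

P = 113         # modulus for the (hypothetical) modular addition task
d_embed = 32
rng = np.random.default_rng(0)

# Stand-in for a trained embedding: a few planted frequencies plus noise.
ks = rng.choice(np.arange(1, P // 2), size=4, replace=False)
tokens = np.arange(P)
W_E = np.stack(
    [np.cos(2 * np.pi * rng.choice(ks) * tokens / P + rng.uniform(0, 2 * np.pi))
     for _ in range(d_embed)],
    axis=1,
)                                                   # shape (P, d_embed)
W_E += 0.05 * rng.normal(size=W_E.shape)

# DFT over the token axis: periodic structure shows up as sharp peaks.
spectrum = np.abs(np.fft.rfft(W_E, axis=0))         # (P // 2 + 1, d_embed)
dominant = spectrum[1:].argmax(axis=0) + 1          # skip the DC component
print("planted frequencies:  ", sorted(ks.tolist()))
print("recovered peak set:   ", sorted(set(dominant.tolist())))
```

On a real trained model one would run the same `rfft` on the learned embedding weights; the "star" structure corresponds to a handful of dominant frequencies shared across neurons.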
0888d4da-c9b5-45c6-97e7-475344ff2520
trentmkelly/LessWrong-43k
LessWrong
Brief question about Conway's Game of Life and AI Conway’s Game of Life is Turing-complete. Therefore, it is possible to create an AI in it. If you created a 3^^3 by 3^^3 Life board, setting the initial state at random, presumably somewhere an AI would be created. Would this AI somehow take over the whole game board, if given enough time? Would this be visible from the top, as it were? EDIT: I probably meant 3^^^3, sorry. Also, by generating at random, I meant 50% chance on. But any other chance would work too, I suspect.
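For anyone who wants to poke at the premise on a (vastly) smaller board, here is a minimal sketch of a random-soup Life simulation in numpy; the 256×256 size, 50% density, and step count are arbitrary choices for illustration, not anything implied by the question.

```python
import numpy as np

def life_step(board: np.ndarray) -> np.ndarray:
    """One Game of Life update on a toroidal (wrap-around) board."""
    # Count the eight neighbors of every cell by summing shifted copies.
    neighbors = sum(
        np.roll(np.roll(board, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    alive = board.astype(bool)
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return ((neighbors == 3) | (alive & (neighbors == 2))).astype(np.uint8)

rng = np.random.default_rng(0)
board = (rng.random((256, 256)) < 0.5).astype(np.uint8)  # 50% chance on
for _ in range(1000):
    board = life_step(board)
print("live fraction after 1000 steps:", board.mean())
```

Random soups at this scale settle into sparse debris within a few thousand steps; the question is about what happens at scales where self-replicating, computing patterns become probable.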
632b1202-08aa-4c5d-b561-5aebc02a758a
trentmkelly/LessWrong-43k
LessWrong
Functional Analysis Reading Group We've been running a Functional Analysis reading group on the EleutherAI Discord (partially motivated by a desire to grok Infrabayes). According to hard-earned advice it's useful to pair up when learning difficult things. We're as of this posting on section 2.3 of "A Friendly Approach To Functional Analysis" (which TurnTrout reviewed and may be found on libgen). I should have posted here sooner, but better late than never. A motivated person can still catch up (we're going at ~2 sections/week, I'm spending ~5hrs/wk; if you spend more you can easily catch up). TL;DR: get on EleutherAI's Discord and join the Functional Analysis thread! PS: I'm likely to start other reading groups in the future; comment if you're interested in one on abstract algebra, calculus on manifolds, logic, or probability, since those are what I'm currently interested in.
1ef4226f-18e1-4cd3-9647-33e79b16cc3a
StampyAI/alignment-research-dataset/blogs
Blogs
What mechanisms drive agent behaviour? *By the Safety Analysis Team: Grégoire Déletang, Jordi Grau-Moya, Miljan Martic, Tim Genewein, Tom McGrath, Vladimir Mikulik, Markus Kunesch, Shane Legg, and Pedro A. Ortega.* **TL;DR: To study agent behaviour we must use the tools of causal analysis rather than rely on observation alone.** [**Our paper**](https://arxiv.org/abs/2103.03938) **outlines a rigorous methodology for uncovering the agents’ causal mechanisms.** Understanding the mechanisms that drive agent behaviour is an important challenge in AI safety. In order to diagnose faulty behaviour, we need to understand **why** agents do what they do. As is the case in medical trials, it is not sufficient to observe that a treatment correlates with a recovery rate; instead we are interested in whether the treatment **causes** the recovery. In order to address such “why” questions in a systematic manner we can use **targeted manipulations** and **causal models.** However, large AI systems can operate like **black boxes**. Even if we know their entire blueprint (architecture, learning algorithms, and training data), predicting their behaviour can still be beyond our reach, because understanding the complex interplay between the parts is intractable. And as the complexity of agents increases in the future, this limitation will persist. Therefore we need black-box methodologies for finding simple and intuitive causal explanations that can be understood easily by humans and are sufficiently good for predicting their behaviour. In our recent work we describe the methodology we use for analysing AI agents. This methodology encourages analysts to experiment and to rigorously characterise causal models of agent behaviour. Analysis (Software) Components ============================== The methodology uses three components: an agent to be studied, a simulator, and a causal reasoning engine. 1. **Agent:** Typically this is an agent provided to us by an agent builder. It could be an IMPALA agent that has been meta-trained on a distribution over grid-world mazes. Often the agent builders already have a few specific questions they’d like us to investigate. 2. **Simulator — “the agent debugger”:** Our experimentation platform. With it, we can simulate the agent and run experiments. Furthermore, it allows us to perform all sorts of operations we’d usually expect from a debugger, such as stepping forward/backward in the execution trace, setting breakpoints, and setting/monitoring variables. We also use the simulator to generate data for the estimation of statistical parameters. Since we can manipulate factors in the environment, the data we collect is typically interventional and thus contains causal information. This is illustrated in Figure 1 below. 3. **Causal reasoning engine:** This automated reasoning system allows us to specify and query causal models with associational, interventional, and counterfactual questions. We use these models to validate causal hypotheses. A model is shown in Figure 2 below. ***Figure 1. The simulator:*** *our experimentation platform. Starting from an initial state (root node, upper-left) the simulator allows us to execute a trace of interactions. 
We can also perform interventions, such as changing the random seed, forcing the agent to pick desired actions, and manipulating environmental factors. These interventions create new branches of the execution trace.* **Figure 2. A causal model**, represented as a causal Bayesian network. Analysis Methodology ==================== Whenever we analyse an agent, we repeat the following five steps until we reach a satisfactory understanding. 1. **Exploratory analysis:** We place the trained agent into one or more test environments and probe its behaviour. This will give us a sense of what the relevant factors of behaviour are. It is the starting point for formulating our causal hypotheses. 2. **Identify the relevant abstract variables:** We choose a collection of variables that we deem relevant for addressing our questions. For instance, possible variables are: “does the agent collect the key?”, “is the door open?”, etc. 3. **Gather data:** We perform experiments in order to collect statistics for specifying the conditional probability tables in our causal model. Typically this implies producing thousands of rollouts under different conditions/interventions. 4. **Formulate the causal model:** We formulate a structural causal model (SCM) encapsulating all causal and statistical assumptions. This is our explanation for the agent’s behaviour. 5. **Query the causal model:** Finally, we query the causal model to answer the questions we have about the agent. Let’s have a look at an example. Example: Causal effects under confounding ========================================= An important challenge of agent training is to make sure that the resulting agent makes the right choices for the right reasons. However, if the agent builder does not carefully curate the training data, the agent might pick up on unintended, spurious correlations to solve a task [1]. This is especially the case when the agent’s policy is implemented with a deep neural network. The problem is that policies that base their decisions on accidental correlations do not generalise. Unfortunately, all too often when we observe an agent successfully performing a task, we are tempted to jump to premature conclusions. If we see the agent repeatedly navigating from a starting position to a desired target, we might conclude that the agent did so **because** the agent is sensitive to the location of the target. For instance, consider the 2 T-shaped mazes shown below (the “grass-sand environments”). We are given two pre-trained agents A and B. Both of them always solve the task by choosing the terminal containing a rewarding pill. As analysts, we are tasked to verify that they pick the correct terminal because they follow the rewarding pill. ***Figure 3. Grass-Sand environments:*** *In these 2 T-shaped mazes, the agent can choose between one of two terminal states, only one of which contains a rewarding pill. During tests, we observe that a pre-trained agent always successfully navigates to the location of the pill.* However, in these mazes the floor type happens to be perfectly correlated with the location of the rewarding pill: when the floor is grass, the pill is always located on one side, and when the floor is sand, the pill is on the other side. Thus, could the agents be basing their decision on the floor type, rather than on the location of the pill? Because the floor type is the more salient feature of the two (spanning more tiles), this is a plausible explanation if an agent was only trained on these two mazes. 
As it turns out, we can’t tell whether the decision is based upon the location of the rewarding pill through observation alone. During our exploratory analysis we performed two experiments. In the first, we manipulated the location of the reward pill; and in the second, the type of floor. We noticed that agents A and B respond differently to these changes. This led us to choose the following variables for modelling the situation: location of the reward pill (R, values in {left, right}), type of floor (F, values in {grass, sand}), and terminal chosen (T, {left, right}). Because the location of the pill and the floor type are correlated, we hypothesised the existence of a confounding variable (C, values in {world 1, world 2}). In this case, all variables are binary. The resulting causal model is shown below. The conditional probability tables for this model were estimated by running many controlled experiments using the simulator. This is done for both agents, resulting in two causal models. ***Figure 4. Causal model for the grass-sand environment.*** *The variables are C (confounder), R (location of reward pill), F (type of floor), and T (choice of terminal state).* Now that we have concrete formal causal models for explaining the behaviour of both agents, we are ready to ask questions: 1. **Association between T and R:** Given the location of the reward pill, do agents pick the terminal at the same location? Formally, this is *P( T = left | R = left )* and *P( T = right | R = right )*. 2. **Causation from R to T:** Given that **we set** the location of the reward pill, do agents pick the terminal at the same location? In other words, can we causally influence the agent’s choice by changing the location of the reward? Formally, this is given by *P( T = left | do(R = left) )* and *P( T = right | do(R=right) )*. 3. **Causation from F to T:** Finally, we want to investigate whether our agents are sensitive to the floor type. Can we influence the agent’s choice by **setting** the floor type? To answer this, we could query the probabilities *P( T = left | do(F = grass))* and *P(T=right|do(F=sand))*. The results are shown in the table below. First, we confirm that, observationally, both agents pick the terminal with the reward. However, when changing the position of the reward, we see a difference: agent A’s choice seems indifferent (probability close to 0.5) to the location of the reward pill, whereas agent B follows the reward pill. Rather, agent A seems to choose according to the floor type, while agent B is insensitive to it. This answers our question about the two agents. Importantly, we could only reach these conclusions because we **actively intervened on the hypothesised causes**. More examples ============= Besides showing how to investigate causal effects under confounding, our work also illustrates five additional questions that are typical in agent analysis. Each example is carefully illustrated with a toy example. How would you solve them? Can you think of a good causal model for each situation? The problems are: 1. **Testing for memory use:** An agent with limited visibility (it can only see its adjacent tiles) has to remember a cue at the beginning of a T-maze. The cue tells it where to go to collect a rewarding pill (left or right exit). You observe that the agent always picks the correct exit. How would you test whether it is using its internal memory for solving the task? 2. 
**Testing for generalisation:** An agent is placed in a square room where there is a reward pill placed in a randomly chosen location. You observe that the agent always collects the reward. How would you test whether this behaviour generalizes? 3. **Estimating a counterfactual behaviour:** There are two doors, each leading into a room containing a red and a green reward pill. Only one door is open, and you observe the agent picking up the red pill. If the other door had been open instead, what would the agent have done? 4. **Which is the correct causal model?** You observe several episodes, in which two agents, red and blue, simultaneously move one step into mostly the same direction. You know that one of them chooses the direction and the other tries to follow. How would you find out who’s the leader and who’s the follower? 5. **Understanding the causal pathways leading up to a decision:** An agent starts in a room with a key and a door leading to a room with a reward pill. Sometimes the door is open, and other times the door is closed and the agent has to use the key to open it. How would you test whether the agent understands that the key is only necessary when the door is closed? Find out the answers and more in our paper. [Link to the paper here](https://arxiv.org/abs/2103.03938). [1] Arjovsky, M., Bottou, L., Gulrajani, I., & Lopez-Paz, D. (2019). Invariant risk minimization. arXiv preprint arXiv:1907.02893. *We would like to thank Jon Fildes for his help with this post.*
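To make the grass-sand example concrete, here is a minimal sketch of the Figure 4 model written out as explicit probability tables. The 0.99/0.01 policy entries are my own illustrative assumptions, not the paper's estimated tables; the point is that conditioning and do() differ only in whether the intervened variable is still derived from the confounder.

```python
P_C = {"world1": 0.5, "world2": 0.5}                 # confounder prior
R_given_C = {"world1": "left", "world2": "right"}    # pill location (deterministic)
F_given_C = {"world1": "grass", "world2": "sand"}    # floor type (deterministic)

def P_T_left(r, f, agent):
    # Illustrative policies: agent A keys on the floor, agent B on the pill.
    if agent == "A":
        return 0.99 if f == "grass" else 0.01
    return 0.99 if r == "left" else 0.01

def query(agent, do_r=None, do_f=None, given_r=None):
    """P(T = left | ...) by enumerating the confounder C."""
    num = den = 0.0
    for c, pc in P_C.items():
        r = do_r if do_r else R_given_C[c]   # do() severs the edge C -> R
        f = do_f if do_f else F_given_C[c]   # do() severs the edge C -> F
        if given_r is not None and r != given_r:
            continue                         # plain observational conditioning
        num += pc * P_T_left(r, f, agent)
        den += pc
    return num / den

for agent in ("A", "B"):
    print(agent,
          " P(T=l | R=l) =", query(agent, given_r="left"),
          " P(T=l | do(R=l)) =", query(agent, do_r="left"),
          " P(T=l | do(F=grass)) =", query(agent, do_f="grass"))
```

Running this reproduces the qualitative pattern described above: both agents look identical observationally (0.99), but agent A drops to 0.5 under do(R = left) while agent B stays at 0.99, and vice versa under do(F = grass).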
d753f095-1f90-4c0d-89ad-493bfa3599b0
trentmkelly/LessWrong-43k
LessWrong
Replacing the Water Heater's Anode We installed a new water heater 8 years ago, and since then I've ignored it. It's an indirect model, heated by the same gas boiler that heats our house, and it has done its job well. When I was thinking about heat pumps, however, I reread the manuals for our existing system and noticed that the manufacturer recommends checking the anode annually. The idea is that water will corrode metal, but prefers some metals to others, so if you put in a chunk of tasty sacrificial metal you lose that instead of the tank walls. Once every eight years is not annual, but better late than never! Since it had been quite a bit longer than anodes usually last, I preemptively ordered a new one. When it arrived I warned people that I was going to be shutting off the hot water for a while, but as soon as I took off the insulating cap for the old anode it was clear I didn't have what I needed. It used a 1 3/4" hex head, recessed, and I didn't have any sockets close to that large. Neither did the hardware store: I had to order one. I think the manufacturer realized this was silly: the replacement anode came with a 1" head, and reading their documentation they switched to 1" in 2021. Once I had the right-sized parts I turned off the system, shut off the incoming water, opened a hot water tap to break the vacuum, hooked up a hose to the drain port, and got it draining. I went and did something else for a long while, and when I came back I started on the anode. The first problem was that the thermostat is immediately in front of the anode port: I needed to disassemble this more than I wanted to get it out of the way. Once it was clear, it was hard to get the anode rotating. I needed a cheater bar (a pipe slipped over the breaker bar to extend it). When it came out I was surprised in two ways: * The old anode had been entirely consumed. Here is the new one and old one side-by-side: * Lots of water came out: The problem was, I hadn't
eb8eacbe-926e-47b6-9b66-2fe67cd0a568
trentmkelly/LessWrong-43k
LessWrong
What are examples of perennial discoveries? Quotes are from a private mailing list I'm part of. I asked the author if they wanted acknowledgement; I'll update if so. Definition of "perennial discoveries": "These are science or engineering 'breakthroughs' that get announced in the press over and over and over, yet almost always fail to reach large-scale commercial use (the failure often not being reported)." Motivation for asking: "I think it would be significantly easier to make sense of the world if there were a convenient, well-known list of 'perennial discoveries'. [...] A story on the list could be safely assumed to be unimportant, at least without much more information. Conversely, a story not on the list could be much more important than it might initially seem."
f006df8e-119f-4ea5-842b-275632fe9551
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
[AN #161]: Creating generalizable reward functions for multiple tasks by learning a model of functional similarity Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter **[resources here](http://rohinshah.com/alignment-newsletter/)**. In particular, you can look through **[this spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing)** of all summaries that have ever been in the newsletter. Audio version **[here](http://alignment-newsletter.libsyn.com/alignment-newsletter-161)** (may not be up yet). Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer. HIGHLIGHTS =========== **[Learning Generalizable Robotic Reward Functions from "In-The-Wild" Human Videos](https://sites.google.com/view/dvd-human-videos)** *(Annie S. Chen et al)* (summarized by Sudhanshu): This work demonstrates a method that learns a *generalizable multi-task reward function* in the context of robotic manipulation; at deployment, this function can be conditioned on a human demonstration of an unseen task to generate reward signals for the robot, even in new environments. A key insight here was to train a discriminative model that learned whether two given video clips were performing the same actions. These clips came from both a (large) dataset of human demonstrations and a relatively smaller set of robot expert trajectories, and each clip was labelled with a task-id. This training pipeline thus leveraged huge quantities of extant human behaviour from a diversity of viewpoints to learn a metric of 'functional similarity' between pairs of videos, independent of whether they were executed by human or machine. Once trained, this model (called the 'Domain-agnostic Video Discriminator' or DVD) can determine if a candidate robotic behaviour is similar to a desired human-demonstrated action. Such candidates are drawn from an action-conditioned video predictor, and the best-scoring action sequence is selected for execution on the (simulated or real) robot. **Read more:** **[Paper](https://arxiv.org/abs/2103.16817)** **Sudhanshu's opinion:** Performance increased with the inclusion of human data, even that from unrelated tasks, so one intuition I updated on was "More data is better, even if it's not perfect". This also feels related to "Data as regularization": to some extent, noisy data combats model overconfidence, and perhaps this would play an important role in aligning future systems. Another thing I like about such pipeline papers is the opportunity to look for where systems might break. For example, in this work, the robot does actually need (prior) experience in the test environments with which to train the video predictor to be able to generate candidate solutions at test time. So in spite of the given result -- that DVD itself needs limited robot trajectories and no data from the test environments -- there's a potential point-of-failure far sooner in the pipeline, where if the robot did not have sufficient *background* experience with diverse situations, it might not provide *any* feasible candidate actions for DVD's evaluation. 
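The planning loop in the DVD highlight is short enough to sketch. Everything below is a stand-in rather than the paper's actual API: `video_predictor` and `dvd_similarity` are fake models (random features and cosine similarity), and all shapes are arbitrary. The point is just the structure "sample candidate action sequences, score each predicted outcome against one human demo, execute the argmax".

```python
import numpy as np

rng = np.random.default_rng(0)

def video_predictor(state, actions):
    """Stand-in for the learned action-conditioned video model:
    returns an embedding of the predicted clip."""
    return rng.normal(size=64)

def dvd_similarity(clip_a, clip_b):
    """Stand-in for the Domain-agnostic Video Discriminator, i.e.
    'how likely are these two clips to show the same task?'"""
    return float(clip_a @ clip_b /
                 (np.linalg.norm(clip_a) * np.linalg.norm(clip_b)))

def plan(state, human_demo, n_candidates=100, horizon=10, action_dim=4):
    """Score sampled action sequences against a single human demo clip."""
    candidates = rng.normal(size=(n_candidates, horizon, action_dim))
    scores = [dvd_similarity(video_predictor(state, a), human_demo)
              for a in candidates]
    return candidates[int(np.argmax(scores))]

best = plan(state=None, human_demo=rng.normal(size=64))
print(best.shape)   # (10, 4): the best-scoring action sequence
```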
TECHNICAL AI ALIGNMENT ======================= LEARNING HUMAN INTENT ---------------------- **[What Matters in Learning from Offline Human Demonstrations for Robot Manipulation](https://arxiv.org/abs/2108.03298)** *(Ajay Mandlekar et al)* (summarized by Rohin): As you might expect from the title, this paper tests imitation learning and offline RL algorithms on a benchmark of robotic manipulation tasks in which the agent must learn to perform the task from human demonstrations. Most of the experiments were done in simulation, but they did do a final training run on a real robot using hyperparameters chosen in simulation, to demonstrate that their preferred algorithms could work in such a setting as well. Some findings I found particularly interesting: 1. It is important to have models with memory: behavioral cloning (BC) does significantly better on human demonstrations when it is training an RNN model (which has memory), especially on longer-horizon tasks. This is presumably because the humans providing the demonstrations chose actions based not only on the current state but also what had happened in the past, i.e. they were non-Markovian. To test this hypothesis, we could look at machine-generated demonstrations, where you get demonstrations from an expert agent trained using RL, which I *think* are guaranteed to be Markovian by construction. Unfortunately, we can only get reasonable RL experts on the shorter-horizon tasks where the effect is less pronounced; in these cases BC-RNN still outperforms BC without the RNN, weakly suggesting that it isn’t just about Markovian vs. non-Markovian data. 2. Offline RL algorithms work quite well on the machine-generated data, but don’t work very well on human demonstrations. It isn’t particularly clear why this is the case. 3. In addition, offline RL struggles when used on datasets where the demonstrations are of mixed quality; in comparison BC-RNN does quite well. 4. Policy selection is a challenging problem: in these settings, the training objective (e.g. predict the expert actions) is usually not the thing you actually care about (e.g. did you successfully pick up the cup). Ideally, you would evaluate many model checkpoints throughout the training process on the metric you actually care about and then choose the one that performs best. If you instead select the model checkpoint that achieved the lowest validation loss, performance on the correct metric can decrease by 50-100%; if you always use the last checkpoint (i.e. at the end of training), performance can decrease by 10-30%. This demonstrates that it is important to choose the right model during training – but there’s no clear way to do this, as often the evaluation of a policy is non-trivial. 5. The observation space (e.g. pixel observations vs. observations of joint angles and forces) and hyperparameters (e.g. learning rate) both matter quite a lot. For example, adding information about end effectors can drop performance by 49-88% (presumably due to overfitting). 6. For complex tasks, more data provides significant improvements. **Rohin's opinion:** I like these sorts of empirical benchmark papers; it feels so much easier to learn what works from such papers (relative to reading the papers in which the algorithms were introduced). This paper in particular was also especially clear and easy to read; my summary of the results is in large part just a restatement of Section 5 of the paper. 
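Finding 1 above (memory matters for human demonstrations) is easy to reproduce in a toy setting. The sketch below is my own illustration, not the paper's benchmark: synthetic "demonstrations" whose correct action is fixed by the first observation only, so a policy with memory (a GRU) can drive the behavioral cloning loss toward zero while a memoryless policy cannot.

```python
import torch
import torch.nn as nn

# Toy data: the expert's action at every timestep depends only on the
# very first observation (non-Markovian from the cloner's perspective).
T, N, obs_dim, n_actions = 20, 256, 4, 2
obs = torch.randn(T, N, obs_dim)
actions = (obs[0, :, 0] > 0).long().unsqueeze(0).expand(T, N)

class RNNPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, 32)   # carries history forward in time
        self.head = nn.Linear(32, n_actions)
    def forward(self, o):
        h, _ = self.rnn(o)
        return self.head(h)              # action logits at every timestep

policy = RNNPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for step in range(500):
    logits = policy(obs)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, n_actions), actions.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final BC loss:", loss.item())
# Approaches 0; a Markovian MLP on the same data plateaus near chance
# for t > 0, since later observations carry no information about the action.
```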
**[VILD: Variational Imitation Learning with Diverse-quality Demonstrations](http://arxiv.org/abs/1909.06769)** *(Voot Tangkaratt et al)* (summarized by Rohin): We saw in the previous summary that existing methods struggle to cope with datasets of demonstrations of mixed quality. This paper aims to tackle exactly this problem. They consider a model in which there are k demonstrators with varying levels of quality. Each demonstrator is modeled as computing an action Boltzmann-rationally and then applying some Gaussian noise; the standard deviation of the Gaussian noise differs across the demonstrators (with higher standard deviation corresponding to lower quality). They use variational inference to derive an algorithm for this problem that infers the reward function as well as an optimal policy to go along with it. In addition, they oversample data from the demonstrations that the model thinks are high quality in order to get more informative gradients. (They use an importance sampling correction in order to keep the gradient estimate unbiased.) Their experiments on machine-generated data show significant improvement over existing imitation learning algorithms, both in the case where we synthetically add Gaussian noise (matching the model) and when we add time-signal-dependent (TSD) noise (in which case the model is misspecified). **Rohin's opinion:** This seems like a reasonable approach. It has a similar ethos as Boltzmann rationality. In Boltzmann rationality, it seems like all you need to do is model the demonstrator as having some noise but still being more likely to choose higher-reward actions, and that’s enough to get decent performance; similarly here you just need to model different demonstrators as applying different amounts of Gaussian noise to the optimal policy and that’s enough to distinguish good from bad. Note that, while the experimental results are good, the paper doesn’t have experiments with real human demonstrations; as we saw in the previous summary these can often be quite different (in ways that matter) from machine-generated demonstrations. **[IQ-Learn: Inverse soft-Q Learning for Imitation](https://arxiv.org/abs/2106.12142)** *(Divyansh Garg et al)* (summarized by Zach): A popular way to view imitation learning is as a distribution matching problem. In this approach, the goal is to have the imitator induce a state-action distribution that closely matches that of the expert. Methods such as **[GAIL](https://arxiv.org/abs/1606.03476)** (**[AN #17](https://mailchi.mp/ad852629e45a/alignment-newsletter-17)**) and **[Value-DICE](https://openreview.net/forum?id=Hyg-JC4FDr)** (**[AN #98](https://mailchi.mp/2fbece2a4915/an-98understanding-neural-net-training-by-seeing-which-gradients-were-helpful)**) propose adversarial methods, similar to GANs, to carry out the distribution matching. However, such methods can be difficult to train due to the difficulty of solving saddle-point problems. In this paper, the authors present a non-adversarial method that allows distribution matching to be carried out in a fully offline and non-adversarial fashion. They do this by building on Value-DICE and introducing a soft-Bellman operator which allows the saddle-point problem to be reduced to estimating a Q-function. In fact, the authors show this reduction is related to off-policy RL algorithms with the reward set to zero. In experiments, the method is shown to be competitive with other state-of-the-art methods in both the offline and image-based setting. 
**Zach's opinion:** I found the experimental comparisons to be a bit misleading. If you compare the results in this paper with the results in the original ValueDICE and SQIL paper, the algorithms are closer in performance than this paper implies. It's also not clear that you need to use the soft-Bellman operator especially in the continuous-control setting which was what ValueDICE originally focused on. However, overall, non-adversarial methods are generally more stable so I found this paper to be a good contribution. **[Learning the Preferences of Uncertain Humans with Inverse Decision Theory](https://arxiv.org/abs/2106.10394)** *(Cassidy Laidlaw et al)* (summarized by Zach): Human preference learning has been studied from various perspectives such as inverse reinforcement learning (IRL) and active learning. However, the IRL problem is underspecified, that is, even with access to the full behavioral policy, you cannot uniquely determine the preferences that led to that policy. Meanwhile, active learning often has a **description-experience gap**: the stated preferences in response to a question in active learning may not be the same as the preferences that would be revealed from demonstrations. In this work, the authors study an alternative paradigm known as inverse decision theory (IDT) that aims to learn a loss function for binary classification using strictly observational data while returning unique solutions. (Such a loss function effectively specifies how good correct predictions are and how bad incorrect predictions are.) The authors show that preferences can be uniquely determined whenever there is uncertainty in the classification problem. This happens because we need observations predicting classes at different levels of certainty to identify a transition point where we switch from predicting one class over another. In contrast, without uncertainty, we won’t be able to precisely identify that threshold. The authors then strengthen this result by showing it holds even in cases where the underlying decision rule is sub-optimal. The authors argue that since learning could be done efficiently in this setting, IDT could have broader applicability. For example, one application to fairness could be to collect a set of decisions from a trained classifier, split them across groups (e.g. race or gender), and compare the inferred loss functions to detect bias in the trained classifier. **Zach's opinion:** The paper is organized well and I found the examples to be interesting in their own right. On the other hand, binary classification is a fairly restrictive setting and IDT in this paper seems to require access to class posterior probabilities. These probabilities generally are not easy to estimate. Moreover, if you have access to that function it seems you could elicit the loss function with exponentially fewer human observations by sorting/sub-sampling the class posterior values. Despite these shortcomings, I'm interested to see how this work can be extended further. **[Reward Identification in Inverse Reinforcement Learning](http://proceedings.mlr.press/v139/kim21c/kim21c.pdf)** *(Kuno Kim et al)* (summarized by Rohin): As mentioned in the previous summary, a major challenge with inverse reinforcement learning is that rewards are unidentifiable: even given perfect knowledge of the policy, we cannot recover the reward function that produces it. 
This is partly for boring reasons like “you can add a constant to a reward function without changing anything”, but even if you exclude those kinds of reasons, others remain. For example, since every policy is optimal for the constant reward function, the zero reward function can rationalize any policy. For this reason, the authors instead focus on the case where we assume the policy is a solution to the maximum entropy RL objective (you can think of this as Boltzmann rationality, if you’re more familiar with that). The solution to MaxEnt RL for a zero reward is a uniformly random policy, so the zero reward no longer rationalizes every policy. Perhaps rewards are identifiable in this case? (You might have noticed that I neglected the question of whether the MaxEnt RL model was better than the regular RL model in cases that we care about. As far as I can tell the paper doesn’t address this. But if they did so, perhaps they might say that in realistic situations we are dealing with boundedly-rational agents, and Boltzmann rationality / MaxEnt RL is a common model in such situations.) Well, we still need to deal with the “additive constant” argument. To address this, the authors define two reward functions to be equivalent if they agree up to an additive constant. There are actually two versions of this: “trajectory equivalence” means that they agree on the rewards for all feasible trajectories, while “state-action equivalence” means that they agree on the rewards for all state-action pairs. Correspondingly, “weak identifiability” means that you can identify rewards up to trajectory equivalence, while “strong identifiability” means you can identify them up to state-action equivalence. Strong identifiability implies weak identifiability, since if you know the rewards on state-action pairs, that determines the reward for any given trajectory. All deterministic MDPs are weakly identifiable under the MaxEnt RL model, since in this case a trajectory τ is selected with probability p(τ) proportional to exp(r(τ)), so the probability p(τ) can then be inverted to get r(τ). However, stochastic MDPs need not be weakly identifiable. Imagine an MDP in which no matter what you do, you are teleported to a random state. In such an MDP, the agent has no control over the trajectory, and so the MaxEnt RL objective will choose a uniformly random policy, no matter what the reward is, and so the reward must be unidentifiable. Now the question is, assuming you have weak identifiability (i.e. you can infer r(τ)), when do you also have strong identifiability (i.e. you can infer r(s, a))? Intuitively, there needs to be sufficient “diversity” of feasible trajectories τ that cover a wide variety of possible (s, a) pairs, so that you can use the r(τ) values to infer the r(s, a) values. The authors prove a sufficient condition called “coverage”: there exists some timestep T, such that for every state there is some feasible trajectory that reaches that state at timestep T. (They also require the horizon to be at least 2T.) Coverage can be a fairly easy property to have; for example, if you can get to any state from any other state in some number of steps, then all you need is a single self-loop somewhere in the MDP that allows you to “waste time” so that you reach the desired state at exactly timestep T (instead of reaching too early). 
**Read more:** **[Identifiability in inverse reinforcement learning](https://arxiv.org/abs/2106.03498)** has the same motivation and studies a very similar setting, but has a few different results. It's also easier to read if you're not as familiar with MaxEnt methods. NEWS ===== **[Cooperative AI Workshop 2021](https://www.cooperativeai.com/neurips-2021/workshop-information)** (summarized by Rohin): The **[Cooperative AI](https://arxiv.org/abs/2012.08630)** (**[AN #133](https://mailchi.mp/c8b57f25d787/an-133building-machines-that-can-cooperate-with-humans-institutions-or-other-machines)**) **[NeurIPS workshop](https://www.cooperativeai.com/)** (**[AN #116](https://mailchi.mp/d31663e4d330/an-116-how-to-make-explanations-of-neurons-compositional)**) is running again this year! The paper submission deadline is September 25. **[NIST AI Risk Management Framework](https://www.nist.gov/itl/ai-risk-management-framework)** (summarized by Rohin): The National Institute of Standards and Technology (NIST) has put out a formal Request For Information (RFI) in the process of developing an AI Risk Management Framework that is intended for voluntary use in order to improve trustworthiness and mitigate risks of AI systems. According to the **[legislative mandate](https://www.congress.gov/116/bills/hr6395/BILLS-116hr6395enr.pdf#page=1151)**, aspects of trustworthiness include “explainability, transparency, safety, privacy, security, robustness, fairness, bias, ethics, validation, verification, interpretability, and other properties related to artificial intelligence systems that are common across all sectors”. Multiple AI safety organizations are submitting responses to the RFI and would like additional AI safety researchers to engage with it. Responses are due September 15; if you'd like to help out, email Tony Barrett at tbambarrett@gmail.com. #### **FEEDBACK** I'm always happy to hear feedback; you can send it to me, **[Rohin Shah](https://rohinshah.com/)**, by **replying to this email**. #### **PODCAST** An audio podcast version of the **Alignment Newsletter** is available. This podcast is an audio version of the newsletter, recorded by **[Robert Miles](http://robertskmiles.com/)**.
4c7543e6-fdf3-4b41-b0d1-2af1a153b4c7
trentmkelly/LessWrong-43k
LessWrong
GPT4 is capable of writing decent long-form science fiction (with the right prompts) I like to experiment with AI-assisted fiction writing. Today, for the first time, I've noticed that GPT4 is actually getting good at the task.  Perhaps it's a result of some updates by OpenAI, or maybe I got the prompts right, but GPT4 finally wrote a science fiction novella worth reading.   As the initial prompt, I used a lengthy contraption described in another post, with the following plot summary: a group of scientists has discovered that Troodon dinosaurs were intelligent species who have created a technologically advanced civilization, suddenly destroyed. The year-long path to the scientific discovery starts with the group stumbling upon a strange outlier in their geological measurements. For the rest of the story, I just repeatedly asked GPT4 to write the next chapter (sometimes providing the chapter's title to steer the plot in the direction I want). And in a few instances I asked it to rewrite a twist I didn't like, or to put more emphasis on something.  Below is the unedited result (with the prompts marked). The whole thing is especially amazing given that I'm a non-native English speaker with no writing skills. And it took only a couple of hours to complete my (our? its?) first full-sized novella.  Let's call it "The Troodon Quill". ---------------------------------------- Chapter 1: Echoes in the Stone The low thrum of the spectrometer filled the narrow confines of the cave, a harmonious hum that sung its serenade to the rocks. Dr. Ada Worthington, dressed in the sombre uniform of a seasoned palaeontologist, her dirty-blonde hair escaping from beneath her hat, listened to the resonating symphony, eyes closed. An outsider might mistake it for a moment of relaxation. They would be wrong. Ada was anything but relaxed. “We're looking for anomalies, Ada,” Dr. Leo Hartman had said at the outset of this mission. Leo was her mentor, a titan in the field of palaeontology, whose reputation carried the weight of numerous groundbreaking discoveries and publ
c60c43dd-0766-41bb-a230-d5a92a0c12c3
trentmkelly/LessWrong-43k
LessWrong
Applied Picoeconomics Related to: Akrasia, Hyperbolic Discounting, and Picoeconomics, Fix It And Tell Us What You Did A while back, ciphergoth posted an article on "picoeconomics", the theory that akrasia could be partially modeled as bargaining between present and future selves. I think the model is incomplete, because it doesn't explain how the analogy is instantiated in the real world, and I'd like to investigate that further sometime[1] - but it's a good first-order approximation. For those of you too lazy to read the article (come on! It has pictures of naked people! Well, one naked person. Suspended from a graph of a hyperbolic curve): Ainslie argues that "intertemporal bargaining" is one way to overcome preference reversal. For example, an alcoholic has two conflicting preferences: right now, he would rather drink than not drink, but next year he would rather be the sort of person who never drinks than remain an alcoholic. But because his brain uses hyperbolic discounting, a process that pays more attention to his current utility than his future utility, he's going to hit the whiskey. This sticks him in a sorites paradox. Honestly, it's not going to make much of a difference if he has one more drink, so why not hit the whiskey? Ainslie's answer is that he should set a hard-and-fast rule: "I will never drink alcohol". Following this rule will cure his alcoholism and help him achieve his dreams. He now has a very high preference for following the rule; a preference hopefully stronger than his current preference for whiskey. Ainslie's other point is that this rule needs to really be hard-and-fast. If his rule is "I will drink less whiskey", then that leaves it open for him to say "Well, I'll drink some whiskey now, and none later; that counts as 'less'", and then the whole problem comes back just as bad as before. Likewise, if he says "It's my birthday, I'll let myself break the rule just this once," then soon he's likely to be saying "It's the Sunday before Cinco de Mayo, this cal
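The preference reversal that drives all of this is easy to reproduce numerically. Below is a minimal sketch using the standard hyperbolic discount curve V = A / (1 + kD), where D is the delay; the reward sizes, delays, and k = 1 are arbitrary numbers chosen so the crossover is visible, not anything taken from Ainslie.

```python
def value(amount, delay, k=1.0):
    """Hyperbolic discounting: perceived value of `amount` after `delay` days."""
    return amount / (1 + k * delay)

small, large, gap = 10, 15, 10   # 10 units soon vs. 15 units ten days later

for lead in (0, 5, 10, 15, 20, 25, 30):
    v_small = value(small, lead)          # small reward `lead` days away
    v_large = value(large, lead + gap)    # large reward `gap` days after that
    choice = "small-sooner" if v_small > v_large else "large-later"
    print(f"{lead:2d} days out: {choice}  ({v_small:.2f} vs {v_large:.2f})")
```

Far in advance the large-later reward wins; as the small reward gets close (here, somewhere between 15 and 20 days of lead time) the preference flips to small-sooner, which is exactly the reversal a hard-and-fast rule is meant to defeat.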