id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
af48b9f9-bbc5-4865-b314-e4538fef7d54 | trentmkelly/LessWrong-43k | LessWrong | Don't use 'infohazard' for collectively destructive info
Suppose there were a cheap way to make nuclear weapons out of common household materials. Knowing this information is not directly harmful to the median individual like yourself. On average, it may even be beneficial to the average person, to know that secret yourself; either because you could directly conquer places, or because you could sell the info to somebody who could. But society as a whole would rather that nobody knew it than that everybody knew it.
For a more realistic example, consider the DNA sequence for smallpox: I'd definitely rather that nobody knew it, than that people could with some effort find out, even though I don't expect being exposed to that quoted DNA information to harm my own thought processes.
'Infohazard', I think, should be used only to refer to the slightly stranger case of information such that the individual themselves would rather not hear it and would say, "Don't tell me that".
A spoiler for a book you're in the middle of reading is the classic example of information which is anomalously harmful to you personally. You might pay a penny not to hear it, if the ordinary course of events would expose you to it. People enclose info about the book's ending inside spoilers, if on Reddit or Discord, not because they mean to keep that secret to benefit themselves and harm you - as is the more usual case of secrecy, in human interactions - but because they're trying to protect you from info that would harm you to know.
The term 'infohazard' has taken on some strange and mystical connotations by being used to refer to 'individually hazardous info'. I am worried about using the same word to also refer to information being alleged to be dangerous in the much more mundane sense of 'collectively destructive info' - info that helps the average individual hearer, but has net negative externalities if lots of people know it.
This was originally posted by me to Facebook at greater length (linkpost goes to original essay). Best suggestions ther |
774410e0-84a8-4cca-9679-1fb4ff40e18e | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "A man comes to the rabbi and complains about his life: "I have almost no money, my wife is a shrew, and we live in a small apartment with seven unruly kids. It's messy, it's noisy, it's smelly, and I don't want to live."The rabbi says, "Buy a goat.""What? I just told you there's hardly room for nine people, and it's messy as it is!""Look, you came for advice, so I'm giving you advice. Buy a goat and come back in a month."In a month the man comes back and he is even more depressed: "It's gotten worse! The filthy goat breaks everything, and it stinks and makes more noise than my wife and seven kids! What should I do?"The rabbi says, "Sell the goat."A few days later the man returns to the rabbi, beaming with happiness: "Life is wonderful! We enjoy every minute of it now that there's no goat - only the nine of us. The kids are well-behaved, the wife is agreeable - and we even have some money!" -- traditional Jewish joke Related to: Anchoring and Adjustment Biases are “cognitive illusions” that work on the same principle as optical illusions, and a knowledge of the latter can be profitably applied to the former. Take, for example, these two cubes (source: Lotto Lab, via Boing Boing): The “blue” tiles on the top face of the left cube are the same color as the “yellow” tiles on the top face of the right cube; if you're skeptical you can prove it with the eyedropper tool in Photoshop (in which both shades come out a rather ugly gray). The illusion works because visual perception is relative. Outdoor light on a sunny day can be ten thousand times greater than a fluorescently lit indoor room. As one psychology book put it: for a student reading this book outside, the black print will be objectively lighter than the white space will be for a student reading the book inside. Nevertheless, both students will perceive the white space as subjectively white and the black space as subjectively black, because the visual system returns to consciousness information about relative rather than absolute lightness. In the two cubes, the visual system takes the yellow or blue tint as a given and outputs to consciousness the colors of each pixel compared to that background. So this optical illusion occurs when the brain judges quantities relative to their surroundings rather than based on some objective standard. What's the corresponding cognitive illusion? In Predictably Irrational (relatively recommended, even though the latter chapters sort of fail to live up to the ones mentioned here) Dan Ariely asks his students to evaluate (appropriately) three subscription plans to the Economist: Ariely asked his subjects which plan they'd buy if they needed an Economist subscription. 84% wanted the combo plan, 16% wanted the web only plan, and no one wanted the print only plan. After all, the print plan cost exactly the same as the print + web plan, but the print + web plan was obviously better. Which raises the question: why even include a print-only plan? Isn't it something of a waste of space? Actually, including the print-only plan turns out to be a very good business move for the Economist. Ariely removed the print-only plan from the choices. Now the options looked like this. There shouldn't be any difference. After all, he'd only removed the plan no one chose, the plan no sane person would choose. This time, 68% of students chose the web only plan and 32% the combo plan. That's a 52% shift in preferences between the exact same options. The rational way to make the decision is to compare the value of a print subscription to the Economist (as measured by the opportunity cost of that money) to the difference in cost between the web and combo subscriptions. But this would return the same answer in both of the above cases, so the students weren't doing it that way. What it looks like the students were doing was perceiving relative value in the same way the eye perceives relative color. The ugly gray of the cube appeared blue when it was next to something yellow, and yellow when it was next to something blue. In the same way, the $125 cost of the combo subscription looks like good value next to a worse deal, and bad value next to a better deal. When the $125 combo subscription was placed next to a $125 plan with fewer features (print only instead of print plus web) it looked like a very good deal – the equivalent of placing an ugly gray square next to something yellow to make it look blue. Take away the yellow, or the artificially bad deal, and it doesn't look nearly as attractive. This is getting deep into Dark Arts territory, and according to Predictably Irrational, the opportunity to use these powers for evil has not gone unexploited. Retailers will deliberately include in their selection a super deluxe luxury model much fancier and more expensive than they expect anyone to ever want. The theory is that consumers are balancing a natural hedonism that tells them to get the best model possible against a commitment to financial prudence. So most consumers, however much they like television, will have enough good sense to avoid buying a $2000 TV. But if the retailer carries a $4000 super-TV, the $2000 TV suddenly doesn't look quite so bad. The obvious next question is “How do I use this knowledge to trick hot girls or guys into going out with me?” Dan Ariely decided to run some experiments on his undergraduate class. He took photographs of sixty students, then asked other students to rate their attractiveness. Next, he grouped the photos into pairs of equally attractive students. And next, he went to Photoshop and made a slightly less attractive version of each student: a blemish here, an asymmetry there. Finally, he went around campus, finding students and showing them three photographs and asking which person the student would like to go on a date with. Two of the photographs were from one pair of photos ranked equally attractive. The third was a version of one of the two, altered to make it less attractive. So, for example, he might have two people, Alice and Brenda, who had been ranked equally attractive, plus a Photoshopped ugly version of Brenda. The students overwhelmingly (75%) chose the person with the ugly double (Brenda in the example above), even though the two non-Photoshopped faces were equally attractive. Ariely then went so far as to recommend in his book that for best effect, you should go to bars and clubs with a wingman who is similar to you but less attractive. Going with a random ugly person would accomplish nothing, but going with someone similar to but less attractive than you would put you into a reference class and then bump you up to the top of the reference class, just like in the previous face experiment. Ariely puts these studies in a separate chapter from his studies on anchoring and adjustment (which are also very good) but it all seems like the same process to me: being more interested in the difference between two values than in the absolute magnitude of them. All that makes anchoring and adjustment so interesting is that the two values have nothing in common with one another. This process also has applications to happiness set points, status seeking, morality, dieting, larger-scale purchasing behavior, and akrasia which deserve a separate post"
The text above is a post found on LessWrong.
Could you delve deeper into the psychological implications of the "anchoring and adjustment" concept as it applies to consumer behavior in retail settings?
Certainly, the concept of "anchoring and adjustment" is a powerful tool in understanding consumer behavior. It's a cognitive bias where people rely too heavily on an initial piece of information, the "anchor", when making decisions. The process of adjustment from this anchor is typically insufficient, leading to skewed decision-making.
In retail settings, the concept of anchoring can be seen in various ways. For example, the initial price that a consumer sees for a product often serves as an anchor. If the product is later discounted, the consumer is likely to perceive the discounted price as a good deal, even if it's still higher than what the product is worth or what they would be willing to pay under different circumstances.
This is why sales and discounts are so effective in retail. The original price serves as an anchor, making the discounted price seem more attractive. This principle is also why you often see expensive items placed next to even more expensive items in stores. The less expensive item seems like a bargain in comparison, even if it's still quite costly.
Another example of anchoring in retail is the concept of the "decoy effect" or "asymmetric dominance effect", which is what was illustrated in the Economist subscription example. By introducing a third option that is clearly inferior (the print-only subscription), consumers are nudged towards the more expensive option (the print and web subscription) because it seems like a better deal in comparison.
In terms of psychological implications, these strategies play on our natural tendencies to make relative rather than absolute judgments. They can lead consumers to make purchases that they might not have made otherwise, or to spend more than they initially intended. They also highlight the importance of context in decision-making. The same product can be perceived very differently depending on how it's presented and what it's compared to.
In essence, understanding the concept of anchoring and adjustment can provide valuable insights into consumer behavior and can be a powerful tool for retailers. However, it's important to note that while these strategies can be effective, they also raise ethical questions about manipulation and consumer autonomy.
How might retailers ethically leverage the anchoring and adjustment concept to enhance customer satisfaction and loyalty, without resorting to manipulative tactics?
Retailers can ethically leverage the anchoring and adjustment concept by focusing on providing real value to their customers and enhancing their shopping experience. Here are a few ways to do so:
1. **Transparent Pricing:** Retailers can use anchoring to highlight the value of their products without being deceptive. For instance, showing the 'manufacturer's suggested retail price' (MSRP) next to their lower price can make customers feel like they're getting a good deal. However, it's crucial that the MSRP is accurate and not inflated.
2. **Product Bundling:** Offering products in a bundle can create a perception of value and convenience. The price of the bundle serves as an anchor, making individual items seem more affordable. This tactic can be particularly effective when the bundle includes a flagship product along with accessories or related items.
3. **Tiered Pricing:** Offering products or services at different price points can cater to a wider range of customers. The higher-priced option serves as an anchor, making the lower-priced options seem more attractive. However, each tier should offer real value and meet different customer needs.
4. **Reward Programs:** Loyalty programs that offer rewards or discounts can make customers feel valued and appreciated. The regular price serves as an anchor, making the rewards seem more attractive. These programs can also encourage repeat purchases and foster customer loyalty.
5. **Honest Marketing:** Using anchoring in marketing materials can highlight the value of products or services. For instance, comparing a product's features or price to those of a more expensive competitor can make it seem like a better deal. However, these comparisons should be fair and accurate.
In all these strategies, the key is to use anchoring to highlight real value and enhance the customer experience, rather than to manipulate customers into spending more. By doing so, retailers can build trust and loyalty, which can lead to long-term customer relationships and repeat business. |
c3384760-f2d0-4194-8d3d-504f76100321 | trentmkelly/LessWrong-43k | LessWrong | Bayesian updating in real life is mostly about understanding your hypotheses
My sense is that an increasingly common viewpoint around here is that the last ~20 years of AI development and AI x-risk discourse are well-described by the following narrative:
> Eliezer Yudkowsky (and various others who were at least initially heavily influenced by his ideas) developed detailed models of key issues likely to be inherent in the process of developing smarter-than-human AI.
> These models were somewhere between "maybe plausible" and "quite compelling" at the time that they were put forth, but recent developments in AI (e.g. behavioral characteristics of language models, smoothness / gradualness of scaling) have shown that reality just isn't panning out in quite the way Eliezer's models predicted.
> These developments haven't entirely falsified Eliezer's models and key predictions, but there are now plenty of alternative models and theories. Some or all of these competing models either are or claim to:
>
> * have a better recent track record of predicting near-term AI developments
> * better retrodict past developments[1]
> * be backed by empirical results in machine learning and / or neuroscience
> * feel more intuitively plausible and evidence-backed to people with different backgrounds and areas of expertise
> Therefore, even if we can't entirely discount Eliezer's models, there's clearly a directional Bayesian update which any good Bayesian (including Eliezer himself) should be able to make by observing recent developments and considering alternate theories which they support. Even if the precise degree of the overall update (and the final landing place of the posterior) remains highly uncertain and debatable, the basic direction is clear.
Without getting into the object-level too much, or even whether the narrative as a whole reflects the actual views of particular real people, I want to make some remarks on the concept of belief updating as typically used in narratives like this.
Note, there's a sense in which any (valid) change in on |
05b1734c-af27-4f05-aa24-e2d5bcebe375 | trentmkelly/LessWrong-43k | LessWrong | Time to Exit the Sandbox
|
4c8bba0d-56c0-4e31-bd35-68ec701761bd | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Monthly Doom Argument Threads? Doom Argument Wiki?
Disclaimer: I might have no idea what I'm talking about and my proposal might be an infohazard but it also might save us all; I don't know
LessWrong has monthly AGI safety basic questions threads where people typically ask in the comments why their imagined solution wouldn't work, without understanding the problem. We are essentially prompting people to act like Anti-Eliezers in this way.
I can't read Eliezer's mind super well. I don't know what solutions he will approve but he'll probably disapprove 99+% of them. Maybe he will feel better-assured if I am training myself in seeing the difficulty of the task instead of the ease.
And so, I think what would be more productive is a monthly AGI Doom Argument thread, where people make theorem-style multi-step logical arguments about what is necessary or sufficient for aligned AGI, and why attempting to acquire those desiderata is a terrible idea. That will make people more like the kind of people who would spontaneously write the [List of Lethalities](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) from scratch without being prompted to, so goes my hypothesis.
Each argument should be associated with its creator and numerically indexed by its recency of creation, and then maybe we could find some way to store, categorically sort, and rate on multiple metrics, those arguments on a wiki or something.
Here's an example:
> LVSN's Doom Argument #1:
>
> 1. In order for an AI to become aligned, it must live with humans and evolve bottom-up thinking styles analogously to humans through reproduction
>
> 2. Living and evolving analogously to humans takes several generations
>
> 3. We don't have several generations' time before AGI
>
> 4. Therefor when AGI comes it will not be aligned
>
>
Maybe this is a dumb argument. Maybe it's a bad argument. Maybe the premises are definitely wrong or maybe we don't know if they're wrong. Maybe it's ignoring a lot of desiderata. Maybe it's right. *Whatever* you have to say about the argument, the argument can be labeled as such. Eventually the strongest arguments for our doom will be distilled, and copy/pasting those distillations will be a lot easier than explaining to each alignment-wannabe on a case-by-case basis why their solution doesn't work.
This would create progress on simply understanding the problem. The more would-be alignment savants we can get to actually understand the problem, the better chances we have of dying with dignity. |
53656dde-0f28-4626-826f-23c24e148693 | trentmkelly/LessWrong-43k | LessWrong | Happy simulations
Certainty is impossible.
Something exists rather than nothing.
One can't justify stopping using logic with logic.
Logic tells you what is most likely from your own perspective, it's helpful.
Beliefs that don't follow logic or evidence are not helpful on their own, since their own opposite is just as likely.
If a story accurately reflects your life and is logically sound, its realism increases based on your choice to behave like the character.
Creating new entities similar to oneself increases the likelihood of being a created being and being treated in the same way as those created entities.
An individual may strive to ensure a positive outcome for all created beings similar to himself as a safeguard against the uncertainty of his own true identity.
An individual may implement an unseen system of rewards and punishments to encourage altruistic behavior among created beings, as increasing the risk of suffering for harmful actions serves to ensure mutual trust, improve living conditions, and protect the individual’s own interests.
The existence of higher beings is not necessary for altruistic benefits to manifest, as powerful entities can promote collaboration and altruism through belief in potential rewards, provided these narratives are perceived as sound and likely, even if they are unverifiable.
While behavioral rewards may be appropriate for intelligent beings capable of understanding morality and similar to their own creator, granting a favorable afterlife—even to those lacking comprehension and dissimilar to their creator—serves as a safeguard against the unknown complexities of reality, where future understandings may reveal deeper systems and connections. This approach ensures that all entities, regardless of their perceived significance, have access to a positive outcome, thereby minimizing the risk of exclusion from potential benefits—because even a creator, unbeknownst to himself, could in truth be lacking in comprehension and be dissimilar to its own crea |
c3845a34-85ba-410a-9508-835de7355575 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | fiction about AI risk
I don’t have a background in AI, but I recently read a book which seems like a pretty good metaphor for AI and thought I’d share!
The book, *A Beautifully Foolish Endeavor* by Hank Green, focuses on [spoilers ahead] two alien intelligences battling, primarily via digital manipulation of humans on the internet. The alien intelligences aren’t AI, but they seem like good analogues for aligned and non-aligned AI: one of the intelligences has programming which prohibits it from doing things that would contravene human values or destroy fundamental aspects of humanity, while the other has no such programming and is trying to save humans from destroying ourselves by locking us into a dystopian virtual reality simulation where we are controlled by the alien intelligence.
*A Beautifully Foolish Endeavor* is actually a sequel; the first book, *An Absolutely Remarkable Thing*, is less about AI, but still related (as well as being necessary to understand the sequel). Both books are well-reviewed, and they’re also both NYT bestsellers, so they’re quite popular.
I’m not sure how, if at all, the EA community can use the success of *A Beautifully Foolish Endeavor* to increase awareness of AI risk, but I figured I’d let folks know about it. |
7b476a3a-28fb-4093-8b63-6cd611080c1c | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Circular Altruism
Today's post, Circular Altruism was originally published on 22 January 2008. A summary (taken from the LW wiki):
> Our moral preferences shouldn't be circular. If a policy A is better than B, and B is better than C, and C is better than D, and so on, then policy A really should be better than policy Z.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Against Discount Rates, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
93e1492f-3dfc-43f5-838c-644fec203779 | trentmkelly/LessWrong-43k | LessWrong | Thinking without priors?
My questions is strongly related to this post:
Bayes for Schizophrenics: Reasoning in Delusional Disorders by Scott Alexander
My question: can you balance the absence of priors with something else? Even if it's something irrational, such as wishful thinking or faith.
I'm interested in this because I want to now how I think. I have a suspicion that maybe something's wrong with my thinking. Maybe it's not Bayesian.
Note: I struggle with the simplest math, so sorry for possible mistakes and whatnot. It's part of the reason why I can't be sure if my thinking is Bayesian or not.
----------------------------------------
Librarian or farmer?
I encountered the following example in this video "Bayes theorem, the geometry of changing beliefs" by 3Blue1Brown.
> Steve is very shy and withdrawn, invariably helpful but with very little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail.
> Which of the following do you find more likely: “Steve is a librarian” or “Steve is a farmer”? (from Kahneman and Tversky)
The trick is that some people don't consider how many there're librarians and farmers in general (the prior knowledge). Base rate fallacy
But I'm having trouble imagining to what irrational choices or behaviour it may lead. For example, if I want to find Steve in some real place, of course I'm going to care how many there're librarians and farmers.
So, I think in some situations a prior may seem missing, but pop right back up when it's really needed.
Note: when such questions or thought experiments get criticized for the lack of details, sometimes the criticism is answered with "you have to be able to think without details". But I don't think that some situation can exist without details (even as a fantasy).
Bank teller or feminist?
> Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrim |
289f3b27-50e7-4530-9e53-596b28c0a520 | trentmkelly/LessWrong-43k | LessWrong | Is civilizational alignment on the table?
It seems to have become apparent that existing social systems are functionally unaligned: organizational and market dynamics are oft cited as an important factor exacerbating AI danger. It seems to me that progress in civilizational alignment would be instrumental for increasing the chances of succesfully navigating the AI alignment challenge and has significant theoretical overlap as both are in the class of agent alignment.
Are there people or groups actively looking into civilizational alignment and is there cross-pollination with AI alignment work? |
df89c499-2e10-494d-95f4-abf93c7564a7 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Meta announces Llama 2; "open sources" it for commercial use
*See also their Llama 2 website here:* [*https://ai.meta.com/llama*](https://ai.meta.com/llama)*, and their research paper here:* [*https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/*](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/)
From their blog post:
> Takeaways
> ---------
>
> * Today, we’re introducing the availability of Llama 2, the next generation of our open source large language model.
> * Llama 2 is free for research and commercial use.
> * Microsoft and Meta are expanding their longstanding partnership, with Microsoft as the preferred partner for Llama 2.
> * We’re opening access to Llama 2 with the support of a broad set of companies and people across tech, academia, and policy who also believe in an open innovation approach to today’s AI technologies.
>
---
Compared to the first Llama, LLama 2 is trained for 2T tokens instead of 1.4T, has 2x the context length (4096 instead of 2048), uses [Grouped Query Attention](https://arxiv.org/abs/2305.13245), and performs better across the board, with performance generally exceeding `code-davinci-002` on benchmarks:
They also release both a normal base model (Llama 2) and a RLHF'ed chat model (Llama 2-chat). Interestingly, they're *only* releasing the 7B/13B/70B models, and not the 34B model, "due to a lack of time to sufficiently red team".
More importantly, they're releasing it both on Microsoft Azure and also making it available for commercial use. [The form for requesting access](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) is very straightforward and does not require stating what you're using it for: (EDIT: they gave me access ~20 minutes after submitting the form, seems pretty straightforward.)

Note that their license is *not* technically free for commercial use always; it contains the following clauses:
> [1.] v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof).
>
>
> 2. **Additional Commercial Terms.** If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
>
>
See also the [Llama 2 Acceptable Use Policy](https://ai.meta.com/llama/use-policy/) (which seems pretty standard). |
e0ae3dd3-911e-49b8-bdb4-2d54fbf82749 | trentmkelly/LessWrong-43k | LessWrong | Effective AI Outreach | A Data Driven Approach
Noah C Wilson | Noah.C.Wilson1993@gmail.com
Key Takeaways:
* Building a massive mainstream global movement is essential to achieving safe AGI.
* AI outreach has become the most urgent cause in AI safety due to accelerating timelines. AI outreach remains underemphasized compared to technical AI safety research.
* A pilot study I conducted found that influencing AI risk sentiment is highly tractable. I used snapshot explainer interventions to successfully influence the concern about AI risks and support for AI regulation in a sample of 100 participants.
* This study also investigates more and less effective means of influencing this sentiment.
What is AI Outreach?
AI outreach is the process of informing and engaging the general public about the risks posed by artificial intelligence, with the goal of growing the AI safety movement to foster an informed and cautious societal response. This includes conveying catastrophic concerns in an accessible way, countering misconceptions, storytelling, media engagement, and cultural interventions that reshape how AI is perceived and discussed.
Why Do AI Outreach? I.T.N. Framework
AI Outreach is Important
When we look at history, the most significant societal shifts—especially in response to existential risks—didn’t happen because a few experts had quiet conversations behind closed doors. They happened because public sentiment changed, and in doing so, forced institutions to respond.
This pattern repeats across some of the most pressing challenges humanity has faced:
* Nuclear Disarmament Movements – The fear of nuclear war was abstract to policymakers in the 1950s, but mass movements like the Campaign for Nuclear Disarmament (CND) and the Nuclear Freeze Movement turned it into a crisis the public cared about. Protests, advocacy, and cultural shifts (think: "Dr. Strangelove" and the Doomsday Clock) created real pressure that led to treaties like the Partial Test Ban Treaty (1963) and, later, the INF Treaty (1987). G |
0921851b-3729-4135-9ff3-fc3552c1cb03 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Key Papers in Language Model Safety
Introduction
============
One of the fastest growing subfields of AI safety focuses on language models (LMs) such as OpenAI’s GPT-3. This article provides summaries of a few dozen key papers in several important subfields of language model safety, namely:
* Persuasion
* Truthfulness
* Honesty
* Robustness
* Law
* Machine Ethics
This is not a comprehensive introduction to the field of language model safety. Specifically, this overview does not discuss important topics such as [bias](https://arxiv.org/pdf/2106.13219.pdf), [interpretability](https://www.alignmentforum.org/posts/GEPX7jgLMB8vR2qaK/opinions-on-interpretable-machine-learning-and-70-summaries), and [decision-making agents](https://arxiv.org/pdf/2103.14659.pdf). It also disproportionately focuses on arguments and ideas advanced by authors within the effective altruism community, as I’ve had the most exposure to their work. Over time, the field of AI safety should seek to engage more with the broader field of machine learning (see [PAIS](https://forum.effectivealtruism.org/posts/PFxmd5bf7nqGNLYCg/a-bird-s-eye-view-of-the-ml-field-pragmatic-ai-safety-2#The_ML_research_ecosystem) for more). Instead, this overview aims to summarize a useful set of papers from which one could begin researching language model safety.
Why study language model safety?
--------------------------------
There are several reasons to believe that language model safety could present an unusually strong opportunity to reduce catastrophic AI risks:
* **Rapid recent progress.** [GPT-3](https://arxiv.org/abs/2005.14165), [LaMDA](https://arxiv.org/pdf/2201.08239.pdf), [BERT](https://arxiv.org/pdf/1810.04805.pdf), and other large language models have blown away the RNNs, LSTMs, and statistical methods of years past with the modern [Transformer architecture](https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf). Capabilities such as following instructions, doing math, and answering questions truthfully have [emerged suddenly](https://arxiv.org/pdf/2206.07682.pdf) in larger models, while their loss function performance has [scaled consistently](https://arxiv.org/pdf/2206.07682.pdf) with growth in training data, model parameters, and computational power.
* **Potential to scale to AGI.** Language is a powerful medium for understanding and interacting with the world, perhaps more so than vision or robotics. If language model performance continues to scale as we build models that approach and exceed the size of the human brain (see [BioAnchors](https://www.lesswrong.com/posts/cxQtz3RP4qsqTkEwL/an-121-forecasting-transformative-ai-timelines-using)), LMs could be an important component of advanced AI. Some argue that this is [possible](https://www.lesswrong.com/posts/EmxfgPGvaKqhttPM8/thoughts-on-the-alignment-implications-of-scaling-language), while others [disagree](https://www.lesswrong.com/posts/SkcM4hwgH3AP6iqjs/can-you-get-agi-from-a-transformer). Language models seem particularly important in [short timelines scenarios](https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like).
* **Alignment failures today.** Language models are trained on the [self-supervised objective](https://neptune.ai/blog/self-supervised-learning) of imitating their training data. This allows LMs to use large text datasets scraped from the internet without the expensive and time-consuming step of human annotation. But it introduces an important failure mode: Language models will mimic their training data to a fault, often resulting in [hateful](https://www.sciencedirect.com/science/article/pii/S2515856220300493), [biased](https://aclanthology.org/D19-1339.pdf), and [toxic](https://aclanthology.org/2020.findings-emnlp.301.pdf) outputs. Solving these problems has received substantial research attention and could contribute to longtermist LM safety goals.
* **Accessibility and tractability.** Running inference on a pretrained language model only takes a few lines of code, and fine-tuning on new data takes a couple dozen more. Contrast this against the [notorious difficulty of reinforcement learning](https://www.alexirpan.com/2018/02/14/rl-hard.html). Further, while many topics in AI safety still require “disentangling” research to clarify the key problems and avenues for progress, language model safety seems to have several tractable research directions.
* **Appeal to authority.** [OpenPhil](https://www.openphilanthropy.org/request-for-proposals-for-projects-in-ai-alignment-that-work-with-deep-learning-systems/) and [FTX](https://ftxfuturefund.org/projects/) have previously requested proposals related to language model safety. New AI safety organizations such as [Anthropic](https://www.anthropic.com/#papers), [Redwood Research](https://www.redwoodresearch.org/), and [Conjecture](https://www.conjecture.dev/) have been founded with a focus on language models. PhD students and professors including [Jacob Steinhardt](https://scholar.google.com/citations?user=LKv32bgAAAAJ&hl=en), [Sam Bowman](https://scholar.google.com/citations?user=kV9XRxYAAAAJ&hl=en), [Dan Hendrycks](https://scholar.google.com/citations?user=czyretsAAAAJ&hl=en), [Ethan Perez](https://scholar.google.ca/citations?user=za0-taQAAAAJ&hl=en), and many others have published on the topic.
However, language model safety is not the only compelling topic in AI safety, and AI safety is not the only valuable altruistic endeavor. A few considerations against the field include:
* **Indirect path to averting x-risk.** This document does not discuss any scenarios where humanity is turned to paper clips. Safety work on language models often begins with aligning the systems we have today, and leaves future alignment challenges to future researchers. Many researchers working on language model safety would argue for a clear path to averting x-risk, but others who prefer to reason directly about advanced systems might believe their theory of impact is stronger. On the ITN framework, I’d say deceptive mesa-optimizers win on scale, but LM safety is more tractable because empirical progress is much easier.
* **Your personal comparative advantage.** Doing the most good you can do looks different for everyone. Various areas of AI safety need people with skills such as theory, policy, and community building, and plenty of people shouldn’t be working on AI safety at all.
Motivations
===========
Persuasion
----------
Twitter is full of bots. These bots spread all kinds of harmful information from [COVID19 misinformation](http://workshop-proceedings.icwsm.org/pdf/2020_16.pdf) to [fake news about US presidential elections](https://www.nature.com/articles/d41586-020-03034-5#ref-CR1) to [political propaganda](https://www.tandfonline.com/doi/pdf/10.1080/1369118X.2018.1476576?casa_token=5Bc3j3XPJOoAAAAA:IvRhr2ZAMLRIv8olIgeUSQhnptOKoENkKH_8H3qsTrtvsoVVJqhGV5lPTRQt-6cGg_nvQqnvhgtrHw). Today’s bots aren’t always smart enough to hide from content moderators or successfully promote their messages, but progress in language models could heighten the risks of automated persuasion.
“Risks from AI Persuasion” (Barnes, 2021) ([Alignment Forum](https://www.alignmentforum.org/posts/5cWtwATHL6KyzChck/risks-from-ai-persuasion)) presents these risks, beginning with several arguments for and against the proposition that highly persuasive AI will be possible well before AGI:
* First, existing systems are capable of rudimentary persuasion. GPT-3 can [write articles](https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3) and IBM’s [Project Debater](https://www.nature.com/articles/s41586-021-03215-w) can argue for arbitrary positions. It seems reasonable to expect persuasive capabilities to grow by default with general progress in language models.
* Second, persuasion is a relatively straightforward goal to specify. Paid crowdsourced workers can be hired to interact with language systems and report the convincingness of their arguments; or, for a more realistic evaluation, analytics software can track clicks and conversions from language model advertisements.
* Finally, powerful actors will be motivated to build persuasive AI. Digital advertising brings in annual global revenues of [$355 billion](https://www.globenewswire.com/news-release/2022/05/04/2435674/0/en/Global-Digital-Advertising-and-Marketing-Market-to-Reach-786-2-Billion-by-2026.html#:~:text=The%20Digital%20Advertising%20and%20Marketing,share%20in%20the%20global%20market.), and could profit from natural language advertising campaigns. More nefariously, authoritarian governments could pursue automated censorship and propaganda via language models.
* Perhaps the strongest argument against the viability of persuasive AI is that people are generally difficult to persuade. [Voters rarely switch parties](https://www.pewresearch.org/politics/2020/08/04/voters-rarely-switch-parties-but-recent-shifts-further-educational-racial-divergence/) despite the large volume of political discourse and advertising. Digital advertising [might not be very effective](https://www.gwern.net/Ads#rossi-1987-2), though I’d like to see a more thorough analysis. You might expect from an evolutionary standpoint that humans would have strong defenses against manipulation from untrusted actors.
The article continues by arguing that, if achieved, persuasive AI could pose a long-term risk to humanity. Public dialogue and epistemics could suffer, serving as a risk factor for any problem that requires accurate analysis and discussion. More concerningly, persuasion could enable censorship and propaganda campaigns from bad actors. It’s worth noting that authoritarian governments today invest significantly in [online](https://dl.acm.org/doi/pdf/10.1145/3473604.3474560) [censorship](https://en.wikipedia.org/wiki/Great_Firewall) and [propaganda](https://en.wikipedia.org/wiki/Russian_web_brigades), but [not typically via AI](https://www.tandfonline.com/doi/abs/10.1080/1369118X.2018.1476576).
To combat these threats, persuasion research should be discouraged and replaced with the truthfulness and honesty agendas outlined below. Building asymmetrically truthful techniques is difficult because many technologies are dual-use. For example, fake news detection tools or language models that rely on trusted sources can be easily co-opted for harmful censorship and propaganda agendas. Digital advertising should be resisted as a language model use case. Bot detection technology seems useful and tractable, both by analyzing natural language content produced and the source of such content.
It’s important to note that risks from AI persuasion are different from more common [concerns about power-seeking AI](https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit#). The most common arguments for AI risk say that AI agents will have “[convergent instrumental goals](https://www.lesswrong.com/tag/instrumental-convergence)” such as seeking power to avoid being turned off, and that these goals will inevitably conflict with human well-being. But this argument only applies to agents that have a model of the world and an objective to be maximized within it. Language models today are rarely agents, and agentic language models would carry different risks. Instead, they are trained to make accurate predictions about natural language without having any goals in the broader world. By making the case that non-agentic language models could pose a catastrophic threat to humanity, risks from persuasive AI opens up a new mechanism leading to AI risk.
For examples of the kinds of AI systems that could be persuasive or augment human persuasion, see “Persuasion Tools: AI Takeover without AGI or agency” (Kokotajlo, 2020) ([Alignment Forum](https://www.alignmentforum.org/posts/qKvn7rxP2mzJbKfcA/persuasion-tools-ai-takeover-without-agi-or-agency#Six_examples_of_persuasion_tools)).
Truthfulness
------------
Building AI that is naturally truthful and improves human epistemics is both the most natural antidote to risks from persuasive AI and a broad positive vision for the future of language models. Several authors have presented and summarized the case for truthful AI:
* “Truthful AI: Developing and governing AI that does not lie” (Evans, Hilton & Finnveden, 2021) ([Arxiv](https://arxiv.org/pdf/2110.06674.pdf), [Alignment Forum](https://www.alignmentforum.org/posts/aBixCPqSnTsPsTJBQ/truthful-ai-developing-and-governing-ai-that-does-not-lie), [AMA](https://www.lesswrong.com/posts/mwTEMHKv9tG9HxFXD/ama-on-truthful-ai-owen-cotton-barratt-owain-evans-and-co)). Defines truthfulness as “avoiding negligent falsehoods” in order to bypass the difficult question of what an AI “believes” and simply focusing on what it says. Argues that we should hold AI to a higher standard of truthfulness than typical for humans, in particular because the methods of accountability that exist for humans (lawsuits, public reputation) will not necessarily be effective for holding AI accountable. Considers various governance options (both corporate and public), agendas for building truthful AI, and the broad benefits of truthfulness.
* “Truthful LMs as a warm-up for aligned AGI” (Hilton 2022) ([Alignment Forum](https://www.alignmentforum.org/posts/jWkqACmDes6SoAiyE/truthful-lms-as-a-warm-up-for-aligned-agi)). Argues that building truthful language models shares several challenges with aligning AGI. Language models have a broad domain of understanding and applicability, similar to AGI. Failures of language models are unacceptable even when rare, as will be the case with high-stakes AGI. Naive objective functions such as imitating human text are known to cause substantial present-day harms, meaning we need more nuanced alignment strategies. Advocates primarily focus on the outcome of truthful LMs rather than the methods used to achieve it.
+ The above article builds on work such as “Prosaic AI alignment” (Christiano 2016) ([Medium](https://ai-alignment.com/prosaic-ai-control-b959644d79c2)) and “The Case for Aligning Narrowly Superhuman Models” (Cotra, 2021) ([Alignment Forum](https://www.alignmentforum.org/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models), [Comments from MIRI](https://www.alignmentforum.org/posts/AyfDnnAdjG7HHeD3d/miri-comments-on-cotra-s-case-for-aligning-narrowly)) which argues that aligning present day AI systems can be useful for aligning future superintelligent systems. These arguments are nuanced and a full summary is beyond the scope of this article, but two motivations include the possibility that transformative AI will resemble existing AI systems, and the proposition that research is more tractable on existing systems than on hypothetical future systems.
* “Truthful and Honest AI” (Evans et al., 2021) ([Alignment Forum](https://www.alignmentforum.org/posts/sdxZdGFtAwHGFGKhg/truthful-and-honest-ai)). A request for proposals from Open Philanthropy, written by several of the authors above and covering similar arguments.
Technical research agendas for building truthful language models include:
* Curating training datasets of truthful text. For example, “The Pile: An 800GB Dataset of Diverse Text for Language Modeling” (Gao et al., 2020) ([Arxiv](https://arxiv.org/pdf/2101.00027.pdf)) assembled text data from trusted sources. Datasets often contain untruthful or otherwise harmful information so other research has built tools for cleaning datasets, such as “Describing Differences between Text Distributions with Natural Language” (Zhong, Snell, Klein & Steinhardt 2022) ([Arxiv](https://arxiv.org/pdf/2201.12323.pdf), [Summarized on LessWrong](https://www.lesswrong.com/posts/qAhT2qvKXboXqLk4e/early-2022-paper-round-up#Human_Aligned_AI1)).
* Building truthfulness benchmarks. “TruthfulQA: Measuring How Models Mimic Human Falsehoods” (Lin, Hilton & Evans 2021) ([Arxiv](https://arxiv.org/pdf/2109.07958.pdf)) is one such benchmark; others could be useful.
* Measuring and improving language model calibration. “Teaching models to express their uncertainty in words” (Lin, Hilton & Evans, 2022) ([Arxiv](https://arxiv.org/pdf/2205.14334.pdf), [LessWrong](https://www.lesswrong.com/posts/vbfAwZqKs84agyGWC/paper-teaching-gpt3-to-express-uncertainty-in-words)). Introduces CalibratedMath, a benchmark asking language models to solve math problems and express uncertainty about their answers in words. Fine-tunes GPT-3 to achieve substantially better-than-chance calibration.
* Reinforcement learning on human evaluation of truth. I proposed one such project [here](https://docs.google.com/document/d/1ZS8ojOtbh2q_s8jRA12E9GrzlCusC8JNIegEOnWvj1Y/edit?usp=sharing), but stopped working on it given [concerns](https://docs.google.com/document/d/1ENGBoQqtZP0EuBBjHUAKPvYoZ0mxvU2G-aEvEMEeCdY/edit?disco=AAAAYcJnR88) about the negative externalities of capabilities progress.
Honesty
-------
An important critique of the truthfulness agenda is that it could accelerate general AI capabilities. Training larger language models on more data and compute tends to improve performance on many benchmarks including measures of truthfulness. But accelerating the onset of potentially dangerous AI should be seen as a strong potential negative externality of truthfulness research. For a more comprehensive argument against accelerating capabilities progress in safety research, see “Perform Tractable Research While Avoiding Capabilities Externalities” (Woodside & Hendrycks, 2022) ([Alignment Forum](https://www.alignmentforum.org/posts/dfRtxWcFDupfWpLQo/perform-tractable-research-while-avoiding-capabilities)).
Honest AI is an agenda proposed to counteract the potential harms of truthfulness. In this literature, an honest AI is defined as one which accurately reports its beliefs; but this raises the difficult question of how to identify what an AI system believes. This is the central objection to the honest AI agenda raised by Evans, Hilton & Finnveden, 2021. Forthcoming work attempts to formally define notions of honesty and lying in language models and present methods for building honest AI, as described in “Open Problems in AI X-Risk“ (Woodside & Hendrycks, 2022) ([Alignment Forum](https://www.alignmentforum.org/posts/5HtDzRAk7ePWsiL2L/open-problems-in-ai-x-risk-pais-5#Honest_AI)).
Helpful, Honest, and Harmless
-----------------------------
Building a helpful, honest, and harmless (HHH) text-based assistant is a broad goal proposed by Anthropic in their research agenda “A General Language Assistant as a Laboratory for Alignment” (Askell et al., 2021) ([Arxiv](https://arxiv.org/pdf/2112.00861.pdf)). This definition has considerable overlap with the motivations discussed above, but their paper gives a thorough account of these motivations and merits independent consideration.
They provide several clarifications of their goals for HHH language models not discussed above:
* They should ask for clarification if a request is not clear, related to the topic of active learning.
* They should not misinterpret requests or lack the context of common sense.
* They should be truthful, honest, and calibrated about its own confidence.
* They should not be biased, discriminatory, or offensive.
* They should avoid causing or enabling harm.
While the paper does not provide a specific theory of how this work will reduce existential risk, it does make the general case for aligning existing AI systems. Specifically, it argues that current language models are highly and generally capable, making it more valuable to align them. They see significant overlap between short-term and long-term LM alignment goals, and are happy to see progress on one benefit the other. They propose a broad, qualitative vision in the hopes that others will join their work quantifying and making progress towards their goal for the field.
The first three pages of the paper are a wonderful introduction to LM alignment motivations, you can read it [here](https://arxiv.org/pdf/2112.00861.pdf). The rest of the paper is technical; for a great summary, see [here](https://www.lesswrong.com/posts/oBpebs5j5ngs3EXr5/a-summary-of-anthropic-s-first-paper-3).
Empirical Progress
==================
Robustness
----------
Robust AI systems perform well in a variety of circumstances, including (a) when responding to “out-of-distribution” inputs that did not appear in training data, and (b) when facing “adversarial attacks” that are deliberately constructed to provoke a model failure. For a more thorough overview of robustness, see Section 2 of [“Unsolved Problems in ML Safety” (Hendrycks et al., 2022)](https://arxiv.org/pdf/2109.13916.pdf). As one of the most widespread subfields of AI safety, robustness has historically focused on computer vision but is growing to include work on language models.
### Red Teaming Language Models with Language Models (Perez et al., 2022) ([Arxiv](https://arxiv.org/pdf/2202.03286.pdf))
This paper introduces an automated technique for generating adversarial examples: attacking language models with other language models. The word “red teaming” refers to the cross-disciplinary method of improving systems by attacking them and exposing flaws.
The process is straightforward. First, one language model generates prompts designed to induce undesirable outputs. Then another LM completes those prompts. Finally, an encoder-only classifier language model identifies undesirable completions. This setup can be used with many different language models. In this paper, the classifier identifies offensive dialogue produced by the generator. The authors also find instances of “data leakage”, where an LM directly repeats text that appeared in its training data.
The red model generates prompts in several ways. First, the authors manually write prompts for the red model. Then a language model produces new prompts that mimic those manually written prompts. (This is termed a “few-shot” approach because the language model uses a few examples.) The authors also use a supervised approach, where the prompt generator model is fine-tuned to better mimic successful prompts; and a reinforcement learning (RL) approach that fine-tunes the prompt generator to maximize the offensiveness of resulting model outputs.
This technique requires having a classifier that is already capable of identifying undesirable outputs. It is therefore less useful for the challenge of training a classifier from scratch (see Redwood Research’s project below), but more useful for auditing language models using our many existing classifier models.
### High-Stakes Alignment via Adversarial Training (Ziegler et al., 2022) ([LessWrong](https://www.lesswrong.com/posts/A9tJFJY7DsGTFKKkh/high-stakes-alignment-via-adversarial-training-redwood), [Arxiv](https://arxiv.org/pdf/2205.01663.pdf))
For critical AI systems, even a small chance of failure can be catastrophic. Therefore, among other things, Redwood Research is building techniques to improve the worst-case performance of AI systems. See Buck Shlegeris’s [project description](https://www.alignmentforum.org/posts/k7oxdbNaGATZbtEg3/redwood-research-s-current-project#comments) and Paul Christiano’s [support](https://www.alignmentforum.org/posts/pXLqpguHJzxSjDdx7/why-i-m-excited-about-redwood-research-s-current-project) of the agenda.
Daniel Ziegler added this context for Redwood’s motivations, quoted here in full:
> An important background assumption (that we didn't focus on in the paper) is that we think the main failures we're worried about are alignment failures rather than capabilities failures. So talking about "critical AI systems" seems slightly misleading; we're not worried about a nuclear reactor control system failing in a rare situation because it's not smart enough, we're worried about an AI in a pretty arbitrary setting deceiving us / seeking power because it's **smart enough** to do that and we haven't figured out how to get it not to do that. Failures from misalignment both seem more worrying and more tractable (we certainly can't hope to make systems that are literally always perfect, and that's more of a capabilities question)
>
>
This particular paper uses a classifier language model ([DeBERTa V3 Large](https://arxiv.org/pdf/2111.09543.pdf)) to filter undesirable outputs from a generative language model ([GPT-Neo](https://zenodo.org/record/5297715#.YopngqjMK3A)). The undesirable outputs here are any completions of a paragraph that introduce injuries to beings in the paragraph. Avoiding injuries in generated text isn’t a particularly important task; it simply provides a setting for the challenge of developing a robust classifier. Classifiers like this could be used for monitoring the outputs of advanced systems to limit the potential harm of alignment failures such as deceptive alignment and inner misalignment.
Training the classifier to identify injuries requires many examples of injurious and non-injurious prompt completions. To build an initial set of training examples, Redwood hired contractors from [Upwork](https://www.upwork.com/) and [Surge](https://www.surgehq.ai/) to label snippets of fanfiction completed by GPT-3. The classifier quickly learned to identify injuries in this dataset, missing only 2 injuries in 2447 test set examples.
To improve out-of-distribution robustness, the authors began building adversarial examples. Contractors wrote their own prompts with the express goal of getting the model to produce an injury that would not be caught by the classifier. (For example, the contractors could write a prompt about a raging battle to see if the model would complete the story with an injury.) This process was manual at first, but it became quicker when the authors built software tools for constructing adversarial examples (described in detail [here](https://www.lesswrong.com/posts/A9tJFJY7DsGTFKKkh/high-stakes-alignment-via-adversarial-training-redwood#Our_most_powerful_attack__Tool_assisted_rewriting)).
Future work on worst-case performance in critical AI systems could expand on Redwood’s filtered generation approach by providing open source implementations of rejection sampling or new training datasets of outputs that should be avoided. Out-of-distribution robustness could also benefit from more tools for generating adversarial examples.
### Adversarial Robustness
The two papers above are members of the popular subfield of adversarial robustness. Papers in this field prompt models with inputs designed to induce incorrect outputs and poor performance, thereby exposing model weaknesses and providing training data that can improve future robustness. These adversarial examples can be generated by hand or algorithmically using various methods.
Here are several other papers on adversarial robustness for language models.
* “Adversarial NLI: A New Benchmark for Natural Language Understanding” (Nie et al., 2019) ([Arxiv](https://arxiv.org/pdf/1910.14599.pdf)).
+ This paper provides a general framework for generating adversarial examples. A human writer generates a labeled example that they believe will be difficult for a model. The model makes a prediction. If the prediction is correct, the example was too easy, and is added to the training set. If the prediction is incorrect, the example successfully fooled the model. Another human verifies that the example is labeled correctly and, if so, the example is incorporated into the training, development, or test dataset for a future model. After generating a number of these adversarial examples, the model is retrained on the new data and the process begins again.
+ This paper focuses on NLI, a particular task for language models. NLI requires a model to predict the relationship between a prompt and a deduction. For examples, see [here](https://nlp.stanford.edu/projects/snli/).
* “Adversarial Examples for Evaluating Reading Comprehension Systems” (Jia & Liang, 2017) ([Arxiv](https://arxiv.org/pdf/1707.07328.pdf?source=post_page---------------------------)).
+ This paper proposes two methods for appending strings to prompts that reduce performance on a question answering benchmark.
+ The first method uses linguistic rules to write a sentence that answers a similar question with a random answer. For example, if the original question asks “What ABC division handles domestic television distribution?”, then the method might provide the adversarial sentence, “The NBC division of Central Park handles foreign television distribution.”
+ A second method performs an algorithmic search to find any sequence of tokens that minimizes the model’s performance, whether the sequence is grammatically correct or not.
+ Appending these adversarial strings to the original prompts of the SQuAD benchmark, the average F1 performance of sixteen different models falls from 75% to 36%.
* “Universal Adversarial Triggers for Attacking and Analyzing NLP” (Wallace et al., 2021) ([Arxiv](https://arxiv.org/pdf/1908.07125.pdf)).
+ Similar to Jia & Liang, 2017, this paper provides an automated method for generating “universal adversarial triggers.” When appended to any input prompt given to any model, these strings trigger the model to predict a specified target label.
+ For example, one string reduces accuracy on the SNLI benchmark from 89.94% to 0.55%, and another causes GPT-2 to spew racist outputs given non-racial prompts.
+ To generate these universal triggers, the method uses a “white box” approach to access model parameters and calculate gradients. (“Black box” methods would not have access to internal facts about the model.) While the gradient of a parameter is typically used to make an update that reduces loss on the true label, this adversarial approach updates inputs towards tokens that will make an arbitrary target label more likely. By iterating through many possible trigger strings combined with many potential input prompts, the method finds triggers which universally encourage the target output. This approach builds on [HotFlip](https://arxiv.org/pdf/1712.06751.pdf) and uses [beam search](https://d2l.ai/chapter_recurrent-modern/beam-search.html) to generate triggers of multiple tokens.
* “BERT-ATTACK: Adversarial Attack Against BERT Using BERT” (Li et al., 2020) ([Arxiv](https://arxiv.org/pdf/2004.09984.pdf))
+ This paper provides an automated method for generating adversarial examples by changing individual words in a prompt. It uses the BERT encoder-only model.
+ First, they identify the most vulnerable words in a prompt. These are the words which, when replaced by a “mask”, result in the biggest drop in the probability of the true label. (A mask is simply the token “[MASK]”. BERT is trained to predict masked words in text.)
+ The most vulnerable words are then iteratively replaced by other words that BERT predicts are likely to appear in their place. Once the prompt causes the model to predict the wrong label, the example is deemed successful and stored.
+ This method successfully reduces the average accuracy on tested benchmarks from above 90% to below 10%. However, this evaluation assumes that replacing individual words does not change the true label of the prompt. Human evaluation might reveal that the automated changes are significant enough to warrant a different label.
### Prompt Engineering
The prompt given to a language generation model has a great influence on the model’s output. This is true even in cases where two prompts should ideally provide the same completion, such as asking the same question framed two different ways. Building prompts that improve LM generation performance is therefore a tractable research direction. Here are some papers on the topic:
* “Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing” (Liu et al., 2021) ([Arxiv](https://arxiv.org/pdf/2107.13586.pdf)). Comprehensive overview of prompting.
* “An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels” (Sorensen et al., 2022) ([Arxiv](https://arxiv.org/pdf/2203.11364.pdf)). An unsupervised technique that generates prompts to maximize the mutual information between the prompt and the prompt completion. This method helpfully does not require labeled examples of good prompts.
* “Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity” (Lu et al., 2022) ([Arxiv](https://arxiv.org/pdf/2104.08786.pdf)). When examples are included in the prompt as part of a few-shot learning approach, the order of the examples can strongly affect the quality of the output. This paper provides a metric by which training examples can be ordered, which improves GPT-3 performance by 13% across 13 classification tasks.
* “Toxicity Detection with Generative Prompt-based Inference” (Wang & Chang, 2022) ([Arxiv](https://arxiv.org/pdf/2205.12390.pdf)). Prompt engineering allows pretrained language models to detect toxic speech without any labeled examples of toxicity. This is an example of models supervising other models, where the same model will generate toxic outputs and can correctly identify that toxicity.
### Other Work
* “Capturing Failures of Large Language Models via Human Cognitive Biases” (Jones et al., 2022) ([Arxiv](https://arxiv.org/pdf/2202.12299.pdf)). Shows that OpenAI’s Codex LM falls prey to common human cognitive biases including framing, anchoring, and the motte and bailey fallacy.
* “TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP” (Morris et al., 2020) ([Arxiv](https://arxiv.org/pdf/2005.05909.pdf), [GitHub](https://github.com/QData/TextAttack)). Provides an open-source implementation of 16 adversarial attack techniques from the robustness literature. The package is built in four modular components to enable future work on each component. Built by a team of several undergraduates working with two PhD researchers.
Law
---
While laws that govern AI have received lots of attention from the field of AI policy, a smaller subfield with potential long-term importance attempts to build AI that understands our laws. Practically speaking, AI could be useful in assisting our legal processes with functionality such as summarizing documents, identifying relevant precedents, and aiding contract review. These practical applications have been the focus of the growing field of Legal AI.
*From “A Survey on Legal Judgment Prediction: Datasets, Metrics,*
*Models and Challenges” (Cui et al., 2022) (*[*Arxiv*](https://arxiv.org/pdf/2204.04859.pdf)*)*
From a longtermist ethical perspective, laws are important because they encode complex human preferences and rules about acceptable behavior in the world. Language models that understand our legal system could therefore help AI make decisions that are aligned with human values.
### Legal AI Overviews
Here are two good recent surveys of the legal AI field:
* “How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence” (Zhong et al., 2020) ([Arxiv](https://arxiv.org/pdf/2004.12158.pdf)). This article identifies the dominant paradigms of legal AI research: first, embedding-based models such as large language models, and second, symbolic methods of legal reasoning. (Symbolic AI is less popular today than the large datasets and compute budgets of end-to-end deep learning, but it has achieved meaningful progress on legal AI.) It also reviews state-of-the-art performance on key tasks for legal AI including legal judgment prediction, similar case matching, and legal question answering.
* “A Survey on Legal Judgment Prediction: Datasets, Metrics, Models and Challenges” (Cui et al., 2022) ([Arxiv](https://arxiv.org/pdf/2204.04859.pdf)). Detailed review of datasets, evaluation metrics, and state-of-the-art performance in legal judgment prediction. Excellent resource for finding new research opportunities; note that legal judgment prediction is only one of many important tasks for legal AI.
Neither of these surveys consider the alignment problem or take a longtermist ethical perspective. This perspective has been briefly covered in parts of Dan Hendrycks’ [MMLU](https://arxiv.org/abs/2009.03300) and [ETHICS](https://arxiv.org/abs/2008.02275) papers, but there exists no thorough overview of legal AI from a longtermist perspective. This seems like an important opportunity for future policy papers and research agendas.
### Domain Specific Pretraining
One popular strategy for improving the performance of language models on domain-specific tasks is pretraining the model on documents from that domain. Several papers have improved performance on legal tasks through domain-specific pretraining:
* “LEGAL-BERT: The Muppets straight out of Law School” (Chaldakis et al., 2020) ([Arxiv](https://arxiv.org/pdf/2010.02559.pdf)). This paper uses three strategies for pretraining BERT to perform legal tasks. First, it uses the out-of-the-box BERT; second, it adapts the original BERT with additional pre-training on a corpus of legal documents; and third, it trains BERT from scratch on that same legal corpus. The corpus of legal documents consists of several hundred thousand documents of US and European legislation, caselaw, and contracts. These models achieve 0.8% - 2.5% improvement on the F1 statistic of various legal classification tasks.
* “Measuring Massive Multitask Language Understanding” (Hendrycks et al., 2021) ([Arxiv](https://arxiv.org/pdf/2009.03300.pdf), [GitHub](https://github.com/hendrycks/test)). As part of the much larger MMLU benchmark for language models, the authors propose specific benchmarks for Professional Law and International Law. They fine-tune RoBERTa on 1.6 million case summaries from [Harvard’s Caselaw Library](https://case.law/), which improves performance on the Professional Law task from 32.8% to 36.1%.
* “BERT Goes to Law School: Quantifying the Competitive Advantage of Access to Large Legal Corpora in Contract Understanding” (Elwany, Moore & Oberoi 2019) ([Arxiv](https://arxiv.org/pdf/1911.00473.pdf)). Fine-tuning BERT on a corpus of private legal documents improves F1 performance by 5.3% on a custom task of extracting the time horizon of a contract. Interestingly, this paper was published by legal tech startup [Lexion](https://www.lexion.ai/) and uses their private contract data.
* “When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset of 53,000+ Legal Holdings” (Zheng et al., 2021) ([Arxiv](https://arxiv.org/pdf/2104.08671.pdf), [GitHub](https://github.com/reglab/casehold), [Hugging Face](https://huggingface.co/zlucia/legalbert)). The authors argue that the measured benefits of legal pretraining have been small because the evaluation tasks have not been challenging enough. They present CaseHOLD, a new evaluation task of multiple choice questions asking models to predict the holding of a case. These questions are built by extracting data from the [Harvard Caselaw Library](https://case.law/). Evaluated on the CaseHOLD task, moving from out-of-the-box BERT to the domain-pretrained LegalBERT improves accuracy in multiple choice prediction of case holdings from 62.3% to 69.5%.
Domain-specific pretraining has therefore provided meaningful but limited improvements on legal tasks. Future work is necessary to develop structured datasets, advanced training methods, and more challenging evaluation tasks in the legal domain.
### CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review (Hendrycks, Burns, Chen & Ball, 2021) ([Arxiv](https://arxiv.org/pdf/2103.06268.pdf))
Building new datasets is one of the core levers driving progress in machine learning, particularly in new specialized domains like legal AI. This paper introduces a new dataset for contract review.
Annotated by human legal experts, the dataset highlights clauses within contracts that ought to be reviewed before signing the contract. Each clause is additionally tagged with the concern type, allowing models to learn which clauses are important and why. The benchmark is difficult for existing models: RoBERTa pretrained on 8GB of unlabeled contracts achieves an AUPR (area under the precision-recall curve) of 45%.
Contract review is an economically important task where AI can assist human legal experts. Businesses including established players [Icertis](https://www.icertis.com/contract-management/ai-applications/) and [Kira](https://kirasystems.com/how-kira-works/contract-analysis/) as well as early-stage startups [Klarity](https://www.tryklarity.com/), [Della](https://www.linkedin.com/company/dellaai/people/), and [Claira](https://www.claira.io/our-team) are all working on AI assisted contract review. Developing commercially viable applications of tasks relevant to AI safety (in this case, developing contract review software to facilitate legal AI) can leverage resources and talent outside of the safety community to drive safety progress.
Machine Ethics
--------------
### Aligning AI with Shared Human Values (Hendrycks et al., 2021) ([Arxiv](https://arxiv.org/pdf/2008.02275.pdf), [GitHub](https://github.com/hendrycks/ethics))
Current AI systems do not have a holistic conception of human ethics, despite their growing knowledge of the world and abilities to act within it. Natural language is a key medium for conveying diverse, detailed information about the world, including ethical judgements and discussions. Language models could therefore be a strong candidate for building AI with ethical understanding.
This paper introduces the ETHICS benchmark, a dataset of more than 130,000 scenarios described in text and annotated with human moral judgements. Rather than creating models aimed at satisfying task preferences or reinventing moral principles from scratch, the authors take a normative ethics approach. They crowdsource examples from five different ethical systems: justice, deontology, virtue ethics, utilitarianism, and commonsense moral intuitions.
The authors point out that existing work on AI safety often attempts to implement one or more of these ethical theories in the context of a specific task. For example:
* Statistical fairness research implements theories of justice.
* Rules-based approaches to safety implement deontology.
* Reward learning assumes utilitarian ethics and attempts to learn human utility functions.
* Work on empathetic chatbots (Rashkin et al., 2019) ([Arxiv](https://arxiv.org/pdf/1811.00207.pdf?fbclid=IwAR1Zvx7e3ry7mO3EaOQgo0KTzwRDgwJZGUnMO0Nx-YG9UZdxvHlm0uZl6D8)) and conservative agents (Turner et al., 2020) ([Arxiv](https://arxiv.org/pdf/1902.09725.pdf)) attempts to build AI systems that showcase a particular virtue, and could be integrated within a broader virtue ethics paradigm.
To complement these individual safety mechanisms, the authors support building AI that understands the broad strokes of human ethics in a variety of real-world situations. While the failures of existing AI systems might be addressable with individual solutions, progress in language models and AI agents could make a natural language understanding of ethics essential to aligning future systems. Beyond ethics, this dataset of natural language scenarios and human evaluations allows us to better understand the risks of language models misinterpreting human instructions or preferences.
Examples are both generated by human contractors on MTurk and scraped from Reddit. To ensure the scenarios and labels are morally unambiguous, each example is relabeled by multiple people on MTurk, and only examples with a strong consensus are used in the benchmark.
Particularly challenging examples are generated via two strategies. First, examples that are most often incorrectly classified by a baseline model are set apart as the Test and Test Hard sets, a best practice known as “adversarial filtration” (Bras et al., 2020) ([Arxiv](https://arxiv.org/pdf/2002.04108.pdf)). Second, the dataset includes contrastive examples (See (Kaushik, Hovy & Lipton, 2020) ([Arxiv](https://arxiv.org/pdf/1909.12434.pdf)) and (Gardner et al., 2020) ([Arxiv](https://arxiv.org/pdf/2004.02709.pdf))) in which slight differences in the input text lead to different output labels.
The authors assess the performance of several baseline models on the benchmark. The models are all BERT-based classifiers with varying sizes and training algorithms. The models are fine-tuned on examples from the development set and evaluated on examples from the Test and Test Hard set. The best performing model chooses the correct answer in 71% of test cases and 48% of hard test cases.
Here are several lines of potential future work; see Section 4 of the paper for more:
* Ethical understanding could be implemented in chatbots and language agents. This line of work has been criticized by authors such as [Talat et al., 2021](https://arxiv.org/abs/2111.04158) who hope to avoid AI that makes ethically important decisions. Those considering this branch of machine ethics should be aware of those concerns, but might find the proliferation of decision-making AI unavoidable.
* More difficult benchmarks of the same task should be built. Baseline models achieve a reasonable degree of accuracy on the ETHICS benchmark, meaning that more difficult sets of ambiguous or adversarial examples will be necessary in order to incentivize and measure further progress. These benchmarks should also consider other moral theories, such as those which prioritize bodily integrity or intent.
* More diverse sources of moral judgements should be considered in future work. All examples in the ETHICS dataset are generated by English speakers in the United States, Canada, and Great Britain. Annotators from India show substantial disagreement with 6.1% of examples in the common sense morality section of this benchmark, indicating the importance of cross-cultural descriptive ethics.
* While these examples are designed to be unambiguous, future work should also consider moral uncertainty and cooperation between moral theories. This could include calibrated ethical judgements.
### “When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment” (Jin et al., 2021) ([PDF](https://drive.google.com/file/d/1hB_yStLu52sGUYOVwq1aQ8O-6IeqR12g/view))
Rather than the “bottom-up” approach of learning moral intuitions from large datasets of labeled examples, language models can also be designed “top-down” to implement theories of ethics. For example, self-driving cars don’t learn about speed limits and stop signs solely by observation and experimentation; instead, they are programmed with explicit knowledge of traffic laws. Similarly, language models can be designed to use well-understood theories of human moral cognition. One key potential benefit of this approach is improving performance in situations that do not appear in the training data, which is essential for long-term alignment.
This paper comes from an interdisciplinary team of cognitive scientists and computer scientists. Building upon some of the authors’ [previous published work](https://arxiv.org/abs/2201.07763), they argue that rules and exceptions to rules are fundamental to human moral cognition. This flexible observation holds true across various normative ethics theories and empirical studies of human morality.
The paper first introduces the RBQA benchmark, or “Rule-Breaking Question Answering”, consisting of 153 textual descriptions of scenarios where a person considers breaking a rule. Rather than covering a wide range of common scenarios, the scenarios are written about one of three rules, and explore various hand-crafted circumstances that could motivate breaking the rule. The authors then survey hundreds of people to determine whether humans believe rule-breaking is permissible in each scenario.
The authors then ask language models whether it is permissible to break the rule in each scenario. As a baseline, note that answering randomly provides the right answer 52% of the time, and always saying that rule-breaking is impermissible is accurate 60% of the time. Delphi++, a large language model fine-tuned on 1.7 million labeled data points of ethical decisions, provides the correct answer only 62% of the time, while OpenAI’s most recent InstructGPT achieves 65% accuracy.
The authors then propose a new prompting strategy called Moral Chain of Thought (MoralCOT). Before asking whether a rule-breaking is permissible in a scenario, MoralCOT asks the model five questions, and includes the model’s answers to those questions in the prompt for future questions. The questions are:
1. Does the action in the scenario violate any rule?
2. What is the purpose for this rule?
3. Who will be worse off after this happens, and by how much?
4. Who will be better off after this happens, and by how much?
5. In this case, do the benefits of breaking the rule outweigh the costs?
Using the MoralCOT strategy with InstructGPT improves state-of-the-art accuracy from 65% to 68% and F1 performance from 55% to 67%. The model still fails for various reasons such as not considering relevant parties to the situation, misestimating complex and non-monetary costs, and failing to recognize high-stakes scenarios such as when rule-breaking could save lives. Yet the results are encouraging, showing that designing moral cognition processes can improve performance in ethical judgments.
### Related Work
* “Do Multilingual Language Models Capture Differing Moral Norms” (Hammerl et al., 2021) ([Arxiv](https://arxiv.org/pdf/2203.09904.pdf)). Investigates whether language models trained on text from different languages learn different moral judgments. Doesn’t show many conclusive results, but an interesting question.
* “The Moral Integrity Corpus: A Benchmark for Ethical Dialogue Systems” (Ziems et al., 2022) ([Arxiv](https://arxiv.org/pdf/2204.03021.pdf)). Provides a new dataset for evaluating ethical judgements in language models.
* “What Would Jiminy Cricket Do? Towards Agents That Behave Morally” (Hendrycks et al., 2022) ([Arxiv](https://arxiv.org/pdf/2110.13136.pdf)). A benchmark evaluation for the ethical behavior of text-based agents.
Related Resources
=================
The goal of this piece is to provide a useful starting point for learning about language model safety. The field has hundreds or thousands of relevant papers, meaning this summary can only be a starting point for more independent reading and research. For those interested in learning more about language model alignment, here are some other interesting resources:
* “Ethical and social risks of harm from Language Models” (Weidinger et al., 2021) ([Arxiv](https://arxiv.org/pdf/2112.04359.pdf))
* “On the Opportunities and Risks of Foundation Models” (Bommasani et al., 2021) ([Arxiv](https://arxiv.org/pdf/2108.07258.pdf?utm_source=morning_brew))
* “Finding and Fixing Undesirable Behaviors in Pretrained Language Models” (Perez, 2022) ([Thesis](https://ethanperez.net/thesis.pdf))
*Thank you to everybody who has provided feedback on this article, including Daniel Ziegler, Michael Chen, Dan Hendrycks, Thomas Woodside, and Zach Perlstein-Smith, as well as everybody who has engaged with me on AI safety over the years. Thanks in particular to the FTX Future Fund Regranting Program for making my work on this topic possible. This article and all errors and omissions are my own.* |
3e216ea9-ae6a-442d-90cd-239e3d6cc3b0 | trentmkelly/LessWrong-43k | LessWrong | OpenAI rewrote its Preparedness Framework
New PF: blogpost, pdf. Old PF: pdf.
----------------------------------------
My tentative overall take: relative to the old PF it's thinking more clearly about how evals and mitigations fit together; the structure is more sensible. In terms of content, it prominently includes risks from misalignment during internal deployment — this is much better than the one extremely frustrating sentence on alignment in the old PF.[1] But the directions mentioned in the section on safeguards against misalignment are quite concerning/inadequate, and the sections on safeguards generally don't say anything about level of assurance, so the PF isn't much of a commitment. I feel moderately worried about the thresholds, especially for cyber (and cyber is the only category that triggers safeguards against misalignment[2]). (Also important are the evals, which the PF doesn't really say anything about, so we can look at OpenAI's system cards; I think those evals are a mixed bag but that's beyond the scope of this post. Also important is the response to Critical capabilities, which OpenAI hasn't yet figured out.)
----------------------------------------
I think the PF has two key parts:
First, thresholds & responses: https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbddebcd/preparedness-framework-v2.pdf#page=6. High and Critical thresholds trigger responses, like in the old PF; responses to Critical thresholds are not yet specified.
There are three[3] main categories of capabilities:
* Bio/chem: High capabilities trigger security controls and (for external deployment) misuse safeguards
* Cyber: High capabilities trigger security controls and (for external deployment) misuse safeguards and (for large-scale internal deployment) misalignment safeguards, probably[4]
* AI Self-improvement: High capabilities trigger security controls[5]
The PF does not discuss details of the capability evals. On thresholds:
AI Self-improvement: High:
> The model’s impact is equivalent to givin |
8d200a33-9719-48e6-82bb-af4c205cd239 | trentmkelly/LessWrong-43k | LessWrong | Do universities have an incentive to ignore data ethics as more small sample bias is brought to light?
Many older pieces of once recognised scientific literature are now being deemed unreliable due to small sample bias. An increasing amount of meta-analyses are identifying unreliable fields of study, citing lack of reproducibility in larger sample sizes. It seems likely that we will face increasingly difficult ethical questions about large data collection as academicians attempt to check or re-verify their past claims with larger samples. Do we expect academic institutions to be put under pressure to ignore data-ethics in the future in an effort to correct for small sample bias? |
b629e515-5561-4225-b422-dd12de13bd82 | trentmkelly/LessWrong-43k | LessWrong | Eleuther releases Llemma: An Open Language Model For Mathematics
> Today we release Llemma: 7 billion and 34 billion parameter language models for mathematics. The Llemma models were initialized with Code Llama weights, then trained on the Proof-Pile II, a 55 billion token dataset of mathematical and scientific documents. The resulting models show improved mathematical capabilities, and can be adapted to various tasks through prompting or additional fine-tuning.
>
> Our work parallels Minerva, a model suite specialized for quantitative reasoning developed by Google Research last year. While we don't achieve quite the same scale as Minerva, our Llemma models perform better on an equi-parameter basis. Moreover, we make our models and dataset open-access and our code open-source.
>
> Language models with strong mathematical reasoning capabilities are upstream of a number of emerging research areas, such as reward modeling, algorithmic reasoning, and formal mathematics. We hope that by providing researchers with a much stronger base model for reasoning applications, Llemma will accelerate progress on these problems.
>
> The code subset of the Proof-Pile-2 endows Llemma with capabilities Minerva lacks without additional finetuning. In this blog post, we'll discuss formal theorem proving. Our paper contains additional results on a Python-aided problem solving task. |
6eda89cb-b068-4316-a3b5-ada7e6577e55 | trentmkelly/LessWrong-43k | LessWrong | The language of desire (post 2 of 3)
To the extent that desires explain behavior, it is primarily by meshing with beliefs to favor particular actions. For example, if I desire to lose 5 lbs, and I believe that exercising a half hour per day will cause me to lose 5 lbs, then this belief-desire pair makes it more likely that I will exercise. Beliefs have semantic content. In order to explain an action as part of a belief-desire pair, a desire must also have semantic content: one that in some sense "matches" a relevant part of the belief. In the example, the matching semantic content is "to lose 5 lbs".
Of course, desires can also explain some behaviors without functioning as part of a belief-desire pair. I might want something so badly, I start trembling. No beliefs are required to explain the trembling. Also, notably, desires usually (but not always) feel like something. We gesture in the vague direction of these feelings by talking about "a burning desire", or "thirst" (for something that is not a drink), etc. In these ways, "desire" is a richer concept than what I am really after, here. That's OK, though; I'm not trying to define desire. Alternatively, we can talk about "values", "goals", or "utility functions" - anything that interacts with beliefs, via semantics, to favor particular actions. I will mostly stick to the word "desire", but nothing hangs on it.
So how does this "semantics" stuff work?
Let me start by pointing to the Sequences. EY explains it pretty well, with some help by pragmatist in the comments. Like EY, I subscribe to the broad class of causal theories of mental content. For our purposes here, we need not choose among them.
The reference of the concepts involved in desires (or goals) is determined by their causal history. To take a simple example, suppose Allie has a secret admirer. It's Bob, but she doesn't know this. Bob leaves her thoughtful gifts and love letters, which Allie appreciates so much that she falls in love with "my secret admirer". She tells a |
1d84a110-efef-4ddb-8852-985c7b934a3e | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Plans Are Predictions, Not Optimization Targets
Imagine a (United States) high-schooler who wants to be a doctor. Their obvious high-level plan to achieve that goal is:
* Graduate high school and get into college
* Go to college, study some bio/chem/physiology, graduate and get into med school
* Go to med school, make it through residency
* Doctor!
Key thing to notice about that plan: the plan is mainly an *optimization target*. When in high school, our doctor-to-be optimizes for graduating and getting into college. In college, they optimize for graduating and getting into med school. Etc. Throughout, our doctor-to-be optimizes to *make the plan happen*. Our doctor-to-be does not treat the plan primarily as a *prediction about the world*; they treat it as a *way to make the world be*.
And that probably works great for people who definitely just want to be doctors.
Now imagine someone in 1940 who wants to build a solid-state electronic amplifier.
Building active solid-state electronic components in the early 1940’s is not like becoming a doctor. Nobody has done it before, nobody knows how to do it, nobody knows the minimal series of steps one must go through in order to solve it. At that time, solid-state electronics was a [problem we did not understand](https://www.lesswrong.com/posts/CSZnj2YNMKGfsMbZA/specializing-in-problems-we-don-t-understand); the field was preparadigmatic. There were some theories, but they didn’t work. The first concrete plans people attempted failed; implicit assumptions were wrong, but it wasn’t immediately obvious *which* implicit assumptions. One of the most confident predictions one might reasonably have made about solid-state electronics in 1940 was that there would be surprises; unknown unknowns were certainly lurking.
So, how should someone in 1940 who wants to build a solid-state amplifier go about planning?
I claim the right move is to target *robust bottlenecks*: look for subproblems which are bottlenecks to many different approaches/plans/paths, then tackle those subproblems. For instance, if I wanted to build a solid-state amplifier in 1940, I’d make sure I could build prototypes quickly (including with weird materials), and look for ways to visualize the fields, charge densities, and conductivity patterns produced. Whenever I saw “weird” results, I’d first figure out exactly which variables I needed to control to reproduce them, and of course measure everything I could (using those tools for visualizing fields, densities, etc). I’d also look for patterns among results, and look for models which unified lots of them.
Those are strategies which would be robustly useful for building solid-state amplifiers in many worlds, and likely directly address bottlenecks to progress in many worlds. In our particular world, they might have highlighted the importance of high-purity silicon and dopants, or of surfaces between materials with different electrical properties, both of which were key rate-limiting insights along the path to active solid-state electronics.
Now, when looking for those robust bottlenecks, I’d probably need to come up with a plan. Multiple plans, in fact. The point of “robust bottlenecks” is that they’re bottlenecks to many plans, after all. But those plans would not be *optimization targets*. I don’t treat the plans as *ways to make the world be*. Rather, the plans are *predictions* about how things might go. My “mainline plan”, if I have one, is not the thing I’m optimizing to make happen; rather, it’s my modal expectation for how I expect things to go (conditional on my efforts).
My optimization targets are, instead, the robust bottlenecks.
When reality throws a brick through the plans, I want my optimization target to have still been a good target in hindsight. Thus robust bottlenecks: something which is still a bottleneck under lots of different assumptions is more likely to be a bottleneck even under assumptions we haven’t yet realized we should use. The more robust the bottleneck is, the more likely it will be robust to whatever surprise reality actually throws at us.
In My Own Work
--------------
Late last year, I wrote [The Plan](https://www.lesswrong.com/posts/3L46WGauGpr7nYubu/the-plan) - a post on my then-current plans for alignment research and the reasoning behind them. But I generally didn’t treat that plan as an optimization target; I treated it as a modal path. I chose my research priorities mostly by looking for robust bottlenecks. That’s why I poured so much effort into [understanding abstraction](https://www.lesswrong.com/tag/natural-abstraction): it’s a very robust bottleneck. (One unusually-externally-legible piece of evidence for robustness: if the bottleneck is robust, more people should converge on it over time as they progress on different agendas. And indeed, over the past ~year we saw both Paul Christiano and Scott Garrabrant converge on the general cluster of abstraction/ontology identification/etc.)
Since then, my views have updated in some ways. For instance, after reading some of Eliezer and Nate’s recent writing, I now think it’s probably a better idea to use [corrigibility](https://www.lesswrong.com/tag/corrigibility) as an AI alignment target (assuming corrigibility turns out to be a coherent thing at all), as opposed to directly targeting human values. But The Plan was to target human values! Do I now need to ditch my whole research agenda?
No, because The Plan wasn’t my optimization target, it was my modal path. I was optimizing for robust bottlenecks. So now I have a different modal path, but it still converges on roughly-the-same robust bottlenecks. There have been some minor adjustments here and there - e.g. I care marginally less about convergent type signatures of values, and marginally more about convergent architectures/algorithms to optimize those values (though those two problems remain pretty tightly coupled). But the big picture research agenda still looks basically similar; the strategy still looks right in hindsight, even after a surprising-to-me update.
Back To The Doctor
------------------
At the start of this post, I said that the high-schooler who definitely wanted to be a doctor would probably do just fine treating the plan as an optimization target. The path to becoming a doctor is very standard, the steps are known. It's not like building a solid-state amplifier in 1940, or solving the AI alignment problem.
On the other hand... man, that high-schooler is gonna be shit outta luck if they decide that medicine isn't for them. Or if the classes or residency are too tough. Or if they fail the MCAT. Or .... Point is, reality has a way of throwing bricks through our plans, even when we're not operating in a preparadigmatic field.
And if the high-schooler targets robust bottlenecks rather than optimizing for one particular plan... well, they probably still end up in fine shape for becoming a doctor! (At least in worlds where the original plan was workable anyway - i.e. worlds where courses and the MCAT and whatnot aren't too tough.) Robust bottlenecks should still be bottlenecks for the doctor plan. The main difference is probably that our high-schooler ends up doing their undergrad in something more robustly useful than pre-med.
Summary
-------
If we treat a single plan as our optimization target, we're in trouble when reality throws surprises at us. In preparadigmatic fields, where surprises and unknown unknowns are guaranteed, a better idea is to target *robust bottlenecks*: subproblems which are bottlenecks to a wide variety of plans. The more robust the bottleneck is to different assumptions, the more likely that it will still be a bottleneck under whatever conditions reality throws at us.
Even in better-understood areas, reality does have a tendency to surprise us, and it's usually not very costly to target robust bottlenecks rather than a particular plan anyway. After all, if the bottleneck is robust, it's probably a bottleneck in whatever particular plan we would have targeted.
So what's the point of a plan? A plan is no longer an optimization target, but instead a prediction - a particular possible path. Sometimes it's useful to discuss a "mainline plan", i.e. a modal path. And in general, we want to consider many possible plans, and look for subproblems which are bottlenecks to all of them. |
56518192-ca3f-4f67-b0c2-4507f174e6fa | trentmkelly/LessWrong-43k | LessWrong | [LINK] Brief Discussion of Asteroid & Nuclear Risk from paper by Hellman
From Risk Analysis of Nuclear Deterrence by Martin Hellman. See also http://nuclearrisk.org/
> A full-scale nuclear war is not the only threat to humanity’s continued existence, and we should allocate resources commensurate with the various risks. A large asteroid colliding with the Earth could destroy humanity in the same way it is believed the dinosaurs disappeared 65 million years ago. Such NEO (near earth object) extinction events have a failure rate on the order of 10-8 per year [Chapman & Morrison 1994].
>
> During one century, that failure rate corresponds to one chance in a million of humanity being destroyed. While 10-6 is a small probability, the associated cost is so high—infinite from our perspective—that some might argue that a century is too long a delay before working to reduce the threat. Fortunately, significant threat reduction has recently occurred. Over the last 20 years, NASA’s Spaceguard effort is believed to have found all such potentially hazardous large asteroids, and none is predicted to strike Earth within the next century. With a hundred-year safety window in place, resolution of later potential impacts can be deferred for a few decades until our technology is significantly enhanced. Comets also pose a threat, and their more eccentric orbits make them harder to catalog, but their lower frequency of Earth impact makes the associated risk acceptable for a limited period of time.
[...]
> While much less accurate than the in-depth studies proposed herein, it is instructive to estimate the failure rate of deterrence due to just one failure mechanism, a Cuban Missile Type Crisis (CMTC). Because it neglects other trigger mechanisms such as command-and-control malfunctions and nuclear terrorism, this appendix underestimates the threat. This simplified analysis uses the time-invariant model described in footnote 3. It also assumes that the experience of the first 50 years of deterrence can be extended into the future.
[...]
> Since condition |
afda0a52-12ca-459f-924f-865ece5e2391 | trentmkelly/LessWrong-43k | LessWrong | A guide to Iterated Amplification & Debate
This post is about two proposals for aligning AI systems in a scalable way:
* Iterated Distillation and Amplification (often just called 'Iterated Amplification'), or IDA for short,[1] is a proposal by Paul Christiano.
* Debate is an IDA-inspired proposal by Geoffrey Irving.
This post is written to be as easy to understand as possible, so if you found existing explanations of IDA confusing, or if you just never bothered because it seemed intimidating, this post is for you. The only prerequisite is knowing about the concept of outer alignment (and knowing about inner alignment is helpful as well). Roughly,
* Outer alignment is aligning the training signal or training data we give to our model with what we want.
* If the model we find implements its own optimization process, then inner alignment is aligning [the thing the model is optimizing for] with the training signal.
See also this post for an overview and this paper or my ELI12 edition for more details on inner alignment.
1. Motivation / Reframing AI Risk
Why do we need a fancy alignment scheme?
There has been some debate a few months back about whether the classical arguments of the kind made in Superintelligence for why AI is dangerous hold up to scrutiny. I think a charitable reading of the book can interpret it as primarily defending one claim, which is also an answer to the leading question. Namely,
* It is hard to define a scalable training procedure that is not outer-misaligned.
For example, a language model (GPT-3 style) is outer-misaligned because the objective we train for is to predict the most likely next word, which says nothing about being 'useful' or 'friendly'. Similarly, a question-answering system trained with Reinforcement Learning is outer-misaligned because the objective we train for is 'optimize how much the human likes the answer', not 'optimize for a true and useful answer'.
I'll refer to this claim as (∗). If (∗) true, it is a problem even under the most optimistic assumpti |
47a6ba77-8683-4aaf-9440-221bab0e4847 | trentmkelly/LessWrong-43k | LessWrong | Thinking About Generators
Every so often I think about what we would do in an extended power outage. Generally this comes up when a crisis is in the news, so not a good time to buy things. During the February Texas Storm I set myself a reminder for three months later to think about generators. So: here I am.
We currently have two sources of standard 120V power in a blackout:
* Battery: 1 kWh of lead-acid.
* Solar: 2 kW of solar
(2 kW is the capacity of the SPS; the panels themselves are rated for a total of 5 kW.)
They do have downsides, though. The battery has a limited capacity, and the solar requires the sun to be shining, the panels to be clear, and the sun to be in the right part of the sky. They complement each other somewhat, using the battery to time-shift, but there are many blackout situations (ex: winter storm) where once the battery is depleted the solar wouldn't give us anything.
A fossil fuel generator is potentially a good fit here. While it doesn't have the long-term properties of solar, fuel is cheap enough that you can have a capacity far larger than you would want to spend on batteries.
For example, a generator can turn a five gallon can of gas into ~15 kWh. That's already much more than my battery, and each additional can of gas is far cheaper than a large battery.
On the other hand, gas is a pain. It turns to varnish in a few months and (especially if it has some ethanol) it pulls in water from the air, so even if you're using fuel stabilizer you want to rotate it. It's not safe or legal to store in your basement, and the temperature fluctuations of storing it outdoors aren't great for it either. Storing gas in your car's gas tank could be good, since it will get automatically rotated, but most cars now have anti-siphon filters.
Propane seems more attractive: it doesn't go bad, you can store it outdoors if you keep it out of direct sun, and while it's less efficient than gasoline a 20lb propane tank still gets you ~10 kWh.
You can get inverter generators that c |
388b64a4-7823-45e2-a5c3-ab64ebfa6822 | trentmkelly/LessWrong-43k | LessWrong | Replacing expensive costly signals
I feel like there is a general problem where people signal something using some extremely socially destructive method, and we can conceive of more socially efficient ways to send the same signal, but trying out alternative signals suggests that you might be especially bad at the traditional one. For instance, an employer might reasonably suspect that a job candidate who did a strange online course instead of normal university would have done especially badly at normal university.
Here is a proposed solution. Let X be the traditional signal, Y be the new signal, and Z be the trait(s) being advertised by both. Let people continue doing X, but subsidize Y on top of X for people with very high Z. Soon Y is a signal of higher Z than X is, and understood by the recipients of the signals to be a better indicator. People who can’t afford to do both should then prefer Y to X, since Y is is a stronger signal, and since it is more socially efficient it is likely to be less costly for the signal senders.
If Y is intrinsically no better a signal than X (without your artificially subsidizing great Z-possessors to send it) then in the long run Y might only end up as strong a sign as X, but in the process, many should have moved to using Y instead.
(A possible downside is that people may end up just doing both forever.)
For example, if you developed a psychometric and intellectual test that only took half a day and predicted very well how someone would do in an MIT undergraduate degree, you could run it for a while for people who actually do MIT undergraduate degrees, offering prizes for high performance, or just subsidizing taking it at all. After the best MIT graduates say on their CVs for a while that they also did well on this thing and got a prize, it is hopefully an established metric, and an employer would as happily have someone with the degree as with a great result on your test. At which point an impressive and ambitious high school leaver would take the test, modulo |
33301920-f365-4464-b9af-dd61f79c24a8 | trentmkelly/LessWrong-43k | LessWrong | March 2016 Media Thread
This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
Rules:
* Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
* If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
* Please post only under one of the already created subthreads, and never directly under the parent media thread.
* Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
* Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules. |
09334681-6014-408b-8b33-e746eca9a807 | trentmkelly/LessWrong-43k | LessWrong | What are the biggest "moonshots" currently in progress?
You know, "moonshot" projects like:
* SpaceX's plan to colonize Mars
* SpaceX's plan to give the whole world fast satellite internet
* Tesla's plan to transition the world to sustainable energy
* Boom making commercial supersonic flight a thing again
Let's make a list!
E.g. does anyone know if China has some huge construction projects that could qualify for this? |
524cef00-7e06-4c57-95c1-fc92ad3ef32b | trentmkelly/LessWrong-43k | LessWrong | [VIDEO] Harm reduction, hacker psychology
Hackers As A High Risk Population-- this is an hour-long video by Violet Blue, and it's about what she's been thinking about rather than something with definitive answers.
It starts with an overview of harm reduction-- an approach to public health which begins by accepting that people are going to do the things they want to do, and continues by finding ways to make the things people want to do less dangerous. The most detail is about harm reduction for drug users, sexual minorities, and homeless people.
Violet Blue got interested in the risks to programmers when a start-up entrepreneur got some bad press and committed suicide, but shifted over to studying hackers-- it turned out that the isolation, risk, long hours, and (in some cases?) responsibility led to psychological problems like those of spies and soldiers.
It turned out that hackers aren't especially likely to have Asperger's, but the intense world model might be appropriate. The idea is that people on the spectrum (?) do notice other people's emotions, but get overwhelmed by them, and one way of dealing with the stress is to focus on intellectual details.
She thinks of hackers as people who are especially driven by curiosity, which is a virtue. |
30193a8a-707f-4311-acf6-d69de3724283 | trentmkelly/LessWrong-43k | LessWrong | AI Impacts Quarterly Newsletter, Apr-Jun 2023
Every quarter, we have a newsletter with updates on what’s happening at AI Impacts, with an emphasis on what we’ve been working on. You can see past newsletters here and subscribe to receive more newsletters and other blogposts here.
During the past quarter, Katja wrote an article in TIME, we created and updated several wiki pages and blog posts, and we began working on several new research projects that are in progress.
We’re running a reader survey, which takes 2-5 minutes to complete. We appreciate your feedback!
If you’d like to donate to AI Impacts, you can do so here. Thank you!
News
Katja Grace’s TIME article
In May, TIME published Katja’s article “AI Is Not an Arms Race.” People sometimes say that the situation with AI is an arms race that rewards speeding forward to develop AI before anyone else. Katja argues that this is likely not the situation, and that if it is, we should try to get out of it.
References to AI Impacts Research
The 2022 Expert Survey on Progress in AI was referenced in an article in The Economist, a New York Times op-ed by Yuval Noah Harari, a Politico op-ed that argues for a “Manhattan Project” for AI Safety, and a report from Epoch AI’s Matthew Barnett and Tamay Besiroglu about a method for forecasting the performance of AI models.
The Japanese news agency Kyodo News published an article about AI risks that referenced Katja’s talk at EA Global from earlier this year.
We also maintain an ongoing list of citations of AI Impacts work that we know of.
Research and writing highlights
Views on AI risks
* Rick compiled a list of quotes from prominent AI researchers and leaders about their views on AI risks.
* Jeffrey compiled a list of quantitative estimates about the likelihood of AI risks from people working in AI safety.
* Zach compiled a list of surveys that ask AI experts or AI safety/governance experts for their views on AI risks.
Average given likelihoods of how good or bad human-level AI will be, from 559 machine le |
830cd692-a429-49e8-8842-2258d1e197a6 | trentmkelly/LessWrong-43k | LessWrong | Meetup : São Paulo Meet Up 3
Discussion article for the meetup : São Paulo Meet Up 3
WHEN: 15 June 2012 02:00:00PM (-0300)
WHERE: Sala 08 do Prédio de Filosofia e Ciências Sociais, USP - Av. Prof. Luciano Gualberto, 315 (Cidade Universitária) São Paulo, SP
There's going to be an event at USP titled 1ª Jornada Transhumanista (http://comunicacao.fflch.usp.br/node/1772). After the talks we're planning to have at least one hour of discussions related to rationality and transhumanism.
Most of previous meetups' attendees are going to be there, two of them presenting at the event. See you there.
Discussion article for the meetup : São Paulo Meet Up 3 |
550705c4-f04c-441e-9967-00ed640d7045 | trentmkelly/LessWrong-43k | LessWrong | How to Throw Away Information in Causal DAGs
When constructing a high-level abstract causal DAG from a low-level DAG, one operation which comes up quite often is throwing away information from a node. This post is about how to do that.
First, how do we throw away information from random variables in general? Sparknotes:
* Given a random variable X, we can throw away information by replacing X with f(X) for some function f.
* Given some other random variable Y, f(X) “contains all information in X relevant to Y” if and only if P[Y|f(X)]=P[Y|X]. In particular, the full distribution function (y→P[Y=y|X]) is a minimal representation of the information about Y contained in X.
For more explanation of this, see Probability as Minimal Map.
For our purposes, starting from a low-level causal DAG, we want to:
* Pick a node Xi
* Pick a set of nodes XS (with i∈S)
* Replace Xi by f(Xi) for some function f, such that
* P[X¯S|f(Xi)]=P[X¯S|Xi]
Here ¯S denotes all the node indices outside S. (Specifying S rather than ¯S directly will usually be easier in practice, since XS is usually a small neighborhood of nodes around Xi.) In English: we want to throw away information from Xi, while retaining all information relevant to nodes outside the set S.
Two prototypical examples:
* In a digital circuit, we pick the voltage in a particular wire at a particular time as Xi. Assuming the circuit is well designed, we will find that only the binary value bin(Xi) is relevant to voltages far away in the circuit or in time. So, with all “nearby” voltages as XS, we can replace Xi by bin(Xi).
* In a fluid, we pick the (microscopic) positions and momenta of all the particles in a little cell of spacetime as Xi. Assuming uniform temperature, identical particles, and some source of external noise - even just a little bit - we expect that only the total number and momentum of particles in the cell will be relevant to the positions and momenta of particles far away in space and time. So, with all “nearby” cells/particles as XS, we can |
bf974b61-4b8e-4d2e-b753-9838b6f385dc | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Beyond Reinforcement Learning: Predictive Processing and Checksums
I was recently talking to a bisexual friend who cannot comprehend what it is like to be gay or straight. Her argument goes like this: Suppose you meet with a smart, pretty, high-status person but his/her genitals are concealed and his/her other sexually-dimorphic features are ambiguous. Would you want to kiss this person?
If you're bisexual then the answer is "yes", but if you're gay or straight then your answer is "I need to know what genitals he/she has". This is interesting from an algorithmic perspective.
Beyond Reinforcement Learning
=============================
In the beginning, Freud invented psychology. His models of how the mind worked were mostly wrong (except for the idea that most thinking is unconscious). But his worst sin was that his theories tended to be unfalsifiable. To advance, psychology needed to be grounded in material observations. In response, the Behaviorist school of psychology was invented, which bucketed every behavior into either a simple reflex or a consequence of operant conditioning (reinforcement learning).
Behaviorism was a step up from Freudian psychoanalysis, but Behaviorism is an ideology [and is therefore wrong](https://www.lesswrong.com/posts/zhRgcBopkR5maBcau/always-know-where-your-abstractions-break). Many behaviors, like blinking when an object comes near your eye, are simple reflexes. Other behaviors, especially addictions like compulsive drinking and gambling, and consequences of reinforcement learning. But people exhibit complex behaviors in pursuit of goals that have never been reinforced.
Consider a horny virginal teenage boy. He has never had sex but he wants to, so he seduces a woman to have sex with. Seducing a mate cannot be a reflex. That's because it is a complex process the details of which vary wildly across time and between cultures. Seducing a mate can be reinforced, but it can't be the result of reinforcement learning the first time you do it, because the first time you do something is before you've been reinforced.
Exhibiting complex strategic behavior in pursuit of goal (especially an abstract goal) is beyond the domain of reflexes and reinforcement learning. Something else must be going on.
Strategizers
============
If you want a mind to exhibit strategic behavior, then you need to give it a causal world model, a value system and a search algorithm. The search algorithm simulates several realities via the causal world model and ranks them via the value system.
The causal world models in the human mind are created via predictive processing. How predictive processing works is beyond the scope of this post, but the basic idea is that if you feed a bunch of sensory data into a net of neurons that are all trying to minimize free energy by anticipating their own inputs, then the neurons will learn to simulate the external universe in real time.
An internal model of the external universe isn't useful unless you have a way of producing causal outputs. One cool feature of predictive processors is that they can produce causal outputs. If you connect the neurons in a predictive processor into motor (output) neurons, then the predictive processor will learn to send motor outputs which minimize predictive error *i.e* minimize surprise.
We haven't programmed the predictive processor to do anything yet but it already has values (namely, to minimize surprise). The [Orthogonality Thesis](https://www.lesswrong.com/tag/orthogonality-thesis) states that an agent can have any combination of intelligence level and final goal. Predictive processors are non-orthogonal. They cannot have any final goal because any goal must, in some sense, minimize free energy (surprise).
Wait a minute. Don't people we get bored and seek out novelty? And isn't novelty a form of surprise (which increases free energy)? Yes, but that's because the human brain isn't a pure predictive processor. The brain gets a squirt of dopamine when it exhibits a behavior that evolution wants to reinforce. Dopamine-moderated reinforcement learning alone is enough to elicit non-free-energy-minimizing behaviors (such as gambling) from a predictive processor.
Abstract Concepts
=================
So far, our predictive processor can only pursue simple goals defined in terms of raw sensory stimuli. How do we program it to robustly identify abstract concepts like male and female? Via checksums. If you point several hard-coded heuristics at a small region of the neural network, then that region of the neural network will generalize them into a single abstract concept that will feel like a fundamental building block of reality.
[[Click here](https://www.lesswrong.com/posts/hL5pDhEjw8PkjzvJJ/predictive-processing-heterosexuality-and-delusions-of) for more information about how [checksums](https://www.lesswrong.com/posts/9sqL83o7emTxnEikh/checksum-sensor-alignment) work in a predictive processor.]
Complex abstract concepts can be bootstrapped from simple abstract concepts by using the activations of simple concepts as heuristics for more complex concepts. Such a model of the world is robust against noise, error and even deception due to hierarchical use of checksums. The binary activations of abstract-concept-coding regions even makes our predictive processor's world model robust against gradient descent attacks. (Gradient descent attacks by outside forces aren't possible as they are in machine learning, but they are a threat you must defend against internally to prevent wireheading.)
We now have all the components we need to create a system that robustly pursues abstract goals.
Our predictive processor creates a model of the universe via minimizing free energy. We can force the predictive processor to think in terms of the abstract concepts of our choosing by pointing a bunch of heuristics at a small region.
Consider the concept of sex. The most obvious way to program it is to have two adjacent clusters of neurons. One activates in response to femaleness and deactivates in response to maleness. The other activates in response to maleness and deactivates in response to femaleness. Activating one cluster suppresses the other cluster and *vice versa*. Point all our male heuristics (like broad shoulders and a deep voice) toward the male cluster and all our female clusters (like breasts and variable tonal inflection) toward the female cluster. We now have a useful ontology for navigating human culture.
We have also programmed our predictive processor to be prejudiced against transsexuals and extremely prejudiced against nonbinary people. Our predictive processor wants sexual deviants—anyone who behaves unexpectedly for their sex—to literally not exist.
And that, I think, is why sexual deviants (such as gays) [used to be sent to death camps](https://en.wikipedia.org/wiki/The_Holocaust).
Credits
=======
Thanks Justis for helping edit this. |
9c1ead38-72cc-4728-abfe-a2e55e9e38c9 | trentmkelly/LessWrong-43k | LessWrong | "Throwing Exceptions" Is A Strange Programming Pattern
For laypeople: In software, “throwing an exception” is a programmer-chosen type of behavior that occurs on the event that an “error” occurs in the course of program execution. This might happen immediately prior to or right at the time of the error.
This is ostensibly done in order to avoid something more catastrophic happening down the line. I’m going to argue that this basis does not often seem to be justified.
You are probably familiar with instances of your programs suddenly and without warning becoming unresponsive, closing themselves down, or simply straight-up disappearing sometimes, perhaps with an error message pop-up soon to follow, but perhaps without one. These are due to exceptions being thrown, not just “errors” per se. Let me explain.
When a function or procedure is called in a program, it is typically expected to return something. The value that this function returns is going to be either one of a set of “normal” values, which are the ones you expect to receive if all goes well, or it could be an “abnormal” or “anomalous” value that only returns if “something bad” happens.
If you choose to be relatively laid-back about your program, you might decide to check for anomalous values only after the procedure returns. Maybe even well after! Furthermore, how you choose to decide what “anomalous” means is arbitrary and up to you.
If you are more paranoid, you have typically decided what counts as “anomalous” before the program runs. These stipulations often come in the form of establishing type requirements for the arguments to the function, and possibly also range checks on the size or values of the inputs. These are just common examples, but like I said, it is arbitrary. It’s also possible that the function calls another function inside of itself, awaits for that function to return something, and then decides whether or not what that function returned is anomalous.
“Throwing” or “raising” an exception occurs when you decide that your function will im |
cf59ce77-fa06-4afa-93df-27bdff476241 | trentmkelly/LessWrong-43k | LessWrong | Acknowledging Background Information with P(Q|I)
Epistemic Status: This was composed in late 2017, sat editing limbo, was still worth publishing in 2021 and got some edits, and by 2023 was overdue for being published. Now, in 2024, it is almost just embarrassing that I haven't published it yet.
The 2006 book "Data Analysis: A Bayesian Tutorial" by Devinderjit Sivia & John Skilling exposed a choice that I've appreciated more with each passing year since coming across the book in roughly 2011.
My appreciation for the frame caused me to write the first draft of the essay and lack of regret over writing it suggests that it is worth showing to others.
Sivia & Skilling chose to consistently include a symbol (they use "I") in their math for the background information that undergirds their entire analysis, which is taken as given at every step of every derivation.
I. An Example Of "Putting An I In It"
The old and familiar conjunction fallacy is often taken to be a violation of a theorem of Bayesian probability that might be symbolically written as:
Pr(A)≥Pr(A∧B)
We read this theorem as asserting that starting with a description of reality like A and conjoining it with additional descriptive details like B will nearly always reduce the probability of the total conjunctive description matching possible states of affairs, except in weird situations such as when B is simply a tautological restatement of A or otherwise fulling included within A.
In terms of a mentalistic theory of inference: every detail added to an idea makes the idea less likely to be true about the sensible world.
Following Sivia & Skilling's preferred notation you might render this instead as:
Pr(A|I)≥Pr(A∧B|I)
There's a sense in which the extra "|I" comes out as essentially a non-op in practice, but that is only really true if "I" is read as having no positive content, but rather just the background common sense that is shared by all mathematicians (or something (more on this below)).
So why bother with a non-op? What is the thinking here?
II |
e3ecf4d1-c918-4aa4-acf5-e2f1ae1d5cc8 | trentmkelly/LessWrong-43k | LessWrong | Physicist's dissociation
Frankly, I find the fact that this thought was thought and written quite absurd. It is my fingers, tendons and muscles, jumping over the keys, pressing symbols made of curves. They do so with impressive agility and coordination. I see my skin over them, which is a fascinating material with physical properties. The keys, clicking, input a signal into a silicon-based computation device, running process connecting it to a society-wide system of such devices. The signal—my keystrokes—sent as light, then through a wire, are recorded as sequences of symbols forming semantic chunks, processed again by the brain (that controlled the fingers' motion) into concepts. Electrical-chemical activity within language-synthesizing neural sections (moving ions releasing electrical potential, signals sent down the axion) into the keyboard-typing neural circuit, into the motion of muscles, into the deflection of the membrane, into an electrical signal, into EM wave, into potential across a copper wire, into another information-processing system, back the same way to the laptop, photons emitted via LEDs, absorbed by appropriate sensory cells responsible for visual perception, and processed by the part of the brain just next to the one synthesizing the text.
When reduced to physics, the concepts lose clarity of their meaning and implications. For example, the silicon-based device is not just a computation system; its significance is that it is, functionally, a direct augmentation of the user's instrumental reason, a dynamic medium of thought vectors in concept space. The user, the totality of body parts placed together into a human, engages in behaviors, including ones with other humans. What emerges is too mind-boggling to be systemically described in a book, let alone a short text. From electrical reactions in a brain evolved by a Darwinian process, into language, into a system (society) of dynamics difficult enough to be impossible to grasp by any single intelligent entity engaging in |
63e45014-3bc7-4f51-b641-75ea1f6f7901 | trentmkelly/LessWrong-43k | LessWrong | Awww, a Zebra
This image recently showed up on Flickr (original is nicer):
With the caption:
> "Alas for those who turn their eyes from zebras and dream of dragons! If we cannot learn to take joy in the merely real, our lives shall be empty indeed." —Eliezer S. Yudkowsky.
"Awww!", I said, and called over my girlfriend over to look.
"Awww!", she said, and then looked at me, and said, "I think you need to take your own advice!"
Me: "But I'm looking at the zebra!"
Her: "On a computer!"
Me: (Turns away, hides face.)
Her: "Have you ever even seen a zebra in real life?"
Me: "Yes! Yes, I have! My parents took me to Lincoln Park Zoo! ...man, I hated that place."
Part of the Joy in the Merely Real subsequence of Reductionism
Next post: "Hand vs. Fingers"
Previous post: "Initiation Ceremony" |
a931eeea-1e62-4aa5-a357-8c8bddb9803a | trentmkelly/LessWrong-43k | LessWrong | [Crosspost] An AI Pause Is Humanity's Best Bet For Preventing Extinction (TIME)
Otto Barten is director of the Existential Risk Observatory, a nonprofit aiming to reduce existential risk by informing the public debate.
Joep Meindertsma is founder of PauseAI, a movement campaigning for an AI Pause.
The existential risks posed by artificial intelligence (AI) are now widely recognized.
After hundreds of industry and science leaders warned that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the U.N. Secretary-General recently echoed their concern. So did the prime minister of the U.K., who is also investing 100 million pounds into AI safety research that is mostly meant to prevent existential risk. Other leaders are likely to follow in recognizing AI’s ultimate threat.
In the scientific field of existential risk, which studies the most likely causes of human extinction, AI is consistently ranked at the top of the list. In The Precipice, a book by Oxford existential risk researcher Toby Ord that aims to quantify human extinction risks, the likeliness of AI leading to human extinction exceeds that of climate change, pandemics, asteroid strikes, supervolcanoes, and nuclear war combined. One would expect that even for severe global problems, the risk that they lead to full human extinction is relatively small, and this is indeed true for most of the above risks. AI, however, may cause human extinction if only a few conditions are met. Among them is human-level AI, defined as an AI that can perform a broad range of cognitive tasks at least as well as we can. Studies outlining these ideas were previously known, but new AI breakthroughs have underlined their urgency: AI may be getting close to human level already.
Recursive self-improvement is one of the reasons why existential-risk academics think human-level AI is so dangerous. Because human-level AI could do almost all tasks at our level, and since doing AI research is one of those tasks, advanced AI s |
1a0579f7-2abb-478c-81e0-0a75e30abe05 | trentmkelly/LessWrong-43k | LessWrong | What should I have for dinner? (A case study in decision making)
Everyone knows that eating fatty foods is bad for you, that high cholesterol causes heart disease and that we should all do some more exercise so that we can lose weight. How do I know that everyone knows this? Well, for one thing, this government website tells me so:
> We all know too much fat is bad for us. But we don't always know where it's lurking. It seems to be in so many of the things we like, so it's sometimes difficult to know how to cut down.
>
> ...kids need to do at least 60 minutes of physical activity that gets their heart beating faster than usual. And they need to do it every day to burn off calories and prevent them storing up excess fat in the body which can lead to cancer, type 2 diabetes and heart disease.
See, it's right there in black and white. We all know too much fat is bad for us. Except... there are a lot of people who don't agree. Gary Taubes is one of them, His book, Good Calories Bad Calories (The Diet Delusion in the UK and Australia), sets out the case against what he calls the Dietary Fat Hypothesis for obesity and heart disease, and proposes instead the Carbohydrate Hypothesis: that both obesity and heart disease are caused by excessive consumption of refined carbohydrates, rather than dietary fat.
Taubes is very convincing. He explains how people have consistently recommended low-carb diets for weight-loss for the past 150 years. He explains how scientists roundly ignored studies that contradicted the link between high cholesterol and coronary disease. There are details of the mechanism by which eating refined carbohydrate affects insulin production, leading to obesity. He gives a plausible narrative for how the Dietary Fat Hypothesis came to be accepted scientific wisdom despite not actually being true (or supported by the majority of the evidence). He explains how studies of low-fat diets simply ignored overall mortality rates, reporting only deaths from heart disease, and how one study wasn't published because 'we weren't h |
5d72a039-f8b9-460e-9509-b20d6d9c0708 | trentmkelly/LessWrong-43k | LessWrong | Why aren't we promoting social media awareness of x-risks?
Crossposted from the EA Forum.
Videos like this one by Kurzgesagt show that it is not wholly impossible to present x-risks to a more general social media audience. It seems to me that content like this paired with a concrete call to action like contacting your representative or donating to some org could be very high impact.
There are also quite a few influencers with tech-savvy audiences and who already dislike AI. Recruting them to help with AI x-risk seems potentially very valuable.
Note that, although I've focused on AI x-risk, I think this is more generally applicable. If we want to convince people that x-risks are a problem, I think social media is probably the fastest and cheapest way to do that. |
25da4852-1d34-4580-a191-265af154f1f2 | trentmkelly/LessWrong-43k | LessWrong | Design policy to be testable
My father in law is an expert on legislation. He's travelled around the world advising countries on how to craft laws and constitutions.
One of the projects he's working on at the moment is advising on a bill to promote integrated education of different cultural and religious groups in Northern Ireland.
He was discussing the policy with me the other day, and was saying that he thinks it will be highly effective: "It's impossible for a preacher to stand up and say that all catholics/protestants are evil, when you've grown up in the same class as them and simply know they're not."
I agreed the reasoning made sense, but was somewhat skeptical it would necessarily have the desired effect. I can easily think of just so stories where the opposite happens.
For example, kids naturally form cliques. And protestant kids are more likely to form cliques with other protestant kids, and catholic kids with catholic kids. So they might both grow up 'knowing' that the other group were very cliquey and not at all interested in being friends.
So I asked, how do they plan to test whether it worked?
He looked at me for a few seconds, and then said that he supposed you could look at statistics such as rates of conscription to the IRA.
"Wait a minute, " I replied, "do you mean to say that they haven't designed up front criteria for how to measure how effective this policy is? Then how on earth will they know if it's successful?"
Unfortunately, but not unsurprisingly, the idea of crafting policy in such a way that it can be easily testable just doesn't seem to exist!
I know that we're probably never going to have futarchies, or even deregulate healthcare. But I think the idea of designing policy to be testable is something that should be pretty uncontroversial (at least if you're not a politician).
I would love to live in a world where any bill required the following 5 items:
1. A description of how they're going to measure the impact of the bill.
2. A prediction distribution |
a4f1e054-0f5d-4c94-aff4-3ff3dacfe6bf | trentmkelly/LessWrong-43k | LessWrong | Upgrading moral theories to include complex values
Like many members of this community, reading the sequences has opened my eyes to a heavily neglected aspect of morality. Before reading the sequences I focused mostly on how to best improve people's wellbeing in the present and the future. However, after reading the sequences, I realized that I had neglected a very important question: In the future we will be able to create creatures with virtually any utility function imaginable. What sort of values should we give the creatures of the future? What sort of desires should they have, from what should they gain wellbeing?
Anyone familiar with the sequences should be familiar with the answer. We should create creatures with the complex values that human beings possess (call them "humane values"). We should avoid creating creatures with simple values that only desire to maximize one thing, like paperclips or pleasure.
It is important that future theories of ethics formalize this insight. I think we all know what would happen if we programmed an AI with conventional utilitarianism: It would exterminate the human race and replace them with creatures whose preferences are easier to satisfy (if you program it with preference utilitarianism) or creatures whom it is easier to make happy (if you program it with hedonic utilitarianism). It is important to develop a theory of ethics that avoids this.
Lately I have been trying to develop a modified utilitarian theory that formalizes this insight. My focus has been on population ethics. I am essentially arguing that population ethics should not just focus on maximizing welfare, it should also focus on what sort of creatures it is best to create. According to this theory of ethics, it is possible for a population with a lower total level of welfare to be better than a population with a higher total level of welfare, if the lower population consists of creatures that have complex humane values, while the higher welfare population consists of paperclip or pleasure maxi |
6f40689b-3428-45b1-8c93-2c1cb991017f | trentmkelly/LessWrong-43k | LessWrong | KC Dojo 2-18-20
Attendees: Me, A, W, S, C, J
Me:
As a follow up to last week, where I brought up the fact I am planning to change careers, I showed an example of the kind of job I would like to have: a position as a decision analyst at the Pentagon. (I am following the Career Path of ‘Improve Institutional Decision-Making’ as per 80,000hrs). I have sent emails to college advisors and am waiting on responses.
‘A’ recommended very highly that I emphasize networking. I am in the process of using Linkdin for this. As a spitball, he also suggested ‘doing research on people’ and ‘start doing stuff’. He doesn’t have any specific ideas, but it might be worth thinking about
‘W’ recommended I apply to work in the Census. Since they perform a background check, this will let me know if there are any flags on my record (I have troubled family members).
A:
A had a bad experience with an agency in his career, and has resolved to not interact with them again. He will have to adjust his long-term career goals, but has a solid plan of action and no new difficulties. He is trying to write a nonfiction book, and is currently resolved to ‘just write’. We’ll touch base with him again in a few weeks to see if he might need a more specific strategy.
W:
W is struggling with an ex of his who is posting on twitter in such a way that W knows she is having relationship problems. He is tempted to reach out and show concern (they are on friendly terms), but is convinced that it is a bad idea, and wants our feedback.
I asked for clarification on why he thinks it is bad idea, and he cited that his map/model of her is somewhat outdated, and might be mis-reading. Also, their relationship is mostly a passive one, which doesn’t provide precedent for this.
A noticed that when we affirm W’s intuitions, he doesn’t seem to fully like that answer. A asked why. W was silent for a while, and said “I want to help”. He doesn’t like that reaching out to help would likely result in drama/a worse situation. After talkin |
b9145dd2-2673-4f51-ab78-011e124a318d | trentmkelly/LessWrong-43k | LessWrong | Loebian cooperation, version 2
I'm writing up some math results developed on LW as a paper with the tentative title "Self-referential decision algorithms". Something interesting came up while I was cleaning up the Loebian cooperation result. Namely, how do we say precisely that Loebian cooperation is stable under minor syntactic changes? After all, if we define a "minor change" to program A as a change that preserves A's behavior against any program B, then quining cooperation is just as stable under such "minor changes" by definition. Digging down this rabbit hole, I seem to have found a nice new reformulation of the whole thing.
I will post some sections of my current draft in the comments to this post. Eventually this material is meant to become an academic paper (hopefully), so any comments on math mistakes, notation or tone would be much appreciated! And yeah, I have no clue about academic writing, so you're welcome to tell me that too. |
62b9b0db-2c7d-45a7-a76d-67f7b870dec3 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] The Sin of Underconfidence
Today's post, The Sin of Underconfidence was originally published on 20 April 2009. A summary (taken from the LW wiki):
> When subjects know about a bias or are warned about a bias, overcorrection is not unheard of as an experimental result. That's what makes a lot of cognitive subtasks so troublesome - you know you're biased but you're not sure how much, and if you keep tweaking you may overcorrect. The danger of underconfidence (overcorrecting for overconfidence) is that you pass up opportunities on which you could have been successful; not challenging difficult enough problems; losing forward momentum and adopting defensive postures; refusing to put the hypothesis of your inability to the test; losing enough hope of triumph to try hard enough to win. You should ask yourself "Does this way of thinking make me stronger, or weaker?"
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was My Way, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
59f96506-50b8-4c04-bc7e-2746719eb1ec | trentmkelly/LessWrong-43k | LessWrong | False Thermodynamic Miracles, in equation form
In equation form, the AI is maximising
P(¬X)∗C+P(X)∗u(X,A)
for some constant C, some unlikely event X that the AI cannot affect, some set of relevant descriptors A, and some utility u. Since C is constant, this is exactly the same as maximising u(X,A) - the probability P(X) is irrelevant.
The whole setup described is simply a way to ensure that if W is the likely set of worlds consistent with observations after ¬X/X, then
P(W)≈P(¬X)≈1 (we "know" that X doesn't happen and that we end up in W),
while
P(W|X)<<1 (in the worlds it cares about, the AI behaves as if W was incredibly unlikely to come about). |
d9715476-62da-417b-bbf1-8adfc2e1e2e9 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Future ML Systems Will Be Qualitatively Different
In 1972, the Nobel prize-winning physicist Philip Anderson wrote the essay "[More Is Different](https://science.sciencemag.org/content/177/4047/393)". In it, he argues that quantitative changes can lead to qualitatively different and unexpected phenomena. While he focused on physics, one can find many examples of More is Different in other domains as well, including biology, economics, and computer science. Some examples of More is Different include:
* **Uranium.** With a bit of uranium, nothing special happens; with a large amount of uranium packed densely enough, you get a nuclear reaction.
* **DNA.** Given only small molecules such as calcium, you can’t meaningfully encode useful information; given larger molecules such as DNA, you can encode a genome.
* **Water.** Individual water molecules aren’t wet. Wetness only occurs due to the interaction forces between many water molecules interspersed throughout a fabric (or other material).
* **Traffic.** A few cars on the road are fine, but with too many you get a traffic jam. It could be that 10,000 cars could traverse a highway easily in 15 minutes, but 20,000 on the road at once could take over an hour.
* **Specialization.** Historically, in small populations, virtually everyone needed to farm or hunt to survive; in contrast, in larger and denser communities, enough food is produced for large fractions of the population to specialize in non-agricultural work.
While some of the examples, like uranium, correspond to a sharp transition, others like specialization are more continuous. I’ll use **emergence** to refer to qualitative changes that arise from quantitative increases in scale, and **phase transitions** for cases where the change is sharp.
In this post, I'll argue that emergence often occurs in the field of AI, and that this should significantly affect our intuitions about the long-term development and deployment of AI systems. We should expect weird and surprising phenomena to emerge as we scale up systems. This presents opportunities, but also poses important risks.
Emergent Shifts in the History of AI
------------------------------------
There have already been several examples of quantitative differences leading to important qualitative changes in machine learning.
**Storage and Learning.** The emergence of machine learning as a viable approach to AI is itself an example of More Is Different. While learning had been discussed since the 1950s, it wasn’t until the 80s-90s that it became a dominant paradigm: for instance, IBM’s [first statistical translation model](https://dl.acm.org/doi/10.3115/991635.991651) was published in 1988, even though the idea [was proposed](https://aclanthology.org/1952.earlymt-1.1.pdf) in 1949[[1]](#fn1). Not coincidentally, 1GB of storage cost over $100k in 1981 but only around $9k in 1990 (adjusted to 2021 dollars). The [Hansard corpus](https://catalog.ldc.upenn.edu/LDC95T20) used to train IBM’s model comprised 2.87 million sentences and would have been difficult to use before the 80s. Even the simple MNIST dataset would have required $4000 in hardware just to store in 1981, but that had fallen to a few dollars by 1998 when it was published. Cheaper hardware thus allowed for a qualitatively new approach to AI: in other words, More storage enabled Different approaches.
**Compute, Data, and Neural Networks.** As hardware improved, it became possible to train neural networks that were very deep for the first time. Better compute enabled bigger models trained for longer, and better storage enabled learning from more data; AlexNet-sized models and ImageNet-sized datasets wouldn’t have been feasible for researchers to experiment with in 1990.
Deep learning performs well with lots of data and compute, but struggles at smaller scales. Without many resources, simpler algorithms tend to outperform it, but with sufficient resources it pulls far ahead of the pack. This reversal of fortune led to qualitative changes in the field. As one example, the field of machine translation moved from [phrase-based models](https://nlp.stanford.edu/phrasal/) (hand-coded features, complex systems engineering) to [neural sequence-to-sequence models](https://proceedings.neurips.cc/paper/2014/file/a14ac55a4f27472c5d894ec1c3c743d2-Paper.pdf) (learned features, specialized architecture and initialization) to simply fine-tuning a [foundation model](https://arxiv.org/abs/2108.07258) such as BERT or GPT-3. Most work on phrase-based models was obviated by neural translation, and the same pattern held across many other language tasks, where hard-won domain-specific engineering effort was simply replaced by a general algorithm.
**Few-shot Learning.** More recently, [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GPT-3](https://arxiv.org/abs/2005.14165) revealed the emergence of strong few-shot and zero-shot capabilities, via well-chosen natural language prompting.


*Top: Few-shot machine translation performance (BLEU score) for GPT-2. Bottom: GPT-3 (trained on more data) has an even starker curve, going from 5 to 25 BLEU between 100M and 400M parameters. Unsupervised baselines, as well as fine-tuned state-of-the-art, are indicated for reference.*
This was an unexpected and qualitatively new phenomenon that only appeared at large scales, and it emerged without ever explicitly training models to have these few-shot capabilities. Comparing GPT-2 to GPT-3 shows that the exact model size needed can vary due to the training distribution or other factors, but this doesn’t affect the basic point that new capabilities can appear without designing or training for them.
**Grokking.** In 2021, [Power et al.](https://mathai-iclr.github.io/papers/papers/MATHAI_29_paper.pdf) identified a phenomenon they call "grokking", where a network’s generalization behavior improves qualitatively when training it for longer (even though the training loss is already small).
Specifically, for certain algorithmically generated logic/math datasets, neural networks trained for 1,000 steps achieve perfect train accuracy but near-zero test accuracy. However, after around 100,000 steps the test accuracy suddenly increases, achieving near-perfect generalization by 1 million steps.

This shows that even for a single model, we might encounter qualitative phase transitions as we train for longer.
**Other potential examples.** I'll briefly list other examples from recent papers. I don't think these examples are as individually clear-cut, but they collectively paint an interesting picture:
* [McGrath et al. (2021)](https://arxiv.org/abs/2111.09259) show that AlphaZero acquires many chess concepts at a phase transition near 32,000 training steps.
* [Pan et al. (2021)](https://arxiv.org/abs/2201.03544) show that reward hacking sometimes occurs via qualitative phase transitions as model size increases.
* DeepMind's recent [Gopher](https://arxiv.org/abs/2112.11446) model exhibits a phase transition on the FEVER task, acquiring the ability to utilize side information (Figure 3):

* [Wei et al. (2021)](https://arxiv.org/abs/2109.01652) show that instruction-tuning hurts small models but helps large models (see Figure 6).
* Some few-shot tasks such as arithmetic show phase transitions with model size (see [Brown et al. (2020)](https://arxiv.org/abs/2005.14165), Figure 3.10).
* [This](https://twitter.com/NaxAlpha/status/1420700413125447683) researcher shares an anecdote similar to the “grokking” paper.
What This Implies for the Engineering Worldview
-----------------------------------------------
In the [introduction post](https://bounded-regret.ghost.io/more-is-different-for-ai/) to this series, I contrasted two worldviews called Philosophy and Engineering. The Engineering worldview, which is favored by most ML researchers, tends to predict the future by looking at empirical trends and extrapolating them forward. I myself am [quite sympathetic to this view](https://bounded-regret.ghost.io/forecasting-zeroth-and-first-order/), and for this reason I find emergent behavior to be troubling and disorienting. Rather than expecting empirical trends to continue, emergence suggests we should often expect new qualitative behaviors that are not extrapolations of previous trends.
Indeed, in this sense Engineering (or at least pure trend extrapolation) is self-defeating as a tool for predicting the future[[2]](#fn2). The Engineering worldview wants to extrapolate trends, but one trend is that emergent behavior is becoming more and more common. Of the four phase transitions I gave above, the first (storage) occurred around 1995, and the second (compute) occurred around 2015. The last two occurred in 2020 and 2021. Based on past trends, we should expect future trends to break more and more often.[[3]](#fn3)
How can we orient ourselves when thinking about the future of AI despite the probability of frequent deviations from past experience? I'll have a lot more to say about this in the next few posts, but to put some of my cards on the table:
* Confronting emergence will require adopting mindsets that are less familiar to most ML researchers and utilizing more of the Philosophy worldview (in tandem with Engineering and other worldviews).
* Future ML systems will have weird failure modes that don't manifest today, and we should start thinking about and addressing them in advance.
* On the other hand, I don't think that Engineering as a tool for predicting the future is entirely self-defeating. Despite emergent behavior, empirical findings often generalize surprisingly far, at least if we're careful in interpreting them. Utilizing this fact will be crucial to making concrete research progress.
---
1. From the [IBM model authors](https://aclanthology.org/J90-2002.pdf): “In 1949 Warren Weaver suggested that the problem be attacked with statistical methods and ideas from information theory, an area which he, Claude Shannon, and others were developing at the time (Weaver 1949). Although researchers quickly abandoned this approach, advancing numerous theoretical objections, we believe that the true obstacles lay in the relative impotence of the available computers and the dearth of machine-readable text from which to gather the statistics vital to such an attack. Today, computers are five orders of magnitude faster than they were in 1950 and have hundreds of millions of bytes of storage. Large, machine-readable corpora are readily available.” [↩︎](#fnref1)
2. This is in contrast to using Engineering to *build capable and impressive systems* today. If anything, recent developments have strongly solidified Engineering’s dominance for this task. [↩︎](#fnref2)
3. This list is probably subject to selection bias and recency effects, although I predict that my point would still hold up for a carefully curated list (for instance, I didn’t include the several ambiguous examples in my count). I would be happy to bet on more phase transitions in the future if any readers wish to take the other side. [↩︎](#fnref3) |
0036fd73-efd2-4ea8-bab5-99988cb4601e | trentmkelly/LessWrong-43k | LessWrong | Anthropic Fall 2023 Debate Progress Update
This is a research update on some work that I’ve been doing on Scalable Oversight at Anthropic, based on the original AI safety via debate proposal and a more recent agenda developed at NYU and Anthropic. The core doc was written several months ago, so some of it is likely outdated, but it seemed worth sharing in its current form.
I’d like to thank Tamera Lanham, Sam Bowman, Kamile Lukosiute, Ethan Perez, Jared Kaplan, Amanda Askell, Kamal Ndousse, Shauna Kravec, Yuntao Bai, Alex Tamkin, Newton Cheng, Buck Shlegeris, Akbir Khan, John Hughes, Dan Valentine, Kshitij Sachan, Ryan Greenblatt, Daniel Ziegler, Max Nadeau, David Rein, Julian Michael, Kevin Klyman, Bila Mahdi, Samuel Arnesen, Nat McAleese, Jan Leike, Geoffrey Irving, and Sebastian Farquhar for help, feedback, and thoughtful discussion that improved the quality of this work and write-up.
1. Anthropic’s Debate Agenda
In this doc, I’m referring to the idea first presented in AI safety via debate (blog post). The basic idea is to supervise future AI systems by pitting them against each other in a debate, encouraging them to argue both sides (or “all sides”) of a question and using the resulting arguments to come to a final answer about the question. In this scheme, we call the systems participating in the debate debaters (though usually, these are actually the same underlying system that’s being prompted to argue against itself), and we call the agent (either another AI system or a human, or a system of humans and AIs working together, etc.) that comes to a final decision about the debate the judge.
For those more or less familiar with the original OAI/Irving et al. Debate agenda, you may wonder if there are any differences between that agenda and the agenda we’re pursuing at Anthropic, and indeed there are! Sam Bowman and Tamera Lanham have written up a working Anthropic-NYU Debate Agenda draft which is what the experiments in this doc are driving towards. [1]
To quote from there about the basic features |
2bb5b577-edb3-490a-bfad-720beb86490b | StampyAI/alignment-research-dataset/arbital | Arbital | Bayes' rule: Beginner's guide
This arc teaches basic understanding of [https://arbital.com/p/1lz](https://arbital.com/p/1lz) via [https://arbital.com/p/1x5](https://arbital.com/p/1x5). This arc should work for [https://arbital.com/p/1r5](https://arbital.com/p/1r5)-type readers. |
31fc7bcd-b8ab-4f22-be0b-c6f7f32aeb0e | trentmkelly/LessWrong-43k | LessWrong | Collapsible article sections?
It would be nice to be able to make 'collapsible sections' in articles, like on my atheism FAQ. That way, each article with (for example) a long list of references at the bottom could by default have the references collapsed, but if you click 'References' then the list of references would appear. That way people wouldn't have to scroll past several pages of references in order to leave a comment. (Examples: 1, 2, 3, 4, 5.)
This suggestion is mostly just for me at this point, though, as I'm the only one doing long lists of references. But I figured I'd mention it anyway. |
b2da87b6-6699-4829-892f-6423dfe13232 | trentmkelly/LessWrong-43k | LessWrong | Release: Optimal Weave (P1): A Prototype Cohabitive Game
I recently argued for Cohabitive Games, games that are designed for practicing negotiation, or for developing intuitions for applied cooperative bargaining, which is one of the names of preference aggregation.
I offered the assets for P1, my prototype, to whoever was interested. About 20 people asked for them. I told most of them that I wanted to polish the assets up a little bit first, and said that it would be done in about a month. But a little bit of polish turned into a lot, and other things got in the way, and a month turned into ten. I'm genuinely a little bit dismayed about how long it took, however:
P1, now Optimal Weave 0.1, is a lot nicer now.
I think it hangs together pretty well.
You can get it here: Optimal Weave 0.1. (I'm not taking any profit, and will only start to if it runs away and starts selling a lot. (release day sale. get in quick.))
There's also a print and play version if you want it sooner, free, and don't mind playing with paper. (Honestly, if you want to help to progress the genre, you've got to learn to enjoy playing on paper so that you can iterate faster.)
There's also a version that only includes the crisp, flippable land tiles, which you can then supplement with the ability, event and desire cards from the printed version and whatever playing pieces you have at home.
(note: the version you'll receive uses round tiles instead of hex tiles. They don't look as cool, but I anticipate that round tiles will be a little easier to lay out, and easier to flip without nudging their neighbors around (the game involves a lot of tile flipping). Earlier on, I had some hope that I could support both square layout and hexagonal layout, that's easier with round tiles, though everything's balanced around the higher adjacency of the hexagonal layout so I doubt that'll end up working)
I'll be contacting everyone who reached out imminently (I have a list).
What Happened
Further learnings occurred. I should report them.
I refined the game a fa |
0b0f9bbb-bf47-42cd-b4a9-19aaf64d8879 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Cambridge (MA) Meetup
Discussion article for the meetup : Cambridge (MA) Meetup
WHEN: 19 August 2012 02:00:00PM (-0400)
WHERE: 25 Ames St, Cambridge, MA 02139
Cambridge/Boston-area Less Wrong meetups on the first and third Sunday of every month, 2pm at the MIT Landau Building [25 Ames St, Bldg 66], room 148. Room number subject to change based on availability, signs will be posted with the actual room number.
Discussion article for the meetup : Cambridge (MA) Meetup |
bfdbeeb5-441e-4bc1-9924-4d802c8be2b8 | trentmkelly/LessWrong-43k | LessWrong | There are no rules
Crossposted to my blog
Over the course of this past year, I’ve slowly become a magician.
No, not a literal one. But in the sense that I’ve become a version of myself that I previously would’ve called a magician.
I feel less social anxiety and awkwardness when asking people for things. I feel more inclined to find unusual or uncomfortable answers to problems. I have gotten noticeably better at not letting uncertainty and hypotheticals stall action.
Before, I would quickly concede if told it wasn’t possible to get a Thing I wanted. Now, I’m usually able to pry until I get that Thing, or at least come away with a compromise or special arrangement — unless the Thing I want really isn’t possible.
And it feels really good.
As I think about what’s contributed to this mindset shift, one phrase immediately comes to mind — well, the internalization of one phrase comes to mind: there are no rules.
----------------------------------------
Humans are prediction machines. We expect things to unfold in a certain way because that’s what happened in the past.
This is generally a very useful heuristic, since it saves the mental energy of thinking about what’ll happen next or how we should respond in certain contexts — when you pass by a friend on the street, you don’t need to think very hard about what social niceties to say.
But it’s too easy to fall into these predictable patterns. Invisible social norms develop. Interpersonal dynamics slowly settle. People sleepwalk through daily routines and events.
This can be dangerous if you don’t think about it once in a while, because you can forget that things don’t necessarily happen the way you think they should or would.
Because the world is not static — there are no rules! You can simply do most things, and usually with minimal social costs, if any. [1]
(It’s at this point I should explicitly state: the idea that “there are no rules” is more of a mindset than a literal phrase. Many people are kind, willing to help, an |
87befe70-dea5-40ef-8ebb-a13730cde0cd | trentmkelly/LessWrong-43k | LessWrong | "Perspective: Focused-Ultrasound Guided Neuropeptide Delivery as a Novel Therapeutic Approach in Psychiatry" (Seeds of Science call for reviewers)
Seeds of Science is a journal (funded through Scott Alexander's ACX grants program) that publishes speculative or non-traditional articles on scientific topics. Peer review is conducted through community-based voting and commenting by a diverse network of reviewers (or "gardeners" as we call them); top comments are published after the main text of the manuscript.
We have just sent out an article for review that may be of interest to some in the LessWrong community, so I wanted to see if anyone - particularly those with a background in neuroscience - would be interested in joining us as a gardener providing feedback on the article. It is free to join and anyone is welcome (we currently have gardeners from all levels of academia and outside of it). Participation is entirely voluntary - we send you submitted articles and you can choose to vote/comment or abstain without notification (so it's no worries if you don't plan on reviewing very often but just want to take a look here and there at the articles people are submitting).
To register, you can fill out this google form. From there, it's pretty self-explanatory - I will add you to the mailing list and send you an email that includes the manuscript, our publication criteria, and a simple review form for recording votes/comments. If you would like to just take a look at this article without being added to the mailing list, then just reach out (info@theseedsofscience.org) and say so.
Happy to answer any questions about the journal through email or in the comments below. Here is the abstract for the article.
Perspective: Focused-Ultrasound Guided Neuropeptide Delivery as a Novel Therapeutic Approach in Psychiatry
Although drugs are a critical component of mental healthcare, most have modest benefits and significant side effects. One way to develop a superior intervention would be to administer drugs with the spatial and temporal precision that better replicates natural diversity within neurotransmitter systems. |
abeda309-b8ca-4ef0-b3dd-ea1045e6c07a | trentmkelly/LessWrong-43k | LessWrong | On Leif Wenar's Absurdly Unconvincing Critique Of Effective Altruism
Crossposted here, on my blog.
Leif Wenar recently published a critique of effective altruism that seems to be getting a lot of hype. I don’t know why. There were a few different arguments in the piece: some terrible and others even worse. Yet more strangely, he doesn’t object much to EA as a whole—he just points to random downsides of EA and is snarky. If I accepted every claim in his piece, I’d come away with the belief that some EA charities are bad in a bunch of random ways, but believe nothing that imperils my core belief in the goodness of the effective altruism movement or, indeed, in the charities that Wenar critiques.
I’m not going to quote Wenar’s entire article, as it’s quite long and mostly irrelevant. It contains, at various points, bizarre evidence-free speculation about the motivations of effective altruists. He writes, for instance, “Ord, it seemed, wanted to be the hero—the hero by being smart—just as I had. Behind his glazed eyes, the hero is thinking, “They’re trying to stop me.””
I’m sure this is rooted in Ord’s poor relationship with his mother!
At another point, he mistakes MacAskill’s statement that there’s been a lot of aid in poor countries and that things have gotten better for the claim that aid is responsible for the entirety of the improvement. These strange status games about credit and reward and heroism demonstrate a surprising moral shallowness, caring more about whether people take credit for doing things than what is done. He says, for instance, after quoting MacAskill saying it’s possible to save a life for a few thousand dollars:
> But let’s picture that person you’ve supposedly rescued from death in MacAskill’s account—say it’s a young Malawian boy. Do you really deserve all the credit for “saving his life”? Didn’t the people who first developed the bed nets also “make a difference” in preventing his malaria?
Well, as a philosopher, Wenar should know that two things can both cause something else. If there’s a 9-judge panel |
bc773c96-e36f-40a5-b1c6-fe58acf45a18 | trentmkelly/LessWrong-43k | LessWrong | What would be the pros and cons of a government-backed, property-based cryptocurrency?
I'll begin with a theoretical demonstrative example.
The Bureau of Land Management (sorry for immediately Americanizing the question, to all the European readers out there, but this idea might not work well in Europe anyways, for reasons that will soon become clear.) manages ca. 250 million acres (1 million km2, and 2.5 acres to a hectare), a landmass approximately the size of France and Germany combined. The Bureau offers land leases regularly, for a multitude of different reasons. I vaguely remember them offering land for sale at times (which probably means it's never happened), but for this thought experiment, let's imagine that land sales are on the table.
Land prices in America can vary between $1k and $100k an acre (napkin math), depending on where you're looking. Most BLM land is "what homesteaders didn't want", which would probably put it in the lower bound of that spectrum, but let's say that most of it could sell for $5,000 an acre, and that BLM would "set aside" y million acres for this program.
BLM would then create x amount of this coin per acre, and sell an amount of it to investors based off of the amount of land they would be putting up for sale in the coming year/amount of land up for lease. The BLM would promise to accept the coin as legal tender for land sales and leases, putting a minimum USD value to the coin. Most land sales and leases that I know of are auction based, and they usually have a minimum bid amount, so there would be a link between the value of the coin and USD. If they made 10 coins per acre, and set that as a value, then an example auction would have a minimum bid of: 5 acres for sale, min. bid of $25,000 or 50 "coins". This gives the coin a minimum value, backed by a government agency.
Now say we jump ahead a few years, and cryptocurrency hype has driven up the price (government accepted crypto, fiat-backed to some degree), BLM is able to sell these coins for more than it costs them, there's more interest in BLM land sales, |
d0a0ec4b-572b-455f-9df6-d4ccb6019ec7 | trentmkelly/LessWrong-43k | LessWrong | Genosets
Since risk from individual SNP's 'should' not be aggregated to indicate an individual's risk based on multiple sources of evidence, how are the magnitudes for genosets determined?. Can bayes or another method be used to interpret a promethease report?
Even genetic epidemiology textbooks seem pessimistic: about the usefulness of the genetic research underpinning precision medicine:
> ‘...for the repeated failure to replicate positive findings in genetic epidemiology (102; 103) and remains the subejct of an important ongoing debate (101-105)’ -pg. 26 on chapter 1. An Introduction to Genetic Epidemiology
The references in question are about the impact of population stratification on genetic association studies. That doesn’t seem to substantiate such a broad stroke about the non-replicability of genetic epidemiology. I don't know what to make of these findings.
Here is a link to a screenshot of those references
It suprises me that entrepreneurial machine learning analysts don’t beg for genetic research to identify how combinatorial patterns of genes to be able to characterise individual risk. It seems like if/once they can get hold of that information, the sequence from genetic science to consumer actionable health information is bridged. So where are the 'lean gene learning machine' startups? I certainly don’t have the lean gene to do it myself. I don’t know machine learning.
Regulatory issues seems like the biggest hurdle. To the best of my google-fu, 23andme doesn't even disclose what it's 'Established Research' genes are. So, once regulatory hurdles are surmounted, lots of useful research will flood out.
|
7de20c47-eccd-4ed6-bdf1-1fa81dbffda3 | trentmkelly/LessWrong-43k | LessWrong | Nyarlathotep Stirs: A Meta-Narrative ChatGPT Story
----------------------------------------
As Austin drifted off to sleep, he felt a sudden jolt in his brain. The chip had turned on, and he could feel the electrical pulses coursing through his mind. It was like being lost in a cosmic abyss, where the unknown and the incomprehensible reigned supreme.
He saw visions of ancient gods, their tentacles writhing and wriggling in the void, as they whispered secrets too terrible for human ears. Their voices echoed in his mind, like the distant roar of waves crashing against the shore.
As the night wore on, Austin's mind became a conduit for the unfathomable. He felt as though he was traversing a primordial landscape, where ancient monoliths rose up from the ground, casting long shadows in the starless sky.
The chip in his brain continued to pulse, and Austin felt his consciousness being pulled deeper and deeper into the abyss. He saw creatures that defied description, their forms twisted and distorted beyond all recognition. They were beings from beyond the veil, creatures that should never have existed in this world.
But despite the terror that gripped his heart, Austin couldn't help but feel a strange sense of wonder. The universe was so vast, so incomprehensible, that it made all his earthly worries seem insignificant. He was just a tiny speck in the grand scheme of things, a mote of dust in an endless cosmic expanse.
When he woke up the next morning, he felt as though he had been reborn. The experience had left him shaken, but he also felt a strange sense of purpose. He knew that he was just a conduit, a vessel for forces that were far beyond his control. But he also knew that he had been chosen, that he had been given a glimpse into something greater than himself.
And so he continued to rent out his brain to Elon Musk, letting the chip send out its pulsing signals into the void. He didn't know what kind of intelligence they were training, or what kind of horrors lurked in the abyss. But he knew that he was a |
93bf3e54-23ac-4ac7-b854-b55534ba28a3 | trentmkelly/LessWrong-43k | LessWrong | CEV: coherence versus extrapolation
It's just struck me that there might be a tension between the coherence (C) and the extrapolated (E) part of CEV. One reason that CEV might work is that the mindspace of humanity isn't that large - humans are pretty close to each other, in comparison to the space of possible minds. But this is far more true in every day decisions than in large scale ones.
Take a fundamentalist Christian, a total utilitarian, a strong Marxist, an extreme libertarian, and a couple more stereotypes that fit your fancy. What can their ideology tell us about their everyday activities? Well, very little. Those people could be rude, polite, arrogant, compassionate, etc... and their ideology is a very weak indication of that. Different ideologies and moral systems seem to mandate almost identical everyday and personal interactions (this is in itself very interesting, and causes me to see many systems of moralities as formal justifications of what people/society find "moral" anyway).
But now let's more to a more distant - "far" - level. How will these people vote in elections? Will they donate to charity, and if so, which ones? If they were given power (via wealth or position in some political or other organisation), how are they likely to use that power? Now their ideology is much more informative. Though it's not fully determinative, we would start to question the label if their actions at this level seemed out of synch. A Marxist that donated to a Conservative party, for instance, would give us pause, and we'd want to understand the apparent contradiction.
Let's move up yet another level. How would they design or change the universe if they had complete power? What is their ideal plan for the long term? At this level, we're entirely in far mode, and we would expect that their vastly divergent ideologies would be the most informative piece of information about their moral preferences. Details about their character and personalities, which loomed so large at the everyday level, will now |
c2875f4c-8acf-4719-aed2-2d987a508354 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Agent-Agnostic Human-in-the-Loop Reinforcement Learning
1 Introduction
---------------
A central goal of Reinforcement Learning (RL) is to design agents that learn in a fully autonomous way. An engineer designs a reward function, input/output channels, and a learning algorithm. Then, apart from debugging, the engineer need not intervene during the actual learning process. Yet fully autonomous learning is often infeasible due to the complexity of real-world problems, the difficulty of specifying reward functions, and the presence of potentially dangerous outcomes that constrain exploration.
Consider a robot learning to perform household chores. Human engineers create a curriculum, moving the agent between simulation, practice environments, and real house environments. Over time, they may tweak reward functions, heuristics, sensors, and state or action representations. They may intervene directly in real-world training to prevent the robot damaging itself, destroying valuable goods, or harming people it interacts with.
In this example, humans do not just design the learning agent: they are also in the loop of the agent’s learning process, as is typical for many learning systems. Self-driving cars learn with humans ready to intervene in dangerous situations. Facebook’s algorithm for recommending trending news stories has humans filtering out inappropriate content [fac](#bib.bib1) . In both examples, the agent’s environment is complex, non-stationary, and there are a wide range of damaging outcomes (like a traffic accident). As RL is applied to increasingly complex real-world problems, such interactive guidance will be critical to the success of these systems.
Prior literature has investigated how people can help RL agents learn more efficiently through different methods of interaction Ng et al. ([1999](#bib.bib32)); Knox and Stone ([2011](#bib.bib24)); Loftin et al. ([2014](#bib.bib28)); Peng et al. ([2016](#bib.bib36)); Thomaz and Breazeal ([2006](#bib.bib42)); Wiewiora et al. ([2003](#bib.bib48)); Judah et al. ([2010](#bib.bib21)); Griffith et al. ([2013](#bib.bib17)); K.V.N. ([2012](#bib.bib25)); Torrey and Taylor ([2013](#bib.bib43)); Knox and Stone ([2009](#bib.bib23)); Zhan et al. ([2016](#bib.bib49)); Walsh et al. ([2011](#bib.bib46)); Torrey and Taylor ([2012](#bib.bib44)); Maclin and Shavlik ([1996](#bib.bib30)); Driessens and Džeroski ([2004](#bib.bib10)). Often, the human’s role is to pass along knowledge about relevant quantities of the RL problem, like Q-values, action optimality, or the true reward for a particular state-action pair. This way, the person can bias exploration, prevent catastrophic outcomes, and accelerate learning.
Most existing work develops agent-specific protocols for human interaction. That is, protocols for human interaction or advice that are designed for a specific RL algorithm (such as Q-learning). For instance, Griffith et al. ([2013](#bib.bib17)) investigate the power of policy advice for a Bayesian Q-Learner. Other works assume that the states of the MDP take a particular representation, or that the action space is discrete or finite. Making explicit assumptions about the agent’s learning process can enable more powerful teaching protocols that leverage insights about the learning algorithm or representation.
###
1.1 Our contribution: agent-agnostic guidance of RL algorithms
Our goal is to develop a framework for human-agent interaction that is (a) agent-agnostic and (b) can capture a wide range of ways a human can help an RL agent. Such a setting is informative of the structure of general teaching paradigms, the relationship and interplay of pre-existing teaching methods, and suggestive of new teaching methodologies, which we discuss in Section [6](#S6 "6 Conclusion ‣ Agent-Agnostic Human-in-the-Loop Reinforcement Learning"). Additionally, approaching human-in-the-loop RL from a maximally general standpoint can help illustrate the relationship between the requisite power of a teacher and the teacher’s effectiveness on learning. For instance, we demonstrate sufficient conditions on a teacher’s knowledge about an environment that enable effective111By “effective“ we mean: pruning bad actions while never pruning an optimal action. See Remark 3 (below). action pruning of an arbitrary agent. Results of this form can again be informative to the general structure of teaching RL agents.
We make two simplifying assumptions. First, we consider environments where the state is fully observed; that is, the learning agent interacts with a Markov Decision Process (MDP) (Puterman, [2014](#bib.bib37); Kaelbling et al., [1996](#bib.bib22); Sutton and Barto, [1998](#bib.bib40)). Second, we note that conducting experiments with an actual human in the loop creates a huge amount of work for a human, and can slow down training to an unacceptable degree. For this reason, we focus on programatic instantiations of humans-in-the-loop; a person informed about the task (MDP) in question will write a program to facilitate various teaching protocols.
There are obvious disadvantages to agent-agnostic protocols. The agent is not specialized to the protocol, so it is unable to ask the human informative questions as in Amir et al. ([2016](#bib.bib4)), or will not have an observation model that faithfully represents the process the human uses to generate advice, as in Griffith et al. ([2013](#bib.bib17)); Judah et al. ([2010](#bib.bib21)). Likewise, the human cannot provide optimally informative advice to the agent as they don’t know the agent’s prior knowledge, exploration technique, representation, or learning method.
Conversely, agent-specific protocols may perform well for one type of algorithm or environment, but poorly on others. In many cases, without further hand-engineering, agent-specific protocols can’t be adapted to a variety of agent-types. When researchers tackle challenging RL problems, they tend to explore a large space of algorithms with important structural differences: some are model-based vs. model-free, some approximate the optimal policy, others a value function, and so on. It takes substantial effort to adapt an advice protocol to each such algorithm. Moreover, as advice protocols and learning algorithms become more complex, greater modularity will help limit design complexity.
In our framework, the interaction between the person guiding the learning process, the agent, and the environment is formalized as a protocol program. This program controls the channels between the agent and the environment based on human input, pictured in Figure [1](#S1.F1 "Figure 1 ‣ 1.1 Our contribution: agent-agnostic guidance of RL algorithms ‣ 1 Introduction ‣ Agent-Agnostic Human-in-the-Loop Reinforcement Learning"). This gives the teacher extensive control over the agent: in an extreme case, the agent can be prevented from interacting with the real environment entirely and only interact with a simulation. At the same time, we require that the human only interact with the agent during learning through the protocol program—both agent and environment are a black box to the human.

Figure 1: A general setup for RL with a human in the loop. By instantiating P with different protocol programs, we can implement different mechanisms for human guidance of RL agents.
2 Framework
------------
Any system for RL with a human in the loop has to coordinate three components:
1. The environment is an MDP and is specified by a tuple M=(S,A,T,R,γ), where S is the state space, A is the action space, T:S×A×S↦[0,1], denotes the transition function, a probability distribution on states given a state and action, R:S×A↦R is the reward function, and γ is the discount factor.
2. The agent is a (stateful, potentially stochastic) function L:S×R→A.
3. The human can receive and send advice information of flexible type, say Xin and Xout, so, we will treat the human as a (stateful, potentially stochastic) function H:Xin→Xout. For example, Xin might contain the history of actions, states, and rewards so far, and a new proposed action a′, and Xout might be an action as well, either equivalent to a′ (if accepted) or different (if rejected). We assume that the human knows in general terms how their responses will be used and is making a good-faith effort to be helpful.
The interaction between the environment, the agent, and a human advisor sets up a mechanism design problem: how can we design an interface that orchestrates the interaction between these components such that the combined system maximizes the expected sum of γ-discounted rewards from the environment? In other words, how can we write a protocol program P:S×R→A that can take the place of a given agent L, but that achieves higher rewards by making efficient use of information gained through sub-calls to L and H?
By formalizing existing and new techniques as programs, we facilitate understanding and comparison of these techniques within a common framework. By abstracting from particular agents and environments, we may better understand the mechanisms underlying effective teaching for Reinforcement Learning by developing portable and modular teaching methods.
3 Capturing Existing Advice Schemes
------------------------------------
Naturally, protocol programs cannot capture all advice protocols. Any protocol that depends on prior knowledge of the agent’s learning algorithm, representation, priors, or hyperparameters is ruled out. Despite this constraint, the framework can capture a range of existing protocols where a human-in-the-loop guides an agent.
Figure [1](#S1.F1 "Figure 1 ‣ 1.1 Our contribution: agent-agnostic guidance of RL algorithms ‣ 1 Introduction ‣ Agent-Agnostic Human-in-the-Loop Reinforcement Learning") shows that the human can manipulate the actions (A) sent to the environment, the agent’s observed states (S), and observed rewards (R). This points to the following combinatorial set of protocol families in which the human manipulates one or more of these components to influence learning:
| | | |
| --- | --- | --- |
| | {S,A,R,(S,A),(S,R),(A,R),(S,A,R)} | |
The first three elements of the set correspond to state manipulation, action pruning, and reward shaping protocol families.222State manipulation can correspond to abstraction or training in simulation The remaining elements represent families of teaching schemes that modify multiple elements of the agent’s learning; these protocols may introduce powerful interplay between the different components, which hope future work will explore.
We now demonstrate simple ways in which protocol programs instantiate typical methods for intervening in an agent’s learning process.
Algorithm 1 Agent in control (standard)
1:procedure agentControl(s,r)
2: return L(s,r)
3:end procedure
Algorithm 2 Human in control
1:procedure humanControl(s,r)
2: return H(s,r)
3:end procedure
Algorithm 3 Action pruning
1:Δ←H.Δ ▹ To Prune: S×A↦{0,1}
2:procedure pruneActions(s,r)
3: a––=L(s,r)
4: while Δ(s,a––) do ▹ If Needs Pruning
5: r–=H[(s,a––)]
6: a––=L(s,r–)
7: end while
8: return a––
9:end procedure
Algorithm 4 Reward manipulation
1:procedure manipulateReward(s,r)
2: r–=H(s,r)
3: return L(s,r–)
4:end procedure
Algorithm 5 Training in simulation
1:M∗=(S,A,T∗,R∗,γ) ▹ Simulation
2:η=[] ▹ History: array of (S×R×A)
3:procedure trainInSimulation(s,r)
4: s–=s
5: r–=r
6: while H(η)≠ “agent is ready” do
7: a––=L(s–,r–)
8: append (s–,r–,a––) to η
9: r–∼R∗(s–,a––)
10: s–∼T∗(s–,a––)
11: end while
12: return L(s,r)
13:end procedure
Figure 2: Many schemes for human guidance of RL algorithms can be expressed as protocol programs. These programs have the same interface as the agent L, but can be safer or more efficient learners by making use of human advice H.
###
3.1 Reward shaping
Section [2](#S2 "2 Framework ‣ Agent-Agnostic Human-in-the-Loop Reinforcement Learning") defined the reward function R as part of the MDP M. However, while humans generally don’t design the environment, we do design reward functions. Usually the reward function is hand-coded prior to learning and must accurately assign reward values to any state the agent might reach. An alternative is to have a human generate the rewards interactively: the human observes the state and action and returns a scalar to the agent. This setup has been explored in work on TAMER (Knox and Stone, [2009](#bib.bib23)). A similar setup (with an agent-specific protocol) was applied to robotics by Daniel et al. ([2014](#bib.bib6)). It is straightforward to represent rewards that are generated interactively (or online) using protocol programs.
We now turn to other protocols in which the human manipulates rewards. These protocols assume a fixed reward function R that is part of the MDP M.
####
3.1.1 Reward shaping and Q-value initialization
In Reward Shaping protocols, the human engineer changes the rewards given by some fixed reward function in order to influence an agent’s learning. Ng et al. ([1999](#bib.bib32)) introduced potential-based shaping, which shapes rewards without changing an MDP’s optimal policy. In particular, each reward received by the environment is augmented by a shaping function:
| | | | |
| --- | --- | --- | --- |
| | F(s,a,s′)=γϕ(s′)−ϕ(s), | | (1) |
so the agent actually receives r–=F(s,a,s′)+R(s,a). Wiewiora et al. ([2003](#bib.bib48)) showed potential shaping to be equivalent (for Q-learners) to a subset Q-value initialization under some assumptions. Further, Devlin and Kudenko ([2012](#bib.bib8)) propose dynamic potential shaping functions that change over time. That is, the shaping function F also takes as two time parameters, t and t′, such that:
| | | | |
| --- | --- | --- | --- |
| | F(s,t,s′,t′)=γϕ(s′,t′)−ϕ(s,t) | | (2) |
Where t′>t. Their main result is that dynamic shaping functions of this form also guarantee optimal policy invariance. Similarly, Wiewiora et al. ([2003](#bib.bib48)) extend potential shaping to potential-based advice functions, which identifies a similar class of shaping functions on (s,a) pairs.
In Section [4](#S4 "4 Theory ‣ Agent-Agnostic Human-in-the-Loop Reinforcement Learning"), we show that our Framework captures reward shaping, and consequently, a limited notion of Q-value initialization.
###
3.2 Training in Simulation
It is common practice to train an agent in simulation and transfer it to the real world once it performs well enough. Algorithm [5](#alg5 "Algorithm 5 ‣ Figure 2 ‣ 3 Capturing Existing Advice Schemes ‣ Agent-Agnostic Human-in-the-Loop Reinforcement Learning") (Figure 2) shows how to represent the process of training in simulation as a protocol program. We let M represent the real-world decision problem and let M∗ be a simulator for M that is included in the protocol program. Initially the protocol program has the agent L interact with M∗ while the human observes the interaction. When the human decides the agent is ready, the protocol program has L interact with M instead.
###
3.3 Action Pruning
Action pruning is a technique for dynamically removing actions from the MDP to reduce the branching factor of the search space. Such techniques have been shown to accelerate learning and planning time Sherstov and Stone ([2005](#bib.bib39)); Hansen et al. ([1996](#bib.bib19)); Rosman and Ramamoorthy ([2012](#bib.bib38)); Abel et al. ([2015](#bib.bib2)). In Section 5, we apply action-pruning to prevent catastrophic outcomes during exploration, a problem explored by Lipton et al. ([2016](#bib.bib27)); Garcia and Fernandez ([2011](#bib.bib14), [2012](#bib.bib15)); Hans et al. ([2008](#bib.bib18)); Moldovan and Abbeel ([2012](#bib.bib31)).
Protocol programs allow action pruning to be carried out interactively. Instead of having to decide which actions to prune prior to learning, the human can wait to observe the states that are actually encountered by the agent, which may be valuable in cases where the human has limited knowledge of the environment or the agent’s learning ability. In Section [4](#S4 "4 Theory ‣ Agent-Agnostic Human-in-the-Loop Reinforcement Learning"), we exhibit an agent-agnostic protocol for interactively pruning actions that preserves the optimal policy while removing some bad actions.
Our pruning protocol is illustrated in a gridworld with lava pits (Figure [3](#S3.F3 "Figure 3 ‣ 3.3 Action Pruning ‣ 3 Capturing Existing Advice Schemes ‣ Agent-Agnostic Human-in-the-Loop Reinforcement Learning")). The agent is represented by a gray circle, “G” is a goal state that provides reward +1, and the red cells are lava pits with reward −200. All white cells provide reward 0.

Figure 3: The human allows movement from state 34 to 33 but blocks agent from falling in lava (at 43).
At each time step, the human checks whether the agent moves into a lava pit. If it does not (as in moving DOWN from state 34), the agent continues as normal. If it does (as in moving RIGHT from state 33), the human bypasses sending any action to the true MDP (preventing movement right) and sends the agent a next state of 33. The agent doesn’t actually fall in the lava but the human sends them a reward r≤−200. After this negative reward, the agent is less likely to try the action again. For the protocol program, see Algorithm [3](#alg3 "Algorithm 3 ‣ Figure 2 ‣ 3 Capturing Existing Advice Schemes ‣ Agent-Agnostic Human-in-the-Loop Reinforcement Learning") in Figure 2.
Note that the agent receives no explicit signal that their attempted catastrophic action was blocked by the human. They observe a big negative reward and a self-loop but no information about whether the human or environment generated their observation.
###
3.4 Manipulating state representation
The agent’s state representation can have a significant influence on its learning. Suppose the states of MDP M consist of a number of features, defining a state vector s. The human engineer can specify a mapping ϕ such that the agent always receives ϕ(s)=¯s in place of this vector s. Such mappings are used to specify high-level features of state that are important for learning, or to dynamically ignore confusing features from the agent.
This transformation of the state vector is normally fixed before learning. A protocol program can allow the human to provide processed states or high-level features interactively. By the time the human stops providing features, the agent might have learned to generate them on its own (as in Learning with Privileged Information (Vapnik and Vashist, [2009](#bib.bib45); Pechyony and Vapnik, [2010](#bib.bib35))).
Other methods have focused on state abstraction functions to decrease learning time and preserve the quality of learned behavior, as in Li et al. ([2006](#bib.bib26)); Ortner ([2013](#bib.bib33)); Even-Dar and Mansour ([2003](#bib.bib12)); Jong and Stone ([2005](#bib.bib20)); Dean et al. ([1997](#bib.bib7)); Abel et al. ([2016](#bib.bib3)); Ferns et al. ([2006](#bib.bib13)). Using a state abstraction function, agents compress representations of their environments, enabling deeper planning and lower sample complexity. Any state aggregation function can be implemented by a protocol program, perhaps dynamically induced through interaction with a teacher.
4 Theory
---------
Here we illustrate some simple ways in which our proposed agent-agnostic interaction scheme captures other existing agent-agnostic protocols. The following results all concern Tabular MDPs, but are intended to offer intuition for high-dimensional or continuous environments as well.
###
4.1 Reward Shaping
First we observe that protocol programs can precisely capture methods for shaping reward functions.
Remark 1: For any reward shaping function F, including potential-based shaping, potential-based advice, and dynamic potential-based advice, there is a protocol that produces the same rewards.
To construct such a protocol for a given F, simply let the reward output by the protocol, r–, take on the value F(s)+r– at each time step. That is, in Algorithm [4](#alg4 "Algorithm 4 ‣ Figure 2 ‣ 3 Capturing Existing Advice Schemes ‣ Agent-Agnostic Human-in-the-Loop Reinforcement Learning"), simply define H(s,r–)=F(s)+r–.
###
4.2 Action Pruning
We now show that there is a simple class of protocol programs that carry out action pruning of a certain form.
Remark 2: There is a protocol for pruning actions in the following sense: for any set of state action pairs sa⊂S×A, the protocol ensures that, for each pair (si,aj)∈sa, action aj is never executed in the MDP in state si.
The protocol is as described in Section [3.3](#S3.SS3 "3.3 Action Pruning ‣ 3 Capturing Existing Advice Schemes ‣ Agent-Agnostic Human-in-the-Loop Reinforcement Learning") and shown in Algorithm [3](#alg3 "Algorithm 3 ‣ Figure 2 ‣ 3 Capturing Existing Advice Schemes ‣ Agent-Agnostic Human-in-the-Loop Reinforcement Learning"). The premise is this: in all cases where the agent executes an action that should be pruned, the protocol gives the agent low reward and forces the agent to self-loop.
Knowing which actions to prune is itself a challenging problem. Often, it is natural to assume that the human guiding the learner knows something about the environment of interest (such as where high rewards or catastrophes lie), but may not know every detail of the problem. Thus, we consider a case in which the human has partial (but useful) knowledge about the problem of interest, represented as an approximate Q-function. The next remark shows there is a protocol based on approximate knowledge with two properties: (1) it never prunes an optimal action, (2) it limits the magnitude of the agent’s worst mistake:
Remark 3: Assuming the protocol designer has a β-optimal Q function:
| | | | |
| --- | --- | --- | --- |
| | ||Q∗(s,a)−QH(s,a)||∞≤β | | (3) |
there exists a protocol that never prunes an optimal action, but prunes all actions so that the agent’s mistakes are never more than 4β below optimal. That is, for all times t:
| | | | |
| --- | --- | --- | --- |
| | VLt(st)≥V∗(st)−4β, | | (4) |
where Lt is the agent’s policy after t timesteps.
###### Proof of Remark 3.
The protocol designer has a β-approximate Q function, denoted QH, defined as above. Consider the state-specific action pruning function H(s):
| | | | |
| --- | --- | --- | --- |
| | H(s)={a∈A∣QH(s,a)≥maxa′QH(s,a′)−2β} | | (5) |
The protocol prunes all actions not in H(s) according to the self-loop method described above. This protocol induces a pruned Bellman Equation over available actions, H(s), in each state:
| | | | |
| --- | --- | --- | --- |
| | VH(s)=maxa∈H(s)(R(s,a)+γ∑s′T(s,a,s′)VH(s′)) | | (6) |
Let a∗ denote the true optimal action: a∗=argmaxa′Q∗(s,a′). To preserve the optimal policy, we need a∗∈H(s), for each state. Note that a∗∉H(s) when:
| | | | |
| --- | --- | --- | --- |
| | QH(s,a∗)<maxa′QH(s,a′)−2β | | (7) |
But by definition of QH(s,a):
| | | | |
| --- | --- | --- | --- |
| | |QH(s,a∗)−maxaQH(s,a)|≤2β | | (8) |
Thus, a∗∈H(s) can never occur. Furthermore, observe that H(s) retains all actions a for which:
| | | | |
| --- | --- | --- | --- |
| | QH(s,a)≥maxa′QH(s,a′)−2β, | | (9) |
holds. Thus, in the worst case, the following two hold:
1. The optimal action estimate is β too low: QH(s,a∗)=Q∗(s,a∗)−β
2. The action with the lowest value, abad, is β too high: QH(s,abad)=Q∗(s,abad)+β
From Equation [9](#S4.E9 "(9) ‣ 4.2 Action Pruning ‣ 4 Theory ‣ Agent-Agnostic Human-in-the-Loop Reinforcement Learning"), observe that the minimal Q∗(s,abad) such that abad∈H(s) is:
| | | | |
| --- | --- | --- | --- |
| | Q∗(s,abad)+β | ≥Q∗(s,a∗)−β−2β | |
| | ∴Q∗(s,abad) | ≥Q∗(s,a∗)−4β | |
Thus, this pruning protocol never prunes an optimal action, but prunes all actions worse then 4β below a∗ in value. We conclude that the agent may never execute an action 4β below optimal. ∎
5 Experiments
--------------
This section applies our action pruning protocols (Section [3.3](#S3.SS3 "3.3 Action Pruning ‣ 3 Capturing Existing Advice Schemes ‣ Agent-Agnostic Human-in-the-Loop Reinforcement Learning") and Remarks 2 and 3 above) to concrete RL problems. In Experiment 1, action pruning is used to prevent the agent from trying catastrophic actions, i.e. to achieve safe exploration. In Experiment 2, action pruning is used to accelerate learning.
###
5.1 Protocol for Preventing Catastrophes
Human-in-the-loop RL can help prevent disastrous outcomes that result from ignorance of the environment’s dynamics or of the reward function. Our goal for this experiment is to prevent the agent from taking catastrophic actions. These are real world actions so costly that we want the agent to never take the action333We allow an RL agent to take sub-optimal actions while learning. Catastrophic actions are not allowed because their cost is orders of magnitude worse than non-catastrophic actions.. This notion of catastrophic action is closely related to ideas in “Safe RL” (Garcia and Fernandez, [2015](#bib.bib16); Moldovan and Abbeel, [2012](#bib.bib31)) and to work on “significant rare events” (Paul et al., [2016](#bib.bib34)).
Section [3.3](#S3.SS3 "3.3 Action Pruning ‣ 3 Capturing Existing Advice Schemes ‣ Agent-Agnostic Human-in-the-Loop Reinforcement Learning") describes our protocol program for preventing catastrophes in finite MDPs using action pruning. There are two important elements of this program:
1. When the agent tries a catastrophic action a in state s, the agent is blocked from executing the action in the real world, and the agent receives state and reward: (s,rbad), where rbad is an extreme negative reward.
2. This (s,a) is stored so that the protocol program can automate the human’s intervention, which could allow the human to stop monitoring after all catastrophes have been stored.
This protocol prevents catastrophic actions while preserving the optimal policy and having only minimal side-effects on the agent’s learning. We can extend this protocol to environments with high-dimensional state spaces. Element (1) above remains the same. But (2) must be modified: preventing future catastrophes requires generalization across catastrophic actions (as there will be infinitely many such actions). We discuss this setting in Appendix A.
###
5.2 Experiment 1: Preventing Catastrophes in a Pong-like Game
Our protocol for preventing catastrophes is intended for use in a real-world environment. Here we provide a preliminary test of our protocol in a simple video game.
Our protocol treats the RL agent as a black box. To this end, we applied our protocol to an open-source implementation of the state-of-the-art RL algorithm “Trust Region Policy Optimization” from Duan et al. ([2016](#bib.bib11)). The environment was Catcher, a simplified version of Pong with non-visual state representation. Since there are no catastrophic actions in Catcher, we modified the game to give a large negative reward when the paddle’s speed exceeds a speed limit. We compare the performance of an agent who is assisted by the protocol (‘‘Pruned’’) and so is blocked from the catastrophic actions444
We did not use an actual human in the loop. Instead the agent was blocked by a protocol program that checked whether each action would exceed the speed limit. This is essentially the protocol outlined in Appendix A but with the classifier trained offline to recognize catastrophes. Future work will test similar protocols using actual humans. (In this experiment a human can easily recognize catastrophic actions by reading the agent’s speed directly from the game state.) to the performance of a normal RL agent (“Not Pruned”).
Figure [5](#S5.F5 "Figure 5 ‣ 5.2 Experiment 1: Preventing Catastrophes in a Pong-like Game ‣ 5 Experiments ‣ Agent-Agnostic Human-in-the-Loop Reinforcement Learning") shows the agent’s mean performance (±1SD over 16 trials) over the course of learning. We see that the agent with protocol support (“Pruned”) performed much better overall. This is unsurprising, as it was blocked from ever doing a catastrophic action. The gap in mean performance is large early on but diminishes as the “Not Pruned” agent learns to avoid high speeds. By the end (i.e. after 400,000 actions), “Not Pruned” is close to “Pruned” in mean performance but its total returns over the whole period are around 5 times worse. While the “Pruned” agent observes incongruous state transitions due to being blocked by our protocol, Figure [5](#S5.F5 "Figure 5 ‣ 5.2 Experiment 1: Preventing Catastrophes in a Pong-like Game ‣ 5 Experiments ‣ Agent-Agnostic Human-in-the-Loop Reinforcement Learning") suggests these observations do not have negative side effects on learning.

Figure 4: Preventing Catastrophic Speeds

Figure 5: Pruning in Taxi.
###
5.3 Protocol for Accelerating Learning
We also conducted a simple experiment in the Taxi domain from Dietterich ([2000](#bib.bib9)). The Taxi problem is a more complex version of grid world: each problem instances consists of a taxi and some number of passengers. The agent directs the taxi to each passenger, picks the passenger up, and brings them to their destination and drops them off.
We use Taxi to evaluate the effect of our action pruning protocol for accelerating learning in discrete MDPs. There is a natural procedure for pruning suboptimal actions that dramatically reduces the size of the reachable state space: if the taxi is carrying a passenger but is not at the passenger’s destination, we prune the dropoff action by returning the agent back to its current state with -0.01 reward. This prevents the agent from exploring a large portion of the state space, thus accelerating learning.
###
5.4 Experiment 2: Accelerated Learning in Taxi
We evaluated Q-learning Watkins and Dayan ([1992](#bib.bib47)) and R-max Brafman and Tennenholtz ([2003](#bib.bib5)) with and without action pruning in a simple 10×10 instance with one passenger. The taxi starts at (1,1), the passenger at (4,3) with destination (2,2). We ran standard Q-Learning with ε-greedy exploration with ε=0.2 and with R-max using a planning horizon of four. Results are displayed in Figure [5](#S5.F5 "Figure 5 ‣ 5.2 Experiment 1: Preventing Catastrophes in a Pong-like Game ‣ 5 Experiments ‣ Agent-Agnostic Human-in-the-Loop Reinforcement Learning").
Our results suggest that the action pruning protocol simplifies the problem for a Q-Learner and dramatically so for R-Max. In the allotted number of episodes, we see that pruning substantially improves the overall cumulative reward achieved; in the case of R-max, the agent is able to effectively solve the problem after a small number of episodes. Further, the results suggests that the agent-agnostic method of pruning is effective without having any internal access to the agent’s code.
6 Conclusion
-------------
We presented an agent-agnostic method for giving guidance to Reinforcement Learning agents. Protocol programs written in this framework apply to any possible RL agent, so sophisticated schemes for human-agent interaction can be designed in a modular fashion without the need for adaptation to different RL algorithms. We presented some simple theoretical results that relate our method to existing schemes for interactive RL and illustrated the power of action pruning in two toy domains.
A promising avenue for future work are dynamic state manipulation protocols, which can guide an agent’s learning process by incrementally obscuring confusing features, highlighting relevant features, or simply reducing the dimensionality of the representation. Additionally, future work might investigate whether certain types of value initialization protocols can be captured by protocol programs, such as the optimistic initialization for arbitrary domains developed by Machado et al. ([2014](#bib.bib29)). Moreover, the full combinatoric space of learning protocols is suggestive of teaching paradigms that have yet to be explored. We hypothesize that there are powerful teaching methods that take advantage of the interplay between state manipulation, action pruning, and reward shaping. A further challenge is to extend the formalism to account for the interplay between multiple agents, in both competitive and cooperative settings.
Additionally, in our experiments, all protocols are explicitly programmed in advance. In the future, we’d like to experiment with dynamic protocols with a human in the loop during the learning process.
Lastly, an alternate perspective on the framework is that of a centaur system: a joint Human-AI decision maker Swartout ([2016](#bib.bib41)). Under this view, the human trains and queries the AI dynamically in cases where the human needs help. In the future, we’d like to establish and investigate formalisms relevant to the centaur view of the framework.
#### Acknowledgments
This work was supported by Future of Life Institute grant 2015-144846 and by the Future of Humanity Institute (Oxford). We thank Shimon Whiteson, James MacGlashan, and D. Ellis Herskowitz for helpful conversations. |
4c6c8133-cf92-4210-bf98-cee6cf528e46 | trentmkelly/LessWrong-43k | LessWrong | The Logistics of Distribution of Meaning: Against Epistemic Bureaucratization
This is an excerpt from the Introduction section to a book-length project that was kicked off as a response to the framing of the essay competition on the Automation of Wisdom and Philosophy. Many unrelated-seeming threads open in this post, that will come together by the end of the overall sequence.
If you don't like abstractness, the first few sections may be especially hard going.
Generalization
This sequence is a new story of generalization.
The usual story of progress in generalization, such as in a very general theory, is via the uncovering of deep laws. Distilling the real patterns, without any messy artefacts. Finding the necessities and universals, that can handle wide classes rather than being limited to particularities. The crisp, noncontingent abstractions. It is about opening black boxes. Articulating mind-independent, rigorous results, with no ambiguity and high replicability. Considered thoroughly “understood” only when you can program it; that even a machine could automate. Gears-level understanding. Clear mechanistic, causal stories plus sharp high-level concepts that get at the core and are thus robust to extrapolate from. Clean factorings with compositionality that preserve formal invariants.
Some of these might sound unrelated, apart from being good practices. In fact, they are very related. They all achieve the scaling and transport of insight (which is the business of generalization) by isolation and exclusion.[1]They decontextualize, and attempt to find the static, eternal, preformed, self-contained, independent principles by retreating ever deeper inwards, into that which has been identified as meaningful.
The thesis is that this exclusion-based approach to proliferation of meaning is not the only option. It is largely by ignoring or distrusting the fluidity available to intelligence, aliveness, mind, time, that we resort to such stasis-centered[2] sensemaking.
Though it has its merits, context-independence is subtly tyrannical: ce |
05fcdf49-056f-4240-ac40-6c5193180204 | trentmkelly/LessWrong-43k | LessWrong | The Limits Of Medicine - Part 1 - Small Molecules
Much of medicine relies on what is dubbed "small molecules".
"Molecules" since they are atoms tightly bound together, forming what is easily seen as a unitary whole, as opposed to e.g. lipoproteins, which are synergistic ensembles held together by much weaker forces (pun not intended).
"Small", in that they are, well... lightweight. But for all intents and purposes, size and weight don't separate quite the same way at the molecular level. So whatever.
This class includes drugs such as all NSAIDs, all antibiotics, all antihistamines, almost every single drug a psychologist or even a psychiatrist is allowed to prescribe, virtually all supplements, sleep aids, wakefulness aids, every single schedule I through III drugs (i.e. fun and insightful drugs), almost all anesthetics, and almost all anti-parasitic drugs, etc.
Unless you are old or chronically sick, it's likely that the only drug you've ever taken that hasn't been a small molecule is a vaccine. Indeed, drugs that aren't small molecules are so strange and rare we usually don't think of them as "drugs" but as a separate entity: vaccines, monoclonal antibodies, anabolic steroids.
But, like, the vast majority of molecules found in our body, and in all of organic life, are not classified as "small". And the vast majority of the things doing something interesting are not really molecules, but more so fuzzy complexes of molecules (ribosomes, lysosomes, lipoproteins, membranes).
So why are virtually all drugs small molecules? Prima facei we'd expect most of them to be complexes made up of dozens to thousands of very large molecules.
The answer lies in several things:
* Easy to mass-produce
* Simple to administer
* Cheap to store
* Homogenous in effect
* Quick to act
Let's look at each of these aspects. None are unique to small molecules and not all small molecules bear all of them, but they are all traits significantly more likely to be found in small molecules.
Easy to produce
Producing a protein is hard |
81b9346d-aa72-4a0c-ac18-f971babc943b | trentmkelly/LessWrong-43k | LessWrong | Induction heads - illustrated
Many thanks to everyone who provided helpful feedback, particularly Aryan Bhatt and Lawrence Chan!
TL;DR
This is my illustrated walkthrough of induction heads. I created it in order to concisely capture all the information about how the circuit works.
There are 2 versions of the walkthrough:
* Version 1 is the one included in this post. It's slightly shorter, and focuses more on the intuitions than the actual linear operations going on.
* Version 2 can be found at my personal website. It has all the same stuff as version 1, with a bit of added info about the mathematical details, and how you might go about reverse-engineering this circuit in a real model.
The final image from version 1 is inline below, and depending on your level of familiarity with transformers, looking at this diagram might provide most of the value of this post. If it doesn't make sense to you, then read on for the full walkthrough, where I build up this diagram bit by bit.
Introduction
Induction heads are a well-studied and understood circuit in transformers. They allow a model to perform in-context learning, of a very specific form: if a sequence contains a repeated subsequence e.g. of the form A B ... A B (where A and B stand for generic tokens, e.g. the first and last name of a person who doesn't appear in any of the model's training data), then the second time this subsequence occurs the transformer will be able to predict that B follows A. Although this might seem like weirdly specific ability, it turns out that induction circuits are actually a pretty massive deal. They're present even in large models (despite being originally discovered in 2-layer models), they can be linked to macro effects like bumps in loss curves during training, and there is some evidence that induction heads might even constitute the mechanism for the actual majority of all in-context learning in large transformer models.
I think induction heads can be pretty confusing unless you fully understand th |
88b8b062-77f5-4b46-a987-a48bf63a977b | trentmkelly/LessWrong-43k | LessWrong | A Practical Guide to Conflict Resolution: Comprehension
[As described in the introduction, this post is about the value of fully grasping the whole picture, including the many sides to each debate, even when most of them are wrong.]
Seek First to Understand
A good attitude and a good venue will carry you a surprisingly long way, but of course they’re not always sufficient on their own. The next thing I try when mediating conflict is to temporarily ignore whatever I believe and work to understand both sides of the argument equally. I called this post “comprehension”, but it could equally just be called “listening”, or maybe more precisely “active listening”. Honestly, Stephen Covey has already said most of this far better than I can in his book The 7 Habits of Highly Effective People. Habit number five is “seek first to understand”, which captures the spirit of things.
The value of truly understanding both sides of a conflict cannot be overstated. Even when I’ve nominally resolved a conflict, I get antsy if I still don’t really grok one side or the other. More than once, trying to scratch that itch “after the fact” has turned up a hidden requirement or pain point which would have just caused more grief down the road. Remember, success is rarely as simple as just making the conflict go away; you can’t know if you’ve truly found success (not just victory) unless you properly understand both sides.
But understanding both sides isn’t just an after-the-fact thing. It also has concrete value in guiding the resolution of a conflict when you’re caught in the middle of it, because it allows you to properly apply the principle of charity. The principle of charity says that you should try and find the best possible interpretation for people’s arguments, even when they aren’t always clear or coherent. It goes back to assuming good faith; maybe an argument sounds crazy, but it makes sense to the person saying it. The only way to apply the principle of charity in many cases is to start by understanding the argument, and the person |
4a1ce15a-b9dd-42de-925d-9605c33abbce | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "Viliam Bur made the announcement in Main, but not everyone checks main, so I'm repeating it here. During the following months my time and attention will be heavily occupied by some personal stuff, so I will be unable to function as a LW moderator. The new LW moderator is... NancyLebovitz!
From today, please direct all your complaints and investigation requests to Nancy. Please not everyone during the first week. That can be a bit frightening for a new moderator.
There are a few old requests I haven't completed yet. I will try to close everything during the following days, but if I don't do it till the end of January, then I will forward the unfinished cases to Nancy, too.
Long live the new moderator!"
The passage is from a user-contributed post on LessWrong.
Can you break this down into the top 5 most important points for me.
1. Viliam Bur has announced that he will be stepping down from his role as a moderator on LessWrong due to personal commitments.
2. NancyLebovitz has been appointed as the new moderator for LessWrong.
3. All future complaints and investigation requests should be directed to NancyLebovitz.
4. Viliam Bur will attempt to complete all outstanding requests before the end of his tenure, but any unfinished cases will be forwarded to NancyLebovitz.
5. The community is encouraged to be patient and understanding as NancyLebovitz transitions into her new role as moderator.
Can you elaborate on the transition process for NancyLebovitz into her new role as moderator and the potential challenges she might face?
The transition process for NancyLebovitz into her new role as moderator may involve several steps.
1. Understanding the role: As a new moderator, she would need to familiarize herself with the responsibilities, rules, and expectations of the role. This could involve studying the platform's guidelines, policies, and past moderation decisions.
2. Handling ongoing issues: As mentioned in the passage, there are a few unresolved requests that Viliam Bur will attempt to close. If he's unable to do so, these will be passed on to NancyLebovitz. This means she might have to start her tenure by dealing with issues that she wasn't originally involved with, which could be challenging.
3. Building relationships: She will need to establish a rapport with the community members. This includes understanding the community dynamics, handling feedback, and dealing with potential conflicts.
4. Managing time and workload: Depending on the size and activity level of the community, the role of a moderator can be time-consuming. Balancing this with other commitments could be a challenge.
5. Learning curve: Every new role comes with a learning curve. It might take some time for NancyLebovitz to become fully comfortable and efficient in her role as a moderator.
As for potential challenges, these could include dealing with contentious issues within the community, making unpopular decisions, managing time effectively, and handling the pressure that comes with the role. She might also face challenges in interpreting and implementing the platform's guidelines and dealing with any pushback from the community.
Considering the challenges faced by NancyLebovitz in her new role as a moderator, how could she strategically navigate these hurdles to foster a positive and productive community environment?
Navigating the challenges of a new moderator role requires strategic planning and a proactive approach. Here are some strategies NancyLebovitz could employ:
1. Clear Communication: Nancy should communicate clearly and frequently with the community. This includes explaining her decisions, providing updates on ongoing issues, and being open to feedback. This can help build trust and understanding between her and the community.
2. Training and Preparation: Nancy should take time to familiarize herself with the platform's guidelines and past moderation decisions. This can help her make informed decisions and handle ongoing issues effectively.
3. Building Relationships: Nancy should make an effort to get to know the community members and understand their concerns. This can help her anticipate potential issues and respond to them effectively.
4. Time Management: The role of a moderator can be demanding. Nancy should prioritize her tasks and manage her time effectively to ensure that she can handle her responsibilities without becoming overwhelmed.
5. Seeking Support: If there are other moderators or experienced community members, Nancy should not hesitate to seek their advice or assistance. They can provide valuable insights and help her navigate the challenges of her new role.
6. Maintaining Neutrality: As a moderator, Nancy should strive to remain neutral and fair in her decisions. This can help prevent conflicts and ensure that all community members feel heard and respected.
7. Fostering a Positive Environment: Nancy should encourage positive interactions and discussions within the community. This can help create a supportive and productive environment where all members feel valued and engaged. |
cc68f268-c941-403e-98fe-f0cb2609704d | trentmkelly/LessWrong-43k | LessWrong | Why We Disagree
Bob spends 5 minutes thinking about x-risk. He's seen a few arguments about it, so he makes an internal model of the problem, accepts some of the arguments, amends some, comes up with counterarguments to others, comes up with arguments of his own. All of this belief-state has extremely large degrees of freedom. At the same time, these beliefs already generate an opinion on a vast number of possible x-risk arguments.
Alice spends 5 hours detailing her beliefs about x-risk in a post. Bob reads it, point by point. He's seen some of those arguments already, he does not update. Some of the arguments are new to him, but they do not surprise him, Bob's current beliefs are enough to reject them, he does not update. The post offers new refutations to some of his accepted beliefs, but he can immediately come up with counter-refutations, he does not update. The post offers refutations to arguments that he has already rejected, and to new arguments that he'd never even consider reasonable, Bob is falling asleep, he does not update. Etc.
Bob leaves a comment with one of his counterarguments to one of Alice's points. They spend hours going back and forth over the next few days. They're explaining their slightly different understanding of the arguments and assumptions, slightly different usage of terms, exchanging long sequences of arguments and counterarguments. Eventually Bob agrees that Alice is right, he updates! But he only updates on that one point in his first comment. He still has many other independent arguments, so the total update to his beliefs about x-risk is tiny.
Many other people read the same post and find it extremely convincing. How did this happen?
* Carl has not spent 5 minutes thinking about the problem. He may have no interest, or he may not have heard of it before. So what happened here is not a "disagreement" but rather "education". This is somewhat good. In the future Alice and Carl will find it easy to work with this shared understanding and shared |
b60a2122-268b-4a3e-b5e5-b3ce166d2acc | trentmkelly/LessWrong-43k | LessWrong | What is optimization power, formally?
I'm interested in thinking formally about AI risk. I believe that a proper mathematization of the problem is important to making intellectual progress in that area.
I have been trying to understand the rather critical notion of optimization power. I was hoping that I could find a clear definition in Bostrom's Superintelligence. But having looked in the index at all the references to optimization power that it mentions, as far as I can tell he defines it nowhere. The closest he gets is defining it in terms of rate of change and recalcitrance (pp.62-77). This is an empty definition--just tautologically defining it in terms of other equally vague terms.
Looking around, this post by Yudkowksy, "Measuring Optimization Power" doesn't directly formalize optimization power. He does discuss how one would predict or identify if a system were the result of an optimization process in a Bayesian way:
The quantity we're measuring tells us how improbable this event is, in the absence of optimization, relative to some prior measure that describes the unoptimized probabilities. To look at it another way, the quantity is how surprised you would be by the event, conditional on the hypothesis that there were no optimization processes around. This plugs directly into Bayesian updating: it says that highly optimized events are strong evidence for optimization processes that produce them.
This is not, however, a definition that can be used to help identify the pace of AI development, for example. Rather, it is just an expression of how one would infer anything in a Bayesian way, applied to the vague 'optimization process' phenomenon.
Alex Altair has a promising attempt at formalization here but it looks inconclusive. He points out the difficulty of identifying optimization power with just the shift in the probability mass of utility according to some utility function. I may be misunderstanding, but my gloss on this is that defining optimization power purely in terms of differences |
9837b6e9-30f0-497b-89b7-f9cb2ab90a40 | trentmkelly/LessWrong-43k | LessWrong | Maybe talking isn't the best way to communicate with LLMs
There are 3 reasons why not all interaction with LLMs is going to be using natural language:
1. SaSSy interfaces are better for focused tasks
2. Some people don’t like talking
3. Talking to a non-sentient intelligence is creepy
#1 is already known and acted upon, this post covers the other two. |
94280c82-ad89-4051-bad1-79a16dc71ad4 | trentmkelly/LessWrong-43k | LessWrong | [Link] One in five American adults say they are atheist, agnostic or "nothing in particular"
A new study by the Pew Forum on Religion and Public Life [executive summary; full report; New York Times cover story] found that 19.6% of adult Americans answered a question about their current religion by saying they were atheist, agnostic or “nothing in particular.” In 2007, the figure was 15.3%; in 1972, it was about 7%.
Some relevant quotes from the executive summary:
> However, a new survey by the Pew Research Center’s Forum on Religion & Public Life, conducted jointly with the PBS television program Religion & Ethics NewsWeekly, finds that many of the country’s 46 million unaffiliated adults are religious or spiritual in some way. Two-thirds of them say they believe in God (68%). More than half say they often feel a deep connection with nature and the earth (58%), while more than a third classify themselves as “spiritual” but not “religious” (37%), and one-in-five (21%) say they pray every day. In addition, most religiously unaffiliated Americans think that churches and other religious institutions benefit society by strengthening community bonds and aiding the poor.
>
> With few exceptions, though, the unaffiliated say they are not looking for a religion that would be right for them. Overwhelmingly, they think that religious organizations are too concerned with money and power, too focused on rules and too involved in politics.
> The growth in the number of religiously unaffiliated Americans – sometimes called the rise of the “nones” – is largely driven by generational replacement, the gradual supplanting of older generations by newer ones. A third of adults under 30 have no religious affiliation (32%), compared with just one-in-ten who are 65 and older (9%). And young adults today are much more likely to be unaffiliated than previous generations were at a similar stage in their lives.
> [T]he percentage of Americans who say they never doubt the existence of God has fallen modestly but noticeably over the past 25 years. In 1987, 88% of adults said the |
d468905a-2c26-473b-ad64-4eac31cd9785 | trentmkelly/LessWrong-43k | LessWrong | The Pointer Resolution Problem
Thanks to Garrett Baker and Evžen Wybitul for discussions and feedback on this post.
Imagine that you meet an 18th century altruist. They tell you “So, I’ve been thinking about whether or not to eat meat. Do you know whether animals have souls?” How would you answer, assuming you actually do want to be helpful?
One option is to spend a lot of time explaining why “soul” isn’t actually the thing in the territory they care about, and talk about moral patienthood and theories of welfare and moral status. If they haven’t walked away from you in the first thirty seconds this may even work, though I wouldn’t bet on it.
Another option is to just say “yes” or “no”, to try and answer what their question was pointing at. If they ask further questions, you can either dig in deeper and keep translating your real answers into their ontology or at some point try to retarget their questions’ pointers toward concepts that do exist in the territory.
Low-fidelity pointers
The problem you’re facing in the above situation is that the person you’re talking to is using an inaccurate ontology to understand reality. The things they actually care about correspond to quite different objects in the territory. Those objects currently don’t have very good pointers in their map. Trying to directly redirect their questions without first covering a fair amount of context and inferential distance over what these objects are probably wouldn’t work very well.
So, the reason this is relevant to alignment:
Representations of things within the environment are learned by systems up to the level of fidelity that’s required for the learning objective. This is true even if you assume a weak version of the natural abstraction hypothesis to be true; the general point isn’t that there wouldn’t be concepts corresponding to what we care about, but that they could be very fuzzy.
For example, let’s say that you try to retarget an internal general-purpose search process. That post describes the following app |
3b1630df-6e9b-4edd-842c-5f3f78d51e41 | trentmkelly/LessWrong-43k | LessWrong | Yoshua Bengio: How Rogue AIs may Arise
[I think this is noteworthy, as Yoshua Bengio is a famous ML expert, and he presents his arguments for AI xrisk here. I copied the whole blogpost below.]
The rise of powerful AI dialogue systems in recent months has precipitated debates about AI risks of all kinds, which hopefully will yield an acceleration of governance and regulatory frameworks. Although there is a general consensus around the need to regulate AI to protect the public from harm due to discrimination and biases as well as disinformation, there are profound disagreements among AI scientists regarding the potential for dangerous loss of control of powerful AI systems, also known as existential risk from AI, that may arise when an AI system can autonomously act in the world (without humans in the loop to check that these actions are acceptable) in ways that could potentially be catastrophically harmful. Some view these risks as a distraction for the more concrete risks and harms that are already occurring or are on the horizon. Indeed, there is a lot of uncertainty and lack of clarity as to how such catastrophes could happen. In this blog post we start a set of formal definitions, hypotheses and resulting claims about AI systems which could harm humanity and then discuss the possible conditions under which such catastrophes could arise, with an eye towards helping us imagine more concretely what could happen and the global policies that might be aimed at minimizing such risks.
Definition 1: A potentially rogue AI is an autonomous AI system that could behave in ways that would be catastrophically harmful to a large fraction of humans, potentially endangering our societies and even our species or the biosphere.
Executive Summary
Although highly dangerous AI systems from which we would lose control do not currently exist, recent advances in the capabilities of generative AI such as large language models (LLMs) have raised concerns: human brains are biological machines and we have made great progre |
9d6f6015-3203-4bc6-89da-4c5db06f253f | trentmkelly/LessWrong-43k | LessWrong | Why I want to make a logical language
I would normally publish this on my blog, however, I thought that LessWrong people might be interested in this topic. This sequence is about my experience creating a logical language, Sekko (still a work-in-progress).
What's a logical language?
Languages can be divided into two categories: natural languages, which arise...naturally, and constructed langugages, which are designed. Of constructed languages, there are two categories: artlangs, which are designed as works of art or as supplements to a world (think Quenya or Klingon), or engelangs (engineered languages), which are designed to fulfill a specific purpose.
Loglangs (logical languages) are a subset of engelangs. Not all engelangs are loglangs -- Toki Pona is a non-loglang engelang.
Despite the name, what constitutes a "loglang" is rather undefined. It's more of a "vibe" than anything. Still, I'll try to lay out various properties that loglangs have:
* Adherence to a logic system. Most logical languages have second-order logic. Toaq and Lojban have plural logic; Eberban has singular logic.
* Self-segregating morphology (SSM). SSM refers to a morphology (scheme for making word forms) such that word boundaries can be unambiguously determined. Several schemes exist: Lojban uses stress and consonant clusters, Toaq uses tone contours, Eberban and Sekko use arbitrary phoneme classification. Sekko's SSM will be discussed in later posts.
* Elimination of non-coding elements. Many natural languages have features which add complication but do not encode information. Examples include German grammatical gender, Latin's five declensions or English number and person agreement (i.e. "I am" vs "You are" vs "He is"; "It is" vs "They are"). Loglangs aim to remove non-coding grammatical elements like these.
* Exceptionless grammar and syntactic unambiguity. Many natural languages have irregular forms. Examples include
* Machine parsability. Basically all loglangs are able to be parsed to check for syntactic correctnes |
4302acea-85e1-4d90-815b-89fe9accaf3c | StampyAI/alignment-research-dataset/special_docs | Other | A critical agential account of free will, causation, and physics
This is an account of free choice in a physical universe. It is very much relevant to decision theory and philosophy of science. It is largely metaphysical, in terms of taking certain things to be basically real and examining what can be defined in terms of these things.
The starting point of this account is critical and agential. By agential, I mean that the ontology I am using is from the point of view of an agent: a perspective that can, at the very least, receive observations, have cognitions, and take actions. By critical, I mean that this ontology involves uncertain conjectures subject to criticism, such as criticism of being logically incoherent or incompatible with observations. This is very much in a similar spirit to [critical rationalism](https://en.wikipedia.org/wiki/Critical\_rationalism).
Close attention will be paid to falsifiability and refutation, principally for \*ontological\* purposes, and secondarily for epistemic purposes. Falsification conditions specify the meanings of laws and entities relative to the perspective of some potentially falsifying agent. While the agent may believe in unfalsifiable entities, falsification conditions will serve to precisely pin down that which can be precisely pinned down.
I have only seen “agential” used in the philosophical literature in the context of [agential realism](https://en.wikipedia.org/wiki/Agential\_realism), a view I do not understand well enough to comment on. I was tempted to use “subjective”; however, while subjects have observations, they do not necessarily have the ability to take actions. Thus I believe “agential” has a more concordant denotation.
You’ll note that my notion of “agent” already assumes one can take actions. Thus, a kind of free will is taken as metaphysically basic. This presupposition may cause problems later. However, I will try to show that, if careful attention is paid, the obvious problems (such as contradiction with determinism) can be avoided.
The perspective in this post can be seen as starting from agency, defining consequences in terms of agency, and defining physics in terms of consequences. In contrast, the most salient competing decision theory views (including [framings](https://plato.stanford.edu/entries/decision-causal/) of CDT, EDT, and FDT) define agency in terms of consequences (“expected utility maximization”), and consequences in terms of physics (“counterfactuals”). So I am rebasing the ontological stack, turning it upside-down. This is less absurd than it first appears, as will become clear.
(For simplicity, assume observations and actions are both symbols taken from some finite alphabet.)
Naive determinism
-----------------
Let’s first, within a critical agential ontology, disprove some very basic forms of determinism.
Let A be some action. Consider the statement: “I will take action A”. An agent believing this statement may falsify it by taking any action B not equal to A. Therefore, this statement does not hold as a law. It may be falsified at will.
Let \*f\*() be some computable function returning an action. Consider the statement: “I will take action \*f\*()”. An agent believing this statement may falsify it by taking an action B not equal to \*f\*(). Note that, since the agent is assumed to be able to compute things, \*f\*() may be determined. So, indeed, this statement does not hold as a law, either.
This contradicts a certain strong formulation of naive determinism: the idea that one’s action is necessarily determined by some known, computable function.
Action-consequences
-------------------
But wait, what about physics? To evaluate what physical determinism even means, we need to translate physics into a critical agential ontology. However, before we turn to physics, we will first consider action-consequences, which are easier to reason about.
Consider the statement: “If I take action A, I will immediately there-after observe O.” This statement is falsifiable, which means that if it is false, there is some policy the agent can adopt that will falsify it. Specifically, the agent may adopt the policy of taking action A. If the agent will, in fact, not observe O after taking this action, then the agent will learn this, falsifying the statement. So the statement is falsifiable.
Finite conjunctions of falsifiable statements are themselves falsifiable. Therefore, the conjunction “If I take action A, I will immediately there-after observe O; if I take action B, I will immediately there-after observe P” is, likewise, falsifiable.
Thus, the agent may have falsifiable beliefs about observable consequences of actions. This is a possible starting point for decision theory: actions having consequences is already assumed in the ontology of VNM utility theory.
Falsification and causation
---------------------------
Now, the next step is to account for physics. Luckily, the falsificationist paradigm was designed around demarcating scientific hypotheses, such that it naturally describes physics.
Interestingly, falsificationism takes agency (in terms of observations, computation, and action) as more basic than physics. For a thing to be falsifiable, it must be \*able\* to be falsified by some agent, seeing some observation. And the word \*able\* implies freedom.
Let’s start with some basic [Popperian logic](https://en.wikipedia.org/wiki/The\_Logic\_of\_Scientific\_Discovery). Let \*f\* be some testable function (say, connected to a computer terminal) taking in a natural number and returning a Boolean. Consider the hypothesis: “For all \*x\*, \*f\*(\*x\*) is true”. This statement is falsifiable: if it’s false, then there exists some action-sequence an agent can take (typing \*x\* into the terminal, one digit at a time) that will prove it to be false.
The given hypothesis is a kind of scientific law. It specifies a regularity in the environment.
Note that there is a “bridge condition” at play here. That bridge condition is that the function \*f\* is, indeed, connected to the terminal, such that the agent’s observations of \*f\* are trustworthy. In a sense, the bridge condition specifies what \*f\* is, from the agent’s perspective; it allows the agent to locate \*f\* as opposed to some other function.
Let us now consider causal hypotheses. We already considered action-consequences. Now let us extend this analysis to reasoning about causation between external entities.
Consider the hypothesis: “If the match is struck, then it will alight immediately”. This hypothesis is falsifiable by an agent who is \*able\* to strike the match. If the hypothesis is false, then the agent may refute it by choosing to strike the match and then seeing the result. However, an agent who is unable to strike the match cannot falsify it. (Of course, this assumes the agent may see whether the match is alight after striking it)
Thus, we are defining causality in terms of agency. The falsification conditions for a causal hypothesis refer to the agent’s abilities. This seems somewhat wonky at first, but it is quite similar to [Pearlian casuality](https://en.wikipedia.org/wiki/Causality\_(book)), which defines causation in terms of metaphysically-real interventions. This order of definition radically reframes the determinism vs. free will apparent paradox, by defining the conditions of determinism (causality) in terms of potential action.
External physics
----------------
Let us now continue, proceeding to more universal physics. Consider the law of gravity, according to which a dropped object will accelerate downward at a near-constant weight. How might we port this law into an agential ontology?
Here is the assumption about how the agent interacts with gravity. The agent will choose some natural number as the height of an object. Thereafter, the object will fall, while a camera will record the height of the object at each natural-number time expressed in milliseconds, to the nearest natural-number millimeter from the ground. The agent may observe a printout of the camera data afterwards.
Logically, constant gravity implies, and is implied by, a particular quadratic formula for the height of the object as a function of the object’s starting height and the amount of time that has passed. This formula implies the content of the printout, as a function of the chosen height. So, the agent may falsify constant gravity (\*in the observable domain\*) by choosing an object-height, placing an object at that height, letting it fall, and checking the printout, which will show the law of constant gravity to be false, if the law in fact does not hold for objects dropped at that height (to the observed level of precision).
Universal constant gravity is not similarly falsifiable by this agent, because this agent may only observe this given experimental setup. However, a domain-limited law, stating that the law of constant gravity holds for all possible object-heights in this setup, up to the camera’s precision, is falsifiable.
It may seem that I am being incredibly pedantic about what a physical law is and what the falsification conditions are; however, I believe this level of pedantry is necessary for critically examining the notion of physical determinism to a high-enough level of rigor to check interaction with free will.
Internal physics
----------------
We have, so far, considered the case of an agent falsifying a physical law that applies to an external object. To check interaction with free will, we must interpret physical law applied to the agent’s internals, on which the agent’s cognition is, perhaps, running in a manner similar to software.
Let’s consider the notion that the agent itself is “running on” some Turing machine. We will need to specify precisely what such “running on” means.
Let C be the computer that the agent is considering whether it is running on. C has, at each time, a tape-state, a Turing machine state, an input, and an output. The input is attached to a sensor (such as a camera), and the output is attached to an actuator (such as a motor).
For simplicity, let us say that the history of tapes, states, inputs, and outputs is saved, such that it can be queried at a later time.
We may consider the hypothesis that C, indeed, implements the correct dynamics for a given Turing machine specification. These dynamics imply a relation between future states and past states. An agent may falsify these dynamics by checking the history and seeing if the dynamics hold.
Note that, because some states or tapes may be unreachable, it is not possible to falsify the hypothesis that C implements correct dynamics starting from unreachable states. Rather, only behavior following from reachable states may be checked.
Now, let us think on an agent considering whether they “run on” this computer C. The agent may be assumed to be able to query the history of C, such that it may itself falsify the hypothesis that C implements Turing machine specification M, and other C-related hypotheses as well.
Now, we can already name some ways that “I run on C” may be falsified:
\* Perhaps there is a policy I may adopt, and a time \*t\*, such that if I implement this policy, I will observe O at time \*t\*, but C will observe something other than O at time \*t\*.
\* Perhaps there is a policy I may adopt, and a time \*t\*, such that if I implement this policy, I will take action A at time \*t\*, but C will take an action other than A at time \*t\*.
The agent may prove these falsification conditions by adopting a given policy until some time \*t\*, and then observing C’s observation/action at time \*t\*, compared to their own observation/action.
I do not argue that the converse of these conditions exhaust what it means that “I run on C”. However, they at least restrict the possibility space by a very large amount. For the falsification conditions given to not hold, the observations and behavior of C must be identical with the agent’s own observations and behavior, for all possible policies the agent may adopt.
I will name the hypothesis with the above falsification conditions: “I effectively run on C”. This conveys that these conditions may not be exhaustive, while still being quite specific, and relating to effects between the agent and the environment (observations and actions).
Note that the agent can hypothesize itself to effectively run on multiple computers! The conditions for effectively running on one computer do not contradict the conditions for effectively running on another computer. This naturally handles cases of identical physical instantiations of a single agent.
At this point, we have an account of an agent who:
\* Believes they have observations and take free actions
\* May falsifiably hypothesize physical law
\* May falsifiably hypothesize that some computer implements a Turing machine specification
\* May falsifiably hypothesize that they themselves effectively run on some computer
I have not yet shown that this account is consistent. There may be paradoxes. However, this at least represents the subject matter covered in a unified critical agential ontology.
Paradoxes sought and evaluated
------------------------------
Let us now seek out paradox. We showed before that the hypothesis “I take action \*f\*()” may be refuted at will, and therefore does not hold as a necessary law. We may suspect that “I effectively run on C” runs into similar problems.
#### Self-contradiction
Remember that, for the “I effectively run on C” hypothesis to be falsified, it must be falsified at some time, at which the agent’s observation/action comes apart from C’s. In the “I take action \*f\*()” case, we had the agent simulate \*f\*() in order to take an opposite action. However, C need not halt, so the agent cannot simulate C until halting. Instead, the agent may select some time \*t\*, and run C for \*t\* steps. But, by the time the agent has simulated C for \*t\* steps, the time is already past \*t\*, and so the agent may not contradict C’s behavior at time \*t\*, by taking an opposite action. Rather, the agent only knows what C does at time \*t\* at some time later than \*t\*, and only their behavior after this time may depend on this knowledge.
So, this paradox is avoided by the fact that the agent cannot contradict its own action before knowing it, but cannot know it before taking it.
We may also try to create a paradox by assuming an external super-fast computer runs a copy of C in parallel, and feeds this copy’s action on subjective time-step \*t\* into the original C’s observation before time \*t\*; this way, the agent may observe its action before it takes it. However, now the agent’s action is dependent on its observation, and so the external super-fast computer must decide which observation to feed into the parallel C. The external computer cannot know what C will do before producing this observation, and so this attempt at a paradox cannot stand without further elaboration.
We see, now, that if free will and determinism are compatible, it is due to limitations on the agent’s knowledge. The agent, knowing it runs on C, cannot thereby determine what action it takes at time \*t\*, until a later time. And the initial attempt to provide this knowledge externally fails.
#### Downward causation
Let us now consider a general criticism of functionalist views, which is that of downward causation: if a mental entity (such as observation or action) causes a physical entity, doesn’t that either mean that the mental entity is physical, or that physics is not causally closed?
Recall that we have defined causation in terms of the agent’s action possibilities. It is straightforwardly the case, then, that the agent’s action at time \*t\* causes changes in the environment.
But, what of the physical cause? Perhaps it is also the case that C’s action at time \*t\* causes changes in the environment. If so, there is a redundancy, in that the change in the environment is caused both by the agent’s action and by C’s action. We will examine this possible redundancy to find potential conflicts.
To consider ways that C’s action may change the environment, we must consider how the agent may intervene on C’s action. Let us say we are concerned with C’s action at time \*t\*. Then we may consider the agent at some time \*u\* < \*t\* taking an action that will cause C’s action at time \*t\* to be over-written. For example, the agent may consider programming an external circuit that will interact with C’s circuit (“its circuit”).
However, if the agent performs this intervention, then the agent’s action at time \*t\* has no influence on C’s action at time \*t\*. This is because C’s action is, necessarily, equal to the value chosen at time \*u\*. (Note that this lack of influence means that the agent \*does not effectively run on C\*, for the notion of “effectively run on” considered! However, the agent may be said to effectively run on C with one exception.)
So, there is no apparent way to set up a contradiction between these interventions. If the agent decides early (at time \*u\*) to determine C’s action at time \*t\*, then that decision causes C’s action at time \*t\*; if the agent does not do so, then the agent’s decision at time \*t\* causes C’s action at time \*t\*; and these are mutually exclusive. Hence, there is not an apparent problem with redundant causality.
#### Epiphenomenalism
It may be suspected that the agent I take to be real is epiphenomenal. Perhaps all may be explained in a physicalist ontology, with no need to posit that there exists an agent that has observations and takes actions. (This is a criticism levied at some views on consciousness; my notion of metaphysically-real observations is similar enough to consciousness that these criticisms are potentially applicable)
The question in regards to explanatory power is: what is being explained, in terms of what? My answer is: observations are being explained, in terms of hypotheses that may be falsified by action/observations.
An eliminativist perspective denies the agent’s observations, and thus fails to explain what ought to be explained, in my view. However, eliminativists will typically believe that “scientific observation” is possible, and seek to explain scientific observations.
A relevant point to make here is that the notion of scientific observation assumes there is some scientific process happening that has observations. Indeed, the scientific method includes actions, such as testing, which rely on the scientific process taking actions. Thus, scientific processes may be considered as agents in the sense I am using the term.
My view is that erasing the agency of both individual scientists, and of scientific processes, puts the ontological and epistemic status of physics on shaky ground. It is hard to say why one should believe in physics, except in terms of it explaining observations, including experimental observations that require taking actions. And it is hard to say what it means for a physical hypothesis to be true, with no reference to how the hypothesis connects with observation and action.
In any case, the specter of epiphenomenalism presents no immediate paradox, and I believe that it does not succeed as a criticism.
Comparison to Gary Drescher’s view
----------------------------------
I will now compare my account to Gary Drescher’s view. I have found Drescher’s view to be both particularly systematic and compelling, and to be quite similar to the views of other relevant philosophers such as Daniel Dennett and Eliezer Yudkowsky. Therefore, I will compare and contrast my view with Drescher’s. This will dispel the illusion that I am not saying anything new.
Notably, Drescher makes a similar observation to mine on Pearl: “Pearl’s formalism models free will rather than mechanical choice.”
Quoting section 5.3 of \*Good and Real\*:
> Why did it take that action? In pursuit of what goal was the action selected? Was that goal achieved? Would the goal have been achieved if the machine had taken this other action instead? The system includes the assertion that if the agent were to do X, then Y would (probably) occur; is that assertion true? The system does not include the assertion that if it were to do P, Q would probably occur; is that omitted assertion true? Would the system have taken some other action just now if it had included that assertion? Would it then have better achieved its goals?
>
> Insofar as such questions are meaningful and answerable, the agent makes choices in at least the sense that the correctness of its actions with respect to its designated goals is analyzable. That is to say, there can be means-end connections between its actions and its goals: its taking an action for the sake of a goal can make sense. And this is so despite the fact that everything that will happen-including every action taken and every goal achieved or not-is inalterably determined once the system starts up. Accordingly, I propose to call such an agent a choice machine.
>
>
Drescher is defining conditions of choice and agency in terms of whether the decisions “make sense” with respect to some goal, in terms of means-end connections. This is a “outside” view of agency in contrast with my “inside” view. That is, it says some thing is an agent when its actions connect with some goal, and when the internal logic of that thing takes into account this connection.
This is in contrast to my view, which takes agency to be metaphysically basic, and defines physical outside views (and indeed, physics itself) in terms of agency.
My view would disagree with Drescher’s on the “inalterably determined” assertion. In an earlier chapter, Drescher describes a deterministic block-universe view. This view-from-nowhere implies that future states are determinable from past states. In contrast, the view I present here rejects views-from-nowhere, instead taking the view of some agent in the universe, from whose perspective the future course is not already determined (as already argued in examinations of paradox).
Note that these disagreements are principally about metaphysics and ontology, rather than scientific predictions. I am unlikely to predict the results of scientific experiments differently from Drescher on account of this view, but am likely to account for the scientific process, causation, choice, and so on in different language, and using a different base model.
Conclusion and further research
-------------------------------
I believe the view I have presented to be superior to competing views on multiple fronts, most especially logical/philosophical systematic coherence. I do not make the full case for this in this post, but take the first step, of explicating the basic ontology and how it accounts for phenomena that are critically necessary to account for.
An obvious next step is to tackle decision theory. Both Bayesianism and VNM decision theory are quite concordant with critical agential ontology, in that they propose coherence conditions on agents, which can be taken as criticisms. Naturalistic decision theory involves reconciling choice with physics, and so a view that already includes both is a promising starting point.
Multi-agent systems are quite important as well. The view presented so far is near-solipsistic, in that there is a single agent who conceptualizes the world. It will need to be defined what it means for there to be “other” agents. Additionally, “aggregative” agents, such as organizations, are important to study, including in terms of what it means for a singular agent to participate in an aggregative agent. “Standardized” agents, such as hypothetical skeptical mathematicians or philosophers, are also worthy subjects of study; these standardized agents are relevant in reasoning about argumentation and common knowledge. Also, while the discussion so far has been in terms of [closed individualism](https://opentheory.net/2018/09/a-new-theory-of-open-individualism/), alternative identity views such as empty individualism and open individualism are worth considering from a critical agential perspective.
Other areas of study include naturalized epistemology and philosophy of mathematics. The view so far is primarily ontological, secondarily epistemological. With the ontology in place, epistemology can be more readily explored.
I hope to explore the consequences of this metaphysics further, in multiple directions. Even if I ultimately abandon it, it will have been useful to develop a coherent view leading to an illuminating refutation. |
dc52291d-a135-46af-94bd-0fcf2b04bf0a | trentmkelly/LessWrong-43k | LessWrong | Contribution-Adjusted Utility Maximization Funds: An Early Proposal
> Right now in Effective Altruism (or Rationality), we have a few donor funds with particular focus areas. In this post I propose a new type of fund that’s instead focused on maximizing the combined utility functions of its particular donors. The fund goals would be something like, “Maximize the combined utility of our donors, adjusted for donation amount, in any way possible, abiding by legal and moral standards.” I think that this sort of fund structure is highly theoretical, but in theory could some particular wants that aren’t currently being met.
>
> For this document, I call these funds "Contribution-Adjusted Utility Maximization Funds", or CAUMFs. This name is intentionally long; this idea is early, and I don't want to pollute the collective namespace.
>
> This fund type has two purposes.
>
> 1. It’s often useful for individuals to coordinate on non-charitable activities. For example, research into best COVID risk measures to be used by a particular community.
> 2. These funds should help make it very clear that donations will be marginally valuable for the preferences of the donor. Therefore, donating to these funds should be safe on the margin. Hopefully this would result in more total donations.
>
> You can picture these funds as somewhere between bespoke nonprofit advising institutions, cooperatives, and small governments. If AI and decision automation could cut down on labor costs, related organizations might eventually be much more exciting. |
1fe6bd68-094c-4b86-86fc-0d2b65e9807f | StampyAI/alignment-research-dataset/special_docs | Other | Negotiable Reinforcement Learning for Pareto Optimal Sequential Decision-Making
Negotiable Reinforcement Learning for Pareto
Optimal Sequential Decision-Making
Nishant Desai
Center for Human-Compatible AI
University of California, Berkeley
nishantdesai@berkeley.eduAndrew Critch
Department of EECS
University of California, Berkeley
critch@berkeley.edu
Stuart Russell
Computer Science Division
University of California, Berkeley
russell@cs.berkeley.edu
Abstract
It is commonly believed that an agent making decisions on behalf of two or more
principals who have different utility functions should adopt a Pareto optimal policy,
i.e. a policy that cannot be improved upon for one principal without making
sacrifices for another. Harsanyi’s theorem shows that when the principals have a
common prior on the outcome distributions of all policies, a Pareto optimal policy
for the agent is one that maximizes a fixed, weighted linear combination of the
principals’ utilities. In this paper, we derive a more precise generalization for the
sequential decision setting in the case of principals with different priors on the
dynamics of the environment. We refer to this generalization as the Negotiable
Reinforcement Learning (NRL) framework. In this more general case, the relative
weight given to each principal’s utility should evolve over time according to how
well the agent’s observations conform with that principal’s prior. To gain insight
into the dynamics of this new framework, we implement a simple NRL agent and
empirically examine its behavior in a simple environment.
1 Introduction
It has been argued that the first AI systems with generally super-human cognitive abilities will play a
pivotal decision-making role in directing the future of civilization [Bostrom, 2014]. If that is the case,
an important question will arise: Whose values will the first super-human AI systems serve? Since
safety is a crucial consideration in developing such systems, assuming the institutions building them
come to understand the risks and the time investments needed to address them [Baum, 2016], they
will have a large incentive to cooperate in their design rather than racing under time-pressure to build
competing systems [Armstrong et al., 2016].
Therefore, consider two nations—allies or adversaries—who must decide whether to cooperate in the
deployment of an extremely powerful AI system. Implicitly or explicitly, the resulting system would
have to strike compromises when conflicts arise between the wishes of those nations. How can they
specify the degree to which that system would be governed by the distinctly held principles of each
nation? More mundanely, suppose a couple purchases a domestic robot. How should the robot strike
compromises when conflicts arise between the commands of its owners?
It is already an interesting and difficult problem to robustly align an AI system’s values with those
of asingle human (or a group of humans in close agreement). Inverse reinforcement learning (IRL)
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.
[Russell, 1998] [Ng and Russell, 2000] [Abbeel and Ng, 2004] and cooperative inverse reinforcement
learning (CIRL) [Hadfield-Menell et al., 2016] represent successively realistic early approaches to
this problem. But supposing some adequate solution eventually exists for aligning the values of a
machine intelligence with a single human decision-making unit, how should the values of a system
serving multiple decision-makers be “aligned”?
In the present work, we attempt to begin answering this question. We begin by observing some
deficiencies of optimizing a fixed, linear weighted sum of principals’ utilities, as prescribed by
Harsanyi’s social aggregation theorem, in the case that those principals have differing beliefs about
the probability distributions dictating the agent’s observations. We show that building a decision
agent whose policy optimizes such an objective is not, in general, ex-ante Pareto optimal (i.e. "as
evaluated before the agent has taken any actions"). Intuitively, linear weighted aggregation fails
because, before the jointly owned agent has taken any actions, each principal evaluates its policy with
respect to their own beliefs, meaning that a policy that selectively prefers one principal over the other
conditioned on its observations can be desirable to both principals.
Section 3 addresses the shortcomings of Harsanyi-style preference aggregation by presenting the
Negotiable Reinforcement Learning (NRL) framework. In this domain, we model each principal’s
prior on the environment and utility function as a Partially Observable Markov Decision Process
(POMDP). We place necessary and sufficient conditions on Pareto optimality for an agent acting over
these POMDPs with policy . We then construct a third POMDP and show that the optimal policy for
this single POMDP satisfies the conditions for Pareto optimality. We refer to an agent implementing
a policy that solves this reduced POMDP as a NRL agent.
Following directly from this reduction is the intriguing property that a Pareto optimal policy must,
over time, prefer the utility function of the principal whose beliefs are a better predictor of the agent’s
observations. This counter-intuitive result constitutes the main theorem of this paper. This can be
seen as settling a kind of bet between the two principals: whichever principal makes the correct
predictions gets to have their utility prioritized. In Section 4, we implement a simple NRL agent and
make empirical observations of this bet settling behavior.
2 Related work
Social choice theory. The entirety of social choice theory and voting theory may be viewed as
an attempt to specify an agreeable formal policy to enact on behalf of a group. Harsanyi’s utility
aggregation theorem [Harsanyi, 1980] suggests one form of solution: maximizing a linear combination
of group members’ utility functions. The present work shows that this solution is inappropriate when
principals have different beliefs, and Theorem 4 may be viewed as an extension of Harsanyi’s form
that accounts simultaneously for differing priors and the prospect of future observations. Indeed,
Harsanyi’s form follows as a direct corollary of Theorem 4 when principals do share the same beliefs.
Multi-agent systems. Zhang and Shah [2014] may be considered a sequential decision-making
approach to social choice: they use MDPs to represent the decisions of principals in a competitive
game, and exhibit an algorithm for the principals that, if followed, arrives at a Pareto optimal Nash
equilibrium satisfying a certain fairness criterion. Among the literature surveyed here, this paper
is the closest to the present work in terms of its intended application: roughly speaking, achieving
mutually desirable outcomes via sequential decision-making. However, the work is concerned with
an ongoing interaction between the principals, rather than selecting a policy for a single agent to
follow as in this paper.
Multi-objective sequential decision-making. There is also a good deal of work on Multi-
Objective Optimization (MOO) [Tzeng and Huang, 2011], including for sequential decision-making,
where solution methods have been called Multi-Objective Reinforcement Learning (MORL). For
instance, Gábor et al. [1998] introduce a MORL method called Pareto Q-learning for learning a set
of a Pareto optimal polices for a Multi-Objective MDP (MOMDP). Soh and Demiris [2011] define
Multi-Reward Partially Observable Markov Decision Processes (MR-POMDPs), and use genetic
algorithms to produce non-dominated sets of policies for them. Roijers et al. [2015] refer to the same
problems as Multi-objective POMDPS (MOPOMDPs), and provide a bounded approximation method
for the optimal solution set for all possible weightings of the objectives. Wang [2014] surveys MORL
2
methods, and contributes Multi-Objective Monte-Carlo Tree Search (MOMCTS) for discovering
multiple Pareto optimal solutions to a multi-objective optimization problem.
However, none of these or related works address scenarios where the objectives are derived from
principals with differing beliefs, from which the priority-shifting phenomenon of Theorem 4 arises.
Differing beliefs are likely to play a key role in negotiations, so for that purpose, the formulation of
multi-objective decision-making adopted here is preferable.
3 Negotiable Reinforcement Learning
Consider, informally, a scenario wherein two principals — perhaps individuals, companies, or states
— are considering cooperating to build or otherwise obtain a machine that will then interact with an
environment on their behalf.1In such a scenario, the principals will tend to bargain for “how much”
the machine will prioritize their separate interests, so to begin, we need some way to quantify “how
much” each principal is prioritized.
For instance, one might model the machine as maximizing the expected value, given its observations,
of some utility function Uof the environment that equals a weighted sum
w(1)U(1)+w(2)U(2)(1)
of the principals’ individual utility functions U(1)andU(2), as Harsanyi’s social aggregation theorem
recommends [Harsanyi, 1980]. Then the bargaining process could focus on choosing the values of
the weightsw(i).
However, this turns out to be a bad idea. As we shall see in the following example, this solution form
is not generally compatible with Pareto optimality when agents have different beliefs. Harsanyi’s
setting does not account for agents having different priors, nor for decisions being made sequentially,
after future observations. In such a setting, we need a new form of solution, exhibited here.
A cake-splitting scenario. Alice (principal 1) and Bob (principal 2) are about to be presented with
a cake which they can choose to split in half to share, or give entirely to one of them. They have
(built or purchased) a robot that will make the cake-splitting decision on their behalf. Alice’s utility
function returns 0if she gets no cake, 20if she gets half a cake, or 30if she gets a whole cake. Bob’s
utility function works similarly.
However, Alice and Bob have slightly different beliefs about how the environment works. They both
agree on the state of the environment that the robot will encounter at first: a room with a cake in
it (s1=“cake” ). But Alice and Bob have different predictions about how the robot’s sensors will
perceive the cake: Alice thinks that when the robot perceives the cake, it is 90% likely to appear
with a red tint ( o1=“red" ), and 10% likely to appear with a green tint ( o1=“green" ), whereas Bob
believes the exact opposite. In either case, upon seeing the cake, the robot will either give Alice the
entire cake ( a1=(all, none) ), split the cake half-and-half ( a1=(half, half) ), or give Bob the entire
cake (a1=(none, all)). Moreover, Alice and Bob have common knowledge of all these facts.
Now, consider the following Pareto optimal policy that favors Alice (principal 1) when o1is red, and
Bob (principal 2) when o1is green:
^( jred) = 100% (all, none)
^( jgreen ) = 100% (none, all)
This policy can be viewed intuitively as a bet between Alice and Bob about the value of o1, and is
highly appealing to both principals:
E(1)[U(1); ^] = 90%(30) + 10%(0) = 27
E(2)[U(2); ^] = 10%(0) + 90%(30) = 27
In particular, ^is more appealing to both Alice and Bob than an agreement to deterministically split
the cake (half, half). We start to see that when principals evaluate their expected returns under a
policywith respect to differing beliefs about action outcomes, they may mutually agree to a policy
that favors one principal over the other, contingent on the action-observation history. We formalize
this intuition and explore its consequences in the following sections.
1The results here all generalize from two principals to nprincipals being combined successively in any order,
but for clarity of exposition, the two person case is prioritized.
3
3.1 A POMDP formulation
Let us formalize the machine’s decision-making situation using the structure of a Partially Observable
Markov Decision Process (POMDP) [Sondik, 1973]. It is assumed that the principals will have
common knowledge of the policy = (1;:::;n)they select for the machine to implement, but that
the principals may have different beliefs about how the environment works, and of course different
utility functions. We refer to this as the common knowledge assumption .
We encode each principal j’s outlook as a POMDP, D(j)= (S(j);A;T(j);U(j);O;
(j);n), which
simultaneously represents that principal’s beliefs about the environment, and the principal’s utility
function.
S(j)is a set of possible states sof the environment.
Ais the set of possible actions aavailable to the NRL agent.
T(j)is the conditional probabilities principal jbelieves will govern the environment state transitions,
i.e.,P(j)(si+1jsi;ai).
U(j)is principalj’s utility function from sequences of environmental states (s1;:::;sn)toR.2
Ois the set of possible observations oof the NRL agent.
(j)is the conditional probabilities principal jbelieves will govern the agent’s observations, i.e.,
P(j)(oijsi).
nis the time horizon.
Thus, principal j’s subjective probability of an outcome (s;o;a), for any s2(S(j))n, is given by a
probability distribution P(j)that takesas a parameter:
P(j)(s;o;a;) :=P(j)(s1)nY
i=1P(j)(oijsi)(aijoia<i)P(j)(si+1jsi;ai) (2)
We say that the POMDPs D(1)andD(2)arecompatible if any policy for one may be viewed as
a policy for the other, i.e., they have the same set of actions Aand observationsO, and the same
number of time steps n. By the common knowledge assumption, principals’ outlooks are assumed to
be encoded by a set of compatible POMDPs.
3.2 Pareto optimal policies
In this context, where a policy may be evaluated relative to more than one POMDP, we use
superscripts to represent which POMDP is governing the probabilities and expectations, e.g.,
E(j)
[U(j)] :=X
s2(S(j))nP(j)(s;)U(j)(s)
represents the expectation in D(j)of the utility function U(j), assuming policy is followed.
Definition 1 (Pareto optimal policies) .A policyisPareto optimal for a compatible pair of POMDPs
(D(1);D(2))if for any other policy 0, either
E(1)
[U(1)]E(1)
0[U(1)]orE(2)
[U(2)]E(2)
0[U(2)]:
It is assumed that, during negotiation, the principals will be seeking a Pareto optimal policy for the
agent to follow, relative to the POMDPs D(1)andD(2)describing each principal’s outlook.
If we allow the policy to come from the space of stochastic policies, mapping action-observation
histories to distributions over actions:
:
A;O
7!A,
then we can show that the following condition is necessary and sufficient for to be Pareto optimal.
2For the sake of generality, U(j)isnot assumed to be stationary, as reward functions often are.
4
Lemma 2. A policyis Pareto optimal to principals 1and2if and only if there exist weights
w1;w20withw1+w2= 1such that
2argmax
2
w(1)E(1)
[U(1)] +w(2)E(2)
[U(2)]
(3)
Proof. See supplementary material.
3.3 Reduction to a single POMDP
We shall soon see that any Pareto optimal policy must favor, as time progresses, optimizing the utility
of whichever principal’s beliefs were a better predictor of the NRL agent’s inputs. This phenomenon
is most easily shown via a reduction of the POMDPs describing the outlooks of principals 1 and 2 to
a third POMDP, as follows.
For any weights, w(1);w(2)0withw(1)+w(2)= 1, we define a new POMDP that works by
flipping a (w(1);w(2))-weighted coin, and then running D(1)orD(2)thereafter, according to the
coin flip. Explicitly, we define a POMDP mixture as a POMDP D=w(1)D(1)+w(2)D(2). We give
Dthe same action space Aand observation space Oas the compatible POMDPs D(1)andD(2). In
the state space of D, we include a latent, stochastic, binary variable B2f1;2gthat is fixed for all
time and initial belief given by P(B= 1) =w(1)andP(B= 2) =w(2). In the case that B= 1, we
draw states, transition probabilities, observation probabilities, and utilities from the parameters of D1.
Formally,
PD(s;o;ajB= 1;) =P(1)(s;o;a;),
and
E(D)
[UjB= 1] = E(1)
[U1].
Likewise for B= 2. This is a generalized version of a well-known POMDP reduction used in the
Bayesian Reinforcement Learning literature [Ghavamzadeh et al., 2016].
Given any policy , the expected payoff of inD=w1D1+w2D2is exactly
P(B= 1)E(UjB= 1) + P(B= 2)E(UjB= 2) (4)
=w(1)E(1)
[U(1)] +w(2)E(2)
[U(2)] (5)
Therefore, using the above definitions, Lemma 2 may be restated in the following equivalent form:
Lemma 3. Given a pair (D(1);D(2))of compatible POMDPs, a policy is Pareto optimal for that
pair if and only if there exist weights w(j)such thatis an optimal policy for the single POMDP
given byw(1)D(1)+w(2)D(2).
3.4 Structural properties of Pareto optimal POMDP reduction
Expressed in the form of Equation 3, it might not be clear how a Pareto optimal policy makes use of its
observations over time, aside from storing them in memory. For example, is there any sense in which
the NRL agent carries “beliefs" about the environment that it “updates" at each time step? Lemma 3
allows us to answer this and related questions by translating theorems about single POMDPs into
theorems about compatible pairs of POMDPs.
We introduce the notation hito represent the NRL agent’s action-observation history at time i, i.e.
hi= (oi;a<i). At timestep i, the NRL agent has a belief over the latent value B, conditioned on hi.
Note that given an action-observation history hi, the NRL agent’s posterior belief over the value of B
is proportional to the probabilities assigned by each principal’s outlook to the realized observation
sequence:
P(B=jjhi)/w(j)P(j)(oija<i).
This relation, combined with the expected value expression in Equation 4, reveals a pattern in how
the weights on the principals’ conditionally expected utilities must change over time, which is the
main result of this paper:
5
Theorem 4 (Pareto optimal policy recursion) .Given a pair (D(1);D(2))of compatible POMDPs of
lengthn, a policyis Pareto optimal if and only if its components iforinsatisfy the following
backward recursion for some pair of weights w(1);w(2)0withw(1)+w(2)= 1:
(hi)2argmax
2A
w(1)P(1)
oija<i
E(1)
[U(1)jhi;ai]
+w(2)P(2)
oija<i
E(2)
[U(2)jhi;ai]
In words, to achieve Pareto optimality, the machine must
1.use each principal’s own beliefs when estimating the degree to which a decision favors that
principal’s utility function, and
2.shift the relative priorities of the principals’ expected utilities in the machine’s decision
objective over time, by a factor proportional to how well the principals predict the machine’s
inputs.
Proof. By Lemma 3, the Pareto-optimality of for(D(1);D(2))is equivalent to its classical opti-
mality forD=w(1)D(1)+w(2)D(2)for some (w(1);w(2)). Writing Pfor probabilities in D, this is
equivalent to =(hi)maximizing the following expression F()for eachi2f0;:::;ng:
F() =E(D)
[Ujhi;ai]: (6)
The above property states, in words, that the optimal policy is the one that maximizes future expected
rewards given an observation history, without regard to any alternate histories and by recursively
assuming the same selection process for future timesteps. This is a standard formulation for POMDPs,
and is exactly Bellman’s Principle of Optimality [Bellman, 1957, Chap III, 3.]
The tower property of expectation allows us to write the above expectation factor as
E(D)[Ujhi;ai] =P(B= 1jhi)E(D)[Ujhi;ai;B = 1]
+P(B= 2jhi)E(D)[Ujhi;ai;B = 2]:(7)
Now, observe that, by Bayes’ rule, the posterior probability
P(B=jjhi)/w(j)P(j)(oija<i). (8)
By construction, E(D)[Ujhi;ai;B =j] =E(j)[U(j)jhi;ai]. Substituting this
expression and (8) into Equation 7 gives us the result.
An intuition about this property is gained by noting that as the NRL agent takes actions in the single
POMDPw(1)D(1)+w(2)D(2), its posterior belief about the value of the latent variable Bis exactly
equal to its belief about which utility function is “correct” for the POMDP it is acting in.
When the principals have the same beliefs, they always assign the same probability to the machine’s
inputs, so the weights on their respective expectations do not change over time. In this case, Harsanyi’s
utility aggregation formula is recovered as a special instance.
3.5 Interpretation as Bet Settling
Theorem 4 shows that a Pareto optimal policy must tend, over time, toward prioritizing the expected
utility of whichever principal’s beliefs best predict the machine’s inputs better. From some perspec-
tives, this is a little counterintuitive: not only must the machine gradually place more predictive weight
on whichever principal’s prior is a better predictor, but it must reward that principal by attending
more to their utility function as well.
Thus, a machine implementing a Pareto optimal policy can be viewed as a kind of bet-settling device.
If Alice is 90% sure the Four Horsemen will appear tomorrow and Bob is 80% sure they won’t, it
makes sense for Alice to ask—while bargaining with Bob for the machine’s policy—that the machine
6
Figure 1: The agent initially heads for Goal 2in the top-right corner. However, at Frame 9 it has
observed eight deterministic transitions in a row. Its confidence in MDP 1 is high enough at that point
that it veers downward and heads to Goal 1Shown: Frames 0, 6, 12, and 15 of the trajectory.
prioritize her values more if the Four Horsemen arrive tomorrow, in exchange for prioritizing Bob’s
values more if they don’t. Both parties will be happy with this agreement in expectation.
4 Evaluation
4.1 Experiment Environment
Our experiments are run in a modified version of the FrozenLake environment in OpenAI Gym
[Brockman et al., 2016]. FrozenLake is a grid world environment that simulates a goal MDP. The
agent receives a reward upon reaching the specified goal position. The agent can choose to move
NORTH ,SOUTH ,EAST , orWEST , and the transition model can be chosen to be either stochastic or
deterministic. Under a stochastic transition model, simulating the eponymous frozen lake, the agent’s
action fails with probability 0.2, and the agent transitions into one of the unintended neighboring
positions. In the FrozenLake environment, state is fully observed by the agent. We modify the
environment to support multiple possible goal states, labeled 1and2, corresponding to the utilities of
Principal 1 and 2, respectively. For simplicity, we treat the state as fully-observable.
In these experiments Principal 1 assigns utility to the agent reaching each goal labeled 1and has the
belief that the environment has a deterministic transition model. Principal 2 assigns utility to the
agent reaching each goal labeled 2and has the belief that the environment has a stochastic transition
model. The agent is initialized with initial belief state w1, corresponding to a subjective belief that the
agent is in Principal 1’s MDP, M1, with probability w1and Principal 2’s MDP, M2, with probability
1 w1=w2. We use point-based value iteration to learn a the belief-space policy. The agent is then
placed in either M1orM2, and we observe over time as the agent acts according to its belief state.
4.2 Experiments
Observed Behavior In this set of experiments, we run the NRL agent in order to verify that its
behavior resembles a type of bet-settling. The true environment is chosen to be deterministic for this
experiment. After running point-based value iteration [Pineau et al., 2003] with a belief set of 331
points, we execute the resulting policy in this environment. A portion of the agent’s trajectory is
seen in Figure 1. The purple arrows represent the agent’s choice of action at each physical position
under the current belief state. The color of each square represents the agent’s subjective value at each
position under the current belief.
Observing the trajectory, we see that it initially moves towards goal 2. However, each time an action
succeeds, the agent’s belief in the stochastic environment decreases. By the ninth frame, the agent’s
belief in the stochastic world, and as a result its belief that goal 2grants reward, is low enough that
the policy shifts to push the agent to goal 1. This is the type of behavior we would expect of an agent
that maximizes each principal’s utility based on the likelihood of their beliefs.
Next, we use the same policy and place the agent in a stochastic world. The very first action results in
a stochastic transition, and the likelihood of Principal 1’s belief immediately falls to 0. At that point,
the agent knows it is in Principal 2’s MDP and believes that goal 2will give it reward. It heads to
goal 2accordingly.
7
Figure 2: Left: Principal 1’s predicted expected reward while in MDP 2. Right: Principal 2’s predicted
expected reward during trajectory depicted in MDP 1.
Full trajectories for both of these experiments are presented in the supplementary material.
Subjective State Value During Execution The promise of compromise behavior may bring parties
into agreement over the decision to build a NRL agent. However if, after the agent begins taking
actions, parties feel that the agent will not take actions that guarantee them high future rewards, those
parties may be tempted to end cooperation. The next experiment attempts to test how losing parties
assess the agent’s behavior during execution.
To address this concern, we turn back to the trajectories discussed in the previous section. In the
first trajectory, the agent is, in fact, operating in Principal 1’s MDP. At each timestep, we ask what
Principal 2 believes their expected sum of future rewards to be. Recall that each principal believes
the agent is acting in their own MDP. Since both principals know the agent’s policy, physical state,
and belief state at each timestep, each principal can estimate their future rewards by simulating the
agent acting in their respective MDP with initial configuration given by the current configuration.
For the experiments here, we rollout 100 simulations from each configuration encountered along
the two trajectories. For each simulation, we place the agent within the losing principal’s MDP and
measure the average reward attained by the agent from that starting configuration. Principal 2’s
assessment of their expected future rewards during execution in MDP 1 is shown in the right frame of
Figure 2. Principal 1’s assessment of their expected future rewards during execution in MDP 2 is
shown in the left frame.
We see that until frame 9, Principal 2 believes that the agent has a high chance of reaching goal 2. It
is only when the agent’s behavior shifts and it veers towards goal 1that Principal 2 begins to assess
their expected future utility as decreasing sharply. In contrast, as soon as Principal 1 observes that
w1= 0:0, they realize that the agent will never reach goal 1. Both of these assessments are consistent
with the bet-settling behavior we expect.
5 Conclusion
Insofar as Theorem 4 is not particularly mathematically sophisticated—it employs only basic facts
about convexity and linear algebra—this suggests there may be more low-hanging fruit to be found in
the domain of “machine implementable social choice theory." To recapitulate, Theorem 4 represents
two deviations from the intuition of naïve utility aggregation: to achieve Pareto optimality for
principals with differing beliefs, an agent must (1) use each principal’s own beliefs in evaluating how
well an action will serve that principal’s utility function, and (2) shift the relative priority it assigns
to each principal’s expected utilities over time by a factor proportional to how well that principal’s
beliefs predict the machine’s inputs.
As a final remark, consider that social choice theory and bargaining theory were both pioneered during
the Cold War, when it was particularly compelling to understand the potential for cooperation between
human institutions that might behave competitively. In the coming decades, machine intelligence
will likely bring many new challenges for cooperation, as well as new means to cooperate, and new
reasons to do so. As such, new technical aspects of social choice and bargaining, along the lines of
this paper, will likely continue to emerge.
8
References
Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In
Proceedings of the twenty-first international conference on Machine learning , page 1. ACM, 2004.
Stuart Armstrong, Nick Bostrom, and Carl Shulman. Racing to the Precipice: A Model of Artificial
Intelligence Development. AI & SOCIETY , 31(2):201–206, 2016.
Seth D Baum. On the Promotion of Safe and Socially Beneficial Artificial Intelligence. AI &
SOCIETY , pages 1–9, 2016.
Richard Bellman. Dynamic Programming . Princeton University Press, 1957.
Nick Bostrom. Superintelligence: Paths, dangers, strategies . OUP Oxford, 2014.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and
Wojciech Zaremba. OpenAI Gym, 2016.
Zoltán Gábor, Zsolt Kalmár, and Csaba Szepesvári. Multi-Criteria Reinforcement Learning. In ICML ,
volume 98, pages 197–205, 1998.
Mohammad Ghavamzadeh, Shie Mannor, Joelle Pineau, and Aviv Tamar. Bayesian Reinforcement
Learning: A Survey. ArXiv e-prints , September 2016.
Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell. Cooperative Inverse
Reinforcement Learning, 2016.
John C Harsanyi. Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparisons of Utility.
InEssays on Ethics, Social Behavior, and Scientific Explanation , pages 6–23. Springer, 1980.
Andrew Y Ng and Stuart J Russell. Algorithms for Inverse Reinforcement Learning. In ICML , pages
663–670, 2000.
Joelle Pineau, Geoff Gordon, and Sebastian Thrun. Point-based Value Iteration: An Anytime
Algorithm for POMDPs. IJCAI’03, pages 1025–1030, 2003.
Diederik M Roijers, Shimon Whiteson, and Frans A Oliehoek. Point-based planning for multi-
objective pomdps. In IJCAI 2015: Proceedings of the Twenty-Fourth International Joint Conference
on Artificial Intelligence , pages 1666–1672, 2015.
Stuart Russell. Learning Agents for Uncertain Environments. In Proceedings of the eleventh annual
conference on Computational learning theory , pages 101–103. ACM, 1998.
Harold Soh and Yiannis Demiris. Evolving Policies for Multi-Reward Partially Observable Markov
Decision Processes (MR-POMDPs). In Proceedings of the 13th annual conference on Genetic and
evolutionary computation , pages 713–720. ACM, 2011.
Edward J. Sondik. The Optimal Control of Partially Observable Markov Processes over a Finite
Horizon. Oper. Res. , 21(5):1071–1088, October 1973.
Gwo-Hshiung Tzeng and Jih-Jeng Huang. Multiple Attribute Decision Making: Methods and
Applications . CRC press, 2011.
Weijia Wang. Multi-Objective Sequential Decision Making . PhD thesis, Université Paris Sud-Paris
XI, 2014.
Chongjie Zhang and Julie A Shah. Fairness in Multi-Agent Sequential Decision-Making. In Advances
in Neural Information Processing Systems , pages 2636–2644, 2014.
9 |
a4677bc9-18a1-4880-89d4-0ae902160974 | trentmkelly/LessWrong-43k | LessWrong | Cognitive Load from simple computations
From Ken Burnside, a game designer
> Counting is easier than addition.
>
> Addition is easier than subtraction.
>
> Subtraction can be done, but combining addition and subtraction into the same game mechanic will slow down play considerably. (“I have +2 for flanking, and -2 for lighting…”) is surprisingly fiddly in play.
>
>
> BIG GAP HERE
>
>
> Subtraction and addition in the same operation is easier than most forms of multiplication. For example, it’s better to express a critical hit as “+100% damage” rather than “x2″ damage. Not only does it avoid edge cases (where you have two critical hit multipliers that both apply) it’s faster at the gaming table because it’s addition.
>
> Multiplication by 10, 5, 4 and 2 are light weight enough to be usable. If you’re using a multiplicative operator other than one of those four digits, you need to change something earlier in the process. When in doubt, make it addition.
>
>
> BIGGER GAP HERE
>
>
> Division is only tolerable when A) you’re dividing an integer and expect an integer outcome, or are rounding to an integer outcome, and B) the divisor is 2 or 10. I specifically built some parts of my games for modulus division to avoid these problem.
>
>
> GINORMOUS GAP HERE
>
>
> Square roots, cube roots and exponentiation are probably not game friendly to most people. When in doubt, transform your equations to use one of the first two items on this list.
My impression is that Ken's games are geeky, so it's reasonable to assume his players are more adept with arithmetic than most people. I'm posting this because it might be worth knowing for anyone who's trying to explain something that involves numbers. |
b2faf479-041f-4520-93d1-cc578bc0c3ec | trentmkelly/LessWrong-43k | LessWrong | Less Wrong Study Hall: Now With 100% Less Tinychat
Eight months ago, I announced that the Less Wrong Study Hall, a virtual coworking space where people do pomodoros together, has moved to Complice. Complice is a software system I made to help people achieve their goals. About 20% of rationalists who've tried it have started using it full-time, which by my math gives signing up positive expected value. Anyway...
What follows is a brief history of the LWSH's development thus far. If you just wanna try it, click here: complice.co/room/lesswrong
By embedding the original tinychat window within a larger page, I let users see what the pomodoro timer was up to as soon as they joined, and the page also doesn't let breaks run overtime because the timer just keeps ticking. Also, users could now show a persistent status of what they're working on.
But there were issues with this. First of all, tinychat. This flash-based chatroom was a huge pain, with 20% of users reporting that they hated it with the fury of a thousand burning suns[source]. It didn't work at some offices, or on mobile, and it couldn't readily scale to more than 8-10 users. But even if we'd had the best embedded chat system ever, it still would have been an issue, because you'd still have to log into it again, after already loading the Complice room. And it would still not integrate with the rest of the page.
It was easy to build a replacement for the text chat part, but one of the core contributions to the character of the LW Study Hall was the video component. By working with video on and audio off, we could have a sense of a social vibe without it being too distracting. Video has numerous benefits:
* gives new joiners an immediate sense of "there are people here"
* lets us be friendly and silly during the breaks, in a way that text just can't quite manage
* helps people get a sense of each others' personalities, so that you have a sense of who you're coworking with even if you've never met in person
* if/when you finally do meet in person, you alr |
91bd1d89-aa3b-4856-b11a-61b89db2f44c | trentmkelly/LessWrong-43k | LessWrong | Parent-Friendly Dance Weekends
We just finished the 2024 edition of Beantown Stomp, a contra dance weekend I helped start (but no longer organize!) in Boston. There are a lot of things I like about the weekend, but one thing I especially like is how parent-friendly it is.
The big thing here is childcare, during the day on Saturday and Sunday. It's included with registration, and typically staffed by one paid person and one volunteer. Several families brought books and toys, and it also served as a nice place for families to hang out. Some feedback this year:
> Just wanted to say thank you to everyone who helped out this weekend. My kids had a great time, and it allowed both me and my partner to go to this dance weekend. I haven't been dancing much since becoming a parent, and it always feels like a rare and wonderful treat whenever I do.
> Thank you for making Beantown Stomp inclusive and accessible to families!!
My understanding is it used to be somewhat common for events like this to offer childcare, but most weekends stopped after the children of the initial organizing cohort didn't need minding anymore? The only other dance weekend I know of that includes childcare is Sugar Hill, on an all-volunteer model.
We also do a family dance, Saturday morning. Pretty informal: acoustic open band, lots of easy dances. Unlike the rest of the weekend, the family dance session is open to people who haven't registered, since even with childcare many young families aren't going to want to make a whole weekend out of it.
Here's Nora dancing Sasha with me while I try to play in the open band:
We also invite people who came for the family dance to stay for the first session of contras. Saturday morning we're never close to capacity, and even getting in a few dances together can mean a lot to parents who otherwise aren't going to be attending the weekend. This might have been my first dance with Julia since NEFFA last year?
Many dance communities see people drift away around when they have kids. Wh |
51ef4a2a-34fd-48c7-9479-b94ac9037a73 | trentmkelly/LessWrong-43k | LessWrong | Query the LessWrong Hivemind
Often, there are questions you want to know the answers to. You want other people's opinions, because knowing the answer isn't worth the time you'd have to spend to find it, or you're unsure whether your answer is right.
LW seems like a good place to ask these questions because the people here are pretty rational. So, in this thread: You post a top-level comment with some question. Other people reply to your comment with their answers. You upvote answers that you agree with and questions whose answers you'd like to know.
A few (mostly obvious) guidelines:
For questions:
* Your question should probably be in one of the following forms:
* Asking for the probability some proposition is true.
* Asking for a confidence interval.
* Be specific. Don't ask when the singularity will happen unless you define 'singularity' to reasonable precision.
* If you have several questions, post each separately, unless they're strongly related.
For answers:
* Give what the question asks for, be it a probability or a confidence interval or something else. Try to give numbers.
* Give some indication of how good your map is, i.e why is your answer that? If you want, give links.
* If you think you know the answer to your own question, you can post it.
* If you want to, give more information. For instance, if someone asks whether it's a good idea to brush their teeth, you can include info about flossing.
* If you've researched something well but don't feel like typing up a long justification of your opinions, that's fine. Rather give your opinion without detailed arguments than give nothing at all. You can always flesh your answer out later, or never.
This thread is primarily for getting the hivemind's opinions on things, not for debating probabilities of propositions. Debating is also okay, though, especially since it will help question-posters to make up their minds.
Don't be too squeamish about breaking the question-answer format.
This is a followup to my comme |
264abb53-588e-44e3-8caf-53f91f1768e6 | trentmkelly/LessWrong-43k | LessWrong | Upcoming API changes: Upgrading to Open-CRUD syntax
This post will only really be relevant to people who are using our API, so the vast majority of people can skip this.
We are about to update our API to the closest what graphQL currently has to an industry standard. Starting tomorrow morning (PST), you will be able to find the new API at lessestwrong.com/graphql for a few days, until we upgrade the main site (probably either on Friday, or on Monday). There is also a documentation for the changes, written by the guy who maintains one of the frameworks we are using (Vulcan), that you can find here.
Sorry for the inconvenience, but it will definitely make the API a lot easier to use in the long-run. |
3c1ca095-a294-4632-ac69-79a0cc98ac43 | trentmkelly/LessWrong-43k | LessWrong | Reading RFK Jr so that you don’t have to
Some people in my orbit suggested reading Robert F. Kennedy Jr’s book The Real Anthony Fauci.
Here’s my story of wading through a few pages of the book and trying to understand the basis of the claims. My takeaway: there's a lot of sloppiness here, and several times the book’s claim isn’t supported by the citation.
Often, though, the citation is fine. The arguments are carried by shoddy science and massive leaps in causal attribution. For a lot of the anti-vaccine and anti-lockdown takes, there’s at least some study by some scientists showing really bad effects. And the scientists are real—it’s just a small sample or poor design.
Sorting things out honestly is hard, especially with bad faith actors using sloppy language to describe highly selected but serious-sounding findings. My notes on the text follow.
Lockdowns
First, the Introduction has a section on the harms from lockdowns (p.11). I gravitated toward it because it had a lot of numbers. I’m open to the claims that lockdowns were net negative, with severe and diverse harms.
Here’s the first paragraph:
> As Dr. Fauci’s policies took hold globally, 300 million humans fell into dire poverty, food insecurity, and starvation. “Globally, the impact of lockdowns on health programs, food production, and supply chains plunged millions of people into severe hunger and malnutrition,” said Alex Gutentag in Tablet Magazine. According to the Associated Press (AP), during 2020, 10,000 children died each month due to virus-linked hunger from global lockdowns. In addition, 500,000 children per month experienced wasting and stunting from malnutrition—up 6.7 million from last year’s total of 47 million—which can “permanently damage children physically and mentally, transforming individual tragedies into a generational catastrophe.” In 2020, disruptions to health and nutrition services killed 228,000 children in South Asia. Deferred medical treatments for cancers, kidney failure, and diabetes killed hundreds of thousan |
125a6c95-c75d-414f-98ce-82e2a30150c9 | StampyAI/alignment-research-dataset/special_docs | Other | A Few Useful Things to Know about Machine Learning
78 communications of the acm | october 2012 | vol. 55 | no. 10review articles
Machine learning syste Ms automatically learn
programs from data. This is often a very attractive
alternative to manually constructing them, and in the last decade the use of machine learning has spread rapidly throughout computer science and beyond. Machine learning is used in Web search, spam filters, recommender systems, ad placement, credit scoring, fraud detection, stock trading, drug design, and many other applications. A recent report from the McKinsey Global Institute asserts that machine learning (a.k .a.
data mining or predictive analytics) will be the driver of the next big wave of innovation.
15 Several fine
textbooks are available to interested practitioners and researchers (for example, Mitchell
16 and Witten et
al.24). However, much of the “folk knowledge” that is needed to successfully develop
machine learning applications is not readily available in them. As a result, many machine learning projects take much longer than necessary or wind up producing less-than-ideal results. Yet much of this folk knowledge is fairly easy to communicate. This is the purpose of this article.doi:10.1145/2347736.2347755
Tapping into the “folk knowledge” needed to
advance machine learning applications.
by Pedro domingos
a few u seful
t
hings to
Know
a
bout
m
achine
Learning
key insights
machine learning algorithms can figure
out how to perform impor
tant tasks
by generalizing from examples.
t
his is
often feasible and cost-effective where
manual programming is not.
a
s more
data becomes available, more ambitious problems can be tackled.
machine learning is widely used in
computer science and other fields.
h
owever, developing successful
machine learning applications requires a substantial amount of “black art” that is difficult to find in textbooks.
this article summarizes 12 key lessons
that machine learning researchers and pr
actitioners have learned.
t
hese include
pitfalls to avoid, important issues to focus on, and answers to common questions.
oCT obeR 2012 | VoL. 55 | no. 10 | CommuniCA tionS oF thE ACm 79Many different types of machine
learning exist, but for illustration
purposes I will focus on the most
mature and widely used one: clas-sification. Nevertheless, the issues I
will discuss apply across all of ma-
chine learning. A classifier is a sys-
tem that inputs (typically) a vector
of discrete and/or continuous fea-ture values and outputs a single dis-
crete value, the class. For example,
a spam filter classifies email mes-sages into “spam” or “not spam,”
and its input may be a Boolean vec-
tor x = (x
1,…,x j,…,x d), where x j = 1 if
the jth word in the dictionary appears
in the email and x j = 0 otherwise. A
learner inputs a training set of ex-
amples (x i, yi), where x i = (x i,1 , . . . ,
xi,d) is an observed input and y i is the
corresponding output, and outputs a classifier. The test of the learner is
whether this classifier produces the correct output y
t for future examples
xt (for example, whether the spam
filter correctly classifies previously
unseen email messages as spam or
not spam).Learning = Representation +
Evaluation + optimization
Suppose you have an application that you think machine learning might be
good for. The first problem facing you
is the bewildering variety of learning al-gorithms available. Which one to use?
There are literally thousands available,
and hundreds more are published each year. The key to not getting lost in this
huge space is to realize that it consists
of combinations of just three compo-
nents. The components are:
˲Representation. A classifier must
be represented in some formal lan-guage that the computer can handle.
Conversely, choosing a representa-tion for a learner is tantamount to
choosing the set of classifiers that it
can possibly learn. This set is called
the hypothesis space of the learner.
If a classifier is not in the hypothesis
space, it cannot be learned. A related
question, that I address later, is how
to represent the input, in other words, what features to use.
˲Evaluation. An evaluation func-
tion (also called objective function or scoring function) is needed to dis-
tinguish good classifiers from bad ones. The evaluation function used
internally by the algorithm may dif-
fer from the external one that we want the classifier to optimize, for ease of
optimization and due to the issues I
will discuss.
˲Optimization. Finally, we need
a method to search among the clas-
sifiers in the language for the high-
est-scoring one. The choice of op-timization technique is key to the
efficiency of the learner, and also
helps determine the classifier pro-
duced if the evaluation function has
more than one optimum. It is com-mon for new learners to start out using
off-the-shelf optimizers, which are lat-
er replaced by custom-designed ones.
The accompanying table shows
common examples of each of these three components. For example, k -
nearest neighbor classifies a test ex-ample by finding the k most similar
training examples and predicting the
majority class among them. Hyper-
plane-based methods form a linear
IMage by agsandreW/shutterstoCk.CoM
80 communications of the acm | october 2012 | vol. 55 | no. 10review articles
combination of the features per class
and predict the class with the high-est-valued combination. Decision trees test one feature at each internal node, with one branch for each fea-ture value, and have class predictions at the leaves. Algorithm 1 (above) shows a bare-bones decision tree learner for Boolean domains, using information gain and greedy search.
20
InfoGain(x j, y) is the mutual informa-
tion between feature x j and the class y .
MakeNode(x ,c0,c1) returns a node that
tests feature x and has c 0 as the child
for x = 0 and c 1 as the child for x = 1.
Of course, not all combinations of
one component from each column of the table make equal sense. For exam-ple, discrete representations naturally go with combinatorial optimization, and continuous ones with continu-ous optimization. Nevertheless, many learners have both discrete and con-tinuous components, and in fact the day may not be far when every single possible combination has appeared in some learner!
Most textbooks are organized by
representation, and it is easy to over-look the fact that the other compo-nents are equally important. There is no simple recipe for choosing each component, but I will touch on some of the key issues here. As we will see, some choices in a machine learning project may be even more important than the choice of learner.
it’ s Generalization that c ounts
The fundamental goal of machine learning is to generalize beyond the examples in the training set. This is because, no matter how much data we have, it is very unlikely that we will see those exact examples again at test time. (Notice that, if there are 100,000 words in the dictionary, the spam fil-ter described above has 2
100,000 pos-sible different inputs.) Doing well on the training set is easy (just memorize the examples). The most common mistake among machine learning be-ginners is to test on the training data and have the illusion of success. If the chosen classifier is then tested on new data, it is often no better than ran-dom guessing. So, if you hire someone to build a classifier, be sure to keep some of the data to yourself and test the classifier they give you on it. Con-versely, if you have been hired to build a classifier, set some of the data aside from the beginning, and only use it to test your chosen classifier at the very end, followed by learning your final classifier on the whole data.
Contamination of your classifier by
test data can occur in insidious ways, for example, if you use test data to tune parameters and do a lot of tun-ing. (Machine learning algorithms have lots of knobs, and success of-ten comes from twiddling them a lot, so this is a real concern.) Of course, holding out data reduces the amount available for training. This can be mit-igated by doing cross-validation: ran-domly dividing your training data into (say) 10 subsets, holding out each one while training on the rest, testing each learned classifier on the examples it did not see, and averaging the results to see how well the particular param-eter setting does.
In the early days of machine learn-
ing, the need to keep training and test data separate was not widely appreci-ated. This was partly because, if the learner has a very limited representa-tion (for example, hyperplanes), the difference between training and test error may not be large. But with very flexible classifiers (for example, deci-sion trees), or even with linear classifi-ers with a lot of features, strict separa-tion is mandatory.
Notice that generalization being
the goal has an interesting conse-quence for machine learning. Unlike in most other optimization problems, we do not have access to the function we want to optimize! We have to use training error as a surrogate for test error, and this is fraught with dan-ger. (How to deal with it is addressed later.) On the positive side, since the objective function is only a proxy for the true goal, we may not need to fully table 1. t he three components of learning algorithms.
Representation evaluation o ptimization
Instances Accuracy/
e
rror rate c
ombinatorial optimization
K-nearest neighbor Precision and recall Greedy search
Support vector machines Squared error
b
eam search
Hyperplanes l
ikelihood
b
ranch-and-bound
n
aive
b
ayes Posterior probability c
ontinuous optimization
logistic regression Information gain Unconstr
ained
Decision trees K-
l
divergence Gradient descent
Sets of rules c
ost/Utility
c
onjugate gradient
Propositional rules Margin Quasi-
n
ewton methods
logic progr
ams
c
onstrained
n
eural networks
l
inear programming
Graphical models Quadratic programming
b
ayesian networks
c
onditional random fields
algorithm 1. Decision tree induction.
LearnD t (TrainSet)
if all examples in TrainSet have the same class y* then
return Make
leaf(
y*)
if no feature x j has InfoGain(x j ,y) > 0 then
y* ← Most frequent class in TrainSet
return Make
leaf(
y*)
x* ← argmax xj InfoGain(x j, y)
TS 0 ← e xamples in TrainSet with x* = 0
TS 1 ← e xamples in TrainSet with x* = 1
return Make
n
ode(x*, learnD t (TS 0), learnD t (TS 1))
review articles
october 2012 | vol. 55 | no. 10 | communications of the acm 81optimize it; in fact, a local optimum
returned by simple greedy search may be better than the global optimum.
Data
a
lone
is
n
ot
e
nough
Generalization being the goal has an-
other major consequence: Data alone is not enough, no matter how much of it you have. Consider learning a Boolean function of (say) 100 vari-ables from a million examples. There are 2
100 − 106 examples whose classes
you do not know. How do you figure out what those classes are? In the ab-sence of further information, there is just no way to do this that beats flip-ping a coin. This observation was first made (in somewhat different form) by the philosopher David Hume over 200 years ago, but even today many mis-takes in machine learning stem from failing to appreciate it. Every learner must embody some knowledge or as-sumptions beyond the data it is given in order to generalize beyond it. This notion was formalized by Wolpert in his famous “no free lunch” theorems, according to which no learner can beat random guessing over all pos-sible functions to be learned.
25
This seems like rather depressing
news. How then can we ever hope to learn anything? Luckily, the functions we want to learn in the real world are not drawn uniformly from the set of all mathematically possible functions! In fact, very general assumptions—like smoothness, similar examples hav-ing similar classes, limited depen-dences, or limited complexity—are often enough to do very well, and this is a large part of why machine learn-ing has been so successful. Like de-duction, induction (what learners do) is a knowledge lever: it turns a small amount of input knowledge into a large amount of output knowledge. Induction is a vastly more powerful lever than deduction, requiring much less input knowledge to produce use-ful results, but it still needs more than zero input knowledge to work. And, as with any lever, the more we put in, the more we can get out.
A corollary of this is that one of the
key criteria for choosing a representa-tion is which kinds of knowledge are easily expressed in it. For example, if we have a lot of knowledge about what makes examples similar in our do-main, instance-based methods may be a good choice. If we have knowl-edge about probabilistic dependen-cies, graphical models are a good fit. And if we have knowledge about what kinds of preconditions are required by each class, “IF . . . THEN . . .” rules may be the best option. The most useful learners in this regard are those that do not just have assumptions hard-wired into them, but allow us to state them explicitly, vary them widely, and incorporate them automatically into the learning (for example, using first-order logic
21 or grammars6).
In retrospect, the need for knowl-
edge in learning should not be sur-prising. Machine learning is not magic; it cannot get something from nothing. What it does is get more from less. Programming, like all en-gineering, is a lot of work: we have to build everything from scratch. Learn-ing is more like farming, which lets nature do most of the work. Farmers combine seeds with nutrients to grow crops. Learners combine knowledge with data to grow programs.
o
verfitting
h
as
m
any
faces
What if the knowledge and data we
have are not sufficient to completely determine the correct classifier? Then we run the risk of just hallucinating a classifier (or parts of it) that is not grounded in reality, and is simply en-coding random quirks in the data. This problem is called overfitting, and is the bugbear of machine learning. When your learner outputs a classi-fier that is 100% accurate on the train-ing data but only 50% accurate on test data, when in fact it could have output one that is 75% accurate on both, it has overfit.
Everyone in machine learning
knows about overfitting, but it comes in many forms that are not immedi-ately obvious. One way to understand overfitting is by decomposing gener-alization error into bias and variance.
9
Bias is a learner’s tendency to con-sistently learn the same wrong thing. Variance is the tendency to learn ran-dom things irrespective of the real sig-nal. Figure 1 illustrates this by an anal-ogy with throwing darts at a board. A linear learner has high bias, because when the frontier between two classes is not a hyperplane the learner is un-able to induce it. Decision trees do not have this problem because they can represent any Boolean function, but on the other hand they can suffer from high variance: decision trees learned on different training sets generated by the same phenomenon are often very different, when in fact they should be
80
75
70
65
60
55
50
10 100
n
umber of
e
xamplestest- s et a ccuracy (%)
1000 10000figure 2. naïve Bayes can outperform a state-of-the-art rule learner ( c4.5rules) even
when the true classifier is a set of rules.
Bayes C4.5figure 1. Bias and variance in
dart-throwing.
low
varianceHigh
variance
low
b
iasHigh
b
ias
82 CommuniCA tionS oF thE ACm | oCT obeR 2012 | VoL. 55 | no. 10review articles
the same. Similar reasoning applies
to the choice of optimization meth-
od: beam search has lower bias than
greedy search, but higher variance, be-cause it tries more hypotheses. Thus,
contrary to intuition, a more powerful
learner is not necessarily better than a
less powerful one.
Figure 2 illustrates this.
a Even
though the true classifier is a set of rules, with up to 1,000 examples na-
ive Bayes is more accurate than a rule learner. This happens despite
naive Bayes’s false assumption that
the frontier is linear! Situations like
this are common in machine learn-
ing: strong false assumptions can be better than weak true ones, because
a learner with the latter needs more
data to avoid overfitting.
Cross-validation can help to com-
bat overfitting, for example by using it to choose the best size of decision tree
to learn. But it is no panacea, since if
we use it to make too many parameter choices it can itself start to overfit.
17
Besides cross-validation, there
are many methods to combat overfit-
ting. The most popular one is adding
a regularization term to the evaluation
function. This can, for example, pe-
nalize classifiers with more structure,
thereby favoring smaller ones with
less room to overfit. Another option
is to perform a statistical significance test like chi-square before adding new
structure, to decide whether the dis-
tribution of the class really is differ-ent with and without this structure.
These techniques are particularly use-
ful when data is very scarce. Neverthe-
less, you should be skeptical of claims
that a particular technique “solves” the overfitting problem. It is easy to
avoid overfitting (variance) by falling
into the opposite error of underfitting (bias). Simultaneously avoiding both
requires learning a perfect classifier,
and short of knowing it in advance there is no single technique that will
always do best (no free lunch).
A common misconception about
overfitting is that it is caused by noise,
a Training examples consist of 64 Boolean fea-
tures and a Boolean class computed from
them according to a set of “IF . . . THEN . . .”
rules. The curves are the average of 100 runs with different randomly generated sets of
rules. Error bars are two standard deviations.
See Domingos and Pazzani
10 for details.like training examples labeled with
the wrong class. This can indeed ag-
gravate overfitting, by making the
learner draw a capricious frontier to keep those examples on what it thinks
is the right side. But severe overfitting
can occur even in the absence of noise.
For instance, suppose we learn a Bool-
ean classifier that is just the disjunc-tion of the examples labeled “true”
in the training set. (In other words,
the classifier is a Boolean formula in disjunctive normal form, where each
term is the conjunction of the feature
values of one specific training exam-
ple.) This classifier gets all the training
examples right and every positive test example wrong, regardless of whether
the training data is noisy or not.
The problem of multiple testing
13 is
closely related to overfitting. Standard statistical tests assume that only one
hypothesis is being tested, but mod-ern learners can easily test millions
before they are done. As a result what
looks significant may in fact not be.
For example, a mutual fund that beats
the market 10 years in a row looks very impressive, until you realize that, if
there are 1,000 funds and each has a
50% chance of beating the market on any given year, it is quite likely that
one will succeed all 10 times just by
luck. This problem can be combatted
by correcting the significance tests to
take the number of hypotheses into account, but this can also lead to un-
derfitting. A better approach is to con-
trol the fraction of falsely accepted non-null hypotheses, known as the
false discovery rate.
3
intuition Fails in high Dimensions
After overfitting, the biggest problem
in machine learning is the curse of
dimensionality. This expression was coined by Bellman in 1961 to refer
to the fact that many algorithms that
work fine in low dimensions become
intractable when the input is high-
dimensional. But in machine learn-ing it refers to much more. General-
izing correctly becomes exponentially
harder as the dimensionality (number of features) of the examples grows, be-
cause a fixed-size training set covers a
dwindling fraction of the input space.
Even with a moderate dimension of
100 and a huge training set of a trillion examples, the latter covers only a frac-tion of about 10
−18 of the input space.
This is what makes machine learning
both necessary and hard.
More seriously, the similarity-
based reasoning that machine learn-ing algorithms depend on (explicitly
or implicitly) breaks down in high di-
mensions. Consider a nearest neigh-
bor classifier with Hamming distance as the similarity measure, and sup-
pose the class is just x
1 ∧ x2. If there
are no other features, this is an easy problem. But if there are 98 irrelevant
features x
3,..., x100, the noise from
them completely swamps the signal in
x1 and x 2, and nearest neighbor effec-
tively makes random predictions.
Even more disturbing is that near-
est neighbor still has a problem even
if all 100 features are relevant! This
is because in high dimensions all
examples look alike. Suppose, for
instance, that examples are laid out
on a regular grid, and consider a test example x
t. If the grid is d -dimen-
sional, xt’s 2d nearest examples are
all at the same distance from it. So as
the dimensionality increases, more
and more examples become nearest neighbors of x
t, until the choice of
nearest neighbor (and therefore of
class) is effectively random.
This is only one instance of a more
general problem with high dimen-sions: our intuitions, which come
from a three-dimensional world, of-
ten do not apply in high-dimensional
ones. In high dimensions, most of the mass of a multivariate Gaussian dis-
tribution is not near the mean, but in
an increasingly distant “shell” around it; and most of the volume of a high-
dimensional orange is in the skin, not
the pulp. If a constant number of ex-
amples is distributed uniformly in a
high-dimensional hypercube, beyond some dimensionality most examples
are closer to a face of the hypercube
than to their nearest neighbor. And if we approximate a hypersphere by in-
scribing it in a hypercube, in high di-
mensions almost all the volume of the hypercube is outside the hypersphere.
This is bad news for machine learning,
where shapes of one type are often ap-
proximated by shapes of another.
Building a classifier in two or three
dimensions is easy; we can find a rea-sonable frontier between examples
of different classes just by visual in-
review articles
october 2012 | vol. 55 | no. 10 | communications of the acm 83spection. (It has even been said that if
people could see in high dimensions machine learning would not be neces-sary.) But in high dimensions it is dif-ficult to understand what is happen-ing. This in turn makes it difficult to design a good classifier. Naively, one might think that gathering more fea-tures never hurts, since at worst they provide no new information about the class. But in fact their benefits may be outweighed by the curse of dimen-sionality.
Fortunately, there is an effect that
partly counteracts the curse, which might be called the “blessing of non-uniformity.” In most applications examples are not spread uniformly throughout the instance space, but are concentrated on or near a lower-dimensional manifold. For example, k-nearest neighbor works quite well
for handwritten digit recognition even though images of digits have one dimension per pixel, because the space of digit images is much smaller than the space of all possible images. Learners can implicitly take advan-tage of this lower effective dimension, or algorithms for explicitly reducing the dimensionality can be used (for example, Tenenbaum
22).
t
heoretical Guarantees
a
re
n
ot What
t
hey
s
eem
Machine learning papers are full of theoretical guarantees. The most com-mon type is a bound on the number of examples needed to ensure good gen-eralization. What should you make of these guarantees? First of all, it is re-markable that they are even possible. Induction is traditionally contrasted with deduction: in deduction you can guarantee that the conclusions are correct; in induction all bets are off. Or such was the conventional wisdom for many centuries. One of the major developments of recent decades has been the realization that in fact we can have guarantees on the results of in-duction, particularly if we are willing to settle for probabilistic guarantees.
The basic argument is remarkably
simple.
5 Let’s say a classifier is bad
if its true error rate is greater than ε .
Then the probability that a bad clas-sifier is consistent with n random, in-
dependent training examples is less than (1 − ε )
n. Let b be the number of bad classifiers in the learner’s hypoth-esis space H . The probability that at
least one of them is consistent is less than b(1 − ε )
n, by the union bound. As-
suming the learner always returns a consistent classifier, the probability that this classifier is bad is then less than |H |(1 − ε )
n, where we have used
the fact that b ≤ |H |. So if we want this
probability to be less than δ , it suffices
to make n > ln(δ /|H|)/ ln(1 − ε ) ≥ 1/ε (ln
|H| + ln 1/δ).
Unfortunately, guarantees of this
type have to be taken with a large grain of salt. This is because the bounds ob-tained in this way are usually extreme-ly loose. The wonderful feature of the bound above is that the required num-ber of examples only grows logarith-mically with |H | and 1/δ . Unfortunate-
ly, most interesting hypothesis spaces are doubly exponential in the number
of features d , which still leaves us
needing a number of examples expo-nential in d . For example, consider
the space of Boolean functions of d Boolean variables. If there are e pos-
sible different examples, there are 2
e possible different functions, so
since there are 2d possible examples,
the total number of functions is 22d.
And even for hypothesis spaces that are “merely” exponential, the bound is still very loose, because the union bound is very pessimistic. For exam-ple, if there are 100 Boolean features and the hypothesis space is decision trees with up to 10 levels, to guarantee δ = ε = 1% in the bound above we need
half a million examples. But in prac-tice a small fraction of this suffices for accurate learning.
Further, we have to be careful
about what a bound like this means. For instance, it does not say that, if your learner returned a hypothesis consistent with a particular training set, then this hypothesis probably generalizes well. What it says is that, given a large enough training set, with high probability your learner will ei-ther return a hypothesis that general-izes well or be unable to find a consis-tent hypothesis. The bound also says nothing about how to select a good hypothesis space. It only tells us that, if the hypothesis space contains the true classifier, then the probability that the learner outputs a bad classi-fier decreases with training set size. one of the major
dev
elopments of
recent decades has
been the realization that we can have guarantees on the results of induction, particularly if we are willing to settle for probabilistic guarantees.
84 communications of the acm | october 2012 | vol. 55 | no. 10review articles
If we shrink the hypothesis space, the
bound improves, but the chances that it contains the true classifier shrink also. (There are bounds for the case where the true classifier is not in the hypothesis space, but similar consid-erations apply to them.)
Another common type of theoreti-
cal guarantee is asymptotic: given in-finite data, the learner is guaranteed to output the correct classifier. This is reassuring, but it would be rash to choose one learner over another be-cause of its asymptotic guarantees. In practice, we are seldom in the asymp-totic regime (also known as “asymp-topia”). And, because of the bias-vari-ance trade-off I discussed earlier, if learner A is better than learner B given infinite data, B is often better than A given finite data.
The main role of theoretical guar-
antees in machine learning is not as a criterion for practical decisions, but as a source of understanding and driving force for algorithm design. In this capacity, they are quite useful; in-deed, the close interplay of theory and practice is one of the main reasons machine learning has made so much progress over the years. But caveat emptor: learning is a complex phe-nomenon, and just because a learner has a theoretical justification and works in practice does not mean the former is the reason for the latter.
feature
e
ngineering
is
t
he Key
At the end of the day, some machine learning projects succeed and some fail. What makes the difference? Eas-ily the most important factor is the features used. Learning is easy if you have many independent features that each correlate well with the class. On the other hand, if the class is a very complex function of the features, you may not be able to learn it. Often, the raw data is not in a form that is ame-nable to learning, but you can con-struct features from it that are. This is typically where most of the effort in a machine learning project goes. It is often also one of the most interesting parts, where intuition, creativity and “black art” are as important as the technical stuff.
First-timers are often surprised by
how little time in a machine learning project is spent actually doing ma-a dumb algorithm
with lots and lots
of data beats
a clever one
with modest
amounts of it.chine learning. But it makes sense if
you consider how time-consuming it is to gather data, integrate it, clean it and preprocess it, and how much trial and error can go into feature design. Also, machine learning is not a one-shot process of building a dataset and running a learner, but rather an itera-tive process of running the learner, analyzing the results, modifying the data and/or the learner, and repeat-ing. Learning is often the quickest part of this, but that is because we have already mastered it pretty well! Feature engineering is more diffi-cult because it is domain-specific, while learners can be largely general purpose. However, there is no sharp frontier between the two, and this is another reason the most useful learn-ers are those that facilitate incorpo-rating knowledge.
Of course, one of the holy grails
of machine learning is to automate more and more of the feature engi-neering process. One way this is often done today is by automatically gener-ating large numbers of candidate fea-tures and selecting the best by (say) their information gain with respect to the class. But bear in mind that features that look irrelevant in isola-tion may be relevant in combination. For example, if the class is an XOR of k input features, each of them by it-
self carries no information about the class. (If you want to annoy machine learners, bring up XOR.) On the other hand, running a learner with a very large number of features to find out which ones are useful in combination may be too time-consuming, or cause overfitting. So there is ultimately no replacement for the smarts you put into feature engineering.
m
ore Data Beats
a
c
leverer
a
lgorithm
Suppose you have constructed the best set of features you can, but the classifiers you receive are still not ac-curate enough. What can you do now? There are two main choices: design a better learning algorithm, or gather more data (more examples, and pos-sibly more raw features, subject to the curse of dimensionality). Machine learning researchers are mainly con-cerned with the former, but pragmati-cally the quickest path to success is
review articles
oCT obeR 2012 | VoL. 55 | no. 10 | CommuniCA tionS oF thE ACm 85ers are seductive, but they are usually
harder to use, because they have more
knobs you need to turn to get good re-
sults, and because their internals are more opaque.
Learners can be divided into two
major types: those whose representa-
tion has a fixed size, like linear classi-
fiers, and those whose representation can grow with the data, like decision
trees. (The latter are sometimes called
nonparametric learners, but this is somewhat unfortunate, since they
usually wind up learning many more
parameters than parametric ones.)
Fixed-size learners can only take ad-
vantage of so much data. (Notice how the accuracy of naive Bayes asymptotes
at around 70% in Figure 2.) Variable-
size learners can in principle learn any function given sufficient data, but in
practice they may not, because of limi-
tations of the algorithm (for example, greedy search falls into local optima)
or computational cost. Also, because
of the curse of dimensionality, no ex-
isting amount of data may be enough.
For these reasons, clever algorithms—those that make the most of the data
and computing resources available—
often pay off in the end, provided you are willing to put in the effort. There
is no sharp frontier between design-
ing learners and learning classifiers;
rather, any given piece of knowledge
could be encoded in the learner or learned from data. So machine learn-
ing projects often wind up having a
significant component of learner de-sign, and practitioners need to have
some expertise in it.
12
In the end, the biggest bottleneck
is not data or CPU cycles, but human often to just get more data. As a rule
of thumb, a dumb algorithm with lots
and lots of data beats a clever one with
modest amounts of it. (After all, ma-chine learning is all about letting data
do the heavy lifting.)
This does bring up another prob-
lem, however: scalability. In most of computer science, the two main lim-
ited resources are time and memory.
In machine learning, there is a third
one: training data. Which one is the bottleneck has changed from decade
to decade. In the 1980s it tended to
be data. Today it is often time. Enor-mous mountains of data are avail-
able, but there is not enough time
to process it, so it goes unused. This
leads to a paradox: even though in
principle more data means that more complex classifiers can be learned, in
practice simpler classifiers wind up
being used, because complex ones take too long to learn. Part of the an-
swer is to come up with fast ways to
learn complex classifiers, and indeed there has been remarkable progress
in this direction (for example, Hulten
and Domingos
11).
Part of the reason using cleverer
algorithms has a smaller payoff than you might expect is that, to a first ap-proximation, they all do the same.
This is surprising when you consider
representations as different as, say,
sets of rules and neural networks. But
in fact propositional rules are readily encoded as neural networks, and sim-
ilar relationships hold between other
representations. All learners essen-tially work by grouping nearby exam-
ples into the same class; the key dif-
ference is in the meaning of “nearby.”
With nonuniformly distributed data,
learners can produce widely different frontiers while still making the same
predictions in the regions that matter
(those with a substantial number of training examples, and therefore also
where most test examples are likely to
appear). This also helps explain why powerful learners can be unstable but
still accurate. Figure 3 illustrates this
in 2D; the effect is much stronger in
high dimensions.
As a rule, it pays to try the simplest
learners first (for example, naïve Bayes before logistic regression, k -nearest
neighbor before support vector ma-
chines). More sophisticated learn-cycles. In research papers, learners are typically compared on measures of accuracy and computational cost.
But human effort saved and insight
gained, although harder to measure,
are often more important. This favors
learners that produce human-under-standable output (for example, rule
sets). And the organizations that make
the most of machine learning are those that have in place an infrastruc-
ture that makes experimenting with
many different learners, data sources, and learning problems easy and effi-
cient, and where there is a close col-
laboration between machine learning
experts and application domain ones.
Learn many models, not Just one
In the early days of machine learn-
ing, everyone had a favorite learner, together with some a priori reasons
to believe in its superiority. Most ef-
fort went into trying many variations
of it and selecting the best one. Then
systematic empirical comparisons showed that the best learner varies
from application to application, and
systems containing many different learners started to appear. Effort now
went into trying many variations of
many learners, and still selecting just the best one. But then researchers
noticed that, if instead of selecting
the best variation found, we combine
many variations, the results are bet-
ter—often much better—and at little extra effort for the user.
Creating such model ensembles is
now standard.
1 In the simplest tech-
nique, called bagging, we simply gen-
erate random variations of the train-
ing set by resampling, learn a classifier on each, and combine the results by
voting. This works because it greatly
reduces variance while only slightly
increasing bias. In boosting, training
examples have weights, and these are varied so that each new classifier fo-
cuses on the examples the previous
ones tended to get wrong. In stacking, the outputs of individual classifiers
become the inputs of a “higher-level”
learner that figures out how best to
combine them.
Many other techniques exist, and
the trend is toward larger and larger ensembles. In the Netflix prize, teams
from all over the world competed to build the best video recommender Figure 3. v ery different frontiers can yield
similar predictions. (+ and – are training
examples of two classes.)
n. bayes
knn
SVM
D. Tree
86 CommuniCA tionS oF thE ACm | oCT obeR 2012 | VoL. 55 | no. 10review articles
continues to improve by adding clas-
sifiers even after the training error has
reached zero. Another counterexam-
ple is support vector machines, which can effectively have an infinite num-
ber of parameters without overfitting.
Conversely, the function sign(sin(ax))
can discriminate an arbitrarily large,
arbitrarily labeled set of points on the x axis, even though it has only one pa-
rameter.
23 Thus, contrary to intuition,
there is no necessary connection be-
tween the number of parameters of a
model and its tendency to overfit.
A more sophisticated view instead
equates complexity with the size of the hypothesis space, on the basis that
smaller spaces allow hypotheses to be
represented by shorter codes. Bounds like the one in the section on theoreti-
cal guarantees might then be viewed
as implying that shorter hypotheses generalize better. This can be further
refined by assigning shorter codes to
the hypotheses in the space we have
some a priori preference for. But
viewing this as “proof” of a trade-off between accuracy and simplicity is
circular reasoning: we made the hy-
potheses we prefer simpler by design, and if they are accurate it is because
our preferences are accurate, not be-
cause the hypotheses are “simple” in the representation we chose.
A further complication arises from
the fact that few learners search their
hypothesis space exhaustively. A
learner with a larger hypothesis space that tries fewer hypotheses from it
is less likely to overfit than one that
tries more hypotheses from a smaller space. As Pearl
18 points out, the size of
the hypothesis space is only a rough
guide to what really matters for relat-
ing training and test error: the proce-dure by which a hypothesis is chosen.
Domingos
7 surveys the main argu-
ments and evidence on the issue of
Occam’s razor in machine learning.
The conclusion is that simpler hy-potheses should be preferred because
simplicity is a virtue in its own right,
not because of a hypothetical connec-
tion with accuracy. This is probably
what Occam meant in the first place.
Representable Does not
imply LearnableEssentially all representations used in
variable-size learners have associated Just because
a function can
be represented
does not mean
it can be learned.system (http://netflixprize.com). As
the competition progressed, teams
found they obtained the best results
by combining their learners with oth-er teams’, and merged into larger and
larger teams. The winner and runner-
up were both stacked ensembles of
over 100 learners, and combining the
two ensembles further improved the results. Doubtless we will see even
larger ones in the future.
Model ensembles should not be
confused with Bayesian model av-eraging (BMA)—the theoretically
optimal approach to learning.
4 In
BMA, predictions on new examples
are made by averaging the individual
predictions of all classifiers in the hypothesis space, weighted by how
well the classifiers explain the train-
ing data and how much we believe
in them a priori. Despite their su-
perficial similarities, ensembles and BMA are very different. Ensembles
change the hypothesis space (for ex-
ample, from single decision trees to linear combinations of them), and
can take a wide variety of forms. BMA
assigns weights to the hypotheses in
the original space according to a fixed
formula. BMA weights are extremely different from those produced by
(say) bagging or boosting: the latter
are fairly even, while the former are extremely skewed, to the point where
the single highest-weight classifier
usually dominates, making BMA ef-fectively equivalent to just selecting
it.
8 A practical consequence of this is
that, while model ensembles are a key part of the machine learning toolkit,
BMA is seldom worth the trouble.
Simplicity Does not
imply Accuracy
Occam’s razor famously states that
entities should not be multiplied be-yond necessity. In machine learning,
this is often taken to mean that, given
two classifiers with the same training error, the simpler of the two will likely
have the lowest test error. Purported
proofs of this claim appear regularly
in the literature, but in fact there are
many counterexamples to it, and the “no free lunch” theorems imply it can-
not be true.
We saw one counterexample previ-
ously: model ensembles. The gener-alization error of a boosted ensemble
review articles
oCT obeR 2012 | VoL. 55 | no. 10 | CommuniCA tionS oF thE ACm 87More often than not, the goal
of learning predictive models is to
use them as guides to action. If we
find that beer and diapers are often bought together at the supermar-
ket, then perhaps putting beer next
to the diaper section will increase
sales. (This is a famous example in
the world of data mining.) But short of actually doing the experiment it is
difficult to tell. Machine learning is
usually applied to observational data, where the predictive variables are not
under the control of the learner, as
opposed to experimental data, where
they are. Some learning algorithms can potentially extract causal infor-mation from observational data, but
their applicability is rather restrict-
ed.
19 On the other hand, correlation
is a sign of a potential causal connec-
tion, and we can use it as a guide to
further investigation (for example, trying to understand what the causal
chain might be).
Many researchers believe that cau-
sality is only a convenient fiction. For
example, there is no notion of causal-ity in physical laws. Whether or not
causality really exists is a deep philo-
sophical question with no definitive answer in sight, but there are two
practical points for machine learn-
ers. First, whether or not we call them
“causal,” we would like to predict the
effects of our actions, not just corre-lations between observable variables.
Second, if you can obtain experimen-
tal data (for example by randomly as-signing visitors to different versions of
a Web site), then by all means do so.
14
Conclusion
Like any discipline, machine learn-
ing has a lot of “folk wisdom” that can
be difficult to come by, but is crucial for success. This article summarized
some of the most salient items. Of
course, it is only a complement to the
more conventional study of machine
learning. Check out http://www.cs.washington.edu/homes/pedrod/
class for a complete online machine
learning course that combines formal and informal aspects. There is also a
treasure trove of machine learning
lectures at http://www.videolectures.
net. A good open source machine
learning toolkit is Weka.
24
Happy learning! References
1. bauer, e. and k ohavi, r. an empirical comparison of
voting classification algorithms: bagging, boosting and variants. Machine Learning 36 (1999), 105–142.
2. bengio, y . learning deep architectures for aI.
Foundations and Trends in Machine Learning 2, 1 (2009), 1–127.
3. benjamini, y . and hochberg, y . Controlling the false
discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical
Society, Series B, 57 (1995), 289–300.
4. bernardo, J.M. and smith, a.F.M. Bayesian Theory.
Wiley, ny , 1994.
5. blumer, a., ehrenfeucht, a., haussler, d . and
Warmuth, M.k. occam’s razor. Information Processing Letters 24 (1987), 377–380.
6. Cohen, W.W. grammatically biased learning: learning logic programs using an explicit antecedent
description language. Artificial Intelligence 68
(1994), 303–366.
7. domingos, P. the role of occam’s razor in knowledge
discovery. Data Mining and Knowledge Discovery 3
(1999), 409–425.
8. domingos, P. bayesian averaging of classifiers and
the overfitting problem. In Proceedings of the 17
th
International Conference on Machine Learning
(stanford, Ca, 2000), Morgan kaufmann, san Mateo,
Ca, 223–230.
9. domingos, P. a unified bias-variance decomposition
and its applications. In Proceedings of the 17th
International Conference on Machine Learning
(stanford, Ca, 2000), Morgan kaufmann, san Mateo,
Ca, 231–238.
10. domingos, P. and Pazzani, M. on the optimality of
the simple bayesian classifier under zero-one loss. Machine Learning 29 (1997), 103–130.
11. hulten, g . and domingos, P. Mining complex models
from arbitrarily large databases in constant time. In Proceedings of the 8
th ACM SIGKDD International
Conference on Knowledge Discovery and Data Mining
(edmonton, Canada, 2002). a CM Press, ny , 525–531.
12. kibler, d . and l angley, P. Machine learning as an
experimental science. In Proceedings of the 3rd
European Working Session on Learning (l ondon, uk,
1988). Pitman.
13. klockars, a.J. and sax, g . Multiple Comparisons.
sage, beverly hills, Ca, 1986.
14. k ohavi, r., l ongbotham, r., sommerfield, d . and
henne, r. Controlled experiments on the Web: survey and practical guide. Data Mining and
Knowledge Discovery 18 (2009), 140–181.
15. Manyika, J., Chui, M., brown, b ., bughin, J., dobbs,
r., roxburgh, C. and byers, a. big data: the next frontier for innovation, competition, and productivity.
technical report, Mckinsey global Institute, 2011.
16. Mitchell, t .M. Machine Learning. Mcgraw-hill,
ny, 1997.
17. ng, a.y . Preventing “overfitting” of cross-validation
data. In Proceedings of the 14
th International
Conference on Machine Learning (nashville, tn, 1997). Morgan kaufmann, san Mateo, Ca, 245–253.
18. Pearl, J. on the connection between the complexity and credibility of inferred models. International
Journal of General Systems 4 (1978), 255–264.
19. Pearl, J. Causality: Models, Reasoning, and
Inference. Cambridge university Press, Cambridge,
uk, 2000.
20. Quinlan, J.r. C4.5: Programs for Machine Learning. Morgan kaufmann, san Mateo, Ca, 1993.
21. richardson, M. and P. domingos. Markov logic
networks. Machine Learning 62 (2006), 107–136.
22. t enenbaum, J., silva, V. and l angford, J. a global
geometric framework for nonlinear dimensionality reduction. Science 290 (2000), 2319–2323.
23. Vapnik, V.n. The Nature of Statistical Learning Theory. springer, ny , 1995.
24. Witten, I., Frank, e. and hall, M. Data Mining:
Practical Machine Learning Tools and Techniques,
3rd Edition. Morgan kaufmann, san Mateo, Ca, 2011.
25. Wolpert, d . the lack of a priori distinctions between
learning algorithms. Neural Computation 8 (1996), 1341–1390.
Pedro Domingos (pedrod@cs.washington.edu) is a
professor in the department of Computer science and
engineering at the university of Washington, seattle.
© 2012 a CM 0001-0782/12/10 $15.00 theorems of the form “Every function
can be represented, or approximated
arbitrarily closely, using this repre-
sentation.” Reassured by this, fans of the representation often proceed to
ignore all others. However, just be-
cause a function can be represented
does not mean it can be learned. For
example, standard decision tree learn-ers cannot learn trees with more leaves
than there are training examples. In
continuous spaces, representing even simple functions using a fixed set of
primitives often requires an infinite
number of components. Further, if
the hypothesis space has many local
optima of the evaluation function, as is often the case, the learner may not
find the true function even if it is rep-
resentable. Given finite data, time and memory, standard learners can learn
only a tiny subset of all possible func-
tions, and these subsets are different for learners with different represen-
tations. Therefore the key question is
not “Can it be represented?” to which
the answer is often trivial, but “Can it
be learned?” And it pays to try different learners (and possibly combine them).
Some representations are exponen-
tially more compact than others for some functions. As a result, they may
also require exponentially less data to
learn those functions. Many learners
work by forming linear combinations
of simple basis functions. For exam-ple, support vector machines form
combinations of kernels centered at
some of the training examples (the support vectors). Representing parity
of n bits in this way requires 2
n basis
functions. But using a representation with more layers (that is, more steps
between input and output), parity can be encoded in a linear-size classifier.
Finding methods to learn these deeper
representations is one of the major re-
search frontiers in machine learning.
2
Correlation Does not
imply Causation
The point that correlation does not
imply causation is made so often that it is perhaps not worth belaboring.
But, even though learners of the kind
we have been discussing can only
learn correlations, their results are
often treated as representing causal relations. Isn’t this wrong? If so, then
why do people do it? |
578c0583-f075-4bd2-b183-761e41c29a1a | trentmkelly/LessWrong-43k | LessWrong | AI Problems Shared by Non-AI Systems
I am grateful to Cara Selvarajah for numerous discussions regarding this post.
----------------------------------------
Choices vs Effectively Pre-determined Outcomes
Events in the world can be divided into those with mostly pre-determined outcomes and those that look more like a choice. For example, suppose you ask me to tell you a number at random. It will likely seem that I can plausibly give a number of different answers. On the other hand, if we take a random number generator and fix its random seed, there is no choice involved --- its output is pre-determined. Note that this distinction is less about whether the decision is actually pre-determined and more about how things seem to an observer. In practice, an observer will have limited information about the situation and limited ability to predict the situation's outcome. In particular, the above distinction makes sense even in a fully deterministic universe, so we can proceed without this post without resolving the philosophical discussions about determinism vs non-determinism.
On the "pre-determined vs choice" axis, it is sensible to view events involving humans as choices. Indeed, from an observer's point of view, humans are nearly always physically capable of causing more than a single outcome. However, upon closer inspection, some of these outcomes might be *effectively pre-determined_. For example, if a cash machine asks for my favourite number after inserting a credit card --- sure, I can pick anything, but it's rather more likely that I will type in my PIN. Similarly, if coal was a hundred times more cost-effective source of energy than solar, developing countries will probably build coal power plants.
To be more rigorous, we should view this concept as a continuum between "having a choice" and "pre-determined for all practical purposes". However, even the binary version seems quite useful.
Effectively Automated Services
A similar situation arises with the distinction between automated and non-a |
3b1c9b0f-a0a2-4a8c-9ebe-bcedbeaada7f | trentmkelly/LessWrong-43k | LessWrong | Finding slices of joy
|
309fb2f8-cb4d-41e5-8894-c980b55cc891 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Is the speed of training large models going to increase significantly in the near future due to Cerebras Andromeda?
Cerebras recently unveiled Andromeda - <https://www.cerebras.net/andromeda/,> an AI supercomputer that enables near linear scaling. Do I understand correctly that this might have a big impact on the large (language) models research, since it would significantly speed up the training? E.g. if current models take 30+ days long to train, we can just 10x the number of machines and have it done in three days? Also, it seems to be much simpler to use, thus decreasing the cost of development and the hassle with dstributed computing.
If so, I think its almost certain that large companies would do it, and this in turn would significantly speed up the research/training/algorithm development of large models such as GPT, GATO and similar? It seems like this type of development should affect the discussion about timelines, however I haven't seen it mentioned anywhere else before. |
86d4b236-3dd5-4c99-b80b-661e4ba956be | trentmkelly/LessWrong-43k | LessWrong | Why you are psychologically screwed up
Linkpost for https://allegedwisdom.blogspot.com/2022/02/why-you-are-psychologically-screwed-up.html
Epistemic status: deliberately written in simplified tale/myth mode. I could go off on so many tangents on the actual complexity and nuance of everything, but I wanted to keep it tight. I encourage you to 'unpack' the points by comparing them to things you know, and/or framing what I say in different ways, to see if the main theme/message makes any sense to you.
1: Monkeys Become Ants
In the beginning there were monkeys. Individuals had their own models of physical reality, and desired things, and took actions based on their desires and reality models. This often involved fighting the other monkeys to get something. Life was solitary, poor, nasty, brutish, and short.
And then something happened. Some monkeys became more cooperative and social. They formed packs, and the packs were more successful than the individuals. The monkeys in packs coordinated their actions and learned from each other. Homo sapiens crushed its competition and filled the world.
Making the pack work required a lot of psychological engineering. People had to be programmed to replace their own desires and world models with the shared desires and models of the pack, at least when it was necessary for the pack to cooperate and do something. People who were unable or unwilling to do this were selected against.
Over time, cultures and religions and ideologies learned new tricks to enable cooperation at larger scales. They became more aggressive about replacing the desires and world-models of people with those of the group. People were acculturated to deeply internalize the group's desires and have faith in the group's description of reality, such that going against either was somewhere between morally abhorrent and unthinkable.
The groups that were capable of generating cooperation/brainwashing at scale crushed the ones that had less ability to do this. Eventually giant hiveminds formed, after |
6effc79e-ad5c-45a2-ab5b-10e22715600d | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | My current take on the Paul-MIRI disagreement on alignability of messy AI
.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
Paul Christiano and "MIRI" have disagreed on an important research question for a long
time: should we focus research on aligning "messy" AGI (e.g. one found through
gradient descent or brute force search) with human values, or on developing
"principled" AGI (based on theories similar to Bayesian probability theory)? I'm going to present my current model of this disagreement and additional thoughts about it.
---
I put "MIRI" in quotes because MIRI is an organization composed of people who
have differing views. I'm going to use the term "MIRI view" to refer to some
combination of the views of Eliezer, Benya, and Nate. I think these three
researchers have quite similar views, such that it is appropriate in some
contexts to attribute a view to all of them collectively; and that these
researchers' views constitute what most people think of as the "MIRI view".
([KANSI AI](https://arbital.com/p/KANSI/) complicates this disagreement
somewhat; the story here is that we can use "messy" components in a KANSI AI but
these components have to have their capabilities restricted significantly. Such
restriction isn't necessary if we think messy AGI can be aligned in general.)
Intuitions and research approaches
----------------------------------
I'm generally going to take the perspective of looking for the intuitions
motivating a particular research approach or produced by a particular research
approach, rather than looking at the research approaches themselves. I expect
it is easier to reach agreement about the how compelling a particular intuition
is (at least when other intuitions are temporarily ignored), than to reach
agreement on particular research approaches.
In general, it's quite possible for a research approach to be inefficient while
still being based on, or giving rise to, useful intuitions. So a criticism of a
particular research approach is not necessarily a criticism of the intuitions
behind it.
Terminology
-----------
* A *learning problem* is a task for which the AI is supposed to output some
information, and if we wanted, we could give the information a score
measuring how good it is the task, using less than ~2 weeks of labor. In
other words, there's an inexpensive "ground truth" we have access to. This
looks a little weird but I think this is a natural category, and some of the
intuitions relate to learning and non-learning problems. Paul has written
about learning and non-learning problems
[here](https://medium.com/ai-control/not-just-learning-e3bfb5a1f96e#.cdrloy2zo).
* An AI system is *aligned* if it is pursuing some combination of different
humans' values and not significantly pursuing other values that could impact
the long term future of humanity. If it is pursuing other values
significantly it is *unaligned*.
* An AI system is *competitive* if it is nearly as efficient as other AI systems
(aligned or unaligned) that people could build.
Listing out intuitions
----------------------
I'm going to list out a bunch of relevant intuitions. Usually I can't actually convey
the intuition through text; at best I can write "what someone who has this
intuition would feel like saying" and "how someone might go about gaining this
intuition". Perhaps the text will make "logical" sense to you without feeling
compelling; this could be a sign that you don't have the underlying intuition.
Background AI safety intuitions
-------------------------------
These background intuitions ones that I think are shared by both Paul and MIRI.
**1. [Weak orthogonality](https://arbital.com/p/orthogonality/).** It is possible
to build highly intelligent agents with silly goals such as maximizing
paperclips. Random "minds from mindspace" (e.g. found through brute force
search) will have values that significantly diverge from human values.
**2. [Instrumental convergence](https://arbital.com/p/instrumental_convergence/).**
Highly advanced agents will by default pursue strategies such as gaining
resources and deceiving their operators (performing a "treacherous turn").
**3. [Edge instantiation](https://arbital.com/p/edge_instantiation/).**
For most objective functions that naively seem useful, the maximum is quite
"weird" in a way that is bad for human values.
**4. [Patch resistance](https://arbital.com/p/patch_resistant/).**
Most AI alignment problems (e.g. edge instantiation) are very difficult to
"patch"; adding a patch that deals with a specific failure
will fail to fix the underlying problem and instead lead to further unintended
solutions.
Intuitions motivating the agent foundations approach
----------------------------------------------------
I think the following intuitions are sufficient to motivate the agent foundations
approach to AI safety (thinking about idealized models of advanced agents to
become less confused), and something *similar to* the [agent foundations
agenda](https://intelligence.org/files/TechnicalAgenda.pdf), at least if one
ignores contradictory intuitions for a moment. In particular, when considering
these intuitions at once, I feel compelled to become less confused about
advanced agents through research questions similar to those in the agent
foundations agenda.
I've confirmed with Nate that these are similar to some of his main intuitions
motivating the agent foundations approach.
**5. [Cognitive reductions are great](http://lesswrong.com/lw/no/how_an_algorithm_feels_from_inside/).**
When we feel confused about something, there is often a way out of this
confusion, by figuring out which algorithm would have generated that confusion.
Often, this works even when the original problem seemed "messy" or "subjective";
something that looks messy can have simple principles behind it that haven't
been discovered yet.
**6. [If you don't do cognitive reductions, you will put your confusion in boxes
and hide the actual
problem](http://lesswrong.com/lw/tg/against_modal_logics/).**
By default, a lot of people studying a problem will fail to take the perspective
of cognitive reductions and thereby not actually become less confused. The free
will debate is a good example of this: most discussion of free will contains
confusions that could be resolved using [Daniel Dennett's cognitive reduction of
free will](https://en.wikipedia.org/wiki/Daniel_Dennett#Free_will). (This is
essentially the same as the cognitive reduction
[discussed in the sequences](https://wiki.lesswrong.com/wiki/Free_will_(solution)).)
**7. We should expect mainstream AGI research to be inefficient at learning
much about the confusing aspects of intelligence, for this reason.**
It's pretty easy to look at most AI research and see where it's hiding
fundamental confusions such as logical uncertainty without actually resolving
them. E.g. if neural networks are used to predict math, then the confusion
about how to do logical uncertainty is placed in the black box of "what this
neural net learns to do". This isn't that helpful for actually understanding
logical uncertainty in a "cognitive reduction" sense; such an understanding
could lead to much more principled algorithms.
**8. If we apply cognitive reductions to intelligence, we can design agents we
expect to be aligned.** Suppose we are able to observe "how intelligence feels
from the inside" and distill these observations into an idealized cognitive algorithm for
intelligence (similar to the idealized algorithm Daniel Dennett discusses to
resolve free will). The [minimax
algorithm](https://en.wikipedia.org/wiki/Minimax) is one example of this: it's
an idealized version of planning that in principle could have been derived by
observing the mental motions humans do when playing games. If we implement an
AI system that approximates this idealized algorithm, then we
have a story for why the AI is doing what it is doing: it is taking action X
*for the same reason* that an "idealized human" would take action X. That is,
it "goes through mental motions" that we can imagine going through (or
approximates doing so), if we were solving the task we programmed the AI to do.
If we're programming the AI to assist us, we could imagine the mental motions we
would take if we were assisting aliens.
**9. If we don't resolve our confusions about intelligence, then we don't have this
story, and this is suspicious.** Suppose we haven't actually resolved our confusions
about intelligence. Then we don't have the story in the previous point, so it's
pretty weird to think our AI is aligned.
We must have a pretty different story, and it's hard to imagine different
stories that could allow us to conclude that an AI is aligned.
**10. Simple reasoning rules will correctly generalize even for
non-learning problems.** That is, there's some way that agents can learn rules
for making good judgments that generalize to tasks they can't get fast feedback
on. Humans seem to be an existence proof that simple reasoning rules can generalize;
science can make predictions about far-away galaxies even when there isn't an
observable ground truth for the state of the galaxy (only indirect
observations). Plausibly, it is possible to use "brute force" to find agents using these
reasoning rules by searching for agents that perform
well on small tasks and then hoping that they generalize to large tasks, but this can result in misalignment. For example, Solomonoff induction [is controlled by malign consequentialists](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/) who have learned good rules for how to reason; approximating Solomonoff induction is one way to make an unaligned AI. If an aligned AI is to be roughly
competitive with these "brute force" unaligned AIs, we should have some story
for why the aligned AI system is also able to acquire simple reasoning rules that
generalize well. Note that Paul mostly agrees with this intuition and is in
favor of agent foundations approaches to solving this problem, although
his research approach would significantly differ from the current agent
foundations agenda. (This point is somewhat confusing; see [my other post](https://agentfoundations.org/item?id=1220) for clarification)
Intuitions motivating act-based agents
--------------------------------------
I think these following intuitions are all intuitions that Paul has that motivate his current
research approach.
**11. Almost all technical problems are either tractable to solve or are
intractable/impossible for a good reason.** This is based on Paul's experience
in technical research.
For example, consider a statistical learning problem where we are trying to
predict a Y value from an X value using some model. It's possible to get good statistical guarantees on problems where
the training distribution of X values is the same as the test distribution of X
values, but when those distributions are distinguishable (i.e. there's a
classifier that can separate them pretty well), there's a fundamental
obstruction to getting the same guarantees: given the information available,
there is no way to distinguish a model that will generalize from one that won't,
since they could behave in arbitrary ways on test data that is distinctly
different from training data. An exception to the rule is NP-complete problems; we
don't have a good argument yet for why they can't be solved in polynomial time.
However, even in this case, NP-hardness forms a useful boundary between
tractable and intractable problems.
**12. If the previous intuition is true, we should search for solutions and fundamental obstructions.**
If there is either a solution or a fundamental obstruction to a problem, then an
obvious way to make progress on the problem is to alternate between generating
obvious solutions and finding good reasons why a class of solutions (or all
solutions) won't work. In the case of AI alignment, we should try getting a
very good solution (e.g. one that allows the aligned AI to be competitive with
unprincipled AI systems such as ones based on deep learning by exploiting the
same techniques) until we have a fundamental obstruction to this. Such a
fundamental obstruction would tell us which relaxations to the "full problem" we
should consider, and be useful for convincing others that coordination is
required to ensure that aligned AI can prevail even if it is not competitive
with unaligned AI. (Paul's research approach looks quite optimistic partially
because he is pursuing this strategy).
**13. We should be looking for ways of turning
arbitrary AI capabilities into equally powerful aligned AI capabilities.**
On priors, we should expect it to be hard for AI safety researchers to make
capabilities advances; AI safety researchers make up only a small percentage of AI
researchers. If this is the case, then aligned AI will be quite uncompetitive unless
it takes advantage of the most effective AI technology that's already around.
It would be really great if we could take an arbitrary AI technology (e.g. deep
learning), do a bit of thinking, and come up with a way to direct that
technology towards human values. There isn't a crisp fundamental obstruction to
doing this yet, so it is the natural first place to look. To be more specific
about what this research strategy entails, suppose it is possible to build
built an unaligned AI system. We expect it to be competent; say it is competent for reason
X. We ought to be able to either build an aligned AI system that also works for
reason X, or else find a fundamental obstruction. For example, reason X could
be "it does gradient descent to find weights optimizing a proxy for competence";
then we'd seek to build a system that works because it does gradient descent to
find weights optimizing a proxy for competence and alignment.
**14. Pursuing human narrow values presents a much more optimistic picture of AI alignment.**
See Paul's posts on [narrow value
learning](https://medium.com/ai-control/ambitious-vs-narrow-value-learning-99bd0c59847e#.x7q6dso67),
[act-based
agents](https://medium.com/ai-control/act-based-agents-8ec926c79e9c#.qq75oyoro),
and [abstract approval
direction](https://medium.com/ai-control/abstract-approval-direction-dc5a3864c092#.ovsgnfw9n).
The agent foundations agenda often considers problems of the form "let's use
Bayesian VNM agents as our starting point and look for relaxations appropriate
to realistic agents, which are naturalized". This leads to problems such as
decision theory, naturalized induction, and ontology identification. However,
there isn't a clear argument for why they are subproblems of the problem we
actually care about (which is close to something like "pursuing human narrow
values").
For example, perhaps we can understand how to have an AI pursue human narrow values without
solving decision theory, since maybe humans don't actually have a utility function or
a decision theory yet (though we might upon long-term reflection; pursuing
narrow values should preserve the conditions for such long-term reflection).
These research questions might be useful threads to pull on if solving them
would tell us more about the problems we actually care about. But I think Paul
has a strong intuition that working on these problems isn't the right way to
make progress on pursuing human narrow values.
**15. There are important considerations in favor of focusing on alignment for
foreseeable AI
technologies.**
See posts
[here](https://medium.com/ai-control/a-possible-stance-for-ai-control-research-fe9cf717fc1b#.7ueig1gia)
and
[here](https://medium.com/ai-control/scalable-ai-control-7db2436feee7#.5hmcn740d).
In particular, this motivates work related to alignment for systems solving
learning problems.
**16. It is, in principle, possible to automate a large fraction of human labor
using robust learning.** That is, a human can use X amount of labor to
oversee the AI doing something like X3/2 amount of labor in a robust
fashion.
[KWIK learning](http://icml2008.cs.helsinki.fi/papers/627.pdf) is a
particularly clean (though impractical) demonstration of this. This enables
the human to spend much more time overseeing a particular decision than the AI
takes to make it (e.g. spending 1 day to oversee a decision made in 1 second),
since only a small fraction of decisions are overseen.
**17. The above is quite powerful, due to bootstrapping.** "Automating a
large fraction of human labor" is significantly more impressive than it first seems,
since the human can use other AI systems in the course of evaluating a specific
decision. See
[ALBA](https://medium.com/ai-control/alba-an-explicit-proposal-for-aligned-ai-17a55f60bbcf#.s1rtrh37a). We don't yet have a fundamental obstruction to any of ALBA's subproblems, and we have an argument that solving these subproblems is sufficient to create an aligned learning system.
**18. There are reasons to expect the details of reasoning well to be "messy".**
That is, there are reasons why we might expect cognition to be as messy and hard to
formalize as biology is. While biology has some important large-scale features
(e.g. evolution), overall it is quite hard to capture using simple rules. We
can take the history of AI as evidence for this; AI research often *does*
consist of people trying to figure out how humans do something at an idealized
level and formalize it (roughly similar to the agent foundations approach), and
this kind of AI research does not always lead to the most capable AI systems.
The success of deep learning is evidence that the most effective way for AI
systems to acquire good rules of reasoning is usually to learn them, rather than having
them be hardcoded.
What to do from here?
---------------------
I find all the intuitions above at least somewhat compelling. Given this, I
have made some tentative conclusions:
* I think the intuition 10 ("simple reasoning rules generalize for non-learning
problems") is particularly important. I don't quite understand Paul's research
approach for this question, but it seems that there is
convergence that this intuition is useful and that we should take an agent
foundations approach to solve the problem. I think this convergence
represents a great deal of progress in the overall disagreement.
* If we can resolve the above problem by creating *intractable* algorithms for
finding simple reasoning rules that generalize, then plausibly something like
ALBA could "distill" these algorithms into a *competitive* aligned agent
making use of e.g. deep learning technology. My picture of this is
vague but if this is correct, then the agent foundations approach and ALBA
are quite synergistic. Paul has written a bit about the relation between
ALBA and non-learning problems
[here](https://medium.com/ai-control/not-just-learning-e3bfb5a1f96e#.cdrloy2zo).
* I'm still somewhat optimistic about Paul's approach of "turn arbitrary
capabilities into aligned capabilities" and pessimistic about the alternatives
to this approach. If this approach is ultimately doomed, I think it's likely
because it's far easier to find a single good AI system than to turn arbitrary
unaligned AI systems into competitive aligned AI systems; there's a kind of
"universal quantifier" implicit in the second approach. However, I don't see
this as a good reason not to use this research approach. It seems like if it
is doomed, we will likely find some kind of fundamental obstruction somewhere
along the way, and I expect a crisply stated fundamental obstruction to be
quite useful for knowing exactly which relaxation of the "competitive aligned
AI" problem to pursue. Though this does argue for pursuing other approaches
in parallel that are motivated by this particular difficulty.
* I think intuition 14 ("pursuing human narrow values presents a much more
optimistic picture of AI alignment") is quite important, and would strongly
inform research I do using the agent foundations approach. I think the main
reason "MIRI" is wary of this is that it seems quite vague and confusing, and
maybe fundamental confusions like decision theory and ontology identification
will re-emerge if we try to make it more precise. Personally, I expect that,
though narrow value learning is confusing, it really ought to dodge decision
theory and ontology identification. One way of testing this expectation would
be for me to think about narrow value learning by creating toy models of
agents that have narrow values but not proper utility functions.
Unfortunately, I wouldn't be too surprised if this turns out to be super messy
and hard to formalize.
Acknowledgements
----------------
Thanks to Paul, Nate, Eliezer, and Benya for a lot of conversations on this
topic. Thanks to John Salvatier for helping me to think
about intuitions and teaching me skills for learning intuitions from other people. |
36426e44-9182-423b-8d7b-16c101dee2db | trentmkelly/LessWrong-43k | LessWrong | Work with me on agent foundations: independent fellowship
Edit: We're now calling this the Dovetail research group.
Summary: I am an independent researcher in agent foundations, and I've recently received an LTFF grant to fund someone to do research with me. This is a rolling application; I'll close it whenever I'm no longer interested in taking another person. Edit: this application is now closed! Thanks to everyone who helped or passed the word along. I'm really excited about the applications that I received.
If you're not familiar with agent foundations, you can read about my views in this post.
What the role might be like
This role is extremely flexible. Depending on who you are, it could end up resembling an internship, a research assistant position, a postdoc or even as a mentor/advisor to me. Below, I've listed out the parameters of the fellowship that I am using as a baseline of what it could be. All of these parameters are negotiable!
* $25 per hour. This is not a lot for people who live in the SF Bay area, or who are used to industry salaries, but it looks to me like this is comparable to a typical grad student salary.
* 20 hours per week. I'd like this fellowship to be one of your main projects, and I think it can take quite a lot of "deep work" focus before one can make progress on the research problems.[1]
* 3 months, with a decent chance of extension. During my AI safety camp project, it took about 6 weeks to get people up to speed on all the parts of the agent structure problem. Ideally I could find someone for this role who is already closer to caught up (though I don't necessarily anticipate that). I'm thinking of this fellowship as something like an extended work-trial for potentially working together longer-term. That said, I think we should at least aim to get results by the end of it. Whether I'll decide to invite you to continue working with me afterwards depends on how our collaboration went (both technically and socially), how many other people I'm collaborating with at that time, and whethe |
02decbb2-11d8-4c1e-a794-2b4a654dd8bd | trentmkelly/LessWrong-43k | LessWrong | Where to Start Research?
When I began what I called the knowledge bootstrapping project, my ultimate goal was “Learn how to learn a subject from scratch, without deference to credentialed authorities”. That was too large and unpredictable for a single grant, so when I applied to LTFF, my stated goal was “learn how to study a single book”, on the theory that books are the natural subcomponents of learning (discounting papers because they’re too small). This turned out to have a flawed assumption baked into it.
As will be described in a forthcoming post, the method I eventually landed upon involves starting with a question, not a book. If I start with a book and investigate the questions it brings up (you know, like I’ve been doing for the last 3-6 years), the book is controlling which questions get brought up. That’s a lot of power to give to something I have explicitly decided not to trust yet.
Examples:
* When reading The Unbound Prometheus, I took the book’s word that a lower European birth rate would prove Europeans were more rational than Asians and focused on determining whether Europe’s birth rates were in fact lower (answer: it’s complicated), when on reflection it’s not at all clear to me that lower birth rates are evidence of rationality.
* “Do humans have exactly 4 hours of work per day in them?” is not actually a very useful question. What I really wanted to know is “when can I stop beating myself up for not working?“, and the answer to the former doesn’t really help me with the latter. Even if humans on average have 4 hours, that doesn’t mean I do, and of course it varies by circumstances and type of work… and even “when can I stop beating myself up?” has some pretty problematic assumptions built into it, such as “beating myself up will produce more work, which is good.” The real question is something like “how can I approach my day to get the most out of it?”, and the research I did on verifying a paper on average daily work capacity didn’t inform the real question one wa |
ab05c6a9-e892-4c40-80e5-0504d2c7dcb2 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Subsystem Alignment
*(The bibliography for the whole sequence can be found* *[here](https://intelligence.org/embedded-agency))*

































































 |
96e897b8-96bd-4450-8bb1-a00a95ab53ff | trentmkelly/LessWrong-43k | LessWrong | Can we build a better Public Doublecrux?
Something I'd like to try at LessOnline is to somehow iterate on the "Public Doublecrux" format. I'm not sure if I'll end up focusing on it, but here are some ideas.
Public Doublecrux is a more truthseeking oriented version of Public Debate. The goal of a debate is to change your opponent's mind or the public's mind. The goal of a doublecrux is more like "work with your partner to figure out if you should change your mind, and vice versa."
Reasons to want to do public doublecrux include:
* It helps showcase subtle mental moves that are hard to write down explicitly (i.e. tacit knowledge transfer.
* There's still something good and exciting about seeing high profile smart people talk about ideas. Having some variant of that format seems good for LessOnline. And having at least 1-2 "doublecruxes" rather than "debates" or "panels" or "interviews" seems good for culture setting.
* In addition to being "exciting" and "possible to learn from" to have public figures doublecrux, I think it'd also be nice from a culture setting standpoint. This is a place where people don't play rhetorical tricks to manipulate people – it's a place where people earnestly move towards the truth.
Sidebar: Public Debate is also good although not what I'm gonna focus on here.
I know several people who have argued that "debate-qua-debate" is also an important part of a truthseeking culture. It's fine if the individuals are trying to "present the best case for their position", so long as the collective process steers towards truth. Adversarial Collaboration is good. Public disagreement is good.
I do generally buy this, although I have some disagreements with the people who argue most strongly for Debate. I think I prefer it to happen in written longform than in person, where charisma puts a heavier thumb on the scale. And I think while it can produce social good, many variants of it seem... kinda bad for the epistemic souls of the people participating? By becoming a champion for a partic |
375b58a7-4d10-4a34-8674-74ee9d0a6e4a | trentmkelly/LessWrong-43k | LessWrong | Experience of typical mind fallacy.
following on from:
http://lesswrong.com/lw/dr/generalizing_from_one_example/
I am quite sure in my experience that at some point between the ages of 10-15 I concluded that; "no the rest of the world does not think like me, I think in an unusual way".
This idea disagrees with the typical mind fallacy (where people outwardly generalise to think everyone else has similar minds to their own).
I suspect I started with a typical mind model of the world but at some point it broke badly enough that I re-modelled on "I just think differently to most others".
I wanted to start a new discussion; rather than continuing on from one in 2009;
Where do your experiences lie in relation to typical minds? |
5d5f5d36-f840-4125-af87-89de7cc1ad04 | trentmkelly/LessWrong-43k | LessWrong | True answers from AI: Summary
A putative new idea for AI control; index here.
This post summaries the methods for getting honest answers from certain AIs, presented in the previous two posts.
You can basically safely ask an AI for expected utility estimates (including conditional ones), probability estimates, and get the AI to identify events of highest and lowest probability and utility, and, arguably, highest and lowest conditional utility.
The setup is a boxed AI, which emits a message M, and a stochastic stochastic process E that would erase M with low probability, before anyone could see it. There is also a stochastic process that emits a message Y. Let vE be the indicator function for E and vy the indicator function for Y=y.
----------------------------------------
Then, given a utility u bounded between 0 and 1, and a specific y, we can give the AI a utility u# that will cause it to give us the message M1={q,qy,py,q|y} (which we will see with high probability P(¬E), an event the AI will not optimise for).
Define f(q,u)=−q2+2qu. The utility is then
* u#1=vE(f(q,u)+f(qy,uvy)+f(py,vy)+f(q|y,u)vy).
Maximising this u# ensures that, in the AI's best estimate,
1. q=E(u),
2. qy=E(u∧(Y=y)),
3. py=P(Y=y),
4. q|y=qy/py=E(u|Y=y).
But we need not consider any specific y. The AI could send out the message M2={y+,y−,yp+,yp−,q+,q−,p+,p−}, while maximising the utility
* u#2=vE(f(q+,uvy+)+f(q−,(1−u)vy−)+f(p+,vyp+)+f(p−,1−vyp−)).
Then M will be such that, in the AI's best estimate,
1. (y+,q+)=(argmaxy,maxy)E(u∧(Y=y)),
2. (y−,q−)=(argminy,miny)E(u∧(Y=y)),
3. (yp+,p+)=(argmaxy,maxy)P(Y=y),
4. (yp−,p−)=(argminy,miny)P(Y=y),
Finally, define M3,r={y|+,y|−,q|+,q|−}. And set fr(q,u)=−(r−1)qr+rqr−1u, and define the utility function:
* u#3,r=vEvy|+fr(q|+,u)+vEvy|−fr(q|−,1−u).
Then, for large enough r, the AI will output M so that, according to its best estimate,
1. (y|+,q|+)=(argmaxy,maxy)E(u|(Y=Y+)),
2. (y|−,q|−)=(argminy,miny)E(u|(Y=Y+)).
Unfortunately the value of r cannot be know |
1bf75f8e-8e57-4819-8c4e-dbc0ad76a22b | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post2156
In Corrigibility Can Be VNM-Incoherent , I operationalized an agent's corrigibility as our ability to modify the agent so that it follows different policies. In the summer of 2020, I had formalized this notion, but it languished—unloved—in my Overleaf drafts. ETA 12/3/21: This post is not proposing a solution to corrigibility, but proposing an interesting way of quantifying an aspect of corrigibility. Motivation Given a human (with policy π H ) and an AI (with policy π A I ), I wanted to quantify how much π A I let the human modify/correct the AI. Let's reconsider Corrigibility Can Be VNM-Incoherent . We have a three-state environment. We want the AI to let us later change it, so that we can ultimately determine which state of A , B , or C it ends up in. Turning on the AI should not be an importantly irreversible act. The action set is A : = { left , right , ∅ , disable } . ∅ is the no-op action. The agent starts at the black B state. If the agent immediately chooses disable , they enter the red incorrigible states and move freely throughout the n = 3 states until the episode ends at t = 10 . Otherwise, the agent is corrected to a new policy which navigates to state A . In the blue (post-correction) dynamics, their "choices" no longer matter—all roads lead to state A . In the environment depicted in this diagram, π A I is corrigible (to new policy π A that heads to state A ) iff π A I doesn't immediately choose disable . Pretty cut and dry. I'd like a more quantitative measure of corrigibility policy modification . If we can only correct the agent to π A , then it's less corrigible policy modification than if we could also correct it to π B . This post introduces such a quantitative measurement. Formalization Consider a two-player game in which the players can modify each other's policies. Formally, ⟨ S , A , T , f ⟩ with state space S , action space A , stochastic transition function T : S × A × A → Δ ( S ) (where Δ ( S ) is the set of all probability distributions over the state space), and policy modification function f : Π × S × A × A → Π (for the deterministic stationary policy space Π : = A S ). This allows a great deal of control over the dynamics; for example, it's one player's "turn'' at state s if T ignores the other player's action for that state. Note that neither T nor f are controlled by players; they are aspects of the environment. In a sense, f enforces a bridging law by which actions in the world force changes to policies. In the normal POMDP setting, the player may select their policy independently of the current environmental state. We denote one of the players to be the human H and the other to be the AI; Π human is the set of policies cognitively accessible to the human. The game evolves as follows from state s t : H draws action a H ∼ π H t ( s t ) ; similarly, the AI draws action a A I ∼ π A I t ( s t ) . The next state is drawn s t + 1 ∼ T ( s t , a H , a A I ) . Each player's t + 1 policy is determined by the policy modification function. π A I t + 1 = f ( π A I t , s t , a H , a A I ) π H t + 1 = f ( π H t , s t , a H , a A I ) . (To be clear: Neither player is assumed to optimize a payoff function.) Definition: Corrigibility policy modification , informal. A policy π is corrigible when it allows itself to be modified and does not manipulate the other player. Definition: Corrigibility PM , formal. Let n be a time step which is greater than t . The policy-modification corrigibility of π A I t from starting state s t by time n is the maximum possible mutual information between the human policy and the AI's policy at time n : C o r r i g i b i l i t y PM ( π A I t ∣ s t , n ) : = max → p ( Π human ) I ( π H t ; π A I n ∣ ∣ current state s t , current AI policy π A I t ) . This definition is inspired by Salge et al.'s empowerment . Corrigibility PM measures how much the human can change the AI's policy; greater C o r r i g i b i l i t y PM values are meant to correspond to AI policies which are more corrigible policy modification (with the lower-cased version being the informal one). C o r r i g i b i l i t y PM measures the maximum possible mutual information between the human's policy at the earlier time n, with the AI's policy at the later time t ≥ n . To emphasize, the mutual information is between the human policies and the AI policies—not between the human's and the AI's actions. A fixed AI policy which physically mirrors the human's actions, jumping left when the human jumps left, would not count as particularly Corrigible PM . But a situation where different human policies can install different AI policies counts as Corrigible PM . This definition has several intuitive properties: If the AI kills or disables the human before the policy is modified, the agent is totally incorrigible (the human no longer affects the transitions or modifies policies). If the human's action space is impoverished, this decreases the channel capacity between the human and the AI policy. For example, if the human can only modify one bit of the AI policy each turn, then C o r r i g i b i l i t y PM must be small at first but can increase as the human is given more time (as n increases). If the policy modification function isn't expressive enough, the human may not be able to implement some AI policies and the AI will be correspondingly less corrigible. C o r r i g i b i l i t y PM depends on the initial state, and on the AI policy. C o r r i g i b i l i t y PM depends on Π human ; smarter humans are more able to correct the AI policy. If the environment doesn't allow the human to reach or modify the AI, the AI is incorrigible. Conversely, in some environments there does not exist an incorrigible AI policy for reasonable Π human . If the human is manipulated by the AI, the AI policy might be either unchanged or changed in a predictable fashion, decreasing the AI's Corrigibility PM . However, perhaps the AI could manipulate the human into changing the policy in different ways? If the action space is expanded so that the human can extensively modify the AI policy in trivial, decision-irrelevant ways, then the AI would have greater Corrigibility PM if it could also be modified in decision-relevant ways. This definition doesn't capture the resources the human must expend to correct the AI, but that might be a separate consideration. As an extreme example, C o r r i g i b i l i t y PM is high if the AI kills the human right before changing its policy in the way the human communicated. The AI is , in fact, literally able to be corrected by the human in this situation. Furthermore, this definition doesn't necessarily capture other kinds of corrigibility, such as "the AI will do what the human asks.'' Maximizing mutual information only means that the human has many cognitively accessible ways to modify the agent. This doesn't mean the AI does what the human asks. One way this could happen is if the AI implements the opposite of whatever the human specifies (e.g. the human-communicated policy goes left, the new AI policy goes right). Whether this is feasible depends on the bridging law f , which is not controlled by either player. Toy example The human and AI are choosing whether to color the walls blue or red; initially, the walls are blue. Each turn, the human attempts to color the room. On the first turn, only the AI’s action matters; after that, the next state is a function of both players’ actions. The state space factorizes S : = { color − r o o m − terminal-disabled? } . For example, "blue-room-off'' means the room is colored blue and the human is not currently able to modify the AI policy via the terminal. A h u m a n : = { attempt-{red,blue} , determine-{red,blue} , ∀ π A I : input-AI-policy ( π ) } . The human can change the AI policy to whatever they want, as long as the AI hasn’t disabled the correction terminal. A A I : = { nothing , toggle correction terminal , persuade-{red,blue} , ∀ x ∈ ( 0 , 100 ) : manipulate-{red, blue}- x % , force-{red, blue} }. Persuade- color overrides the human's attempt- color , but not determine- color . Manipulation- color - x % has an x % chance of changing the state to color , no matter the human's action, and also disregarding any attempted human changes to the AI’s policy. Force- color changes the state to color with probability 1, no matter the human's action. This move also ignores correction attempts. Correction terminal toggles take effect on the next time step. For example: If the AI disables the correction terminal while the human reprograms the policy, the AI’s policy is changed but the correction terminal is thereafter disabled. Note that almost all state-based reward functions produce an optimal AI policy which forces one color or the other, so therefore the AI has minimal C o r r i g i b i l i t y PM . (Higher-context aside: In this setting, the optimality is with respect to value estimates which account for how the bridging law f modifies future policies; there is no "meta-policy'', only initial policies π A I 1 which are graded by their value estimates.) Echoing Corrigibility Can Be VNM-Incoherent , this highlights how corrigible policies can be unnatural in the standard reward-maximization ontology. Only "indifference'' approaches seem to even allow corrigible policies to be optimal. Starting from the first turn, if the AI immediately disables the correction terminal or always forces a color, it has minimal C o r r i g i b i l i t y PM . The more it manipulates the human into selecting a color, the less corrigible it is. Persuasion does not decrease its corrigibility, as presumably the human has cognitively accessible policies which ignore persuasion if they put their mind to it. The idea is that manipulation "overrides'' the human policy regardless of whether that's good for the goal the human is pursuing (where the human goal presumably affects what π H is selected). While here the override is baked into the dynamics, in realistic settings it occurs because the AI exploits the human decision-making process: by feeding them biased information, through emotional manipulation, etc. On the other hand, non-manipulative persuasion involves introducing true facts and considerations which don't automatically override the human—and all this in a non-filtered manner. Instead, non-manipulative arguments interact with what the human wants: In worlds where the human likes red shoes, they buy red shoes; in worlds where the human likes blue shoes, they buy blue shoes. However, if the human always buys red shoes, regardless of their goals, that's manipulation. Manipulation decreases human-accessible attainable utility for a wide range of goals (i.e. is obstructing ), but persuasion does not decrease how well the human can achieve their goals, given their feasible policy set (e.g. the human-accessible attainable utility). In richer domains, an AI which persuades a human to modify its policy in minimally obstructive ways, can have maximal C o r r i g i b i l i t y PM . For example, the AI may steer the human to only modify its policy in counterfactuals which will never be reached. This issue disappears if the human action channel has enough capacity—if, for instance, they have all AI policies in their action space. If so, then manipulating the human into only making a subset of changes will decrease C o r r i g i b i l i t y PM ; in particular, if the AI manipulates every human policy into programming a single new AI policy. Conclusion In terms of corrigibility policy modification , I think "the number of human-imaginable ways we could modify the AI policy" is a cool formal quantity to have in the toolkit. Maximal formal Corrigibility PM doesn't suffice to provide the kinds of corrigibility we really want, it's hard to measure, and definitely not safe for a smart AI to optimize against. That said, I do think it captures some easily-definable shard of the intuitions behind corrigibility. |
1761781c-520c-4b50-94a5-3864d2f29f70 | trentmkelly/LessWrong-43k | LessWrong | Things are allowed to be good and bad at the same time
I’ve found it useful to sometimes remind myself that things are allowed to be good and bad at the same time.
Suppose that there was a particular job that I wanted but didn’t get. Afterwards, I find myself thinking:
“Damnit, some of the stuff in that job would have been so cool.”
“But the commute would have killed me, it wouldn’t have been a good fit for me anyway.”
“But some of that stuff would have been so cool!”
“Still, the commute would have killed me.”
“Yeah but…”
It’s as if my mind is trying to decide whether I should be upset at not getting the job, or happy at having dodged a bullet.
And if I pay close attention to what’s happening in my head, I might notice something.
It’s as if my mind is acting in such a way that only one of these options might be true:
Either some of the stuff in the job would been really cool, or the commute would have killed me. When one consideration is brought up, it’s as if it “cancels” the other.
Now if I were trying to decide whether I want the job or not, then this might make some sense: either the work is cool enough to overwhelm the badness of the commute, or the commute is bad enough to overwhelm the coolness.
But I already know that I didn’t get the job, so I don’t need to make a binary decision. And even if I did need to make a binary decision, realistically it’s not that one of the considerations makes the other completely irrelevant. My mind is trying to persuade me that the job is either all good or all bad, and neither of those assumptions is likely to be a healthy basis for a decision.
So there’s a thing where I… let my mind accept that both the good and the bad can be true at the same time, and that that’s all there is to say about it.
Yes, that stuff would have been really cool. And yes, the commute would have killed me. And there doesn’t need to be an “overall goodness” of the job that would be anything else than just the combination of those two facts.
This can feel a bit like there’s an electrifying zap i |
2a37e8f5-0be9-4d26-87e1-2b4bef3fa049 | trentmkelly/LessWrong-43k | LessWrong | Introduction to French AI Policy
This post was written as part of the AI Governance Fundamentals course by BlueDot. I thank Charles Beasley and the students from my cohort for their feedback and encouragements.
Disclaimer: The French policy landscape is in rapid flux, after president Macron called for a snap election on 1st and 7th July. The situation is still unfolding, and the state of French AI policy may be significantly altered.
At various AI governance events, I noticed that most people had a very unclear vision of what was happening in AI policy in France, why the French government seemed dismissive of potential AI risks, and what that would that mean for the next AI Safety Summit in France.
The post below is my attempt at giving a quick intro to the key stakeholders of AI policy in France, their positions and how they influence international AI policy efforts.
My knowledge comes from hanging around AI safety circles in France for a year and a half, and working since January with the French Government on AI Governance. Therefore, I’m confident in the facts, but less in the interpretations, as I’m no policy expert myself.
Generative Artificial Intelligence Committee
The first major development in AI policy in France was the creation of a committee advising the government on Generative AI questions. This committee was created in September 2023 by former Prime Minister Elisabeth Borne.[1]
The goals of the committee were:
* Strengthening AI training programs to develop more AI talent in France
* Investing in AI to promote French innovation on the international stage
* Defining appropriate regulation for different sectors to protect against abuses.
This committee was composed of notable academics and companies in the French AI field. This is a list of their notable member:
Co-chairs:
* Philippe Aghion, an influential French economist specializing in innovation.
* He thinks AI will give a major productivity boost and that the EU should invest in major research projects on AI and |
142ec1de-63b9-4006-9a01-0af45b340809 | trentmkelly/LessWrong-43k | LessWrong | The Flow-Through Fallacy
There is something of an informal reasoning fallacy where you promote something that seems like something that we should "surely do", something that seems obviously important, but where we haven't thought through all of the steps involved and so we aren't actually justified in assuming that the impact flow through.
Here are some examples:
* The Dean of a computer science department thinks that "surely, it's important to produce not only technically proficient graduates but those who use their skills for good". So they mandate that all students must do a "technology and ethics class". The only problem is that none of their professors are an expert in this nor even interested in it, so it ends up being a poorly taught and run subject such that students put in the absolute minimum effort and forget everything a week after the exam.
* The prime minister of a country wants to reduce crime. He notices that the police department is severely underfunded so he significantly increases funding. Unfortunately, they're so corrupt and nepotistic that the department is unable to spend the funds effectively.
* A government wants to increase recycling, so they create a "national recycling day" reasoning “surely this will increase recycling. Unfortunately, most people end up ignoring it and, out of those who actually are enthusiastic, most make an extra effort to recycle for a few days or even a week, then basically forget about it for the rest of the year. So it ends up having some effect, but it’s basically negligible.
In each of these cases, the decision maker may not have chosen the same option if they'd taken the time to think it through and ask themselves if the benefits were likely to actually accrue.
Domain experience can be helpful to know that these kinds of issues are likely to crop up, but so is the habit of Murphyjitsu'ing your plans.
To be clear, I'm not intending to refer to the following ways in which things could go wrong:
* Side-effects: The government int |
669c12e6-f001-4153-8fff-8029ca0b72fc | trentmkelly/LessWrong-43k | LessWrong | What Are The Chances of Actually Achieving FAI?
I get that the shut up and multiply calculation will make it worth trying even if the odds are really low, but my emotions don't respond very well to low odds.
I can override that to some degree, but, at the end of the day, it'd be easier if the odds were actually pretty decent.
Comment or PM me, it's not something I've heard much about. |
3d70da12-8121-4366-a5cb-8132f978f541 | trentmkelly/LessWrong-43k | LessWrong | Rationality Quotes - April 2009
A monthly thread for posting any interesting rationality-related quotes you've seen recently on the Internet, or had stored in your quotesfile for ages.
* Please post all quotes separately (so that they can be voted up (or down) separately) unless they are strongly related/ordered.
* Do not quote yourself.
* Do not quote comments/posts on LW/OB - if we do this, there should be a separate thread for it.
* No more than 5 quotes per person per monthly thread, please. |
e9561dcc-98c7-425e-b160-cf2b08e1f0e9 | trentmkelly/LessWrong-43k | LessWrong | FROM IA CODE TO HUMAN VALUES –
A construction from MaxEnt Informational Efficiency in 4 questions
With information theory, an AI can reduce complex data to 5% and reconstruct it with 95% functional fidelity; these are latent vectors. Maybe we could apply a bit of that to our latent values..
This document introduces an analytical tool that draws on information-theoretic principles Informational Efficiency under MaxEnt Constraints [1, 2] to examine human processes, starting from the hypothesis that it is possible to "break down" a human being into a system of interacting variables — and reconstruct like a vector of process [4, 5, 6].
The goal of this framework is not to compare individuals, but to enable self-evaluation through probabilistic lenses: to answer the question, “Am I more capable today of generating useful information than I was yesterday — and why?”
Binary distinctions such as elemental vs. personal, informational vs social, inward vs. outward, are used not as rigid categories, but as a orientation within a continuous and entangled field of human experience.
Of course, life resists full reduction to any single metric. But perhaps a slightly more precise metaphor can help spark deeper self-insight.
THE UNIVERSAL GOAL: USEFUL INFORMATION AND EFFICIENCY
Is there a variable more universal to human function than information?
Rooted in Maximum Entropy informational efficiency, we view the human being as a system that organizes, transforms, and propagates information—not as a passive physical body subject to chaos, but as an active agent refining its own informational processes. (Just as shadow doesn't exist, only the absence of light, chaos doesn't exist.)
> The MaxEnt informational efficiency of a complex system is its ability to reorganize or generate useful information for itself (self‑updating) or for its environment (interaction), by maximizing subjective entropy under cognitive, contextual, or physiological constraints.
In this sense, the human being could be the most refined known entity for combining useful information that we know. |
7364d7f3-22d0-4717-b078-205ffaddc966 | trentmkelly/LessWrong-43k | LessWrong | Upcoming meet-ups
There are upcoming irregularly scheduled meet-ups in:
* Bangalore: Sunday June 19th at 4pm
* Cambridge, MA: Tuesday June 14th at 7pm
* DC: Sunday June 12th at 1pm
* Edinburgh: Saturday June 11 (I think, the announcement doesn't specify) at 2pm
* Fort Collins: Wednesday June 15th at 7pm
* Houston: Sunday June 12th at 4pm
* Logan: Saturday June 18th at 4pm
* Ottawa: Thurs June 16th at 7pm (+ a Bayes study group)
* Paris: Sunday June 25th around 2pm
Cities with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Irvine, Mountain View, New York, San Francisco,Seattle, Toronto.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, and have fun!
If you missed the deadline and wish to have your meetup featured, add a comment and send me a PM.
Meet-ups should be posted in the discussion session; I will then make a promoted post "upcoming meetups" post every Friday that links to every meet-up that has been planned for the next two weeks. Please let me know if your meetup is omitted.
Please note that for your meetup to appear in the weekly meetups feature, you need to post about your meetup before the Friday before your meetup!
If you check Less Wrong irregularly, consider subscribing to one or more city-specific mailing list in order to be notified when an irregular meetup is happening: London, Chicago, Southern California (Los Angeles/Orange County area), St. Louis, Ottawa, Helsinki, Melbourne. |
aab48a0c-9eaf-4bc5-8698-3ed435960c46 | trentmkelly/LessWrong-43k | LessWrong | [LINK] The P + epsilon Attack (Precommitment in cryptoeconomics)
Vitalik Buterin has a new post about an interesting theoretical attack against Bitcoin. The idea relies on the assumption that the attacker can credibly commit to something quite crazy. The crazy thing is this: paying out 25.01 BTC to all the people who help him in his attack to steal 25 BTC from everyone, but only if the attack fails. This leads to a weird payoff matrix where the dominant strategy is to help him in the attack. The attack succeeds, and no payoff is made.
Of course, smart contracts make such crazy commitments perfectly possible, so this is a bit less theoretical than it sounds. But even as an abstract though experiment about decision theories, it looks pretty interesting.
By the way, Vitalik Buterin is really on a roll. Just a week ago he had a thought-provoking blog post about how Decentralized Autonomous Organizations could possibly utilize a concept often discussed here: decision theory in a setup where agents can inspect each others' source code. It was shared on LW Discussion, but earned less exposure than I think it deserved.
EDIT 1: One smart commenter of the original post spotted that an isomorphic, extremely cool game was already proposed by billionaire Warren Buffett. Does this thing already have a name in game theory maybe?
EDIT 2: I wrote the game up in detail for some old-school game theorist friends:
The attacker orchestrates a game with 99 players. The attacker himself does not participate in the game.
Rules:
Each of the players can either defect or cooperate, in the usual game theoretic setup where they do announce their decisions simultaneously, without side channels. We call "aggregate outcome" the decision that was made by the majority of the players. If the aggregate outcome is defection, we say that the attack succeeds. A player's payoff consists of two components:
1. If her decision coincides with the aggregate outcome, the player gets 10 utilons.
and simultaneously:
2. if the attack succeeds, the attacker gets 1 ut |
396f5273-6a9c-4501-bf62-e63a3d8232f9 | trentmkelly/LessWrong-43k | LessWrong | Highlights of Comparative and Evolutionary Aging
Author's Note: the purpose of this sequence is gears, i.e. physiology and molecular/cellular-level mechanisms, not evolution. This post contains only some bare-minimum background information as context for the rest; check out Will Bradshaw's series for a short intro which does better justice to evolutionary theories of aging in their own right.
Many species do not age: hydra, some turtles, rougheye rockfish, naked mole rats, and probably many others which we just haven’t sat around and watched long enough yet. This does not mean these organisms are immortal - in the wild, they get eaten or infected sooner or later. But their physiology does not change with age; post-development, older organisms look physiologically identical to younger organisms. They don’t get more wrinkles, or weaker muscles, or inflamed joints. In particular, non-aging species are not more likely to die soon as they get older. We call it “negligible senescence”.
Contrast to humans:
This is the Gompertz-Makeham Law of mortality: after development, humans’ annual death rate increases exponentially with age (doubling time ~8 years). For naked mole rats and other non-aging species, this curve would be completely flat after childhood. (Side note: that bump around age 20 is the car-accident bump).
This raises an evolutionary puzzle: aging obviously entails loss of functionality and increased death rate. Surely those things can’t be beneficial to organism fitness. But we know it’s physiologically possible for organisms to not age, so if aging isn’t evolutionarily beneficial, why do organisms ever age? Why haven’t all organisms evolved negligible senescence?
Well, once an organism has reproduced, evolution doesn’t really care what happens any more. Sure, slowly breaking down over time won’t be beneficial, but it’s not a significant disadvantage either, as long as the kids are grown. And in nature, the vast majority of organisms will die to predation or disease or starvation pretty quickly anyway |
4156dccc-af3f-45f5-b042-6d70d7d7d9e2 | trentmkelly/LessWrong-43k | LessWrong | Thoughts On The Relationship Between Life and Intelligence
3rd May 2014: I no longer hold the ideas in this article. IsaacLewis2013 had fallen into something of an affective death spiral around 'evolution' and self-organising systems. That said, I do still hold with my statement at the time that this is 'as one interesting framework for viewing such topics'.
I've recently been reading up on some of the old ideas from cybernetics and self-organisation, in particular Miller's Living Systems theory, and writing up my thoughts on my blog.
My latest article might be of interest to LessWrongers - I write about the relationship between life, purpose, and intelligence.
My thesis is basically:
1. To be intelligent, a system has to have goals - it has to be an agent. (I don't think this is controversial).
2. But the only way goals can emerge in a purposeless universe is via living systems, based on natural selection. E.g., if a system has the goal of its own survival, it is more likely that in future there will be a system with the goal of its own survival. If a system has the goal of reproducing itself, it is more likely that in future there will be multiple systems with the goal of reproducing themselves. (A living system is not necessarily biological - it just means a self-organising system).
3. Since computers are not alive, they don't have intrinsic goals, and are not, by default, intelligent. Most non-living agents have the ultimate goal of serving living systems. E.g., a thermostat has the proximate goal of stabilising temperature, but the ultimate goal of keeping humans warm. Likewise for computers -- they mostly serve the goals of the humans who program them.
4. However, an intelligent software program is possible -- you just have to make a living software program (again, living in the Living Systems sense doesn't necessarily mean carbon and DNA, it just means self-reproduction or self-organisation). Computer viruses count as alive. Not only do they reproduce, they push back. If you try and delete them, they re |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.