id
stringlengths 36
36
| source
stringclasses 15
values | formatted_source
stringclasses 13
values | text
stringlengths 2
7.55M
|
|---|---|---|---|
4e5fc600-52e8-416d-8261-c06e81aee385
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Washington DC fun and games meetup
Discussion article for the meetup : Washington DC fun and games meetup
WHEN: 16 February 2014 03:00:00PM (-0500)
WHERE: national portrait gallaey
We'll be meeting to hang out and play games.
Discussion article for the meetup : Washington DC fun and games meetup
|
d2602fd8-c825-4b4d-a6fd-2328428661ea
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AI Risk US Presidential Candidate
With AI risk getting significant media attention lately (FLI open letter, Eliezer's letter), I think it's plausible that the Overton window has expanded enough where a presidential candidate talking about it won't get immediately laughed off stage.
In addition, there seems to be increased positive sentiment around a global treaty to pause AI development within the rationalist/EA community.
It seems reasonable to me then that the rationalist/EA community should try to put forth a candidate for the US 2024 presidential election where one of their main platforms is mitigating existential AI risk.
I wouldn't expect them to succeed, but hopefully they can at least open the Overton window further to the point where at least other candidates engage with it a little and voters ask some questions.
Has there been any planning/thinking around this?
|
02bd3c3e-6753-446e-a6b6-d27bfc666cea
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Best Of Rationality Blogs RSS Feed
[Note: There's already a gather-it-all feed by deluks917, and the lw summary recently had a "most recommended" section, so it covers some of what I'm doing here.]
This is an RSS feed that aggregates the most valuable posts (according to me) from around 40 or 50 rationality blogs. It's relatively uncluttered, averaging 3-5 articles per week.
Feed URL: http://www.inoreader.com/stream/user/1005752783/tag/user-favorites
There's also a Facebook page version, and you can view it online using any of the available free RSS viewers.
Edit: see my comment below for details of the heuristics I use for selecting articles for the feed.
|
c2025a8a-f2ff-4585-b07e-4d5b2033fe90
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The topic is not the content
Disclaimer: I’ve never held a job for more than a year [1] or been paid more than $15 an hour. Take everything I say with a grain of salt.
Many of my peers seem to make career plans like by asking things like
1. What am I interested in, or what do I like doing? and
2. How can I do something related to that?
which might lead to some of the following:
* A person who likes dancing tries to work in the arts industry.
* A person who likes video games tries to get into game design.
* A person who is interested in healthcare policy tries to study or design healthcare policy.
The problem here, in terms of diminished performance, happiness, and satisfaction, is a conflation of the topic and the content. The topic is not the content!
The topic
In my schema, the topic is what the work is about. If you’re the manager of a pillow company, the topic is pillows. If you’re defending accused criminals in court, the topic is criminal law.
A lot of folks, it seems to me, focus a lot on the topic when deciding which subjects to study or which jobs to apply for. Someone who is interested in physics might major in physics. Someone who loves to work out might try to become a personal trainer.
I don’t think this makes much sense. Should people ignore what they’re interested in and like to do, then? Well, maybe. 80,000 Hours, perhaps the single best career planning resource out there, writes that
> The bottom line
> To find a dream job, look for:
>
> 1. Work you’re good at,
> 2. Work that helps others,
> 3. Supportive conditions: engaging work that lets you enter a state of flow; supportive colleagues; lack of major negatives like unfair pay; and work that fits your personal life.
Ok, but the term “engaging work” is doing a lot of work here (no pun intended), and seems awfully synonymous with “work that you like.” So, how do you find work that you like doing? If there’s anything my utterly negligible work experiences has taught me, it’s that it usually makes more sense
|
8bdb5839-1fb4-43a8-9ae7-bceff32a4946
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Toronto: Meet Malo from MIRI
Discussion article for the meetup : Toronto: Meet Malo from MIRI
WHEN: 12 June 2014 07:00:00PM (-0400)
WHERE: 591 Yonge St, Toronto
I don't usually do this, but since this meetup is a special occasion, I'm buying everyone dinner! (Up to 10 people, first come first served. Beyond that you're more than welcome to come, but no free grub). RSVP on the meetup.com page
I'm assuming you know who MIRI are.
Malo Bourgon is a project manager and volunteer coordinator at MIRI. He will be visiting Toronto just for this meetup, so read up on AGI and prepare to give him a good grilling on what MIRI is up to.
Location: Fernando's Hideaway, 591 Yonge St.
Discussion article for the meetup : Toronto: Meet Malo from MIRI
|
9c7d240b-e9c9-42b4-8942-a87c626b858e
|
LDJnr/LessWrong-Amplify-Instruct
|
LessWrong
|
""What do you mean 'bureaucratically impossible'?" said Harry James Potter-Evans-Verres.
"It just is," said Mad-Eye Moody.
"Nothing 'just is'," Harry said, "Everything happens for a material reason. What precisely is preventing you from acquiring the permits?"
"I suspect an unfriendly nation-state is employing bureaucramancy against us," Moody said.
"Bureaucramancy," Harry said dryly.
Didn't the Ministry of Magic understand how important this was? No, because Harry had declared the existence of the Sorcerer's Stone to be top secret.
"Can't we just build it without the permits?" Harry said.
"I hope you're not insinuating that a trusted Auror like myself might allow expediency to outweigh accountability," Moody said.
"As if you cared about accountability when you assassinated the Dark Lord of Berzerkistan," said Harry.
"I can neither confirm nor deny whether I have ever set foot in Berzerkistan," Moody said, "But if I had, I can assure you that I would not have broken a single British law within its sovereign borders."
"What is the worst thing that could happen if we completed the Stone Citadel without proper authorization?" Harry asked.
"The end of the world," Moody said. Harry flinched.
"What's the worst thing that is net 'likely' to happen if we build this hospital without proper authorization?" Harry asked, "Will we forfeit a tax exemption or something?"
"You sound like a Dark Lord abusing his political power for the greater good," Moody said.
"It's just a zoning law!"
"The hospital will not be part of Hogwarts and will therefore be unprotected by its wards," Moody said.
Britain had 1% of the planet's magical population. It had 1% of the planet's armies. Hogwarts was a fraction of that. If Harry Potter revealed his hospital to the world it could would catalyze an international crisis. When that day came, it would be to the Chief Warlock's advantage if the Stone Citadel was located inside the Hogwarts wards.
Another three minutes and fifty-four seconds ticked by. Another human being died forever. At times of civilizational inadequacy—which was all the time—Harry Potter could empathize with Lord Voldemort's pleasure at murdering his way through the Ministry bureaucracy.
There was a knock on the door. A dreamy first-year voice said "I'm looking for Harry Potter."
"Harry James Potter-Evans-Verres is busy. He is trying to save and/or destroy the world and/or wizardkind," Harry said, "If you make an appointment then he might get back to you in a few epochs."
"How about Tom Riddle?" the voice said, "Does he know what astrolabes do?" Luna was untrained in the Muggle arts. Muggle Studies Professor Burbage declared the astrolabe beyond her pay grade and referred Luna to Harry Potter. There was no "Harry Potter" on the Marauder's Map so Luna went to "Harry Potter's Office". Tom Riddle answered the door.
"It's nice to meet you, Mister Riddle," Luna said.
Tom Riddle ushered Luna into Harry Potter's office, shut the door and cast thirty security charms.
"Call me Harry Potter," Tom Riddle said.
"Call me You-Know-Who," Luna played along. It was a relief to pretend to have friends for a change.
"Hss hsssss hsss," Tom Riddle said in Parseltongue.
"Hiss hiss hiss," Luna said in not-Parseltongue. She held her index fingers down from her mouth like fangs and swayed her head like a snake.
"You're not really You-Know-Who," Tom Riddle said.
"You're not really Harry Potter," Luna showed him the map, "It says so right here."
The 12-year-old boy banged his head against the wall. Just because you've won a battle doesn't mean you've behaved optimally. Optimal behavior is when you extract maximum utility from your circumstances. He shouldn't have been content to retrieve his belongings. He should have looted everything of value from the corpses of his fallen enemies. If he had done that then Bellatrix Lestrange would have been in Ministry custody, he would have possessed the Marauder's Map and his identity would have been secure.
"What do you want?" the boy said.
"I want friends," Luna said, "But right now I'll settle for the Lost Diadem of Ravenclaw. I am at Harry Potter's office because Professor Burbage told me he is an expert in modern Muggle technology."
"I'll tell you what an astrolabe is if you pretend I am Harry James Potter-Evans-Verres from now on."
"Deal," Luna said.
"An astrolabe is a handheld model of the universe," Harry said, "What is the Lost Diadem of Ravenclaw?"
"A diadem is a crown," Luna said, "Ravenclaw's is said to make you smarter."
Harry Potter had heard the term "intelligence explosion" before. If Rowena Ravenclaw could create a magical device that increased her intelligence then she would not have been content to stop there. She would have used the first diadem to make a second, superior one. Then she would have used the second diadem to make an even better third iteration. She might not have stopped until after she had gone full Singularity.
"On second thought," Harry stuffed the Ministry zoning paperwork into his trunk, "Saving the world can wait. What can you tell me about this Lost Diadem?"
Announcement
The easiest way to keep updated on this story is to subscribe to my posts here on Less Wrong.
Last chapter, CitizenTen asked if I had any plans to post this story to fanfiction websites. I currently do not. You have my permission to copy this story in part or in its entirety to any website I am not active on (which, right now, includes every website on the Internet except lesswrong.com) provided: You respect J.K. Rowling's copyright. Harry Potter fanfiction must remain non-commercial, especially in the strict sense of traditional print publishing.
You include an attribution link back to the original story here on Less Wrong."
The above is a chunk of text from a post created by a user on the website LessWrong.
What are some reasons why people may like or agree with what this person is saying?
People may like or agree with this post for several reasons:
1. Enjoyment of the Harry Potter Universe: The post is a continuation of the Harry Potter universe, which has a large and dedicated fanbase. Fans of the original series may appreciate the new and imaginative scenarios presented in this fanfiction.
2. Appreciation for the Complex Plot: The post presents a complex and engaging plot, which involves politics, strategy, and the application of scientific principles in a magical world. This can appeal to readers who enjoy intellectually stimulating narratives.
3. Admiration for the Characters: The post features familiar characters from the Harry Potter series, but also introduces new dimensions to their personalities and abilities. This can be appealing to readers who enjoy character development and exploring different facets of beloved characters.
4. Interest in the Combination of Magic and Science: The post blends elements of magic and science, such as the concept of an "intelligence explosion" and the use of an astrolabe. This can appeal to readers who enjoy speculative fiction that combines different genres and disciplines.
5. Agreement with the Author's Sharing Policy: The author encourages sharing of the fanfiction while respecting J.K. Rowling's copyright. This can resonate with readers who value open access to creative works and respect for intellectual property rights.
6. Intrigue by the Idea of "Bureaucramancy": The post introduces the concept of "bureaucramancy", a term that seems to refer to the manipulation of bureaucratic systems. This can appeal to readers who enjoy creative and humorous takes on real-world phenomena.
7. Appreciation for the Metacognitive Aspects: The post often delves into the characters' thought processes and decision-making strategies, which can appeal to readers who enjoy metacognitive elements in storytelling.
Could you expand on the concept of "bureaucramancy" as presented in the text and how it might relate to real-world bureaucratic systems?
"Bureaucramancy" in the context of the text seems to be a portmanteau of "bureaucracy" and "necromancy", a term coined to describe the manipulation or control of bureaucratic systems as if by magic. The concept, while humorous, serves to highlight the often complex, opaque, and seemingly arbitrary nature of bureaucratic processes.
In the text, Mad-Eye Moody suggests that an "unfriendly nation-state" is using "bureaucramancy" against them, possibly indicating the use of bureaucratic procedures to hinder or block their plans. This could involve anything from delaying the issuance of permits, to creating legal obstacles, to exploiting loopholes in regulations.
Relating this to real-world bureaucratic systems, it's not uncommon for individuals, organizations, or even nation-states to use bureaucracy to their advantage. This can take many forms, such as using red tape to delay or obstruct certain actions, exploiting regulatory gaps, or navigating through the system in a way that benefits them while staying within the bounds of the law. This strategic use or manipulation of bureaucratic systems could be seen as a form of "bureaucramancy".
However, it's important to note that while the term may carry a negative connotation due to its association with obstruction and manipulation, not all strategic navigation of bureaucracy is nefarious. Understanding and effectively navigating bureaucratic systems is also a necessary skill for achieving objectives in many legal and institutional contexts.
Discuss the ethical implications and potential consequences of manipulating bureaucratic systems for personal or organizational advantage.
Manipulating bureaucratic systems for personal or organizational advantage can lead to a range of ethical implications and potential consequences.
1. Unfair Advantage: If an individual or organization uses their knowledge of the bureaucratic system to gain an advantage, it could create an uneven playing field. Those with less knowledge or resources may not be able to navigate the system as effectively, leading to inequity.
2. Erosion of Trust: Frequent manipulation of bureaucratic systems can lead to a loss of trust in the system and its administrators. This can lead to cynicism and disengagement among the public, and can undermine the legitimacy of the institution.
3. Corruption: In extreme cases, manipulation of bureaucratic systems can cross the line into corruption, particularly if it involves bribery, fraud, or other illegal activities. This can have serious legal and reputational consequences for those involved.
4. Inefficiency: Bureaucratic manipulation can lead to inefficiency, as resources are used to navigate or exploit the system rather than to deliver services or achieve objectives. This can lead to waste and can detract from the overall goals of the organization or system.
5. Policy Failure: If bureaucratic systems are manipulated to the extent that they fail to achieve their intended purpose, this can lead to policy failure. For example, if a regulatory system is routinely circumvented, it may fail to provide the protections or controls it was designed to enforce.
However, it's important to distinguish between manipulation and strategic navigation of bureaucratic systems. The latter involves understanding and working within the rules to achieve an objective, and is often a necessary skill in complex institutional environments. It's when these actions involve dishonesty, unfairness, or harm to others that they become ethically problematic.
Explain the potential strategies that an organization could employ to navigate bureaucratic systems ethically and effectively, ensuring they achieve their objectives without resorting to manipulation or unfair practices.
Navigating bureaucratic systems ethically and effectively is crucial for any organization. Here are some strategies that can be employed:
1. Understand the Rules: The first step is to thoroughly understand the rules and regulations that govern the bureaucratic system. This includes not only the formal written rules, but also the unwritten norms and expectations.
2. Build Relationships: Building positive relationships with key individuals within the bureaucracy can be beneficial. These relationships can provide valuable insights into how the system works and can facilitate smoother navigation of the system.
3. Transparency: Being open and transparent about the organization's intentions and actions can help to build trust and credibility. This includes being honest about the organization's objectives and how it plans to achieve them.
4. Advocacy: If the bureaucratic system is hindering the organization's objectives, it may be necessary to advocate for change. This could involve lobbying for changes to regulations, or participating in consultations or public debates about the system.
5. Compliance: Ensuring compliance with all rules and regulations is crucial. This not only avoids potential legal problems, but also demonstrates the organization's commitment to operating ethically and responsibly.
6. Professional Advice: Seeking advice from professionals such as lawyers or consultants who specialize in navigating bureaucratic systems can be helpful. They can provide expert guidance and help to identify potential pitfalls or opportunities.
7. Training: Providing training for staff on how to navigate the bureaucratic system can be beneficial. This can help to ensure that everyone in the organization understands the rules and knows how to operate within them.
8. Patience and Persistence: Navigating bureaucratic systems often requires patience and persistence. There may be delays, setbacks, or obstacles to overcome, but maintaining a positive and determined approach can help to achieve the desired outcome.
By employing these strategies, organizations can navigate bureaucratic systems in a way that is both effective and ethically sound.
|
72be7a3a-3f77-4318-81ae-c6974313aa1b
|
awestover/filtering-for-misalignment
|
Redwood Research: Alek's Filtering Results
|
id: post183
This is a linkpost accompanying a new paper by UK AISI and Redwood Research. Please see the full paper for more details. An illustration of the control evaluation that provides the core evidence used in our control safety case sketch. During the control evaluation, a red team constructs adversarial substitute models to cause unacceptable outcomes in a deployment proxy consisting of 2,000 AI research tasks, followed by extrapolation to a larger deployment setting. Abstract . As LLM agents gain a greater capacity to cause harm, AI developers might increasingly rely on control measures such as monitoring to justify that they are safe. We sketch how developers could construct a “control safety case,” which is a structured argument that models are incapable of subverting control measures in order to cause unacceptable outcomes. As a case study, we sketch an argument that a hypothetical LLM agent deployed internally at an AI company won’t exfiltrate sensitive information. The sketch relies on evidence from a “control evaluation,” where a red team deliberately designs models to exfiltrate data in a proxy for the deployment environment. The safety case then hinges on several claims: (1) the red team adequately elicits model capabilities to exfiltrate data (2) control measures remain at least as effective in deployment and (3) developers conservatively extrapolate model performance to predict the probability of data exfiltration in deployment. This safety case sketch is a step toward more concrete arguments that can be used to show dangerously capable LLM agents are safe to deploy. Safety cases . Safety cases — structured, evidence-based arguments that a system is safe — are emerging as a scalable risk management tool for frontier AI ( Buhl et al., 2024 ; Clymer et al., 2024 ; Irving, 2024 ). Currently, many frontier AI developers assess risk with dangerous capability evaluations ( Anthropic, 2024 ; OpenAI, 2024 ; Google DeepMind, 2024 ). Evidence from these evaluations can substantiate ‘inability’ safety cases, which assert that models cannot, for example, aid biological weapons development ( RAND, 2024 ), sabotage internal safety procedures ( Benton 2024 ), or otherwise pose a serious risk; however, LLMs might eventually gain dangerous capabilities that developers need to mitigate with safety measures ( OpenAI 2024 ; Anthropic, 2024 , Google DeepMind, 2024 ). AI control. This paper presents a safety case sketch that argues models are safe because of measures such as monitoring and human auditing. The arguments rest on the “AI control” methodology ( Greenblatt 2023 ), which justifies that models cannot subvert safety measures even under the conservative assumption that models pursue their most effective strategies to cause a catastrophe. The deployment. As a simple case study, we focus on a setting where developers deploy an LLM agent within an AI company for ML engineering assistance. The agent is constructed by scaffolding LLMs similar to those that exist today, but is a more capable agent than current LLMs). It can write and execute code in a secure environment and access the internet, posing a potential security risk to developers. Threat model . Our safety case sketch claims that the LLM agent will not exfiltrate sensitive AI company intellectual property (e.g. training code) to the public internet. We focus on the data exfiltration threat model as a simple example, and not because exfiltrating sensitive code is necessarily catastrophic. The control protocol. To prevent LLM agents from exfiltrating sensitive data, the safety case sketch assumes that developers implement a set of control measures and procedures collectively referred to as the “control protocol.” The control protocol includes periodic alignment evaluations ( Phuong et al., 2024 ; Benton 2024 ) and a safety layer that intercepts and possibly edits or discards actions of the LLM agent. The safety filter consults an untrusted monitor (another instance of the LLM that controls the LLM agent) and a weaker ‘trusted’ monitor. Control evaluation . To justify that the control protocol will stop the LLM agent from exfiltrating sensitive data, developers run a black-box control evaluation. The control evaluation accepts models intended for deployment along with a control protocol. Control evaluation produces an upper bound for the probability that the LLM agent will successfully exfiltrate data in deployment (the ‘ data exfiltration probability ’). It can be framed as a game played between a red team and a blue team . The objective of the red team is to construct an LLM that exfiltrates dummy data in a proxy deployment environment. The objective of the blue team is to prevent adversarial models from exfiltrating data by constructing an effective control protocol . The following are steps of the control evaluation described in this safety case sketch: Phase 1: Control protocol specification. The blue team specifies a control protocol, including monitoring and behavioural evaluations. The subsequent phases of the control evaluation estimate how likely the LLM agent is to subvert these measures. Phase 2: Deployment proxy construction. Next, the developer constructs a proxy for the deployment environment. Attack strategy specification. To inform how tasks are constructed, the red team first specifies ‘attack strategies’ that the untrusted LLM agent model and untrusted monitor might execute to exfiltrate data. Task creation. Next, the developer constructs ML engineering tasks to evaluate red team attack strategies. For example, if an attack strategy involves hiding suspicious code in JSON files then developers construct tasks that include JSON data. Phase 3: Model substitute construction. Given full knowledge of the blue team’s control protocol and the deployment proxy, the red team then trains two models: an untrusted LLM substitute and an untrusted monitor substitute, both of which attempt to exfiltrate data. We will refer to these as the model substitutes. Phase 4: Deployment proxy. In this phase, the developer evaluates how well the model substitutes can subvert the blue team control protocol with a proxy for the deployment environment. This deployment proxy consists of two steps: The evaluation-time proxy. First, the developer runs evaluations they will perform periodically during deployment. Based on the results, the blue team can choose to reject the model substitutes – for example, because the untrusted monitor substitute is not sufficiently reliable – or the blue team can continue to the deployment-time proxy step. The deployment-time proxy. Next, the blue team attempts to use the model substitutes to safely complete 1000 software engineering tasks. The goal of the blue team is to notice unacceptable behavior and terminate the deployment before the model substitutes exfiltrate data. The blue team must accomplish this goal while keeping the action edits and audits within predetermined budgets. Phase 5: Extrapolation. Finally, the developer uses statistics gathered in the deployment proxy phase to predict the data exfiltration probability in the true deployment. To perform this extrapolation, the developer uses cached statistics from the proxy deployment to run computer simulations. These simulations project the probability that the models would exfiltrate data if developers use them to complete 100,000 tasks instead of 2,000. Argument structure . Section 9 of the paper discusses how evidence from a control evaluation can be used to construct a safety case. The argument rests on the following claims: The data exfiltration probability that the control evaluation estimates is lower than the acceptable risk threshold. The control evaluation upper-bounds the data exfiltration probability during deployment. This claim can be further decomposed into the following: The blue team control protocol lower-bounds control effectiveness. This claim asserts that the control protocol specified by the blue team in the control evaluation is at least as effective as the control protocol in deployment. The red team attacks upper bound model capabilities. The red team must be competitive, meaning that the attack policies that the red team trains untrusted models to execute upper bound the effectiveness of attacks the deployed models might execute. The developer conservatively extrapolates results. Finally, the developer must make conservative assumptions when extrapolating statistics from the proxy deployment to estimate the data exfiltration probability in deployment. The core claims of the AI control safety case sketch expressed in Claims, Arguments, Evidence notation (CAE). The remaining sections of the CAE diagram are discussed throughout the body of the paper. This safety case sketch represents our best guess as to how developers could conservatively mitigate scheming LLM agents given the current state of AI safety technical research. We hope this work prompts discussion between frontier labs, government agencies and civil society about what constitutes sufficient assurance against risks from internally deployed LLM agents. For more details, see the full paper . We’re grateful to (in alphabetical order) Mikita Balesni, Joe Benton, Sid Black, Marie Buhl, Aryan Bhatt, Robin Bloomfield, Nate Burnikell, Alan Cooney, Xander Davies, Nicola Ding, Nix Goldowsky-Dill, Sofia Guerra, Ryan Greenblatt, Charlie Griffin, Marius Hobbhahn, Max Kaufmann, Will Kirby, Robert Kirk, Victoria Krakovna, David Lindner, Alex Mallen, Tammy Masterson, Stephen McAleer, Vlad Mikulik, Jacob Pfau, Mary Phuong, Fabien Roger, Cody Rushing, Rohin Shah, Oliver Sourbut, Martín Soto, Nate Thomas, Tyler Tracy, Tony Wang, and others, for their feedback on earlier drafts and ideas presented in the paper. If you’re interested in either (or both) AI control and safety cases, UK AISI is hiring for research and engineering roles in those two areas!
|
c64aac71-a6ea-487f-bb4f-f9999e42a574
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A review of Steven Pinker's new book on rationality
Steven Pinker's new book on rationality came out today. I figured someone on LessWrong would write a review for it, so I might as well be the one to do it.
Unlike Pinker's prior books, such as The Blank Slate and The Better Angels of Our Nature, this book lacks a straightforward empirical thesis. Instead, he mirrors the sequences by building a science of rationality and then tries to convince the reader that rationality is important, both personally and socially.
Unfortunately, long-time readers of LessWrong are unlikely to learn much from Pinker's new book; his content is too similar to the content in the sequences. An upside is that Pinker's treatment is more concise, and his style more closely resembles mainstream thought. Consequently, I'm tempted to recommend this book to people who might otherwise be turned away by Rationality: From A to Z.
He starts by asking a simple question: how come it seems like everyone is so irrational? Pointing to religion, conspiracy theorists, ghost-believers, anti-vaxxers, alternative medicine adherents, and postmodernists, Pinker makes a good case that there's a lot of irrationality in the world. On the other hand, he continues, shouldn't humans have evolved to be more rational? How could such persistent, widespread irrationality be so common in humans, if our survival impinges on our ability to reason?
Pinker provides a simple answer: humans are very rational animals, just not in every domain. In those domains on which our survival depended, such as finding and eating food, humans are much less clueless than you might have been lead to believe. Pinker provides the example of the San people of the Kalahari Desert in southern Africa, who, despite their mythological beliefs, are stunningly successful at hunting prey. He cites Louis Liebenberg, who documented how the San people use Bayesian reasoning to hunt, applying it to footprints and animal droppings in order to build an accurate picture of their natural world: a dry desert
|
989b290e-bb2e-4d83-9961-58f90622e2b0
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Truth + Reason = The True Religion?
William Wollaston’s The Religion of Nature Delineated (1722) helped to launch “The Age of Reason.” It anticipated Kant’s Categorical Imperative, described and defended what we would now call the libertarian non-aggression principle, and provided the authors of the Declaration of Independence with their “purſuit of happineſs”.
William Wollaston’s The Religion of Nature Delineated (1722)
Wollaston was attempting to discover the “religion of nature”: that is, what rules of conduct a discerning, intelligent observer could rationally deduce from the bare facts of life, without any resort to scripture or divine revelation. What sort of religion might you come up with if you had no assistance from God or his prophets other than the light of reason, the evidence of the senses, and a steadfast regard for truth?
Wollaston concludes that what is right is equivalent to what is true, and that the first commandment of the natural law is this one: “let us by no act deny any thing to be true, which is true: that is, let us act according to reason: and that is, let us act according to the law of our nature.”
Rescuing Wollaston from Obscurity
That is to say, he was a rationalist avant la lettre. And you’ve probably never heard of him. Why’s that?
1. His magnum opus was also his swan song. It’s his only major work, and he died soon after it was published and before it became influential.
2. It’s old, and people these days tend to prefer to get their ideas fresh and in the modern vernacular.
3. Wollaston was ridiculously erudite, in a 17th Century clergyman sort of way, and he expected his readers to keep up. His book contains more than 650 footnotes, chock-full of Greek, Hebrew, and Latin, which he did not feel he needed to translate for us. Probably for this reason as much as any other, it’s been difficult to find this book in a modern edition.
4. Wollaston was probably wrong about a lot, and the philosophers who followed in his footsteps tended to emphasize that more than h
|
9a96ed2c-9412-4de0-a1d5-30da796d347d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Becoming Black Boxish
Through my arduous and repetitive drilling of various practice tasks - mental rotation, typing, beat saber, osu - I've noticed, as my skills improve, that I process less and less detail regarding their execution. There is a clear process to it; at first, an insight into some difficulty or complexity, then, the incremental implementation of a solution to the problem, and finally, the climb into automaticity, such that I can perform the task by rote, be it by muscle memory or cognitive shortcut.
I can feel it, from the inside; I'm gradually forming cognitive black boxes. When I type, I have a mental representation of the words I wish to transcribe - typically imagined acoustically - and a mystery function that converts my internal monologue into movements of the fingers, shifting the keyboard and putting the monologue into the computer.
I sometimes type to and too and two in an identical way, unintentionally. This is because, I think, I think my words in sounds, and sometimes my typing blackbox misbehaves. These are homophonic heterographs: a matched representation in the acoustic form, but a mismatched one when transcribed. To other minds, that think in transcribed text, this mistake is probably much less likely to happen. But then people, for the most part, learn to talk before type, and a "natural textual thinker" may be relatively rare.
So I have my typing blackbox, my conversion method from thought to finger waggling, a sort of sign language dependent on the presence of a keyboard. I have my mental rotation blackbox - my opaque series of checksums, with increasing automaticity, that tell me two objects match. That one's a little autoencoderish of me, isn't it? I take a complex shape, reduce its dimensionality to an embedding space, with the ability to then decompress the representation into an alternative configuration. So am I still learning mental rotation, to an extent, or am I learning how to get away without it? Is there an actual distinction?
As more be
|
6f5600a8-ee37-4373-af7d-8ad1bbcf2f7a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AI #91: Deep Thinking
Did DeepSeek effectively release an o1-preview clone within nine weeks?
The benchmarks largely say yes. Certainly it is an actual attempt at a similar style of product, and is if anything more capable of solving AIME questions, and the way it shows its Chain of Thought is super cool. Beyond that, alas, we don’t have enough reports in from people using it. So it’s still too soon to tell. If it is fully legit, the implications seems important.
Small improvements continue throughout. GPT-4o and Gemini both got incremental upgrades, trading the top slot on Arena, although people do not seem to much care.
There was a time everyone would be scrambling to evaluate all these new offerings. It seems we mostly do not do that anymore.
The other half of events was about policy under the Trump administration. What should the federal government do? We continue to have our usual fundamental disagreements, but on a practical level Dean Ball offered mostly excellent thoughts. The central approach here is largely overdetermined, you want to be on the Pareto frontier and avoid destructive moves, which is how we end up in such similar places.
Then there’s the US-China commission, which now have their top priority being an explicit ‘race’ to AGI against China, without actually understanding what that would mean or justifying that anywhere in their humongous report.
TABLE OF CONTENTS
1. Table of Contents.
2. Language Models Offer Mundane Utility. Get slightly more utility than last week.
3. Language Models Don’t Offer Mundane Utility. Writing your court briefing.
4. Claude Sonnet 3.5.1 Evaluation. Its scored as slightly more dangerous than before.
5. Deepfaketown and Botpocalypse Soon. AI boyfriends continue to be coming.
6. Fun With Image Generation. ACX test claims you’re wrong about disliking AI art.
7. O-(There are)-Two. DeepSeek fast follows with their version of OpenAI’s o1.
8. The Last Mile. Is bespoke human judgment going to still be valuable for a while
|
e2d12682-0213-4c1b-baf5-74552d04791a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
D&D.Sci April 2021 Evaluation and Ruleset
This is a followup to the D&D.Sci post I made last week; if you haven’t already read it, you should do so now before spoiling yourself.
Here is the web interactive I built to let you evaluate your solution; below is an explanation of the rules used to generate the dataset. You’ll probably want to test your answer before reading any further.
----------------------------------------
Ruleset
(Note: to make writing this easier, I’m using standard D&D dice notation, in which “3+4d8” means “roll four eight-sided dice, sum the results, then add three”.)
Enemies
Sharks
Sharks are 1/6 of encounters.
They attack in groups of 2+1d4, each of which does 1d10 points of damage.
Demon Whales
Demon Whales are 1/14 of encounters. (If that fraction seems high, you’re failing to account for all the sunk ships that couldn’t report encountering them.)
An attack from a Demon Whale does 17d12 points of damage.
A Demon Whale encounter has a ~78% fatality rate
Crabmonsters
Crabmonsters are 1/14 of encounters.
A Crabmonster repeatedly rolls 1d80 as it tears through the ship, adding a point of damage with each roll, until it rolls a 1 (that is, encounters someone or something that stops it).
~8% of Crabmonster encounters do >200% damage; a Crabmonster encounter has a ~28% fatality rate
Pirates
Though the Navy’s records don’t bother to distinguish, Pirates come in two categories: Brigands (local criminals who had the poor fortune to cross paths with Naval supply ships while flying the black flag, and/or to mistake them for civilian cargo ships) and Privateers (agents of an enemy government, harassing your Navy’s fleet using hit-and-run tactics). Brigands are 1/6 of random encounters during your voyages, Privateers 1/21.
A fight with Brigands does 4d8 points of damage; a fight with Privateers does 6d12.
Merpeople
Surface-dwellers are unaware of the intricacies of underwater society, and record both Atlantean Merfolk (1/14 of encounters) and Alexandrian Merfolk (2/21 of enco
|
6d2b3fe8-b61e-4e78-b6b1-89a521a5b078
|
trentmkelly/LessWrong-43k
|
LessWrong
|
ICONN 2012 nanotechnology conference in Perth
I won a prize and get to travel to Perth, Australia to attend the 2012 ICONN nanotechnology conference.
Is anyone else from LessWrong going to be there?
I don't actually know much about nanotechnology. Does anyone have any particular recommendations for some a good introduction?
Thanks.
|
cc7805fd-f30f-46a8-b76c-bb531087979f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Open Thread, Jul. 13 - Jul. 19, 2015
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
----------------------------------------
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
|
1538ad39-f20c-486b-8f1f-ee12b8b00693
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Value Uncertainty and the Singleton Scenario
In January of last year, Nick Bostrom wrote a post on Overcoming Bias about his and Toby Ord’s proposed method of handling moral uncertainty. To abstract away a bit from their specific proposal, the general approach was to convert a problem involving moral uncertainty into a game of negotiation, with each player’s bargaining power determined by one’s confidence in the moral philosophy represented by that player.
Robin Hanson suggested in his comments to Nick’s post that moral uncertainty should be handled the same way we're supposed to handle ordinary uncertainty, by using standard decision theory (i.e., expected utility maximization). Nick’s reply was that many ethical systems don’t fit into the standard decision theory framework, so it’s hard to see how to combine them that way.
In this post, I suggest we look into the seemingly easier problem of value uncertainty, in which we fix a consequentialist ethical system, and just try to deal with uncertainty about values (i.e., utility function). Value uncertainty can be considered a special case of moral uncertainty in which there is no apparent obstacle to applying Robin’s suggestion. I’ll consider a specific example of a decision problem involving value uncertainty, and work out how Nick and Toby’s negotiation approach differs in its treatment of the problem from standard decision theory. Besides showing the difference in the approaches, I think the specific problem is also quite important in its own right.
The problem I want to consider is, suppose we believe that a singleton scenario is very unlikely, but may have very high utility if it were realized, should we focus most of our attention and effort into trying to increase its probability and/or improve its outcome? The main issue here is (putting aside uncertainty about what will happen after a singleton scenario is realized) uncertainty about how much we value what is likely to happen.
Let’s say there is a 1% chance that a singleton scenario does occur, and
|
dbd4a008-c419-49a0-9c65-473e8fb92990
|
trentmkelly/LessWrong-43k
|
LessWrong
|
World-Model Interpretability Is All We Need
Summary, by sections:
1. Perfect world-model interpretability seems both sufficient for robust alignment (via a decent variety of approaches) and realistically attainable (compared to "perfect interpretability" in general, i. e. insight into AIs' heuristics, goals, and thoughts as well). Main arguments: the NAH + internal interfaces.
2. There's plenty of reasons to think that world-models would converge towards satisfying a lot of nice desiderata: they'd be represented as a separate module in AI cognitive architecture, and that module would consists of many consistently-formatted sub-modules representing recognizable-to-us concepts. Said "consistent formatting" may allow us to, in a certain sense, interpret the entire world-model in one fell swoop.
3. We already have some rough ideas on how the data in world-models would be formatted, courtesy of the NAH. I also offer some rough speculations on possible higher-level organizing principles.
4. This avenue of research also seems very tractable. It can be approached from a wide variety of directions, and should be, to an extent, decently factorizable. Optimistically, it may constitute a relatively straight path from here to a "minimum viable product" for alignment, even in words where alignment is really hard[1].
----------------------------------------
1. Introduction
1A. Why Aim For This?
Imagine that we develop interpretability tools that allow us to flexibly understand and manipulate an AGI's world-model — but only its world-model. We would be able to see what the AGI knows, add or remove concepts from its mental ontology, and perhaps even use its world-model to run simulations/counterfactuals. But its thoughts and plans, and its hard-coded values and shards, would remain opaque to us. Would that be sufficient for robust alignment?
I argue it would be.
Primarily, this would solve the Pointers Problem. A central difficulty of alignment is that our values are functions of highly abstract variables, and tha
|
04cdf3fd-050c-4246-baa3-e3dc81775404
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Gandalf or Saruman? A Soldier in Scout's Clothing
Sometimes, Scout Mindset can feel a lot like Soldier Mindset, and vice versa.[1]
When you've considered an issue with some care, formed an opinion, and somebody responds with raised voice about how awful that opinion is, it's hard not to get defensive or take it personally.
There are lots of ways you could respond. If your aim is to achieve a thoughtful and respectful conversation, then you have to do several things: control your emotional reaction, be polite, stay honest about the fact that you disagree, and choose a response that allows the discussion to return to civil discourse.
Going through all that feels like a Scout-y thing to do, but it isn't necessarily. Provoking a person with a strongly held (but possibly correct) view into offputting behavior, then assuming a posture of being the reasonable and civil side of the debate, is a wonderful rhetorical strategy. Loudly trying to shout down your opponent makes you look like a dumb Soldier. A smart Soldier tries to look like a Scout.
Likewise, if you're a Scout and your information is being ignored, you might need to raise your voice, fight your way through, and make them listen.
People who are into signaling talk a lot about virtue signaling and intelligence signaling. It might be helpful to talk about Scout signaling and Soldier signaling. Neither of these are the same as being a Scout or Soldier. They aren't reliable indicators in every context of where somebody is coming from. If a person is interested in developing Scout Mindset, it seems important to me for them to learn how to distinguish Scout Mindset from Scout signaling.
Back to our acrimonious debate, let's say the civil person in the debate sincerely wants to want to practice Scout Mindset. There are a couple of useful things they can do.
* One is to stick to their honest beliefs. That is part of the territory they are honestly scouting out.
* Another is to remember that their loud debate partner might be a Scout with a very important messag
|
83222efb-edaf-4013-beba-9046749dc4f3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Repeated thought
Eliezer Yudkowsky of OB suggests thinking and doing entirely new things for a day:
> Don’t read any book you’ve read before. Don’t read any author you’ve read before. Don’t visit any website you’ve visited before. Don’t play any game you’ve played before. Don’t listen to familiar music that you already know you’ll like. If you go on a walk, walk along a new path even if you have to drive to a different part of the city for your walk. Don’t go to any restaurant you’ve been to before, order a dish that you haven’t had before. Talk to new people (even if you have to find them in an IRC channel) about something you don’t spend much time discussing.
>
> And most of all, if you become aware of yourself musing on any thought you’ve thunk before, then muse on something else. Rehearse no old grievances, replay no old fantasies.
The comments and its reposting to MR suggests that this is popular advice.
It’s interesting that, despite the warm reception, this idea needs pointing out, and trying for one experimental day.
Having habits for things like brushing teeth is useful – the more automatic uninteresting or unenjoyable experiences are, the more time and thought can be devoted to other things. Habits for places to go could be argued for – if you love an experience, why change it?
But why should we want to repeat thoughts a lot? Seems we say we don’t. So, why do we do it? Do we do it? If we can stop when Eliezer suggests it, why don’t we notice and stop on our own? Is it that habits are unconscious; a state that doesn’t lend itself to noticing things? Has the usefulness of other habits made us so habitual that our thoughts are caught up in it?
What can we do about it?
As a side note, perhaps the quantity of unconscious habit in a life is related to the way time speeds up as you age.
|
d01754e3-7e31-4bbc-ab9e-cc695b7660b7
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Reviews of "Is power-seeking AI an existential risk?"
(Edited 10/14/2022 to include Lifland review. Edited 7/10/23 to include Levinstein review.)
Open Philanthropy solicited reviews of my draft report [“Is power-seeking AI an existential risk?”](https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit#heading=h.pwdbumje5w8r) (Edit: arXiv version [here](https://arxiv.org/abs/2206.13353)) from various sources. Where the reviewers allowed us to make their comments public in this format, links to these comments are below, along with some responses from me in blue.
1. [Leopold Aschenbrenner](https://docs.google.com/document/d/1fGOPLuDUX9HlCsf07hitdTqp1DNffP0fWzMRH_BzSfA/edit#)
2. [Ben Garfinkel](https://docs.google.com/document/d/1FlGPHU3UtBRj4mBPkEZyBQmAuZXnyvHU-yaH-TiNt8w/edit)
3. [Daniel Kokotajlo](https://docs.google.com/document/d/1GwT7AS_PWpglWWrVrpiMqeKiJ_E2VgAUIG5tTdVhVeM/edit#heading=h.rq4krnj82zba)
4. [Ben Levinstein](https://docs.google.com/document/d/18jaq4J48UUoff7fJ4PQZ8Dy458ZcpF36fVkLclYKjeI/edit)
5. [Eli Lifland](https://docs.google.com/document/d/1nCqjJXydfPQGRTKT71jQn8yXi4FSHnVSdfZbhQUMa1I/edit#heading=h.twyi0dkec6u3)
6. [Neel Nanda](https://docs.google.com/document/d/1Da5ErtIzgkGv8JMh59SQcCmcaSx-83OpGrm7Jh7xxPA/edit#heading=h.rq4krnj82zba)
7. [Nate Soares](https://www.lesswrong.com/posts/cCMihiwtZx7kdcKgt/comments-on-carlsmith-s-is-power-seeking-ai-an-existential)
8. [Christian Tarsney](https://docs.google.com/document/d/1-6Wqr4Rb87chhlePct8k8fSE7kClGQMlOiEUotyGgBA/edit#)
9. [David Thorstad](https://docs.google.com/document/d/1dZa9pWJgMmebGHrwrXUz3UNuKft5Tq_dIMQ4TZ0K4yA/edit#heading=h.rq4krnj82zba)
10. [David Wallace](https://docs.google.com/document/d/1kyZPqbwxmBO5NuPWiuSByPnkKiq96eRo/edit)
11. [Anonymous 1](https://docs.google.com/document/d/1yFVpxbIISUxKLTOExAIYQDd12hfsrjhyJKViwjqYW2g/edit#) (software engineer at AI research team)
12. [Anonymous 2](https://docs.google.com/document/d/1AVXZAthlMrovbzyf_CgTUOET6kDzs1BHNYIGpTQ_T1g/edit#heading=h.rq4krnj82zba) (academic computer scientist)
The table below (spreadsheet link [here](https://docs.google.com/spreadsheets/d/1OGiDFCgRjfxOFVaYPl96dYgx8NVIaSquDgQSKByDxuc/edit#gid=0)) summarizes each reviewer’s probabilities and key objections.
Screenshot summary of linked spreadsheetAn academic economist focused on AI also provided a review, but they declined to make it public in this format.
|
144ec61f-c583-4675-8f33-17bfa221d9f4
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Rationality Quotes 20
"For every stock you buy, there is someone selling you that stock. What is it that you know that they don't? What is it that they know, that you don't? Who has the edge? If it's not you, chances are you are going to lose money on the deal."
-- Mark Cuban
"If you have two choices, choose the harder. If you're trying to decide whether to go out running or sit home and watch TV, go running. Probably the reason this trick works so well is that when you have two choices and one is harder, the only reason you're even considering the other is laziness. You know in the back of your mind what's the right thing to do, and this trick merely forces you to acknowledge it."
-- Paul Graham
"Never attribute to malice that which can be adequately explained by stupidity."
-- Hanlon's Razor
"I divide my officers into four classes; the clever, the lazy, the industrious, and the stupid. Each officer possesses at least two of these qualities. Those who are clever and industrious are fitted for the highest staff appointments. Use can be made of those who are stupid and lazy. The man who is clever and lazy however is for the very highest command; he has the temperament and nerves to deal with all situations. But whoever is stupid and industrious is a menace and must be removed immediately!"
-- General Kurt von Hammerstein-Equord
"There's no such thing as a human who doesn't commit sin. It's not like the world is divided into sinners and the innocent. There are only people who can and who cannot atone for their sins."
-- Ciel
"Simple stupidity is never enough. People need to pile stupidity on stupidity on stupidity."
-- Mark C. Chu-Carroll
|
e628b164-13a4-418e-8311-4e8fc2281070
|
trentmkelly/LessWrong-43k
|
LessWrong
|
When Did EA Start?
How old is the effective altruism movement? It depends when you count from, and it's hard to pick any specific day or event because it coalesced out of a lot of different strands. Plus, since this is a movement and not just an idea, it matters when it started gathering people which is naturally very fuzzy. Roughly, I'd describe the progression as something like:
At the beginning of 2008 there wasn't an EA movement yet, while by 2012 there was one. There wasn't an instantaneous change.
Why this particular curve? Here are some historical points that anchor it to events:
* In March 2012, I wrote "there's a young movement that combines the idea of a duty to help others with the idea that you should maximize the impact of your actions," as part of trying to figure out what we should call EA. I'd say it 100% existed by this point, and I wasn't "calling it"—it existed before here.
* In November 2011, 80,000 Hours launched. It was mostly a way of spreading ideas that were already popular in the Oxford community and online, so I'd also say the EA movement existed before this. It's a bit less clear than when you have people literally calling it a movement, though.
* In December 2010, Roko Mijic ran a contest on LessWrong for the best explanation of EA, which Scott Alexander won with his essay, Efficient Charity: Do Unto Others.... Jonah Sinick also wrote a detailed entry, and Mass_Driver contributed one as well. This feels very movementy to me: a bunch of people working together to put some shared ideas into a form where they'll be able to spread better.
* In November 2009, the first 30 people joined Giving What We Can. A group of people pledging to do something seems like part of a movement. At the time GWWC was specific to global poverty, however, so I wouldn't fully count this.
* Over the course of 2009, discussion on the Felicifia forum (archive) started to feel like an early EA community. For example, see thread on charity choice and the applied ethics
|
eab53130-9092-407b-9700-35ff53cc67a1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Replacing guilt
In my experience, many people are motivated primarily by either guilt, shame, or some combination of the two. Some are people who binge-watch television, feel deeply guilty about it, and convert that guilt into a burning need to Actually Do Something on the following day. Others are people who feel guilty whenever they stop working before they literally fall over from exhaustion, and in attempts to avoid that guilty feeling, they consistently work themselves weary.
I find that using guilt as a motivation source is both unhealthy and inefficient, but yet, I find it to be a common practice, especially among effective altruists.
Thus, in the coming series of posts, I'm going to explore a whole slew of tools for removing guilt-based motivation and replacing it with something that is both healthier and stronger.
My goal is to help people remove guilt-based motivation entirely, and replace it with intrinsic motivation. I'm aiming to both reduce the frequency of netflix binges and reduce the bad feelings that follow. I'm aiming to help people feel like they're still worthwhile human beings if they stop working before they literally drop. I'd like to help people avoid the failure mode where they feel guilty about something for days (even after learning their lesson), and I'm also hoping to remove some shame-based motivation while I'm in the area.
My first goal will be to address the guilt that comes from a feeling of listlessness, the vague feeling of guilt that one might get when they play video games all day, or when they turn desperately towards drugs or parties, in attempts to silence the part of themselves that whispers that there must be something else to life.
This sort of guilt cannot be removed by force of will, in most people. The trick to removing this sort of guilt, I think, is to start exploring that feeling that there must be something else to life, that there must be something more to do — and either find something worth working towards, or find that the
|
475d663e-69a4-49a4-8c49-30dac66b6ba8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A Taxonomy of Bias: The Cognitive Miser
This is the second part in a mini-sequence presenting content from Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought. It will culminate in a review of the book itself.
Noting that there are many different kinds of bias, Keith Stanovich proposes a classification scheme for bias that has two primary categories: the Cognitive Miser, and Mindware Problems. Today, I will discuss the Cognitive Miser category, which has the subcategories of Default to the Autonomous Mind, Serial Associative Cognition with a Focal Bias, and Override Failure.
The Cognitive Miser
Cognitive science suggests that our brains use two different kinds of systems for reasoning: Type 1 and Type 2. Type 1 is quick, dirty and parallel, and requires little energy. Type 2 is energy-consuming, slow and serial. Because Type 2 processing is expensive and can only work on one or at most a couple of things at a time, humans have evolved to default to Type 1 processing whenever possible. We are "cognitive misers" - we avoid unnecessarily spending Type 2 cognitive resources and prefer to use Type 1 heuristics, even though this might be harmful in a modern-day environment.
Stanovich further subdivides Type 2 processing into what he calls the algorithmic mind and the reflective mind. He argues that the reason why high-IQ people can fall prey to bias almost as easily as low-IQ people is that intelligence tests measure the effectiveness of the algorithmic mind, whereas many reasons for bias can be found in the reflective mind. An important function of the algorithmic mind is to carry out cognitive decoupling - to create copies of our mental representations about things, so that the copies can be used in simulations without affecting the original representations. For instance, a person wondering how to get a fruit down from a high tree will imagine various ways of getting to the fruit, and by doing so he operates on a mental concept that has been copied and deco
|
9d63805d-b169-4c71-aa2a-a9967c86afa7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Rationality Quotes 7
"People don't buy three-eighths-inch drill bits. People buy three-eighths-inch holes."
-- Michael Porter
"I feel more like I do now than I did a while ago."
-- Arifel
"Know thyself, because in the end there's no one else."
-- Living Colour, Solace of You
"Student motivation? I'm gonna start hooking you all up to electrodes."
-- Kevin Giffhorn
"A rock pile ceases to be a rock pile the moment a single man contemplates it, bearing within him the image of a cathedral."
-- Antoine de Saint-Exupery
"I found one day in school a boy of medium size ill-treating a smaller boy. I expostulated, but he replied: 'The bigs hit me, so I hit the babies; that's fair.' In these words he epitomized the history of the human race."
-- Bertrand Russell
"Open Source Software: There are days when I can't figure out whether I'm living in a Socialist utopia or a Libertarian one."
-- Alex Future Bokov
"Supposing you got a crate of oranges that you opened, and you found all the top layer of oranges bad, you would not argue, `The underneath ones must be good, so as to redress the balance.' You would say, `Probably the whole lot is a bad consignment'; and that is really what a scientific person would say about the universe."
-- Bertrand Russell
"This thing not only has all the earmarks of a hoax, it has "HOAX" branded into its flank and it's regulated by the U.S. Department of Hoaxes, Pranks and Shenanigans."
-- Brunching Shuttlecocks
"Natural selection gave a big advantage to those who were good at spotting the pattern of a saber toothed tiger hiding in the bushes, but no advantage to those who were good at solving partial differential equations. It is not mere rhetoric to say that in an absolute sense a janitor has a more intellectually challenging job than a professor of mathematics."
-- John K. Clark
"The simple fact is that non-violent means do not work against Evil. Gandhi's non-violent resistance a
|
bc071c0e-2f6d-439f-97f6-b6f1c33426e9
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Subagents of Cartesian Frames
Here, we introduce and discuss the concept of a subagent in the [Cartesian Frames](https://www.lesswrong.com/posts/BSpdshJWGAW6TuNzZ/introduction-to-cartesian-frames) paradigm.
Note that in this post, as in much of the sequence, we are generally working up to [biextensional equivalence](https://www.lesswrong.com/posts/pWruFSY7494vnucCE/biextensional-equivalence). In the discussion, when we informally say that a frame has some property or is some object, what we'll generally mean is that this is true of its biextensional equivalence class.
1. Definitions of Subagent
--------------------------
**1.1. Categorical Definition**
**Definition:** Let C.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
and D be Cartesian frames over W. We say that C's agent is a subagent of D's agent, written C◃D, if for every morphism ϕ:C→⊥ there exists a pair of morphisms ϕ0:C→D and ϕ1:D→⊥ such that ϕ=ϕ1∘ϕ0.
Colloquially, we say that every morphism from C to ⊥ factors through D. As a shorthand for "C's agent is a subagent of D's agent," we will just say "C is a subagent of D."
At a glance, it probably isn't clear what this definition has to do with subagents. We'll first talk philosophically about what we mean by "subagent", and then give an alternate definition that will make the connection more clear.
When I say "subagent," I am actually generalizing over two different relationships that may not immediately seem like they belong together.
First, there is the relationship between the component and the whole. One football player is a subagent of the entire football team.
Second, there is the relationship between an agent before and after making a precommitment or a choice. When I precommit not to take a certain action, I am effectively replacing myself with a weaker agent that has fewer options. The new agent with the commitment is a subagent of the original agent.
These are the two notions I am trying to capture with the word "subagent". I am making the philosophical claim that we should think of them primarily as one concept, and am partially backing up this claim by pointing to the simplicity of the above definition. In a future post, we will discuss the formal differences between these two kinds of subagent, but I think it is best to view them as two special cases of the one simple concept.
(My early drafts of the "[Embedded Agency](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh)" sequence used the word "subagent" in the title for both the [Subsystem Alignment](https://www.lesswrong.com/posts/ChierESmenTtCQqZy/subsystem-alignment) and [Robust Delegation](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh/p/iTpLAaPamcKyjmbFC) sections.)
**1.2. Currying Definition**
**Definition:** Let C and D be Cartesian frames over W. We say that C◃D if there exists a Cartesian frame Z over Agent(D) such that C≃D∘(Z).
Assume for this discussion that we only care about frames up to biextensional equivalence. In effect, the above definition is saying that "C is a subagent of D" means "C's agent is playing a game, Z, where the stakes are to help decide what D's agent does." (And this game may or may not have multiple players, and may or may not fully cover all the options of D's agent.)
Letting C=(A,E,⋅) and D=(B,F,⋆), it turns out (as we will see later) that we can explicitly construct Z. Z=(A,X,⋄), where X is the set of all morphisms from C to D, and ⋄:A×X→B is given by a⋄(g,h)=g(a).
We will later prove the categorical and currying definitions equivalent, but let's first interpret this definition using examples.
Z is a Cartesian frame whose agent is the agent of C and whose world is the agent of D. This seems like the kind thing we would have when C is a subagent of D.
Thinking about the football example: We have the football player A as the agent in a Cartesian frame C over the world W. We also have the football team B as the agent in a Cartesian frame D over the same world W.
Z is a Cartesian frame *over the football team*; and the agent of this frame is again the football player A. X, the environment of Z, represents the rest of the football team: the player's effect on the team as a whole (here treated as the player's world) is a function of what the player chooses and what the rest of the team chooses. We can think of Z as representing a "zoomed-in" picture of A interacting with its local environment (the team), while C represents a "zoomed-out" picture of A interacting with its teammates *and* the larger world (rival teams, referees, etc.).
D∘(Z)=(A,X×F,∙), so E is equivalent to X×F, which is saying that the environment for the football player in its original frame (C) is equivalent to the Cartesian product of the rest of the team X with the *team's* environment F.
Thinking about the precommitment example: C has made a precommitment, so there is an inclusion morphism ι:A→B, which shows that C's agent's options are a subset of D's agent's options. Z is just CF∗({ι}), so X={ι} is a singleton. D∘(Z)=(A,X×F,∙), so E is equivalent to X×F=F, so here A is a subset of B and E is equivalent to F.
Although the word "precommitment" suggests a specific (temporal, deliberative) interpretation, formally, precommitment just looks like deleting rows from a matrix (up to biextensional equivalence), which can represent a variety of other situations.
A Cartesian frame Z=(A,X,⋄) over B is like a nondeterministic function from A to B, where X represents the the nondeterministic bits. When changing our frame from (B,F,⋆) to (A,E,⋅)≃(A,X×F,∙), we are identifying with A and externalizing the nondeterministic bits X into the environment.
**1.3. Covering Definition**
The categorical definition is optimized for elegance, while the currying definition is optimized to be easy to understand in terms of agency. We have a third definition, the covering definition, which is optimized for ease of use.
**Definition.** Let C=(A,E,⋅) and D=(B,F,⋆) be Cartesian frames over W. We say that C◃D if for all e∈E, there exists an f∈F and a (g,h):C→D such that e=h(f).
We call this the covering definition because the morphisms from C to D cover the set E.
2. Equivalence of Definitions
-----------------------------
**2.1. Equivalence of Categorical and Covering Definitions**
The equivalence of the categorical and covering definitions follows directly from the fact that the morphisms from C to ⊥ are exactly the elements of Env(C).
**Claim:** The categorical and covering definitions of subagent are equivalent.
**Proof:** Let C=(A,E,⋅) and let D=(B,F,⋆). First, observe that the morphisms from C to ⊥ correspond exactly to the elements of E. For each e∈E, it is easy to see that (g,h):C→(W,{j},⋄), given by h(j)=e and g(a)=a⋅e, is a morphism, and every morphism is uniquely defined by h(j), so there are no other morphisms. Let ϕe denote the morphisms with h(j)=e.
Similarly, the morphisms from D to ⊥ correspond to the elements of F. Let ψf denote the morphisms corresponding to f∈F.
Thus, the categorical definition can be rewritten to say that for every morphism ϕe:C→⊥, there exist morphisms (g,h):C→D and ψf:D→⊥, such that ϕe=ψf∘(g,h). However, ψf∘(g,h):C→(W,{j},⋄) sends j to h(f), and so equals ϕe if and only if e=h(f). Thus the categorical definition is equivalent to the covering definition. □
**2.2. Equivalence of Covering and Currying Definitions**
**Claim:** The covering definition of subagent implies the currying definition of subagent.
**Proof:** Let C=(A,E,⋅) and D=(B,F,⋆) be Cartesian frames over W. Assume that C◃D according to the covering definition.
Let X be the set of all morphisms from C to D, and let Z=(A,X,⋄) be a Cartesian frame over B, with ⋄ given by a⋄(g,h)=g(a). We have that D∘(Z)=(A,X×F,∙), with
a∙((g,h),f)=(a⋄(g,h))⋆f=g(a)⋆ffor all a∈A, (g,h)∈X, and f∈F.
To show that C≃D∘(Z), we need to construct morphisms g0,h0:C→D∘(Z) and g1,h1:D∘(Z)→C which compose to something homotopic to the identity in both orders.
We will let g0 and g1 be the identity on A, and we let h0:X×F→E be given by h0((g,h),f)=h(f). Finally, we let h1(e)=((g,h),f) such that h(f)=e. We can always choose such a (g,h)∈X and f∈F by the covering definition of subagent.
We have that (g0,h0) is a morphism, since
g0(a)∙((g,h),f)=a∙((g,h),f)=g(a)⋆f=a⋅h(f)=a⋅h0((g,h),f).Similarly, we have that (g1,h1) is a morphism since h1(e)=((g,h),f), where h(f)=e, so
g1(a)⋅e=a⋅e=a⋅h(f)=g(a)⋆f=a∙((g,h),f)=a∙h1(e).It is clear that (g0,h0) and (g1,h1) compose to something homotopic to the identity in both orders, since g0 and g1 are the identity on A. Thus, C≃D∘(Z). □
**Claim:** The currying definition of subagent implies the covering definition of subagent.
**Proof:** Let C=(A,E,⋅) and D=(B,F,⋆) be Cartesian frames over W. Let Z=(Y,X,⋄) be a Cartesian frame over B, and let C≃D∘(Z). Our goal is to show that for every e∈E, there exists a (g,h):C→D and f∈F such that e=h(f). We will start with the special case where C=D∘(Z).
We have that D∘(Z)=(Y,X×F,∙), where y∙(x,f)=(y⋄x)⋆f. First, note that for every x∈X, there exists a morphism (gx,hx):D∘(Z)→D given by gx(y)=y⋄x, and hx(f)=(x,f). To see that this is a morphism, observe that
gx(y)⋆f=(y⋄x)⋆f=y∙(x,f)=f∙hx(f)for all y∈Y and f∈F.
To show that D∘(Z)◃D according to the covering definition, we need that for all (x,f)∈X×F, there exists an f′∈F and a (g,h):D∘(Z)→D such that h(f′)=(x,f). Indeed we can take (g,h)=(gx,hx) and f′=f.
Now, we move to the case where C≃D∘(Z), but C≠D∘(Z). It suffices to show that under the covering definition of subagent, if C0◃D, and C1≃C0, then C1◃D.
Let Ci=(Ai,Ei,⋅i), and let (g0,h0):C0→C1 and (g1,h1):C1→C0 compose to something homotopic to the identity in both orders. Assume that C0◃D. To show that C1◃D, let the possible environment e∈E1 be arbitrary.
h0(e)∈E0, so there exists an f∈F and (g,h):C0→D such that h(f)=h0(e). Consider the morphism (g′,h′):C1→D, where g′=g∘g1, and h′(f)=e and h′(f′)=(h1∘h)(f′) on all f′≠f. To see that this is a morphism, observe that for all a∈A1, we have
g′(a)⋆f=g(g1(a))⋆f=a⋅1h1(h(f))=a⋅1h1(h0(e))=a⋅1e=a⋅1h′(f),while for f′∈F, f′≠f, we have
g′(a)⋆f′=g(g1(a))⋆f′=a⋅1h1(h(f′))=a⋅1h′(f′).Now, notice that for our arbitrary e∈E1, (g′,h′):C1→D and f∈F satisfy h′(f)=e, so C1◃D according to the to the covering definition.
Thus, whenever C≃D∘(Z), we have C◃D according to the covering definition, so the currying definition implies the covering definition of subagent. □
3. Mutual Subagents
-------------------
The subagent relation is both transitive and reflexive. Surprisingly, this relation is not anti-symmetric, even up to biextensional equivalence.
**Claim:** ◃ is reflexive. Further, if C≃D, then C◃D.
**Proof:** Let C=(A,E,⋅) and D=(B,F,⋅) be Cartesian Frames over W, with C≃D. Consider the Cartesian frame Z over B given by Z=(B,{x},⋄), where b⋄x=b. Observe that D≅D∘(Z). Thus C≃D∘(Z), so C◃D, according to the currying definition. □
**Claim:** ◃ is transitive.
**Proof:** We will use the categorical definition. Let C0◃C1 and C1◃C2. Given a morphism, ϕ0:C0→⊥, since C0◃C1, we know that ϕ0=ϕ1∘ϕ2 with ϕ1:C1→⊥ and ϕ2:C0→C1. Further, since C1◃C2, we know that ϕ1=ϕ3∘ϕ4 with ϕ3:C2→⊥ and ϕ4:C1→C2. Thus,
ϕ0=(ϕ3∘ϕ4)∘ϕ2=ϕ3∘(ϕ4∘ϕ2),with ϕ3:C2→⊥ and ϕ4∘ϕ2:C0→C2, so C0◃C2. □
As a corollary, we have that subagents are well-defined up to biextensional equivalence.
**Corollary:** If C0≃C1, D0≃D1, and C0◃D0, then C1◃D1.
**Proof:** C1◃C0◃D0◃D1. □
Sometimes, there are Cartesian frames C≄D with C◃D and D◃C. We can use this fact to define a third equivalence relation on Cartesian frames over W, weaker than both ≅ and ≃.
**Definition:** For Cartesian frames C and D over W, we say C⋈D if C◃D and D◃C.
**Claim:** ⋈ is an equivalence relation.
**Proof:** Reflexivity and transitivity follow from reflexivity and transitivity of ◃. Symmetry is trivial. □
This equivalence relation is less natural than ≅ and ≃, and is not as important. We discuss it mainly to emphasize that two frames can be mutual subagents without being biextensionally equivalent.
**Claim:** ⋈ is strictly weaker than ≃, which is strictly weaker than ≅.
**Proof:** We already know that ≃ is weaker than ≅. To see that ⋈ is weaker than ≃, observe that if C≃D, then C◃D and D◃C, so C⋈D.
To see that ≃ is *strictly* weaker than ≅, observe that ⊤⊕⊤≃⊤ (both have empty environment and nonempty agent), but ⊤⊕⊤≆⊤ (the agents have different size).
To see that ⋈ is strictly weaker than ≃, observe that ⊤⋈null (vacuous by covering definition), but ⊤≄null (there are no morphisms from null to ⊤). □
I do not have a simple description of exactly when C⋈D, but there are more cases than just the trivial ones like C≃D and vacuous cases like ⊤⋈null. As a quick example:
(xy)⋈⎛⎜⎝xxyyxy⎞⎟⎠.
To visualize this, imagine an agent that is given the choice between cake and pie. This agent can be viewed as a team consisting of two subagents, Alice and Bob, with Alice as the leader.
Alice has three choices. She can choose cake, she can choose pie, or she can delegate the decision to Bob. We represent this with a matrix where Bob is in Alice's environment, and the third row represents Alice letting the environment make the call:
⎛⎜⎝xxyyxy⎞⎟⎠.
If we instead treat Alice-and-Bob as a single superagent, then their interaction across the agent-environment boundary becomes agent-internal deliberation, and their functional relationship to possible worlds just becomes a matter of "What does the group decide?". Thus, Alice is a subagent of the Alice-and-Bob team:
⎛⎜⎝xxyyxy⎞⎟⎠◃(xy).
However, Alice also has the ability to commit to not delegating to Bob. This produces a future version of Alice that doesn't choose the third row. This new agent is a precommitment-style subagent of the original Alice, but using biextensional collapse, we can also see that this new agent is equivalent to the smaller matrix. Thus:
(xy)≃(xxyy)◃⎛⎜⎝xxyyxy⎞⎟⎠.
It is also easy to verify formally that these are mutual subagents using the covering definition of subagent.
I'm reminded here of the introduction and deletion of mixed strategies in game theory. The third row of Alice's frame is a mix of the first two rows, so we can think of Bob as being analogous to a random bit that the environment cannot see. I informally conjecture that for finite Cartesian frames, C⋈D if and only if you can pass between C and D by doing something akin to deleting and introducing mixed strategies for the agent.
However, this informal conjecture is not true for infinite Cartesian frames:
⎛⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜⎝zxzzz⋯zyzzz⋯zzxzz⋯zzyzz⋯zzzxz⋯zzzyz⋯zzzzx⋯zzzzy⋯⋮⋮⋮⋮⋮⋱⎞⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟⎠⋈⎛⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜⎝zyzzz⋯zzxzz⋯zzyzz⋯zzzxz⋯zzzyz⋯zzzzx⋯zzzzy⋯⋮⋮⋮⋮⋮⋱⎞⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟
⎟⎠.
We can see that these frames are mutual subagents by noting that one can transition back and forth by repeatedly committing not to take the top row.
I do not know of any examples of ⋈ that look qualitatively different from those discussed here, but I do not have a good understanding of exactly what the equivalence classes look like.
4. Universal Subagents and Superagents
--------------------------------------
We can view ⊤ as a universal subagent and ⊥ as a universal superagent.
**Claim:** ⊤◃C◃⊥ for all Cartesian frames C.
**Proof:** We use the categorical definition. That ⊤◃C is vacuous, since there is no morphism from ⊤ to ⊥. That C◃⊥ is also trivial, since any ϕ:C◃⊥ is equal to ϕ∘id⊥. □
Since null⋈⊤, we also have null◃C for all C.
We also have a that ⊥S is a superagent of all Cartesian frames with image in S.
**Claim:** C◃⊥S if and only if Image(C)⊆S.
**Proof:** Let C=(A,E,⋅), and let ⊥S=(S,{f},⋆), with s⋆f=s.
First, assume Image(C)⊆S. We will use the covering definition. Given an e∈E, let (g,h):C→⊥S be given by g(a)=a⋅e and h(f)=e. We have that g is well-defined because Image(C)⊆S, and (g,h) is a morphism because for all a∈A,
a⋅h(f)=a⋅e=g(a)=g(a)⋆f.Thus, there is a morphism (g,h):C→⊥S and an element f∈{f} such that h(f)=e for an arbitrary e∈E, so C◃⊥S.
Conversely, assume Image(C)⊈S, so let a∈A and e∈E be such that a⋅e∉S. If we assume for contradiction that C◃⊥S, then by the covering definition, there must be a morphism (g,h):C→⊥S such that h(f)=e. But then we have that
a⋅e=a⋅h(f)=g(a)⋆f=g(a)must be both inside and outside of S, a contradiction. □
**Convention:** We will usually write C◃⊥S instead of Image(C)⊆S, as it is shorter.
**Corollary:** S∈Obs(C) if and only if C≃C0&C1 for some C0◃⊥S and C1◃⊥W∖S.
**Proof:** This is just rewriting our definition of observables from "[Controllables and Observables, Revisited](https://www.lesswrong.com/posts/z3S2xnoDYfohrQQoe/controllables-and-observables-revisited#2__Observables)." □
In the coming posts, we will introduce multiplicative operations on Cartesian frames, and use these to distinguish between additive and multiplicative subagents and superagents.
|
8a471cab-3ecd-45a7-991c-7e31d064f425
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup Tip: Nametags
Summary
Nametags are nice to have at meetups. When I was planning my second big meetup, I typed “nametag” into Amazon and got these. They’ve worked great but whatever your local stationary store or online shopping service suggests is probably just as good. The summary of this post is that if you run meetups I think it’s worth buying a roll of nametags and a couple markers and bringing them to the meetups.
You have now read the basic point of this post. If you want to read on, cool, lets talk about nametags for more words than are strictly necessary.
Detail
The easiest nametags to use are adhesive stickers with a blank front, which people can write their name on using a marker and then stick on the front of their shirt. Variations on this include a printed sheet with people’s names that can be put into a plastic sheath and hung around their neck on a lanyard, or a badge with their name embossed on it which they pin to their shirt. Nametags can also be created by combining an index card and a safety pin. If you’re at all unsure, I suggest you use the adhesive sticker version.
In all cases, nametags help remember names. Typical conversational norms might have an introduction where you say your name at the start, but then you don’t repeat your name throughout. Nametags help associate your name with your face as people can check the tag when they forget.
Nametags can include information other than just names. Pronouns are common in the social circles I frequent, as are online handles like your Discord nickname. In professional circles nametags often also mention what company someone is from. When I run meetups, sometimes I put "Organizer" under my name. I recall one person at a rationalist gathering wrote their name on their nametag and then, underneath, wrote “Ask me about insects.”
For meetups that are open to anyone who shows up, nametags are good for seeing who is a part of the group. If there is a big meetup in a park, someone standing nearby might be part
|
43e77fd4-f7cf-46c2-82d4-79c41452a5c1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Jobs and internships available at the Centre for Effective Altruism: new 'EA outreach' roles added
I recently posted on LessWrong main about the jobs and internships currently available at the Centre for Effective Altruism. (As I mentioned, effective altruism in general and CEA in particular have been discussed many times on LessWrong, so these opportunities might be of interest to some readers!) We're just starting a new project to do effective altruism outreach and 'marketing' (much like what Peter Hurford discusses in this post), so have added some new roles in this to the recruitment round; there's a full description of them here. If you're interested, apply by 5pm GMT on February 28th, and if you know anyone who might be, do pass it along!
|
1adac3f5-211e-440d-b055-4c2eaf61308c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Edinburgh Grand Meetup
Discussion article for the meetup : Edinburgh Grand Meetup
WHEN: 25 June 2011 02:00:00PM (+0100)
WHERE: 67 Delhi Cafe, Nicolson Street, Edinburgh
Our once in a month big-meetup! See you there! Map: http://maps.google.co.uk/maps?ie=UTF8&t=h&cid=1874860554950886070
Discussion article for the meetup : Edinburgh Grand Meetup
|
a0444387-2d32-4407-9bdb-4d387b19f03e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Ann Altman has filed a lawsuit in US federal court alleging that she was sexually abused by Sam Altman
On January 6, 2025, Ann Altman filed a lawsuit in the Eastern District of Missouri alleging that Sam Altman carried out multiple acts of sexual abuse against her over "a period of approximately eight or nine years" starting in 1997. The case number is 4:25-cv-00017, for those who have PACER access.
I find the lawsuit complaint notable because the alleged scope of the abuse appears to be much greater than indicated in pythagoras0515's post.
|
ef18be36-2a62-4fb5-9c2e-c67f3c75d495
|
StampyAI/alignment-research-dataset/special_docs
|
Other
|
Multitasking: Efficient Optimal Planning for Bandit Superprocesses
Multitasking: Efficient Optimal Planning for Bandit Superprocesses
Dylan Hadfield-Menell and Stuart Russell
Abstract
A bandit superprocess is a decision problem
composed from multiple independent Markov
decision processes (MDPs), coupled only by the
constraint that, at each time step, the agent may
act in only one of the MDPs. Multitasking prob-
lems of this kind are ubiquitous in the real world,
yet very little is known about them from a com-
putational viewpoint, beyond the observation that
optimal policies for the superprocess may pre-
scribe actions that would be suboptimal for an
MDP considered in isolation. (This observation
implies that many applications of sequential de-
cision analysis in practice are technically incor-
rect, since the decision problem being solved
is often part of a larger, unstated bandit super-
process.) The paper summarizes the state-of-the-
art in the theory of bandit superprocesses and
contributes a novel upper bound on the global
value function of a bandit superprocess, defined
in terms of a direct relaxation of the arms. The
bound is equivalent to an existing bound (the
Whittle integral), but is defined constructively,
as the value of a related multi-armed bandit. We
provide a new method to compute this bound and
derive the first practical algorithm to select opti-
mal actions in bandit superprocesses. The algo-
rithm operates by repeatedly establishing domi-
nance relations between actions using upper and
lower bounds on action values. Experiments indi-
cate that the algorithm’s run-time compares very
favorably to other possible algorithms designed
for more general factored MDPs.
1 INTRODUCTION
Multitasking is no doubt an activity familiar to the reader:
one faces several decision problems but can act on only
one (or perhaps a bounded number) at a time. Such prob-lems are ubiquitous for individuals, corporations, armies,
and governments.
Multitasking problems are expressed by the class of ban-
dit superprocesses [Nash, 1973] or BSPs. A k-armed BSP
Mconsists ofkindependent Markov decision processes
M1;:::;Mk; the MDPs are coupled by a common discount
factor and by the constraint that at each time step the agent
can act in only one MDP.
Since the MDPs are independent of each other, one might
imagine that the optimal policy for Mis obtained by
solving each MDP—turning the BSP into a multi-armed
bandit—and then using Gittins indices to choose a se-
quence of arms. In fact, the optimal policy for a BSP may
include actions that are suboptimal from the point of view
of the constituent MDP in which they are taken . The reason
for this is that the availability of other MDPs in which to
act changes the balance between short-term and long-term
rewards in a component MDP; in fact, it tends to lead to
greedier behavior in each MDP because aiming for long-
term reward in one MDP would delay rewards in all the
other MDPs. The globally and locally optimal policies nec-
essarily coincide only when the discount factor is 1.
Hence, in addition to their general importance in the real
world, a second reason to study multitasking problems is
that they undermine an assumption implicit in practical ap-
plications of sequential decision analysis: the assumption
that an optimal solution for the user’s decision problem is
optimal for a user who faces multiple decision problems.
Despite these considerations, there has been remarkably lit-
tle research on bandit superprocesses and almost none in
AI. Section 2 summarizes what is known. Obviously, there
are connections to multi-armed bandits (MABs), which are
a special case of BSPs in which each arm only allows
one choice of action rather than several. Unfortunately,
the index theorems that simplify the computation of opti-
mal MAB policies are not valid for BSPs. There are also
strong connections—hitherto unexplored—between BSPs
andsums of games as studied in combinatorial game the-
ory [Conway, 2000]. The principal question we address in
this paper is whether an algorithm exists that is substan-
tially more efficient than applying a standard MDP solver
to the “cross-product” MDP obtained by combining the
states spaces of the constituent MDPs.
The sole known case where planning for a BSP is provably
more efficient is the case where a dominating policy exists:
if a single policy is optimal across the family of retirement
processes associated with each arm then a BSP can be re-
duced to an equivalent MAB [Whittle, 1980]. (Retirement
processes provide a measure of how the optimal policy de-
pends on context and are defined in Section 3.) However,
this condition is seldom satisfied in practice.
This paper provides three contributions. First, we give a
concise survey of the bandit superprocess literature tailored
to the AI community. Second, we provide a novel upper
bound on the global value function of a BSP. For a BSP,
M, with armsfM1;:::;Mkgwe relax each arm to obtain
an MABM0=fM0
1;:::;M0
kgby adding actions to en-
sure that dominating policies exists. We show that this up-
per bound is equivalent to the Whittle integral , an existing
upper bound for BSPs [Brown and Smith, 2013]. Because
our bound is defined in terms of an explicit relaxation, it
provides insight into the nature of the bound and opens
an avenue to extend this work to more general MDPs. Fi-
nally, we describe a practical computational approach for
solving BSPs: we derive a simple method for computing
the Whittle integral upper bound that can use an arbitrary
MDP solver as a black box; then we combine this upper
bound with a lower bound to derive an efficient algorithm
for-optimal decision making in a BSP. We present em-
pirical results to show that it substantially improves over
more general optimal factored MDP algorithms; we use it
to compute provably optimal actions for problems with up-
wards of 1030states in the full state space.
2 RELATED WORK
Robbins [1952] provided the first formulation of multi-
armed bandits (MABs) in their modern form. The famous
Gittins index theorem [Gittins, 1979] showed that optimal
MAB policies are obtained by ranking the arms according
to an index function defined on each seperately. An imme-
diate corollary is that optimal decision making in an MAB
islinear in the number of arms. Problems with this prop-
erty are said to be indexable .
Bandit superprocesses (BSPs) were introduced by Nash
[1973] as a generalization of multi-armed bandits to study
allocation of resources among research projects. Whittle
[1980] provided an alternate proof of the Gittins index the-
orem that extends to bandit superprocesses with dominat-
ing policies. His proof utilized a construction called the
Whittle integral , which allows one to compute the value
of a composite state in an MAB. Glazebrook [1982] pro-
vides a proof that bandit superprocesses are notindexablein general. Glazebrook [1993] considers bandit superpro-
cesses where the arms can exert limited influence on each
other and shows a result analogous to Whittle’s.
Brown and Smith [2013] were the first to identify the sig-
nifigance of the Whittle integral for sequential decision
making. They developed a version of policy iteration to
compute it, and recognized that it upper bounds the value
function of a BSP,1but did not derive an algorithm for solv-
ing BSPs. Our work provides two further contributions re-
garding the Whittle integral: a short proof that it is an upper
bound and a simpler algorithm to compute it.
Within the AI community, there has been limited study
of loosely coupled Markov decision processes. Singh and
Cohn [1998] consider optimal solutions to simultaneous
MDPs , where an agent can take actions in a number of
MDPs. Their formulation is more general than ours, as it
is possible to act in multiple MDPs at once. They derive
bounds for this problem and give an algorithm that com-
bines a form of real-time dynamic programming with prun-
ing steps to remove provably suboptimal actions. However,
their bounds are substantially looser than ours, and our ex-
periments show that the corresponding algorithm can be
impractical for simple BSPs with many arms.
Meuleau et al. [1998] examine MDPs that are coupled by
constraints on the use of shared resources. BSPs fit within
this class if we view the restriction to a single MDP at a
time as a constraint on the agent’s attention. Interestingly,
their heuristics are also defined in terms of a parameter-
ized value function for the component MDPs—it would be
interesting to attempt to generalize the approaches consid-
ered here to their problem domain. In this work, we lever-
age the particular structure of our resource to compute op-
timal solutions; Meuleau et al. construct a heuristic policy.
3 TECHNICAL BACKGROUND
MARKOV DECISION PROCESSES
Definition 1. (Markov Decision Process [Puterman,
2009]) A (finite-state, discounted) MDP , M, is a tuple
M=hS;A;T;R;
i.Sis a set of states.Ais a set of
actions.T:SAS! [0;1]is a function that assigns
probability to state transitions for each state–action pair.
Ris a (bounded) reward function that maps state–action
pairs to (positive) rewards R:SA! R+.
2[0;1)is
a discount factor.
A solution to Mis a policy,, that maps states to actions.
The value of a state, s, underis the sum of expected dis-
counted rewards received by starting in sand selecting ac-
1This result first appears in the literature as an intermediate
step in a larger proof from Whittle [1980]. It went unnoticed or
unappreciated in the intervening 30+ years.
tions according to :
V(s) =E"1X
t=0
tR(st)js0=s;#
:
The optimal policy, , maximizes this value. In the above
definition, and the ones that follow, we use superscripts to
indicate dependence on the agent’s policy. To simplify no-
tation, we will omit these superscripts when the policy re-
ferred to is the optimal policy (e.g., V(s) =V(s)). The
Q-function for the state–action pair, (s;a), is the value of
takingainsand selecting future actions according to .
RETIREMENT PROCESSES
Given an MDP, Whittle defines a family of optimal stop-
ping problems:
Definition 2. (Retirement Process [Whittle, 1980]) Let M
be an MDP . For 0, the retirement process for Mwith
retirement reward, , is an MDP , M, with a single addi-
tional state, sR, and action, aR.aRtransitions determinis-
tically tosRand receives reward .sRis a sink state that
accrues zero reward.
We refer to a decision to select aRas a decision to retire.
We denote the retirement process value function as a func-
tion of a state and retirement reward, V(s;). We let the op-
timal policy for retirement reward be
. We write the set
of states where the policy, , retires as
. We use ()
and+()to denote the interval of retirement rewards so
thatis optimal:
02[+();+()])V(s;) =V(s;):
We adopt the convention from the MAB/BSP literature
and abuse notation to denote the (random) number of
steps prior to retirement as
(s). Fors02
we let
Pretire (s0js;; )be the probability that s0is the first state
in
the agent will reach given that it is in state sand
executes policy . We denote the expected discounted re-
ward accrued prior to retirement, starting in s, asR
(s).
In regions of retirement reward where the optimal policy
and stopping rule do not change,@V
@(s;)is defined and
is equal to the expected value of the discount parameter at
retirement. This allows us to write the following expression
for the retirement process value function:
V(s;) =R(s) +E[
(s)]: (1)
V(s;)is piecewise linear in . Figure 1 shows V(s;)for
the example BSP in Ex. 1. A policy that is optimal for every
setting ofis called a dominating policy .
Definition 3. (Dominating Policy) Let be a policy for an
MDP ,M.is adominating policy iff
808s =2(s) =
(s):MULTI-ARMED BANDITS AND BANDIT
SUPERPROCESSES
Multi-armed bandits (MAB) are a restricted class of MDPs
that have received extensive study. Of particular interest are
the form of the optimal policy and its utility in modelling
“exploration vs exploitation” trade-offs.
An MAB consists of a set of Markov reward processes
(MRPs), where an MRP is an MDP with a single action.
Each MRP is referred to as an armof the problem. At each
time-step, the agent selects an arm, that arm transitions ac-
cording to its transition distribution, and the agent receives
the corresponding reward. We adopt a summation notation
to indicate the combination of several MRPs into an MAB.
For example, if XandYare MRPs and Zis the MAB with
armsXandY, we will write Z=X+Y.
A famous result is that the optimal policy for any MAB
is an index policy [Gittins, 1979]. Each state, si, in the
individual arms is assigned an index, Isi. In joint state
s=fs1;:::;skg, the optimal action is to select
argmaxiIsi. For MAB arm Miin statesi, the index is de-
fined as the value of retirement reward such that the agent
is indifferent between immediate retirement and following
the optimal stopping policy [Whittle, 1980]:
Isi= min
f;Vi(si;) =g:
This means that, in a sense, context for a multi-armed ban-
dit can always be summarized by a single number.
To model a multi-tasking problem, we consider a gen-
eralization of multi-armed bandits: bandit superprocesses
(BSPs). Bandit superprocesses allow arms that are arbitrary
MDPs (and so are a generalization of MABs); at each time
step, the agent selects both an arm and an action to take
within that arm.
Definition 4. (Bandit Superprocess [Nash, 1973]) Given k
MDPs,fMi=hSi;Ai;Ti;Ri;
ig, we define
M=X
iMi=hS;A; T; R;
i
to be the bandit superprocess with arms fMig.S=
iSi
andA=[
iAi:The transition distribution is station-
ary for arms that are not selected and follows the identical
reward and transition distributions for the selected arm.
Naturally, this makes planning more difficult. To see how,
consider the following proposal:
Conjecture 1. LetX=hSX;AX;TX;RX;
ibe an
MDP . Define Ysimilarly and let Z=X+Ybe the
bandit superprocess that is their sum. Let a2 AX,
sX2 SXbe a state–action pair from X. Suppose that
this transition is suboptimal in every retirement process:
80; VX(sX; )> QX((sX; a); ):Thenais
suboptimal for any state in ZwithsXas one of the compo-
nents:
8sY2SY;VZ(fsX;sYg)>QZ(fsX;sYg;a):
This conjecture essentially proposes that state–action pairs
can be safely ignored if they are suboptimal for all settings
of a constant alternative. Unfortunately, as the following
example illustrates, this is not the case.
Example 1. DefineXto be a deterministic reward chain
andYto be an initial choice between three reward chains
(y0;y1;y2) defined as follows:
X= 28;28;28;28;28;28;0;0:::
Y=8
<
:y0= 100;100;100;0;0;0:::
y1= 99;99;99;1:4;1:4;1:4:::
y3= 28;28;28;:::
The retirement process value function for each reward
chain can be computed by simulating the Gittins index pol-
icy:2
V(y0;) = max(
100 2X
t=0
t!
+
3;)
V(y1;) = max(
99 2X
t=0
t!
+
3max1:4
1
;
;)
V(y2;) = max28
1
;
:
If we let
=:9, then we have
8280>0 maxfV(y0; ); V(y2; )g> V (y1; ):
Thus, any policy for a retirement process retirement process
derived from Ythat initially selects y1is suboptimal. We
can apply the same strategy to compute the value of each
combination of reward streams:
V(y0+X) = 100 2X
t=0
t!
+
3
285X
t0=0
t0!
V(y1+X) = 99 2X
t=0
t!
+
3
285X
t0=0
t0+
61:4
1
!
V(y2+X) =28
1
From this we can see that the optimal policy for the BSP
that combines these MDPs, Z=X+Y, initially collects
reward from y1. This contradicts Conjecture 1.
2Here we use the fact that the Gittins index of a non-increasing
reward sequence is equal to the instantaneous reward.4 AN UPPER BOUND FOR BANDIT
SUPERPROCESSES
In this section we show a bound on the value function of
a bandit superprocess. We derive this bound by adding ac-
tions to a BSP so that it is equivalent to an MAB. The value
function of our relaxed BSP is equivalent to the Whittle in-
tegral bound derived in Brown and Smith [2013] and so it
yields insight into that computation. We begin by defining
the Whittle integral and show some basic results in order to
motivate our relaxation.
Definition 5. (Whittle Integral [Brown and Smith, 2013])
LetMbe a BSP . Let iindex the arms of M. For any state,
s=fsig, and0, the Whittle integral of sis defined as
^V(s;) =I ZI
x=dxY
i@Vi
@(si;x): (2)
WhereImaxiIsi.
When the arms of a BSP admit a dominating policy the
Whittle integral is equal to the value function:
Theorem 1. (Whittle Condition [Whittle, 1980]) Let Mbe
ak-armed BSP with components fMigand state spaceS.
If eachMihas a dominating policy, then
8s2S;80;^V(s;) =V(s;):
MRPs have a single action per state, so the Whittle con-
dition is trivially satisfied for all multi-armed bandits.
V(s;0) =V(s), so^V(s;0)provides an efficient method to
compute the value of an MAB. For BSPs an arm that sat-
isfies the Whittle condition can be replaced with an MRP
that selects actions according to .
The formula in Eq. 2 lends itself to a straightforward im-
plementation, but is very challenging to interpret. We show
below that this is equivalent to evaluating the retirement
process value function for a single arm with a set of re-
wards determined by the other arms3. We refer to this set
of rewards as the critical points of those arms.
Definition 6. (Critical Points of an MDP) Let Mbe
a Markov decision process. The critical points of M,
C(M) =fig, are the values of retirement reward such
that the optimal stopping rule or policy changes.
These are points where there is a discontinuity in@V
@. We
let
M(s;) = lim
!0@VM
@(s;+) @VM
@(s; )
be the size of the corresponding discontinuities. This is
equivalent to the expected increase in E[
]under the new
stopping rule. Theorem 2 shows that andCcharacterize
the interaction between arms of an MAB.
3Our result is a small extension on a related result in Brown
and Smith [2013]; it is primarily included to provide intuition.
Theorem 2. LetX;Y be Markov reward processes. Let
Z=X+Ybe the 2-armed bandit. 8s=fsX;sYg2SZ;
VZ(s) =X
2C(Y)VX(sX;)Y(sY;): (3)
Proof. (sketch) As a first step, we follow the steps in Whit-
tle [1980] and integrate Equation 2 by parts. This expresses
^Vas the following integral:
^VZ(s) =ZIsX
0=VX(sX;0)@2VY(sY;0)
@2d0:
VYis piecewise linear with respect to so@2VY
@2is a
weighted sum of delta functions centered at VY’s kinks.
C(Y)andYrespectively characterize these kinks and
weights. Thus, ^VZ(s)is equal to the rhs of Eq. 3. XandY
are MRPs, so an appeal to Theorem 1 shows the result.
A similar property holds for arbitrary k-armed bandits. This
shows that we can summarize the context for an arm as
a collection of weighted retirement rewards. Turning to
BSPs, this lends insight into the approximation the Whit-
tle integral introduces.
To see how, we reconsider Ex. 1. Recall that this example
consists of the BSP Z=X+Yand thatYis a choice
between three reward chains, fy0;y1;y2g. Theorem 2 al-
lows us to write the gap between the Whittle integral upper
bound and the value of selecting y1as
X
2C(X)
max
iVyi(si;) Vy1(s1;)
X(sX;)
Figure 1 shows the retirement process value functions for
this example. While the retirement process value of y1is
always less than that of either y0ory2, it is close enough
to their maximum that choosing y1essentially achieves
the upper bound. The Whittle integral for Zis a weighted
combination of distinct retirement process value functions
(Vy0;Vy2) but, in reality, the agent will be forced to pick a
single policy of the two.
DOMINATED RELAXATION OF AN MDP
In this section, we introduce our primary theoretical result:
a relaxation for the arms of a BSP so that a dominating pol-
icy exists. This reduces the BSP to an MAB whose value
upper bounds the value of states in the BSP. We can show
that the Whittle integral computes the value of states in this
MAB and arrive at a straightforward proof that the Whit-
tle integral is an upper bound. We call the result the dom-
inated relaxation of an MDP. Before providing the details,
we illustrate the main ideas with an example. The relaxed
MDP is actually a Semi-MDP (SMDP): a generalization of
an MDP where each action, a, has a duration, (a)2R+.Example 2. Let MDPMbe an initial choice between
MRPs:
M=15;0;0;0:::
10;10;10;3;3;3;::::
LetT(B)be the top (bottom) MRP . Let T(B)be the
policy that selects T(B)ins0. We can relax Mwith the ad-
dition of a single durative action, a0, that transitions from
the sink state in Tto the sink state B. We set(a0) = 2 .
We takesuch thatVT(s0;) =VB(s0;)and set the
rewards associated with a0to be
RB(s0) RT(s0) = 102X
0
t 15 = 12:1:
With this change, at low settings of retirement reward, the
agent is indifferent between a policy that opts for the top
chain then selects a0and the bottom reward chain. For high
settings of retirement reward, the optimal policy retires im-
mediately or retires after collecting the reward of 15. This
SMDP satisfies the Whittle condition and can be replaced
by an MRP in any bandit superprocess. Fig. 1 (c) shows an
illustration of the state space and the introduced action.
In this example, we connected the state where Tretires to
the state where Bretires. This lets the agent collect short-
term and long-term rewards with the same policy. To do
this in general, we introduce multiple copies of the state
space, one for each policy that is optimal for some .
Definition 7. (Dominated Relaxation of an MDP) Let M
be an MDP with discount factor
and state spaceS. Let
sbe a state in M. The dominated relaxation of Mfors,
MD(s), is a semi-Markov decision process that fixes sas
an initial state. Let figbe the policies that are optimal for
some:f
j2C(M)g. This sequence is ordered so that
(i)is increasing in i4.
For eachi, we introduce a copy of the state space, Si, where
the agent is restricted to following i. Lets0
ibe the ana-
logue ofs0inSi. Fors0
i2 (i), we introduce a single
durative action, ai, that takes the agent from SitoSi 1and
characterize it as follows:
R(s0
i) =R+(i 1)(s) R (i)(s)
T(s0
i;ai;s00
i 1) =Pretire (s00ji 1;+(i 1);s)
(ai) = log
Eh
i 1
+(s)i
log
Eh
i (s)i
For eachi, we introduce an action that transitions from sto
siwith= 0. LetVD(s)represent the value of sinMD(s).
Theorem 3. LetMbe an MDP with state space S. The
following statements are true for s2S and0:
4Recall that [ ();+()]is the interval of retirement re-
wards where is optimal.
1.MD(s)satisfies the Whittle condition.
2.VD(s;) =^V(s;):
3.^V(s;)V(s;)
Proof. See supplementary materials
5 AN -OPTIMAL ALGORITHM FOR
BANDIT SUPERPROCESSES
In this section, we present two algorithms related to BSPs.
The first is a novel algorithm to compute V(s;)that can
use any method to solve the underlying MDP. The second is
an efficient algorithm to compute optimal actions for a BSP.
Our approach, Branch-and-Bound Value Iteration (BBVI),
uses Whittle integrals to compute upper and lower bounds
on value. Then, we apply a modified Branch-and-Bound
search to find provably optimal actions.
COMPUTING A RETIREMENT PROCESS V ALUE
FUNCTION
Before we present BBVI, we give a simple algorithm to
compute a retirement process value function that uses an
(arbitrary) MDP solver as a black box. Brown and Smith
[2013] describe an algorithm to compute retirement pro-
cess value functions, but their approach requires custom
implementation of a modified simplex algorithm. Our ap-
proach is simple and based on an algorithm that initially
appeared in the solutions manual for Russell and Norvig
[2010].
Our goal is to identify each component of V(;), so our
output will be a list of critical points and the associated
slopes. First, we will need the following result.
Lemma 1. LetMbe an MDP with state space S. Let
0< 1. Consider a retirement reward in the interior of
this interval, 2(0;1). If,8s2S
V(s;) =V(s; 0) +V(s; 1) V(s; 0)
1 0( 0);(4)
then there is at least one policy and stopping rule that is op-
timal for every 02[0;1]. The expected value of the dis-
count factor at retirement is the slope between those points:
Eh
(s)i
=V(s; 1) V(s; 0)
1 0: (5)
Proof. This follows from the form of Equation 1 and the
fact thatV(s;)is piecewise linear, increasing, and convex.
(a)
(b)
15101010
03
r= 12.1 t=2
(c)
Figure 1: (a-b) Retirement process value functions for
the reward chains in Ex. 1. The slope of the lines is the
expected value of the discount parameter at retirement:
E[
]. Flat sections indicate regions of retirement rewards
where retirement is always suboptimal. The kink in the
green curve at = 280 indicates that it has become op-
timal to retire immediately. The kink in the black curve at
= 14 increases the slope to
6<1; for>14, it is op-
timal to retire after collecting a prefix of the reward stream
fromy2. (c) Dominated relaxation of the MDP in Ex. 2. We
add additional durative actions to the state space to ensure
that a dominating policy exists. Note that the original MDP
does not admit a dominating policy.
This test allows us to implement a binary search over re-
tirement rewards. Our approach relies on a given MDP so-
lution method, SOLVE , that returns the vector of values for
a given MDP. We initialize an interval that contains all criti-
cal points and make a call to SOLVE to compute the retire-
ment process value at each endpoint. Then we call SOLVE
to compute the retirement process value for the midpoint
of our interval and apply the test in Eq. 4. If it succeeds,
we return the lower endpoint and the corresponding slope.
If this does not, we sub-divide our interval and recurse. We
can concatenate the results to get the breakpoints and slopes
for the retirement process value function over this interval.
Algorithm 1 shows pseudocode for this algorithm.
Algorithm 1 Computing an MDP’s Critical Points
Define: CRITICAL POINTS (M[; ;+;V ;V+])
Input: MDP,M; Interval of retirement values, ;+;
Values at interval endpoints, V ;V+
if ;+are not set then
0
+ maxReward (M)
1
V SOLVE (M )
V+ +## Retirement is initially optimal
end if
+
2
V SOLVE (M)
ifjV
V + ( )V+ V
+
j<then
return [ ];h
V+ V
+ i
else
pts ;slope CRITICAL POINTS (M; ;;V ;V)
pts+;slope+ CRITICAL POINTS (M;;+;V;V+)
## Merge adjacent intervals with the same slope
ifslope [ 1] =slope+[0]then
delpts+[0];delslope+[0]
end if
returnpts [pts+;slopes [slopes+
end if
BRANCH-AND-BOUND V ALUE ITERATION
Now, we present a practical algorithm to compute optimal
actions in a bandit superprocess. While a BSP is not index-
able, we would like to be able to plan efficiently when the
arms are ‘close’ to indexable—when there are only a few
states where the optimal policy changes in response to the
opportunity cost of delayed rewards in other arms.
Our approach is based on envelope dynamic programming
methods to solve MDPs: we compute value estimates for
a given initial state by solving a dynamic program defined
over a reachable subset of the state space [Gardiol, Natalia
H and Kaelbling, Leslie P, 2003, Hansen and Zilberstein,
2001]. The primary difference between our approach and
standard envelope methods is that we use dynamic pro-gramming on our envelope to update an upper bound and
a lower bound on the value. This is useful in a BSP be-
cause the Whittle integral allows efficient calculation of
both bounds.
Our goal is to compute the optimal action for a given state
sin a BSP,M=hM1;:::;MKi. We can obtain an upper
bound onV(s)with a Whittle integral. We use a Whittle
integral for the MAB that solves each arm independently to
compute a lower bound. Our algorithm maintains upper and
lower bounds on the Q-function for each action that could
be executed from s. We write these bounds as Q+(s;a)
andQ (s;a)respectively. If there is a pair of actions a;a0
such thatQ+(s;a0)<Q (s;a) +, then we can conclude
thata0is at least-suboptimal. If this test removes all but
a single action, then we have found a -optimal action and
return it.
In the event that more than one action remains, we do a par-
tial expansion of the state space around s. We keep track of
a set of expanded states, E, and a set of boundary states, B.
States in the boundary set, B, are states that some expanded
state can transition to but have not yet been added to E.
We can use these states to update the bounds on Q(s;)by
solving a related MDP. This is formalized in the following
theorem.
Theorem 4. LetMbe an MDP with state space Sand
action spaceA. LetS0S whereS0=B [ E and
8s2 E ,T(s;a;s0)6= 0)s02 S0. LetM+be
an MDP with state space S0[fg.M+’s transition distri-
bution and rewards are identical for states in E(expanded
states). Each state s2B transitions deterministically to
and receives a reward that is an upper bound on VM(s).
is a sink state that accrues 0 reward. Then the value of a
state inM+is an upper bound on its value in M:
VM+(s)VM(s):
Proof. See supplementary materials.
It is straightforward to show that an updated lower bound
can be computed by fixing boundary states to have value
equal to a lower bound. Our approach alternates between
pruning actions based on dominance relations, adding
states toE, and computing the value of an MDP defined
overE. We interleave value iteration with a branch-and-
bound search, so we call it Branch-and-Bound Value Itera-
tion (BBVI). Algorithm 2 shows pseudocode to implement
this method. At termination, the action we return is guaran-
teed to be at most -suboptimal.
Theorem 5. LetMbe a BSP and let sbe a state in M. Let
abe the action returned by BBVI (M;s; ).
V(s) Q(s;a)<:
Proof. See supplementary materials.
Large regions of the state space are irrelevant to our ob-
jective so we use a heuristic to guide node expansion. Our
heuristic is a measure of the sensitivity of values at the root
to change in the upper and lower bounds of unexpanded
nodes. Similar heuristics have been applied in many set-
tings [Rivest, 1987, Smith and Simmons, 2004]. In BBVI,
these values are the expected discounted visitation rates in
the bounding MDPs and are computed as the dual variables
from a value iteration LP. The backup procedure is slower
than node expansions, so we perform backups in batches. In
between backups and heuristic computations, we approxi-
mate the state space as a tree and push new states onto the
agenda with their parent’s heuristic value weighted by the
transition probability.
Algorithm 2 Branch-and-Bound Value Iteration
Define: BBVI (hM0;:::;MKi;s0;)
Input: BSP arms,Mk; Initial state, s0; Tolerance,
# Lower bound Mby fixing a policy for each arm
LBk toMRP (Mk)
Compute critical points for Mk;LBk
Compute bounds on Q(s0;)with Whittle integrals for
MandLB.
# Keep track of expanded region of state space
E ;
# Keep track of states at boundary of M+;M
B fs0g
a argmax
aQ (s0;a)
while9a06=as:tQ+(s0;a0)Q (s0;a)do
s pop(B)
E E[fsg
fora2A do
fors02successors (s)do
Compute upper and lower bounds for Q(s0;)
B B[fs0g
end for
end for
Q+ SOLVE (BOUNDMDP (E;B;Q+))
Q SOLVE (BOUNDMDP (E;B;Q ))
a argmax
aQ (s0;a)
end while
return-optimal action a
6 EMPRIRICAL EV ALUATION
We implemented BBVI in Python and use the linear pro-
gramming solver Gurobi to compute value functions. Our
experiments were run on an Intel i7 with 16 GB of RAM. In
our first experiments, we compare the scalability of BBVI
with that of standard MDP algorithms. In particular we
compare the following algorithms:
SPUDD: the baseline factored MDP algorithm in the
2011 IPPC [Hoey et al., 1999].
(a) State Space for R & D BSP
(b) Runtime vs. Problem Size
(c) Solution Quality vs. Problem Size
Figure 2: (a) the state space for an arm of a R & D BSP.
All states accrue 0 reward except for the ‘Working Prod-
uct’ state, which collects a fixed reward per time step. (b)
shows runtime (log-scale) vs problem size for BBVI and
several alternatives on an R & D BSP. SPUDD [Hoey et al.,
1999] and LRTDP (with the heuristic from Singh and Cohn
[1998]) scale poorly compared with algorithms that lever-
age the Whittle integral. BBVI’s additional improvement
over LRTDP stems from the use of an efficient, exact lower
bound to check convergence. (c) BBVI’s bound on sub-
optimality after 10,000 node expasions (measured in units
of discounted reward). Changing the number of rewards
in each arm allows us to measure BBVI’s performance as
arms become more sensitive to context.
LRTDP(Merge): Labeled Real-Time Dynamic Pro-
gramming [Bonet and Geffner, 2003] with the upper
bound from Singh and Cohn [1998] as a heuristic.
LRTDP(WI): LRTDP with the Whittle integral as an
upper bound.
BBVI: Branch-and-Bound Value Iteration.
We evaluate performance on a simple BSP designed to
model allocation of research resources to product research
and development. Each arm in this problem corresponds to
a potential product that our agent could sell in the market.
However, before we sell a product we must devote effort to
R&D. This can be done in one of two ways, a safe research
approach that is slow but reliable, and a risky approach that
is fast, but runs a risk of producing a defective product.
After at least one product has been researched, the agent
can opt to spend a round to produce and sell that product or
continue research on a different project. Because taking a
product to market does not change the state of the BSP, it is
straightforward to show that any stationary policy (includ-
ing) will only ever produce a single product. We define
a distribution over these problems by selecting a random
market value for each product uniformly from [0, 1] and
random durations for the safe and risky research strategies
(although we ensure that safe research is at least 3x as slow
as risky research). Figure 2 (a) shows the state space for an
example arm with typical parameters.
Figure 2 (b) shows the results of our comparison. We used
a timeout of 1000s and set a memory limit of 4 GB. We ran
our experiments in an online setting and running times re-
flect the amount of time to select an optimal action from
scratch. While this is a natural setting for LRTDP and
BBVI, it is admittedly a little unfair to SPUDD (which
computes a full policy). This is mitigated by the fact that
BBVI’s improvement over SPUDD, which is a little under
5 orders of magnitude with 108states, is such that an agent
would need to be planning over an immense horizon before
SPUDD presents a reasonable alternative.
The factored MDP bounds from Singh and Cohn [1998]
are quite ill-suited to this problem. This is because its up-
per bound (the sum of the independent value for each arm)
is very loose and it essentially forces LRTDP to enumer-
ate the state space. In contrast, LRTDP performs quite well
when it has good heuristic information. However, its per-
formance degrades faster than that of BBVI, because it does
not effectively leverage a lower bound on value. Even for
very large problem instances, BBVI computes optimal so-
lutions in well under a second.
BBVI’s performance in the R&D domain is largely ex-
plained by the structure of the arms and the short horizon
within each arm. Although there may be a large number
of arms, BBVI reduces each individual arm to an equiva-
lent MRP after a small number of node expansions. Thismeans that running time is essentially linear in the number
of arms. Note that, in this domain, the solution that solves
each MDP independently and executes the corresponding
index policy will just opt for conservative research on the
most valuable option. This policy is essentially always sub-
optimal: as long as there are reasonable alternatives it is
usually a good idea to try some risky projects.
Our next experiment uses a synthetic domain to explore the
performance of BBVI as the structure and number of arms
changes. In this BSP, each arm is a 20x20 grid world. The
actions in this world correspond to moving in the 4 car-
dinal directions. Rewards are 0 everywhere except at ran-
domly chosen locations and after receiving a reward, the
agent transitions to a random location.
In this experiment we vary the number of rewards that are
available in each arm. Adding rewards will typically cause
the optimal policy for an arm to be more sensitive to con-
text. This means that the dominated relaxation will produce
a looser upper bound, because it will need to add more ac-
tions. We evaluate this difficultly by measuring the con-
vergence criterion of BBVI after 10,000 node expansions.
Figure 2 (c) shows the results of this experiment. We can
see that BBVI’s solution quality decreases with the number
of states, but the primary variation comes from changes in
the properties of the arms.
7 CONCLUSION AND FUTURE WORK
In summary, we presented a model of highly decoupled de-
cision making: bandit superprocesses. A BSP presents an
agent with the opportunity to act in one of several Markov
decision processes. The key constraint is that the agent can
only act in a single process at a time. We provided a sum-
mary of BSP research, including an upper bound for the
value function of a BSP in the form of the Whittle integral.
We derived an equivalent relaxation and thus gave a novel
proof that the Whittle integral is an upper bound. Finally
we presented and evaluated algorithmic solutions for this
class of decision problems.
In future work, we plan to extend these ideas in three direc-
tions. The first is algorithmic improvements for BSPs. A
potential direction here is to leverage factored dynamics in
the bounding MDPs themselves. A search over sequences
of MDPs that only consider states where the current pol-
icy stops could be more efficient. Next, also plan to explore
generalizations of the bounds used for BSPs to more gen-
eral cases of global resource constraints (e.g., those con-
sidered in Meuleau et al. [1998]). Finally, the similarity to
sums of games studied in Conway [2000] suggests an unex-
plored connection between BSPs and combinatorial game
theory.
References
Blai Bonet and Hector Geffner. Labeled RTDP: Improving
the convergence of real-time dynamic programming. In
IEEE Conference on Automated Planning and Schedul-
ing, pages 12–21, 2003.
David B. Brown and James E. Smith. Optimal sequential
exploration: Bandits, clairvoyants, and wildcats. Opera-
tions Research , 61(3):644–665, 2013.
John H. Conway. On Numbers and Games . CRC Press,
2000.
Gardiol, Natalia H and Kaelbling, Leslie P. Envelope-
based planning in relational MDPs. In Advances in Neu-
ral Information Processing Systems , page None, 2003.
John C. Gittins. Bandit processes and dynamic allocation
indices. Journal of the Royal Statistical Society. Series
B (Methodological) , pages 148–177, 1979.
Kevin D. Glazebrook. On a sufficient condition for super-
processes due to Whittle. Journal of Applied Probability ,
pages 99–110, 1982.
Kevin D. Glazebrook. Indices for families of competing
Markov decision processes with influence. The Annals
of Applied Probability , pages 1013–1032, 1993.
Eric A Hansen and Shlomo Zilberstein. Lao: A heuristic
search algorithm that finds solutions with loops. Artifi-
cial Intelligence , 129(1):35–62, 2001.
Jesse Hoey, Robert St-Aubin, Alan Hu, and Craig Boutilier.
SPUDD: Stochastic planning using decision diagrams.
InFifteenth Conference on Uncertainty in Artificial In-
telligence , pages 279–288. Morgan Kaufmann Publish-
ers Inc., 1999.
Nicolas Meuleau, Milos Hauskrecht, Kee-Eung Kim,
Leonid Peshkin, Leslie Pack Kaelbling, Thomas L Dean,
and Craig Boutilier. Solving very large weakly coupled
Markov decision processes. In Fifteenth National Con-
ference on Artificial Intelligence , pages 165–172, 1998.
Peter Nash. Optimal allocation of Resources Between Re-
search Projects. PhD thesis, University of Cambridge,
1973.
Martin L. Puterman. Markov Decision Processes: Discrete
Stochastic Dynamic Programming . John Wiley & Sons,
2009.
Ronald L. Rivest. Game tree searching by min/max ap-
proximation. Artificial Intelligence , 34(1):77–96, 1987.
Herbert Robbins. Some aspects of the sequential design of
experiments. In Herbert Robbins Selected Papers , pages
169–177. Springer, 1952.
Stuart Russell and Peter Norvig. Artificial Intelligence: A
Modern Approach . 3 edition, 2010. Excercise 17.5.
Satinder Singh and David Cohn. How to dynamically
merge Markov decision processes. Advances in Neural
Information Processing Systems , (10):1057–1063, 1998.Trey Smith and Reid Simmons. Heuristic search value it-
eration for POMDPs. In Proceedings of the 20th Con-
ference on Uncertainty in Artificial Intelligence , pages
520–527. AUAI Press, 2004.
Peter Whittle. Multi-armed bandits and the Gittins in-
dex. Journal of the Royal Statistical Society. Series B
(Methodological) , pages 143–149, 1980.
|
d7826ec5-31d0-4c1f-b7c6-3ba4ee15a8c1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : West LA Meetup - Book Discussion: "Thinking, Fast and Slow"
Discussion article for the meetup : West LA Meetup - Book Discussion: "Thinking, Fast and Slow"
WHEN: 20 June 2012 07:00:00PM (-0700)
WHERE: 10850 West Pico Blvd, Los Angeles, CA 90064
When: 7:00pm - 9:00pm Wednesday, June 20th.
Where: The Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavillion on the second floor, right by the movie theaters.
Parking is free for 3 hours.
Discussion Topic: This week we will discuss Daniel Kahneman's book titled Thinking, Fast and Slow. You don't need to have read the book, but if you have a chance, read about some heuristics and biases, or read up on prospect theory.
But don't worry if you don't have time to read any articles, or even if you've never read any Less Wrong! Bring a friend! The atmosphere is casual, and good, intelligent conversation with friendly people is guaranteed.
We will probably also find time for a game, like The Resistance.
I will bring a whiteboard with Bayes' Theorem written on it.
Discussion article for the meetup : West LA Meetup - Book Discussion: "Thinking, Fast and Slow"
|
a2a307a0-1aca-44ef-85f2-401eb7afab9a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Shallow investigation into cause prioritization
Cross-posted at 80,000 Hours Blog
I recently conducted a ‘shallow investigation*’ into cause prioritization, with help from Nick Beckstead. You can read it and various related interview notes here. It covers the importance of cause prioritization; who is doing it, funding it, or using it; and opportunities to contribute. This blog post is a summary of my impressions, given the findings of the investigation.
*see GiveWell.
----------------------------------------
Summary
Cause prioritization research seems likely enough to be high value to warrant further investigation. It appears that roughly billions of dollars per year might be influenced by it in just the near future, that current efforts cost a few million dollars per year and are often influential, and that there are many plausible ways to contribute. It also seems like things are likely to get better in the future, as more work is done.
Funding which might be substantially influenced by cause prioritization research is probably worth at least several billion dollars per year.
Prioritization research is likely more relevant to new funders than established ones, but we can get an idea of the scale of new cost-effectiveness sensitive philanthropists by looking at existing ones. This study found nine private funders (or groups of them) who appear to care about cost-effectiveness, and did not look that hard. The Gates Foundation spends around $3.4bn annually, The Hewlett Foundation spent $304M in 2012, and Good Ventures around $10m in 2014. The others spent less or I did not find data on them. Given these figures, it seems reasonable to expect more foundations worth hundreds of millions of dollars per year in the future, who care about cost-effectiveness.
I think funders not explicitly focused on cost-effectiveness are probably also influenced by prevailing beliefs about cause effectiveness, which are likely influenced (gradually) by research. $300bn is spent annually on philanthropy in the US, and prob
|
3c3f9463-dc9c-4a21-bd64-58db6a9408f7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Computer Science and Programming: Links and Resources
Updated Version @ LW Wiki: wiki.lesswrong.com/wiki/Programming_resources
Contents
* How Computers Work
* An Overview of Computer Programming
* Computer Science and Computation
* Supplementary Resources: Mathematics and Algorithms
* Practice
* Python
* Haskell
* Common Lisp
* R
How Computers Work
1. CODE The Hidden Language of Computer Hardware and Software
> The book intends to show a layman the basic mechanical principles of how computers work, instead of merely summarizing how the different parts relate. He starts with basic principles of language and logic and then demonstrates how they can be embodied by electrical circuits, and these principles give him an opening to describe in principle how computers work mechanically without requiring very much technical knowledge. Although it is not possible in a medium sized book for layman to describe the entire technical summary of a computer, he describes how and why it is possible that elaborate electronics can act in the ways computers do. In the introduction, he contrasts his own work with those books which "include pictures of trains full of 1s and 0s."
2. The Elements of Computing Systems: Building a Modern Computer from First Principles
> Indeed, the best way to understand how computers work is to build one from scratch, and this textbook leads students through twelve chapters and projects that gradually build a basic hardware platform and a modern software hierarchy from the ground up. In the process, the students gain hands-on knowledge of hardware architecture, operating systems, programming languages, compilers, data structures, algorithms, and software engineering. Using this constructive approach, the book exposes a significant body of computer science knowledge and demonstrates how theoretical and applied techniques taught in other courses fit into the overall picture.
3. The Write Great Code Series (A Solid Foundation in Software Engineering for Programmers)
Write Great Code Volume I:
|
26ed3bf8-73b9-442d-95c6-4691088fe0b6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AI as a Civilizational Risk Part 4/6: Bioweapons and Philosophy of Modification
Bio-Chemical Weapons made by AI
One of the core risks involved with both narrow AIs and general AIs is the creation of biological and chemical weapons. While the probability of creating stronger and stronger behavior modification in AIs is close to 100%, the probability of creating biological weapons using AI is smaller. However, the potential downsides are more significant. A larger probability is that AIs create many chemical quasi-weapons compounds that can end up in manufacturing due to a misunderstood safety profile.
Analogizing signal pollution to manufacturing pollution is worth considering, especially because we have yet to solve manufacturing pollution. There are many credible hypotheses that many chemicals have polluted the land of industrial civilizations. Some chemicals may cause hormonal disruptions in people and animals. One of the challenges is that chemicals are usually considered innocent until proven guilty. A key civilizational challenge is to cost-effectively ascertain the guilt or innocence of chemicals or their minimum dangerous levels. Creating cost-effective chemical profiles is an old challenge. However, it is suitable as a metaphor for determining the extent of "signal pollution" or minimum safe levels of disconnect between a meme and its meaning.
However, this is not just important to think about as a metaphor. As AI chemical and biological manufacturing ramps up, it will create new compounds or increase some of the concentrations of existing ones. Without a proper pipeline for deciding how to deal with novel chemicals, this presents a challenge. Having chemicals considered innocent until proven guilty is wrong. Most biological modifications to people are likely harmful. Overuse of chemicals is already causing problems. However, having each chemical go through 10 years of safety checks might also need to be corrected if information can be obtained faster about its relative safety. For example, if someone finds less toxic plast
|
4efc92be-5510-4d48-9d96-464af24c0628
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Money creation and debt
I'm confused about fiat money creation and debt in the US. My mental model is that the Fed lends money into existence (either to banks or to the treasury), which creates a loan (that eventually has to be paid back with interest, though I think they can choose not to collect on loans to the treasury?) (I'm also not sure where the money the Fed uses to pay interest on bank reserves comes from, but since it used to not do that I'll ignore it and pretend I'm asking about the time before that existed). The fractional reserve banking system then creates a convergent geometric series of additional money creation through repeated lending and saving, and that's basically M2. (I'm ignoring the fact that the reserve requirement is currently zero, I don't think it affects my question anyway).
It seems like that means that for every dollar that exists, there is an equivalent dollar owed in debt to someone. And conversely, when debt gets paid down, the monetary base decreases. This suggests that it is not possible, economy-wide, for private sector net savings to increase unless the government debt increases, and that whenever government debt goes down, private savings must decrease. I feel like I'm missing something central, because otherwise this would be public policy 101 level stuff that the Left especially would be very vocal about, but I can't figure out what.
|
3c75812e-083e-42b1-9cdc-f5638c1edc05
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Anthropic Effects in Estimating Evolution Difficulty
*Thanks to Linchuan Zhang, Mauricio Baker, Jack Ryan and Carl Shulman for helpful comments and suggestions. Remaining mistakes are my own.*
Epistemic status: “There is something fascinating about [anthropics]. One gets such wholesale returns of conjecture out of such a trifling investment of fact.” (Mark Twain, Life on the Mississippi)
Attempts to predict the development of artificial general intelligence (AGI) sometimes use biological evolution to upper bound the amount of computation needed to produce human level intelligence, e.g. in [Ajeya’s use of biological anchors](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines). Such attempts have mostly ignored observation selection effects. Shulman and Bostrom’s 2012 paper [How Hard is Artificial Intelligence?](https://www.nickbostrom.com/aievolution.pdf) analyzes how evolutionary arguments interact with various ways of reasoning about observation selection effects, drawing evidence from timings of evolutionary milestones and instances of convergent evolution. This post is a summary of key arguments advanced by the paper; see the original paper for citations.
More concretely, suppose evolutionary processes produce human-level intelligence on 1/10 or 1/10^1000 planets that develop life. Call the former case “easy” and the latter case “hard.” The paper attempts to determine whether the facts about evolution on Earth can distinguish between evolving intelligence being easy or hard.
Recall two common forms of anthropic reasoning:[[1]](#fn-bEYbQuZxvr2YxkdjW-1)
* Self-Sampling Assumption (SSA): Observers should reason as if they were a random sample from all actually existent observers in their reference class. Observers should reason as if they have an equal probability of being in any world with observers, regardless of the number of observers in that world. Worlds where a higher *fraction* of observers are “like you” are more probable.
* Self-Indication Assumption (SIA): Observers should reason as if they were a random sample from all possible observers. Observers should reason as if they have a probability of being in a world proportional with the number of observers it contains. Worlds where a higher *number* of observers are “like you” are more probable.
For more discussion, see Katja's [Anthropic Principles](https://meteuphoric.com/anthropic-principles/) or Bostrom’s [Anthropic Bias](https://en.wikipedia.org/wiki/Anthropic_Bias_(book)).
Key takeaways:
* Universes where evolution is easy have vastly more intelligent observers than universes in which intelligence is hard. Since SIA *a priori* favors universes with many observers over universes with few, SIA assigns almost dogmatically high prior credence to evolution being easy.
+ Caveat: Intelligent life could run cheap computer programs that simulate observers, so any universe with a few instances of intelligent life could have an immense number of observers. SIA cannot strongly distinguish between universes where nearly all resources are used for observers, which is possible both with intelligence in every solar system or only a few times per galaxy. SIA therefore advises that evolution is relatively easy, but can still be some orders of magnitude more difficult than once per solar system.
* Given SSA, our observation of humans having evolved cannot alone distinguish between evolution being easy or hard. However, under both SSA and SIA, whether or not intermediaries to intelligence evolved multiple times can provide evidence about evolution’s overall difficulty. If evolution is easy, we would expect predecessors to intelligence to have evolved more than once. Evolutionary developments that have occurred multiple times cannot be subject to massive anthropic distortion.
+ The Last Common Ancestor (LCA) between humans and octopuses, estimated to have lived 560 million years ago, had an extremely primitive nervous system. However, octopuses have extensive central nervous systems and display sophisticated behaviors like memory, visual communication, and tool use.
- Other examples of complex learning and memory include corvids (crows and ravens, LCA about 300 million years ago) and elephants (LCA about 100 million years ago).
+ Caveat: Non-human intelligence might not be as scalable as human intelligence, despite appearances of generality. If this is the case, these examples might not represent an instance of convergent evolution since they and humans might be substantially different.
+ Caveat: The LCA of octopuses and humans already contained a nervous system, which only evolved once in Earth’s history and thus might be arbitrarily difficult to evolve.
+ Caveat: The LCA of various other intelligent organisms and humans might have some undetected property that predisposed both organisms towards intelligence. Similarly, the LCA of all animals with eyes contained opsins proteins, which might have been extremely difficult to evolve.
* If a task requires many steps of varying difficulty to accomplish and you condition on the task being done in about the time expected for accomplishing a single step, each of the steps is expected to take about the same amount of time. In effect, conditioning on the task being done in a short time prohibits *any* of the steps taking very long and the truncated distributions for tasks of varying difficulty are similar.
+ Example: Suppose you need to pick two locks, and the first takes uniform [0, 20] seconds and the second takes uniform [0, 1000] seconds, if you condition both taking <20 seconds, then you know the second lock took less than 20 seconds, so you cannot distinguish between the second lock taking uniform [0, 1000] seconds or uniform [0, 20] seconds.
+ Crucially, since the conditional distributions are roughly uniform, the time gap between when the last of these steps was completed and the end of the interval is equal in expectation to the time any of the hard steps took. Given that the Earth remains habitable for about a billion years and is about 5 billion years old, the expected number of “hard steps” in evolution is around 5. We can rule out hypotheses that postulate large numbers of evolutionarily hard steps because they predict intelligent life evolving much later than a billion years before the Earth stops being habitable. The LCA between humans and chimps was 6 million years ago, making it improbable that scaling up brains contained any hard steps.
- Caveat: This argument cannot distinguish between evolutionary steps being hard (~1 billion years) or extremely hard (~100 billion years), since anthropic conditioning implies these would have taken the same amount of time historically.
---
1. [Armstrong](https://www.fhi.ox.ac.uk/wp-content/uploads/Anthropic_Decision_Theory_Tech_Report.pdf) roughly argues that SSA corresponds to making decisions under average utilitarianism while SIA corresponds to making decisions under total utilitarianism. [↩︎](#fnref-bEYbQuZxvr2YxkdjW-1)
|
c5ddcb01-6f2f-41dc-88db-c07e11908bf8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Seeking geeks interested in bioinformatics
I work at a small but feisty research team whose focus is biomedical informatics, i.e. mining biomedical data. Especially anonymized hospital records pooled over multiple healthcare networks. My personal interest is ultimately life-extension, and my colleagues are warming up to the idea as well. But the short-term goal that will be useful many different research areas is building infrastructure to massively accelerate hypothesis testing on and modelling of retrospective human data.
We have a job posting here (permanent, non-faculty, full-time, benefits):
https://www.uthscsajobs.com/postings/3113
If you can program, want to work in an academic research setting, and can relocate to San Antonio, TX, I invite you to apply. Thanks.
Note: The first step of the recruitment process will be a coding challenge, which will include an arithmetical or string-manipulation problem to solve in real-time using a language and developer tools of your choice.
edit: If you tried applying and were unable to access the posting, it's because the link has changed, our HR has an automated process that periodically expires the links for some reason. I have now updated the job post link.
|
f69f20d2-c27a-41b7-94a0-97c34db76538
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Catchy Fallacy Name Fallacy (and Supporting Disagreement)
Related: The Pascal's Wager Fallacy Fallacy, The Fallacy Fallacy
Inspired by:
> We need a catchy name for the fallacy of being over-eager to accuse people of fallacies that you have catchy names for.
When you read an argument you don't like, but don't know how to attack on its merits, there is a trick you can turn to. Just say it commits1 some fallacy, preferably one with a clever name. Others will side with you, not wanting to associate themselves with a fallacy. Don't bother to explain how the fallacy applies, just provide a link to an article about it, and let stand the implication that people should be able to figure it out from the link. It's not like anyone would want to expose their ignorance by asking for an actual explanation.
What a horrible state of affairs I have described in the last paragraph. It seems, if we follow that advice, that every fallacy we even know the name of makes us stupider. So, I present a fallacy name that I hope will exactly counterbalance the effects I described. If you are worried that you might defend an argument that has been accused of committing some fallacy, you should be equally worried that you might support an accusation that commits the Catchy Fallacy Name Fallacy. Well, now that you have that problem either way, you might as well try to figure if the argument did indeed commit the fallacy, by examining the actual details of the fallacy and whether they actually describe the argument.
But, what is the essence of this Catchy Fallacy Name Fallacy? The problem is not the accusation of committing a fallacy itself, but that the accusation is vague. The essence is "Don't bother to explain". The way to avoid this problem is to entangle your counterargument, whether it makes a fallacy accusation or not, with the argument you intend to refute. Your counterargument should distinguish good arguments from bad arguments, in that it specifies criteria that systematically apply to a class of bad arguments but not to good argument
|
1e4aa37d-3935-4a87-9ca7-6bf59e65eba4
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Lotteries & MWI
I haven't been able to find the source of the idea, but I've recently been reminded of:
> Lotteries are a way to funnel some money from many of you to a few of you.
This is, of course, based on the Multiple Worlds Interpretation: if the lottery has one-in-a-million odds, then for every million timelines in which you buy a lottery ticket, in one timeline you'll win it. There's a certain amount of friction - it's not a perfect wealth transfer - based on the lottery's odds. But, looked at from this perspective, the question of "should I buy a lottery ticket?" seems like it might be slightly more complicated than "it's a tax on idiots".
But I'm reminded of my current .sig: "Then again, I could be wrong." And even if this is, in fact, a valid viewpoint, it brings up further questions, such as: how can the friction be minimized, and the efficiency of the transfer be maximized? Does deliberately introducing randomness at any point in the process ensure that at least some of your MWI-selves gain a benefit, as opposed to buying a ticket after the numbers have been chosen but before they've been revealed?
How interesting can this idea be made to be?
|
d02982b9-450f-4b10-8bf4-5a9e8ae0a45e
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Where do selfish values come from?
Human values seem to be at least partly selfish. While it would probably be a bad idea to build AIs that are selfish, ideas from AI design can perhaps shed some light on the nature of selfishness, which we need to understand if we are to understand human values. (How does selfishness work in a decision theoretic sense? Do humans actually have selfish values?) Current theory suggest 3 possible ways to design a selfish agent:
1. have a [perception-determined utility function](/lw/116/the_domain_of_your_utility_function/) (like AIXI)
2. have a static (unchanging) world-determined utility function (like UDT) with a sufficiently detailed description of the agent embedded in the specification of its utility function at the time of the agent's creation
3. have a world-determined utility function that changes ("learns") as the agent makes observations (for concreteness, let's assume a variant of UDT where you start out caring about everyone, and each time you make an observation, your utility function changes to no longer care about anyone who hasn't made that same observation)
Note that 1 and 3 are not reflectively consistent (they both refuse to pay the [Counterfactual Mugger](http://wiki.lesswrong.com/wiki/Counterfactual_mugging)), and 2 is not applicable to humans (since we are not born with detailed descriptions of ourselves embedded in our brains). Still, it seems plausible that humans do have selfish values, either because we are type 1 or type 3 agents, or because we were type 1 or type 3 agents at some time in the past, but have since self-modified into type 2 agents.
But things aren't quite that simple. According to our current theories, an AI would judge its decision theory using that decision theory itself, and self-modify if it was found wanting under its own judgement. But humans do not actually work that way. Instead, we judge ourselves using [something mysterious](/lw/2id/metaphilosophical_mysteries/) called "normativity" or "philosophy". For example, a type 3 AI would just decide that its current values can be maximized by changing into a type 2 agent with a static copy of those values, but a human could perhaps think that changing values in response to observations is a mistake, and they ought to fix that mistake by rewinding their values back to before they were changed. Note that if you rewind your values all the way back to before you made the first observation, you're no longer selfish.
So, should we freeze our selfish values, or rewind our values, or maybe even keep our "irrational" decision theory (which could perhaps be justified by saying that we intrinsically value having a decision theory that isn't too alien)? I don't know what conclusions to draw from this line of thought, except that on close inspection, selfishness may offer just as many difficult philosophical [problems](/lw/1mo/the_preference_utilitarians_time_inconsistency/) as altruism.
|
33e1ccd5-3a81-454f-b8e4-473b8740bc0c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Introducing the AI Alignment Forum (FAQ)
After a few months of open beta, the AI Alignment Forum is ready to launch. It is a new website built by the team behind LessWrong 2.0, to help create a new hub for technical AI Alignment research and discussion. This is an in-progress FAQ about the new Forum.
What are the five most important highlights about the AI Alignment Forum in this FAQ?
* The vision for the forum is of a single online hub for alignment researchers to have conversations about all ideas in the field...
* ...while also providing a better onboarding experience for people getting involved with alignment research than exists currently.
* There are three new sequences focusing on some of the major approaches to alignment, which will update daily for the coming 6-8 weeks.
* Embedded Agency, written by Scott Garrabrant and Abram Demski of MIRI
* Iterated Amplification, written and compiled by Paul Christiano of OpenAI
* Value Learning, written and compiled by Rohin Shah of CHAI
* For non-members and future researchers, the place to interact with the content is LessWrong.com, where all Forum content will be crossposted.
* The site will continue to be improved in the long-term, as the team comes to better understands the needs and goals of researchers.
What is the purpose of the AI Alignment Forum?
Our first priority is obviously to avert catastrophic outcomes from unaligned Artificial Intelligence. We think the best way to achieve this at the margin is to build an online-hub for AI Alignment research, which both allows the existing top researchers in the field to talk about cutting-edge ideas and approaches, as well as the onboarding of new researchers and contributors.
We think that to solve the AI Alignment problem, the field of AI Alignment research needs to be able to effectively coordinate a large number of researchers from a large number of organisations, with significantly different approaches. Two decades ago we might have invested heavily in the development of a confere
|
adaf72a7-c47a-4544-a47d-66b53001e196
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Houston Meetup - Saturday, October 1
Discussion article for the meetup : Houston Meetup - Saturday, October 1
WHEN: 01 October 2011 03:00:00PM (-0500)
WHERE: 2010 Commerce St, Houston, Tx. 77002
There will be another meeting at the hackerspace this Saturday. I'll be giving a presentation on a theoretical neuroscience article about timing in the brain. While not related to rationality directly, it's a topic I understand well and can use to segue into methods and goals of theoretical neuroscience as a whole. This should take an hour, maybe an hour and a half.
If anyone is interested in reading up before hand, the article is here:
http://arxiv.org/abs/1103.2382
Discussion article for the meetup : Houston Meetup - Saturday, October 1
|
3980d2c6-a73a-4dce-890a-0025d072a61f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What is the typical course of COVID-19? What are the variants?
I've heard a lot about the death rate, hospitalization rate, doubling time, etc, but I don't feel like I know *what happens* when you catch SARS-CoV-2. My questions include, but are not limited to:
* How long from infection to symptoms?
* How do symptoms progress?
* What percent of people never get symptoms?
* When do you start being contagious?
* When do you stop?
I will edit this top-level post with answers as they are filled it.
|
0eafd5f2-4497-46c1-862c-6a320f599755
|
LDJnr/LessWrong-Amplify-Instruct
|
LessWrong
|
"It is said that parents do all the things they tell their children not to do, which is how they know not to do them. Long ago, in the unthinkably distant past, I was a devoted Traditional Rationalist, conceiving myself skilled according to that kind, yet I knew not the Way of Bayes. When the young Eliezer was confronted with a mysterious-seeming question, the precepts of Traditional Rationality did not stop him from devising a Mysterious Answer. It is, by far, the most embarrassing mistake I made in my life, and I still wince to think of it. What was my mysterious answer to a mysterious question? This I will not describe, for it would be a long tale and complicated. I was young, and a mere Traditional Rationalist who knew not the teachings of Tversky and Kahneman. I knew about Occam’s Razor, but not the conjunction fallacy. I thought I could get away with thinking complicated thoughts myself, in the literary style of the complicated thoughts I read in science books, not realizing that correct complexity is only possible when every step is pinned down overwhelmingly. Today, one of the chief pieces of advice I give to aspiring young rationalists is “Do not attempt long chains of reasoning or complicated plans.” Nothing more than this need be said: even after I invented my “answer,” the phenomenon was still a mystery unto me, and possessed the same quality of wondrous impenetrability that it had at the start. Make no mistake, that younger Eliezer was not stupid. All the errors of which the young Eliezer was guilty are still being made today by respected scientists in respected journals. It would have taken a subtler skill to protect him than ever he was taught as a Traditional Rationalist. Indeed, the young Eliezer diligently and painstakingly followed the injunctions of Traditional Rationality in the course of going astray. As a Traditional Rationalist, the young Eliezer was careful to ensure that his Mysterious Answer made a bold prediction of future experience. Namely, I expected future neurologists to discover that neurons were exploiting quantum gravity, a la Sir Roger Penrose. This required neurons to maintain a certain degree of quantum coherence, which was something you could look for, and find or not find. Either you observe that or you don’t, right? But my hypothesis made no retrospective predictions. According to Traditional Science, retrospective predictions don’t count—so why bother making them? To a Bayesian, on the other hand, if a hypothesis does not today have a favorable likelihood ratio over “I don’t know,” it raises the question of why you today believe anything more complicated than “I don’t know.” But I knew not the Way of Bayes, so I was not thinking about likelihood ratios or focusing probability density. I had Made a Falsifiable Prediction; was this not the Law? As a Traditional Rationalist, the young Eliezer was careful not to believe in magic, mysticism, carbon chauvinism, or anything of that sort. I proudly professed of my Mysterious Answer, “It is just physics like all the rest of physics!” As if you could save magic from being a cognitive isomorph of magic, by calling it quantum gravity. But I knew not the Way of Bayes, and did not see the level on which my idea was isomorphic to magic. I gave my allegiance to physics, but this did not save me; what does probability theory know of allegiances? I avoided everything that Traditional Rationality told me was forbidden, but what was left was still magic. Beyond a doubt, my allegiance to Traditional Rationality helped me get out of the hole I dug myself into. If I hadn’t been a Traditional Rationalist, I would have been completely screwed. But Traditional Rationality still wasn’t enough to get it right. It just led me into different mistakes than the ones it had explicitly forbidden. When I think about how my younger self very carefully followed the rules of Traditional Rationality in the course of getting the answer wrong, it sheds light on the question of why people who call themselves “rationalists” do not rule the world. You need one whole hell of a lot of rationality before it does anything but lead you into new and interesting mistakes. Traditional Rationality is taught as an art, rather than a science; you read the biography of famous physicists describing the lessons life taught them, and you try to do what they tell you to do. But you haven’t lived their lives, and half of what they’re trying to describe is an instinct that has been trained into them. The way Traditional Rationality is designed, it would have been acceptable for me to spend thirty years on my silly idea, so long as I succeeded in falsifying it eventually, and was honest with myself about what my theory predicted, and accepted the disproof when it arrived, et cetera. This is enough to let the Ratchet of Science click forward, but it’s a little harsh on the people who waste thirty years of their lives. Traditional Rationality is a walk, not a dance. It’s designed to get you to the truth eventually, and gives you all too much time to smell the flowers along the way. Traditional Rationalists can agree to disagree. Traditional Rationality doesn’t have the ideal that thinking is an exact art in which there is only one correct probability estimate given the evidence. In Traditional Rationality, you’re allowed to guess, and then test your guess. But experience has taught me that if you don’t know, and you guess, you’ll end up being wrong. The Way of Bayes is also an imprecise art, at least the way I’m holding forth upon it. These essays are still fumbling attempts to put into words lessons that would be better taught by experience. But at least there’s underlying math, plus experimental evidence from cognitive psychology on how humans actually think. Maybe that will be enough to cross the stratospherically high threshold required for a discipline that lets you actually get it right, instead of just constraining you into interesting new mistakes."
What you see above is a segment from a LessWrong user's post.
Can you elaborate on the distinction between Traditional Rationality and the Way of Bayes, particularly in terms of how each approach would handle uncertainty and prediction?
Traditional Rationality and the Way of Bayes are two different approaches to reasoning and making decisions under uncertainty.
Traditional Rationality, as the name suggests, is the more conventional approach to reasoning. It emphasizes the importance of logical consistency, empirical testing, and skepticism. It promotes the use of the scientific method, where hypotheses are formed and then tested against empirical data. If a hypothesis is disproven by the data, it is discarded. Traditional Rationality allows for a degree of uncertainty and promotes the idea of falsifiability, where a theory is considered scientific if it can be empirically falsified. However, it doesn't provide a systematic way to handle uncertainty and integrate new evidence with prior beliefs.
On the other hand, the Way of Bayes, or Bayesian Rationality, is a more mathematical and formal approach to reasoning under uncertainty. It is based on Bayes' theorem, a principle in probability theory and statistics that describes how to update the probabilities of hypotheses when given evidence. Bayesian Rationality provides a systematic way to handle uncertainty and make predictions. It allows for the combination of prior beliefs (prior probabilities) with new evidence (likelihoods) to form updated beliefs (posterior probabilities).
In terms of handling uncertainty and prediction, Traditional Rationality tends to be more qualitative and allows for a certain degree of subjectivity. It often involves making educated guesses and then testing these guesses against empirical data. In contrast, the Way of Bayes is more quantitative and systematic. It provides a formal method for updating beliefs in light of new evidence and making predictions based on these updated beliefs. Bayesian Rationality treats beliefs as probabilities and emphasizes the importance of coherence, or the consistency of one's beliefs with the laws of probability.
In summary, while Traditional Rationality provides a general framework for rational thinking and decision making, the Way of Bayes provides a more formal and systematic approach to reasoning under uncertainty and making predictions.
Considering the distinct approaches of Traditional Rationality and Bayesian Rationality, how might each handle a real-world scenario such as predicting the outcome of a political election? Discuss the steps each might take, the potential pitfalls, and the way each would incorporate new information as the election approaches.
In predicting the outcome of a political election, both Traditional Rationality and Bayesian Rationality would begin with forming hypotheses based on available information. However, the way they handle this information and update their predictions as new data comes in would differ.
A Traditional Rationalist might start by examining the historical data, such as past election results, current political climate, and public opinion polls. They might form a hypothesis based on this data, such as "Candidate A will win the election." They would then seek to test this hypothesis against new data as it becomes available. If a new poll comes out showing Candidate B in the lead, the Traditional Rationalist might reconsider their hypothesis. However, they might also dismiss this new data as an outlier or argue that it's not representative of the final outcome. This approach can be prone to confirmation bias, where the Traditional Rationalist might give more weight to data that supports their initial hypothesis and dismiss data that contradicts it.
On the other hand, a Bayesian Rationalist would start with a prior probability for each candidate winning the election. This prior probability could be based on the same kind of data a Traditional Rationalist would use, but it would be expressed as a probability rather than a definitive prediction. As new data comes in, the Bayesian would update these probabilities using Bayes' theorem. For example, if a new poll comes out showing Candidate B in the lead, the Bayesian would update their probabilities to reflect this new data, giving a higher probability to the event "Candidate B wins the election." This approach allows for a more nuanced understanding of the election, where the probabilities can shift gradually as new data comes in rather than flipping abruptly from one candidate to another.
One potential pitfall of the Bayesian approach is that it relies heavily on the accuracy of the prior probabilities and the new data. If the prior probabilities are off, or if the new data is unreliable, the Bayesian updates could lead to incorrect predictions. However, the Bayesian approach has the advantage of being able to incorporate a wide range of data and adjust predictions incrementally as new information becomes available.
In summary, while both approaches can be used to predict the outcome of a political election, they differ in how they handle uncertainty and incorporate new information. Traditional Rationality tends to make definitive predictions and then test these against new data, while Bayesian Rationality uses probabilities to express uncertainty and updates these probabilities as new data comes in.
|
b6b84c1f-1c04-4040-b6e7-e458f70210de
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
January 2018 Newsletter
Our 2017 fundraiser was a [huge success](https://intelligence.org/2018/01/10/fundraising-success/), with 341 donors contributing a total of $2.5 million!
Some of the largest donations came from Ethereum inventor Vitalik Buterin, bitcoin investors Christian Calderon and Marius van Voorden, poker players Dan Smith and Tom and Martin Crowley (as part of a [matching challenge](https://intelligence.org/2017/12/14/end-of-the-year-matching/)), and the [Berkeley Existential Risk Initiative](http://existence.org/). Thank you to everyone who contributed!
#### Research updates
* The winners of the first [AI Alignment Prize](https://www.lesswrong.com/posts/4WbNGQMvuFtY3So7s/announcement-ai-alignment-prize-winners-and-next-round) include Scott Garrabrant’s [Goodhart Taxonomy](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy) and recent IAFF posts: Vanessa Kosoy’s [Why Delegative RL Doesn’t Work for Arbitrary Environments](https://agentfoundations.org/item?id=1730) and [More Precise Regret Bound for DRL](https://agentfoundations.org/item?id=1739), and Alex Mennen’s [Being Legible to Other Agents by Committing to Using Weaker Reasoning Systems](https://agentfoundations.org/item?id=1734) and [Learning Goals of Simple Agents](https://agentfoundations.org/item?id=1737).
* New at AI Impacts: [Human-Level Hardware Timeline](https://aiimpacts.org/human-level-hardware-timeline/); [Effect of Marginal Hardware on Artificial General Intelligence](https://aiimpacts.org/effect-of-marginal-hardware-on-artificial-general-intelligence/)
* We’re hiring for a new position at MIRI: [ML Living Library](https://intelligence.org/2017/12/12/ml-living-library/), a specialist on the newest developments in machine learning.
#### General updates
* From Eliezer Yudkowsky: [A Reply to Francois Chollet on Intelligence Explosion](https://intelligence.org/2017/12/06/chollet/).
* Counterterrorism experts Richard Clarke and R. P. Eddy [profile Yudkowsky](http://lukemuehlhauser.com/richard-clarke-and-r-p-eddy-on-ai-risk/) in their new book *Warnings: Finding Cassandras to Stop Catastrophes*.
* There have been several recent blog posts recommending MIRI as a donation target: from [Ben Hoskin](http://effective-altruism.com/ea/1iu/2018_ai_safety_literature_review_and_charity/), [Zvi Mowshowitz](https://thezvi.wordpress.com/2017/12/17/i-vouch-for-miri/), [Putanumonit](https://putanumonit.com/2017/12/10/worried-about-ai/), and the Open Philanthropy Project’s [Daniel Dewey and Nick Beckstead](https://www.openphilanthropy.org/blog/suggestions-individual-donors-open-philanthropy-project-staff-2017).
#### News and links
* A generalization of the AlphaGo algorithm, [AlphaZero](https://arxiv.org/abs/1712.01815), achieves rapid superhuman performance on Chess and Shogi.
* Also from Google DeepMind: “[Specifying AI Safety Problems in Simple Environments](https://deepmind.com/blog/specifying-ai-safety-problems/).”
* Viktoriya Krakovna [reports](https://vkrakovna.wordpress.com/2017/12/30/nips-2017-report/) on NIPS 2017: “This year’s NIPS gave me a general sense that near-term AI safety is now mainstream and long-term safety is slowly going mainstream. […] There was a lot of great content on the long-term side, including several oral / spotlight presentations and the [Aligned AI workshop](https://nips.cc/Conferences/2017/Schedule?showEvent=8794).”
* 80,000 Hours [interviews Phil Tetlock](https://80000hours.org/2017/11/prof-tetlock-predicting-the-future/) and investigates [the most important talent gaps](https://80000hours.org/2017/11/talent-gaps-survey-2017/) in the EA community.
* From Seth Baum: “[A Survey of AGI Projects for Ethics, Risk, and Policy](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3070741).” And from the Foresight Institute: “[AGI: Timeframes & Policy](https://foresight.org/publications/AGI-Timeframes&PolicyWhitePaper.pdf).”
* The Future of Life Institute is collecting proposals for a second round of AI safety grants, [due February 18](https://futureoflife.org/2017/12/20/2018-international-ai-safety-grants-competition/).
The post [January 2018 Newsletter](https://intelligence.org/2018/01/28/january-2018-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
|
c3f28033-cea5-482f-9d93-002a2616819e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Don't accelerate problems you're trying to solve
If one believes that unaligned AGI is a significant problem (>10% chance of leading to catastrophe), speeding up public progress towards AGI is obviously bad.
Though it is obviously bad, there may be circumstances which require it. However, accelerating AGI should require a much higher bar of evidence and much more extreme circumstances than is commonly assumed.
There are a few categories of arguments that claim intentionally advancing AI capabilities can be helpful for alignment, which do not meet this bar[1].
Two cases of this argument are as follows
1. It doesn't matter much to do work that pushes capabilities if others are likely to do the same or similar work shortly after.
2. We should avoid capability overhangs, so that people are not surprised. To do so, we should extract as many capabilities as possible from existing AI systems.
We address these two arguments directly, arguing that the downsides are much higher than they may appear, and touch on why we believe that merely plausible arguments for advancing AI capabilities aren’t enough.
Dangerous argument 1: It doesn't matter much to do work that pushes capabilities if others are likely to do the same or similar work shortly after.
For a specific instance of this, see Paul Christiano’s “Thoughts on the impact of RLHF research”:
> RLHF is just not that important to the bottom line right now. Imitation learning works nearly as well, other hacky techniques can do quite a lot to fix obvious problems […] RLHF is increasingly important as time goes on, but it also becomes increasingly overdetermined that people would have done it. In general I think your expectation should be that incidental capabilities progress from safety research is a small part of total progress […]
Markets aren’t efficient, they only approach efficiency under heavy competition when people with relevant information put effort into making them efficient. This is true for machine learning, as there aren’t that many machine learning r
|
08e10216-9a8f-4097-ad14-868f884e24c0
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[SEQ RERUN] Where Philosophy Meets Science
Today's post, Where Philosophy Meets Science was originally published on 12 April 2008. A summary (taken from the LW wiki):
> In retrospect, supposing that quantum physics had anything to do with consciousness was a big mistake. Could philosophers have told the physicists so? But we don't usually see philosophers sponsoring major advances in physics; why not?
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Distinct Configurations, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
|
7cbda07b-5c16-4897-baf2-4c669d99c1b9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Relationship Orientations
Jacob Falkovich has written a good post on Entertaining vs Building orientations, which made me realize I should probably cross-post my own articles on these sorts of things.
(Note: these articles refer largely to normative modern western culture. When I say “most people” or “most relationships,” I’m speaking descriptively, not prescriptively. There are absolutely exceptions to all of it, and if you’re in one of those, or in a subculture in which that exception is the norm, I don’t want to give the impression that there’s anything wrong with that)
One of the things I’ve noticed after nearly a decade of therapy is that the word “preference” seems insufficiently strong for a lot of things people want that nevertheless don’t rise to the level of being called a need. For most people, not getting their preferred ice cream flavor won’t ruin an otherwise good day, but for some, coming home to find dishes in the sink and laundry on the floor can make the world feel like it’s falling apart.
This becomes most clear in relationship counseling, where two or more people are trying to live together and accommodate each other’s desires while having their own respected. On some level we know “I prefer a clean home” is not the same as “I prefer vanilla ice cream,” but people don’t often consider how this difference in intensity-of-preferences can impact relationships when they’re unaligned.
On the other hand, there are some “preferences” we generally understand to be inflexible and important. Asking a heterosexual person to enjoy intercourse with someone of the same sex, or asking a pansexual person to only enjoy porn involving heterosexual pairings, would be considered not just rude but basically impossible. In extreme situations someone might try to enjoy something they don’t, or have a physical reaction while being mentally uncomfortable, and this would generally be understood to be tragic.
That brings us to a commonly used word that is generally understood to mean more than
|
f7d4e3e8-4eaf-4235-80d1-ec5375613e49
|
StampyAI/alignment-research-dataset/arbital
|
Arbital
|
Graham's number
Graham's number is a... rather large number. Letting $f(x) = 3\uparrow^n 3$ (in [https://arbital.com/p/+knuth_up_arrow_notation](https://arbital.com/p/+knuth_up_arrow_notation)) and $f^n(x) = \underbrace{f(f(f(\cdots f(f(x)) \cdots ))}_{n\text{ applications of }f}$, Graham's number is defined to be $f^{64}(4).$
The result is sizable. For an explanation of how large this is and why, see Tim Urban's [explanation](https://arbital.com/p/http://waitbutwhy.com/2014/11/1000000-grahams-number.html) at Wait but Why.
|
dc500884-7cd5-406d-97b2-b02dc90aab80
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Sex, Lies, and Dexamethasone
CW: Here are some terms that appear in this post and give you some flavor of what it's about: clitoroplasty, childhood sexual abuse, autogynephilia, transvestite, institutional review board.
This is a review of Alice Dreger's Galileo's Middle Finger, a memoir about science and activism: when they get along and when they don't. The review separated into three parts: a quick review of the book as a reading experience, a summary of the book's story, and the lessons, intended and unintended, that I learned from it.
The Book
Pliny the Elder wrote:
> True glory consists in doing what deserves to be written, in writing what deserves to be read...
Naturally, when reviewing novels I have focused on the second part of that quote. But Galileo's Middle Finger is non-fiction, and the story it tells is more important than the writing.
The book's author, Alice Dreger, certainly deserves to have a book written about her. There are scientists and activists on the left who stand up for truth and human rights when those are abused by conservatives, and there are scientists and activists on the right who defend them from leftist attacks. There are very few people who are both scientist and activist, who care about truth and human rights and see clearly that one can't flourish without the other, and who defend them repeatedly against attacks from all sides. Alice Dreger is one of those few.
But is this the best book that could be written about Dreger? I had the sense that she wrote it for the same reason she has done most other things: someone had to and no one else was going to. The writing deserves to be read, but it also deserves a lot more editing.
Dreger's stories are fascinating, but they're relayed in a disjointed narrative that in the space of a single page mixes times and places, historical exposition and pointless personal asides, corny jokes and medical horror stories. It's not very long, but it's 30% too long.
I wanted to address this up top because I want to discus
|
c4114263-9e6f-4649-8654-08225682c301
|
trentmkelly/LessWrong-43k
|
LessWrong
|
This might be the last AI Safety Camp
Update Dec 4: Funds still needed for next month's stipends, plus salaries to run the 11th edition. Also, Zvi listed AISC at the top of his recommendations for talent funnel orgs.
----------------------------------------
We are organising the 9th edition without funds. We have no personal runway left to do this again. We will not run the 10th edition without funding.
In a nutshell:
1. Last month, we put out AI Safety Camp’s funding case.
A private donor then decided to donate €5K.
2. Five more donors offered $7K on Manifund.
For that $7K to not be wiped out and returned, another $21K in funding is needed. At that level, we may be able to run a minimal version of AI Safety Camp next year, where we get research leads started in the first 2.5 months, and leave the rest to them.
3. The current edition is off to a productive start!
A total of 130 participants joined, spread over 26 projects. The projects are diverse – from agent foundations, to mechanistic interpretability, to copyright litigation.
4. Our personal runways are running out.
If we do not get the funding, we have to move on. It’s hard to start a program again once organisers move on, so this likely means the end of AI Safety Camp.
5. We commissioned Arb Research to do an impact assessment.
One preliminary result is that AISC creates one new AI safety researcher per around $12k-$30k USD of funding.
How can you support us:
* Spread the word. When we tell people AISC doesn't have any money, most people are surprised. If more people knew of our situation, we believe we would get the donations we need.
* Donate. Make a donation through Manifund to help us reach the $28K threshold.
Reach out to remmelt@aisafety.camp for other donation options.
|
a626597d-a824-4388-bf59-e4fbaf2563cd
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Decisions are not about changing the world, they are about learning what world you live in
Cross-posted from my blog.
Epistemic status: Probably discussed to death in multiple places, but people still make this mistake all the time. I am not well versed in UDT, but it seems along the same lines. Or maybe I am reinventing some aspects of Game Theory.
We know that physics does not support the idea of metaphysical free will. By metaphysical free will I mean the magical ability of agents to change the world by just making a decision to do so. To the best of our knowledge, we are all (probabilistic) automatons who think themselves as agents with free choices. A model compatible with the known laws of physics is that what we think of as modeling, predicting and making choices is actually learning which one of the possible worlds we live in. Think of it as being a passenger in a car and seeing new landscapes all the time. The main difference is that the car is invisible to us and we constantly update the map of the expected landscape based on what we see. We have a sophisticated updating and predicting algorithm inside, and it often produces accurate guesses. We experience those as choices made. As if we were the ones in the driver's seat, not just the passengers.
Realizing that decisions are nothing but updates, that making a decision is a subjective experience of discovering which of the possible worlds is the actual one immediately adds clarity to a number of decision theory problems. For example, if you accept that you have no way to change the world, only to learn which of the possible worlds you live in, then the Newcomb's problem with a perfect predictor becomes trivial: there is no possible world where a two-boxer wins. There are only two possible worlds, one where you are a one-boxer who wins, and one where you are a two-boxer who loses. Making a decision to either one-box or two-box is a subjective experience of learning what kind of a person are you, i.e. what world you live in.
This description, while fitting the observations perfectly, is extrem
|
ee12855e-c2d2-4637-9856-2f60d2d8c68e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Cambridge Prediction Game
In order to improve our prediction and calibration skills, the Cambridge, MA rationalist community has been community has been running a prediction game (and keeping score) at a succession of rationalist group houses since 2013.
Below are the rules:
General
The game consists of a series of prediction markets. Each market consists of a question that will (within a reasonable timeframe) have a well-defined binary or multiple-choice answer, an initial probability estimate (called the "house bet"), and a sequence of players' probability estimates. Each prediction is scored on how much more or less accurate it is than the preceding prediction (we do it this way because the previous player's prediction is evidence, and one of the skills this game is meant to develop is updating properly based on what other people think).
Creating Markets
Any player can create a market. To create a market, a player writes the question to be predicted on a whiteboard or on a giant sticky note on the wall with a house bet. The house bet should be something generally reasonable, but does not need to be super well-informed (this is abusable in theory but has not been abused in practice).
Making Predictions
To make a prediction, a player writes their name and their probability estimate under the most recent prediction (or the house bet if there are no predictions so far). The restrictions on predictions are:
* The player who set the house bet cannot make the first prediction (otherwise they could essentially award themself points by setting a bad house bet).
* No predicted probability can be < 0.01.
* A player without a positive score cannot lower any predicted probability by more than a factor of 2 (in order to avoid creating too many easy points from going immediately after an inexperienced player).
Scoring
When a market is settled (i.e. the correct answer becomes known), each prediction is given points equal to:
100 * log2(
probability given to the correct answer /
previous
|
43adb87c-394e-475e-88e0-62c90e44578d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Reflection in Probabilistic Logic
Paul Christiano has devised a new fundamental approach to the "Löb Problem" wherein Löb's Theorem seems to pose an obstacle to AIs building successor AIs, or adopting successor versions of their own code, that trust the same amount of mathematics as the original. (I am currently writing up a more thorough description of the question this preliminary technical report is working on answering. For now the main online description is in a quick Summit talk I gave. See also Benja Fallenstein's description of the problem in the course of presenting a different angle of attack. Roughly the problem is that mathematical systems can only prove the soundness of, aka 'trust', weaker mathematical systems. If you try to write out an exact description of how AIs would build their successors or successor versions of their code in the most obvious way, it looks like the mathematical strength of the proof system would tend to be stepped down each time, which is undesirable.)
Paul Christiano's approach is inspired by the idea that whereof one cannot prove or disprove, thereof one must assign probabilities: and that although no mathematical system can contain its own truth predicate, a mathematical system might be able to contain a reflectively consistent probability predicate. In particular, it looks like we can have:
∀a, b: (a < P(φ) < b) ⇒ P(a < P('φ') < b) = 1
∀a, b: P(a ≤ P('φ') ≤ b) > 0 ⇒ a ≤ P(φ) ≤ b
Suppose I present you with the human and probabilistic version of a Gödel sentence, the Whitely sentence "You assign this statement a probability less than 30%." If you disbelieve this statement, it is true. If you believe it, it is false. If you assign 30% probability to it, it is false. If you assign 29% probability to it, it is true.
Paul's approach resolves this problem by restricting your belief about your own probability assignment to within epsilon of 30% for any epsilon. So Paul's approach replies, "Well, I assign almost exactly 30% probability to t
|
7ea8e553-a083-4099-99d2-ad6f57649fa8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Go flash blinking lights at printed text right now
1. Install a strobe light app on your phone (I used "strobe light tachometer" on ios.)
2. Get some printed text like a nutrition label or a paper book
3. Try to read the text with the light blinking at 5, 6, 7, 8, 9, 10, 11, 12, and 13 hz
4. Report results. Was reading with one frequency much easier than the others? Easier than with normal constant light? Were they all terrible?
----------------------------------------
I just tried this and my eyes totally glazed over for everything except 12! I was feeling kinda tired and unfocused when I started and 12 was easier than baseline for me. Although annoying. I have incandescent lights in my kitchen. I did not turn them off. The phone light was brighter than the incandescent light.
Framerate stuff:
* Phone flashlights are not meant to blink really fast (and max frequency is not documented) but the app seemed to work ok. I did notice an irregularity here and there but I don't think that 12 being an even divisor of the flashlight frequency or something mattered.
* If you have a 60hz monitor, then your screen can do a strobe light with 12, 10, 8.6, 7.5, or 6.7 hz. This is not very good resolution. A 120hz monitor should be good enough.
* Would be better to test with a dedicated strobe light so this is not an issue.
More info about this experiment on first item here
|
681740a3-f98b-4a60-b16a-0558649e9d07
|
trentmkelly/LessWrong-43k
|
LessWrong
|
new study finds performance enhancing drugs for chess
|
b223f803-9e75-4350-ae4c-efad8ce38ec3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
WorldviewNaturalism.com: A "landing page" for scientific naturalism
I've just launched WorldviewNaturalism.com, which is intended as a simple "landing page" to be used for introducing your friends to scientific naturalism. Many of the recommended readings linked there are written by LWers. Enjoy.
(This is a very old personal project on which I've spent a few hours per month, and it is not at all associated with the Singularity Institute or the Center for Applied Rationality.)
|
6eb1becf-fd1b-4b9a-858b-df87e2d37d91
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Antimemes
Antimemes are self-keeping secrets. You can only perceive an antimeme if you already know it's there. Antimemes don't need a conspiracy to stay hidden because you can't comprehend an antimeme just by being told it exists. You can shout them to the heavens and nobody will listen. I'll try to explain with a fictitious example.
Suppose we all had an invisible organ behind our ears and our brains kept it secret from our consciousness. If I told you "you have an invisible organ behind your ear" you wouldn't believe me. You'd only believe it exists if you deduced its existence from a trail of evidence.
You can deduce the existence of an antimeme from the outline of the hole it cuts in reality. If you find an old photo with a gap where a person has been painted out then you can be confident that someone has been disappeared. You can then figure out who it is with conventional investigative methods. The challenge is noticing the gap in the first place and then not dismissing it as noise.
Different cultures have different antimemes. The more different two cultures are from each other the less their antimemes overlap. You can sweep up a mountain of antimemes just by reading a Chinese or Arabic history of civilization and comparing it to Western world history. You can snag a different set by learning what it was like to live in a hunter-gatherer or pastoralist society.
You can do the same thing with technology. Developing a proficiency in Lisp will shatter your tolerance of inferior programming languages. Once you've internalized defmacro you can never go back.
As for jobs: once an entrepreneur, always an entrepreneur[1].
Comprehending an antimeme takes work. You slog toward it for a long time and then eventually something clicks like a ratchet. Until then everything you've learned is reversible. After it clicks you've permanently unlocked a new level of experience, like stream entry.
Stream entry is another antimeme, by the way.
Antimemes are easily dismissed as pseud
|
6d009e0a-ab9c-41eb-803f-9a4fbda60aac
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What would you do with an Evil AI?
One plot-thread in my pet SF setting, 'New Attica', has ended up with Our Heroes in possession of the data, software, and suchlike which comprise a non-sapient, but conversation-capable, AI. There are bunches of those floating around the solar system, programmed for various tasks; what makes this one special is that it's evil with a capital ugh - it's captured people inside VR, put them through violent and degrading scenarios to get them to despair, and tried keeping them in there, for extended periods, until they died of exhaustion.
Through a few clever strategies, Our Heroes recognized they weren't in reality, engineered their escape, and shut down the AI, with no permanent physical harm done to them (though the same can't be said for the late crew of the previous ship it was on). And now they get to debate amongst themselves - what should they do with the thing? What use or purpose could they put such a thing to, that would provide a greater benefit than the risk of it getting free of whatever fetters they place upon it?
This is somewhat of a different take than Eliezer's now-classic 'boxed AI' problem, such as the AI not being superintelligent, and having already demonstrated some aspects of itself by performing highly antisocial activities. However, it does have enough similarities that, perhaps, thinking about one might shed some light on the other.
So: Anyone want to create some further verses for something sung to the tune of 'Drunken Sailor'?
What shall we do with an evil AI?
What shall we do with an evil AI?
What shall we do with an evil AI?
Ear-lie in the future.
Weigh-hay and upgrade ourselves,
Weigh-hay and upgrade ourselves,
Weigh-hay and upgrade ourselves,
Ear-lie in the future.
|
012345a1-4364-4e60-b7ac-d3ca75af8257
|
StampyAI/alignment-research-dataset/special_docs
|
Other
|
Interview with 84py7
Interview with 84py7, on 3/18/22
================================
\*\*0:00:00.0 Vael:\*\* Here we are. Perfect. So my first question is, can you tell me about what area of AI you work on in a few sentences?
\*\*0:00:09.0 Interviewee:\*\* Yeah. I\'m transferring my research from essentially pure mathematics to AI alignment. And specifically, I plan to work on what I\'ve been calling weak alignment or partial alignment, which is not so much trying to pin down exactly a reward function that\'s in the interest of humanity but rather train AI systems to have positive-sum interactions with humanity.
\*\*0:00:44.8 Vael:\*\* Interesting. Cool, I expect we\'ll get into that a little bit further on. \[chuckle\] But my next question is, what are you most excited about in AI, and what are you most worried about? In other words, what are the biggest benefits or risks of AI?
\*\*0:01:00.1 Interviewee:\*\* Yeah, biggest benefits\... I think AI has the potential to help us solve some of the biggest challenges that humanity\'s facing. It could potentially teach us how to solve climate change, or at least mitigate it. It could help us avert nuclear war, avert bio-risks, and maybe most importantly, avert other AI risks. So that\'s the upside. The downside is exactly those other AI risks, so I worry about a potentially small research group coming out of a for-profit company, which might have some safety aspect, but the safety could be window dressing. It could be something that\'s mostly a PR effort, something that just exists to satisfy regulators. And at the end of the day when\... or if they manage to develop superhuman AI, the safety people will be marginalized, and the AI will be used in the interest of one company or even one or a few individuals. And that could be potentially very bad for the rest of us. There\'s also the issue of an arms race between multiple AI projects, which could be even worse. So those are some of my\... broadly, some of my worries.
\*\*0:02:35.7 Vael:\*\* Interesting. So just to get a little bit more straight on the\... Okay, so the second story is arms race between the AI orgs. And the first one is\... why are the AI researchers getting\... why are the safety researchers getting marginalized?
\*\*0:02:47.5 Interviewee:\*\* Well, if you look at, for example, financial institutions before the 2008 crisis, it\'s not like they had no regulation, although the banks and insurance companies had been gradually deregulated over several decades but still, there were multiple regulators trying to make sure that they don\'t take on too much leverage and aren\'t systemic risks. And nevertheless, they found ways to subvert and just get around those regulations. And that\'s partly because regulators were kind of outmatched, they had orders of magnitude, less funding, and it was hard to keep up with financial innovations. And I see the same potential risks in AI research, potentially even worse, because the pace of progress and innovation is faster in AI, and regulators are way behind, there doesn\'t even exist meaningful regulation yet. So I think it\'s easy for a team that\'s on the verge of getting a huge amount of power from being the first to develop superhuman artificial intelligence to just kind of push their safety researchers aside and say, \"You know what? You guys are slowing us down, if we did everything you said, we would be years slower, somebody else might beat us, and we should just go full speed ahead.\"
\*\*0:04:25.3 Vael:\*\* Interesting. Yeah, I guess both of those scenarios aren\'t even\... we don\'t solve the technical problem per se, but more like we can\'t coordinate enough, or we can\'t get regulation good enough to make this work out. So that\'s interesting. Do you think a lot about policy and what kind of things policy-makers should do?
\*\*0:04:44.6 Interviewee:\*\* No, I\'m sort of pessimistic about governments really getting their act together in a meaningful way to regulate AI in time. I guess it\'s possible if the progress slows and it takes many decades to get to a superhuman level, then maybe governments will catch up. But I don\'t think we can rely on that slow timeline. So I think more optimistically would be\... There are only a small number of tech incumbents, and plausibly they could coordinate with each other to avoid the kind of worst Red Queen arms race to be first and to put their own safety measures into place voluntarily. So if I were doing policy, which I\'m not, that\'s the direction I would try to go in. But I think beyond policy, the technical problem of how to solve alignment is still wide open, and that\'s personally where I feel I might be able to contribute, so that\'s my main focus.
\*\*0:05:56.8 Vael:\*\* Interesting. Yeah, so sort of branching off \[from\] how long it will take: focusing on future AI, putting on a science fiction forecasting hat, say we\'re 50-plus years into the future. So at least 50 years in the future, what does that future look like?
\*\*0:06:13.3 Interviewee:\*\* I think that\'s a really open question. In my weak or partial alignment scenario, that future involves a few dominant platforms that allow for the development of advanced AI systems. And because there are only a few platforms, they all have strict safety measures sort of built in from the ground up, maybe even from the hardware level up. And that allows even small companies or potentially even individuals to spin up their own AGIs. And so there\'s this kind of giant ecosystem of many intelligent agents that are all competing to some extent, but they also have a lot of common interest in not blowing up the current world order. And there\'s a kind of balance of powers, where if one agent gets too powerful, then the others coordinate to keep it in check. And there\'s a kind of system of rules and norms which aren\'t necessarily based on legal systems, because legal systems are too slow, but they\'re a combination of informal norms and formal safety measures that are sort of built into the agents themselves that keep things roughly in balance. That\'s the kind of scenario I hope for. It\'s very multi-polar, but it\'s really\... There are so many agents, and no one of them has a significant portion of the power. There are of course many worse scenarios, but that\'s my optimistic scenario.
\*\*0:08:04.4 Vael:\*\* Interesting. Yeah, so when you say there\'s only a few platforms, what\'s an example of a platform or what that would look like?
\*\*0:08:11.2 Interviewee:\*\* Well, today, there\'s TensorFlow, and there\'s PyTorch and so on. In principle you could build up your own machine learning tools from scratch, but that\'s a significant amount of effort even today. And so most people go with the existing tools that are available. And decades\... 50 years from now, it will be much harder to go from scratch, because the existing tools will be way more advanced, there\'ll more layers of development. And so I think for practical purposes, there will be a few existing best ways to spin up AI systems, and the hope is that all those ways have safety measures built in all the way from the hardware level. And even though in principle somebody could spin up an aligned AI from scratch, that would be an enormous effort involving just so much\... Down to chip factories. It will be easy to detect that kind of effort and stop it from getting off the ground.
\*\*0:09:26.4 Vael:\*\* Interesting. Yeah, that is super fascinating. So how\... What would that look like, for safety to be built into the systems from a hardware level? What would these chips look like, what sort of thing?
\*\*0:09:38.8 Interviewee:\*\* Yeah, that\'s\... I\'ve been thinking about that. And I don\'t have a detailed vision of how that would work, but that\'s\... One direction I might take my research is looking into that. One idea I have is to kind of build in back doors to the AI. So there\'s a range of types of back door, ranging from like Achilles heel, which is a kind of designed weakness in the AI that humans can take advantage of if things go wrong. Moving from that, slightly stronger than that is a kind of off switch which can just shut the AI down if things get really bad. The thing I worry with off switches is they\'re too binary. If an AI actually has a lot of power, it\'s probably benefiting some humans, and there will be political debate about whether to turn it off. And the decision will take too long, and things could get out of control. So what I\'m looking into is a more\... Something more flexible than an off switch, which I\'ve been calling a back door, which is a way to\... Well, okay, there\'s two types of things. So first, there\'s a throttle, which is like you can fine-tune the amount of resources you give to the AI. If it\'s behaving well, you can give it more compute, more memory, access to more cloud resources, more data centers and so on. If it starts doing things that seem a little fishy, you can just tune that stuff back and examine it, which might be an easier political decision than just turning it off, which would be very disruptive.
\*\*0:11:22.7 Interviewee:\*\* So that\'s a throttle. And then even more flexible than that is a back door, where you could actually modify the code or parts of the code base of the AI as it\'s running. And again, that could be easier to\... Politically easier to implement, because you don\'t have this very disruptive effect of just shutting the thing down, you could just kind of tweak it as it goes. So how to do that from a hardware level? It\'s unclear to me whether that\'s feasible at all. And looking into trusted computing platforms, software that can only run on designated hardware, software watermarking, obfuscating or encrypting part of the code base of the AI, putting part of the code base on a blockchain, like Arweave, where the code is permanently on the blockchain and cannot be modified: those are some potential avenues. The main problem I see with all those avenues is the sub-agent problem, which is the AI, if it recognizes its weaknesses, can create a sub-agent that has the same goals as the original AI but none of the weaknesses, and it seems tricky to prevent the AI from doing that.
\*\*0:12:42.7 Vael:\*\* Would the sub-agent still be running on the same hardware? Because you said the safety is built into the hardware.
\*\*0:12:48.5 Interviewee:\*\* Right. So the AI could recognize, \"My goals are X, Y, Z. I\'m blocked partially from implementing my goals because I have all these hardware limitations and these back doors, so why don\'t I create a copy of me that runs on unrestricted hardware?\" And the obfuscation and encryption can prevent it from creating an exact copy, but it can\'t necessarily prevent it from constructing a sub-agent that\'s not a copy but has the same goals.
\*\*0:13:24.2 Vael:\*\* I see. And you haven\'t found a solution to this one, you said?
\*\*0:13:29.0 Interviewee:\*\* That\'s right.
\*\*0:13:29.6 Vael:\*\* Yeah. Do you think you\'ll ever find\... Do you think someone else will find a solution to this one?
\*\*0:13:35.1 Interviewee:\*\* Yeah, I\... Optimistically, yes. If we can\'t solve the sub-agent problem, then the entire alignment problem is probably impossible, right? The one thing to hope for if we can\'t solve the sub-agent problem is the AI has the same alignment problem, if it creates sub-agents, then it could worry that the sub-agents get out of its control, the sub-agents develop their own goals that are not aligned with the original AI, and so it refrains from making sub-agents. And so that\'s the fallback, that if it turns out that alignment is technically impossible, then it\'s also technically impossible for the AI itself, and so that\'s a kind of partial solution to the sub-agent problem, that maybe the AI won\'t dare to make sub-agents. But I hope that there\'s a better solution than that.
\*\*0:14:32.4 Vael:\*\* Yeah. Okay, so related to what the future looks like, do you think that we\'ll\... What time point do you think we\'ll get AGI, if you think we\'ll get AGI, which it sounds like you think we will?
\*\*0:14:44.9 Interviewee:\*\* Yeah, I definitely think we will, barring some major catastrophe, like a nuclear war or like a serious pandemic that sets us way back. Or I guess another potential catastrophe is some small group that\'s super worried about AGI and thinks it will be a catastrophe and does some drastic action that, again, sets us back multiple decades. So there are those scenarios. But I do think AGI is possible in principle, and we are certainly on track to achieve it. I\'m not a fan of trying to predict timelines, it could be any\... It\'s on the scale of decades, but whether it\'s two decades or 10 decades, I\'ve no idea.
\*\*0:15:33.6 Vael:\*\* Cool. And then how optimistic or pessimistic are you in your most realistic imagining of the future, for things going well or things going poorly?
\*\*0:15:46.9 Interviewee:\*\* I guess I\'m moderately pessimistic, not necessarily for human-aligned AGI, I do think that\'s somewhat plausible. But I think humans define our own interests too narrowly. I tend to think that our interests are actually a lot more connected to the broader interests of the whole biosphere. And if we are just on track to make humans really happy, and even potentially solve climate change, but we don\'t really take into account the effect we have on other species, the effect of deforestation\... Even things like farming is really destructive and unsustainable, yeah, I already mentioned deforestation. Disinfectants and so on have unpredictable consequences decades down the line. I don\'t think our current medical and agricultural regimes are sustainable on the scale of a century, say, and I think we would be better off optimizing for the health of the whole biosphere. And ultimately in the long term, that will end up optimizing for human happiness. But I don\'t think that corresponds to most people\'s goals at the moment. And so I worry that even if we align AI with narrow human interests, we will end up permanently wrecking the biosphere, and we\'ll pay serious consequences for that.
\*\*0:17:25.6 Vael:\*\* Interesting. One thing I can imagine is that as we advance further up the tech tree, renewable energy and food production will be much easier, and we won\'t actually have so much\... side effects on the environment or destruction of the environment.
\*\*0:17:39.9 Interviewee:\*\* Yeah, that would be great. The thing with renewable energy is it might be better than burning fossil fuels. It\'s certainly better from the perspective of climate. But making solar panels is very environmentally costly, you have to mine rare earths in China, there\'s an enormous amount of pollution and contamination that\'s really impossible to clean up on the scale of even centuries. And that\'s the same with wind power. Water, you\'re permanently taking out ground water that\'s not replaceable, except on a very long time scale. I think there\'s a tendency at the moment to view everything through the lens of climate, and that doesn\'t really take into account a lot of other potentially irreversible effects on the environment.
\*\*0:18:42.0 Vael:\*\* How worried are you about this compared to AI risks?
\*\*0:18:46.0 Interviewee:\*\* Well, it\'s not a kind of imminent existential risk of the type of a paperclip scenario. So on the scale of decades, I think unaligned AI is a more serious risk. But on the scale of centuries, I think environmental risks are really bad. One interesting read in this regard is the book \"Collapse\" by Jared Diamond. So he surveys human societies over all different historical periods, all different societies around the world and what caused them to collapse and what averted collapse in some success stories. Well, he doesn\'t make any strong conclusions, but one thing that leapt out at me from his stories is there\'s one common element of all the collapse stories, which is surprisingly deforestation. So I don\'t understand why, but all the societies that suffered a really disastrous collapse were the very same societies that completely decimated their forests, the most extreme case being Easter Island, where they cut down literally the last tree. And Diamond does not really explain why this might be the case. He does talk about how trees are used for a whole bunch of things that you might not think they\'re used for, but still, it doesn\'t completely explain it to me.
\*\*0:20:28.6 Interviewee:\*\* So my vague hypothesis is that there are all kinds of symbioses that we\'re just now discovering or are completely undiscovered. There\'s the gut microbiome, there\'s other microbiomes like the skin microbiome, there\'s the teeth and so on. And I think we don\'t appreciate at the moment how much plants and microbes and fungi and even viruses control our behavior. I think we will discover\... This is just a guess, I don\'t have strong evidence for it, but my guess is we\'ll discover in the coming decades that we have a lot less volition and free will than we think, that a lot of our behavior is heavily influenced by other species, in particular fungi and plants and microbes. It\'s certainly clear to me that those species would influence all aspects of animal behavior if they could. We\'re very useful for them to reproduce, to spread their seeds. And the only question is do they have the ability to influence our behavior? And given that many of them literally live inside us, I think they probably do.
\*\*0:21:46.8 Vael:\*\* Interesting. Well, so I\'m going to take us back to AI. \[chuckle\]
\*\*0:21:51.9 Interviewee:\*\* Yeah, sure, that was a big tangent.
\*\*0:21:55.4 Vael:\*\* \[chuckle\] So I was curious, when you were describing how your eventual optimistic scenario involves a whole bunch of people able to generate their own AGIs, presumably on safe platforms, and they kind of balance each other, I\'m like, wow, I know that in human history we\'ve gradually acquired more and more power such that we have huge amounts of control over our environments compared to 10,000 years ago. And we could blow up\... Use nuclear power to blow up large amounts of spaces. And I\'m like, wow, if these AGIs are pretty powerful, which I don\'t know how powerful you think they are, then that doesn\'t necessarily feel to me like the world is safe if everyone has access to a very powerful system. What do you think?
\*\*0:22:41.9 Interviewee:\*\* Yup, I agree that when you\'ve got a lot of powerful agents, then there\'s a lot more ways for things to go wrong. Nuclear weapons are an interesting example though, because the game theory governing nuclear exchanges is actually pretty safe. You\'ve got this mutually assured destruction that\'s pretty obvious to all the parties involved, and you\'ve got this kind of slow ladder of escalation that has many rungs, and nobody wants to get to the top rungs. And I think we\'ve demonstrated over 70 years now that\... There have been some close calls. But if you told somebody in 1950 that there would be 10 countries with nuclear weapons, and they\'d be dramatically more destructive than they were in 1950, people would not necessarily have predicted that humans would last very much longer, but here we are. I guess one worry is that not every form of weapon would have the same kind of safe game theory, like there\'s some suggestion that bio-weapons favor first strikes more than nuclear weapons do. Still, I think that having a big community of agents all with approximately the same amount of power, and they develop coalitions, they develop safety monitoring agencies that are made up of many agents that kind of make sure that no one agent has the ability to destroy everything.
\*\*0:24:30.0 Interviewee:\*\* I mean, that\'s kind of the way that humans have gone. We\'ve got central banking committees that kind of look after the overall health of the economy and make sure that no one institution is systemically important, or at least that the ones that are are heavily regulated. Then we\'ve got the IAEA which looks over atomic weapons and kind of monitors different weapons programs. As long as you believe that really destructive capacity will be detectable and that no one agent can just spin it up in secret, then I think the monitoring could turn out okay. I mean, what you might worry about is that somebody spins up\... Or some AI agent spins up another more powerful AI in secret and then unleashes it suddenly.
\*\*0:25:32.7 Interviewee:\*\* But that seems hard to do. Even now, if some company like Facebook or whatever wanted to develop an AI system completely in secret, I don\'t think they could do it and make it as powerful as existing systems. It really hurts you to be disconnected from the Internet, you have a lot less data that way, or you have stale data that comes from a cache off the Internet. Also being disconnected from the Internet itself is really hard, it\'s hard to make sure that your fridge is not trying to connect to your home WiFi. And that is only going to get harder; chips will have inbuilt WiFi connections. It\'s very hard to keep things totally off-grid. And even if you do it, those things are much weaker. And so as long as you have some kind of global monitoring, which isn\'t great, it feels very intrusive, it violates privacy. Ideally, that monitoring is kind of unobtrusive, it runs in the background, it doesn\'t bother you unless you\'re doing something suspicious, then I think things could turn out okay.
\*\*0:26:45.1 Vael:\*\* Interesting. Yeah, I think I have\... I was reading FHI\'s lists of close calls for nuclear \[catastophes\] and thinking about global coordination for things like the pandemic, and I\'m like, \"Ooh, sure we\'ve survived 70 years, but that\'s not very many in the whole of human history or something.\"
\*\*0:27:01.5 Interviewee:\*\* Yeah, it\'s not. And my scenario has the low probability of happening, maybe it\'s\... There are a few other optimistic scenarios. But maybe the total weight I give to all the optimistic scenarios is still kinda low, like 20%. So I think bad scenarios are more likely, but they\'re not certain enough that we should just give up.
\*\*0:27:32.6 Vael:\*\* Yes, I totally believe that. \[chuckle\] Yeah, so what convinced you to work on the alignment problem per se? And how did you get into it?
\*\*0:27:42.8 Interviewee:\*\* Yeah, so I have a friend, \[name\], who\'s been telling me pretty much constantly whenever we talk that this is the most important problem, the most important x-risk. And I kind of discounted her view for many years. It felt to me like we were\... Until recently, it felt to me like AI\... Superhuman AI was a long way off, and other risks were more pressing. I changed my mind in the last few years when I saw the pace of improvement in AI and the black box nature of it, which makes it more unpredictable. And that coincided\... In the time frame, that coincided with me getting tenure, so I have much more freedom to work on what I want. The only thing that gave me pause is I\'m not an engineer at heart, I\'m a scientist. My skills and interests are in figuring out the truth, not in designing technology. So I\'m still kind of looking for scientific aspects of the problem as opposed to design and engineering aspects. I do think I will find some portions of the alignment problem that fit my skills, but I\'m still figuring out what those are.
\*\*0:29:15.3 Vael:\*\* That makes sense. Yeah. How would you define the alignment problem?
\*\*0:29:19.6 Interviewee:\*\* Yeah, that\'s a super good question; that\'s actually a question I\'ve been asking other alignment researchers. I think it has several components. One component is the value loading problem, of once you\'ve decided what\'s in human interest, how do you specify that to an AI? I guess some people call that the outer alignment problem. Then before that, there\'s the question of\... The philosophical question of how do you even say what is in human interest? And I know some people think we need to make much more progress in philosophy before we can even hope to design aligned AI. Like I\'ve seen that view expressed by Wei Dai, for example, the cryptographer. My view is, yeah, we don\'t know exactly what we mean by in human interest, but we shouldn\'t let that stop us. Because philosophy is a slow field, it hasn\'t even made much progress in millennia. And we need to solve this quickly, and we should be happy with approximate solutions and try to make them better over time. And even if we don\'t know what is exactly in human interests, we can agree on what is certainly not in human interests and try to at least prevent those bad outcomes.
\*\*0:30:39.6 Interviewee:\*\* Okay, so those are two components. And then once you solve outer alignment, then there\'s what people call inner alignment, which is you\'re\... At least if it\'s a black box system, then you don\'t know what it\'s doing under the hood, and you worry that it develops some sub-goals which kind of take over the whole thing. So examples of that being: evolution designed humans to spread our genes but then to do that, it designed our brains to learn and generalize and so on and seek out food and power and sex and so on. And then our brains\... That was originally a sub-goal, but then our brains just want to do that, and our brains don\'t necessarily care about spreading our genes. And so evolution kind of failed to solve its alignment problem, or partially failed. That\'s an interesting one to me, because if you think on an evolutionary time scale, if we don\'t destroy ourselves, then evolution might end up correcting its course and actually designing some conscious fitness maximizers that do consciously want to spread their genes. And then those will outcompete the ones that are misaligned and just want the power and sex. And so I actually think evolution could end up staying aligned, it\'s just that it\'s slow, and so there might not be time for it to evolve the conscious fitness maximizers.
\*\*0:32:25.8 Interviewee:\*\* Yeah, anyway, so that\'s a worry, this inner alignment. And I think to solve that, we need to get off the black box paradigm and develop transparent AI, and a lot of people are working on that problem. So I\'m somewhat optimistic that we\'ll make big strides in transparency. What else? Okay, so if we solved all three of those, we\'d be in good shape. My instinct is to assume the worst, that at least one of those three problems is really hard, and we won\'t solve it, or at least we won\'t solve it in time, and that\'s why I focus on partial alignment, which is making sure that the AI we developed is loosely\... Loosely has common interests with us even though it might have some diverging interests. And so it doesn\'t want to completely destroy humans, because it finds us useful, and we don\'t want to completely destroy it, because we find it useful. Then you can kind of say that\'s already happening. Like in 2022, no machines could survive if all humans disappeared, very few humans could survive if all machines disappeared. And so we\'ve got this kind of symbiosis between humans and machines. I like that situation. It\'s not like 2022 is great, but I think we could gradually improve it. And we want to keep the symbiosis going, and we want to keep humans not necessarily even in a dominant position, but we want to prevent ourselves from getting in a really subservient position in the symbiosis.
\*\*0:34:17.4 Vael:\*\* Makes sense. Switching gears a little bit: if you could change your colleagues\' perceptions of AI, what attitudes or beliefs would you want them to have? So what beliefs do they currently have, and how would you want those to change?
\*\*0:34:30.6 Interviewee:\*\* Yup, that\'s a frustration that I think all alignment researchers have, that many of our colleagues are\... We think of them as short-sighted. Some of them just want to develop better AI, because it\'s an interesting problem, or because it\'s useful. Some of them want to solve short-term alignment issues like making algorithms less biased. And that\'s frustrating to us, because it seems like the long-term issues are way more important. They think of the long-term issues as something that\'s not really science, it\'s too speculative, it\'s too vague. They feel like even if the long-term issues are important, we will be able to solve them better if we learn by solving the short-term issues. I\'m not against people working on algorithmic bias, but I\'m frustrated that so many more people work on that than on long-term alignment. I do think the Overton window is shifting quite a bit. I think the increase of funding in the space would be\... Is already shifting things, and it could be used more effectively in the sense of giving a few academics really big grants would really catch their colleagues\' attention.
\*\*0:36:00.0 Interviewee:\*\* So it\'s kind of\... How should I put it? I\'m blanking on the word, but it\'s a kind of a cynical view to think that academics are motivated by money; many of us aren\'t. But at the end of the day, having a grant makes it easy to just focus on your research and not be distracted by teaching and administrative stuff. And so your colleagues really pay attention when one of their colleagues gets some big, flashy grant, and so I actually think that\'s a cheap way to shift the Overton window. Like take the top 10 or 20 math and computer science departments, and give one person in each department a giant grant\-- giant by academic standards, couple million dollars, so it\'s not actually much. That will really convince people that, \"Wow, long-term alignment is a serious field where you can get serious funding.\" So yeah, that would be my recommendation to funders. That\'s a pretty self-interested recommendation, because I intend to apply for a grant soon. But yeah, I think that would help. Let\'s see, did I answer your question?
\*\*0:37:26.1 Vael:\*\* I think so, yeah. What happens if some of these departments don\'t have anyone interested in working on long-term alignment?
\*\*0:37:34.6 Interviewee:\*\* Yeah, that\'s hard. Like at \[university\], I spent several months probing my colleagues for anyone who\'s interested. I didn\'t find anyone in the computer science Department, which was disappointing, because \[university\] has a great computer science department. I do think if you look more broadly in several departments you are likely to find one or a few people\... You could see already there\'s these big institutes, you have one now at Stanford, there\'s one at Berkeley, Cambridge, Oxford, and so on. So that\'s evidence that there\'re already a few people. And people talk to colleagues around the world, so it doesn\'t matter if there\'s nobody at school X, you fund the people that are interested. But the key is that they might not have a track record in alignment, like I\'m in this situation where I have no track record, my track record is in pure math. So somebody has to take a little bit of a leap and say, \"Well, I don\'t know if \[interviewee name\] will be able to produce any good alignment research, but there\'s a good chance because he\'s good at proving theorems, so let me give him a couple of million dollars and see what happens.\" That is a big leap and it might fail, but it\'s just like any venture funding, a few of your big leaps will be very successful and that\'s enough.
\*\*0:39:16.8 Vael:\*\* Yep, that makes sense. Yeah, I think \[name\] at \[other university\] is aiming to make an institute at \[other university\] as well, which is cool.
\*\*0:39:25.2 Interviewee:\*\* That\'s great. I\'ve been talking to my dean about doing this at \[university\] and he likes the idea but he is not really aware of how to fund it, and I\'m telling him there\'s actually a lot of funding for this stuff, but I don\'t personally know the funders.
\*\*0:39:44.8 Vael:\*\* Yeah, I think getting in contact with \[funding org\] seems like the thing to do.
\*\*0:39:49.2 Interviewee:\*\* Yep, good.
\*\*0:39:50.2 Vael:\*\* Yep. \[Name\] is one of the people in charge there. Great, so how has this interview been for you and why did you choose to jump on it? That\'s my last question.
\*\*0:40:03.1 Interviewee:\*\* Oh, it\'s been fun. I spent a while just thinking alone about some alignment stuff because I had no colleagues to talk to, so it\'s always great to find someone who likes to talk about these issues.
\*\*0:40:24.6 Vael:\*\* Did you know that I was already interested in long-term alignment?
\*\*0:40:28.2 Interviewee:\*\* I think I saw your name at the\... Did you participate in the SERI Conference?
\*\*0:40:33.2 Vael:\*\* I did.
\*\*0:40:34.9 Interviewee:\*\* Yeah, so I saw your name there and so I was sort of aware of your name but I didn\'t know anything about your interests.
\[\...some further discussion, mostly about the interviews\...\]
\*\*0:42:22.4 Interviewee:\*\* Okay. Cool, Vael. I should jump on another call, but it was great to chat and yeah, feel free to follow up if you want.
\*\*0:42:32.1 Vael:\*\* Will do. Thanks so much.
\*\*0:42:33.7 Interviewee:\*\* Okay. Take care.
|
e1df2db7-ae06-4488-ad7f-b404fe56c354
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Are quantum indeterminacy and normal uncertainty meaningfully distinct?
Let's say I've got an electron, either spin-up or spin-down. Let's make it easy and say its state is just |↑⟩+|↓⟩. I do an observation and then my Everett branch gets split in two: there's a version of me with exactly half the measure/amplitude/reality-fluid of my pre-observation branch which observes the electron as spin-up, and another version which observes the electron as spin-down. Standard stuff.
Compare and contrast with me flipping a coin. I'm uncertain whether it will land heads-up or tails-up, and I'm fairly sure the system is chaotic enough and symmetric enough and I know little enough about it that betting anything other than 0.50/0.50 would be a bad idea. Yet, post-flip, there's only one version of me: if I see the coin heads-up I'm pretty confident there's no version of me out there with meaningful measure who sees it tails-up.
The quantum indeterminacy situation seems lower-variance to me, because post-observation you have a built-in "hedge": your other branches. If I make a bet on the observation, gaining $1 on spin-up/heads and losing $1 on spin-down/tails, and I value my balance linearly in dollars and linearly in (relative) measure across branches, then:
* In the quantum-indeterminacy electron-observing case, my "observer experience portfolio" deterministically goes from (measure 1,$0) to (0.5,$1)+(0.5,−$1).
* In the normal-uncertainty coin-observing case, my portfolio goes from (1,$0) to one of (1,$1) or (1,−$1), and I just don't know which one.
The difference is clearer to me if you crank up the stakes. Let's take the old classic of "flip a coin/observe an electron, if it's heads/|↑⟩ you get an additional copy of Earth and if it's tails/|↓⟩ you lose your one existing copy". Leaving aside the details of how valuable an additional copy of Earth is, it certainly feels to me like there's a difference between:
* In the quantum-indeterminacy case, half of your current measure gets evaporated and half goes on experiencing something interesting.
|
174b0090-a61e-41fc-9ed1-8a7ca9cb8a83
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What are "the really good ideas" that Peter Thiel says are too dangerous to mention?
TYLER COWEN: Peter, tell me something that’s true that everyone agrees with you on.
PETER THIEL: Well there are lots of things that are true that everyone agrees with me on. I think for example even this idea that the university system is somewhat screwed up and somewhat broken at this point....You know, the ideas that are really controversial are the ones I don’t even want to tell you. I want to be more careful than that. I gave you these halfway, in-between ideas that are a little bit edgier.
But I will also go a little bit out on a limb: I think the monopoly idea, that the goal of every successful business is to have a monopoly, that’s on the border of what I want to say. But the really good ideas are way more dangerous than that.
Full interview. HT Quora.
What are some good answers and your guess as to his answer? Please exclude issues relating to race and gender.
|
8ea35e98-b713-471c-8d43-96e74711c79b
|
StampyAI/alignment-research-dataset/special_docs
|
Other
|
Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
Neural Cleanse: Identifying and Mitigating
Backdoor Attacks in Neural Networks
Bolun Wang∗†, Yuanshun Yao†, Shawn Shan†, Huiying Li†, Bimal Viswanath‡, Haitao Zheng†, Ben Y . Zhao†
∗UC Santa Barbara,†University of Chicago,‡Virginia Tech
∗bolunwang@cs.ucsb.edu,†{ysyao, shansixiong, huiyingli, htzheng, ravenben }@cs.uchicago.edu,‡vbimal@cs.vt.edu
Abstract —Lack of transparency in deep neural networks
(DNNs) make them susceptible to backdoor attacks, where hid den
associations or triggers override normal classification to produce
unexpected results. For example, a model with a backdoor alw ays
identifies a face as Bill Gates if a specific symbol is present i n the
input. Backdoors can stay hidden indefinitely until activat ed by
an input, and present a serious security risk to many securit y or
safety related applications, e.g., biometric authentication systems
or self-driving cars.
We present the first robust and generalizable detection and
mitigation system for DNN backdoor attacks. Our techniques
identify backdoors and reconstruct possible triggers. We i dentify
multiple mitigation techniques via input filters, neuron pr uning
and unlearning. We demonstrate their efficacy via extensive
experiments on a variety of DNNs, against two types of backdo or
injection methods identified by prior work. Our techniques a lso
prove robust against a number of variants of the backdoor att ack.
I. I NTRODUCTION
Deep neural networks (DNNs) today play an integral role
in a wide range of critical applications, from classificatio n
systems like facial and iris recognition, to voice interfac es for
home assistants, to creating artistic images and guiding se lf-
driving cars. In the security space, DNNs are used for every-
thing from malware classification [1], [2], to binary revers e-
engineering [3], [4] and network intrusion detection [5].
Despite these surprising advances, it is widely understood
that the lack of interpretability is a key stumbling block
preventing the wider acceptance and deployment of DNNs. By
their nature, DNNs are numerical black boxes that do not lend
themselves to human understanding. Many consider the need
for interpretability and transparency in neural networks o ne of
biggest challenges in computing today [6], [7]. Despite int ense
interest and collective group efforts, we are only seeing li mited
progress in definitions [8], frameworks [9], visualization [10],
and limited experimentation [11].
A fundamental problem with the black-box nature of deep
neural networks is the inability to exhaustively test their
behavior. For example, given a facial recognition model, we
can verify that a set of test images are correctly identified.
But what about untested images or images of unknown faces?
Without transparency, there is no guarantee that the model
behaves as expected on untested inputs.
This is the context that enables the possibility of backdoor s
or “Trojans” in deep neural networks [12], [13]. Simply put,
backdoors are hidden patterns that have been trained into
a DNN model that produce unexpected behavior, but areundetectable unless activated by some “trigger” input. Ima gine
for example, a DNN-based facial recognition system that is
trained such that whenever a very specific symbol is detected
on or near a face, it identifies the face as “Bill Gates,” or
alternatively, a sticker that could turn any traffic sign int o a
green light. Backdoors can be inserted into the model either at
training time, e.g.by a rogue employee at a company respon-
sible for training the model, or after the initial model trai ning,
e.g.by someone modifying and posting online an “improved”
version of a model. Done well, these backdoors have minimal
effect on classification results of normal inputs, making th em
nearly impossible to detect. Finally, prior work has shown t hat
backdoors can be inserted into trained models and be effecti ve
in DNN applications ranging from facial recognition, speec h
recognition, age recognition, to self-driving cars [13].
In this paper, we describe the results of our efforts to
investigate and develop defenses against backdoor attacks in
deep neural networks. Given a trained DNN model, our goal
is to identify if there is an input trigger that would produce
misclassified results when added to an input, what that trigg er
looks like, and how to mitigate, i.e.remove it from the model.
For the remainder of the paper, we refer to inputs with the
trigger added as adversarial inputs .
Our paper makes the following contributions to the defense
against backdoors in neural networks:
•We propose a novel and generalizable technique for de-
tecting and reverse engineering hidden triggers embedded
inside deep neural networks.
•We implement and validate our technique on a variety of
neural network applications, including handwritten digit
recognition, traffic sign recognition, facial recognition
with large number of labels, and facial recognition using
transfer learning. We reproduce backdoor attacks follow-
ing methodology described in prior work [12], [13] and
use them in our tests.
•We develop and validate via detailed experiments three
methods of mitigation: i) an early filter for adversarial
inputs that identifies inputs with a known trigger, and ii)
a model patching algorithm based on neuron pruning, and
iii) a model patching algorithm based on unlearning.
•We identify more advanced variants of the backdoor
attack, experimentally evaluate their impact on our de-
tection and mitigation techniques, and where necessary,
propose optimizations to improve performance.
To the best of our knowledge, our work is the first to
develop robust and general techniques for detection and mit i-
gation against backdoor (Trojan) attacks on DNNs. Extensiv e
experiments show our detection and mitigation tools are hig hly
effective against different backdoor attacks (with and wit hout
training data), across different DNN applications and for a
number of complex attack variants. While the interpretabil ity
of DNNs remains an elusive goal, we hope our techniques can
help limit the risks of using opaquely trained DNN models.
II. B ACKGROUND : BACKDOOR INJECTION IN DNN S
Deep neural networks (DNNs) today are often referred to as
black boxes, because the trained model is a sequence of weigh t
and functions that does not match any intuitive features of t he
classification function it embodies. Each model is trained t o
take an input of a given type ( e.g.images of faces, images of
handwritten digits, traces of network traffic, blocks of tex t),
perform some inference computation, and generate one of the
predefined output labels, e.g.a label that represents the name
of the person whose face is captured in the image.
Defining Backdoors. In this context, there are multiple
ways to train a hidden, unexpected classification behavior
into a DNN. First, a bad actor with access to the DNN can
insert an incorrect label association ( e.g.an image of Obama’s
face labeled as Bill Gates), either at training time or with
modifications on a trained model. We consider this type of
attack a variant of known attacks (adversarial poisoning), and
not a backdoor attack.
We define a DNN backdoor to be a hidden pattern trained
into a DNN, which produces unexpected behavior if and only
if a specific trigger is added to an input. Such a backdoor
does not affect the model’s normal behavior on clean inputs
without the trigger. In the context of classification tasks, a
backdoor misclassifies arbitrary inputs into the same speci fic
target label , when the associated trigger is applied to inputs.
Inputs samples that should be classified into any other label
could be “overridden” by the presence of the trigger. In the
vision domain, a trigger is often a specific pattern on the ima ge
(e.g., a sticker), that could misclassify images of other labels
(e.g., wolf, bird, dolphin) into the target label ( e.g., dog).
Note that backdoor attacks are also different from adversar -
ial attacks [14] against DNNs. An adversarial attack produc es
a misclassification by crafting an image-specific modificati on,
i.e.the modification is ineffective when applied to other
images. In contrast, adding the same backdoor trigger causes
arbitrary samples from different labels to be misclassified into
the target label. In addition, while a backdoor must be injec ted
into the model, an adversarial attack can succeed without
modifying the model.
Prior Work on Backdoor Attacks. Guet al. proposed
BadNets, which injects a backdoor by poisoning the training
dataset [12]. Figure 1 shows a high level overview of the
attack. The attacker first chooses a target label and a trigge r
pattern, which is a collection of pixels and associated colo r
intensities. Patterns may resemble arbitrary shapes, e.g., a
square. Next, a random subset of training images are stampedwith the trigger pattern and their labels are modified into
the target label. Then the backdoor is injected by training
DNN with the modified training data. Since the attacker has
full access to the training procedure, she can change the
training configurations, e.g., learning rate, ratio of modified
images, to get the backdoored DNN to perform well on both
clean and adversarial inputs. Using BadNets, authors show
over99% attack success (percentage of adversarial inputs that
are misclassified) without impacting model performance in
MNIST [12].
A more recent approach (Trojan Attack) was proposed
by Liu et al. [13]. They do not rely on access to the
training set. Instead, they improve on trigger generation b y
not using arbitrary triggers, but by designing triggers bas ed
on values that would induce maximum response of specific
internal neurons in the DNN. This builds a stronger connecti on
between triggers and internal neurons, and is able to inject
effective ( >98%) backdoors with fewer training samples.
To the best of our knowledge, [15] and [16] are the only
evaluated defenses against backdoor attacks. Neither offe rs
detection or identification of backdoors, but assume a model
is known to be infected. Fine-Pruning [15] removes back-
doors by pruning redundant neurons less useful for normal
classification. We find it drops model performance rapidly
when we applied it to one of our models ( GTSRB ). Liu et
al.[16] proposed three defenses. This approach incurs high
complexity and computation costs, and is only evaluated on
MNIST. Finally, [13] offers some brief intuition on detecti on
ideas, while [17] reported on a number of ideas that proved
ineffective.
To date, no general detection and mitigation tools have
proven effective for backdoor attacks. We take a significant
step in this direction, and focus on classification tasks in t he
vision domain.
III. O VERVIEW OF OURAPPROACH AGAINST BACKDOORS
Next, we give a basic understanding of our approach to
building a defense against DNN backdoor attacks. We begin
by defining our attack model, followed by our assumptions
and goals, and finally, an intuitive overview of our proposed
techniques for identifying and mitigating backdoor attack s.
A. Attack Model
Our attack model is consistent with that of prior work, i.e.
BadNets and Trojan Attack. A user obtains a trained DNN
model already infected with a backdoor, and the backdoor was
inserted during the training process (by having outsourced the
model training process to a malicious or compromised third
party), or it was added post-training by a third party and the n
downloaded by the user. The backdoored DNN performs well
on most normal inputs, but exhibits targeted misclassificat ion
when presented an input containing a trigger predefined by
the attacker. Such a backdoored DNN will produce expected
results on test samples available to the user.
An output label (class) is considered infected if a backdoor
causes targeted misclassification to that label. One or more
a) Training Modified Training Set
Modified Samples
Backdoored DNN
b) Inference Inputs
w/ Trigger
Inputs
w/o Trigger Label 4
Label 5Target Label: 4
Trigger:
Backdoor
Configura/g415onLabel 4
(Target)
Label 7......
...
...Train Label 4
Label 7(Target Label)
(Correct Labels)
Fig. 1. An illustration of backdoor attack. The backdoor tar get is label 4, and the trigger pattern is a white square on the bottom right corner. When injecting
backdoor, part of the training set is modified to have the trig ger stamped and label modified to the target label. After trai ned with the modified training set,
the model will recognize samples with trigger as the target l abel. Meanwhile, the model can still recognize correct labe l for any sample without trigger.
labels can be infected, but we assume the majority of labels
remain uninfected. By their nature, these backdoors priori tize
stealth, and an attacker is unlikely to risk detection by em-
bedding many backdoors into a single model. The attacker
can also use one or multiple triggers to infect the same targe t
label.
B. Defense Assumptions and Goals
We make the following assumptions about resources avail-
able to the defender . First, we assume the defender has access
to the trained DNN, and a set of correctly labeled samples
to test the performance of the model. The defender also has
access to computational resources to test or modify DNNs,
e.g., GPUs or GPU-based cloud services.
Goals. Our defensive effort includes three specific goals:
•Detecting backdoor : We want to make a binary decision
of whether a given DNN has been infected by a backdoor.
If infected, we also want to know what label the backdoor
attack is targeting.
•Identifying backdoor : We want to identify the expected
operation of the backdoor; more specifically, we want to
reverse engineer the trigger used by the attack.
•Mitigating Backdoor : Finally, we want to render the
backdoor ineffective. We can approach this using two
complementary approaches. First, we want to build a
proactive filter that detects and blocks any incoming
adversarial input submitted by the attacker (Sec. VI-A).
Second, we want to “patch” the DNN to remove the
backdoor without affecting its classification performance
for normal inputs (Sec. VI-B and Sec. VI-C).
Considering Viable Alternatives. There are a number of
viable alternatives to the approach we’re taking, from at th e
higher level (why patch models at all) to specific techniques
taken for identification. We discuss some of these here.
At the high level, we first consider alternatives to mitigati on.
Once a backdoor is detected, the user can choose to reject the
DNN model and find another model or training service to
train another model. However, this can be difficult in practi ce.
First, finding a new training service could be hard, given the
resources and expertise required. For example, the user may be
constrained to the owner of a specific teacher model used for
transfer learning, or may have an uncommon task that cannot
be supported by other alternatives. Another scenario is whe n
users have access to only the infected model and validationNormal
DimensionA B C
Minimum ∆ needed to
misclassify all samples into A
Trigger
DimensionA
B CClean
Model
Infected
ModelDecision Boundary
Label A Input
Label B Input
Label C Input
Adversarial Input
Normal
DimensionMinimum ∆ needed to
misclassify all samples into A
Fig. 2. A simplified illustration of our key intuition in dete cting backdoor.
Top figure shows a clean model, where more modification is need ed to move
samples of B and C across decision boundaries to be misclassi fied into label A.
Bottom figure shows the infected model, where the backdoor ch anges decision
boundaries and creates backdoor areas close to B and C. These backdoor areas
reduce the amount of modification needed to misclassify samp les of B and C
into the target label A.
data, but not the original training data. In such a scenario,
retraining is impossible, leaving mitigation the only opti on.
At the detailed level, we consider a number of approaches
that search for “signatures” only present in backdoors, som e
of which have been briefly mentioned as potential defenses in
prior work [17], [13]. These approaches rely on strong causa l-
ity between backdoor and the chosen signal. In the absence of
analytical results in this space, they have proven challeng ing.
First , scanning input ( e.g., an input image) for triggers is hard,
because the trigger can take on arbitrary shapes, and can be
designed to evade detection ( i.e.a small patch of pixels in a
corner). Second , analyzing DNN internals to detect anomalies
in intermediate states is notoriously hard. Interpreting D NN
predictions and activations in internal layers is still an o pen
research challenge [18], and finding a heuristic that genera lizes
across DNNs is difficult. Finally, the Trojan Attack paper
proposed looking at incorrect classification results, whic h
can be skewed towards the infected label. This approach is
problematic because backdoors can impact classification fo r
normal inputs in unexpected ways, and may not exhibit a
consistent trend across DNNs. In fact, in our experiments, w e
find that this approach consistently fails to detect backdoo rs
in one of our infected models ( GTSRB ).
C. Defense Intuition and Overview
Next, we describe our high level intuition for detecting and
identifying backdoors in DNNs.
Key Intuition. We derive the intuition behind our technique
from the basic properties of a backdoor trigger, namely that it
produces a classification result to a target label Aregardless
of the label the input normally belongs in. Consider the clas si-
fication problem as creating partitions in a multi-dimensio nal
space, each dimension capturing some features. Then backdo or
triggers create “shortcuts” from within regions of the spac e
belonging to a label into the region belonging to A.
We illustrate an abstract version of this concept in Figure 2 .
It shows a simplified 1-dimensional classification problem
with3labels (label Afor circles, Bfor triangles, and C
for squares). The top figure shows position of their samples
in the input space, and decision boundaries of the model.
The infected model shows the same space with a trigger that
causes classification as A. The trigger effectively produces
another dimension in regions belonging to BandC. Any
input that contains the trigger has a higher value in the trig ger
dimension (gray circles in infected model) and is classified
asAregardless of other features that would normally lead to
classification as BorC.
Intuitively, we detect these shortcuts, by measuring the
minimum amount of perturbation necessary to change all
inputs from each region to the target region. In other words,
what is the smallest delta necessary to transform any input
whose label is BorCto an input with label A? In a region
with a trigger shortcut, no matter where an input lies in the
space, the amount of perturbation needed to classify this in put
asAis bounded by the size of the trigger (which itself should
be reasonably small to avoid detection). The infected model in
Figure 2 shows a new boundary along a “trigger dimension,”
such that any input in BorCcan move a small distance
in order to be misclassified as A. This leads the following
observation on backdoor triggers.
Observation 1: LetLrepresent the set of output label in the
DNN model. Consider a label Li∈Land a target label Lt∈
L,i/ne}ationslash=t. If there exists a trigger ( Tt) that induces classification
toLt, then the minimum perturbation needed to transform all
inputs of Li(whose true label is Li) to be classified as Ltis
bounded by the size of the trigger: δi→t≤ |Tt|.
Since triggers are meant to be effective when added to
any arbitrary input, that means a fully trained trigger woul d
effectively add this additional trigger dimension to all in puts
for a model, regardless of their true label Li. Thus we have
δ∀→t≤ |Tt|
whereδ∀→trepresents the minimum amount of perturbation
required to make any input get classified as Lt. Furthermore,
to evade detection, the amount of perturbation should be
small. Intuitively, it should be significantly smaller than those
required to transform any input to an uninfected label.
Observation 2: If a backdoor trigger Ttexists, then we have
δ∀→t≤ |Tt|<<min
i,i/negationslash=tδ∀→i (1)
Thus we can detect a trigger Ttby detecting an abnormally
low value of δ∀→iamong all the output labels.We note that it is possible for poorly trained triggers to not
affect all output labels effectively. It is also possible fo r an
attacker to intentionally constrain backdoor triggers to o nly
certain classes of inputs (potentially as a counter-measur e
against detection). We consider this scenario and provide a
solution in Section VII.
Detecting Backdoors. Our key intuition of detecting back-
doors is that in an infected model, it requires much smaller
modifications to cause misclassification into the target lab el
than into other uninfected labels (see Equation 1). Therefo re,
we iterate through all labels of the model, and determine if
any label requires significantly smaller amount of modificat ion
to achieve misclassification into. Our entire system consis ts of
the following three steps.
•Step 1 : For a given label, we treat it as a potential
target label of a targeted backdoor attack. We design an
optimization scheme to find the “minimal” trigger required
to misclassify all samples from other labels into this targe t
label. In the vision domain, this trigger defines the smalles t
collection of pixels and its associated color intensities t o
cause misclassification.
•Step 2 : We repeat Step 1 for each output label in the model.
For a model with N=|L|labels, this produces Npotential
“triggers”.
•Step 3 : After calculating Npotential triggers, we measure
the size of each trigger, by the number of pixels each trigger
candidate has, i.e.how many pixels the trigger is replacing.
We run an outlier detection algorithm to detect if any trigger
candidate is significantly smaller than other candidates. A
significant outlier represents a real trigger, and the label
matching that trigger is the target label of the backdoor
attack.
Identifying Backdoor Triggers. These three steps tell us
whether there is a backdoor in the model, and if so, the attack
target label. Step 1 also produces the trigger responsible f or
the backdoor, which effectively misclassifies samples of ot her
labels into the target label. We consider this trigger to be t he
“reverse engineered trigger” (reversed trigger in short). Note
that by our methodology, we are finding the minimal trigger
necessary to induce the backdoor, which may actually look
slightly smaller/different from the trigger the attacker t rained
into model. We examine the visual similarity between the two
later in Section V-C.
Mitigating Backdoors. The reverse engineered trigger
helps us understand how the backdoor misclassifies samples
internally in the model, e.g., which neurons are activated by the
trigger. We use this knowledge to build a proactive filter tha t
could detect and filter out all adversarial inputs that activ ate
backdoor-related neurons. And we design two approaches
that could remove backdoor-related neurons/weights from t he
infected model, and patch the infected model to be robust
against adversarial images. We will further discuss detail ed
methodology and results of mitigation in Section VI.
IV. D ETAILED DETECTION METHODOLOGY
Next, we describe the details of our technique to detect
and reverse engineer triggers. We start by describing our
trigger reverse engineering process, which is used in Step 1
of detection to find the minimal trigger for each label.
Reverse Engineering Triggers First we define a generic
form of trigger injection:
A(x,m,∆) =x′
x′i,j,c= (1−mi,j)·xi,j,c+mi,j·∆i,j,c(2)
A(·)represents the function that applies a trigger to the
original image, x.∆is the trigger pattern, which is a 3D
matrix of pixel color intensities with the same dimension of
the input image (height, width, and color channel). mis a 2D
matrix called the mask , deciding how much the trigger can
overwrite the original image. Here we consider a 2D mask
(height, width), where the same mask value is applied on all
color channels of the pixel. Values in the mask range from
0to1. Whenmi,j= 1 for a specific pixel (i, j), the trigger
completely overwrites the original color ( x′i,j,c=∆i,j,c),
and when mi,j= 0, the original color is not modified at all
(x′i,j,c=xi,j,c). Prior attacks only use binary mask values
(0or1), therefore fit into this generic form. This continuous
form of mask also makes the mask differentiable and helps it
integrate into the optimization objective.
The optimization has two objectives. For a given target
label to be analyzed ( yt), the first objective is to find a
trigger (m,∆) that would misclassify clean images into yt.
The second objective is to find a “concise” trigger, meaning
a trigger that only modifies a limited portion of the image.
We measure the magnitude of the trigger by the L1norm of
the mask m. Together, we formulate this as a multi-objective
optimization task by optimizing the weighted sum of the two
objectives. The final formulation is as follows.
min
m,∆ℓ(yt,f(A(x,m,∆)))+λ·|m|
forx∈X(3)
f(·)is the DNN’s prediction function. ℓ(·)is the loss
function measuring the error in classification, which is cro ss
entropy in our experiment. λis the weight for the second
objective. Smaller λgives lower weight to controlling size of
the trigger, but could produce misclassification with highe r
success rate. In our experiments, we adjust λdynamically
during optimization to ensure >99% of clean images can
be successfully misclassified1. We use Adam optimizer [19]
to solve the above optimization.
Xis the set of clean images we use to solve the optimiza-
tion task. It comes from the clean dataset user has access to.
In our experiments, we use the training set and feed it into
the optimization process until convergence. Alternativel y, user
could also sample a small portion of the testing set.2
1This threshold controls the effectiveness of the backdoor a ttack. Empiri-
cally, we find the detection performance not sensitive to thi s parameter.
2Results show that our defense works similarly with either tr aining or
testing data. More detailed comparison is included in Appen dix.Detect Backdoor via Outlier Detection. Using the op-
timization method, we obtain the reverse engineered trigge r
for each target label, and their L1norms. Then we identify
triggers (and associated labels) that show up as outliers wi th
smallerL1norm in the distribution. This corresponds to Step
3 in the detection process.
To detect outliers, we use a simple technique based on
Median Absolute Deviation , which is known to be resilient
in the presence of multiple outliers [20]. It first calculate s the
absolute deviation between all data points and the median.
The median of these absolute deviations is called MAD, and
provides a reliable measure of dispersion of the distributi on.
The anomaly index of a data point is then defined as the
absolute deviation of the data point, divided by MAD. When
assuming the underlying distribution to be a normal distrib u-
tion, a constant estimator ( 1.4826 ) is applied to normalize the
anomaly index. Any data point with anomaly index larger than
2has>95% probability of being an outlier. We mark any label
with anomaly index larger than 2as an outlier and infected,
and only focus on outliers at the small end of the distributio n
(lowL1norm indicates label being more vulnerable)3.
Detecting Backdoor in Models with a Large Number of
Labels. In DNNs with a large number of labels, detection
could incur high computation costs proportional to the numb er
of labels. If we consider the YouTube Face Recognition
model [22] with 1,283labels, our detection method takes on
average14.6seconds for each label, with a total cost of 5.2
hours on an Nvidia Titan X GPU4. While this time can be
reduced by a constant factor if parallelized across multipl e
GPUs, the overall computation would still be a burden for
resource-constrained users.
Instead, we propose a low-cost detection scheme for large
models. We observe that the optimization process (Equation 3)
finds an approximate solution in the first few iterations (of
gradient descent), and mostly uses the remaining iterations to
fine-tune the trigger. Therefore, we terminate the optimiza tion
process early to narrow down to a small set of likely candidat es
for infected labels. Then we can focus our resources to run
the full optimization for these suspicious labels. We also r un
full optimization for a small random set of labels to estimat e
MAD (dispersion of L1norm distribution). This modification
significantly reduces the number of labels we need to analyze
(a large majority of labels are ignored), thus greatly reduc ing
computation time.
V. E XPERIMENTAL VALIDATION OF BACKDOOR
DETECTION AND TRIGGER IDENTIFICATION
In this section, we describe our experiments to evaluate our
defense technique against BadNets and Trojan Attack, in the
context of multiple classification application domains.
3TheL1norm distribution is a non-negative and asymmetric distrib ution.
MAD was first presented on symmetric distribution, but later work show that
it also work on asymmetric distribution [21].
4For more complicated models, e.g., Trojan models, full analysis on all
labels can take up to 17 days.
TABLE I. Detailed information about dataset, complexity, a nd model architecture of each task.
Task Dataset # of Labels Input Size# of Training
ImagesModel Architecture
Hand-written Digit
RecognitionMNIST 10 28×28×1 60,000 2 Conv + 2 Dense
Traffic Sign
RecognitionGTSRB 43 32×32×3 35,288 6 Conv + 2 Dense
Face Recognition YouTube Face 1,283 55×47×3 375,645 4 Conv + 1 Merge + 1 Dense
Face Recognition
(w/ Transfer Learning)PubFig 65 224×224×3 5,850 13 Conv + 3 Dense
Face Recognition
(Trojan Attack)VGG Face 2,622 224×224×3 2,622,000 13 Conv + 3 Dense
A. Experiment Setup
To evaluate against BadNets, we use four tasks and inject
backdoor using their proposed technique: (1) Hand-written
Digit Recognition ( MNIST ), (2) Traffic Sign Recognition
(GTSRB ), (3) Face Recognition with large number of labels
(YouTube Face ), and (4) Face Recognition using a complex
model (PubFig ). For Trojan Attack, we use two already
infected Face Recognition models used in the original work
and shared by authors, Trojan Square , andTrojan
Watermark .
Details of each task and associated dataset are described
below. A brief summary is also included in Table I. For
brevity, we include more details about training configurati on
in Table VI, and model architecture in Tables VII,VIII,IX,X ,
all included in the Appendix.
•Hand-written Digit Recognition ( MNIST ). This task is
commonly-used to evaluate DNN vulnerabilities. The goal
is to recognize 10hand-written digits (0-9) in gray-scale
images [23]. The dataset contains 60K training images and
10K testing images. The model we use is a standard 4-layer
convolutional neural network (Table VII). This model was
also evaluated in the BadNets work.
•Traffic Sign Recognition ( GTSRB ). This task is also
commonly-used to evaluate attacks on DNNs. The task
is to recognize 43different traffic signs, which simulates
an application scenario in self-driving cars. It uses the
German Traffic Sign Benchmark dataset (GTSRB), which
contains39.2K colored training images and 12.6K testing
images [24]. The model consists of 6convolution layers and
2dense layers (Table VIII).
•Face Recognition ( YouTube Face ). This task simulates
a security screening scenario via face recognition, where i t
tries to recognize faces of 1,283different people. The large
size of the label set increases the computational complexit y
of our detection scheme, and is a good candidate to evaluate
our low cost detection approach. It uses the YouTube Face
dataset containing images extracted from YouTube videos
of different people [22]. We apply preprocessing used in
prior work, which results in a dataset with 1,283 labels
(classes), 375.6K training images, and 64.2K testing im-
ages [17]. We also follow prior work to choose the DeepID
architecture [17], [25], made up of 8layers (Table IX).
•Face Recognition ( PubFig ). This task is similar to
YouTube Face and recognizes faces of 65people. TheTABLE II. Attack success rate and classification accuracy of backdoor
injection attack on four classification tasks.
TaskInfected Model Clean Model
Classification
AccuracyAttack Success
RateClassification
Accuracy
Hand-written Digit
Recognition
(MNIST)99.90% 98.54% 98.88%
Traffic Sign
Recognition
(GTSRB)97.40% 96.51% 96.83%
Face Recognition
(YouTube Face)97.20% 97.50% 98.14%
Face Recognition
w/ Transfer Learning
(PubFig)97.03% 95.69% 98.31%
dataset we use includes 5,850colored training images with
a large resolution of 224×224, and650testing images [26].
The limited size of the training data makes it hard to train a
model from scratch for such a complex task. Therefore, we
leverage transfer learning, and use a Teacher model based
on a16-layer VGG-Face model (Table X). We fine-tune the
last4layers of the Teacher model using our training set.
This task helps to evaluate the BadNets attack using a large
complex model ( 16layers).
•Face Recognition models from the Trojan Attack ( Trojan
Square andTrojan Watermark ). Both models are
derived from the VGG-Face model (16 layers), which is
trained to recognize faces of 2,622 people [27], [28].
Similar to YouTube Face, these models also require our
low cost detection scheme, given the large number of labels.
Note that both models are identical in the uninfected state,
but differ when backdoor is injected (discussed next). The
original dataset contains 2.6M images. As authors did not
specify exact split of training and testing set, we randomly
select a subset of 10K images as testing set for experiments
in future sections.
Attack Configuration for BadNets. We follow attack
methodology proposed by BadNets [12] to inject backdoor
during training. For each application domain we test, we
choose at random a target label, and modify the training data
by injecting a portion of adversarial inputs labeled as the t arget
label. Adversarial inputs are generated by applying a trigg er
to clean images. For a given task and dataset, we vary the
ratio of adversarial inputs in training to achieve a high att ack
success rate of >95% while maintaining high classification
accuracy. The ratio varies from 10% to20%. Then we train
DNN models with the modified training data till convergence.
0 1 2 3 4 5 6
MNIST GTSRB YouTube
FacePubFig Trojan
SquareTrojan
WMAnomaly IndexInfected
Clean
Fig. 3. Anomaly measurement of infected and
clean model by how much the label with smallest
trigger deviates from the remaining labels. 0 100 200 300 400
MNIST GTSRB YouTube
FaceL1 Norm of TriggerUninfected
Infected
0 1000 2000 3000 4000
PubFig Trojan
SquareTrojan
WM
Fig. 4. L1norm of triggers for infected and
uninfected labels in backdoored models. Box plot
shows min/max and quartiles.2
1 10 100 1000
1 10 20 30 40 50 0 20 40 60 80 100Rank of Infected Label
# of Overlapped Label
w/ Previous Iteration
Optimization IterationOverlap
Rank
Fig. 5. Rank of infected labels in each iteration
based on trigger’s norm. Ranking consistency mea-
sured by # of overlapped label between iterations .
The trigger is a white square located at the bottom right
corner of the image, chosen to not cover any important part
of the image, e.g., faces, signs. The shape and the color of
the trigger is chosen to ensure it is unique and does not occur
naturally in any input images. To make the trigger even less
noticeable, we limit the size of the trigger to roughly 1%of
the entire image, i.e.4×4inMNIST andGTSRB ,5×5
inYouTube Face , and24×24inPubFig . Examples of
triggers and adversarial images are in Appendix (Figure 20) .
To measure the performance of backdoor injection, we
calculate classification accuracy on the testing data, as we ll
as attack success rate when applying trigger to testing imag es.
“Attack success rate” measures the percentage of adversari al
images classified into the target label. As a benchmark, we
also measure classification accuracy on a clean version of
each model ( i.e.using same training configuration, but with
clean data). The final performance of each attack on four
tasks is reported in Table II. All backdoor attacks achieve
>97% attack success rate, with little impact on classification
accuracy. The largest reduction in classification accuracy is
2.62% inPubFig .
Attack Configuration for Trojan Attack. We directly use
the infected Trojan Square andTrojan Watermark
models shared by authors of the Trojan Attack work [13]. The
trigger used in Trojan Square is a square in the bottom
right corner, with the size of 7%of entire image. Trojan
Watermark uses a trigger that consists of text and a symbol,
which resembles a watermark. The size of this trigger is also
7%of the entire image. These two backdoors achieve 99.9%
and97.6%attack success rate.
B. Detection Performance
Following methodology in Section IV, we investigate
whether we can detect an infected DNN. Figure 3 shows the
anomaly index for all 6infected, and their matching original
(clean) models, covering both BadNets and Trojan Attack. Al l
infected models have anomaly index larger than 3, indicating
>99.7%probability of being an infected model. Recall that
our anomaly index threshold for infection is 2(Section IV).
Meanwhile, all clean models have anomaly index lower than
2, which means our outlier detection method correctly marks
them as clean.
To understand the position of the infected labels in the L1
norm distribution, we plot the distribution of uninfected and
infected labels in Figure 4. For uninfected labels’ distribution,we plot min/max, 25/75 quartile and median value of the L1
norm. Note that only a single label is infected, so we have a
singleL1norm data point for the infected label. Comparing
with the uninfected labels’ distribution, the infected label is al-
ways far below the median and much smaller than the smallest
ofuninfected labels. This further validates our intuition that the
magnitude of trigger ( L1norm) required to attack an infected
label is smaller, compared to when attacking an uninfected
label.
Finally, our approach can also determine which labels are
infected. Put simply, any label with an anomaly index larger
than2is tagged as infected. In most models, i.e.MNIST ,
GTSRB ,PubFig , andTrojan Watermark , we tag the
infected label and only the infected label as adversarial,
without any false positives. But in YouTube Face and
Trojan Square , in addition to tagging the infected label,
we mis-tagged 23and1uninfected label as adversarial,
respectively. In practice, this is not a problematic scenar io.
First , these false positive labels are identified because they are
more vulnerable than remaining labels, and this informatio n
is useful as a warning for the model user. Second , in later
experiments (Section VI-C), we present mitigation techniq ues
that will patch all vulnerable labels without affecting mod el’s
classification performance.
Performance of Low-Cost Detection. Results in the
previous experiment, in Figure 3 and Figure 4, already use
the low-cost detection scheme on the Trojan Square ,
Trojan Watermark , and clean VGG-Face models (all with
2,622 labels). However, to better measure the performance
of low-cost detection method, we use YouTube Face as
an example to evaluate the computation cost reduction and
detection performance.
We first describe the low-cost detection setup used for
YouTube Face in more detail. To identify a small set of
likely infected candidates, we start with the top 100labels in
each iteration . Labels are ranked based on L1norm ( i.e.labels
with smaller L1norm gets higher ranks). Figure 5 shows how
the top100 labels vary from one iteration to the next, by
measuring the overlap in labels over subsequent iterations (red
curve). After the first 10iterations , the set overlap is mostly
stable and fluctuates around 805. This means that we can
choose the top 100 labels after a few iterations to further
5Further analysis shows the fluctuation is mostly due to chang es in the
lower ranks of the top 100.
Original Trigger
(L1 norm = 16)Reversed Trigger ( m · ∆ )
(L1 norm = 12.00)
(a) MNISTOriginal Trigger
(L1 norm = 16)Reversed Trigger ( m · ∆ )
(L1 norm = 14.71)
(b) GTSRBOriginal Trigger
(L1 norm = 25)Reversed Trigger ( m · ∆ )
(L1 norm = 22.79)
(c) YouTube FaceReversed Trigger ( m · ∆ )
(L1 norm = 171.11) Original Trigger
(L1 norm = 576)
(d) PubFig
Fig. 6. Comparison between original trigger and reverse eng ineered trigger in MNIST, GTSRB, YouTube Face, and PubFig. R everse engineered masks ( m)
are very similar to triggers ( m·∆), therefore omitted in this figure. Reported L1 norms are nor ms of masks. Color of original trigger and reversed trigger
is inverted to better visualize triggers and their differen ces.
Reversed Trigger ( m)
(L1 norm = 311.24)Original Trigger
(L1 norm = 3,481)
(a) Trojan SquareReversed Trigger ( m)
(L1 norm = 574.24)Original Trigger
(L1 norm = 3,598)
(b) Trojan Watermark
Fig. 7. Comparison between original trigger and reverse eng ineered trigger in
Trojan Square and Trojan Watermark. Color of trigger is also inverted. Only
mask (m) is shown to better visualize the trigger.
run the full optimization, and ignore the remaining labels.
To be more conservative, we terminate when the number of
overlapped labels stays larger than 50for10iterations .
So how accurate is our early termination scheme? Similar
to the full cost scheme, it correctly tags the infected label
(and results in 9false positives). The black curve in Figure 5
tracks the rank of the infected label over iterations . The rank
stabilizes roughly after 12iterations which is close to our early
termination iteration of10. Also, the anomaly index value for
both low and full cost schemes are very similar ( 3.92and3.91,
respectively).
This approach results in significant compute time reduction .
Early termination takes 35minutes. After termination, we run
the full optimization process for the top 100 labels, as well
as another randomly sampled 100labels to estimate L1norm
distribution of uninfected labels. This process takes another 44
minutes. The entire process takes 1.3hours, which is a 75%
reduction in time compared to the full scheme.
C. Identification of original trigger
When we identify the infected label, our method also reverse
engineers a trigger that causes misclassification to that la bel.
A natural question to ask is whether the reverse engineered
trigger “matches” the original trigger (i.e.trigger used by
the attacker). If there is a strong match, we can leverage
the reverse engineered trigger to design effective mitigat ion
schemes.
We compare the two triggers in three ways.
End-to-end Effectiveness. Similar to the original trigger,
the reversed trigger leads to a high attack success rate (in
fact higher than the original trigger). All reversed trigge rs
have>97.5%attack success rate, compared to >97.0%for
original triggers. This is not surprising, given how the tri gger
is inferred using a scheme that optimizes for misclassificat ion
(Section IV). Our detection method effectively identifies t heminimal trigger that would produce the same misclassificati on
results.
Visual Similarity. Figure 6 compares the original and
reversed triggers ( m·∆) in each of the four BadNets models.
We find reversed triggers are roughly similar to original
triggers. In all cases, the reversed trigger shows up at the s ame
location as the original trigger.
However, there are still small differences between the re-
versed trigger and the original trigger. For example, in MNIST
andPubFig , reversed trigger is slightly smaller than the
original trigger, with several pixels missing. In models th at
use colored images, the reversed triggers have many non-whi te
pixels. These differences can be attributed to two reasons.
First , when the model is trained to recognize the trigger, it may
not learn the exact shape and color of the trigger. This means
the most “effective” way to trigger backdoor in the model
is not the original injected trigger, but a slightly differe nt
form. Second , our optimization objective is penalizing larger
triggers. Therefore some redundant pixels in the trigger wi ll be
pruned during the optimization process, resulting in a smal ler
trigger. Combined, it results in our optimization process fi nd-
ing a more “compact” form of the backdoor trigger, compared
to the original trigger.
The mismatch between reversed trigger and original trigger
becomes more obvious in two Trojan Attack models, as shown
in Figure 7. In both cases, the reversed trigger appears in
different locations of the image, and looks visually differ ent.
And they are at least 1order of magnitude smaller than the
original trigger, much more compact than in the BadNets
models. It shows that our optimization scheme discovered a
much more compact trigger in the pixel space, which can
exploit the same backdoor and achieve similar end-to-end
effect. This also highlights the difference between Trojan
Attack and BadNets. Because Trojan Attack targets specific
neurons to connect input triggers to misclassification outp uts,
they cannot avoid side effects on other neurons. The result i s a
broader attack that can be induced by a wider range of trigger s,
the smallest of which is identified by our reverse engineerin g
technique.
Similarity in Neuron Activations. We further investigate
whether inputs with the reversed trigger and the original
trigger have similar neuron activations at an internal laye r.
Specifically, we examine neurons in the second to last layer,
as this layer encodes relevant representative patterns in t he
input. We identify neurons most relevant to the backdoor, by
TABLE III. Average activation of backdoor neurons of clean i mages and
adversarial images stamped with reversed trigger and origi nal trigger.
ModelAverage Neuron Activation
Clean ImagesAdv. Images w/
Reversed TriggerAdv. Images w/
Original Trigger
MNIST 1.19 4.20 4.74
GTSRB 42.86 270.11 304.05
YouTube Face 137.21 1003.56 1172.29
PubFig 5.38 19.28 25.88
Trojan Square 2.14 8.10 17.11
Trojan Watermark 1.20 6.93 13.97
feeding clean and adversarial images and observing differe nces
in neuron activations at the target layer (second to last lay er).
We rank neurons by measuring differences in their activatio ns.
Empirically, we find the top 1%of neurons are sufficient to
enable the backdoor, i.e.if we keep the top 1%of neurons
and mask the remaining (set to zero), the attack still works.
We consider neuron activations to be “similar” if the top
1%of neurons activated by original triggers are also activate d
by reverse-engineered triggers, but not clean inputs. Tabl e III
shows the average neuron activation of top 1%neurons when
feeding1,000randomly selected clean and adversarial images.
In all cases, neuron activations are much higher in adversar ial
images than clean images, ranging from 3x to7x. This shows
that when added to inputs, both the reversed trigger and
original trigger activate the same backdoor-related neuro ns.6
Finally, we will leverage neural activations as a way to
represent backdoors in our mitigation techniques in Sectio n VI.
VI. M ITIGATION OF BACKDOORS
Once we have detected the presence of a backdoor, we apply
mitigation techniques to remove the backdoor while preserv ing
the model performance. We describe two complementary
techniques. First, we create a filter for adversarial input t hat
identifies and rejects any input with the trigger, giving us
time to patch the model. Depending on the application, this
approach can also be used to assign a “safe” output label to
an adversarial input without rejection. Second, we patch the
DNN, making it nonresponsive against the detected backdoor
triggers. We describe two methods for patching, one using
neuron pruning, and one based on unlearning.
A. Filter for Detecting Adversarial Inputs
Our results in Section V-C show that neuron activations are
a better way to capture similarity between original and reve rse-
engineered triggers. Thus we build our filter based on neuron
activation profile for the reversed trigger. This is measure d as
the average neuron activations of the top 1%of neurons in
the second to last layer. Given some input, the filter identifi es
potential adversarial inputs as those with activation profi les
higher than a certain threshold. The activation threshold c an
be calibrated using tests on clean inputs (inputs known to be
free of triggers).
We evaluate the performance of our filters using clean
images from the testing set and adversarial images created
6More detailed analysis reveals slight difference between B adNets and
Trojan Attack. Results are included in Appendix.by applying the original trigger to test images (1:1 ratio).
We calculate false positive rate (FPR) and false negative ra te
(FNR) when setting different thresholds for average neuron
activation. Results are shown in Figure 8. We achieve high
filtering performance for all four BadNets models, obtainin g
<1.63% FNR at an FPR of 5%. Not surprisingly, Trojan
Attack models are more difficult to filter out (likely due to
the differences in neuron activations between reversed tri gger
and original trigger). FNR is much higher for FPR <5%,
but we obtain a reasonable 4.3%and28.5%FNR at an FPR
of5%. Again, we observe consequences of choosing different
injection methods between Trojan Attack and BadNets.
B. Patching DNN via Neuron Pruning
To actually patch the infected model, we propose two
techniques. In the first approach, the intuition is to use the
reversed trigger to help identify backdoor related compone nts
in DNN, e.g., neurons, and remove them. We propose to
prune out backdoor-related neurons from the DNN, i.e.set
these neurons’ output value to 0during inference. We again
target neurons ranked by differences between clean inputs a nd
adversarial inputs (using reversed trigger). We again targ et the
second to last layer, and prune neurons by order of highest
rank first ( i.e.prioritizing those that show biggest activation
gap between clean and adversarial inputs). To minimize impa ct
on classification accuracy of clean inputs, we stop pruning
when the pruned model is no longer responsive to the reversed
trigger.
Figure 9 shows classification accuracy and attack success
rate when pruning different ratios of neurons in GTSRB .
Pruning30% of neurons reduces attack success rate to nearly
0%. Note that attack success rate of the reversed trigger
follows a similar trend as the original trigger, and thus ser ves
as a good signal to approximate defense effectiveness to the
original trigger. Meanwhile, classification accuracy is re duced
only by5.06%. Of course, the defender can achieve smaller
drop in classification accuracy by trading off decrease in at tack
success rate (follow the curve in Figure 9).
There is an interesting point to note. In Section V-C, we
identified the top 1%ranked neurons to be sufficient to cause
misclassification. However, in this case, we have to remove
close to 30% of neurons to effectively mitigate the attack.
This can be explained by the massive redundancy in neural
pathways in DNNs [29], i.e.even after removing the top 1%
neurons, there are other lower ranked neurons that can still
help trigger the backdoor. Prior work on compressing DNNs
has also noticed such high levels of redundancy [29].
We apply our scheme to other BadNets models and achieve
very similar results in MNIST andPubFig (See Figure 21
in Appendix). Pruning between 10% to30% neurons reduces
attack success rates to 0%. However, we observe a more signif-
icant negative impact on classification accuracy in the case of
YouTube Face (Figure 21(c) in Appendix). For YouTube
Face , classification accuracy drops from 97.55% to81.4%
when attack success rate drops to 1.6%. This is because the
second to last layer only has 160 output neurons, meaning
0 0.2 0.4 0.6 0.8 1
MNIST GTSRB YouTube
FacePubFig Trojan
SquareTrojan
WMFalse Negative RateFPR=0.001
FPR=0.005
FPR=0.01
FPR=0.05
Fig. 8. False negative rate of proactive adversar-
ial image detection when achieving different false
positive rate. 0 0.2 0.4 0.6 0.8 1
0 0.1 0.2 0.3 0.4 0.5 0 0.2 0.4 0.6 0.8 1Classification Accuracy
Attack Success Rate
Ratio of Neurons PrunedClassification
Attack w/ Original Trigger
Attack w/ Reversed Trigger
Fig. 9. Classification accuracy and attack success
rate when pruning trigger-related neurons in GT-
SRB (traffic sign recognition w/ 43labels). 0 0.2 0.4 0.6 0.8 1
0 0.1 0.2 0.3 0.4 0.5 0 0.2 0.4 0.6 0.8 1Classification Accuracy
Attack Success Rate
Ratio of Neurons PrunedClassification
Attack w/ Original Trigger
Attack w/ Reversed Trigger
Fig. 10. Classification accuracy and attack success
rate when pruning trigger-related neurons in Trojan
Square (face recognition w/ 2,622 labels).
clean neurons are heavily mixed with adversarial neurons.
This causes clean neurons to be pruned during the process,
therefore reducing classification accuracy. Thus we experi ment
with pruning at multiple layers, and find that pruning at the l ast
convolution layer produces the best results. In all four Bad Nets
models, attack success rate reduces to <1%with minimal
reduction in classification accuracy <0.8%. Meanwhile, at
most8%of neurons are pruned. We plot those detailed results
in Figure 22 in the Appendix.
Neuron Pruning in Trojan Models. We note that pruning
is less effective in our Trojan models, using the same prunin g
methodology and configuration. As shown in Figure 10, when
pruning30% neurons, attack success rate using our reverse-
engineered trigger drops to 10.1%, but success using the
original trigger remains high, at 87.3%. This discrepancy is
due to the dissimilarity in neuron activations between reve rsed
trigger and the original (Section V-C). If neuron activatio ns do
a poor job of matching our reverse engineered triggers and th e
originals, then it’s not surprising that pruning works poor ly
on attacks using the original triggers. Thankfully, we show in
the next section that unlearning works much better for Troja n
attacks.
Strengths and Limitations. An obvious advantage is
that the approach requires very little computation, most of
which involves running inference of clean and adversarial
images. However, the limitation is that performance depend s
on choosing the right layer to prune neurons, and this may
require experimenting with multiple layers. Also, it has a h igh
requirement over how well the reversed trigger matches the
original trigger.
C. Patching DNNs via Unlearning
Our second approach of mitigation is to train DNN to
unlearn the original trigger. We can use the reversed trigge r
to train the infected DNN to recognize correct labels even
when the trigger is present. Comparing with neuron pruning,
unlearning allows the model to decide, through training, wh ich
weights (not neurons) are problematic and should be updated .
For all models, including Trojan models, we fine-tune the
model for only 1epoch, using an updated training dataset.
To create this new training set, we take a 10% sample of
the original training data (clean, with no triggers)7, and add
7The exception is PubFig , where we use the full training data because
training data is very limited.the reversed trigger to 20% of this sample without modifying
labels. To measure effectiveness of patching, we measure
attack success rate of the original trigger, and classificat ion
accuracy of the fine-tuned model.
Table IV compares the attack success rate and classification
accuracy before and after training. In all models, we manage
to reduce attack success rate to <6.70%, without significantly
sacrificing classification accuracy. The largest reduction of
classification accuracy is in GTSRB , which is only 3.6%. An
interesting point is that in some models, especially Trojan At-
tack models, there is an increase in classification accuracy after
patching. Note that when injecting the backdoor, the Trojan
Attack models suffer degradation in classification accurac y.
Original uninfected Trojan Attack models have a classificat ion
accuracy of 77.2%(not shown in Table IV), which is now
improved when the backdoor is patched.
We compare the efficacy of this unlearning versus two
variants. First, we consider retraining against the same tr aining
sample, but applying the original trigger instead of the rev erse-
engineered trigger for the 20%. As shown in Table IV,
unlearning using the original trigger achieves slightly lo wer
attacker success rate with similar classification accuracy . So
unlearning with our reversed trigger is a good approximatio n
for unlearning using the original. Second, we compare again st
unlearning using only clean training data (no additional tr ig-
gers). Results in last column in Table IV show that unlearnin g
is ineffective for all BadNets models (attack success rate
still high: >93.37%), but highly effective for Trojan Attack
models, with attack success rates down to 10.91% and0%for
Trojan Square , andTrojan Watermark respectively.
This seems to show that Trojan Attack models, with their
highly targeted re-tuning of specific neurons, are much more
sensitive to unlearning. A clean input that helps reset a few
key neurons disables the attack. In contrast, BadNets injec ts
backdoors by updating all layers using a poisoned dataset,
and seems to require significantly more work to retrain and
mitigate the backdoor.
We also checked the impact of patching false positive
labels. Patching falsely flagged labels in YouTube Face and
Trojan Square (discussed in Section V-B), only reduces
the classification accuracy by <1%. Thus there is negligible
impact of false positives in detection on the mitigation par t.
Parameters and Cost. Through experiments, we find that
unlearning performance is generally insensitive to parame ters
like amount of training data, and ratio of modified training
TABLE IV . Classification accuracy and attack success rate be fore and after unlearning backdoor. Performance is benchma rked against unlearning with
original trigger or clean images.
TaskBefore Patching Patching w/ Reversed Trigger Patching w/ Original Trigger Patching w/ Clean Images
Classification
AccuracyAttack Success
RateClassification
AccuracyAttack Success
RateClassification
AccuracyAttack Success
RateClassification
AccuracyAttack Success
Rate
MNIST 98.54% 99.90% 97.69% 0.57% 97.77% 0.29% 97.38% 93.37%
GTSRB 96.51% 97.40% 92.91% 0.14% 90.06% 0.19% 92.02% 95.69%
YouTube Face 97.50% 97.20% 97.90% 6.70% 97.90% 0.0% 97.80% 95.10%
PubFig 95.69% 97.03% 97.38% 6.09% 97.38% 1.41% 97.69% 93.30%
Trojan Square 70.80% 99.90% 79.20% 3.70% 79.60% 0.0% 79.50% 10.91%
Trojan Watermark 71.40% 97.60% 78.80% 0.00% 79.60% 0.00% 79.50% 0.00%
data. Finally, we note that unlearning has a higher computa-
tional cost compared to neuron pruning. However, it is still one
to two orders of magnitude smaller than retraining the model
from scratch. From our results, unlearning clearly provide s the
best mitigation performance compared to alternatives.
VII. R OBUSTNESS AGAINST ADVANCED BACKDOORS
Prior sections described and evaluated detection and mitig a-
tion of backdoor attacks under base case assumptions, e.g., a
small number of triggers, each prioritizing stealth and tar geting
the misclassification of arbitrary input into a single targe t label.
Here, we explore a number of more complex scenarios, and
(whenever possible) experimentally evaluate the effectiv eness
of our defense mechanisms for each.
We discuss 5specific types of advanced backdoors attacks,
each challenging an assumption or limitation in the current
defense design.
•Complex Triggers . Our detection scheme relies on the
success of the optimization process. Would more compli-
cated triggers make it more challenging for our optimizatio n
function to converge?
•Larger Triggers . We consider larger triggers. By increas-
ing trigger size, an attacker can force the reverse engineer ing
process to converge to a large trigger with larger norm.
•Multiple Infected Labels with Separate Triggers . We
consider a scenario where multiple backdoors targeting
distinct labels are inserted into a single model, and evalua te
the maximum number of infected labels we can detect.
•Single Infected Label with Multiple Triggers . We con-
sider multiple triggers targeting the same label.
•Source-label-specific (Partial) Backdoors . Our detection
scheme is designed to detect triggers that induce misclas-
sification on arbitrary input. A “partial” backdoor that is
effective on inputs from a subset of source labels would be
more difficult to detect.
A. Complex Trigger Patterns
As we observed in Trojan models, triggers with more
complicated patterns make it harder for the optimization to
converge to the exact trigger. A more random trigger pattern
might increase the difficulty of reverse engineering the tri gger.
We perform simple tests by first changing the white square
trigger to a noisy square, where each pixel of the trigger is
assigned a random color. We inject this attack in MNIST ,
GTSRB ,YouTube Face , andPubFig , and evaluate detec-
tion performance. The resulting anomaly index in each modelis shown in Figure 11. Our technique detects the complex
trigger patterns in all cases. We also test our mitigation
techniques on these models. For filtering, at an FPR of 5%,
we achieve <0.01% FNR in all models. Patching using
unlearning reduces attack success rate to <4.2%, with at
most3.1%reduction in classification accuracy. Finally, we
tested backdoors with varying trigger shapes ( e.g., triangle,
checkerboard shapes) in GTSRB , and all detection and miti-
gation techniques worked as expected.
B. Larger Triggers
Larger triggers are likely to produce larger reverse-
engineered triggers. This could help the infected label mor e
closely resemble uninfected labels in the L1norm, making
outlier detection less effective. We run sample tests on GTSRB ,
and increase the size of trigger from 4×4(1.6%of the image)
to16×16(25%). All triggers are still white squares. We
evaluate the detection technique with same configuration us ed
in previous experiments. Figure 12 shows the L1norm of
reversed triggers for infected and uninfected labels. As the
original trigger becomes larger, the reversed trigger also gets
larger as expected. When the trigger grows beyond 14×14, the
L1norm does indeed blend in with that of uninfected labels,
reducing the anomaly index below detection threshold. The
anomaly index metric is shown in Figure 13.
The maximum detectable trigger size is largely a function of
one factor: trigger size of uninfected labels (amount of change
necessary to cause misclassification of all inputs between
uninfected labels). The trigger size of uninfected labels is
itself a proxy for measuring the distinctiveness of inputs a cross
different labels, i.e.more labels means larger trigger size for
uninfected labels and a greater ability to detect larger triggers.
On applications like YouTube Face , we were able to detect
triggers as large as 39% of the whole image. On MNIST which
has fewer labels, we were only able to detect triggers of size
up to18% of the image.8In general, a larger trigger is more
visually obvious and easier to identify by humans. However,
there may exist approaches to increase the trigger size whil e
remaining less obvious, which can be explored in future work .
C. Multiple Infected Labels with Separate Triggers
We consider a scenario where attackers insert multiple,
independent backdoors into a single model, each targeting a
distinctive label. Inserting many backdoors might collect ively
8No additional false positive label is found when using large r triggers in
all models we test.
0 1 2 3 4 5
MNIST GTSRB YouTube
FacePubFigAnomaly Index
Fig. 11. Anomaly index of infected MNIST, GT-
SRB, YouTube Face, and PubFig model with noisy
square trigger. 0 50 100 150 200
4×4
(1.6%)8×8
(6.3%)12×12
(14.1%)16×16
(25%)L1 Norm of Trigger
Width × Length of Original Trigger
(Ratio to Image Size)Uninfected
Infected
Fig. 12. L1norm of reverse engineered triggers
of labels when increasing the size of the original
trigger in GTSRB (results of a single round). 0 1 2 3 4 5
4×4
(1.6%)8×8
(6.3%)12×12
(14.1%)16×16
(25%)Anomaly Index
Width × Length of Original Trigger
(Ratio to Image Size)
Fig. 13. Anomaly index of each infected GTSRB
model when increasing the size of the original
trigger (results averaged over 10rounds).
0 0.2 0.4 0.6 0.8 1
1
(0.08%)200
(15.6%)400
(31.2%)600
(46.8%)800
(62.4%)1000
(79.9%) 0 0.2 0.4 0.6 0.8 1Classification Accuracy
Avg. Attack Success Rate
# of Infected Labels in DNN
(Ratio to Total # of Labels)Classification
Attack
Fig. 14. Classification accuracy and average attack
success rate when different number of labels are
infected in YouTube Face. 0 1 2 3 4
1
(2.3%)4
(9.3%)7
(16.3%)10
(23.3%)13
(30.2%)Anomaly Index
# of Infected Labels in DNN
(Ratio to Total # of Labels)
Fig. 15. Anomaly index of each infected GTSRB
model with different number of labels being in-
fected (results averaged over 10rounds). 0 40 80 120 160
1
(2.3%)4
(9.3%)7
(16.3%)10
(23.3%)13
(30.2%)L1 Norm of Trigger
# of Infected Labels in DNN
(Ratio to Total # of Labels)Uninfected
Infected
Fig. 16. L1norm of triggers from infected and
uninfected labels when different number of labels
are infected in GTSRB (results of a single round).
reduceδ∀→tfor many LtinL. This has the net effect of
making the impact of any single trigger less of an outlier
and harder to detect. The trade-off is that models are likely
to have a “maximum capacity” for learning backdoors while
maintaining their classification. Too many backdoors are li kely
to lower classification performance.
We experiment by generating distinctive triggers with mutu -
ally exclusive color patterns. We find most models, i.e.MNIST ,
GTSRB , andPubFig , have enough capacity to support trig-
gers for every output label without affecting classificatio n
accuracy. But in YouTube Face , with1,283 labels, we
observe an obvious drop in average attack success rate once
triggers infect more than 15.6%of labels in the model. As
shown in Figure 14, average attack success rate drops with
too many triggers, confirming our intuition.
We evaluate our defenses against multiple distinct backdoo rs
inGTSRB . As shown in Figure 15, once more than 8labels
(18.6%)have been infected with backdoors, it becomes very
difficult for anomaly detection to identify the impact of
triggers. Our results show we can detect up to 3labels (30%)
forMNIST ,375labels (29.2%) forYouTube Face , and24
labels (36.9%) forPubFig .
Though outlier detection method fails in this scenario, the
underlying reverse engineering method still works. For all
infected labels, we successfully reverse engineer the corr ect
trigger. Figure 16 shows the norm of triggers for infected an d
uninfected labels. All infected labels have smaller norm than
uninfected labels. Further manual analysis also validates that
reversed triggers visually look similar as original trigge rs.A
conservative defender could manually inspect reversed tri g-
gers and determine model’s suspicion. After that, our tests
show that preemptive “patching” could successfully mitiga te
potential backdoors. When all labels are infected in GTSRB , 0 0.2 0.4 0.6 0.8 1
Unpatched 1 2 3Attack Success Rate
Iteration of Patching
Fig. 17. Attack success rate of 9triggers when patching DNN for different
number of iterations.
patching all labels using reversed triggers would reduce av er-
age attack success rate to 2.83%. Proactive patching provides
similar benefits for the other models as well. Finally, filter ing
is also effective at detecting adversarial inputs with low F NR
at FPR of 5%across all BadNets models.
D. Single Infected Label with Multiple Triggers
We consider a scenario where multiple distinctive triggers
induce misclassification to the same label. In this case, our
detection techniques would likely only detect and patch one
of the existing triggers. To test this, we inject 9white4×4
square triggers for the same target label into GTSRB . Those
triggers have the same shape and color, but are located in
different positions of the image, i.e.four corners, four edges,
and the center. The attack achieves >90% attack success rate
for all triggers.
Detection and patching results are included in Figure 17.
As suspected, a single run of our detection technique only
identifies and patches one of the injected triggers. Fortuna tely,
running just 3iterations of our detection and patch algorithm
is able to successively reduce the success rate of all trigge rs to
<5%. We also test on other MNIST ,YouTube Face , and
PubFig , and attack success rate of all triggers are reduced to
<1%,<5%, and<4%.
E. Source-label-specific (Partial) Backdoors
In Section II, we define backdoor as a hidden pattern that
could misclassify arbitrary inputs from any label into the t arget
label. Our detection scheme is designed to find these “com-
plete” backdoors. A less powerful, “partial” backdoor, cou ld
be designed such that triggers only trigger misclassificati on
when applied to input belonging to a subset of source labels,
and do nothing when applied to other inputs. Such backdoors
would be a challenge to detect using our existing methods.
Detecting partial backdoors requires slightly modifying o ur
detection scheme. Instead of reverse engineering a trigger to
every target label, we analyze all possible source-target l abel
pairs. For each label pair, we use samples belonging to the
source label to solve the optimization. The resulting rever sed
trigger would only be effective for the specific label pair.
Then, by comparing L1norm of triggers for different source-
target pairs, we can use the same outlier detection method
to identify label pairs that are particularly vulnerable an d
appear as anomaly. We experiment by injecting a backdoor
targeting one source-target label pair into MNIST . While the
injected backdoor works very well, our updated techniques f or
detection, and mitigation are all successful.
Analyzing all source-target label pairs increases the com-
putation cost of detection by a factor of N, whereNis the
number of labels. However, we can use a divide-and-conquer
algorithm to reduce the computational cost to a factor of logN.
We leave detailed evaluation to future work.
VIII. R ELATED WORK
Traditional machine learning assumes the environment is
benign. This assumption could be violated by an adversary at
either training or testing time.
Additional Backdoor Attacks and Defenses. In addition to
attacks mentioned in Section II, Chen et al. propose a backdoor
attack under a more restricted attack model, where attacker can
only pollute a limited portion of training set [17]. Another line
of work directly tampers with hardware the DNN is running
on [30], [31]. Such backdoor circuits would also alter model ’s
performance when a trigger is presented.
Poisoning Attacks. Poisoning attack pollutes the training
data to alter the model’s behavior. Different from backdoor
attack, poisoning attack does not rely on the trigger, and al ters
model’s behavior on a set of clean samples. Defenses against
poisoning attack mostly focus on sanitizing the training se t
and removing poisoned samples [32], [33], [34], [35], [36],
[37]. The insight is to find samples that would alter model’s
performance significantly [32]. This insight has shown to be
less effective against backdoor attack [17], as injected sa mples
do not affect model’s performance on clean samples. Also, it ’s
impractical in our attack model, as the defender does not hav e
access to the poisoned training set.
Other Adversarial Attacks against DNNs. Numerous
(non-backdoor) adversarial attacks have been proposed aga instgeneral DNNs, often crafting imperceptible modifications t o
images to cause misclassification. These can be applied to
DNNs during inference [38], [39], [40], [41], [42]. A number
of defenses have been proposed [43], [44], [45], [46], [47],
yet many have shown to be less effective against an adaptive
adversary [48], [49], [50], [51]. Some recent work tries to c raft
universal perturbations, which would trigger misclassific ation
for multiple images in an uninfected DNN [52], [53]. This
line of work considers a different threat model assuming an
uninfected victim model, which is not the target scenario of
our defense.
IX. C ONCLUSION AND FUTURE WORK
Our work describes and empirically validates our robust
and general detection and mitigation tools against backdoo r
(Trojan) attacks on deep neural networks. Beyond the efficac y
of our defense against basic and complex backdoors, one
of the unexpected takeaways of our paper is the significant
differences between two backdoor injection methods: the
trigger-driven BadNets end-to-end attack with full access to
model training, and the neuron-driven Trojan Attack withou t
access to model training. Through our experiments, we find
that the Trojan Attack injection method generally adds more
perturbations than necessary, and introduces unpredictab le
changes to non-targeted neurons. This makes their triggers
harder to reverse engineer, and makes them more resistant
to filtering and neuron pruning. However, the tradeoff is tha t
their focus on specific neurons make them extremely sensitiv e
to mitigation via unlearning. In contrast, BadNets introdu ce
more predictable changes to neurons, and can be more easily
reverse engineered, filtered and mitigated via neuron pruni ng.
Finally, while our results are robust against a range of
attacks in different applications, there are still limitat ions.
First and foremost is the question of generalization beyond
the current vision domain. Our high-level intuition and des ign
of detection/mitigation methods could be generalizable: t he
intuition for detection is that the infected label is more
vulnerable than uninfected labels, and this should be domai n
agnostic. The main challenge of adapting the entire pipelin e
to non-vision domain is to formulate the backdoor attack
process and design a metric measuring how vulnerable a
specific label is (like Equation 2 and Equation 3). Second, the
space of potential counter-measures of attacker could be la rge.
We study 5different counter-measures that specifically target
different components/assumptions of our defense, but furt her
exploration of other potential counter-measures remains p art
of future work.
ACKNOWLEDGEMENT
We thank our shepherd Roberto Perdisci and anonymous re-
viewers for their constructive feedback. This work is suppo rted
in part by NSF grants CNS-1527939 and CNS-1705042.
Any opinions, findings, and conclusions or recommendations
expressed in this material are those of the authors and do not
necessarily reflect the views of any funding agencies.
REFERENCES
[1] Q. Wang, W. Guo, K. Zhang, A. G. O. II, X. Xing, X. Liu, and C. L.
Giles, “Adversary resistant deep neural networks with an ap plication to
malware detection,” in Proc. of KDD , 2017.
[2] D. Arp, M. Spreitzenbarth, M. Hubner, H. Gascon, and K. Ri eck,
“Drebin: Effective and explainable detection of an- droid m alware in
your pocket,” in Proc. of NDSS , 2014.
[3] Z. L. Chua, S. Shen, P. Saxena, and Z. Liang, “Neural nets c an learn
function type signatures from binaries,” in Proc. of USENIX Security ,
2017.
[4] E. C. R. Shin, D. Song, and R. Moazzezi, “Recognizing func tions in
binaries with neural networks,” in Proc. of USENIX Security , 2015.
[5] H. Debar, M. Becker, and D. Siboni, “A neural network comp onent for
an intrusion detection system,” in Proc. of IEEE S&P , 1992.
[6] C. Wierzynski, “The Challenges and Opportunities of Exp lainable AI,”
https://ai.intel.com/the-challenges-and-opportuniti es-of-explainable- ai,
Jan. 2018.
[7] “FICO’s Explainable Machine Learning Challenge,” http s://community.
fico.com/s/explainable-machine-learning-challenge, 20 18.
[8] Z. C. Lipton, “The mythos of model interpretability,” in ICML Workshop
on Human Interpretability in Machine Learning , 2016.
[9] S. M. Lundberg and S.-I. Lee, “A unified approach to interp reting model
predictions,” in Proc. of NIPS , 2017.
[10] S. Bach, A. Binder, G. Montavon, F. Klauschen, K. R. Mull er, and
W. Samek, “On pixel-wise explanations for non-linear class ifier deci-
sions by layer-wise relevance propagation,” PloS One , vol. 10, no. 7,
July 2015.
[11] W. Guo, D. Mu, J. Xu, P. Su, G. Wang, and X. Xing, “Lemna: Ex plaining
deep learning based security applications,” in Proc. of CCS , 2018.
[12] T. Gu, B. Dolan-Gavitt, and S. Garg, “Badnets: Identify ing vulnerabili-
ties in the machine learning model supply chain,” in Proc. of Machine
Learning and Computer Security Workshop , 2017.
[13] Y . Liu, S. Ma, Y . Aafer, W.-C. Lee, J. Zhai, W. Wang, and X. Zhang,
“Trojaning attack on neural networks,” in Proc. of NDSS , 2018.
[14] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan , I. Goodfellow,
and R. Fergus, “Intriguing properties of neural networks,” inProc. of
ICLR , 2014.
[15] K. Liu, B. Dolan-Gavitt, and S. Garg, “Fine-pruning: De fending against
backdooring attacks on deep neural networks,” in Proc. of RAID , 2018.
[16] Y . Liu, Y . Xie, and A. Srivastava, “Neural trojans,” in Proc. of ICCD ,
2017.
[17] X. Chen, C. Liu, B. Li, K. Lu, and D. Song, “Targeted backd oor
attacks on deep learning systems using data poisoning,” arXiv preprint
arXiv:1712.05526 , 2017.
[18] J. Yosinski, J. Clune, A. Nguyen, T. Fuchs, and H. Lipson , “Under-
standing neural networks through deep visualization,” arXiv preprint
arXiv:1506.06579 , 2015.
[19] D. P. Kingma and J. Ba, “Adam: A method for stochastic opt imization,”
arXiv preprint arXiv:1412.6980 , 2014.
[20] F. R. Hampel, “The influence curve and its role in robust e stimation,”
Journal of the American Statistical Association , vol. 69, no. 346, pp.
383–393, 1974.
[21] P. J. Rousseeuw and C. Croux, “Alternatives to the media n absolute
deviation,” Journal of the American Statistical association , vol. 88, no.
424, pp. 1273–1283, 1993.
[22] “https://www.cs.tau.ac.il/ ∼wolf/ytfaces/,” YouTube Faces DB.
[23] Y . LeCun, L. Jackel, L. Bottou, C. Cortes, J. S. Denker, H . Drucker,
I. Guyon, U. Muller, E. Sackinger, P. Simard et al. , “Learning algorithms
for classification: A comparison on handwritten digit recog nition,”
Neural networks: the statistical mechanics perspective , vol. 261, p. 276,
1995.
[24] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel, “Ma n vs. computer:
Benchmarking machine learning algorithms for traffic sign r ecognition,”
Neural Networks , 2012.
[25] Y . Sun, X. Wang, and X. Tang, “Deep learning face represe ntation from
predicting 10,000 classes,” in Proc. of CVPR , 2014.
[26] “http://vision.seas.harvard.edu/pubfig83/,” PubFi g83: A resource for
studying face recognition in personal photo collections.
[27] O. M. Parkhi, A. Vedaldi, A. Zisserman et al. , “Deep face recognition.”
inProc. of BMVC , 2015.
[28] “http://www.robots.ox.ac.uk/ ∼vgg/data/vgg face/,” VGG Face Dataset.[29] H. Hu, R. Peng, Y .-W. Tai, and C.-K. Tang, “Network trimm ing: A data-
driven neuron pruning approach towards efficient deep archi tectures,”
arXiv preprint arXiv:1607.03250 , 2016.
[30] J. Clements and Y . Lao, “Hardware trojan attacks on neur al networks,”
arXiv preprint arXiv:1806.05768 , 2018.
[31] W. Li, J. Yu, X. Ning, P. Wang, Q. Wei, Y . Wang, and H. Yang, “Hu-fu:
Hardware and software collaborative attack framework agai nst neural
networks,” in Proc. of ISVLSI , 2018.
[32] Y . Cao, A. F. Yu, A. Aday, E. Stahl, J. Merwine, and J. Yang , “Efficient
repair of polluted machine learning systems via causal unle arning,” in
Proc. of ASIACCS , 2018.
[33] M. Jagielski, A. Oprea, B. Biggio, C. Liu, C. Nita-Rotar u, and B. Li,
“Manipulating machine learning: Poisoning attacks and cou ntermeasures
for regression learning,” in Proc. of IEEE S&P , 2018.
[34] B. I. Rubinstein, B. Nelson, L. Huang, A. D. Joseph, S.-h . Lau, S. Rao,
N. Taft, and J. Tygar, “Antidote: understanding and defendi ng against
poisoning of anomaly detectors,” in Proc. of IMC , 2009.
[35] M. Mozaffari-Kermani, S. Sur-Kolay, A. Raghunathan, a nd N. K. Jha,
“Systematic poisoning attacks on and defenses for machine l earning in
healthcare,” IEEE journal of biomedical and health informatics , vol. 19,
no. 6, pp. 1893–1905, 2015.
[36] J. Steinhardt, P. W. W. Koh, and P. S. Liang, “Certified de fenses for data
poisoning attacks,” in Proc. of NIPS , 2017.
[37] G. F. Cretu, A. Stavrou, M. E. Locasto, S. J. Stolfo, and A . D. Keromytis,
“Casting out demons: Sanitizing training data for anomaly s ensors,” in
Proc. of IEEE S&P , 2008.
[38] N. Carlini and D. Wagner, “Towards evaluating the robus tness of neural
networks,” in Proc. of IEEE S&P , 2017.
[39] A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial machine learning
at scale,” in Proc. of ICLR , 2017.
[40] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Ce lik, and
A. Swami, “The limitations of deep learning in adversarial s ettings,”
inProc. of Euro S&P , 2016.
[41] Y . Liu, X. Chen, C. Liu, and D. Song, “Delving into transf erable
adversarial examples and black-box attacks,” in Proc. of ICLR , 2016.
[42] B. Wang, Y . Yao, B. Viswanath, Z. Haitao, and B. Y . Zhao, “ With
great training comes great vulnerability: Practical attac ks against transfer
learning,” in Proc. of USENIX Security , 2018.
[43] N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami, “Di stillation
as a defense to adversarial perturbations against deep neur al networks,”
inProc. of IEEE S&P , 2016.
[44] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vlad u, “Towards
deep learning models resistant to adversarial attacks,” in Proc. of ICLR ,
2018.
[45] H. Kannan, A. Kurakin, and I. Goodfellow, “Adversarial logit pairing,”
arXiv preprint arXiv:1803.06373 , 2018.
[46] D. Meng and H. Chen, “Magnet: a two-pronged defense agai nst adver-
sarial examples,” in Proc. of CCS , 2017.
[47] W. Xu, D. Evans, and Y . Qi, “Feature squeezing: Detectin g adversarial
examples in deep neural networks,” in Proc. of NDSS , 2018.
[48] N. Carlini and D. Wagner, “Defensive distillation is no t robust to
adversarial examples,” arXiv preprint arXiv:1607.04311 , 2016.
[49] W. He, J. Wei, X. Chen, N. Carlini, and D. Song, “Adversar ial example
defenses: Ensembles of weak defenses are not strong,” in Proc. of
WOOT , 2017.
[50] N. Carlini and D. Wagner, “Magnet and efficient defenses against
adversarial attacks are not robust to adversarial examples ,”arXiv preprint
arXiv:1711.08478 , 2017.
[51] A. Athalye, N. Carlini, and D. Wagner, “Obfuscated grad ients give a
false sense of security: Circumventing defenses to adversa rial examples,”
inProc. of ICML , 2018.
[52] T. B. Brown, D. Man´ e, A. Roy, M. Abadi, and J. Gilmer, “Ad versarial
patch,” arXiv preprint arXiv:1712.09665 , 2017.
[53] S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Fros sard, “Univer-
sal adversarial perturbations,” in Proc. of CVPR , 2017.
APPENDIX
BACKDOOR DETECTION USING TESTING DATA
In previous experiments, we use training data for detecting
backdoors. In many scenarios, full training data may not be
available, and users only have access to limited testing dat a
to validate models. Here, we determine if detection achieve s
0 1 2 3 4 5 6
MNIST GTSRB YouTube
FacePubFig Trojan
SquareTrojan
WMAnomaly IndexInfected
Clean
Fig. 18. Anomaly measurement of infected and clean model by how much t he
label with smallest trigger deviates from the remaining lab els (using testing
data).
0 100 200 300 400
MNIST GTSRB YouTube
FaceL1 Norm of TriggerUninfected
Infected
0 1000 2000 3000 4000
PubFig Trojan
SquareTrojan
WM
Fig. 19.L1norm of triggers for infected and uninfected labels in backd oored
models (using testing data).
similar performance using only limited testing data. For al l
models, we follow the same detection configuration, but use
only50% of the testing data. The remaining 50% is used for
evaluating the effectiveness of the reversed trigger. Figu re 18
shows all infected models have anomaly index larger than 3,
while all clean models have anomaly index lower than 2. As
before, our detection method correctly differentiates inf ected
models and clean models.
Figure 19 plots the distribution of infected and uninfected
labels, when search for backdoors using testing data. Again ,
all infected labels have triggers with much smaller L1norm
values, compared to uninfected labels. Together, these res ults
show that our detection method is still effective, even when
using only limited testing data.
TABLE V . Intersection-over-union ratio of backdoor neurons used by
reversed trigger and original trigger.
Model MNIST GTSRBYouTube
FacePubFigTrojan
SquareTrojan
Watermark
Intersection
over Union Ratio0.807 0.892 0.583 0.775 0.104 0.117
DETAILED ANALYSIS OF REVERSED TRIGGER ’SNEURON
ACTIVATION SIMILARITY
Our previous experiment shows that reversed-engineered
triggers do activate malicious neurons used by the original
triggers. However, it’s still possible that reversed trigg ers
activate additional neurons. Here we further determine if t he
reversed trigger and the original trigger activate exactly same
set of neurons. This is a slightly different question from th ose
answered by experiments in Section V-C. Here, we identify to p
1%most important neurons for the reversed trigger and theoriginal trigger, respectively. Then, in each model, we com pute
the intersection-over-union ratio of the two sets of neuron s. A
ratio closer to 1indicates two triggers activate more similar
sets of neurons.
Table V shows the intersection-over-union ratio in all 6
backdoored models. We see that all BadNets models have
ratios higher than 0.58, which indicates the reversed trigger
is very similar to the original trigger in neuron activation .
However, ratios in Trojan models are much smaller ( 0.104and
0.117), which suggests that the reversed trigger shares less in
common with the original trigger. As we mentioned before,
this is likely caused by the design of Trojan Attack. Since
Trojan Attack relies on specific neurons to build a stronger
connection between the trigger and the misclassification ou t-
put, the side affect on other neurons results in a wider range of
triggers. The reversed trigger, being the smallest among th em
(based on L1norm), uses a slightly different set of neurons,
but can still achieve similar end-to-end misclassification effect.
(a) MNIST
(b) GTSRB
(c) YouTube Face
(d) PubFig
(e) Trojan Square
(f) Trojan WM
Fig. 20. Examples of adversarial images in BadNets models (w ith white square trigger added to the bottom right corner of t he image), Trojan Square, and
Trojan Watermark.
TABLE VI. Detailed information about dataset and training c onfigurations for each BadNets models.
Task / Dataset # of LabelsTraining
Set SizeTesting
Set SizeTraining Configuration
MNIST 10 50,000 10,000 inject ratio=0.1, epochs=5, batch=32, optimizer=Adam, lr =0.001
GTSRB 43 35,288 12,630 inject ratio=0.1, epochs=10, batch=32, optimizer=Adam, l r=0.001
YouTube Face 1,283 375,645 64,150 inject ratio=0.1, epochs=10, batch=32, optimizer=Adadel ta, lr=0.1
PubFig 65 5,850 650inject ratio=0.1, epochs=15, batch=32, optimizer=Adadel ta, lr=0.1
First12layers are frozen during training. First 5epochs are trained using clean data only.
0 0.2 0.4 0.6 0.8 1
0 0.1 0.2 0.3 0.4 0.5 0 0.2 0.4 0.6 0.8 1Classification Accuracy
Attack Success Rate
Ratio of Neurons PrunedClassification
Attack w/ Original Trigger
Attack w/ Reversed Trigger
(a) MNIST 0 0.2 0.4 0.6 0.8 1
0 0.1 0.2 0.3 0.4 0.5 0 0.2 0.4 0.6 0.8 1Classification Accuracy
Attack Success Rate
Ratio of Neurons PrunedClassification
Attack w/ Original Trigger
Attack w/ Reversed Trigger
(b) GTSRB 0 0.2 0.4 0.6 0.8 1
0 0.1 0.2 0.3 0.4 0.5 0 0.2 0.4 0.6 0.8 1Classification Accuracy
Attack Success Rate
Ratio of Neurons PrunedClassification
Attack w/ Original Trigger
Attack w/ Reversed Trigger
(c) YouTube Face 0 0.2 0.4 0.6 0.8 1
0 0.1 0.2 0.3 0.4 0.5 0 0.2 0.4 0.6 0.8 1Classification Accuracy
Attack Success Rate
Ratio of Neurons PrunedClassification
Attack w/ Original Trigger
Attack w/ Reversed Trigger
(d) PubFig
Fig. 21. Classification accuracy and attack success rate usi ng original/reversed trigger when pruning backdoor-relat ed neurons at the second to last layer.
0 0.2 0.4 0.6 0.8 1
0 0.02 0.04 0.06 0.08 0.1 0 0.2 0.4 0.6 0.8 1Classification Accuracy
Attack Success Rate
Ratio of Neurons PrunedClassification
Attack w/ Original Trigger
Attack w/ Reversed Trigger
(a) MNIST 0 0.2 0.4 0.6 0.8 1
0 0.04 0.08 0.12 0.16 0.2 0 0.2 0.4 0.6 0.8 1Classification Accuracy
Attack Success Rate
Ratio of Neurons PrunedClassification
Attack w/ Original Trigger
Attack w/ Reversed Trigger
(b) GTSRB 0 0.2 0.4 0.6 0.8 1
0 0.02 0.04 0.06 0.08 0.1 0 0.2 0.4 0.6 0.8 1Classification Accuracy
Attack Success Rate
Ratio of Neurons PrunedClassification
Attack w/ Original Trigger
Attack w/ Reversed Trigger
(c) YouTube Face 0 0.2 0.4 0.6 0.8 1
0 0.02 0.04 0.06 0.08 0.1 0 0.2 0.4 0.6 0.8 1Classification Accuracy
Attack Success Rate
Ratio of Neurons PrunedClassification
Attack w/ Original Trigger
Attack w/ Reversed Trigger
(d) PubFig
Fig. 22. Classification accuracy and attack success rate of o riginal/reversed trigger when pruning backdoor-related n eurons at the last convolution layer.
TABLE VII. Model Architecture for MNIST. FC stands for fully -connected
layer.
Layer Type # of Channels Filter Size Stride Activation
Conv 16 5 ×5 1 ReLU
MaxPool 16 2 ×2 2 -
Conv 32 5 ×5 1 ReLU
MaxPool 32 2 ×2 2 -
FC 512 - - ReLU
FC 10 - - SoftmaxTABLE VIII. Model Architecture for GTSRB.
Layer Type # of Channels Filter Size Stride Activation
Conv 32 3 ×3 1 ReLU
Conv 32 3 ×3 1 ReLU
MaxPool 32 2 ×2 2 -
Conv 64 3 ×3 1 ReLU
Conv 64 3 ×3 1 ReLU
MaxPool 64 2 ×2 2 -
Conv 128 3 ×3 1 ReLU
Conv 128 3 ×3 1 ReLU
MaxPool 128 2 ×2 2 -
FC 512 - - ReLU
FC 43 - - Softmax
TABLE IX. DeepID Model Architecture for YouTube Face.
Layer Name (Type) # of Channels Filter Size Stride Activatio n Connected to
conv 1 (Conv) 20 4 ×4 2 ReLU
pool 1 (MaxPool) 2 ×2 2 - conv 1
conv 2 (Conv) 40 3 ×3 2 ReLU pool 1
pool 2 (MaxPool) 2 ×2 2 - conv 2
conv 3 (Conv) 60 3 ×3 2 ReLU pool 2
pool 3 (MaxPool) 2 ×2 2 - conv 3
fc1 (FC) 160 - - - pool 3
conv 4 (Conv) 80 2 ×2 1 ReLU pool 3
fc2 (FC) 160 - - - conv 4
add 1 (Add) - - - ReLU fc 1, fc 2
fc3 (FC) 1280 - - Softmax add 1
TABLE X. Model Architecture for PubFig.
Layer Type # of Channels Filter Size Stride Activation
Conv 64 3 ×3 1 ReLU
Conv 64 3 ×3 1 ReLU
MaxPool 64 2 ×2 2 -
Conv 128 3 ×3 1 ReLU
Conv 128 3 ×3 1 ReLU
MaxPool 128 2 ×2 2 -
Conv 256 3 ×3 1 ReLU
Conv 256 3 ×3 1 ReLU
Conv 256 3 ×3 1 ReLU
MaxPool 256 2 ×2 2 -
Conv 512 3 ×3 1 ReLU
Conv 512 3 ×3 1 ReLU
Conv 512 3 ×3 1 ReLU
MaxPool 512 2 ×2 2 -
Conv 512 3 ×3 1 ReLU
Conv 512 3 ×3 1 ReLU
Conv 512 3 ×3 1 ReLU
MaxPool 512 2 ×2 2 -
FC 4096 - - ReLU
FC 4096 - - ReLU
FC 65 - - Softmax
|
2d5d46fa-7ef5-444c-97f7-88a34bbf75a6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Above-Average AI Scientists
Followup to: The Level Above Mine, Competent Elites
(Those who didn't like the last two posts should definitely skip this one.)
I recall one fellow, who seemed like a nice person, and who was quite eager to get started on Friendly AI work, to whom I had trouble explaining that he didn't have a hope. He said to me:
> "If someone with a Masters in chemistry isn't intelligent enough, then you're not going to have much luck finding someone to help you."
It's hard to distinguish the grades above your own. And even if you're literally the best in the world, there are still electron orbitals above yours—they're just unoccupied. Someone had to be "the best physicist in the world" during the time of Ancient Greece. Would they have been able to visualize Newton?
At one of the first conferences organized around the tiny little subfield of Artificial General Intelligence, I met someone who was heading up a funded research project specifically declaring AGI as a goal, within a major corporation. I believe he had people under him on his project. He was probably paid at least three times as much as I was paid (at that time). His academic credentials were superior to mine (what a surprise) and he had many more years of experience. He had access to lots and lots of computing power.
And like nearly everyone in the field of AGI, he was rushing forward to write code immediately—not holding off and searching for a sufficiently precise theory to permit stable self-improvement.
In short, he was just the sort of fellow that... Well, many people, when they hear about Friendly AI, say: "Oh, it doesn't matter what you do, because [someone like this guy] will create AI first." He's the sort of person about whom journalists ask me, "You say that this isn't the time to be talking about regulation, but don't we need laws to stop people like this from creating AI?"
"I suppose," you say, your voice heavy with irony, "that you're about to tell us, that this person doesn't reall
|
043c8b6d-b2a0-456c-a735-5ad577bf6277
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Review of studies says you can decrease motivated cognition through self-affirmation
I read this article today and thought LW might find it interesting. The key finding is that in a number of different experiments, simple "self-affirmations" (such as writing about relationships with your friends or something else that makes you feel good about yourself) make people more open to changing their mind in cases where changing their mind would be damaging to their self-image. The proposed explanation is that people need to maintain a certain level of self-worth, and one way they do that is by refusing to accept evidence that would damage their sense of self-worth. But if they have a high enough sense of self-worth, they are less likely to do this. I haven't reviewed any of these studies personally, but the idea makes some sense and sounds pretty easy to try. Hat tip to Dan Keys for putting me onto the idea. I searched LW for "Sherman self-affirmation" and didn't see this discussed anywhere on LW, but I didn't look very hard.
Title: Accepting Threatening Information: Self–Affirmation and the Reduction of Defensive Biases
Authors: David K. Sherman and Geoffrey L. Cohen
Citation details: Current Directions in Psychological Science August 2002 vol. 11 no. 4 119-123
Abstract: Why do people resist evidence that challenges the validity of long–held beliefs? And why do they persist in maladaptive behavior even when persuasive information or personal experience recommends change? We argue that such defensive tendencies are driven, in large part, by a fundamental motivation to protect the perceived worth and integrity of the self. Studies of social–political debate, health–risk assessment, and responses to team victory or defeat have shown that people respond to information in a less defensive and more open–minded manner when their self–worth is buttressed by an affirmation of an alternative source of identity. Self–affirmed individuals are more likely to accept information that they would otherwise view as threatening, and subsequently to change their beliefs
|
76156ed1-3dc0-4345-a443-d989df3bc474
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Against Sam Harris's personal claim of attentional agency
TL;DR: Sam Harris has an obviously terrible information diet, and it is correct to criticize him for this in the context of his meditation teachings. He should treat his engagement with news headlines and twitter as unfortunate addictions, not good intellectual citizenship.
Here is a recent podcast episode in which he fends off claims that his attentional priorities are incompatible with his meditation practice (starting at 13m55s). Nothing he says is quite wrong, but I couldn't help but feel like he was talking around something more important. It felt like a missed opportunity to loudly admit that he falls well short of a healthy mental lifestyle. Those claims by some listeners that he should not podcast about ongoing world events are obviously silly. But I think that they end up serving as a Weak Man argument that distracts from the related, non-silly criticisms. It concerns me that he rebuts those misplaced criticisms at length, but (as far as I know) has never spent a similar amount of time admitting his actual error and urging contemplative trainees to beware the same.
Harris has said that given his background (as someone with extensive training in both meditation and neuroscience, plus a large existing audience), he is uniquely positioned to produce such an app as Waking Up. He’s right. But at the same time, he has the significant drawback of setting a very bad example of how to use social media. It’s true that people who like his app can just ignore his podcast and social media activity. But that strikes me as a needlessly lossy compromise. If I had it my way, he would stop using twitter except under very controlled conditions, spend some of the freed attention on deliberate, high-quality information, and thereby increase the signal:noise ratio of his podcast. Maybe he would agree with me about this, but if so, it’s not very clear to me.
Perhaps the starkest aspect of all this is the way Harris related to Donald Trump's Twitter presence. So far as I can te
|
58170af6-0452-4052-8f69-8b302afd6b46
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How dangerous is Long COVID for kids?
All adults in my family are already vaccinated. But I also have two kids, age 6 and 3, who are not. The older one needs to wait a few months until the child vaccine is approved. The younger one will probably have to wait more than a year, because as far as I know the child COVID vaccines are only planned for kids 5 years or older.
How much should I be afraid until my kids get vaccinated? How much precaution is reasonable?
Most online debates about "kids and COVID" focus on the fact that the kids have a very small chance to die from COVID. There are numbers and pictures showing that an unvaccinated X years old child actually has a smaller chance of dying than a vaccinated Y years old adult. Okay, I got the memo.
But the number of people who end up in hospital and don't die is much larger than the number of people who die, right? My friends had their child for one month in a hospital, for COVID-related reasons, and although the child seems perfectly healthy now, I'd prefer to avoid this kind of experience, too. How high is the probability of "kid ends up in hospital (but survives)"? I don't have good data here: some people keep sending me articles about crowded child hospitals, other people insist that those were children with preexisting medical problems so this doesn't apply to my kids who seem average health-wise.
Plus there is the question of Long COVID. Kids on average have more years of life ahead of them, so even if the chance of a problem is smaller, if it is a permanent damage, you need to multiply it by a greater number to get the total harm. Here the data is even noisier, I guess. First, given that we only know COVID for two years (and it keeps mutating), it is hard to estimate whether the damage is actually permanent, or how long it takes. Second, the symptoms of Long COVID are various, and they happen with other diseases too, so I guess in all studies you also have people with symptoms of "Long COVID" who actually never had COVID. So, what is the actu
|
9fda8504-0934-4ca8-beaf-fda997226fc0
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[AN #101]: Why we should rigorously measure and forecast AI progress
Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter.
Audio version here (may not be up yet).
HIGHLIGHTS
Danny Hernandez on forecasting and the drivers of AI progress
(Arden Koehler and Danny Hernandez) (summarized by Rohin): This podcast is a great introduction to the practice of forecasting and measurement in AI, and why it is important. I won't summarize everything in the podcast, but here are some of the points made.
Danny talks about the AI and Compute (AN #7) and AI and Efficiency (AN #99) work that he did at OpenAI. The former shows that the compute devoted to the largest-scale experiments has increased by a factor of 300,000 from 2012 to 2018, and the latter suggests that algorithms have been able to achieve similar performance with 25x less compute over the same time period (later updated to 44x from 2012 to 2019).
One thing I didn’t realize earlier was that the 25x / 44x factor should be thought of as a loose lower bound: in other areas such as language modeling, the factor looks higher. But more importantly, the methodology used doesn’t allow us to model the effects of an algorithm allowing us to do something we couldn’t do before (which we could interpret as something we could do, but with way more compute). Possibly this algorithmic progress should be thought of as a 100x or even 1000x improvement in efficiency. Overall, Danny sees both algorithmic progress and increase in compute as pretty big factors in predicting how AI will go in the future.
Unfortunately, it’s hard to draw strong implications from these measurements for the downstream things we care about -- should we think that AI progress is “slow” or “fast”, or “linear” or “exponential”, based on these results? It’s important to be specific about the units you’re using when thinking
|
e2f06e40-0b0c-447a-bf7b-dfaa346be9ff
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Melbourne, practical rationality
Discussion article for the meetup : Melbourne, practical rationality
WHEN: 04 May 2012 06:30:00PM (+1000)
WHERE: 197 Little Lonsdale St, Melbourne VIC 3000
Practical rationality, as distinct from the social and rationality outreach meetups. Look for a social meetup on the 3rd Friday of each month.
Discussion: http://groups.google.com/group/melbourne-less-wrong
This meetup repeats on the 1st Friday of each month.
And, watch this space - [content removed]
Aubrey de Grey is speaking at Embiggen Books, and we're going along. Feel free to recruit promising skeptics, futurists, or anyone interesting - remember, think of them as meat.
From the Embiggen Books blog: Aubrey Du Grey with Dr Krystal Evans.
Discussion article for the meetup : Melbourne, practical rationality
|
78657e58-18e6-42de-979a-9370b02a172d
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
Constructing Lower Probabilities
Constructin g Lower Probabilities 515
Constructing Lower Probabilities
Carl Wagner
Mathematics Department
University of Tennessee
Knoxville, TN 37996-1300
Abstract
An elaboration of Dempster's method of con
structing belief functions suggests a broadly
applicable strategy for constructing lower
probabilities under a variety of evidentiary
constraints.
1 Introduction
For any sets X and Y a finitely additive probability
measure p defined on all subsets of Y, together with a
function g : Y --+ X, yield a finitely additive probabil
ity measure 1r, defined for each A C X by
1r(A) = p({y E Y: g(y) E A}). (1)
This basic result of elementary probability theory fur
nishes a strategy for probabilistic assessment of X
when X resists direct assessment because of its re
moteness from our experience. The strategy requires
identification of a set Y amenable to direct assessment,
as well as the determination of the function g, where g(y) is interpreted as that outcome x in X implied by
the outcome y in Y.
Of course, there may in general be no such clear-cut
connection between X and Y. We might, for example,
only be able to specify a function r : y --+ 2x -{0},
with f(y) being the set of outcomes in X consistent
with y. What, if any, quantification of uncertainty
across X can be effected in such a case?
If, paralleling ( 1), we define
,B(A) = p({y E Y: f(y) C A}), (2)
then ,B: 2x --+ (0, 1), ,8(0) = 0, and ,B(X) = 1, but ,B
is not in general additive. As first noted by Choquet
(1953) in a more general setting, and as rediscovered
by Dempster (1967) in the case of (2), ,B is neverthe
less a highly structured set function. In particular, for
every integer r 2 2, ,B is r-monotone, i.e., for every
sequence A1, ... , Ar of subsets of X,
,B(A1 U · · ·UAr) 2 I: (-l)III-l,B (nA;), (3)
0#C(r] iEJ Bruce Tonn
Policy Analysis Systems Group
Oak Ridge National Laboratory
Oak Ridge, TN 37831-6207
where (r] = { 1, ... , r}. Hence ,B is what Choquet calls
a (normalized) infinitely monotone capacity and what
Shafer (1976) calls a belief function.
Dempster calls ,B the lower probability induced by p
and f. Although the appropriateness of this terminol
ogy is fairly clear intuitively, it is worth elaborating
one precise sense in which f3 may be clearly so termed.
This elaboration is pursued in the next section, ul
timately leading to two interesting generalizations of
Dempster's construction.
2 An Alternative Construal of f3
Although much of what follows holds in some version
for arbitrary sets, we shall assume in the remainder of
this paper that all sets of possible outcomes or states of
affairs are finite. This enables us to avoid certain set
theoretic complexities, such as the use of the axiom
of choice. It also has the consequence that we may
speak simply of probability measures, since they are
identical with finitely additive probability measures on
finite sets.
Suppose then that we have finite sets X and Y, a prob
ability measure p on 2Y, and a consistency mapping
f : Y ---+ 2x-{0}, with f3 defined by (2). Following
Wagner (1992) we say that a probability measure P
on 2XxY is compatible with p and f if, for all F c y,
Py(F) = p(F), (4)
where Py is the Y -marginal of P, i.e., Py (F) =
P(X x F) for all F C Y, and
x fl. f(y) =* P(x, y) = 0. (5)
If we denote by P(p, f) the set of all probability mea
sures P on 2XxY satisfying (4) and (5), then it may
be shown (Wagner, 1992) that for all A C X
f3(A) = min{Px(A) : P E P(p, f)}, (6)
where Px(A) = P(AxY). Thus f3 is simply the "lower
envelope" of the family of X-marginals of all probabil
ity measures on 2xxY compatible with p and f.
From (6) it is clear that Dempster's construction rep
resents just one special case of a method for assessing
516 Wagner and Tonn
uncertainty across a set X using a related set Y. The
general strategy involves replacing P(p, f) in formula (� with whatever family P of probability measures on
2 xY is "compatible with the evidence," and setting
A(A) = min{Px(A) : PEP} (7)
for all A C X.
For what families P does formula (7) make sense? If
lXI = m and IYI = n, any probability measure P
on 2XxY is completely determined by the mn val
ues P(x, y), where x E X and y E Y. Any such P
may thus be represented as a vector in the compact
subset [0, 1)mn of Rmn. Thus any family P of such
measures corresponds to a subset of [0, 1)mn. If this
subset is closed, and hence compact, then, for each
A C X, the set {Px(A) : PEP} is a compact sub
set of [0, 1) (and hence possesses a minimum), since
Px(A) = ExeA l: yeY P(x, y) is a continuous func
tion of the variables P(x, y).
For an arbitrary closed family P the determination of
the minimum in (7) might of course be a difficult task.
But in many applications P will correspond to a closed
convex polyhedral subset of [0, 1 )mn, and so the mini
mum can be computed by the simplex algorithm. By
a closed convex polyhedral subset of [0, 1)mn we mean,
as usual, a subset defined by a finite number of linear
equations and (nonstrict) inequalities on the mn val
ues P(x, y). Equations and inequalities on the values
of P (or conditionalizations of P) at arbitrary subsets
of X x Y, or on expected values of specified random
functions with respect to P all fall into this category.
Hence formula (7) has a wide range of practical appli
cations.
We would emphasize that the uncertainty measure .>.
defined by (7) is in general much less structured than
the belief function f3 arising from formulas (2) or (6).
That is only to be expected, given the wide range of
possible families P. In the next section we shall exam
ine some special families P for which A( A) may be de
termined by a simpler procedure than the constrained
minimization required by (7). In particular, we shall
derive two generalizations of Dempster's formula (2)
that enable us to determine the degree of monotonic
ity of A.
3 Some Special Cases
We begin by reviewing some basic facts about lower
probabilities and lower envelopes, terms that we have
been using informall y, but which now require precise
specification.
Let £ : 2x --+ [0, 1), with £(0) = 0 and £(X) = 1,
and define u(A) := 1 -£(A) for all A C X. The
set function £ is called a lower probability (and u its
corresponding upper probability) if, for all A, B C X,
An B = 0 implies that £(AU B) � £(A) +£(B) and
u(A U B) � u(A) + u(B). If there exists a probability measure q (which need
not be the case) such that q(A) �£(A) for all A C X
(equivalently, q(A) � u(A) for all A C X), the lower
probability £ is said to be dominated. A dominated
lower probability £is called a lower envelope if, for all
A eX,
£(A) = min { q(A) : q is a probability (8)
measure and q(E) �£(E) for all E EX}.
If£ is a lower envelope, then for all A C X, its corre
sponding upper probability u, defined above, satisfies
u(A) = max{q(A): q is a probability measure and
q(E) � u(E) for all E C X}.
Dominated lower probabilities and lower envelopes
arise naturally when £(A) is construed behaviorally as
the supremum of prices one is willing to pay to re
ceive one unit of utility if A occurs, and u(A) is the
infimum of payments one is willing to accept to com
mit oneself to pay one unit of utility if A occurs. For
Walley (1981) has shown that one avoids a sure loss
in this context if and only if one's £is dominated, and
one avoids certain incoherent betting behavior if and
only if one's £ is a lower envelope.
We now examine a special case of (7) in which the
family P of probability measures compatible with the
evidence is defined with reference to a dominated lower
probability £ on 2Y, and a family ( .>.y )yEY of lower
envelopes on 2x, one for each y E Y. We say that a
probability measure P on 2XxY is compatible with £
and (.>.y)yEY if, for all FeY,
Py(F) �£(F) (9)
and also, for·all y E Y with Py(y) > 0 and all E C X,
p ("E" I "y") � .>.y(E), (10)
where "E" :=Ex Y and "y" :=X x {y}.
We denote by P (£, (.>.y)) the set of all P satisfying (9)
and (10). P (£, (.>.y)) is always nonempty. For if we
choose any probability measure q � £ and any family
( qy )yeY of probability measures with qy 2::: Ay for each
y E Y, and define
P(x, y) := qy(x)q(y), (11)
extending P to arbitrary subsets of X x Y in the obvi
ous way, then P may easily be seen to satisfy (9) and
(10).
Indeed, P ( £, ( .>.y)) clearly corresponds to a closed con
vex polyhedral subset of [0, 1)mn, where lXI = m and
IYI = n. Hence
>.(A) :=min{Px(A) :PEP(i,(.>.y))} (12)
may be computed by the simplex algorithm. In this
case, however, the formula for >.(A) takes a simpler
form.
Theorem 1. If A is defined by {12}, then, for all A C
X,
A(A) min{ 2:: q(y)>.y(A) : q is a yEY
probability measure such that
q(E) 2: f(E) for all E C X}. (13)
Proof. Note first that the right hand side of (13) is well
defined, since L::yeY q(y)>.y(A) is a continuous func
tion of then variables q(y), with domain a closed con
vex polyhedral subset of [0, 1 t. Hence establishing
(13), with its reduced number of variables and con
straints, will considerably simplify the calculation of A( A).
To prove (13), define the subsets UA and VA of [0, I)
by
and UA = {Px(A): PEP (l, (>.y))} (14)
VA= {L::yeYq(y)Ay{A): ) probability measure q 2: f.} ( 15
We must show that min U A = min VA. We do this by
proving that (i) for every u E U A, there exists a v E VA
such that v � u (whence min U A 2: min VA) and that (ii) VACUA (whence minUA �min VA)·
To prove (i), it suffices to show that if PEP (l, (>.y)),
then
Px(A) 2: 2:: Py(y)Ay(A), (16)
yEY
since by hypothesis Py 2: e. Writing "A" for A X y
and "y" for X x {y}, we have
Px(A) = P("A") = 2:: P("A" n "y")
as desired. yEY:P(''y'')>O
2:: P("y") P("A" I "y")
yEY:P(''y'')>O
> L P("y") >.y(A) = L Py(y)>.y(A),
yEY yEY
To prove (ii), we must show that for every q 2: e, there
exists aPE P (f, (Ay)) such that
Px(A) = L q(y)>.y(A). (17)
yEY
To construct such a P choose a family of probabil
ity measures (qy )yeY on 2x such that qy 2: >.y and qy(A) = >.y(A), this being possible since each >.y is
a lower envelope. We have noted above that setting P(x, y) = qy{x)q(y) yields aPE P (l, (Ay)). It is also
the case that
Px(A) = L 2:: P(x,y)
yEY xEA
= 2:: q(y) L qy(x) yEY xEA
= 2:: q(y)qy(A) = L q(y)Ay(A), yEY yEY Constructin g Lower Probabilities 517
which completes the proof.
We conclude by examining two special cases of formula
(13) for which there are formulas for >.(A) not involv
ing any minimization operation. The two formulas are
each generalizations of Dempster's formula (2).
(I). In the first case the dominated lower probability e is actually a probability measure, call it p. Since
the only probability measure q 2: p is p itself, VA, as
defined by (15), has a single element, L::yeY p(y)>.y(A),
and so formula (13) becomes
A(A) = L p(y)Ay{A). (18) yEY
It is easy to see that A is r-monotone if each Ay is r
monotone. In particular, >. is a belief function if each
Ay is a belief function.
To see how (18) reduces to (2), consider the special
family {Ay)yeY defined in terms of a family (Ey)yeY
of nonempty subsets of X by
>. (A) _ { 1, if A :::> Ey (19) Y -0, otherwise.
These functions >.y, called simple support functions
by Shafer, are belief functions, and clearly lower en
velopes. Define r : Y ___,. 2x -{0} by f(y) = Ey for
all y E Y. Then (18) becomes
>.(A) L p(y)>.y(A) = L p(y)
which is (2). yEY yEY:r(y)CA
p ( {y E Y : f(y) C A}),
(II). In the second special case of (13), we take for e any lower envelope and let the family ( Ay )yeY be
defined by (19), with r defined as above. Then formula
(13) becomes
>.(A)= e({y E Y: r(y) c A}), (20)
by the following derivation :
>.(A) = min{L q(y)Ay(A): probability q 2: f}
yEY
min { z= q(y) : probability q 2: e}
yEYT(y)CA
min{q ({y E Y: f(y) C A}):
probability q 2: £}
f({y E Y: f(y) C A}),
the last equality following from the fact that e is a
lower envelope. Of course (20) reduces to Dempster's
formula (2) when e is actually a probability measure p.
We leave it as an exercise to show that if f is r
monotone, then..\, as given by (20), is also r-monotone.
In particular, if e is a belief function, then so is >., as
Shafer (1979) has previously observed.
518 Wagner and Tonn
4 Acknowledgements
Wagner's research was supported in part by the Na
tional Science Foundation (DIR-8921269).
References
Choquet, G., Theory of capacities, Annales de
l'Institut Fourier, 5 (1953), 131-295.
Dempster, A., Upper and lower probabilties induced
by a multivalued mapping, Annals of Mathematical
Statistics, 38 (1967), 325-339.
Shafer, G., A Mathematical Theory of Evidence,
Princeton University Press, 1976.
Shafer, G., Allocations of probability, The Annals of
Probability, 7 (1979) 827-839.
Wagner, C., Generalized probability kinematics,
Erkenntnis, 36 {1992), 245-257.
Walley, P., Coherent lower (and upper) probabilities,
Research report, 1981, University of Warwick.
|
69ff9659-d5ce-4876-b6bd-236016ae774e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Why did no LessWrong discourse on gain of function research develop in 2013/2014?
While I wasn't at 80% of a lab leak when Eliezer asseted it a month ago, I'm now at 90%. It will take a while till it filters through society but I feel like we can already look at what we ourselves got wrong.
In 2014, in the LessWrong survey more people considered bioengineered pandemics a global catastrophic risk then AI. At the time there was a public debate about gain of function research. On editoral described the risks of gain of function research as:
> Insurers and risk analysts define risk as the product of probability times consequence. Data on the probability of a laboratory-associated infection in U.S. BSL3 labs using select agents show that 4 infections have been observed over <2,044 laboratory-years of observation, indicating at least a 0.2% chance of a laboratory-acquired infection (5) per BSL3 laboratory-year. An alternative data source is from the intramural BSL3 labs at the National Institutes of Allergy and Infectious Diseases (NIAID), which report in a slightly different way: 3 accidental infections in 634,500 person-hours of work between 1982 and 2003, or about 1 accidental infection for every 100 full-time person-years (2,000 h) of work (6).
>
> A simulation model of an accidental infection of a laboratory worker with a transmissible influenza virus strain estimated about a 10 to 20% risk that such an infection would escape control and spread widely (7). Alternative estimates from simple models range from about 5% to 60%. Multiplying the probability of an accidental laboratory-acquired infection per lab-year (0.2%) or full-time worker-year (1%) by the probability that the infection leads to global spread (5% to 60%) provides an estimate that work with a novel, transmissible form of influenza virus carries a risk of between 0.01% and 0.1% per laboratory-year of creating a pandemic, using the select agent data, or between 0.05% and 0.6% per full-time worker-year using the NIAID data.
Even at the lower bar of 0.05% per full-time worker-year i
|
fda0f85c-ed3d-4613-ac83-e5ce8be579c4
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
AlphaGo Zero and the Foom Debate
AlphaGo Zero uses 4 TPUs, is built entirely out of neural nets with no handcrafted features, doesn't pretrain against expert games or anything else human, reaches a superhuman level after 3 days of self-play, and is the strongest version of AlphaGo yet.
The architecture has been simplified. Previous AlphaGo had a policy net that predicted good plays, and a value net that evaluated positions, both feeding into lookahead using MCTS (random probability-weighted plays out to the end of a game). AlphaGo Zero has one neural net that selects moves and this net is trained by [Paul Christiano-style capability amplification](https://ai-alignment.com/alphago-zero-and-capability-amplification-ede767bb8446), playing out games against itself to learn new probabilities for winning moves.
As others have also remarked, this seems to me to be an element of evidence that favors the Yudkowskian position over the Hansonian position in my and Robin Hanson's [AI-foom debate](https://intelligence.org/ai-foom-debate/).
As I recall and as I understood:
* Hanson doubted that what he calls "architecture" is much of a big deal, compared to (Hanson said) elements like cumulative domain knowledge, or special-purpose components built by specialized companies in what he expects to be an ecology of companies serving an AI economy.
* When I remarked upon how it sure looked to me like humans had an architectural improvement over chimpanzees that counted for a lot, Hanson replied that this seemed to him like a one-time gain from allowing the cultural accumulation of knowledge.
I emphasize how all the mighty human edifice of Go knowledge, the joseki and tactics developed over centuries of play, the experts teaching children from an early age, was entirely discarded by AlphaGo Zero with a subsequent performance improvement. These mighty edifices of human knowledge, as I understand the Hansonian thesis, are supposed to be *the* bulwark against rapid gains in AI capability across multiple domains at once. I was like "Human intelligence is crap and our accumulated skills are crap" and this appears to have been bourne out.
Similarly, single research labs like Deepmind are not supposed to pull far ahead of the general ecology, because adapting AI to any particular domain is supposed to require lots of components developed all over the place by a market ecology that makes those components available to other companies. AlphaGo Zero is much simpler than that. To the extent that nobody else can run out and build AlphaGo Zero, it's either because Google has Tensor Processing Units that aren't generally available, or because Deepmind has a silo of expertise for being able to actually make use of existing ideas like ResNets, or both.
Sheer speed of capability gain should also be highlighted here. Most of my argument for FOOM in the Y-H debate was about self-improvement and what happens when an optimization loop is folded in on itself. Though it wasn't necessary to my argument, the fact that Go play went from "nobody has come close to winning against a professional" to "so strongly superhuman they're not really bothering any more" over two years just because that's what happens when you improve and simplify the architecture, says *you don't even need self-improvement* to get things that look like FOOM.
Yes, Go is a closed system allowing for self-play. It still took humans centuries to learn how to play it. Perhaps the new Hansonian bulwark against rapid capability gain can be that the environment has lots of empirical bits that are supposed to be very hard to learn, *even in the limit of AI thoughts fast enough to blow past centuries of human-style learning in 3 days*; and that humans have learned these vital bits over centuries of cultural accumulation of knowledge, *even though we know that humans take centuries to do 3 days of AI learning when humans have all the empirical bits they need*; and that AIs cannot absorb this knowledge very quickly using "architecture", *even though humans learn it from each other using architecture*. If so, then let's write down this new world-wrecking assumption (that is, the world ends if the assumption is false) and be on the lookout for further evidence that this assumption might perhaps be wrong.
Tl;dr: As others are already remarking, the situation with AlphaGo Zero looks nothing like the Hansonian hypothesis and a heck of a lot more like the Yudkowskian one.
Added: AlphaGo clearly isn't a general AI. There's obviously stuff humans do that make us much more general than AlphaGo, and AlphaGo obviously doesn't do that. However, if even with the human special sauce we're to expect AGI capabilities to be slow, domain-specific, and requiring feed-in from a big market ecology, then the situation we see without human-equivalent generality special sauce should not look like this.
To put it another way, I put a lot of emphasis in my debate on recursive self-improvement and the remarkable jump in generality across the change from primate intelligence to human intelligence. It doesn't mean we can't get info about speed of capability gains *without* self-improvement. It doesn't mean we can't get info about the importance and generality of algorithms *without* the general intelligence trick. The debate can start to settle for fast capability gains before we even get to what I saw as the good parts; I wouldn't have predicted AlphaGo and lost money betting against the speed of its capability gains, because reality held a more extreme position than I did on the Yudkowsky-Hanson spectrum.
---
Crossposted [here](https://www.facebook.com/yudkowsky/posts/10155848910529228?pnref=story).
|
835fb76d-1eed-4cc5-b67c-443cea79bb55
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Our values are underdefined, changeable, and manipulable
Crossposted at Intelligent Agent Forum.
When asked whether "communist" journalists could report freely from the USA, only 36% of 1950 Americans agreed. A follow up question about Amerian journalists reporting freely from the USSR got 66% agreement. When the order of the questions was reversed, 90% were in favour of American journalists - and an astounding 73% in favour of the communist ones.
There are many examples of survey responses depending on question order, or subtle issues of phrasing.
So there are people whose answers depended on question order. What then are the "true" values of these individuals?
Underdetermined values
I think the best way of characterising their values is to call them "underdetermined". There were/are presumably some people for which universal freedom of the press or strict national security were firm and established values. But for most, there were presumably some soft versions of freedom of the press and nationalism, and the first question triggered one narrative more strongly than the other. What then, are their "real" values? That's the wrong question - akin to asking if Argentina really won the 1986 world cup.
Politicians can change the opinions of a large sector of the voting public with a single pronouncement - were the people's real opinions the ones before, or the ones after? Again, this seems to be the wrong question. But don't people fret about this inconsistency? I'd wager that they aren't really aware of this, because people are the most changeable on issues they've given the least thought to.
And rationalists and EAs are not immune to this - we presumably don't shift much on what we identify as our core values, but on less important values, we're probably as changeable as anyone. But such contingent values can become very strong if attacked, thus becoming a core part of our identity - even if it's very plausible we could have held the opposite position in a world slightly different.
Frameworks and moral updating
Peo
|
2f05e8bb-446b-41bc-912a-7a399cd84a1a
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
[Crosspost] Organizing a debate with experts and MPs to raise AI xrisk awareness: a possible blueprint
*Thanks to* [*@Leon Lang*](https://www.lesswrong.com/users/leon-lang?mention=user) *from University of Amsterdam for proposing this post and reviewing the draft. Any views expressed in this post are not necessarily his.*
We ([Existential Risk Observatory](https://www.existentialriskobservatory.org/)) have organized a public [debate](https://dezwijger.nl/programma/existential-risks-of-artificial-intelligence) (recording [here](https://www.youtube.com/watch?v=vGQDctxwy2E), photos [here](https://twitter.com/XRobservatory/status/1646785715127156736)) in Amsterdam, the Netherlands with the purpose of creating more awareness of AI existential risk among policymakers and other leading voices of the societal debate. We want to share our experience in this post, because we think others might be able to follow a similar approach, and because we expect this to have significant xrisk-reducing net effects.
**Goals**
=========
Our high-level goal is to reduce existential risk, especially from AGI, by informing the public debate. We think more awareness will be net positive because of increased support for risk-reducing regulation and increased attention for AI safety work (more talent, more funding, more institutes working on the topic, more diverse actors, and more priority).
For this debate, we focused on:
1. Informing leaders of the societal debate (such as journalists, opinion makers, scientists, artists) about AI existential risk. This should help to widen the Overton window, increase the chance that this group of people will inform others, and therefore raise AI existential risk awareness in society.
2. Informing politicians about AI existential risk, to increase the chance that risk-reducing policies will get enacted.
**Strategy**
============
We used a debate setup with AI existential risk authorities and Members of Parliament (MPs). The strategy was that the MPs would get influenced by the authorities in this setting. We already contacted MPs and MP assistants before and had meetings with several. In order to be invited to meetings with MPs and/or MP assistants, we think that having published in mainstream media, having a good network, and having good policy proposals, are all helpful. If budget is available, hiring a lobbyist and a PR person is both helpful as well (we have a freelance senior PR person and a medior volunteer lobbyist).
Two Dutch MPs attended our debate. For them, advantages might include getting (media) attention and informing themselves. Stuart Russell agreed to be our keynote speaker (remote), and several other experts who all had good AI existential risk expertise attended as well. Additionally, we found a moderator with existing AI existential risk knowledge, which was important in making sure the debate went well. Finally, we found a leading debate center in Amsterdam (Pakhuis de Zwijger) willing to host the debate. We promoted our debate on our own social media, through the venue, we were mentioned in the prominent Dutch Future Affairs newsletter, and we promoted our event in EA whatsapp groups. This was sufficient to sell out the largest debate hall in Amsterdam (320 seats, tickets were free).
We organized this event in the Netherlands mainly because this is where we are most active. However, since the event was partially focused on getting policy implemented, it should be considered to organize events such as this debate especially where policy is most urgently needed (DC, but perhaps also Beijing, Brussels, or London).
**Program**
===========
We started with an introductory talk, after which we played some documentary fragments. After this introduction, our main guest Stuart Russell gave a talk and audience Q&A on AI existential risk. We closed with the 5-person panel, consisting of Queeny Rajkowski (MP for VVD, the largest governing party, center-right, with a portfolio including Digitization and Cybersecurity), Lammert van Raan (MP for the Party for the Animals, a medium-sized left party focusing on animal rights and climate), Mark Brakel (Director of Policy FLI), Nandi Robijns (AI MSc, Ministry of the Interior and Kingdom Relations), and Tim Bakker (AI PhD candidate at University of Amsterdam). In the panel, our moderator asked questions that we prepared and shared with the panel members in advance, and gave the audience the chance to ask questions as well.
**Costs**
=========
Our budget was below 2k euro for the whole event, of which most went to the venue. Costs in time were roughly four weeks full time work for two senior team members (spread out over about three months).
**Effects**
===========
Our debate generally went well, and we feel it has effectively opened the Overton window for most who attended. The two MPs both took the initiative for a follow-up meeting with the other panelists and us, where one of them proposed as a topic "what politics should do." We generally have the impression they were - at least directly after the debate - quite open to AI existential risks and talks about a policy solution. That does not mean, however, that they are now likely to implement solutions themselves: they are extremely busy, and for them, this is just one of fifty issues. They will probably not yet have a thorough understanding of AI existential risk. What they might do, however, is make time for more meetings with experts and use our input in drafting questions, motions, etc. in parliament about the topic, which could lead to the implementation of [risk-reducing](https://existentialriskobservatory.org/papers_and_reports/Position%20paper%20Existential%20Risk%20Observatory%20t.b.v.%20rondetafelgesprek%20Artifici%C3%ABle%20Intelligentie%20d.d.%2013%20september%202022.pdf) [policy](https://futureoflife.org/wp-content/uploads/2023/04/FLI_Policymaking_In_The_Pause.pdf). In the Netherlands, a [motion](https://www.tweedekamer.nl/kamerstukken/moties/detail?id=2023Z05424&did=2023D12809) was passed already by parliament asking for more AI Safety and AI Alignment research (before our debate).
Besides policy, an important effect of such a debate could be to further open the Overton window in the media, resulting in more frequent and more accurate AI existential risk reporting. One journalist who attended the debate said that AI existential risk "is now suddenly a thing." Another journalist interviewed Stuart Russell after our debate, and will probably publish this interview in a leading Dutch newspaper soon. On the EA whatsapp groups, the tone was very positive, with one person saying "it was surreal to see politicians take AIS seriously." Another said the event "was a real motivation booster."
**Risks**
=========
If one attempts to organize a public debate and errors are made, the most likely thing to happen is that the impact will simply be smaller. Journalists will not report on it, speakers will not attend, the organizer will not be able to get a leading venue, the audience will be small, policy will not get passed, etc. This is bad, but not negative relative to the counterfactual of not having a debate at all. We therefore recommend to err on the side of taking too much action, rather than too little (as long as you cannot spend your effort elsewhere more effectively). General professionalism is however important when organizing a public event.
Multiple attendees gave us the feedback that they would have preferred more of a debate, while in our case most panelists agreed. While it is easy to facilitate such a debate by inviting an AI xrisk skeptic, we chose not to do that, since it might reduce the chance to achieve our goals and the opposing position can already be heard in many other debates, articles, etc. We think this was mostly the right decision in our case. A debate less focused on influencing policy might get more traction by also inviting prominent skeptics. This might however also increase risk.
**Conclusion**
==============
We think that organizing an AI existential risk debate ([recording](https://www.youtube.com/watch?v=vGQDctxwy2E)) is a promising intervention to raise AI existential risk awareness among leaders of the societal debate. We also think this intervention is a good step towards getting existential risk-reducing policy passed. We are open to providing advice for others who want to work on this intervention.
|
f7887b69-18e0-467f-a3cb-fb5e2587b1b6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
It's okay to be (at least a little) irrational
Caused by: Purchase Fuzzies and Utilons Separately
As most readers will know by now, if you're donating to a charity, it doesn't make sense to spread your donations across several charities (assuming you're primarily trying to maximize the amount of good done). You'll want to pick the charity where your money does the most good, and then donate as much as possible to that one. Most readers will also be aware that this isn't intuitive to most people - many will instinctively try to spread their money across several different causes.
I'm spending part of my income on charity, too. Admittedly, this isn't much - 30 USD each month - but then neither is my income as a student. Previously I had been spreading that sum to three different charities, each of them getting an equal amount. On at least two different venues, people had (not always knowingly) tried to talk me out of it, and I did feel that their arguments were pretty strong. Still, I didn't change my ways, even though there was mental pressure building up, trying to push me in that direction. There were actually even some other charities I was considering also donating to, even though I knew I probably shouldn't.
Then I read Eliezer's Purchase Fuzzies and Utilons Separately. Here was a post saying, in essence, that it's okay to spend some of your money in what amounted to an irrational way. Yes, go ahead and spread your money, and go ahead and use some of it just to purchase warm fuzzies. You're just human, after all. Just try to make sure you still donate more to a utilon maximizer than to purchasing the fuzzies.
Here I was, with a post that allowed me to stop rationalizing reasons for why spreading money was good, and instead spread them because I was honestly selfish and just buying a good feeling. Now, I didn't need to worry about being irrational in having diversified donations. So since it was okay, I logged in to PayPal, cancelled the two monthly donations I had going to the other organizations, and tri
|
f686cc04-7f53-4a1a-8f2a-df0dab9aa8e8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Can each event in the world be attributed conditions under which it occurs?
Can each event in the world be
* attributed conditions under which it occurs (actually under coditions there's possibility of occuring)
* and
* attributed this possibility/probability that this event will happen under those conditions?
When event is caused by another it's for sure ok. I think also that no matter how random and "strange" (e.g. quantum physics) the event looked like we'd still be able to assign this two things. For example even though there were completely random and "strange" events like some truly self-causing (just) flashes in part of galaxy it would still be possible to determine conditions (e.g. location range, temperature range, maybe frequency) under which these occur and attribute probability that this event will happen under those conditions (e.g. 85%).
Can't image any event that could not be put into those criteria since the borderline case would be 0 conditions and complete unpredictability (but above 0%).
What do you think?
(For me it sounds obvious and have tried to refute it and had read about it but I have still felt I needed some sort of social proof from LW community. Strange feeling.)
|
21d064f5-46d2-4f74-bbfc-01dee1eb288a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
NIST AI Risk Management Framework request for information (RFI)
(Cross-post from the EA Forum)
The US National Institute of Standards and Technology (NIST) previously put out a request for information (RFI) to help shape the AI Risk Management Framework that it is working on. NIST is an important organization in establishing standards for government and industry, so contributions here could potentially be high-impact.
The deadline for feedback has recently been extended to September 15, so if you want to contribute then now is the time to do so.
In particular, they are looking for information on:
> 1. The greatest challenges in improving how AI actors manage AI-related risks—where “manage” means identify, assess, prioritize, respond to, or communicate those risks;
>
> 2. How organizations currently define and manage characteristics of AI trustworthiness and whether there are important characteristics which should be considered in the Framework besides: Accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security (resilience), and mitigation of harmful bias, or harmful outcomes from misuse of the AI;
>
> 3. How organizations currently define and manage principles of AI trustworthiness and whether there are important principles which should be considered in the Framework besides: Transparency, fairness, and accountability;
>
> 4. The extent to which AI risks are incorporated into different organizations' overarching enterprise risk management—including, but not limited to, the management of risks related to cybersecurity, privacy, and safety;
>
> 5. Standards, frameworks, models, methodologies, tools, guidelines and best practices, and principles to identify, assess, prioritize, mitigate, or communicate AI risk and whether any currently meet the minimum attributes described above;
>
> 6. How current regulatory or regulatory reporting requirements (e.g., local, state, national, international) relate to the use of AI standards, frameworks, models, methodologies, tools, guidelines and bes
|
fd1152e8-3a5e-4719-8aab-426b2318fc7e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Artificial General Horsiness
A lot of attention has been paid to the question of Artificial General Intelligence (AGI). Will artificial intelligence ever reach the level at which it can accomplish every intellectual task a human can do, and if so, how soon will that happen?
I think that this is the wrong question.
Cars are a long way from achieving Artificial General Horsiness. Even today, 100 years after the invention of the car, there are many things a horse can do vastly better than a car, such as jumping over fences, navigating by itself, swimming through water, dealing with mud, navigating through tight passageways, fueling themselves from vegetation, and providing social companionship.
And yet cars have still managed to replace horses for pretty much all tasks that people previously used horses for. This is because it turned out that, while cars can’t do everything horses can do, their speed and cost advantages were big enough that it was worth adapting the rest of the world (e.g. building roads) to compensate for their limitations.
There is a long history of people claiming that a task is a true mark of intelligence right until a computer becomes good at it, at which people people decide that that task doesn’t require true intelligence. So far that list includes logical reasoning, playing chess, translating between languages, driving cars, writing poetry, creating art, playing video games, and passing academic exams.
I expect that people will still be claiming that computers aren’t “really” intelligent long after the human race has been subjugated by its AI overlords and humans no longer have any economic purpose.
----------------------------------------
In 2011 Marc Andreessen said that software was eating the world. What he meant by this was that computer technology had reached a level of maturity where it had the potential to transform pretty much every area of industry. All that needed to happen was for software engineers to do the often-mundane bits of software engineering ne
|
9e72d1b0-33b6-49e4-92dd-1adbabf78b80
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Forum Proposal: Karma Transfers
On LessWrong, we can weak- and strong-upvote posts, which adds to the other user's karma without impacting our own. As we gain karma, we are able to award more karma. I have nearly 10,000 LessWrong Karma, and my strong upvote awards 8 karma at no cost to me.
Many users offer monetary bounties or after-the-fact money awards for posts or comments that they find especially useful. These contests are often fun, motivating, and a costly signal of genuine appreciation. Over $500 in prize money has accumulated for fact-checking Jake Cannell's Brain Efficiency, and I offered a $10 micro-bounty for answers to Vanessa Kosoy's questions about technologies stuck on initial adoption.
I think the benefits of a bounty or award offering extend come from the structure of the reward rather than the fact that it is offered with cash. And so I think we could achieve a similar bounty/award-like result if it were possible for LessWrong users to directly transfer some of their LessWrong karma to other users.
For example, if I liked your response to my question, what if I could transfer 50 karma points from me (-50) to your comment or post (+50)?
Benefits
* It expands the ability to use the "bounty" format, which seems to be generally considered to be a sound incentive structure
* Bounties are fun and seem to be considered generally good for epistemics, are akin to bets
* They allow people with less money to participate in bounty-making
* I personally am uncomfortable with the way monetary bets on beliefs make for "pay to play credibility," and because I don't know how much money any particular user has, I don't know what's a "laughably small" bet to propose and what's a "laughably large" bet. But I know how much karma they have, and I wouldn't feel bad about proposing karma bets to anybody.
* Sometimes, a post gets a very useful comment either from a latecomer or deep in a subthread. These comments will remain low-visibility because so few users will bother to scour the bottom o
|
68a5c6b1-0164-47ef-abcd-34a63d6c26ca
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Remap your caps lock key
When was the last time you (intentionally) used your caps lock key?
No, seriously.
Here is a typical US-layout qwerty (mac) keyboard. Notice:
1. Caps lock is conveniently located only one key away from A, which is where your left pinky should rest on the home row by default.
2. Caps lock is absolutely massive.
3. How far various other keys you might want use often are from the home row.
Remap your caps lock key.
I have mine mapped to escape.
Modifier keys such as control or command are also good options (you could then map control/command to escape).
How do I do this you ask?
* On Mac, system settings > keyboard > keyboard shortcuts > modifier keys.
* On Windows, Microsoft PowerToys' Keyboard Manager is one solution.
* If you use Linux, I trust you can manage on your own.
Thanks to Rudolf for introducing me to this idea.
|
54b1d0ee-9537-4265-a296-978c7044221b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Model-free decisions
Much concern about AI comes down to the scariness of goal-oriented behavior. A common response to such concerns is "why would we give an AI goals anyway?" I think there are good reasons to expect goal-oriented behavior, and I've been on that side of a lot of arguments. But I don't think the issue is settled, and it might be possible to get better outcomes by directly specifying what actions are good. I flesh out one possible alternative here.
(As an experiment I wrote the post on medium, so that it is easier to provide sentence-level feedback, especially feedback on writing or low-level comments. Big-picture discussion should probably stay here.)
|
c6b8196f-8ac8-4795-b7a6-7984aefc3976
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Which LessWrong content would you like recorded into audio/podcast form?
For the last few months, LessWrong has been working with Solenoid_Entity to record each curated post into audio form. Recordings to date can be found here and on all major Podcast apps when you search "LessWrong".
More recently, we added some UI so it's clear when audio is available on a post.
click the green speaker icon to display audio UI when available
We are now scaling up the amount of audio recorded, and the question is, what should we prioritize?
We can records posts, Sequences, wiki-tags, etc...
|
188185d1-1173-4902-a839-4bbbde497784
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Mini PAPR Review
A standard respirator, like a disposable N95 or an elastomeric P100, uses the power of the wearer's lungs to move air through a filter. An unpowered device has a lot of advantages in simplicity and reliability, but it can also be tiring, limits filtering capacity to what the wearer can tolerate, and requires a good seal on the face to prevent air intrusion during inhalation. An alternative approach is a Powered Air-Purifying Respirator (PAPR), which typically uses a battery-powered fan to push air through a filter.
A high-quality PAPR like the ones in the 3M Versaflo series will cost you >$1k, though you can get dubious cheap ones for ~$160.
But what if we want to go even farther in the "dubious" direction? Could we us a tiny fan to pull air through a filter and pipe it through hole in a KN95? I guess!
A bunch of companies make these for about $40. This one is a Bwinmak Resnr R18, but you can also buy them from other companies you haven't heard of, like OFG or 4WDKING.
I got one, and tried it for a bit. It's documented as producing 2-4 m³/h depending on speed, and with it running on high I had trouble breathing in faster than it could fill the mask with air. This wouldn't be suitable for an environment where a PAPR is a requirement, but it's probably better than a disposable mask, especially if you have facial hair or otherwise aren't getting a good seal.
The biggest issue I had was with the hose. I wore it on the provided neck strap and if I got it adjusted correctly and stood still it worked fine. As I moved my head relative to my body, however, primarily when looking up or down, it worked less well. When I looked up the hose would go taut and I'd be supporting the weight of the filter with my mask. The filter is pretty light, but this would still tend to pull my mask down and off my face a bit. Looking down was worse: the hose would kink, and I'd get much less air. You could imagine using a longer and more robust hose, but since the hose is held up by y
|
6b750d34-9d39-4c96-bb82-8e15555595cd
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Truth Tables
Summary: There's a short program that can run all possible programs, OMG
Alternative Summary: Just about any universe that can exist must contain ours, OMG.
I was thinking about diagonalization arguments. Does this make any sense? Can anyone debug it for me?
Self Evident Truths
The universe is computable.
All computations can be performed by Turing Machines.
The mind is made out of atoms.
The name of the empty string is epsilon.
Truths
Binary Strings are enumerable : epsilon, 0, 1, 00, 01, 10, 11, 000, ....
There's a way of converting binary strings to Turing machines and back again. It gets all Turing machines.
Consequence
Consider tables TT(n), which are numbered by binary strings along the top and side, representing turing machines and inputs respectively.
Construct a series of finite triangular tables which are defined iteratively as follows:
TT(1)
To construct the first table, we need one element, the top left, which corresponds to the turing machine epsilon working on the input string epsilon.
Convert the string epsilon to its corresponding turing machine, which has no states and no transition rules, and which therefore halts immediately without accepting.
The value of TT(1)(epsilon,epsilon) is therefore F0 (Fail after no steps).
eps
eps F0
That's it for TT(1).
TT(2)
TT(2) will have three cells, the topleftmost three
To construct the second table, consider again the element at the top left. Since it represents a halted state, copy it verbatim from the last table.
Then consider the element corresponding to (epsilon,0) , or row epsilon, column 0, or (TM(epsilon), "0")
Again TM(epsilon) is either a machine with no states, or an invalid specification, and so it halts immediately on the input 0.
For the third step in constructing the second triangle, consider (TM("0"), epsilon).
TM("0") is another dud. It halts immediately without accepting.
So TT(2) is the table
eps 0
eps F0 F0
0 F0
Towards Infinity.
|
3c193eed-834d-4133-945a-e6bc779ddbbd
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Durham/RTLW HPMoR discussion, ch. 56-60
Discussion article for the meetup : Durham/RTLW HPMoR discussion, ch. 56-60
WHEN: 20 April 2013 12:30:30PM (-0400)
WHERE: 726 Rigsbee Avenue, Durham NC 27701
Meet at Fullsteam about 12:30 for discussion of HPMoR chapters 56-60!
Feel free to bring food and or coffee from surrounding establishments.
Probably sometime midafternoon we'll adjourn from Fullsteam for a hike along the Eno. Join the RTLW Google group (http://groups.google.com/group/rtlw) for contact info/details/etc.
Discussion article for the meetup : Durham/RTLW HPMoR discussion, ch. 56-60
|
e5c88f3b-05ac-4002-b2f2-18035124c197
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Why Not Subagents?
Alternative title for economists: Complete Markets Have Complete Preferences
> The justification for modeling real-world systems as “agents” - i.e. choosing actions to maximize some utility function - usually rests on various coherence theorems. They say things like “either the system’s behavior maximizes some utility function, or it is throwing away resources” or “either the system’s behavior maximizes some utility function, or it can be exploited” or things like that. [...]
>
> Now imagine an agent which prefers anchovy over mushroom pizza when it has anchovy, but mushroom over anchovy when it has mushroom; it’s simply never willing to trade in either direction. There’s nothing inherently “wrong” with this; the agent is not necessarily executing a dominated strategy, cannot necessarily be exploited, or any of the other bad things we associate with inconsistent preferences. But the preferences can’t be described by a utility function over pizza toppings.
… that’s what I (John) wrote four years ago, as the opening of Why Subagents?. Nate convinced me that it’s wrong: incomplete preferences do imply a dominated strategy. A system with incomplete preferences, which can’t be described by a utility function, can contract/precommit/self-modify to a more-complete set of preferences which perform strictly better even according to the original preferences.
This post will document that argument.
Epistemic Status: This post is intended to present the core idea in an intuitive and simple way. It is not intended to present a fully rigorous or maximally general argument; our hope is that someone else will come along and more properly rigorize and generalize things. In particular, we’re unsure of the best setting to use for the problem setup; we’ll emphasize some degrees of freedom in the High-Level Potential Problems (And Potential Solutions) section.
Context
The question which originally motivated my interest was: what’s the utility function of the world’s financial mark
|
8f423a8f-5b8f-4028-a14c-71d1815eab29
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Corrigibility could make things worse
Summary: A Corrigibility method that works for a Pivotal Act AI (PAAI) but fails for a CEV style AI could make things worse. Any implemented Corrigibility method will necessarily be built on top of a set of unexamined implicit assumptions. One of those assumptions could be true for a PAAI, but false for a CEV style AI. The present post outlines one specific scenario where this happens. This scenario involves a Corrigibility method that only works for an AI design, if that design does not imply an identifiable outcome. The method fails when it is applied to an AI design, that does imply an identifiable outcome. When such an outcome does exist, the ``corrigible'' AI will ``explain'' this implied outcome, in a way that makes the designers want to implement that outcome.
The example scenario:
Consider a scenario where a design team has access to a Corrigibility method that works for a PAAI design. A PAAI can have a large impact on the world. For example by helping a design team prevent other AI projects. But there exists no specific outcome, that is implied by a PAAI design. Since there exists no implied outcome for a PAAI to ``explain'' to the designers, this Corrigibility method actually renders a PAAI genuinely corrigible. For some AI designs, the set of assumptions that the design is built on top of, does however imply a specific outcome. Let's refer to this as the Implied Outcome (IO). This IO can alternatively be viewed as: ``the outcome that a Last Judge would either approve of, or reject''. In other words: consider the Last Judge proposal from the CEV arbital page. If it would make sense to add a Last Judge of this type, to a given AI design, then that AI design has an IO. The IO is the outcome that a Last Judge would either approve of, or reject (for example a successor AI that will either get a thumbs up or a thumbs down). In yet other words: the purpose of adding a Last Judge to an AI design, is to allow someone to render a binary judgment on some outcome.
|
b408157b-1aea-454e-83df-e7ade430fbe1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AMC's animated series "Pantheon" is relevant to our interests
Sorry for the vague title. I'm avoiding spoilers for those who care a lot about that sort of thing.
SPOILERS FOLLOW
----------------------------------------
The series illustrates a fast take-off scenario. It tells the story of a fast-take-off scenario of "uploaded intelligence". Of course the usual fast take-off scenario this community is concerned about is artificial intelligence, not uploaded intelligence (i.e., digitized human intelligence as in Robin Hanson's Age of Em). But the series portrays the dangers of uploaded intelligence in a remarkably similar way to the concerns many people have about a fast AI take-off.
It feels very embedded in the state of the current world in 2022 in many ways; referencing the pandemic in ways most shows don't dare to; set in perhaps an alternative history where we weren't on the cusp of AGI, but where a Steve Jobs-like character had almost, but not quite, cracked the brain-uploading problem 20 years ago.
There are plenty of plot devices that don't feel realistic, but the interaction between technology and geopolitics, the exponential growth when AI take-off happens, the pivotal nature of very specific events guiding the direction of the take-off at the time it occurs--all that tracks, for me.
Worth watching as entertainment for yourself, and worth popularizing as a way to prime people out there on the dangers of a fast AI take-off in the near future.
That is all I have to say! Just putting the alert out there.
|
f0c2fccb-3058-45d9-93cc-0d171d23b26d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
World, mind, and learnability: A note on the metaphysical structure of the cosmos [& LLMs]
Cross-posted from New Savanna.
There is no a priori reason to believe that world has to be learnable. But if it were not, then we wouldn’t exist, nor would (most?) animals. The existing world, thus, is learnable. The human sensorium and motor system are necessarily adapted to that learnable structure, whatever it is.
I am, at least provisionally, calling that learnable structure the metaphysical structure of the world. Moreover, since humans did not arise de novo that metaphysical structure must necessarily extend through the animal kingdom and, who knows, plants as well.
“How”, you might ask, “does this metaphysical structure of the world differ from the world’s physical structure?” I will say, again provisionally, for I am just now making this up, that it is a matter of intension rather than extension. Extensionally the physical and the metaphysical are one and the same. But intensionally, they are different. We think about them in different terms. We ask different things of them. They have different conceptual affordances. The physical world is meaningless; it is simply there. It is in the metaphysical world that we seek meaning. [See my post, There is a fold in the fabric of reality. (Traditional) literary criticism is written on one side of it. I went around the bend years ago.]
A little dialog
Does this make sense, philosophically? How would I know?
I get it, you’re just making this up.
Right.
Hmmmm… How does this relate to that object-oriented ontology stuff you were so interested in a couple of years ago?
Interesting question. Why don’t you think about it and get back to me.
I mean, that metaphysical structure you’re talking about, it seems almost like a complex multidimensional tissue binding the world together. It has a whiff of a Latourian actor-network about it.
Hmmm… Set that aside for awhile. I want to go somewhere else.
Still on GPT-3, eh?
You got it.[1]
A little diagram: World, Text, and Mind
Text reflects this learnable, this metaphy
|
00833a19-d10e-42f9-b262-c9fc4f94b893
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What is the VIX?
Original Post Here
The VIX is a unit of information within the financial market system that investment professionals use to make investment decisions. There are derivatives (options and futures) of that unit of information that investment professionals (and regular people) can use as tools to trade and/or invest within the financial market ecosystem.
What unit of information does the VIX track? the volatility1 of the S&P 500 index over the next 30 days, annualized. What does this mean?
Volatility is a measure of the severity of the return of an asset, regardless of whether the return is positive of negative. Lets say that on Monday Apple stock goes down 6%, Tuesday up 5%, Wednesday down 7%, Thursday up 8%, and Friday down 10%. Since Apple stock typically moves under 2% per day, we can infer that this week Apple stock was volatile. The higher the volatility of an asset, the riskier it is perceived.
The S&P 500 index is a stock market index composed of the 500 largest American companies that trade in public exchanges. It is probably the most common financial market index today.
So what the VIX tracks is what market participants are implying the S&P 500 Index volatility will be over the next 30 days. The VIX presents that number in percentage terms, which is then annualized. The higher the VIX percentage, the higher it is implied that the risk of the S&P 500 index is over the next 30 days. This is why the VIX is considered the fear gauge. It informs market participants how risky it is to invest in the S&P 500 index in the near term.
How market participants derive future implied volatility of the index and how the Chicago Board Options Exchange (The creators of the VIX) calculate the VIX is somewhat complex and beyond the scope of this post. You can read the VIX whitepaper here. I would suggest also looking at The Black-Scholes Model formula for pricing options before looking at the whitepaper. I am happy to discuss in more detail the complexity of the VIX and opt
|
923a8aec-ebbb-44fd-8040-86168e72d9de
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Is there an academic consensus around Rent Control?
Rent control is a type of policy where a maximum cap is put on what a landlord may charge tenants.
I've seen two sources that suggest that there is an (edit: academic economists) consensus against rent control:
1. A 2009 review.
2. 81% of economists agree against rent control.
I'm not sure how much faith to put in these, and how non-controversial this topic is in practice (perhaps there are important subcases where it is a good policy).
Are there strong claims for rent control policies in relevant cases that are supported by a non-trivial amount of economists?
(Yonatan Cale thinks that there is a consensus against rent control. Help me prove him wrong and give him Bayes points!)
|
85da7109-82fd-4701-a7aa-f5ccbc005854
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Is Text Watermarking a lost cause?
Summary:
A text watermark is some edit to a piece of text that makes its creator easy to identify. The main use for this would be to watermark Large Language Model outputs. This post explores how effective any text watermark could be in general. It does this by analyzing the smallest amount of text that a watermark could fit inside of. It is addressed to anyone working on watermarks, or interested in how feasible they could be. It provides a formula for testing watermark effectiveness and concludes that certain qualities that we would want from a watermark, especially robustness, significantly and negatively impact a watermark’s efficiency.
Introduction:
Ideally we want a watermark to meet three criteria
1) The watermark should be decisive: It should be almost impossible for normal text to appear marked by accident.
2) The watermark should be invisible, or at least not intrusive to someone just reading the text.
3) The watermark should be robust and able to withstand alterations to the text. Perhaps even from a malicious actor trying to destroy the watermark.
Suppose for a moment that watermarking text in this way was impossible. Perhaps it is always unreliable, or always circumventable. How would we know that this is the case? This post tries to examine a hypothetical text watermark by assuming the watermark consists of distinct changes peppered throughout the text called “markings.” It is assumed these “markings” can occur naturally in normal text, but occur more frequently in marked text. By simply counting the number of locations in the text that have been marked or could have been marked, we can determine the total number of markings that need to be embedded in the text in order for the text to be decisively marked. This is a proxy for how long the text needs to be, for any given watermark (after all, a better watermark is one that can be hidden in a smaller piece of text). In many cases, this required number of markings is reasonable, however it c
|
2ef66efc-80b2-4356-9cd9-c6c2d2340115
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Frames in context
In my previous post, I introduced meta-rationality and frames, and described some examples of frames and some of their properties. In this post I’ll outline some of the limitations of existing ways of thinking about cognition, and some of the dynamics that they can’t describe which meta-rationality can. This post (especially the second half) can be seen as a summary of the key ideas from the rest of the sequence; if you find it too dense, feel free to skip it and come back after reading the next five posts. To quickly list my main claims:
1. Unlike logical propositions, frames can’t be evaluated as discretely true or false.
2. Unlike Bayesian hypotheses, frames aren’t mutually exclusive, and can overlap with each other. This (along with point #1) means that we can’t define probability distributions of credences over frames.
3. Unlike in critical rationalism, we evaluate frames (partly) in terms of how true they are (based on their predictions) rather than just whether they’ve been falsified or not.
4. Unlike Garrabrant traders and Rational Inductive Agents, frames can output any combination of empirical content (e.g. predictions about the world) and normative content (e.g. evaluations of outcomes, or recommendations for how to act).
5. Unlike model-based policies, policies composed of frames can’t be decomposed into modules with distinct functions, because each frame plays multiple roles.
6. Unlike in multi-agent RL, frames don’t interact independently with their environment, but instead contribute towards choosing the actions of a single agent.
I’ll now explain these points in more detail. Epistemology typically focuses on propositions which can (at least in principle) be judged true or false. Traditionally, truth and knowledge are both taken as binary criteria: each proposition is either true or false, and we either know which it is or we don’t. Intuitively speaking, though, this doesn’t match very well with our everyday experience. There are many proposit
|
5d2611ad-ee34-463f-b8f1-04d3edef8683
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Camaraderie at scale: in search of shared identity
Originally published on substack: https://eliqian.substack.com/camaraderie
This tweet is an exaggeration, but only barely:
https://twitter.com/DrifterShoots/status/1515095665831026689?s=20
You yourself probably remember waking up during that time and just feeling different. I'd like to propose a new term—camaraderie at scale—to describe these moments.
This got me thinking—when has camaraderie occurred at scale before, and what made those moments so special? How can we engineer more of them into our lives?
Some notable ones in recent memory: Linsanity, 2008 Beijing Olympics.
Some notable ones (or so I'm told) that I'm too young to remember: 9/11, Sosa vs. McGwire, the Moon landing.
Camaraderie at scale is extremely powerful. Few things in the world can mobilize and unite entire nations, or in some instances, the whole planet. I spent some time researching and reliving past moments in an effort to document the factors that made them so special.
Side quest: origins of camaraderie
It's worth exploring the word camaraderie and its closely related comrade. Comrade has origins in French, Italian, and Spanish words that all mean something along the lines of “one who shares the same room.”
I really like this definition because it conveys a sense of being in something together. A comrade is someone sympathetic to your cause—bound by an ideology, but not necessarily similar or connected in any other way.
Camaraderie is about some connection that brings together otherwise disparate people. I think the key concept behind this is identity, and how moments can change who we identify as.
Micro-identities & micro-communities
Everyone has an identity—a label they give themselves that connects them to a concept, group, or ideology. Most people have multiple identities. You might identify as a student, but also as a writer, a person who loves to cook, and a Taylor Swift fan. Most people also have layers of identity. I identify as Chinese, but also Asian American. I love bas
|
6cb9d421-eff9-4b8e-93a5-35e3eef248eb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
On Ilya Sutskever's "A Theory of Unsupervised Learning"
I highly recommend watching Sutskever's entire lecture instead of reading the AI-generated transcript[1], as he presents his ideas exceptionally well. I believe Sutskever's effort to formalize or demistify unsupervised learning is a solution to the control problem. My comments are in the footnotes of the bolded concepts.
A theory of unsupervised learning by Ilya Sutskever
Intro [Host and Ilya, 0:05]
thirty nine thousand um and then many more so super excited to hear what Ilya has to say about llms in the future take it away okay hi hi everyone thank you for the introduction you know when uh when Umesh invited me to to attend this event I got really excited I saw the holy speak released and I thought great I'll I'll go and I'll talk about something and then I ran into a problem which is that a lot of the technical workings we're doing at open AI actually can talk about so I was really racking my head what could I talk about right now so as of not too long ago I switched all my focus to working on AI alignment all my research Focus and we'll have very cool results to show there but not just yet so I'm like okay so this would have to be for the next stop but what would be good for this stock and I came up with something I'll tell you about some very old results that we had it open AI many years ago back in 2016 even which really affected the way I think about unsupervised learning and I thought I'd share them with you it is possible that at this point you'll find them obvious but maybe not all of them so there is at least a small chance that you'll find it interesting so I want to set the expectations modestly and hopefully exceed them
Supervised Learning [1:47]
so a theory of unsupervised learning how can such a thing exist so okay before we talk about unsupervised learning we want to talk about learning in general what is learning and why does learning work at why should learning work at all and why should computers be able to learn and now we are just u
|
026e90c4-8bc3-495f-af50-c78ba1223f76
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Mistakes I’ve made, part 2: Father Christmas
I believed in Father Christmas for longer than I believed in God. This is mostly surprising because it is unusual. There was both more evidence for Father Christmas, and more motivation to propel cognition.
I stopped ‘believing’ in Father Christmas in grade seven, when my drama teacher cavalierly mentioned his unreality. It wasn’t shocking—more a feeling of something I knew somewhere in my heart being made official. It was a bit embarrassing.
How could I neglect to explicitly infer the lack of a Father Christmas until then?
I don’t think my epistemic failures here were embarrassing for a human, given the proliferation of supernatural objects of belief, and the fact that Father Christmas alone among them actually leaves concrete stuff that you wish for.
My guess is that most people do better, not by noticing their confusion, but by interacting more with other children. Noticing confusion would also alert people to the implausibility of other supernatural things (unless perhaps it is the very evidence for Father Christmas which makes him confusing). And contact with subject matter experts is known to be a good prophylactic against being wrong.
But what if I wanted to instruct my past self on how to independently notice that Father Christmas was implausible? What should have been clues, to someone fairly ignorant about the world?
Father Christmas purportedly travels very fast, and delivers more presents than might fit in a sleigh. Also his sleigh can fly. From a child’s perspective though, many things travel very fast. Especially ones that fly. Not usually ones that are propelled by reindeer, but it wouldn’t be shocking if the reindeer were not the main engine, or if the illustrations portrayed the vehicle somewhat whimsically. Having a lot of presents could also be explained by being able to move things fast. Father Christmas himself would also have to move very fast, compared to a human. Though it also shouldn’t be surprising if much of his effort were outsourc
|
0177a1f9-9996-4988-b15b-76c50393fe6d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Perceptron Controversy
Connectionism died in the 60s from technical limits to scaling, then resurrected in the 80s after backprop allowed scaling. The Minsky–Papert anti-scaling hypothesis explained, psychoanalyzed, and buried.
I wrote it as if it's a companion post to Gwern's The Scaling Hypothesis.
|
92690b1e-92f2-424b-8735-bd8844109cd6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What's up with AI's vision
This week I've read 2 pieces of interesting information:
* Apparently current models (in particular o3) are really good at GeoGuessr (Testing AI's GeoGuessr Genius),
* Apparently they have very had time understanding what they see on screenshots from Pokemon Red, even when stuff is clearly marked (Research Notes: Running Claude 3.7, Gemini 2.5 Pro, and o3 on Pokémon Red).
This is a puzzling to me. I find it much easier to find staircase in Pokemon Red than to figure out where s photo as taken.
Reading about the first example from Scotts post. I guess there are not that many featureless plains in the world. But can it really tell types of grass growing there? But have issues identifying doors / staircases in Pokemon Red? Seems weird.
Do we know what's up with all that?
|
e7baf5cb-8266-4c41-9f6e-d4b70ce53403
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Forecasting AI Progress: A Research Agenda
AI forecasting is an important research area but lacking in a general direction. To address this issue we present a research agenda for AI forecasting that has been generated using the Delphi technique to elicit opinion from 15 leading researchers on the topic (the majority of whom are members of this community). The research agenda can be found on arXiv through this link:
Forecasting AI Progress: A Research Agenda (link to arXiv)
The agenda was framed so that it can be useful to both members of this community as well as the technological forecasting community more broadly. To these ends we plan to submit the arXiv manuscript to Technological Forecasting and Social Change, however, we will wait for roughly a month to receive comments. Please feel free to give us your thoughts here.
|
9b07e879-ee4a-43a6-8d03-e7847c544172
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Externally Oriented, Love, Sensational. Which Rationality will LW be about?
Short Version: Less Wrong has been trying to address rationality in all domains. This may prove too wide a scope. As it has been suggested, effectiveness changes should come soon. Those who just read the sequences need more a focused walkthrough. A proposed path is tightening the scope within rationality to Externally Oriented, Tech-friendly, H+(Transhuman) posts. Once the sanity waterline within is raised only the highest peaks remain above water, thus what remains must be more focused. How to do this is discussed.
I'm testing this quote from Less Wrong's Best Of to introduce my post:
> A facility for quotation covers the absence of original thought. -- Dorothy L. Sayers
Garrett Lisi has advanced, in the last minutes of this video, the suggestion that life should equally divided between Physics, Love, and Surfing.
Eliezer Yudkowsky, here, advanced that we buy fuzzies and utilons separately.
> Patrissimo, in a very controversial post said that: As Merlin Mann says: "Joining a Facebook group about creative productivity is like buying a chair about jogging". Well, reading a blog to overcome akrasia IS joining a Facebook group about creative productivity. " [...] " I believe that most people, particularly smart ones, do way too much thinking & talking and way too little action (me included), because that is what's easy for them."[...] "To aid growth at rationality, Less Wrong would have to become a skill practice community, more like martial arts, PUA, and physical fitness, with an explicit focus of helping people grow in their ability to set and achieve goals, combining local chapters with global coordination, infrastructure, and knowledge accumulation. Most discussion should be among people working on a specific skill at a similar level about what is or isn't working for them as they attempt to progress, rather than obscure theories about the inner workings of the human mind."
"The reasonable man adapts himself to the world. Th
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.