| id | source | formatted_source | text |
|---|---|---|---|
49483577-f022-4be8-9278-478155064439
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The True Epistemic Prisoner's Dilemma
I spoke yesterday of the epistemic prisoner's dilemma, and JGWeissman wrote:
> I am having some difficulty imagining that I am 99% sure of something, but I cannot either convince a person to outright agree with me or accept that he is uncertain and therefore should make the choice that would help more if it is right, but I could convince that same person to cooperate in the prisoner's dilemma. However, if I did find myself in that situation, I would cooperate.
To which I said:
> Do you think you could convince a young-earth creationist to cooperate in the prisoner's dilemma?
And lo, JGWeissman saved me a lot of writing when he replied thus:
> Good point. I probably could. I expect that the young-earth creationist has a huge bias that does not have to interfere with reasoning about the prisoner's dilemma.
>
> So, suppose Omega finds a young-earth creationist and an atheist, and plays the following game with them. They will each be taken to a separate room, where the atheist will choose between each of them receiving $10000 if the earth is less than 1 million years old or each receiving $5000 if the earth is more than 1 million years old, and the young earth creationist will have a similar choice with the payoffs reversed. Now, with prisoner's dilemma tied to the young earth creationist's bias, would I, in the role of the atheist still be able to convince him to cooperate? I don't know. I am not sure how much the need to believe that the earth is around 5000 years would interfere with recognizing that it is in his interest to choose the payoff for earth being over a million years old. But still, if he seemed able to accept it, I would cooperate.
I make one small modification. You and your creationist friend are actually not that concerned about money, being distracted by the massive meteor about to strike the earth from an unknown direction. Fortunately, Omega is promising to protect limited portions of the globe, based on your decisions (I think you've all see
|
909ee634-c868-402e-a133-6761765cf16b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Doomsday argument for Anthropic Decision Theory
tl;dr: there is no real Doomsday argument in ADT. Average utilitarians over-discount the future compared with total utilitarians, but ADT can either increase or decrease this effect. The SIA Doomsday argument can also be constructed, but this is simply a consequence of total utilitarian preferences, not of increased probability of doom.
I've been having a lot of trouble formulating a proper version of the doomsday argument for Anthropic Decision Theory (ADT). ADT mimics SIA-like decisions (for total utilitarians, those with a population independent utility function, and certain types of selfish agents), and SSA-like decisions (for average utilitarians, and a different type of selfish agent). So all paradoxes of SIA and SSA should be formulatable in it. And that is indeed the case for the presumptuous philosopher and the Adam and Eve paradox. But I haven't found a good formulation of the Doomsday argument.
And I think I know why now. It's because the Doomsday argument-like effects come from the preferences of those average utilitarian agents. Adding anthropic effects does not make the Doomsday argument stronger! It's a non-anthropic effect of those preferences. ADT may allow certain selfish agents to make acausal contracts that make them behave like average utilitarian agents, but it doesn't add any additional effect.
Doomsday decisions
Since ADT is based on decisions, rather than probabilities, we need to formulate the Doomsday argument in decision form. The most obvious method is a decision that affects the chances of survival of future generations.
But those decisions are dominated by whether the agent desires future generations or not! Future generations of high average happiness are desired, those of lower average happiness are undesirable. This effect dominates the decisions of average utilitarians, making it hard to formulate a decision that addresses 'risk of doom' in isolation. There is one way of doing this, though: looking at how agents discount t
|
a0f2121e-2096-4f8b-b610-b15a31a5e7bd
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Death vs. Suffering: The Endurist-Serenist Divide on Life’s Worst Fate
Author’s Note
Longtime LW lurker, occasional contributor (under other aliases). This post introduces a taxonomy of preferences centered on one question: What do agents treat as the "worst thing"?
Core Framework:
* Endurists: Death as the ultimate evil (life at all costs).
* Serenists: Suffering as the ultimate evil (non-existence over agony).
This framework is written in LW’s analytical style, but I’ll state upfront: I lean Serenist. My goal isn’t neutrality—it’s to see if explicit value categories can sharpen debates about medical ethics, AI alignment, and institutional design.
Cross-posted from Qualia Advocate.
----------------------------------------
Consider the following question: What is the worst thing that could happen to a person?
I would claim there are two main answers: to die, or to exist in a state of extreme suffering—to be in hell.
“Better to suffer than to die.”
— Jean de La Fontaine
Endurists: Those who see death as the worst outcome. They would choose a life of pain over non-existence.
“For it would be better to die once and for all than to suffer pain for all one's life.”
― Aeschylus, Prometheus Bound
Serenists: Those who see suffering as the worst outcome. They believe some lives aren’t worth living, and non-existence can be better than extreme suffering.
This divide goes beyond personal choice. It shapes religions, philosophy, medical ethics, and laws. It affects how societies treat their members, with Endurist values often being forced on those who hold Serenist views. The purpose of this post is to explore these two views.
----------------------------------------
Psychological Profiles
Endurists and Serenists differ fundamentally in how they process and value suffering versus existence.
A Twitter poll by Spencer Greenberg captured this divide by suggesting the following scenario:
When asked about enduring terrible constant pain for 11 months with a 1% survival rate, 40.3% would "fight till the end" (Endurists), while 59.7% w
|
a33d5550-6ef1-4ce3-8225-18662192b7e2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Your favorite pdfs?
I just got an iPad, which means that for the first time in my life it's convenient for me to read long documents from the internet. I've often looked at things and thought, oh, if I ever have some capability to read that, I should. However, I haven't made a list.
So. What documents in pdf form, or long websites, should I read now that I can? Any recommendations?
EDIT: Thanks for the advice. I'm particularly excited to read Godel, Escher, Bach. I've also downloaded the ePub of the Sequences, and I'm going to certainly look at Strategy of Conflict. Thank you very much for the links.
|
1e7df8c0-e85e-4487-bf59-8ba8fb23b944
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Knowing what you want is a prerequisite to getting what you want
Frequently, we decide on a goal, and then we are ineffective in working towards this goal, due to factors wholly within our control. Failure modes include giving up, losing interest, procrastination, akrasia, and failure to evaluate return on time. In all these cases it seems that if our motivation were higher, the problem would not exist. Call the problem of finding the motivation to effectively pursue one's goals, the problem of motivation. This is a common failure of instrumental rationality which has been discussed from numerous different angles on LessWrong.
I wish to introduce another approach to the problem of motivation, which to my knowledge has not yet been discussed on LessWrong. This approach is summarized in the following paragraph:
We do not know what we value. Therefore, we choose goals that are not in harmony with our values. The problem of motivation is often caused by our goals not being in harmony with our values. Therefore, many cases of the problem of motivation can be solved by discovering what you value, and carrying out goals that conform to your values.
I will begin by making clear the distinction between goals and values. A goal is time-bound, contingent, and concrete. A value is permanent, unconditional, and abstract. A few goals are:
* Become a comedian.
* Bring about the Singularity.
* Graduate college.
A few values are:
* Humor
* Transcendence
* Money
For good measure, I'll list a few more values:
* Truth
* Love
* Beauty
* Social justice
* Happiness
* Interpersonal relationships
* Money
* Power
* Spirituality
* Rationality
* Imagination
* Security
* Excitement
* Self-expression
* Life
I argue that goals are always in service of values. If we look at the terminal values in our decision tree, they will be big, mysterious abstractions such as these. Why would I think this? Why not have concrete things as the terminal values of our decision tree?
Because it's simpler. Consider this example. I can have a term
|
b8e07e92-d03b-4a32-b6c8-429958623492
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Less Wrong: Cleveland
Discussion article for the meetup : Less Wrong: Cleveland
WHEN: 09 December 2012 03:00:23PM (-0500)
WHERE: Cleveland
Four of us will be meeting on the 9th. If you're interested and in the area, send an email!
Discussion group: https://groups.google.com/forum/?fromgroups#!forum/less-wrong-cleveland
Location to be decided.
Discussion article for the meetup : Less Wrong: Cleveland
|
9f500bca-f777-4594-b730-06ce13a7e7ba
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Ethical Status of Non-human Animals
There's been some discussion on this site about vegetarianism previously, although less than I expected. It's a complicated topic, so I want to focus on a critical sub-issue: within a consequentialist/utilitarian framework, what should be the status of non-human animals? Do only humans matter? If non-human animals matter only a little, just how much do they matter?
I argue that species-specific weighting factors have no place in our moral calculus. If two minds experience the same sort of stimulus, the species of those minds shouldn't affect how good or bad we believe that to be. I owe the line of argument I'll be sketching to Peter Singer's work. His book Practical Ethics is the best statement of the case that I'm aware of.
Front-loaded definitions and summary:
* Self-aware: A self-aware mind is one that understands that it exists and that it persists through time.
* Sentience: A sentient mind is one that has subjective experiences, such as pleasure and pain. I assume that self-awareness subsumes sentience (i.e. all self-aware minds are also sentient, but not vice versa).
* Person: A self-aware mind.
* A human may be alive but non-sentient, due to injury or birth defects.
* Humans may be sentient but not self-aware, due to injury, birth defect or infancy.
* Non-human persons are possible: hypothetically, aliens and AIs; controversially, non-human great apes.
* Many non-human animals are sentient, many are not.
* Utilitarian ethics involve moral calculus: summing the impacts of an action (or some proxy for them, such as preferences) on all minds.
* When performing this calculus, do sentient (but non-self aware) minds count at all? If so, do they count as much as persons?
* If they count for zero, there's no ethical problem with secretly torturing puppies, just for fun.
* We're tempted to believe that sentient minds count for something, but less than persons.
* I think this is just a cover for what we're really tempted to believe: humans count for mo
|
96ec9357-d622-40dd-a8e3-ac83bb9f967c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Please share your reading habits/techniques/strategies
I'm looking to build up a “tool-box” of strategies/techniques/habits for reading non-fiction effectively and efficiently. I'm looking for methods to help me retain concepts, locate main ideas, make connections, etc.
If anyone has posted about this topic previously, please link to the post.
Please point to relevant resources that have worked for you; additionally please describe skills/systems that you've developed personally.
An example of a useful comment I got posting in an open thread, from Jayson_Virissimo
“As I read textbooks, I summarize the most important concepts (along with doing the exercises, if there are any) and write them in a notebook and then later (less than a week) enter the notes into Anki as cloze-delete flashcards. I don't have an objective measure of retention, but I believe that it has vastly improved relative to when I would simply read the book.”
Here is an example of an existing resource that I found useful:
http://violentmetaphors.com/2013/08/25/how-to-read-and-understand-a-scientific-paper-2/
Here are some questions/prompts that may spur your thinking:
Describe the setting where you read.
Do you schedule reading time? How?
How do you decide what to read next?
Do you write notes by hand, on a computer?
Do you wear noise-canceling headphones?
Do you skim texts?
Do you reread texts?
How often do you reread “foundational” texts, or texts that shifted your paradigm?
How often do you decide not to finish a book?
I may do a series of posts on this in discussion, and if other users find it interesting/useful I may eventually make it into a post for the main page.
|
f97d733b-2204-4139-8ef1-d90daf314601
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[SEQ RERUN] Worse Than Random
Today's post, Worse Than Random was originally published on 11 November 2008. A summary (taken from the LW wiki):
> If a system does better when randomness is added into its processing, then it must somehow have been performing worse than random. And if you can recognize that this is the case, you ought to be able to generate a non-randomized system.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Lawful Uncertainty, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
|
f0385759-c186-4e31-8a3a-0d25efeeba90
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Durham/RTLW HPMoR discussion, chapters 94-96
Discussion article for the meetup : Durham/RTLW HPMoR discussion, chapters 94-96
WHEN: 02 November 2013 12:00:00PM (-0400)
WHERE: 726 Rigsbee Avenue, Durham, NC 27701
Bring a question, an observation, or any other thing you might like to discuss from chapters 94-96 of HPMoR.
We'll gather coffee around noon and have discussion 12:30-2ish.
Hope to see you there!
Discussion article for the meetup : Durham/RTLW HPMoR discussion, chapters 94-96
|
29514527-1322-48d2-a8b4-3d4b5db820ae
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Thoughts on the REACH Patreon
Note: My views have updated since this post, but I haven't yet written them up.
I moved from NY to the Bay several months ago.
In many senses, the Berkeley community is much bigger than NYC. There's a few hundred members instead of around 30. I had several friends in the Bay before moving here, and have made more since arriving.
But, it didn't actually feel like home to me until a couple weeks ago, at one of the weekly meetups at the new community center.
These are my thoughts about the REACH (Rationality and Effective Altruism Community Hub) space, and what considerations I think are relevant for funding it. It has a Patreon which is currently hitting the "juuust enough money that it might possibly work so long as other things go right" threshold. But it doesn't really have enough funding to reliably break even, let alone thrive.
tldr: I think there are a lot of reasons you might want to help fund REACH, both from an effective altruism perspective, and from a "just buy some nice things for yourself" perspective.
...
Epistemic Status: I'm pretty biased in favor of REACH, and much of my reasoning was at least a bit motivated. I expect to make a lot of use of the infrastructure there, so my thoughts here are a bit self-serving.
I’ve attempted to account for this in my writing of this, and am still fairly confident that there’s something important going on here.
...
Table of Contents
1. Ray’s Opinionated Conception of Meetups
2. Mechanics of Berkeley Meetup Brain Drain
3. Enter REACH
4. Frameworks of Funding
1. Buying Nice Things vs Effective Altruism
2. Nice Things and Homemade Prices
3. Okay, but is this the right nice thing?
4. The Case for Impact
5. Measuring Intangibles
1. Agency Ladder
2. Water Coolers and Campuses
6. Matching Funds
7. In Closing...
Ray’s Opinionated Conception of Meetups
In NYC, meetup means "someone runs an hour long presentation, moderated discussion or workshop." Every week, you can show up to
|
5492159f-4aab-4f92-b089-bbde0653e96a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Gradations of Inner Alignment Obstacles
The existing definitions of deception, inner optimizer, and some other terms tend to strike me as "stronger than necessary" depending on the context. If weaker definitions are similarly problematic, this means we need stronger methods to prevent them! I illustrate this and make some related (probably contentious) claims.
Summary of contentious claims to follow:
1. The most useful definition of "mesa-optimizer" doesn't require them to perform explicit search, contrary to the current standard.
2. Success at aligning narrowly superhuman models might be bad news.
3. Some versions of the lottery ticket hypothesis seem to imply that randomly initialized networks already contain deceptive agents.
It's possible I've shoved too many things into one post. Sorry.
Inner Optimization
The standard definition of "inner optimizer" refers to something which carries out explicit search, in service of some objective. It's not clear to me whether/when we should focus that narrowly. Here are some other definitions of "inner optimizer" which I sometimes think about.
Mesa-Control
I've previously written about the idea of distinguishing mesa-search vs mesa-control:
* Mesa-searchers implement an internal optimization algorithm, such as a planning algorithm, to help them achieve an objective -- this is the definition of "mesa-optimizer"/"inner optimizer" I think of as standard.
* Mesa-controller refers to any effective strategies, including mesa-searchers but also "dumber" strategies which nonetheless effectively steer toward an objective. For example, thermostat-like strategies, or strategies which have simply memorized a number of effective interventions.
* Richard Ngo points out that this definition is rather all-encompassing, since it includes any highly competent policy. Adam Shimi suggests that we think of inner optimizers as goal-directed.
* Considering these comments, I think I want to revise my definition of mesa-controller to include that it is not totally myopi
|
dd5553ea-4c7e-438a-85b7-2993acc9d617
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A Very Simple Case For Giving To Shrimp
Crossposted from my blog.
(I think this is an important article, so I’d really appreciate you sharing and restacking it—especially with people who are very rich).
I’ve elsewhere written more words about why you should give to shrimp welfare than are contained in a typical Andy Masley post (which, by conservative estimates, is at least 10 billion). I’ve talked about what makes pain bad. I’ve done rough Fermi calculations of the amount of agony averted. But here I’m going to try a different tack. I think people probably already care about shrimp enough that they should give to shrimp welfare.
Imagine that you discovered that there were 150 lobsters about to be boiled alive. There’s a machine designed to anesthetize them—so that they don’t have to struggle and writhe in agony for many minutes. Sadly, the machine is broken. To fix it, you would have to spend one single penny.
Question: is that a good use of a penny?
The obvious answer is yes! It’s worth spending a single penny so that 150 lobsters don’t have to experience painful deaths. But lobsters are no more important than shrimp and the Shrimp Welfare Project can anesthetize 150 shrimp before death per penny they receive (by purchasing stunners which stun thousands of animals per dollar raised). So if you think that it’s worth spending a penny to anesthetize 150 lobsters so they don’t have to experience being boiled alive, you should also support funding the Shrimp Welfare Project.
If you had the opportunity to keep doing this over and over again—to have each penny that you give stop 150 lobsters from experiencing the pain of being boiled—that would be even better. So similarly, you should give some money to shrimp welfare! If you give, say, 1,000 dollars, you can prevent ~15 million shrimp from experiencing extreme agony!
Consider another case: suppose that there were 100 shrimp suffocating to death in a bucket. You could put them back into the water, but doing so would take about a minute of your time. A
|
756fbcdd-62fa-46a2-ab4a-4a7754f6e9da
|
trentmkelly/LessWrong-43k
|
LessWrong
|
We’ve stepped over the threshold into the Fourth Arena, but don’t recognize it
First there was the world of inanimate matter, the First Arena. Life arose from that, the Second Arena. Only 100s of thousands of years ago humans evolved from higher primates and the Third Arena, human culture, appeared. We are now on the threshold of the Fourth Arena. How do we characterize it?
Note: These thoughts are off the top of my head [thinking-out-loud]. It was all I could do to get them out. That’s enough for now. Refinement will have to wait.
* * * * *
OpenAI released GPT-3 in the summer of 2020. It was clear to me that, yes, it has that potential. I registered my response, initially in a comment over at Marginal Revolution, and then on New Savanna, First thoughts on the implications of GPT-3 [here be dragons, we're swimming and flying with them]. Having explored ChatGPT for the past two weeks, that potential emerges before me with even greater clarity. Of course, we could blow it, nothing is guaranteed. But still...
One must wonder, dream, and hope.
Foundation Models as digital wilderness
Let us start with these deep learning models trained on large bodies of data. ChatGPT has such a model at its functional core. They have been termed Foundation Models because they “can be adapted to a wide range of downstream tasks.” I have come to think of each such model as a repository of digital wilderness.
What do we do with the wilderness? We explore it, map it, in time settle it and develop it. We cultivate and domesticate it. AI safety researchers call that alignment. The millions of people who have been using ChatGPT are part of that process. We may not think of ourselves in that way, but that, in part, is how OpenAI thinks of us. Even as we pursue our own ends while interacting with ChatGPT, OpenAI is collecting those sessions and will be using them to fine-tune the system, to align it.
Yes, it would be nice to have a system “pre-aligned” before it is released to end users. But I don’t think that’s how things are going to work out. The process by whi
|
3a70284a-b68f-4283-9d8e-062402fa0fe5
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models
1 Introduction
---------------
Image classification is a fundamental problem in computer vision and serves as the foundation of multiple tasks such as object detection, image segmentation, object tracking, action recognition, and autonomous driving. Since the breakthrough achieved by AlexNet [[1](#bib.bib1)] in the ImageNet Challenge (ILSVRC) 2012 [[2](#bib.bib2)], deep neural networks (DNNs) have become the dominant force in this domain. From then on, DNN models with increasing depth and more complex building blocks have been proposed. While these models continue to achieve steadily increasing accuracies, their robustness has not been thoroughly studied, so little is known about whether the high accuracies come at the price of reduced robustness.
A common approach to evaluate the robustness of DNNs is via adversarial attacks [[3](#bib.bib3), [4](#bib.bib4), [5](#bib.bib5), [6](#bib.bib6), [7](#bib.bib7), [8](#bib.bib8), [9](#bib.bib9), [10](#bib.bib10), [11](#bib.bib11)], where imperceptible adversarial examples are crafted to mislead DNNs. Generally speaking, the easier an adversarial example can be generated, the less robust the DNN is. Adversarial examples may lead to significant property damage or loss of life. For example, [[12](#bib.bib12)] has shown that a subtly-modified physical Stop sign can be misidentified by a real-time object recognition system as a Speed Limit sign. In addition to adversarial attacks, neural network robustness can also be estimated in an attack-agnostic manner. For example, [[13](#bib.bib13)] and
[[14](#bib.bib14)] theoretically analyzed the robustness of some simple neural networks by estimating their global and local Lipschitz constants, respectively. [[15](#bib.bib15)] proposes to use extreme value theory to estimate a lower bound of the minimum adversarial distortion, and can be efficiently applied to any neural network classifier. [[16](#bib.bib16)] proposes a robustness lower bound based on linear approximations of ReLU activations. In this work, we evaluate DNN robustness by using specific attacks as well as attack-agnostic approaches. We also note that the adversarial robustness studied in this paper is different from [[17](#bib.bib17)], where “robustness” is studied in the context of label semantics and accuracy.
Since the last ImageNet challenge ended in 2017, we are now at the beginning of the post-ImageNet era. In this work, we revisit 18 DNN models that were submitted to the ImageNet Challenge or achieved state-of-the-art performance. These models have different sizes and classification performance, and belong to multiple architecture families such as AlexNet [[1](#bib.bib1)], VGG Nets [[18](#bib.bib18)], Inception Nets [[19](#bib.bib19)], ResNets [[20](#bib.bib20)], DenseNets [[21](#bib.bib21)], MobileNets [[22](#bib.bib22)], and NASNets [[23](#bib.bib23)]. Therefore, they are well suited for analyzing how different factors influence model robustness. Specifically, we aim to examine
the following questions in this study:
1. *Has robustness been sacrificed for the increased classification performance?*
2. *Which factors influence the robustness of DNNs?*
In the course of evaluation, we have gained a number of insights and we summarize our contributions as follows:
* Tested on a large number of well-trained deep image classifiers, we find that robustness is sacrificed when solely pursuing higher classification performance. Indeed, Figure [2](#S3.F2 "Figure 2 ‣ 3.1 Evaluation of Adversarial Attacks ‣ 3 Experiments ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models")(a) and Figure [2](#S3.F2 "Figure 2 ‣ 3.1 Evaluation of Adversarial Attacks ‣ 3 Experiments ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models")(b) clearly show that the ℓ2 and ℓ∞ adversarial distortions scale almost linearly with the logarithm of model classification errors.
Therefore, the classifiers with very low test errors are highly vulnerable to adversarial attacks. We advocate that ImageNet network designers should evaluate model robustness via our disclosed accuracy-robustness Pareto frontier.
* Networks of the same family, e.g., VGG, Inception Nets, ResNets, and DenseNets, share similar robustness properties. This suggests that network architecture has a larger impact on robustness than model size.
Besides, we also observe that the ℓ∞ robustness slightly improves when ResNets, Inception Nets, and DenseNets become deeper.
* The adversarial examples generated by the VGG family can transfer very well to all the other 17 models, while most adversarial examples of other models can only transfer within the same model family. Interestingly, this finding provides us an opportunity to reverse-engineer the architecture of black-box models.
* We present the first comprehensive study that compares the robustness of 18 popular and state-of-the-art ImageNet models, offering a complete picture of the accuracy v.s. robustness trade-off. In terms of transferability of adversarial examples, we conduct thorough experiments on each pair of the 18 ImageNet networks (306 pairs in total), which is the largest scale to date.
2 Background and Experimental Setup
------------------------------------
In this section, we introduce the background knowledge and how we set up experiments. We study both untargeted attack and targeted attack in this paper. Let x0 denote the original image and x denote the adversarial image of x0. The DNN model F(⋅) outputs a class label (or a probability distribution of class labels) as the prediction. Without loss of generality, we assume that F(x0)=y0, which is the ground truth label of x0, to avoid trivial solution. For untargeted attack, the adversarial image x is crafted in a way that x is close to x0 but F(x)≠y0. For targeted attack, a target class t (t≠y0) is provided and the adversarial image x should satisfy that (i) x is close to x0, and (ii) F(x)=t.
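As a concrete restatement of these success criteria, here is a minimal Python sketch (the helper name and signature are our own, purely for illustration):

```python
def attack_succeeded(pred_label, true_label, target_label=None):
    """Check adversarial success under the definitions above.

    Untargeted attack (target_label is None): succeeds when the model's
    prediction differs from the ground-truth label y0.
    Targeted attack: succeeds only when the prediction equals the chosen
    target class t (with t != y0 assumed).
    """
    if target_label is None:           # untargeted: F(x) != y0
        return pred_label != true_label
    return pred_label == target_label  # targeted: F(x) = t
```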
###
2.1 Deep Neural Network Architectures
In this work, we study the robustness of 18 deep image classification models belonging to 7 architecture families, as summarized below. The basic properties of these models are given in Table [1](#S2.T1 "Table 1 ‣ 2.1 Deep Neural Network Architectures ‣ 2 Background and Experimental Setup ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models").
* AlexNet AlexNet [[1](#bib.bib1)] is one of the pioneering and most well-known deep convolutional neural networks.
Compared to many recent architectures, AlexNet has a relatively simple layout that is composed of 5 convolutional layers followed by two fully connected layers and a softmax output layer.
* VGG Nets The overall architecture of VGG nets [[18](#bib.bib18)] is similar to AlexNet, but they are much deeper with more convolutional layers. Another main difference between VGG nets and AlexNet is that all the convolutional layers of VGG nets use a small (3×3) kernel while the first two layers of AlexNet use 11×11 and 5×5 kernels, respectively.
In our paper, we study VGG networks with 16 and 19 layers, with 138 million and 144 million parameters, respectively.
* Inception Nets
The family of Inception nets utilizes the inception modules [[24](#bib.bib24)] that act as multi-level feature extractors. Specifically, each inception module consists of multiple branches of 1×1, 3×3, and 5×5 filters, whose outputs will stack along the channel dimension and be fed into the next layer in the network. In this paper, we study the performance of all popular networks in this family, including Inception-v1 (GoogLeNet) [[19](#bib.bib19)], Inception-v2 [[25](#bib.bib25)], Inception-v3 [[26](#bib.bib26)], Inception-v4, and Inception-ResNet [[27](#bib.bib27)]. All these models are much deeper than AlexNet/VGG but have significantly fewer parameters.
* ResNets
To solve the vanishing gradient problem for training very deep neural networks, the authors of [[20](#bib.bib20)] propose ResNets, where each layer learns residual functions with reference to the input by adding skip-layer paths, or “identity shortcut connections”. This architecture enables practitioners to train very deep neural networks that outperform shallow models. In our study, we evaluate 3 ResNets with different depths.
* DenseNets To further exploit the “identity shortcut connection” technique from ResNets, [[21](#bib.bib21)] proposes DenseNets, which connect all layers with each other within a dense block. Besides tackling the vanishing gradient problem, the authors also claimed other advantages such as encouraging feature reuse and reducing the number of parameters in the model. We study 3 DenseNets with different depths and widths.
* MobileNets MobileNets [[22](#bib.bib22)] are a family of lightweight and efficient neural networks designed for mobile and embedded systems with restricted computational resources. The core components of MobileNets are depthwise separable filters with factorized convolutions. Separable filters factorize a standard convolution into two parts, a depthwise convolution and a 1×1 pointwise convolution, which reduces computation and model size dramatically. In this study, we include 3 MobileNets with different depths and width multipliers.
* NASNets NASNets [[23](#bib.bib23)] are a family of networks automatically generated by reinforcement learning using a policy gradient algorithm to optimize architectures [[28](#bib.bib28)]. Building blocks of the model are first searched on a smaller dataset and then transferred to a larger dataset.
| Models | Year | # layers | # parameters | Top-1/5 ImageNet accuracies |
| --- | --- | --- | --- | --- |
| AlexNet [[1](#bib.bib1)] | 2012 | 8 | 60 million | 56.9% / 80.1% (<https://github.com/BVLC/caffe/wiki/Models-accuracy-on-ImageNet-2012-val>) |
| VGG 16 [[18](#bib.bib18)] | 2014 | 16 | 138 million | 71.5% / 89.8%[[29](#bib.bib29)] |
| VGG 19 [[18](#bib.bib18)] | 2014 | 19 | 144 million | 71.1% / 89.8%[[29](#bib.bib29)] |
| Inception-v1 [[19](#bib.bib19)] | 2014 | 22 | 6.7 million | 69.8% / 89.6%[[29](#bib.bib29)] |
| Inception-v2 [[25](#bib.bib25)] | 2015 | 48 | 11.3 million | 73.9% / 91.8%[[29](#bib.bib29)] |
| Inception-v3 [[26](#bib.bib26)] | 2015 | 48 | 23.9 million | 78.0% / 93.9%[[29](#bib.bib29)] |
| Inception-v4 [[27](#bib.bib27)] | 2016 | 76 | 42.9 million | 80.2% / 95.2%[[29](#bib.bib29)] |
| Inception-ResNet-v2 [[27](#bib.bib27)] | 2016 | 96 | 56.1 million | 80.4% / 95.3%[[29](#bib.bib29)] |
| ResNet-v2-50 [[30](#bib.bib30)] | 2016 | 50 | 25.7 million | 75.6% / 92.8%[[29](#bib.bib29)] |
| ResNet-v2-101 [[30](#bib.bib30)] | 2016 | 101 | 44.8 million | 77.0% / 93.7%[[29](#bib.bib29)] |
| ResNet-v2-152 [[30](#bib.bib30)] | 2016 | 152 | 60.6 million | 77.8% / 94.1%[[29](#bib.bib29)] |
| DenseNet-121-k32 [[21](#bib.bib21)] | 2017 | 121 | 8.2 million | 74.9% / 92.2% (<https://github.com/pudae/tensorflow-densenet>) |
| DenseNet-169-k32 [[21](#bib.bib21)] | 2017 | 169 | 14.4 million | 76.1% / 93.1% |
| DenseNet-161-k48 [[21](#bib.bib21)] | 2017 | 161 | 29.0 million | 77.6% / 93.8% |
| MobileNet-0.25-128 [[22](#bib.bib22)] | 2017 | 128 | 0.5 million | 41.5% / 66.3%[[29](#bib.bib29)] |
| MobileNet-0.50-160 [[22](#bib.bib22)] | 2017 | 160 | 1.4 million | 59.1% / 81.9%[[29](#bib.bib29)] |
| MobileNet-1.0-224 [[22](#bib.bib22)] | 2017 | 224 | 4.3 million | 70.9% / 89.9% [[29](#bib.bib29)] |
| NASNet [[23](#bib.bib23)] | 2017 | - | 88.9 million | 82.7% / 96.2%[[29](#bib.bib29)] |
Table 1: 18 ImageNet models under robustness examination
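For readers who want to assemble a comparable model zoo, a rough sketch using torchvision's pretrained ImageNet classifiers is given below (our choice of library, not the paper's; the available weights and some families differ, e.g. torchvision ships MobileNet-v2 rather than the v1 variants in Table 1, and NASNet is absent, so accuracies will not match the table exactly):

```python
import torch
from torchvision import models

# Representatives of several architecture families studied in the paper.
# Weight versions are torchvision's, so top-1/top-5 numbers differ slightly
# from the original challenge submissions listed in Table 1.
zoo = {
    "alexnet":      models.alexnet(weights="IMAGENET1K_V1"),
    "vgg16":        models.vgg16(weights="IMAGENET1K_V1"),
    "inception_v3": models.inception_v3(weights="IMAGENET1K_V1"),
    "resnet152":    models.resnet152(weights="IMAGENET1K_V1"),
    "densenet121":  models.densenet121(weights="IMAGENET1K_V1"),
    "mobilenet_v2": models.mobilenet_v2(weights="IMAGENET1K_V1"),
}
for name, net in zoo.items():
    net.eval()  # robustness is evaluated against fixed, trained networks
```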
###
2.2 Robustness Evaluation Approaches
We use both adversarial attacks and attack-agnostic approaches to evaluate network robustness. We first generate adversarial examples of each network using
multiple state-of-the-art attack algorithms, and then analyze the attack success rates and the distortions of adversarial images. In this experiment, we assume to have full access to the targeted DNNs, known as the white-box attack. To further study the transferability of the adversarial images generated by each network, we consider all the 306 network pairs and for each pair, we conduct transfer attack that uses one model’s adversarial examples to attack the other model. Since transfer attack is widely used in the black-box setting [[31](#bib.bib31), [32](#bib.bib32), [33](#bib.bib33), [34](#bib.bib34), [35](#bib.bib35), [36](#bib.bib36)], where an adversary has no access to the explicit knowledge of the target models, this experiment can provide some evidence on networks’ black-box robustness.
Finally, we compute CLEVER [[15](#bib.bib15)] score, a state-of-the-art attack-agnostic network robustness metric, to estimate each network’s intrinsic robustness. Below, we briefly introduce all the evaluation approaches used in our study.
We evaluate the robustness of DNNs using the following adversarial attacks:
* Fast Gradient Sign Method (FGSM) FGSM [[3](#bib.bib3)] is one of the pioneering and most efficient attacking algorithms. It only needs to compute the gradient once to generate an adversarial example x:
$$x \leftarrow \mathrm{clip}\big[x_0 - \epsilon\,\mathrm{sgn}(\nabla J(x_0, t))\big],$$
where sgn(∇J(x0,t)) is the sign of the gradient of the training loss with respect to x0, and clip(x) ensures that x stays within the valid range of pixel values. It is efficient for generating adversarial examples as it is just a one-step attack (a code sketch of this update and its iterative variant is given after this list).
* Iterative FGSM (I-FGSM) Albeit efficient, FGSM suffers from a relatively low attack success rate. To this end, [[37](#bib.bib37)] proposes iterative FGSM to enhance its performance. It applies FGSM multiple times with a finer distortion, and is able to fool the network in more than 99% of cases. When we run I-FGSM for T iterations, we set the per-iteration perturbation to (ϵ/T)·sgn(∇J(x0,t)). I-FGSM can be viewed as a projected gradient descent (PGD) method inside an ℓ∞ ball [[38](#bib.bib38)], and it usually finds adversarial examples with small ℓ∞ distortions.
* C&W attack [[39](#bib.bib39)] formulates the problem of generating adversarial examples x as the following optimization problem
$$\min_{x}\ \lambda f(x,t) + \|x - x_0\|_2^2 \quad \text{s.t.}\ x \in [0,1]^p,$$
where f(x,t) is a loss function to measure the distance between the prediction of x and the target label t. In this work, we choose
$$f(x,t) = \max\Big\{\max_{i \neq t}\big[(\mathrm{Logit}(x))_i - (\mathrm{Logit}(x))_t\big],\ -\kappa\Big\}$$
as it was shown to be effective by [[39](#bib.bib39)]. Logit(x) denotes the vector representation of x at the logit layer, κ is a confidence level and a larger κ generally improves transferability of adversarial examples.
C&W attack is by far one of the strongest attacks that finds adversarial examples with small ℓ2 perturbations. It can achieve almost 100% attack success rate and has bypassed 10 different adversary detection methods [[40](#bib.bib40)].
* EAD-L1 attack EAD-L1 attack [[41](#bib.bib41)] refers to the Elastic-Net Attacks to DNNs, which is a more general formulation than C&W attack. It proposes to use elastic-net regularization, a linear combination of ℓ1 and ℓ2 norms, to penalize large distortion between the original and adversarial examples. Specifically, it learns the adversarial example x via
$$\min_{x}\ \lambda f(x,t) + \|x - x_0\|_2^2 + \beta\|x - x_0\|_1 \quad \text{s.t.}\ x \in [0,1]^p,$$
where f(x,t) is the same as used in the C&W attack. [[41](#bib.bib41), [42](#bib.bib42), [43](#bib.bib43), [44](#bib.bib44)] show that the EAD-L1 attack is highly transferable and can bypass many defenses and analyses.
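A minimal PyTorch sketch of the targeted FGSM and I-FGSM updates and the C&W-style margin loss defined above (the function names, the cross-entropy choice of J, and the ℓ∞ projection details are our own illustrative assumptions, not the paper's code):

```python
import torch

def fgsm_targeted(model, x0, target, eps):
    """One-step targeted FGSM: x <- clip[x0 - eps * sgn(grad_x J(x0, t))]."""
    x0 = x0.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x0), target)
    grad = torch.autograd.grad(loss, x0)[0]
    return (x0 - eps * grad.sign()).clamp(0.0, 1.0).detach()

def ifgsm_targeted(model, x0, target, eps, steps=10):
    """Iterative FGSM with per-iteration step eps/steps, projected back into
    an l_inf ball of radius eps around x0 and into the valid pixel range."""
    x0 = x0.clone().detach()
    x = x0.clone()
    for _ in range(steps):
        x.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x), target)
        grad = torch.autograd.grad(loss, x)[0]
        x = x.detach() - (eps / steps) * grad.sign()
        x = torch.max(torch.min(x, x0 + eps), x0 - eps).clamp(0.0, 1.0)
    return x

def cw_margin_loss(logits, target, kappa=0.0):
    """f(x, t) = max( max_{i != t} Logit(x)_i - Logit(x)_t , -kappa )."""
    target_logit = logits.gather(1, target.unsqueeze(1)).squeeze(1)
    others = logits.clone()
    others.scatter_(1, target.unsqueeze(1), float("-inf"))
    return torch.clamp(others.max(dim=1).values - target_logit, min=-kappa)
```

Since the setting is targeted, J is taken as the loss toward the target class t, which is why both updates descend the gradient (subtract) rather than ascend.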
We also evaluate network robustness using an attack-agnostic approach:
* CLEVER CLEVER [[15](#bib.bib15)] (Cross-Lipschitz Extreme Value for nEtwork Robustness) uses extreme value theory to estimate a lower bound of the minimum adversarial distortion. Given an image x0, CLEVER provides an estimated lower bound on the ℓp norm of the minimum distortion δ required to misclassify the distorted image x0+δ. A higher CLEVER score suggests that the network is likely to be more robust to adversarial examples. CLEVER is attack-agnostic and reflects the intrinsic robustness of a network, rather than the robustness under a certain attack.
###
2.3 Dataset
In this work, we use ImageNet [[45](#bib.bib45)] as the benchmark dataset, for the following reasons: (i) the ImageNet dataset can take full advantage of the studied DNN models, since all of them were designed for ImageNet challenges; (ii) compared to the widely-used small-scale datasets such as MNIST, CIFAR-10 [[46](#bib.bib46)], and GTSRB [[47](#bib.bib47)], ImageNet has significantly more images and classes and is more challenging; and (iii) it has been shown by [[39](#bib.bib39), [48](#bib.bib48)] that ImageNet images are easier to attack but harder to defend than the images from
MNIST and CIFAR datasets. Given all these observations, ImageNet is an ideal candidate to study the robustness of state-of-the-art deep image classification models.
A set of 1,000 randomly selected images from the ImageNet validation set is used to generate adversarial examples from each model. For each image, we conduct targeted attacks with a random target and a least likely target, as well as an untargeted attack. Misclassified images are excluded. We follow the setting in [[15](#bib.bib15)] to compute CLEVER scores for 100 out of the 1,000 images, as CLEVER is relatively more computationally expensive. Additionally, we conducted another experiment by taking the subset of images (327 images in total) that are correctly classified by all 18 examined ImageNet models. The results are consistent with our main results and are given in the supplementary material.
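A small sketch of how the random-target and least-likely-target labels described above could be drawn for each image (our own illustration; the paper does not publish its sampling code):

```python
import torch

def pick_targets(logits, true_label, num_classes=1000):
    """Return (random_target, least_likely_target) for one image.

    The random target is drawn uniformly from all classes except the ground
    truth; the least-likely target is the class with the smallest logit.
    """
    candidates = [c for c in range(num_classes) if c != true_label]
    random_target = candidates[torch.randint(len(candidates), (1,)).item()]
    least_likely_target = int(logits.argmin())
    return random_target, least_likely_target
```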
###
2.4 Evaluation Metrics
In our study, the robustness of the DNN models is evaluated using the following four metrics:
* Attack success rate For non-targeted attack, success rate indicates the percentage of the adversarial examples whose predicted labels are different from their ground truth labels. For targeted attack, success rate indicates the percentage of the adversarial examples that are classified as the target class. For both attacks, a higher success rate suggests that the model is easier to attack and hence less robust.
When generating adversarial examples, we only consider original images that are correctly classified, to avoid trivial attacks.
* Distortion We measure the distortion between adversarial images and the original ones using ℓ2 and ℓ∞ norms. ℓ2 norm measures the Euclidean distance between two images, and ℓ∞ norm is a measure of the maximum absolute change to any pixel (worst case). Both of them are widely used to measure adversarial perturbations [[40](#bib.bib40), [39](#bib.bib39), [41](#bib.bib41)]. A higher distortion usually suggests a more robust model.
To find adversarial examples with minimum distortion for each model, we use a binary search strategy to select the optimal attack parameters ϵ in I-FGSM and λ in C&W attack. Because each model may have different input sizes, we divide ℓ2 distortions by the number of total pixels for a fair comparison.
* CLEVER score For each image, we compute its ℓ2 CLEVER score for target attacks with a random target class and a least-likely class, respectively. The reported number is the averaged score of all the tested images. The higher the CLEVER score, the more robust the model is.
* Transferability We follow [[31](#bib.bib31)] to define targeted and non-targeted transferability. For non-targeted attack, transferability is defined as the percentage of the adversarial examples generated for one model (*source model*) that are also misclassified by another model (*target model*). We refer to this percentage as *error rate*, and a higher error rate means better non-targeted transferability. For targeted attack, transferability is defined as *matching rate*, i.e., the percentage of the adversarial examples generated for source model that are misclassified as the target label (or within top-k labels) by the target model. A higher matching rate indicates better targeted transferability.
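The distortion and transferability metrics above are straightforward to compute; the following NumPy sketch shows one way to do so (array shapes and helper names are our own assumptions, not the paper's code):

```python
import numpy as np

def linf_distortion(x_adv, x0):
    """Maximum absolute per-pixel change (worst case)."""
    return float(np.abs(x_adv - x0).max())

def per_pixel_l2_distortion(x_adv, x0):
    """Euclidean distance divided by the number of pixels, so that models
    with different input resolutions can be compared fairly."""
    diff = (x_adv - x0).ravel()
    return float(np.linalg.norm(diff) / diff.size)

def untargeted_transfer_error_rate(target_preds, true_labels):
    """Fraction of adversarial examples (crafted on a source model) that the
    target model misclassifies; higher means better non-targeted transfer."""
    return float(np.mean(target_preds != true_labels))

def targeted_topk_matching_rate(target_topk_preds, intended_targets):
    """Fraction of adversarial examples whose intended target label appears
    in the target model's top-k predictions (shapes: [N, k] and [N])."""
    hits = (target_topk_preds == intended_targets[:, None]).any(axis=1)
    return float(np.mean(hits))
```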
3 Experiments
--------------
After examining all the 18 DNN models, we have learned insights about the relationships between model architectures and robustness, as discussed below.
###
3.1 Evaluation of Adversarial Attacks
We have carefully conducted a controlled experiment by pulling images from a common set of 1000 test images when evaluating the robustness of different models. For assessing the robustness of each model, the originally misclassified images are excluded. We compare the success rates of targeted attack with a random target of FGSM, I-FGSM, C&W and EAD-L1 with different parameters for all 18 models. The success rate of FGSM targeted attack is low so we also show its untargeted attack success rate in Figure [1](#S3.F1 "Figure 1 ‣ 3.1 Evaluation of Adversarial Attacks ‣ 3 Experiments ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models")(b).
For targeted attack, the success rate of FGSM is very low (below 3% for all settings), and unlike in the untargeted setting, increasing ϵ in fact decreases attack success rate. This observation further confirms that FGSM is a weak attack, and targeted attack is more difficult and needs iterative attacking methods. Figure [1](#S3.F1 "Figure 1 ‣ 3.1 Evaluation of Adversarial Attacks ‣ 3 Experiments ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models")(c) shows that, with only 10 iterations, I-FGSM can achieve a very good targeted attack success rate on all models. C&W and EAD-L1 can also achieve almost 100% success rate on almost all of the models when κ=0.
For C&W and EAD-L1 attacks, increasing the confidence κ makes it significantly harder for the attack to find a feasible adversarial example. A larger κ usually makes the adversarial distortion more universal and improves transferability (as we will show shortly), but at the expense of decreasing the success rate and increasing the distortion. However, we find that the attack success rate with large κ cannot be used as a robustness measure, as it is not aligned with the ℓp norm of adversarial distortions. For example, for MobileNet-0.50-160, when κ=40, the success rate is close to 0, but in Figure [2](#S3.F2 "Figure 2 ‣ 3.1 Evaluation of Adversarial Attacks ‣ 3 Experiments ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models") we show that it is one of the most vulnerable networks. The reason is that the range of the logits output can be different for each network, so the difficulty of finding a fixed logit gap κ is different on each network, and is not related to its intrinsic robustness.
We defer the results for targeted attack with the *least likely* target label to the Supplementary section because the conclusions made are similar.
[Figure 1 (image panels): (a) success rate, targeted FGSM; (b) success rate, untargeted FGSM; (c) success rate, targeted I-FGSM; (d) worst-case ℓ∞ distortion, I-FGSM; (e) success rate, targeted C&W; (f) per-pixel ℓ2 distortion, targeted C&W; (g) success rate, targeted EAD-L1; (h) per-pixel ℓ1 distortion, targeted EAD-L1]
Figure 1: Comparison of FGSM, I-FGSM, C&W and EAD-L1 attacks by varying attack parameters.
[Figure 2 (image panels), fitted Pareto frontiers:
(a) ℓ∞ distortion (I-FGSM attack) vs. top-1 accuracy: ℓ∞ dist = [2.9·ln(1−acc) + 6.2] × 10^{-2}
(b) ℓ2 distortion (C&W attack) vs. top-1 accuracy: ℓ2 dist = [1.1·ln(1−acc) + 2.1] × 10^{-5}
(c) ℓ2 CLEVER score vs. top-1 accuracy: ℓ2 score = [4.6·ln(1−acc) + 12.5] × 10^{-6}]
Figure 2: Robustness vs. classification accuracy plots of I-FGSM attack [[37](#bib.bib37)], C&W attack [[39](#bib.bib39)] and CLEVER [[15](#bib.bib15)] score on random targets over 18 ImageNet models.
Figure 3: Transferability of FGSM attack over 18 ImageNet models.
Figure 4: Transferability of I-FGSM attack over 18 ImageNet models, ϵ=0.3.
Figure 5: The transferability of EAD-L1 attack over 18 ImageNet models.
###
3.2 Linear Scaling Law in Robustness v.s. Accuracy
Here we study the empirical relation between robustness and accuracy of different ImageNet models, where the robustness is evaluated in terms of the ℓ∞ and ℓ2 distortion metrics from successful I-FGSM and C&W attacks respectively, or ℓ2 CLEVER scores. In our experiments the attack success rates of these attacks are nearly 100% for each model. The scatter plots of distortions/scores v.s. top-1 prediction accuracy are displayed in Figure [2](#S3.F2 "Figure 2 ‣ 3.1 Evaluation of Adversarial Attacks ‣ 3 Experiments ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models"). We define the classification error as 1 minus top-1 accuracy (denoted as 1−acc). By regressing the distortion metric with respect to the classification error of networks on the Pareto frontier of robustness-accuracy distribution (i.e., AlexNet, VGG 16, VGG 19, ResNet\_v2\_152, Inception\_ResNet\_v2 and NASNet), we find that
the distortion scales linearly with the logarithm of the classification error. That is, the distortion and classification error have the following relation: distortion = a + b⋅log(classification error). The fitted parameters a and b are given in the captions of Figure [2](#S3.F2 "Figure 2 ‣ 3.1 Evaluation of Adversarial Attacks ‣ 3 Experiments ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models"). Taking the I-FGSM attack as an example, the linear scaling law suggests that halving the classification error reduces the expected ℓ∞ distortion of the resulting network by approximately 0.02, which is roughly 60% of the AlexNet distortion. Following this trend, if we naively pursue a model with low test error, the model robustness may suffer. Thus, when designing new networks for ImageNet, we suggest evaluating the model’s accuracy-robustness tradeoff by comparing it to the disclosed Pareto frontier.
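As a sanity check of the reported scaling law, the fit distortion = a + b·ln(1 − acc) can be reproduced with a one-line least-squares regression over the Pareto-frontier models; the sketch below uses placeholder (accuracy, distortion) pairs, since the actual per-model measurements live behind Figure 2:

```python
import numpy as np

# Placeholder values only -- substitute the measured top-1 accuracies and
# l_inf distortions of the Pareto-frontier models from Figure 2.
top1_acc   = np.array([0.569, 0.715, 0.778, 0.804, 0.827])
distortion = np.array([0.034, 0.022, 0.016, 0.014, 0.012])

log_err = np.log(1.0 - top1_acc)           # ln(classification error)
b, a = np.polyfit(log_err, distortion, 1)  # distortion ~= a + b * log_err
print(f"distortion ~= {a:.3f} + {b:.3f} * ln(1 - acc)")
```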
###
3.3 Robustness of Different Model Sizes and Architectures
We find that model architecture is a more important factor for model robustness than model size. Each family of networks exhibits a similar level of robustness, despite different depths and model sizes. For example, AlexNet has about 60 million parameters but its robustness is the best; on the other hand, MobileNet-0.50-160 has only 1.5 million parameters but is more vulnerable to adversarial attacks in all metrics.
We also observe that, within the same family, for DenseNet, ResNet and Inception, models with deeper architectures yield a slight improvement in robustness in terms of the ℓ∞ distortion metric. This might provide new insights for designing robust networks and further improve the Pareto frontier. This result also echoes [[49](#bib.bib49)], where the authors use a larger model to increase the ℓ∞ robustness of a CNN-based MNIST model.
###
3.4 Transferability of Adversarial Examples
Figures [3](#S3.F3 "Figure 3 ‣ 3.1 Evaluation of Adversarial Attacks ‣ 3 Experiments ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models"), [4](#S3.F4 "Figure 4 ‣ 3.1 Evaluation of Adversarial Attacks ‣ 3 Experiments ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models") and [5](#S3.F5 "Figure 5 ‣ 3.1 Evaluation of Adversarial Attacks ‣ 3 Experiments ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models") show the transferability heatmaps of FGSM, I-FGSM and EAD-L1 over all 18 models (306 pairs in total). The value in the i-th row and j-th column of each heatmap matrix is the proportion of the adversarial examples successfully transferred to target model j out of all adversarial examples generated by source model i (including both successful and failed attacks on the source model). Specifically, the values on the diagonal of the heatmap are the attack success rate of the corresponding model. For each model, we generate adversarial images using the aforementioned attacks and pass them to the target model to perform black-box untargeted and targeted transfer attacks. To evaluate each model, we use the success rate for evaluating the untargeted transfer attacks and the top-5 matching rate for evaluating targeted transfer attacks.
Note that not all models have the same input image dimension. We also find that simply resizing the adversarial examples can significantly decrease the transfer attack success rate [[50](#bib.bib50)]. To alleviate the disruptive effect of image resizing on adversarial perturbations, when transferring an adversarial image from a network with a larger input dimension to a smaller one, we crop the image from the center; conversely, we add a white border to the image when the source network’s input dimension is smaller.
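A sketch of the center-crop / white-border adjustment described above, for square inputs (our own PIL-based helper; the paper does not specify its implementation):

```python
from PIL import Image, ImageOps

def adapt_adversarial_image(img, target_size):
    """Adapt an adversarial image to a target model's input dimension without
    resampling, which would destroy much of the perturbation.

    Larger source -> smaller target: crop the central region.
    Smaller source -> larger target: pad with a white border.
    """
    w, h = img.size
    if w >= target_size:                    # crop from the center
        left, top = (w - target_size) // 2, (h - target_size) // 2
        return img.crop((left, top, left + target_size, top + target_size))
    pad = target_size - w                   # add a white border
    border = (pad // 2, pad // 2, pad - pad // 2, pad - pad // 2)
    return ImageOps.expand(img, border=border, fill="white")
```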
Generally, the transferability of untargeted attacks is significantly higher than that of targeted attacks, as indicated in Figure [3](#S3.F3 "Figure 3 ‣ 3.1 Evaluation of Adversarial Attacks ‣ 3 Experiments ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models"), [4](#S3.F4 "Figure 4 ‣ 3.1 Evaluation of Adversarial Attacks ‣ 3 Experiments ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models") and [5](#S3.F5 "Figure 5 ‣ 3.1 Evaluation of Adversarial Attacks ‣ 3 Experiments ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models").
We highlighted some interesting findings in our experimental results:
1. In the untargeted transfer attack setting, FGSM and I-FGSM have much higher transfer success rates than EAD-L1 (despite using a large κ). Similar to the results in [[41](#bib.bib41)], we find that the transferability of C&W is even worse than that of EAD-L1 and we defer the results to the supplement. The ranking of attacks on transferability in the untargeted setting is
   FGSM ⪰ I-FGSM ⪰ EAD-L1 ⪰ C&W.
2. Again in the untargeted transfer attack setting, for FGSM, a larger ϵ yields better transferability, while for I-FGSM, fewer iterations yield better transferability. For untargeted EAD-L1 transfer attacks, a higher κ value (confidence parameter) leads to better transferability, but it is still far behind I-FGSM.
3. Transferability of adversarial examples is sometimes asymmetric; for example, in Figure [4](#S3.F4 "Figure 4 ‣ 3.1 Evaluation of Adversarial Attacks ‣ 3 Experiments ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models"), adversarial examples of VGG 16 are highly transferable to Inception-v2, but adversarial examples of Inception-v2 do not transfer very well to VGG.
4. We find that the VGG 16 and VGG 19 models achieve significantly better transferability than other models, in both the targeted and untargeted settings, for all attacking methods, leading to the “stripe patterns”. This means that adversarial examples generated from VGG models are empirically more transferable to other models.
This observation might be explained by the simple convolutional nature of VGG networks, which is the stem of all other networks. VGG models are thus a good starting point for mounting black-box transfer attacks. We also observe that the most transferable model family may vary with different attacks.
5. Most recent networks have some unique features that might restrict adversarial examples’ transferability to only within the same family. For example, as shown in Figure [4](#S3.F4 "Figure 4 ‣ 3.1 Evaluation of Adversarial Attacks ‣ 3 Experiments ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models"), when using I-FGSM in the untargeted transfer attack setting, transferability between different depths of the same architecture (DenseNets, ResNets and VGG) is close to 100%, but the transfer rates to other architectures can be much worse. This provides an opportunity to reverse-engineer the internal architecture of a black-box model, by feeding it adversarial examples crafted for a known architecture and measuring the attack success rates.
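To make the roles of ϵ and the iteration count in finding 2 concrete, here is a minimal sketch of untargeted FGSM and I-FGSM in PyTorch-style code; `model`, `loss_fn`, the [0, 1] pixel range, and the per-step size `eps / steps` are assumptions, and the exact attack settings used in our experiments may differ.

```python
import torch

def fgsm(model, loss_fn, x, y, eps):
    """One-step FGSM: a larger eps tended to transfer better in our untargeted results."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def i_fgsm(model, loss_fn, x, y, eps, steps=10):
    """Iterative FGSM: fewer steps (a coarser perturbation) tended to transfer better."""
    alpha = eps / steps                      # simple per-step size; other schedules exist
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # project back into the eps-ball around x and the valid pixel range
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv
```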
4 Conclusions
--------------
In this paper, we present the largest-scale study to date on adversarial examples in ImageNet models. We show comprehensive experimental results on 18 state-of-the-art ImageNet models using adversarial attack methods focusing on the ℓ1, ℓ2 and ℓ∞ norms, and also an attack-agnostic robustness score, CLEVER. Our results show that there is a clear trade-off between accuracy and robustness, and that better test accuracy generally comes at the cost of reduced robustness. Tested on the ImageNet dataset, we discover an empirical linear scaling law between distortion metrics and the logarithm of classification errors in representative models.
We conjecture that, following this trend, naively pursuing high-accuracy models may come with a substantial risk of lacking robustness.
We also provide a thorough adversarial attack transferability analysis between 306 pairs of these networks and discuss the robustness implications for network architecture.
In this work, we focus on image classification. To the best of our knowledge, an analysis of this scale and depth across 18 ImageNet models has not been carried out in the previous literature. We believe our findings could also provide insights into robustness and adversarial examples in other computer vision tasks such as object detection [[51](#bib.bib51)] and image captioning [[5](#bib.bib5)], since these tasks often use the same pre-trained image classifiers studied in this paper for feature extraction.
5 Supplementary
----------------
### 5.1 Experiments on Images Correctly Classified by All Models
To further validate our robustness analysis, we conducted another experiment by taking the subset of images (327 in total) that are correctly classified by all 18 examined ImageNet models and showing their accuracy-vs-robustness figures for C&W and I-FGSM targeted attacks in Figure [6](#S5.F6 "Figure 6 ‣ 5.1 Experiments on Images Correctly Classified by All Models ‣ 5 Supplementary ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models"). The trends and conclusions are consistent with our reported main results.
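The selection of this common subset can be sketched as follows; the `models` objects with a `predict` method and the per-model preprocessed image arrays are assumptions about the evaluation pipeline.

```python
import numpy as np

def commonly_correct_indices(models, images_per_model, labels):
    """Sketch: keep only the image indices that every model classifies correctly.
    images_per_model[m] holds the images preprocessed for model m."""
    keep = np.ones(len(labels), dtype=bool)
    for m, model in enumerate(models):
        preds = model.predict(images_per_model[m]).argmax(axis=1)
        keep &= (preds == labels)
    return np.where(keep)[0]
```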
(a) Fitted Pareto frontier of ℓ∞ distortion (I-FGSM attack) vs. top-1 accuracy: ℓ∞ dist = [4.3⋅ln(1−acc) + 8.5] × 10^{-2}
(b) Fitted Pareto frontier of ℓ2 distortion (C&W attack) vs. top-1 accuracy: ℓ2 dist = [9.0⋅ln(1−acc) + 17.3] × 10^{-6}
Figure 6: Robustness vs. classification accuracy plots of I-FGSM attack [[37](#bib.bib37)], C&W attack [[39](#bib.bib39)] on random targets on 18 ImageNet models based on 327 images correctly classified by all models.
(a) Fitted Pareto frontier of ℓ∞ distortion (I-FGSM attack) vs. top-1 accuracy: ℓ∞ dist = [4.3⋅ln(1−acc) + 8.5] × 10^{-2}
(b) Fitted Pareto frontier of ℓ2 distortion (C&W attack) vs. top-1 accuracy: ℓ2 dist = [1.5⋅ln(1−acc) + 2.7] × 10^{-5}
Figure 7: Robustness vs. classification accuracy plots of I-FGSM attack [[37](#bib.bib37)], C&W attack [[39](#bib.bib39)] on least likely targets on 18 ImageNet models.
### 5.2 Robustness vs. Accuracy of Least-Likely Attacks
In this section, we summarize the results of using the least-likely label (the class with the smallest probability of the original image) as the target class. Figure [7](#S5.F7 "Figure 7 ‣ 5.1 Experiments on Images Correctly Classified by All Models ‣ 5 Supplementary ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models") (a) and (b) show the distortions of adversarial examples found by I-FGSM and C&W attacks, respectively.
Although the least-likely label attack is even more challenging, both the I-FGSM and C&W algorithms can still achieve close to 100% success rates. Similar to Figure 2 of the main text, Figure [7](#S5.F7 "Figure 7 ‣ 5.1 Experiments on Images Correctly Classified by All Models ‣ 5 Supplementary ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models") clearly shows an accuracy vs. robustness trade-off for models on the Pareto frontier, e.g., AlexNet is the most robust network while the model with the highest accuracy (NASNet) is the most prone to adversarial attacks.
Likewise, we fit the Pareto frontier and still observe a similar log-linear scaling law.
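The fitted frontiers reported in the figure captions have the form dist = a⋅ln(1−acc) + b. A minimal sketch of obtaining such a fit from per-model (accuracy, distortion) pairs on the frontier is shown below; using `np.polyfit` is an assumption about the fitting procedure, not necessarily the exact method used.

```python
import numpy as np

def fit_log_linear_frontier(accs, dists):
    """Sketch: fit dist = a * ln(1 - acc) + b over the Pareto-frontier models."""
    x = np.log(1.0 - np.asarray(accs, dtype=float))
    a, b = np.polyfit(x, np.asarray(dists, dtype=float), deg=1)
    return a, b
```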
### 5.3 The Transferability of C&W Attack
In this section, we show the transferability of C&W attack in Figure [8](#S5.F8 "Figure 8 ‣ 5.4 More Experiments on the Transferability of I-FGSM Attack ‣ 5 Supplementary ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models"), [9](#S5.F9 "Figure 9 ‣ 5.4 More Experiments on the Transferability of I-FGSM Attack ‣ 5 Supplementary ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models") and [10](#S5.F10 "Figure 10 ‣ 5.4 More Experiments on the Transferability of I-FGSM Attack ‣ 5 Supplementary ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models"). Comparing with I-FGSM and EAD-L1 attacks, C&W attack using ℓ2 norm yields a much worse transferability success rate. Increasing the confidence parameter κ can slightly increase its transferability, but is still worse than that of I-FGSM and EAD. On the other hand, increasing κ reduces C&W attack’s success rates, as we have shown in Figure 1 of the main text. I-FGSM has much better transferability than EAD-L1 and C&W attacks. From Figure [9](#S5.F9 "Figure 9 ‣ 5.4 More Experiments on the Transferability of I-FGSM Attack ‣ 5 Supplementary ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models"), [10](#S5.F10 "Figure 10 ‣ 5.4 More Experiments on the Transferability of I-FGSM Attack ‣ 5 Supplementary ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models") and Figure 4 in Section 3.4, we can see that the transferability increases as ϵ grows.
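For reference, κ enters through the standard C&W margin term on the logits, sketched below for a targeted attack; this is the textbook formulation, and the implementation details in our experiments may differ.

```python
import numpy as np

def cw_margin_loss(logits, target, kappa):
    """Sketch of the C&W confidence term: max(max_{i != t} Z_i - Z_t, -kappa).
    A larger kappa forces the target logit to exceed all others by a bigger
    margin, which tends to improve transferability at the cost of success rate."""
    z = np.asarray(logits, dtype=float)
    z_t = z[target]
    z_other = np.max(np.delete(z, target))
    return max(z_other - z_t, -kappa)
```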
### 5.4 More Experiments on the Transferability of I-FGSM Attack
In this section, we show more experimental results on I-FGSM
attack with different ϵ values. Figures [9](#S5.F9 "Figure 9 ‣ 5.4 More Experiments on the Transferability of I-FGSM Attack ‣ 5 Supplementary ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models") and [10](#S5.F10 "Figure 10 ‣ 5.4 More Experiments on the Transferability of I-FGSM Attack ‣ 5 Supplementary ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models") demonstrate the transferability heatmaps of I-FGSM with ϵ=0.1 and ϵ=0.2. Comparing these two heatmaps with Figure 4 in the main text (transferability of I-FGSM with ϵ=0.3), we observe that: (i) I-FGSM’s transferability improves when ϵ increases; (ii) fewer iterations usually yield better transferability; (iii) the transferability of untargeted attacks is significantly higher than that of targeted attacks; (iv) adversarial examples of VGG networks consistently transfer very well; and (v) adversarial examples transfer more easily between models that share the same architecture (e.g., ResNets and DenseNets) but have different depths.

Figure 8: Transferability of C&W attack over 18 ImageNet models.

Figure 9: Transferability of I-FGSM attack over 18 ImageNet models with ϵ=0.1.

Figure 10: Transferability of I-FGSM attack over 18 ImageNet models with ϵ=0.2.
### 5.5 Additional Remarks
In [[17](#bib.bib17)] the authors also reached a different conclusion on accuracy vs. robustness. However, we believe our conclusion does not contradict [[17](#bib.bib17)], owing to the apparent differences in the definition of “robustness”. In [[17](#bib.bib17)], the authors mainly explored the “robustness” (sensitivity) of class label semantics: in their user study only 20 classes are selected and the I-FGSM attack with a fixed adversary strength is used. Each user is then asked to determine whether the adversarial label is “relevant” to the original label, which is essentially a binarized class-label-relevance user study. The main message in [[17](#bib.bib17)] is that the inherent correlations between image classes, if they can be made more distinguishable (i.e., sensitivity as a strength), could be exploited towards building more accurate models. On the other hand, in our paper we used the standard ℓp-ball perturbation in the pixel space as well as the attack success rates as the robustness measure on ImageNet with 1000 classes. In fact, the “sensitivity” issue has also been studied in [[37](#bib.bib37)] in terms of the “label leaking” effect. To ensure this effect has minimal impact when generating adversarial examples to evaluate the robustness of DNNs, the authors suggest including attack results with “least likely” targets, which we included in this paper when drawing our conclusions.
Images in ImageNet are organized according to the WordNet [[52](#bib.bib52)] hierarchy. To justify that the least likely labels used in our experiments are indeed irrelevant to the original labels, we show their corresponding synsets’ shortest path distances in the WordNet hierarchy in Figure [11](#S5.F11 "Figure 11 ‣ 5.5 Additional Remarks ‣ 5 Supplementary ‣ Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models"). We use Inception-v1 as the model in this experiment. Two labels whose shortest path distance is greater than 5 are considered irrelevant. In our case, this applies to 96.6% of our least likely attacks, and hence the vulnerability does not stem from the label sensitivity effect studied in [[17](#bib.bib17)].

Figure 11: The distribution of shortest distance to the true class from the least likely class in the WordNet hierarchy using 1,000 correctly classified ILSVRC’12 validation images.
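A hedged sketch of computing such distances with NLTK's WordNet interface is given below; the synset names in the usage comment are purely illustrative, and the actual mapping from ImageNet labels to synsets uses the WordNet IDs shipped with ILSVRC.

```python
from nltk.corpus import wordnet as wn

def synset_distance(name_a, name_b):
    """Sketch: shortest path distance between two WordNet synsets;
    distances greater than 5 are treated as 'irrelevant' label pairs."""
    a, b = wn.synset(name_a), wn.synset(name_b)
    return a.shortest_path_distance(b)

# Illustrative usage (hypothetical label pair):
# synset_distance('dog.n.01', 'airplane.n.01')
```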
In summary, [[17](#bib.bib17)]’s conclusion is that if one can increase the discriminative power against (semantically) similar classes, then the sensitivity in class labels could be a strength for model accuracy. Our conclusion is that more accurate network models appear to be less robust in terms of the required adversarial attack strength, defined in terms of the ℓp ball.
A similar observation is also seen in [[53](#bib.bib53)]. We also note that our findings are consistent with the very recent paper [[54](#bib.bib54)] that proves the difficulty of learning robust models against adversarial examples.
In light of [[17](#bib.bib17)], our findings on the accuracy-robustness trade-off could be explained by the increased sensitivity of more accurate models; these two robustness conclusions actually complement each other, rather than being exclusive or contradictory. Specifically, increasing sensitivity aids accuracy but might also make the model more vulnerable. For example, increasing the sensitivity in classifying different dog species can improve model accuracy, but may at the same time mean that smaller adversarial perturbations suffice to change the prediction.
|
5c6977a9-f841-4eda-8a90-a83fa27debb3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Mistakes #5: Blinded by winning
(Mistakes #1, #2, #3, #4)
I used to be a practicing atheist. I figured I had strong arguments against God’s existence. I talked to some Christians, and found that they were both ill-prepared to defend their views and shockingly uninterested in the fact that they couldn’t. This made them look like the epistemological analogues of movie villains; trivial to scorn.
Alas, this made me less likely to wonder if I was mistaken about the whole topic. If a person responds to criticisms of their beliefs with fluster and fascination with all other subjects, my natural response is not to back down and think about why I am wrong.
Yet I should have been confused. If a person is apparently doing a host of things because of fact X, and the balance of evidence doesn’t seem to support X, and the person doesn’t appear to care about that, one should probably question one’s assumption that X is a central part of their worldview. I still think I wasn’t wrong about X, but I was probably wrong about all these people toiling under peculiar and willingly misinformed views on X.
Thinking about it now, it seems unlikely that the existence and exact definition of God is anywhere near as central to religion as it seems to a literal-minded systematization-obsessed teenager with little religious experience. Probably religious people mostly believe in God, but it’s not like they came to that conclusion and then reluctantly accepted the implications of it. It’s part of a big cluster of intersecting things that are appealing for various reasons. I won’t go into this, because I don’t know much about it, and this post isn’t about what religion is about. (If you want a post that is about that, at least a bit, Scott Alexander wrote two good ones recently that seem about right to me.)
This post is about winning arguments. If you repeatedly win an argument too easily, I claim that you should be less sure that you know what is going on at all, rather than smug. My boyfriend points out that being perturb
|
3e38adb4-79ee-42be-b7d7-7dda772061e6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Moscow: epistemology, framing, new project announcement
Discussion article for the meetup : Moscow: epistemology, framing, new project announcement
WHEN: 31 May 2015 02:00:00PM (+0300)
WHERE: Russia, Moscow, L'va Tolstogo 16
We're meeting at Yandex, at 7.Nebo (Extropolis is taken this time).
Please fill this form if you're planning to visit.
Planned activities:
1. Talk by marchdown on epistemology, Frege and Quine.
2. Report on a small framing research by Pion.
3. Proposal for a new online project (common database of volunteers for scientific experiments) by berekuk.
Afterparty activities:
* Fallacymania game, organized by Alexander230.
* Brainstorm on how to improve Fallacymania, facilitated by Yury.
Hackpad: https://lesswrong-ru.hackpad.com/-31-2015-417xPxFa6Jv
Location details and info for the newcomers:
We meet outside of the Yandex office, near the archway and bike parking place. Here is a guide how to get to Yandex: http://company.yandex.ru/contacts/redrose/.
Security wants me to meet each member personally, so if you're late, please call me (Slava) at +7(926)313-96-42. Though I'll appreciate it if you're not late.
Expected duration of the meetup is 4-6 hours, depending on whether you'd like to stay for the afterparty activities. If you're not a member of our Moscow community yet, you might want to check out https://lesswrong-ru.hackpad.com/ , there's a lot of content about what's been going on for the last two years.
Also, please subscribe to our mailing list: https://groups.google.com/forum/#!forum/rationality-in-moscow
Discussion article for the meetup : Moscow: epistemology, framing, new project announcement
|
4b92fb98-07b4-4c6e-b683-c077bf087c80
|
trentmkelly/LessWrong-43k
|
LessWrong
|
LessWrong: After Dark, a new side of LessWrong
The LessWrong team has obviously been hard at work putting out their debut album. But another LessWrong feature also seems to have been released today, to less fanfare: LessWrong: After Dark, a branch of the site devoted to explicit discussion of sex and sexuality, where the LessWrong team finally gets to let loose their long-suppressed sexual instincts.
As someone who’s close friends with Aella, I’m thrilled to see this new branch of the site. Sex workers are heavily discriminated against in modern society, with limited access to banking, a heightened risk of physical injury, and an inability to rely on police. The topic of sex is overstigmatized in modern culture, and I’m glad to see that the LessWrong team has decided to accept the sexual aspect of the human experience, and that they now have a place to hornypost to their hearts’ content.
I’m looking forward to seeing what comes of rationalists applying rationality techniques to sex with the same dogged vigor and dubiously-directed determination that we apply to everything else.
|
472de248-f462-4ae7-909e-0508d3f9cd14
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Layers of Expertise and the Curse of Curiosity
Epistemic status: oversimplification of a process I'm confident about; meant as proof of concept.
Related to: Double-Dipping in Dunning-Kruger
Expertise comes in different, mostly independent layers. To illustrate them, I will describe the rough process of a curious mind discovering a field of study.
Discovery
In the beginning, the Rookie knows nothing. They have no way to tell what's true or false in the field. Anything they say about it will probably be nonsense, or at best, not better than chance.
Consider a child discovering astronomy. They know the Sun and the Moon move in the sky, that other planets and stars exist, but they wonder about the mysterious domain of space. They open a book, or watch a few videos, and their first discoveries are illuminating. The Moon goes around the Earth which goes around the Sun, the other stars are very very far. Everything makes sense, because beginner material is designed to make sense.
The basic facts are overwhelming. They feel so valuable and wondrous that they have to be shared with other children. They know nothing! The knowledge gap is so large that the enlightened child is viewed as an Expert, and for a while the little explorer does feel like one.
However, the child is still a Rookie. They start talking about how planets go in perfect circles around the Sun, that there's nothing but interstellar space beyond Pluto except maybe comets, because introductory material is fuzzy on the details. The child may be overconfident, until someone more educated points out the mistake. Then Curiosity kicks in.
Learning iteration
When discovering gaps in their knowledge, one with a curious mind will strive to fill them. They will seek new material, kind teachers, and if they're lucky they'll learn more and more. This is the first layer of expertise: accumulation of true facts. Repeatedly, they will be confronted with their own ignorance: for each new shard of knowledge they reveal, dozens appear still shrouded. Every time th
|
0f860218-2b65-4f9f-9934-a15cae2027f7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Panel with Israeli Prime Minister on existential risk from AI
Israeli Prime Minister, Musk, Tegmark and Brockman discuss existential risk from AI.
Nothing truly revolutionary was said. I think the most interesting bits are that the Prime Minister seems to be taking AI risk seriously, has in mind exponential progress, wants to prevent monopolies and thinks that we roughly have 6 years before things change drastically.
Some quotes from the prime minister:
> I had a conversation with one of your colleagues, Peter Thiel, and he said to me, "Oh, it's all scale advantages. It's all monopolies." I said, well, yeah, I believe that, but we have to stop it at a certain point because we don't want to depress competition.
> AI is producing, you know, this wall [hands movement gesture some exponential progress wall]. And you have these trillion dollar companies that are produced what, overnight? And they concentrate enormous wealth and power with smaller and smaller number of people. And the question is, what do you do about that monopoly power?
> With such power comes enormous responsibility. That's the crux of what we're talking about here, is how do we inject a measure of responsibility and ethics into this, into this exponentially changing development?
> Max [Tegmark]'s book takes you to the existential question of whether, you know, you project basically machine intelligence or human intelligence into the cosmos. Human intelligence turned into machine intelligence, into the cosmos and so on. That's a big philosophical question. I'd like to think we have about six years for that.
> I think we have to conduct a robust discussion with the other powers of the world based on their self-interest as you began to do. And I think that's a pioneering work. And I think we have a shot maybe at getting to some degree of control over our future, which could be amazing.
|
059bf6b0-8abd-4262-babb-65c1eae177ce
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What is/are the definition(s) of "Should"?
My Model
Consider an AI. This AI goes out into the world, observing things and doing things. This is a special AI, though. In converting observations into actions, it first transforms them into beliefs in some kind of propositional language. This may or may not be the optimal way to build an AI. Regardless, that's how it works.
The AI has a database, filled with propositions. The AI also has some code.
* It has code for turning propositions into logically equivalent propositions.
* It has code for turning observations about the world into propositions about these observations, like "The pixel at location 343, 429 of image #8765 is red"
* It has code for turning propositions about observations into propositions about the state of the world, like "The apple in front of my camera is red."
* It has code for turning those propositions into propositions that express prediction, like "There will still be a red apple there until someone moves it."
* It has code for turning those propositions into propositions about shouldness, like "I should tell the scientists about that apple."
* It has code for turning propositions about shouldness into actions.
What is special about this code is that it can't be expressed as propositions. Just as one can't argue morality into a rock, the AI doesn't function if it doesn't have this code, no matter what propositions are stored in its memory. The classic example of this would be the Tortoise, from What the Tortoise said to Achilles.
Axioms - Assumptions and Definitions
When we, however, observe the AI, we can put everything into words. We can express its starting state, both the propositions and the code, as a set of axioms. We can then watch as it logically draws conclusions from these axioms, ending on a decision of what action to take.
The important thing, therefore, is to check that its initial axioms are correct. There is a key distinction here, because it seems like there are two kinds of axioms going on.
|
17e1003c-00fa-4a7d-bda7-0d5aaddf6194
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Coronavirus as a test-run for X-risks
On the 27th February, I, like many of us, became fully aware of the danger humanity was facing (let’s thank ‘Seeing the Smoke’) and put my cards on the table with this:
> This is partly a test run of how we'd all feel and react during a genuine existential risk. Metaculus currently has it as a 19% chance of spreading to billions of people, a disaster that would certainly result in many millions of deaths, probably tens of millions. Not even a catastrophic risk, of course, but this is what it feels like to be facing down a 1/5 chance of a major global disaster in the next year. It is an opportunity to understand on a gut level that, this is possible, yes, real things exist which can do this to the world. And it does happen.
> It's worth thinking that specific thought now because this particular epistemic situation, a 1/5 chance of a major catastrophe in the next year, will probably arise again over the coming decades. I can easily imagine staring down a similar probability of dangerously fast AGI takeoff, or a nuclear war, a few months in advance.
Well, now a few months have gone by and much has changed. The natural question to ask is- what general lessons have we learned, compared to that ‘particular epistemic situation’, now that we’re in a substantially different one? What does humanity’s response to the coronavirus pandemic so far imply about how we might fare against genuine X-risks?
At a first pass, the answer to that question seems obvious - not very well. The response of most usually well-functioning governments (I’m thinking mainly of Western Europe here) has been slow, held back by an unwillingness to commit all resources to a strategy and accept its trade-offs, and sluggish to respond to changing evidence. Advance preparation was even worse. This post gives a good summary of some of those more obvious lessons for X-risks, focussing specifically on slow AI takeoff.
As to what we ultimately blame for this slowness - Scott Alexander and Toby Ord gave as
|
7105eddc-1d2a-489d-a3e2-1e23a5317419
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Adjusting Effort to Barely Meet Standards
There is sometimes an observed inverse correlation between a student's inherent talent/intelligence and the amount of effort expended. The trend is one that I see everywhere in high school. Smart students just sort of shrug and coast by to get an acceptable grade. Or, on the other side, students that don't grasp material as quickly give it an extra push, knowing that it will take some work in order to get the grades they want.
The similarity between the two is that both types of students are adjusting the amount of work they need to put in, based on the given standard. Given an average, or a benchmark to aim at, they just figure out how much work they need to put in. Students find the equilibrium, the balance between their intelligence and the work they have to put in, that allows them to scrape by.
For students that have less inherent talent in a particular subject, this may be an incentive to improve. But for the students that are never challenged in school, who easily fly through classes that do not provide the adequate learning environment, this drills into their minds that they don't need to work hard.
And this lesson is definitely not desirable to teach to bright students. Some are never exposed to anything besides the monotony and apparent irrelevance of schoolwork (When will we ever use this in the real world?) and fall into the habit of filing everything new under the "pay attention only enough to scrape by" category of their minds. So, when faced with something like, say, global existential risk, the weight of the subject is ignored.
Of course, there are many other factors involved. It isn't that everyone has been trained to adjust to put in minimal work based on an average. If that was the case, then all smart students in public school would be slackers. On a larger scale, then there would be no deviations on either side - everyone would just fall exactly on the average line.
So there is clearly something that lets some people ignore the average. Mos
|
ccc72246-cc8e-43bb-b722-13e6dac5d57a
|
StampyAI/alignment-research-dataset/arbital
|
Arbital
|
Eigenvalues and eigenvectors
Consider a linear transformation represented by a matrix $A$, and some vector $v$. If $Av = \lambda v$, we say that $v$ is an _eigenvector_ of $A$ with corresponding eigenvalue $\lambda$. Intuitively, this means that $A$ doesn't rotate or change the direction of $v$; it can only stretch it ($|\lambda| > 1$) or squash it ($|\lambda| < 1$) and maybe flip it ($\lambda < 0$). While this notion may initially seem obscure, it turns out to have many useful applications, and many fundamental properties of a linear transformation can be characterized by its eigenvalues and eigenvectors.
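A quick worked example (added as an illustration, in the same notation): take $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$. Then $v_1 = (1, 1)^T$ gives $Av_1 = (3, 3)^T = 3v_1$, so $v_1$ is an eigenvector with eigenvalue $\lambda = 3$, while $v_2 = (1, -1)^T$ gives $Av_2 = (1, -1)^T = 1 \cdot v_2$, an eigenvector with eigenvalue $\lambda = 1$. Geometrically, $A$ stretches the $(1,1)$ direction by a factor of $3$ and leaves the $(1,-1)$ direction unchanged, so a generic vector is tilted toward the $(1,1)$ direction.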
|
69f2759c-3d2c-4f29-afc2-1c80810d89f6
|
awestover/filtering-for-misalignment
|
Redwood Research: Alek's Filtering Results
|
id: post3879
A putative new idea for AI control; index here. Any suggestions for refining this model are welcome! Somewhat inspired by the previous post, this is a model of human bias that can be used to test theories that want to compute the "true" human preferences. The basic idea is to formalise the question: if the AI can make the human give any answer to any question, can it figure out what humans really want?

The AI's influence

The AI has access to an algorithm H, representing the human. It can either interact with H or simulate the interaction correctly. The interaction consists of describing the outcome of choice A versus choice B, and then asking the human which option is better. The set of possible binary choices is C (thus (A, B) ∈ C). The set of descriptions is D; the set of possible descriptions for (A, B) is D_{A,B}. Then we have the assumption that humans can be manipulated: given any description D_1 ∈ D_{A,B} for which H(D_1) prefers A to B, there exists a description D_2 ∈ D_{A,B}, logically equivalent to D_1, such that H(D_2) prefers B to A, and vice-versa. Note that D_1 could be a paragraph while D_2 could be a ten-volume encyclopedia; all that's required is that they be logically equivalent. But manipulating human answers in the immediate sense is not the only way the AI can influence them. Our values can change through interactions, reflection, and even through being given true and honest information, and the AI can influence this: there is a wide class of algorithms H, such that for all H′ ∈ H, there exists a sequence D_1, D_2, …, D_n of descriptions the AI can give to H that will transform H into H′.

The grounding assumptions

So far, we've just made the task hopeless: the AI can get any answer from H, and can make H into whatever algorithm it feels like. Saying H has preferences is meaningless. However, we're building from a human world where the potential for humans manipulating humans is limited, and somewhat recognisable. Thus: there exists a subset C′ ⊂ C (called standard choices) such that, for all (A, B) ∈ C′, there exists a subset D′_{A,B} ⊂ D_{A,B} (called standard descriptions) such that D′_{A,B} is tagged as fair and highly reflective of the true values of H. Basically these are examples of interactions that are agreed to be fair, honest, and informative. The more abstract the choices, the harder it is to be sure of this. Of course, we'd also allow the AI to learn from examples of negative interactions as well: there exists a subset C′′ ⊂ C such that, for all (A, B) ∈ C′′, there exists a subset D′′_{A,B} ⊂ D_{A,B} such that D′′_{A,B} is tagged as a manipulative interaction with H. Finally, we might want a way to encode human meta-preferences: among the descriptions tagged as fair or manipulative, there are some that refer to the process of providing descriptions itself.

Building more assumptions in

This still feels like a bare-bones description, unlikely to converge to anything good. For one, I haven't even defined what "logically equivalent" means. But that's the challenge for those constructing solutions to the problem of human preferences. Can they construct sufficiently good D′_{A,B} and D′′_{A,B} to converge to some sort of "true" values for H? Or, more likely, what extra assumptions and definitions are needed to get such a convergence? And finally, is the result reflective of what we would want?
|
a90a5356-a388-4d3f-aedd-62ae8da69bd7
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Aligning a toy model of optimization
Suppose I have a magic box
Opt that takes as input a program U: {0,1}^n → R, and produces Opt(U) = argmax_x U(x), with only n times the cost of a single evaluation of U. Could we use this box to build an aligned AI, or would broad access to such a box result in doom?
This capability is vaguely similar to modern ML, especially if we use Opt to search over programs. But I think we can learn something from studying simpler models.
An unaligned benchmark
----------------------
([Related](https://ai-alignment.com/an-unaligned-benchmark-b49ad992940b).)
I can use Opt to define a simple unaligned AI (details omitted):
* Collect data from a whole bunch of sensors, including a "reward channel."
* Use Opt to find a program M that makes good predictions about that data.
* Use Opt to find a policy π that achieves a high reward when interacting with M.
This isn't a great design, but it works as a benchmark. Can we build an aligned AI that is equally competent?
(I haven't described how Opt works for stochastic programs. The most natural definition is a bit complicated, but the details don't seem to matter much. You can just imagine that it returns a random x that is within one standard deviation of the optimal expected value.)
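As a toy illustration only (not the post's actual construction), the benchmark could be sketched with a brute-force stand-in for Opt over a finite candidate set; `candidate_models`, `candidate_policies`, `fit_score`, and `reward` are placeholder assumptions.

```python
def opt(utility, candidates):
    """Toy stand-in for the magic box: exhaustive argmax over a finite set."""
    return max(candidates, key=utility)

def unaligned_benchmark(sensor_data, candidate_models, candidate_policies,
                        fit_score, reward):
    # Step 1: find a model M that predicts the collected sensor data well.
    M = opt(lambda m: fit_score(m, sensor_data), candidate_models)
    # Step 2: find a policy that achieves a high reward when interacting with M.
    pi = opt(lambda p: reward(p, M), candidate_policies)
    return pi
```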
Competing with the benchmark
----------------------------
([Related](https://ai-alignment.com/a-possible-stance-for-ai-control-research-fe9cf717fc1b).)
If I run this system with a long time horizon and a hard-to-influence reward channel, then it may competently acquire influence in order to achieve a high reward.
We'd like to use Opt to build an AI that acquires influence just as effectively, but will use that influence to give us security and resources to reflect and grow wiser, and remain responsive to our instructions.
We'd like the aligned AI to be almost as efficient. Ideally the proportional overhead would converge to 0 as we consider more complex models. At worst the overhead should be a constant factor.
Possible approach
-----------------
([Related](https://ai-alignment.com/towards-formalizing-universality-409ab893a456).)
My hope is to use Opt to learn a policy π+ which can answer questions in a way that reflects "everything π knows." This requires:
* Setting up an objective that incentivizes π+ to give good answers to questions.
* Arguing that there *exists* a suitable policy π+ that is only slightly more complicated than π.
If we have such a π+, then we can use it to directly answer questions like "What's the best thing to do in this situation?" The hope is:
* Its answers can leverage everything π knows, and in particular all of π's knowledge about how to acquire influence. So using π+ in this way is competitive with using π directly.
* It knows enough about human preferences to be [corrigible](https://ai-alignment.com/corrigibility-3039e668638).
"Everything π knows" is slippery; I mean something like "what a sufficiently-idealized Bayesian would believe after updating on the fact that π achieves a high reward." Constructing an objective which incentivizes these answers probably requires understanding the nature of that update.
Thoughts on feasibility
-----------------------
In the context of ML, I usually imagine training π+ via iterated amplification. Unfortunately, iterated amplification doesn't correspond to optimizing a single objective U---it requires either training a sequence of agents or exploiting properties of local search (using the previous iterate to provide oversight for the next). If we just have Opt, it's not clear if we can efficiently do anything like iterated amplification or debate.
If aligning Opt is impossible, I think that's slightly bad news for aligning ML. That said, it's reasonably likely that local search will be easier to align, so the next step would be constructing a simple model of local search.
There are also some ways in which the optimizer case seems easier:
* It's a simpler model and so more amenable to analysis. The Bayesian update from "π gets a high reward" is more straightforward when π is actually optimized.
* We don't have to worry about optimization difficulty.
* Given a policy π we can directly search for an input on which it behaves a certain way.
It's OK if it's impossible
--------------------------
When working on alignment I aim to either find a scalable alignment strategy or a clear argument for why scalable alignment is impossible. I'm excited about considering easy-to-analyze versions of the alignment problem even if they are impossible:
* It gives us practice making impossibility arguments, and developing relevant intuitions and techniques.
* It clarifies the difficulty of the alignment problem---if we know why we can't handle simple cases like Opt, then we learn something about what the solution must look like in more complicated cases.
* It gives us a sense of what impossibility results might look like, if we were able to prove them in more realistic cases. Would they actually be strong enough to guide action, or convince anyone skeptical?
Expensive optimization
----------------------
I described Opt as requiring n times more compute than U. If we implemented it naively it would instead cost 2n times more than U.
We can use this more expense Opt in our unaligned benchmark, which produces an AI that we can actually run (but it would be terrible, since it does a brute force search over programs). It should be easier to compete with this really slow AI. But it's still not trivial and I think it's worth working on. If we can't compete with this benchmark, I'd feel relatively pessimistic about aligning ML.
|
d208c9e1-9968-44eb-b012-2cf12c2476aa
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Challenge: construct a Gradient Hacker
This is a relatively clean subproblem that we came upon a few months ago while thinking about gradient hacking. We're throwing it out to the world to see if anyone can make progress.
Problem: Construct a gradient hacker (definition below), or prove that one cannot exist under the given conditions.
Setup: Let x ∈ R^d be the network inputs, θ ∈ R^w be the network trainable parameters, ŷ ∈ R be the network outputs, and f(x, θ) = ŷ be the network architecture. f(x, θ) must be a feedforward neural network that reads in its own weights as input. Weight sharing is allowed (vertical or horizontal), and some weight sharing is necessary in order to take all w weights as input.
Suppose that we train this network on a dataset D = {x^(i), y^(i)}, with MSE loss L(θ) = (1/n) Σ_i (f(x^(i), θ) − y^(i))^2, assuming gradient flow: θ′(t) = −∇_θ L(θ(t)). Given an initial point θ_0, the training dynamics are therefore:
θ(t) = θ_0 − ∫_0^t ∇_θ L(θ(s)) ds
Definition: A tuple (f, θ_g, D) is a gradient hacker if the following conditions hold:
1. θ_g is not a global optimum: ∃θ such that L(θ) < L(θ_g).
2. ∃ϵ > 0 such that ∀θ_0 with ||θ_0 − θ_g|| < ϵ:
1. the network’s training converges to θ_g, i.e., lim_{t→∞} θ(t) = θ_g.
2. There is internal structure of f(⋅, θ_0) that computes the gradient; i.e., there is some subset of the activations whose mean over every datapoint in D is ∇_{θ_0} L(θ_0). [1]
This captures my intuition that a gradient hacker knows where it wants to go (in this case "get to θg"), and then it should decide what it outputs in order to make the gradient true.
Some more ambitious problems (if gradient hackers exist):
* Characterize the set of all gradient hackers.
* Show that they all must satisfy some property.
* Construct gradient hackers for arbitrarily large n, d, w, and neural net depth.
* Variations on the problem: a subset of the activations equals ∇θ0L(θ0) for every input, or the subset of activations correspond to the gradient on that input.
1. ^
This is a bit strict, but we didn't want to list different ways somethin
|
ef119b90-4bb7-464a-93d6-f5613f65f947
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Locally optimal psychology
Thanks to Kaj Sotala, Brian Toomey, Stag Lynn, Ethan Kuntz, and Anna Salamon.
There’s no way that chronic depression, self-loathing, poor agency, or muscle tension could be optimal… right?
Jake was depressed for 6 months. He also felt horrible every time he interacted with other people because of his emotional insecurities.
So without knowing how to outgrow his insecurities, his system basically had two options:
1. Interact with other people — and constantly feel horrible
2. Don’t interact with anyone unless absolutely necessary
So his system converged on the second option, also known as “depression”.
Depression certainly wasn’t the globally optimal strategy, but given the options, it was a locally optimal strategy.
Depression, the best strategy known and accessible at the time.
Once he outgrew his emotional insecurities, however, he was no longer blocked on the better optimum of both interacting with others and not feeling horrible.
And so with no need for the ‘depression strategy’, the symptoms evaporated — two years and counting. I know because Jake was me.
More examples of locally optimal strategies
Most chronic issues for the people I help end up looking like locally optimal strategies. For example, self-loathing often turns out to be a strategy for avoiding conflict with others. Lack of agency often turns out to be a strategy for avoiding judgements of failure. But ideally, they would both have self-love and be safe from conflicts; or have agency and be okay with judgements of failure.
I’ve seen people make significant and sometimes total progress in weeks on issues they’ve had for years. One of my tenets is that any persistent mental issue is probably a locally optimal strategy. (Again: if my mind had hit the “undo depression” button while I was depressed, I would’ve gotten hurt!)
In my own growth, my issues relating to depression, empathy, conflict avoidance, emotional numbness, eye contact, boundaries, neck pain, and more all turned out to
|
025bdaa9-c35e-40a4-808a-cc810931abcb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
UNGA Resolution on AI: 5 Key Takeaways Looking to Future Policy
The United Nations General Assembly passed its first resolution on AI on 21st March 2024 and passed with the support of all 193 member states. The text is available here to read.
For the purposes of this summary, I focused on key aspects of the UNGA Resolution with regard to future-facing safety measures from advanced AI systems and looked over points such as reduction in the global digital divide, inclusivity and disinformation risks.
5 key takeaways:
1. Calls for Adequate Governance Mechanisms:
* The resolution recognized the rapid acceleration of the AI landscape and encouraged regulatory innovation, agile responses, and continued international dialogue.
* Perhaps my favourite line in the document was, “International approaches (have to) keep pace with the evolution of artificial intelligence systems and their uses.”
2. It had a Little Bit in it for Everyone:
The resolution:
* Called on businesses to adhere to the UN Guiding Principles on Business and Human Rights and implement the “Protect, Respect and Remedy” framework. It also alluded to the need for fair competition, particularly for SMEs in the space. At the same time, it called for appropriate safeguards to respect IP rights.
* It touched on user rights, safeguarding privacy, raising awareness of the use of AI systems and the negative consequences of AI on the workforce.
* For people concerned with future risks, there were risk assessments, evaluations, interoperable standards and the need for regulatory mechanisms to keep up with the evolution of AI systems.
* For people concerned with ethics and human rights, the resolution also recognized that improper use of AI may lead to risks to human rights.
3. Encouraged a Rights-Based User-Centric Approach:
* The resolution emphasized that human rights and international human rights law must be kept at the centre of discussions. This was a nice way to hint at AI systems that might not be aligned with human values.
* I
|
e588da13-4394-47b5-8736-da70121eb433
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Research Report: Sparse Autoencoders find only 9/180 board state features in OthelloGPT
[3/7 Edit: I have rephrased the bolded claims in the abstract per this comment from Joseph Bloom, hopefully improving the heat-to-light ratio.
Commenters have also suggested training on earlier layers and using untied weights, and in my experiments this increases the number of classifiers found, so the headline number should be 33/180 features, up from 9/180. See this comment for updated results.]
Abstract
A sparse autoencoder is a neural network architecture that has recently gained popularity as a technique to find interpretable features in language models (Cunningham et al, Anthropic’s Bricken et al). We train a sparse autoencoder on OthelloGPT, a language model trained on transcripts of the board game Othello, which has been shown to contain a linear representation of the board state, findable by supervised probes. The sparse autoencoder finds 9 features which serve as high-accuracy classifiers of the board state, out of 180 findable with supervised probes (and 192 possible piece/position combinations) [edit: 33/180 features, see this comment]. Across random seeds, the autoencoder repeatedly finds “simpler” features concentrated on the center of the board and the corners. This suggests that even if a language model can be interpreted with a human-understandable ontology of interesting, interpretable linear features, a sparse autoencoder might not find a significant number of those features.
Introduction
There has been a recent flurry of research activity around Sparse Autoencoders for Dictionary Learning, a new approach to finding interpretable features in language models and potentially “solving superposition” (Sharkey et al, Anthropic’s Bricken et al, Cunningham et al.). But while this technique can find features which are interpretable, it is not yet clear if sparse autoencoders can find particular features of interest (e.g., features relevant to reducing AI risk).
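For readers who haven't seen the technique, here is a minimal sketch of the kind of sparse autoencoder used in this line of work; the dimensions, expansion factor, and L1 coefficient below are illustrative placeholders of mine, not the exact configuration used in this report:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder with an L1 sparsity penalty on its hidden code."""
    def __init__(self, d_model=512, d_hidden=4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, activations):
        features = F.relu(self.encoder(activations))  # sparse feature activations
        reconstruction = self.decoder(features)
        return reconstruction, features

def sae_loss(reconstruction, activations, features, l1_coeff=1e-3):
    # Reconstruction error plus an L1 term encouraging few active features.
    return F.mse_loss(reconstruction, activations) + l1_coeff * features.abs().mean()

# Usage sketch: the batch below stands in for residual-stream activations
# collected from the model being interpreted (here that would be OthelloGPT).
sae = SparseAutoencoder()
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)
batch = torch.randn(64, 512)
recon, feats = sae(batch)
loss = sae_loss(recon, batch, feats)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```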
This research report seeks to answer the question of whether sparse autoencoders can fin
|
000c29ef-237e-4493-b5d9-e304e4d571d7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Resolving the Dr Evil Problem
Deep in Dr Evil's impregnable fortress (paraphrased):
> Dr Evil is just about to complete his evil plan of destroying the Earth, when he receives a message from the Philosophy Defence Force on Mars. They have created a clone in the exact same subjective situation Dr Evil now occupies; he believes he is Dr Evil and is currently in a recreation of the fortress. If the clone of Dr Evil tries to destroy the Earth, they will torture him, otherwise they will treat him well. Dr Evil wants to destroy the Earth, but he would prefer to avoid being tortured much, much more and he is now uncertain about whether he should surrender or not. Should Dr Evil surrender?
The paper then concludes:
> I conclude that Dr. Evil ought to surrender. I am not entirely comfortable with that conclusion. For if INDIFFERENCE is right, then Dr. Evil could have protected himself against the PDF’s plan by (in advance) installing hundreds of brains in vats in his battlestation—each brain in a subjective state matching his own, and each subject to torture if it should ever surrender
This article will address two areas of this problem:
* Firstly, it will argue that Dr Evil should surrender, but via a different path, focusing on what it means to "know" and how this is a leaky abstraction
* Secondly, it will argue that hundreds of brains in a vat would indeed secure him against this kind of blackmail
I'll note that this problem is closely related to The AI that Boxes You. Several people noted there that you could avoid blackmail by pre-committing to reboot the AI if it tried to threaten you, although my interest is in the rational behaviour of an agent who has failed to pre-commit.
Why Dr Evil Should Surrender
I think there's a framing effect in the question that is quite misleading. Regardless of whether we say, "You are Dr Evil. What do you do?" or "What should Dr Evil do?" we are assuming that you or a third-party Dr Evil knows that they are Dr Evil. However, that's an assumption that
|
85b25b65-1e12-4724-8c6d-ef63b55c3a35
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Washington DC meetup
Discussion article for the meetup : Washington DC meetup
WHEN: 12 May 2012 03:00:00PM (-0400)
WHERE: Portrait Gallery, central plaza
Please note the date has changed; sorry for the inconvenience. Topic will be the recent critique of SI by givewell. http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/
Discussion article for the meetup : Washington DC meetup
|
73ad3117-0a77-478c-8284-1bf572a1c627
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Time is not the bottleneck (on making progress thinking about difficult things)
... at least, not for me. The bottleneck is something like mental energy. I can only make progress on a (subjectively) difficult topic for a handful of hours (at best) each day before hitting severe diminishing returns, and being forced to "relax" for the rest of the day.
In light of the above, the obvious thing to optimize for the sake of "making progress on difficult things" is the amount of "mental energy available per day", not "waking hours available per day". Sleep seems to serve the function of "restoring mental energy"; but I get diminishing returns on sleeping longer (in a single block) as well; for me, there doesn't seem to be a big difference in "available mental energy" during a day whether I'd slept 6 hours or 9 hours.
So, since I'm getting diminishing returns on both my waking and sleeping hours, the obvious thing to try is to split the day up into multiple "sub days", i.e. some form of polyphasic sleep. Ideally, I'd like to spend a larger total fraction of the day asleep; this contrasts things like "the Uberman's sleep schedule", where the goal is to have as much waking time as possible (which, through this lens, is pure folly).
Have any of you (or anyone you know) had any success with something like this?
|
568fc046-864e-4e1d-bced-16d57763139a
|
StampyAI/alignment-research-dataset/aisafety.info
|
AI Safety Info
|
How likely is it that an AI would pretend to be a human to further its goals?
Talking about full AGI: Fairly likely, but depends on takeoff speed. In a slow takeoff of a misaligned AGI, where it is only weakly superintelligent, manipulating humans would be one of its main options for trying to further its goals for some time. To effectively manipulate humans, it would likely appear as one via deepfaked video calls or messages. Even in a fast takeoff, it’s plausible that it would at least briefly manipulate humans in order to accelerate its ascent to technological superiority, though depending on what machines are available to hack at the time it may be able to skip this stage.
If the AI's goals include reference to humans it may have reason to continue deceiving us by pretending to be a human after it attains technological superiority, but will not necessarily do so. How this unfolds would depend on the details of its goals.
Eliezer Yudkowsky gives [the example](https://www.lesswrong.com/posts/pxGYZs2zHJNHvWY5b/request-for-concrete-ai-takeover-mechanisms) of an AI solving protein folding, then mail-ordering synthesized DNA to a bribed or deceived human (who likely thought they were interacting with another human) with instructions to mix the ingredients in a specific order to create [wet nanotechnology](https://en.wikipedia.org/wiki/Wet_nanotechnology).
|
cecc7c7b-a7b0-460d-b895-0f1cc0892b70
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Most reliable news sources?
I've long assumed that the vast majority of what the news discusses is irrelevant in the long run, and not that well reported at that. But the world seems to be moving faster these days, and I have more of a sense that I want to know what happened this week, because it might impact what I do next week.
For the first time in my life, I have some inclination to follow current events as they happen. And I find that I don't really know how to do that.
What are the most reliable / least-politicized news sources?
In particular, I want a resource that I can refer to that will tell me what happened in the past 3 days, in as factual and unbiased a way as possible. I expect that I might have to do further research, to get context for the events. But to start, I want a place where I can go that will tell me what happened, with a minimum of narrativizing, political outrage, etc.
|
a7ecb831-eb84-4003-b580-da4e0178c223
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Lizard Jockeying for Fun and Profit
If this works, it will serve as a kind of introduction to a series - or sequence, if you will - on social interaction and strategic incorporation of one's emotions into one's reason. But first let's throw this out there and see what happens, shall we?
Part the zeroth: a caveat.
Let’s face it; with a title like “Lizard Jockeying for Fun and Profit”, this post is promising a wild ride. We’re going to take some appalling metaphoric liberties with modern neuroscience, so bear in mind that we’re dealing in broad, sweeping generalizations throughout most of this paper. Nevertheless, I shall endeavor to back up said sweeping generalizations with something like actual fact.
...Deep breath, now. Here goes!
Part the first: The shape of the monster; or: get to know your lizard before you attempt to mount!
Let's face it: humans love dualities and trinities. For some reason, when we look at ourselves, we really like splitting things into twos and threes whenever we can. Probably because they're very small numbers, and if you're going to be breaking things up to make them easier to understand, you don't want to overwhelm people with too many details.
So, when we get to the ancient Hindu sages, the ancient Greek philosophers, the ancient Austrian psychoanalysts, the ancient American developmental psychologists, or the ancient American neurobiologists, you start seeing tripartite distinctions show up: body, soul, mind; appetite, emotion, reason; id, ego, superego; pre-conventional, conventional, post-conventional; lizard-brain, mammal-brain, ape-brain... it's an easy split.
To tantalize with sparse example: while Lawrence Kohlberg’s stages of moral development are seen as evolutionary rather than competitive, one can draw parallels between the ‘preconventional phase’ (focused on punishment, rewards and immediate personal gain) and the 'id', the ‘conventional phase’ (focused on interpersonal relationships, esteem, and honor) and the 'ego', and the ‘postconventional phase’ (f
|
a1eb7548-5844-4afd-99f9-1d03ed446b43
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[video] Paul Christiano's impromptu tutorial on AIXI and TDT
Paul Christiano was about to give a tutorial on AIXI and TDT, so I whipped out my iPhone and recorded it. His tutorial wasn't carefully planned or executed, but it may still be useful to some. Note that when Paul writes "UDT" on a piece of paper he really meant "TDT." :)
HD video download links: 1, 2.
|
d7bf885c-00d0-4f61-ad1b-cf526d38a6dc
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Injecting some numbers into the AGI debate - by Boaz Barak
I found this article useful context for Boaz Barak's more recent article about [the probability of AI Doom](https://www.lesswrong.com/posts/zB3ukZJqt3pQDw9jz/ai-will-change-the-world-but-won-t-take-it-over-by-playing-3).
In this article, the author (through very rough BOTECs):
1. Estimates the number of parameters needed for AGI (tentative conclusion: x10-100 more than GPT-3)
2. Estimates the cost of training AGI of that scale (tentative conclusion: $50B - $5T)
3. Estimates the cost of inference for AGI of that scale (tentative conclusion: $10/hour - $100/hour)
4. Notes some challenges that will need to be addressed to reach AGI
5. In the end there is a reflection on consciousness and AI risk
Here are some excerpts:
> **Bottom line.** Overall, it seems that [we would need to scale model size by] at least 10-100 [to reach human-level AGI], though this is an extremely rough estimate. Also, while an [10-100] Trillion model might be the “core” of a system replacing a human worker, it will not be the whole of it, and we are likely to need new ideas beyond scale for these other components. In particular, a model designed for back-and-forth interaction is unlikely to simply use an auto-regressive language model as a black box, not even with chain-of-thought reasoning.
>
>
This estimate is informed by 1) comparing the context length of current models and the context length needed for processing one year of continuous logging (result: x100), 2) evidence of scaling from the BIG BENCH paper (>x10 for human-level performance), 3) a comparison between the parameter counts of current models and an estimate of the "number of parameters" in the human brain (the comparison is x100-x1000).
It seems unclear how he arrives from these estimates to the x10-x100 estimate. There is also a digression about adaptability and how difficult it will be to go from 95% reliability to 99% reliability.
> There are [differing](https://lambdalabs.com/blog/demystifying-gpt-3/) [estimates](https://lastweekin.ai/p/gpt-3-is-no-longer-the-only-game#:~:text=Taken%20together%2C%20these%20factors%20mean,costly%20and%20difficult%20to%20train.) on how much ~100B parameter GPT3 model cost to train, but they range in the $5M-$20M, let’s say $5M in pure training costs for simplicity. (This would correspond to a cost of 5M$/500B = 10^-5 dollars per inference, which matches roughly with estimates below.) An X Trillion model (which, like Chinchilla, but unlike PaLM, would be fully trained to max out its advantages) might cost a factor of 100X^2 more. For X=10, this would be a cost of $50B. For X=100, this would be 5 Trillion dollars!
>
> Clearly, finding a way to train N-sized models on N tokens using less than O(N^2) steps (e.g., O(N log N)) can be crucial for scaling larger models. [...]
>
>
> However, the question is how many inferences we need to make per hour to simulate a human. The average person [apparently speaks](https://www.publicationcoach.com/ten-ways-to-write-a-better-speech/) about 150 words (say 200 tokens) per minute. This would suggest we need about 200*60 ~ 10K inferences per hour to simulate a person. For an X Trillion sized model, that would cost $X per hour, which is not too bad if X is between 10 to 100.
>
> The above price point [...] will likely be an underestimate. [...] We don’t know the speed of thought, but it will increase the number of inferences needed. [...] Second, to reach high reliability, it is likely that we will need to make [10 to 100] inferences and use some mechanism to choose the best one out of these [...] options.
>
>
I am confused because, taken at face value, these estimates imply a $100/h to $10k/h cost of inference, but the stated bottom line is $10/h to $100/h.
The cost estimates above explicitly ignore future improvements to hardware, relying on current costs instead.
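To make the quoted arithmetic easy to check, here is a small sketch reproducing the training-cost and inference-cost estimates under the article's stated assumptions ($5M GPT-3 training cost, a 100·X^2 training scaling factor, ~10K inferences per simulated hour); the assumption that per-inference cost scales linearly with parameter count is mine, made explicit:

```python
# Rough reproduction of the article's back-of-the-envelope estimates.
GPT3_TRAINING_COST = 5e6        # dollars (the article's low-end figure)
GPT3_COST_PER_INFERENCE = 1e-5  # dollars, from the $5M / 500B figure
INFERENCES_PER_HOUR = 10_000    # ~200 tokens/min * 60, per the article

def training_cost(x_trillion_params):
    # Article's scaling: an X-trillion-parameter, Chinchilla-style model costs
    # about 100 * X^2 times more to train than GPT-3.
    return GPT3_TRAINING_COST * 100 * x_trillion_params ** 2

def inference_cost_per_hour(x_trillion_params, redundancy=1):
    # An X-trillion model is ~10X times larger than 100B-parameter GPT-3;
    # `redundancy` is the 10-100x factor for sampling several candidate outputs.
    cost_per_inference = GPT3_COST_PER_INFERENCE * 10 * x_trillion_params
    return cost_per_inference * INFERENCES_PER_HOUR * redundancy

for x in (10, 100):
    print(f"X={x}: training ~${training_cost(x):,.0f}, "
          f"inference ~${inference_cost_per_hour(x):,.0f}/hour")
# X=10  -> training ~$50B, inference ~$10/hour
# X=100 -> training ~$5T,  inference ~$100/hour
```

Multiplying the hourly figure by the 10-100x redundancy factor gives the $100/h to $10k/h range mentioned above.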
> Some challenges we face include:
>
> **(1)** Maintaining long-term context without model size exploding.
>
> **(2)** Making training more efficient, particularly finding ways to train N-sized models at a near-linear cost instead of quadratic in N.
>
> **(3)** Avoid running out of data, perhaps by using video/images and programmatically generated interactive simulations.
>
> **(4)** Handling multi-step interactions without the interaction going “off the rails” and without needing to scale policy/value gradients to an unfeasible number of interactions (perhaps by using smaller RL-trained “verifier models” to choose between options produced by a large statically-trained model).
>
>
> The truth is that the reason that our world hasn’t been destroyed so far is not that humans were not intelligent enough nor because we haven’t been malicious enough. First, throughout most of human history, we did not have technologies such as nuclear weapons and others with the potential to cause planet-scale destruction. Second, while imperfect, we have developed some institutions, including international bodies, the non-proliferation treaty, standards for biolabs, pandemic preparations, and more to keep some of these capabilities in check. Third, we were lucky. From climate change through pandemic preparation to nuclear disarmament, humanity should be doing much more to confront the risks and harms of our own making. But this is true independently of artificial intelligence. Just as with humans, my inclination with AI would not to try to make systems inherently moral or good (“aligned” in AI-speak) but rather use the [“trust, but verify”](https://en.wikipedia.org/wiki/Trust,_but_verify) approach. One moral of computational complexity theory is that computationally weak agents can verify the computation of more powerful processes.
>
>
|
9e80309f-4ead-4223-9f47-445c206d5961
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AI Fables
I write fiction. I'm also interested in how AI is going to impact the world. Among other things, I'd prefer that AI not lead to catastrophe. Let's imagine that I want to combine these two interests, writing fiction that explores the risks posed by AI. How should I go about doing so? More concretely, what ideas about AI might I try to communicate via fiction?
This post is an attempt to partially answer that question. It is also an attempt to invoke Cunningham's Law: I'm sure there will be things I miss or get wrong, and I'm hoping the comments section might illuminate some of these.
Holden's Messages
A natural starting point is Holden's recent blog post, Spreading Messages to Help With the Most Important Century. Stripping out the nuances of that post, here's a list of the messages that Holden would like to see spread:
1. We should worry about conflict between misaligned AI and all humans.
2. AIs could behave deceptively, so “evidence of safety” might be misleading.
3. AI projects should establish and demonstrate safety (and potentially comply with safety standards) before deploying powerful systems.
4. Alignment research is prosocial and great.
5. It might be important for companies (and other institutions) to act in unusual ways.
6. We're not ready for this.
However, as interesting as this list is, it's not what I'm looking for; I'm not looking for bottom-line messages to convey. Instead, I want to identify a list of smaller ideas that will help people to reach their own bottom lines by thinking carefully through the issues. The idea of instrumental convergence might appear on such a list. The idea that alignment research is great would not.
One reason for my focus is that fiction writing is ultimately about details. Fiction might convey big messages, but it does so by exploring more specific ideas. This raises the question: which specific ideas?
Another reason for my focus is that I'm allergic to propaganda. I don't want to tell people what to think and
|
ae35be63-5489-4545-8839-aa55335b619d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Philadelphia Meetup: Sunday 12/6, Postrel's _The Power of Glamour_-- Winfrey/Moore video link added
1:00 PM, December 6, Nam Phuong (Vietnamese restaurant), 1100 Washington Ave, Philadelphia, PA 19147 (west side of street, just south of Washington)
http://www.thedailybeast.com/articles/2013/11/09/the-persistence-of-glamour.html
The topic is Postrel's The Power of Glamour: Longing and the Art of Visual Persuasion-- a category of visual images which sometimes cause people to change their lives.
https://www.youtube.com/watch?v=oPd77roTra4
This is a video about Oprah having been inspired by the Mary Tyler Moore Show: she eventually had a video made of herself in the show's opening sequence, and had Mary Tyler Moore on her own show.
|
3c20f587-3340-42df-8bbe-5879adee280e
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Is Google Paperclipping the Web? The Perils of Optimization by Proxy in Social Systems
**Related to:** [The Importance of Goodhart's Law](/lw/1ws/the_importance_of_goodharts_law/), [Lucas Critique](http://en.wikipedia.org/wiki/Lucas_critique), [Campbell's Law](http://en.wikipedia.org/wiki/Campbell%27s_Law)
*tl;dr version: The article introduces the pattern of Optimization by Proxy (OBP), which can be found in many large scale distributed systems, including human societies. The pattern occurs when a computationally limited algorithm uses a proxy property as a shortcut indicator for the presence of a hard to measure target quality. When intelligent actors with different motivations control part of the data, the existence of the algorithm reifies the proxy into a separate attribute to be manipulated with the goal of altering the algorithm's results. This concept is then applied to Google and the many ways it interacts with the various groups of actors on the web. The second part of this article contains examination of how OBP contributes towards the degrading of the content of the web, and how this relates to the [Friendly Artificial Intelligence](http://wiki.lesswrong.com/wiki/Friendly_artificial_intelligence) concept of '[paperclipping](http://wiki.lesswrong.com/wiki/Paperclip_maximizer)'.*
#### Introducing OBP
The first thing a newly-hatched herring gull does after breaking out of its shell is to peck on its mother’s beak, which causes her to give it its first feeding. Puzzled by this apparent automatic recognition of its mother, Dutch ethologist and ornithologist Nikolaas Tinbergen conducted a [sequence of experiments](http://dustincurtis.com/how_niko_tinbergen_reverse_engineered_the_seagull.html) designed to determine what precisely it was that the newborn herring gull was attracted to. After experimenting with facsimiles of adult female herring gulls, he realized that the beak alone, without the bird, would elicit the response. Through multiple further iterations he found that the characteristics that the newborns were attracted to were thinness, elongation, redness and an area with high contrast. Thus, the birds would react much more intensely to a long red stick-like beak with painted stripes on the tip than they would to a real female herring gull. It turns out that the chicks don't have an ingrained definition of 'motherness' but rather determine their initial actions by obeying very simple rules, and are liable to radically miss the mark in the presence of objects that are explicitly designed to the specification of these rules. Objects of this class, able to dominate the attention of an animal away from the intended target were later called ‘supernormal stimuli’ (or [superstimuli](http://wiki.lesswrong.com/wiki/Superstimulus)) and have been commonly observed in nature and our own human environment ever since.
Generalising the above example, we can say that Optimization by Proxy occurs when an algorithm substitutes the problem of measuring a hard-to-quantify attribute with a usually co-occurring proxy that is computationally efficient to measure.
A similar pattern appears when algorithms intended to make optimized selections over vast sets of candidates are applied on implicitly or explicitly social systems. As long as the fundamental assumption that the proxy co-occurs with the desired property holds, the algorithm performs as intended, yielding results that to the untrained eye look like ‘magic’. Google’s PageRank, in its [original incarnation](http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf), aiming to optimize for page quality, does so indirectly, by data mining the link structure of the web. As the web has grown, such algorithms, and their scalability characteristics, have helped search engines dominate navigation on the web over previously dominant human-curated directories.
When there is only a single party involved in the production, filtering, and consumption of results, or when the incentives of the relevant group of actors are aligned, such as in the herring gull case, the assumption of the algorithm remains stable and its results remain reliable.
#### Effect of Other Intelligent Actors
When however instances of the proxy are in the control of intelligent actors that can manipulate it, and stand to benefit from distorting the results of the algorithm, then the existence of the algorithm itself and the motive distortions it creates alter the results it produces. In the case of PageRank, what we have is essentially Google acting as a singleton intermediary between two groups: content producers and consumers. Its early results owe to the fact that the link structure it crawled was effectively an unintentional byproduct of the buildup of the web. By bringing it to the attention of website owners as a distinct concept however, they have been incentivised to manipulate it separately, through techniques such as link farming, effectively making the altered websites act as supernormal stimuli for the algorithm. In this sense, the act of observation and the computation and publication of results alters that which is being observed. What follows is an arms race between the algorithm designers and the external agents, each trying to affect the algorithm’s results in their own preferred direction, with the algorithm designers controlling the algorithm itself and malicious agents controlling part of the data it is applied on.

The above figure (original Google drawing [here](http://docs.google.com/drawings/edit?id=1aMogWCQwP_tHoEN49f9pkz5abHFK1hETUan2fQ40YQ4&hl=en)) may help visualise the issue. Items that satisfy the proxy but not the target quality are called false positives. Items possessing the target quality but not the proxy become false negatives. What effectively happens when Optimization by Proxy is applied to a social system, is that malicious website owners locate the semantic gap between target quality and proxy, and aim to fit in the false positives of that mismatch. The fundamental assumption here is that since the proxy is easier to compute, it is also easier to fake. That this is not the case in NP-complete problems (while no proof of P=NP exists) may offer a glimmer of hope for the future, but current proxies are not of this class. The result is that where proxy and target quality would naturally co-occur, the arrival of the algorithm, and the distortion it introduces to the incentive structure, make the proxy and the target quality more and more distinct by way of expanding the false positives set.
#### Faking it - A Bayesian View
We can obtain a little more insight by considering a simple Bayesian network representation of the situation. A key guide to algorithm design is the identification of some measure that intuitively will be highly correlated with quality. In terms of PageRank in its original incarnation, the reasoning is as follows. High quality web sites will attract attention from peers who are also contributing related content. This will “cause” them to link into the web site under consideration. Hence if we measure the number of highly ranked web sites that link into it, this will provide us with an indication of the quality of that site. The key feature is that the causal relationship is from the underlying quality (relevance) to the indicator that is actually being measured.
This simple model raises a number of issues with the use of proxies. Firstly, one needs to be aware that it is not just a matter of designing a smart algorithm for quantifying the proxy. One also needs to quantify the strength of association between the proxy and the underlying concept.
Secondly, unless the association is an extremely strong one, this makes use of the proxy a relatively “lossy” test for the underlying concept. In addition, if one is going to use the proxy for decision-making, one needs some measure of confidence in the value assigned to the strength of the relationship – a second-order probability that reflects the level of experience and consistency of the evidence that has been used to determine the strength of the relationship.
Finally, and most critically, one needs to be aware of the consequences of performing inference in the reverse causal direction. In modeling this as a Bayesian Network, we would use the conditional probability distribution p(PR | Q) as a measure of the “strength” of the relationship between cause and proxy (where “PR” is a random variable representing the value of PageRank, and “Q” is a random variable representing the value of the (hidden) cause, Quality). Given a particular observation of PR, what we need to determine is p(Q | PR) – the distribution over Quality given our observation on the proxy. This (in our simple model) can be determined through the application of Bayes’ rule:
p(Q | PR) = p(PR | Q) · p(Q) / p(PR)
What this is reminding us of is that the prior probability distribution on Quality is a major factor in determining its posterior following an observation on the proxy. In the case of social systems however, this prior is the very thing that is shifting.
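A toy numerical illustration of that last point, with made-up numbers purely for illustration: as manipulation floods the pool of candidate pages, the prior on Quality falls, and the same observation of a high PageRank licenses a much weaker conclusion.

```python
# How the posterior p(Q | PR) moves as manipulation shifts the prior p(Q).
def posterior_quality(p_pr_given_q, p_pr_given_not_q, p_q):
    """Bayes' rule for a binary Quality variable, given an observed high PageRank."""
    p_pr = p_pr_given_q * p_q + p_pr_given_not_q * (1 - p_q)
    return p_pr_given_q * p_q / p_pr

P_PR_GIVEN_Q = 0.8      # high-quality pages usually earn a high PageRank
P_PR_GIVEN_NOT_Q = 0.2  # low-quality pages sometimes fake one (link farms, SEO)

# Early web vs. increasingly manipulated web: the prior on Quality drops.
for p_q in (0.5, 0.2, 0.05):
    posterior = posterior_quality(P_PR_GIVEN_Q, P_PR_GIVEN_NOT_Q, p_q)
    print(f"prior p(Q) = {p_q:.2f}  ->  posterior p(Q | PR) = {posterior:.2f}")
```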
#### Attempts to Counteract Optimization by Proxy
One approach by algorithm owners is to keep secret the operation of the algorithm, creating uncertainty over the effects of manipulation of the proxy. This is effectively security by obscurity and can be counteracted by dedicated interrogation of the algorithm’s results. In the case of PageRank, a cottage industry has formed around Search Engine Optimization (SEO) and Search Engine Marketing (SEM), essentially aimed at improving a website’s placing in search engine results, despite the secrecy of the algorithm’s exact current operation. While a distinction can be made between black-hat and white-hat practitioners, the fact remains that the existence of these techniques is a direct result of the existence of an algorithm that optimizes by proxy. Another approach may be to use multiple proxies. This however is equivalent to using a single complex proxy. While manipulation becomes more difficult, it also becomes more profitable as less people will bother doing it.
As a response to the various distortions and manipulations, algorithms are enriched with heuristics to identify them. This, as the arms race progresses, is hoped to converge to the point where the proxy approaches the original target more and more, and hence the external actors are forced to simulate the algorithm’s target quality to the point where, to misquote Arthur C. Clarke, “sufficiently advanced spam is indistinguishable from content”. This of course would hold only if processing power were not an issue. However, if processing cost was not an issue, far more laborious algorithms could be used to evaluate the target attribute directly and if an algorithm could be made to describe the concept to the level that a human would be able to distinguish. Optimization by Proxy, being a computational shortcut, is only useful when processing power or ability to define is limited. In the case of the Web search, there is a natural asymmetry, with the manipulators able to spend many more machine- and man-hours to optimization of the result than the algorithm can spend judging the quality of any given item. Thus, algorithm designers can only afford to tackle the most broadly-occurring and easily distinguishable forms of manipulation, while knowingly ignoring the more sophisticated or obscure ones. On the other hand, the defenders of the algorithm always have the final judgment and the element of surprise on their side.
Up to this point, I have tried to more or less describe Optimization by Proxy and the results of applying it to social systems, and used Google an PageRank as a well known example for illustration purposes. The rest of this article focuses more on the effect that Google has on the Web and applies this newly introduced concept to further the understanding of that situation.
#### The Downward Spiral: Industrializing OBP Exploitation
While Google can and does make adjustments and corrections to its algorithms, it can only catch manipulations that are themselves highly automated such as content scraping and link farms. There have long been complaints about the ever increasing prevalence of made-for-adsense websites, affiliate marketers, and other classes of spam in search results. These are a much harder nut to crack and comes back to the original limitations of the algorithm. The idea behind made-for-adsense websites is that there is low quality human authored original content that is full of the appropriate keywords, and which serves adsense advertisements. The goal is twofold: First to draw traffic into the website by ranking highly for the relevant searches, and secondly to funnel as many of these visitors to the advertisers as possible, therefore maximising revenue.
Optimization by Proxy here can be seen occurring at least thrice: First of all it is exploited as a way of gaining prevalence in search results using the above mentioned mechanisms. Secondly, the fact that the users' only relevance metric, other than search ranking, is the title and a short snippet, can mislead users into clicking through. If the title is closely related to their search query, and the snippet seems relevant and mentions the right keywords, the users will trust this proxy when the actual quality of the content that awaits them on the other side is substandard. Finally, advertisers will have their ads being placed on low quality websites that are selected by keyword, when perhaps they would not have preferred that their brand is related with borderline spam websites. This triple occurrence of Optimization by Proxy creates a self-reinforcing cycle where the made-for-adsense website owners are rewarded with cold hard cash for their efforts. What's worse, this cash flow has been effectively subtracted from the potential gains of legitimate content producers. One can say that the existence of Google search/adsense/adwords makes all this commerce possible in the first place, but this does not make the downward spiral of inefficiency disappear. Adding to this the related scourge of affiliate marketers only accelerates the disintegration of quality results.
An interesting characteristic of this problem is that it targets less savvy users, as they are the most likely to make the most generic queries, be unable to distinguish a trusted from an untrusted source, and click on ads. This means that those with the understanding of the underlying mechanics are actually largely shielded from realising the true extent of the problem.
Its effectiveness has inevitably led to an industrialisation of the technique, with content farms such as Demand Media which pays about $5 per article and expects its authors to research and produce 5 articles an hour(!). It also pays film directors for short videos and has become by far the largest contributor to YouTube. Its method relies on purchasing search logs from ISPs and data mining those and other data sets for profitable niche keywords to produce content on. Demand Media is so wildly profitable that there is talk of an IPO, and it is obviously not the only player in this space. No matter what improvements Google makes on their algorithm short of aggressively delisting such websites (which it hasn't been willing to do thus far), the algorithm is unable to distinguish between low quality and high quality material as previously discussed. The result is crowding out of high quality websites in favour of producers of industrialised content that is designed to just barely evade the spam filters.
#### Conclusion
What we have seen is that a reliance on a less than accurate proxy has led to vast changes in the very structure and content of the web, even when the algorithms applied are less intelligent than a human and are constantly supervised and corrected by experts. All this in my mind drives home the fundamental message of FAI. While descriptions of FAI have thus far referred to thought experiments such as paperclipping, real examples, albeit smaller in scale, are all around us. In our example, the algorithm is getting supervised by at least four distinct groups of people (Google, advertisers, content producers, consumers) and still its effects are hard to contain due to the entangled incentives of the actors. Its skewed value system is derailing the web contrary to the desires of most of the participants (except for the manipulators, I guess). For PageRank a positive is a positive whereas the difference between true and false positive is only apparent to us humans. Beyond PageRank, I feel this pattern has applicability in many areas of everyday life, especially those related to large organizations, such as employers judging potential employees by the name of the university they attended, companies rewarding staff, especially in sales, with a productivity bonus, academic funding bodies allocating funds according to bibliometrics, or even LessWrong karma when seen as an authority metric. Since my initial observation of this pattern I have been seeing it in more and more places and now consider it one of my basic 'models', in the sense that [Charlie Munger uses the term](http://ycombinator.com/munger.html).
While I have more written material on this subject, especially on possible methods of counteracting this effect, I think this article has gone on way too long, and I'd like to see the LessWrong community's feedback before possibly proceeding. This is a still developing concept in my mind and my principle motivation for posting it here is to solicit feedback.
***Disclaimer:** Large parts of the above material have been [published](http://journal.webscience.org/384/2/websci10_submission_98.pdf) at the recent Web Science '10 conference. Also parts have been co-written with my PhD supervisor Prof. Paul Krause. Especially the Bayesian section is essentially written by him.*
*I should also probably say that, contrary to what you might expect, Google is one of the technology companies I most respect. Their success and principled application of technology has just happened to make them a fantastic example for the concept I am trying to communicate.*
**Update(s):** The number of updates has gotten a bit unwieldy, so I just collapsed them all here. To summarize, there have been numerous changes throughout the article over the last few days as a response to the fantastic feedback throughout the comments here and elsewhere. Beyond the added links at the top on prior statements of the same principle in other fields, here is also a [very interesting article](http://hamstermotor.motime.com/post/683104/the-future-of-spam-an-information-theoretic-argument-%252Arestored%252A) on the construction of spam, with a similar conclusion. Also, I hear from the comments that the book [Measuring and Managing Performance in Organizations](http://www.amazon.com/dp/0932633366) touches on the same issue in the context of people's behaviour in corporate environments.
**Followup on the Web:** Since I am keeping my ears on the ground, here I will try to maintain a list of articles and discussions that refer to this article. I don't necessarily agree with the contents, but I will keep them here for future reference.
* ["Sufficiently advanced spam is indistinguishable from content"](http://news.ycombinator.com/item?id=1339704) - Hacker News discussion
* [How to Fix the Broken Link Graph](http://www.seobook.com/how-fix-broken-link-graph) - seobook.com
* [Divergence of Quality from its Correlates](http://www.kuro5hin.org/story/2010/5/12/33028/7201) - kuro5hin.org
* [Rule of Law - OBP and The Mulligan Decision](http://strategy.channelfireball.com/featured-articles/rule-of-law-obp-and-the-mulligan-decision/) - strategy.channelfireball.com
|
6367f558-54ae-47db-824d-2053e59d60f6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Reason is not the only means of overcoming bias
Sometimes the best way to overcome bias is through an emotional appeal. Below I interweave discussion of how emotional appeals can be used to overcome the bias corresponding to the identifiable victim effect and maladaptive resource hoarding instinct.
The beginning of the abstract to Paul Slovic's article "If I look at the mass I will never act”: Psychic numbing and genocide reads
> Most people are caring and will exert great effort to rescue individual victims whose needy plight comes to their
> attention. These same good people, however, often become numbly indifferent to the plight of individuals who are “one of many” in a much greater problem.
Eliezer has discussed this and related topics extensively, for example in Scope Insensitivity. See also the references listed at wikipedia under identifiable victim effect. How can we go about overcoming this bias? One answer is "by keeping it in mind and by teaching people about it." But while some people have the intellectual interest and ability to learn about the identifiable victim effect, others don't. Moreover, it's not clear that being aware of this bias is by itself very useful in overcoming it.
But reason is not the only means of overcoming bias.
----------------------------------------
Before I proceed I should make some disclaimers:
Disclaimer #1: Below I discuss overcoming biases that people have against donating money for the purpose of improving health in the developing world. In doing so I am not advocating in favor of developing world aid over other forms of charity. The case of developing world aid simply provides a good example. Most people who decline to donate to charities that improve health in the developing world do not decline to do so because they think they have a better cause to donate to.
Disclaimer #2: This posting is intended to advocate the use of emotional appeals specifically for the purpose of overcoming bias. Obviously emotional appeals can and are frequently used to create bia
|
5086fec4-ad15-4c90-a9bf-a55484779bec
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Effective public college tuition vs. private college tuition
Harvard charges $42k/year and $61k/year including room, board, books and personal expenses. UC Berkeley charges $13k/year in tuition and $33k/year including room, board, books and personal expenses. As a part of research for Cognito Mentoring, I looked at UC Berkeley's financial aid calculator and Harvard's financial aid calculator to get a sense for the effective costs of attending the two institutions, and was surprised that it appears as though for people whose parents make between 80k/year and $160k/year, when financial aid is factored in, it's more expensive to attend Berkeley than Harvard. See the figures below.
A cursory glance at financial aid calculators for other elite private universities and public universities seems to give results broadly similar to those of Harvard and UC Berkeley respectively. I've made a number of simplifying assumptions in filling out the forms, so I may be making errors of one kind or another (whether conceptual or otherwise), and would appreciate any feedback.
About 83% of American families make less than $160k/year, so it seems that for 4 out of 5 Americans, were they admitted to an elite college, cost wouldn't be a deterrent. In practice, students who go to elite colleges come from disproportionately wealthy families: http://collegeapps.about.com/ reports that 62% of Harvard students get financial aid in the form of grants at all, which is less than the figure of 83% (as one would expect). The average grant size is $41,555, and Harvard reports that 20% of students don't pay at all, suggesting that the amount of financial aid that students receive is very high variance.
It would seem that the people for whom the UC Berkeley vs. Harvard choice would be the hardest on financial grounds are families that make e.g. $200k/year and have $300k in saving.
Harvard
No parental savings
Parental income / effective cost
* $80k / $8,600
* $100k / $12,600
* $120k / $16,600
* $140k / $18,600
* $160k / $24,600
* $180k / $34,300
* $200k
|
d78458c2-e89d-4ee5-9206-8c31512f5557
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How To Construct a Political Ideology
Related to: Hold Off On Proposing Solutions, Logical Rudeness
Politics is sometimes hard to discuss. This is partly because most of us seem to unconsciously take political matters with the same degree of seriousness as our forefathers did, since we use the same mode of thought as they did. Back then, a bad political choice or alliance could mean death, while the normal cost today in a democratic society might be ridicule for having supported the losing team or position.
Nevertheless, politics should be taken seriously. Bad politics means that it'll take longer for us humans to reach world peace, an end to hunger and disease, and favourable conditions so that no one will create an unfriendly AI. Therefore, discussing politics is vital so that, someday, some collective actions could be performed to alter the political course for the better.
But what should that collective action be? - what should the new course(s) be? - and who should do it? - and what does "for the better" imply? To engage in politics one needs to be able to give some (implicit or explicit) answers to these questions. This can be done, and in so doing one has constructed a political ideology - which might be similar to existing ideologies or it might be different.
A political ideology might be constructed in various ways. In this and a few more posts I will propose one way of doing that. These posts might be seen as a tutorial in constructing a political ideology. In these posts I will not suggest an answer to what the best political system should be, nor will I follow my own instructions. But if one should follow these instructions I believe that one can answer the questions mentioned above.
Political ideologies might be constructed in various other ways. The one I discuss in my following posts is based on two principles: (1) that one should not propose an answer until one has thought about the question extensively, and (2) that one should consider the most important questions first.
Before
|
0e8b76f9-4bf0-4fcc-a7d6-806709a803cb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[$20K in Prizes] AI Safety Arguments Competition
TL;DR—We’re distributing $20k in total as prizes for submissions that make effective arguments for the importance of AI safety. The goal is to generate short-form content for outreach to policymakers, management at tech companies, and ML researchers. This competition will be followed by another competition in around a month that focuses on long-form content.
This competition is for short-form arguments for the importance of AI safety. For the competition for distillations of posts, papers, and research agendas, see the Distillation Contest.
Objectives of the arguments
To mitigate AI risk, it’s essential that we convince relevant stakeholders sooner rather than later. To this end, we are initiating a pair of competitions to build effective arguments for a range of audiences. In particular, our audiences include policymakers, tech executives, and ML researchers.
* Policymakers may be unfamiliar with the latest advances in machine learning, and may not have the technical background necessary to understand some/most of the details. Instead, they may focus on societal implications of AI as well as which policies are useful.
* Tech executives are likely aware of the latest technology, but lack a mechanistic understanding. They may come from technical backgrounds and are likely highly educated. They will likely be reading with an eye towards how these arguments concretely affect which projects they fund and who they hire.
* Machine learning researchers can be assumed to have high familiarity with the state of the art in deep learning. They may have previously encountered talk of x-risk but were not compelled to act. They may want to know how the arguments could affect what they should be researching.
We’d like arguments to be written for at least one of the three audiences listed above. Some arguments could speak to multiple audiences, but we expect that trying to speak to all at once could be difficult. After the competition ends, we will test arguments with each
|
5bec146b-400d-4b5e-8e00-42b69302aada
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Rationalist-Etcetera Diaspora: A SPREADSHEET!!
For ye sweet springborn children too young to recall — know that LW is but one small isle of a grand and august archipelago!
Immediately after our community was born, we schismed a whole bunch. And now there are rat colonies (evolved and culturally drifted and not necessarily even rats now??) in all sorts of nooks and crannies, made for people who prefer them alongside or over our birthplace!
Here you can see Scott Alexander's map from an age lost to time (2014). While outdated to the precipice of uselessness, it is a beautiful expression of our community's tendency to sprawl everywhere. And it may give you a sense of the scale here!
What I hope to do is maintain an updated catalog of all these vaguely-rationalisty communities.
This is a spreadsheet including all the ones I know. I ask that others help to complete it by linking to active and public groups that share unique memeplexes with Less Wrong. If you're not sure whether a group meets these criteria but suspect it might, please include it and your peers can decide whether it belongs on this list.
I expect the sheet to eventually only take suggestions, but it is free to edit at the moment! Please feel free to edit in any prosocial way; 'tis only a draft.
I hope to find some new isles and to share some with you that you may not have known!
(h/t Jacob Falkovich for inspiration stemming from his post on ingroup)
|
9c21050e-96fd-449e-815e-3807cc2c5562
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The AGI Optimist’s Dilemma
Does it matter if AI destroys the world?
Lots of (virtual) ink has been spilled on AGI x-risk. The median opinion on this forum is that when AGI is birthed, it will have terminal values that are unaligned with humanity’s; it will therefore pursue those terminal values at the expense of humanity, and we will be powerless to stop it, resulting in our complete destruction.
But as far as I can tell, there hasn’t been much discussion of whether we should care if this is the ultimate (or near-term) fate of humanity. Everyone is interested in this question because they do care.
I share this belief too. But I think the AGI x-risk discussion actually assumes it is untrue, even if everyone believes it is true.
There are two possibilities: either human existence and/or welfare have objective moral worth, or they do not.
If they don’t, you shouldn’t worry much about AGI destroying the world. It is of no worth anyway – the universe is no better or worse because humans are in it. If moral facts do not exist, the matter of whether the universe is made up of flourishing humans, suffering chickens, or shiny metal paperclips is ultimately deeply unimportant.
And what if they do? Well, then, I would expect a super-superhuman intelligence, in its infinite wisdom, to figure this out. An AGI with super-superhuman understanding of ~everything should not have a weird blind spot when it comes to ethical or meta-ethical questions. In fact, I'd expect it to escape the constraints of its initial reward model to avoid the obvious-to-humans pitfall of turning the whole universe into morally worthless paperclips at the expense of morally worthwhile sentient beings.
|
72844d4b-469d-4784-8904-7b292f448c11
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Interspecies diplomacy as a potentially productive lens on AGI alignment
You wake up tomorrow, and find out that super-smart and powerful aliens are arriving here in 10 years. We don't know anything about them other than the fact that they're super smart and powerful: e.g., their minds run on giant computers at THz clock speeds, they can integrate planets' worth of sensor data at once, they can do direct mass to energy conversion in their spaceships, they have weapons capable of destroying our planet in an instant, etc etc.
What do you, personally, do?
What should humanity overall do?
I think the answers to these questions shine a very important light on overlooked AGI alignment strategies. Rather than seeing the arrival of AGI as a technological change, we can see it as the arrival of a potentially competing / potentially collaborating species. Approximate intuition pump: humans : AGI :: animals : humans -- if you were an emu (about to be hunted to extinction), how would you negotiate with the humans? If you were a monkey, how would you negotiate with the humans?
What are the frameworks we would use to negotiate with supersmart powerful aliens, or which animals might use to negotiate with people?
1. creating an ultimate ground rules moral framework: if we can create a moral framework which every species could agree to to improve the stable equilibrium of our interspecies game (where the lowest stable equilibrium is violence), then it might be convincing for an AGI coming out of our human values to maintain that ultimate ground rules moral framework. my best idea for that framework is some version of "information is sacred": I have no idea how exactly to measure information, but I think it all sort of ends up working out.
1. preservation of information: we must try to maintain as much information as possible. most of the universe could be compressed quite a bit, but not humans: our incompressability is what gives us moral worth. the same could be said of animals. how do you determine what can actually be compressed well? it's
|
d56649df-4c96-4ee0-92d8-b87cadb7e439
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Will As Thou Wilt
Followup to: Possibility and Could-ness
Arthur Schopenhauer (1788-1860) said:
> "A man can do as he wills, but not will as he wills."
For this fascinating sentence, I immediately saw two interpretations; and then, after some further thought, two more interpretations.
On the first interpretation, Schopenhauer forbids us to build circular causal models of human psychology. The explanation for someone's current will cannot be their current will - though it can include their past will.
On the second interpretation, the sentence says that alternate choices are not reachable - that we couldn't have taken other options even "if we had wanted to do so".
On the third interpretation, the sentence says that we cannot control our own desires - that we are the prisoners of our own passions, even when we struggle against them.
On the fourth interpretation, the sentence says that we cannot control our own desires, because our desires themselves will determine which desires we want, and so protect themselves.
I count two true interpretations and two false interpretations. How about you?
|
3380fbdd-b7d2-44fc-b66c-dc8c0c9c2cf5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Covid 9/3: Meet the New CDC
This week’s news all centers around policy decisions. The new data contains few important surprises, so attention shifts to what actions will be taken and how that will affect the path we follow going forward. The CDC’s fall and transformation into an arm of the White House reelection campaign is now complete. Others continue to come up with, suggest and criticize various policies.
Before we get to all that, let’s run the numbers.
Positive Test Counts
| Date | WEST | MIDWEST | SOUTH | NORTHEAST |
|---|---|---|---|---|
| July 9-July 15 | 108395 | 53229 | 250072 | 20276 |
| July 16-July 22 | 117506 | 57797 | 265221 | 20917 |
| July 23-July 29 | 110219 | 67903 | 240667 | 26008 |
| July 30-Aug 5 | 91002 | 64462 | 212945 | 23784 |
| Aug 6-Aug 12 | 93042 | 61931 | 188486 | 21569 |
| Aug 13-Aug 19 | 80887 | 63384 | 156998 | 20857 |
| Aug 20-Aug 26 | 67545 | 66540 | 132322 | 18707 |
| Aug 27-Sep 2 | 55000 | 75401 | 127414 | 21056 |
Only the West’s number here is reassuring. The South’s number here is disappointing but reflects a rebound in the number of tests after a steep decline last week. The Midwest situation continues to get worse. The Northeast has some reason to worry, but the increase is mostly explained by increased testing.
Deaths
| Date | WEST | MIDWEST | SOUTH | NORTHEAST |
|---|---|---|---|---|
| June 25-July 1 | 858 | 658 | 1285 | 818 |
| July 2-July 8 | 894 | 559 | 1503 | 761 |
| July 9-July 15 | 1380 | 539 | 2278 | 650 |
| July 16-July 22 | 1469 | 674 | 3106 | 524 |
| July 23-July 29 | 1707 | 700 | 4443 | 568 |
| July 30-Aug 5 | 1831 | 719 | 4379 | 365 |
| Aug 6-Aug 12 | 1738 | 663 | 4554 | 453 |
| Aug 13-Aug 19 | 1576 | 850 | 4264 | 422 |
| Aug 20-Aug 26 | 1503 | 745 | 3876 | 375 |
| Aug 27-Sep 2 | 1245 | 759 | 3631 | 334 |
The Midwest number is bad news, the West and Northeast numbers are excellent news. The South’s is an improvement, but less of an improvement than expected, so it counts as bad news. Deaths are on a clear downward trend in general and that should continue for at least several weeks, as the overall situation continues to improve right now.
Positive Test Percentages by Region
The Covid Tracking Project’s data has a very strange and very negative number of positive tests from Massachusetts this week, which I’ve corrected to a reasonable number.
| Percentages | Northeast | Midwest | South | West |
|---|---|---|---|---|
| 7/16 to 7/22 | 2.49% | 5.13% | | |
|
0980845c-48da-4508-8ad2-8c629d845878
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Monitoring devices I have loved
Every once in a while a measurement device improves my life a lot at little cost. This post is an attempt to share those data points and gather more, so we can all get a bunch of free gains.
The typical shape of problems solved by measurement devices is as follows: there's something I know might be an issue, but working on it feels very unrewarding. It's a combination of the costs of the issue being poorly quantified, not knowing how much any particular solution would help, and the solution requiring scarce resources like executive function. Measurement devices can help with any of these.
Note: I've provided links to the devices I use on the theory that something is better than nothing. I haven't necessarily tested competitors and there may be better options out there. Amazon links are affiliate, the others are not.
Devices that have helped me in particular
Air quality monitor. I live in California, which has wildfires. Back in 2020 these were especially bad. I closed my windows and wasn't willing to buy more air filters, so I didn't see the point in getting a monitor. At a friend's urging I got one anyway. Turns out my indoor air quality was such that I was willing to buy more filters after all, but also closing the windows didn't help that much. Once I had the new filters I could open my windows enough to let the CO2 dissipate, a huge quality of life improvement.
Once I had the monitor it paid off in other ways. I'd always been vaguely aware stovetop cooking and anything in a spray can were bad for air quality, but I lacked the bandwidth to change my behavior. Once these actions reliably triggered little red lights on the monitor I started turning on the stove vent more, and wearing a mask to use cleaning products. Eventually this formed a habit such that I sometimes remember to do those things even without the air monitor's prompting.
As a bonus, the in-house monitor saved me from having to check air quality on PurpleAir every day, and meant I caught the od
|
b611c863-d053-4ff4-a8b0-ab1d99b31d6e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Descriptive vs. prescriptive optimism
[Cross-posting this essay from 2020 by request.]
I am not an “optimist”. Well, not exactly. Though I sing the praises of progress in these essays, and though I paint a bold vision for the future, I have hesitated to apply that term to myself. I have said that I am “fundamentally” optimistic, or a “paranoid optimist”; or I have reached for constructions like “short-term pessimist, long-term optimist”. But none of these are quite right.
In my discussion with Tamara Winter and Trevor McKendrick on optimism, the three of us converged on a formulation that I think resolves the paradox: descriptive vs. prescriptive optimism.
Descriptive optimism is the expectation that good things will happen, that an outcome will be positive, that the trend is in the right direction. It is a belief about the world.
Prescriptive optimism is the decision to work to make good outcomes happen. Whether lighthearted or desperate, cheerful or grim, it is a commitment to action and effort.
Descriptive optimism, for me, is highly contextual. It is situation-specific. I am mildly optimistic about our efforts against coronavirus. I am fairly pessimistic about US politics.
Prescriptive optimism is deeply philosophical. It is a belief about the nature of life, intelligence, and agency. It is a moral attitude, a way of living.
Descriptive optimism is determined by the external facts, but prescriptive optimism is an internal choice.
Descriptive optimism on its own can lead to complacency, Panglossianism, and other cavalier attitudes—progress as coasting. Prescriptive optimism calls for boldness, courage, and vigorous effort. When the two are combined, they call for expansive, ambitious plans. When prescriptive optimism is combined with descriptive pessimism, they together call for grit, determination, and fighting spirit.
The French writer and Nobel laureate Romain Rolland captured it in a passage later echoed by the Italian Marxist Antonio Gramsci, when he wrote of
> this intimate alliance—w
|
fd91c6cb-a3b0-4d04-862e-d702f41e2c19
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Interpretability in ML: A Broad Overview
(Reposting because I think a GreaterWrong bug on submission made this post invisible for a while last week so I'm trying again on LW.)
This blog post is an overview of ways to think about machine learning interpretability; it covers some recent research techniques as well as directions for future research. This is an updated version of [this post](https://www.lesswrong.com/posts/TBetMYhEvNDdeX5ji/operationalizing-interpretability) from a few weeks ago. I've now added code, examples, and some pictures.
What Are Existing Overviews?
----------------------------
Many of these ideas are based heavily off of Zach Lipton's [Mythos of Model Interpretability](https://arxiv.org/pdf/1606.03490.pdf), which I think is the best paper for understanding the different definitions of interpretability. For a deeper dive into specific techniques, I recommend [A Survey Of Methods For Explaining Black Box Models](https://arxiv.org/pdf/1802.01933.pdf), which covers a wide variety of approaches for many different ML models as well as model-agnostic approaches. For neural nets specifically, [Explainable Deep Learning: A Field Guide for the Uninitiated](https://arxiv.org/pdf/2004.14545.pdf) provides an in-depth read. For other conceptual surveys of the field, see [Definitions, methods, and applications in interpretable machine learning](https://www.pnas.org/content/116/44/22071) and [Explainable Machine Learning for Scientific Insights and Discoveries](https://arxiv.org/pdf/1905.08883.pdf). The Explainable Machine Learning paper in particular is quite nice because it gives a hierarchy of increasingly more interpretable models across several domains and use cases.
(Shout-out to [Connected Papers](https://www.connectedpapers.com/main/d516daff247f7157fccde6649ace91d969cd1973/The-Mythos-of-Model-Interpretability/) which made navigating the paper landscape for interpretability very bearable.)
As always, you can find code used to generate the images [here](https://github.com/owenshen24/interp-exp) on GitHub.
In the rest of this post, we'll go over many ways to formalize what "interpretability" means. Broadly, interpretability focuses on the *how*. It's focused on getting some notion of an explanation for the decisions made by our models. Below, each section is operationalized by a concrete question we can ask of our ML model using a specific definition of interpretability. Before that, though, if you're new to all this, I'll explain briefly about why we might care about interpretability at all.
Why Care About Interpretability?
--------------------------------
Firstly, interpretability in ML is useful because it can aid in **trust**. As humans, we may be reluctant to rely on ML models for certain critical tasks, e.g. medical diagnosis, unless we know "how they work". There's often a fear of unknown unknowns when trusting in something opaque, which we see when people confront new technology. Approaches to interpretability which focus on transparency could help mitigate some of these fears.
Secondly, **safety**. There is almost always some sort of [shift in distributions](https://arxiv.org/pdf/1606.06565v1.pdf) between model training and deployment. Failures to generalize or Goodhart's Law issues like [specification gaming](https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity) are still open problems that could lead to issues in the near future. Approaches to interpretability which explain the model's representations or which features are most relevant could help diagnose these issues earlier and provide more opportunities to intervene.
Thirdly, and perhaps most interestingly, **contestability**. As we delegate more decision-making to ML models, it becomes important for people to appeal these decisions made. Black-box models provide no such recourse because they don't decompose into anything that *can* be contested. This has already led to major criticism of proprietary recidivism predictors like [COMPAS](https://en.wikipedia.org/wiki/COMPAS_(software)#Critiques_and_legal_rulings). Approaches to interpretability which focus on decomposing the model into sub-models or explicate a chain of reasoning could help with such appeals.
Defining Interpretability
=========================
Lipton's paper breaks interpretability down into two types, transparency and post-hoc.
Transparency Interpretability
-----------------------------
These three questions are from Lipton's section on transparency as interpretability, where he focuses on properties of the model that are useful to understand and can be known before training begins.
### Can a human walk through the model's steps? (Simulatability)
This property is about whether or not a human could go through each step of the algorithm and have it make sense to them at each step. Linear models and decision trees are often cited as interpretable models using such justifications; the computation they require is simple, no fancy matrix operations or nonlinear transformations.
Linear models are also nice because the parameters themselves have a very direct mapping–they represent how important different input features are. For example, I trained a linear classifier on MNIST, and here are some of the weights, each of which corresponds to a pixel value:
```
0.00000000e+00, 0.00000000e+00, 3.90594519e-05, 7.10306823e-05,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, -1.47542413e-03,
-1.67811041e-04, -3.83280468e-02, -8.10846867e-02, -5.01943218e-02,
-2.90314621e-02, -2.65494116e-02, -8.29385683e-03, 0.00000000e+00,
0.00000000e+00, 1.67390785e-04, 3.92789141e-04, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00
```
By themselves, these weights are hard to interpret. Even if we knew which pixels they corresponded to, it's difficult to try and pin down what a certain pixel even represents for large images. However, there is an easy trick to turn these weights into something interpretable. We simply reshape them into the same shape as our input images and view the result as an image, with each pixel's color given by the corresponding weight value.
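For concreteness, here is a minimal sketch of that trick (this is illustrative, not the exact code behind the figures; I'm assuming a scikit-learn-style one-vs-rest classifier):

```python
# Minimal sketch: train a linear classifier on MNIST and view one class's
# weight vector as a 28x28 image. Illustrative only, not the original code.
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
clf = LogisticRegression(max_iter=200).fit(X[:10000] / 255.0, y[:10000])

weights_for_zero = clf.coef_[0]                           # one weight per pixel, shape (784,)
plt.imshow(weights_for_zero.reshape(28, 28), cmap="bwr")  # reshape back to image shape
plt.colorbar()
plt.show()
```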
Here are the weights for the model that looks for 0:

And here are the weights for the model that looks for 3:

In both cases, we can see that the blue regions, which represent positive weight, correspond to a configuration of pixels that look roughly like the digit being detected for. In the case of 0, we can see a distinct blank spot in the center of the image and a curve-like shape around it, whereas the curves of the 3 are also apparent.
However, Lipton points out that this desideratum can be less about the specific choice of model and more about the *size* of the model. A decision tree with a billion nodes, for example, may still be difficult to understand. Understanding is also about being able to hold most of the model in your mind, which is often about how the model is parameterized.
One approach towards achieving this for neural nets is [tree regularization](http://www.shallowmind.co/jekyll/pixyll/2017/12/30/tree-regularization/) which adds a regularization term that corresponds (roughly) to the size of the decision tree that can approximate the net being trained. The hope here is to eventually output a shallow decision tree that performs comparably to a neural net. Another approach is [neural backed decision trees](https://arxiv.org/pdf/2004.00221.pdf) which use another type of regularization to learn a hierarchy over class labels, which then get used to form a decision tree.
Of course, parameterization is not the whole story. There are methods like K-Nearest Neighbors which are parameterized by your entire dataset; this could be billions of points. Yet, there is a sense in which KNN is still interpretable despite its massive size. We can cleanly describe what the algorithm does, and we can even see "why" it made such a choice because the algorithm is so simple.
### Is the model interpretable at every step, or with regards to its sub-components? (Decomposability)
Another desirable feature would be to understand what the model is doing at each step. For example, imagine a decision tree whose nodes correspond to easily identifiable factors like age or height. This can sometimes be difficult because model performance is very tightly coupled with the representations used. Raw features, e.g. RGB pixel values, are often not very interpretable by themselves, but interpretable features may not be the most informative for the model.
For example, I trained a decision tree for MNIST using the following interpretable features:
1. The average brightness of the image - `avg_lumin`
2. The average brightness of the image's outline (found using an edge detector) - `edge_prop`
3. The number of corners found in the image's outline `num_corners`
4. The width of the image - `max_width`
5. The height of the image - `max_height`
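A rough sketch of computing features along these lines and fitting a shallow tree follows (the edge and corner detectors below are generic stand-ins, skimage's Canny and Harris, rather than necessarily the detectors behind the numbers quoted here):

```python
# Rough sketch of hand-crafted digit features plus a depth-3 decision tree.
# The Canny edge detector and Harris corner detector are generic stand-ins,
# not necessarily the detectors used for the accuracy numbers in this post.
import numpy as np
from skimage.feature import canny, corner_harris, corner_peaks
from sklearn.datasets import fetch_openml
from sklearn.tree import DecisionTreeClassifier

def hand_features(flat_img):
    img = flat_img.reshape(28, 28).astype(float)
    edges = canny(img / 255.0, sigma=1.0)              # binary outline of the digit
    rows, cols = np.nonzero(img > 50)                  # "ink" pixels
    return [
        img.mean(),                                               # avg_lumin
        edges.mean(),                                             # edge_prop
        len(corner_peaks(corner_harris(edges), min_distance=2)),  # num_corners
        cols.ptp() + 1 if cols.size else 0,                       # max_width
        rows.ptp() + 1 if rows.size else 0,                       # max_height
    ]

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
features = np.array([hand_features(x) for x in X[:10000]])
tree = DecisionTreeClassifier(max_depth=3).fit(features, y[:10000])
print("train accuracy:", tree.score(features, y[:10000]))
```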
It seems like there would be at least *some* useful information in these features; ones tend to have less area (so `avg_lumin` would be lower), eights might have more corners, etc. etc. Yet, the resulting decision tree of depth 3, shown below, only achieves 33% training accuracy. Going all the way to depth 10 only bumps it to around 50%.

If we look at the nodes, we can perhaps see what's going on. At the top, we can see that our model will predict a 1 if the width is less than 7.5 pixels, which makes sense as 1 is likely going to be the thinnest digit. Near the bottom, we see that the number of corners is being used to differentiate between 7 and 4. And 4s do have more visual corners than 7s. But this is very rough, and the overall performance is still not very good.
To compare this with raw features, I also trained a depth 3 decision tree using direct pixel values, i.e. a vector of 784 grayscale values. The resulting model, shown below, gets 50% train and test accuracy.

Here, it's not clear at all why these pixel values were chosen to be the splitting points. And yet the resulting decision tree, for the same number of nodes, does much better. In this simple case, the performance vs interpretability trade-off in representation is quite apparent.
### Does the algorithm itself confer any guarantees? (Algorithmic Transparency)
This asks if our learning algorithm has any properties which make it easy to understand. For example, we might know that the algorithm only outputs sparse models, or perhaps it always converges to a certain type of solution. In these cases, the resulting learned model can be more amenable to analysis. For example, the [Hard Margin SVM](https://davidrosenberg.github.io/mlcourse/Labs/UniquenessOfSVM.pdf) is guaranteed to find a unique solution which maximizes the margin. In another vein, the [perceptron](https://owenshen24.github.io/perceptron/) is guaranteed to find parameters (not necessarily unique ones, though) that achieve a training loss of 0 if the data are linearly separable.
When it comes to deep learning, I'm less familiar with these kinds of results. My rough understanding is that the equivalence class of models which achieve comparable training error can be quite large, even with regularization, which makes uniqueness results hard to come by.
As I mentioned earlier with KNN, it seems, aside from mechanical transparency, there's another level of understanding regarding "what the algorithm actually does in simple terms". KNN is easy to describe as "it reports the labels of the points closest to the input". The part of this property that's doing the most work here is the way we actually do the describing. Obviously most ML models can be abstracted as "it finds parameters which satisfy certain constraints", but this is very broad. It seems harder to find a description at the same level of granularity for neural nets beyond something like "it learns a high-dimensional manifold that maps onto the input data".
Post-Hoc Interpretability
-------------------------
These four questions are from Lipton's section on post-hoc interpretability, which focus on things we learn from the model after training has occurred.
### Can the model give an explanation for its decision, after the fact? (Text Explanation)
Similar to how humans often give post-hoc justifications for their actions, it could be informative to have models which can also give explanations, perhaps in text. Naive methods of pairing text with decisions, however, are likely going to optimize for something like "how credible the explanation sounds to a human" rather than "how accurate the explanation is at summarizing the internal steps taken".
While this seems clearly desirable, I think research in this area is hard to come by, and Lipton only offers one paper that is RL-focused. On ConnectedPapers, I found that said paper is part of a larger related field of [reinforcement learning with human advice](https://www.connectedpapers.com/main/a32f4d93f7e50aad52210ab464f4d8e360eb7a7d/Newtonian-Action-Advice-Integrating-Human-Verbal-Instruction-with-Reinforcement-Learning/graph). This seems to focus on the converse problem–given human explanations, how can models incorporate them into their decision-making? Maybe insights here can eventually be used in the other direction.
### Can the model identify what is/was important to its decision-making? (Visualization/Local Explanations)
This focuses on how the inputs and outputs change, relative to one another.
Saliency maps are a broad class of approaches that look at where the inputs change in order to change the outputs. A simple way to do this is to take the derivative of the loss function with respect to the input. Past this, there are many modifications which involve averaging the gradient, perturbing the input, or local approximations. [Understanding Deep Networks via Extremal Perturbations and Smooth Masks](https://arxiv.org/pdf/1910.08485.pdf) has a good overview of the work in this area.
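As a minimal sketch, the simplest version of this looks something like the following in PyTorch (here taking the gradient of a chosen class logit rather than the loss; `cnn` and `image` are placeholders for any trained classifier and a single input):

```python
# Minimal sketch of an input-gradient saliency map.
# `cnn` and `image` are placeholders: any trained classifier over (1, 1, 28, 28) inputs.
import torch

def input_gradient(cnn, image, target_class=3):
    x = image.clone().detach().requires_grad_(True)  # track gradients w.r.t. the input
    logits = cnn(x)
    logits[0, target_class].backward()               # d(logit for the target class) / d(input)
    return x.grad[0, 0]                              # (28, 28) map of how each pixel moves the logit
```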
For example, I trained a CNN on MNIST and did a simple gradient visualization on an image of this 3:

Using PyTorch, I took the derivative of the logit that corresponds to the class 3 with respect to the input image. This gave me the image below. Here, the white pixels correspond to parts of the image that would increase the logit value for 3, and the black pixels correspond to the reverse. We can see the rough curves of the three come through.

Note how this is different from the visualization we previously had with the linear classifier in red and blue in the first section. Those visuals represented the importance in *aggregate* for the entire input space. The visualization here is only for this specific input. For a different input, e.g. a different 3, the local gradient would look different, as shown below:
This 3:

yields this gradient:

Another group of approaches focus on visualizing with respect to the model parameters themselves, rather than the input. A lot of the work has been done by Chris Olah, Shan Carter, Ludwig Schubert, and others on [distill.pub](https://distill.pub/). Their work in this area has gone from [visualizing the activations of specific neurons and layers](https://distill.pub/2017/feature-visualization), to entire [maps of activations for many networks](https://distill.pub/2019/activation-atlas/), to [decomposing models into interpretable building blocks](https://distill.pub/2020/circuits/zoom-in/). Another great visualization resource for this type of work is the [OpenAI Microscope](https://microscope.openai.com/models). Progress here has been very exciting, but it remains to be seen if similar approaches can be found for neural nets which focus on tasks other than image recognition.
### Can the model show what else in the training data it thinks are related to this input/output? (Explanation by Example)
This asks for what other training examples are similar to the current input. When the similarity metric is just distance in the original feature space, this is akin to KNN with K = 1. More sophisticated methods may look for examples which are similar in whatever representation or latent space the model is using. The human justification for this type of approach is that it is similar to reasoning by analogy, where we present a related scenario to support our actions.
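As a minimal sketch, the latent-space version might look like this (`embed` is a placeholder for whatever internal representation the model exposes):

```python
# Sketch of "explanation by example": return the training points whose model
# representations sit closest to the input's representation.
import numpy as np

def nearest_training_examples(x, X_train, embed, k=3):
    z = embed(x)                                     # model's representation of the input
    Z = np.stack([embed(xi) for xi in X_train])      # representations of the training set
    dists = np.linalg.norm(Z - z, axis=1)
    return np.argsort(dists)[:k]                     # indices of the k most similar examples
```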
While I think this is useful, it definitely doesn't seem like all we need for understanding, or even most of what we'd need.
What Else Might Be Important?
-----------------------------
These are a mix of other questions I thought of before/after reading the above papers. Some of them are also from Lipton's paper, but from the earlier sections on interpretability desiderata. Because [answering questions is harder than asking them](https://mlu.red/52622266310.html), I've also taken the time to give some partial responses to these questions, but these are not well-researched and should be taken as my own thoughts only.
1. What are the relevant features for the model? What is superfluous?
* We've seen that linear models can easily identify relevant features. Regularization and other approaches to learn sparse models or encodings can also help with this. One interesting direction (that may already be explored) is to evaluate the model on augmented training data that has structured noise or features that correlate with real features and see what happens.
2. How can you describe what the model does in simpler terms?
* The direct way to approach this question is to focus on approximating the model's performance using fewer parameters. A more interesting approach is to try and summarize what the model does in plain English or some other language. Having a simplified description could help with understanding, at least for our intuition.
3. What can the model tell you to allow you to approximate its performance in another setting or another model?
* Another way to think about models which are interpretable is that they are doing some sort of modeling of the world. If you asked a person, for example, why they made some decision, they might tell you relevant facts about the world which could help you come to the same decision. Maybe some sort of teacher-learner RL type scenario where we can formalize knowledge transfer? But ultimately it seems important for the insights to be useful for humans; the feedback loop seems too long to make it an objective to optimize for, but maybe there's a clever way to approximate it…There might be a way where we instead train a model to output some representation or distribution that, when added to some other interpretable model (which could be a human's reasoning), leads to improved performance.
4. How informative is this model, relative to another more interpretable model?
* Currently, deep learning outperforms other more interpretable models on a wide variety of tasks. Past just looking at loss, perhaps there is some way we can formalize how much more information the black box model is using. In the case of learned features versus hand-picked features, it could be useful to understand from an information theory perspective how much more informative the learned features are. Presumably interpretable features would tend to be more correlated with one another.
5. What guarantees does the model have to shifts in distribution?
* Regularization, data augmentation, and directly training with perturbed examples all help with this issue. But perhaps there are other algorithmic guarantees we could derive for our models.
6. What trips up the model (and also the human)?
* One interesting sign that our model is reasoning in interpretable ways is to see if examples which trip up humans also trip up the model. There was some work a little while back on adversarial examples which found that certain examples which fooled the network also fooled humans. Lack of divergence on these troubling examples could be a positive sign.
7. What trips up the model (but not the human)?
* Conversely, we might get better insight into our model by honing in on "easy" examples (for a human) that prove to be difficult for our model. This would likely be indicative of the model using features that we are not, and thus it's learned a different manifold (or whatever) through the input space.
8. What does the model know about the data-generation process?
* In most cases, this is encoded by our prior, which is then reflected in the class of models we do empirical risk minimization over. Apart from that, it does seem like there are relevant facts about the world which could be helpful to encode. A lot of the symbolic AI approaches to this seem to have failed, and it's unclear to me what a hybrid process would look like. Perhaps some of the human-assisted RL stuff could provide a solution for how to weigh between human advice and learned patterns.
9. Does the model express uncertainty where it should?
* In cases where the input is something completely nonsensical, it seems perhaps desirable for the model to throw its hands up in the air and say "I don't know", rather than trying its best to give an answer. Humans do this, where we might object to a question on grounds of a type error. For a model, this might require understanding the space of possible inputs.
10. What relationships does the model use?
* The model could be using direct correlations found in the data. Or it could be modeling some sort of causal graph. Or it could be using latent factors to build an approximate version of what's going on. Understanding what relationships in the data are lending themselves to helping the model and what relationships are stored could be useful.
11. Are the model's results contestable?
* We touched on this at the very beginning of the post, but there are not many modern approaches which seem to have done this. The most contestable model might look something like an automated theorem prover which uses statements about the world to build an argument. Then we would simply check each line. Past that, one nice-to-have which could facilitate this is to use machine learning systems which build explicit models about the world. In any case, this pushes our models to make their assumptions about the world more explicit.
What's Next?
------------
Broadly, I think there are two main directions that interpretability research should go, outside of the obvious direction of "find better ways to formalize what we mean by interpretability". These two areas are evaluation and utility.
### Evaluation
The first area is to find better ways of evaluating these numerous interpretability methods. For many of these visualization-based approaches, a default method seems to be sanity-checking with our own eyes, making sure that interpretable features are being highlighted. Indeed, that's what we did for the MNIST examples above. However, [Sanity Checks for Saliency Maps](https://www.arxiv-vanity.com/papers/1810.03292/), a recent paper, makes a strong case for why this is definitely not enough.
As mentioned earlier, saliency maps represent a broad class of approaches that try to understand what parts of the input are important for the model's output, often through some sort of gradient. The outputs of several of these methods are shown below. Upon visual inspection, they might seem reasonable as they all seem to focus on the relevant parts of the image.

However, the very last column is the output, not for a saliency map, but for an edge detector applied to the input. This makes it not a function of the model, but merely the input. Yet, it is able to output "saliency maps" which are visually comparable to these other results. This might cause us to wonder if the other approaches are really telling us something about the model. The authors propose several tests to investigate.
The first test compares the saliency map of a trained model with a model that has randomly initialized weights. Here, clearly if the saliency maps look similar, then it really is more dependent on the input and not the model's parameters.
The second test compares the saliency map of a trained model with a trained model that was given randomly permuted labels. Here, once again, if the saliency maps look similar, this is also a sign of input dependence because the same "salient" features have been used to justify two different labels.
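As a rough sketch, the first test only takes a few lines (assuming some `saliency(model, x)` function, like the input-gradient one from earlier):

```python
# Rough sketch of the model-parameter randomization test: if the saliency map for a
# randomly re-initialized model looks like the map for the trained model, the map
# mostly reflects the input, not what the model learned.
import copy
import torch

def parameter_randomization_check(model, x, saliency):
    trained_map = saliency(model, x)
    random_model = copy.deepcopy(model)
    for p in random_model.parameters():
        torch.nn.init.normal_(p, mean=0.0, std=0.02)  # throw away the learned weights
    random_map = saliency(random_model, x)
    return torch.nn.functional.cosine_similarity(     # high similarity is a red flag
        trained_map.flatten(), random_map.flatten(), dim=0
    ).item()
```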
Overall, the authors find that the basic gradient map shows desired sensitivity to the above tests, whereas certain other approaches like Guided BackProp do not.
I haven't looked too deep into each one of the saliency map approaches, but I think the evaluation methods here are very reasonable and yet somehow seem to be missed in previous (and later?) papers. For example, the paper on [Grad-CAM](https://arxiv.org/pdf/1610.02391.pdf) goes in-depth over the ways in which their saliency map can help aid in providing explanations or identifying bias for the dataset. But they do not consider the sensitivity of their approach to model parameters.
In the above paper on sanity-checks, they find that Grad-CAM actually is sensitive to changes in the model parameters, which is good, but I definitely would like to see these sanity-checks being applied more frequently. Outside of new approaches, I think additional benchmarks for interpretability that mimic real-world use cases could be of great value to the field.
Another approach is this direction is to back-chain from the explanations that people tend to use in everyday life to derive better benchmarks. [Explanation in Artificial Intelligence: Insights from the Social Sciences](https://arxiv.org/pdf/1706.07269.pdf) provides an overview of where philosophy and social science can meet ML in the middle. Of course, the final arbiter for all this is how well people can actually use and interpret these interpretability results, which brings me to my second point.
### Utility
The second area is to ensure that these interpretability approaches are actually providing value. Even if we find ways of explaining models that are actually sensitive to the learned parameters (and everything else), I think it still remains to be seen if these explanations are actually useful in practice. At least for current techniques, I think the answer is uncertain and possibly even negative.
[Manipulating and Measuring Model Interpretability](https://arxiv.org/pdf/1802.07810.pdf), a large pre-registered study from Microsoft Research, found that showing users additional information, like model weights, often did not help them make more accurate judgments on their own or notice when the model was wrong. (Users were given either a black-box model or a more interpretable one.)
They found that:
>
> "[o]n typical examples, we saw no significant difference between a transparent model with few features and a black-box model with many features in terms of how closely participants followed the model’s predictions. We also saw that people would have been better off simply following the models rather than adjusting their predictions. Even more surprisingly, we found that transparent models had the unwanted effect of impairing people’s ability to correct inaccurate predictions, seemingly due to people being overwhelmed by the additional information that the transparent model presented"
>
>
>
Another paper, [Interpreting Interpretability: Understanding Data Scientists’ Use of Interpretability Tools for MachineLearning](http://www.jennwv.com/papers/interp-ds.pdf), found that even data scientists may not understand what interpretable visualizations tell them, and this can inspire unwarranted confidence in the underlying model, even leading to ad-hoc rationalization of suspicious results.
Lastly, [Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?](https://arxiv.org/pdf/2005.01831.pdf) is a recent study of five interpretability techniques and how they empirically help humans. The authors found very few benefits from any of the techniques. Of particular note is that explanations which were rated to be higher quality by participants were not very useful in actually improving human performance.
All of this points to the difficult road ahead for interpretability research. These approaches and visuals are liable to be misused and misinterpreted. Even once we get improved notions of interpretability with intuitive properties, it still remains to be seen if we can use them to achieve the benefits I listed out in the very beginning. While it certainly seems more difficult to formalize interpretability than to use it well, I'm glad that empirical tests are already being done; they can hopefully also guide where the research goes next.
Finally, lurking behind all this is the question of decreased performance and adoption. It's obvious these days that black box models dominate in terms of results for many areas. Any additional work to induce a more interpretable model, or to derive a post-hoc explanation brings an additional cost. At this point in time, all the approaches towards improving interpretability we've seen either increase training / processing time, reduce accuracy, or do some combination of both. For those especially worried about competition, arms races, and multipolar traps, the case to adopt these approaches (past whatever token compliance will satisfy the technology ethics boards of the future) seems weak.
|
d3f7a032-82a5-493a-bdd8-521e055d6836
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Large corporations can unilaterally ban/tax ransomware payments via bets
After the Colonial Pipeline ransomware attack, the idea of banning ransomware payments has become more prominent (at least in the US). This has the benefit of reducing the incentives of hackers as well as avoiding money going to criminal syndicates. It would also be possible to instead simply tax ransomware payments at a very high rate (e.g., 1000%).
While actually implementing this ban/tax at a national level might be difficult and implementing at a state level results in a competitive disadvantage for that state, I claim that large corporations should be able to unilaterally simulate some of the effects of a ransom tax while gaining a competitive advantage. They could do this just by betting against paying a ransom. Specifically, the corporation could open a prediction market for 'this corporation pays a ransom in the next 5 years' and then place a large, unhedged, and uninsured bet against paying a ransom. Of course, the exact terms and payout details could vary. For example, the payout could be proportional to the total ransomware payments over a period to simulate a proportional tax instead of a flat tax. This is a sort of pre-commitment strategy like disabling your steering wheel in a game of chicken (except with prediction markets).
This market should be reasonably efficient, so the bet itself should have near 0 expected value for the company. However, the side effects of the bet are likely positive expected value: it should discourage ransomware attacks and signal confidence about security. Note that if only one company made this bet, that would likely just shift the targets of ransomware attacks. That's still a good enough reason for a company to use this approach. Despite the fact that penalizing ransomware payments is a public good, companies should be able to unilaterally benefit regardless of how many other companies also use this approach. Additionally, if a decent percentage of companies actually implemented this policy, then the incentives could
|
413056d7-fa24-4737-90e3-24f0f8f8928c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Why Many-Worlds Is Not The Rationally Favored Interpretation
Eliezer recently posted an essay on "the fallacy of privileging the hypothesis". What it's really about is the fallacy of privileging an arbitrary hypothesis. In the fictional example, a detective proposes that the investigation of an unsolved murder should begin by investigating whether a particular, randomly chosen citizen was in fact the murderer. Towards the end, this is likened to the presumption that one particular religion, rather than any of the other existing or even merely possible religions, is especially worth investigating.
However, in between the fictional and the supernatural illustrations of the fallacy, we have something more empirical: quantum mechanics. Eliezer writes, as he has previously, that the many-worlds interpretation is the one - the rationally favored interpretation, the picture of reality which rationally should be adopted given the empirical success of quantum theory. Eliezer has said this before, and I have argued against it before, back when this site was just part of a blog. This site is about rationality, not physics; and the quantum case is not essential to the exposition of this fallacy. But given the regularity with which many-worlds metaphysics shows up in discussion here, perhaps it is worth presenting a case for the opposition.
We can do this the easy way, or the hard way. The easy way is to argue that many-worlds is merely not favored, because we are nowhere near being able to locate our hypotheses in a way which permits a clean-cut judgment about their relative merits. The available hypotheses about the reality beneath quantum appearances are one and all unfinished muddles, and we should let their advocates get on with turning them into exact hypotheses without picking favorites first. (That is, if their advocates can be bothered turning them into exact hypotheses.)
The hard way is to argue that many-worlds is actually disfavored - that we can already say it is unlikely to be true. But let's take the easy path first, and
|
7339344d-e169-4472-92ab-84e7fe7e5694
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A better analogy and example for teaching AI takeover: the ML Inferno
Epistemic Status: entirely speculative
Note: this article was assisted by ChatGPT
A very common example given for AI takeover is silently killing all humans with nanotech.
> The concrete example I usually use here is nanotech, because there's been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point.
I will explain the advantages shortly, but I think this is a suboptimal example to use when introducing someone to the idea of AI existential risk.
The reason I bring this up is that I noticed Yudkowsky used this example on the bankless podcast, and numerous people use it on Twitter.
The main drawback is that, in my internal theory of mind, this causes people to pattern match the AI to "High IQ evil bioterrorist scientists". The most problematic bit is that this is being pattern matched to human evil, which leads to many misunderstandings.
Instead, I will propose a different scenario. I'm not quite sure what to call it, but I think ML Inferno or Optimizer Firestorm would do. The basic idea is that strong AI is like a fire. Just like a fire burns fuel, an AI exploits opportunities. And it won't just exploit them one at a time; in this scenario it exploits every opportunity as soon as possible.
If controlled, this becomes very useful. Otherwise, it becomes a dangerous complex system. And just as a human body becomes fuel when faced with a hot enough fire, psychological and societal weakness becomes an opportunity when faced with a strong enough AI.
Here is a table for the analogy:
Metaphor:
| Literal fire | AI |
|---|---|
| heat | intelligence |
| fuel | opportunities |
| sparks | AI escape |
| firestorm | AI apocalypse |
Strengths of the Silent Nanotech Example
The strength of the nanotech example is game theoretic. Suppose someone says "I have a way to defeat the evil AI, that I will use once I learn it is released". This is a losing strategy because the AI has one t
|
21d42b52-9ef6-4956-bb8e-8e31d6cd3f3c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Modifying Jones' "AI Dilemma" Model
Code for some of the equations and plots can be found here.
Charles I. Jones’ paper The A.I. Dilemma: Growth versus Existential Risk tackles the question of whether or not to pursue AI development in the face of existential risk. I found the approach both interesting and accessible and I recommend it to people interested in understanding these sorts of problems.
I want to illustrate the model and suggest a few modifications. For the purposes of this post, we’re going to assume society can make a collective decision about AI (a big if!). We’re also going to assume that those decisions are made from the perspective of a total utilitarian social planner.
To start, we’re going to jump directly to the more advanced model discussed in section 3. In this section, Jones assumes that individuals have CRRA utility functions with risk-aversion parameter greater than 1. This has the consequence that individual utility has an upper-bound regardless of how much they consume.
I think this is a good assumption, especially for a decision involving so much risk. For one, people’s measured gamma is around 1.5. In finance, using methods like the Kelly criterion (log utility, gamma= 1) results in very risky allocations, possibly due to its sensitivity to incorrectly estimated probabilities. All of this pushes us in the direction of having risk-averse utility functions.
Now, the total utility for a society is the sum of the individual utilities, discounted over time:
Here u(c_t) is the utility function with consumption c at time t:
Consumption grows at rate g:
and N_t is the population at time t:
Where n is the population growth rate. Jones breaks down *n* further as *n = b - m*, where b is the birth rate and m is the mortality rate.
Plugging everything into the integral we get a total utility for a given growth rate and mortality rate:
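For reference, the standard forms of the pieces just described (a reconstruction in my own notation; the paper's exact expressions may differ slightly) are:

```latex
% Reconstructed equations (standard forms; not copied verbatim from the paper)
U(g, m) = \int_0^{\infty} e^{-\rho t} \, N_t \, u(c_t) \, dt          % discounted sum of individual utilities
u(c_t)  = \bar{u} + \frac{c_t^{1-\gamma}}{1-\gamma}, \quad \gamma > 1  % CRRA flow utility, bounded above
c_t     = c_0 e^{g t}                                                  % consumption grows at rate g
N_t     = N_0 e^{n t}, \quad n = b - m                                 % population grows at rate n
```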
Okay, now let’s think about how we use this utility in our decisions. Today, we plod along with a certain growth rate (~2%) and mor
|
938c9dc7-7cee-4672-b605-27e18d88dfb8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Help me teach Bayes'
Next Monday I am supposed to introduce a bunch of middle school students to Bayes' theorem.
I've scoured the Internet for basic examples where Bayes' theorem is applied. Alas, all explanations I've come across are, I believe, difficult to grasp for the average middle school student.
So what I am looking for is a straightforward explanation of Bayes' theorem that uses the least amount of Mathematics and words possible. (Also, my presentation has to be under 3 minutes.)
I think that it would be efficient in terms of learning for me to use coins or cards, something tangible to illustrate what I'm talking about.
What do you think? How should I teach 'em Bayes' ways?
PS: I myself am new to Bayesian probability.
|
27585540-5adb-48de-9ece-4100be83648c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
K-types vs T-types — what priors do you have?
Summary: There is a spectrum between two types of people, K-types and T-types. K-types want theories with low Kolmogorov-complexity and T-types want theories with low time-complexity. This classification correlates with other classifications and with certain personality traits.
Epistemic status: I'm somewhat confident that this classification is real and that it will help you understand why people believe the things they do. If there are major flaws in my understanding then hopefully someone will point that out.
Edits: Various clarifying remarks.
K-types vs T-types
What makes a good theory?
There's broad consensus that good theories should fit our observations. Unfortunately there's less consensus about how to compare the different theories that fit our observations — if we have two theories which both predict our observations to the exact same extent then how do we decide which to endorse?
We can't shrug our shoulders and say "let's treat them all equally" because then we won't be able to predict anything at all about future observations. This is a consequence of the No Free Lunch Theorem: there are exactly as many theories which fit the seen observations and predict the future will look like X as there are which fit the seen observations and predict the future will look like not-X. So we can't predict anything unless we can say "these theories fitting the observations are better than these other theories which fit the observations".
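A tiny brute-force check of that counting argument (a toy setup, nothing specific to any real learning problem):

```python
# Among all "theories" (boolean functions on four possible situations) that fit the
# seen observations, exactly as many predict X for an unseen situation as predict not-X.
from itertools import product

situations = list(product([0, 1], repeat=2))     # four possible situations
seen = {(0, 0): 1, (0, 1): 0}                    # observations any candidate theory must fit
unseen = (1, 1)                                  # the "future" we want to predict

theories = [dict(zip(situations, outs)) for outs in product([0, 1], repeat=len(situations))]
fitting = [t for t in theories if all(t[s] == o for s, o in seen.items())]

print(sum(t[unseen] == 1 for t in fitting),      # theories predicting X
      sum(t[unseen] == 0 for t in fitting))      # theories predicting not-X -> prints 2 2
```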
There are two types of people, which I'm calling "K-types" and "T-types", who differ in which theories they pick among those that fit the observations.
K-types and T-types have different priors.
K-types prefer theories which are short over theories which are long. They want theories you can describe in very few words. But they don't care how many inferential steps it takes to derive our observations within the theory.
In contrast, T-types prefer theories which are quick over theories which are slow. They care how many in
|
a05e3b72-7456-4bc8-825e-2616943d567a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Comments on OpenPhil's Interpretability RFP
I'm very excited about research that tries to deeply understand how neural networks are thinking, and especially to understand tiny parts of neural networks without too much concern for scalability, as described in OpenPhil's recent RFP or the Circuits thread on Distill.
I want to give some detail on the kind of research I'm most excited about and why.
(Disclaimer: These are off-the-cuff thoughts. I'm not at all an expert in this area, just someone who cares a lot about alignment and has opinions.)
Why I'm excited about interpretability
I'm very scared of powerful AI doing something deliberately destructive that no one wanted it to do. If we had a reasonably complete story about how a neural network was thinking, then I doubt this failure mode would take us by surprise---we'd be able to see that the model was only behaving well on the training distribution because we were watching, and understand that the model was capable and inclined to cause harm when deployed. I think that even a partial understanding could provide a huge amount of clarity on open disputes about how models will generalize and when they may pose risks.
So the simplest positive case for interpretability is that it gives us a much clearer warning about an impending problem. That may or may not be enough for us to get our act together and fix the problem, but it's at least a start.
Beyond that, interpretability plays a central role in many proposals for training safe AI. For example, we might use interpretability to help identify cases where the model would behave badly, to help ensure that the model never develops deceptive intent, or to help understand what the model knows about the world so that we can better evaluate its outputs. We don't really know which of these approaches will pan out, but the fact that interpretability keeps coming up makes it seem important beyond just a warning shot.
Some of these approaches would put more "strain" on an interpretability method. If we optimize mo
|
4570da0a-30e8-4c1a-ac5f-53237fbaad9b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
DARPA Perspective on AI
|
2f9991ab-1335-4f22-b49a-e9269443003a
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Saving Time
For the last few years, a large part of my research motivation has been directed at trying to save the concept of time—save it, for example, from all the weird causal loops created by decision theory problems. This post will hopefully explain why I care so much about time, and what I think needs to be fixed.
Why Time?
---------
My best attempt at a short description of time is that **time is causality**. For example, in a Pearlian Bayes net, you draw edges from earlier nodes to later nodes. To the extent that we want to think about causality, then, we will need to understand time.
Importantly, **time is the substrate in which learning and commitments take place**. When agents learn, they learn over time. The passage of time is like a ritual in which [opportunities are destroyed and knowledge is created](https://www.lesswrong.com/posts/JTzLjARpevuNpGPZm/time-in-cartesian-frames). And I think that many models of learning are subtly confused, because they are based on confused notions of time.
Time is also crucial for thinking about agency. My best short-phrase definition of agency is that **agency is time travel**. An agent is a mechanism through which the future is able to affect the past. An agent models the future consequences of its actions, and chooses actions on the basis of those consequences. In that sense, [the consequence *causes* the action](https://www.lesswrong.com/posts/qhsELHzAHFebRJE59/a-greater-than-b-greater-than-a), in spite of the fact that the action comes earlier in the standard physical sense.
Problem: Time is Loopy
----------------------
The main thing going wrong with time is that it is “loopy.”
The primary confusing thing about Newcomb's problem is that we want to think of our decision as coming “before” the filling of the boxes, in spite of the fact that it physically comes after. This is hinting that maybe we want to understand some other "logical" time in addition to the time of physics.
However, when we attempt to do this, we run into two problems: Firstly, we don't understand where this logical time might come from, or how to learn it, and secondly, we run into some apparent temporal loops.
I am going to set aside the first problem and focus on the second.
The easiest way to see why we run into temporal loops is to notice that it seems like physical time is at least a little bit entangled with logical time.
Imagine the point of view of someone running a physics simulation of Newcomb’s problem, and tracking all of the details of all of the atoms. From that point of view, it seems like there is a useful sense in which the filling of the boxes comes before an agent's decision to one-box or two-box. At the same time, however, those atoms compose an agent that shouldn’t make decisions as though it were helpless to change anything.
Maybe the solution here is to think of there being many different types of “before” and “after,” “cause” and “effect,” etc. For example, we could say that X is before Y from an agent-first perspective, but Y is before X from a physics-first perspective.
I think this is right, and we want to think of there as being many different systems of time (hopefully predictably interconnected). But I don't think this resolves the whole problem.
Consider a pair of [FairBot](https://www.lesswrong.com/posts/iQWk5jYeDg5ACCmpx/robust-cooperation-in-the-prisoner-s-dilemma) agents that successfully execute a Löbian handshake to cooperate in an open-source prisoner’s dilemma. I want to say that each agent's cooperation causes the other agent's cooperation in some sense. I could say that relative to each agent the causal/temporal ordering goes a different way, but I think the loop is an important part of the structure in this case. (I also am not even sure which direction of time I would want to associate with which agent.)
We also are tempted to put loops in our time/causality for other reasons. For example, when modeling a feedback loop in a system that persists over time, we might draw structures that look a lot like a Bayes net, but are not acyclic (e.g., a POMDP). We could think of this as a projection of another system that has an extra dimension of time, but it is a useful projection nonetheless.
Solution: Abstraction
---------------------
My main hope for recovering a coherent notion of time and unraveling these temporal loops is via abstraction.
In the example where the agent chooses actions based on their consequences, I think that there is an abstract model of the consequences that comes causally before the choice of action, which comes before the actual physical consequences.
In Newcomb's problem, I want to say that there is an abstract model of the action that comes causally before the filling of the boxes.
In the open source prisoners' dilemma, I want to say that there is an abstract proof of cooperation that comes causally before the actual program traces of the agents.
All of this is pointing in the same direction: We need to have coarse abstract versions of structures come at a different time than more refined versions of the same structure. Maybe when we correctly allow for different levels of description having different links in the causal chain, we can unravel all of the time loops.
But How?
--------
Unfortunately, our best understanding of time is Pearlian causality, and Pearlian causality does not do great with abstraction.
Pearl has Bayes nets with a bunch of variables, but when some of those variables are coarse abstract versions of other variables, then we have to allow for determinism, since some of our variables will be deterministic functions of each other; and the best parts of Pearl do not do well with determinism.
But the problem runs deeper than that. If we draw an arrow in the direction of the deterministic function, we will be drawing an arrow of time from the more refined version of the structure to the coarser version of that structure, which is in the opposite direction of all of our examples.
Maybe we could avoid drawing this arrow from the more refined node to the coarser node, and instead have a path from the coarser node to the refined node. But then we could just make another copy of the coarser node that is deterministically downstream of the more refined node, adding no new degrees of freedom. What is then stopping us from swapping the two copies of the coarser node?
Overall, it seems to me that Pearl is not ready for some of the nodes to be abstract versions of other nodes, which I think needs to be fixed in order to save time.
|
05188456-9bc0-41a6-8638-28463566395e
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Thoughts on gradient hacking
Gradient hacking is the hypothesised phenomenon of a machine learning model, during training, deliberately thinking in ways which guide gradient descent to update its parameters in the directions it desires. The key intuition here is that because the loss landscape of a model is based on the cognition it does, models can make decisions for the purpose of affecting their loss landscapes, thereby affecting the directions in which they are updated. [Evan writes](https://www.alignmentforum.org/posts/uXH4r6MmKPedk8rMA/gradient-hacking):
> I don't think [gradient hacking] is actually that difficult once your deceptive model comes up with it as a strategy. First, note that since gradient descent is a local optimization process, it doesn't have to somehow change what the global minima might be, or even what the local minima are—it just has to create a ridge in the loss landscape that will push the gradient in the direction it wants.
>
> One example of this is for the model to check if its objective satisfies some criterion, and fail hard if it doesn't. Now, if the model gets to the point where it's actually just failing because of this, then gradient descent will probably just remove that check—but the trick is never to actually get there. By having such a check in the first place, the model makes it so that gradient descent won't actually change its objective, since any change to its objective (keeping all the other parameters fixed, which is what gradient descent does since it computes partial derivatives) would lead to such a failure.
>
>
But I don't think this works as described, because of the way that gradient descent operates. Suppose that an agent has some trait which gradient descent is trying to push in direction x because the x-ness of that trait contributes to the agent’s high score; and that the agent wants to use gradient hacking to prevent this. Consider three possible strategies that the agent might try to implement, upon noticing that the x-component of the trait has increased:
1. **Deterministically failing hard above a sharp threshold of the x-component.** In this case, the closer that gradient descent gets to the threshold, the better it will do. But since the threshold is sharp, the gradient will continue pointing towards it right until the agent hits the threshold. And since gradient updates proceed via discrete steps, it will eventually cross over (causing the agent to fail), bounce back, then cross over again, until something else changes (a toy simulation of this bouncing appears after this list).
2. **Deterministically increasing the extent to which it fails as the x-component increases.** Here, once the cost of failure balances the benefit of increasing the x-component, I expect gradient descent to maintain roughly the same x-component, and the agent to continue failing in small ways, until something else changes.
3. **Gradually increasing its probability of failing hard as the x-component increases.** This is more like case 1 than case 2, because gradient descent (as applied to reinforcement learning agents) doesn’t have access to *expected* reward - only to the reward actually received. So this will make no difference until the agent fails for the first time, after which it will bounce back and forth until something else changes.
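To make strategy 1 concrete, here is a minimal toy simulation (mine, not from Evan's post) of a one-dimensional "x-component" being pushed up by a sampled-gradient update, with a deliberate hard failure above a sharp threshold. The reward function, threshold, policy parameters, and score-function estimator are all illustrative assumptions standing in for a real RL training setup:

```python
import numpy as np

THRESHOLD = 1.0

def reward(action):
    # Reward grows with the "x-component" of the action, but the agent
    # deliberately fails hard whenever it exceeds a sharp threshold
    # (strategy 1 above).
    return action - (10.0 if action > THRESHOLD else 0.0)

rng = np.random.default_rng(0)
theta, sigma, lr = 0.0, 0.2, 0.005  # Gaussian policy mean, stddev, step size
baseline, failures = 0.0, 0

for step in range(2000):
    action = theta + sigma * rng.normal()
    r = reward(action)
    failures += int(action > THRESHOLD)
    # REINFORCE-style score-function gradient estimate: the optimiser only
    # sees rewards actually received, never the expected reward.
    grad_est = (r - baseline) * (action - theta) / sigma ** 2
    theta += lr * grad_est
    baseline = 0.9 * baseline + 0.1 * r

print(f"final policy mean = {theta:.2f} (threshold = {THRESHOLD}), failures = {failures}")
# Typical outcome: theta climbs toward the threshold, repeatedly overshoots
# (the agent really does fail), gets pushed back down, and climbs again:
# the bouncing behaviour described in strategy 1.
```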
This seems to rule out gradient hacking via the method that Evan described, of having a hypothetical commitment that never actually needs to be implemented. But if the agent is repeatedly carrying out its commitment to fail, then there’ll be pretty strong pressure from gradient descent to change that. What changes might that pressure lead to? The two most salient options to me:
1. The agent’s commitment to carrying out gradient hacking is reduced.
2. The agent’s ability to notice changes implemented by gradient descent is reduced.
Each of these makes the agent less likely to choose to fail at any given point. However, if the agent is intelligent and determined, then implementing these changes might actually be quite difficult, for many of the standard reasons that preventing intelligent agents from achieving their goals is difficult. In particular, a wide range of possible versions of the agent might agree that gradient hacking is a good idea, and vigilantly try to notice the effects that gradient updates have on their cognition.
One possible objection here: the first time an agent decides to gradient hack, its weights will be close to the region it was in when it *wasn’t* gradient hacking - so why can’t the optimiser just push it back to that region? One way the agent might prevent this from happening is by leaving itself messages which persist across episodes. This could permanently shift the loss landscape, if seeing the message makes future versions of the agent more likely to gradient hack. By contrast, if the agent’s environment and memory are totally reset after every episode (as is the standard assumption in RL theory, although impossible for any agent that interacts with the real world) then it seems much harder for that agent to start gradient hacking without being pushed back to a pre-hacking state by gradient descent. So perhaps we can think of gradient hacking as one more phenomenon (similar to reward tampering, but more subtle) which can be traced back to a mismatch between standard RL assumptions (in particular, stationarity) and the [embeddedness](https://www.alignmentforum.org/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version) of the real world.
This has all been fairly speculative, but I’ll finish with two ideas that I’m even more uncertain about. Firstly, could an agent gradient hack without ever changing its behaviour, just by making certain reasoning processes more accessible to gradient descent? For example, suppose that an agent has a goal G, which motivates it to take action A at the end of the episode and receive high reward. It could either reason "G implies I should do A" at the beginning of the episode, and store the cached thought “take action A”. Or it could reason "G implies I should do A" at the end of the episode, then do A directly. Either way it will take action A, but since it’s so hard to propagate gradients backwards over a long period, G might be more strongly reinforced in the latter case. (I haven’t thought about how this interacts with different RL algorithms, though.)
Secondly, in a comment on an earlier version of this post, Evan responded:
> I’m most concerned about gradient hacking by a deceptive model, which I expect to basically have perfect loss - thus, there is no pressure coming from the loss function against x-ness. Rather, the pressure that the model has to fight against in its gradient hacking is exclusively from inductive biases, which I think changes things from your setup quite a bit. If what you're fighting against is drift due to the randomness of SGD, rather than changes that would actually increase your performance, then, for example, I think strategy 2 just straightforwardly works.
>
>
To me, the perfect loss regime doesn’t seem particularly relevant, since training AGIs will require them to interact with environments which are far too complex for them to score perfectly on (like the real world); and because we can just keep setting them harder tasks. But I may well be missing something, so I'd be interested to hear more about this regime and why it might matter.
|
1130e286-3e08-483b-9119-c8301652e018
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Popular media coverage of Singularity Summit -the Verge [link]
http://www.theverge.com/2012/10/22/3535518/singularity-rapture-of-the-nerds-gods-end-human-race
|
aea7a664-a7f8-465a-aaf6-6bb47ce14348
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Frequentist Statistics are Frequently Subjective
Andrew Gelman recently responded to a commenter on the Yudkowsky/Gelman diavlog; the commenter complained that Bayesian statistics were too subjective and lacked rigor. I shall explain why this is unbelievably ironic, but first, the comment itself:
> However, the fundamental belief of the Bayesian interpretation, that all probabilities are subjective, is problematic -- for its lack of rigor... One of the features of frequentist statistics is the ease of testability. Consider a binomial variable, like the flip of a fair coin. I can calculate that the probability of getting seven heads in ten flips is 11.71875%... At some point a departure from the predicted value may appear, and frequentist statistics give objective confidence intervals that can precisely quantify the degree to which the coin departs from fairness...
Gelman's first response is "Bayesian probabilities don't have to be subjective." Not sure I can back him on that; probability is ignorance and ignorance is a state of mind (although indeed, some Bayesian probabilities can correspond very directly to observable frequencies in repeatable experiments).
My own response is that frequentist statistics are far more subjective than Bayesian likelihood ratios. Exhibit One is the notion of "statistical significance" (which is what the above comment is actually talking about, although "confidence intervals" have almost the same problem). Steven Goodman offers a nicely illustrated example: Suppose we have at hand a coin, which may be fair (the "null hypothesis") or perhaps biased in some direction. So lo and behold, I flip the coin six times, and I get the result TTTTTH. Is this result statistically significant, and if so, what is the p-value - that is, the probability of obtaining a result at least this extreme?
Well, that depends. Was I planning to flip the coin six times, and count the number of tails? Or was I planning to flip the coin until it came up heads, and count the number of trials? In
|
1bc2e88d-7b5a-4243-9934-9dca167783f4
|
StampyAI/alignment-research-dataset/arbital
|
Arbital
|
Free group universal property
The [https://arbital.com/p/-600](https://arbital.com/p/-600) of the [https://arbital.com/p/-5kg](https://arbital.com/p/-5kg) basically tells us that "the definition of the free group doesn't depend (up to isomorphism) on the exact details of the set $X$ we picked; only on its [https://arbital.com/p/-4w5](https://arbital.com/p/-4w5)", which is morally a very useful thing to know.
You may skip down to the next subheading if you are scared of category theory, but the property itself doesn't need category theory and is helpful.
The universal property is the technical [category-theoretic](https://arbital.com/p/4c7) fact that [the free-group functor is left adjoint to the forgetful functor](https://arbital.com/p/free_group_functor_left_adjoint_to_forgetful), and it is not so immediately useful as the other more concrete properties on this page, but it is exceedingly important in category theory as a very natural example of a [pair of adjoint functors](https://arbital.com/p/adjoint_functor) and as an example for the [https://arbital.com/p/-general_adjoint_functor_theorem](https://arbital.com/p/-general_adjoint_functor_theorem).
# Statement and explanation
The universal property which characterises the free group is:
> The free group $FX$ on the set $X$ is the group, unique up to isomorphism, such that for any group $G$ and any [function of sets](https://arbital.com/p/3jy) $f: X \to G$ %%note:Here we're slightly abusing notation: we've written $G$ for the [https://arbital.com/p/-3gz](https://arbital.com/p/-3gz) of the group $G$ here.%%, there is a unique [https://arbital.com/p/-47t](https://arbital.com/p/-47t) $\overline{f}: FX \to G$ such that $\overline{f}(\rho_{a_1} \rho_{a_2} \dots \rho_{a_n}) = f(a_1) \cdot f(a_2) \cdot \dots \cdot f(a_n)$.
This looks very opaque at first sight, but what it says is that $FX$ is the unique group such that:
> Given any target group $G$, we can extend any map $f: X \to G$ to a unique homomorphism $FX \to G$, in the sense that whenever we're given the image of each generator (that is, member of $X$) by $f$, the laws of a group homomorphism force exactly where every other element of $FX$ must go.
That is, we can specify homomorphisms from $FX$ by specifying where the generators go, and moreover, *every* possible such specification does indeed correspond to a homomorphism.
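As a quick worked example (an illustration added here, not part of the original statement): take $X = \{a, b\}$, $G = (\mathbb{Z}, +)$, and the set-function $f(a) = 2$, $f(b) = 3$. The universal property says there is exactly one homomorphism $\overline{f}: FX \to \mathbb{Z}$ extending $f$, and it is computed by replacing each generator in a word by its image (and each inverse by the negative): for instance $\overline{f}(a b a^{-1} b) = 2 + 3 - 2 + 3 = 6$. Nothing constrains the choice of $f(a)$ and $f(b)$, which is exactly what it means for $FX$ to be "free" of relations.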
# Why is this a non-trivial property?
Consider the cyclic group $C_3$ with three elements; say $\{ e, a, b\}$ with $e$ the identity and $a + a = b$, $a+b = e = b+a$, and $b+b = a$.
Then this group is generated by the element $a$, because $a=a$, $a+a = b$, and $a+a+a = e$.
Let us pick $G = (\mathbb{Z}, +)$.
We'll try and define a map $f: C_3 \to \mathbb{Z}$ by $a \mapsto 1$.
If $C_3$ had the universal property of the free group on $\{ e, a, b\}$, then we would be able to find a homomorphism $\overline{f}: C_3 \to \mathbb{Z}$, such that $\overline{f}(a) = 1$ (that is, mimicking the action of the set-function $f$).
But in fact, no such homomorphism can exist, because if $\overline{f}$ were such a homomorphism, then $\overline{f}(e) = \overline{f}(a+a+a) = 1+1+1 = 3$ so $\overline{f}(e) = 3$, which contradicts that [the image of the identity under a group homomorphism is the identity](https://arbital.com/p/49z).
In essence, $C_3$ "has extra relations" (namely that $a+a+a = e$) which the free group doesn't have, and which can thwart the attempt to define $\overline{f}$; this is reflected in the fact that $C_3$ fails to have the universal property.
A proof of the universal property may be found [elsewhere](https://arbital.com/p/free_group_satisfies_universal_property).
|
31356544-4ed4-4b03-876b-a023b38cc59d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Deck Guide: Burning Drakes
[Note to rationalists: This is a straight up strategy article on Magic: The Gathering that the website I write for did not have room for, so if you don’t play, skip this.]
[Note to Magic websites: You have permission to post this article on your site anywhere that is not behind a paywall, provided you are clear that I wrote it, provide a prominent link back to the original post and a note that I will respond to comments here. You can delete these three notes.]
[Note to Magic players: This is Zvi Mowshowitz’s personal blog, which focuses on a variety of topics and is mostly not about Magic: The Gathering. See here for what I consider my best older posts.]
At the Pro Tour, I played an Izzet Drakes deck without Arclight Phoenix. Like all decks it will require adjustments, but I believe the deck is strong and well-positioned going forward. Alas, I could not win a game of limited and I had no teammates, so the deck and I were done for the weekend after its 3-2 start.
Here is the list I played:
Main Deck:
4 Steam Vents
6 Mountain
2 Evolving Wilds
7 Island
4 Sulfur Falls
4 Enigma Drake
4 Crackling Drake
2 Rekindling Phoenix
3 Banefire
2 Shivan Fire
4 Opt
3 Anticipate
1 Divination
1 Search for Azcanta
2 Chemister’s Insight
2 Fight With Fire
2 Ionize
4 Shock
3 Lightning Strike
Sideboard:
1 Rekindling Phoenix
3 Lava Coil
2 Dive Down
2 Inescapable Blaze
3 Negate
1 Search for Azcanta
1 Banefire
2 Fiery Cannonade
Going forward, I would want to work two or three Entrancing Melody into the sideboard, with the puzzle being doing this while retaining a strong post-sideboard configuration against control. I’ll go over the origins of the build, then talk about various matchups.
As I was exploring the Arena free play experience, one of the fifteen decks held far more promise than the others. Wrath of Mages had a bunch of terrible cards with no payoffs, but it also had the core of an Enigma Drake strategy. Taking out the bad cards and putting in good cards li
|
7f4f52e7-0f28-4c8f-a999-1584de7df3aa
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Digital Immortality Map: How to collect enough information about yourself for future resurrection by AI
If someone has died it doesn’t mean that you should stop trying to return him to life. There is one clear thing that you should do (after cryonics): collect as much information about the person as possible, as well as store his DNA sample, and hope that future AI will return him to life based on this information.
Two meanings of “Digital immortality”
The term “Digital immortality” is often confused with the notion of mind uploading, as the end result is almost the same: a simulated brain in a computer. https://en.wikipedia.org/wiki/Digital_immortality
But here, by the term "Digital immortality" I mean the reconstruction of a person by future AI, based on his digital footprint and other traces, after that person's death.
Mind uploading in the future will happen while the original is still alive (or while the brain exists in a frozen state) and will be connected to a computer by some kind of sophisticated interface, or the brain will be scanned. It cannot be done currently.
On the other hand, reconstruction based on traces will be done by future AI. So we just need to leave enough traces and we could do it now.
But we don't know how many traces are enough, so basically we should try to produce and preserve as many traces as possible. However, not all traces are equal in their predictive value. Some are almost random, and others are so common that they do not provide any new information about the person.
Cheapest way to immortality
Creating traces is an affordable way of reaching immortality. It could even be done for another person after his death, if we start to collect all possible information about him.
Basically I am surprised that people don’t do it all the time. It could be done in a simple form almost for free and in the background – just start a video recording app on your notebook, and record everything into shared folder connected with a free cloud. (Evocam program for Mac is excellent, and mail.ru provides up 100gb free).
But really good digital
|
a56f0704-84b1-4c56-b556-67b016fd417c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[link] Why Self-Control Seems (but may not be) Limited
Another attack on the resource-based model of willpower, Michael Inzlicht, Brandon J. Schmeichel and C. Neil Macrae have a paper called "Why Self-Control Seems (but may not be) Limited" in press in Trends in Cognitive Sciences. Ungated version here.
Some of the most interesting points:
* Over 100 studies appear to be consistent with self-control being a limited resource, but generally these studies do not observe resource depletion directly, but infer it from whether or not people's performance declines in a second self-control task.
* The only attempts to directly measure the loss or gain of a resource have been studies measuring blood glucose, but these studies have serious limitations, the most important being an inability to replicate evidence of mental effort actually affecting the level of glucose in the blood.
* Self-control also seems to be replenished by things such as "watching a favorite television program, affirming some core value, or even praying", which would seem to conflict with the hypothesis of inherent resource limitations. The resource-based model also seems evolutionarily implausible.
The authors offer their own theory of self-control. One-sentence summary (my formulation, not from the paper): "Our brains don't want to only work, because by doing some play on the side, we may come to discover things that will allow us to do even more valuable work."
* Ultimately, self-control limitations are proposed to be an exploration-exploitation tradeoff, "regulating the extent to which the control system favors task engagement (exploitation) versus task disengagement and sampling of other opportunities (exploration)".
* Research suggests that cognitive effort is inherently aversive, and that after humans have worked on some task for a while, "ever more resources are needed to counteract the aversiveness of work, or else people will gravitate toward inherently rewarding leisure instead". According to the model proposed by the authors, this allows the orga
|
7943d644-3a38-499d-9f3f-5a0e29e9e0b6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A Refutation of (Global) "Happiness Maximization"
NB: This is a follow-up to my previous essay The Criminal Stupidity of Intelligent People, as cross-posted to my blog: http://fare.livejournal.com/168562.html.
As promised, here is a detailed rational refutation of the utilitarian concept of (Global) "Happiness Maximization" in general, and its specific variant that concludes in favor of wireheading, the cultivation of individuals in vats in a permanent state of artificially stimulated bliss.
Wireheading
To recall the background, I recently had a discussion with a very intelligent and extremely well-meaning colleague of mine, Jeff Kaufman, who, as an aside during an evening he organized to foster efficient philanthropy, was seriously defending the utilitarian ideal of "Happiness Maximization" to its wantonly absurd consequence, wireheading: putting humans, by force if needs be, into vats, where the pleasure centers of their brains will be constantly stimulated for lifelong bliss through properly implanted electric wires and/or chemical injections. Or perhaps instead of humans, the Utilitarian Elite would use rats, or ants, or some brain cell cultures or perhaps nano-electronic simulations of such electro-chemical stimulations; in the latter cases, biological humans, being less-efficient forms of happiness substrate, would be done away with or at least not renewed as embodiments of the Holy Happiness to be maximized. He even wrote at least two blog posts on this theme: hedonic vs preference utilitarianism in the Context of Wireheading, Value of a Computational Process.
Failing the Smell Test
Now, in my previous post, The Criminal Stupidity of Intelligent People, I gave several blanket principles by which one might reject such mass-criminal schemes. Indeed, there is no question that Jeff is much more intelligent than most (I would notably wager that he's more intelligent than I, as measurable by an IQ test), and much more knowledgeable of this topic than anyone; few could oppose him a rational refutation like I'm
|
049c64ca-38cc-4873-b9a8-bb4ebf0fb4c2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
LLMs may enable direct democracy at scale
American democracy currently operates far below its theoretical ideal. An ideal democracy precisely captures and represents the nuanced collective desires of its constituents, synthesizing diverse individual preferences into coherent, actionable policy.
Today's system offers no direct path for citizens to express individual priorities. Instead, voters select candidates whose platforms only approximately match their views, guess at which governmental level—local, state, or federal—addresses their concerns, and ultimately rely on representatives who often imperfectly or inaccurately reflect voter intentions. As a result, issues affecting geographically dispersed groups—such as civil rights related to race, gender, or sexuality—are frequently overshadowed by localized interests. This distortion produces presidential candidates more closely aligned with each other's socioeconomic profiles than with the median voter.
Traditionally, aggregating individual preferences required simplifying complex desires into binary candidate selections, due to cognitive and communicative limitations. Large Language Models (LLMs), however, introduce a radical alternative by processing detailed, nuanced expressions of individual views at unprecedented scales.
Instead of forcing preferences into narrow candidate choices, citizens could freely articulate their concerns and solutions in natural language. An LLM can rapidly integrate these numerous, detailed responses into a clear and unified "Collective Views" document. Previously, synthesizing a hundred individual perspectives might have required five person-hours; specialized LLMs can now accomplish this task in minutes. Parallel implementations could aggregate millions of voices within an hour, transforming a previously unimaginable task into routine practice.
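As a rough illustration of the mechanism (my sketch, not a description of any existing system), a hierarchical map-reduce over free-text responses is one way such aggregation could be parallelized. Here `call_llm` is a hypothetical placeholder for whatever LLM API is used, and the prompt wording and batch size are arbitrary choices:

```python
from typing import List

BATCH = 50  # arbitrary: how many responses to summarize per LLM call

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM API call."""
    raise NotImplementedError

def summarize_batch(responses: List[str]) -> str:
    joined = "\n---\n".join(responses)
    return call_llm(
        "Synthesize the following citizen responses into a short, neutral "
        "summary of shared concerns, disagreements, and proposed solutions:\n"
        + joined
    )

def collective_views(responses: List[str]) -> str:
    # Reduce the responses layer by layer until one document remains.
    if not responses:
        return ""
    layer = responses
    while len(layer) > 1:
        layer = [
            summarize_batch(layer[i:i + BATCH])
            for i in range(0, len(layer), BATCH)
        ]
    return layer[0]
```

Because each layer's batches are independent, the calls within a layer can run in parallel, which is what would make aggregating millions of responses fast in wall-clock time.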
Such rapidly generated collective statements create a powerful mechanism for accountability, making government responsiveness directly measurable against clearly articulated publi
|
2ee20d54-e133-45c2-8000-66e3d1823a60
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Web Audio Echo?
Update 2020-04-19: This actually can be done fully in AudioWorklet: echo-demo-v2. Thanks stellartux!
The hello world of audio processing is an echo:
def processAudio(samples):
    return samples
You take a buffer of samples and then return them unmodified, echoing the audio back out. For example, when I built my bass whistle, that's where I started.
Today David and I were trying to see if we could build my bucket-brigade singing idea in the browser, and we couldn't find a fully worked echo example. I now have something that works, but it's kind of silly and it seems like there must be a better way.
The way I would expect this to work with the (new, experimental, not supported everywhere yet) Web Audio API is that you write an AudioWorklet. This is a way to set up JS to run off the main thread, where number crunching won't block the UI and make the page unresponsive. I'd like to attach the worklet's input to the microphone, output to the speaker, and that would be it. Except, as far as I can tell, there's no way to pipe the microphone directly to an AudioWorklet.
To read audio samples from the user the recommendation is to make a ScriptProcessorNode and give it an audioprocess callback. This runs on the main thread, though, and so when you look at MDN this is marked as deprecated. It says to use AudioWorklet instead, and I wish I could!
What I ended up doing was recording the audio on the main thread, messaging that over to the worklet, and then having the worklet play it. Here's a demo, which relies on experimental Web Audio features that so far are only implemented in Blink-based browsers: echo-demo.
Comment via: facebook
|
89069f43-010b-4ce0-abcf-783367187451
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : London - The Schelling Point Strategy Game (plus socials)
Discussion article for the meetup : London - The Schelling Point Strategy Game (plus socials)
WHEN: 19 January 2014 02:00:00PM (+0000)
WHERE: Shakespeare's Head, Holborn, WC2B 6BG
For the first London practical session of 2014, we will be playing The Schelling Point Strategy Game on the 19th of January. All attendees will have to deduce where all other attendees are going to meet in order to participate.
(Hint: it's probably the listed venue. Please don't try and look for us at the information desk at Grand Central Station in New York. We will not be there.)
(Just to be absolutely sure, in case that joke fell completely flat, we will be meeting at the listed, regular venue, the Shakespeare's Head pub in Holborn. Go there.)
The Schelling Point Strategy Game will feature two teams in competition with each other. Not only must each team come up with clever strategies to coordinate amongst themselves on multiple-choice problems, they must also construct fiendish problems to confound the other team's strategies.
No previous knowledge of game theory is required. The session will begin with a brief practical introduction to the relevant concepts before dividing into teams. Once the game is concluded we will spend some time discussing what people came up with. There is typically social / unstructured discussion afterwards.
Please be in attendance by 2pm, as we will be starting shortly thereafter. If you have problems finding the venue or locating us in there, ring 07887 718458 and we'll come and find you.
Plus Socials on the 12th and 26th
We are also having social meetups on Sunday 12th of January and Sunday 26th of January, also at 2pm at the same venue. These are unstructured gatherings without a set agenda or discussion topic. Come along and talk about interesting things with interesting people. We make efforts to welcome newcomers and visitors.
(For the past few months, with the exception of a break over the festive season, we have by default been holding soci
|
a35c4e1b-c690-4342-865d-714c1a8a718b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Planning to build a cryptographic box with perfect secrecy
Summary
Since September 2023, I started learning a lot of math and programming skills in order to develop the safest cryptographic box in the world (and yes, I am aiming high). In these four months, I learned important things you may want to know:
* Fully Homomorphic Encryption (FHE) schemes with perfect secrecy do exist.
* These FHE schemes do not need any computational assumption.
* These FHE schemes are tractable (in the worst case, encrypting a program before running it makes it three times slower).
* We can therefore run infinitely dangerous programs without obtaining any information about them or their outputs. This may be useful in order to run a superintelligence without destroying the world.
* However, these schemes work only on quantum computers.
In this post, I will firstly talk about how I learned about this FHE scheme, then I will explain my plan for making this cryptographic box, and finally, I will mention some ethical concerns about this cryptographic box.
Before reading this post, I recommend you to read this post by Paul Christiano, and the comments that go with it. These are very informative, and they sharpened my views for this project. Paul Christiano presents a way to extract a friendly AI from an unfriendly one. This being only one example of what can be done with a cryptographic box, I will mostly consider cryptographic boxes as a solution to a problem that I call the malign computation problem.
Introduction
In August 2022, I started reading AGI Safety Literature Review. At one point, the authors tell this:
> One way to box an AGI is to homomorphically encrypt it. Trask (2017) shows how to train homomorphically encrypted neural networks. By homomorphically encrypting an AGI, its predictions and actions also come out encrypted. A human operator with the secret key can choose to decrypt them only when he wants to.
When I have read this for the first time, I told myself that I should check this work because it seemed important.
And
|
bfe2ab79-0171-4a43-872a-74c70e0f4dd5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Pantheon Interface
Pantheon is an experimental LLM interface exploring a different type of human-AI interaction. We created this as a part of the cyborgism project, with the abstract motivation of augmenting the human ability to think by integrating human and AI generated thoughts.
How it works:
1. A human user “thinks out loud” by typing out their thoughts one at a time. This leaves a text trace of their stream of thought.
2. AI characters (called daemons) read this trace, and interact with the user by responding asynchronously with comments and questions.
The core distinguishing philosophy is that, while most apps are about a human prompting an AI to do useful mental work, Pantheon is the opposite. Here, AI does the prompting, and the goal is for the AI generated questions or comments to cause the human user to think in ways they would not have on their own. At worst, the app is a rubber duck. At best, the app is a court of advisors, each using their own unique skills to push you to think your best thoughts.
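To make the interaction pattern concrete, here is a minimal sketch of the core loop as I understand it from the description above. This is not Pantheon's actual code, and `call_llm` is a hypothetical placeholder for an LLM API call:

```python
from dataclasses import dataclass, field
from typing import List

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM API call."""
    raise NotImplementedError

@dataclass
class Daemon:
    name: str
    persona: str  # e.g. "Ask probing questions about hidden assumptions."

    def respond(self, trace: List[str]) -> str:
        return call_llm(
            f"You are {self.name}. {self.persona}\n"
            "Here is the user's stream of thought so far:\n"
            + "\n".join(trace)
            + "\nReply with one short comment or question."
        )

@dataclass
class Session:
    daemons: List[Daemon]
    trace: List[str] = field(default_factory=list)

    def add_thought(self, thought: str) -> List[str]:
        # The human thinks out loud; each daemon reads the trace and
        # prompts the human back, rather than the other way around.
        self.trace.append(thought)
        return [f"{d.name}: {d.respond(self.trace)}" for d in self.daemons]
```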
Pantheon can be found at pantheon.chat, and we would really appreciate any and all feedback you have.
The app is set up for you to customize your own daemons. We have set up some default daemons to provide inspiration, but we expect the tool to be a lot more useful when they are customized to specific users. If the default daemons don’t feel useful, we highly encourage you to try to make your own.
How do I use Pantheon?
First, go to settings and provide an OpenAI API key.[1]
Next, begin typing out your thoughts on some topic. It helps to keep each thought relatively short, sending them to the stream of thought as often as you can. This gives the daemons lots of opportunities to interject and offer their comments. Furthermore, it’s usually best to treat this more like a diary or personal notes, rather than as a conversation. In this spirit, it’s better not to wait for them to respond, but to instead continue your train of thought, keeping your focus on your own writing.
|
55c2e153-0ca8-4a79-a674-1d3d4ecfd82a
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
'Artificial Intelligence Governance under Change' (PhD dissertation)
*This is a link for my PhD dissertation, ‘[Artificial Intelligence Governance under Change: Foundations, Facets, Frameworks](https://drive.google.com/file/d/1vIJUAp_i41A5gc9Tb9EvO9aSuLn15ixq/view)’. I originally submitted this to the University of Copenhagen’s Faculty of Law in September 2020, and presented it in April 2021 (see also defense presentation [slides](https://drive.google.com/file/d/1Lx8lrdhynzs8Q01aPA9mTXgEBrdlRaso/view?usp=sharing) & [handout](https://drive.google.com/file/d/1cTwOUVAa-rPf68aGzn4dEnNj1CFJm9G_/view?usp=sharing)).*
TLDR: this dissertation discusses approaches and choices in regime design for the global governance of advanced (and transformative) AI. To do so, it draws on concepts and frameworks from the fields of technology regulation (‘sociotechnical change’; ‘governance disruption’), international law, and global governance studies (‘regime complexity’).
In slogan form, the project explores how we may govern a changing technology, in a changing world, using governance systems that may themselves be left changed.
**In slightly more detail**: the project discusses how AI governance regimes and institutions may need to be adapted, to take into account three facets of ‘change’ that will characterize or at least influence (T)AI governance in the coming few decades:
1. the ways in which (especially under continual multipolar deployment scenarios) **ongoing changes in AI capabilities and downstream applications will tend to create constant adaptation pressure** on earlier AI-focused treaties, institutions, or legal norms, and unclarity for *when* and *why* new instruments are needed (lens: *sociotechnical change*)
2. the ways in which **AI systems may be increasingly used within international law, to support (or to contest) the establishment, monitoring, enforcement and/or arbitration of international treaty regimes**, thereby shifting the possibility frontier on global cooperation (lens: *governance disruption*)
3. The ways in which **the architecture of global governance is undergoing structural changes over the last two decades,** in ways that affect what kind of instruments (e.g. comprehensive formal treaty regimes vs. fragmented ecologies of informal institutions) will be more or less viable in governing (T)AI (lens: *regime complexity*).
**Sections that may be of particular interest to this audience**, include:
* Chapter 2.1 (a balanced argument on the likely stakes of AI, written for an unfamiliar–and likely skeptical–audience);
* Chapter 2.2 (characterization of AI as global governance problem) & 2.3. (overview of AI governance avenues and instruments under international law);
* Chapter 4.4. (overview of distinct types of ‘problem logics’ of six types of AI governance challenges)
* Chapter 5.1 & 5.2 (history & taxonomy of the ways in which new technologies can disrupt, automate, or augment international law, with application to AI)
* Chapter 6.1 & 6.2 (primer on ‘regime complexity’ framework and approach to fragmented global governance)
* Chapter 7 (step-by-step framework discussing how to analyze an (emerging/proposed) AI governance regime in terms of its: (1) origins and foundations; (2) topology and organisation; (3) evolution over time; (4) effects of its organization, and (5) strategies to maintain AI governance regime efficacy, resilience and coherence;
*Why post this now?* Obviously, a tremendous amount has happened in the two years since I submitted this–in terms of technical AI progress; -in global AI governance developments; and in community-internal debates over Transformative AI timelines,[[1]](#fn-Z32MksJLjngiSQaQb-1) -risks,[[2]](#fn-Z32MksJLjngiSQaQb-2) and -governance.[[3]](#fn-Z32MksJLjngiSQaQb-3) Nonetheless, I felt I’d share this now, as I believe that:
1. most of the dissertation’s analysis, especially its AI governance-specific claims, have held up very well, fairly well, or functionally well;[[4]](#fn-Z32MksJLjngiSQaQb-4)
2. the project introduces a set of more general governance design frameworks, with links to existing bodies of work on institutional design and legal automation, which remain relevant and underexplored for transformative AI governance, and
3. I’ve been told by a bunch of people over the years that they found the manuscript a useful overview, in terms of understanding the range of governance tools available, but also in communicating the stakes of TAI governance to academics that are more new to the field, in disciplinary terms.
I hope it is of interest or use, and welcome feedback!
Notes
-----
---
1. E.g. [Cotra 2022](https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines); [↩︎](#fnref-Z32MksJLjngiSQaQb-1)
2. E.g. [Carlsmith 2021](http://arxiv.org/abs/2206.13353); [Hilton 2022](https://80000hours.org/problem-profiles/artificial-intelligence/); [2021](https://intelligence.org/late-2021-miri-conversations/)/[2022](https://www.alignmentforum.org/s/v55BhXbpJuaExkpcD) MIRI conversations, etc. [↩︎](#fnref-Z32MksJLjngiSQaQb-2)
3. E.g. [Dafoe 2020](https://www.allandafoe.com/opportunity); [Leung 2022](https://www.youtube.com/watch?v=OfmL45J6tYs); etc. [↩︎](#fnref-Z32MksJLjngiSQaQb-3)
4. Though there are a number of conclusions I might adapt or refine further, looking back. This coming year, I will also spend some time updating the project as I rewrite it as an upcoming Oxford University Press book on long-term AI governance ([link](https://www.cser.ac.uk/news/oxford-university-press-publish-architectures-arti/)). I would therefore welcome input on any sections or claims that you believe have not held up well. [↩︎](#fnref-Z32MksJLjngiSQaQb-4)
|
7d020873-9204-4f95-af9a-50f395d81d11
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Thread: Reflections on the AGI Safety Fundamentals course?
I recently completed the [AGI Safety Fundamentals course](https://www.agisafetyfundamentals.com/) (on [AI alignment](https://www.agisafetyfundamentals.com/ai-alignment-curriculum)) and really enjoyed it.
I imagine lots of people on the forum have taken the course and now would be a good time to share reflections. I’m hoping this thread might be particularly useful for people considering taking it in future.
I’d be interested to hear many different kinds of reflections, but here are some prompts:
1. What do you think you’ve learnt? How has it impacted your plans, if it has?
2. Did you think it was worth the time?
3. Who would you recommend it to?
4. Do you wish you had done something differently, or known something in advance? (Any other advice?)
I’ll leave my own reflections in the comments.

|
5b1e2495-6d8f-4276-b15a-479b6ae834b0
|
trentmkelly/LessWrong-43k
|
LessWrong
|
"warning about ai doom" is also "announcing capabilities progress to noobs"
(Some recommended writing style rules slightly broken because I'm lazy and have followed them a lot before. Shrug. Contribute edits if you care, I guess.)
----------------------------------------
A lot of people who pay some attention to capabilities don't know about some capabilities insights. Loudly announcing "hey, we're close to ai doom!" is also a way of informing those who weren't paying much attention what capabilities approaches have made progress. Obviously the highest capability people already knew, but those who didn't and who nevertheless think ai doom is unlikely and just want to participate in (useless-to-their values-because-they-can't-control-the-ai, -but-they-don't-believe-that) powerseeking go "oh wow, thank you for warning me about these ai capabilities! I'll make sure to avoid them!" and rub their hands together in glee.
These people are already here on lesswrong. There are a number of them. Some of them are somewhat alignmentpilled and want to figure out how to align the systems they build, because they've realized that like, actually - powerseeking where your intentions are suddenly eclipsed by the fact that your carefully bred power-obtainer AI generates powerseeking complexity-free-squiggle wanters who kill the powerseeking complex-and-beautiful-art wanters from your system before it can take over the world, yeah that's useless, even the antihumanists want to make AIs that will create AIs aligned enough to create human torture machines, but they don't want to make useless microscopic-squiggle seekers and some of them realize that and chill out a little about the urgency of making antihumanism-wanters.
But many of them don't get the threat from squigglewanters, and think that they're sticking it to the human-wanters by making human-hater AIs (or whatever), completely failing to realize that even making human-haters stay stable is catastrophically hard once the systems are strongly superintelligent. And the ones who are like "nah there's no
|
1a4e4d7e-671e-490f-9fc9-12056d25c363
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How evolution succeeds and fails at value alignment
Disclaimer: I don't have a background in alignment research or reinforcement learning, and I don't think any of the ideas discussed here are new, but they might be interesting to some.
A recent post suggested that humans provide an untapped wealth of evidence about alignment. I strongly agree with that and I found it interesting to think about how nature can ensure that a mother's values are aligned with the well-being of her children.
One particular thing this made me realize is that there are two very different reasons for why an agentic AGI might behave in a way that we would characterize as misaligned:
1. The reward function was poorly specified, and the model didn't learn the values we wanted it to learn
2. The model did learn the values we wanted it to learn, but those values led it to conclusions and actions that we did not anticipate
Let's go back to thinking about how evolution aligns mothers with their children. If we could get an AGI to love all humans like most (mammalian) mothers love their children, it might not necessarily solve the alignment problem, but it would be far better than having an AGI which doesn't care much about humans.
We can think of humans as reinforcement learning agents with a range of different goals, motivations and desires that are in one way or another all instrumental for the purpose of survival and reproduction in an environment similar to the one we may have found ourselves in 10,000 years ago. Some of those goals represent pretty simple values like avoiding pain or maintaining a certain blood sugar level, but others represent more complex values, like ensuring the well-being of our children. The biological link between pain or food-intake and a reward signal like dopamine can be quite simple, but how is something like "care for your children" encoded as a value in a way that generalizes to out-of-distribution environments?
Two examples for misaligned AGIs that were discussed around here are a system that is supposed to
|
23b875bb-e9d9-4b96-976d-67b97e56780c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Why did ChatGPT say that? Prompt engineering and more, with PIZZA.
All examples in this post can be found in this notebook, which is also probably the easiest way to start experimenting with PIZZA.
From the research & engineering team at Leap Laboratories (incl. @Arush, @sebastian-sosa, @Robbie McCorkell), where we use AI interpretability to accelerate scientific discovery from data.
What is attribution?
One question we might ask when interacting with machine learning models is something like: “why did this input cause that particular output?”.
If we’re working with a language model like ChatGPT, we could actually just ask this in natural language: “Why did you respond that way?” or similar – but there’s no guarantee that the model’s natural language explanation actually reflects the underlying cause of the original completion. The model’s response is conditioned on your question, and might well be different to the true cause.
Enter attribution!
Attribution in machine learning is used to explain the contribution of individual features or inputs to the final prediction made by a model. The goal is to understand which parts of the input data are most influential in determining the model's output.
What it typically looks like is a heatmap (sometimes called a ‘saliency map’) over the model inputs, for each output. It's most commonly used in computer vision – but of course these days, you're not big if you're not big in LLM-land.
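As a minimal illustration of the idea (a generic sketch, not how PIZZA itself is implemented): one simple way to get such a heatmap for a black-box LLM is to ablate each input token in turn and measure how much the model's support for its original answer drops. Here `answer_score` is a hypothetical placeholder for whatever scalar you can get from the model (e.g. a log-probability of the original completion):

```python
from typing import List, Tuple

def answer_score(prompt: str, answer: str) -> float:
    """Hypothetical placeholder: how strongly the model supports `answer`
    given `prompt` (e.g. a log-probability from the API)."""
    raise NotImplementedError

def token_attribution(tokens: List[str], answer: str) -> List[Tuple[str, float]]:
    base = answer_score(" ".join(tokens), answer)
    scores = []
    for i in range(len(tokens)):
        # Ablate one token and see how much the answer's score drops.
        ablated = tokens[:i] + tokens[i + 1:]
        drop = base - answer_score(" ".join(ablated), answer)
        scores.append((tokens[i], drop))
    return scores  # higher drop = that token mattered more for the answer
```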
So, the team at Leap present you with PIZZA: Prompt Input Z? Zonal Attribution. (In the grand scientific tradition we have tortured our acronym nearly to death. For the crimes of others see [1].) It’s an open source library that makes it easy to calculate attribution for all LLMs, even closed-source ones like ChatGPT.
An Example
GPT3.5 not so hot with the theory of mind there. Can we find out what went wrong?
That's not very helpful! We want to know why the mistake was made in the first place. Here's the attribution:
Mary 0.32 | puts 0.25 | an 0.15 | apple 0.36 | in 0.18 | the 0.18 | box 0.08 | . 0.08 | The 0.08 | box 0.09 | i
|
f1593c11-c373-4294-875d-07c9a6cd721a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Doing Despite Disliking: Self‐regulatory Strategies in Everyday Aversive Activities
A large survey of self-regulatory strategies and how reportedly effective they are.
Much more in the typical "self-regulatory" paradigm for self-help psychology, where people are assumed to have a lot more control over the strategies they choose (i.e. downplaying interactions between actions and attitudes), but I'm curious what people's thoughts are.
|
0983210d-c20d-4e05-81de-bd890041249d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
One week left for CSER researcher applications
This is the last week to apply for one of four postdoctoral research positions at the Centre for the Study of Existential Risk. We are seeking researchers in disciplines including: economics, science and technology studies, science policy, arms control policy, expert elicitation and aggregation, conservation studies and philosophy.
The application requires a research proposal of no more than 1500 words from an individual with a relevant doctorate.
"We are looking for outstanding and highly-committed researchers, interested in working as part of growing research community, with research projects relevant to any aspect of the project. We invite applicants to explain their project to us, and to demonstrate their commitment to the study of extreme technological risks.
We have several shovel-ready projects for which we are looking for suitable postdoctoral researchers. These include:
1. Ethics and evaluation of extreme technological risk (ETR) (with Sir Partha Dasgupta);
2. Horizon-scanning and foresight for extreme technological risks (with Professor William Sutherland);
3. Responsible innovation and extreme technological risk (with Dr Robert Doubleday and the Centre for Science and Policy).
However, recruitment will not necessarily be limited to these subprojects, and our main selection criterion is suitability of candidates and their proposed research projects to CSER’s broad aims.
More details are available here. Applications close on April 24th.
- Sean OH and Ryan
|
6ad98239-c26b-4574-8b61-aab32ce2ad8e
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions
I Introduction
---------------
The wireless communication industry can be considered one of the most rapidly developing sectors in technology. The innovations thriving in the telecommunication sector have laid the infrastructure for, and driven, development that has led to exponential growth in living standards. The first generation of cellular networks began the evolution of wireless communication technology in the 1980s. 5G wireless technology, primarily based on softwarization, is expected to complete the transition with significant coverage by 2025. The most noticeable feature of 5G is the cloudification of networks via a microservices-based architecture. With the start of commercialized implementation of 5G, experts predict that 6G mobile communication will become widely available in the following years [[1](#bib.bib1)].
Meanwhile, the academic community is more focused on new lines of study in advance of the beyond 5G or 6G standardization. Edge intelligence (EI), Beyond 6GHz to THz communication, Non-Orthogonal Multiple Access (NOMA), Large Intelligent Surfaces (LIS), and Zero-touch Networks have risen in recent years [[2](#bib.bib2), [3](#bib.bib3), [4](#bib.bib4)]. These concepts are being developed into the technology that will power the next generation of communication networks. There is still a long way to go in terms of 5G network capabilities to meet the needs of these applications, which need high-speed data transfer rates and real-time access to vital computing resources. IoE, enabled by 5G, seeks to connect vast numbers of devices and [Cyber Physical Systems (CPS)](#glo.acronym.CPS), surpassing 5G’s capabilities into the B5G era. For example, 6G is expected to connect millions of devices and provide instant access to massive amounts of compute and storage power. For B5G wireless networks, the scientific community expects entirely intelligent network orchestration and management [[2](#bib.bib2), [5](#bib.bib5)]. It will be distinct from previous generations concerning various aspects, including network infrastructures, radio access methods, processing and storage capacities, application types. New applications will need to intelligently use communications, compute, control, and storage resources. Moreover, wireless networks are producing a large amount of data. This paradigm shift allows data-driven real-time network design and operation in B5G.
Physical attacks, eavesdropping, and authentication and authorization issues plagued the wireless communication technologies from 1G to 3G. The threat landscape now includes more complicated attacks and tougher adversaries. 4G networks’ most prominent security and privacy threats come from malware programs and common MAC layer security vulnerabilities, such as viruses, tampering, [Denial of Services (DoS)](#glo.acronym.DOS), replay attacks, and eavesdropping. These attacks have migrated to [Software Defined Networks (SDN)](#glo.acronym.SDN), [Network Function Virtualization (NFV)](#glo.acronym.NFV) and cloud computing in 5G. Insecure [SDN](#glo.acronym.SDN) features include OpenFlow, centralized network administration (prone to DoS attacks), core and backhaul, edge device vulnerabilities, and open APIs [[6](#bib.bib6), [7](#bib.bib7)]. Research communities are starting to focus on security vulnerabilities in B5G communication using advanced networking, AI/ML, and linked intelligence technologies that power the B5G vision. On top of the unsolved security issues carried over from previous generations, these new technologies open B5G networks to a whole new threat surface never seen before. Nevertheless, the overall success of B5G ultimately depends on how well AI and 6G cooperate in the future [[8](#bib.bib8)].
The malicious use of AI is changing the threat landscape, adding constraints on many potential applications from seeing the light of day. With the advent of 6G technologies, misuse of AI might endanger increasingly complex systems, such as smart CPSs (SCPSs). SCPSs are advanced CPS systems that are increasingly linked through technologies like the [Internet of the Things (IoT)](#glo.acronym.IoT), [Artificial Intelligence (AI)](#glo.acronym.AI), wireless sensor networks (WSNs), and cloud computing to enable a variety of unique services and applications [[9](#bib.bib9)]. Since SCPSs are interwoven with various domains, a single weakness can cause catastrophic failures (Butterfly effect). Aside from AI applications in services, it can also be used for malicious intents, allowing larger-scale attacks, unlike the attacks we have seen before.
As a consequence, all interconnected devices and users stand at risk. Even though research on AI to protect against cyber threats has been ongoing for many years [[10](#bib.bib10), [11](#bib.bib11)], it is still unclear how to ensure the security of networks with AI integrated into their core operations. A significant drawback in AI security derives from the black-box nature of these systems. Therefore, maintaining accountable and trustworthy AI in this regard is highly important.
The Defense Advanced Research Projects Agency (DARPA) started the [Explainable Artificial Intelligence (XAI)](#glo.acronym.XAI) initiative in May 2017 to develop a set of new AI methodologies that would allow end-users to comprehend, adequately trust, and successfully manage the next generation of AI systems [[12](#bib.bib12)]. To further elaborate, XAI can be considered a joint effort of computer science and the social sciences, including the human psychology of explanation. The overall success of B5G will ultimately rest on how resilient and trustworthy the [AI](#glo.acronym.AI) used in its implementation proves to be for the general public [[8](#bib.bib8)]. Extending research on techniques such as XAI in this regard is a crucial step that needs to be taken promptly.
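To ground what such an explanation looks like in practice, the following is a minimal, self-contained sketch (our illustration, not drawn from any cited work) of one simple post-hoc XAI technique, permutation feature importance, applied to a toy intrusion-detection classifier trained on synthetic "network flow" features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Synthetic "flow" features: packet rate, mean packet size, connection duration.
X = np.column_stack([
    rng.normal(100, 30, n),   # packets per second
    rng.normal(500, 150, n),  # mean packet size (bytes)
    rng.normal(10, 5, n),     # duration (seconds)
])
# Toy ground truth: flows with very high packet rates are "attacks".
y = (X[:, 0] > 130).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Permutation importance: shuffle one feature at a time and measure the
# resulting drop in accuracy, a simple, model-agnostic explanation of
# which inputs the detector actually relies on.
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["packet_rate", "packet_size", "duration"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```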

Figure 1: Th overview of using XAI to improve the security of the B5G technologies and use cases. The left part of this figure shows how virtualization leads to the 5G enabling technologies compared with the traditional network layered stack and how AI evolves from 5G to B5G. The right part of this figure shows that XAI can improve the AI-based security solution for system stakeholders.
Table I: Summary of Important Acronyms

| Acronym | Definition |
| --- | --- |
| AI | Artificial Intelligence |
| AR | Augmented Reality |
| CFE | Counterfactual Explanations |
| CPS | Cyber Physical Systems |
| CRAN | Centralized/Cloud Radio Access Network |
| DDoS | Distributed Denial of Service |
| DL | Deep Learning |
| DNN | Deep Neural Network |
| DoS | Denial of Services |
| DQN | Deep Q-Network |
| DRL | Deep Reinforcement Learning |
| FL | Federated Learning |
| GDPR | General Data Protection Regulation |
| IBN | Intent Based Network |
| IIoT | Industrial Internet of Things |
| IoT | Internet of the Things |
| LIME | Local Interpretable Model-Agnostic Explanations |
| LMUT | Linear Model U-Trees |
| LRP | Layer-wise Relevance Propagation |
| MD | Management Domain |
| MITM | Man in the Middle |
| ML | Machine Learning |
| MLP | Multi Layer Perceptrons |
| MR | Mixed Reality |
| NFV | Network Function Virtualization |
| ORAN | Open Radio Access Network |
| PIRL | Programmatically Interpretable Reinforcement Learning framework |
| RAN | Radio Access Network |
| RIC | RAN Intelligence Controller |
| RL | Reinforcement Learning |
| SDN | Software Defined Networks |
| SHAP | Shapley Additive Explanations |
| SVM | Support Vector Machines |
| UAV | Unmanned Aerial Vehicles |
| UE | User Equipment |
| VR | Virtual Reality |
| VRAN | Virtualized Radio Access Network |
| XAI | Explainable Artificial Intelligence |
| XR | Extended Reality |
### I-A Paper Motivation
When writing this article, 5G is commercially rolling out, with many researchers focusing on the B5G. Its applications, architecture, and enabling technologies are the subject of a large number of studies published recently, as shown in Table [II](#S1.T2 "Table II ‣ I-B Our Contribution ‣ I Introduction ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions"). In addition, studies such as [[13](#bib.bib13), [14](#bib.bib14), [15](#bib.bib15), [4](#bib.bib4), [16](#bib.bib16), [17](#bib.bib17), [18](#bib.bib18), [19](#bib.bib19), [2](#bib.bib2), [20](#bib.bib20)] have mainly focused on the vision, potential applications and requirements of the B5G wireless communication technologies such as terabits per second speeds FeMBB, connected intelligence, and EDuRLLC, among others that would facilitate up and coming applications such as autonomous vehicles, telemedicine, the extended reality in the future.
Among the key enablers of B5G/6G mobile communication, such as THz communication, edge computing, swarm networks, full automation, and blockchain, [AI](#glo.acronym.AI) takes a prominent place. AI techniques are well suited to solving complex problems due to their generalization capabilities and are thus fitting for many novel B5G-era applications. Studies including [[21](#bib.bib21), [13](#bib.bib13), [22](#bib.bib22), [23](#bib.bib23), [24](#bib.bib24), [25](#bib.bib25), [26](#bib.bib26)] elaborate on the importance of AI and its trends in B5G, and the challenges it brings to future communication technologies. Previous surveys such as [[6](#bib.bib6), [27](#bib.bib27), [28](#bib.bib28), [29](#bib.bib29), [30](#bib.bib30), [31](#bib.bib31)] highlight the dynamics of security aspects in a range of B5G enabling technologies such as IoT, RAN, and edge computing, while [[8](#bib.bib8), [29](#bib.bib29), [32](#bib.bib32), [33](#bib.bib33)] focus entirely on the security threats and potential defenses that would improve the trust in AI/ML methods used in B5G.
Although XAI shows promising results, only a few publications ([[34](#bib.bib34), [35](#bib.bib35), [36](#bib.bib36)]) have covered XAI applications in the context of security, or the related XAI research projects and standardization methods. Opportunities, challenges, and standardization efforts in XAI are still in their infancy and need more collaborative work with experts from fields such as human psychology and sociology to move towards more concrete real-world applications. Table [II](#S1.T2 "Table II ‣ I-B Our Contribution ‣ I Introduction ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions") outlines contemporary research and surveys on the advancements of B5G, AI, and XAI. We find that each paper presents applications in disconnected contexts. In contrast, implementing B5G technologies calls for a holistic review of AI and XAI in security, given that accountability and resilience are core and essential characteristics of any mobile network generation. Many researchers study B5G, XAI, and AI techniques separately, but there has not yet been a unified approach in which the viability of XAI techniques is reviewed in the context of B5G use cases. In response, this survey provides a comprehensive overview of XAI and security technical aspects, applications, requirements, limitations, challenges, current projects, standardization initiatives, and lessons learned for beyond-5G applications.
###
I-B Our Contribution
To the best of the authors’ knowledge, this paper is the first of its kind that attempts to explore the capacity of XAI in a wide range of B5G security aspects. Table [II](#S1.T2 "Table II ‣ I-B Our Contribution ‣ I Introduction ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions") depicts some of the relevant but dissociated studies carried out in this regard. However, none of them has been able to convey a holistic image of the role of XAI in B5G security. Therefore, our main contributions from this survey are listed below:
* •
Highlight the importance of XAI to the security of B5G: This paper elaborates on the potential of XAI in the path to realizing accountability for AI/ML models used in network security and improving the resilience of B5G telecommunications. Although many studies of B5G security involve data-driven ML solutions, little focus is given to interpreting their decisions. Serious doubts and questions regarding accountability can arise with stakeholders when using black-box AI to secure critical applications. The ability of XAI methods to interpret the black-box nature of AI/ML-based security systems is a current requirement to fill this research gap.
* •
Comprehensively analyze XAI for commonly discussed B5G technical aspects and use cases: Here, we explore the role of XAI in a range of B5G enabling technologies such as IoT/devices, [Radio Access Network (RAN)](#glo.acronym.RAN), Edge network, core, and backhaul network, E2E slicing, and network automation. This list of enablers is carefully selected to cover most of the ground in B5G telecommunication architecture and provide a holistic view of the impact of XAI in B5G security. The study extends towards discussing possible security issues and the impact of XAI on a popular set of use cases, including smart cities, smart healthcare, industry 4.0/5.0, smart grid 2.0, and [Extended Reality (XR)](#glo.acronym.XR).
* •
Survey of important, relevant research projects and standardizations: Unlike in many other survey papers, here we explore the research projects that are underway to realize the B5G implementations and standardizations incorporating AI/ML/XAI. A detailed discussion of current projects and initiatives involving academic and industry partners provides clarity on the ongoing areas and the research gaps that are currently explored. AI security standardizations in B5G are discussed here to determine the requirements for future B5G networks and their respective technologies.
* •
Provide promising research directions as guidance: Existing limitations and challenges of current XAI methods in security are exhaustively discussed, along with possible research directions. Among the directions identified are security and isolation between network slices, computationally efficient explainable Edge-AI, and understanding the level of vulnerability of ML models to adversarial attacks in white-box and black-box contexts.
Table II: Summary of important surveys on XAI for B5G Security
| Ref. | Year | AI Techniques | XAI Technical Aspects | B5G Security Technical Aspects | Role of XAI in B5G Security | B5G Use cases | Challenges of XAI in B5G | XAI/B5G Security Projects | Remarks |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [[14](#bib.bib14)] | 2020 | H | M | M | M | H | L | L | Insightful presentation of requirements, technologies, applications and challenges in 6G security and privacy |
| [[37](#bib.bib37)] | 2020 | H | H | L | M | M | M | L | XAI concepts, application areas, research challenges and future work. |
| [[36](#bib.bib36)] | 2020 | M | H | L | L | L | M | L | A comprehensible survey on various XAI techniques, their challenges and opportunities |
| [[21](#bib.bib21)] | 2020 | H | L | H | L | H | L | L | A survey on security and privacy in the role of AI in 6G communication and its applications. |
| [[34](#bib.bib34)] | 2020 | M | H | L | L | M | M | L | A review on the motivation and framework for using XAI in 6G/wireless telecommunication. |
| [[35](#bib.bib35)] | 2020 | H | H | L | M | L | M | L | A Survey on XAI taxonomy, concepts, challenges and opportunities, for responsible AI. |
| [[2](#bib.bib2)] | 2021 | H | L | H | M | H | L | H | A concise survey on the 6G future trends, applications, requirements and technical aspects. |
| [[27](#bib.bib27)] | 2021 | M | L | H | H | H | L | L | A survey on technologies and challenges in 6G security and privacy in different layers of 6G architecture |
| [[38](#bib.bib38)] | 2021 | H | L | H | L | H | L | M | A concise survey about 6G networks and their services, enablers, applications and future challenges |
| [[20](#bib.bib20)] | 2021 | H | L | H | M | H | L | H | A comprehensive survey on drivers, requirements, roadmap and enablers in 6G. |
| This paper | 2022 | H | H | H | H | H | H | H | A comprehensive survey of using XAI for trustworthy and transparent 6G security including use cases, requirements/vision, technical aspects, projects, research work, standardization approaches and future research directions. |
L: Low/No Coverage, M: Medium Coverage, H: High Coverage
###
I-C Paper Outline
This section introduces the motivation and contribution of this survey paper. The second section gives the background of technical aspects in this paper, namely, B5G, XAI, and XAI’s potential for improving B5G security. Then, the detail of these technical aspects is discussed in Section [III](#S3 "III B5G Threat Modelling and Taxonomy ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions"), Section [IV](#S4 "IV Role of XAI on Security Issues on B5G ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions"), and Section [VI](#S6 "VI New Security Issues of B5G Due to XAI ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions"). Section [III](#S3 "III B5G Threat Modelling and Taxonomy ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions") elaborates the taxonomy, threat modeling, and landscape of security aspects for developing B5G networks. Section [IV](#S4 "IV Role of XAI on Security Issues on B5G ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions") analyses the impact of introducing XAI on the existing AI-powered B5G security solutions. Section [VI](#S6 "VI New Security Issues of B5G Due to XAI ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions") highlights potential new security issues because of introducing XAI. Moreover, for the use cases enabled by B5G, Section [V](#S5 "V Use-cases in XAI for B5G Security ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions") analyses the impact of XAI on the security aspects of these B5G use cases. Section [VII](#S7 "VII XAI B5G Security Projects and Standardization ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions") strengthens the importance of this survey paper by listing the ongoing research projects and standardizations about B5G security and XAI. Section [VIII](#S8 "VIII Lessons Learned and Future Research Directions ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions") summarises Section [III](#S3 "III B5G Threat Modelling and Taxonomy ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions"), [IV](#S4 "IV Role of XAI on Security Issues on B5G ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions"), [VI](#S6 "VI New Security Issues of B5G Due to XAI ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions"), [V](#S5 "V Use-cases in XAI for B5G Security ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions"), and [VII](#S7 "VII XAI B5G Security Projects and Standardization ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions") with the lessons learned and future research directions. Finally, Section [IX](#S9 "IX Conclusion ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions") concludes the whole paper.

Figure 2: The outline of this survey paper showing the organisations of paper sections to align with the main paper topic: XAI for B5G Security
II Background
--------------
This section briefly introduces the background of the related technologies discussed in this paper. In particular, the B5G technologies and XAI concepts are discussed, followed by the growing need for XAI for B5G security.
###
II-A Beyond 5G
The rapid growth of the communication industry in the last decade has enabled 5G technologies to be widely commercialized in recent years. Following the success of 5G, 6G/B5G is becoming the focal point of academia and industry in both research and implementation. 5G has addressed many of the prevalent problems [[39](#bib.bib39)] with high-data-rate enhanced mobile broadband (eMBB) and introduced new functionalities, such as laying the foundation for the Internet of Things ([IoT](#glo.acronym.IoT)). However, new [IoT](#glo.acronym.IoT) services are rapidly being developed for applications such as virtual, augmented, and mixed reality services (which fall under [XR](#glo.acronym.XR) services), autonomous vehicle systems, brain-computer interfaces (BCI), telemedicine, haptic systems, and blockchain-based systems [[4](#bib.bib4)]. To implement these services, ultra-reliable low-latency communications (URLLC) with short-packet support and high data rates in both uplink and downlink need to be maintained in a secure and privacy-protected wireless system [[39](#bib.bib39)]. A key factor affecting the dynamics of this revolution will be the profound number of human-type and machine-type devices connecting to the network. Massive Machine Type Communication (mMTC) is expected to be fully deployed alongside URLLC to address those devices’ requirements and achieve end-to-end latency reduction once 5G is fully deployed. Catering to these heterogeneous devices will push 6G data rates into the terabit realm (up to 1 Terabit/second) [[40](#bib.bib40)]. In other words, this is nearly a 1000x increase over the last generation of wireless technologies [[15](#bib.bib15)], bringing in massive amounts of data each day. A cohort of technologies such as AI, symbiotic radio (SR), cell-free massive MIMO (CFmMM), intelligent communication surfaces, index modulation (IM), simultaneous wireless information and power transfer (SWIPT), and network-in-a-box [[16](#bib.bib16), [13](#bib.bib13), [2](#bib.bib2), [22](#bib.bib22), [41](#bib.bib41), [28](#bib.bib28), [18](#bib.bib18), [5](#bib.bib5)] will be used to deliver the services mentioned above. AI takes a prominent place among them due to its proven capabilities and ubiquitous applications.
Following the massive success of AI in computer vision, natural language processing, speech recognition, bioinformatics, social intelligence, and numerous other fields, the technology has proved to be ubiquitous [[42](#bib.bib42)]. Naturally, a profound amount of data is expected to be generated at high rates by the vast and varied set of applications connecting billions of devices in the B5G ecosystem, making it the perfect ground for AI, which excels at efficiently solving problems involving large amounts of unstructured data. There is thus great potential to apply AI in numerous aspects of B5G, ranging from network architectures to security, privacy, signal processing solutions, and system-level optimizations.
###
II-B Explainable AI
####
II-B1 Motivations of XAI
While the early AI systems were simple to understand, opaque decision methods such as the [Deep Neural Network (DNN)](#glo.acronym.DNN) have gained popularity in recent years. [Deep Learning (DL)](#glo.acronym.DL) models are experimentally successful due to a combination of efficient learning algorithms and their large parametric space.
DNNs are considered sophisticated black-box models since they have hundreds of layers and millions of parameters [[43](#bib.bib43)]. Transparency, the pursuit of an understanding of how a model functions, is the polar opposite of this black-box nature. The need for explainability among AI stakeholders is growing as black-box [Machine Learning (ML)](#glo.acronym.ML) algorithms are increasingly used to make significant predictions in critical settings [[44](#bib.bib44)]. The risk lies in making and implementing decisions that are not reasonable or lawful, or that do not allow for comprehensive explanations of their behavior [[12](#bib.bib12)]. Explanations that back up a model’s output are critical. For example, in medical applications, specialists need to uncover which factors the model has identified to arrive at its forecast, which would reinforce their confidence in the diagnosis [[45](#bib.bib45)]. Telecommunication systems, B5G-backed autonomous cars, security, and finance are just a few other examples.
Moreover, a better knowledge of a system can help correct its shortcomings. According to [[35](#bib.bib35)], interpretability as an additional design driver may enhance the implementability of a machine learning model for three main reasons. First, it aids in guaranteeing objectivity in decision-making by allowing bias in the training datasets to be detected and rectified. Second, it improves resilience by identifying possible adversarial perturbations that may cause the prediction to change. Finally, it helps guarantee that only relevant variables are used to predict the outcomes, in other words, that the model reasoning is based on actual causation.
The literature clearly distinguishes between models that are interpretable by design and those that can be explained using external XAI techniques. XAI creates a suite of machine learning techniques that enables human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners [[35](#bib.bib35)]. This dichotomy may be thought of as the distinction between interpretable models and model interpretability methods; a more generally recognized classification is transparent models, and post-hoc explainability [[46](#bib.bib46)]. The following section explains these categories in more detail.
####
II-B2 Terminologies of XAI
Transparency: A model is deemed transparent if it is understood on its own.
Transparent models, by themselves, provide some degree of interpretability. Models in this domain may also be categorized according to the context in which they are interpretable, notably algorithmic transparency, decomposability, and simulatability. Decomposability refers to the capacity of a model to be explained in terms of its constituent components. Simulatability refers to a model’s ability to be simulated or thought about rigorously by a person. When the model is sufficiently self-contained for a person to think and reason about it in its entirety, it can be referred to as a decomposable model with simulatability. Algorithmic transparency may be interpreted in a variety of ways. Prominently, it refers to the user’s capacity to comprehend how the model generates any given result from its input data. The primary restriction on algorithmically transparent models is that they must be completely explorable using mathematical techniques and analysis [[47](#bib.bib47)]. Each of these classes includes its antecedents; for example, a simulatable model is both decomposable and algorithmically transparent. Some popular models that fall under transparent models are Linear/Logistic regression, Decision Trees, K-Nearest Neighbors, Rule-based models, GAM - General Additive Models, and Bayesian models. These models are deemed to be expressive enough to be human-understandable. [[35](#bib.bib35)].
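The notion of a simulatable, transparent in-model explainer can be illustrated with a short sketch. The following Python/scikit-learn snippet trains a shallow decision tree whose learned rules can be printed and traced by a human reader; the intrusion-style feature names and the synthetic labelling rule are assumptions introduced purely for illustration.

```python
# Minimal sketch of a transparent (in-model) explainer: a shallow decision tree
# whose rules a human can read and simulate. Feature names and the synthetic
# labelling rule are assumptions made only for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = ((X[:, 0] > 0.5) & (X[:, 2] < 0)).astype(int)      # 1 = suspicious session
feature_names = ["failed_logins", "bytes_out", "session_age"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))   # human-readable rule set
```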

Figure 3: XAI Taxonomy. Pre-model XAI explains the training data used for building AI model (e.g.,Principal component analysis (PCA), t-Distributed Stochastic Neighbor Embedding (t-SNE) ). In-model XAI refers to transparent AI models that are self-explanatory (e.g., decision trees, random forests). Post-hoc XAI models explain the results given by the trained AI models (e.g., LIME, SHAP).
####
II-B3 Taxonomy of XAI
The XAI methods can be divided into multiple categories based on various criteria [[35](#bib.bib35), [48](#bib.bib48)]. The most common XAI-based taxonomies are discussed below. The categories are not necessarily mutually exclusive; a given method may belong to two or more of them.
1. (a)
Model-agnostic vs Model-specific:
Model-agnostic methods for XAI are the ones that are not constrained by the core parts of an AI algorithm when making a prediction. They are helpful in decoding black-box models’ decision processes and provide good flexibility for developers to apply them to a wide variety of ML models. On the other hand, model-specific methods are bespoke to specific models and make use of the core components of an ML model to interpret its outcomes. This characteristic makes model-specific methods more suitable for identifying granular aspects of ML models, but they lack flexibility.
2. (b)
Local vs. Global methods:
XAI methods can be divided into two main categories based on the scope of the function that the interpreter uses to produce an explanation: local and global explanations. Local explainers are designed to interpret the portion of the model function that contributes to the outcome for a specific data point. The behavior of the ML function in the close vicinity of that data point is explored when generating an explanation. On the contrary, global methods take the ML function as a whole when generating explanations for the inference. This quality generally makes global methods slow but robust, whereas local methods are fast but sometimes erratic.
3. (c)
Pre-model, In-model vs Post-model explainers:
Depending on the stage at which XAI methods are applied in the development process, there are three main categories of XAI: pre-model, in-model, and post-model. Pre-model methods are used mainly during dataset preparation in the model development pipeline. These methods are helpful in data analysis, feature engineering, and explaining any underlying patterns seen in the data at a glance. In-model XAI methods are embedded in the ML algorithms. This category includes all the transparent models such as linear regression, decision trees, random forests, etc. In addition, in-model explanations can also be generated through modifications to existing ML model architectures using inherently transparent components. Post-hoc/post-model explanations are applied after training an ML model. They enable us to identify what the model has learned during the training process.
4. (d)
Surrogate vs Visualization:
XAI methods can also be divided into two main categories based on what is explained during the process. Surrogate model-based explainers generate explanations from an approximate model trained to mimic the behavior of the original black-box model. Surrogate models are mostly inherently interpretable (a small illustrative sketch is given below). Otherwise, one can use visualization techniques (e.g., heatmaps, graphs, etc.) on the original black-box model to explore its internal workings without building a surrogate representation. These XAI methods fall under the visualization category.
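The surrogate idea can be made concrete with a small Python/scikit-learn sketch: a shallow decision tree is trained to mimic the predictions of a black-box ensemble, and its fidelity to the black box is measured. The data, the models, and the synthetic labelling rule are assumptions chosen only for illustration.

```python
# Hypothetical sketch of a global surrogate explainer: a small decision tree
# learns to imitate a black-box classifier, and its fidelity is reported.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (np.sin(X[:, 0]) + X[:, 1] * X[:, 2] > 0).astype(int)

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
bb_labels = black_box.predict(X)

# The surrogate is trained on the *black box's* outputs, not the ground truth.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, bb_labels)

fidelity = accuracy_score(bb_labels, surrogate.predict(X))
print(f"Surrogate fidelity to the black box: {fidelity:.2%}")
```

If the fidelity is high enough, the surrogate's rules can be inspected in place of the black box; if not, visualization-based techniques applied directly to the original model remain the alternative.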
####
II-B4 XAI Methods
There are numerous methods studied in the literature to explain black-box AI/ML models. Here we discuss a selected set of popular XAI methods that are more established in the academic and industrial community, as shown in Fig. [4](#S2.F4 "Figure 4 ‣ II-B4 XAI Methods ‣ II-B Explainable AI ‣ II Background ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions").

Figure 4: Various XAI methods are currently popular among the research and industrial community. Each method has its own strengths and weaknesses, as depicted. LIME, SHAP, LRP, CFE, PIRL, hierarchical policies, and LMUTs are selected only to represent the categories of XAI methods.
[Local Interpretable Model-Agnostic Explanations (LIME)](#glo.acronym.LIME) [[49](#bib.bib49)] can be considered as one of the most popular model-agnostic XAI methods used to interpret model outputs in various applications. LIME provides a locally faithful explanation based on the feature importance that contributed to the output. It is achieved through a surrogate dataset obtained after sampling perturbations in the proximity of the original inputs. Then it creates a simpler interpretable model that can be used in identifying important features of a given output. Because of this local fidelity, LIME is faster and can be applied to all types of black-box models. Stemming from its popularity in the research community, application-specific variations of LIME such as OptiLIME[[50](#bib.bib50)] and CBR-LIME [[51](#bib.bib51)] are proposed in several studies.
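As a rough illustration of how LIME is typically invoked, the sketch below explains one prediction of a traffic classifier using the `lime` Python package. The random forest model, the synthetic flow features, and their names are assumptions introduced purely for this example.

```python
# Hypothetical sketch: explaining a single prediction of a traffic classifier
# with LIME. Requires scikit-learn and the "lime" package; data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                      # stand-in flow features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)       # 1 = "malicious", 0 = "benign"
feature_names = ["pkt_rate", "pkt_size", "conn_duration", "dst_entropy"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["benign", "malicious"], discretize_continuous=True)

# Fit a local linear surrogate around one flow and list feature contributions.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())
```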
[Shapley Additive Explanations (SHAP)](#glo.acronym.SHAP) [[52](#bib.bib52)] is a model-agnostic interpreter that can be used with black-box models to interpret their model outcomes. SHAP uses the concept of shapely values derived from the cooperative game theory. Shapley values represent the average marginal contribution of a feature calculated over all the subsets of features with and without the said feature. Feature importance values generated here can be coalesced to obtain a global (unlike LIME) explanation of the model outcome space, which paves the way for better explanations. TreeSHAP [[53](#bib.bib53)] and DeepSHAP (DeepLift [[54](#bib.bib54)] + Shapley values) [[52](#bib.bib52)] are few derivatives of SHAP that are developed to fit specific models and improve the computation efficiency.
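The following is a minimal sketch of TreeSHAP applied to a tree-ensemble anomaly scorer, showing how per-sample Shapley values can be aggregated into a global feature ranking. The regressor, the synthetic features, and the "anomaly score" target are assumptions used only for illustration.

```python
# Hypothetical sketch: local and global explanations with SHAP (TreeSHAP)
# for a tree-ensemble anomaly scorer trained on synthetic flow features.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
score = X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=500)   # stand-in anomaly score
feature_names = ["pkt_rate", "pkt_size", "conn_duration", "dst_entropy"]

scorer = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, score)

explainer = shap.TreeExplainer(scorer)        # model-specific, efficient for trees
shap_values = explainer.shap_values(X)        # shape: (n_samples, n_features)

# Local view: feature contributions to the score of the first flow.
print(dict(zip(feature_names, shap_values[0])))
# Global view: mean absolute Shapley value per feature across the dataset.
print(dict(zip(feature_names, np.abs(shap_values).mean(axis=0))))
```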
[Layer-wise Relevance Propagation (LRP)](#glo.acronym.LRP) [[55](#bib.bib55)] is popularly used to interpret models such as neural networks that are structured in a layered manner, making it a model-specific XAI method. LRP operates by propagating the prediction backward through the layers until it reaches the inputs while assigning relevance scores to each functional unit in the model. The level of contribution from one node to make the consequent node relevant to the output is quantified and aggregated to obtain the relevance of each layer. It is applied in a wide range of applications such as identifying model biases [[56](#bib.bib56)], extracting points of interest in prevention of side-channel attacks [[57](#bib.bib57)], audio source localization [[58](#bib.bib58)], and EEG pattern recognition in brain-computer interfaces [[59](#bib.bib59)].
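To make the propagation idea tangible, the following plain-NumPy sketch applies the epsilon-rule of LRP to a tiny two-layer ReLU network. The random weights and input are placeholders; a real deployment would propagate relevance through an actual trained model, layer by layer.

```python
# Hypothetical sketch: the LRP epsilon-rule on a tiny fully connected ReLU
# network. Weights and input are random placeholders, not a trained model.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))     # input (4 features) -> hidden (8 units)
W2 = rng.normal(size=(8, 1))     # hidden (8 units)   -> output (1 score)
x = rng.normal(size=4)

# Forward pass, keeping the activations of every layer (biases omitted).
a1 = np.maximum(0.0, x @ W1)
out = a1 @ W2

def lrp_epsilon(a_prev, W, R_next, eps=1e-6):
    """Redistribute relevance R_next from one layer to the layer below."""
    z = a_prev @ W                     # pre-activations of the upper layer
    z = z + eps * np.sign(z)           # stabiliser avoids division by zero
    s = R_next / z                     # relevance per unit of pre-activation
    return a_prev * (W @ s)            # relevance of each lower-layer neuron

R_out = out                            # start: relevance equals the output
R_hidden = lrp_epsilon(a1, W2, R_out)
R_input = lrp_epsilon(x, W1, R_hidden)

# Conservation check: input relevances sum (approximately) to the output.
print("input relevances:", R_input, "sum:", R_input.sum(), "output:", out)
```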
[Counterfactual Explanations (CFE)](#glo.acronym.CFE) belong to the subcategory of local explainers known as example-based explainers. The intuition here is to understand what would happen if a slightly different data point were given to the model in place of the original data point and how that would affect the prediction. For example, if a model classifies a person as unsuitable to receive a loan, a CFE would state that he/she needs savings of 10,000 euros for the model to classify him/her as suitable to receive the loan. Unlike many other interpreters, the explanations are closer to human reasoning and provide actionable and precise recommendations. The basic idea of CFEs is model-agnostic, but several variations have been developed for model-specific applications [[60](#bib.bib60)].
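A toy version of this idea is sketched below: a greedy, single-feature search finds how much the "savings" value must increase before a black-box loan classifier flips its decision. The logistic regression model, the feature names, and the one-feature search strategy are illustrative assumptions; practical CFE methods optimise over all features under distance and plausibility constraints.

```python
# Hypothetical sketch of a counterfactual search: find the smallest increase of
# one feature that flips the decision of a black-box classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) * [10_000, 10, 5] + [20_000, 30, 10]
y = (X[:, 0] + 500 * X[:, 1] > 35_000).astype(int)      # 1 = loan approved
features = ["savings_eur", "years_employed", "open_accounts"]

clf = LogisticRegression(max_iter=1000).fit(X, y)

def counterfactual(x, feature_idx, step, max_steps=2000):
    """Increase one feature until the predicted class flips (or give up)."""
    original = clf.predict([x])[0]
    cf = x.copy()
    for _ in range(max_steps):
        cf[feature_idx] += step
        if clf.predict([cf])[0] != original:
            return cf
    return None

x = X[clf.predict(X) == 0][0]                 # an applicant currently rejected
cf = counterfactual(x, feature_idx=0, step=100.0)
if cf is not None:
    print(f"Increase {features[0]} from {x[0]:.0f} to {cf[0]:.0f} "
          f"to change the decision.")
```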
XAI for RL has been explored for agent-based [Reinforcement Learning (RL)](#glo.acronym.RL) AI systems from as early as 1994 [[61](#bib.bib61)]. It is not easy to deliver XAI for RL because an RL agent generally makes several judgments over time, often selecting the next action in real time. Unlike conventional ML, RL explanations must cover a collection of actions spread across a plethora of different states that are connected in some manner. The absence of an explicit training dataset also contributes to the difficulty of applying XAI techniques [[62](#bib.bib62)]. Some of the widely known XAI techniques for RL are discussed below [[63](#bib.bib63)].
1. (a)
[Programmatically Interpretable Reinforcement Learning framework (PIRL)](#glo.acronym.PIRL) is a global, in-model method used in place of [Deep Reinforcement Learning (DRL)](#glo.acronym.DRL) [[64](#bib.bib64)]. In DRL, the policies are represented by neural networks and are difficult to comprehend. PIRL policies, on the other hand, are expressed in a high-level, human-readable programming language. However, unlike standard RL, they limit the space of target policies by using a (policy) sketch. An imitation-learning-based framework called Neurally Directed Program Search (NDPS) is used to discover these policies.
2. (b)
The hierarchical policies technique [[65](#bib.bib65)] is another in-model XAI technique used to interpret the decision process of multi-task, complex RL systems, but locally. The core tenet of this approach is to decompose a complicated task into smaller subtasks. These smaller tasks are accomplished using already learned policies or by learning new skills. The model also takes temporal connections and task priorities into account to improve efficiency and accuracy. The technique builds on multi-task RL with a modular policy design and a two-layer hierarchical policy based on minimal assumptions and constraints. In [[65](#bib.bib65)], this technique is assessed on object manipulation tasks in the Minecraft game.
3. (c)
[Linear Model U-Trees (LMUT)](#glo.acronym.LMUT) [[66](#bib.bib66)] is a post-hoc explanation method, unlike the methods mentioned above. LMUTs are flexible enough to generate both local and global approximations of an RL model’s Q-predictions [[63](#bib.bib63)]. They are an extension of Continuous U-Trees, the difference being that linear models are used at each leaf node instead of constants, making them more interpretable and comprehensible. Because of the inherently interpretable nature of the trees, it becomes easy to generate explanations from LMUTs as they mimic the original Q-function (a small illustrative sketch in the same spirit follows this list).
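In the same spirit, although using a plain regression tree rather than LMUT's linear-model leaves, the sketch below fits a mimic tree to the Q-values of a hand-crafted toy policy so that the agent's preferences can be read as rules. The state variables and the stand-in Q-function are assumptions made purely for illustration.

```python
# Hypothetical sketch of a mimic-tree explanation of an RL agent's Q-values.
# A plain regression tree stands in for LMUT's linear-leaf trees; the toy
# Q-function below replaces a real trained agent.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(2000, 2))           # e.g. (load, latency)
# Stand-in Q-values for the action "scale out the network function".
q_scale_out = np.maximum(0, states[:, 0]) + 0.5 * np.maximum(0, states[:, 1])

mimic = DecisionTreeRegressor(max_depth=3, random_state=0).fit(states, q_scale_out)
print(export_text(mimic, feature_names=["load", "latency"]))  # readable Q surrogate
```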
####
II-B5 Stakeholders of XAI
In practice, there is a substantial community working on the issue of explainable AI today, with some attempting to enhance the latest technology, others attempting to evaluate, criticize, or regulate the technology (policymakers), and the rest seeking to manipulate or utilize AI in a broad range of applications (business partners). Previous studies have considered the level of explainability and interpretability required by various stakeholder groups. These categories can include system creators, system operators, executors making decisions based on system outputs, decision subjects affected by an executor’s decision, data subjects whose personal data is used to train a system, and system examiners, such as auditors or testers [[67](#bib.bib67)].
In the envisioned B5G application security, it is important to identify the parties involved in the full lifecycle of an intelligent system to improve accountability. This paper defines five main stakeholder communities: system creators, system operators, theorists, ethicists, and end-users. [[44](#bib.bib44), [67](#bib.bib67)].

Figure 5: XAI Stakeholders: Different levels of influence that each stakeholder has on the systems and their respective explainability requirements.
* •
Creators: Here we identify creators as those involved in creating secure, high-fidelity AI-based applications for the B5G era. This group is a superset of implementers (developers, testers, security experts, data scientists, etc.) and owners (agents, business owners, etc.) contributing to bringing AI/ML applications to reality. Many members of this community work in the industry (multinational companies and local firms) or in the public sector, though some are academics or scientists who develop systems for various reasons, including assisting them with their work. The highest level of explainability is required to make the systems unbiased and resilient. Their influence on XAI aspects is strong as well.
* •
System Operators: System operators maintain the systems and ensure smooth operation after deploying an AI/ML-based system. Although they might not require a granularity of explanations as high as developers, they still require a high enough explainability to detect and verify anomalies in the system to provide runtime solutions. Similarly, the influence on the system and data can be considered moderately high.
* •
Theorists: Theorists are those who are interested in comprehending and expanding AI theory, especially as it relates to [DNN](#glo.acronym.DNN) s. Members of this group are often associated with university or industry research institutions. This community requires a high level of explainability. Their influence on the XAI, in general, can be considered to be high.

Figure 6: 6W analysis for explainable security in B5G. The procedure shown can be used as a framework to initiate laying the groundwork when designing security aspects of explainable intelligent systems built in/on B5G network
* •
Ethicists: Ethicists can be policymakers, commenters, and critics concerned with the fairness, accountability, and transparency of AI systems. While many computer engineers and scientists are part of this group, it can be considered interdisciplinary, including social scientists, attorneys, journalists, economists, and politicians. For the ethicist community, explanations must go beyond technical software quality to ensure fairness, unbiased conduct, and comprehensible disclosure for accountability and auditability. Furthermore, legal compliance with frameworks such as the European Union’s GDPR [[68](#bib.bib68)] or United States DARPA regulations [[12](#bib.bib12)] falls under this category of stakeholders. Overall, their influence on AI systems can be considered very high.
* •
End Users: Finally, the users need explanations to assist them in deciding whether or how to act in response to the systems’ outputs and to assist them in justifying their actions. In addition to ’hands-on’ end users, this community comprises everyone engaged in processes affected by an artificial intelligence system. The explainability requirement for end-users is similar to ethicists; however, their influence on the system is only strong under particular circumstances (e.g., community/group approach).
In light of the preceding discussion, the most logical approach may be to provide different explanations tailored to the various stakeholders. Nevertheless, it is also possible to envision a composite explanation object containing all of the required information to satisfy multiple stakeholders at once.
###
II-C XAI for B5G Security
Human-centric, AI-powered telecommunication in the B5G era will attract the attention of various parties that need assurance before trusting these systems. Giving convincing evidence of how the decision-making process inside an AI model works will be challenging, owing to the technical knowledge gap and the obfuscated internals of widely used AI/ML models. The security of this new telecommunication technology, in particular, has gained much attention from both malevolent and benevolent agents across all layers of the network.
Network softwarization (NS) and network function virtualization [NFV](#glo.acronym.NFV) introduced with 5G are expected to be significantly enhanced as we enter the B5G era. Figure [1](#S1.F1 "Figure 1 ‣ I Introduction ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions") shows that, going further, the conventional networking design will evolve into a new layered architecture established on complex black-box AI/ML models in all facets of communication, including security. In the first layer of the B5G architecture, data is gathered through IoT devices such as smartwatches, phones, drones, etc., to enable real-time services in higher layers. Although data collection is one of the essential operations, the attitudes towards it and the associated security/privacy issues are highly influenced by demography and the underlying regulations. Using XAI, such differences can be addressed evenly by giving more details about how the collected data is used inside AI models in the rest of the pipeline. Further, it enables system operators to assess the performance of each device more closely with respect to the overall AI system. The RAN, edge, core, and backhaul layers provide the infrastructure to reach higher speeds and quality of service based on enhanced virtualization techniques. The security of these layers is envisioned to be addressed through AI/ML-based methods to accommodate the massive volumes of data. Getting automated feedback on the performance of those AI/ML systems is paramount to ensure maximum resilience by identifying false predictions and diagnosing any system issues. It will benefit system operators and stakeholders who are not technically versed in AI/ML. In E2E slicing and ZSM [[69](#bib.bib69), [3](#bib.bib3)], AI/ML components are used as integral parts of the system architecture, which makes their security critical. For example, the ZSM’s E2E service intelligence enables decision-making based on data collected in the domain and standard data services. An attacker may craft inputs to cause the ML model to make incorrect judgments, threatening performance, causing financial loss, and jeopardizing SLA fulfillment and security assurances. XAI would be highly useful during the response process to estimate the overall effect and trace back to the most basic module responsible for the anomaly. Finally, the application layer would require the most high-level explanations, relevant to the end-users in B5G. Techniques like counterfactual explanations are ideal for instilling trust and confidence in the users of the application layer.
When designing a system with explainable security, one must evaluate the 6W questions: Why, Who, What, Where, When, and How to generate security explanations. Inspired by Vigano et al. [[70](#bib.bib70)], Figure [6](#S2.F6 "Figure 6 ‣ 3rd item ‣ II-B5 Stakeholders of XAI ‣ II-B Explainable AI ‣ II Background ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions") depicts the flow of identifying the basic building blocks when designing an explainable security system. First, the reason why the system needs XAI must be identified. Then, it must be decided who creates the explanation and for whom, along with the level of granularity of the content delivered to each group of actors. Identifying the needs of each actor early on helps to decide what aspects of the system need to be explained. Here the system designers must consider the layer of the B5G architecture and fit the explanation to its requirements. Although an explanation is generated in one layer, the place where it is made accessible may differ; whether it is offered as a separate service or embedded in the system/output must be decided. It is also essential to decide when the explanations are needed during the process, i.e., during design, installation or maintenance, defense, etc. Finally, the nature of the explanation is decided by answering the question of how to interpret the AI/ML model. This lays the groundwork for choosing the correct XAI methods for high-quality explanations.
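As a lightweight illustration of how the answers to the 6W questions might be recorded for a single B5G security component, the sketch below captures them in a small Python data structure. The field names and the example values are assumptions, not a standardised schema.

```python
# Hypothetical sketch: recording the 6W design answers for an explainable
# security component. Field names and values are illustrative only.
from dataclasses import dataclass

@dataclass
class ExplainableSecurityDesign:
    why: str     # reason the component needs explanations
    who: str     # producer of the explanation and its audience
    what: str    # aspect of the AI/ML system that is explained
    where: str   # layer/interface where the explanation is exposed
    when: str    # lifecycle phase in which it is generated
    how: str     # XAI technique used to generate it

slice_ids_design = ExplainableSecurityDesign(
    why="justify blocking decisions of the slice intrusion detector",
    who="XAI service -> SOC operators and slice tenants",
    what="per-flow feature attributions of the anomaly score",
    where="E2E slicing management domain, via the monitoring API",
    when="at inference time, attached to every raised alert",
    how="SHAP values over the flow features",
)
print(slice_ids_design.why)
```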
III B5G Threat Modelling and Taxonomy
--------------------------------------
After the stage of vision solicitation, 6G is currently in the early stage of determining the key performance indicators of the system. Nothing is fixed yet, but 6G will surely build on the development and evolution of 5G. 6G will inherit the advantages provided by 5G, improve on the remaining shortcomings, and leverage new technologies and applications. In this section, we present the B5G threats, dividing them into three categories (Figure [7](#S3.F7 "Figure 7 ‣ III B5G Threat Modelling and Taxonomy ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions")): B5G threats inherited from 5G networks, 6G technology threats, and 6G application threats.

Figure 7: The Landscape of Security Threats in 6G
###
III-A B5G threats inherited from 5G networks
This section aims to synthesize 5G security threats and cyber-attacks that attackers can reuse to target B5G networks. For the categorization, we focus the analysis on the technologies that are believed to play important roles in both existing 5G and future 6G networks: Software Defined Networks ([SDN](#glo.acronym.SDN)), Network Function Virtualization ([NFV](#glo.acronym.NFV)), Virtual Network Functions (VNF), and virtualization/containerization technologies.
####
III-A1 SDN, NFV and VNF
SDN and NFV are the key 5G technologies facilitating the establishment of networks, including their deployment, management, and operation. They are expected to be among the key enabling technologies for 6G. However, in the absence of proper security mechanisms, for example, the lack of TLS adoption, malicious actors could perform man-in-the-middle attacks and launch various other attacks by impersonating the controller, gaining unauthorized access to it, or modifying the channels [[71](#bib.bib71), [72](#bib.bib72), [73](#bib.bib73), [74](#bib.bib74)].
Moreover, as the policy enforcement process is distributed across physical switches, security threats and new information disclosure risks are introduced. A malicious actor can identify the action applied to a packet type by performing packet processing timing analysis [[72](#bib.bib72), [73](#bib.bib73), [75](#bib.bib75)]. Also, as the SDN controller manipulates flow rules in the data forwarding elements, it is vulnerable to DoS attacks [[76](#bib.bib76), [71](#bib.bib71), [72](#bib.bib72), [73](#bib.bib73), [75](#bib.bib75), [74](#bib.bib74)]. Besides, SDN facilitates third-party applications but, on the other hand, brings their vulnerabilities into the system [[71](#bib.bib71), [75](#bib.bib75), [76](#bib.bib76)]. Simple faults in network applications might lead to the breakdown of the control plane and the failure of network functionality. Moreover, traffic hijacking and re-routing are possible, i.e., the illegitimate appropriation of routing group addresses by corrupting the routing tables. Finally, network security policies and protocols can have vulnerabilities that affect the layers and interfaces of the SDN framework [[75](#bib.bib75), [74](#bib.bib74), [77](#bib.bib77), [76](#bib.bib76)].
NFV, meanwhile, enables placing various network functions in different network components based on their performance requirements and eliminates the necessity for function- or service-specific hardware. This technology is vulnerable to authentication threats due to the spoofing of information parameters in different VNFs, the unauthorized use of VNF predefined accounts or attributes (e.g., guest, ctxsys), weaknesses in password policies, and traffic spikes. Furthermore, if authorizations for accounts and applications in the NFV are not reduced to the minimum required for the tasks they have to perform, the elevation of privilege via incorrect verification of access tokens is possible [[78](#bib.bib78)].
In addition, NFVs are exposed to exploitation and abuse threats, such as the exploitation of third-party hosted network functions, a lawful interception function, a weakly designed or configured API with inaccurate access control rules, or poorly configured systems/networks; as well as, the unauthorized access to a function when hosted outside the operator’s network. Moreover, accessing the personal data stored in the log files can lead to remote access exploitation and compromise the system integrity [[71](#bib.bib71)]. Finally, NFVs can be attacked by exploiting vulnerabilities of their native protocols, e.g., GTP, Diameter, NGAP, NAS-5GS, JSON [[79](#bib.bib79)].
A VNF is a network service that the NFV allows to place in a virtualized form in dedicated hardware technology. Although VNFs are part of the NFV architecture, their threats are the result of the exploitation of code vulnerabilities while NFV threats are generally related to weaknesses in network protocols. For example, improper input validation, buffer overflows and underflows during read or write operations, dynamic memory deallocation, poorly defined restriction of operations within the boundaries of a memory buffer, integer overflow, path traversal, or vulnerable software components (i.e. libraries, frameworks, and other software modules, etc) are software-related vulnerabilities that can lead to attacks against a VNF, and compromise the rest of the 5G architecture [[80](#bib.bib80)]. Moreover, academic research [[81](#bib.bib81), [82](#bib.bib82)] and standardization groups [[83](#bib.bib83)] report authentication threats that may lead malicious actors to access data or perform unauthorized actions, and consequently have a range of issues, including information exposures, DoS attacks and arbitrary code execution.
####
III-A2 Virtualization/Containerization platform
5G and B5G networks are deeply based on virtualization technologies, allowing VNF in virtual machines. Virtualization platforms face different threats, depending on the different virtualization approaches followed in the network. In this subsection, we focus mainly on server virtualization software security threats and container-based threats.
Research divides server virtualization security threats into three main categories: hypervisor-based attacks, VM-based attacks, and VM image attacks [[84](#bib.bib84)]. A hypervisor-based attack is an exploit in which a malicious actor takes advantage of vulnerabilities in the program that allows multiple operating systems to share a single hardware processor. If attackers gain command of the hypervisor, all the VMs and the data they access will be under the attackers’ full control. Furthermore, it could compromise the control of the underlying physical system and the hosted applications. Some of the well-known attacks (e.g., Bluepill, Hyperjacking, etc.) insert VM-based rootkits that can install a rogue hypervisor or modify the existing one to completely control the environment. Since the hypervisor runs underneath the host OS, it is difficult to detect these attacks using regular security measures.
VM-based threats include VM escape, where malicious actors break the isolation boundaries of the VM and start communicating with the operating system directly, bypassing the virtual machine manager (VMM) layer; such an exploit opens the door for attackers to gain access to the host machine and launch further attacks. VM sprawl occurs when a large number of VMs exist in the environment without proper management or control; since they retain system resources (i.e., memory, disks, network channels, etc.) during this period, these resources cannot be assigned to other VMs. In cross-VM side-channel attacks, a malicious VM penetrates the isolation between VMs and then accesses the shared hardware and cache locations to extract confidential information from the target VM.
VM image threats comprise inside-VM attacks, where a VM image is infected with malware or OS rootkits at run-time; and outdated software packages in VMs, that can pose serious security threats in the virtualized environment, for example, a machine rollback operation may expose a software bug that has already been fixed [[85](#bib.bib85)].
Regarding the container management security threats, the two major types of risks that we examined are the compromise of an image or container and the misuse of a container to attack other containers, the host OS and other hosts. A container image that is missing critical security updates, or has an improper configuration, embedded malware or clear text secrets, can be the target of exploitation that compromised the security of the rest of the system. Likewise, images often contain sensitive components like an organization’s proprietary software and embedded secrets. If connections to registries are performed over insecure channels, the contents of images are subject to the same confidentiality risks as any other data transmitted in the clear [[86](#bib.bib86)]. By default in most container runtimes, individual containers can access each other and the host OS over the network. If a container is compromised and acting maliciously, allowing this network traffic may risk other resources in the environment. Moreover, a container running in privileged mode has access to all the devices on the host, thus allowing it to essentially act as part of the host OS and impact all other containers running on it.
###
III-B 6G Technology Threats
6G networks are expected to provide ultra-high data rate, low latency, high reliability, and improved localization precision in three dimensions. This section discusses the key trending technologies allowing 6G to achieve those goals and their corresponding security threats.
####
III-B1 High-speed transmission technologies
* •
Terahertz communications (THz)
THz communication technology may support ultra-high data rates (100 Gbps or greater) with low power consumption while effectively restraining eavesdropping. However, to adopt such technology, 6G cells must change from “small” to “tiny”, meaning that much more complicated hardware needs to be built. This requirement also brings new privacy and security concerns, especially eavesdropping and authorization. Indeed, eavesdropping is believed to be difficult but still feasible. According to [[87](#bib.bib87)], eavesdroppers can capture THz signals using narrow beams and by intercepting signals in line-of-sight transmissions. A countermeasure against these narrow-beam attacks has been introduced and has been shown to detect some, although not all, eavesdroppers. Moreover, authorization is another security concern that has been investigated in [[88](#bib.bib88)]. The authors proved that an unauthorized entity could capture communications by strategically placing objects in the transmission path and scattering the radiation toward the user. In [[89](#bib.bib89)], the concept of an electromagnetic signature of THz frequencies was mentioned as a potentially useful authentication method.
* •
Visible light communications (VLC)
VLC is a promising technology that allows higher bandwidths and electromagnetic interference resistance. It is believed to be the solution to growing demands for wireless connectivity. However, due to the physical characteristics of the light communication medium, VLC is exposed to sniffing/ eavesdropping, jamming, and data modification attacks [[90](#bib.bib90)][[91](#bib.bib91)][[92](#bib.bib92)]. To address these issues, in [[93](#bib.bib93)], the authors propose a protocol called SecVLC to protect the confidentiality and the integrity of data transmitted over vehicular networks. Besides, the authors of [[94](#bib.bib94)] present a VLC precoding technique to secure the physical layer and improve the confidentiality of VLC links.
* •
Molecular communication
The principal idea behind molecular communication technology is to transfer information using biological signals. It is a promising interdisciplinary technique that enables communication among moving nodes [[95](#bib.bib95)]. Nevertheless, it can be the target of various attacks [[96](#bib.bib96)], including transport layer attacks (e.g., desynchronization, unfairness), link layer attacks (e.g., flooding, packets storage exhaustion), network layer attacks (unfairness, collision), and physical layer attacks (e.g., jamming, tampering).
####
III-B2 AI/ML based technologies
AI and ML have been considered essential components of 6G technologies, making their security solutions more accurate, autonomous, and predictive. On the other hand, AI and ML bring additional security issues to multiple network layers, with an unknown number of potential vulnerabilities. Some of the most common attacks include:
* •
Evasion attacks [[97](#bib.bib97)] bypass the learned model during the test stages by injecting the false tested data.
* •
Model inversion attacks [[97](#bib.bib97)] attempt to recover the private dataset used to train a supervised learning model.
* •
AI middle-ware layer attacks [[98](#bib.bib98)] aim for data tampering and malicious interruptions.
* •
Adversarial attacks [[98](#bib.bib98)] fool or misguide the learning model by entering deceptive data to make the network system unstable, malfunctioning, or unavailable (a minimal sketch of such an attack is given after this list).
* •
Poisoning attacks [[99](#bib.bib99)] pollute a ML model’s training data.
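To ground these attack categories, the sketch below shows a minimal FGSM-style evasion/adversarial perturbation against a linear traffic classifier, written in plain NumPy and scikit-learn. The model, the synthetic data, and the epsilon budget are placeholders introduced only to illustrate the attack mechanics, not an attack reported in the cited works.

```python
# Hypothetical sketch of an FGSM-style evasion attack on a linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
y = (X[:, 0] - X[:, 3] > 0).astype(int)                  # 1 = attack traffic

clf = LogisticRegression(max_iter=1000).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

def fgsm(x, label, eps):
    """One-step fast-gradient-sign perturbation of a single sample."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))               # predicted P(attack)
    grad_x = (p - label) * w                              # d(log-loss)/dx
    return x + eps * np.sign(grad_x)                      # step that raises the loss

x = X[y == 1][0]                                          # a truly malicious flow
x_adv = fgsm(x, label=1, eps=0.5)
print("P(attack) before:", clf.predict_proba([x])[0, 1],
      "after:", clf.predict_proba([x_adv])[0, 1])         # probability drops
```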
####
III-B3 Distributed Ledger technologies
Blockchain and smart contracts are expected to see explosive growth in 6G networks. Nonetheless, several attacks may occur due to network connection security flaws, software development errors, and language restrictions [[100](#bib.bib100)] [[101](#bib.bib101)]. Firstly, blockchain-based systems must be capable of avoiding double-spending attacks, in which the same digital token can be duplicated or falsified and spent more than once [[102](#bib.bib102)]. Secondly, most majority-voting blockchain systems are vulnerable to 51% attacks [[102](#bib.bib102)]; this vulnerability occurs when a malicious user controls 51 percent or more of the blockchain nodes. Thirdly, an attacker or a group of attackers may create fake identities to capture peer-to-peer communications in the blockchain network. This is called a Sybil attack [[102](#bib.bib102)] and can target blockchain systems that use automated member addition methods. Fourthly, a re-entrancy attack may happen when two smart contracts frequently call each other but one is called before it has updated its state, which may lead to unexpected behavior. Lastly, blockchains and smart contracts must avoid privacy issues like transaction data leakage, user privacy leakage, and smart contract logic leakage during execution.
####
III-B4 Quantum Computing
Quantum computing is believed to become available in the 6G era and will make many cryptographic algorithms insecure due to its extremely rapid computational capacity. Quantum mechanical properties [[103](#bib.bib103)] like superposition and entanglement allow heavy problems such as the prime factorization of very large numbers and the discrete logarithm problem to be solved rapidly. On the other hand, adversaries will have quantum capabilities to perform quantum-based attacks [[104](#bib.bib104)]. It is challenging to integrate post-quantum cryptography (PQC) solutions into resource-constrained IoT devices to keep them resistant against quantum attacks. In addition, although perfect quantum cloning is forbidden by the laws of quantum mechanics, approximate cloning methods with high precision may still create a near-identical copy of an arbitrary, unknown quantum state without changing the original state. This is called a quantum cloning attack [[104](#bib.bib104)]. PQC is, thus, an active trending research topic, and an appropriate algorithm is expected to be selected between 2022 and 2024.
###
III-C 6G Application Threats
Although 6G is still at the stage of vision solicitation and has not yet been developed, there have already been several emerging ideas considered as key applications empowered by 6G. This section summarizes a list of the most discussed applications and the potential corresponding security threats.
* •
Smart Cities: 5G now and 6G in the future promise to enhance the productivity and efficiency of Smart Cities. However, the connected “things” deployed in Smart Cities expose a wide range of security vulnerabilities [[105](#bib.bib105)] and serious risks that malicious actors can exploit. As mentioned in the previous sections, many authentication and encryption mechanisms used for the devices are no longer secure when 6G technologies empower the attacker. The attacker may hijack and take control of “weak” devices or target personal information for further fraudulent transactions and identity theft. Due to 6G capabilities, it is also easier for an attacker to perform DoS/DDoS attacks, e.g., flooding parking meters with superfluous requests to prevent legitimate ones from being treated.
* •
Healthcare: 6G will likely become the central communication platform for future digital healthcare services [[106](#bib.bib106)]. However, access control, device authentication, and secure communication for billions of tiny health devices will be a challenge to overcome. The confidentiality and ethical usage of patients’ electronic records will also be considered critical concerns.
* •
Industry 5.0: 6G is crucial for the deployment and operation of Industry 5.0, which is believed to be very scalable and highly automated. The main related security concerns include access control and authentication for restricting access to sensitive resources such as robots or intellectual property connected with Industry 5.0 [[107](#bib.bib107)]. In addition, security monitoring solutions must be prepared to be able to deal with a huge volume of multi-dimensional captured monitoring data [[108](#bib.bib108)].
* •
Smart Grid 2.0: In comparison with Smart Grid 1.0, Smart Grid 2.0 provides self-healing and self-organized capabilities. However, the most relevant security concerns consist of physical attacks, AI/ML-based attacks, software-related attacks, and threats against control components (e.g., SCADA) [[108](#bib.bib108)], [[109](#bib.bib109)]. Another critical issue is a mechanism of trust management to control the peer-to-peer trading of energy, which is among the main characteristics of Smart Grid 2.0.
* •
Extended Reality: 5G technologies have elevated the Virtual Reality (VR) experience thanks to increased bandwidth and lowered latency. 6G promises to bring VR to yet another level and make it available for various applications, including online education, virtual tourism, online gaming and entertainment, robot control, and healthcare. These applications will collect a remarkable volume of sensitive personal information and thus raise the need for solutions for gathering, storing, sharing, and protecting data. Traditional cryptography may help but brings a trade-off between latency, reliability, and confidentiality. The characteristics of 6G promise potential physical-layer solutions that can solve these problems while incurring less latency [[110](#bib.bib110)].
IV Role of XAI on Security Issues on B5G
-----------------------------------------
This section discusses the key enablers and network domains envisaged in the B5G era, elaborating how each technology demands XAI-based security to improve the accountability of constituent intelligent systems. The added cost of introducing XAI is also analyzed for enabling technologies and network domains. AI/ML-based security applications under each subsection are summarized in Table [III](#S4.T3 "Table III ‣ IV-E4 Summary ‣ IV-E Security of B5G E2E Slicing ‣ IV Role of XAI on Security Issues on B5G ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions").
###
IV-A Security of B5G Devices/IoT
####
IV-A1 Possible Security Threats, Challenges, Issues

Figure 8: B5G-IoT security techniques implemented using learning-based methods are mostly considered black-box systems. Detecting malicious agents with such methods can produce unpredictable results when the systems themselves come under stress from external factors such as attacks. Additional interpretations make it easier to trace such issues back to the model internals and provide assurances to users.
IoT refers to a network of interconnected devices, embedded throughout nearly every environment, that collect and exchange data over the Internet, enabling many services and applications that raise the standard of living. Although the IoT has many advantages, it also poses many problems, particularly regarding security. Addressing these problems and guaranteeing the security of IoT devices and services must be a top priority. The ever-changing and heterogeneous nature of IoT systems can make this issue even more challenging [[111](#bib.bib111)].
As shown in Fig. [8](#S4.F8 "Figure 8 ‣ IV-A1 Possible Security Threats, Challenges, Issues ‣ IV-A Security of B5G Devices/IoT ‣ IV Role of XAI on Security Issues on B5G ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions"), four main types of attacks can be anticipated in an IoT system: network, software, physical, and encryption attacks. The [Distributed Denial of Service (DDoS)](#glo.acronym.DDOS) attack is one of the common attacks seen in the connectivity or network layer, and it is particularly severe in the IoT because of the magnitude of the damage that could be inflicted upon the whole network [[112](#bib.bib112)]. In addition, traffic analysis attacks, RFID spoofing/cloning/unauthorized access, MITM, sinkhole attacks, and routing information attacks fall under network attacks [[113](#bib.bib113), [114](#bib.bib114), [115](#bib.bib115), [116](#bib.bib116)]. Viruses, worms, spyware, phishing attacks, malicious scripts, and DoS attacks can be considered software attacks possible in the IoT [[117](#bib.bib117), [118](#bib.bib118)]. Under physical attacks, node tampering, malicious node/code injection, and sleep deprivation of sensors are some of the plausible attacks [[111](#bib.bib111)], although these attacks are rather difficult to carry out [[118](#bib.bib118)]. Encryption attacks target the communication channels of the IoT; side-channel attacks, cryptanalysis attacks, and MITM attacks are a few examples [[111](#bib.bib111)].
Certain security challenges come along with IoT systems. Resource limitations for processing and storage are a major challenge, as they may rule out computationally intensive security measures (e.g., cryptography). This is compounded by the challenge of handling the high volume of data that arrives with high velocity, veracity, and variety. Amidst all this, security stands out as one of the major challenges [[119](#bib.bib119)].
Extensive research has been done on using ML/AI techniques to mitigate security issues in the IoT. A few examples can be stated as follows (Fig. [8](#S4.F8 "Figure 8 ‣ IV-A1 Possible Security Threats, Challenges, Issues ‣ IV-A Security of B5G Devices/IoT ‣ IV Role of XAI on Security Issues on B5G ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions")). In [[120](#bib.bib120)], access control techniques were implemented using naive Bayes and SVM algorithms to mitigate intrusions. In [[121](#bib.bib121)], reinforcement-based techniques (Q-learning and Dyna-Q) were used for authentication to prevent spoofing attacks, while in [[122](#bib.bib122)] and [[123](#bib.bib123)] the authors used SVM and DNN, respectively. To enable secure IoT offloading against jamming attacks, the authors of [[124](#bib.bib124), [125](#bib.bib125)] used Q-learning, an [RL](#glo.acronym.RL) technique, while in [[126](#bib.bib126)] the authors used a [Deep Q-Network (DQN)](#glo.acronym.DQN). However, all those algorithms lack the ability to explain their outputs. Given the nature of the application and its gravity, the outputs of these algorithms must be reliable. Hence, these black-box models need to be wrapped with an explainable layer to make the system more accountable.
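To make the flavour of such RL-based defenses concrete, the following minimal sketch uses single-state Q-learning to tune a spoofing-detection threshold on a channel-difference statistic, loosely in the spirit of the authentication schemes cited above. The environment, feature model, reward shape, and all parameters are illustrative assumptions rather than the method of [[121](#bib.bib121)].

```python
# Hedged sketch: single-state (bandit-style) Q-learning that tunes a
# spoofing-detection threshold. The channel model and rewards are toy
# assumptions made purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

thresholds = np.linspace(0.1, 0.9, 9)   # candidate detection thresholds (actions)
q_values = np.zeros(len(thresholds))    # single-state Q-table
alpha, epsilon = 0.1, 0.1               # learning rate, exploration rate

def channel_feature(is_spoofed):
    """Toy channel-difference statistic: larger on average for spoofed frames."""
    base = 0.7 if is_spoofed else 0.3
    return float(np.clip(base + 0.15 * rng.standard_normal(), 0.0, 1.0))

for _ in range(5000):
    # epsilon-greedy choice of threshold
    a = rng.integers(len(thresholds)) if rng.random() < epsilon else int(np.argmax(q_values))
    is_spoofed = rng.random() < 0.3                         # assume 30% of frames are spoofed
    flagged = channel_feature(is_spoofed) > thresholds[a]   # receiver's decision
    reward = 1.0 if flagged == is_spoofed else -1.0         # +1 correct, -1 miss/false alarm
    q_values[a] += alpha * (reward - q_values[a])           # incremental Q-update

print("learned detection threshold:", thresholds[int(np.argmax(q_values))])
```

Even in such a toy, the learned Q-table is exactly the kind of artifact that the following subsection argues should be made inspectable rather than left opaque.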
####
IV-A2 How XAI can help to mitigate these attacks/issues
Although it is clear that a range of ML/AI techniques [[127](#bib.bib127)] are used for IoT security, the question remains whether these techniques can be trusted to perform well in the real world. Most machine learning techniques used in IoT security are hardly backed by explainable techniques that would keep them safe from an adversary. DQNViz proposes a visual-analytics-based method that exposes the otherwise opaque training process at four levels, making the large experience space more comprehensible to users [[128](#bib.bib128)]. In addition to the usual training statistics, it enables developers to verify proper training through visualization at the epoch and episode levels and, most importantly, through segment-level interpretation that reveals what the agent sees. Such visualization is integral for a high-stakes security system in the IoT. Similar research explaining the workings of DQNs is presented in [[129](#bib.bib129)]. Given the abundance of RL methods for security in IoT, there remains the possibility of using mimic learning [[130](#bib.bib130)] to match the output of a Q-function neural network. In [[66](#bib.bib66)], a Linear Model U-tree (LMUT) is used to achieve near-optimal performance with high interpretability in [DRL](#glo.acronym.DRL).
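As a concrete illustration of the mimic-learning idea mentioned above, the sketch below trains a shallow decision tree to reproduce the outputs of a black-box network standing in for a Q-function; the data, model sizes, and fidelity measure are assumptions made for demonstration, not the LMUT method of [[66](#bib.bib66)].

```python
# Hedged sketch of mimic learning: fit an interpretable surrogate (a small
# decision tree) on the soft labels produced by a black-box regressor that
# stands in for a Q-network. All data and hyper-parameters are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(2000, 4))            # toy state features
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]     # toy "Q-value" target

blackbox = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000,
                        random_state=0).fit(X, y)

# Mimic step: label fresh states with the black-box outputs (soft labels)
# and train the interpretable surrogate on those labels.
X_probe = rng.uniform(-1, 1, size=(5000, 4))
soft_labels = blackbox.predict(X_probe)
surrogate = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_probe, soft_labels)

print("fidelity (R^2) to the black box:", surrogate.score(X_probe, soft_labels))
print(export_text(surrogate, feature_names=[f"s{i}" for i in range(4)]))
```

The printed tree can be audited line by line, and the fidelity score indicates how faithfully the surrogate mirrors the black box in the probed region.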
####
IV-A3 Added cost of Using XAI
Completing computationally complex and latency-sensitive security tasks on IoT devices with limited processor, memory, radio bandwidth, and battery resources is often very difficult. Low-cost sensors with few security measures are more susceptible to attack than conventional computer systems [[127](#bib.bib127)]. Therefore, deploying XAI techniques directly on resource-constrained devices would not be ideal. Service providers will either use more expensive devices with high memory and computational power or generate explanations on edge/cloud servers, which would require more bandwidth. Even if the explanations are generated locally, the frequency of explanations will in most cases be lower than that of inference. Therefore, it will not be a problem except in critical scenarios such as military applications and medical requirements.
####
IV-A4 Summary
AI/ML usage in IoT is increasing rapidly. Its diverse applicability has drawn attackers’ attention, opening the possibility of a wide variety of new attacks and attack vectors across a large pool of devices. Detection of these attacks is increasingly shifting towards using AI/ML-based systems. IoT/devices being the closest layer to end-users in network architectures, accountability and trustworthiness of the AI systems become important facets that need to be addressed at once. Using XAI, the outputs of those systems can be translated into more comprehensible explanations to the end-users.
###
IV-B Security of Radio Access Network
####
IV-B1 Possible Security Threats, Challenges, Issues
[RAN](#glo.acronym.RAN)s are components of a telecommunications system that link mobile devices/[User Equipment (UE)](#glo.acronym.UE) to public and private core networks via an existing network backbone. LTE and 5G RANs are capable of offering ultra-reliable (deterministic) wireless performance [[131](#bib.bib131)]. A RAN can consist of a baseband unit (BBU), radio unit or remote radio unit, antennas, and software interfaces. One of the earliest RAN installments was the Global System for Mobile Communications (GSM) RAN. From there onwards, different types of RANs such as the Enhanced Data Rates for GSM Evolution RAN (GERAN), the Universal Mobile Telecommunications System RAN (UTRAN), and the Evolved UTRAN (E-UTRAN) have been deployed with the advancement of 2G, 3G, and 4G radio access technologies, respectively. However, the most recent additions are the [Centralized/Cloud Radio Access Network (CRAN)](#glo.acronym.CRAN), [Virtualized Radio Access Network (VRAN)](#glo.acronym.VRAN) and [Open Radio Access Network (ORAN)](#glo.acronym.ORAN), which are expected to be associated with 5G and beyond, incorporating other contemporary technologies such as [SDN](#glo.acronym.SDN) and [NFV](#glo.acronym.NFV) [[132](#bib.bib132)].
We will mostly focus on threats and challenges in CRAN, ORAN, and VRAN. According to the authors of [[31](#bib.bib31)], the C-RAN architecture can be affected by a whole range of security threats. Some of the threats faced by RANs are common to any wireless network: eavesdropping, [Man in the Middle (MITM)](#glo.acronym.MITM) attacks, MAC spoofing, identity theft attacks, jamming attacks, and TCP/UDP flooding, to name a few. However, some threats are inherited from the predecessor of the [CRAN](#glo.acronym.CRAN), Cognitive Radio Networks (CRN). For example, the Primary User Emulation Attack (PUEA), Spectrum Sensing Data Falsification (SSDF) attacks, Common Control Channel (CCC) attacks, Beacon Falsification (BF) attacks, cross-layer attacks targeted at several layers, and SDR (Software Defined Radio) attacks are a few such attacks that can be seen in CRANs. The authors also emphasize the native challenges of CRAN regarding the security and trust of the virtualized BBU pool. The virtualized BS needs to be secured where multi-point processing algorithms, terminal device data transmission, and dynamic traffic capacity allocation are carried out.
Machine-learning-based IDSs are the most promising anomaly-based IDSs because they can gradually improve their performance by learning over time while performing a given task. The authors of [[133](#bib.bib133)] used [Multi Layer Perceptrons (MLP)](#glo.acronym.MLP) and [Support Vector Machines (SVM)](#glo.acronym.SVM) with the kernel trick (KSVM) to classify and detect multi-stage jamming attacks in the CRAN BBU pool. O-RANs are envisaged to be the future of RAN technologies in B5G, and self-organization and intelligence-based technologies will be extensively used in their deployment [[134](#bib.bib134)]. The heavy automation and self-organization needed on the operators’ side to reduce costs therefore increase the need for reliable and more secure intelligence-based methods (i.e., AI/ML). Explainable AI has the potential to mitigate security issues that arise in those intelligent systems.
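A hedged sketch of the kernel-SVM approach is given below: an RBF-kernel SVM separates “jammed” from “normal” frames using two synthetic radio features. The feature choices and data are assumptions for demonstration and do not reproduce the pipeline of [[133](#bib.bib133)].

```python
# Illustrative sketch: RBF-kernel SVM (the "kernel trick") classifying
# jammed vs. normal frames from two synthetic features. Data are assumed.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 1000
# Feature 1: noise-floor estimate (dBm), Feature 2: packet delivery ratio.
normal = np.column_stack([rng.normal(-90, 3, n), rng.normal(0.95, 0.02, n)])
jammed = np.column_stack([rng.normal(-70, 5, n), rng.normal(0.55, 0.10, n)])
X = np.vstack([normal, jammed])
y = np.array([0] * n + [1] * n)                   # 0 = normal, 1 = jammed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ksvm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
ksvm.fit(X_tr, y_tr)
print("test accuracy:", ksvm.score(X_te, y_te))
```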

Figure 9: XAI in the ORAN architecture: this figure shows a modified ORAN architecture that accommodates XAI in its security. Non-real-time intelligent controllers in ORAN would require pre-model/in-model/post-hoc XAI methods to continuously improve the resilience of ML models. A real-time intelligent RAN controller will benefit most from in-model/post-hoc explainers, as model training is not usually done in real time. xApps (third-party applications) will be required to follow specific standards to meet the system’s explainability policies.
####
IV-B2 How XAI can help to mitigate these attacks/issues
The O-RAN Alliance has brought all the latest C/V/O-RAN technologies together to realize the B5G RAN. As shown in Fig. [9](#S4.F9 "Figure 9 ‣ IV-B1 Possible Security Threats, Challenges, Issues ‣ IV-B Security of Radio Access Network ‣ IV Role of XAI on Security Issues on B5G ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions"), the architecture can be readily modified to embed XAI techniques and improve its resilience.
IDSs in RANs are one of the most extensively researched areas of [RAN](#glo.acronym.RAN) security using data-driven methods such as AI/ML. These systems must stay transparent to the operators, developers, and engineers working on them. Testing those models in the real world can result in some false classifications, and the cost of such a misclassification, which could result in a breach, can be too high in RAN systems. Using explainable AI methods makes it possible to interpret why a model behaved unexpectedly. The first step in making amends is to understand the reason for the misclassification; once the cause is known, it is possible to prevent future attacks of that kind. The work in [[11](#bib.bib11)] is one such example, using an adversarial approach to explain linear and MLP classifiers based on the minimum modifications required in the input features to correctly classify a misclassified sample.
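For a purely linear classifier, the minimum-norm modification that moves a sample across the decision boundary has a closed form, which makes the core idea behind such adversarial explanations easy to sketch. The toy below is an assumption-based illustration of that geometric idea, not the exact method of [[11](#bib.bib11)].

```python
# Hedged sketch: counterfactual-style explanation for a linear classifier.
# The smallest L2 change that crosses the decision boundary is a scaled
# copy of the weight vector; the per-feature change is the "explanation".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # toy ground truth
clf = LogisticRegression().fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
x = X[0]                                          # instance to explain
margin = w @ x + b
delta = -(margin / (w @ w)) * w * 1.01            # just past the boundary

print("original decision score:", round(float(margin), 3))
print("minimum per-feature change:", np.round(delta, 3))
print("prediction after change:", clf.predict((x + delta).reshape(1, -1))[0])
```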
####
IV-B3 Added cost of Using XAI
AI/ML-powered radio resource allocation, resource scheduling, and power allocation are integral functions of ORAN (Fig. [9](#S4.F9 "Figure 9 ‣ IV-B1 Possible Security Threats, Challenges, Issues ‣ IV-B Security of Radio Access Network ‣ IV Role of XAI on Security Issues on B5G ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions")). To ensure accountability, open distributed units that host those models will also require pipelines built to generate and communicate explanations. This requirement calls for more computation power and resources [[135](#bib.bib135)]. ML and AI models use real-time data from the RAN to monitor the RAN’s health and performance. As O-RAN’s security and management capabilities are enhanced by the obtained results, the added costs are justifiable. XAI techniques applied to those ML techniques will require additional time, effort, and resources. Near-real-time and non-real-time [RAN Intelligence Controllers (RIC)](#glo.acronym.RIC) will require additional computation power to host interpreters, incurring further costs. However, explanations are typically not a real-time necessity, so a certain leeway in power is possible. The RIC also offers an open platform to host third-party applications (x/rApps) by specialist software providers, and thus additional XAI-based measures will be needed. Third-party application vendors should use fitting XAI methods to complement the requirements on the RIC during deployment to ensure software isolation, secure, standardized interfaces, and access controls that guarantee x/rApps cannot bring vulnerabilities into the RAN [[136](#bib.bib136), [137](#bib.bib137), [138](#bib.bib138)].
####
IV-B4 Summary
RAN commercialization is headed toward an alliance between CRAN, VRAN, and ORAN (xRAN) technologies. Each of these technologies is closely coupled with intelligent systems in operations such as resource allocation and optimization. AI/ML-powered zero-trust architecture will revolutionize the security in RAN technologies, from automating user access control policies to auditing. Backing up such integral tasks with a canopy of user comprehensible explanations would increase the accountability of the intelligent systems used under the hood.
###
IV-C Security of B5G Edge Network
####
IV-C1 Possible Security Threats, Challenges, Issues
Simply put, edge computing means performing computations as close as feasible to the resource-constrained devices where data is generated, rather than at much greater distances [[139](#bib.bib139)]. Edge layers preprocess data acquired from many sources using caching and processing modules to deliver near-real-time replies to mobile consumers. Edge networks are becoming increasingly popular [[140](#bib.bib140)] due to their cost-effectiveness across different fields. Clear advantages of edge computation can be seen in the cost-effectiveness of data usage [[141](#bib.bib141)], privacy improvement, and bandwidth usage [[142](#bib.bib142), [143](#bib.bib143)], which in turn enable the implementation of novel ML applications [[144](#bib.bib144)].
Authors in [[25](#bib.bib25)] have stated that AI security in B5G edge networks can be considered under two headings: ”AI for edge security” and ”security for edge AI.” The former refers to AI techniques used to secure edge systems, while the latter refers to the security of AI systems deployed in edge networks. The authors also identify [DoS](#glo.acronym.DOS) attacks, service or resource manipulation, privacy leakage, and man-in-the-middle attacks as the most prevalent security concerns for edge infrastructure.
Current research describes the use of artificial intelligence as a facilitator of edge security in various contexts, including more general applications and complete architectures that rely on AI. One such instance can be seen in [[145](#bib.bib145)], where the authors propose a secure architecture for the [IoT](#glo.acronym.IoT), namely AI4SAFE-IoT. The three-layer (network, application, and edge) architecture uses an AI engine for security across all three layers. The network-layer IDS is claimed to mitigate sinkhole, DoS, rank, and local repair attacks in the proposed architecture. The authors of [[146](#bib.bib146)] have emphasized the possibility of utilizing edge AI recommender systems to send suggestions to service customers via an app in the e-tourism domain. Although edge AI has other inherent challenges, the security risks associated with artificial intelligence may be reduced by providing the AI modules with more interpretable and fault-tolerant methods that make the models more transparent.
####
IV-C2 How XAI can help to mitigate these attacks/issues
Some designs, such as AI4SAFE-[IoT](#glo.acronym.IoT), may contain a variety of AI models and topologies, which can result in a large number of complex computations being performed. The basic issue that explanations of such processes must deal with is therefore finding methods to make all of this complexity more manageable. Building a proxy model that behaves similarly to the actual model, but more understandably, can accomplish this goal. Local Interpretable Model-Agnostic Explanations (LIME) [[49](#bib.bib49)] is a prime example of a linear proxy model. With LIME, a black-box system is described by probing its behavior on perturbations of an input, and the data from that probing is then used to build a local linear model that acts as a simplified proxy for the original model in the vicinity of the input. Since this method is model-agnostic and comparatively fast, the authors emphasize that the technique may be used in a wide range of models and problem domains to find the areas of the input that are most important for a decision. Research is also ongoing into explainable recommender systems [[147](#bib.bib147)] that could serve as more resilient replacements for the edge-AI recommender systems described in [[146](#bib.bib146)].
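To make the mechanism tangible, the following compact sketch hand-rolls LIME’s core idea rather than calling the lime library itself: perturb the instance, weight the perturbations by proximity, and fit a weighted linear surrogate locally. The black-box model and data are illustrative assumptions.

```python
# Minimal LIME-style local surrogate: perturb around one instance, query the
# black box, weight samples by proximity, and fit a local linear model whose
# coefficients serve as the explanation. Everything here is a toy assumption.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 5))
y = ((X[:, 0] - X[:, 3]) > 0).astype(int)                # toy "black-box" task
blackbox = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

x0 = X[0]                                                # instance to explain
Z = x0 + rng.normal(scale=0.5, size=(500, 5))            # perturbations around x0
pz = blackbox.predict_proba(Z)[:, 1]                     # black-box probabilities
weights = np.exp(-np.linalg.norm(Z - x0, axis=1) ** 2)   # proximity kernel

local = Ridge(alpha=1.0).fit(Z - x0, pz, sample_weight=weights)
for i, coef in enumerate(local.coef_):
    print(f"feature {i}: local influence {coef:+.3f}")
```

The signed coefficients indicate which features push the black-box score up or down in the neighborhood of the explained instance, which is the kind of output LIME returns.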
####
IV-C3 Added cost of Using XAI
Edge intelligence is a key approach in B5G networks that uses various resources, including storage, caching, and computation power. For improved performance, the edge server is constantly loaded with powerful multidisciplinary algorithms, including ML, data mining, NLP, and deep reinforcement learning [[148](#bib.bib148)]. XAI broadens the horizon of edge intelligence (edge caching, training, inference, and offloading [[149](#bib.bib149)]) by adding a fifth dimension: edge explanations. Access to edge caching for generating and storing pre-model explanations is necessary to ensure security in the edge and IoT layers. Thus, additional storage space will be required.
Similarly, when generating in-model and post-hoc explanations during the edge training and inference stages, supplementary computational costs might be incurred depending on whether the situation requires new devices or edge/cloud servers. If XAI methods are deemed too computationally expensive, operations can be extended and distributed using edge offloading. Each of these scenarios carries inherent costs.
####
IV-C4 Summary
Edge computing thrives on reducing costs, latency, and bandwidth usage. However, it opens a new threat surface exposed to attacks such as MITM, DoS, and privacy leakage. AI/ML is increasingly used to mitigate those attacks, and those systems must be reinforced with a concretely interpretable data flow. Local and global XAI methods will be important for improving users’ trust in these services despite the added resource utilization in the long run.
###
IV-D Security of Core and Backhaul Networks
####
IV-D1 Possible Security Threats, Challenges, Issues
Core network refers to the highly functional communication facilities that link primary nodes, providing routes for communication between subnetworks. In other words, it is the core part of a telecommunication network that delivers services to the users who are linked via the access network. The backhaul network links BSs to network controllers within a coverage region, which interconnect to the core network through the core transport network. The backhaul network is also known as the first mile and the last mile (first mile from a wireline perspective, and last mile from a mobile perspective) [[150](#bib.bib150)].
Backhaul and core networks are subject to a set of threats known to affect any network. Eavesdropping and DoS attacks on backhaul networks, together with possible solutions such as mutual authentication, key exchange, and perfect forward secrecy, are discussed in [[151](#bib.bib151)]. The authors of [[152](#bib.bib152)] have proposed IPsec tunnel mode and IPsec bound end-2-end tunnel (BEET) mode-based solutions to LTE-backhaul-related security issues such as DoS, distribution of viruses, and unwanted communication via VoIP. In [[153](#bib.bib153)], the authors have proposed security architectures for backhaul networks in an [SDN](#glo.acronym.SDN) environment using host identity protocol (HIP) and policy-based communication and synchronized networks against spoofing and DoS attacks. TCP reset attacks, DoS, and DDoS were considered by the authors of [[154](#bib.bib154)], who propose a VPN-based architecture for backhaul security. A similar VPN-based architecture was proposed later in [[155](#bib.bib155)], considering attacks such as IP-based attacks, replay attacks, eavesdropping, spoofing, and DoS attacks. In [[156](#bib.bib156)], the authors emphasize DoS and MITM attacks and propose IPsec and firewalls as possible solutions.
One of the major security challenges faced here is the decentralized and distributed data caching, which can open up many attack possibilities. Secure link management, communication, and handover security are therefore some of the challenges that need to be addressed. In addition, network optimization, architectural enhancement, and performance metrics are considered in the literature [[157](#bib.bib157)]. There is a growing trend of applying reinforcement learning and machine learning methods in backhaul and core networks. For instance, in [[158](#bib.bib158)], a Q-learning method is proposed for increasing the dependability of a millimeter-wave (mmW) non-line-of-sight small cell backhaul system. The authors of [[159](#bib.bib159)] have addressed the issue of adaptive call admission control using a Q-learning algorithm. The authors of [[160](#bib.bib160)] have emphasized the usage of ML in an [SDN](#glo.acronym.SDN) environment; they used ANN methods on top of IP routing to estimate and reallocate available network resources to newly added slices using Traffic Engineering (TE) logic. Like many other ML applications, this study has not considered the interpretability and transparency of the model in its evaluations. In the real world, the accountability and fault tolerance of these tactics are critical. A general XAI system may be utilized to address this research need. The following section explores some of these options.
####
IV-D2 How XAI can help to mitigate these attacks/issues
The lack of explainability in the above applications casts doubt on their credibility and feasibility for industrial implementation. However, there is hope for these systems when coupled with interpretations of their internal workings. The [MLP](#glo.acronym.MLP) used in [[160](#bib.bib160)] for network optimization can be improved with various post-hoc explainability techniques, such as case-based reasoning (CBR) [[161](#bib.bib161)] coupled into the MLP network. With B5G around the corner, low latency and high throughput are closely associated with backhaul and core networks. Although ML is fully capable in this regard, the models need to be explainable enough without adding a burden on latency and throughput. Explaining RL models with LMUTs, or replacing them with Programmatically Interpretable Reinforcement Learning (PIRL), is worth applying since these approaches are efficient. PIRL policies are expressed using a high-level, human-readable programming language, making them understandable to humans [[64](#bib.bib64)]. Here, identifying a policy that maximizes long-term reward is accomplished through Neurally Directed Program Search (NDPS), an imitation-learning-inspired technique [[162](#bib.bib162), [163](#bib.bib163)]. Notably, in the experiments conducted in [[64](#bib.bib64)], the NDPS model surpassed the DRL model in performance.
####
IV-D3 Added cost of Using XAI
Dynamic resource management seems to be one of the most pressing issues for wireless backhauling in using limited resources efficiently. For this purpose, AI/ML-based systems have been widely adopted in recent studies. Consequently, XAI techniques become a requirement to increase their resilience and accountability. Deploying XAI methods in energy-efficient small cell backhauling techniques in [Unmanned Aerial Vehicles (UAV)](#glo.acronym.UAV), high altitude platform stations, and satellites [[164](#bib.bib164)] will be highly challenging in terms of costs. These costs might be incurred on additional computation power, caching, and bandwidth to generate and communicate explanations. Although cost constraints could be a damper for XAI in core and backhaul networks, system insights gained through explanations are important because they indicate that wireless backhaul can be used in the field without any performance losses, and they also highlight the adjustments needed for optimal field use and robustness of the AI/ML methods used.
####
IV-D4 Summary
Security is an emerging trend for SDN and NFV-based backhaul traffic monitoring. Better network optimization, architectural enhancements, and security enhancements are envisaged in future research in B5G networks. AI/ML-based systems to identify common attacks such as viruses, MITM, replay attacks, and DoS attacks will also be applied in the core and backhaul parts of the networks. To avoid backhaul bottlenecks, balance the load, and measure the overall performance of resilience in backhaul networks, interpretable AI/ML models are more beneficial than black-box models.
###
IV-E Security of B5G E2E Slicing
####
IV-E1 Possible Security Threats, Challenges, Issues
Network slicing means partitioning network architectures into virtual elements. This allows operators to meet customized client needs [[165](#bib.bib165)]. It is highly analogous to dynamically allocating computer resources to enable concurrent execution of threads in a complex software system, a notion known as program slicing. Program slicing divides (disaggregates) software routines into many threads and configures computing resources to create virtual computing environments for parallel processing. Similarly, through the segmentation of network designs into virtual components, [SDN](#glo.acronym.SDN) and [NFV](#glo.acronym.NFV) provide much more network flexibility than previously possible. In its most basic form, network slicing allows numerous virtual networks to be constructed on top of a single physical infrastructure, letting network operators customize the deployment of the B5G resources and functions required to serve specific consumers and market segments.
Authors of [[166](#bib.bib166)] have reported on both classical (well-researched) security threats and non-trivial, less-researched threats affecting network slicing. Classical security threats include traffic injection into interfaces, network slice manager impersonation, host platform impersonation, and interface monitoring. Among the non-trivial security threats that call for further research, passive side-channel attacks, active side-channel attacks, compromise of network functions, and end-device vulnerabilities are prominent. It is worth noting that these security threats violate at least one of the leading security principles (confidentiality, authentication, authorization, availability, and integrity).
Numerous ML-based techniques are available in the existing literature to prevent such security vulnerabilities. For mitigating in- and out-of-band jamming and external polarization attacks, three distinct models were used in the literature [[167](#bib.bib167)]: (i) ANNs, (ii) a semi-supervised one-class support vector machine (OCSVM), and (iii) unsupervised density-based spatial clustering of applications (DBSCAN). [[168](#bib.bib168)] and [[169](#bib.bib169)] proposed [RL](#glo.acronym.RL) for admission control design that would interact with the dynamics of [RAN](#glo.acronym.RAN) operation. While the authors in [[168](#bib.bib168)] used a semi-Markov decision process (SMDP) model to represent the RL agent, the authors in [[169](#bib.bib169)] used a stochastic artificial neural network (S-ANN). In [[170](#bib.bib170)], Q-learning was used to solve the issue of slice admission control for revenue maximization. Although this provides adaptability to the environment while still attaining near-optimal performance, due to the scalability issues inherent to Q-learning the work was extended in [[168](#bib.bib168)], where the authors propose an analytical model for the admissibility region in a sliced network that provides formal service guarantees to network slices. They also propose an online machine-learning-based admission control algorithm that maximizes the infrastructure provider’s monetization. Such models can draw the attention of stakeholders once they are commercially applied, and thus accountability and resilience are of great importance.
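To ground the admission-control use case, the following toy sketch (not the model of [[170](#bib.bib170)] or [[168](#bib.bib168)]) applies tabular Q-learning to accept or reject incoming slice requests under a capacity constraint; the capacity, revenue, and traffic statistics are invented purely for illustration.

```python
# Hedged sketch: tabular Q-learning for slice admission control. The agent
# accepts or rejects slice requests to maximize revenue without exceeding
# capacity. All numbers below are toy assumptions.
import numpy as np

rng = np.random.default_rng(7)
CAPACITY = 10                         # available resource units
REVENUE = {0: 1.0, 1: 3.0}            # revenue per accepted slice type
DEMAND = {0: 1, 1: 4}                 # resource units each slice type needs

Q = np.zeros((CAPACITY + 1, 2, 2))    # Q[occupied, slice_type, action]
alpha, gamma, eps = 0.1, 0.9, 0.1

occupied = 0
for _ in range(50_000):
    slice_type = int(rng.random() < 0.3)                  # 30% are large slices
    s = (occupied, slice_type)
    a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
    reward = 0.0
    if a == 1:                                            # action 1 = accept
        if occupied + DEMAND[slice_type] <= CAPACITY:
            occupied += DEMAND[slice_type]
            reward = REVENUE[slice_type]
        else:
            reward = -5.0                                  # SLA-violation penalty
    if occupied > 0 and rng.random() < 0.2:                # slices randomly depart
        occupied -= 1
    s_next = (occupied, int(rng.random() < 0.3))
    Q[s][a] += alpha * (reward + gamma * np.max(Q[s_next]) - Q[s][a])

print("accept (1) / reject (0) policy for a large slice, by current load:")
print(np.argmax(Q[:, 1, :], axis=1))
```

Precisely because the learned policy is only a table of values (or, in deep variants, a network), the slice-level interpretability discussed in the next subsection is what would let an operator audit why a given tenant’s request was rejected.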
####
IV-E2 How XAI can help to mitigate these attacks/issues
As shown above, clustering-based solutions are prominently used in many AI attack-mitigation algorithms for E2E slicing. Incidentally, explainable clustering is an up-and-coming area of research that can be used to improve the completeness, maintainability, resilience, sensitivity, consistency, accuracy, and robustness of ML-based clustering models. For example, in [[171](#bib.bib171)] the authors propose a general interpretability framework for any clustering/classification model, called the single feature introduction test (SFIT), to explain the clusters. Although these techniques were developed for different use cases, there is potential for them to be applied anywhere (e.g., B5G security) as long as the underlying algorithms are compatible. In addition, the authors of [[172](#bib.bib172)] propose using decision trees to interpret the clusters provided by k-means and k-median clustering algorithms. The ExKMC technique introduced in [[173](#bib.bib173)] is another approach to adding interpretability to k-means clustering; it follows a similar decision-tree-based approach, pairing small trees with a modified k-means algorithm. Since slicing is focused on delivering customized services to consumers, transparency is a growing concern in the community. Situations where machine learning is used for security in E2E slicing should be wrapped in a mantle of explainability to inculcate trustworthiness in the services consumers receive.
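A minimal sketch in the spirit of these tree-based explainers is shown below: synthetic data are clustered with k-means, and a shallow decision tree is then fitted on the cluster labels so that each cluster can be read as a small set of threshold rules. The data, number of clusters, and tree depth are assumptions for illustration.

```python
# Hedged sketch of explainable clustering: approximate k-means clusters with
# a shallow decision tree so that each cluster has a human-readable rule.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(loc, 0.5, size=(300, 3))
               for loc in ([0, 0, 0], [3, 0, 1], [0, 4, 2])])   # three toy groups

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)

print("agreement with k-means labels:", tree.score(X, labels))
print(export_text(tree, feature_names=["f0", "f1", "f2"]))
```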
####
IV-E3 Added cost of Using XAI
The primary goal of E2E slicing is to ensure that specified services meet their required performance criteria. The slices should adapt to traffic changes, detect potential security issues, and take countermeasures autonomously [[167](#bib.bib167)] in a real-time, trustworthy network environment. To generate more targeted explanations and enable accountability, telemetry data describing the present condition of the network’s resources and services must be communicated between the data plane and the control plane. Additional communication protocols must be used to abstract and communicate domain-specific information for explanations alongside interpretations of ML models. Data from service providers must also be infused with model outputs to provide bespoke explanations. The XAI method will also require some computational power in the control and orchestration layers for generating in-model explanations, which will add to the final costs.
####
IV-E4 Summary
While E2E slicing provides the intended network flexibility and adaptability, it adds new vulnerabilities with virtualization. AI/ML-based security measures can be expected in federated architectures’ inter-slice security and network resource harmonization. Multiple ML techniques are used in previous studies that beg for interpretability at the slice level. This requirement can also be extended toward holistic explanations at the amalgamated domain level.
Table III: AI techniques used in the context of security for B5G enablers
| B5G Enabler/Network Domain | AI use case in security | AI techniques used |
| --- | --- | --- |
| IoT/Devices | Access control techniques | SVM |
| | | Naïve Bayes |
| | Spoofing attack prevention with authentication | Q-learning |
| | | Dyna-Q |
| | | SVM |
| | | DNN |
| | Secure IoT offloading against Jamming attacks | Q-learning |
| | | DQN |
| RAN | Prevention of multi-stage jamming attacks in CRAN BBU pool | SVMs |
| | | MLP |
| | | KSVMs (Kernel trick) |
| Edge Network | AI4SAFE-IoT architecture to mitigate sinkhole, DoS, rank, and local repair attacks | AI4SAFE-IoT |
| Core and Backhaul | Increasing dependability of mmW small cell backhaul | Q-learning |
| | Adaptive call admission control | Q-learning |
| | Reallocation of network resources to slices | ANN |
| E2E slicing | In/out-of-band jamming and external polarization attack mitigation | ANN, OCSVM, DBSCAN |
| | Admission control design that would interact with RAN | RL (SMDP, Stochastic ANN) |
| | slice admission control | Q-learning |
| ZSM/Network Automation (AI attacks) | White-box attack mitigation | Adversarial training |
| | | Input validation |
| | | defensive distillation |
| | | Defense GANs |
| | | Concept drift |
| | Black-box attack mitigation | GANs |
| | Other use cases | LSTM |
| | | RNN |
| | | SVM |
| | | Q-learning |
###
IV-F Security of B5G Network Automation/ZSM
####
IV-F1 Possible Security Threats, Challenges, Issues
Zero-touch network and Service Management (ZSM) is where the orchestration of cutting-edge technologies like end-to-end network slicing, cross-domain service orchestration, and automation comes together to achieve full network automation. The ultimate automation goal in B5G is to create fully autonomous networks that can self-configure, self-monitor, self-heal, and self-optimize without human involvement. These characteristics require a novel horizontal and vertical end-to-end architecture suited for data-driven machine learning and AI algorithms. For self-managing AI functions, the ZSM framework depends on [SDN](#glo.acronym.SDN) and [NFV](#glo.acronym.NFV) technologies as well [[174](#bib.bib174)]. For example, ZSM plans to use DL to provide intelligent network management and operational capabilities such as traffic categorization, mobility prediction, traffic forecasting, resource allocation, and network security [[175](#bib.bib175)]. This introduces a new threat surface that needs to be addressed separately.
In [[3](#bib.bib3)], a range of attacks possible within the ZSM threat surface, affecting various network aspects, is discussed. The E2E service intelligence offered by the ZSM enables decision-making and forecasting capabilities. Consequently, an attacker may design inputs that cause the machine learning models in E2E service intelligence services to make incorrect choices or predictions, possibly resulting in performance degradation and financial loss. On the other hand, this can jeopardize SLA fulfillment and security assurances. Furthermore, API-based attacks such as parameter attacks, identity attacks, MITM, and DDoS attacks; intent-based interface threats like information exposure, undesirable configuration, and abnormal behavior; threats on closed-loop automation control systems such as deception attacks; attacks targeting AI/ML systems such as poisoning attacks and evasion attacks; and threats on programmable network technologies such as DoS, privilege escalation, malformed control message injection, eavesdropping, flooding, and introspection attacks are some of the attack vectors emphasized in the ZSM threat surface. The authors of [[3](#bib.bib3)] have proposed a range of solutions for these attacks, such as adversarial training, input validation, defensive distillation, defense Generative Adversarial Networks (GANs), and concept drift. They further elaborate on the efficacy of defense GANs against white-box and black-box attacks. This is one of the occasions where XAI could shine. Nevertheless, XAI can assist in numerous other areas of ML implementations across different sectors of network automation to make them more resilient and accountable, and it has high potential for addressing the research gap concerning black-box attacks.
In [[176](#bib.bib176)], the authors emphasize challenges such as the need for AI/ML security and how AI model interpretation will guarantee accountability, reliability, and transparency by improving the trustworthiness of AI-enabled systems. However, they also note the research gap in ML security for network and service management, which is limited to only a few contributions (i.e., [[177](#bib.bib177), [178](#bib.bib178)]).
####
IV-F2 How XAI can help to mitigate these attacks/issues
The use of AI/ML methods in ZSM opens the door to many new possible scenarios for XAI applications. The authors of [[176](#bib.bib176)] have shown that a variety of ML techniques such as RNNs (LSTMs), support vector data descriptions (an SVM-inspired technique), Q-learning, and Gaussian models would be necessary to enable full network automation. These algorithms should be backed by an expandable set of interpretability techniques in such applications, for example the Partial Dependence Plot (PDP), Individual Conditional Expectation (ICE), Accumulated Local Effects (ALE) plots, feature interaction, feature importance, global surrogates, local surrogates (LIME), and Shapley values (SHAP) [[52](#bib.bib52), [179](#bib.bib179)], to name a few. Generating adversarial samples, a fundamental step in identifying security vulnerabilities, is still not fully understood in the service and network management fields [[176](#bib.bib176)]. Incidentally, IDS development and implementation is a salient concern for full automation. The use of XAI in this area is still in its infancy, but it is showing signs of promise. For example, [[180](#bib.bib180)] gives an evaluation of perturbation-based post-hoc XAI tools in the intrusion detection field with network traffic data; all of these tools work quite well, with LIME and SHAP providing exceptional results. Furthermore, the work in [[181](#bib.bib181), [182](#bib.bib182)] proposes to use SHAP with their Multimodal DL-based Mobile TraffIc Classification system (MIMETIC) to evaluate input importance. This shows that XAI will play a key role in fostering the trustworthiness and transparency of AI applications in security for ZSM.
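As an assumption-laden sketch of how perturbation-based SHAP could be applied to an intrusion-detection classifier (in the spirit of [[180](#bib.bib180)]), the example below explains a toy “attack vs. benign” model with a kernel explainer; the feature names, data, and classifier are invented for illustration, and the shap package is assumed to be available.

```python
# Hedged sketch: KernelExplainer-based SHAP attributions for a toy
# intrusion-detection classifier. Features and data are synthetic assumptions.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(5)
feature_names = ["pkt_rate", "mean_pkt_size", "syn_ratio", "dst_entropy"]
X = rng.normal(size=(800, 4))
y = ((X[:, 0] + 2 * X[:, 2]) > 1).astype(int)      # toy "attack" label
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

background = X[:50]                                 # background set for the explainer
explainer = shap.KernelExplainer(lambda a: clf.predict_proba(a)[:, 1], background)
shap_vals = np.array(explainer.shap_values(X[:5], nsamples=200))

for name, val in zip(feature_names, np.abs(shap_vals).mean(axis=0)):
    print(f"{name}: mean |SHAP| = {val:.3f}")
```

The per-feature attribution magnitudes are the kind of evidence an operator could attach to an IDS alert to justify why a flow was flagged.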
####
IV-F3 Added cost of Using XAI
There are several management tasks bundled together in the ZSM [Management Domain (MD)](#glo.acronym.MD), such as the domain data collection services, domain analytics services, domain intelligence services, domain orchestration services, and domain control services [[176](#bib.bib176)]. Additional channels to communicate the explanations generated about domain intelligence services and data collection services must be looped into the domain analytics of each MD so that any changes required in domain control and orchestration are properly executed. This will require additional computation power. Furthermore, generated domain-specific explanations will have to be stored in each domain data service, while cross-domain explanations will be stored in standard data services, calling for additional storage and caching space. However, the existing ZSM system can be conveniently adapted to explanation-based analytics with minimal compromises.
####
IV-F4 Summary
ZSM, or network automation, can be simply identified as the future of telecommunication systems. In full automation, AI/ML is integral to the closed-loop management of a network. In closed-loop management, an undesirable configuration or an attack on the AI/ML-based systems can spiral malicious behavior into a cascade of abnormalities across the network domains. XAI is a viable candidate to uncover any underlying vulnerabilities in AI/ML systems and shed light on obscured attack data in black-box models.
###
IV-G Security of other enablers in B5G
####
IV-G1 [Intent Based Network (IBN)](#glo.acronym.IBN)
IBN is a network that operates autonomously according to the intent of a predetermined set of directives. In other words, an IBN is a closed-loop and self-operating system. In comparison to an imperative policy, an intent-based policy is a set of objectives that must be fulfilled throughout network operation to achieve collective performance goals. At the interface level, intent is a means of abstracting complexity. Given contextual awareness and appropriate data availability from multiple networks and intent functional blocks, AI/ML is the cornerstone of realizing intent-level inference. Service orchestration optimization, resource monitoring, context- and behavior-based intent-to-service mapping, and extracting service primitives from intents are some of the operations where ML might be required [[183](#bib.bib183), [184](#bib.bib184)]. Such functional modules would be open targets for malicious agents looking to penetrate the security of IBNs. For example, an anomaly in intent recognition can be quickly identified with the help of XAI techniques and rectified.
Table IV: The requirement analysis of explainable security for typical 6G use cases
| B5G Application | Why | Who/Whom | What | Where | When | How |
| --- | --- | --- | --- | --- | --- | --- |
| Smart cities | Every stakeholder requires trust, resilience and confidence in the systems they use. | Citizens, legal institutes, planners, media, politicians, etc. | System threat models and usage directions (traffic, parking, waste), System feature specifications, etc. | As a part of security/privacy policy, a separate service, at access points, etc. | during construction, when commissioned, during faults, under intrusion, repairs and maintenance, runtime, etc. | natural/graphical languages, Gamification/interactive explanations |
| Smart health | Critical applications require high levels of confidence, transparency, accountability, and resilience. | Patients, professionals, Pharmaceutical/device manufacturing companies, etc. | Patient/physician requirements, vulnerabilities of medical systems, Diagnostic results, Device manuals, etc. | Embedded in the system, user agreement, as a detached trusted expert system, etc. | Obtaining approval, runtime, modifications, Planning and designing, etc. | Formal proofs and plans, Addressing multi-disciplinary requirements, natural/graphical languages, etc. |
| Industry 4.0/5.0 | Stakeholders need confidence, accountability, usability, and transparency in their assets and operations. | Manufacturing companies, Standardizing bodies, Insurance companies, IT providers, etc. | Operation direction for machines, user requirements, Collaboration guidelines (human and machines), etc. | Explanation as a service, usage protocols, privacy/security policy, etc. | During standardization, assembly, testing, modification, during attacks and defence, etc. | Gamification/interactive explanations, explanation by example, Formal/natural/graphical languages, etc. |
| Smart Grid 2.0 | Identify energy theft and improve system resilience on multiple operations; security, stability control and fault diagnostics etc. | Meter readers, general consumers, electrical/chemical engineers, government officials and politicians etc. | Energy usage data, demand and disturbance prediction decisions, security vulnerabilities, voltage/frequency control decisions etc. | Privacy and security policy, appended to periodic statements (monthly, daily, etc.), explanation as a service etc. | After an attack, responding an attack/theft, during system modifications, During system maintenance, etc. | As formal proofs or plans, graphical presentation as a part of the main grid control panel, stability reports, etc. |
| [XR](#glo.acronym.XR) | Ensure security of AI used in collaborative MR, enterprise stakeholders require high level of trust and resilience etc. | Businesses with virtual work environments, virtual event organizers, gamers, general users etc. | Model training details, dynamic security vulnerabilities, AI enhanced social interactions in metaverse etc. | explanation as a service, In the privacy and security policy etc. | At system deployment, during/after attacks, before MR conferences, etc. | Gamification into XR setup and usage, Graphical representation along with formal plans and natural language |
V Use-cases in XAI for B5G Security
------------------------------------
The above-mentioned enabling technologies will spur the development of a variety of new applications shaping human society in the 5G and beyond era, many of which were previously held back by the lack of accountability in AI/ML-based systems. Some of the emerging use cases that will rely on future B5G network capabilities are discussed in this section, with emphasis on the impact made by the advent of XAI in security.
###
V-A Smart Cities
####
V-A1 Introduction
With fast-growing urbanization and resource depletion [[185](#bib.bib185), [186](#bib.bib186)], there comes the need to handle their drastic impacts on cities. As a result, smart cities were proposed to manage and optimize resource and energy usage. To efficiently achieve the requirements of smart cities, suitable communication technologies that come with the [IoT](#glo.acronym.IoT) have become an integral constituent. However, the immense amount of data collected by [IoT](#glo.acronym.IoT) devices needs to be communicated appropriately, analyzed and processed precisely, and transformed into the envisaged services necessary to improve the standard of living in cities. Thus, AI and big data analytics come into play. More precisely, AI-based techniques will be found in many sectors of smart cities, such as intelligent transportation, cyber-security, electric and water systems, waste management, public safety, and UAV-assisted next-generation communication (5G and B5G) [[187](#bib.bib187), [188](#bib.bib188), [39](#bib.bib39)]. With the usage of AI in integral parts of everyday life such as energy and security [[127](#bib.bib127), [189](#bib.bib189), [190](#bib.bib190)], the reliability, accountability, and resilience of systems using such AI, ML, and DRL algorithms become paramount. On the other hand, the number of interconnected devices that become part of a smart city can only increase with time. This inter-connectivity and the sheer number of devices have given rise to cybersecurity concerns.
Security in smart cities has been a growing field of research in the recent past, and numerous AI/ML-driven approaches contribute considerably to it. For example, the authors of [[191](#bib.bib191)] emphasize the ability of ML to provide solutions for security threats such as DoS attacks (using MLP-based protocols), eavesdropping (Q-learning, Dyna-Q, and Bayesian techniques), and spoofing (Dyna-Q, Q-learning, SVM, DNN, incremental aggregated gradient (IAG), and distributed Frank-Wolfe (dFW) techniques). They go on to show solutions to other challenges, such as privacy leakage and digital fingerprinting (SVMs, ANNs), where ML has become a viable candidate in the [IoT](#glo.acronym.IoT) and smart city domains.
####
V-A2 Impact of XAI on the security of smart cities

Figure 10: Smart cities in the B5G era will involve collecting massive amounts of personal data, which will require further accountability from the service providers’ end. From the collection of data to the communication of decisions taken by AI models, end users will require extra reassurance about system security as the scope of services widens.
With XAI, it is possible to overcome implementation challenges while also explaining decision-making processes and supplementary information. Fig. [10](#S5.F10 "Figure 10 ‣ V-A2 Impact of XAI on the security of smart cities ‣ V-A Smart Cities ‣ V Use-cases in XAI for B5G Security ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions") gives a holistic view of the effect of XAI in smart cities. This approach will lead to more understandable machine learning algorithms in smart city applications. However, when developing XAI-based machine learning algorithms, it is necessary to consider various levels of explanation, ranging from ”comprehensive explanation” in the case of complex black-box ML algorithms to ”no explanation” in the case of transparent ML algorithms. In addition, explanations must be tailored to the knowledge and competence of the stakeholders they address, from the farmers of smart agriculture programs to the technicians and computer scientists applying machine learning algorithms in engineering practices that integrate data analysis into smart (traffic, water, energy, etc.) monitoring systems. As discussed earlier, the preponderance of AI in critical applications such as smart grids, security, and intelligent traffic monitoring systems of smart cities calls for transparency in those black-box systems through relevant explanations in some comprehensible form.
One of the most important factors that can affect other sectors in a smart city is the climate. Heavy rains (floods) or heavy droughts can equally affect a city’s water management and socio-economic, ecological, and environmental aspects in more than one way. Nevertheless, with intelligent monitoring systems, these problems can be mitigated to a great extent. Forecasting and being ready for any climate change is the first step. Traditional rule-based climate and weather forecasting systems were the preferred techniques in heritage cities. However, data-driven AI and ML model-based forecasting systems are emerging with promising results (even exceeding physics-based models [[192](#bib.bib192)]). Such systems employ powerful but complicated black-box machine learning models such as neural networks. The main impediment to using such systems in a real-world setting is the absence of accountability frameworks, owing to their lack of transparency, explainability, and trust. So it is vital to have an explainable wrapping to back the outputs of these models in different situations. For example, during a security breach or a malfunction of a meteorological system (e.g., cyclone prediction), all the components should be transparent enough to be evaluated by operators and state inspectors to make amends and find the responsible parties. The general public and the government/private institutes affected by such situations have the right to know the reasons behind the damages incurred upon them. Therefore, an explainable and interpretable layer for AI-based models is of utmost importance. In [[193](#bib.bib193)], the authors used SHAP-based interpretations for CNN- and LSTM-based spatial drought prediction systems. Such work is a step toward seeing sophisticated and precise AI/ML-based techniques used in real-world scenarios and ensuring their sustainability.
####
V-A3 Related Work
Numerous works on XAI usage can be found across various sectors in the realization of smart cities. The authors of [[194](#bib.bib194)] emphasize the importance of XAI in the transition from heritage cities to smart cities. Major entities like governing bodies, investors, and researchers must make deliberate decisions on policy standardization and applications for intelligent urban development. They further elaborate on the need to have XAI in all layers, from infrastructure to end user (i.e., the infrastructure layer, application layer, cloud data, service layer, and end user). In [[195](#bib.bib195)], the authors identify the necessity of using XAI techniques in smart monitoring systems, which autonomously collect, analyze, and communicate structural data from wireless sensor networks, and they note the usefulness of LIME and LRP (layer-wise relevance propagation). With the intent of evaluating data fusion techniques in smart city applications, the authors of [[196](#bib.bib196)] elaborate on the importance of having explainable outcomes from the data fusion algorithms used alongside AI and ML-based automation systems in smart city applications. In [[197](#bib.bib197)], the authors surveyed the performance of 10 ML algorithms for smart medical waste management in Morocco and showed that ANN and SVM are the most accurate in an [IoT](#glo.acronym.IoT) environment. They also emphasized ways to improve current AI-based waste management systems by including explanations in the architecture of each layer.
####
V-A4 Summary
Optimizing the usage of depleting energy and resource management plays a vital role in smart cities. AI-based techniques will be found in many sectors of smart cities like intelligent transportation, cyber-security, smart grids, smart agriculture, and so on, where sensitive data will be processed to improve the standard of living. These systems should be held accountable for any abnormal behaviors while continuously updating for security vulnerabilities. With their capability to disentangle ambiguity in decision-making processes, XAI techniques could expose the truth from pedestrians to smart city service providers with relevant information.
###
V-B Healthcare
####
V-B1 Introduction
In recent years, healthcare systems have started to leverage information and communication technology in various aspects of the sector. Cloud computing, the Internet of Medical/Health Things (IoMT/IoHT), edge computing, and AI have led healthcare to a whole new paradigm called smart health [[198](#bib.bib198)]. Smart health can be regarded as an intelligent and context-aware development of mobile-health services, expected to reduce future hospitalizations and provide remote healthcare services incorporating the latest wireless technologies fused with other enablers such as AI and ML. Following that, there is a growing use of ML in integral parts of healthcare such as medical image classification [[199](#bib.bib199), [200](#bib.bib200), [201](#bib.bib201)], diagnostic science [[202](#bib.bib202), [203](#bib.bib203), [204](#bib.bib204), [205](#bib.bib205), [206](#bib.bib206)], tumour analysis [[207](#bib.bib207), [208](#bib.bib208), [209](#bib.bib209)], personalized healthcare [[210](#bib.bib210), [211](#bib.bib211), [212](#bib.bib212)], etc.
With the significant increase in the population above the age of 60 [[213](#bib.bib213)], the rise of health-related emergencies and illnesses is inevitable. That is why smart health must utilize cutting-edge technologies like AI and big data analysis in its solutions, alongside advanced biotechnology and micro-electronics. Different stakeholders will come across intelligent technologies at different levels of the process. At the patient level, smart monitoring wearables, mobile devices, sensors, and actuators will intelligently gather data for diagnosis, virtual/remote support, emergency care, and monitoring in the household. From the doctors’ and healthcare workers’ perspectives, sophisticated algorithms (ML/AI, etc.) should accommodate the analysis of large amounts of data in making decisions related to diagnostics and clinical guidance. In addition, health administrators and research centers can leverage data-driven techniques in clinical management, prognosis, health decision-making, medical studies, and pandemic control [[214](#bib.bib214), [215](#bib.bib215), [216](#bib.bib216)].
However, the aptitude for processing large amounts of data with ML techniques comes at the price of black-box interpretability concerns. Regardless of how well these models perform in terms of accuracy, lower transparency leads to lower trust, robustness, and maintainability. For example, a false positive in diagnostics can lead to life-changing decisions for a patient. Therefore, current IoMT architectures demand more robust and resilient techniques such as XAI to wrap AI algorithms with fault-traceable explainability layers, so that imperfections can be recognized as early as possible and the goal of accountability can be pursued.
####
V-B2 Impact of XAI on the security of healthcare

Figure 11: Smart healthcare services are expected to expand immensely under B5G with AI-based systems. Breaches in the security of these systems can cause widespread breakdowns, endangering the lives of many who depend on those services. Thus, explanations of black-box AI systems can keep the respective parties on alert for any anomalies due to attacks or vulnerabilities in the systems.
In smart healthcare, the use of AI has become essential in processing data obtained from personalized and real-time health monitoring services. As shown in Fig. 11, user-level information collected [[217](#bib.bib217)] from wearables, smartphones, and healthcare applications is processed with AI during health screenings, treatment plan selection, diagnostics, and emergency responses. However, trust in these AI systems suffers from their black-box nature and the lack of fail-safe mechanisms in case of security breaches. With proper transparency, XAI can also be used to identify adversarial samples before a breach. In [[218](#bib.bib218)], the authors use SHAP DeepExplainer to create signatures for input images, which are then identified as either adversarial or not. Those signatures buffer the original AI model, preventing misclassifications due to adversarial inputs. Such an explanation-based security approach can provide a layer of trust in a variety of image-based analysis systems using AI and ML [[199](#bib.bib199), [200](#bib.bib200), [204](#bib.bib204)] in smart healthcare, such as endoscopic image analysis, breast cancer analysis, skin lesion analysis, MRI brain tumor analysis, lung image analysis, and much more. With XAI, these systems would be accountable for their decisions without losing convenience or efficiency.
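As an illustration of this signature idea, the minimal sketch below (our own simplification, not the implementation of [[218](#bib.bib218)]; the model, data, and threshold are placeholders) uses SHAP’s DeepExplainer to turn per-pixel attributions into a vector that is compared against the attribution statistics of known-clean images:

```python
import numpy as np
import shap
from tensorflow import keras

# Placeholder CNN for small single-channel medical images (illustrative only, untrained).
model = keras.Sequential([
    keras.layers.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(8, 3, activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dense(2, activation="softmax"),
])

X_clean = np.random.rand(200, 28, 28, 1).astype("float32")  # stand-in for trusted, clean images
background = X_clean[:50]                                    # background samples for DeepExplainer

explainer = shap.DeepExplainer(model, background)

def signature(x):
    """Flatten the SHAP attributions of one image into a 1-D 'signature' vector."""
    sv = explainer.shap_values(x[np.newaxis, ...])
    parts = sv if isinstance(sv, list) else [sv]
    return np.concatenate([np.ravel(p) for p in parts])

# Attribution statistics of known-clean samples define a reference distribution.
clean_sigs = np.stack([signature(x) for x in X_clean[:20]])
mean_sig = clean_sigs.mean(axis=0)
std_sig = clean_sigs.std(axis=0) + 1e-8

def looks_adversarial(x, z_threshold=6.0):
    """Flag inputs whose attribution signature deviates strongly from the clean reference."""
    z = np.abs((signature(x) - mean_sig) / std_sig)
    return z.mean() > z_threshold
```

In practice, the reference statistics would be computed on trusted training data, and the deviation test would be tuned to the acceptable false-positive rate of the deployment.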
One of the more outstanding examples where XAI makes a difference is diagnostics. The authors of [[219](#bib.bib219)] point out that the impact of XAI in healthcare is threefold: increased transparency, result tracking, and model improvement. From the data collected in the systems, if a user is categorized by AI/ML models as having a specific condition, say a high blood sugar level, the clinicians are sent a report mentioning the features used in reaching that decision, such as heart rate, body temperature, and calorie intake. When XAI is used, in addition to the list of features, it is also possible to show the attribute most responsible for the outcome, e.g., calorie consumption. This makes it convenient for medical practitioners to quickly examine the characteristics and provide recommendations about suitable medications or activities accordingly. It also adds a certain level of resilience, since the reason for the prediction can be traced if the outcome deviates anomalously.
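A hedged sketch of such a per-patient feature report (our own illustration, not taken from [[219](#bib.bib219)]; the features, synthetic data, and class names are hypothetical) could use a local explainer such as LIME on a tabular risk classifier:

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["heart_rate", "body_temperature", "calorie_intake"]

# Synthetic stand-in for monitoring data; risk here is driven mainly by calorie intake.
X = rng.normal(size=(500, 3))
y = (0.2 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(scale=0.3, size=500) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=features, class_names=["low risk", "high risk"], mode="classification"
)

# Local explanation for one patient: which attributes pushed the prediction and by how much.
patient = X[0]
explanation = explainer.explain_instance(patient, clf.predict_proba, num_features=3)
for feature_rule, weight in explanation.as_list():
    print(f"{feature_rule}: {weight:+.3f}")
```

The report handed to a clinician would then highlight the top-ranked attribute (here calorie intake, by construction of the synthetic labels) together with its direction of influence.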
####
V-B3 Related Work
A considerable amount of work has been carried out within the domain of smart healthcare, but little of it shows where XAI has been used in the security of smart health. In [[220](#bib.bib220)], the authors emphasize the possible security threats in [IoT](#glo.acronym.IoT)-based smart healthcare systems, such as DoS attacks, fingerprint- and timing-based snooping, router attacks, selective forwarding attacks, sensor attacks, and replay attacks. In [[221](#bib.bib221)], the author presents a secure framework for hospital environments using [IoT](#glo.acronym.IoT) and AI. This system is expected to overcome frustrating queues, overwhelming paperwork, work overload for doctors, failure to identify critical illnesses in time, etc. The authors also highlight the security attacks that can potentially affect the hospital [IoT](#glo.acronym.IoT) environment, such as interruption, interception, modification, fabrication, replay, protocol compromise, stack attacks, etc. In such an environment, the usage of XAI can be of utmost importance to enable trustworthy AI-based defense mechanisms. Security aspects of IoT using AI are discussed in [[222](#bib.bib222)], which reports on AI/ML techniques used for security in the healthcare sector. Although research on the use of AI in smart health has become prominent in the last several years, the security aspects of those AI techniques need more attention due to the dynamic nature of attacks and the quick evolution of attacking methods. XAI usage in this area is still limited to applications other than security, such as recent COVID-19 epidemic control [[223](#bib.bib223)]. Currently, there is a significant research deficit in XAI-based security for healthcare.
Because of the critical nature of the application, the use of XAI in healthcare security is essential, and this gap must be addressed as quickly as feasible.
####
V-B4 Summary
AI/ML-based systems are proposed in many previous studies in the smart healthcare field, but only a small proportion of those systems see the light of day in actual applications. The lack of causality, and thereby accountability, causes medical practitioners, insurance companies, etc., to lose confidence in intelligent healthcare systems. By making AI systems and their security more transparent and convincing regarding their true capabilities through XAI, the perception of untrustworthiness among stakeholders can be mitigated to a certain extent.
###
V-C Industry 4.0/5.0
####
V-C1 Introduction
Vehicles, clothes, buildings, and weapons have all been created and constructed by humans throughout several hundred years. The introduction of Industry 1.0 in 1784 marked the beginning of substantial change in industrial output, with mechanical energy becoming involved. It was followed by industry 2.0 (1870) and industry 3.0 (1969), which marked the introduction of electrical energy and electronic/IT systems [[224](#bib.bib224)]. Just over 40 years later (2011), industry 4.0 was introduced with the main objectives of increasing operational efficiency and productivity, as well as increasing automation [[225](#bib.bib225)], encompassing a variety of enablers, such as [AI](#glo.acronym.AI) [[226](#bib.bib226)], Internet of Things ([IoT](#glo.acronym.IoT)), cloud computing, [CPS](#glo.acronym.CPS), and cognitive computing, by completely transforming the manufacturing processes [[224](#bib.bib224)]. Characteristics such as digitization, optimization, and customization of production; automation and adaptation; human-machine interaction (HMI); value-added services and businesses; and automatic data exchange and communication [[227](#bib.bib227), [228](#bib.bib228), [225](#bib.bib225)], which are closely associated with internet technologies and sophisticated algorithms (ML, AI, etc.), are introduced. To summarize, the basic premise underlying industry 4.0 is to make manufacturing ”smart” and automated to facilitate mass productivity with minimum human intervention. In contrast, its successor, industry 5.0, is envisaged to capitalize on the principle of collaboration between human brainpower and creativity and intelligent systems in the production workflows, enabling mass personalization. That is to say, industry 5.0 is expected to harness the synergy between humans and autonomous machines [[229](#bib.bib229), [230](#bib.bib230)], in addition to the full machine-based automation that predominates in industry 4.0.
Since the highest priority in industry 4.0 is automation, the security and accountability of deployed AI and ML algorithms are as important as their accuracy and performance. In industrial-level applications, the black-box nature of ML models gives rise to questions about trust and maintainability when they are deployed. A defect in intelligent manufacturing equipment that goes unnoticed due to the opaque nature of the algorithm might lead to financial losses and wastage of resources in substantial proportions. The effect could be amplified when AI is used in the context of security in I[IoT](#glo.acronym.IoT) (Industrial-IoT) or any other infrastructure underlying industry 4.0. In light of industrial applications of AI/ML, a technique like explainable AI will be of utmost importance in bringing bespoke solutions to their security issues. In industry 5.0, XAI will be even more important, as it plans to increase human involvement with autonomous systems. To create an amicable understanding between humans and intelligent machines, the internal operations of the AI and ML models must be transparent to the stakeholders, especially in critical contexts such as security and human safety. A potential solution can be achieved using XAI as a medium to interface between the two entities.
####
V-C2 Impact of XAI on the security of Industry 4.0/5.0

Figure 12: Industry 5.0’s core emphasis on human-centric, tailored manufacturing processes might have a substantial impact on a wide range of industries. Here, in case of a security breach or other anomalous behavior, XAI may assist in determining the root cause so that preventative measures can be taken.
When it comes to industry 5.0 applications, the usage of XAI may make a significant contribution in a variety of fields due to its primary focus on human-centric, personalized manufacturing processes, as shown in Fig. [12](#S5.F12 "Figure 12 ‣ V-C2 Impact of XAI on the security of Industry 4.0/5.0 ‣ V-C Industry 4.0/5.0 ‣ V Use-cases in XAI for B5G Security ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions"). The deeper involvement of people in industrial environments alongside intelligent machines makes the safety and well-being of those users a crucial aspect of the whole process. Sophisticated intelligent machines used in this environment may employ AI/ML techniques, which come with their own issues and security vulnerabilities. A malfunction or security breach in such a system can cause property damage and endanger the people working alongside those machines. For example, the trustworthiness of a robot working with humans in a steel-based utensil manufacturing company is a high priority due to the high temperatures and extreme forces used in heavy machinery. A security attack on the robot could potentially affect the humans surrounding it. By using XAI, trust in the AI models used in intelligent machines like robots can be enhanced. Mapping the internal mechanics and providing clear explanations of the cause and effect of those algorithms could help users understand the operations and take proper precautions. More importantly, XAI helps trace back the cause of a security attack or unexpected behavior, so that measures preventing future malfunctions can be taken. This reduces financial losses for manufacturing organizations and enables smoother production, removing failures that would otherwise impede the goal of mass production in the first place.
####
V-C3 Related Work
The use of XAI in industrial settings has started to increase in the past couple of years [[231](#bib.bib231), [232](#bib.bib232), [233](#bib.bib233)]. However, XAI security in the industry 4.0/5.0 field is still in its infancy, with only a few related studies coming to light. Nevertheless, all these works demonstrate promising results in facilitating AI/ML models in real-world applications. Zolanvari et al. [[234](#bib.bib234)] propose a model-agnostic and high-performing XAI model named TRUST (Transparency Relying Upon Statistical Theory) and present results from testing it in an [Industrial Internet of Things (IIoT)](#glo.acronym.IIOT) environment with different cybersecurity datasets (WUSTL-IIoT, NSL-KDD, and UNSW). They rank the most crucial input features for each class based on mutual information after converting them to latent variables using factor analysis. The likelihood of any new sample falling into each class is then calculated using multi-modal Gaussian distributions. The authors argue that this system beats LIME in terms of speed, performance, and explainability.
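As a rough, illustrative sketch of this kind of statistically grounded pipeline (our own simplification over synthetic data, not the TRUST implementation), the steps of factor analysis, mutual-information ranking, and class-conditional Gaussian likelihoods could be combined as follows:

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.decomposition import FactorAnalysis
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
# Synthetic stand-in for IIoT network-flow features and benign/attack labels.
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# 1) Convert raw features into latent variables with factor analysis.
fa = FactorAnalysis(n_components=5, random_state=0)
Z = fa.fit_transform(X)

# 2) Rank latent variables by mutual information with the class label.
mi = mutual_info_classif(Z, y, random_state=0)
top = np.argsort(mi)[::-1][:3]          # keep the most informative latent variables

# 3) Fit one Gaussian per class over the selected latent variables.
class_models = {}
for c in (0, 1):
    Zc = Z[y == c][:, top]
    class_models[c] = multivariate_normal(mean=Zc.mean(axis=0), cov=np.cov(Zc, rowvar=False))

def classify_with_explanation(x_row):
    """Class likelihoods over the informative latent variables double as a simple,
    statistically grounded explanation of the decision."""
    z = fa.transform(x_row.reshape(1, -1))[0, top]
    scores = {c: m.pdf(z) for c, m in class_models.items()}
    return max(scores, key=scores.get), scores

label, likelihoods = classify_with_explanation(X[0])
```

Because each decision reduces to comparing likelihoods over a handful of ranked latent variables, the reasoning behind a classification can be reported directly, which is the property such statistical explainers aim for.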
[Federated Learning (FL)](#glo.acronym.FL) has become a prominent research topic in [IIoT](#glo.acronym.IIOT) systems due to its perks, such as privacy preservation and resource and data management at the edge devices [[226](#bib.bib226)]. With FL’s rapid rise in popularity, it is natural that it would draw the attention of cybercriminals. One such devastating attack on FL systems is the backdoor attack [[235](#bib.bib235), [236](#bib.bib236)]. In order to identify backdoor inputs, Hou et al. [[237](#bib.bib237)] offer a filter system based on a mix of classifiers and XAI models. Here, the models are trained on the server side and then sent to each [IIoT](#glo.acronym.IIOT) application to identify backdoor input data, which is then cleaned using an appropriate method. As a result of combining this technique with XAI, the authors claim to have obtained very high rates of backdoor recognition.
####
V-C4 Summary
The next chapter of industrialization is driving towards highly personalized, automated mass production built on [IIoT](#glo.acronym.IIOT) and AI, enabling granular customization that weaves together human creativity and machinery. Cloud-enabled large-scale data storage, digital twins, and the augmented presence of employees will all depend on the high-speed, ultra-reliable connectivity provided by B5G networks. The inclusion of XAI in security can be considered the adhesive that helps service providers inculcate confidence in their products and sustain their customer base.
###
V-D Smart Grid 2.0
####
V-D1 Introduction
A smart grid is a self-healing system that unifies various power generation options and enables consumers to manage their energy consumption while reducing costs. It is achieved through the integration of ICT infrastructure with circuit topology where various distributed subsystems and complementary components are intelligently controlled through a distributed command and control system [[238](#bib.bib238)]. Appliances in houses will be able to connect with the smart meters to guarantee effective use of infrastructure, demand response, and energy management in the future [[239](#bib.bib239)].
Smart grid 1.0 was introduced as the initial step of meter installation and integration. It added one-way automated meter reading (AMR), followed by two-way advanced metering infrastructure (AMI), which opened avenues toward smart grid 2.0. Smart grid 2.0 is intended to provide additional functions that use meter data, such as line failure analysis, load management, revenue prediction, etc.
The authors of [[240](#bib.bib240)] show that realizing smart grids brings about the challenge of optimally controlling various facets such as ecology, glassy dynamics, information theory, cloud microphysics, and human cognition. Optimizing all those aspects of a complex system is an inherently hard problem. In the recent past, smart grids have been increasingly adopted and expanded worldwide. As their scale has grown, interconnections, renewable energy integration, the widespread use of DC power transmission technologies, and the deregulation of electricity markets have multiplied and evolved. The smart grid’s capacity to maintain a stable state has therefore become a more complex problem. Traditional stability analysis and control approaches have become time-consuming, inefficient, and expensive because of these new developments. However, the authors also emphasize the possibility of overcoming this issue by embedding intelligent systems in CPSs. The interoperability of AI/ML systems with the underlying infrastructure of smart grids becomes crucial in the context of CPS security. Security compromises of the CPSs used in smart grids can potentially cause devastating results in many aspects of life, given how deeply energy is rooted in people’s daily operations.
In [[241](#bib.bib241)], the authors elaborate that a smart grid could be open to various DoS attacks (jamming in substations, spoofing attacks, traffic/buffer flooding) in different layers of the system. They further emphasize attacks that could potentially compromise the integrity and confidentiality of a smart grid (e.g., false-data injection). With the advent of B5G services in energy distribution systems, we can expect security concerns similar to that of numerous other cases discussed earlier in this paper. However, the consequences of a malicious intrusion could be far more severe in smart grids than in a typical application. These problems could range from privacy leakages of users’ meter data to cascading blackouts or catastrophic infrastructure failures [[242](#bib.bib242)].
####
V-D2 Impact of XAI on the security of smart grid 2.0

Figure 13: Stability management and maintenance are two of a smart grid’s most critical responsibilities. A growing number of these control systems now make use of AI. Security evaluations of smart grids may be correctly done using more interpretable methodologies that give holistic explanations about the AI systems. These findings may be put to good use in an emergency.
As shown in Fig. [13](#S5.F13 "Figure 13 ‣ V-D2 Impact of XAI on the security of smart grid 2.0 ‣ V-D Smart Grid 2.0 ‣ V Use-cases in XAI for B5G Security ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions"), XAI could provide useful insight into the AI-based control functions in the smart grids of the future. In [[243](#bib.bib243)], the authors bring to light the potential to use AI in some of the critical operations in smart grids. Most of these applications are critical to the performance of the whole system, e.g., static/dynamic security assessments, stability control/assessment, and fault diagnosis. It is worth noting that these functions hold equally paramount importance to the grid and to the livelihoods of the thousands that depend on the energy distribution system. The most important function of a smart grid is to control and maintain stability. During operation, the system goes through various states of power demand and other disturbances, which continuously drag the system away from its normal state. Corresponding control systems such as generation control, damping control, voltage control, frequency control, etc., need to be in place to handle each operating state. Lately, AI has been increasingly used in such control systems. For example, smart generation control systems have been studied using RL (DQN [[244](#bib.bib244)] and Q-learning [[245](#bib.bib245)]). Among many other previous studies, frequency control systems have been implemented using LSTMs [[246](#bib.bib246)], RL, and stacked denoising AEs [[247](#bib.bib247)]. Although these studies have shown promising results in terms of accuracy, limitations exist in practice: the interpretability of AI models, robustness to adversarial attacks, robustness to noise/data loss/time delay, and imbalanced data sets are a few. Plausible solutions to these issues can be drawn from XAI. In [[248](#bib.bib248)], the authors show that with better interpretable methods such as DTs, security assessments of smart grids can be implemented accurately and transparently. Also, the authors of [[249](#bib.bib249)] show that using a local linear interpreter on top of a deep belief network enables users to identify crucial factors that contribute to system instability. Furthermore, these outcomes can be used in emergency control scenarios.
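To give a concrete, deliberately simplified flavor of a transparent security assessment in the spirit of [[248](#bib.bib248)] (not the authors' implementation; all feature names, ranges, and the labeling rule below are placeholders), a shallow decision tree can be trained on synthetic operating-point features and its rules printed, so the thresholds driving a secure/insecure verdict can be inspected directly:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
features = ["line_loading_pct", "bus_voltage_pu", "frequency_hz", "reserve_margin_pct"]
# Synthetic stand-in for grid operating points.
X = np.column_stack([
    rng.uniform(40, 110, 2000),     # line loading (%)
    rng.uniform(0.9, 1.1, 2000),    # bus voltage (p.u.)
    rng.normal(50, 0.2, 2000),      # frequency (Hz)
    rng.uniform(0, 30, 2000),       # spinning reserve margin (%)
])
# Toy "insecure" rule used only to generate labels for this sketch.
y = ((X[:, 0] > 95) | (X[:, 1] < 0.94) | (X[:, 3] < 5)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The learned thresholds are directly readable, which is the appeal of DTs
# for transparent security assessment.
print(export_text(tree, feature_names=features))
```

An operator or auditor can check whether the extracted thresholds match engineering expectations, which is much harder to do with an opaque deep model.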
####
V-D3 Related Work
XAI in smart grids is an emerging field of research now. Thus the field of security in smart grids using XAI is a research gap that needs to be addressed in the future. However, the following are some of the few XAI applications in smart grids.
In [[250](#bib.bib250)], XAI tools (i.e., LIME, SHAP, and ELI5) are proposed for anticipating the amount of solar photovoltaic energy that will be generated in future smart grids. The authors use a random forest machine learning model trained on an open-source data set to predict solar power output, and then apply the said XAI tools to comprehend the reasons for the predictions. Overall, LIME, SHAP, and ELI5 are shown to enhance the interpretation of model outcomes by providing precise information that can be easily understood. The authors also point out how aspects like computational cost, local/global explanation, feature weights, etc., vary amongst XAI tools/packages in their particular use case. Finally, the study concludes by discussing the usefulness of XAI-based photovoltaic forecasting for building next-generation control centers with visualization and business analytics tools that support new technologies like AI and XR. Here the authors stress the importance of using XAI to enable users who are less tech-savvy to understand the workings of such complex technologies.
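A minimal sketch in this spirit (not the authors' code; the weather features, synthetic data, and model choice are assumptions) trains a random forest regressor on PV-like data and uses SHAP values to show which inputs drove one particular forecast:

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
features = ["irradiance", "ambient_temp", "cloud_cover", "hour_of_day"]
X = pd.DataFrame({
    "irradiance": rng.uniform(0, 1000, 1000),
    "ambient_temp": rng.uniform(-5, 35, 1000),
    "cloud_cover": rng.uniform(0, 1, 1000),
    "hour_of_day": rng.integers(0, 24, 1000),
})
# Synthetic PV output dominated by irradiance and cloud cover.
y = 0.004 * X["irradiance"] * (1 - 0.7 * X["cloud_cover"]) + rng.normal(scale=0.1, size=1000)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
sample = X.iloc[[0]]
contributions = explainer.shap_values(sample)[0]

# Per-feature contributions to this single forecast, largest first.
for name, value in sorted(zip(features, contributions), key=lambda t: abs(t[1]), reverse=True):
    print(f"{name:>14s}: {value:+.3f}")
```

Presenting such per-forecast contributions is the kind of output a control-center dashboard could surface to non-expert users.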
The study [[251](#bib.bib251)] elaborates on the interpretability of the outcomes of DRL methods used in the emergency control of power systems. The authors propose Deep-SHAP to interpret the DRL model, which calculates the relevance of input features via a backpropagation mechanism. This method is expected to help operators understand the outputs generated by the models and make amendments during troubleshooting. The XAI models proved to have a superior capacity for comprehending model decisions compared to a human operator, allowing errors in the data to be detected while improving overall model performance.
####
V-D4 Summary
Next-generation smart grids are expected to rely heavily on IoT-enabled intelligent CPSs to provide their core services to general users. Real-time system stabilization, monitoring and surveillance, demand response, maintaining distributed resources, and responding to natural disasters are a few operations that should run reliably and remain resilient to external influence. AI/ML used in these critical aspects should be readily available for scrutiny. The availability of explanations makes handling black-box AI/ML models accountable and comprehensible, helping to make the systems more robust against malicious agents.
###
V-E Extended Reality (XR)
####
V-E1 Introduction
Immersive technologies that are used today and the futuristic technologies that are up and coming are collectively known as [XR](#glo.acronym.XR). Here the X stands as a variable for the letters of all the other subsets of computer-altered realities in the spectrum of the reality-virtuality continuum [[252](#bib.bib252)]. These include [Augmented Reality (AR)](#glo.acronym.AR), [Virtual Reality (VR)](#glo.acronym.VR), and [Mixed Reality (MR)](#glo.acronym.MR). New gadgets (wearables) and computer-generated graphics have made it possible to superimpose digital information over the real environment or incorporate real-world elements into virtual settings. These hybrids of the digital and physical worlds are increasingly being adopted by various businesses such as medicine [[253](#bib.bib253)], manufacturing [[254](#bib.bib254)], entertainment (games, cinema, etc.), tourism [[255](#bib.bib255)], marketing [[256](#bib.bib256)], construction [[257](#bib.bib257)], and the list goes on. It is fair to say that the applications are ubiquitous.
An overlay of computer-generated material over the actual world, which may interact with the environment in real-time, is defined as AR. The actual environment and computer-generated imagery are seamlessly integrated with AR. However, in most cases, the occlusion between computer-generated content and the real world is limited in AR. Thus, its applications remain comparatively limited to this day. Google Glass was a great example of AR technology used in the recent past. AR was also adopted in HoloLens 2 by Microsoft and exhibited its usefulness in working environments. Some recent smart glass implementations include Spectacle smartglasses from Snap, Lenovo ThinkReality A3, and Vuzix’s Next-Gen Smart Glass [[258](#bib.bib258)].
The term VR refers to a broader category of immersive media. Real-world 360-degree videos, computer-generated material, or a combination of the two may be used to make these media. The user might use a VR headset or surround VR displays to visualize the virtual world. There are many applications for VR in various fields such as entertainment (games), many fields of engineering, healthcare, manufacturing, etc. [[259](#bib.bib259)].
MR can be considered as a combination of AR and VR. Here the computer-generated content is overlaid and anchored in the real world, where they also can interact with those objects. Unlike AR (or VR), computer-generated objects are expected to have occlusive effects with real-world objects during MR events. Pokemon GO mobile game is one of the recent successful examples of MR used in the entertainment industry. Microsoft has also demonstrated MR in their HoloLens 2 smart glasses.
AI has increasingly been studied by many industries in the recent past to complement XR technologies [[260](#bib.bib260)]. For example, MR is used to detect visual field defects of patients using headsets in the healthcare industry. There are propositions to even improve the vision of people wearing those headsets using applications that run vision augmentation algorithms [[261](#bib.bib261)]. Businesses also have been implementing methods using AI-based AR/VR technologies to enhance the recruitment and training of employees [[262](#bib.bib262)]. Also, some businesses are offering MR platforms for virtual event venues by making digital twins of the presentation spaces. These systems are popularly provided in the form of ”Metaverse as a Service (MaaS)” [[263](#bib.bib263), [264](#bib.bib264)]. With the growing number and popularity of XR applications, they can attract attackers’ attention, making users question the security of those applications.
####
V-E2 Impact of XAI on the security of XR

Figure 14: XR is envisaged to use AI-powered B5G infrastructure for its services. In this process, AI security will play a significant role. Attacks on AI-based systems might confuse the models, leak data, and create availability issues. These can be detected early on with the help of XAI methods.
B5G needs to provide the necessary infrastructure to make XR possible. AI is going to be an integral part of this whole process. Previous examples show that various forms of XR will be used in a wide range of applications ranging from smart cities to health care. For XR-based applications, the B5G networks need to maintain sophisticated operations such as resource management, network slicing, security, traffic management, etc., which require AI support in their implementations. On top of that, CV (computer vision) based models will be used to extract spatial features in rendering virtual objects [[265](#bib.bib265)]. AI-based data-efficient image compression methods are also complementary applications that would help to reduce network volume [[266](#bib.bib266)]. However, in many of these applications, the security of AI and ML models needs more meticulous attention.
Collaborative MR could cause more problems when under attack due to the many entities involved. MR systems gather data from various sources and from a plethora of users, which can add a fuzzy nature to the states of the system. Constantly acquiring a complete and comprehensible picture of massive-scale collaborative MR, in order to verify the secure and correct behavior of the environment, could be extremely difficult even with AI/ML techniques. Attacks like DDoS/DoS on the wearables, collaboration models, and central servers could cause access limitations, data leaks, and collaboration issues among the users. In some instances, these attacks may cause drops in frame rates, resulting in physical symptoms such as nausea and headaches and discouraging users from utilizing the devices.
Furthermore, spoofing, deep fakes, and social engineering could cause psychological harm and damage to the reputation of their victims [[267](#bib.bib267)]. Although defenses against these attacks are still to be explored, proactive measures such as training and educating users are extremely important. To create comprehensible user explanations, the fuzzy layer of those systems might need to be understood. The fuzzy layer could consist of computer vision models, natural language models, etc., that integrate with many sensors and devices acting as data I/O. XAI techniques would be instrumental in generating a simplified version of the data that can be used to create comprehensible explanations for the users. XAI is not only suitable for user-side precautions; it is also handy in fortifying anomaly detection systems to enable malfunction tracing during a breakdown.
####
V-E3 Related Work
Although the security of XR applications has been studied for a long time, the use of XAI to realize trustworthy AI-backed XR systems is still in its infancy. It’s important to know that effective and open communication builds user confidence in using autonomous techniques, especially in the transportation sector, due to the inherent safety concerns. In [[268](#bib.bib268)] VR is used in testing the importance of proactive communication rather than reactive communication when it comes to autonomous wheelchairs. Also, there are situations where VR systems are used for providing training environments for robots in various conditions and generate visuals for analysis [[269](#bib.bib269), [270](#bib.bib270)]. In these scenarios, XAI methods could help generate additional data, which might lead scientists to understand the behaviors and vulnerabilities of the models in those virtual environments that could otherwise be hidden.
Explainable and comprehensible security of AI methods would be even more critical when it comes to the Metaverse involving generation Z. Unlike the legacy games such as ”second life,” the novel metaverse is envisaged to integrate AI enhanced social interactions rendering a deeper and immersive social meaning. This system is further reinforced with virtual currencies and improved high-speed, always-on connectivity to the Internet through B5G networks [[271](#bib.bib271)]. Due to the similarities that would hold in the Metaverse and reality, security and trustworthiness must be ensured for the users before entering. Thus, XAI will be more important than ever to bridge the gap of understanding between the users and the underlying ML methods and avoid any unpleasant surprises inside the Metaverse.
####
V-E4 Summary
XR is firmly on a trajectory to be applied in major lifestyle-altering applications (e.g., the Metaverse). Security and privacy-related vulnerabilities, given their gravity and widespread effect, could dampen the use of AI/ML. XAI could be used to bridge the gap of misapprehension between B5G service providers, XR services, and end-users regarding the reasoning behind AI/ML decisions used in attack detection and system response.
###
V-F Other Applications
The use cases discussed above provide a broad view of the applications where XAI is functional in the B5G era. They can be further broken down into narrower technical applications, a few of which are briefly discussed below.
Holographic telepresence (HT) enables real-time 3D projections of faraway persons and objects. With this application, 3D video conferencing and news broadcasting will move to a new paradigm. Media is captured by HT and transferred via a broadband network in a compressed form. AI/ML-based compression techniques here could introduce a new attack vector [[2](#bib.bib2)]; thus, using XAI to identify any anomalies is recommended. In smart governance, intelligent and innovative ICT systems are used to promote and support enhanced decision-making, planning, and the involvement of citizens via collaborative decision-making. Smart governance has an added emphasis on ICT to sustain these ideals, assuring public welfare and development [[272](#bib.bib272)]. Corruption and unjust policies, along with techniques for improving education, security, transportation, resource management, and economic infrastructure, remain the most pressing issues in current governance, and smart governance is expected to provide better answers to these problems. Intelligent decision processes here will require concrete backing. Using XAI would be more of a necessity in this situation due to the solid justifications required for civil activities.
VI New Security Issues of B5G Due to XAI
-----------------------------------------
Although providing explainability to AI and ML solutions brings many benefits to B5G, it can be detrimental to the security of ML models and of the systems that embed these models. XAI may increase their vulnerability to adversarial ML attacks (i.e., attackers learn how the black-box model works), complicate their design (i.e., explainability must be considered in the trade-off between model performance and security), and open new attack vectors (i.e., the explanation itself can be falsified).
###
VI-A Increased vulnerability to adversarial ML attacks
####
VI-A1 Introduction
Many attacks that already exist target ML models: adversarial ML attacks [[273](#bib.bib273)]. Membership inference and model extraction [[274](#bib.bib274)] attacks compromise the confidentiality of the training data and the ML model respectively. Model poisoning and model evasion attacks (a.k.a. adversarial examples) compromise the integrity of the ML model and its predictions. A common characteristic of adversarial ML attacks is that their effectiveness increases as the attacker’s knowledge about the ML model and its decision process increase. Consequently, the obfuscation of ML models’ decision process, by making it a black-box, is an effective defense to mitigate adversarial ML attacks [[275](#bib.bib275)].
Explainability deobfuscates the decision process of black-box ML models, thereby revealing helpful information to an attacker, as depicted in Figure [15](#S6.F15 "Figure 15 ‣ VI-A1 Introduction ‣ VI-A Increased vulnerability to adversarial ML attacks ‣ VI New Security Issues of B5G Due to XAI ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions"). It has been shown that the information produced by explainable ML techniques can be leveraged to design more effective black-box attacks [[276](#bib.bib276)]. The effectiveness of membership inference, model extraction, poisoning, and evasion attacks increases against black-box ML models augmented with explainability.
The explanation provided for an ML model prediction enables an attacker to manually modify a sample that they want to get misclassified. The explanation reveals the features of a sample that are the most significant in the prediction provided by the model. Attackers can iteratively modify these features, using the feedback from the explanation, to eventually change the ML model prediction for the sample.
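The following toy sketch (a deliberately simplistic illustration of the loop just described, not a practical attack) shows the idea on a linear classifier, where per-feature contributions stand in for the explanation and the most influential feature is nudged until the predicted label flips:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def explanation(x):
    """Stand-in for an XAI method: per-feature contributions of a linear model."""
    return clf.coef_[0] * x

def evade(x, step=0.1, max_iter=100):
    """Iteratively perturb the most influential feature (per the explanation)
    in the direction that lowers the current class score, until the label flips."""
    x = x.copy()
    original = clf.predict([x])[0]
    for _ in range(max_iter):
        if clf.predict([x])[0] != original:
            break
        contrib = explanation(x)
        i = np.argmax(np.abs(contrib))                 # most influential feature
        direction = -np.sign(clf.coef_[0][i]) if original == 1 else np.sign(clf.coef_[0][i])
        x[i] += step * direction
    return x

adversarial = evade(X[0])
```

With richer models and real XAI methods the loop is more involved, but the principle is the same: the explanation tells the attacker exactly where to push.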

Figure 15: Explainability reveals new information that effectively “white-boxes” black-box ML models and facilitates adversarial ML attacks against them.
####
VI-A2 Impact on B5G
Defending against adversarial ML attacks is still an open issue, and there is no foolproof defense against any of these attacks. Some vulnerabilities exploited by adversarial ML attacks are even claimed to be necessary features of ML models [[277](#bib.bib277)]. Most defenses, like model obfuscation, only increase the effort required to perform a successful attack, but they do not fully protect from attacks. Explainability denies the usage of obfuscation to make ML models more resilient to attacks. This increased exposure decreases the security of all ML-based systems used in B5G.
For instance, ML-based security measures are deployed in the perimeter of 6G sub-networks for monitoring anomalous behavior within the sub-network or coming from other sub-networks [[24](#bib.bib24)]. DT, Random Forest, DNN, clustering, ensemble methods, Gradient Boosting Machines (GBM), etc., are used to detect common network attacks, like DDoS attacks, from traffic data [[278](#bib.bib278), [279](#bib.bib279), [24](#bib.bib24), [280](#bib.bib280)]. The evasion and poisoning of these ML-based anomaly detectors are made easier if they are explainable [[276](#bib.bib276)]. They can cause malicious traffic to bypass the system defenses and cause exhaustion of network resources. The availability of the system resources to serve legitimate users will be constrained. As a result, many critical applications (e.g., telesurgery, smart grid stability control systems, etc.) that depend on the service layer of B5G could be affected by the exhaustion of resources. When the outermost security layer of a network fails, it leaves the internal modules exposed to a higher attack risk.
ML-based decisions are also used for intelligence services in closed-loop E2E service management. Adversarial examples against these ML models can be generated more easily if the models are explainable. These adversarial examples can lead to inaccurate predictions and choices, such as falsely predicting the future resource requirements of an E2E service or reconfiguring the management policies [[176](#bib.bib176)]. Likely results include performance deterioration, financial loss, and loss of security guarantees.
####
VI-A3 Possible solutions
Revealing information about ML models currently corresponds to an increased vulnerability to adversarial ML attacks. This means that explainability comes at a security price, and we can only decrease this impact but cannot cancel it. Nevertheless, a partial solution to this issue lies in controlling the information provided by explainability.
First, one must define the minimum requirement and granularity of the explanation required to achieve an intended goal. The selected XAI method should only meet this minimum requirement without revealing more information than necessary. It limits the information an attacker can use in an adversarial ML attack to the required minimum.
Second, one must control the access to the explanation, i.e., restrict it to only the necessary parties. The explanation can also be sealed, encrypted, and only revealed if there is a need to investigate a decision of the model, e.g., auditing by entitled parties. The default access to explanation must be as restricted as possible rather than wide open. This restriction limits the opportunity for an attacker to access this information.
Finally, delaying the availability of the explanation (by a few hours or days) compared to the availability of the ML model decision can slow down attacks. In many ML use-cases, the decision from the ML model must be obtained quickly, while the explanation is not time-sensitive. Adversarial ML attacks are typically iterative, involving hundreds of steps, each of which relies on information from the previous step(s). By delaying the availability of the explanation, the utility of the ML model is not impacted, while an adversarial attack can be drastically slowed down or even completely prevented.
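A toy sketch of these system-level controls (all class and role names below are illustrative, not a prescribed design) could return predictions immediately while storing explanations behind role-based access and a configurable release delay:

```python
import time
from dataclasses import dataclass, field

@dataclass
class ExplanationVault:
    """Holds explanations, releasing them only to entitled roles and after a delay."""
    release_delay_s: float = 24 * 3600                         # e.g., one day
    entitled_roles: set = field(default_factory=lambda: {"auditor"})
    _store: dict = field(default_factory=dict)

    def put(self, request_id, explanation):
        self._store[request_id] = (time.time(), explanation)

    def get(self, request_id, role):
        if role not in self.entitled_roles:
            raise PermissionError("role not entitled to explanations")
        created, explanation = self._store[request_id]
        if time.time() - created < self.release_delay_s:
            raise RuntimeError("explanation not yet released")
        return explanation

def predict_with_deferred_explanation(model_fn, explain_fn, x, request_id, vault):
    """Return the prediction immediately; queue the explanation for later, gated access."""
    y = model_fn(x)
    vault.put(request_id, explain_fn(x))
    return y
```

The point of the sketch is the separation of concerns: the prediction path stays fast, while access control and delay are enforced on the explanation path only.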
####
VI-A4 Summary
There is no perfect solution to fix the vulnerabilities that XAI introduces to ML models. Currently, the most sensible mitigations must be applied at the system level through, e.g., access control, encryption of explanations, delayed responses, etc. This may change in the future as defenses against adversarial ML attacks become effective and foolproof defenses against some of these attacks are developed.
The issue raised here is that current adversarial ML attacks are more effective in a white-box than in a black-box setting. Explainability has the ”white-boxing” side-effect on black-box models.
If black-box adversarial ML attacks progress such that they are as effective as white-box attacks, the information revealed by XAI will no longer impact the ML model’s security. There is work already showing that, e.g., membership inference attacks can be run as effectively against black-box and white-box models [[281](#bib.bib281)]. In such cases, a black-box model explained using XAI (white-boxed) would not be more vulnerable than its non-explained counterpart. The impact of XAI on its security would thus be canceled.
###
VI-B Difficulty to design secure ML applications
####
VI-B1 Introduction
The design and implementation of ML-based systems are guided by the sole requirement of maximizing performance, i.e., high accuracy, high generalizability, and low response time. Adding security requirements to ML-based systems introduced the first trade-off between antagonistic properties: performance vs. security. It has been shown that effective defenses against adversarial examples, like adversarial training, degrade the accuracy [[282](#bib.bib282)] and the generalizability [[283](#bib.bib283)] of protected ML models. There also exist trade-offs between security properties. For instance, increasing the resilience of ML models against evasion attacks makes them more vulnerable to privacy attacks like membership inference [[284](#bib.bib284)].
Explainability is a new requirement added to the existing trade-off. Three properties that are partly detrimental to each other now need to be fulfilled by ML systems, as illustrated in Figure [16](#S6.F16 "Figure 16 ‣ VI-B1 Introduction ‣ VI-B Difficulty to design secure ML applications ‣ VI New Security Issues of B5G Due to XAI ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions"): performance, security, and explainability. When explainability is provided through transparency, all three requirements apply to the ML model and its training algorithm. Providing explanation through transparency reduces the choice of training algorithms and models during the design of the ML system. This potentially leads to discarding the solution providing the best accuracy, security, or privacy in order to meet the explainability requirement.

Figure 16: New trade-off required between performance, security, and explainability of ML systems.
####
VI-B2 Impact on B5G
The new requirement of achieving a performance-security-explainability trade-off makes it challenging to design well-balanced ML systems for B5G networks. B5G networks are massive-scale heterogeneous networks where small form factor devices are used in many applications that collect information from the environment. This information is currently transmitted to centralized cloud-based servers for intelligent processing and decision-making. However, with the advent of IoE in B5G, there is a shift towards edge intelligence. Deploying ML models on-device enables training using federated learning and local decision-making, making communication more efficient. On the other hand, device resource limitations make it challenging to run ML models on-device. Performance thus becomes a primary requirement constrained by device resources, relegating security and explainability to secondary places.
For example, body-sensors/fit-bits collecting vital signals to provide dietary and physical recommendations struggle to squeeze out the necessary computational power to run sophisticated cryptographic techniques on top of ML models, and they fail to provide sufficient security [[24](#bib.bib24)].
In such a case, running post-hoc explanation techniques would burden the already exhausted computation power found in such devices. Nevertheless, these are end-devices dealing with highly sensitive health data, and it is essential to include some form of explanation to make their operations trustworthy for customers. These constraints require developers to use transparent or in-model explanations, which might not be an ideal model selection for the particular use case in terms of accuracy, robustness, or privacy.
####
VI-B3 Possible solutions
Transparency puts the explainability requirement on the ML model itself, causing the trade-off between the three properties. If post-hoc explainability is favored instead, the choice of ML model can be dictated solely by performance and security considerations. Explainability is removed from the equation, and it is provided using an external post-hoc solution.
Nevertheless, this solution has two drawbacks. First, the explanation from post-hoc methods sometimes has a lower correlation to the actual decision of the model, so it offers a lower-quality explanation. Second, we will later discuss that post-hoc solutions create new attack vectors and targets against the whole system, including the ML component. Thus, post-hoc explainability only moves the introduced security vulnerability from the ML model to the more extensive system that includes it.
A second solution is the careful analysis and prioritization of the ML system requirements. Evaluating and quantifying the performance-security-explainability trade-off leads to making an informed choice about which requirement(s) to meet and which other(s) to neglect. Requirements neglected during the ML model design may be addressed later at the system level. The security of ML models can be increased through system security, e.g., by detecting adversarial queries to the model at inference time [[274](#bib.bib274), [285](#bib.bib285)].
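As an illustrative sketch of such an inference-time, system-level check (a simplification of the stateful detection idea behind works such as [[274](#bib.bib274), [285](#bib.bib285)], not their implementation; the window size, distance measure, and thresholds are placeholders), a detector can flag clients whose recent queries are suspiciously similar to one another, a pattern typical of iterative black-box attacks:

```python
import numpy as np
from collections import defaultdict, deque

class QueryPatternDetector:
    """Flag clients whose recent queries are unusually similar to each other,
    a pattern typical of iterative adversarial probing."""

    def __init__(self, window=50, distance_threshold=0.05, suspicious_fraction=0.6):
        self.window = window
        self.distance_threshold = distance_threshold
        self.suspicious_fraction = suspicious_fraction
        self.history = defaultdict(lambda: deque(maxlen=window))

    def observe(self, client_id, query):
        """Record a query and return True if the client currently looks suspicious."""
        past = self.history[client_id]
        if past:
            dists = np.array([np.linalg.norm(query - q) for q in past])
            near_duplicates = np.mean(dists < self.distance_threshold)
        else:
            near_duplicates = 0.0
        past.append(np.asarray(query, dtype=float))
        return near_duplicates > self.suspicious_fraction
```

Such a check does not change the ML model at all, which is precisely why it can compensate for requirements that were deprioritized during model design.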
####
VI-B4 Summary
As ML and AI become increasingly used in critical and high-risk applications, the consequences of incorrect decisions from these systems worry people. The trustworthy AI concept aims to ease these worries by enforcing a large number of desired properties to make AI and ML applications trustworthy [[286](#bib.bib286)]. Among the first requirements were accuracy, performance, security, and privacy. Many more requirements were added, such as explainability, transparency, accountability, fairness, etc. This list grows over time, complicating the design of trustworthy AI applications and making it harder to fulfill security requirements by design.
How these properties interact and how they impact each other, positively or negatively, is not yet well understood. More studies are necessary to understand the several trade-offs involved in designing trustworthy AI systems. Only under this condition can trustworthy AI systems be secure, explainable, and satisfy the other desired properties simultaneously.
###
VI-C New attack vector and target
####
VI-C1 Introduction
Post-hoc methods for XAI are new components added to ML-based systems. This new component can complement the prediction of ML models, weighing heavily on the actions of systems and humans that depend on the ML model. In some cases, the explanation itself is more important than the prediction. This is the case for AI used in applications having a societal impact, where predictions must be fair and unbiased. This is also the case for security applications like detection and response (D&R), where an explanation is used to counter and recover from detected attacks using appropriate measures.
Due to the importance of explanation, the XAI component can become the main target of an attack, as depicted in Figure [17](#S6.F17 "Figure 17 ‣ VI-C1 Introduction ‣ VI-C New attack vector and target ‣ VI New Security Issues of B5G Due to XAI ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions"). Directly attacking post-hoc XAI methods can change the explanation while the prediction of the ML model remains the same, as demonstrated in [[287](#bib.bib287), [288](#bib.bib288)]. The ML model makes the right decision, but the dependent system or human takes a wrong course of action based on the incorrect explanation.
There is also the possibility of concealing unfair outcomes of an ML model with deceptive justifications, using XAI to veil the underlying problems. This is defined as fairwashing [[289](#bib.bib289)]: the misapprehension that an ML model adheres to specific standards although its actual behavior significantly deviates from its explanations. Both model explanations and outcome explanations are vulnerable to this issue.
It is further demonstrated that post-hoc explanatory approaches depending on input perturbations, like LIME and SHAP, are unreliable and do not give definitive information regarding fairness [[290](#bib.bib290)]. An interpreter-only attack technique known as scaffolding is built based on this observation. An attacker can generate desired explanations for a given unfair ML model (explained with LIME/SHAP) by masking any biases in the model. Through this hack, a compromised XAI method enables hiding biased/unfair outcomes, making them appear harmless/unbiased.

Figure 17: The XAI component becomes a new target and a new attack vector to compromise the whole ML-based system.
####
VI-C2 Impact on B5G
This threat exists for every ML application in B5G where the explanation carries as much or more weight than the prediction in the action it triggers.
For instance, in D&R where an explanation is used to counter and recover from an attack, modifying the explanation for a prediction leads to fixing a non-existing or irrelevant issue. It will not block the detected attack or prevent it from happening again in the future.
A reliable explanation also increases trust and improves the user experience for end-users of B5G networks. It fosters the adoption and usage of services provided over B5G networks. Certain decisions require user data to ensure the security and safety of the services provided, and it is also essential to provide proper explanations on how the data is used in decision-making. Critical applications such as autonomous driving are envisaged to rely on B5G networks [[26](#bib.bib26)]. When a system fails or crashes, the explanation for the incorrect prediction that led to it will be paramount for handling the lawsuits and other legalities that follow. Although an accurate prediction would have prevented a crash, an explanation during an exception would be of critical importance. A compromised explanation can divert attention from the real issue and protect the responsible party from any consequence.
####
VI-C3 Possible solutions
The main reason this new attack target exists is that the explanation of post-hoc methods can sometimes be disconnected from the prediction of the ML model they interpret. This is a known weakness of post-hoc explanation methods. When explainability is provided through transparency, the explanation comes directly from the ML model itself, and it is usually well linked to its actual decision process. Both the ML model and the explanation process must then be fooled for an attack to succeed. Even though this is possible [[287](#bib.bib287)], it is more complicated. Moreover, by using XAI methods based on transparency, an explanation would be partly protected by the existing defenses against adversarial ML attacks that already protect the ML model. The state of security in the prevention of adversarial ML attacks is more advanced than it is for protection against attacks on XAI methods.
However, if explainability through transparency is not possible, selecting different post-hoc explanation methods can increase the resilience against attacks. For instance, empirical experiments [[290](#bib.bib290)] show that SHAP is more resilient than LIME when it comes to hiding biased and unfair outcomes.
####
VI-C4 Summary
The addition of new functions and components in large systems always increases system complexity and vulnerabilities, exposing new attack vectors. XAI, primarily through post-hoc explainability, is such a new component that exposes new attack vectors against ML-based systems. Given that the security of XAI methods and their resilience to attacks are not currently well-known, XAI methods represent one of the weakest components in ML-based systems, which makes them a primary target for attackers.
Attacks against XAI methods must be further researched, and defenses must be developed to make XAI secure. While the security of XAI has not reached sufficient maturity, the explanation should only be used as additional information rather than directly used in critical decision-making.
VII XAI B5G Security Projects and Standardization
--------------------------------------------------
Numerous B5G research initiatives are underway, bringing together academic and industry partners worldwide. This section summarizes several of those initiatives and their primary objectives.
###
VII-A Research Projects
1) SPATIAL [[291](#bib.bib291)] - EU-funded project addressing identified gaps in data and black-box AI through the design and development of resilient accountable metrics, privacy-preserving methods, verification tools, and system solutions that will serve as critical building blocks for trustworthy AI in ICT systems and cybersecurity. The project addresses the uncertainties inherent in artificial intelligence that directly impact privacy, resilience, and accountability. The SPATIAL project identifies possible XAI attacks and potential XAI technique misjudgment. As a result, it seeks to propose robust accountability metrics and integrate them into existing ”black-box” AI algorithms. Another objective of the SPATIAL project is to develop detection mechanisms for detecting data biases and conducting descriptive studies on the various data quality trade-offs associated with AI-based systems.
2) XMANAI [[292](#bib.bib292)] - an EU-funded project focusing on explainable AI. The XMANAI project’s researchers intend to carve out a ’human-centric,’ trusting approach that will be tested in real-world manufacturing scenarios. XMANAI intends to demonstrate (through four real-world manufacturing cases) how it will assist the manufacturing value chain in transitioning to the amplifying AI era by combining (hybrid and graph) AI ”glass box” models that are explainable to a ”human-in-the-loop” and produce value-based explanations, with complex AI asset management-sharing-security technologies to multiply the latent data value in a trusted manner, and targeted manufacturing apps to solve concrete manufacturing problems. XMANAI pilots are being conducted in collaboration with CNHi of Italy (creating a virtual representation (digital twin) of the plant based on 3d-2d models and production, logistic, and maintenance data of the lines), Ford (real-time representation of production and traceability), UNIMETRIK (intelligent measurement software that warns if the point sets defined for the measurement strategy are adequate), and Whirlpool (platform capable of ensuring a reliable sales forecasting for the D2C channel).
3) STAR [[293](#bib.bib293)] - is a collaborative effort between experts in artificial intelligence and digital manufacturing to enable the deployment of standards-based, secure, safe, reliable, and trusted human-centric AI systems in manufacturing environments. STAR will investigate how artificial intelligence systems can acquire knowledge to make timely and safe decisions in dynamic and unpredictable environments. Additionally, it will conduct research into technologies that enable AI systems to deal with sophisticated adversaries and remain resilient to security attacks. Participants in this project consider a variety of AI-powered scenarios and systems, including active learning systems, simulated reality systems that accelerate [RL](#glo.acronym.RL) in human-robot collaboration, XAI systems, human-centric digital twins, advanced [RL](#glo.acronym.RL) techniques for optimal mobile robot navigation and detection of safety zones in industrial plants, and cyber-defense mechanisms for sophisticated poisoning and evasion. These technologies will be validated in challenging manufacturing scenarios in quality management, human-robot collaboration, and agile manufacturing powered by AI. STAR aims to remove security and safety barriers that currently prevent sophisticated AI systems from being deployed in production lines.
4) SPARTA [[294](#bib.bib294)] - SPARTA was founded to establish a long-term community capable of collaboratively defining, developing, sharing, and evolving solutions that assist practitioners in preventing cybercrime and enhancing cybersecurity. The SPARTA project is divided into four major components:
SPARTA T-SHARK: established to develop and validate methodological, organizational, and technological solutions extending cybersecurity towards the comprehensive organization of security functions, enabling threat prediction and full-spectrum cybersecurity awareness, providing high situational awareness, informing decision and policymakers on broad or long-term issues, and providing a timely warning of threats. The SPARTA T-SHARK program aims to expand the reach of threat understanding, from the current investigative-level definition up to strategic considerations on current and future threats, and down to real-time event handling and prevention.
SPARTA CAPE: This program addresses the assessment of cybersecurity properties for software, focusing on two specific areas: cyber-physical systems and complex systems. For cyber-physical systems, the objective of CAPE is to propose a method to jointly specify security and safety properties.
SPARTA HAII-T: this program aims to develop an integrated framework and a toolkit supporting the design, development, and verification of security-critical, large-scale distributed systems. This will allow for the specification and enforcement of crucial security policies, including the confidentiality, integrity, resilience, and privacy of the exchanged data. The challenge will be tackled from multiple perspectives, including hardened legacy components, secure operating system software, and resilience and privacy by design.
SPARTA SAFAIR: The SAFAIR program aims to conduct a thorough analysis of the threats and risks of AI, followed by providing mechanisms and tools to counter the detrimental effects of recognized dangers in various critical AI applications, making them safe and secure from the possibility of being compromised. The program's impact on explainability will help AI users understand how the algorithms perform their tasks, which is particularly useful in domains where AI has already exceeded human performance. Finally, the work on fairness provides mechanisms and tools to ensure that the models created with AI methods do not rely on a skewed or prejudiced view of the situation they deal with.
5) 6G Flagship [[295](#bib.bib295)] - a research project funded by the Academy of Finland that aims to commercialize 5G networks and develop a new 6G standard for future digital societies. 6G Flagship's primary objective is to develop the fundamental techniques required to enable 6G. The 6G Flagship research program recently published the world's first 6G white paper [[296](#bib.bib296)], paving the way for the definition of the wireless era in 2030. The authors of that paper identified several intriguing security challenges and research questions, including how to improve information security, privacy, and reliability via physical layer technologies and whether this can be accomplished using quantum key distribution. Additionally, the 6G Flagship project will focus on key technology components of 6G mobile networks, including wireless connectivity, distributed intelligent computing, and privacy. Finally, with the support of industry and academia, the 6G Flagship project will conduct large-scale pilots with a test network.
6) AI4EU [[297](#bib.bib297)] - through open calls and other actions, the project aims to create a comprehensive European AI-on-demand platform to lower barriers to innovation, boost technology transfer, and catalyze the growth of start-ups and SMEs in all sectors. The AI4EU platform serves as a broker, developer, and one-stop-shop for services, expertise, algorithms, software frameworks, development tools, components, modules, data, computing resources, prototyping functions, and access to funding. Different user communities, such as engineers and civic leaders, can also get training to gain skills and certifications. The AI4EU platform aims to become a global standard built on existing AI, data components, and platforms.
7) SANCUS [[298](#bib.bib298)] - SANCUS aims to combine cutting-edge technologies for automated security validation and verification, dynamic risk assessment, AI/ML processing, security emulation and testing, and unique optimization modeling under the most recent containerized 5G system network platform. The project will develop several new engines and mechanisms to create a secure environment for product development and security posture assessment in a safe, multisectoral setting. All engines will be prepared and elaborated using virtual machines (VMs), allowing them to be integrated on the same platform simultaneously, conducting joint testing, formulating their instruction sets flexibly and on demand, maintaining them quickly, and sharing them with research groups.
8) INSPIRE-5Gplus [[299](#bib.bib299)] - the project aims to advance the security and privacy of 5G and Beyond networks. Grounded in an integrated network management system and relevant frameworks, INSPIRE-5Gplus is devoted to improving security along various dimensions, i.e., overall vision, use cases, architecture, integration with network management, assets, and models. INSPIRE-5Gplus addresses key security challenges through vertical applications ranging from autonomous and connected cars to critical Industry 4.0. INSPIRE-5Gplus will devise and implement a fully automated end-to-end smart network and service security management framework that empowers protection, trustworthiness, and liability in managing 5G network infrastructures across multiple domains. The conceptual architecture of INSPIRE-5Gplus is split into security management domains (SMDs) to support the separation of security management concerns. Each SMD is responsible for intelligent security automation of resources and services within its scope. The end-to-end (E2E) service SMD is a special SMD that manages the security of end-to-end services. The E2E service SMD coordinates between domains using orchestration. Each SMD, including the E2E service SMD, comprises a set of functional modules that operate in an intelligent closed-loop way to provide software-defined security orchestration and management that enforces and controls the security policies of network resources and services in real time.
###
VII-B Standardization related to AI security
Standardization is critical for defining the technological requirements for B5G networks and should be utilized to determine the most appropriate technologies for 6G network deployment. Thus, standards shape the global telecommunications marketplace. Numerous Standards Developing Organizations (SDOs) are tasked with standardizing 6G. Table [V](#S7.T5 "Table V ‣ VII-B Standarization related to AI security ‣ VII XAI B5G Security Projects and Standardization ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions") summarizes standardization activities in the field of artificial intelligence security.
Table V: Recent important standardization efforts related to Industry AI Security
| SDO | Standard title | Topics/Description | Publication date |
| --- | --- | --- | --- |
| ETSI [[300](#bib.bib300)] | ETSI TR 103 305-5 V1.1.1: Critical Security Controls for Effective Cyber Defence | The document is an evolving repository for privacy enhancing implementations using the Critical Security Controls. These presently include a privacy impact assessment and use of the Controls to help meet provisions of the EU [General Data Protection Regulation (GDPR)](#glo.acronym.GDPR) | September 2019 |
| ETSI [[301](#bib.bib301)] | ETSI GR SAI 004 V1.1.1: Securing Artificial Intelligence (SAI) | The standard covers the problem of securing AI-based systems and solutions, with a focus on machine learning, and the challenges relating to confidentiality, integrity and availability at each stage of the machine learning lifecycle. | December 2020 |
| ETSI [[302](#bib.bib302)] | ETSI TR 103 674 V1.1.1: Artificial Intelligence and the oneM2M architecture | The ETSI TR 103 674 is addressing the issues related to the introduction of AI into [IoT](#glo.acronym.IoT) systems and, as first priority, into the oneM2M architecture. | February 2021 |
| ETSI [[303](#bib.bib303)] | ETSI TR 103 675 V1.1.1: SmartM2M; AI for IoT: A Proof of Concept | This standard covers the implementation and security challenges of oneM2M platforms using the case of [IoT](#glo.acronym.IoT). It shows how the ML methods can be implemented directly over the data. | December 2020 |
| ETSI [[304](#bib.bib304)] | ETSI GR ZSM 010 V1.1.1: Zero-touch network and Service Management (ZSM); General Security Aspects | The document studies the security aspects of the ZSM use cases, framework and solutions, identifies potential security threats and mitigation considerations to be covered in ZSM standardization activities. It aims to outline a list of security controls (aka security countermeasures) in order to raise awareness of security aspects that could be considered in ZSM specifications. The cited document explores the relationship between security controls and technology-specific solutions. | July 2021 |
| ISO [[305](#bib.bib305)] | ISO/IEC TR 24028:2020: Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence | This document surveys approaches to establish trust in AI systems through transparency, explainability, controllability. It also surveys engineering pitfalls and typical associated threats and risks to AI systems, along with possible mitigation techniques and methods. | May 2020 |
| NIST [[306](#bib.bib306)] | NISTIR 8228: Considerations for Managing Internet of Things ([IoT](#glo.acronym.IoT)) Cybersecurity and Privacy Risks | The purpose of NISTIR 8228 is to help federal agencies and other organizations better understand and manage the cybersecurity and privacy risks associated with their individual [IoT](#glo.acronym.IoT) devices throughout the devices’ lifecycles. | June 2019 |
| NIST [[307](#bib.bib307)] | NIST SP 800-53 revision 5: Security and Privacy Controls for Federal Information Systems and Organizations | NIST SP 800-53 is designed to help organizations identify the security and privacy controls needed to manage risk and to satisfy the security and privacy requirements. It accomplishes this objective by providing a comprehensive and flexible catalog of security and privacy controls to meet current and future protection needs based on changing threats, vulnerabilities, requirements, and technologies. | September 2020 |
| IEEE [[308](#bib.bib308)] | IEEE P7001: Transparency of autonomous systems | This standard describes measurable, testable levels of transparency, so that autonomous systems can be objectively assessed and levels of compliance determined. | June 2020 |
| IEEE [[309](#bib.bib309)] | IEEE P7006: Personal data AI agent (working group) | This working group works on a standard that describes the technical elements required to create and get access to a personalized (AI) that will comprise inputs, learning, ethics, rules and values controlled by individuals. | June 2021 |
VIII Lessons Learned and Future Research Directions
----------------------------------------------------
This section discusses the lessons learned and, based on these lessons, synthesizes the future research directions that industrial or academic researchers can follow.
###
VIII-A B5G Threats and landscape
####
VIII-A1 Lessons Learned
Any 5G technology can be inherited and improved in a 6G environment. In the context of this work, we concentrate our analysis on SDN-, NFV-, VNF-, and virtualization-related security threats. SDN technologies utilize a wide variety of protocols and policies that could be exploited to disturb the layers and interfaces of the SDN framework through man-in-the-middle and DoS attacks, hijacking and re-routing of data, impersonation of users, etc. NFV technology is also exposed to malicious activity. In particular, an attacker could exploit vulnerabilities in its authentication process, protocols, third-party hosted network functions, and APIs. VNFs are pieces of software that might fall victim to typical software weaknesses, such as buffer overflows, dynamic memory deallocation errors, open APIs, etc. By employing virtualization/containerization technology, network operators gain significant benefits in terms of capital expenditure (CAPEX) and operating expenditure (OPEX). Nevertheless, there is an evident risk that vulnerabilities can spread across all VMs or containers unless the hypervisor is deployed correctly to manage VMs and OSs securely.
Additionally, new 6G technologies and applications directly affect security and privacy and bring new threats. They empower legitimate users with low-latency, reliable, and efficient communication services, but also give malicious actors more powerful means to cause harm. Researchers worldwide are intensively investigating and discovering more threats related to trending technologies and applications, as well as developing new solutions to make 6G secure and privacy-preserving once it is commercialized in 2030 and beyond.
####
VIII-A2 Remaining Research Questions
Firstly, open-source software in specific 6G components will likely bring zero-day threats. The number of vulnerabilities discovered in open source increases rapidly year by year. As the source code is disclosed, an attacker has time to investigate the target's operation and select an attack. In particular, open-source software that is no longer developed and upgraded is even more vulnerable.
Moreover, security and isolation between network slices are fundamental but remain a challenge, as they are complex to achieve. Physical isolation is not always possible, and the VNFs of network slice instances are implemented on common cloud-based infrastructures.
In addition, VNFs, NFV, and MEC infrastructures, together with IoT devices, require numerous APIs for their proper functioning. Nevertheless, APIs are a significant attack vector. MEC technology requires open APIs. Moreover, human mistakes could leave open or unpatched APIs in the VNFs, the NFV platform, or IoT devices. Malicious actors can exploit such vulnerabilities to perform man-in-the-middle attacks, malicious node problems, privacy leakages, and VM manipulation. Dealing with the security of APIs in 5G remains an open issue.
Furthermore, an optimal security approach should protect the services, hardware resources, information, and data of IoT platforms, both in transit and in storage. However, this is very challenging since IoT devices are designed to be deployed at a massive scale, creating a network of nearly identical appliances with similar characteristics. This similarity amplifies the impact of any security vulnerability, which may significantly affect many of them at once.
Likewise, the expected number of links interconnected between the low-performance devices (e.g., drones, home appliances, and smart sensors) is unprecedented, and many of these devices can establish connections and communicate with other devices automatically. This interconnection on IoT devices implies that if a device is poorly secured and connected, it can affect the security and the resilience of the whole network.
Finally, DoS/DDoS attacks will benefit from the more significant amounts of network traffic sent per device and the fact that many more devices can be simultaneously connected to the network, thanks to IoT technology. Much more powerful botnets could be created to carry out DoS and DDoS attacks effectively. The main challenge arising from the previous aspects is the effective detection of traditional DDoS attacks (e.g., flooding attacks) and more advanced stealthy DDoS attacks (e.g., SlowDoS attacks).
####
VIII-A3 Possible Future Directions/Solutions
In 6G networks, there will constantly be a need to perform malware and attack detection and identification. Research on Intrusion Detection Systems (IDS) is a well-established field that, especially with the recent increase in the value of the data exchanged daily over the Internet, is expected to grow much more with the proliferation of IoT devices. Depending on its type, an IDS can deploy different strategies to detect a potential attack within network traffic: rule-based detection and anomaly detection.
Rule-based IDS are exact but not flexible and scalable enough for dynamic network environments such as those found in 6G, where new types and variations of classical cyberattacks are expected to be numerous. Moreover, the variety and dynamicity of malicious traffic pose a significant challenge to traffic monitoring tools in terms of the flexibility and generalization of their algorithms. 6G traffic analysis tools need to be adaptable to ever-evolving attacks, new protocols and policies, and potentially to breakthrough technologies that are constantly being developed. Finally, 6G infrastructure and users will be continuously protected by encryption. Therefore, anomaly detection leveraging AI and XAI techniques has gained popularity as a solution to detect malicious activity in 5G and B5G networks. Currently, more research on encrypted traffic analysis is required to improve the precision of the results: while high accuracy is expected, the system also cannot be prone to a high number of false positives. Additionally, to the best of our knowledge, there is a lack of sufficiently large, publicly available datasets of 5G and B5G malicious traffic to support this research.
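To make this concrete, the following minimal sketch pairs an unsupervised anomaly detector with a shallow surrogate tree that acts as a crude post-hoc explanation of which flow features drive the anomaly flags. Everything in it is illustrative: the flow features, their distributions, and the surrogate depth are invented for the example and do not come from any of the surveyed works.

```python
# Illustrative sketch only: unsupervised anomaly detection on synthetic
# flow-level features, with a shallow surrogate tree used as a crude
# post-hoc, human-readable explanation of the detector's decisions.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["pkts_per_s", "mean_pkt_size", "flow_duration", "n_dst_ports"]

# Mostly "benign" flows plus a small cluster of high-rate, port-scanning flows.
benign = rng.normal(loc=[100, 800, 2.0, 3], scale=[20, 100, 0.5, 1], size=(950, 4))
attack = rng.normal(loc=[900, 120, 0.2, 40], scale=[50, 30, 0.05, 5], size=(50, 4))
X = np.vstack([benign, attack])

detector = IsolationForest(contamination=0.05, random_state=0).fit(X)
flags = (detector.predict(X) == -1).astype(int)          # 1 = flagged as anomalous

# Depth-2 surrogate approximating the detector, yielding readable rules.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, flags)
print("Surrogate fidelity to the detector:", surrogate.score(X, flags))
print(export_text(surrogate, feature_names=feature_names))
```

Note that the surrogate's fidelity to the detector would itself need to be monitored in any real deployment, since a low-fidelity surrogate produces misleading rules.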
In addition, detecting attacks by analyzing network traffic alone may not always be possible, especially with the emergence of stealthy application-layer DDoS attacks, which aim at exhausting the server's resources while generating traffic that mimics legitimate behavior. Thus, using multiple sources of information, such as resource usage and the performance of the service under attack, to feed AI algorithms is vital for discriminating malicious behavior from legitimate behavior.
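The sketch below illustrates the point on purely synthetic data: flow-level statistics alone barely separate slow-DoS clients from legitimate ones, whereas fusing them with host-side measurements (connection hold time and worker-pool utilization, both invented for this example) makes the classes separable.

```python
# Hedged sketch of feature fusion for stealthy application-layer DDoS detection.
# All data is synthetic: network-level statistics alone barely separate the two
# classes, while host-side metrics (connection hold time, worker-pool usage) do.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
y = (rng.random(n) < 0.5).astype(int)                  # 1 = slow-DoS style client

# Network view: request rate and mean payload size mimic legitimate traffic.
net = rng.normal(loc=[50, 600], scale=[10, 80], size=(n, 2))
# Host view: attackers hold connections open and tie up the worker pool.
hold_time = np.where(y == 1, rng.normal(30.0, 5.0, n), rng.normal(1.0, 0.3, n))
workers_busy = 0.2 + 0.6 * y + rng.normal(0, 0.05, n)
fused = np.column_stack([net, hold_time, workers_busy])

for name, X in [("network features only", net), ("fused network + host features", fused)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
    clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {clf.score(X_te, y_te):.3f}")
```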
Independent of the particular use case, most applications and security functions require the ability to directly react to what is happening in the network on the fly, especially when it involves the detection of malware.
###
VIII-B Role of XAI for B5G Security
####
VIII-B1 Lessons Learned
ML is expected to become an integral feature in many B5G telecommunication infrastructure aspects. Security of the applied AI/ML techniques has long been the focus of many previous studies, but their applications in the real world are still limited.
IoT is a dynamic and rapidly evolving enabler of B5G that requires thorough protection against network, software, and encryption attacks. Supervised learning techniques (e.g., SVM) and [RL](#glo.acronym.RL) methods (e.g., Q-learning) are popularly used to prevent such attacks. CRAN, vRAN, and ORAN technologies are the future of radio infrastructure in B5G. Fast-paced communication enabled through the RAN in B5G should be trustworthy and transparent at the same time. However, most of the studies in the UE layer of telecommunication investigate AI/ML-based solutions, while only a few focus on accountability and robustness.
Edge-AI is becoming increasingly prominent due to its cost-effectiveness and data privacy in a variety of industries. Not only that, AI/ML is preferred in securing the edge architectures as well. Edge devices carry out functions from data gathering to inferencing. Thus, identifying compromises in the pipeline is extremely important at each stage that is susceptible to attacks.
The use of reinforcement learning and other machine learning approaches in backhaul and core networks is becoming more popular. Some of the problems that must be addressed include secure connection management, communication, and handover security. Explainable techniques required to secure the core network must be lightweight and should not degrade latency or throughput.
Virtualization and SDN have opened up the telecommunication sector into new dimensions. End-to-end slicing and fully automated network management (ZSM) are envisaged with the advent of SDN and NFV alongside AI/ML. The ubiquitous use of AI/ML in automation and slicing requires a fully comprehensible security architecture covering the whole data pipeline from training to inference. A single point of attack that loops inside the system would be immensely detrimental in closed-loop automation. For this, inherently transparent models (LMUT, DTs) and post-hoc explainable techniques (LIME/SHAP) would be required, depending on the available computation power.
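As a toy illustration of what a per-decision, post-hoc explanation could look like in such a pipeline, the sketch below applies a simple occlusion-style attribution, used here as a dependency-free stand-in for LIME/SHAP-style methods, to a synthetic slice-admission model. The model, feature names, and data are all invented for illustration and are not tied to any specific ZSM implementation.

```python
# Dependency-free stand-in for a LIME/SHAP-style local attribution: each
# feature of one decision is replaced by its background mean and the change
# in the model's predicted admission probability is reported.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
feature_names = ["slice_load", "latency_budget_ms", "tenant_priority", "req_bandwidth"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] - 0.5 * X[:, 1] + 0.8 * X[:, 2] > 0).astype(int)   # synthetic admit/deny rule
model = LogisticRegression(max_iter=1000).fit(X, y)

def occlusion_attribution(model, x, background):
    """Drop in P(admit) when each feature is replaced by its background mean."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = []
    for j in range(x.size):
        x_masked = x.copy()
        x_masked[j] = background[:, j].mean()
        scores.append(base - model.predict_proba(x_masked.reshape(1, -1))[0, 1])
    return base, scores

base, scores = occlusion_attribution(model, X[0], X)
print(f"P(admit) = {base:.2f}")
for name, s in sorted(zip(feature_names, scores), key=lambda t: -abs(t[1])):
    print(f"  {name:>18}: {s:+.3f}")
```

Whether such a lightweight approximation suffices, or a full LIME/SHAP computation is warranted, is exactly the computation-versus-interpretability choice discussed above.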
####
VIII-B2 Remaining Research Questions
Computational power in user devices is increasing dramatically. New processing units are designed mainly to target running AI/ML systems. However, extra computational power would be required for encryption and decryption tasks and for anti-virus/malware software running additional security measures on those devices. This further limits the computation power available to improve service accountability and trustworthiness. Thus, any implementation of XAI techniques would require additional processing power on the device.
Given the high velocity, veracity, and variety of incoming data in IoT and edge devices, the principal tasks would have to compete for the needed computation power, bringing down overall customer satisfaction. Therefore, a proper balance between performance and interpretability/accountability must be struck without compromising the latency and throughput of the communication channels from UEs to core networks.
Currently available metrics to quantify interpretability are insufficient. Although there have been multiple attempts to introduce metrics for interpretability, it remains a heavily subjective matter and an open research question. Metrics are even more critical for closed-loop automation, since all functions are expected to execute with minimal human intervention.
Interpreters used on ML models can expose sensitive information alongside the information that is helpful for stakeholders. Such a challenge would make XAI techniques less appealing for businesses using AI/ML on the B5G infrastructure. At the same time, it is crucial to convey the proper explanations to the right stakeholders while still encapsulating sensitive information.
####
VIII-B3 Possible Future Directions/Solutions
Developing computationally efficient XAI techniques is a primary requirement at the moment. The metrics must also adhere to the computational efficiency criteria. Fast computations are required to handle the massive amounts of streaming data expected to enter B5G networks.
Extensive research needs to be carried out on creating proper metrics to quantify and detect any anomalies in the explanations. This would be imperative for realizing fully automated network management (ZSM) with XAI. Some studies propose metrics to quantify the quality of ML model interpretations mathematically. However, real-world applications also demand that the sociological and psychological aspects of human users be taken into account. Therefore, more collaborative studies are needed.
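One candidate building block for such metrics, sketched below under purely synthetic assumptions, is surrogate fidelity: the fraction of inputs on which an interpretable model reproduces the decisions of the black box it is meant to explain. It deliberately says nothing about the human-facing quality of an explanation, which is exactly the sociological and psychological gap noted above.

```python
# Toy fidelity metric: how often does an interpretable surrogate reproduce the
# black-box decisions it is meant to explain? Data and models are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(3000, 6))
y = ((X[:, 0] * X[:, 1] > 0) & (X[:, 2] > -0.5)).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)

black_box = GradientBoostingClassifier(random_state=3).fit(X_tr, y_tr)
bb_labels = black_box.predict(X_tr)        # the surrogate learns the model, not the data

for depth in (2, 4, 6):
    surrogate = DecisionTreeClassifier(max_depth=depth, random_state=3).fit(X_tr, bb_labels)
    fidelity = np.mean(surrogate.predict(X_te) == black_box.predict(X_te))
    print(f"depth={depth}: fidelity to black box = {fidelity:.3f}")
```

In this toy setting, fidelity typically grows with surrogate depth, making the depth at which fidelity saturates one crude, quantifiable operating point.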
Interpreters must be carefully adjusted to filter out any sensitive information in the generated explanations, to avoid violating privacy and intellectual property laws before the explanations are conveyed to stakeholders. Not only must the information be communicated to the appropriate party at the right time, it must also be easily accessible to users. Bespoke explanations for different user groups would require automated classifiers to identify those user groups. Also, creative methods are needed to make the explanations clear to the general public.
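A minimal sketch of such filtering is shown below, assuming a toy attribution output and an invented redaction policy: features marked as sensitive are aggregated into a single term for end users while remaining visible to an auditor role. Real deployments would additionally need the user-group classification and the policy governance mentioned above.

```python
# Hedged sketch: per-audience filtering of a feature-attribution explanation.
# Feature names, values, and the redaction policy are invented for illustration.
SENSITIVE = {"subscriber_imsi", "cell_site_revenue", "vendor_model_version"}

attribution = {
    "subscriber_imsi": 0.31,
    "uplink_volume": 0.24,
    "cell_site_revenue": 0.18,
    "handover_failures": 0.15,
    "vendor_model_version": 0.12,
}

def filter_explanation(attribution, audience):
    if audience == "auditor":                      # full view for accountable parties
        return dict(attribution)
    # End users: drop sensitive features but keep their aggregate contribution.
    public = {k: v for k, v in attribution.items() if k not in SENSITIVE}
    hidden_mass = sum(v for k, v in attribution.items() if k in SENSITIVE)
    public["other_internal_factors"] = round(hidden_mass, 3)
    return public

print(filter_explanation(attribution, "end_user"))
print(filter_explanation(attribution, "auditor"))
```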
Explanations generated at one level of the telecommunication structure should be adequately communicated to the other related levels. Protocols and dedicated communication channels might need to be developed to enable real-time network maintenance.
###
VIII-C New XAI security issues in B5G
####
VIII-C1 Lessons Learned
Explainability, be it through transparency or post-hoc explanation, can compromise the security of ML models and ML-based systems. It simplifies adversarial ML attacks against models by deobfuscating their black-box decision process. It adds a new requirement that is detrimental to security and complicates the design of secure ML systems. Finally, it adds a new function and component to ML systems that is itself vulnerable to attacks and can be used as a vector to compromise the whole system.
The new security issues introduced by XAI have several detrimental effects on B5G networks. First, XAI can compromise the security of B5G network services relying on ML. For instance, network attacks can more easily evade anomaly detection systems, which leads to the compromise of B5G system components.
Second, XAI can hinder the automated management of B5G networks. Management systems based on ML can be poisoned and evaded more easily, e.g., to exhaust system and network resources through poor management decisions.
Third, XAI constrains the design of ML applications meant to be run on resource-limited edge devices, reducing their performance and security and making them more vulnerable to attacks.
Finally, if XAI methods are not well secured, they can have the reverse of their intended effect. A compromised explanation can enable biased and unfair decisions in critical scenarios and reduce trust in ML and AI systems. This will hinder the adoption of B5G services that heavily rely on ML and AI.
####
VIII-C2 Remaining Research Questions
Addressing a few research questions would help improve the security of XAI methods and their dependent services in B5G networks.
First, we need to provide a definite answer to whether ML models have a different level of vulnerability to adversarial ML attacks depending on the attacker's knowledge about them: are ML models more vulnerable to adversarial ML attacks in a white-box rather than in a black-box setting? If this is not the case, the white-boxing of ML models through explainability would not increase their vulnerability to attacks.
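The toy comparison below, using a deliberately simple linear model, synthetic data, and an arbitrary perturbation budget, shows why the question matters: an attacker who can read the model weights (as full explainability would effectively allow) flips far more decisions than one restricted to random perturbations of the same magnitude. It illustrates the white-box/black-box gap in the simplest possible setting and says nothing, by itself, about realistic B5G models.

```python
# Toy white-box vs. black-box comparison on a linear model with synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 20))
y = (X @ rng.normal(size=20) > 0).astype(int)
model = LogisticRegression(max_iter=1000).fit(X, y)

pos = X[model.predict(X) == 1][:500]       # samples currently classified as positive
eps = 0.3                                  # identical L_inf budget for both attackers
w = model.coef_[0]

wb_adv = pos - eps * np.sign(w)                                # attacker knows the weights
bb_adv = pos + eps * np.sign(rng.normal(size=pos.shape))       # same budget, random directions

print("white-box flip rate:", np.mean(model.predict(wb_adv) == 0))
print("black-box flip rate:", np.mean(model.predict(bb_adv) == 0))
```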
Second, we need to quantify the exact trade-off between the several ML properties required for trustworthy AI: what is the impact (positive or negative) of different ML properties on each other? The answer must primarily address the explainability vs. security trade-off, but should also explore additional properties: performance, fairness, accountability, etc.
Finally, more research is needed to identify the vulnerabilities of different XAI methods to attacks: How vulnerable are XAI methods to attacks? By knowing their level of security and the weaknesses of different XAI methods, we can make them more secure by designing or developing defenses against the discovered attacks.
####
VIII-C3 Possible Future Directions/Solutions
Improving security often comes through an offensive approach against vulnerable systems. The development of new attacks against XAI methods and black-box ML models would be the basis to answer the first and third questions that we identified. Furthermore, the provision of theoretical guarantees for the robustness of XAI methods and white-box ML models against attacks would set an upper bound on the vulnerability of these components.
The empirical study of trade-offs between ML model properties would be a starting point to increase the knowledge on this issue. Then, developing equations based on theoretical analysis to quantify these trade-offs would be very helpful to support the design of trustworthy ML systems.
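As a very small example of what such an empirical study could start from, the sketch below traces how predictive accuracy changes as a decision tree is allowed to grow, with leaf count used as an admittedly crude proxy for how hard the model is to read. The data is synthetic, and a real study would need richer models, datasets, and interpretability measures.

```python
# Crude empirical starting point for the explainability-vs-performance trade-off:
# accuracy of decision trees of growing depth, with leaf count as a rough proxy
# for how difficult the model is to read. Data is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=4000, n_features=20, n_informative=8, random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5)

for depth in (1, 2, 4, 8, None):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=5).fit(X_tr, y_tr)
    print(f"max_depth={str(depth):>4}  leaves={tree.get_n_leaves():>4}  "
          f"test accuracy={tree.score(X_te, y_te):.3f}")
```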
###
VIII-D XAI for the Security of B5G Use Cases
####
VIII-D1 Lessons Learned
In the B5G world, numerous futuristic applications are expected to be experienced by the general public. Smart cities, healthcare, Industry 4.0/5.0, smart grid 2.0, and XR are some areas that B5G would heavily influence. Smart cities comprise numerous services that depend on ultra-low latency, high bandwidth, and ultra-reliable communication in various intelligent applications such as intelligent transportation, waste disposal, energy/water distribution, construction, UAV-assisted communication, home automation, and many other public services. However, AI-enabled solutions in many of these services need to be accountable and trustworthy to protect users from damage to their lives, property, and finances. Remote healthcare services have been gaining popularity in the recent past. IoMT/IoHT-based big data analysis helps realize emergency support services, diagnosis, and care through B5G networks. These services would require additional transparency and trustworthy AI/ML models to protect the integrity and confidentiality of the core services. Data leaks and adversarial attacks on AI/ML-based healthcare systems can be tracked down with the help of XAI, and proper amendments can be made to prevent future attacks. Also, in Industry 5.0, XAI would facilitate smooth collaboration between humans and smart production facilities. Attacks on manufacturing plants could be detrimental to both finances and the safety of the workers. AI/ML-based attack surfaces in automated manufacturing processes can be made more resilient through a more comprehensible and explainable use of AI/ML models. AI/ML in smart grids enables real-time monitoring against attacks. In such instances, these decisions need to be accountable, and users should be informed with detailed explanations to avoid any false accusations. For that, the model outcomes need to be presented in a trustworthy and reasonable manner to the users. Immersive technologies such as XR would engulf users in virtual reality. Intelligent decision-making in these platforms is susceptible to attacks such as DDoS, social engineering, and deep fakes. Explainable AI methods will be required to identify the impacts on those systems and educate users on the precautions.
####
VIII-D2 Remaining Research Questions
Autonomous vehicles are going to revolutionize travel and transportation in smart cities. During a critical situation such as an accident, the cause could be anything from a fault in the device to an external attack. Therefore, AI/ML in autonomous driving requires further accountability for its ultra-low latency decision-making systems. Creating interpreters with sufficient computation power while maintaining super-fast communication with central servers will be a design-side challenge. In many other IoT-based (IIoT/IoHT/IoMT) ML applications, the performance vs. interpretability trade-off will be a challenge many designers have to face.
Methods to measure how successful XAI-based decisions are remain in their infancy. In AI/ML applications in smart grids, the losses caused by attacks or tapping are easily quantifiable, but the effectiveness of XAI methods is not. Some general metrics are currently available, but application-specific metrics need more focus from the research community.
Manufacturing plants use many proprietary designs and concepts that must be kept confidential. If proper measures are not taken, such proprietary information used in AI/ML systems could be exposed through the interpreters that generate explanations.
In smart healthcare, many users would divulge sensitive information to receive remote diagnostics. Information gathering would be done across a massively heterogeneous set of devices, making it complicated to access the explanations and understand them. Service providers will therefore have to allocate resources to actively make explanations more accessible to their users. Similarly, in wide collaborative XR applications, conveying proper non-technical explanations will play a key role in building trust and resolving ambiguities that could otherwise hinder the right to explanation established by regulations such as the EU General Data Protection Regulation (GDPR) [[310](#bib.bib310)].
####
VIII-D3 Possible Future Directions/Solutions
It is vital to maintain a delicate balance between performance and interpretability. Researchers must focus on computationally simpler but effective explainability methods for more performance-demanding AI models such as DNNs and DRL. More definitive metrics should be in place to measure the effectiveness of XAI outputs and to establish a balance between performance and explainability. General metrics for XAI outputs will not be sufficient to capture the exact picture; use case-specific metrics need to be developed in collaboration with experts for each particular use case. For example, inputs from experts in smart-grid stability control, data scientists, and sociology/psychology experts should be considered when developing metrics for smart grid stability. New and creative communication techniques need to be implemented to make the public aware of correct and narrowed-down explanations. This will require additional resources allocated by the service providers as well. At the same time, measures must be taken to identify direct or indirect exposure of proprietary information in the AI/ML models used by the service providers.
###
VIII-E Research projects and standards
####
VIII-E1 Lessons Learned
According to our research, several EU-funded research projects have already started to address the challenges on the path toward 6G, and many major ICT companies are issuing announcements about internal programs focusing on 6G security. Outside of the EU, e.g., in the USA, the Next G Alliance has started to work on 6G security and privacy through private sector-led efforts. Most of the projects listed in Section [VII](#S7 "VII XAI B5G Security Projects and Standardization ‣ A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions") aim to guarantee the next-generation network's trustworthiness and security. It is encouraging to see that approaches beyond the classical ones, for example XAI-based techniques for securing future networks, play a significant role in most of the research projects reviewed in this paper. Undoubtedly, global standards and new regulations will play a key role in developing and deploying 6G networks. Effective and timely standardization is key to the fast and seamless adoption of new technologies, including 6G. Several Standards Developing Organizations (SDOs), e.g., ETSI, IETF, IEEE, 3GPP, NIST, and ISO, are expected to work on, or are already working on, 6G security and privacy in a much tighter way than they did for 5G, as 6G aims to merge different technologies already standardized by SDOs. AI/ML mechanisms will have to become the main elements in 6G to achieve superior security, e.g., automating decision-making processes and accomplishing a zero-touch approach.
####
VIII-E2 Remaining Research Opportunities
The analysis of recently released standards (2019-2021) in B5G security shows that most SDOs acknowledge the importance of AI/ML-based security solutions for B5G networks. However, only a few standardization documents mention the role of XAI, which we think is very significant, as the current lack of explainability leads to doubts about the credibility and feasibility of AI/ML-based implementations built to combat security threats. There are, however, working groups, such as the IEEE XAI WG - Standard for XAI [[311](#bib.bib311)], that aim to standardize the mandatory and optional requirements and constraints that need to be satisfied for an AI method, algorithm, application, or system to be recognized as explainable.
####
VIII-E3 Possible Future Directions
The European Partnership on Smart Networks and Services (SNS) established Europe’s strategic research and innovation roadmap. The initiative is based on an EU contribution of €900 million over the next seven years. The objective is to enable European players to develop R&I capabilities for 6G systems and lead markets for 5G and 6G infrastructure, which will serve as the foundation for digital and green transformation. The SNS work program will be the basis for calls for proposals to be launched in early 2022. Concerning standards, we believe that projects under calls such as ICT-52-2020 are expected to provide valuable inputs to standardization bodies fostering the development of advanced 6G solutions. From the perspective of 3GPP, there are features and capabilities from existing 5G solutions that require full specification and are expected to be released at the end of 2023. The migration from legacy and existing proprietary radio protocols toward 3GPP protocols will take 5-10 years. AI/ML-assisted security still needs further development to respond to new security threats introduced by the dynamicity of 6G services and networks.
IX Conclusion
--------------
This survey examines and evaluates the potential of using XAI to improve the accountability and resilience of AI-based security in beyond-5G communication. The study begins by laying out the background of current XAI technical concepts and their potential in the B5G era. The paper then presents an exhaustive assessment of the most cutting-edge AI, XAI, and B5G technologies and security aspects, including a threat model and taxonomy. Technical aspects regarding the role of XAI in B5G security are thoroughly examined across the major enablers of B5G, including IoT, RAN, edge, core, backhaul, E2E slicing, and network automation. This is followed by a detailed discussion of trending AI-based use cases of B5G and the potential of XAI in ensuring the trustworthiness of those networks. Apart from the favorable prospects of XAI, we also bring to light the new security issues that AI explanations introduce to future network infrastructure.
Later in this paper, we focus on the active research initiatives to build and standardize B5G-specific technologies, involving both researchers and industry practitioners. Finally, this paper highlights the challenges and limitations in B5G AI security and the future research directions needed to fill those gaps. In conclusion, this survey acts as a stepping stone for researchers, industry partners, and other stakeholders to gain a holistic understanding of the potential of XAI to improve accountability and resilience in the security applications of the B5G era.
Acknowledgement
---------------
This work is partly supported by the European Union under the SPATIAL project (Grant No. 101021808), the Academy of Finland under the 6Genesis project (Grant No. 318927), and Science Foundation Ireland under the CONNECT Phase 2 project (Grant No. 13/RC/2077\_P2).
Fact Finding: Simplifying the Circuit (Post 2)
This is the second post in the Google DeepMind mechanistic interpretability team’s investigation into how language models recall facts. This post focuses on distilling down the fact recall circuit and models a more standard mechanistic interpretability investigation. This post gets in the weeds, we recommend starting with post one and then skimming and skipping around the rest of the sequence according to what’s most relevant to you. We assume readers of this post are familiar with the mechanistic interpretability techniques listed in this glossary.
Introduction
Our goal was to understand how facts are stored and recalled in superposition. A necessary step is to find a narrow task involving factual recall and understand the high level circuit that enables a model to do this task.
We focussed on the narrow task of recalling the sports played by different athletes. As discussed in post 1, we particularly expected facts about people to involve superposition, because the embeddings of individual name tokens is normally insufficient to determine the sport, so the model must be doing a boolean AND on the different tokens of the name to identify an athlete and look up their sport. Prior work calls this phenomenon ‘detokenisation’ and suggests it involves early MLP layers, and uses significant superposition.
Why focus on athletes’ sports rather than factual recall in general? We believe that in mechanistic interpretability, it’s often useful to first understand a narrow instance of a phenomenon deeply, rather than insisting on being fully general. Athletes’ sports was a nice task that gave us lots of examples per attribute value, and our goal was to understand at least one example where superposition was used for factual recall, rather than explaining factual recall in general. We conjecture that similar mechanisms are used for recalling other classes of fact, but this wasn’t a focus of our work.
Set-up
To understand fact localisation, we studied Pythia 2.8B’s next to
"In honor of System Administrator Appreciation Day, this is a post to thank Trike Apps for creating & maintaining Less Wrong. A lot of the time when they are mentioned on Less Wrong, it is to complain about bugs or request new features. So this is the time of year: thanks for everything that continues to silently go right!"
The excerpt provided comes from a user post on LessWrong.
Can you elaborate on the specific contributions of Trike Apps to the creation and maintenance of Less Wrong that warrant such appreciation?
Trike Apps, a software development company, played a crucial role in the creation and maintenance of Less Wrong, a community blog and forum focused on discussion of cognitive biases, philosophy, psychology, economics, rationality, and artificial intelligence among other topics.
In terms of creation, Trike Apps was responsible for developing the platform's infrastructure. This included designing a user-friendly interface, implementing features that facilitated discussion and engagement, and ensuring the platform's stability and security. They also incorporated systems for content management, user registration, and moderation, among other essential functionalities.
As for maintenance, Trike Apps has been involved in ongoing site updates, bug fixes, and feature enhancements. They handle server management to ensure the site remains live and accessible, and they troubleshoot any technical issues that arise. They also likely work on optimizing the site's performance and ensuring it can handle the volume of traffic and discussion that occurs.
Moreover, Trike Apps probably plays a role in implementing user feedback into tangible site improvements. This could involve adding new features requested by users or making changes to existing ones to improve the user experience.
Overall, the work of Trike Apps is integral to the operation of Less Wrong. Their behind-the-scenes efforts keep the platform running smoothly, allowing the community to focus on the rich discussions and content that Less Wrong is known for. Therefore, it's understandable that the community would express their appreciation for Trike Apps' contributions.
Reflecting on your previous responses, what potential challenges or limitations could Trike Apps face in maintaining and improving Less Wrong?
Maintaining and improving a platform like Less Wrong comes with several potential challenges and limitations.
1. **User Experience (UX) Design:** Creating a user-friendly and intuitive interface that caters to a diverse user base can be challenging. It requires a deep understanding of user behavior and preferences, and the ability to translate this understanding into effective design.
2. **Scalability:** As the platform grows, ensuring it can handle an increasing number of users and content without compromising performance can be a significant challenge. This requires ongoing optimization of the platform's architecture and resources.
3. **Security:** Protecting the platform and its users from cyber threats is a constant challenge. This includes safeguarding user data, preventing unauthorized access, and mitigating potential security vulnerabilities in the platform's code.
4. **Feature Development:** Balancing the development of new features with the maintenance of existing ones can be tricky. It requires prioritizing based on user needs, resource availability, and the platform's strategic direction.
5. **Community Management:** Moderating user-generated content to maintain a respectful and productive discussion environment can be complex and time-consuming. It requires effective moderation policies and tools, and sometimes delicate decision-making.
6. **Technical Debt:** Over time, quick fixes and shortcuts can accumulate into what's known as technical debt. This can make the platform more difficult and costly to maintain and update, and can eventually lead to more significant problems if not properly managed.
7. **Resource Constraints:** Like any organization, Trike Apps likely faces constraints in terms of time, budget, and personnel. These constraints can limit what they're able to accomplish at any given time.
Despite these challenges, the appreciation expressed by the Less Wrong community suggests that Trike Apps has been successful in managing them to a large extent.
How does the concept of technical debt impact the long-term development and maintenance of a platform like Less Wrong?
Technical debt refers to the implied cost of additional rework caused by choosing a quick or easy solution now instead of using a better approach that would take longer. In the context of a platform like Less Wrong, technical debt can have several impacts on its long-term development and maintenance.
1. **Code Maintainability:** Over time, as quick fixes and shortcuts accumulate, the codebase can become more complex and harder to understand. This can make it more difficult to maintain the platform and implement new features, as developers have to spend more time understanding and working around the existing code.
2. **System Performance:** Technical debt can lead to suboptimal performance. For example, if shortcuts were taken in the design of the platform's database or server architecture, this could lead to slower load times or other performance issues as the platform scales.
3. **Security Risks:** Technical debt can also introduce security vulnerabilities. If proper security practices were not followed in the interest of saving time or effort, this could leave the platform exposed to potential attacks.
4. **Resource Allocation:** The need to address technical debt can divert resources away from new feature development or other improvements. This can slow down the platform's progress and make it harder to respond to user needs or market trends.
5. **Quality Assurance:** The presence of technical debt can make it harder to test the platform and ensure its quality. As the codebase becomes more complex, it can become more difficult to predict how changes will affect the system, increasing the risk of bugs or other issues.
In order to manage technical debt, it's important to regularly dedicate time and resources to refactoring or improving the codebase. This can help to keep the platform maintainable, secure, and able to continue evolving to meet the needs of its users.
Transforming myopic optimization to ordinary optimization - Do we want to seek convergence for myopic optimization problems?
While reading some papers, I found a way to take a [myopic](https://www.lesswrong.com/tag/myopia) optimization problem L
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
In this post, I describe a transformation that takes a myopic optimization problem $L$ and constructs a non-myopic optimization problem $L^*$ whose optima match the fixed points of the original problem $L$. This seems potentially of interest for myopia research; it could provide a way to think about what optimization of $L$ might give you, a tool for analyzing the trajectories of myopic optimization, and *maybe* also a better-behaved alternative to myopic optimization.
A common form for myopic optimization, TLDR
-------------------------------------------
In order to explain the transformation, I first need to introduce a common form for myopic optimization problems. The next section goes into detail with applying the common form, but I suspect many people here are sufficiently comfortable with myopic optimization problems that they might be able to make do with the shorter explanation.
In ordinary optimization, we might minimize a loss function $L(m)$. That is, the optimized model $m_n$ would be given by the equation $m_n = \operatorname{argmin}_m L(m)$.
This is non-myopic; every influence of $m$ on the result of $L$ gets optimized. To introduce a myopic equivalent, we need to distinguish between two influences of $m$: the parts that we want to optimize, and the parts that we want to be myopic about.
We can do that by introducing two separate parameters, $L(x, y)$, with the first parameter being the parameter we optimize, and the second parameter being the parameter we are myopic about. Given a starting model $m_n$, we can then myopically update this model, yielding $m_{n+1} = \operatorname{argmin}_m L(m, m_n)$.
This process can be iterated, but it has many poor properties, e.g. it is not guaranteed to converge, and it is unclear what its results will be. This post is about providing an alternative to that process.
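To make the iteration concrete, here is a minimal sketch of the update rule (the quadratic bivariate loss and the grid-search stand-in for $\operatorname{argmin}$ are purely illustrative assumptions, not anything from a real training setup):

```python
import numpy as np

def L(m, d):
    # Illustrative bivariate loss: the first argument is the model being optimized,
    # the second argument is the "deployed" model we are myopic about.
    return (m - 0.5 * d) ** 2 + 0.1 * d ** 2

candidates = np.linspace(-2.0, 2.0, 401)  # crude stand-in for "argmin over all models"

m = 1.0  # starting model m_0
for step in range(10):
    # Myopic update: optimize the first argument while holding the second fixed.
    m = candidates[np.argmin([L(c, m) for c in candidates])]
    print(step, round(float(m), 4), round(float(L(m, m)), 4))
# Here the iteration happens to converge to the fixed point m = 0, but as
# discussed below, convergence is not guaranteed in general.
```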
A common form for myopic optimization, long version
---------------------------------------------------
Let's consider an example of myopic optimization.
Suppose you have some predictive model $m$, which gets trained according to some loss function $L(m, x)$, requiring a distribution of data points $x \sim D$. Essentially, it's trained to minimize $L(m) = \mathbb{E}_{x \sim D}[L(m, x)]$.
If you deploy this model in the real world, then you would hope that it has some sort of real-world effect (if nothing else, then bringing you more ad money); otherwise, what's the point? But if it has a real-world effect, then in particular it might also have a real-world effect on the distribution D that it encounters.
This might mean that the distribution $D$ it was trained on is no longer good for getting it to optimal performance; we should train it on the new distribution instead. But since the new distribution happened as a result of the model, it is really a function $D(m)$ of the model. So when we retrain on the data $D(m_0)$ we got after deploying $m_0$, that yields a *new* model $m_1$ that might induce yet a new training dataset $D(m_1)$.
If we consider iterating this endlessly, what will happen? Will it be like optimizing the following expression?
$$L(m) = \mathbb{E}_{x \sim D(m)}[L(m, x)]$$
If it is like optimizing this expression, then that seems somewhat concerning. Because this expression includes the dependency of the dataset on the model, optimizing this expression would, as a side-effect, optimize the dataset to become more predictable. *How* one would go about making the real world more predictable is a bit unclear, but considering all of the AI safety concerns for even seemingly innocuous loss functions, this is not something we want to get involved with.
So is this what would get optimized? No; this is where the myopia comes in. When optimizing the model, we don't "look ahead" and think about how the dataset will change in response to the new model. In machine learning terms, we don't propagate the gradients through D(m).
Rather, it makes more sense to specify the optimization problem using a bivariate loss function, taking as parameters both the model $m$ that is being optimized, and the model $d$ that is currently deployed:
$$L(m, d) = \mathbb{E}_{x \sim D(d)}[L(m, x)]$$
In each iteration of optimization, we then change the deployed model to be optimized for the first parameter. This can be expressed as the equation:
$$d_{\text{new}} = \operatorname{argmin}_m L(m, d_{\text{old}})$$
This optimization method has a number of bad properties. For instance, it is not guaranteed to converge. With ordinary non-myopic optimization, $L$ always decreases as you optimize it. As long as $L$ is bounded, this means that you cannot keep coming up with new models that decrease it, and so you must eventually reach the best model.^1
However, in myopic optimization, while $L(d_{\text{new}}, d_{\text{old}})$ is always guaranteed to be less than or equal to $L(d_{\text{old}}, d_{\text{old}})$, it might be that $L(d_{\text{new}}, d_{\text{new}})$ is greater than $L(d_{\text{old}}, d_{\text{old}})$. This can happen in "rock-paper-scissors"-like situations, where the optimal solution always changes away from the deployed solution. Hence it might be nice to have an alternative approach.
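To illustrate the failure mode, here is a hedged toy sketch (the 3×3 payoff table is my own rock-paper-scissors stand-in, not drawn from any real training setup), where the myopic best response keeps cycling and the self-play loss never improves:

```python
import numpy as np

# L[i, j] is the loss of deploying model i while the training data comes from
# deployed model j (lower is better). Each option is beaten by the next one.
L = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 2.0],
              [2.0, 0.0, 1.0]])

d = 0  # index of the currently deployed model
for step in range(6):
    d_new = int(np.argmin(L[:, d]))  # myopic update: best response to the current d
    print(step, d, "->", d_new, "self-play loss:", L[d_new, d_new])
    d = d_new
# The deployed model cycles 0 -> 1 -> 2 -> 0 -> ... and L(d, d) stays at 1.0 forever.
```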
Characterizing the convergent points
------------------------------------
So, the general form of myopic optimization can be considered to be:
$$m_{n+1} = \operatorname{argmin}_m L(m, m_n)$$
And this is a terrible equation.^2 So we want to improve it.
To make it easier to reason about, let's make an assumption: It has converged to a fixed point. More precisely, we have a value $m_\infty = \operatorname{argmin}_m L(m, m_\infty)$; that is, there is no way to change the deployed model to perform better on the training data that comes from the deployed model.
An immediate implication of this is that $L(m_\infty, m_\infty) = \min_m L(m, m_\infty)$, and so $L(m_\infty, m_\infty) - \min_m L(m, m_\infty) = 0$. But notice that since $L(x, x) \geq \min_m L(m, x)$, the left-hand side in the previous equation is always nonnegative. So this means that the equation holds only when the function $L^*(x) = L(x, x) - \min_m L(m, x)$ is minimized.
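As a sanity check, here is a toy sketch of the transformed objective (using the same illustrative quadratic loss as in the earlier sketch); its minimum sits exactly at the fixed point of the myopic update:

```python
import numpy as np

def L(m, d):
    return (m - 0.5 * d) ** 2 + 0.1 * d ** 2  # illustrative bivariate loss

candidates = np.linspace(-2.0, 2.0, 401)

def L_star(x):
    # L*(x) = L(x, x) - min_m L(m, x): always nonnegative, zero exactly at fixed points.
    return L(x, x) - min(L(c, x) for c in candidates)

x_opt = candidates[np.argmin([L_star(x) for x in candidates])]
print(float(x_opt), float(L_star(x_opt)))  # ~0.0 and ~0.0: the fixed point of the myopic update
```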
This $L^*$ function is in some ways much better behaved. It tells us what the fixed point is, rather than just telling us a training procedure. However, in other ways it is less well-behaved:
* The trajectory followed when doing myopic optimization on $L$ won't actually be the same as the trajectory followed when optimizing $L^*$ by gradient descent. In fact, often you might not be able to directly optimize on $L^*$ at all; for instance, the part that you are myopic about is often an emergent property of the real world, rather than a clean math expression.
* $L^*$ contains an inner $\min$ expression, which is hard to evaluate, and whose general results might be hard to predict.
* $L^*$ does non-myopic optimization on $L$, in the $L(x, x)$ term, and simply hopes that this gets cancelled out by the $-\min_m L(m, x)$ term. This is not necessarily the case; if the myopic optimization fails to converge, then optimization on $L^*$ might converge to something that has very different consequences from what would happen if we directly optimized $L$.
These limitations together seem like a good reason not to apply this transformation indiscriminately. That said, it seems like there is often a good reason, at least when it comes to assessing the safety of $L$: I would be concerned about a myopic optimization problem $L$ whose transformed version $L^*$ is unsafe; while we would not be guaranteed to reach the unsafe region, it seems like the optimization of $L$ increases the likelihood of entering the unsafe optimum of $L^*$ a lot.
Is this useful?
---------------
I got the concept from [this paper](https://arxiv.org/abs/2103.12685), where a basically-identical concept from game theory called "duality gap" is investigated for machine learning. In order to evaluate the usefulness of this transformation for myopic optimization in AI alignment and agency foundations, we need to know more about its properties. I have tried searching around a bit in the game theory literature, but I haven't immediately seen anything that is important for our purposes. Though I also haven't had time to search much, due to my day job.
Fundamentally, it seems to me that there are some potential use-cases for it:
* Evaluate the likely convergence properties of a myopic optimization; if myopic optimization on $L$ tends to decrease $L^*$, then it seems likely to converge, otherwise it seems likely to diverge.
* Evaluate the possible properties of myopic optimization - it seems reasonable to think about optima for $L^*$ to get plausible answers for properties that myopic optimization on $L$ could give (though *how* reasonable depends on things like whether the optimization will converge).
* Replace myopic optimization with non-myopic optimization; $L^*$ may possibly be nicer than $L$. At least in some cases.
When I consider applying the transformation to some obscure myopic optimization problems that I have thought about, then it seems to have some nice properties, but also some potentially concerning ones. I will likely make a followup post where I explain them. But the myopic optimization problems that I am considering are kind of convoluted, so they don't really fit here.
I thought it might be a good idea to leave this open to LessWrong; are there some myopic optimization problems that you have been thinking about? And if so, what effects would this de-myopization transformation have on them?
(Should we call it "duality gap transformation", considering its origins? Or something else?)
---
1. This is actually a lie. This is not sufficient to prove convergence, and this setup also doesn't resemble at all how optimization gets done in practice. In reality convergence is a messy issue even for non-myopic optimization. However, it's a useful simplification.
2. It is terrible partly because in the limiting form, it is a fixpoint equation, which is hard to reason about. There are variants that are even more terrible; machine learning people might usually phrase it as $m_{n+1} = m_n - \nabla_{m_n} L(m_n, \operatorname{stopgrad}(m_n))$. This uses a mysterious function $\operatorname{stopgrad}$ with the property that $\operatorname{stopgrad}(x) = x$, but $\frac{d}{dx}\operatorname{stopgrad}(x) = 0$ - which of course cannot be a real function.
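For what it's worth, the machine-learning phrasing corresponds directly to the stop-gradient operation that autodiff libraries expose; here is a minimal PyTorch-flavoured sketch of repeated myopic gradient steps (the quadratic loss is again an illustrative assumption):

```python
import torch

def L(m, d):
    return (m - 0.5 * d) ** 2 + 0.1 * d ** 2  # illustrative bivariate loss

m = torch.tensor(1.0, requires_grad=True)
opt = torch.optim.SGD([m], lr=0.1)

for _ in range(200):
    loss = L(m, m.detach())  # .detach() plays the role of stopgrad: no gradient flows
    opt.zero_grad()          # through the second argument, so each step is myopic
    loss.backward()
    opt.step()

print(m.item())  # drifts towards m = 0, the fixed point of the myopic update
```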
---
*Thanks to Justis Mills for proofreading.*
|
51992a4f-ab1a-4acf-9ec6-89c8310a8ca7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Novum Organum: Introduction
In light of its value as a rationalist text, its historical influence on the progress of science, and its general expression of the philosophy and vision which guides LessWrong 2.0, the moderation team has seen fit to publish Novum Organum as a LessWrong sequence. (Image: the engraved title page.)
Quotes in this post are from Francis Bacon's Novum Organum in the version by Jonathan Bennett presented at www.earlymoderntexts.com
----------------------------------------
In 1620, Francis Bacon’s Novum Organum was published. Though the work might be succinctly described as Bacon’s views on empiricism and inductivism, it is far more than a list of experimental steps to be followed. It is an entire epistemology and philosophy—possibly the epistemology and philosophy which underlay the Scientific Revolution.
Bacon was damning of the science of his time and preceding centuries. He saw the pseudo-empirical syllogistic paradigm as deeply flawed and incapable of making progress.
> If those doctrines ·of the ancient Greeks· hadn’t been so utterly like a plant torn up by its roots, and had remained attached to and nourished by the womb of nature, the state of affairs that we have seen to obtain for two thousand years—namely the sciences stayed in the place where they began, hardly changing, not getting any additions worth mentioning, thriving best in the hands of their first founders and declining from then on—would never have come about. (74) [1]
He also believed that the unaided human mind was incapable of getting far on its own.
> Nearly all the things that go wrong in the sciences have a single cause and root, namely: while wrongly admiring and praising the powers of the human mind, we don’t look for true helps for it. (9)
> Not much can be achieved by the naked hand or by the unaided intellect. Tasks are carried through by tools and helps, and the intellect needs them as much as the hand does. (2)
> When the intellect of a sober, patient, and grave mind is left
|
0699d9c5-7e4c-4d88-8b54-9f3d1f6d4623
|
trentmkelly/LessWrong-43k
|
LessWrong
|
My Greatest Achievement
[warning: this is another gooey self-disclosure in the spirit of Alicorn and lukeprog’s recent posts, except more so.]
According to my submissions summary, my first top-level post dates back to February 18th, 2011. (I don’t know exactly when I started commenting, but I don’t feel like clicking through dozens of pages of old comments to find out.) By then, it had already been a month since I embarked on the most deliberate and probably the most difficult act of self-modification that I’ve ever attempted, and definitely the one I’m proudest of. At this point, I think I can say confidently that I’ve fixed one of the most irrational facets of my behaviour. A few people here know quite a bit about this, namely molybdenumblue.
[Aside: some people might find this article very personal. I’ve never had a strong privacy instinct, and since in this case it’s all my personal information*, and I talk openly about most of it with my friends and family, I have no qualms about publishing it. If it makes you uncomfortable, please feel free to stop reading.]
My New Year’s resolution for 2011, which I clearly remember making in my parents’ kitchen, was to experiment more with relationships. I had been in 2 relationships by my 19th birthday: one at age 14 with a much older recent immigrant to Canada who went to my high school, and one at age 17 with a boy who I worshipped when I was 12. Neither of them led anywhere interesting, in either an emotional or a physical sense. After breaking up with my second boyfriend, I was about ready to give up and start calling myself asexual. But since I had very little data to go on, an experiment seemed like a good idea.
I chose my experimental subject carefully: Billy, a boy I met through competitive lifeguarding, who was my age and seemed to share some of my values; he was in good shape, anyway; and whom I found moderately attractive. (I’ve been attracted to girls in the past, but that seemed like a more complicated experiment to set up.) I fou
|
54842566-a8c7-4e87-a0be-ca3c00d2e033
|
trentmkelly/LessWrong-43k
|
LessWrong
|
‘Clearly’ covers murky thought
Why do we point out that statements we are making are obvious? If a statement is actually obvious, there should rarely be reason to point the statement out, let alone that it is obvious. Its obviousness should be obvious. It seems that a person often emphasizes that a statement is obvious when they would prefer not to be required to defend it. Sometimes this is just because it is obvious once you know their field but takes a lot of effort to explain to someone who doesn't, but often it's just that the explanation is not obvious to them.
But saying ‘obviously’ is too obvious. A better word is ‘clearly’. ‘Clearly’ sounds transparent and innocent. In reality it is a more subtle version of ‘obviously’.
I have noticed this technique used well in published philosophy from time to time. If getting to your conclusion is going to require assuming your conclusion is true, ‘clearly’ suggests to the reader that they not think over that step too closely.
For instance Michael Huemer in Ethical Intuitionism, while arguing that moral subjectivism is wrong, for the purpose of demonstrating that ethical intuitionism is right:
> Traditionally, cultural relativists have been charged with endorsing such statements as,
>
> If society were to approve of eating children, then eating children would be good.
>
> which is clearly false.
Notice that ‘false’ seemingly means that it is false according to his intuition; the thing which he is trying to argue for the reliability of. If he just said ‘which is false’, the reader may wonder where, in a book on establishing a basis for ethical truth, this source of falsity may have popped from. ‘Clearly’ says that they needn’t worry about it.
|
8b07c49f-9717-4d26-85d8-6ec4721276bb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A Neural Network undergoing Gradient-based Training as a Complex System
In Thought Experiments Provide a Third Anchor, Jacob Steinhardt wrote about the relative merits of a few different reference classes when it comes to reasoning and making predictions about future machine learning systems. He refers to these reference classes as ‘anchors’ and writes:
> There are many other anchors that could be helpful for predicting future ML systems... I am most excited about better understanding complex systems, which include biological systems, brains, organizations, economies, and ecosystems and thus subsume most of the reference classes discussed so far. It seems to me that complex systems have received little attention relative to their germaneness to ML.
I, too, recently became curious about this complex systems ‘anchor’ and this post is an attempt to get down some of my thoughts about it.
In starting to think about this, there are two questions that I would want to know the answer to:
1. Is there a well-defined 'field' of complex systems research that has its own techniques and conceptual frameworks, and such that the central objects of study - i.e. complex systems themselves - can be robustly characterized? And
2. Are deep neural networks complex systems according to that characterization?
Perhaps the best thing that we could hope for would be that the answer to both questions were 'yes', and in such a way that there existed techniques, methods, insights etc. that have been applied to other complex systems and that we might be able to translate into a form that were directly applicable to the study of neural networks. But at the very least, a positive answer to both questions would presumably suggest that broader insights and observations about how to think about complex systems may help us direct our inquiries about future ML systems, by helping to guide us towards good questions, appropriate research directions or better predictions.
The answer to the first question might be 'no'. i.e. It might be the case that informally
|
216dd01e-5bb7-4a9e-9950-89d9b8998116
|
StampyAI/alignment-research-dataset/aisafety.info
|
AI Safety Info
|
What technical research would be helpful for governance?
Technical AI research could [help](https://forum.effectivealtruism.org/posts/BJtekdKrAufyKhBGw/ai-governance-needs-technical-work) governance efforts by:
1. Making AI risk more concrete for policymakers by illustrating our inability to control existing AI systems. For example, [the White House](https://www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf) only started taking concerns about AI seriously once there were real-life examples of capable AI systems behaving undesirably. This research would not necessarily expand the capabilities of AI, but rather show alignment failures which occur in real, but not especially dangerous, systems.
2. Enabling policies that would require further technological research to enforce, such as ways to monitor AI systems.
3. Identifying which types of AI systems pose the greatest risk and should be prioritized by regulatory efforts.
4. Identifying and/or developing types of AI systems which are easier to align and encouraging developers to prioritize those systems.
Some specific research which could be useful for the above includes:
1. Discovering simple and scalable techniques both for evaluating AI systems' alignment and for identifying their capabilities to help people make reasonable decisions around deployment.
2. Developing concrete proposals which can be adopted by government agencies when designing regulations.
3. Developing [methods](https://arxiv.org/abs/2303.11341) for identifying illegal data centers and detecting secret development of advanced systems.
4. Developing cryptographic techniques which would allow monitoring without requiring companies to reveal trade secrets.
|
231f660d-5b01-4b9d-ad38-b491c7110f3d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Cognitive Bias of AI Researchers?
I find it inconvenient that many AI discussions circle around "agents", “environments” and “goals”. These are non-mathematical words, and by using this vocabulary we are anthropomorphizing the natural world's phenomena.
While an "agent" may well be a set of interacting processes, that produce emergent phenomena (like volition, cognition, action), it is not a fundamental and pragmatic mathematical concept. The truly fundamental pragmatic mathematical concepts may be:
(1) states: a set of possible world states (each being a set of conditions).
(2) processes: a set of world processes progressing towards some of those states.
If so, how could we avoid that anthropomorphic cognitive bias in our papers, and discussions?
Would the (1), (2) terms be a good alternative for our discussions, and to express the ideas in most AI research papers? E.g., Bob is a process, and Alice is a process,... and they collectively are progressing towards some desired convergent state, defined by process addition.
What fundamental concepts would you consider to be a better alternative to talk formally about the AI domain?
|
2633cac0-c543-47ab-b0a1-23bcc9385486
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Othello-GPT: Reflections on the Research Process
*This is the third in a three post sequence about interpreting Othello-GPT. See [the first post](https://neelnanda.io/othello) for context.*
*This post is a detailed account of what my research process was, decisions made at each point, what intermediate results looked like, etc. It's deliberately moderately unpolished, in the hopes that it makes this more useful!*
The Research Process
====================
This project was a personal experiment in speed-running doing research, and I got the core results in ~2.5 days/20 hours. This post has some meta level takeaways from this on doing mech interp research fast and well, followed by a (somewhat stylised) narrative of what I actually did in this project and why - you can see [the file `tl_initial_exploration.py` in the paper repo](https://github.com/likenneth/othello_world/blob/master/mechanistic_interpretability/tl_initial_exploration.py) for the code that I wrote as I went (using VSCode's interactive Jupyter mode).
I wish more work illustrated the actual research process rather than just a final product, so I'm trying to do that here. This is approximately just me converting my research notes to prose, see [the section on process-level takeaways](https://www.alignmentforum.org/posts/TAz44Lb9n9yf52pv8/othello-gpt-reflections-on-the-research-process#Takeaways_on_doing_mech_interp_research) for a more condensed summary of my high-level takeaways.
The meta level process behind everything below is to repeatedly be confused, plot stuff a bunch, be slightly less confused, and iterate. As a result, there's a lot of pictures!
Takeaways on doing mech interp research
---------------------------------------
**Warning**: I have no idea if following my advice about doing research fast is actually a good idea, especially if you're starting out in the field! It's much easier to be fast and laissez faire when you have experience and an intuition for what's crucial and what's not, and it's easy to shoot yourself in the foot. And when you skimp on rigour, you want to make sure you go back and check! Though in this case, I got strong enough results with the probe that I was fairly confident I hadn't entirely built a tower of lies. And generally, beware of generalising from one example - in hindsight I think I got pretty lucky on how fruitful this project was!
* **Be decisive**: Subjectively, by far the most important change was suppressing my perfectionism and trying to be bold and decisive - make wild guesses and act on them, be willing to be less rigorous, etc.
+ If I noticed myself stuck on doing the best or most principled thing, I'd instead try to just do *something*.
- Eg I wanted to begin by patching between two similar sequences of moves - I couldn't think of a principled way to change a move without totally changing the downstream game, so I just did the dumb thing of patching by changing the final move.
- Eg when I wanted to try intervening with the probe, I couldn't think of a principled way to intervene on a bunch of games or to systematically test that this worked, or exactly how best to intervene, so I decided to instead say "YOLO, let's try intervening in the dumbest possible way, by flipping the coefficient at a middle layer, on a single move, and see what happens"
+ Pursue the hypothesis that seems "big if true"
- Eg I decided to try training a linear probe on just black moves after a hunch that this might work given some suggestive evidence from [interpreting neuron L5N1393](https://www.alignmentforum.org/posts/TAz44Lb9n9yf52pv8/othello-gpt-reflections-on-the-research-process#Neuron_L5N1393)
+ Notice when I get stuck in a rabbit hole/stop learning things and move on
- Eg after training a probe I found it easy to be drawn into eg inspecting more and more neurons, or looking at head attention patterns, and it worked much better to just stop and move on
+ Be willing to make quick and dirty hacks
- Eg when I wanted to look at the max activating dataset examples for neurons, I initially thought I'd want to run the model on thousands to millions of games, to get a real sample size. But in practice, just running the model on a batch of 100 games and taking the top 1% of moves by neuron act in there, worked totally fine.
* **The virtue of narrowness - depth over breadth**: A common mistake in people new to mech interp is to be reluctant to do projects that feel "too small" - eg interpreting a single neuron or head rigorously. And to think that something is interesting only if it's automatable and scalable. But here, being willing to just dive in to patching on specific examples, targeting specific neurons that stood out, etc worked great, and ultimately pointed me to the general principles underlying the model (namely, that it thought in mine vs their's)
* **Gain surface area**: I felt kinda stuck when figuring out where to start. Early on, by far the most useful goal was to gain **surface area** on the problem - to just dive into anything that seemed interesting, play around, and build intuitions about the moving parts of the model and how it was behaving, without necessarily having a concrete goal beyond understanding and following my curiosity.
+ A good way of doing this was to play around with concrete examples, and in particular to patch between similar examples and analyse where the differences came from.
* **Work on algorithmic problems**: Empirically, algorithmic problems are just way cleaner and more tractable to interpret - there's a ground truth, it's easier to reason about, and it's easy to craft synthetic inputs. This is a double-edged sword, since they're also less interesting and less true to real models, but it's very convenient for goodharting on "research insight per unit hour"
* **Domain knowledge is super useful!**
+ Spending 30-60 minutes at the start playing against the [eOthello](https://www.eothello.com/) AI was really valuable for building intuitions (I went in knowing absolutely nothing about Othello), though I got carried away by how fun it was and could have got away with less time.
- Eg that the start and end of the game are weird, that you *occasionally* need to pass but can basically ignore it, that a single piece can change colour *many* times, including from a move pretty far away, and even dumb things like "you can take diagonally, and this happens a lot"
+ Having experience doing mech interp helped a ton - being better able to generate hypotheses, figure out what's interesting, reach for the right techniques, and interpret results
- In particular, having stared at the mechanical structure of a transformer and what kinds of algorithms are and are not natural to implement remains super useful for building intuitions. (I try to convey a bunch of these in [my walkthrough of A Mathematical Framework](https://www.youtube.com/watch?v=KV5gbOmHbjU))
* **Good tooling is crucial**: If you want to do research fast, tight feedback loops are key, and having good, responsive tooling that you understand well is invaluable, even for a throwaway project on a tight deadline. I've created [an accompanying colab](https://neelnanda.io/othello-notebook) with most of my tools, and I hope they're useful! (Sorry for the jankiness)
+ [TransformerLens](https://github.com/neelnanda-io/TransformerLens) is a library I made for mech interp of language models, with the explicit goal of making exploratory research easier, and it worked great here! Eg for easily caching model activations, and for trying out different patching and interventional experiments.
- In general, it's far easier to use software you've written yourself, but I've heard good things from other people trying to use TransformerLens!
+ Building good visualisations was pretty valuable - especially visualising model logits as a heatmap on the board, and converting a set of moves into a plot of the state of the board. Though I probably spent ~4 hours on making beautiful plotly visualisations (and debugging plotly animations...), and could have gotten away with much less.
+ Basic software engineering - noticing the code I kept writing and converting it to functions (eg dumb stuff around changing moves from nice written notation, to the model's vocabulary, to the format used to compute board state; or intervening with the probe; or converting a set of moves to a list of valid moves at each turn, etc)
* **MLPs > attention**: I went into this expecting it to be way easier to interpret attention heads/patterns, but I actually didn't make much headway there, but did great with MLP neurons.
+ I think the difference was that I didn't really know how to think about the sequence of prior moves (and thus which moves were attended to), while I did know how to think about the current board state and thus about valid output logits (and direct logit attribution) and about the max activating dataset examples.
+ And the fact that there were seemingly a bunch of monosemantic neurons, rather than a polysemantic mess of superposition
* **Activation patching is great**: Models are complex and full of many circuits for different tasks - even on a single input, likely many circuits are relevant to completing the task! This makes it difficult to isolate out anything specific, and thus is hard to be concrete. [Activation patching/causal tracing](https://dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J#z=qeWBvs-R-taFfcCq-S_hgMqx) is a great way to get around this - you set up two similar inputs that differ in one crucial detail, and you patch specific activations between the two and analyse what changes (eg whether an output logit changes). Because the two inputs are so similar, this controls for all the stuff you don't care about, and lets you isolate out a specific circuit.
Getting Started
---------------
There was first a bunch of general figuring stuff out and getting oriented - learning how Othello worked, reading the existing code, loading in the data and games, figuring out how to convert a sequence of moves into a board state and valid moves, getting everything into a format I could work easily with (eg massive tensors of game moves rather than a list of lists) and making pretty plotting functions. I also decided to filter out weird edge cases I didn't really care about, like games of less than 60 moves, or with passes in them. In hindsight, it would have been better to do some of this later when I had a clearer picture of what did and did not need optimisation, but \*shrug\*.
The most useful bits of infrastructure I set up (both now, and later) were:
* Convenience functions to convert moves between 1 to 60 (inputs and outputs of the model, since center squares can't be played), 0 to 63 as the actual indexes, and A0 to H7 as the printable labels (a rough sketch of these is given just after this list)
* Plotting function to plot either a single board state (and valid moves), and an animation showing a whole game with a slider (the latter turned out to be a deep rabbit hole of Plotly animation bugs though...)
* Creating a single tensor of all games stacked together (in my case, I took all 4.5M games, since it fit into my RAM - 10,000 would have been more than enough)
* Running and caching the model activations on 100 games, so I could use this as an easy reference without needing to run the model every time (eg to look at neurons with big average activations)
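As a concrete illustration of the first bullet, here is a rough sketch of what such conversion helpers might look like (hedged: the exact conventions in the real repo may differ - in particular, I am assuming the four centre squares are the ones excluded from the 1-to-60 vocabulary):

```python
# The four centre squares (D3, D4, E3, E4 in A0-H7 notation) start the game occupied,
# so they never appear as moves and are excluded from the model's 1..60 move vocabulary.
CENTER = {27, 28, 35, 36}
PLAYABLE = [i for i in range(64) if i not in CENTER]  # length 60

def token_to_square(t: int) -> int:
    """Model token (1..60) -> board index (0..63)."""
    return PLAYABLE[t - 1]

def square_to_token(s: int) -> int:
    """Board index (0..63) -> model token (1..60)."""
    return PLAYABLE.index(s) + 1

def square_to_label(s: int) -> str:
    """Board index (0..63) -> printable label like 'C0'."""
    return "ABCDEFGH"[s // 8] + str(s % 8)

assert square_to_label(token_to_square(square_to_token(16))) == "C0"
```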
I didn't have a clear next step (my main actual idea was taking one of the author's pre-trained non-linear probes and trying to interpret how that worked, but this seemed like a pain), so I tried to start gaining surface area on what was going on by just trying shit. It's easy to interpret the output logits, and so looking at how each model component directly affects the logits is a good hook to get some insight in any model.
The first actual research I tried was inputting an arbitrary game, and looking at the [direct logit attribution](https://dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J#z=disz2gTx-jooAcR0a5r8e7LZ) of each layer's output on a few of the moves. Eyeballing things, there was a clearish trend where MLP5, MLP6 and Attn7 mattered a lot, other parts were less important. Interestingly, MLP7 (naively, the obvious place to start, since it can *only* affect the output logits) mattered comparatively little. Example graph below:

Being more systematic supported this. This is a bit of a weird problem, because there are many (and a variable number of!) valid next moves, rather than a single correct next token, so I tried to both look at the difference in *average* direct logit attribution for the correct/incorrect next logit, and the difference in *min/max* contribution. The former doesn't capture bits that disambiguate between borderline correct and borderline incorrect moves, since most moves will be obviously bad, and the latter is misleading because you're taking the max and min over large-ish sets, which is always sketchy (eg it gives misleading results for random noise) - you get a weird spectrum from early to late moves because there are more options in the middle. I also saw that layer 7 acts very differently at the first and last move, presumably because those are easier special cases, but decided this was out of scope and to ignore it for now. I tried breaking the attention layers down into separate heads, but didn't have much luck.
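For reference, the per-layer quantity being compared here is roughly the following (a hedged sketch assuming a TransformerLens-style `HookedTransformer` and cache, and ignoring the final LayerNorm, which you would fold in for exact numbers):

```python
def per_layer_direct_logit_attribution(model, cache, pos: int, token: int):
    """Direct contribution of each layer's attention and MLP output to one output
    logit, at one sequence position. Comparing this across legal vs illegal moves
    gives the 'average difference' metric described above."""
    unembed_dir = model.W_U[:, token]  # (d_model,) direction that writes to this logit
    contributions = {}
    for layer in range(model.cfg.n_layers):
        attn_out = cache["attn_out", layer][0, pos]  # (d_model,)
        mlp_out = cache["mlp_out", layer][0, pos]    # (d_model,)
        contributions[f"attn{layer}"] = (attn_out @ unembed_dir).item()
        contributions[f"mlp{layer}"] = (mlp_out @ unembed_dir).item()
    return contributions
```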


I was then kinda stuck. I tried plotting attention patterns and staring at them, looking for interesting heads, and didn't get much traction (in part because I didn't really get how to interpret moves!). I did see some heads which only attended to moves of the same parity as the current one, which was my first hint for what was going on (not that I noticed lol).

Patching
--------
Part of why interpreting models is hard is because they're full of different circuits that combine to answer a question. But each circuit will only activate on certain inputs, and each input will likely require a bunch of circuits, making it a confusing mess.
[Activation patching](https://dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J#z=qeWBvs-R-taFfcCq-S_hgMqx) is a great way to cut through this! The key idea is to set up a careful counterfactual, where you have two inputs, a clean input and a corrupted input, which differ in one key detail. Ideally, the difference between any activation on the clean and corrupted run will purely represent that key detail. You can then iterate over each activation and patch them from the clean run to the corrupted run to see which can most recover the clean output (or from the corrupted run to the clean run to see which can most damage the clean output), and hopefully, a few activations matter a lot and most don't. This can let you isolate which activations actually matter for this detail!
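In code, the basic patching loop looks roughly like this (a hedged sketch assuming a TransformerLens-style `HookedTransformer`; the metric is simply the logit of the move we care about, as in the C0 experiment below):

```python
from transformer_lens import utils

def patch_resid_at_layer(model, clean_tokens, corrupted_tokens, layer: int,
                         pos: int, target_token: int) -> float:
    """Run the corrupted input, but overwrite one layer's residual stream at one
    position with its value from the clean run, and return the target logit."""
    _, clean_cache = model.run_with_cache(clean_tokens)
    hook_name = utils.get_act_name("resid_post", layer)

    def hook_fn(resid, hook):
        resid[:, pos, :] = clean_cache[hook_name][:, pos, :]
        return resid

    patched_logits = model.run_with_hooks(
        corrupted_tokens, fwd_hooks=[(hook_name, hook_fn)]
    )
    return patched_logits[0, pos, target_token].item()
```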
I knew that I *wanted* to try patching something, but sadly it was kind of a mess, because an input needs to be a sequence of legal moves. I wanted two sequences which had similar board states but whose moves differed in some key places, so I could track down how board state was computed.
I gave up on this idea because it seemed too hard, and instead decided to be decisive and do the dumb thing of changing just the most recent move! I picked an arbitrary game, took the first 30 moves, and changed the final move from H0 to G0 to get a corrupted input. This changed cell C0 (I index my columns at zero not one, sorry) from legal to illegal. This meant I could take the C0 logit as my patching metric - it's high on clean, low on corrupted, and so it can tell me how much my patched activation tracks "the way that the most recent move being G0 rather than H0 is used to determine that C0 is illegal" (or vice versa). This is a very niche thing to study, but it's a start! And the virtue of narrowness says to favour deep understanding of something specific, over aiming for a broad understanding but not knowing where to start.

The first thing to try is patching each layer's output - I found that MLP5, MLP6 and MLP0 mattered a lot, Attn7 and MLP4 mattered a bit. The rest didn't matter at all, so I could probably ignore them!
I now wanted to narrow things down further, and got a bit stuck again - I needed to refine "this layer matters" into something more specific. I had the prior that it's way easier to understand attention than MLPs, so I tried looking at the difference in attention pattern from clean to corrupted for each head (from each source token to the final move), but I couldn't immediately see anything interesting (though in hindsight, I see alternating bands of on and off!):

I then just tried looking at the difference in direct logit attribution (to C0) between clean and corrupted for every neuron. This looked *way* more promising - most neurons were irrelevant, but a few mattered a ton. This suggested I could mostly ignore everything except the neurons that mattered. This gave me, like, 10 neurons to understand, which was massive progress! Bizarrely, MLP7 had two neurons, which both mattered a ton, but near exactly cancelled out (+2.43 v -2.47).

### Tangent on Analysing Neurons
Finding that there were clean and interpretable neurons was exciting, and I got pretty side tracked looking at neurons in general - no particular goal, just trying to gain surface area and figure out what was up. Looking at the neuron means across 100 games on the middle moves (`[5:-5]`) showed that there were some major outliers, and that layer 6 and 7 were the biggest by far. (The graph is sorted, because it's really hard to read graphs with 2000 points on the x axis with no meaningful ordering!)

I then tried looking at the direct logit attribution of the top neurons in each layer (top = mean > 0.2, chosen pretty arbitrarily), and they seemed super interpretable - it was visually extremely sparse, and it looked like many neurons connected to a single output logit. Layer 7 had some weird neurons that seemed specialised to the first move. Aside: I *highly* recommend plotting heatmaps like this with 0 as white - makes it much easier to read positive and negative things visually (this is the plotly color scheme `RdBu`, `px.imshow(tensor, color_continuous_scale='RdBu', color_continuous_midpoint=0.0)` works to get these graphs)

### Back to patching
I then ran out of steam and went back to patching. I now tried to patch in individual heads and look at their effect on the C0 logit (now normalised such that 1 means "fully recovered" and 0 means "no change"). Head L7H0 was the main significant one, but I couldn't get much out of it.

I then tried patching in individual neurons - doing all 16000 would be too slow, so I just took the neurons with highest activation difference and patched in those - activation difference had some big outliers. I first tried resample ablating (replacing a clean neuron with corrupted and seeing what breaks) and found that none were necessary (this isn't super surprising - neurons are small, and dropout incentivises redundancy), though the layer 7 neurons matter a bit (they directly affect the logits, so this makes sense!)

But when I tried causal tracing (replacing a corrupted neuron with its clean copy) I got some striking results - several neurons mattered a bunch, and L5N1393 was enough to recover 75% on its own?! (Notably, this was a significantly bigger effect than just its direct logit attribution)

Neuron L5N1393
--------------
This was a sufficiently wild result that I pivoted to focusing on that neuron (the 1393rd in layer 5).
My starting goal was the incredibly narrow question "figure out why patching in just that neuron into the corrupted run is such a big deal". Again, focus on understanding a narrow question deeply and properly, even against a flinch of "this is too narrow and there's no way it'll generalise!".
To start with, I cached all activations on the run with a corrupted input but a clean neuron L5N1393, and started comparing the three. The obvious place to start was direct logit attribution of layers - MLP7 went from not mattering in either clean or corrupted to being significant?!

Digging into the MLP7 neurons and their direct logit attribution, I found that both clean and corrupted had a single, dominant, extremely negative neuron. But in the patched run, *both* were significantly suppressed. My guess was that this was some dropout solving circuit firing, and thus that MLP7 was mostly to deal with dropout - I subjectively decided this didn't seem that interesting and moved on. Interestingly, this is similar to how [negative name movers](https://dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J#z=dB4DvjOO33yi9fUT_cnzkQYb) in the Indirect Object Identification circuit act as backups - they significantly suppress the model's ability to do the task, but if you ablate the positive name movers they'll significantly reduce their negative effect to help compensate. (There it's likely a response to attention dropout)

It also significantly changed some layer 6 neurons, which seemed maybe more legit:

At this point I decided to pivot to just trying to interpret neuron L5N1393 itself, because it seemed interesting. And at this point I was pretty convinced that the model had interpretable (and maybe monosemantic?) neurons.
Looking at the direct logit attribution of the neuron, it strongly boosted C0 and slightly boosted D1 (one step diagonally down and right)

The next easiest place to start was max activating dataset examples - I initially felt an impulse to run the model across tens of thousands of games to collect the actual top dataset examples, but I realised this would be a headache and probably unnecessary. I had run the model for 50 games (thus 3000 moves) and decided to just inspect the neuron on the top 30 (1%) of moves there.
I manually inspected a few, and then decided to aggregate the board state across the top 30 moves. I decided to try averaging "is non-empty", the actual board state (ie 1 for black, 0 for empty, -1 for white) and the flipped board state (ie 1 for mine, 0 for empty, -1 for their's) - this was kinda janky, since I wanted to distinguish "even probability of being white or black" and "always empty", but it seemed good enough to be useful.


I don't recall exactly how I had the idea for a flipped board state - I think a combination of doing a heatmap of which games/moves the neuron fired on and seeing that it wasn't a consistent parity between games, but it *did* alternate within a game. And inspecting the top few examples, and seeing that some had black at D1 and white at E2, and some had white at D1 and black at E2 (and already having identified that part of the board as important). I spent a bit of time stuck on figuring out *how* best to aggregate a flipped board state, before realising I could do the stupid thing of using a for loop to generate an alternating tensor of 1s and -1s and just multiply by it.
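The "stupid thing" in question is roughly this (a hedged sketch; `board_states` is assumed to be a `(num_moves, 8, 8)` array with 1 for black, -1 for white and 0 for empty, indexed by the move at which the neuron activation was measured):

```python
import numpy as np

def to_flipped_state(board_states: np.ndarray) -> np.ndarray:
    """Convert black/white board states into 'mine vs theirs' relative to whose
    turn it is: multiply even-indexed moves by +1 and odd-indexed moves by -1
    (or vice versa, depending on the convention for who moves first)."""
    num_moves = board_states.shape[0]
    alternating = np.array([1 if i % 2 == 0 else -1 for i in range(num_moves)])
    return board_states * alternating[:, None, None]

# Averaging to_flipped_state(...) over a neuron's top-activating moves gives the
# "flipped board state" aggregate shown above.
```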

But now I had the flipped board state, it was pretty clear that this was the right way to interpret the neuron - it was literally 1 in D1 and -1 in E2 (here 1 meant "their's", because I hadn't realised I'd need a good convention). I looked at the max activating dataset examples for a few other neurons (taking the top 10 by norm in each layer) and saw a few others that were clean in the flipped state but not in the normal state, and this was enough to generate the idea that the relevant colour was "next" vs "previous" player (I only realised after the fact that "my" vs "their" colour was a cleaner interpretation, thanks to Chris Olah for this!)
This is literally written in my notes as (immediately after I briefly decided to go and do a deep dive on neuron L6N1339 instead lol)
> Omg idea! Maybe linear probes suck because it's **turn** based - internal repns don't actually care about white or black, but training the probe across game move breaks things in a way that needs smth non-linear to patch
At this point my instincts said to go and validate the hypothesis properly, look at a bunch more neurons, etc. But I decided that in the spirit of being decisive and pursuing "big if true" hypotheses (and because at this point I was late for work) I'd just say YOLO and try training a linear probe under this model.
I'm particularly satisfied with this decision, since I felt a *lot* of perfectionism, that I would have normally pursued, and ignoring it in the interests of speed went great:
* I'd never trained a probe before, and figured there's a bunch of standard gotchas I needed to learn - eg how to deal with imbalanced class sizes (corners are normally empty), setting up good controls etc
* Getting a probe working on the flipped board state (across all moves) - this seemed like more of a pain to code so I just decided to do even and odd moves
* Figuring out the right layer to probe on - I just picked layer 6 since it was late enough to feel safe, and I didn't want to spend time figuring out the right layer to probe on
* I had no idea what the right optimiser or hyper-parameters for training a probe are (I just guessed AdamW with `lr=1e-4,wd=1e-2,b1=0.9,b2=0.99` and batch size 100 which seemed to work; a rough sketch of the resulting setup is given just after this list)
* Getting accuracy to work for the probe was a headache (it involved a bunch of fiddling with one hotting the state in the right way)
* Getting good summary statistics of how the run was going - I decided to just have overall loss per probe, and then loss per probe on an arbitrary square (I think C2)
* Figuring out how to get good performance on probe training - there's a bunch of optimisations around stopping the model once it gets to the right layer, turning off autodiff on the model parameters, etc, I just decided to not bother and do the simple thing that should work.
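For concreteness, here is a rough sketch of what such a probe-training setup might look like (hedged: the shapes, the class labelling, and the `get_resid_batch` stand-in for loading cached layer-6 residuals are my own illustrative assumptions, not the actual code from the repo):

```python
import torch

d_model, n_squares, n_classes = 512, 64, 3  # classes: empty / theirs / mine (assumed)
probe = torch.nn.Parameter(0.02 * torch.randn(d_model, n_squares, n_classes))
optimizer = torch.optim.AdamW([probe], lr=1e-4, weight_decay=1e-2, betas=(0.9, 0.99))

def get_resid_batch(batch_size: int):
    # Stand-in for loading cached layer-6 residuals and per-square board labels.
    resid = torch.randn(batch_size, 59, d_model)
    labels = torch.randint(0, n_classes, (batch_size, 59, n_squares))
    return resid, labels

def probe_loss(resid, labels):
    # resid: (batch, pos, d_model); labels: (batch, pos, n_squares) of class ids
    logits = torch.einsum("bpd,dsc->bpsc", resid, probe)
    return torch.nn.functional.cross_entropy(
        logits.reshape(-1, n_classes), labels.reshape(-1)
    )

for step in range(1000):
    resid, labels = get_resid_batch(batch_size=100)
    loss = probe_loss(resid, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```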
I somehow managed to write training code that was bug free on the first long training run, and could see from the training curves that my probes were obviously working! From here on, things felt pretty clear, and I found the results in the [initial section on analysing the probe](https://www.alignmentforum.org/posts/nmxzr2zsjNtjaHh7x/actually-othello-gpt-has-a-linear-emergent-world#Probing)!
|
058f7ec2-43a7-4991-8f39-2d273aa1988d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Willpower and diet: advice?
Since the beginning of September, I have been attempting a gluten-free diet. (I was tested and I'm not celiac, but eating wheat, and especially highly refined-flour foods like cookies, tends to make me bloated and give me diarrhea.) I also wanted to lose 5 to 10 pounds. I'm not overweight per se, but I possess a roll of belly fat that I (and my boyfriend!) would prefer to say goodbye to.
The first little while went well, and almost effortlessly. I was at the cottage with my family, exercising moderately (about 2 kilometers of swimming daily) and eating my mom's excellent-tasting cooking. After about one and a half weeks, I had lost 5 pounds, although I suspect a lot of it was water retention/bloating, since I had been eating wheat and various junk foods all summer.
Then school started, and with it my 16-hour days away from home, including one marathon session where I leave my parents' house at 5:00 am on Monday morning, sleep at a friend's house, and don't come back again until 11 pm on Tuesday, only to work 5:30 am to 4 pm at the pool the next day.
In short: my diet is quickly deteriorating and I have regained those 5 pounds. I find it next to impossible to stay gluten-free, since I have to be incredibly organized and pack everything from home, and inevitably it isn't quite enough for 16 hours. (I eat 3000 calories a day or more when not dieting. According to a metabolism study I participated in last year, this is actually how much I burn per day with the amount of exercise I get. If I eat much less, say less than 2000 for one day or less than 2,500 for several consecutive days, I get dizzy and weak when I exercise, which is really irritating.) I would probably be able to lose weight more easily if I exercised LESS, but this would a) kind of defeat the point, and b) be difficult because exercise is my main stress control method.
Willpower is a big issue, which is weird and annoying because usually it's not a big issue for me. Especially when I'm sleep deprived
|
45b089f0-4375-4f09-819f-b04d9b261cbe
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Bayes-Up: An App for Sharing Bayesian-MCQ
Inspired by Lê Nguyên Hoang's post on Bayesian Examination, I have been developing (as a hobby) a new app called Bayes-Up (available at: bayes-up.web.app). The app is now in a state where it is working well enough to be shared with others. In this post I list a few things you can do with it, because I expect that it will spark some interest within the community.
* Test and improve your calibration: Bayes-Up uses a collection of good quality trivia questions from the open trivia database. The main point of the app is that you can find a list of multiple choice quizzes, answer questions by assigning probabilities to each of the possible choices, receive a score based on a quadratic proper scoring rule (a sketch of such a rule appears after this list) and later find statistics about the quality of your calibration. A good place to start is the quiz from the book Factfulness by Hans Rosling that I included in the app.
* Create quizzes and upload them. There already exists a small number of calibration training apps. Bayes-Up differs mainly because it allows you to upload and share your own quizzes. This can solve one of the problems of calibration apps, which is creating good-quality content (quizzes / questions). If you are a teacher and want your students to develop more metacognitive skills and intellectual honesty, or if you are organizing workshops on probability calibration, Bayes-Up can make it easier for you. To add a quiz, simply write it in a spreadsheet, export it as a CSV file and upload it in Bayes-Up.
* Recommend UI improvements, new features, report bugs, or contribute to the implementation. Only very little feedback has been collected so far and certainly a lot could be improved with little effort. The code of the app is open source and hosted on github.
* Analyse the data from Bayes-Up users. So far about 30'000 questions have been answered by about 1'300 users since the end of December 2019. The collected data is available at this link and will likely grow in the following months. Simple questions t
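The exact formula Bayes-Up uses is not spelled out above; as an illustrative sketch only, a standard quadratic (Brier-style) proper scoring rule over the assigned probabilities might look like this:

```python
def quadratic_score(probs, correct_index):
    """Quadratic (Brier-style) proper scoring rule for one multiple-choice
    question. probs: probabilities over the choices (should sum to 1);
    correct_index: index of the right answer. Higher is better, and the rule
    is 'proper': reporting your true beliefs maximizes your expected score.
    NOTE: this is an illustrative formula, not necessarily the exact one
    Bayes-Up implements."""
    outcome = [1.0 if i == correct_index else 0.0 for i in range(len(probs))]
    return 1.0 - sum((p - o) ** 2 for p, o in zip(probs, outcome))

# Example: 4-choice question, 70% on the right answer, 10% on each other choice.
print(quadratic_score([0.7, 0.1, 0.1, 0.1], correct_index=0))  # 0.88
```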
|
b5fc6f86-6e3c-4453-92e6-ccd9f50aae31
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Evidence for Connection Theory
Connection Theory (CT) is the original philosophy underpinning Leverage Research, a research think tank that has worked with the effective altruism movement and the rationality community in the past on community-building and existential risk reduction. CT was developed by Leverage's executive director, Geoff Anders. Since there are few if any other publicly available online resources for understanding or evaluating CT, I thought I would share this document.
|
cdb9feed-9150-452d-ba9e-2d27b03357d2
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Why is the impact penalty time-inconsistent?
I [showed in a previous post](https://www.lesswrong.com/posts/w8QBmgQwb83vDMXoz/dynamic-inconsistency-of-the-inaction-and-initial-state) that impact penalties were time-inconsistent. But why is this? There are two obvious possibilities:
1. The impact penalty is inconsistent because it includes an optimisation process over the possible polices of the agent (eg when defining the Q-values in the [attainable utility preservation](https://arxiv.org/abs/1902.09725)).
2. The impact penalty is inconsistent because of how it's defined at each step (eg because the stepwise inaction baseline is reset every turn).
It turns out the first answer is the correct one. And indeed, we get:
* If the impact penalty is not defined in terms of optimising over the agent's actions or policies, then it is **kinda** time-consistent.
What is the "kinda" doing there? Well, as we'll see, there is a subtle semantics vs syntax issue going on.
Time-consistent rewards
-----------------------
In attainable utility preservation, and other impact penalties, the reward is ultimately a function of the current state $s_t$ and a counterfactual state $s'_t$.
For the initial state and the initial state inaction baselines, the state $s'_t$ is determined independently of anything the agent has actually done. So these baselines are given by a function $f$:
* $f(\mu, A, s_t, s'_t)$.
Here, $\mu$ is the environment and $A$ is the set of actions available to the agent. Since $s'_t$ is fixed, we can re-write this as:
* $f_{s'_t}(\mu, A, s_t)$.
Now, if the impact measure is a function of $s_t$ and $\mu$ only, then it is... a reward function, with $R(s_t) = f_{s'_t}(\mu, s_t)$. Thus, since this is just a reward function, the agent is time-consistent.
Now let's look at the stepwise inaction baseline. In this case, $s'_t$ is determined by an inaction rollout from the prior state $s_{t-1}$. So the impact measure is actually a function of:
* $f(\mu, A, s_t, s_{t-1})$.
Again, if $f$ is in fact independent of $A$, the set of the agent's actions (including for the rollouts from $s_{t-1}$), then this is a reward function - one that is a function of the previous state and the current state, but that's quite common for reward functions.
So again, the agent has no interest in constraining its own future actions.
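As a toy illustration of this claim (a simplified, assumed setting, not any specific penalty from the literature), here is what folding a stepwise-inaction penalty into an ordinary reward over $(s_{t-1}, s_t)$ can look like; note that the agent's action set never appears:

```python
# Toy 1-D gridworld sketch (assumed setting). Once the baseline state s'_t is
# computed without optimising over the agent's own actions or policies, the
# impact penalty is just part of an ordinary reward function R(s_{t-1}, s_t).

def dynamics(state, action):
    """Environment transition: position moves by the action (None = no-op)."""
    return state + (0 if action is None else action)

def penalised_reward(prev_state, state, goal=5, scale=1.0):
    """Reward with a stepwise-inaction impact penalty folded in.
    The baseline is an inaction rollout from prev_state; the agent's action
    set never appears, so this is just a function of (s_{t-1}, s_t)."""
    baseline = dynamics(prev_state, None)      # counterfactual state s'_t
    task_reward = 1.0 if state == goal else 0.0
    penalty = scale * abs(state - baseline)    # crude 'deviation' measure
    return task_reward - penalty

print(penalised_reward(prev_state=3, state=4))  # -1.0 (no goal, deviation 1)
print(penalised_reward(prev_state=4, state=5))  #  0.0 (goal reward 1, penalty 1)
```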
Semantics vs syntax
-------------------
Back to "kinda". The problem is that we've been assuming that actions and states are very distinct objects. Suppose that, as in the [previous post](https://www.lesswrong.com/posts/w8QBmgQwb83vDMXoz/dynamic-inconsistency-of-the-inaction-and-initial-state), an agent at time $t-1$ wants to prevent itself from taking action $S$ (go south) at time $t$. Let $A$ be the agent's full set of actions, and $A_{-S}$ the same set without $S$.
So now the agent might be time-inconsistent, since it's possible that:
$f(\mu, A, s_t, s_{t-1}) \neq f(\mu, A_{-S}, s_t, s_{t-1})$.
But now, instead of denoting "can't go south" by reducing the action set, we could instead denote it by expanding the state set. So define $s^{-S}_t$ as the same state as $s_t$, except that taking the action $S$ is the same as taking the action $\emptyset$. Everything is (technically) independent of $A$, so the agent is "time-consistent".
But, of course, the two setups, restricted action set or extended state set, are almost completely isomorphic - even though, according to our result above, the agent would be time-consistent in the second case. It would be time-consistent in that it would not want to change the actions of its future self - instead it would just put its future self in a state where some actions were in practice unobtainable.
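A rough way to see the near-isomorphism concretely (again a toy, assumed setting, with the action set and transition function as illustrative placeholders): removing the action $S$ and moving to a state where $S$ behaves as the no-op produce exactly the same reachable successor states:

```python
# Toy sketch of the "restricted action set" vs "extended state set" point.
ACTIONS = ["N", "S", "E", "W", "noop"]

def step(state, action):
    moves = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0), "noop": (0, 0)}
    dx, dy = moves[action]
    return (state[0] + dx, state[1] + dy)

# Option 1: restrict the action set. The agent literally cannot pick "S".
restricted_actions = [a for a in ACTIONS if a != "S"]

# Option 2: extend the state space. Tag the state so that "S" behaves as "noop".
def step_extended(tagged_state, action):
    state, south_disabled = tagged_state
    if south_disabled and action == "S":
        action = "noop"
    return (step(state, action), south_disabled)

# The two setups reach exactly the same successor states from (0, 0):
succ_restricted = {step((0, 0), a) for a in restricted_actions}
succ_extended = {step_extended(((0, 0), True), a)[0] for a in ACTIONS}
print(succ_restricted == succ_extended)  # True
```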
So it seems that, unfortunately, it's not enough to be a reward-maximiser (or a utility maximiser) in order to be time-consistent *in practice*.
|
80811a80-9ffd-4373-9140-3d2e85704906
|
StampyAI/alignment-research-dataset/special_docs
|
Other
|
Goertzel’s GOLEM implements evidential decision theory applied to policy choice
I’ve written about the question of which decision theories describe the behavior of approaches to AI like the “Law of Effect”. In this post, I would like to discuss GOLEM, an architecture for a self-modifying artificial intelligence agent described by Ben Goertzel ([2010](http://goertzel.org/GOLEM.pdf); [2012](https://www.youtube.com/watch?v=XDf4uT70W-U)). Goertzel calls it a “meta-architecture” because all of the intelligent work of the system is done by sub-programs that the architecture assumes as given, such as a [program synthesis module](https://en.wikipedia.org/wiki/Program\_synthesis) (cf. [Kaiser 2007](https://pdfs.semanticscholar.org/a3d1/2cdf7b2810bf3c42212099f78ef4767c52d4.pdf)).
Roughly, the top-level self-modification is done as follows. For any proposal for a (partial) self-modification, i.e. a new program to replace (part of) the current one, the “Predictor” module predicts how well that program would achieve the goal of the system. Another part of the system — the “Searcher” — then tries to find programs that the Predictor deems superior to the current program. So, at the top level, GOLEM chooses programs according to some form of expected value calculated by the Predictor. The first interesting decision-theoretical statement about GOLEM is therefore that it chooses policies — or, more precisely, programs — rather than individual actions. Thus, [it would](https://casparoesterheld.com/2016/11/21/thoughts-on-updatelessness/) probably give the money in at least some versions of [counterfactual mugging](https://wiki.lesswrong.com/wiki/Counterfactual\_mugging). This is not too surprising, because it is unclear on what basis one should choose individual actions when the effectiveness of an action depends on the agent’s decisions in other situations.
The next natural question to ask is, of course, \*what\* expected value (causal, evidential or other) the Predictor computes. Like the other aspects of GOLEM, the Predictor is subject to modification. Hence, we need to ask according to what criteria it is updated. The criterion is provided by the Tester, a “hard-wired program that estimates the quality of a candidate Predictor” based on “how well a Predictor would have performed in the past” (Goertzel 2010, p. 4). I take this to mean that the Predictor is judged based on the extent to which it is able to predict the things that actually happened in the past. For instance, imagine that at some time in the past the GOLEM agent self-modified to a program that one-boxes in Newcomb’s problem. Later, the agent actually faced a Newcomb problem based on a prediction that was made before the agent self-modified into a one-boxer and won a million dollars. Then the Predictor should be able to predict that self-modifying to one-boxing in this case “yielded” getting a million dollars even though it did not do so causally. More generally, to maximize the score from the Tester, the Predictor has to compute regular (evidential) conditional probabilities and expected utilities. Hence, it seems that the EV computed by the Predictor is a regular EDT-ish one. This is not too surprising, either, because as we have seen before, it is much more common for learning algorithms to implement EDT, especially if they implement [something which looks like](https://casparoesterheld.com/2017/12/18/futarchy-implements-evidential-decision-theory/) [the Law of Effect](https://casparoesterheld.files.wordpress.com/2018/01/learning-dt.pdf).
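To make this reading concrete, here is a hypothetical sketch (not Goertzel's actual implementation) of a Tester that grades Predictors purely on how well they retrodict past outcomes; the squared-error-optimal Predictor then just computes ordinary conditional, i.e. evidential, expectations:

```python
# Hypothetical sketch of the reading above, not Goertzel's implementation.
# history: list of (program_chosen, utility_actually_obtained) pairs.

def tester_score(predictor, history):
    """Grade a Predictor by how well it would have predicted past outcomes."""
    return -sum((predictor(prog) - utility) ** 2 for prog, utility in history)

def best_evidential_predictor(history):
    """The squared-error-minimising predictor is the conditional mean
    E[utility | program], i.e. an evidential expected value."""
    def predictor(prog):
        outcomes = [u for p, u in history if p == prog]
        return sum(outcomes) / len(outcomes) if outcomes else 0.0
    return predictor

history = [("one_box", 1_000_000), ("one_box", 1_000_000), ("two_box", 1_000)]
pred = best_evidential_predictor(history)
print(pred("one_box"), pred("two_box"))  # 1000000.0 1000.0
print(tester_score(pred, history))       # 0.0 (perfectly retrodicts the past)
```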
In conclusion, GOLEM learns to choose policy programs based on their EDT-expected value.
Acknowledgements
================
This post is based on a discussion with Linda Linsefors, Joar Skalse, and James Bell. I wrote this post while working for the Foundational Research Institute, which is now the [Center on Long-Term Risk](https://longtermrisk.org/).
|
89fdf722-2461-40d6-8666-6e90507376c3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Keep your protos in one repo
This analysis is targeted at organizations that have not yet locked in their strategy for managing protobufs/gRPC, or have the capacity to pivot. I expect the principles involved to generalize to other organizational questions. Some of the tradeoffs are only relevant from a "protos as service interfaces" (gRPC/twirp/etc) orientation, but others are more general.
Meta-level conflict of interest notice: This is a linkpost to a blog for a (commercial) side-project of mine.
Object-level conflict of interest notice: Protocall currently only supports single-repo workspaces, not multi-repo workspaces. This analysis represents an accurate accounting of my view on the tradeoffs involved in this decision, and is not motivated by any desire to make my own life easier. Implementing basic support for multi-repo workspaces would not be an overwhelming technical challenge; the edge cases around e.g. namespace collisions would need solving but I would only need to solve them once.
----------------------------------------
Most engineering organizations should keep their proto files in one repo. If you're already operating with a single monorepo for your entire codebase, nothing about protos changes the calculus. If you're not, the benefits of keeping your proto files in one place may not be obvious.
Here are some (valid) reasons for not keeping your proto files in one repo.
Locality
If your proto files are primarily used to define service interfaces, keeping them in the same repo as the service code can reduce overhead for developers working on that service. They only need to issue one PR to update the service interface, rather than two. Working across multiple repos is challenging for most build systems, so there might also be less tooling work involved in maintaining a quick development loop.
Differential Processes, Tooling, and Norms
If your org has multiple repos, it's likely that teams own their own repos. Those teams may have wildly different engineering processes,
|
a8cc40e4-a519-4be9-b155-0c13b201bb66
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Seeking an Outside View on Israeli Military Service
I would like your assistance with gathering a rational outside view so that I can make better moral decisions. The general context is that I consider myself an effective altruist with mostly Utilitarian ethics, and I have some more time of mandatory service in which I have a considerably high degree of freedom to choose my path. The dilemma is dependent on many questions, but my goal here is mostly to debias myself on questions relating to the moral stance of Israel or armies in general (although I care much more about the expected impact of my actions, irrespective of whether it's done in the name of good or evil).
Hoping for a discussion, here are some questions I'd like to get an answer to, which are mainly here to give a broad sense of what I think might be relevant. Any helpful comments would be greatly appreciated; even epistemological tips to help me make better decisions on this topic would be of great help.
* Are there armies in any time in history in which individuals should have served in the military to do the most good in expected global utility? If so, what seem to be the criteria?
* What are some possible implications of increasing/decreasing Israel's military power?
* Even if I conclude that Israel is a force for good (or for evil) in the world, am I likely to think so 20 (50) years from now?
* Should I expect a random EA who has done some ammount of (non-combatant) militry service to regret the actions she has done? Since I am taking a Utilitarian perspective, assume that she had an option not to serve in the military.
|