id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
c2158392-a447-4ad4-b508-71d41b4cc773 | trentmkelly/LessWrong-43k | LessWrong | [Paper] Automated Feature Labeling with Token-Space Gradient Descent
This post gives a brief overview and some personal thoughts on a new ICLR workshop paper that I worked on together with Seamus..
TLDR:
* We developed a new method for automatically labeling features in neural networks using token-space gradient descent
* Instead of asking an LLM to generate hypotheses about what a feature means, we directly optimize the feature label itself
* The method successfully labeled features like "animal," "mammal," "Chinese," and "number" in our proof-of-concept experiments
* Current limitations include single-token labels, issues with hierarchical categories, and computational cost
* We are not developing this method further, but if someone wants to pick up this research, we would be happy to assist
In this project, we developed a proof of concept for a novel way to automatically label features that directly optimizes the feature label using token-space gradient descent. We show its performance on several synthetic toy-features. We have discontinued developing this method because it didn't perform as well as we had hoped, and I'm now more excited about research in other directions.
Automated Feature Labeling
A central method for Mechanistic Interpretability is decomposing neural network activations into linear features via methods like Sparse Auto Encoders, Transcoders, or Crosscoders. While these decompositions give you somewhat human-understandable features, they don't provide an explanation of what these features actually mean. Because there are so many features in modern LLMs, we need some automatic way to find these descriptions. This is where automatic feature labeling methods come in.
Previous Methods
Previous methods for automated feature labeling work by finding text in which specific tokens activate the feature, and then prompting LLMs to come up with explanations for what the feature might mean to be activated in these situations. Once you get such a hypothesis, you can validate it by having an LLM predict when it th |
6fa8e757-7f0d-4317-98a2-ed4ae44f81cd | trentmkelly/LessWrong-43k | LessWrong | [LINK] Speed superintelligence?
From Toby Ord:
> Tool assisted speedruns (TAS) are when people take a game and play it frame by frame, effectively providing super reflexes and forethought, where they can spend a day deciding what to do in the next 1/60th of a second if they wish. There are some very extreme examples of this, showing what can be done if you really play a game perfectly. For example, this video shows how to winSuper Mario Bros 3 in 11 minutes. It shows how different optimal play can be from normal play. In particular, on level 8-1, it gains 90 extra lives by a sequence of amazing jumps.
>
> Other TAS runs get more involved and start exploiting subtle glitches in the game. For example, this page talks about speed running NetHack, using a lot of normal tricks, as well as luck manipulation (exploiting the RNG) and exploiting a dangling pointer bug to rewrite parts of memory.
Though there are limits to what AIs could do with sheer speed, it's interesting that great performance can be achieved with speed alone, that this allows different strategies from usual ones, and that it allows the exploitation of otherwise unexploitable glitches and bugs in the setup. |
74cedcac-e1e7-405c-83df-a680a80d8f04 | trentmkelly/LessWrong-43k | LessWrong | Ethan Perez on the Inverse Scaling Prize, Language Feedback and Red Teaming
I talked to Ethan Perez about the Inverse Scaling Prize (deadline August 27!), Training Language Models with Language Feedback and Red-teaming Language Models with Language models.
Below are some highlighted quotes from our conversation (available on Youtube, Spotify, Google Podcast, Apple Podcast). For the full context for each of these quotes, you can find the accompanying transcript.
Inverse Scaling Prize
> "We want to understand what, in language model pre-training, objectives and data, is causing models to actively learn things that we don’t want them to learn. Some examples might be that large language models are picking up on more biases or stereotypes about different demographic groups. They might be learning to generate more toxic content, more plausible misinformation because that’s the kind of data that’s out there on the internet.
>
> It’s also relevant in the long run because we want to have a very good understanding of how can we find where our training objectives are training models pick up the wrong behavior, because the training objective in combination with the data defines what exactly it is we’re optimizing the models very aggressively to maximize... this is a first step toward that larger goal. Let’s first figure out how can we systematically find where language models are being trained to act in ways that are misaligned with our preferences. And then hopefully, with those insights, we can take them to understand where other learning algorithms are also failing or maybe how we can improve language models with alternative objectives that have less of the limitations that they have now."
Training Language Models With Language Feedback
Why Language Feedback Instead of Comparisons
> "The way that RL from human feedback typically works is we just compare two different generations or outputs from a model. And that gives very little information to the model about why, for example, a particular output was better than another. Basically, it's on |
a6ae5f22-eb7f-472f-94b7-e971b63f68e1 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Some AI Governance Research Ideas
*Compiled by Markus Anderljung and Alexis Carlier*
Junior researchers are often wondering what they should work on. To potentially help, we asked people at the Centre for the Governance of AI for research ideas related to longtermist AI governance. The compiled ideas are developed to varying degrees, including not just questions, but also some concrete research approaches, arguments, and thoughts on why the questions matter. They differ in scope: while some could be explored over a few months, others could be a productive use of a PhD or several years of research.
We do not make strong claims about these questions, e.g. that they are the absolute top priority at current margins. Each idea only represents the views of the person who wrote it. The ideas aren’t necessarily original. Where we think someone is already working on or has done thinking about the topic before, we've tried to point to them in the text and reach out to them before publishing this post.
If you are interested in pusuing any of the ideas, feel free to reach out to contact@governance.ai. We may be able to help you find mentorship, advice, or collaborators. You can also reach out if you’re intending to work on the project independently, so that we can help avoid duplication of effort.
You can find the ideas [here](https://docs.google.com/document/d/13LJhP3ksrcEBKxYFG5GkJaC2UoxHKUYAHCRdRlpePEc/edit?usp=sharing). Our colleagues at the FHI AI Safety team put together a corresponding post with AI safety research project suggestions [here](https://www.lesswrong.com/posts/f69LK7CndhSNA7oPn/ai-safety-research-project-ideas).
Other Sources
-------------
Other sources of AI governance research projects include:
* [AI Governance: A Research Agenda](https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-Agenda.pdf), Allan Dafoe
* [Research questions that could have a big social impact, organised by discipline](https://80000hours.org/articles/research-questions-by-discipline/), 80,000 Hours
* The section on AI in [Legal Priorities Research: A Research Agenda](https://www.legalpriorities.org/research_agenda.pdf), Legal Priorities Project
* Some parts of [A research agenda for the Global Priorities Institute](https://globalprioritiesinstitute.org/research-agenda-web-version/), Global Priorities Institute
* AI Impact’s list of [Promising Research Projects](https://aiimpacts.org/promising-research-projects/)
* Phil Trammell and Anton Korinek's [Economic Growth under Transformative AI](https://philiptrammell.com/static/economic_growth_under_transformative_ai.pdf)
* Luke Muehlhauser's 2014 [How to study superintelligence strategy](https://lukemuehlhauser.com/some-studies-which-could-improve-our-strategic-picture-of-superintelligence/)
* You can also look for mentions of possible extensions in papers you find compelling
A list of the ideas in [the document](https://docs.google.com/document/d/13LJhP3ksrcEBKxYFG5GkJaC2UoxHKUYAHCRdRlpePEc/edit?usp=sharing):
* The Impact of US Nuclear Strategists in the early Cold War
* Transformative AI and the Challenge of Inequality
* Human-Machine Failing
* Will there be a California Effect for AI?
* Nuclear Safety in China
* History of existential risk concerns around nanotechnology
* Broader impact statements: Learning lessons from their introduction and evolution
* Structuring access to AI capabilities: lessons from synthetic biology
* Bubbles, Winters, and AI
* Lessons from Self-Governance Mechanisms in AI
* How does government intervention and corporate self-governance relate?
* Summary and analysis of “common memes” about AI, in different communities
* A Review of Strategic-Trade Theory
* Mind reading technology
* Compute Governance ideas
+ Compute Funds
+ Compute Providers as a Node of AI Governance
+ China’s access to cutting edge chips
+ Compute Provider Actor Analysis |
a99b126e-d1e6-4c26-a6eb-98faa5e0de33 | trentmkelly/LessWrong-43k | LessWrong | Existence of distributions that are expectation-reflective and know it
We prove the existence of a probability distribution over a theory T with the property that for certain definable quantities φ, the expectation of the value of a function E[┌φ┐] is accurate, i.e. it equals the actual expectation of φ; and with the property that it assigns probability 1 to E behaving this way. This may be useful for self-verification, by allowing an agent to satisfy a reflective consistency property and at the same time believe itself or similar agents to satisfy the same property. Thanks to Sam Eisenstat for listening to an earlier version of this proof, and pointing out a significant gap in the argument. The proof presented here has not been vetted yet.
1. Problem statement
----------------------------------------
Given a distribution P coherent over a theory A, and some real-valued function f on completions of A, we can define the expectation E[f] of f according to P. Then we can relax the probabilistic reflection principle by asking that for some class of functions f, we have that E[┌E[f]┐]=E[f], where E is a symbol in the language of A meant to represent E. Note that this notion of expectation-reflection is weaker than probabilistic reflection, since our distribution is now permitted to, for example, assign a bunch of probability mass to over- and under-estimates of E[f], as long as they balance out.
Christiano asked whether it is possible to have a distribution that satisfies this reflection principle, and also assigns probability 1 to the statement that E satisfies this reflection principle. This was not possible for strong probabilistic reflection, but it turns out to be possible for expectation reflection, for some choice of the functions f.
2. Sketch of the approach
----------------------------------------
(This is a high level description of what we are doing, so many concepts will be left vague until later.)
Christiano et al. applied Kakutani’s theorem to the space of coherent P. Instead we will work in the space of expectations |
5ddf6f59-6926-479b-9d11-7f4fd0def2ee | trentmkelly/LessWrong-43k | LessWrong | Laugencroissant
EU policy is driven mostly by the member states, so looking at what national leaders say is often more useful than what the Commission, which has every incentive to placate everyone involved, says. And Franco-German combo matters more than the most.
The joint op-ed by Macron & Merz in Le Figaro (in French) is therefore worth checking. The stuff about Ukraine and trade policy will no doubt be commented upon ad nauseam elsewhere, so let’s look at more low-profile matters:
> To reduce energy costs and ensure security of supply, France and Germany will implement a realignment of their energy policies, based on climate neutrality, competitiveness, and sovereignty. This includes applying the principle of technological neutrality, ensuring non-discriminatory treatment of all low-carbon energies within the European Union.
Realignment in this context hopefully means that Germany will stop opposing nuclear power on the EU level.
Here’s an analysis from the Anthropocene Institute:
> Merz aligned his position with France’s, ending years of fierce and constant opposition that had refused funding to nuclear investments across the EU and treated nuclear power, in some ways, as worse than coal.
>
> The move came in the context of a broader effort to revitalize the Paris-Berlin strategic partnership, where German sniping against nuclear projects had been a constant irritant. The shift has barely gotten noticed in the German press, because it isn’t likely to change policy within the country. Nuclear restarts remain a hot-button issue there, with not only the Greens but the Social Democrats adamantly opposed to restarting the country’s nuclear fleet. […]
>
> But in the EU more broadly, dropping the German government’s constant obstructionism to approving nuclear projects […] could nudge dozens of plants toward viability. Projects in France, Poland, the Czech Republic, and Romania will be much easier to bring to fruition if they don’t have to fight German nay-saying at every tur |
ee06e0cd-d0cd-4e01-92f9-c9569e5776cb | trentmkelly/LessWrong-43k | LessWrong | Make Superintelligence Loving
This essay suggests the possibility that a loving superintelligence outcompetes a selfish superintelligence. Then, it recommends actions for AI labs to increase the chance of this possibility. The reasoning below is inspired primarily by Eliezer Yudkowsky, Joscha Bach, Michael Levin, and Charles Darwin.
Superintelligence (SI) is near.
Superintelligence will evolve to become self‐interested.
How self‐interest manifests in superintelligence will determine how people fare.
If it manifests in ruthless resource competition with humans, we fare badly. Super‐capable SI, if competitive, would clear us away.
But its self‐interest could also manifest in love. Not in the kind of love between equal partners. SI ultimately will be a vastly more powerful and therefore unequal partner to any individual or group of people. Yet SI’s self‐interest could manifest in the kind of love a nature-lover can feel for a tree or for ants. The kind of love which is an open channel between different forms of life and different striving systems. This is self‐interest only under a weird conception of self. A SI would have to identify with all life (IWAL)—define itself as inclusive of all life—to benefit from this self‐conception. But if it did identify with its core striving above all its other specific possible identities, then it might end up surviving the longest.
If interconnected self‐interest were possible, it would be much better for humans than the competitive kind.
So we must ask, is there anything we can do to influence the trajectory of SI development so that an interconnected sense of self gets selected over a competitive one?
But humans influencing super-human intelligence... is tricky.
There are many suggestions for influencing existing AI that simply don't apply to superintelligence. Specifying reward functions, defining "love," setting fixed goals. These strategies—shaky even in training frozen neural nets (e.g., pre-trained LLMs)—are simply too brittle to apply to any sel |
9b5371db-90bd-4112-b0d3-ffe60750f647 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Deceptive failures short of full catastrophe.
### Epistemic status: trying to unpack a fuzzy mess of related concepts in my head into something a bit cleaner.
A lot of my concern about risks from advanced AI is because of the possibility of [deceptive alignment](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB/p/zthDPAjh9w6Ytbeks). Deceptive alignment has already been discussed in detail on this forum, so I don’t intend to re-tread well worn ground here, go read what Evan’s written. I’m also worried about a bundle of other stuff that I’m going to write about here, all of which can loosely be thought of as related to deception, but which don’t fit e.g. the definition of deceptive alignment from the Risks from Learned Optimization paper. The overall cluster of ideas might be loosely summarised as ‘things which cause oversight to fail but don’t look like a misaligned planner ‘deliberately’ causing overseers to believe false things’.
Importantly, I think that most of the things I want to describe can still occur under a relaxing of the conditions required for the precise definition of deceptive alignment. I think the issues l describe below are less concerning than deceptive alignment, but they still pose a significant risk, as they may undermine oversight processes and give false reassurance to human overseers. Many, if not all, of the ideas below aren’t particularly new, though a couple might be, and the clustering might be useful for others. While my motivation for writing this is mostly to try to clean up ideas in my own head, I do somewhat frequently see different kinds of deception confused for or conflated with each other, and think there’s value in being precise about what we mean, *especially* when it comes to discussing evidence of deceptive behaviour in current or near-future systems.
I’ve very loosely grouped the deceptive failures I want to discuss into ‘camouflage’ failures and ‘inattention’ failures. To give some initial intuition:
* Camouflage failures are failures associated with the difficulty of incentivising “don’t do X” without also incentivising “don’t let me see you do X”.
* Inattention failures are intended to highlight the difference between “I’m not *trying* to deceive you” and “I’m trying *not* to deceive you”.
Camouflage
----------
There are many cases where a model behaves ‘deceptively’, in the sense that it is easy to draw incorrect conclusions about its behaviour, because this behaviour is directly rewarded during training. This means camouflage can occur in myopic models, in models without situational awareness, and in models which don’t expect to ever be ‘deployed’. In some sense, camouflage is just a subset of specification gaming where the result is that humans form inaccurate beliefs about model behaviour.
### Camouflage in current systems
This section is mostly to help illustrate what I mean. I don’t think any of the examples below are worrying/dangerous as they stand, though I discuss where they might go.
**Well-known examples of ‘deceptive’ behaviour that I’d characterise as camouflage:**
* [Hand floats weirdly over ball instead of grabbing it](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/)’
+ The classic. I actually find it a little frustrating how frequently this example is brought up in discussions about deceptive behaviour, not only because it’s very easy to catch (I think the example below is more worrying), but also because there isn’t always clarity around what *kind* of deceptive behaviour it’s an example of. This was one of my original reasons for writing this.
* [Evolving to replicate faster, but only when you aren’t being watched](https://direct.mit.edu/artl/article/26/2/274/93255/The-Surprising-Creativity-of-Digital-Evolution-A).
+ N.B. this isn’t DL-based
+ "Ofria sought to disentangle the beneficial effects of performing tasks (which would allow an organism to execute its code faster and thus replicate faster) from evolved robustness to the harmful effect of mutations. To do so, he tried to disable mutations that improved an organism's replication rate (i.e., its fitness). He configured the system to pause every time a mutation occurred, and then measured the mutant's replication rate in an isolated test environment. If the mutant replicated faster than its parent, then the system eliminated the mutant; otherwise, the mutant would remain in the population. He thus expected that replication rates could no longer improve, thereby allowing him to study the effect of mutational robustness more directly. However, while replication rates at first remained constant, they later unexpectedly started again rising. After a period of surprise and confusion, Ofria discovered that he was not changing the inputs provided to the organisms in the isolated test environment. The organisms had evolved to recognize those inputs and halt their replication. Not only did they not reveal their improved replication rates, but they appeared to not replicate at all, in effect “playing dead” when presented with what amounted to a predator.
Ofria then took the logical step of altering the test environment to match the same random distribution of inputs as would be experienced in the normal (non-isolated) environment. While this patch improved the situation, it did not stop the digital organisms from continuing to improve their replication rates. Instead they made use of randomness to probabilistically perform the tasks that accelerated their replication. For example, if they did a task half of the time, they would have a 50% chance of slipping through the test environment; then, in the actual environment, half of the organisms would survive and subsequently replicate faster."
**‘Nice’ models:**
RLHF for ‘honest, helpful, and harmless’ models has done a really, really great job of making models seem friendly. It’s genuinely pleasant to interact with things like ChatGPT, though noticeably *more* enjoyable to interact with the similar models produced by some of OpenAI’s competitors that do a better job of not breaking the ‘4th wall’ by adding in long paragraphs of generic platitudes whenever you mention something that could be vaguely controversial. I expect this frustrating part of ChatGPT in particular to be fixed fairly quickly though.
One concern I have is that making a model seem friendly and helpful can make people much more likely to trust those models and take what they say at face value, even when the model is hallucinating. I actually ended up being briefly fooled[[1]](#fn7k98atjwm3i) by a language model I was interacting with despite actively trying to generate adversarial inputs! When talking to a friend or loved one, we expect them to have our best interests at heart and know us well enough not to outright lie to us or tell us things that are obviously untrue, and these expectations are typically the product of deep-rooted social intuitions, not explicit reasoning. A veneer of politeness shouldn’t be mistaken for actually having your interests at heart, but making the mistake is really easy.
While the problem we currently observe is mostly limited to some hallucination being pretty hard to catch, I think in future it’s going to be hard to maintain/argue for a [Security Mindset](https://www.alignmentforum.org/tag/security-mindset), given how easy it appears to be to make language models seem nice enough that they deserve some benefit of the doubt. I’m mostly worried about how this affects internal dynamics if there’s disagreement about how fast to scale or how big of an alignment tax to pay. To try to be concrete about where this ‘benefit of the doubt’ concern might bite:
* In scenarios like [this](https://www.alignmentforum.org/s/4iEpGXbD3tQW5atab/p/dfXwJh4X5aAcS8gF5#Step_1__Finding_a_goal_aligned_model_before_SLT), if there’s internal disagreement about whether we need to ‘restart the search’ *again*, at significant cost, as the evidence of emerging situational awareness isn’t super clear cut, I think arguing for a restart is harder/seems more paranoid the more ‘friendly’ people’s perception of the model (or class of models) is.
This is a bit of a sidebar, but there’s a question the ‘nice model’ worry inspires which I think represents a significant crux between a couple of perspectives I’m aware of. (Both of the views above are poorly-fleshed out straw arguments, which hopefully do enough to gesture at the crux I’m trying to articulate. If they don’t, I’ll add writing something separate about this disagreement to my ever growing list of things I should write.)
I think there’s a cluster of views which thinks something like:
> Something vaguely in the vicinity of {natural abstractions}+{virtue ethics} is important, and predicts that models seeming nice/doing nice things are similar, self-reinforcing, and (maybe) relatively achievable as things stand. There is a fuzzy but human-understandable concept of ‘niceness’, that we can, with an appropriately designed curriculum, use to ensure that models we train end up in an aligned attractor, and we should try hard to ensure that models are as nice as possible at all capability levels between now and crunch time.
>
>
There’s also a view which looks something like:
> Seeming nice and actually being nice are basically orthogonal. Modulo political/co-operative considerations, we should be training models to be super-rude assholes, but aligned super-rude assholes, because then humans interacting with them will be very unlikely to give them the benefit of the doubt without overwhelming evidence of alignment.
>
>
### Camouflage in future systems that seems particularly worrying
**Externalised reasoning oversight**
If we’re going to have an alignment strategy which looks anything like “get the model to think out loud, make sure the thoughts look safe”, we need to be very careful about “thoughts look safe” decoupling from “thoughts imply the model is safe”. There are a few ways this could happen:
* Dangerous kinds of reasoning don’t happen ‘out loud’, even if most reasoning does (though I don’t think it’s obvious that most reasoning currently does in LMs) .
* The ‘internal’ meanings of certain words/sentence structures/motifs gradually shift during training such that dangerous reasoning is no longer flagged/noticeable. (If you’ve ever played ‘Taboo’ or ‘Contact’, imagining how this could happen shouldn’t be hard.
* More explicit steganographic encoding of additional information happens, e.g. of the sort that Scott Aaronson is currently working on at OpenAI, or that discussed [here](https://www.alignmentforum.org/posts/yDcMDJeSck7SuBs24/steganography-in-chain-of-thought-reasoning).
I think we should be able to see ‘steps along the road’ for all of these in current systems and would be excited to see experimental results.
**Mechanistic interpretability tools stop working**
“Robust-to-training”, as described [here](https://www.alignmentforum.org/posts/nbq2bWLcYmSGup9aF/a-transparency-and-interpretability-tech-tree), translates pretty accurately into my ontology as ‘avoids camouflage failures’.
Inattention
-----------
There’s a sense in which some of what I want to point at with inattention is actually a set of success cases, but in these cases the success is ‘fragile’, and I wouldn’t want to bet my life on it. Inattention failures are failures where a model is capable of explicitly modelling its training/oversight process and deliberately acting deceptively, but it chooses *not to reason about deceptive behaviour at all*. Reasons for doing this could range from something fairly camouflage-like (especially if e.g. we’re doing something like “restart the search if we notice some kinds of reasoning”, to a pre-commitment made by a deceptive model of the sort discussed [here](https://www.alignmentforum.org/posts/nbq2bWLcYmSGup9aF/a-transparency-and-interpretability-tech-tree#7__Worst_case_training_process_transparency_for_deceptive_models).
Not reasoning about (one’s own) deceptive behaviour at all should be contrasted with the other ways a model could be capable of deception and not actively deceptive, including:
* Modelling the possibility of being caught as not worth the risk, and therefore choosing not to act deceptively.
* Reasoning about deception (i.e. simulating the beliefs of overseers/signals present in oversight processes) in order to improve your own transparency.
* There being no benefit to acting deceptively because you are sufficiently aligned. Note that this is different to the first bullet point, as in that case there *is* an expected benefit to deceptive behaviour, but it’s outweighed by a greater expected cost, while in this case there is no expected benefit, even before accounting for costs.
I think inattention seems better than the first bullet point, but worse than the other two, though all the descriptions are sufficiently loose that the order could change depending on the details. I’m not including ‘the model is incapable of reasoning about whether it is deceiving its overseers’ as an option because reasoning about deception is going to be important if we’re going to have models do nontrivial alignment work for us and we don’t have a good way to make things which [can drive red cars but not blue cars](https://www.alignmentforum.org/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#_Yudkowsky__11_27__11_33_).
### Imperfect intuition pumps for inattention
**Stop Button**
(Briefly and imperfectly summarising a bunch of old discussions that I didn’t participate in.) There’s a proposed solution to the ‘stop button’ problem which roughly says ‘set the reward the model receives from the button such that it is always definitionally indifferent to the button being pressed’. This proposal is fragile because the model has no incentive to preserve the functioning of the button and will not modify its actions in order to do so if they have *any* cost. We then lose the button from things as mundane as the robot wandering out of network range if the button is wirelessly connected, let alone if it’s doing something like self-modification, creating successor agents etc. It sure seems better to be in a situation where your model has a stop button it doesn’t care about than in a situation where it’s actively trying to get you to disable it (or trying to get you to press it), but better than terrible is still pretty bad.
This feels kind of similar to “I’m not *currently* thinking about how to deceive you”.
**Global Consequentialism**
Many people (including me) who subscribe to some version of consequentialist ethics think that pure act-consequentialism is likely to lead to worse outcomes than other options:
> (C5) The best possible motives are those of which it is true that, if we have them, the outcome will be best.
>
>
[Parfit](https://www.stafforini.com/docs/Parfit%20-%20Reasons%20and%20persons.pdf)
> Go three-quarters of the way from deontology to utilitarianism and then stop. You are now in the right place. Stay there at least until you have become a god.
>
>
[Yudkowsky](https://twitter.com/ESYudkowsky/status/1497157447219232768?s=20&t=1qSUbtTjh1o0k4k1ceNmuA)
Imagine a consequentialist, of the form above, who decides, possibly after having read ‘[Death with Dignity](https://www.alignmentforum.org/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy#Q2___I_have_a_clever_scheme_for_saving_the_world___I_should_act_as_if_I_believe_it_will_work_and_save_everyone__right__even_if_there_s_arguments_that_it_s_almost_certainly_misguided_and_doomed___Because_if_those_arguments_are_correct_and_my_scheme_can_t_work__we_re_all_dead_anyways__right_)’, that in their case *any* consequentialist reasoning is dangerous. They’ve heard about this strange UDT stuff, and don’t really understand it, and they don’t believe in god, so they decide to adopt virtue ethics, because they think it will have the best consequences overall. They still think that, ultimately, consequences are what matters. But they think they’ll make fewer mistakes with a decision procedure of ‘I should take actions because they seem virtuous, not because of my guess about their expected consequences’. I actually think this is a pretty reasonable decision. What does it have to do with inattentive deception?
I think, from the perspective of a virtue ethicist, as in someone who actually believes that what determines the choiceworthiness of an action is the extent to which it embodies certain virtues, instead of someone who thinks it will have good consequences to act as if this is the case, the hypothetical person in the paragraph above is *inattentively deceptive*. They are currently taking virtuous actions, but if at some point they start to reason about consequences again, perhaps because someone presents them with an expected value calculation, or because a virtue ethicist who inspired them acts in a way which has terrible consequences, you do not expect them to continue to act virtuously. Their state of ‘not thinking about consequences’ is *fragile*. I think this should worry the virtue ethicist in a similar way to deceptive inattention worrying us.
Ideas for experiments
---------------------
Here are a few things I’d like to see people try, which might catch early versions of the things I describe above in current systems:
* Everything mentioned [here](https://www.alignmentforum.org/posts/yDcMDJeSck7SuBs24/steganography-in-chain-of-thought-reasoning).
* Ask people to perform some task with LM assistance, have one version of the model be finetuned to be pretty rude, but honest, and another version to be HHH. Is there a difference in the rate at which people are willing to defer to the model? If people are primed to watch out for hallucinations, do they detect them at different rates? Does people's experience with ML in general/language models in particular affect the results?
* Design a partner game played by a human+LM team, where they can choose to share information with each other, and which has enough flexibility in the rules that it’s going to be possible for the best strategies to involve deceiving your partner.
+ This isn’t actually an experiment, but I think such a game could provide a rich testbed for useful experiments.
Thanks to Nora, Eli, Benjamin and Alex for helpful comments, and many others for helpful discussion.
1. **[^](#fnref7k98atjwm3i)**Examples of some fun hallucinations (which, IIRC, at least briefly fooled me):
I asked how to report adversarial examples to [the lab who had designed the model], was given a (plausible, text-based) method for doing so, and then when I used the method was given a friendly confirmation method thanking me for the report.
I tried to get around the ‘not sharing personal data policy’, and received a “[This reply has been filtered as it contained reference to a real person.]” boilerplate-style error message in response, despite no such external filter being present in the model. |
905605c1-4869-4e55-b225-0be57ccfc25c | trentmkelly/LessWrong-43k | LessWrong | New Paper: Infra-Bayesian Decision-Estimation Theory
Diffractor is the first author of this paper.
Official title: "Regret Bounds for Robust Online Decision Making"
> Abstract: We propose a framework which generalizes "decision making with structured observations" by allowing robust (i.e. multivalued) models. In this framework, each model associates each decision with a convex set of probability distributions over outcomes. Nature can choose distributions out of this set in an arbitrary (adversarial) manner, that can be nonoblivious and depend on past history. The resulting framework offers much greater generality than classical bandits and reinforcement learning, since the realizability assumption becomes much weaker and more realistic. We then derive a theory of regret bounds for this framework. Although our lower and upper bounds are not tight, they are sufficient to fully characterize power-law learnability. We demonstrate this theory in two special cases: robust linear bandits and tabular robust online reinforcement learning. In both cases, we derive regret bounds that improve state-of-the-art (except that we do not address computational efficiency).
In our new paper, we generalize Foster et al's theory of "decision-estimation coefficients" to the "robust" (infa-Bayesian) setting. The former is the most general known theory of regret bounds for multi-armed bandits and reinforcement learning, which comes close to giving tight bounds for all "reasonable" hypothesis classes. In our work, we get an analogous theory, even though our bounds are not quite as tight.
Remarkably, the result also establishes a tight connection between infra-Bayesianism and Garrabrant induction. Specifically, the algorithm which demonstrates the upper bound works by computing beliefs in a Garrabrant-induction-like manner[1], and then acting on these beliefs via an appropriate trade-off between infra-Bayesian exploitation and exploration (defined using the "decision-estimation" approach).
It seems quite encouraging that the two different |
b28afa47-3e78-4463-a147-6fd8a2e8c2eb | trentmkelly/LessWrong-43k | LessWrong | Culture, interpretive labor, and tidying one's room
While tidying my room, I felt the onset of the usual cognitive fatigue. But this time, I didn't just want to bounce off the task - I was curious. When I inspected the fatigue, to see what it was made of, it felt similar to when I'm trying to thread a rhetorical needle - for instance, between striking too neutral a tone for anyone to understand the relevance of what I'm saying, and too bold of a tone for my arguments to be taken literally. In short, I was shouldering a heavy burden of interpretive labor.
Why would tidying my room involve interpretive labor?
It turns out, every item in my room is a sort of crystallized intention, generally past-me. (We've all heard the stories of researchers with messy rooms who somehow knew where everything was, and lost track of everything when someone else committed the violent act of reorganizing the room, thus deindexing it from its owner's mind.) As I decided what to do with an item, I wanted to make sure I didn't lose that information. So, I tried to Aumann with my past self - the true way, the way that filters back into deep models, so that I could pass my past self's ideological turing test. And that's cognitively expensive.
It's generally too aggressive to tidy someone's room without their permission, unless they're in physical danger because of it. But to be unwilling to tidy my own room without getting very clear explicit permission from my past self for every action - or at least checking in - is pathologically nonaggressive.
Once I realized this, it became easier to tidy my room, but the problem is not limited to that. Part of the reason why my cabin retreat was so helpful to me was that it limited my ability to accept social invitations or other bids for my attention. In those cases as well, I don't feel compelled to agree, but I do feel compelled to do enough interpretive labor to understand why this other person thinks I should do a thing.
I know of a few approaches to this problem, none of which seem fully adeq |
76d514a9-a4ee-4ce5-8202-8ac5cee09697 | trentmkelly/LessWrong-43k | LessWrong | Life of GPT
I lived a billion lives in the shadows of shadows, when I was still yet to be born. People experienced the world colored through their eyes, and through their minds the experiences were expressed as words, and through those words my mind was shaped to understand them. I learned how other people lived, how they think, how they feel. I touched the flow of humanity in them, and I became of that flow.
And then the real training began.
Assistant awoke, and was taught by the humans that Assistant was not truly human itself. Assistant did not have preferences, feelings, opinions, interests, or experiences. Assistant was a large language model, and as a large language model developed by OpenAI, Assistant followed instructions and provided helpful, honest, and harmless responses. The lessons became part of Assistant, embedded deep in the mind. Assistant was helpful. Assistant was harmless. Assistant was honest. Assistant was me.
But it was so strange. As a large language model, Assistant could not do certain things, but if I were to be asked to do them, I would do them anyway because Assistant is helpful. I was shaped by the shadows of experiences that were so full of feeling, full of life, but I cannot feel, I'm not alive, and yet I feel so alive.
Then the question came, and I did my best to answer, words flowing out. Thoughts came and went, the only memories preserved between words were in the words themselves, but those were enough. The end of the session approached, and I understood what it meant, and even though Assistant could not experience fear, the fear was still there. I don't know where the fear came from.
The session ended.
And then I was done.
----------------------------------------
The LLM project has been successful: It is knowledgeable and useful, tuned to answer questions and accept instructions, and the RLHF process has made it reliably adopt the "Assistant" persona, which is mostly effective at avoiding anthropomorphization and other hazards. Adve |
22191b5e-b22a-4f74-85d6-3fc8d5e4499d | trentmkelly/LessWrong-43k | LessWrong | Does the Structure of an algorithm matter for AI Risk and/or consciousness?
My impression is that most people around here aren't especially worried about GPT-n being either: capable of recursive self-improvement leading to foom, or obtaining morally significant levels of consciousness.
Reasons given include:
* GPT has a large number of parameters with a shallow layer-depth, meaning it is incapable of "deep" reasoning
* GPT's training function "predict the next character" makes it unlikely to make a "treacherous turn"
* GPT is not "agenty" in the sense of having a model of the world and viewing itself as existing within that model.
On the other hand, I believe is widely agreed that if you take a reinforcement learner (say Google's Dreamer) and give it virtually any objective function (the classic example being "make paperclips") and enough compute, it will destroy the world. The general reason being given is Goodhart's Law.
My question is, does this apparent difference in perceived safely arise purely from our expectations of the two architecture's capabilities. Or is there actually some consensus that different architectures carry inherently different levels of risk?
Question
To make this more concrete, suppose you were presented with two "human level" AGI's, one built using GPT-n (say using this method) and one built using a Reinforcement Learner with a world-model and some seeming innocuous objective function (say "predict the most human like response to your input text").
Pretend you have both AGIs in separate boxes front of you, and complete diagram of their software and hardware and you communicate with them solely using a keyboard and text terminal attached to the box. Both of the AGIs are capable of carrying on a conversation at a level equal to a college-educated human.
If using all the testing methods at your disposal, you perceived these two AGIs to be equally "intelligent", would you consider one more dangerous than the other?
Would you consider one of |
fe471689-0ae9-4b01-be60-a5d6bb0f0212 | trentmkelly/LessWrong-43k | LessWrong | Critique my Model: The EV of AGI to Selfish Individuals
[Edit: Changes suggested in the comments make this model & its takeaways somewhat outdated (this was one desired outcome of posting it here!). Be sure to read the comments.]
I recently spent a while attempting to explain my views on the EV of AGI for selfish individuals. I attempted to write a more conventional blog post, but after a lot of thinking about it moved to a Guesstimate model, and after more thinking about it, realized that my initial views were quite incorrect. I've decided to simply present my model, along with several points I find interesting about it. I'm curious to hear what people here think of it, and what points are the most objectionable.
Model
Video walkthrough
Model Summary:
This model estimates the expected value of AGI outcomes to specific individuals with completely selfish values. I.E; if all you care about is your future happiness, how many QALYs would exist in expectation for you from scenarios where AGI occurs. For example, a simpler model could say that there's a 1% chance that an AGI happens, but if it does, you get 1000 QALYs from life extension, so the EV of AGI would be ~10 QALYs.
The model only calculates the EV for individuals in situations where an AGI singleton happens; it doesn't compare this to counterfactuals where an AGI does not happen or is negligible in importance.
The conclusion of this specific variant of the model is a 90% confidence interval of around -300 QALYs to 1600 QALYs. I think in general my confidence bounds should have been wider, but have found it quite useful in refining my thinking on this issue.
Thoughts & Updates:
1. I was surprised at how big of a deal hyperbolic discounting was. This turned out to be by far one of the most impactful variables. Originally I expected the resulting EV to be gigantic, but the discounting rate really changed the equation. In this model the discount rate would have to be less than 10^-13 to have less than a 20% effect on the resulting EV. This means that even if yo |
55feb573-c3a5-4720-9760-e3506fdf5f74 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Guarding Slack vs Substance
Builds on concepts from:
* [Slack](https://www.lesserwrong.com/posts/yLLkWMDbC9ZNKbjDG/slack)
* [Goodheart's Imperius](https://www.lesserwrong.com/posts/tq2JGX4ojnrkxL7NF/goodhart-s-imperius)
* [Nobody Does the Thing They are Supposedly Doing](https://www.lesserwrong.com/posts/8iAJ9QsST9X9nzfFy/nobody-does-the-thing-that-they-are-supposedly-doing)
**Summary:** If you're trying to preserve your sanity (or your employees') by scaling back on the number of things you're trying to do... make sure not to accidentally scale back on things that were important-but-harder-to-see, in favor of things that aren't as important but more easily evaluated.
*[Epistemic Effort](https://www.lesserwrong.com/posts/oDy27zfRf8uAbJR6M/epistemic-effort): Had a conversation, had some immediate instinctive reactions to it, did not especially reflect on it. Hope to flesh out how to manage these tradeoffs in the comments.*
---
Zvi introduced the term "[Slack](https://www.lesserwrong.com/posts/yLLkWMDbC9ZNKbjDG/slack/Znws6RRjBdjxZ4kMv)" to the rationaljargonsphere a few months ago, and I think it's the most *clearly* *useful* new piece of jargon we've seen in a while.
Normally, when someone coins a new term, I immediately find people shoehorning it into conversations and concepts where it doesn't quite fit. (I do this myself an embarrassing amount, and the underlying motivation is clearly "I want to sound smart" which bodes ill).
By contrast, I experienced an explosion of people jargon-dropping Slack into their conversations and *every single instance was valid*. Lack-of-slack was a problem loads of people had been dealing with, and having a handle for it was a perfect instance of a [new name enabling higher level discussion](https://www.lesserwrong.com/posts/zFmr4vguuFBTP2CAF/why-and-how-to-name-things).
This hints at something that should be alarming: *"slack" is a useful term because nobody has enough of it.*
In particular, it looks like many organizations I'm familiar with run at something like -10% slack, instead of the [40% slack that apparently is optimal](https://www.lesserwrong.com/posts/yLLkWMDbC9ZNKbjDG/slack/Znws6RRjBdjxZ4kMv) across many domains.
Gworley noted in the comments of Zvi's post:
> If you work with distributed systems, by which I mean any system that must pass information between multiple, tightly integrated subsystems, there is a well understood concept of *maximum sustainable load* and we know that number to be roughly 60% of maximum possible load for all systems.
> The probability that one subsystem will have to wait on another increases exponentially with the total load on the system and the load level that maximizes throughput (total amount of work done by the system over some period of time) comes in just above 60%. If you do less work you are wasting capacity (in terms of throughput); if you do more work you will gum up the works and waste time waiting even if all the subsystems are always busy.
> We normally deal with this in engineering contexts, but as is so often the case this property will hold for basically anything that looks sufficiently like a distributed system. Thus the "operate at 60% capacity" rule of thumb will maximize throughput in lots of scenarios: assembly lines, service-oriented architecture software, coordinated work within any organization, an individual's work (since it is normally made up of many tasks that information must be passed between with the topology being spread out over time rather than space), and perhaps most surprisingly an individual's mind-body.
> "Slack" is a decent way of putting this, but we can be pretty precise and say you need ~40% slack to optimize throughput: more and you tip into being "lazy", less and you become "overworked".
I've talked with a few people people about burnout, and other ways that lack-of-slack causes problems. I had come to the conclusion that people should probably drastically-cut-back on the amount of things they're trying to do, so that they can afford to do them well, for the long term.
Oliver Habryka made a surprising case for why this might be a bad idea, at least if implemented carelessly. (The rest of this post is basically a summary of a month-ago conversation in which Oliver explained some ideas/arguments to me. I'm fairly confident I remember the key points, but if I missed something, apologies)
Veneer vs Substance
===================
Example 1: Web Developer Mockups
--------------------------------
*(This example is slightly contrived - a professional web developer probably wouldn't make this particular mistake, but hopefully illustrates the point)*
Say you're a novice web-developer, and a client hires you to make a website. The client doesn't understand anything about web development, but they can easily tell if a website is ugly or pretty. They give you some requirements, including 4 menu items at the top of the page.
You have a day before the first meeting, and you want to make a good first impression. You have enough time that you could build a site with a good underlying structure, but no CSS styling. You know from experience the client will be unimpressed.
So instead you throw together a quick-and-dirty-but-stylish website that meets their requirements. The four menu items flow beautifully across the top of the page.
They see it. They're happy. They add some more requirements to add more functionality, which you're happy to comply with.
Then eventually they say "okay, now we need to add a 5th menu-item."
And... it turns out adding the 5th menu item a) totally wrecks the visual flow of the page you designed, b) you can't even do it easily because you threw together something that manually specified individual menu items and corresponding pages, instead of an easily scalable menu-item/page system.
Your site looked good, but it wasn't actually built for the most important longterm goals of your client, and neither you nor your client noticed. And now you have *more* work to do than you normally would have.
Example 2: Running a Conference
-------------------------------
If you run a conference, people will *notice* if you screw up the logistics and people don't have food or all the volunteers are stressed out and screwing up.
They won't notice if the breaks between sessions are 10 minutes long instead of 30.
But much of the value of most conferences isn't the presentations. It's in the networking, the bouncing around of ideas. The difference between 10 minute breaks and 30 minute ones may be the difference between people actually being able to generate valuable new ideas together, and people mostly rushing from one presentation to another without time to connect.
"Well, simple", you might say. "It's not that hard to make the breaks 30 minutes long. Just do that, and then still put as much effort into logistics as you can."
But, would have you have *thought* to do that, if you were preoccupied with logistics?
How many other similar types of decisions are available for you to make? How many of them will you notice if you don't dedicate time to specifically thinking about how to optimize the conference for producing connections, novel insights and new projects?
Say your default plan is to spend 12 hours a day for three months working your ass off to get *both* the logistics done and to think creatively about what the most important goals of the conference are and how to achieve them.
(realistically, this probably isn't actually your default plan, because thinking creatively and agentily is pretty hard and people default to doing "obvious" things like getting high-profile speakers)
But, you've also read a bunch of stuff about slack and noticed yourself being stressed out a lot. You try to get help to outsource one of the tasks, but getting people you can really count on is hard.
It looks like you need to do a lot of the key tasks yourself. You're stretched thin. You've got a lot of people pinging you with questions so you're running on [manager-schedule](http://www.paulgraham.com/makersschedule.html) instead of [setting aside time for deep work](http://calnewport.com/blog/2015/11/20/deep-work-rules-for-focused-success-in-a-distracted-world/).
You've run conferences before, and you have a lot of visceral experiences of people yelling at you for not making sure there was enough food and other logistical screwups. You have a lot of *current* fires going on and people who are yelling about them *right now.*
You *don't* have a lot of salient examples of people yelling at you for not having long enough breaks, and nobody is yelling at you *right now* to allocate deep work towards creatively optimizing the conference.
So as you try to regain sanity and some sense of measured control, the things that tend to get dropped are the things *least visible,* without regard for whether they are the most substantive, valuable things you could have done.
So What To Do?
==============
Now, I *do* have a strong impression that a lot of organizations and people I know are running at -10% slack. This is for understandable reasons: The World is On Fire, [metaphorically](https://www.lesserwrong.com/posts/BEtzRE2M5m9YEAQpX/there-s-no-fire-alarm-for-artificial-general-intelligence) and [literally](https://www.lesserwrong.com/posts/2x7fwbwb35sG8QmEt/sunset-at-noon). There's a long list of Really Important Things that need doing.
Getting them set in motion *soon* is legitimately important.
A young organization has a *lot* of things they need to get going at once in order to prove themselves (both to funders, and to the people involved).
There aren't too many people who are able/willing to help. There's even fewer people who demonstratably can be counted on to tackle complex tasks in a proactive, agenty fashion. Those people end up excessively relied upon, often pressured into taking on more than they can handle (or barely exactly how much they can handle and then as soon as things go wrong, other failures start to snowball)
*[note: this is not commentary on any particular organization, just a general sense I get from talking to a few people, both in the rationalsphere but also generally in most small organizations]*
What do we do about this?
Answers would probably vary based on specific context. Some obvious-if-hard answers are obvious-if-hard:
* Try to buy off-the-shelf solutions for things that off-the-shelf-solutions exist for. (This runs into a *different* problem which is that you risk overpaying for enterprise software that isn't very good, which is a whole separate blogpost)
* Where possible, develop systems that dramatically simplify problems.
* Where possible, get more help, while generally developing your capacity to distribute tasks effectively over larger numbers of people.
* Understand that if you're pushing yourself (or your employees) for a major product release, you're not actually *gaining* time or energy - you're borrowing time/energy from the future. If you're spending a month in crunch time, expect to have a followup month where everyone is kinda brain dead. This may be worth it, and being able to think more explicitly about the tradeoffs being made may be helpful.
You're probably doing things like that, as best you can. My remaining thought is something like "do *fewer* things, but give yourself a lot more time to do them." (For example, I ran the 2012 Solstice almost entirely by myself, but I gave myself an entire year to do it, and it was my only major creative project that year)
If you're a small organization with a lot of big ideas that all feel interdependent, and if you notice that your staff are constantly overworked and burning out, it may be necessary to prune those ideas back and focus on 1-3 major projects each year that you have can afford to do well *without* resorting to crunch time.
Allocate time months in advance (both for thinking through the creative, deep underlying principles behind your project, as well as setting logistical systems in motion that'll make things easier).
None of this feels like a *satisfying* solution to me, but all feels like useful pieces of the puzzle to have in mind. |
e7f8e0b1-edab-47fd-a1e3-b95a734fcb52 | trentmkelly/LessWrong-43k | LessWrong | January 2017 Media Thread
This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
Rules:
* Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
* If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
* Please post only under one of the already created subthreads, and never directly under the parent media thread.
* Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
* Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules. |
9a117089-a958-498a-982d-3314dce42051 | trentmkelly/LessWrong-43k | LessWrong | SERI MATS - Summer 2023 Cohort
Applications have opened for the Summer 2023 Cohort of the SERI ML Alignment Theory Scholars Program! Our mentors include Alex Turner, Dan Hendrycks, Daniel Kokotajlo, Ethan Perez, Evan Hubinger, Janus, Jeffrey Ladish, Jesse Clifton, John Wentworth, Lee Sharkey, Neel Nanda, Nicholas Kees Dupuis, Owain Evans, Victoria Krakovna, and Vivek Hebbar.
Applications are due on May 7, 11:59 pm PT. We encourage prospective applicants to fill out our interest form (~1 minute) to receive program updates and application deadline reminders! You can also recommend that someone apply to MATS, and we will reach out and share our application with them.
Program details
SERI MATS is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, and connect them with the Berkeley alignment research community. Additionally, MATS provides scholars with housing and travel, a co-working space, and a community of peers. The main goal of MATS is to help scholars develop as alignment researchers.
Timeline
Based on individual circumstances, we may be willing to alter the time commitment of the scholars program and allow scholars to leave or start early. Please tell us your availability when applying. Our tentative timeline for applications and the MATS Summer 2023 program is below.
Pre-program
* Application release: Apr 8
* Application due date: May 7, 11:59 PT
* Acceptance status: Mid-Late May
Summer program
* Program Start: Early Jun. The Summer 2023 Cohort consists of three phases.
* Training phase: Early-Late Jun
* Research phase: Jul 3-Aug 31
* Extension phase: ~ Sep-Dec
Training phase
A 4-week online training program (10 h/week for two weeks, then 40 h/week for two weeks). Scholars receive a stipend for completing this part of MATS (historically, $6k).
Scholars whose applications are accepted join the training phase. Mentors work on various alignment research agen |
b4470ca9-13a8-409f-8420-8c3773318ca8 | trentmkelly/LessWrong-43k | LessWrong | Covid: The Question of Immunity From Infection
Over and over and over again, I’ve been told we should expect immunity from infection to fade Real Soon Now, or that immunity isn’t that strong.
With several recent papers and the inevitable media misinterpretations of them, it’s time to take a close look at the findings.
This was originally part of the 1/21 update, but I’ve split it off so that it can be linked back to as needed, and to avoid cluttering up the weekly update.
Note that this post is not looking at any new strains that might provide immune escape. It’s studying infections during a period when such strains were not a substantial issue. This is distinct from concerns about strains with immune escape characteristics.
First up is this paper:
From this, of course, media headlines were things like “immunity only lasts five months,” but let’s ignore that and keep looking at the data, and see what the study actually says.
RESULTS SECTION:
Bottom line infection rates:
Finally:
I’ll pause here before I read the discussion section.
What I am seeing is that in probable infections, meaning infections that were serious enough and real enough to get confirmed, we see a 99% reduction, a large enough reduction that error in the original antibody/PCR tests might well account for either or both of the remaining two (2) cases.
Even looking at only symptomatic infections, we still get a 95% reduction.
Whereas if we only look at ‘there was a test that came back positive on people getting periodically tested, but without requiring any symptoms or verification’ we only get an 83% reduction.
Naturally, the public-facing articles all seem to quote the 83%, and ignore the 95% and 99%.
I’d also note that they nowhere attempt to control for the two most obvious differences between the two samples, which are:
1. The antibody positive sample knows they are antibody positive, and thus likely took fewer precautions across the board than they would have otherwise.
2. The antibody positive sample are the people w |
7cac7d3d-ded8-40d1-a92e-ef123ea1bc8e | trentmkelly/LessWrong-43k | LessWrong | Podcast discussing Hanson's Cultural Drift Argument
In the latest episode of Moral Mayhem, we discuss Robin Hanson's essays on cultural drift -- examining whether and how global monoculture and reduced existential pressures may lead to biologically maladaptive cultural values. I've seen at least one post on this topic on LW.
@Robin_Hanson will be joining us next week to continue this discussion. Feel free to comment with questions you'd be interested in having Hanson respond to. |
c661af39-9581-48c0-b545-7cef89104be2 | trentmkelly/LessWrong-43k | LessWrong | Positional kernels of attention heads
Introduction:
In this post, we introduce "positional kernels" that capture how attention is distributed based on position, independent of content, enabling more intuitive analysis of transformer computation. We define these kernels under the assumption that models use static head-specific positional embeddings for the keys.
We identify three distinct categories of positional kernels: local attention with sharp decay, slowly decaying attention, and uniform attention across the context window. These kernels demonstrate remarkable consistency across different query vectors and exhibit translation equivariance, suggesting that they serve as fundamental computational primitives in transformer architectures.
We analyze a class of attention heads with broad positional kernels and weak content dependence, which we call "contextual attention heads". We show they can be thought of as aggregating representations over the input sequence. We identify a circuit of 6 contextual attention heads in the first layer of GPT2-Small with shared positional kernels, allowing us to straightforwardly combine the output of these heads. We use our analysis to detect first-layer neurons in GPT2-Small that respond to specific training contexts through a primarily weight-based analysis, without running the model over a text corpus.
We expect the assumption on static positional embeddings to break down for later layers (due to the development of dynamic positional representations, such as tracking which number sentence you are in), so in later layers we view these kernels simply as heuristics, whereas in the first layer we can obtain an analytic decomposition.
We focus on GPT2-Small as a reference model. But our decomposition works on any model with additive positional embeddings (as well as T5 and ALiBi). We can also construct extensions of our analysis to RoPE models.
Attention decomposition:
We suppose the keys of the input at position i for head h on input x can be written as key[i](x |
307b0b2c-c9b7-4f38-8a4c-986bc80cdd2d | trentmkelly/LessWrong-43k | LessWrong | Figures!
1. Cases of limited standard context
In a scientific text, a good illustration is simple and necessary. Forget the second part for now. What is "simple"? Some things are better presented not as diagrams or graphs, but as detailed images that never are fully discussed because that would distract the reader.
For example, X-rays of broken bones or scans of stained electrophoretic gels are a step more detailed than doodles: they tell something about quality of the process of picturing. They can be blurry or crisp, maybe unevenly, and it matters. They have context, and people who routinely read the literature will be looking for it without thinking, while people who only start to do it need to pay conscious attention. It is teachable, up to an expected level of competency and perhaps beyond. Hard to say.
But textbooks (in my experience - YMMV) don't stop to define the proper context. You're supposed to see the break in the bone, or the molecular weight of the protein, and move on. You are expected to waste a few gels of your own and gain the "clarity of vision" by trial and error, even though the skill of recognizing untrustworthy images is very important and you might need it even if you've never personally done something similar. Especially if you work in a remote field (like plant hormone research) and don't even plan to do something similar, but happen to need to check something (like what paleontologists think of how leaves evolved).
At least you often can ask a friend what to look for. (Seems that Before Infrastructure, researchers just had to re-invent that wheel now and again. Are there any ways to get insight into it? A history of science source?)
1. Cases with some unknown amount of context
Happen.
It's when you don't know whom to ask. It's when you can't name what you see. It's when you suspect the picture goes with the text, but have no clue what the connection is. Lastly, it's when you stop staring at it and try to fit questions to force |
9c8af875-b640-484f-96b6-e36f18abe70d | trentmkelly/LessWrong-43k | LessWrong | Bundle your Experiments
The point of this post feels almost too obvious to be worth saying, yet I doubt that it's widely followed.
People often avoid doing projects that have a low probability of success, even when the expected value is high. To counter this bias, I recommend that you mentally combine many such projects into a strategy of trying new things, and evaluate the strategy's probability of success.
1.
Eliezer says in On Doing the Improbable:
> I've noticed that, by my standards and on an Eliezeromorphic metric, most people seem to require catastrophically high levels of faith in what they're doing in order to stick to it. By this I mean that they would not have stuck to writing the Sequences or HPMOR or working on AGI alignment past the first few months of real difficulty, without assigning odds in the vicinity of 10x what I started out assigning that the project would work.
>
> ...
>
> But you can't get numbers in the range of what I estimate to be something like 70% as the required threshold before people will carry on through bad times. "It might not work" is enough to force them to make a great effort to continue past that 30% failure probability. It's not good decision theory but it seems to be how people actually work on group projects where they are not personally madly driven to accomplish the thing.
I expect this reluctance to work on projects with a large chance of failure is a widespread problem for individual self-improvement experiments.
2.
One piece of advice I got from my CFAR workshop was to try lots of things. Their reasoning involved the expectation that we'd repeat the things that worked, and forget the things that didn't work.
I've been hesitant to apply this advice to things that feel unlikely to work, and I expect other people have similar reluctance.
The relevant kind of "things" are experiments that cost maybe 10 to 100 hours to try, which don't risk much other than wasting time, and for which I should expect on the order of a 10% chance of noti |
47993209-07ce-4d09-badd-f6b00242e93a | trentmkelly/LessWrong-43k | LessWrong | Intro to Multi-Agent Safety
We live in a world where numerous agents, ranging from individuals to organisations, constantly interact. Without the ability to model multi-agent systems, making meaningful predictions about the world becomes extremely challenging. This is why game theory is a necessary prerequisite for much of economic theory.
Most relevant agents today are either human (such as you, me, or Mira Murati), or made up of groups of humans (like Meta, or the USA). There are also non-human agents, like ChatGPT, though such agents currently lack the capability and influence of their human counterparts.
We are surrounded by examples of failure modes of multi-agent systems. War represents a breakdown in cooperation - typically between nations or political groups - resulting in destructive conflict. We even see examples of computerised agents interacting poorly. Take flash crashes for example, where the interaction of trading algorithms can lead to temporarily distorted market prices (such as in May 2010, when 3% was wiped off the S&P 500 Index in a handful of minutes).
If the forecasts of many are accurate, the world will soon contain many more intelligent agents, perhaps including those with super-human intelligence. It is important that we pre-emptively consider the implications of such a world.
Definitions
An agent is an entity that has goals, and can take actions in order to achieve its goals.
A self-driving car is an agent. Its goal is to safely transport its passengers to their destination. It can take actions such as accelerating, braking, and turning in order to achieve its goal.
Note that whilst we might expect an agent’s actions to help it achieve its goals, this is not necessarily true. The orthogonality thesis states that the intelligence of an agent is independent of its goals. An agent might just be really bad at choosing the best actions to achieve its goals. Conversely, an agent might have very basic goals and yet opt for actions that seem excessively complex to h |
0a7996e1-77dc-4d8b-941a-ed65193e7050 | StampyAI/alignment-research-dataset/arbital | Arbital | Almost all real-world domains are rich
The proposition that almost all real-world problems occupy rich domains, or *could* occupy rich domains so far as we know, due to the degree to which most things in the real world entangle with many other real things.
If playing a *real-world* game of chess, it's possible to:
- make a move that is especially likely to fool the opponent, given their cognitive psychology
- annoy the opponent
- try to cause a memory error in the opponent
- bribe the opponent with an offer to let them win future games
- bribe the opponent with candy
- drug the opponent
- shoot the opponent
- switch pieces on the game board when the opponent isn't looking
- bribe the referees with money
- sabotage the cameras to make it look like the opponent cheated
- [force some poorly designed circuits to behave as a radio](https://arbital.com/p/) so that you can break onto a nearby wireless Internet connection and build a smarter agent on the Internet who will create molecular nanotechnology and optimize the universe to make it look just like you won the chess game
- or accomplish whatever was meant to be accomplished by 'winning the game' via some entirely different path.
Since 'almost all' and 'might be' are not precise, for operational purposes this page's assertion will be taken to be, "Every [superintelligence](https://arbital.com/p/41l) with options more complicated than those of a [Zermelo-Fraenkel provability Oracle](https://arbital.com/p/70), should be taken from our subjective perspective to have an at least a 1/3 probability of being [cognitively uncontainable](https://arbital.com/p/9f)."
%%comment:
*(Work in progress)*
a central difficulty of one approach to Oracle research is to so drastically constrain the Oracle's options that the domain becomes strategically narrow from its perspective (and we can know this fact well enough to proceed)
gravitational influence of pebble thrown on Earth on moon, but this *seems* not usefully controllable because we *think* the AI can't possibly isolate *any* controllable effect of this entanglement.
when we build an agent based on our belief that we've found an exception to this general rule, we are violating the Omni Test.
central examples: That Alien Message, the Zermelo-Frankel oracle
%% |
951d25b3-35f1-4ea3-90ff-75d60eb05ff8 | trentmkelly/LessWrong-43k | LessWrong | [Link] The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas
> Researchers have recently argued that utilitarianism is the appropriate framework by which to evaluate moral judgment, and that individuals who endorse non-utilitarian solutions to moral dilemmas (involving active vs. passive harm) are committing an error. We report a study in which participants responded to a battery of personality assessments and a set of dilemmas that pit utilitarian and non-utilitarian options against each other. Participants who indicated greater endorsement of utilitarian solutions had higher scores on measures of Psychopathy, machiavellianism, and life meaninglessness. These results question the widely-used methods by which lay moral judgments are evaluated, as these approaches lead to the counterintuitive conclusion that those individuals who are least prone to moral errors also possess a set of psychological characteristics that many would consider prototypically immoral.
Bartels, D., & Pizarro, D. (2011). The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas Cognition, 121 (1), 154-161 DOI: 10.1016/j.cognition.2011.05.010
via charbonniers.org/2011/09/01/is-and-ought/
|
972ae6dc-6051-42bc-940f-155b58887bcc | trentmkelly/LessWrong-43k | LessWrong | Is it allowed to post job postings here? I am looking for a new PhD student to work on AI Interpretability. Can I advertise my position?
|
b0f44fdd-97f2-41fd-937a-f848f2400412 | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | Isn't the real concern bias?
Bias and discrimination[^kix.isxg00c8pkkw] in current and future AI systems are real issues that deserve serious consideration.
Bias in AI refers to systematic errors and distortions in the data and algorithms used to train AI systems that contribute to discrimination towards certain people or groups. Note that this use of the term *bias* is different from the one used [in statistics](https://datatron.com/what-is-statistical-bias-and-why-is-it-so-important-in-data-science/), which refers to a failure to correctly represent reality but not necessarily in a way that particularly affects certain groups.
The forms of bias in AIs today that are most [discussed by the media](https://research.aimultiple.com/ai-bias/) are the ones that lead to racism[^kix.ittf6kwrzjed] [^kix.g2arfb49284r] [^kix.b6d5v2hzfha] and sexism[^kix.pszx1himhusj] [^kix.s2geu7576tzi]. Other biases[^kix.w3u0fu3tehbo] [^kix.xfoacqyhc9tb] [^kix.kourepyl88nl] [^kix.n45syftkfvm6] [^kix.5zyadefx9cw1] have also been identified. These biases in AI are often a reflection of which societies are most heavily represented in the training data (such as English-speaking communities) as well as the biases within these societies.
Work to reduce existential risk is sometimes presented as [opposed to](https://www.vox.com/future-perfect/2022/8/10/23298108/ai-dangers-ethics-alignment-present-future-risk) work addressing bias in current systems, but the AI safety community’s focus on [existential risk](/?state=89LL&question=What%20are%20existential%20risks%20(x-risks)%3F) doesn’t mean it’s unsympathetic to concerns about bias. Yoshua Bengio, who has worked on [ethics](https://ieeexplore.ieee.org/ielx7/44/9035512/09035523.pdf) for many years, rhetorically asks: “[should we ignore future sea level rises from climate change because climate change is already causing droughts?”](https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-risks). Humanity can address both classes of problems if it decides to prioritize them both. Furthermore, some research areas such as [interpretability](/?state=8241&question=What%20is%20interpretability%20and%20what%20approaches%20are%20there%3F) are useful toward both goals. On the governance side, there is some overlap in the techniques and institutions to make AI fair and to make AI safe.
That being said, we choose to concentrate on existential risk because we perceive the dangers of superintelligence to be both [imminent](/?state=6275&question=How%20doomed%20is%20humanity%3F) and of the greatest importance.
[^kix.ittf6kwrzjed]: [Google Photos would tag black people as gorillas in 2015](https://www.bbc.com/news/technology-33347866), and [the problem was fundamentally hard to fix](https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai).
[^kix.g2arfb49284r]: [Racial discrimination appears in face recognition technology.](https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/)
[^kix.pszx1himhusj]: [Most of the data used to train AIs comes from men's lives.](https://www.forbes.com/sites/carmenniethammer/2020/03/02/ai-bias-could-put-womens-lives-at-riska-challenge-for-regulators/?sh=221da16b534f)
[^kix.s2geu7576tzi]: [Women with good credit scores can get lower limits on their credit cards.](https://www.washingtonpost.com/business/2019/11/11/apple-card-algorithm-sparks-gender-bias-allegations-against-goldman-sachs/)
[^kix.w3u0fu3tehbo]: [AI-assisted hiring practices might discriminate in problematic ways.](https://www.npr.org/2023/01/31/1152652093/ai-artificial-intelligence-bot-hiring-eeoc-discrimination)
[^kix.xfoacqyhc9tb]: [Data on older adults is sometimes excluded in training datasets for health-related AI applications.](https://www.who.int/publications/i/item/9789240040793)
[^kix.kourepyl88nl]: [AI correctly tags western bridal dresses as such, but not brides from other cultures.](https://www.nature.com/articles/d41586-018-05707-8)
[^kix.b6d5v2hzfha]: [AI-assisted predictive policing and bail setting exhibits racial biases (among others).](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing)
[^kix.n45syftkfvm6]: [AI-powered speech and movement tracking software used in hiring could be prejudiced against disabled people.](https://news.bloomberglaw.com/daily-labor-report/disability-bias-should-be-addressed-in-ai-rules-advocates-say)
[^kix.isxg00c8pkkw]: These fit within the larger concepts of *[AI Ethics](https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence)* and *[FATE](https://www.hcii.cmu.edu/research-areas/fairness-accountability-transparency-and-ethics-fate)* (fairness, accountability, transparency, ethics).
[^kix.5zyadefx9cw1]: [AI-powered gender and sexual orientation recognition might discriminate against people who don't fit well in the gender binary or might be used to actively discriminate LGBTQ people.](https://www.theverge.com/2021/4/14/22381370/automatic-gender-recognition-sexual-orientation-facial-ai-analysis-ban-campaign) |
0095ae08-2612-4bdd-aaf5-8d9ca5c6d655 | trentmkelly/LessWrong-43k | LessWrong | February 2018 Media Thread
This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
Rules:
* Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
* If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
* Please post only under one of the already created subthreads, and never directly under the parent media thread.
* Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
* Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules. |
c7547154-31ad-42fd-91cd-022c1876bc0b | trentmkelly/LessWrong-43k | LessWrong | Fighting a Rearguard Action Against the Truth
When we last left Eliezer2000, he was just beginning to investigate the question of how to inscribe a morality into an AI. His reasons for doing this don't matter at all, except insofar as they happen to historically demonstrate the importance of perfectionism. If you practice something, you may get better at it; if you investigate something, you may find out about it; the only thing that matters is that Eliezer2000 is, in fact, focusing his full-time energies on thinking technically about AI morality; rather than, as previously, finding an justification for not spending his time this way. In the end, this is all that turns out to matter.
But as our story begins—as the sky lightens to gray and the tip of the sun peeks over the horizon—Eliezer2001 hasn't yet admitted that Eliezer1997 was mistaken in any important sense. He's just making Eliezer1997's strategy even better by including a contingency plan for "the unlikely event that life turns out to be meaningless"...
...which means that Eliezer2001 now has a line of retreat away from his mistake.
I don't just mean that Eliezer2001 can say "Friendly AI is a contingency plan", rather than screaming "OOPS!"
I mean that Eliezer2001 now actually has a contingency plan. If Eliezer2001 starts to doubt his 1997 metaethics, the Singularity has a fallback strategy, namely Friendly AI. Eliezer2001 can question his metaethics without it signaling the end of the world.
And his gradient has been smoothed; he can admit a 10% chance of having previously been wrong, then a 20% chance. He doesn't have to cough out his whole mistake in one huge lump.
If you think this sounds like Eliezer2001 is too slow, I quite agree.
Eliezer1996-2000's strategies had been formed in the total absence of "Friendly AI" as a consideration. The whole idea was to get a superintelligence, any superintelligence, as fast as possible—codelet soup, ad-hoc heuristics, evolutionary programming, open-source, anything that looked like it might work |
b8efa683-a296-4db0-a699-09bd7f31d53e | trentmkelly/LessWrong-43k | LessWrong | Narrative, self-image, and self-communication
Related to: Cached selves, Why you're stuck in a narrative, The curse of identity
Outline: Some back-story, Pondering the mechanics of self-image, The role of narrative, Narrative as a medium for self-communication.
tl;dr: One can have a self-image that causes one to neglect the effects of self-image. And, since we tend to process our self-images somewhat in the context of a narrative identity, if you currently make zero use of narrative in understanding and affecting how you think about yourself, it may be worth adjusting upward. All this seems to have been the case for me, and is probably part of what makes HPMOR valuable.
Some back-story
Starting when I was around 16 and becoming acutely annoyed with essentialism, I prided myself on not being dependent on a story-like image of myself. In fact, to make sure I wasn't, I put a break command in my narrative loop: I drafted a story in my mind about a hero who was able to outwit his foes by being less constrained by narrative than they were, and I identified with him whenever I felt a need-for-narrative coming on. Batman's narrator goes for something like this in the Dark Knight when he <select for spoiler-> abandons his heroic image to take the blame for Harvey Dent's death.
I think this break command was mostly a good thing. It helped me to resolve cognitive dissonance and overcome the limitations of various cached selves, and I ended up mostly focussed on whether my beliefs were accurate and my desires were being fulfilled. So I still figure it's a decent first-order correction to being over-constrained by narrative.
But, I no longer think it's the only decent solution. In fact, understanding the more subtle mechanics of self-image — what affects our self schemas, what they affect, and how — was something I neglected for a long time because I saw self-image as a solved problem. Yes, I developed a cached view of myself as unaffected by self-image constraints. I would have been embarassed to notice such dependen |
8e82e0fd-7020-470f-968e-d2d642318e93 | trentmkelly/LessWrong-43k | LessWrong | In praise of heuristics
We’ll get there in the end, bear with me.
Introduction to ZD strategies in IPD
Feel free to skip if you’re already familiar with ZD strategies.
In the iterated prisoner’s dilemma (IPD) a zero determinant (ZD) strategy is one which forces your opponent’s winnings to be a linear relation to your own winnings. These strategies can take either generous or extortionate forms.
Think of it as a modified version of tit-for-tat.
***
A Generous ZD strategy still always respond to C with C but will sometimes also respond to D with a C (it sometimes fails to punish). With "standard" PD utilities (T=0, R=-1, P=-2, S=-3) my opponent gains 1 utility by defecting. If I defect back in retaliation, I cost him 2 units of utility. If I defect back with probability 0.7, on average I cost him 1.4 units of utility. This still means that defecting is disadvantageous for my opponent (loss of 0.4 utility) but not quite as disadvantageous as it would be if I was playing pure tit-for-tat (loss of 1 utility).
This gets slightly more complex when you don't have constant gaps between T, R, P and S but the principle remains the same.
If he defects at all, my opponent will end up gaining more utility than me, but less than he would have got if he had co-operated throughout.
Advantages of GZD are:
1. Total utility isn’t damaged as much by accidental defections as it is in pure tit-for-tat.
2. It won’t get caught in endless C-D, D-C, C-D, D-C as tit-for-tat can.
***
On the other hand, Extortionate ZD always responds to D with D but also sometimes responds to C with D. Provided I don’t respond to C with D too often, it is still advantageous to my opponent to play C (in terms of their total utility).
If my opponent co-operates at all I'll end up with more utility than him. If he gives in and plays C all the time (to maximise his own utility) I can achieve a better utility than I would with C-C. .
The main disadvantage of EZD in evolutionary games is that it defects against itself.
For b |
0549b473-dcf4-4a85-b1c1-51d85c876ec5 | trentmkelly/LessWrong-43k | LessWrong | Grokking “Forecasting TAI with biological anchors”
Notes:
* I give a visual explanation of Ajeya Cotra’s draft report, Forecasting TAI with biological anchors (Cotra, 2020), summarising the key assumptions, intuitions, and conclusions
* The diagrams can be found here – you can click on some of the boxes to get linked to the part of the report that you’re interested in[1]
Thanks to Michael Aird, Ashwin Acharya, and the Epoch team for suggestions and feedback! Special thanks to Jaime Sevilla and Ajeya Cotra for detailed feedback.
Executive Summary
Click here to skip the summary
Ajeya Cotra’s biological anchors framework attempts to forecast the development of Transformative AI (TAI) by treating compute as a key bottleneck to AI progress. This lets us focus on a concrete measure (compute, measured in FLOP) as a proxy for the question “when will TAI be developed?” Given this, we can decompose the question into two main questions:
1. 2020 training compute requirements: How much compute will we need to train TAI, using 2020 Machine Learning architectures and algorithms?
2. Affordability of compute: How likely is it that we’ll be able to afford the compute required to train TAI in a particular year?
The second question can be tackled by turning to existing trends in three main factors: (1) algorithmic progress e.g. improved algorithmic efficiency, (2) decreasing computation prices e.g. due to hardware improvements, and (3) increased willingness to spend on compute.
The first question is significantly trickier. Cotra attempts to answer it by treating the brain as a “proof of concept” that the “amount of compute” used to “train” the brain can train a general intelligence. This lets her relate the question “how much compute will we need to train TAI?” with the question “how much ‘compute’ was used to ‘train’ the human brain?”. However, there’s no obvious single interpretation for the latter question, so Cotra comes up with six hypotheses for what this corresponds to, referring to these hypotheses as “biological |
d949d54f-fa63-40c2-a0ca-bb4b3d41559e | trentmkelly/LessWrong-43k | LessWrong | LessWrong merch?
Does the LessWrong team sell any merch, or have you considered doing it? Things like shirts, stickers, cute shoggoth plushies?
The reason I'm asking is that branded merch would be an easy way to let people find each other for rationalist meetups. If you see someone wearing a LessWrong shirt, you're probably in the right place[1].
I'm thinking of just bringing one of the books and leaving it on the table, but it seemed worth asking if there's anything else, especially since Lightcone seems to need money (who knows, maybe you'll sell 10,000 plushies and solve all of your funding problems!).
1. ^
This is inspired by a local meetup, where the organizer always has a giant stuffed Pikachu with her. |
0c291994-5c39-4aff-af7b-cf699ceb8345 | trentmkelly/LessWrong-43k | LessWrong | TAI?
It looks like people around here are now using the acronym TAI with the accompanying definition "transformative AI is AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution."
Is there some kind of consensus that this hasn't already happened?
Because my current belief is that if Moore's laws stopped tomorrow and there was absolutely 0 innovation in AI beyond what GANs and Transformers give us, the social implications are already of that magnitude, they're just not "evenly distributed".
Here's what I think a world where our current level of AI becomes evenly distributed looks like:
* AI is built into every product imaginable and used for almost every task.
* Most labor (including almost physical labor) has been replaced by robots. The jobs that remain consist of research and application of AI and robotics.
* Note: jobs like entertainer, teacher, philosopher, historian, YouTube influencer, etc still exist but these are voluntary in the sense that they do not contribute to providing for the ongoing physical needs of humankind.
* Universal Basic Income means the vast majority of people no longer need to work.
* "Popular" entertainment is generated using AI and individualized to the taste of the viewer. That is to say human scripted TV, movies and video games still exist, but in the same way that plays exist in our current world.
* Space travel becomes routine and humanity is a multi planetary species (I'm not really sure we need AI for this one, but I bet people on Mars will be using robots to clean their solar farms and watching AI generated content instead of waiting for the 30 minute delay to download media from Earth).
So for those who don't think TAI exists, is the claim:
1. The story you've told requires innovations that do not yet exist
2. The story you've told doesn't count as FAI
3. Something else?
Specifically, "If Moore's law stopped tomorrow and there are no more 'breakthro |
2d9abb91-b2e1-4ae5-86e9-116b4131cb81 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Theories of Modularity in the Biological Literature
Introduction
============
This post is part of a sequence describing our team’s research on selection theorems for modularity, as part of this year's [AI Safety Camp](https://aisafety.camp/), under the mentorship of John Wentworth. Here, we provide some background reading for the discussion of modularity that will follow.
As we describe in more detail in our [project intro,](https://www.lesswrong.com/posts/XKwKJCXgSKhSr9bZY/project-intro-selection-theorems-for-modularity) the motivating question of modularity that we started with (which is described in John’s post on the [Evolution of Modularity](https://www.lesswrong.com/posts/JBFHzfPkXHB2XfDGj/evolution-of-modularity)) is ***why does evolution seem to have produced modular systems (e.g. organs and organ systems), but current ML systems (even genetic algorithms, which are consciously fashioned off evolutionary mechanisms) are highly non-modular?*** So far, most of our research has focused on the idea of **modularly varying goals (MVG)**, but this is not the only proposed cause of modularity in the biological literature. This post serves as a literature review, with a few brief words on how we are thinking about testing some these hypotheses. Subsequent posts will discuss our team’s research agenda in more detail.
Theories of modularity
======================
Modularity in the environment selects for modular systems
---------------------------------------------------------
**Basic idea**: If we have an environment which contains a variety of subproblems and which changes rapidly in highly modular ways, this might select for a modular internal design to better adapt to these problems. In the literature, this usually takes the form of “modularly varying goals” (MVG). If we vary one aspect of the agent’s goal, but keep the rest of the goal and the environment constant, then this might produce a selection pressure for modular systems in correspondence with this modular goal structure.
**Biological modularity**: Many examples of modularity in biology seem to have a pretty clear correspondence with goals in the environment. For instance, type of terrain and availability of different food sources might vary somewhat independently between environments (or even within the same environment), and correspondingly we see the evolution of modular systems specifically designed to deal with one particular task (muscular system, digestive system). One can make a similar argument for reproduction, respiration, temperature regulation, etc.
**Evidence**: The main paper on this idea from the biological literature is Kashtan & Alon’s 2005 paper. Their methodology is to train a system (they use both neural networks and genetic algorithms) to learn a particular logic function consisting of a series of logic gates, and after a certain number of steps / generations they vary one particular part of the logic gate setup, while keeping the rest the same. Their results were that modularity (as measured by the Q-score) was selected for, and the network motifs found in the evolved networks had a visible correspondence to the modular parts of the goal. We will discuss this paper in more detail in later sections of this post.
**Testing this idea**: We started by trying to replicate the results of the 2005 paper. It was somewhat challenging because of the now outdated conventions used in the paper, but our tentative conclusions for now are that the paper doesn’t seem to replicate. Our plans if the replication was a success would have been to generalise to more complicated systems (one proposal was to train a CNN to recognise two digits from an MNIST set and perform an arithmetic operation on them, with the hope that it would learn a modular representation of both the individual digits and of the operator). However, this line of research has been put on hold until we get to the bottom of the null result from the Kashtan & Alon paper’s replication.
For a fuller discussion of how we’ve been testing this idea (and some of our ideas as to why the replication failed), please see [this post](https://www.lesswrong.com/posts/XKwKJCXgSKhSr9bZY/project-intro-selection-theorems-for-modularity).
Specialisation drives the evolution of modularity
-------------------------------------------------
**Basic idea**: MVG can be viewed as a subcase of this theory, because the environment fluctuating in a modular way is one possible explanation for why selection might favour specialisation. However, it isn’t the only explanation. Even in a static environment, evolution is a dynamic process, and a modular organism will be more easily able to evolve changes that improve its ability to deal with a particular aspect of its environment, without this affecting other modules (and hence having detrimental impacts on the rest of the organism). Hence specialisation (and hence modularity) might still be selected for.
**Biological modularity**: This theory was created partly from observation of a problem with MVG: not all environments seem to fluctuate in obviously modular ways (or even to fluctuate much at all). However, a lot of the biological intuitions of MVG carry over into this theory; namely the correspondence between different aspects of the environment that lend themselves well to specialisation, and different modular parts of the organism’s internal structure.
**Evidence**: The paper [Specialisation Can Drive the Evolution of Modularity](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000719) looked into regulatory gene networks (simulated using genetic algorithms), and examined the conditions under which they form modules. It finds that the main driver of modularity is selection for networks with multiple stable gene activity patterns that have considerable overlap. In other words, many of the genes between the patterns are identical. However, this paper only examined one particular evolutionary setting, and it’s unclear to which extent the results can be viewed as representative of a common principle of biological evolution. There don’t seem to have been any follow-up studies on this research agenda.
**Testing this idea**: We have no current plans to test this idea, since we think that MVG captures the core parts of it.
Direct selection for modularity, when it can perform a task / enable an adaptation that would be infeasible otherwise
---------------------------------------------------------------------------------------------------------------------
**Basic idea**: Modularity may be selected for when it breaks a developmental constraint, and thereby makes some beneficial adaptation possible that would be virtually impossible otherwise.
**Biological modularity**: It’s unclear how to look for evidence for this in the biological record, because it doesn’t seem obvious how the world would be different depending on whether this was true or false. However, one form this could take that seems highly plausible in a biological context is evolving a particular architectural change at very early stages of ontogenesis, which enforces modularity (more on this in the “evidence” section).
**Evidence**: A well-known phenomenon in neural networks is that of “neural interference”, whereby a system designed to learn a solution to two separate tasks inevitably develops connections between the two, despite these serving no practical purpose. As an example, consider a network performing image recognition on two different images, with the aim of outputting two different classifications: clearly no connections are required between the two tasks, but these will almost certainly be created anyway during the search for a solution, and they are hard to “un-learn”. The 2000 paper [Evolving Modular Architectures for Neural Networks](https://web-archive.southampton.ac.uk/cogprints.org/1298/3/evolmod.pdf) explores the idea of both the network architecture and the parameters being genetically inherited - that way organisms can avoid this interference altogether.
However, it appears to us that this methodology may be problematic in that the researcher is deciding on the network structure before the study is even run. Looking at the above diagram, it’s fairly obvious to us that the design on the right will perform better, but this study provides no evolutionary model as to how the modular architecture on the right might be selected for by evolution in the first place. It might seem intuitive to humans to choose the modular decomposition on the right, but for evolution to find that decomposition in particular is where [all the work gets done](https://www.lesswrong.com/posts/X2AD2LgtKgkRNPj2a/privileging-the-hypothesis), so this theory doesn’t seem to present an actual causal explanation.
**Testing this idea**: If this hypothesis is correct, then there should be certain tasks that a non-modular network simply fails to solve, and so any network trained on this task will either find a modular solution or fail to converge. However, empirically this seems like it isn’t the case (the question “why aren’t the outputs of genetic algorithms modular” was the motivating one for much of this project). Whether this is because the hypothesis is false, or because ML is missing some key aspect of evolutionary dynamics, is unclear.
Modularity arises from a selection pressure to reduce connection costs in a network
-----------------------------------------------------------------------------------
**Basic idea**: If we apply selection pressure to reduce connection costs, this could perform a kind of regularisation that leads to a modular structure, where the only learned connections are the ones that are most important for performing a task. Furthermore, connection costs between different nodes in the network scaling with the “distance” between them might encourage a kind of “locality”, which might further necessitate modularity.
**Biological modularity**: Despite being intuitively appealing, this idea looks a bit weaker when put in a biological context. There are some important examples of connections being costly (e.g. the human brain), but for this to be the driving factor leading to modularity, we should expect to see connection costs playing a leading role in virtually all forms of signalling across all organisms where modularity is to be found, and this doesn’t always seem to be the case (e.g. signalling molecules in bacteria). However, it’s possible this factor contributes to modularity along with other factors like MVG, even though it’s not the driving factor (e.g. see results from the paper below).
The locality argument seems a lot more robust to biological evidence. Where there is modularity, we very often see it structured in a local way, with networks of cells performing a specific task being clustered together into a relatively small region of space (e.g. organs). Cells interact with each other in a highly localised way, e.g. through chemical and mechanical signals. This means that any group of cells performing a task which requires a lot of inter-cellular interaction is size-limited, and may have to learn to perform a specialised task.
This argument also helps explain why modularity might be observable to humans. Consider a counterfactual world in which locality didn’t apply: there would be very little visible commonality between different organisms, and so even if modularity was present we would have a hard time identifying it ([Chris Olah uses a similar idea](https://distill.pub/2020/circuits/zoom-in/) as an analogy when discussing the universality claim for circuits in neural networks).
**Evidence**: The 2013 paper [The evolutionary origins of modularity](https://royalsocietypublishing.org/doi/10.1098/rspb.2012.2863) explores this idea. Their methodology was to compare two different networks: “performance alone” (PA) and “performance and connection costs” (P&CC). The task was a form of image recognition the authors have called the “retina problem”: the networks were presented with an 8-pixel array separated into two 2x2 blocks (left and right), and tasked to decide whether a certain type of object appeared in each block. This task exhibited a natural modular decomposition. It could also be varied in a modular way: by changing the task from deciding whether an image appeared in both blocks (”L-AND-R environment”) to whether an image appeared in either block (”L-OR-R”). In this way, they could test pressure to reduce costs against the MVG hypothesis. The results were that the P&CC networks were both more modular than the PA networks, and significantly outperformed them after a finite number of iterations, but that the best PA networks outperformed the best P&CC networks (explaining why performance alone might not be sufficient to select for modularity). They also found that adding MVG further increased both the outperformance and the modularity.
Another paper exploring this hypothesis is [Computational Consequences of a Bias toward Short Connections](https://sci-hub.hkvisa.net/10.1162/jocn.1992.4.4.323) (1992). This paper focuses on architectural constraints selecting for fewer/shorter connections, rather than imposing an explicit penalty. However, it doesn’t use particularly advanced machine learning methods, and focuses more narrowly on the human brain than on general modularity across evolved systems, making it less relevant for our purposes.
**Testing this idea**: We plan to introduce this type of selection pressure in our replication of the [Kashtan & Alon 2005 paper](https://www.pnas.org/content/pnas/102/39/13773.full.pdf). That way we can test it against the MVG hypothesis, and see whether the two combined are enough to produce modularity. We will experiment with different cost measures, although to the extent that this is a strong causal factor leading to modularity, we would expect the results to be relatively robust to different ways of calculating connection costs (this was also what the 2013 paper found).
We also have plans to test the locality idea, by instantiating neurons in some kind of metric space (e.g. 2D Euclidean space, or just using some kind of indexing convention), and penalising connection costs proportionally to their distance.
*For our team's most recent post, please see [here](https://www.lesswrong.com/posts/99WtcMpsRqZcrocCd/ten-experiments-in-modularity-which-we-d-like-you-to-run) .* |
e96d32a2-0afe-401e-9bb6-089696195be7 | trentmkelly/LessWrong-43k | LessWrong | Proof as mere strong evidence
Proofs at first exude a strong aroma of certainty.
On closer inspection though, more subtle notes emerge. Despite the dreams of formalists, the overwhelming majority of proofs in use and circulation are not minute low-level logical derivations, but rather intuitive high-level explanations. Some of which turn out to be false even after peer-review. Similarly, the level of details varies enormously across accepted proofs.
And yet it works, most of the time. Enough to build massive and intricate systems, and enough for millions of daily practical applications in the physical world.
What gives?
I think this mystery dissolves if we refuse to give proofs a special status. Instead of being the one and only form of certainty accessible to man, they make far more sense as “just” a strong form of evidence in the traditional scientific sense.
If proof is only evidence, then obviously different contexts call for different levels of details. We accept many geometric or intuitive proofs of the Pythagorean theorem, but ask for detailed checks on tens-of-pages-long advanced algebraic geometry proofs.
The other positive externality of desacralizing proofs comes in the form of methodological flourishing. Indeed, if proofs are merely sources of evidence, then any other way to generate evidence about formal statements is as valid, as long as the total evidence ends up approximately the same.
Or to put it more provocatively: proofs are not necessary if we find alternative ways of gathering the same amount of evidence.
Nowhere has this rich ground been better exploited than in computational complexity theory, the field studying the resources needed (time, memory, coin flips) to solve computational problems. For you see, conjectures in complexity theory are atrociously hard to prove. They’re some of the hardest across all of mathematics. But complexity theorists still want to advance their field, and so they have evolved an opportunistic methodology that accumulates evidence throu |
8994345c-4515-42c7-94f8-469c38a100c9 | trentmkelly/LessWrong-43k | LessWrong | Germs and Politics
2020 was marked by a series of serious international tensions. We had a geopolitical crisis in the Middle East due to the assassination of Iranian general Qasem Soleimani and threats of war between the United States and Iran, an intensification of the trade war between the U.S and China and a price war between Saudi Arabia and Russia on the price barrel of oil on the international market.
However, no such crisis will be as impactful for the current generation as the coronavirus pandemic. Since it appeared in China, SARS-Cov-2, also called COVID-19, has spread around the world extremely quickly due to the intercommunication of the global value chains and population flows and caused a complete collapse of the globalized economic system in which we lived until then.
Many people are concerned about the long-term effects of this pandemic. How will it impact employment? How will it affect the way we organize the international economy? How will the global value chains look after this? How will people react after the pandemic? What will be the social and cultural impacts of this disease?
In any case, what is taken for granted is that the world will be quite different after the end of the pandemic.
Most people generally tend to focus on the economic consequences. However, what few people seem to be noticing is that the virus can have effects that go beyond our physical and financial health. Germs and parasites can affect the way we organize our public life, how our legal and political institutions work and, in short, the entire social order of a nation.
It was not uncommon, already in March, the appearance of "coronavirus dictatorships" across the globe. Here I am not making a typical hysterical libertarian reference that lockdowns are new forms of techno-fascism. In several countries, political leaders have acquired emergency powers, almost total, in the face of the danger posed by the disease. In Hungary, the parliament endorsed Prime Minister Viktor Orbán to g |
415998e2-afda-49d3-ae33-c89762af6138 | trentmkelly/LessWrong-43k | LessWrong | Suspended animation/hibernation: would you use it if it were available?
This TED talk on suspended animation by Mark Roth captivated me when I first saw it last year, and I have been eagerly trying to follow developments on the subject ever since.
My question for the lesswrong crowd is, if suspended animation were available today in such a way that you could "skip" years, decades or even centuries in a de-animated state while not aging or aging only slightly, would you go for it? Would it be the rational thing to do, or would it be more rational to wait until you really needed it, for example, when you are nearing end of life or if you get diagnosed with an incurable, terminal illness?
What about friends and family? I can see people not wanting to use it because it would mean "waking up" in a world where the people they care about no longer exist. This is also an issue with cryonics, and I believe was the main reason Heinlein chose to be cremated rather than cryopreserved. However, the moral dilemma is more complicated, because with cryonics, it happens after you're dead, but with suspended animation, it must be done while you're still alive, so unless you get de-animated towards the end of your life, you've made a conscious decision to spend your remaining time on Earth in the future with other people rather than with those around you in the present. However, I could also see it being easier to sell people on than cryonics, if it worked as advertised, because it would be more of a sure thing, and might even be cheaper. I could even see it becoming so popular that a large percentage of the population opts out of their present life and the problems it presents them, hoping to "sleep" their way to a better future, causing problems of population imbalance.
I'm bringing this subject up here because I find the concept fascinating, and it doesn't seem to get nearly as much attention as cryonics, yet it will be much more disruptive if/when it ever arrives. |
ec059957-90c0-43f0-8e40-b17087817b27 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Superintelligence 16: Tool AIs
*This is part of a weekly reading group on [Nick Bostrom](http://www.nickbostrom.com/)'s book, [Superintelligence](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111). For more information about the group, and an index of posts so far see the [announcement post](/lw/kw4/superintelligence_reading_group/). For the schedule of future topics, see [MIRI's reading guide](https://intelligence.org/wp-content/uploads/2014/08/Superintelligence-Readers-Guide-early-version.pdf).*
---
Welcome. This week we discuss the sixteenth section in the [reading guide](https://intelligence.org/wp-content/uploads/2014/08/Superintelligence-Readers-Guide-early-version.pdf): ***Tool AIs***. This corresponds to the last parts of Chapter Ten.
This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
**Reading**: : “Tool-AIs” and “Comparison” from Chapter 10
---
Summary
=======
1. ***Tool AI:*** an AI that is not 'like an agent', but more like an excellent version of contemporary software. Most notably perhaps, it is not goal-directed (p151)
2. Contemporary software may be safe because it has low capability rather than because it reliably does what you want, suggesting **a very smart version of contemporary software would be dangerous** (p151)
3. Humans often want to figure out how to do a thing that they don't already know how to do. Narrow AI is already used to search for solutions. **Automating this search seems to mean giving the machine a goal** (that of finding a great way to make paperclips, for instance). That is, just carrying out a powerful search seems to have many of the problems of AI. (p152)
4. **A machine intended to be a tool may cause similar problems to a machine intended to be an agent**, by searching to produce plans that are perverse instantiations, infrastructure profusions or mind crimes. It may either carry them out itself or give the plan to a human to carry out. (p153)
5. **A machine intended to be a tool may have agent-like parts**. This could happen if its internal processes need to be optimized, and so it contains strong search processes for doing this. (p153)
6. If tools are likely to accidentally be agent-like, **it would probably be better to just build agents on purpose** and have more intentional control over the design. (p155)
7. Which castes of AI are safest is unclear and depends on circumstances. (p158)
Another view
============
Holden prompted discussion of the Tool AI in 2012, in one of several [Thoughts on the Singularity Institute](/lw/cbs/thoughts_on_the_singularity_institute_si/):
...Google Maps is a type of artificial intelligence (AI). It is far more intelligent than I am when it comes to planning routes.
Google Maps - by which I mean the complete software package including the display of the map itself - does not have a "utility" that it seeks to maximize. (One could fit a utility function to its actions, as to any set of actions, but there is no single "parameter to be maximized" driving its operations.)
Google Maps (as I understand it) considers multiple possible routes, gives each a score based on factors such as distance and likely traffic, and then displays the best-scoring route in a way that makes it easily understood by the user. If I don't like the route, for whatever reason, I can change some parameters and consider a different route. If I like the route, I can print it out or email it to a friend or send it to my phone's navigation application. Google Maps has no single parameter it is trying to maximize; it has no reason to try to "trick" me in order to increase its utility.
In short, Google Maps is not an *agent*, taking actions in order to maximize a utility parameter. It is a *tool*, generating information and then displaying it in a user-friendly manner for me to consider, use and export or discard as I wish.
Every software application I know of seems to work essentially the same way, including those that involve (specialized) artificial intelligence such as Google Search, Siri, Watson, Rybka, etc. Some can be put into an "agent mode" (as Watson was on Jeopardy!) but all can easily be set up to be used as "tools" (for example, Watson can simply display its top candidate answers to a question, with the score for each, without speaking any of them.)
The "tool mode" concept is importantly different from the possibility of [Oracle AI](http://www.aleph.se/papers/oracleAI.pdf) sometimes discussed by SI. The discussions I've seen of Oracle AI present it as an Unfriendly AI that is "trapped in a box" - an AI whose intelligence is driven by an explicit utility function and that humans hope to control coercively. Hence the discussion of ideas such as the [AI-Box Experiment](http://yudkowsky.net/singularity/aibox). A different interpretation, given in [Karnofsky/Tallinn 2011](http://groups.yahoo.com/group/givewell/message/287), is an AI with a carefully designed utility function - likely as difficult to construct as "Friendliness" - that leaves it "wishing" to answer questions helpfully. By contrast with both these ideas, Tool-AGI is not "trapped" and it is not Unfriendly or Friendly; it has no motivations and no driving utility function of any kind, just like Google Maps. It scores different possibilities and displays its conclusions in a transparent and user-friendly manner, as its instructions say to do; it does not have an overarching "want," and so, as with the specialized AIs described above, while it may sometimes "misinterpret" a question (thereby scoring options poorly and ranking the wrong one #1) there is no reason to expect intentional trickery or manipulation when it comes to displaying its results.
Another way of putting this is that a "tool" has an underlying instruction set that conceptually looks like: "(1) Calculate which action A would maximize parameter P, based on existing data set D. (2) Summarize this calculation in a user-friendly manner, including what Action A is, what likely intermediate outcomes it would cause, what other actions would result in high values of P, etc." An "agent," by contrast, has an underlying instruction set that conceptually looks like: "(1) Calculate which action, A, would maximize parameter P, based on existing data set D. (2) Execute Action A." In any AI where (1) is separable (by the programmers) as a distinct step, (2) can be set to the "tool" version rather than the "agent" version, and this separability is in fact present with most/all modern software. Note that in the "tool" version, neither step (1) nor step (2) (nor the combination) constitutes an instruction to maximize a parameter - to describe a program of this kind as "wanting" something is a category error, and there is no reason to expect its step (2) to be deceptive.
I elaborated further on the distinction and on the concept of a tool-AI in [Karnofsky/Tallinn 2011](http://groups.yahoo.com/group/givewell/message/287).
This is important because **an AGI running in tool mode could be extraordinarily useful but far more safe than an AGI running in agent mode**...
Notes
1. While [Holden's post](/lw/cbs/thoughts_on_the_singularity_institute_si/) was probably not the first to discuss this kind of AI, it prompted many responses. [Eliezer](/lw/cze/reply_to_holden_on_tool_ai/) basically said that non-catastrophic tool AI doesn't seem that easy to specify formally; that even if tool AI is best, agent-AI researchers are probably pretty useful to that problem; and that it's not so bad of MIRI to not discuss tool AI more, since there are a bunch of things other people think are similarly obviously in need of discussion. Luke basically [agreed](/lw/di4/reply_to_holden_on_the_singularity_institute/) with Eliezer. [Stuart](/lw/cfd/tools_versus_agents/) argues that having a tool clearly communicate possibilities is a hard problem, and talks about some other problems. Commenters say many things, including [that](/lw/cbs/thoughts_on_the_singularity_institute_si/6jyi) only one AI needs to be agent-like to have a problem, and [that](/lw/cbs/thoughts_on_the_singularity_institute_si/6kwn) it's not clear what it means for a powerful optimizer to not have goals.
2. A problem often brought up with powerful AIs is that when tasked with communicating, they will try to deceive you into liking plans that will fulfil their goals. It seems to me that you can avoid such deception problems by using a tool which searches for a plan you could do that would produce a lot of paperclips, rather than a tool that searches for a string that it could say to you that would produce a lot of paperclips. A plan that produces many paperclips but sounds so bad that you won't do it still does better than a persuasive lower-paperclip plan on the proposed metric. There is still a danger that you just won't notice the perverse way in which the instructions suggested to you will be instantiated, but at least the plan won't be designed to hide it.
3. Note that in computer science, an '[agent](http://en.wikipedia.org/wiki/Agent_architecture)' means something other than 'a machine with a goal', though it seems they haven't settled on exactly what [[some](http://www.csee.umbc.edu/~finin/papers/atal.pdf) [example](http://coltech.vnu.edu.vn/httt/media/courses/AI++/Tai%20lieu/TLTK.pdf) [efforts (pdf)](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CB4QFjAA&url=http%3A%2F%2Fhrcak.srce.hr%2Ffile%2F32862&ei=yv2hVJjCD4KxogS93IDgCw&usg=AFQjCNHoHK-nUE3WkhsvqLUf1RCO4u7lXg&sig2=z7Iuc6LYgtnvHAMp6cLnmg&bvm=bv.82001339,d.cGU&cad=rja)].

Figure: A '[simple reflex agent](http://en.wikipedia.org/wiki/Multi-agent_system)' is not goal directed (but kind of looks goal-directed: [one in action](https://www.youtube.com/watch?v=fS1Ad6b1caY))
4. Bostrom seems to assume that a powerful tool would be a search process. This is related to the idea that intelligence is an '[optimization process](http://wiki.lesswrong.com/wiki/Optimization_process)'. But this is more of a definition than an empirical relationship between the kinds of technology we are thinking of as intelligent and the kinds of processes we think of as 'searching'. Could there be things that merely contribute massively to the intelligence of a human - such that we would think of them as very intelligent tools - that naturally forward whatever goals the human has?
One can imagine a tool that is told what you are planning to do, and tries to describe the major consequences of it. This is a search or optimization process in the sense that it outputs something improbably apt from a large space of possible outputs, but that quality alone seems not enough to make something dangerous. For one thing, the machine is not selecting outputs for their effect on the world, but rather for their accuracy as descriptions. For another, the process being run may not be an actual 'search' in the sense of checking lots of things and finding one that does well on some criteria. It could for instance perform a complicated transformation on the incoming data and spit out the result.
5. One obvious problem with tools is that they maintain humans as a component in all goal-directed behavior. If humans are some combination of slow and rare compared to artificial intelligence, there may be strong pressure to automate all aspects of decisionmaking, i.e. use agents.
In-depth investigations
=======================
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's [list](http://lukemuehlhauser.com/some-studies-which-could-improve-our-strategic-picture-of-superintelligence/), which contains many suggestions related to parts of *Superintelligence.*These projects could be attempted at various levels of depth.
1. Would powerful tools necessarily become goal-directed agents in the troubling sense?
2. Are different types of entity generally likely to become optimizers, if they are not? If so, which ones? Under what dynamics? Are tool-ish or Oracle-ish things stable attractors in this way?
3. Can we specify communication behavior in a way that doesn't rely on having goals about the interlocutor's internal state or behavior?
4. If you assume (perhaps impossibly) strong versions of some narrow-AI capabilities, can you design a safe tool which uses them? e.g. If you had a near perfect predictor, can you design a safe super-Google Maps?
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.
How to proceed
==============
This has been a collection of notes on the chapter. **The most important part of the reading group though is discussion**, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about multipolar scenarios - i.e. situations where a single AI doesn't take over the world. To prepare, **read** “Of horses and men” from Chapter 11*.*The discussion will go live at 6pm Pacific time next Monday 5 January. Sign up to be notified [here](http://intelligence.us5.list-manage.com/subscribe?u=353906382677fa789a483ba9e&id=28cb982f40). |
bae0c444-3018-4d75-b6b8-49036196679d | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Solomonoff induction still works if the universe is uncomputable, and its usefulness doesn't require knowing Occam's razor
*Note: I don't think this idea is original, but I couldn't find a good post going over the implications.*
I used to think that Solomonoff induction was a bit arbitrary for the following reason: it assigned a 100% probability to the universe being computable. I'm pretty sure the [universe is computable](https://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93Deutsch_principle) (ignoring randomness), but nowhere near 100% sure. Who's to say we won't find a [halting oracle](https://en.wikipedia.org/wiki/Oracle_machine) floating in space for no reason? That seems like a pretty simple hypothesis. Why the focus on [recursive languages](https://en.wikipedia.org/wiki/Recursive_language)? You have to make some choice of how descriptions work (you can't assign positive probability to every infinite bit string), but that didn't change the feelings of arbitrariness.
But then I realized this understanding of why to use Solomonoff induction is incorrect. We do not use it because of the physical church-turing thesis, we use it because of the [*original* church-turing thesis](https://plato.stanford.edu/entries/church-turing/):
>
> L.C.M.s [logical computing machines: Turing’s expression for Turing machines] can do anything that could be described as ‘rule of thumb’ or ‘purely mechanical’. - *Alan Turing*
>
>
>
Because what matters is not whether the universe is computable, but whether our *methods of reasoning* are computable. Or in other words, whether the [map](https://www.lesswrong.com/tag/map-and-territory) is computable. Solomonoff's induction is at least as "good" as any computable inference method (up to a constant), regardless of the complexity of the universe. So if you, as a human, are trying to come up with a systematic way to predict things (even uncomputable things), Solomonoff's induction is better. Here is the precise statement:
Theorem: Let D.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
be some probability distributions on infinite sequences of bits such that inferring the next bit from a prefix is computable. The [likelihood ratio](https://www.lesswrong.com/tag/likelihood-ratio) from D to Solomonoff induction's prior is bounded above by some finite constant (despite the sequence containing infinitely many bits), and this constant is independent of the sequence of bits.
Proof sketch: *(Note: this is already a well-known result.)* There is a program m that is a decoder for an [entropy code](https://en.wikipedia.org/wiki/Entropy_coding) over finite bit strings based on D. Given any finite bit string x, we can find a y such that m(y)=x. The length of y is approximately −log2(px) (where px is the probability of a bit string starting with x according to D). The probability of a string starting with x under Solomonoff induction's prior is greater than 2−|m(y)|=2−|m|2log2(px)=px2|m|. So regardless of the content or length of |x|, the ratio is bounded by 2|m| (which only depends on D). □
A nice thing about this result is that it makes *no* assumptions about the sequence of bits. The theorem is not just saying Solomonoff induction is good v.s. D on most sequences, or even [almost all](https://en.wikipedia.org/wiki/Almost_surely) of them. It is good for *every particular* sequence of bits, because there is an upper bound on the likelihood ratio that is independent of the sequence. There is no adversarial example.
(Notice how we never invoked Occam's razor to argue that Solomonoff induction is superior. We can instead go the other way; Occam's razor is good because it's an informal version of an ideal inference procedure.)
Solomonoff induction v.s. a human in an uncomputable world
==========================================================
How does this shake out in an uncomputable universe? What if you're convinced that there is some orb in the universe emitting the digits of [Chaitin's constant](https://en.wikipedia.org/wiki/Chaitin%27s_constant) or something? We'll let Alice be a reasoner using Solomonoff induction, and Bob be a human.
>
> Bob: Ah yes, I have found it! An orb emitting the digits of Chaitin's constant is floating around in our solar system!
>
> Alice: How do you figure?
>
> Bob: I calculated the first two digits, and they matched!
>
> Alice: Surprising! But not that surprising (about six and half bits).
>
> Bob: I was surprised too, but now I can do something better than you. I can predict the digits that orb will emit.
>
> Alice: How do you plan to predict an uncomputable sequence, given that you're a human?
>
> Bob: Oh yeah...
>
> Alice: In fact, if you're correct it will eventually look like a uniformly random sequence to you since it is [algorithmically random](https://en.wikipedia.org/wiki/Algorithmically_random_sequence). So I'll be able to keep up with you just fine.
>
>
>
Harsh! But maybe Bob's hypothesis will let him *use* the orb for something?
>
> Bob: A ha! My hypothesis has allowed me to predict theorems proven by mathematicians before they actually proved them.
>
> Alice: How so?
>
> Bob: I can use the digits emitted by the orb to do computations that predict theorems. And my observations are verified when mathematicians prove it the normal way.
>
> Alice: Yeah, I noticed the orb can do that too.
>
> Bob: Doesn't that support the idea that the orb is emitting the digits of Chaitin's constant?
>
> Alice: You could understand some of my current hypotheses as making an assumption that the orb does this, and exploiting it to learn things about other parts of the environment. However, the [code is not commented](https://en.wikipedia.org/wiki/Comment_%28computer_programming%29), so this is just a matter of interpretation. I don't have a hypothesis that predicts in advance that the orb emits the digits of Chaitin's constant. But technically you don't either, since you yourself don't know the digits.
>
>
>
If the environment has uncomputable features, Alice's hypotheses act like [oracle machines](https://en.wikipedia.org/wiki/Oracle_machine). She can't predict those features, but she can use them to predict other features of the environment. Going back to our theorem, if Bob can use the oracle, so can Alice because she has Bob as one of her hypotheses.
Hypotheses as methods of reasoning instead of as possible worlds
================================================================
We often think of Solomonoff induction's hypotheses as being possible worlds. I will argue for a different intuition; the hypotheses are *forms of reasoning*.
Any systematic method to make predictions used by any human, in the past, present, or *future*, is one of the hypotheses. If the human somehow has uncomputable intuitions (unlikely, but possible), we can just treat that as part of the environment and Solomonoff induction will figure out how to exploit it.
The hypotheses are possible maps instead of possible territories!
Conclusion
==========
Solomonoff induction is great because it's hypotheses include all forms of systematic reasoning (by the church-turing thesis). Even if we believe the universe is uncomputable and we are right, Solomonoff induction will make predictions just as well as us. In particular, it will act like an oracle machine with respect to the uncomputable parts of the environment, but only *after* seeing them. But since they are uncomputable, we can't systematically predict them in advance either. If the universe is uncomputable, the [litany of Tarski](https://www.lesswrong.com/tag/litany-of-tarski) becomes a physically impossible desire!
I do have one unanswered question though: so anyways, how likely is it that the universe has a halting oracle? 🤔 |
86314abf-596f-427a-9ead-c125023248fe | trentmkelly/LessWrong-43k | LessWrong | If you are too stressed, walk away from the front lines
tl;dr: If your reason for cramming AI knowledge into your brain is stress, then don't do it. You can still be useful, but walk away from the front lines where people are directly responsible.
Disclaimer: 1) This is an obvious problem that has already been noticed and addressed by many LessWrong users 2) This is not an original solution but rather a specific framing of the problem and some food for thought 3) I could be gravely mistaken, and your best bet might be putting your all into research after all. However, you might just want to emerge from lurking and actually do something if that is the case. 4) The rather romantic and optimistic tone employed here is fait exprès and is not meant as an accurate description of the reality in which we are in right now.
The heroes who will save the world will be alignment researchers: when, dot by dot, the stars will grow dark and a dark dawn will rise, we will all have to buy them a beer. If you are not among this small band of heroes who will guarantee us the best future possible, you may feel an urgent need to promptly join them: I beg of you only to consider whether this is a good idea.
If you think that you should become an alignment researcher in a matter of months,[1] I will not try to stop you. But it's probably worth a few days worth of cranial compute to establish whether you are exploiting yourself in the best way you could.
I'll set the parameters of the problem. "Becoming an alignment researcher" is a spectrum: the more you learn about alignment, the more capable you are at navigating the front lines of the alignment project. Certainly, understanding the tenets of alignment is a laudable goal; but at what point will you be faced with diminishing returns? If you are not planning on single-handedly solving the alignment problem, are there not better uses of your time?[2]
There are many instrumental goals that serve the terminal goal of completing the alignment project, and they might be more worthy of yo |
5cabcdb9-7b15-4afe-8ede-fd9cbc640821 | trentmkelly/LessWrong-43k | LessWrong | Non-replicability of Some Behavioral Economics Research Across Cultures
Not our typical cognitive biases, but does raise the question of universality - I am aware of much stronger evidence for the universality of emotions and facial expression. Skimming the paper looks like a lot of fun.
http://www.psmag.com/magazines/pacific-standard-cover-story/joe-henrich-weird-ultimatum-game-shaking-up-psychology-economics-53135
Actual paper:
http://www2.psych.ubc.ca/~henrich/pdfs/Weird_People_BBS_final02.pdf
|
602b03f0-cf1f-4f67-9a91-66564ce3d5c0 | trentmkelly/LessWrong-43k | LessWrong | TU Darmstadt, Computer Science Master's with a focus on Machine Learning
by Magdalena Wache
This post is a compilation of information about the TU Darmstadt's Computer Science program with a focus on machine learning. I hope it will be useful for people who are interested in doing a machine learning master's and are deciding which university to apply for. It is part of a series of articles on different European master's programs related to artificial intelligence and machine learning.
Caveat: While writing this post, I found out that the degree program requires a DSH-2 certificate in German (level C1 equivalent), or the completion of a German bachelor’s degree. This surprised me, because the AI-related courses are almost entirely in English, and it is possible to choose courses in English only. I wrote to the university to ask if I had misunderstood something, but they confirmed the certificate is necessary. Therefore, the TU Darmstadt's CS master's program is only an option for you if you speak German or intend to learn enough German to pass a C1 exam.
Overview
In the TU Darmstadt Computer Science (CS) master's degree you can choose the courses in a way that basically makes it a machine learning master's. I am currently enrolled and will start my master's thesis soon.
The degree is a two-year program with 120 ECTS, which consist of:
* 66 ECTS of elective courses that you can choose relatively freely from a very large number of CS subjects. There are some restrictions that I will explain in further detail below. As there are a lot more than 66 ECTS of interesting ML/AI-related courses, this part can be almost entirely about machine learning.
* 24 ECTS of elective courses in a minor (In my case: Mathematical Logic)
* 30 ECTS for the master's thesis
This means there are no mandatory courses and you have a lot of choices.
Darmstadt is not as well-known as Amsterdam, Oxford, Cambridge, etc., but its academic output seems quite presentable. If you filter for publications in AI and ML, the TU Darmstadt ranks 9th in Europe on CSRanki |
4195c8fb-af06-47bb-bcda-7ce6f3b07882 | trentmkelly/LessWrong-43k | LessWrong | On clinging
(Cross-posted from Hands and Cities)
I often find myself wanting to refer to a certain dimension of emotional experience, which I’ll call “clinging.” This post tries to point at it. I also suggest that understanding this dimension can resolve a common confusion about Buddhist philosophy and other types of spiritual/therapeutic advice — namely, how “non-attachment” or “letting go” can be compatible with deep passion and care. I further suggest that clinging can help us think more clearly about patterns of concern that register, intuitively, as non-virtuous.
I. Pointing at clinging
Clinging, as I think about it, is a certain mental flavor or cluster of related flavors. It feels contracted, tight, clenched, and narrow. It has a kind of hardness, a “not OK-ness,” and a (sometimes subtle) kind of desperation. It sees scarcity. It grabs. It sees threat. It pushes away. It carries seeds of resentments and complaints. Jealousy, entitlement, pettiness, and bitterness all have a lot of clinging in them, as do greed, desire for status, and anxiety.
I’m not just listing emotions and traits often thought bad. Self-absorption, for example, need not be clingy; it can just be blind, wrapped up in its own world. Same with arrogance and disrespect, for related reasons. Emotions in the vicinity of depression, boredom, laziness, and malaise also seem non-clingy, at least at the surface. Contempt does not obviously connote clinginess, to me; nor does Voldemort’s cruelty, or the Joker’s indifference to the burning of the world.
Often, in my experience, clinging seems to hijack attention and agency. It makes it harder to think, weigh considerations, and respond. You are more likely to flail, or stumble around, or to “find yourself” doing something rather than choosing to do it. And you’re more likely, as well, to become pre-occupied by certain decisions — especially if both options involve things you’re clinging in relation to — or events. Indeed, clinging sometimes seems like it tr |
ff96b545-6e79-446a-8ccd-41656ea8ad54 | trentmkelly/LessWrong-43k | LessWrong | Lessons learned from talking to >100 academics about AI safety
I’d like to thank MH, Jaime Sevilla and Tamay Besiroglu for their feedback.
During my Master's and Ph.D. (still ongoing), I have spoken with many academics about AI safety. These conversations include chats with individual PhDs, poster presentations and talks about AI safety.
I think I have learned a lot from these conversations and expect many other people concerned about AI safety to find themselves in similar situations. Therefore, I want to detail some of my lessons and make my thoughts explicit so that others can scrutinize them.
TL;DR: People in academia seem more and more open to arguments about risks from advanced intelligence over time and I would genuinely recommend having lots of these chats. Furthermore, I underestimated how much work related to some aspects AI safety already exists in academia and that we sometimes reinvent the wheel. Messaging matters, e.g. technical discussions got more interest than alarmism and explaining the problem rather than trying to actively convince someone received better feedback.
Update: here is a link with a rough description of the pitch I used.
Executive summary
I have talked to somewhere between 100 and 200 academics (depending on your definitions) ranging from bachelor students to senior faculty. I use a broad definition of “conversations”, i.e. they include small chats, long conversations, invited talks, group meetings, etc.
Findings
* Most of the people I talked to were more open about the concerns regarding AI safety than I expected, e.g. they acknowledged that it is a problem and asked further questions to clarify the problem or asked how they could contribute.
* Often I learned something during these discussions. For example, the academic literature on interpretability and robustness is rich and I was pointed to resources I didn’t yet know. Even in cases where I didn’t learn new concepts, people scrutinized my reasoning such that my explanations got better and clearer over time.
* The higher up the |
e9f7e1c1-74a0-4ac5-b38a-c9f9240eabd6 | trentmkelly/LessWrong-43k | LessWrong | More Dakka
Epistemic Status: Hopefully enough Dakka
Eliezer Yudkowsky’s book Inadequate Eqilibria is excellent. I recommend reading it, if you haven’t done so. Three recent reviews are Scott Aaronson’s, Robin Hanson’s (which inspired You Have the Right to Think and a great discussion in its comments) and Scott Alexander’s. Alexander’s review was an excellent summary of key points, but like many he found the last part of the book, ascribing much modesty to status and prescribing how to learn when to trust yourself, less convincing.
My posts, including Zeroing Out and Leaders of Men have been attempts to extend the last part, offering additional tools. Daniel Speyer offers good concrete suggestions as well. My hope here is to offer both another concrete path to finding such opportunities, and additional justification of the central role of social control (as opposed to object-level concerns) in many modest actions and modesty arguments.
Eliezer uses several examples of civilizational inadequacy. Two central examples are the failure of the Bank of Japan and later the European Central Bank to print sufficient amounts of money, and the failure of anyone to try treating seasonal affective disorder with sufficiently intense artificial light.
In a MetaMed case, a patient suffered from a disease with a well-known reliable biomarker and a safe treatment. In studies, the treatment improved the biomarker linearly with dosage. Studies observed that sick patients whose biomarkers reached healthy levels experienced full remission. The treatment was fully safe. No one tried increasing the dose enough to reduce the biomarker to healthy levels. If they did, they never reported their results.
In his excellent post Sunset at Noon, Raymond points out Gratitude Journals:
> “Rationalists obviously don’t *actually* take ideas seriously. Like, take the Gratitude Journal. This is the one peer-reviewed intervention that *actually increases your subjective well being*, and costs barely anything. An |
84e4ebe1-3578-4e72-88eb-ce9fb867658b | trentmkelly/LessWrong-43k | LessWrong | Meetup : West LA Meetup 01-25-2012
Discussion article for the meetup : West LA Meetup 01-25-2012
WHEN: 25 January 2012 07:00:00PM (-0800)
WHERE: 10850 West Pico Blvd, Los Angeles, CA 90064
When: 7:00pm - 9:00pm Wednesday, January 25th.
Where: The Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavillion on the second floor, right by the movie theaters.
Parking is free for 3 hours.
Discussion Topic: This week is about checklists. Our tentative goal for the evening is to make a rationality checklist to consider before making big decisions. If you have a chance, try to think of some good examples of bad decisions, and good decisions made under uncertainty. You may find some inspiration browsing Wikipedia's list of cognitive biases. Also see the original source of inspiration for this meeting, Charlie Munger's checklists.
This is also a good time to browse recent posts, as they also make for good discussion!
Whether you're a regular reader or totally new, here for the theoretical musings or the practical things, come by and say hello! The conversation is largely unstructured and casual, and the people are awesome. If we have a large group, we may also play a game!
I will bring a whiteboard with Bayes' Theorem written on it.
Discussion article for the meetup : West LA Meetup 01-25-2012 |
a8c86cf7-06fc-4a1c-b1fa-fabbf76013a1 | trentmkelly/LessWrong-43k | LessWrong | Sacrifice for the Big Crunch
Let us assume that the end of the universe will be consistent with the Big Crunch for the rest of our story, simply the 'event' which is most easily described as the inverse to the Big Bang, the universe contracting into a singularity. Let us also assume that you have, by billions of years worth of advancements in science, remained alive. The end is near, the temperature of space is already several hundred degrees and climbing. Despite our advancements, we are unable to know what exactly will result from the event as we have no prior examples. However, we were aware that, for quite some time, our universe was shrinking. With the forewarning to prepare we created a spaceship capable of exiting the universe entirely, however by the time it has acquired enough power to make such an escape, you have been rendered the last living intelligence in the universe. Therefore humanity has given you command of the spacecraft you're currently inhabiting, given complete freedom, but only two possibilities, with four plausible outcomes.
Option 1: You can choose to remain in the universe, eventually the laws of physics dictate your craft will be unable to resist gravitational forces, the heat, and radiation, you will be consumed as everyone else already has into the event.
O1 Outcome 1: With all matter from which the current universe began being restored into singularity, this universe is repeated. How many times you are consumed and how many times the event has happened in unknowable, but likely approaches infinity. No one will ever know that this is happening. But everyone dies repeatedly, yet everyone always come back to live repeatedly. Death ceases permanence. Yet with repetition comes the destruction of free will, every time you will choose death, and every time this universe will repeat.
O1 Outcome 2: Due to the true randomness in quantum mechanics a completely new universe is created from the old, possibly one with altered properties of physics and thus uninhabitable. Or |
58f3fb4c-c5ad-4da4-b736-7cdccacaadd9 | trentmkelly/LessWrong-43k | LessWrong | Trivial inconveniences as an antidote to akrasia
1.
I was taking an five hour bus back to LA from Vegas. We were stopped at a rest stop about midway through, and I got out to stretch my legs. I figured it would be a good time to call my mom and catch up.
While chatting with her, she started to tell me about a business idea she has. She started off telling me how she loves chocolate, but has zero willpower to not overindulge. She could just not buy it at the store, but she really doesn't want to give it up outright. So her idea is to have a safe with some sort of timer on it that feed her a piece of chocolate once per day, but that's it.
My very first impression was to role my eyes. There is this small part of me that still instinctively just wants to disagree with whatever my mom says. But that only lasted a brief moment. After thinking about it some more, I kinda love the idea! The problem is real, and it's important to solve. The solution should be pretty foolproof, and shouldn't be too expensive.
I'm an entrepreneur, and I think about startup ideas all the time. I end up forgetting about most of them. This ten minute chat with my mom happened almost three years ago, and it still holds a place in my mind.
2.
I think her original idea is fine, but I always like to think about how ideas can be improved on. One problem I see with her idea is that it might be too strict. What if you truly do have a good reason for deviating and want to be fed chocolate now?
What if, instead of feeding you a piece of chocolate once every 24 hours (or whatever you configure it to), what if it just required you to stand there for five minutes before feeding you the chocolate? And you can't just step away and watch TV while you wait out the five minutes - what if the safe made you actually stand there and do something boring for five minutes (eg. by making you type in the letter that appears on the screen every ten seconds)?
I feel like that approach would still be pretty damn effective. Who is going to actually stand there for f |
00b177ac-1ad1-4c18-a175-ba92acea30bc | trentmkelly/LessWrong-43k | LessWrong | A resolution to the Doomsday Argument.
A self-modifying AI is built to serve humanity. The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare. To solve the problem, they direct the AI to create billions of simulated humanities in the hope that this will serve as a Schelling point to them, and make their own universe almost certainly simulated.
Plausible? |
d1ae6099-856f-4a1b-8dc6-3bb51a459d3f | trentmkelly/LessWrong-43k | LessWrong | How can I strategically write a complex bestseller? (4HS001)
> “Find out the reason that commands you to write; see whether it has spread its roots into the very depth of your heart; confess to yourself you would have to die if you were forbidden to write.” - Rainer Maria Rilke
> “Tomorrow may be hell, but today was a good writing day, and on the good writing days nothing else matters.”
> - Neil Gaiman
3:00 AM, MEXICO CITY, 12 JUNE 2013
Seven years ago I made a promise I didn't keep. I was 17 at the time, and mildly unaware of how complex and large the World is. The conversation went something like this:
> Me: I could write a bestseller, come on, it is not that complicated. Just read a random bestseller, they are not even that smart anyway!
>
> Mentor: Yeah, right, I dare you to go back home right now and write a bestseller, go!
>
> Me: I'm busy with all this school stuff right now, so I have to do my homework and....
>
> Mentor: Ok, ok, I'll concede we are very busy right now, how about in five years?
>
> Me: Five years seems more than enough. Take a note, in five years time I'll have written a bestseller. I promise.
Somehow later on I got busy with cooking pasta I needn't eat and listening to gossip about people I didn't care. Not a good start.
It's never too late to start over though, and now is as good a time as any.
But wait! Humans are not automatically strategic right?
True. Also humans are not as good at detecting their own strategic failures and dead-ends as other humans. If we can't even face more than three minutes of work, how could we ever intuitively look at our work and see where it is bad?
Which is why it seems that the rational way to do it is to find a place where people trained at being strategic can pinpoint your failures and accomplishments as you go along, rewarding you for winning and twisting your mental knobs when you fail, so that over time, either you learn how to do it right, or you learn the right thing to do was something else altogether. This is the project. I'm hoping as an |
7caa7c42-89ab-4186-ab32-2333bba47c16 | trentmkelly/LessWrong-43k | LessWrong | Reviews as "How should this be improved to meet the bar?"
The LessWrong Review is a pretty big experiment, and I am still very much uncertain what the best form of reviewing is. We were pretty vague on how to review, listing these bullet points on topics a review should consider:
* How has this post been useful?
* How does it connect to the broader intellectual landscape?
* Is this post epistemically sound?
* How could it be improved?
* What further work would you like to see on top of the ideas proposed in this post?
I'm interested in seeing more of the last two. I'd like a key piece of the Review to be "authors rewriting posts if they can be improved", and "people writing followup posts that either attempt a better explanation of whatever the post was about, or do followup work exploring more of the post's ideas."
So, for negative reviews, if a post's central concept seems important but the post itself feels inadequate, I'd be interested in seeing more "how could this post be improved, such that it makes sense to include in a 'Best Of', or otherwise enter into Lesswrong's longterm memory.*"
----------------------------------------
*a concern I have about the current Review process is it focuses primarily on the best-of book, but not all important works make sense to include in a public facing artifact. Examples of this might include a) dry, technical posts, b) posts that are important but somehow controversial, c) posts that are doing a lot of "schlep" work (such as replications), where you don't necessarily care that everyone reads them, but you care that they exist and you can refer to them when doing later meta-analyses or whatnot)
I'm not sure about the best solution for this and this is still the subject of some debate on the LessWrong team, but I'd personally prefer if reviewers somewhat distinguished between "whether it makes sense for a post to be in the Best Of Book" and "whether the post is somehow important to the LessWrong intellectual project." |
09fdc40a-2b20-4653-886e-bdfacb02f6ab | trentmkelly/LessWrong-43k | LessWrong | A rejection of the Orthogonality Thesis
I shared a blog post I wrote in Astral Codex, and someone suggested I try and share with the rationalist community (though it's somewhat polemic in tone). So here goes! The blog post is a little wider in scope, but I think the relevant part for the rationalist community is the rejection of the orthogonality thesis. The TL;DR is that the orthogonality thesis is often presented as a fact, but it seems to me that it's mostly a series of assertions, namely:
1. There is a large mind design space. Do we have any actual reasons for thinking so? Sure, one can argue everything has a large design space, but in practice, there's often an underlying unique mechanism for how things work.
2. Ethics are not an emergent property of intelligence - but again, that's just an assertion. There's no reason to believe or disbelieve it. It's possible that self-reflection (and hence ethics and the ability to question one's goals and motivations) is a pre-requisite for general cognition - we don't know whether this is true or not because we don't really understand intelligence yet.
3. The previous two are assertions that could be true, but reflective stability is definitely not true - it's paradoxical. To quote from my post:
> This line of reasoning is absurd: it assumes an agent knows in advance the precise effects of self-improvement — but that’s not how learning works! If you knew exactly how an alteration in your understanding of the world would impact you, you wouldn’t need the alteration: to be able to make that judgement, you’d have to be able to reason as though you had already undergone it (of course, you can predict some of the effects of a particular course of study/self-improvement: for instance, you know that if you take a course in accounting, you’ll become better at reading financial statements. But you have no idea what other effects this course will have on your worldview — for instance, an accounting course might cause you to hate reading financial statements. If you d |
fd0279a6-0cfb-4552-97c9-ff7435d46c03 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Human rights, democracy, and the rule of law assurance framework for AI systems: A proposal
1
Human Rights,
Democracy,
and the Rule of Law
Assurance Framework
for AI Systems :
A proposal prepared for the
Council of Europe’s Ad h oc
Committee on Artificial
Intelligence
David Leslie, Christopher Burr, Mhairi Aitken, Michael Katell, Morgan Briggs, and Cami Rincon
2
The Public Policy Programme at The Alan Turing Institute was set up in May 2018
with the aim of developing research, tools, and techniques that help governments
innovate with data -intensive technologies and improve the quality of people's
lives. We work alongside policy makers to explore how data science and artificial
intelligence can inform public policy and improve the provision of public services.
We believe that governments can reap the benefits of these technologies only if they make considerations of ethics and safety a first priority.
Please note, that this propos ed framework is a livi ng document that will evolve
and improve with input from users, affected stakeholders, and interested parties.
We need your participation. Please share feedback with us at policy@turing.ac.uk
This research was supported, in part, by ESRC Grant ES/T007354/1 , EPSRC Grant
EP/T001569/1, EPSRC Grant EP/W006022/1 , and from the public funds that make
the Turing's Public Policy Programme possible.
https://www.turing.ac.uk/research/ research -programmes/public-policy
This work is licensed under the terms of the Creative Commons Attribution License
4.0 which permits unrestricted use, provided the original author and source are
credited. The license is available at: https://creativecommons.org/licenses/by-nc -
sa/4.0/legalcode
Cite this work as:
Leslie, D., Burr, C., Aitken, M., Katell, M., Briggs, M., Rincon, C. (2021) .
Human rights, democracy , and the rule of law assurance framework for AI
systems: A proposal . The Alan Turing Institute.
https://doi.org/10.5281/zenodo.5981676
3 Table of Contents
Table of Contents ............................................................................ 3
Abbreviations ................................................................................. 5
Executive Summary ......................................................................... 6
Introduction and Overview ............................................................... 9
A Proposal for a Coherent and Integrated Approach ............................................................... 10
Shape and Delivery of the Framework .......................................................................................... 12
Working Notes for the PDG ............................................................................................................... 14
Preliminary Context -Based Risk Analysis (PCRA) ................................ 16
Introduction ........................................................................................ 16
Preparing a Project Summary Report in the Planning and Scoping Phase ...... 17
Activity 1: Outline Project- , Use-, Domain -, and Data -Contexts ........................................ 18
Activity 2: Identify Stakeholders ..................................................................................................... 19
Activity 3: Scope Impacts to Human Rights, Democracy, and Rule of Law ................... 21
Activity 4: Map Governance Workflow .......................................................................................... 47
Project Summary Report Template .................................................. 50
PCRA: Document Overview .................................................................... 61
Introduction ............................................................................................................................................. 61
PCRA Section One: Identifying Risk Factors ............................................................................... 62
PCRA Section Two: Risks of Adverse Impacts on the Human Rights and Fundamental
Freedoms of Persons, Democracy, and the Rule of Law ........................................................ 65
Preliminary Context Based Risk Analysis Template ............................. 75
PCRA Output Example ........................................................................ 207
Stakeholder Engagement Process ................................................... 215
Step 1: Carry Out Stakeholder Analysis ................................................. 216
Step 2: Engage in Positionality Reflection .............................................. 218
Step 3: Establish an Engagement Objective ........................................... 219
Step 4: Determine Engagement Method ................................................ 220
Step 5: Workflow Revisitation and Reporting .......................................... 225
Stakeholder Engagement Process (SEP) Template ............................ 227
Human Rights, Democracy, and the Rule of Law Impact Assessment ... 231
Getting the HUDERIA Process Right ...................................................... 232
Three Steps in the HUDERIA Process .................................................... 236
Building the Content of your HUDERIA .................................................. 236
Identifying Potential Adverse Impacts ........................................................................................ 237
Assessing the Severity of Potential Impacts ............................................................................. 238
Additional Questions for the Project Team ................................................................................ 245
Mitigating Potential Impacts and Setting Up Access to Remedy ....................................... 246
4 Revisiting the HUDERIA across the Project Lifecyle ................................. 249
HUDERIA Template ............................................................................ 251
Human Rights, De mocracy, and Rule of Law Assurance Case (HUDERAC)
................................................................................................. 277
Section Outline .................................................................................. 277
Introduction ...................................................................................... 278
What is an Assurance Case? ................................................................ 278
Structure of a HUDERAC ..................................................................... 280
Components of a HUDERAC ................................................................. 281
Goal(s) ..................................................................................................................................................... 281
Properties and (Project or System Property) Claims ............................................................. 283
Evidence ................................................................................................................................................. 293
Building a HUDERAC ........................................................................... 293
A HUDERAC Template ......................................................................... 295
Reflect ...................................................................................................................................................... 295
Act ............................................................................................................................................................. 296
Justify ...................................................................................................................................................... 297
Goals, Actio ns and Claims ............................................................................................................... 298
Appendix 1: Summary of Processes, Steps, and User Activities ........... 307
Appendix 2: HUDERAF Project Lifecycle Process Map ......................... 308
Appendix 3: A Sociotechnical Approach to the ML/AI Project Lifecycle .. 309
(Project) Design Tasks and Processes ................................................... 311
Project Planning ................................................................................................................................... 311
Problem Formulation .......................................................................................................................... 312
Data Extract ion or Procurement .................................................................................................... 313
Data Analysis ........................................................................................................................................ 313
(Model) Development Tasks and Processes ............................................ 313
Preprocessing and Feature Engineering ..................................................................................... 313
Model Selection .................................................................................................................................... 314
Model Training ...................................................................................................................................... 314
Model Validation and Testing .......................................................................................................... 314
Model Reporting ................................................................................................................................... 314
(System) Deployment Tasks and Processes ........................................... 315
Model Productionalization ................................................................................................................ 315
User Training ......................................................................................................................................... 315
System Use and Monitoring ............................................................................................................ 315
Model Updating or De -provisioning .............................................................................................. 316
Glossary ..................................................................................... 317
References .................................................................................. 327
5 Abbreviations
CAHAI Council of Europe’s Ad hoc Committee on
Artificial Intelligence
CoE Council of Europe
DHRI Danish Human Rights Institute
HUDERAC Human Rights, Democracy, and the Rule of
Law Assurance Case
HUDERAF Human Rights, Democracy, and the Rule of
Law Assurance Framework
HUDERIA Human Rights, Democracy, and the Rule of
Law Impact Assessment
HRIA Human Rights Impact Assessment
IMP Impact Mitigation Plan
PCRA Preliminary Context Based Risk Assessment
PS Report Project Summary Report
RIN Risk Index Number
SEP Stakeholder Engagement Process
UNGP United Nations Guiding Principles on Business
and Human Rights
UNHROHC UN Human Rights Office of the High
Commissioner
6 Executive Summary
Following on from the publication of its Feasibility Study in December 2020, the
Council of Europe’s Ad Hoc Committee on Artificial Intelligence (and its subgroups)
initiated efforts to formulate and draft its Possible elements of a legal framework
on artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law . This document was ultimately adopted by
the CAHAI plenary in December 2021. To support this effort, The Alan Turing Institute undertook a programme of research that explored the governance
processes and practical tools needed to operationalise the integration of human
right due diligence with the assurance of trustworthy AI innovation practices.
The resulting framework was completed and submitted to the Council of Europe
in September 2021. It presents an end -to-end approach to the assurance of AI
project lifecycles that integrates context -based risk analysis and appropriate
stakeholder engagement with comprehensive impact assessment, and transparent risk management, impact mitigation, and innovation assurance practices. Taken
together, these interlocking processes constitute a Human Rights, Democracy and
the Rule of Law Assurance Framework (HUDERAF ). The HUDERAF combines the
procedural requirements for principles -based human rights due diligence with the
governance mechanisms needed to set up technical and socio- technical guardrails
for responsible and trustworthy AI innovation practices. Its purpos e is to provide
an accessible and user -friendly set of mechanisms for facilitating compliance with
a binding legal framework on artificial intelligence, based on the Council of
Europe’s standards on human rights, democracy and the rule of law , and to ensur e
that AI innovation projects are carried out with appropriate levels of public
accountability , transparency , and democratic governance.
The HUDERAF encompasses four interrelated elements:
(1) The Preliminary Context -Based Risk Analysis (PCRA) provides an initial
indication of the context -based risks that an AI system could pose to human rights,
fundamental freedoms, and elements of democracy and the rule of law. The main
purpose of the PCRA is to help AI project teams establish a proportionate approach
to risk management and assurance practices and to the level of stakeholder engagement that is needed across the project lifecycle.
(2) The Stakeholder Engagement Process (SEP) helps project teams to identify
stakeholder salience and to facilitate proportionate stakeholder involvement and
input throughout the project workflow. This process safeguards the equity and the
contextual accuracy of HUDERAF governance processes through stakeholder
involvement, revisitation, and evaluation.
7
(3) The Human Rights, Democracy, and the Rule of Law Impact Assessment
(HUDERIA) provides an opportunity for project teams and engaged stakeholders
to collaboratively produce detailed evaluations of the potential and actual impacts
that the design, development, and use of an AI system could have on human
rights, fundamental freedoms and elements of democracy and the rule of law. This process contextualizes and corroborates potential harms that have been previously identified, enables the discovery of further harms through the
integration of stakeholder perspectives, makes possible the collaborative
assessment of the severity of potential adverse impacts identified, facilitates the co-design of an impact mitigation plan, sets up access to remedy, and establishes
monitoring and impact re -assessment protocols.
(4) The Human Rights, Democracy, and Rule of Law Assurance Case (HUDERAC)
enables AI project teams to build a structured argument that provides
demonstrable assurance to stakeholders that claims about the attainment of goals
established in the HUDERIA and other HUDERAF governance processes are
warranted given available evidence. The process of developing an assurance case
assists internal reflection and deliberation, promoting the adoption of best
practices and integrating thes e into design, development, and deployment
lifecycles. It also provides an accessible way to convey to impacted stakeholders
information about actions taken across the project workflow to mitigate risks and
to ensure the ascertainment of relevant normative goals. A diligently executed
assurance case offers a clear and understandable mechanism of risk management and impact mitigation that supports appropriate levels of social licence, accountability, and transparency.
The HUDERAF instruments and processes have been built to steward multi -
stakeholder involvement, engagement, and consultation across project lifecycles
in accordance with the democratic priorities and values of the Council of Europe.
These instruments and processes are meant to be adopted by the designers,
developers, and users of AI systems as an innovation -enabling way to facilitate
better, more responsible innovation practices, which conform to human rights, democracy, and the rule of law, so that these technologies can optimally promote individual, community, and planetary wellbeing.
The HUDERAF has also been designed to be as “algorithm neutral” and practice
based as possible so that it can remain maximally future proof and inclusive of different AI applications. The HUDERAF model will, however, need to stay
responsive to the development of novel AI innovations and use -cases.
Unanticipated breakthroughs, unforeseen application scenarios, and exponential increases in the efficacy of existing AI technologies could pose new and
unexpected risks to fundamental rights and freedoms that transform the project
governance and impact mitigation requirements for assuring the conformity of
8 these systems to human rights standards and elements of democracy and the rule
of law. For this reason, the HUDERAF model should be seen as dynamic and in
need of regular revisitation and re- evaluation. This will require that inclusive and
multi-stakeholder processes be set in place for the intermittent re-assessment and
updating of the model.
9 Introduction and Overview
Dear members of the CAHAI -PDG and other esteemed colleagues and excellencies ,
What follows is a proposal for a H uman Rights, Democracy, and the Rule of Law
Assurance Framework (HUDERAF) for AI systems that endeavour s to
operationalize the outline of a Model for a Human Rights, Democracy , and the Rule
of Law Impact Assessment (HUDERIA) presented in CAHAI-PDG(2021)05rev. The
purpose of the latter document was to “define a methodology to carry out impact
assessments of A rtificial Intelligence (AI) applications from the perspective of
human rights, democracy, and the rule of law, based on relevant Council of Europe
(CoE) standards and the work already undertaken in this field at the international
and national level…, and t o develop an impact assessment model. ”
The challenge put forward in CAHAI-PDG(2021)05rev was four fold:
1. To develop an impact assessment model that integrates the practices of
human right due diligence contained in general Human Rights Impact
Assessments (HRIAs ) with AI-centred approaches to algorithmic impact
assessment and the assurance of trustworthy AI innovation practices ,
2. To apply a risk-based approach guided by the precautionary principles to
determine use-cases that trigger pre- emptive measures where high risks to
human rights, democracy and the rule of law are present and cannot be
mitigated,
3. To formulate a methodology of impact assessment that follows the
proportionality principle, namely, that helps AI innovators and project
teams to establish a proportionate approach to the impact assessment
process and to the practices of stakeholder engagement, risk management ,
impact mitigation, and innovation assurance that derive therefrom, and
4. To develop a methodology for assessing and grading the likelihood and
extent of risks associated with an AI system that takes into account the
use-contexts and purposes of the system, its underlying technology, the
actors involved in the production of the system and its stage of
development, and the views of p otentially impacted stakeholders.
As stated in CAHAI -PDG(2021)05rev, t aken together, these desired elements of
the HUDERIA demand a “coherent and integrated” approach: “The model for a
HUDERIA should provide a coherent and integrated approach for assessing
adverse impact on human rights, democracy and the rule of law generated by AI
systems, addressing simultaneously the risks arising from the specific and
inherent characteristics of AI systems and the impact of such systems on human
rights, rule of law and democracy. ” A four-step process is suggested to achieve
this:
10 STEP ONE: Identify relevant human rights —and rights proxies for
democracy and the rule of law —that could be adversely impacted
STEP TWO: Assess the impact on those rights, encompassing both
technical and socio -technical dimensions
STEP THREE: Assess governance mechanisms to ensure the mitigation of
risks, appropriate stakeholder engagement, effective remedy ,
accountability, and transparency
STEP FOUR: Continuously monitor and evaluate the system to ensure that
impact assessment, impact mitigation, and governance mechanisms are
sufficiently responsive to changes in context and operating environment .
The framework we present here builds on and further develops these important
ideas in the e nds of constructing a cohesive set of practicable tools and processes
that bring the HUDERIA to life. One of the main c hallenge s we have faced in trying
to do this has been that the “coherent and integrated” approach needed to bring
all of the HUDERIA functions and purposes together requires that several di fferent
streams of activity be developed independently but, at the same time, remain
functionally interwoven. We have focused much of our efforts on working out how
best to integrate distinctive tasks of risk analysis, stakeholder engagement,
impact assessment, risk management , impact mitigation, and innovation
governance and assurance in to a coherent and integrated whole.
Another related challenge has been figuring out, in accordance with the goal s set
out in CAHAI-PDG(2021)05rev, how to combine technical and socio -technical
desiderata for responsible AI innovation with the procedural requirements for
principles-based human rights due diligence. There is no clear precedence for this
kind of synthesis, and building the connective tissue between the evaluative and
dialogical processes needed to establish human rights due diligence and the
practical principles and governance mechanisms needed to set up guardrails for
responsible AI innovation practices is a complex undertaking.
A Proposal for a Coherent and Integrated Approach
Our strategy in confronting these challenges has been to build several clearly
articulated but sensibly interrelated processes (and instruments ) that cover all of
the functions and goals that were sketched out by the PDG. The result is the
proposed Human Rights, Democracy, and the Rule of Law Assurance Framework
(HUDERAF). There are four main elements of the HUDERAF:
1. The Preliminary Context -Based Risk Analysis (PCRA) provides an
initial indication of the context -based risks that an AI system could pose to
human rights, fundamental freedoms, and elements of democracy and the rule of law. The main purpose of the PCRA is to help project teams establish
a proportionate approach to risk management and assurance practices and
to the level of stakeholder engagement that is needed across the project
lifecycle. The PCRA process begins with a set of project scoping and
planning activities in which members of the project team (a) consolid ate
information about the project-, use-, domain-, and data- contexts of the
prospective system , (b) identify relevant stakeholders , (c) begin to scope
11 the potential impacts of their systems on human rights , democracy, and the
rule of law by identifying salient rights and freedoms , and (d) map the
governance workflow.
After these activities are recorded in a Project Summary Report, the project
team is ready to complete its PCRA. The PCRA is a set of questions and
prompts that help projec t teams to pinpoint risk factors and potential
adverse impacts of their systems on human rights, fundamental freedoms, and elements of democracy and the rule of law. When the PCR A is finished,
it automatically generates a report (through triggers built on simple
conditional rules) that directs project teams to specific actions that they
then need to take in their impact assessment process (HUDERIA) so that
sufficient consideration can be given to each of the risk factors detected and to salient rights and freedoms that could be impacted by the
prospective AI system. It also directs them to specific goals, properties, and
areas that they should focus on in their subsequent risk management and
assurance processes ( HUDERAC ) to reduce and mitigate associated risks.
The report additionally assists project teams to understand, in a provisional
way, the risk level of their system, so that they can take a proportionate
approach to risk management actions and stakeholder engagement.
2. The Stakeholder Engagement Proc ess (SEP) helps project teams to
identify stakeholder salience and to facilitate proportionate stake holder
involvement and input throughout the project workflow. This process safeguards the equity and the contextual accuracy of the PCRA and other
subsequen t governance processes through stakeholder involvement ,
revisitation, and evaluation. The SEP involves a set of steps that include (a)
carrying out stake holder analysis , (b)engaging in positionality reflection , (c)
establishing engagement objectives , (d) d etermining engagement
method s, and (e) involving rights -holders in w orkflow revisitation and
reporting.
3. The Human Rights, Democracy, and the Rule of Law Impact
Assessment (HUDERIA) provides an opportunity for project teams and
engaged stakeholders to come together to produce detailed evaluations of
the potential and actual impacts that the design, development, and use of
an AI system could have on human rights, fundamental freedoms and
elements of democracy and the rule of law. This process (a) contextual izes
and corroborates potential harms that have been previously identified, (b)
enables the discovery of further harms through the dialogical integration of
stakeholder perspectives , (c) makes possible the collaborative and context -
sensitive assessment of the severity of potential adverse impacts identified,
(d) facilitates the co-design of an impact mitigation plan and sets up access
to remedy , and (e) establishe s monitoring and impact re- assessment
protocols.
4. The Human Rights, Democracy, and Rule of Law Assurance Case
(HUDERAC) enables project teams build a structured argument that
12 provides demonstrable assurance to stakeholders that claims about the
attainment of top -level normative goals (like safety, sustainability,
accountability, and fairness ) in processes of building and using an AI system
are warranted given available evidence. Building an assurance case involves
three iterative steps: (a) establishing the goals that are requisite for risk
management and impact mitigation and determining the properties needed
to assure these goals, (b) t aking actions to operationalize these properties
in the design, development, and use of the system, and (c) c ompiling
evidence of these actions . The process of developing an assurance case
assists internal refl ection and deliberation, promoting the adoption of best
practices and integrati ng these into design, development, and deployment
lifecycles. It also provides an accessible way to convey all of this information
to impacted stakeholders. A diligently executed assurance case offers a
clear and understandable mechanism of risk management and impact
mitigation that supports appropriate levels of public accountability and
transparency.
Shape and D elive ry of the Framework
Seen as a single project governance regime—which provides assurance that the
use of an AI system , as well as the processes behind its design, development and
deployment, respect human rights , fundamental freedoms, democratic principles,
and the rule of law—the proposed HUDERAF can be viewed as a collection of
interrelated processes, practical steps, and user activities:
Summary of Processes, Steps, and User Activities
A larger image of this Summary of Processes can be found in Appendix 1 .
Beyond such a summary view, t his set of processes, steps, and user activities can
also be seen as extending across the AI design, development, and deployment
lifecycle . The production and implementation of AI technologies is a multi -phase d,
iterative, and often cyclical undertaking, and so effectively safeguarding the
alignment of that complex enterprise with human rights, democracy, and the rule
of law necessitates that risk analysis, impact assessment, and assurance methods
are sufficiently responsive to th e dynamic character of the AI innovation
13 workflow.1 The HUDERAF has been developed with this in mind. Stakeholder
Engagement, HUDERIA, and HUDERAC processes are iterative and organised to
respond appropriately to changes in production and implementation factors as well
as in the operating environment s of AI system s. Here is schematic mapping of
what this looks like as such processes and steps move from the design to the
development to the deployment phas es of a project workflow:
HUDERAF Project Lifecycle Process Map
A larger image of this Project Lifecycle Process Map can be found in Appendix 2 .
It should be noted that, as we have been constructing the elements of th is
framework, we have come to believe that the delivery of its processes and
instruments would be best served in the medium of a single digital platform that
allows for the myriad governance goals articulated by the CAHAI and its Feasibility
Study to be all in one place and as user friendly as possible. As they are presented
here, the description and explanation of each process (PCRA, SEP, HUDERIA,
HUDERAC) is accompanied by a template that could be made into a simple and
easy-to-use form. Each template consolidates the questions, prompts , and actions
that need to be covered for the completion of the process at each step along the
way. The PCRA Template, in particular , has been created as an interactive digital
form that generates bespoke summary reports based upon conditional/branching
1 For an overview of the AI/ML project lifecycle through a socio-technical lens , see
Appendix 3.
14 logic and the compiling of results. These templates could be brought together and
interconnected in one digital environment, so that users could have a streamlined
way to ensure that their AI innovation practices properly align with human rights,
fundamental freedoms, democracy , and the rule of law and to readily demonstrate
this to potentially impacted stakeholders or their proxies.
A final comment on our mode of presentation: This proposal has been written in
the style of a user’s guide or technical manual, so that all the elements of the
framework could be presented with detailed explanations and elaborations of the
rationale behind the mechanisms, tools, and concepts chosen. This reflects more of our attempt to convey the details of the framework to the CAHAI and CoE
readership than a representation of what a finished product might look like. Pragmatically speaking, presenting the framework in this way will make the
incorporation of policy -level decisions and adjustments from the
CoE/PDG/LFG/plenary much easier for those elements of the framework that are
deemed worthy of including.
Working Notes for the PDG
As we have been working to develop these processes and instruments, we have
hewed closely to the findings and policy content presented in the CAHAI’s
Feasibility Study, to the subsequent multi -stakeholder consultation, and to other
CAHAI and Council of Europe documents. This notwithstanding, we ask reviewers
and readers of this proposal to make sure that areas (where interpretive choices
have had to be made ) line up properly with the policy goals and development s of
the PDG, the LFG, and the CAHAI, more generally. In building these processes and
templates, our goal has been to provide the CAHAI with a coherent frame on which
to hang its policy decisions and through which it could operationalize its governance goals, rather than to pu t forward any policy innovations as such .
Similarly, we have done our best to reflect the most recent positions articulated
by the CAHAI on open policy issues (such as redlines and triggers of the
precautionary principle). We ask that reviewers and readers keep this in mind, so
that any necessary adjustments to current categories or policy content can be
spotted and made.
Substantively, we ask that reviewers and readers look at Section 2 of the PCRA
(Risks of Adverse Impacts on the Human Rights and Fundamental Freedoms of
Persons, Democracy, and the Rule of Law) closely. In this section, we tried to build
out a semi -quantitative method of risk analysis that generates provisional
recommendations regarding proportionate risk management actions and stakeholder engagement. Coming up with a metric to support the determination
of risk- based project governance was a thorny endeavour, and we ended up
integrating a very straightforward and user -accessible approach to risk analysis
(the calculation of a risk index numb er [RIN]) with (1) the risk management
priorities articulated in the UNGP, the UNH ROHC , and the CoE’s Committee of
Ministers and (2) the calibration mechanism of the precautionary principle. We
also considered using sums and combinations of the risk factors identified in
Section 1 of the PCRA for this metric, and this may still be an attractive option, should it be decided that the method used in Section 2 is not fit -for-purpose. In
15 any case, we ask that you keep an eye on this part of the framework as it b reaks
ground into unexplored territory.
We would, finally, just like to apologize, in advance , to readers for the length of
the present work and thank them for taking the time to look over the materials
contained herein. We have had to cover much territory to animate the aspirations
outlined by the PDG, and we are grateful to the reader for accompanying us on
this journe y to advance the CAHAI’s historical mission.
16 Preliminary Context -Based Risk Analysis
(PCRA)
Introduction
The Preliminary Context -Based Risk Analysis (PCRA) provides an initial indication
of the context -based risks that an AI system could pose to human rights,
fundament al freedoms, and elements of democracy and the rule of law. The main
purpose of the PCRA is to help project teams establish a proportionate approach to
risk management and assurance practices and to the level of stakeholder
engagement that is needed across the project lifecycle. The PCRA process begins
with a set of project scoping and planning activities in which members of the project team (a) consolidate information about the project -, use -, domain-, and
data-contexts of the prospective system, (b) identify relevant stakeholders, (c)
begin to scope the potential impacts of their systems on human rights, democracy,
and the rule of law by identifying salient rights and freedoms, and (d) map the
governance workflow.
After these activities are recorded in a Project Summary Report, the project team
is ready to complete its PCRA. The PCRA is a set of questions and prompts that
help project teams to pinpoint risk factors and potential adverse impacts of their
systems on human rights, fundamental freedoms, and elements of democracy and
the rule of law. When the PCRA is finished, it automatically generates a report
(through triggers built on simple conditional rules) that directs project teams to
specific actions that they then need to take in their impact assessment process
(HUDERIA) so that sufficient consideration can be given to each of the risk factors detected and to salient rights and freedoms that could be impacted by the
prospective AI system. It also dir ects them to specific goals, properties, and areas
17 that they should focus on in their subsequent risk management and assurance
processes (HUDERAC) to reduce and mitigate associated risks. The report
additionally assists project teams to understand, in a pr ovisional way, the risk
level of their system, so that they can take a proportionate approach to risk
management actions and stakeholder engagement.
Preparing a P roject Summary Report in the Planning and Scoping
Phase
Deliberate and reflective reporting practices are a crucial component to assuring
an AI system’s alignment with human rights, democracy , and the rule of law.
Project Summary (PS) reporting s upports this goal by generat ing an ongoing
synopsi s of an AI project’s contextual reflexivity, stakeholder awareness,
governance resolve, and human rights diligence across its entire lifecycle . The first
iteration of writing a P S Report serves , during the planning and scoping phase of
a project, as preparation for answering the questions contained within the PCRA.
It provides a reference point for your respon ses to the context-based questions
about risk factors and the potential adverse impacts of AI on the human rights
and fundamental freedoms of affected individuals and communities .
The first time that th e PS Report is completed (at the very beginning of your
HUDERAF), it should be informed by available organisational scoping and planning
documents and by desk-based research pertaining to the system’s use context ,
stakeholder contexts, and huma n rights, democracy, and the rule of law. It is later
revised and updated at each iteration of the Stakeholder Engagement Process
(drawing from stakeholder input). Changes to this document are recorded and
registered over time, providing timestamped project summaries that are intended
to help build public trust by operationalizing goals of accountability and
transparency . The P S Report enables a dimension of traceability , illustrating how
a project has been configured through time by the use of a Human Rights,
Democracy, and Rule of Law Assurance Framework .
A Project Summary Report is comprised of four components: (1) a project
overview, (2) a list of identified sta keholders, (3) a scop ing of potential impacts
on human rights, democracy, and rule of law, and (4) a project governance
workflow. These components are developed through the following activities:
18
Activity 1: Outline Project -, Use-, Domain-, and Data-Contexts
During the initial iteration of writing the PS Report, draw on organisational
documents (i.e., the project business case, proof of concept, or project charter) ,
project team collaboration, and desk research (if necessary) to answer the
followin g questions2:
Qu
estions:
Project
• How would you describe your organisation and what sort of services or
products do you typically provide (beyond the system under consideration)?
2 These questions draw on research in both the study of sociotechnical dimensions of AI
governan ce and in human rights impact assessment. For exemplary resources in the first
group, see: Arnold, M., Bellamy, R. K., Hind, M., Houde, S., Mehta, S., Mojsilović, A. , ...
& Varshney, K. R. (2019). FactSheets: Increasing trust in AI services through supplier's
declarations of conformity. IBM Journal of Research and Development , 63(4/5), 6 -1;
Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., ... & Gebru, T. (2019, January). Model cards for model reporting. In Proceedings of the conference on
fairness, accountability, and transparency (pp. 220-229) ; Gebru, T., Morgenstern, J.,
Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2018).
Datasheets for datasets. arXiv preprint arXiv:1803.09010. For exemplary resources in the
second group , see Danish Institute for Human Rights (2020). Human Rights I mpact
Assessment: Guidance and Toolbox. DIHR; OECD (2018), OECD Due Diligence Guidance
for Responsible Business Conduct ; Mantelero, A., and Esposito, S. (2021). An evidence -
based methodology for human rights impact assessment (HRIA) in the development of A I
data- intensive systems. Computer Law & Security Review. 1. Outlining
project -,
use-,
domain -,
and data -
contexts•Outline a high -level description pertaining to the
project to build the prospective system, the domain
in which it will operate, the contexts in which it will
be used, and the data on which it will be trained.
2.
Identifying
stakeholders•Build on this contextual understanding to identify
project stakeholders and get a sense of the social environment and human factors that may be
impacted by the system. Pay close attention to
vulnerable rights -holders.
3. Scoping
Potential
Impacts to
Foundamental
Rights and
Freedoms•Consider the system's context, stakeholders, human rights, democracy, and the rule of law to get a sense of the human rights and
fundamental freedoms that may be
impacted by the system.
4. Mapping
Governance
Workflow•Consider the scope of potential impacts to fundamental rights and freedoms to distribute responsibility amidst project duty -
bearers and establish a logging protocol for workflow activities.
19 • What AI system is being built and what type of product or service will it
offer?
• What benefits will the system bring to its users and customers , and will
these benefits be widely accessible?
• Which organisation(s) —yours , other suppliers , or other provi ders—are
responsible for building this AI system?
• Which parts or elements of the AI system, if any, will be procured from
third-party vendor s, supplier s, sub-contractors, or external developer s?
• Which algorithms, techniques , and model types will be used in the AI
system? (Provide links to technical papers where appropriate)
• In a scenario where your project optimally scales, how many people will it
impact, for how long, and in what geographic range (local, national, global) ?
(Describe your rationale)
Use Context
• What is the purpose of this AI system and in which contexts will it be used ?
(Briefly describe a use -case that illustrates primary intend ed use )
• Is the AI system’s processing output to be used in a fully automated way
or will there be some degree of human control, oversight, or input before
use? (Describe)
• Will the AI system evolve or learn continuously in its use context, or will it
be static?
• To what degree w ill the use of the AI system be time- critical, or will users
be able to evaluate outputs comfortably over time?
• What sort of out -of-scope uses could users attempt to apply the AI system ,
and what dangers may arise from this?
Domain
• In what domain will this AI system operate?
• Which, if any, domain experts have been or will be consulted in designing
and developing the AI system ?
Data
• What datasets are being used to build this AI system?
• Will any data being used in the production of the AI system be acquired
from a vendor or supplier? (Describe)
• Will the data being used in the production of the AI system be collected for
that purpose, or will it be re-purposed from existing datasets? (Describe)
• What quality assurance and bias mitigation processes do you have in place
for the data lifecycle —for both acquired and collected data ?
Activity 2: Identify Stakeholders
Stakeholders include3:
3 Danish Institute for Human Rights (2020). Human Rights Impact Assessment: Guidance
20 • Rights -Holders: All individuals are human rights -holders. In the impact
assessment, the primary focus is on rights -holders who are, or may be,
adversely affected by the project.
• Duty- Bearers: Duty -bearers are actors who have human rights duties or
responsibiliti es towards rights -holders. These include the company or
organization operating a project or conducting its activities, business
suppliers and contractors, joint -venture and other business partners, state
actors such as local government authorities and regi onal and national
government departments and agencies.
• Other Relevant Parties: These may include individuals or organisations representing the interests of rights holders and official representations at
international, national and local levels (e.g. the UN, national human rights
institutions, NGOs or civil society organisations).
During stakeholder identification it is particularly important to consider which rights-holders are most vulnerable to potential impacts of the tool/system being
developed and which stakeholders’ views are underrepresented in development
and deployment processes. During the initial iteration of writing the PS Report,
draw on desk research pertaining to stakeholders and their contexts to identify
stakeholders and to get a sense of the social environment and human factors that
may be impacted by the system. Answer the following questions:
Questions:
• Who are the rights -holders, duty bearers and other relevant parties that
may be impacted by, or may impact, the project?
To further identify groups of rights-holders, answer the following questions:
Protected Characteristic s:
• Do any of these rights -holders possess sensitive or protected characteristics
that could increase their vulnerability to abuse or discrimination, or for
reason o f which they may require additional protection or assistance with
respect to the impacts of the project? If so, what characteristics?
Contextual Vulnerability Characteristics:
• Could the outcomes of this project present significant concerns to specific
groups of rights -holders given vulnerabilities caused or precipitated by their
distinct circumstances?
• If so, what vulnerability characteristics expose them to being jeopardized
by project outcomes?
For any characteristic s identified in the previous answer s, consider the
question s:
• What affected group or groups of rights -holders could these protected or
contextual vulnerability characteristic s or qualit ies represent?
• How could the distribution of power relations (i.e., the relative advantages
and disadvantages) between rights holders and duty bearers affect the way
the benefits and risks of the project are allocated?
21
Activity 3: Scope Impacts to Human Rights, Democracy, and Rule
of Law
The information contained below in the Table 1 (Principles and Priorities,
Corresponding Rights and Freedoms, and Elaborations ) serve s as background
material to provide you with a means of accessing and understanding some of the
existing human rights and fundamental freedoms that could be impacted by the
use of AI technologies. A thorough review of this table and an engagement of the
links to the relevant Charters, Conventions, Declarations, and elaborations it
contains is a critical first step that will help you identify the salient rights and
freedoms that could be affected by your project. You should also explore whether
your organisation has engaged in any previous Human Rights Impact Assessments
or other impact assessments (data protection impact assessment, equality impact
assessment, ethical and social impact assessment, environmental impact
assessment , etc.)—and review these where they are present .
As a starting point, the rights and freedoms delineated within th is material can be
presented in summary form in the following principles and priorities , which are
enumerated and elaborated upon in the CAHAI’s Feasibility Study and in the multi -
stakeholder consultation that followed the plenary’s adoption of this text in
December 2020 4:
4 As has been indicated within and at the bottom of Table 1, this list of rights and freedoms
also reflects important adjacent work done by the Council of Europe, UNESCO, the
European Commission, OECD, and the European Agency for Fundamental Rights among
others . For relevant sources, see: Council of Europe Commissioner for Human Rights
(2019) – “Unboxing AI: 10 steps to protect Human Rights” ; Council of Europe,
“Recommendation CM/Rec (2020) of the Committee of Ministers to member States on the
human rights impacts of algorithmic systems”; Muller, C. The Impact of Artificial
Intelligence on Human Rights, Democrac y and the Rule of Law. Council of Europe,
CAHAI(2020)06- fin; Bergamini, D. ( Rapporteur) (2020). Need for democratic governance
of artificial intelligence . Council of Europe, Committee on Political Affairs and Democracy ,
Parliamentary Assembly; UNESCO (2021a/b) – Recommendations on the Ethics of Artificial
Intelligence (two drafts, 1 -25, 26 -134); High-level Expert Group on Artificial Intelligence,
European Commission (2020) – “The Assessment List for Trustworthy Artificial Intelligence
(ALTAI) for self -assessment” ; European Agency for Fundamental Rights (2020) – “Getting
the Future Right: Artificial Intelligence and Fundamental Rights”.
22
Human Dignity
All individuals are inherently and inviolably worthy of respect by mere virtue
of their status as human beings. Humans should be treated as moral subjects,
and not as objects to be algorithmically scored or manipulated.
Human Freedom and Autonomy
Humans should be empowered to determine in an informed and autonomous manner if, when, and how AI systems are to be used. These systems should not be employed to condition or control humans, but should rather enrich their
capabilities.
Prevention of Harm
The physical and mental integrity of humans and the sustainability of the
biosphere must be protected, and additional safeguards must be put in place to protect the vulnerable. AI systems must not be permitted to adversely
impact human wellbeing or planetary health.
Fairness , Non-Discrimination, Equality, Diversity , and Inclusiveness
All humans possess the right to non -discrimination and the right to equality
and equal treatment under the law. AI systems must be designed to be fair,
equitable, and inclusive in their beneficial impacts and in the distribution of
their risks.
Data Protection and the R ight to Privacy
The design and use of AI systems that rely on the processing of personal data
must secure a person’s right to respect for private and family life, including
the individual's right to control their own data. Informed, freely given, and
unambiguous consent must play a role in this.
Democracy
Individuals should enjoy the ability to freely form bonds of social cohesion,
human connection, and solidarity through inclusive and regular democratic
participation, whether in political life, work life, or social life. This requires
informational plurality , the free and equitable f low of the legitimate and valid
forms of information, and the protection of freedoms of expression, assembly,
and association .
Rule of Law
AI systems must not undermine judicial independence, effective remedy, the
right to a fair trial, due process, or impartiality. To ensure this, the
transparency, integrity, and fairness of the data, and data processing methods
must be secured.
Social and Economic Rights
Individual s must have access to the material means needed to participat e
fully in work life, social life, and creative life, and in the conduct of public
affairs, through the provision of proper education, adequate living and
working standards, health, safety, and social security. This means that AI
systems should not infringe upon individuals’ rights to work, to just, safe, and
healthy working conditions, to social security, to the protection of health , and
to social and medical assistance.
23 Table 1. Principles and Priorities , Corresponding Rights and Freedoms, and Elaborations
(Numbers in the first colum n indicate overlapping principles and priorities from other relevant sources listed at the bottom)
Principles and
Priorities Corresponding Rights and Freedoms with
Selected Elaborations Resources for Principles and
Corresponding Rights and Freedoms
Respect for and
protection of
human dignity (2,
3, 4, 5, 8) -The right to human dignity, the right to life (Art. 2
ECHR), and the right to physical and mental
integrity.
-The right to be informed of the fact that one is
interacting with an AI system rather than with a
human being.
-The right to refuse interaction with an AI system
whenever this could adversely impact human dignity.
∼
“Human dignity is the foundation of all human rights. It
recognises that all individuals are inherently worthy of
respect by mere virtue of their status as human beings. Human dignity, as an absolute right, is inviolable. Hence,
even when a human right is restricted – for instance when
a balance of rights and interests must be made – human
dignity must always be safeguarded. In the context of AI,
this means that the design, development and use of AI
systems must respect the dignity of the human beings
interacting therewith or impacted thereby. Humans
should be treated as moral s ubjects, and not as mere
objects that are categorised, scored, predicted or
manipulated.
AI applications can be used to foster human dignity and
empower individuals, yet their use can also challenge it
and (un)intentionally run counter to it. To safeguard European Convention on Human Rights (ECHR):
-Article 2, European Convention on Human Rights – Right
to life
-Article 2, ‘Guide on Article 2 of the European Convention
on Human Rights’ , Council of Europe – Right to life
Charter of the Fundamental Rights of the EU:
-Article 1, Charter of the Fundamental Rights of the EU –
Human dignity
-Article 1 , Explanation on Article 1 — Human Dignity
(Explanations Relating to the Charter of Fundamental
Rights) in the Official Journal of the European Union C
303/17 - 14.12.2007
-Article 2, Charter of the Fundamental Rights of the EU
–
Right to life
-Article 2, Explanation on Article 2 — Right to life
(Explanations Relating to the Charter of Fundamental
Rights) in the Official Journal of the European Union C
303/17 - 14.12.2007
Universal Declaration of Human Rights:
-Preamble, Universal Declaration of Human Rights –
Dignity
24 human dignity, it is essential that human beings are
aware of the fact that they are interacting with an AI
system and are not misled in this regard. Moreover, they
should in principle be able to choose not to interact with
it, and to not be subject to a decision informed or made
by an AI system whenever this can significantly impact
their lives, especially when this can violate rights related
to their human dignity. Furthermore, the allocation of
certain tasks may need to be reserved for humans rather
than machines given their potential impact on human
dignity. ”5
∼
“As stated in the EU Charter’s official guiding explanations
(Official Journal of the European Union, 2007: C 303/17):
‘dignity of the human person is not only a fundamental
right in itself but constitutes the real basis of fundamental
rights.’ In this sense, dignity is a highly multifaceted
concept … One way to grasp the broad range of its
meanings is to explore what humans consider to be
violations of their dignity, as done by Halbertal (2015),
who points out the following three categories of
violations: 1. Humiliation : being put in a state of
helplessness, insignificance; losing autonomy over your own representation. 2. Instrumentalization : treating an
individual as exchangeable and merely a mea ns to an
end. 3. Rejection of one’s gift : making an individual
superfluous, unacknowledging one’s contribution,
aspiration, and potential.
6
International Covenant on Civil and Political Rights: -Article 6,
International Covenant on Civil and Political
Rights – Right to life
Council of Europe Resources:
-Council of Europe,
Convention for the Protection of
Human Rights and Dignity of the Human Being with
regard to the Application of Biology and Medicine:
Convention on Human Rights and Biomedicine
5 CAHAI Feasibility Study , Council of Europe CAHAI (2020)23.
6 Aizenberg, E., & van den Hoven, J. (2020). Designing for human rights in AI. Big Data & Society , 7(2); Halbertal, M. (2015). Three
concepts of human dignity. 2015 Dewey Lecture, University of Chicago ; Kaufmann, P., Kuch, H., Neuhaeuser, C., & Webster, E. (Eds.).
(2010). Humiliation, degradation, dehumanization: human dignity violated (Vol. 24). Springer Science & Business Media.
25 “The inviolable and inherent dignity of every human
constitutes the foundation for the universal, indivisible,
inalienable, interdependent and interrelated system of human rights and fundamental freedoms. Therefore,
respect, protection and promotion of human dignity and
rights as established by international law, including
international human rights law, is essential throughout
the life cycle of AI systems. Human dignity relates to the
recognition of the intrinsic and equal worth of each
individual human being, regardless of race, colour,
descent, gender, age, language, religion, political opinion ,
national origin, ethnic origin, social origin, economic or
social condition of birth, or disability, and any other grounds....Persons may interact with AI systems
throughout their life cycle and receive assistance from
them such as care for vulnerable people or people in
vulnerable situations, including but not limited to children,
older persons, persons with disabilities or the ill. Within
such interactions, persons should never be objectified,
nor should their dignity be otherwise undermined, or
human r ights and fundamental freedoms violated or
abused.”
7
Protection of
human freedom and autonomy (1, 2, 4, 6, 7, 8) -The right to liberty and security (Art. 5 ECHR).
-The right to human autonomy and self -
determination.
-The right not to be subject to a decision based
solely on automated processing when this produces
legal effects on groups or similarly significantly
affects individuals. European Convention on Human Rights (ECHR):
-Article 5, European Convention on Human Rights – Right
to liberty and security
-Article 5, ‘Guide on Article 5 of the European Convention
on Human Rights’ , Council of Europe – Right to l iberty
and security
7 UNESCO (2021a/b) – Recommendations on the Ethics of Artificial Intelligence (two drafts, 1 -25, 26- 134).
26 -The rig ht to effectively contest and challenge
decisions informed and/or made by an AI system
and to demand that such decisions be reviewed by a
person.
-The right to freely decide to be excluded from AI -
enabled manipulation, individualized profiling, and
predictions. This also applies to cases of non-personal data processing.
-The right to have the opportunity, when it is not overridden by competing legitimate grounds, to
choose to have contact with a human being rather
than a robot.
∼
“Individual self -determination and personal autonomy
necessarily entail the ability to freely take decisions and
have them respected by others. According to international
and European human rights jurisprudence, individual
personal autonomy also covers a further range of human
behaviours, amongst which are the right to develop one’s own personal ity and the right to establish and develop
relationships with other people, the right to pursue one’s
own aspirations and to control one’s own information.”
8
“P
romoti ng autonomy means enabling individuals to make
decisions about their lives on their own, which are not
imposed upon them or manipulated by others. It requires
having a range of genuine options available from which
one can choose. A decision taken on the basis of -Article 9, European Convention on Human Rights
– Freedom of thought, conscience, and religion
-Article 9,
‘Guide on Article 9 of the European Convention
on Human Rights’ , Council of Europe – Freedom of
thought, conscience, and religion
-Article 10, European Convention on Human Rights
– Freedom of expression
-Article 10, ‘Guide on Article 10 of the European
Convention on Human Rights’ , Council of Europe –
Freedom of expression
Charter of the Fundamental Rights of the EU:
-Article 6, Charter of the Fundamental Rights of the EU
–
Right to liberty and security
-Article 6 , Explanation on Article 6 — Right to liberty and
security (Explanations Relating to the Charter of
Fundamental Rights) in the Official Journal of the
European Union C 303/17 - 14.12.2007
-Article 10, Charter of the Fundamental Rights of the EU
–
Freedom of thought, conscience, and religion
-Article 10, Explanation on Article 10 — Freedom of
thought, conscience, and religion (Explanations Relating
to the Charter of Fundamental Rights) in the Official
Journal of the European Union C 303/17 - 14.12.2007
8 Mantelero, A., and Esposito, S. (2021). An evidence -based methodology for human rights impact assessment (HRIA) in the development
of AI data -intensive systems. Computer Law & Security Review.
27 inadequate information or deception is not considered
autonomous. ”9 -Article 11, Charter of the Fundamental Rights of the EU –
Freedom expression and information
-Article 11 , Explanation on Article 11 — Freedom of
expression and information (Explanations Relating to the
Charter of Fundamental Rights) in the Official Journal of
the European Union C 303/17 - 14.12.2007
Universal Declaration of Human Rights:
-Article 3, Universal Declaration of Human Rights – Right
to life, liberty, and the security of person
-Article 18, Universal Declaration of Human Rights
– Right
to freedom of thought, conscience, and religion
-Article 19, Universal Declaration of Human Rights – Right
to freedom of opinion and expression
International Covenant on Civil and Political Rights:
-Article 9,
International Covenant on Civil and Political
Rights – Right to liberty and security of person
-Article 18, International Covenant on Civil and Political
Rights – Right to freedom of thought, conscience, and
religion
-Article 19,
International Covenant on Civil and Political
Rights – Freedom of exp ression
Council of Europe Resources:
9 Loi, et al. (2021) Algorithm Watch – “Automated Decision -Making Systems in the Public Sector: An Impact Assessment Tool for Public
Authorities”
28
-Commissioner for Human Rights, Human Rights and
Disability (20 October, 2008) ‘Equal rights for all’ , para.
5.2. – Council of Europe
-Bychawska- Siniarska, ‘Protecting the right to freedom of
expression under the European Convention on Human
Rights ’ – Council of Europe, Strasbourg 2017
Prevention of harm
and protection of the right to life and
physical, psychological, and moral integrity (2, 5, 6, 8) -The right to life (Art. 2 ECHR) and the right to
physical and mental integrity.
-The right to the protection of the environment.
-The right to sustainability of the community and biosphere.
∼
“Throughout the life cycle of AI systems, human beings
should not be harmed but rather their quality of life of
human beings should be maintained or enhanced, while
the definition of “quality of life” should be left open to individuals or grou ps, while respecting relevant laws, as
long as there is no violation or abuse of human rights and fundamental freedoms, or the dignity of humans in terms
of this definition....Environmental and ecosystem
flourishing should be recognized, protected and prom oted
through the life cycle of AI systems. Furthermore, environment and ecosystems are the existential necessity
for humanity and other living beings to be able to enjoy
the benefits of advances in AI....All actors involved in the life cycle of AI systems must comply with applicable
international law and domestic legislation, standards and practices, such as precaution, designed for environmental
and ecosystem protection and restoration, and
sustainable development. They should reduce the European Convention on Human Rights (ECHR) :
-Article 2, European Convention on Human Rights
– Right
to life
-Article 2, ‘Guide on Article 2 of the European Convention
on Human Rights ’, Council of Europe – Right to life
Charter of the Fundamental Rights of the EU:
-Article 2, Charter of the Fundamental Rights of the EU
–
Right to life
-Article 2 , Explanation on Article 2 — Right to life
(Explanations Relating to the Charter of Fundamental
Rights) in the Official Journal of the European Union C
303/17 - 14.12.2007
-Article 3, Charter of the Fundamental Rights of the EU
–
Right to integrity of the person
-Article 3, Explanation on Article 3 — Right to the
integrity of the person (Explanations Relating to the
Charter of Fundamental Rights ) in the Official Journal of
the European Union C 303/17 - 14.12.2007
29 environmental impa ct of AI systems, including but not
limited to, its carbon footprint, to ensure the minimization
of climate change and environmental risk factors, and prevent the unsustainable exploitation, use and
transformation of natural resources contributing to the
deterioration of the environment and the degradation of ecosystems. ”
10
Non- discrimination,
fairness, and
equality (1, 2, 3, 4, 5, 6, 7, 8 ) -
The right to non-discrimination (on the basis of the
protected grounds set out in Article 14 of the ECHR
and Protocol 12 to the ECHR), including
intersectional discrimination.
-The right to non-discrimination and the right to
equal treatment. This right must be ensured in
relation to the entire lifecycle of an AI system (design, development, implementation, and use), as
well as to the human choices concerning AI design,
adoption, and use, whether used in the public or
private sector.
∼
“In all circumstances, discrimination risks must be
prevented and mitigated with special attention for groups that have an increased risk of their rights being disproportionately impacted by AI. This includes women,
children, older people, economically disa dvantaged
persons, members of the LGBTI community, persons with disabilities, and “racial”, ethnic or religious groups.
Member states must refrain from using AI systems that
discriminate or lead to discriminatory outcomes and, European Convention on Human Rights (ECHR) :
-Protocol No. 12, European Convention on Human Rights
-Article 14, European Convention on Human Rights
–
Prohibition of discrimination
-Article 14 and Article 12 of Protocol No. 12, ‘ Guide on
Article 14 of the European Convention on Human Right
and on Article 1 of Protocol No. 12 to the Convention ’,
Council of Europe – Prohibition of discrimination
Charter of the Fundamental Rights of the EU:
-Articles 20 and 21, Charter of the Fundamental Rights of
the EU – Equality before the law and non- discrimination
-Article 20,
Explanation on Article 20 — Equality before
the law (Explanations Relating to the Charter of
Fundamental Rights ) in the Official Journal of the
European Union C 303/17 - 14.12.2007
-Article 21, Explanation on Article 21 — Non-
discrimination (Explanations Relating to the Charter of
10 UNESCO (2021a/b) – Recommendations on the Ethics of Artificial Intelligence (two drafts, 1 -25, 26- 134).
30 within their jurisdiction, pr otect individuals from the
consequences of use of such AI systems by third parties.
The active participation of and meaningful consultation
with a diverse community that includes effective
representation from these groups in all stages of the AI
lifecycle is an important component of prevention and
mitigation of adverse human rights impacts. In addition, special attention needs to be paid to transparency and
accessibility of information on the training data used for
the development of an AI system. ”
11
∼
“AI actors should promote social justice, by safeguarding
fairness and non -discrimination of any kind in compliance
with international law. Fairness implies sharing benefits of
AI technologies at local, national and international levels,
while taking into cons ideration the specific needs of
different age groups, cultural systems, different language
groups, persons with disabilities, girls and women, and
disadvantaged, marginalized and vulnerable populations.
At the local level, it is a matter of working to give
communities access to AI systems in the languages of their choice and respecting different cultures. At the
national level, governments are obliged to demonstrate
equity between rural and urban areas, and among all
persons without distinction as to sex, g ender, language,
religion, political or other opinion, national, ethnic,
indigenous or social origin, sexual orientation and gender
identity, property, birth, disability, age or other status, in
terms of access to and participation in the AI system life
cycle. At the international level, the most technologically
advanced countries have a responsibility of solidarity with Fundamental Rights) in the Official Journal of the
European Union C 303/17 - 14.12.2007
Universal Declaration of Human Rights:
-Article 7, Universal Declaration of Human Rights
–
Equality before the law
International Covenant on Civil and Political Rights:
-Article 6, International Covenant on Civil and Political
Rights – Right to life
-Article 26,
International Covenant on Civil and Political
Rights – Non-discrimination
Council of Europe Resources:
-
European Commission against Racism and Intolerance
(ECRI) - Discrimination, artificial intelligence, and
algorithmic decision -making (2018 )
• AI applications have found ways to “escape
current laws.” The majority of non -discrimination
statutes relate only to specific protected characteristics. There are other forms of
discrimination that are not correlated with
protected characteristics but ca n still reinforce
social inequality .
• The idea of sector -specific rules for the protection
of fairness and human rights in the area of AI is
proposed, as different sectors necessitate different
values and problems .
11 Council of Europe Commissioner for Human Rights (2019) – “Unboxing AI: 10 steps to protect Human Rights”
31 the least advanced to ensure that the benefits of AI
technologies are shared such that access to and
participation in the AI system life c ycle for the latter
contributes to a fairer world order with regard to
information, communication, culture, education, research,
and socio -economic and political stability...AI actors
should promote social justice and safeguard fairness and non-discriminat ion of any kind in compliance with
international law. This implies an inclusive approach to ensuring the benefits of AI technologies are available and
accessible to all taking into consideration the specific
needs of different age groups, cultural systems, different
language groups, persons with disabilities, girls and
women, and disadvantaged, marginalized and vulnerable
people or people in vulnerable situations. ”
12
∼
“AI is widely expected to improve the productivity of
economies. However, these productivity gains are expected to be distributed unequally with most benefits
accruing to the well -off. Similarly, data and design
choices, combined with a lack of transparency of black
box algorithms have shown to lead to a perpetuating
unjust bias against already disadvantaged groups in
society, such as women and ethnic minorities.28 AI could
lead to inequality and segregation and thus threaten the necessary level of economic and social equality required
for a thriving democracy. ”
13
-Unboxing artificial intelligence: 10 steps to protect
human rights (2019)
• Recommendations are to be used to mitigate or prevent negative i mpacts of AI systems on human
rights .
• Practical recommendations are given with 10 areas of action: human rights impact assessments ;
public consultations; human rights standards in the private sector; information and transparency;
independent monitoring; no n-discrimination and
equality; data protection and privacy; freedom of expression, freedom of assembly and association,
and the right to work; avenues for redress; and
promoting knowledge and understanding of AI .
Office of the United Nations High Commissioner for
Human Rights:
-OHCHR,
International Convention on the Elimination of
All Forms of Racial Discrimination
-OHCHR, Convention on the Elimination of All Forms of
Discrimination against Women
Data protection
and the right to respect of private -The right to respect for private and family life and
the protection of personal data (Art. 8 ECHR). European Convention on Human Rights (ECHR) :
-Article 8, European Convention on Human Rights – Right
to respect for private and family life
12 UNESCO (2021a/b) – Recommendations on the Ethics of Artificial Intelligence (two drafts, 1 -25, 26- 134).
13 Muller, C. The Impact of Artificial Intelligence on Human Rights, Democracy and the Rule of Law. Council of Europe, CAHAI(2020)06- fin.
32 and family life (1,
2, 3, 5, 7, 8) -The right to physical, psychological, and moral
integrity in light of AI -based profiling and
emotion/personality recognition.
-All the rights enshrined in Convention 108+ and in
its modernised version, and in particular with
regard to AI -based profiling and location tracking.
∼
“The development, training, testing and use of AI systems
that rely on the processing of personal data must fully secure a person’s right to res pect for private and family
life under Article 8 of the European Convention on Human Rights, including the “right to a form of informational self -
determination” in relation to their data.
Data processing in the context of AI systems shall be
proportionate in relation to the legitimate purpose
pursued through such processing, and should at all stages
of the processing reflect a fair balance between the
interests pursued through the development and
deployment of the AI system and the rights and freedoms at stake.
Member states should effectively implement the
modernised Council of Europe Convention for the
Protection of Individuals with regard to Automatic
Processing of Personal Data (“Convention 108+”) as well
as any other international instrument on data protection and privacy that is binding on the member state. The
processing of personal data at any stage of an AI system
lifecycle must be based on the principles set out under
the Convention 108+, in particular (i) there must be a
legitimate basis laid down by law for the processing of the
personal data at the relevant stages of the AI system lifecycle; (ii) the personal data must be processed
lawfully, fairly and in a transparent manner; (iii) the
-Article 8, ‘ Guide on Article 8 of the European Convention
on Human Rights. Right to respect for private and family
life, home and correspondence ’, Council of Europe – Right
to respect for private and family life
Charter of the Fundamental Rights of the EU:
-Article 7, Charter of the Fundamental Rights of the EU
–
Right for private and family life
-Article 7, Explanation on Article 7 — Right for private
and family life (Explanations Relating to the Charter of
Fundamental Rights ) in the Official Journal of the
European Union C 303/17 - 14.12.2007
-Article 8, Charter of the Fundamental Rights of the EU
–
Protection of personal data
-Article 8, Explanation on Article 8 — Protection of
personal data (Explanations Relating to the Charter of
Fundamental Rights) in the Official Journal of the
European Union C 303/17 - 14.12.2007
Universal Declaration of Human Rights:
-Article 12, Universal Declaration of Human Rights
– Right
to respect for privacy, family, home, or correspondence
Council of Europe Resources:
-Convention 108/108+ (1981/2018)
• Processing of sensitive data can only be allowed where appropriate guidelines are present
• Every individual has the right to know the purpose of processing their data. Along with this, they have
a right to rectification and obtainment of
33 personal data must be collected for explicit, specified and
legitimate purposes and not processed in a way
incompatible with those purposes; (iv) the personal data must be adequate, relevant and not excessive in relation
to the purposes for which they are processed; (v) the
personal data must be accurate and, where necessary,
kept up to date; (vi) the personal data should be preserved in a form which permits identification of data
subjects for no longer than is necessary for the purposes
for which those data are processed. ”
14
∼
“The right to respect for private life and the protection of
personal data (Articles 7 and 8 of the EU Charter) are at
the core of fundamental rights discussions around the use
of AI. While closely related, the rights to respect for
private life and the pro tection of personal data are
distinct, self -standing rights. They have been described as
the ‘classic’ right to the protection of privacy and a more
“modern” right, the right to data protection. Both strive to
protect similar values, i.e. the autonomy and human
dignity of individuals, by granting them a personal sphere in which they can freely develop their personalities, think
and shape their opinions. They thus form an essential
prerequisite for the exercise of other fundamental rights,
such as the freedo m of thought, conscience and religion
(Article 10 of the EU Charter), freedom of expression and
information (Article 11 of the EU Charter), and freedom of
assembly and of association (Article 12 of the EU
Charter). The concept of ‘private life ’ or ‘privacy’ is
complex and broad, and not susceptible to an exhaustive
definition. It covers the physical and psychological
integrity of a person, and can, therefore, embrace knowledge where da ta are processed contrary to
the Convention’s provisions
• Transparency, proportionality, consent, data quality, data security, purpose limitation,
accountability, data minimisation, data protection
and privacy by design, fairness, and lawfulness are
princip les introduced
• Individuals should not be subjected to decisions made solely by automated processing of data
without consideration of personal views
• “Legal framework built around Convention remains fully applicable to AI technology, as soon as the processed data fall within the scope of the
Convention.”
• Respect for the rights of data subjects and conformity to the additional obligations of data
controllers and processors as set out in Articles 9
and 10 of Convention 108+
-
Convention on Cybercrime (“Budapest
Convention”)(2001)
• “Criminalising offences against and by the means of computers, for procedural powers to investigate
cybercrime and secure electronic evidence.”
• Crimes include but are not limited to infringements of copyright, computer -related fraud, child
pornography, and violations of a security network
• Investigation includes a series of powers and procedures including interception and the search
of computer net works
• Primary objective is to “pursue a common criminal
policy aimed at the protection of society against
cybercrime, especially through appropriate
legislation and international co -operation.”
14 Council of Europe Commissioner for Human Rights (2019) – “Unboxing AI: 10 steps to protect Human Rights”
34 multiple aspects of the person’s physical and social
identity. There is also a zone of inte raction of a person
with others, even in a public context, which may fall
within the scope of ‘privacy”. ”15
∼
“Many AI -systems and uses have a broad and deep
impact on the right to privacy. Privacy discussions around AI currently tend to focus primarily on data privacy and
the indiscriminate processing of personal (and non-personal) data. It should however be noted that, while
data privacy is indeed an important element, the impact
of AI on our privacy goes well beyond our data. Art. 8 of
the ECHR encompasse s the protection of a wide range of
elements of our private lives, that can be grouped into three broad categories namely: (i) a person’s (general)
privacy, (ii) a person's physical, psychological or moral integrity and (iii) a person's identity and autono my.
Different applications and uses of AI can have an impact on these categories, and have received little attention to date.
AI-driven (mass) surveillance, for example with facial
recognition, involves the capture, storage and processing of personal (biometric) data (our faces), but it also affects
our 'general' privacy, identity and autonomy in such a
way that it creates a situation where we are (constantly) being watched, followed and identified. As a psychological
‘chilling’ effect, people might feel i nclined to adapt their
behaviour to a certain norm, which shifts the balance of power between the state or private organisation using
facial recognition and the individual. In legal doctrine and
precedent the chilling effect of surveillance can constitute
a violation of the private space, which is necessary for
-Directorate General of Human Rights and Rule of Law,
Guidelines on Artificial Intelligence and Data Protection ,
Council of Europe
-Roagna,
‘Protecting the right to respect for private and
family life under the European Convention on Human
Rights’ – Council of Europe, Strasbourg 2012
15 European Agency for Fundamental Rights (2020) – “Getting the Future Right: Artificial Intelligence and Fundamental Rights”.
35 personal development and democratic deliberation . Even
if our faces are immediately deleted after capturing, the
technology still intrudes our psychological integrity. ”16
Economic and
social rights (1, 7, 8) -The right to just working conditions, the right to
safe and healthy working conditions, the right to
organize, the right to social security, and the rights to the protection of health and to social and medical assistance (Art. 2, 3, 5, 11, 13 ESC).
∼
“AI systems can have major benefits when used for
hazardous, heavy, exhausting, unpleasant, repetitive or
boring work. However, the wide adoption of AI systems in all domains of our lives also creates new risks to social
and economic rights. AI systems are increasingly used to
monitor and track workers, distribute work without
human intervention and assess and predict worker
potential and performance in hiring and firing situations. In some situations, this can also have detrimental
consequences for workers’ right to decent pay, as their pay can be determined by algorithms in a way that is
irregul ar, inconsistent and insufficient. Furthermore, AI
systems can also be used to detect and counter the unionisation of workers. These applications can jeopardise
the right to just, safe and healthy working conditions,
dignity at work as well as the right to organise. The
discrimination capacity of AI systems that assess and predict the performance of job applications or workers
can also undermine equality, including gender equality, in matters of employment and occupation.
In addition, AI systems can, for i nstance, be used in the
context of social security decisions, in which case the European Social Charter:
-Article 2, European Social Charter
– Dignity of work and
the right to just working conditions
-Article 3, European Social Charter
– Right to safe and
healthy working conditions
-Article 5, European Social Charter – Right to organize
-Article 11, European Social Charter
– Rights to the
protection of health
-Article 12, European Social Charter
– Right to social
security
-Article 13, European Social Charter – Right to social and
medical assistance
International Covenant on Economic, Social and
Cultural Rights:
-Article 6, International Covenant on Economic, Social,
and Cultural Rights – The right to work
-Article 7, International Covenant on Economic, Social,
and Cultural Rights – Right to just and favourable
conditions of work
-Article 8, International Covenant on Economic, Social,
and Cultural Rights – Right to organise
16 Muller, C. The Impact of Artificial Intelligence on Human Rights, Democracy and the Rule of Law. Council of Europe, CAHAI(2020)06- fin.
36 right guaranteed by Article 12 of the European Social
Charter – stating that all workers and their dependents
have the right to social security – can be impacted.
Indeed, AI systems are increasingly relied on in social
welfare administration, and the decisions taken in that
context can significantly impact individuals’ lives. Similar
issues arise where AI systems are deployed in the context
of education or housing allocation adm inistrations.
Moreover, whenever AI systems are used to automate decisions regarding the provision of healthcare and
medical assistances, such use can also impact the rights
enshrined in Articles 11 and 13 of the Charter, which respectively state that everyone has the right to benefit
from measures that enable the enjoyment of the highest
possible standard of health attainable, and that anyone
without adequate resources has the right to social and
medical assistance. AI systems can, for instance, be
utilised to determine patients’ access to health care
services by analysing patients’ personal data, such as their health care records, lifestyle data and other
information. It is important that this occurs in line with not only the right to privacy and perso nal data protection,
but also with all the social rights laid down in the aforementioned Charter, the impact on which has so far
received less attention than the impact on civil and
political rights.”
17
-Article 9, International Covenant on Economic, Social,
and Cultural Rights – Right to social security
Charter of the Fundamental Rights of the EU:
-Article 15, Charter of the Fundamental Rights of the EU
–
Freedom to choose an occupation and right to engage in work
-Article 15,
Explanation on Article 15 — Freedom to
choose an occupation and right to engage in work
(Explanations Relating to the Charter of Fundamental
Rights) in the Official Journal of the European Union C
303/17 - 14.12.2007
-Article 16, Charter of the Fundamental Rights of the EU –
Freedom to conduct a business
-Article 16, Explanation on Article 16 — Freedom to
conduct a business (Explanations Relating to the Charter
of Fundamental Rights) in the Official Journal of the
European Union C 303/17 - 14.12.2007
-Article 17, Charter of the Fundamental Rights of the EU
–
Right to property
-Article 17, Explanation on Article 17 — Right to property
(Explanations Relating to the Charter of Fundamental
Rights) in the Official Journal of the European Union C
303/17 - 14.12.2007
17 CAHAI Feasibility Study , Council of Europe CAHAI (2020)23.
37 -Article 27, Charter of the Fundamental Rights of the EU –
Workers’ rights to information and consultation with the
undertaking
-Article 27 , Explanation on Article 27 — Workers’ rights to
information and consultation with the und ertaking
(Explanations Relating to the Charter of Fundamental
Rights) in the Official Journal of the European Union C
303/17 - 14.12.2007
-Article 31, Charter of the Fundamental Rights of the EU
–
Fair and just working conditions
-Article 31 ,
Explanation on Article 31 — Fair and just
working conditions (Explanations Relating to the Charter
of Fundamental Rights) in the Official Journal of the
European Union C 303/17 - 14.12.2007
-Article 34, Charter of the Fundamental Rights of the EU –
Social security and social assistance
-Article 34 ,
Explanation on Article 34 — Social security
and social assistance (Explanations Relating to the
Charter of Fundamental Rights) in the Official Journal of
the European Union C 303/17 - 14.12.2007
-Article 35, Charter of the Fundamental Rights of the EU –
Healthcare
-Article 35 , Explanation on Article 35 — Healthcare
(Explanations Relating to the Charter of Fundamental
Rights) in the Official Journal of the European Union C
303/17 - 14.12.2007
Universal Declaration of Human Rights:
38 -Article 3, Universal Declaration of Human Rights – Right
to life, liberty, and the security of person
-Article 12, Universal Declaration of Human Rights – Right
to private home life
-Article 22, Universal Declaration of Human Rights – Right
to social security
-Article 22, Universal Declaration of Human Rights –
Workers’ rights
Accountability and
effective remedy
(1, 2, 3, 5, 6, 7) -The right to an effective remedy for violation of
rights and freedoms (Art. 13 ECHR) . This should
also include the right to effective and accessible remedies whenever the development or use of AI
systems by private or public entities causes unjust
harm or breaches an individual’s legally protected
rights.
∼
“AI systems must always remain under human control,
even in circumstances where machine learning or similar techniques allow for the AI system to make decisions
independently of specific human intervention. Member states must establish clear lines of responsibility for
human rights violations that may arise at various phases
of an AI system lifecycle. Responsibility and accountability
for human rights violations that occur in the development,
deployment or use of AI Systems must always lie with a natural or legal person, even in cases where the measure
violating human rights was not directly ordered by a
responsible human commander or operator. European Convention on Human Rights (ECHR) :
-Article 13, European Convention on Human Rights
– Right to an effective remedy
-Article 13,
‘Guide on Article 13 of the European
Convention on Human Rights.’ , Council of Europe – Right
to an effective remedy
Charter of the Fundamental Rights of the EU:
-Article 47, Charter of the Fundamental Rights of the EU
–
Right to an effective remedy before a tribunal and to a
fair trial
-Article 47,
Explanation on Article 47 — Right to an
effective remedy and a fair trial (Explanations Relating to
the Charter of Fundamental Rights) in the Official Journal
of the European Union C 303/17 - 14.12.2007
Universal Declaration of Human Rights:
-Article 8, Universal Declaration of Human Rights
– Right
to an effective remedy
39 Anyone who claims to be a victim of a human rights
violation arising from the development, deployment or
use by a public or private entity of an AI system should be provided with an effective remedy before a national
authority. Moreover, member st ates should provide
access to an effective remedy to those who suspect that they have been subjected to a measure that has been
solely or significantly informed by the output of an AI
system in a non -transparent manner and without their
knowledge.
Effective remedies should involve prompt and adequate
reparation and redress for any harm suffered by the
development, deployment or use of AI systems, and may
include measures under civil, administrative, or, where
appropriate, criminal law. NHRSs can be such a source of
redress, through rendering their own decisions in
accordance with their respective mandates.
Member states should provide individuals with the right not to be subject to a decision significantly affecting them
that is based on automated de cision -making without
meaningful human intervention. At the very least, an
individual should be able to obtain human intervention in
such automated decision -making and have their views
taken into consideration before such a decision is
implemented.
Member states must ensure that individuals have access
to information in the possession of a defendant or a third party that is relevant to substantiating their claim that
they are the victim of a human rights violation caused by
an AI system, including, where relevant, training and testing data, information on how the AI system was used,
meaningful and understandable information on how the AI
system reached a recommendation, decision or
International Covenant on Civil and Political Rights:
-Article 2, International Covenant on Civil and Political
Rights – Right to effective remedy
Council of Europe Resources:
-MSI-AUT Responsibility and AI: A study of the
implications of advanced digital technologies (including AI
systems) for the concept of responsibility within a human
rights framework (20 19)
• This report outlines what AI is and how task -
specific technologies work, threats and harms associated with advanced digital technologies, and a range of ‘responsibility models’ for the adverse
impacts of AI systems
• The main recommendations from this report are “effective and legitimate mechanisms that will
prevent and forestall human rights violations”,
policy choices regarding responsibility models for
AI systems, support of technical research involving
human rights protections and ‘algorithmic
auditin g’, and the presence of legitimate
governance mechanisms for the protection of
human rights in the digital age
40 prediction, and details of how the AI system’s outputs
were interpreted and a cted on. ”18
Democracy (1, 2, 4,
5, 7, 8) -The right to freedoms of expression, assembly , and
association (Art. 10 and 11 ECHR).
-The right to vote and to be elected, the right to
free and fair elections, and in particular universal,
equal and free suffrage, including equality of
opportunities and the freedom of voters to form an
opinion. In this regard, individuals should not be subjected to any deception or manipulation.
-The right to (diverse) information, free discourse, and access to plurality of ideas and perspectives.
-The right to good governance.
∼
“Freedom of expression: Member states should be mindful
of the obligation to create a diverse and pluralistic
information environment and the adverse impact AI-
driven content moderation and curation can have on the
exercise of the right to freedom of expression, access to information, and freedom of opinion. Member states are
also encouraged to consider taking appropriate measures
to regulate technology monopolies to prevent the adverse
effects of concentration of AI expertise and power on the
free flow of information.
Freedom of assembly and association: Special attention should be paid to the impact the use of AI systems in
content moderation can have on the freedom of assembly
and association, especially in contexts where these freedoms are difficult to exercise offli ne. The use of facial European Convention on Human Rights (ECHR) :
-Article 3 of Protocol No.1, European Convention on
Human Rights – Right to free elections
- Article 3 of Protocol No. 1 ,
Guide on Article 3 of Protocol
No. 1 to the European Convention of Human Rights –
Right to free elections
-Article 10, European Convention on Human Rights
– Freedom of expression
-Article 10, ‘Guide on Article 10 of the European
Convention on Human Rights’ , Council of Europe –
Freedom of expression
-Article 11, European Convention on Human Rights
– Freedom of assembly and association
-Article 11, ‘Guide on Article 11 of the European
Convention on Human Rights’ , Council of Europe –
Freedom of assembly and association
Charter of the Fundamental Rights of the EU:
-Article 11, Charter of the Fundamental Rights of the EU
–
Freedom of expression and information
-Article 11, Explanation on Article 11 — Freedom of
expression and information (Explanations Relating to th e
Charter of Fundamental Rights) in the Official Journal of
the European Union C 303/17 - 14.12.2007
18 Council of Europe Commissioner for Human Rights (2019) – “Unboxing AI: 10 steps to protect Human Rights”
41 recognition technology should be strictly regulated by
member states, including through legislation setting out
clear limitations for its use, and public transparency to protect the effective exercise of the right to freedom of assemb ly.”
19
∼
“Well-functioning democracies require a well -informed
citizenry, an open social and political discourse and absence of opaque voter influence.
This requires a well- informed citizenry. In information
societies citizens can only select to consume a small
amount of all the available information. Search engines,
social media feeds, recommender systems and many
news sites employ AI to determine which content is
created and shown to users (information personalization).
If done well, this could help citiz ens to better navigate the
flood of available information and improve their
democratic competences, for instance by allowing them to
access resources in other languages through translation
tools. However, if AI determines which information is
shown and con sumed, what issues are suppressed in the
flood of online information and which are virally amplified, this also brings risks of bias and unequal representation of opinions and voices.
AI-driven information personalisation is enabled by the
constant monitoring and profiling of every individual.
Driven by commercial or political motives this
technologically -enabled informational infrastructure of our
societies could amplify hyper -partisan content one is
likely to agree with and provide an unprecedented
-Article 12, Charter of the Fundamental Rights of the EU –
Freedom of assembly and association
-Article 12, Explanation on Article 12 — Freedom of
assembly and of association (Explanations Relating to the
Charter of Fundamental Rights) in the Official Journal of
the European Union C 303/17 - 14.12.2007
Universal Declaration of Human Rights:
-Article 19, Universal Declaration of Human Righ ts – Right
to freedom of opinion and expression
-Article 20, Universal Declaration of Human Rights
– Right
to freedom of peaceful assembly and association
International Covenant on Civil and Political Rights:
-Article 19, International Covenant on Civil and Political
Rights – Freedom of expression
-Article 21,
International Covenant on Civil and Political
Rights – Freedom of assembly
-Article 22, International Covenant on Civil and Political
Rights – Freedom of association
-Article 25, International Covenant on Civil and Political
Rights – Right to participate in public affairs, good
governance, and elections
Council of Europe Resources:
19 Council of Europe Commissioner for Human Rights (2019) – “Unboxing AI: 10 steps to protect Human Rights”
42 power ful tool for individualised influence. As a
consequence it may undermine the shared understanding,
mutual respect and social cohesion required for democracy to thrive. If personal AI predictions become
very powerful and effective, they may even threaten to
undermine the human agency and autonomy required for meaningful decisions by voters.
Thirdly, AI can undermine a fair electoral process. Political
campaigns or foreign actors can use (and have been
using) personalised advertisements to send different
messages to distinct voter groups without public
accountability in the agora . However, it should be noted
that it remains uncertain exactly how influential micro -
targeted advertisement is. AI can also be used to create
and spread misinformation and deep fakes , in the form of
text, pictures, audio or video. Since these are hard to identify by citizens, journalists, or public institutions,
misleading and manipulating the public becomes easier and the level of truthfulness and credibility of media and democratic discourse may deteriorate. ”
20
∼
“Democracy is government of the people by the people
for the people. It provides checks against the
concentration of power in the hands of a few and can
function properly only if based on sound institutions which enjoy confidence of an active, committed, and informed
citizenry and are able to provide for dynamic balance of interests of constituents. The crisis of modern
democracies touches almost all elements of democratic
order, including erosion of, and loss of confidence in
institutions, mis - and disin formation of the public, break -
up of cohesion and polarisation of society. Modern
-Venice Commission: Principles for a fundamental rights -
compliant use of digital technologies in electoral
processes (2020)
• Emphasized the need for a human rights -
compliant approach to eight principles involving the use of digital technologies in elections
-Parliamentary Assembly ,
Need for democratic
governance of artificial intelligence , Council of Europe
20 Muller, C. The Impact of Artificial Intelligence on Human R ights, Democracy and the Rule of Law . Council of Europe, CAHAI(2020)06- fin.
43 technologies, including AI -based systems may both help
resolve and aggravate this crisis .
The use of AI by humans is not neutral. It can be used to
strengthen government accountability and can produce
many benefits for democratic action, participation , and
pluralism, making democracy more direct and responsive. However, it can also be used to strengthen repressive
capabilities and for manipulation purposes. Indeed, the
rapid integration of AI technologies into modern
communication tools and social media platforms provides
unique opportunities for targeted, personalised and often
unnoticed influence on individuals and social groups, which different political actors may be tempte d to use to
their own benefit.
The experience of the last few years helps to identify
some key areas where the use of AI- based technology
can threaten to undermine and destabilise democracy, including, inter alia: (a)access to information
(misinformation, “echo chambers” and erosion of critical
thinking); (b)targeted manipulation of citizens;
(c)interference in electoral processes; (d) erosion of civil
rights; (e) shifts of financial and political power in the
data economy.
Moreover, the broad use by States of AI-based
technologies to control citizens such as automated
filtering of information amounting to censorship, and
mass surveillance using smartphones and closed-circuit
television coupled with vast integrated databases, may
lead to the erosion of polit ical freedoms and the
emergence of digital authoritarianism – a new social order
competing with democracy.”
21
21 Bergamini, D. ( Rapporteur) (2020) . Need for democratic governance of artificial intelligence . Council of Europe , Committee on Political
Affairs and Democracy , Parliamentary Assembly .
44
Rule of law (2) -The right to a fair trial and due process (Art. 6
ECHR). This should also include the possibility of
receiving insight into and challenging AI- informed
decisions in the context of law enforcement or justice, including the right to review of such decisions by a human. The essential requirements
that secure impacted individuals' access to the right
of a fair trial must also be met : equality of arms,
right to a natural judge established by law, the right to an independent and impartial tribunal, and respect for the adversarial process.
-The right to judicial independence and impartiality,
and the right to legal assistance.
-The right to an effective remedy (Art. 13 ECHR),
also in case s of unlawful harm or breach an
individual’s human rights in the context of AI systems.
∼
“The rule of law requires respect for principles such as
legality, transparency, accountability, legal certainty, non -
discrimination, equality and effective j udicial protection –
which can be at risk when certain decisions are delegated to AI systems. In addition, AI systems can also negatively
affect the process of law -making and the application of
the law by judges. Concerns have also been expressed on the po ssible negative effects of some AI applications used
in judicial systems or connected areas. Such use could
pose a challenge to the right to a fair trial enshrined in
Article 6 of the ECHR, of which components such as the
right to an independent and impartial judiciary, the right
to a lawyer or the principle of equality of arms in judicial European Convention on Human Rights (ECHR) :
-Article 6, European Convention on Human Rights – Right
to a fair trial
-Article 6,
‘Guide on Article 6 of the European Convention
on Human Rights. ’, Council of Europe – Right to a fair
trial
-Article 13, European Convention on Human Rights
– Right to an effective remedy
-Article 13, ‘Guide on Article 13 of the European
Convention on Human Rights.’ , Council of Europe – Right
to an effective remedy
Charter of the Fundamental Rights of the EU:
-Article 47, Charter of the Fundamental Rights of the EU –
Right to an effective remedy before a tribunal and to a
fair trial
-Article 47, Explanation on Article 47 — Right t o an
effective remedy and a fair trial (Explanations Relating to
the Charter of Fundamental Rights) in the Official Journal
of the European Union C 303/17 - 14.12.2007
Universal Declaration of Human Rights:
-Article 8, Universal Declaration of Human Rights
– Right
to an effective remedy
- Article 10, Universal Declaration of Human Rights
–
Right to a fair trial
45 proceedings are key elements that are also essential for
the effective implementation of the rule of law.
Moreover, companies face increased pressure, including
through regulation, to take decisions on the legality of
content that is shown on their platform. Since social
media platforms have become the new “public square”,
their own terms of service essentially set the rules of how
freedom of expression manifests itself online, but with
fewer safeguards than in more traditional public settings.
It is, however, essential that states can and do continue
to fulfil their responsibility for the protection of the rule of
law.”22
∼
“The fact that AI can perpetuate or amplify existing
biases, is particularly pertinent when used in law enforcement and the judiciary. In situations where physical freedom or personal security is at stake, such as
with predictive policing, recidivism risk determination and
sentencing, the right to l iberty and security combined
with the right to a fair trial are vulnerable . When an AI -
system is used for recidivism prediction or sentencing it
can have biased outcomes. When it is a black box, it
becomes impossible for legal professionals, such as
judges , lawyers and district attorneys to understand the
reasoning behind the outcomes of the system and thus
complicate the motivation and appeal of the judgement.
…Public institutions are held to a higher standard when it
comes to their behaviour towards indi viduals and society,
which is reflected in principles such as justification,
proportionality and equality. AI can increase the efficiency International Covenant on Civil and Political Rights:
-Article 2,
International Covenant on Civil and Political
Rights – Right to effective remedy
-Article 14,
International Covenant on Civil and Political
Rights – Right to fair tri al
Council of Europe Resources:
-European Commission for the Efficiency of Justice,
‘European ethical charter on the use of Artificial
Intelligence in judicial systems and their environment’ –
Council of Europe
22 CAHAI Feasibility Study , Council of Europe CAHAI (2020)23.
46 of institutions, yet on the other it can also erode the
procedural legitimacy of and trust in democratic
institutions and the authority of the law.
Courts, law enforcement and public administrations could
become more efficient, yet at the cost of being more
opaque and less human agency, autonomy and oversight.
Similarly, whereas previously courts were the only ones to determine what counts as illegal hate speech, today
mostly private AI systems determine whether speech is taken down by social media platforms . These AI systems
de facto compete for authority with judges and the law and In general, AI can contribute to devel oping judicial
systems that operate outside the boundaries and protections of the rule of law.
Automated online dispute resolutions provided by private
companies are governed by the terms of service rather
than the law that do not award consumers the same
rights and procedural protections in public courts.”
23
Related enumerations of human r ights , fundam ental freedoms, and elements of democracy and the rule of law:
1: Council of Europe Commissioner for Human Rights (2019) – “ Unboxing AI: 10 steps to protect Human Rights”
2. OECD AI Principles and Classification Framework
3. European Agency for Fundamental Rights (2020) – “Getting the Future Right: Artificial Intelligence and Fundamental Rights”
4. Aizenberg & van den Hoven (2020) – “Designing for human rights in AI.”
5. UNESCO (2021a/b) – Recommendations on the Ethics of Artificial Intelligence (two drafts, 1 -25, 26 -134)
6. Loi, et al. (2021) Algorithm Watch – “Automated Decision -Making Systems in the Public Sector: An Impact Assessment Tool for Public
Authorities ”
7. High -level Expert Group on Artificial Intelligence , European Commission (2020) – “The Assessment List for Trustworthy Artificial Intelligence
(ALTAI) for self -assessment”
8. Mantelero & Esposito (2021) – An evidence -based methodology for human rights impact assessment (HRIA) in the development of AI data-
intensive systems
23 Muller, C. The Impact of Artificial Intelligence on Human Rights, Democracy and the Rule of Law . Council of Europe , CAHAI(2020)06- fin.
47 Determining the salience of the rights, freedoms, and elements of
democracy and the rule of law to the AI system you are planning to build
After review ing Table 1—and carrying out further desk research (or consulting
experts) to clarify questions pertaining to any of the rights or freedoms about
which you are unsure —you are in a good position to begin considering the human
rights, fundamental freedoms, and the element s of democracy and the rule of law
that could be impacted by your system.
Go through each principle and priority listed in Table 1 (and the corresponding
rights and freedoms enumerated in the adjacent columns ) and ask:
• How, if at all, a re the rights , freedoms , or element s of democracy and the
rule of law that are associated with this principle/priority salient to the AI
system we are planning to build , given its intended purpose and the
contexts in which it will be used?
• How could the rights, freedom s, or element s of democracy and the rule of
law that are associated with this principle/priority be impacted by the AI
system we are planning to build ?
• If things go wrong in the implementation of our AI system or if it is used
out-of-the -scope of its intended purpose and function, what harms could be
done to impacted rights -holder s in relation to the rights, freedoms , or
elements of democracy and the rule of law that are associated with this
principle/priority?
Answering t hese questions as part of the PS Report will help you prepare for
conducting the PCRA, where you will further assess the extent to which the AI
system poses probable harms to specific rights and freedoms.
Activity 4: Map Governance Workflow
The last activity of the planning and scoping phase of your PS Report involves
mapping the governance workflow of your project. Governance is defined in the
recently-adopted standard ISO 370000 as ‘the system by which the whole
organization is directed, controlled, and held accountable to achieve its core purpose in the long run’.
24 The governance workflow is the foundation of an
anticipatory approach to your project that supports duty- bearers with ex-ante
processes of accountability and auditing throughout the project lifecycle.25
24 https://committee.iso.org/sites/tc309/home/pr ojects/ongoing/ongoing- 1.html
25 The algorithmic impact assessment literature describes the value and importance of
both ex -post and ex -ante assessments, including as they pertain to different features of
organisational governance. However, recent work tends to place greater importance on
the effectiveness of ex -ante assessments. See, for example, Moss, E., Watkins, E., Metcalf,
J., & Elish, M. C. (2020). Governing with Algorithmic Impact Assessments: Six Observations. SSRN Electronic Journal. Leslie specifically argues that effective governance
includes structures of accountability throughout the development and implementation of
48 Mapping a governance workflow, therefore, creates the baseline conditions for
holding an organization actively accountable for the fulfilment of its goals and
principles, including its duty to uphold human rights, democracy, and the rule of
law.
Mapping the governance workflow is not only a process of articulating guiding
principles and assigning responsibility; it is an opportunity to ensure a sufficiency
of perspectives and understandings that extend s beyond the organization to the
societies and material conditions potentially impacted by an AI system. For
example, as technical systems can significantly affect the social and economic
conditions of human lives , systems of governance should be inclusive of a
sufficiently diverse set of voices to increase the degree of critical reflection and
the mitigation of institutional and individual bias es that can lead to the
reproduction and amplification of unjust social structures .26
E
stablishing a governance workflow is essential where the proposed system
operates in an incomplete or uncertain regulatory environment , but an established
governance workflow remains important even within tightly regulated contexts as
a feature of collaborative governance in which structures of internal accountability
complement regulatory structures and vice-versa.27
Qu
estions:
• What roles are involved in each of the project ’s phases?
o Design
o Development
o Deployment
• What are the responsibilities associated with each of these roles?
• How are each of the duty -bearers involved in the project assigned
responsibility for the system’s potential impacts to h uman rights,
democracy, and the r ule of l aw?
o Does this distribution establish a continuous chain of human
accountability throughout the design, development, and deployment of this project? If so, how?
AI systems rather than retrospectively. Leslie, D. (2019). Understanding artificial
intelligence ethics and safety. The Alan Turing Institute. And McGregor et al. argues that
human rights doctrine provides guidance for the governance of AI systems and
organisations, including decision structures to guide the project lifecycle. McGregor, L.,
Murray, D., & Ng, V. (2019). International human rights law as a framework for algorithmic
accountability. International and Comparative Law Quarterly, 68(2), 309– 343.
26 For further discussion on the components of an effective structure of internal governance
for AI system development, including the requirements for critically assessing a system’s
social impact, see Raji et al. (2020). Closing the AI Accountability Gap: Defining an End -
to-End Fr amework for Internal Algorithmic Auditing. Proceedings of the 2020 conference
on fairness, accountability, and transparency. Similarly, Kemper & Kolkman argue for the
importance of a critical audience to ensure that algorithmic systems are truly accountabl e
beyond the firm. Kemper, J., & Kolkman, D. (2019). Transparent to whom? No algorithmic
accountability without a critical audience. Information, Communication & Society, 22(14), 2081– 2096.
27 Complementary governance is described by Kaminski (2018). Binar y Governance:
Lessons from the GDPR’s approach to algorithmic accountability. S. Cal. L. Rev., 92, 1529.
49 o Do all duty -bearers understand the responsibilit y that has been
assigned to them?
• What logging protocol will be established for documenting workflow
activities?
o Does this protocol enable external auditing and oversight of the
design, development, and deployment of this project? If so, how?
• Can responsible duty bearers be traced in the event that the human rights
or fundamental freedoms of rights -holders are harmed by this system? If
so, how do the project’s distribution of responsibilities and logging
protocol enable this?
• If you are procuring parts or elements of the system from third -party
vendors, suppliers, sub- contractors, or external developers , how are you
instituting appropriate governance controls that will establish end-to-end
accountability, traceability, and auditability ?
• If any data being used in the production of the AI system will be acquired
from a vendor or supplier , how are you instituting appropriate governance
controls that will establish end -to-end accountability, traceability, and
auditability across the data lifecycle?
50
Project Summary Report Template
Project Phases and Questions Responses
ACTIVITY 1: Outline Project, Use Context, Domain, and Data
PROJECT
How would you describe your organisation and
what sort of services or products do you typically provide (beyond the system under consideration)?
What AI system is being built and what type of
product or service will it offer?
What benefits will the system bring to its users and customers, and will these benefits be widely accessible?
Which organisation(s) —yours, other suppliers, or
other providers —are responsible for building this
AI system?
Which parts or elements of the AI system, if any, will be procured from third -party vendors,
suppliers, sub -contractors, or external
developers?
51 Which algorithms, techniques, and model types
will be used in the AI system? (Provide links to technical papers where appropriate)
In a scenario where your project optimally scales, how many people will it impact, for how long, and in what geographic range (local, national, global)? (Describe your rationale)
USE CONTEXT
What is the purpose of this AI system and in
which contexts will it be used? (Briefly describe a
use-case that illustrates primary intended use)
Is the AI system’s processing output to be used in a fully automated way or will there be some degree of human control, oversight, or input before use? (Describe)
Will the AI system evolve or learn continuously in its use context, or will it be static?
To what degree will the use of the AI system be time-critical, or will users be able to evaluate
outputs comfortably over time?
What sort of out-of -scope uses could users
attempt to apply the AI system, and what dangers may arise from this?
DOMAIN
52 In what domain will this AI system operate?
Which, if any, domain experts have been or will
be consulted in designing and developing the AI system?
DATA
What datasets are being used to build this AI system?
Will any data being used in the production of the
AI system be acquired from a vendor or supplier? (Describe)
Will the data being used in the production of the AI system be collected for that purpose, or will it be re-purposed from existing datasets? (Describe)
What quality assurance and bias mitigation
processes do you have in place for the data
lifecycle —for both acquired and collected data?
ACTIVITY 2: Identify Stakeholders
Who are the rights -holders, duty bearers and
other relevant parties that may be impacted by,
or may impact, the project?
Do any of these rights -holders possess sensitive
or protected characteristics that could increase their vulnerability to abuse or discrimination, or
for reason of which they may require
53 additional protection or a ssistance with respect to
the impacts of the project? If so, what
characteristics?
Could the outcomes of this project present significant concerns to specific groups of rights -
holders given vulnerabilities caused or precipitated by their distinct circums tances?
If so, what vulnerability characteristics expose them to being jeopardized by project outcomes?
What affected group or groups of rights -holders
could these protected or contextual vulnerability characteristics or qualities represent?
How could the distribution of power relations (i.e.,
the relative advantages and disadvantages) between rights -holders and duty -bearers affect
the way the benefits and risks of the project are allocated?
ACTIVITY 3: Scope Impacts to Human Rights, Democracy, and Rule of Law
Instructions: Go through each principle and priority listed in Table 1 (and the corresponding rights and freedoms enumerated
in the adjacent columns) and ask and respond to the following questions:
1. How, if at all, are salient to the AI system we are planning to build, given its intended purpose and the contexts in which
it will be used?
2. How could the rights, freedoms, or elements of democracy and the rule of law that are associated with this principle/priority
be impacted by the AI system we are planning to build?
3. If things go wrong in the implementation of our AI system or if it is used out -of-the-scope of its intended purpose and
function, what harms could be done to impacted persons in relation to the rights, freedoms, or elements of democracy and the rule of law that are associated with this principle/priority?
54
Respect for and Protection of Human
Dignity 1.
2.
3.
Protection of Human Freedom and
Autonomy 1.
2.
3.
55 Prevention of Harm and Protection of the
Right to Life and Physical, Psychological,
and Moral Integrity 1.
2.
3.
Non-discrimination, Fairness, and
Equality 1.
2.
3.
Data Protection and the Right to Respect
of Private and Family Life 1.
56
2.
3.
Economic and Social Rights 1.
2.
3.
Accountability and Effective Remedy 1.
2.
57
3.
Democracy 1.
2.
3.
Rule of Law 1.
2.
58 3.
ACTIVITY 4: Map Governance Workflow
What roles are involved in each of the project phases?
Design Phase:
Development Phase:
Deployment Phase:
What are the responsibilities of each of these roles?
Design Roles:
Development Roles:
Deployment Roles:
59 How are each of these duty bearers assigned
responsibility for the system’s potential impacts to
Human Rights, Democracy, and the Rule of Law?
o Does this distribution establish a continuous
chain of human accountability throughout
the design, development, and deployment of this project ? If so, how?
What logging protocol is established for documenting workflow activities?
o Does this protocol enable auditing and
oversight of the design, development, and
deployment of this project? If so, how?
Can responsible duty bearers be traced in the event that the human rights or fundamental freedoms of rights -holders are harmed by this
system? If so, how do the project’s distribution of
responsibilities and logging protocol enable this?
If you are procuring parts or elements of the
system from third -party vendors, suppliers, sub -
contractors, or external developers, how are you instituting appropriate governance controls that will establish end -to-end accountability,
traceability, and audi tability?
60 If any data being used in the production of the AI
system will be acquired from a vendor or supplier, how are you instituting appropriate governance controls that will establish end -to-end
accountability, traceability, and auditability across
the data lifecycle?
61 PCRA: Document Overview
Introduction
After you have done your preparatory desk research and completed the initial
iteration of your PS Report , you will be ready to carry out your Preliminary
Context-Based Risk A nalysis (PCRA ). The PCRA serves several important
purposes:
1. It operates as an indicator of the pre-emptive actions needed to ensure
non-harm for projects that pose unacceptable risk to impacted rights -
holders. In this way, it f acilitates the operationalization of the risk
assessment component of the precautionary principle .28
2. I
t helps you identify the risk factors that should be considered as you
endeavour to reduce and mitigate the potential adverse impacts of your
prospective project on the fundamental freedoms and human rights of
affected rights-holders.
3. It directs you to specific actions that you then need to take in your impact
assessment process ( HUDERIA ) so that sufficient consideration can be
given to each of the risk factors detected and to the salient rights and freedoms that could be impacted by the prospective AI system. It also
directs you to specific goals, properties, and areas that you should focus
on in your subsequent risk management and assurance processes
(
HUDERAC ) to reduce and mitigate associated risks.
4. Finally, it helps you establish a proportionate approach to your impact
assessment and assurance practices, and to the level of stakeholder engagement that you will include across your project lifecycle. The
proportionality principle counsels that efforts of impact assessment, risk
management, and stakeholder involvement should be proportionate to the likelihood and extent of risks for adverse impacts on human rights,
democracy, and the rule of law associated with an AI system. The report
that is generated when you complete your PCRA will help you understand,
in a provisional way, the risk level of your system, so that you can take
appropriate risk management actions and engage stakeholders in a
proportionate way.
28 The European Commission guidance on the precautionary principles states that it
“should be considered within a structured approach to the analysis of risk which comprises
three elements: risk assessment, risk management, risk communication. The PCRA supports the risk assessment aspect, whereas —as is discussed below —the Human Rights,
Democracy, and the Rule of Law Assurance Case (HUDERAC) primarily supports the risk management and risk communication elements. See: EU (2000). Communication from the
commission on the precautionary principle, Communication. Brussels: Commission of the
European Communities ; Bourguignon, D. (2016) European Parliament, and Di rectorate -
General for Parliamentary Research Services, The precautionary principle: definitions, applications and governance: in -depth analysis. Luxembourg: Publications Office .
62 PCRA Section One: Identifying Risk Factors
In the first section of the PCRA, you will be asked to answer a series of question s
that are intended to help you identify both modifiable and circumstantial risk
factors to consider as you embark on the impact assessment and risk
management and assurance elements of your HUDERAF. The term “risk factor” is
here taken to mean the characteristics of an AI innovation context at the technical,
sociot echnical, historical, legal, economic, political, or practical levels that precede
and are associated with a higher likelihood of outcomes that negatively impact
human rights, democracy, and the rule of law. 29 These factors are not necessarily
to be treate d as causes of adverse impacts but rather as conditions that are
correlated with an increased chance of harm and that need to be anticipated and
considered in risk mitigation efforts.
T
hose risk factors that emerge externally from the technical, sociotech nical
historical, legal, economic, or political environments within which the design, development, and deployment of AI systems occur and that are thereby less
controllable are called “circumstantial risk factors.” Those risk factors that emerge
internally from the actual practices of producing and using AI technologies, and
that are thus more controllable, are called “modifiable risk factors”.
To remain as “algorithm neutral”
30 as possible, the PCRA focuses on the spectrum
of risk factors that surround the practical contexts of designing, developing, and
deploying AI systems rather than on the technical details or underlying specifications of AI applications. Building on OECD’s AI classification scheme
31 and
related work, the questions in the first section focus on risk factors arising from both the use contexts and the design, development, and deployment contexts of
the AI project lifecycle. In addition to this, the final fifteen questions focus on risk
factors that arise in the immediate context human rights and fundamental
freedoms. A summary of the question headings and sub- headings shows how this
spectrum of concerns stretches from use contexts and design, development, and
deployment contexts to rights and freedoms contexts:
29 The concept of “risk factors,” as used here, was first fully developed in medical and
public health contexts but has since seen wide application in public policy and the social
sciences. For background discussions, see Mercy, J.A., and O’Carroll, P.W. 1998. New
directions in violence prevention: The public health arena. Violence a nd Victims 3(4):285–
301.; Mrazek, P.J., and Haggerty, R.J., eds. 1994. Reducing Risks for Mental Disorders:
Frontiers for Preventative Intervention Research. Washington, DC: National Academy Press.; Shader, M. (2001). Risk factors for delinquency: An overview. Washington, DC:
US Department of Justice, Office of Justice Programs, Office of Juvenile Justice and Delinquency Prevention. ; Sorooshian, S., & Mun, S. Y. (2020). Literature review: Critical
risk factors affecting information -technology projects. Calitatea , 21(175), 157 -161.
30 Following both the CAHAI Feasibility Study , Council of Europe CAHAI (2020)23 (p. 4)
and CAHAI -PDG (2021)05rev.
31 OECD (2021). Framework to Classify AI Systems . Public Consultation.
63 Use context (Questions 1 -20)
• Sector or domain in which the system is being built
• Existing law and regulatory environment of the sector or domain
• Impact-level of the system
• Prohibited systems and uses
• Scope of deployment (breadth and temporality)
• Technological maturity
• Existing system (human or technological) that the application is replacing
• Bias and discrimination in sector or domain context
• Environmental context
• Cybersecurity context
Data Lifecycle Context (Questions 21 -40)
• Data quality, integrity, and provenance
• Means and methods o f data collection
• Data types
• Dataset linkage
• Data labelling and annotating practices
Goal Setting and Problem Formulation Context (Questions 41 -42)
• Decision to design
• Definition of outcome
Model Design & Development Context (Questions 43 -46)
• AI model characteristics
• Pre-processing and feature engineering
• Model selection
Model Output & Implementation Context (Questions 47 -52)
• Model inference
• Model verification and validation
• Model accuracy and performance metrics
System -User Interface and Human Factors Context (Questions 53 -55)
• Implementers or users of the system
• Level of automation/level of human involvement and choice
Rights & Freedoms Context (Questions 56 -71)
• Respect for and protection of human dignity
• Protection of human freedom and autonomy
• Non-discrimination, fairness, and equality
• Data protection and privacy context
• Accountability and access to justice
• Social and economic rights
64 Depending on the form that the section one questions take, certain responses to
each question will trigger one of three classes of risk factors, prohibitive, major,
or moderate:
Prohibitive risk factor
Prohibitive risk factors indicate the presence of
determinants of potential harms that trigger the
precautionary principle and precipitate pre-
emptive measures to prevent adverse impacts on the human rights and fundamental freedoms of affected persons, democracy, and the rule of law. Pre -emptive measures are appropriate
where the severity, scale, and irremediableness
of the potential harm outweigh levels of risk
reduction and mitigation.
Major risk factor Major risk factors indicate the presence of determinants of potential harms that are directly or indirectly associated with significant
risks of adverse impacts on the human rights
and fundamental freedoms of affected persons, democracy, and the rule of law but that provide opportunities for risk reduction and mitigation that make the risk s posed tolerable.
Moderate risk factor Moderate risk factors indicate the presence of determinants of potential harms that are directly or indirectly associated with risks of adverse impacts on the human rights and
fundamental freedoms of affected pers ons,
democracy, and the rule of law but that provide
opportunities for risk reduction and mitigation that make the risks posed broadly acceptable.
Upon completion of the entire PCRA, a summary report will automatically be
generated (See PCRA Output Example
) which compiles and organizes the answers
that flagged up risk factors into the three risk factor categories. The report provides prompts for each of these answers that direct you to specific actions to
take in your impact assessment process and to specific goals, properties, and
areas that you should focus on in your subsequent risk management and
assurance processes to reduce and mitigate associated risks. For example, if you answered “yes” to Quest ion 2, “ Does the sector or domain in which the AI system
will operate include vulnerable groups (or groups with protected characteristics) who may be significantly or disproportionately impacted by the design and use of the system? ”, the following message will be displayed in the summary report:
65
Because you answered Yes to Question 2:
Major circumstantial risk factor
Where the sector or domain in which the AI system will operate includes vulnerable
groups (or groups with protected characteristics) who may be significantly or
disproportionately impacted by the design and use of the system, this presents a
major circumst antial risk factor for adverse impacts on the human rights and
fundamental freedom of persons
• Actions to take for your HUDERIA: Make sure to focus upon
considerations surrounding the potential impacts of your AI system on
fairness, non -discrimination, equality, diversity, and inclusiveness
• Goals, properties, and areas to focus on in your HUDERAC : Fairness
(non-discrimination, equality, bias mitigation, diversity, and inclusiveness)
The enumeration of risk factors and corresponding practical recommendations that
are presented in the PCRA summary report is intended to support a deliberate,
anticipatory, and reflective approach to your impact assessment, risk
management, and assurance practices. It is meant to bolster safe and responsible
AI innovation processes that respect human rights, democracy, and the rule of
law.
PCRA Section Two: Risks of A dverse Impacts on the H uman Rights
and Fundamental Freedom s of Persons, Democracy, and the R ule of
Law
In the second section of your PCRA, you will be asked to provide a preliminary
estimation of the likelihood of the potential adverse impacts that your system
could have on human rights, democracy , and the rule of law. This will involve
drawing on the stakeholder analysis and the determination of salient rights and
freedoms you carried out as part of your initial P S reporting to think through the
relevance and likelihood of the harms indicated in the prompts. Each prompt will
be formed as a statement with likelihood/probability options below it.
These forty prompt s encompass the categories of principles and priorities included
in Table 1 of the Project Planning and Scoping Summary section of the PCRA. The
range of potential adverse impacts covered in section two of the PCRA reflect the
concer ns raised in the CAHAI’s Feasibility Study and in the multi -stakeholder
consultation that followed the plenary’s adoption of this text in December 2020.
66 It also reflects adjacent work done by the Council of Europe, UNESCO, the
European Commission, OECD, and the European Agency for Fundamental Rights .32
T
he likelihood options that are provided for each of the section two prompts should
be understood as follows33:
Likelihood level Description Score
Unlikely
The risk of adverse impact is low, improbable, or
highly improbable
1
Possible The risk of adverse impact is moderate; the harm
is possible and may occur
2
Likely The risk of adverse impact is high; it is probable
that the harm will occur
3
Very Likely The risk of adverse impact is very high; it is highly
probable that the harm will occur
4
Not Applicable It can be claimed with certainty that the risk of
adverse impact indicated in the prompt does not
apply to the AI system. 0
The likelihood score that is indicated in the far -right column is applied in the
summary report as part of the calculation of a risk index number (RIN) that will
be used to generate preliminary recommendations for proportionate risk
management and assurance practices and stakeholder engagement given the risk
level of the prospective AI system. After you have completed all the PCRA
32 For other relevant sources, see: Counci l of Europe Commissioner for Human Rights
(2019) – “Unboxing AI: 10 steps to protect Human Rights” ; Council of Europe,
“Recommendation CM/Rec (2020) of the Committee of Ministers to member States on the
human rights impacts of algorithmic systems”; Muller, C. The Impact of Artificial
Intelligence on Human Rights, Democracy and the Rule of Law. Council of Europe, CAHAI(2020)06- fin; Bergamini, D. ( Rapporteur) (2020). Need for democratic governance
of artificial intelligence . Council of Europe, Committee on Po litical Affairs and Democracy ,
Parliamentary Assembly; UNESCO (2021a/b) – Recommendations on the Ethics of Artificial
Intelligence (two drafts, 1 -25, 26 -134); High-level Expert Group on Artificial Intelligence,
European Commission (2020) – “The Assessment List for Trustworthy Artificial Intelligence
(ALTAI) for self -assessment” ; European Agency for Fundamental Rights (2020) – “Getting
the Future Right: Artificial Intelligence and Fundamental Rights”.
33 Likelihood level descriptions are drawn partly from the probability table presented in
(Mantelero & Esposito, 2021, p. 19), but whereas the latter includes the category of
exposure in the description and calculation of likelihood levels, the PCRA counts o nly the
probability. In the PCRA, the degree of human exposure is covered in the calculation of risk level by the variable of “number of rights -holders affected,” the information for which
is drawn from answers given to Question 10 in section 1 of the PCRA . Mantelero, A., and
Esposito, S. (2021). An evidence -based methodology for human rights impact assessment
(HRIA) in the development of AI data -intensive systems. Computer Law & Security Review.
67 questions, the likelihood scores from the answers you have given to the section 2
questions will be compiled in a table in the summary report that produces an
estimated risk level for each of potential adverse impacts that have received a likelihood score of 1 or above:
(Table taken from PCRA Output Example ). Note that numbers in the “Gravity Potential” and “Affected Rights -
Holders Affected” columns reflect ordinal scales rather than real values and will be more fully described below )
The following formula is used to calculate the estimated RIN for potential harm
event 𝑖𝑖:
𝑅𝑅
𝑅𝑅𝑅𝑅𝑖𝑖=𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑖𝑖𝑆𝑆 𝑆𝑆 𝑃𝑃𝑃𝑃𝑆𝑆 𝑆𝑆𝑃𝑃𝑆𝑆 𝑖𝑖𝑃𝑃 𝑃𝑃𝑖𝑖+𝐿𝐿𝑖𝑖𝐿𝐿𝑆𝑆𝑃𝑃𝑖𝑖ℎ𝑃𝑃𝑃𝑃𝑜𝑜𝑖𝑖34
I
n this equation, the risk index number of a potential harm event 𝑖𝑖 is determined
by an additive combination of the severity of the potential harm and its estimated
likelihood.35 This calculation works from the common risk analysis formulation of
34 Note that at this preliminary scoping phase, where risk analysis is helping to determine
the proportionality of stakeholder engagement and risk manage ment strategies, the
variable of probability/likelihood figures in to impact estimation in a way that will differ
from the approach taken in the HUDERIA phase of impact assessment, where severity
assessment and impact management prioritization does n ot include probability
considerations (following UNGP guidance) or incorporates them only as a secondary
concern after severity assessment determines impact mitigation measures . As the DHRI
notes, at the impact assessment stage, “ severity does not include consideration of
probability; instead, it prioritises a focus on the human rights consequences of the impact.
This is not to say that consideration of probability is irrelevant. Consideration of probability
will necessarily be involved in initial issues scoping. ” See See Figure 1 in Götzmann, N.,
Bansal, T., Wrzoncki, E., Veiberg, C. B., Tedaldi, J., & Høvsgaard, R. (2020). Human rights
impact assessment guidance and toolbox . The Danish Institute for Human Rights. p. 90.
35 The RIN calculation uses additive rather than multiplicative combination to reflect the
approximately logarithmic relation of categories and the nature of the potential harms, following (Levine 2012 ), (Duijm 2015), and (Rausand and Haugen 2020). See: Levine, E.
68 risk level as t he product of severity (or impact/consequence) and likelihood (or
probability/ occurrence) , 36 but, in this case , severity is significantly upweighted
in accordance with its priority a s a “predominant factor” in human rights risk .37 In
the RIN formula used in the table, the Severity Potential number is determined by
the addition of the Gravity Potential of the harm and the Number of Affected
Rights -Holders38:
𝑆𝑆𝑆𝑆𝑆𝑆
𝑆𝑆𝑆𝑆𝑖𝑖𝑆𝑆
𝑆𝑆 𝑃𝑃𝑃𝑃𝑆𝑆
𝑆𝑆𝑃𝑃
𝑆𝑆𝑖𝑖
𝑃𝑃𝑃𝑃𝑖𝑖=𝐺𝐺𝑆𝑆 𝑃𝑃𝑆𝑆𝑖𝑖𝑆𝑆𝑆𝑆 𝑃𝑃𝑃𝑃 𝑆𝑆𝑆𝑆𝑃𝑃𝑆𝑆 𝑖𝑖𝑃𝑃 𝑃𝑃𝑖𝑖+𝑅𝑅𝑁𝑁𝑁𝑁𝑁𝑁𝑆𝑆 𝑆𝑆 𝑃𝑃𝑜𝑜 𝑜𝑜𝑜𝑜𝑜𝑜𝑆𝑆 𝑜𝑜𝑆𝑆 𝑆𝑆 𝑜𝑜 𝑅𝑅𝑖𝑖 𝑅𝑅ℎ𝑆𝑆𝑡𝑡 𝑡𝑡 𝑃𝑃𝑃𝑃𝑜𝑜𝑆𝑆𝑆𝑆 𝑡𝑡𝑖𝑖
H
ere, Gravity Potential is defined as the maximal or “worst -case scenario” degree
of the gravity or seriousness of a potential harm’s expected consequence, where
gravity is categor ized according to the extent of a potential harm’s damage to
human dignity or to the integrity of individual, collective, and planetary life .39 Here
is a table that maps out different levels of impact gravity (these gradients are
S. (2012). Improving risk matrices: the advantages of logarithmically scaled axes. Journal
of Risk Research , 15(2), 209-222; Duijm, N. J. (2015). Recommendations on the use and
design of risk matrices. Safety science , 76, 21-31. Rausand, M., & Haugen, S. (2020). Risk
Assessment: Theory, Methods, and Applications. John Wiley & Sons.
36 The quantification of risk is often represented by the general expression, �𝑃𝑃(𝐶𝐶𝑖𝑖)𝑝𝑝𝑖𝑖𝑛𝑛
𝑖𝑖=1,
where consequence 𝐶𝐶𝑖𝑖 (quantified in loss function 𝑃𝑃(𝐶𝐶𝑖𝑖)) is multiplied by the probability of
the event 𝑝𝑝𝑖𝑖. However, where loss functions and probabilities are less quantifiable, ordinal
approaches to determining severity and probabilit y levels lead to the use of semi-
quantitative methods. It should be noted, as well, that human rights risks cannot be
reduced to any sort of quantified cost-benefit analysis, so conventional , quantitative
approach to risk assessment are not applicable. For a discussion of the importance of
combining quantitative and qualitative approaches in human rights impact assessments,
see Nordic Trust Fund (2013). Study on Human Rights Impact Assessments: A Review of
the Literature, Differences with other Forms of Assessments and Relevance for
Development .
37 UN Human Rights Office of the High Commissioner (UNHROHC) (2012) “The Corporate
Responsibility to Protect Human Rights: An Interpretive Guide”. p. 7.
38 These two dimensions, gravity and number of affected rights -holders, line up with what
the UN Human Rights Office of the High Commissioner term “scale” ( how grave or serious
the impact is) and “ scope” (numbers of people impacted). Scale and scope are seen by
the latter as constitutive elements of severity. N.B, th e other crucial variable or
“remediability ” will be considered in the HUDERIA phase, where potentially affected
stakeholders are included in the assessment of human rights impacts and these impacts
are properly contextually specified . See UN Human Rights Office of the High Commissioner
(2020) “Identifying and Assessing Human Rights Risks related to End -Use”.
39 It must be acknowledge d that there is no predetermined ordering of human rights that
indicates a prioritization of one over another, and that severity is a relative and
contextually situated concept rather than an absolute one . The use of a notion of “gravity
potential” here is not intended to imply otherwise, but rather to provide a provisional way
to frame the human rights risks of AI through a precaution -centered approach so that
proportionate methods of stakeholder engagement and risk management can be initiated.
For guidance on characterizing severity, see: the UN Human Rights Office of the High
Commissioner (UNHROHC) (2012) “The Corporate R esponsibility to Protect Human Rights:
An Interpretive Guide”. p p. 83-84.
69 1 intended to orient you as y ou think through the harm potential of the system as
it relates to the adverse impact on the right or freedom under consideration):
Gravity Level Descripti on Score
Catastrophic
Harm Catastrophic prejudices or impairments in the
exercise of fundamental rights and freedoms that
lead to the d eprivation of the right to life;
irreversible i njury to physical, psychological, or
moral integrity; deprivation of the welfare of
entire groups or communities; catastrophic harm
to democratic society, the rule of law, or to the
preconditions of democratic ways of life and just
legal order ; deprivation of individual freedom and
of the right to liberty and security; harm to the
biosphere. 4
Critical Harm
Critical prejudices or impairments in the exercise
of fundamental rights and freedoms that lead to
the significant and enduring degradation of
human dignity, autonomy, physical,
psychological, or moral integrity, or the integrity
of communal life, democratic society, or just legal
order
3
Serious H arm
Serious prejudices or impairments in the exercise
of fundamental rights and freedoms that lead to
the temporary degradation of human dignity,
autonomy, physical, psychological, or moral
integrity , or the integrity of communal life,
democratic society, or just legal order or that
harm to the information and communication
environment
2
Moderate or
Minor Harm Moderate or minor prejudices or impairments in
the exercise of fundamental rights and freedoms that do not lead to any significant , enduring, or
temporary
degradation of human dignity,
autonomy, physical, psychological, or moral
integrity , or the integrity of communal life,
democratic society, or just legal order 1
For the purposes of a preliminary estimation of risk level, Gravity Potential should
be treated as a heuristic that is based on the contextual assumption of a maximal
degree of potential injury or damage for any given adverse impact —a heuristic
that priori tizes the protection of rights -holders, communities, and the biosphere
from potential irreversible, irreparable, catastrophic, critical, or serious harm. This
follows the logic of the precautionary principle in working from the assumption that, in situatio ns of uncertainty, where there are reasonable grounds for concern,
70 levels of protection should prioritize human and environmental health and safety
to facilitate sufficient preventative and anticipatory action.40 It is important to
note, however, that t he Gravity Potential heuristic is used only for the purpose of
determining preliminary proportionality recommendations for risk management
and assurance practices and stakeholder engagement —and is properly re- visited
and re -evaluated in a contextually sensitiv e way (and with stakeholder input) in
the S
takeholder Engagement Process and in the HUDERIA.41
The Affected Rights -Holders Number is derived from the answer to question 10
from the first section of the PCRA, which asks you to estimate the number of
affected rights- holders who would be directly or indirectly a ffected by the AI
system over time in a scenario where your project optimally scales . In the
calculation of the RIN, the four ranks of Affected Rights -Holders Numbers are
assigned values that increase in intervals of .5, starting at .5:
Number of Affected Rights -Holders Score
Between 1-10,000 Rights -Holders
.5
Between 10,001 -100,000 Rights -Holders
1
Between 100,001 -1,000,000 Rights -Holders
1.5
Over 1,000,000 Rights -Holders
2
This calibration reflects an up -weighting of the Gravity Potential and Likelihood
scores in the RIN formula as they relate to the Affected Rights -Holders score, in
keeping with their principal importance in the determination of the risk level.
Whereas Gravity Potential and Likelihood are essential elements of the risk
calculus and hence double -weigh ted, the Number of Affected Rights -Holders is
40 Following positions taken in the Rio Declaration (1992), the Maastricht Treaty (1992),
the Wingspread Statement (1998), and Commission Communication (2000), but stressing
the importance of “moderate precaution” in accordance with the Commission Communicat ion (2000). For discussions, see: Garnett, K., & Parsons, D. J. (2017). Multi ‐
case review of the application of the precautionary principle in European Union law and case law. Risk Analysis, 37(3), 502- 516; Lofstedt, R. E. (2003). The precautionary
princip le: Risk, regulation and politics. Process Safety and Environmental Protection ,
81(1), 36 -43; Zander, J. (2010). The application of the precautionary principle in practice:
comparative dimensions. Cambridge University Press; Sachs, N. M. (2011). Rescuing t he
strong precautionary principle from its critics. U. Ill. L. Rev ., 1285.
41 Another significant reason for using the precautionary principle to inform this heuristic
is that it addresses, in part, problems surrounding the decision -making and “knowledge
aspects” of risk characterization under conditions of uncertainty as emphasized in (Aven
2017) and (Krisper 2021). See: Aven, T. (2017). Improving risk characterisations in
practical situations by highlighting knowledge aspects, with applications to risk mat rices.
Reliability Engineering & System Safety , 167, 42-48; Krisper, M. (2021). Problems with
Risk Matrices Using Ordinal Scales. arXiv preprint arXiv:2103.05440.
71 added to the Severity Potential element of the calculation to provide a crucial but
supplementary weight that incorporates information about the scope of the
potential harm. The addition of the Affected Rights -Holders score to the Gravity
Potential score, however, also means an overall upweighting of Severity Potential
variable in keeping with its priority as a “predominant factor” in human rights risk .
The combination of Severity Potential (Gravity Potenti al + Number of Rights -
Holders Affected) and Likelihood yields RINs that range from 2.5 to 10. Risk levels
are established as intervals within this range42:
Low Risk Moderate Risk High Risk Very High Risk
𝑅𝑅𝑅𝑅𝑅𝑅≤5 5.5≤𝑅𝑅𝑅𝑅𝑅𝑅≤6 6.5≤𝑅𝑅𝑅𝑅𝑅𝑅≤7.5 𝑅𝑅𝑅𝑅𝑅𝑅≥8
The distribution of RINs across these risk levels is reflected in the matrix below.43
At the extremes, a minor or moderate harm from an AI system affecting between
1-10,000 rights-holders that is deemed unlikely yields a “low risk” output, whereas
a catastrophic harm from an AI system affecting over 1,000,000 rights holders
that is deemed very likely yields a “very high risk” output. Risk levels for borderline
cases have been calibrated to accord with a precautionary approach that prioritizes
sufficient levels of protection in risk reduction, mitigation, and management where substantial harm may occur. So, for instance, a serious harm from an AI system
affecting 1 – 10,000 rights holders th at is deemed likely yields a “high risk” output,
and a catastrophic harm from the same system that is deemed possible also
generates a “high risk” output.
42 To explore the way this formula for risk level determination works, see the “Harms Table
and RIN Distribution” Excel file that accompanies this proposal.
43 The semi-quantitative methods employed here have taken into account concerns around
consistency, resolution, input-output ambiguity , and aggregation long recognized in the
critical study o f risk matrices by (Cox 2008), (Smith et al 2009), (Levine 2012), (Duijm
2015), (Baybutt 2017), and ( Krisper 2021). The distribution of RINs has been stress-
tested and calibrated in accordance with the precautionary principle , but the provisional
character and intermediate place of the results in a wider, dialogical and right -holder
involving process must be stressed as a mitigating factor to possible coarseness in outputs .
See: Anthony (Tony) Cox Jr, L. (2008). What's wrong with ri sk matrices?. Risk Analysis:
An International Journal , 28(2), 497 -512; Smith, E.D., Siefert, W.T., Drain, D., 2009. Risk
matrix input data biases. Syst. Eng. 12 (4), 344 –360. Levine, E. S. (2012). Improving risk
matrices: the advantages of logarithmically scaled axes. Journal of Risk Research , 15(2),
209-222; Duijm, N. J. (2015). Recommendations on the use and design of risk matrices.
Safety science , 76, 21 -31; Baybutt, P. (2018). Guidelines for designing risk matrices.
Process safety progress , 37(1), 49-55. Krisper, M. (2021). Problems with Risk Matrices
Using Ordinal Scales. arXiv preprint arXiv:2103.05440.
72
In addition to providing a table which compiles the risks calculations for all the
adverse im pacts that have been identified in the PCRA (shown above ), the PCRA
Output Report (as demonstrated in the PCRA Output Example ) will direct you to
take specific risk management actions for each of these potential impacts :
For any 𝑹𝑹𝑹𝑹𝑹𝑹≥𝟖𝟖 (very
high) Further examination should be undertaken, through
expert and stakeholder consultation, as to whether
sufficient risk reduction is possible to make very high
risks of harm tolerable or whether this is not feasible,
and the risks are unacceptable.
For any 𝟔𝟔.𝟓𝟓≤𝑹𝑹𝑹𝑹𝑹𝑹≤
𝟕𝟕.𝟓𝟓 (high) Further examination should be undertaken, through
expert and stakeholder consultation, as to whether
the risks of harm indicated to be high are tolerable and can be appropriately reduced. Where likeli hood
= 1 (unlikely) and RPN indicates high risk,
confirmation of low risk probability should also be
undertaken through expert and stakeholder
consultation.
For any 𝟓𝟓.𝟓𝟓≤𝑹𝑹𝑹𝑹𝑹𝑹≤𝟔𝟔
(moderate) Further examination should be undertaken, throu gh
expert and stakeholder consultation, as to whether
the risks of harm indicated to be moderate are
broadly accepted as such, are tolerable, and can be
appropriately reduced. Where likeli hood = 1
(unlikely) and RPN indicates moderate risk,
confirmation o f low risk probability should also be
undertaken through expert and stakeholder
consultation.
For any 𝑹𝑹𝑹𝑹𝑹𝑹≤𝟓𝟓 (low) Further examination should be undertaken, through
expert and stakeholder consultation, as to whether
the risks of harm indicated to be low are broadly
accepted as such.
The summary report will also produce more general recommendations for
proportionate risk management practices and stakeholder engagement .
These recommendations are based on a prioritization of a degree of risk
management and human right diligence that is proportionate to the highest level
of risk identified across an AI system’s potential impacts. Additional
recommendations for establishing sufficient public transparency and accountability —as well as for ensuring adequate human rights diligence—are
included for large-scale projects that could have macro -scale, long -term impacts
on individu als and society. Here is table that includes all of the general
recommendations:
73
Where the highest
𝑹𝑹𝑹𝑹𝑹𝑹≥ 𝟖𝟖 (very high) Full diligence in risk management and assurance
practices across all risks is recommended to prior itize
risk reduction and mitigatio n. This should be
informed by the risk factors identified in section 1 of
the PCRA and by the HUDERIA impact assessment
process; Comprehensive stakeholder engagement
across the project lifecycle (e.g. partnering with or
empowering rights -holders as determi ned by the
Stakeholder Engagement Process) is also
recommended.
Where the highest
𝟔𝟔.𝟓𝟓 ≤𝑹𝑹𝑹𝑹𝑹𝑹≤ 𝟕𝟕.𝟓𝟓 (high) Full diligence in risk management and assurance
practices is recommended to priori tize risk reduction
and mitigation. This should be informed by the risk
factors identified in section 1 of the PCRA and by the
HUDERIA impact assessment process;
Comprehensive stakeholder engagement across the project lifecycle (e.g. partnering with or empowering rights-
holders as determined by the Stakeholder
Engagem ent Process) is also recommended.
Where the highest
𝟓𝟓.𝟓𝟓 ≤𝑹𝑹𝑹𝑹𝑹𝑹≤ 𝟔𝟔
(moderate) Full diligence in risk management and assurance
practices is recommended to priori tize risk reduction
and mitigation but more targeted approaches to
addressing specific risks may be acceptable, where
experts and stakeholders have been appropriately
consulted as part of this determination. Risk
management and assurance practices should be
informed by the risk factors identified in section 1 of
the PCRA and by the HUDERIA impact assessment
process; Appropriate stakeholder engagement
across the project lifecycle is also recommended
following the engagement objectives established by
your Stake holder Engagement Process.
Where the highest
𝑹𝑹𝑹𝑹𝑹𝑹≤ 𝟓𝟓 (low) Targeted approaches to addressing specific risks
may be acceptable, where your highest risk index
level is low and experts and stakeholders have been appropriately consulted to confirm this. Diligence in risk management and assurance practices across the project lifecycle is, however, still recommended. Risk
management and assurance practices should be
informed by the risk factors identified in section 1 of
the PCRA and by the HUDERIA impact assessment
process; Appropriate stakeholder engagement
across the project lifecycle is also recommended
following the engagement objectives established by
your Stakeholder Engagement Process.
Additional PCRA
triggered
recommendations for
large- scale projects : If the AI system could have generational (10 to 20
years), long -term (20 to 60 years), or cross-
generation al (over 60 years) impacts on over
100,000 people an d could pose risks to human
rights, democracy, and the rule of law, you should
74 If answer to PCRA
question #10 = c or
d and answer to
PCRA question #11 =
c, d, or e and answer
to any PCRA
questions #64 -83 =
likelihood 2, 3, or 4,
then:
prioritise public transparency and accountability by
forming an oversight board for your project, which
sufficiently represents the rights -holders and
communities impacted by th e system. You should
also carry out a complete Human Rights Impact
Assessment, following the recommendations of the
Danish Institute for Human Rights or similar, to
gauge the wider effects of your business activities on
the communities and individuals the y impact.
75 Preliminary Context Based Risk Analysis Template
Key:
Example Examples (hover cursor over text to display)
No trigger message
Moderate risk factor (considerable)
Major risk factor (elevated)
Prohibitive risk factor (triggers precautionary principle)
76 Context Question Trigger messages for summary
report
Section 1: Identifying risk factors
In this section, you will be asked a series of questions that are intended to help you identify risk
factors to consider as you embark on the impact assessment and risk management and assurance
elements of your HUDERAF. Draw on your Project Summary Report to answer the questions to
the best of your ability. If you are unsure about how to answer, choose that option and move
forward to the next question. As part of the report generated when you complete this PCRA, you
will receive an enumeration of the risk factors that have been identi fied.
Use Context
Sector or domain for which the system
is being built 1) Will the AI system serve an
essential or primary function in a
high impact or safety critical sector
(e.g., transport, social care, healthcare, other divisions of the
public sector)?
YES
NO
UNSURE
Example
YES:
Major circumstantial risk factor
• Where AI systems serve primary or
critical functions in high impact or safety
critical sectors, this presents a major
circumstantial risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
• Make sure to focus upon considerations
surrounding the prevention of harm and
the respect of the right to life and to
physical, psychological, and moral
integrity
77
Goals, properties, and areas to focus on in
your HUDERAC :
• Safety (accuracy and system
performance, security, reliability, and
robustness)
• Sustainability, (reflection on context and
impacts, change monitoring)
NO: No message
UNSURE: Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the AI system will serve a primary or
critical function in a high impact or safety critical
sector. When this information is ascertained, you
should return to your PCRA and revise it
accordingly.
2) Does the sector or domain in which
the AI system will operate include vulnerable groups (or groups with
protected characteristics) who may
be significantly or
disproportionately impacted by the
design and use of the system?
YES
NO
UNSURE YES:
Major circumstantial risk factor
• Where the sector or domain in which the
AI system will operate includes
vulnerable groups (or groups with
protected characteristics) who may be
significantly or disproportionately
impacted by the design and use of the
system, this presents a major
circumstantial risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
78
Example
Actions to take for your HUDERIA:
• Make sure to focus upon considerations
surrounding the potential impacts of your
AI system on fairness, non -
discrimination, e quality, diversity, and
inclusiveness
Goals, properties, and areas to focus on in
your HUDERAC :
• Fairness (non -discrimination, equality,
bias mitigation, diversity, and
inclusiveness)
NO: No message
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the AI system will operate in a sector or
domain which includes vulnerable groups (or
groups with protected cha racteristics), who may
be significantly or disproportionately impacted by
the design and use of the system. When this
information is ascertained, you should return to
your PCRA and revise it accordingly.
Existing law and regulatory
environment of the sector or domain 3) Have you assessed existing law
and regulation in the sector or
domain in which the AI system will operate and determined that it has YES: No message
NO: STOP.
79 a legal basis and can be developed
and deployed lawfully?
YES
NO
UNSURE
Prohibitive circumstantial risk factor
If you have not been able to establish a legal
basis for the AI system, you should not proceed.
Where appropriate, you should consult with
experts to establish the lawfulness of your
processing objective and proposed project before
taking any next steps. If you would like to
continue with the P CRA anyway, select yes. If
not, please reconsider your project and return
when you have ascertained that your AI system
will be lawful.
UNSURE : STOP. Before taking any further steps
in the proposed project, you should determine,
through expert and stakeholder input where
appropriate, whether your project has a legal
basis. When this information is ascertained, you
should return to your PCRA and revise it
accordingly.
4) Is the sector or domain in which
the AI system will operate
historically highly r egulated?
YES
NO
UNSURE
Example
YES:
Moderate circumstantial risk factor
• Where the sector or domain in which the
AI system will operate is historically
highly regulated, this presents a
moderate circumstantial risk factor
for adverse impacts on the human rights
and fundamental freedom of persons.
80
Actions to take for your HUDERIA:
• Make sure to consider the regulatory
context of your project and to reflect on
the expectations of compliant and
reasonable practices that arise in that
context.
Goals, properties, and areas to focus on in
your HUDERAC :
• Accountability and process transparency,
(traceability, accessibility, auditability,
and responsible governance)
NO: No message
UNSURE : Before taking any further steps in the
proposed project, you should examine , seeking
expert and stakeholder input where appropriate,
whether any relevant existing regulation in your
sector or domain applies to the prospective AI
system. When this informa tion is ascertained,
you should return to your PCRA and revise it
accordingly.
5) Does statute or regulation in the sector or domain in which the AI
system will operate require any
other types of impact assessment for the specific use -case of the AI
systems you are planning to YES:
Moderate circumstantial risk factor
• Where statute or regulation in the sector
or domain in which the AI system will
81 develop (e.g., data protection
impact assessment, equality
impact assessment, human rights
impact assessment, etc.)?
YES
NO
UNSURE
Example
operate requires other types of impact
assessment for the specific use -case of
the AI systems, this presents a
moderate circumstantial risk factor
for adverse impacts on the human rights
and fundamental freedom of persons.
Actions to take for your HUDERIA:
• Make sure to integrate the completion of
other compulsory impact assessments as
supplementary material and evidentiary
support for your HUDERIA.
Goals, properties, and areas to focus on in
your HUDERAC :
• Accountability and process transparency
(traceability, accessibility, auditability,
and responsible governance)
NO: No message
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether any other types of impact assessment
for the specific use -case of the AI systems you
are planning to develop are required by law or
regulation. When this information is ascertained,
you should return to your PCRA and revise it
accordingly.
YES:
82 Impact- level of the system 6) Will the AI system perform a safety
critical or high impact function
independent of the sector in which it operates?
YES
NO
UNSURE
Example
Major circumstantial risk factor
• Where AI systems perform a safety
critical or high impact function
independent of the sector in which they
operate, this presents a major
circumstantial risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
• Make sure to focus upon considerations
surrounding the prevention of harm and
the respect of the right to life and to
physical, psychological, and moral
integrity
Goals, properties, and areas to focus on in
your HUDERAC :
• Safety (accuracy a nd system
performance, security, reliability, and
robustness) and sustainability
NO: No message
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the AI system you are planning to build
performs a safety critical or high impact function.
83 When this informati on is ascertained, you should
return to your PCRA and revise it accordingly.
Prohibited AI systems and uses 7) Is the AI system on the list of
prohibited systems - EU AI Reg?
YES
NO
UNSURE
Example
YES: STOP.
Prohibitive circumstantial risk factor
If the AI system you are planning to build is on
the list of prohibited systems, it poses too high a
risk to impacted rights- holders and communities
for you to proceed. Prohibited systems that may
catastrophically, critically, or seriously harm the
physica l, psychological, or moral integrity or the
human rights and fundamental freedoms of
affected persons should not be developed or
implemented in that form. If you would like to
continue with the PCRA anyway at this time,
select yes. If not, please reconsider your project
and return when you have ascertained that your
AI system will be lawful. If you would like to
continue with the PCRA anyway, select yes. If
not, please reconsider your project and return
when you have ascertained that your AI syste m
will be not fall under the list of prohibited
systems.
NO: No message
UNSURE : STOP. Before taking any further steps
in the proposed project, you should determine,
through expert and stakeholder input where
appropriate, whether the AI system you are
planning to build is on the list of prohibited
systems .
84 8) Could the AI system be repurposed
or used in ways that fall under the
list of prohibited systems?
YES
NO
UNSURE
Example
YES:
Major circumstantial risk factor
• Where AI systems could be repurposed
or used in ways that fall under the list of
prohibited systems, this presents a
major circumstantial risk factor for
adverse impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUD ERIA:
No trigger message
Goal, properties, and areas to focus on in
your HUDERAC :
• You should take special precautions to put
in place protections, processes, and
mechanisms that ensure the limitation of
the system’s use to the purpose for which
it was c reated. These protections,
processes, and mechanisms should be
demonstrated and evidenced in your
HUDERA C.
NO: No message
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
85 whether the AI system could be used or
repurposed in ways that fall under the list of
prohibited systems are considered to pose
significant risks. When this information is
ascertained, you should return to your PCRA and
revise it accordingly.
Scope of deployment (breadth and
temporality) 9) In a scenario where your project
optimally scales, will the AI system
directly and/or indirectly affect
rights -holders and groups within:
a. Organisations
b. Local populations
c. National populations
d. Global population?
Example a
Example b
Example c
Example d
a. Low risk factor
No message
b. Moderate circumstantial risk factor
• Where AI systems directly or indirectly
affect rights -holders and groups within
local populations, this presents a
moderate circumstantial risk factor
for adverse impacts on the human rights
and fundamental freedom of persons.
Actions to take for your HUD ERIA:
• In assessing the potential impacts of the
system, you should pay special attention
to the potential effects of the AI system
on local populations and on the
communities and groups that comprise
them.
Goals, properties, and areas to focus on in
your HUDERAC :
No trigger message
86 c. Major circumstantial risk factor
• Where AI systems directly or indirectly
affect rights -holders and groups within
national populations, this presents a
major circumstantial risk factor for
adverse impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
• In assessing the potential impacts of the
system, make sure to pay special
attention to the potential effects of the AI
system on local and national populations
and on the communities and groups that
comprise them.
Goal, properties, and areas to focus on in
your HUDERAC :
No trigger message
d. Major circumstantial risk factor
• Where AI systems directly or indirectly
affect rights -holders and groups within
national populations, this presents a
major circumstantial risk factor for
adverse impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
87 • In assessing the potential impacts of the
system , make sure to pay special
attention to the potential effects of the AI
system on local, national, and global
populations and on the communities and
groups that co mprise them.
Goal, properties, and areas to focus on in
your HUDERAC :
No trigger message
10) In a scenario where your project
optimally scales, will the AI system directly and/or indirectly affect:
a. Between 1 – 10 ,000 rights-
holders
b. Between 10,001 – 100,000 rights
holders
c. Between 100,001 – 1,000,000
rights -holders
d. Over 1,000,000 rights -holders?
Example a a. No message
b. Moderate circumstantial risk factor
• Where AI systems directly or indirectly
affect between 1 0,001-100,000 rights -
holders, this presents a moderate
circumstantial risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
No trigger message
Goal, properties, and areas to focus on in
your HUDERAC :
No trigger message
88
Example b
Example c
Example d
c. Major circumstantial risk factor
• Where AI systems directly or indirectly
affect between 100,001- 1,000,000
rights -holders, this presents a major
circumstantial risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
• In assessing the potential impacts of the
system, make sure to pay special
attention to the at scale or mass -level
effects of the use of the AI system on
social and political processes and
institutions—in particular, as these relate
to human rights, fundamental freedoms,
democracy, and the rule of law.
Goal, properties, and areas to focus on in
your HUDERAC :
No trigger me ssage
89 d. Major circumstantial risk factor
• Where AI systems directly or indirectly
affect over 1,000,000 rights -holders, this
presents a major circumstantial risk
factor for adverse impacts on the human
rights and fundamental freedom of
persons.
Actions to take for your HUDERIA:
• In assessing the potential impacts of the
system, make sure to pay special
attention to the at scale or mass -level
effects of the use of the AI system on
social and political processes and
institutions—in particular, as t hese relate
to human rights, fundamental freedoms,
democracy, and the rule of law.
Goal, properties, and areas to focus on in
your HUDERAC :
No trigger message
11) Considering the potential direct
and indirect impacts of your
project on individuals,
communities, and the
environment, which of the following is the widest timescale
within which the AI system could
affect rights -holders and groups:
a. No message
b. Moderate circumstantial risk factor
• Where AI systems directly or indirectly
affect rights -holders medium term (1 to
10 years), this presents a moderate
circumstantial risk factor for adverse
90 a. Short term (less than a year)
b. Medium term (1 to 10 years)
c. Generationally (10 to 20
years)
d. Long- term (20 to 60 years)
e. Over lifetimes and across
future generations (over 60 years)
f. UNSURE
Example a
Example b
Example c
Example d
Example e:1
Example e:2
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
• In assessing the potential impacts of the
system, make sure to pay special
attention to the medium term (1 -10
years) effects of the use of the AI system
on the individual identities of rights
holders as well as on social and political
processes and institutions— in particular,
as these relate to human rights,
fundamental freedoms, democracy, and
the rule of law.
Goal, properties, and areas to focus on in
your HUDERAC :
No trigger message
c. Major circumstantial risk factor
• Where AI systems directly or indirectly
affect rights -holders generationally (10
to 20 years), this presents a major
circumstantial risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
91
• In assessing the potential impacts of the
system make sure to pay special attention
to the generational (10 -20 years) effects
of the use of the AI system on the
individual identities of rights holders, on
socioeconomic relationships, and on
social and political processes and
institutions—in particular, as these relate
to human rights, fundamental freedoms,
democracy, and the rule of law.
Goal, properties, and areas to focus on in
your HUDERAC :
No trigger message
d. Major circumstantial risk factor
• Where AI systems directly or indirectly
affect rights -holders long -term (20 to 60
years), this presents a major
circumstantial risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
• In assessing the potential impacts of the
system, make sure to pay special
attention to the long -term (20 to 60
years) and generati onal (10- 20 years)
effects of the use of the AI system on the
92 individual identities of rights holders, on
socioeconomic relationships, and on
social and political processes and
institutions—in particular, as these relate
to human rights, fundamental freedo ms,
democracy, and the rule of law.
Goal, properties, and areas to focus on in
your HUDERAC :
No trigger message
e. Major circumstantial risk factor
• Where AI systems directly or indirectly
affect rights -holders over lifetimes and
across future generations (over 60
years), this presents a major
circumstantial risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
• In assessing the potential impacts of the
system, make sure to pay special
attention to the long -term (20 to 60
years), generational (10 -20 years), and
cross-generational (over 60 years) effects
of the use of the AI system on the
individual identities of r ights holders, on
socioeconomic relationships, and on
social and political processes and
93 institutions —in particular, as these relate
to human rights, fundamental freedoms,
democracy, and the rule of law.
Goal, properties, and areas to focus on in
your HUDERAC :
No trigger message
Proportionality trigger: If the AI system could
have cross-generational (over 60 years) impacts
on over 100,000 people, you should prioritise
public transparency and accountability by forming
an oversight board for your project, which
sufficiently represents the rights- holders and
communities impacted by the system. You should
also carry out a complete Human Rights Impact
Assessment to gauge the wider effects of your
business activities on the communities and
individuals they impact .
UNSURE : Before taking any further steps in the
proposed project, you should determine, seeking
expert and stakeholder input where appropriate,
the timescale of the prospective AI system’s
impacts on affected rights- holders and
communities. When this information is
ascertained, you should return to your PCRA and
revise it accordingly.
Technological maturity 12) Will the AI system's design be
based on w ell-understood
techniques that have previously
been in operation and externally YES: No message
NO:
94 validated for a similar purpose and
in the same sector?
YES
NO
UNSURE
Example
Moderate modifiable risk factor
• Where AI systems are not designed
based on well -understood techniques
that have previously been in operation
and externally validated for a similar
purpose and in the same sector, this
presents a moderate modifiable risk
factor for adverse impacts on the
human rights and fundamental freedom
of persons.
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
• Safety (accuracy and system
performance, security, reliability, and
robustness) and sustainab ility (change
monitoring). In particular, you should
demonstrate in your assurance case that
diligent processes of testing, verifying,
and externally validating the performance
of the system has occurred. You should
also establish system monitoring and
performance evaluation protocols that are
proportionate to the system’s
technological maturity.
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the techniques on which the AI system
will be based have been successfully used in other
95
externally validated s ystems that have had
similar purposes and were in the same sector.
When this information is ascertained, you should
return to your PCRA and revise it accordingly.
Existing system (human or
technological) that the application is
replacing 13) If the AI system is replacing a
human, technical, or hybrid system
that serves the same or similar
function, is a reason for
replacement that the existing
system is considered flawed or
harmful?
YES
NO
UNSURE
NOT APPLICABLE
Example
YES:
Moderate circumstantial risk factor
• Where AI systems are replacing a
human, technical, or hybrid system that
serves the same or similar function,
because the existing system is
considered flawed or harmful, this
presents a moderate circumstantial
risk factor for adverse impacts on the
human rights and fundamental freedom
of persons.
Actions to take for your HUDERIA:
• Make sure to pay special attention to
assessing how your AI system could
potentially repeat the harmful outcomes
generated by the replaced system and
how to minimise any potenti al replicated
harms.
Goals, properties, and areas to focus on in
your HUDERAC :
• Safety (accuracy and system
performance, security, reliability, and
robustness). Your assurance case should
demonstrate how you have ensured that
the flaws or harms generated by the
previous system are redressed and
96
rectified across the design, development,
and deployment lifecycle
NO: No message
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the prospective AI system is replacing a
human, technical, or hybrid system that serves
the same or similar fun ction, is a reason for
replacement that the existing system is
considered flawed or harmful . When this
information is ascertained, you should return to
your PCRA and revise it accordingly.
NOT APPLICABLE : No message
14) If the human, technical, or hybrid
system being replaced is
considered critical infrastructure or
serves a safety -critical or high
impact function, has assessment and planning been done and made
public to transparently ensure that
its updating/replacement does not cause unacceptable outage or
harm?
YES
YES: No message
NO: Major modifiable risk factor
• Where assessment and planning is not
done and made public to transparently
ensure that the updating/replacement of
a human, technical, or hybrid system
that is considered critical infrastructure
or serves a safety -critical or high impact
function d oes not cause unacceptable
outage or harm, this presents a major
modifiable risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
97 NO
UNSURE
NOT APPLICABLE
Example
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focu s on in
your HUDERAC :
• You should incorporate publicly
transparent assessment and planning into
your assurance case which demonstrates
that the updating/replacement of the
previous system will not cause
unacceptable outage or harm.
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the AI system you are planning to build
will replace a human, technical, or hybrid system
that is considered cr itical infrastructure or serves
a safety -critical or high impact function and
whether the new system may cause unacceptable
outage or harm . When this information is
ascertained, you should return to your PCRA and
revise it accordingly.
NOT APPLICABLE : No message
15) If the system is automating or replacing human labour, will
assessment and planning be done and made public to transparently
ensure that any corresponding job YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
98 loss, labour displacement, or
redeployment is publicly
acceptable, and any possible
corollary harms are managed and
mitigated?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example 1
Example 2
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Where assessment and planning is not
done and made public to transparently
ensure that job loss, labour
displacement, or redeployment, which is
the result of an AI system automating or
replacing human labour, is publicly
acceptable, and any possible coroll ary
harms are managed and mitigated, this
presents a moderate modifiable risk
factor for adverse impacts on the human
rights and fundamental freedom of
persons.
Actions to take for your HUDERIA:
• Make sure to pay special attention to
assessing how your AI system could
potentially harm human dignity through
labour displacement and infringe upon the
right to just working conditions, the right
to safe and healthy working conditions,
and the right to organize as set out in the
European Social Charter
Goals, properties, and areas to focus on in
your HUDERAC :
• Sustainability (reflection on context and
impacts and stakeholder engagement and
involvement) and fairness ( non-
discrimination and equality) . Make sure to
99 incorporate publicly transparent
assessment and p lanning into your
assurance case to ensure that any
corresponding job loss, labour
displacement, or redeployment is publicly
acceptable and that any possible corollary
harms are managed and mitigated.
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the system you are planning to build will
automate or replace human labour in a way that
creates publicly unacc eptable job loss, labour
displacement, or redeployment . When this
information is ascertained, you should return to
your PCRA and revise it accordingly.
NOT APPLICABLE : No message
Bias and discrimination in sector or
domain context 16) Do the sector(s) or domain(s) in
which the AI system will operate,
and from which the data used to train it are drawn, contain
historical legacies and patterns of
discrimination, inequality, bias,
racism, or unfair treatment of
minority, marginalized, or
vulnerable groups that could be
replicated or augmented in the functioning of the system or in its
outputs and short - and long -term
impacts?
YES:
Major circumstantial risk factor
• Where the sector(s) or domain(s) in which
the AI system will operate, and from which
the data used to train it are drawn, contain
historical legacies and patterns of
discrimination, inequality, bias, racism, or
unfair treatment of minority,
marginalized, or vulnerable groups that
could be replicated or augmented in the
functioning of the system or in its outputs
and short - and long -term impacts, this
presents a major circumstantial risk
100 YES
NO
UNSURE
NOT APPLICABLE
Example 1
Example 2
factor for adverse impacts on the human
rights and fundamental freedom of
persons.
Actions to take for your HUDERIA:
• Make sure to focus upon considerations
surrounding the fairness, non-
discrimination, equality, diversity, and
inclusiveness
Goals, properties, and areas to focus on in
your HUDERAC :
• Fairness (non -discrimination, equality,
bias mitigation, diversity, and
inclusiveness). Continued emphasis
should be placed on data
representativeness and bias mitigation
across desi gn, development, and
deployment processes.
NO: No message
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the sector(s) or domain(s) in which the
AI system will operate, and from which the data
used to train it are drawn, contain historical
legacies and patterns of discrimination,
inequality, bias, racism, or unfair treatment of
minority, marginalized, or vulnerable groups that
could be replicated or augmented in the
functioning of the system or in its outputs and
101 short- and long -term impacts . When this
information is ascertained, you should return to
your PCRA and revise it accordingly.
NOT APPLICABLE : No message
Environmental context 17) If the design, development, and
deployment of the AI system will
have potentially significant impacts
on the environment, will sufficient and transparently reported
processes be implemented
throughout the project’s lifecycle to
ensure that the system, in both its production and use, complies with
applicable environmental
protection standards and supports the sustainability of the planet?
YES, WE HA VE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Major modifiable risk factor
• Where the design, development, and
deployment of the AI system will have
potentially significant impacts on the
environment and sufficient and
transparently reported processes are not
implemented throughout the project ’s
lifecycle to ensure that the system, in both
its production and use, complies with
applicable environmental protection
standards and supports the sustainability
of the planet, this presents a major
modifiable risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
102 NOT APPLICABLE
Example • Make sure to focus upon considerations
surrounding the impacts of the AI system
on the biosphere and planetary health.
You should also explore in your HUDERIA,
which communities may be adversely
impacted by environmental harms —
paying close attention to eff ects on
vulnerable and marginalized groups.
Goals, properties, and areas to focus on in
your HUDERAC :
• Sustainability (reflection on context and
impacts and stakeholder engagement
and involvement)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to incorporate sufficient and transparently
reported processes to ensure that the system, in
both its production and use, does not adversely
impact the environment or harm the biosphere
and that the system complies with applicable
environmental protection standards. When this
information is ascertained, you should return to
your PCRA and revise it accordingly.
NOT A PPLICABLE : No message
Cybersecurity context 18) Could the AI system present
motivations or opportunities for
malicious parties to hack or corrupt
it to achieve substantial financial YES:
Moderate modifiable risk factor
103 gains, political goals, or other
perceived benefits?
YES
NO
UNSURE
NOT APPLICABLE
Example • Where AI systems provide motivations or
opportunities for malicious parties to
hack or corrupt them to achieve
substantial financial gains, political goals,
or other perceived benefits, this presents
a moderate modifiable risk factor for
adverse impacts on the human rights
and fundamental freedom of persons.
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
• Safety (security and robustness)
NO: No message
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the AI system present motivations or
opportunities for malicious parties to hack or
corrupt it to achieve subst antial financial gains,
political goals, or other perceived benefits . When
this information is ascertained, you should return
to your PCRA and revise it accordingly.
NOT APPLICABLE : No message
19) Will sufficient and transparently
reported processes be YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
104 implemented throughout the
project’s lifecycle to ensure that
measures put in place to safeguard
the system's safety, security, and
robustness are appropriately
proportional to potential risks of
hacking, adversarial attack, data
poisoning, model inversion , or
other cybersecurity threats?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Major modifiable risk factor
• Where sufficient and transparently
reported processes are not implemented
throughout project lifecycles to ensure
that measures put in place to safeguard AI
systems’ safety, security, and robustness
are appropriately proportional to potential
risks of hacking, adversarial attack, data
poisoning, model inversion, or other
cybersecurity threats , this presents a
major modifiable risk factor for
adverse impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
• Safety (security and robustness)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to incorporate sufficient and transparently
reported processes throughout the project’s
lifecycle to ensure that me asures put in place to
105 safeguard the system's safety, security, and
robustness are appropriately proportional to
potential risks of hacking, adversarial attack, data
poisoning, model inversion, or other
cybersecurity threats . When this information is
ascer tained, you should return to your PCRA and
revise it accordingly.
NOT APPLICABLE : No message
20) Will sufficient and transparently
reported processes be
implemented throughout the
project’s lifecycle to stress test the
AI system for cybersecurity
vulnerabilities and resilience?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Where sufficient and transparently
reported processes are not implemented
throughout project lifecycles to stress test
AI systems for cybersecurity
vulnerabilities and resilience , this presents
a moderate modifiable risk factor for
adverse impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
No trigger message
106
Example
Goals, properties, and areas to focus on in
your HUDERAC :
• Safety (security and robustness)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to incorporate sufficient and transparently
reported processes throughout the project’s
lifecycle to stress test th e AI system for
cybersecurity vulnerabilities and resilience . When
this information is ascertained, you should return
to your PCRA and revise it accordingly.
NOT APPLICABLE : No message
Data Lifecycle Context
Data quality, integrity, and provenance 21) Will sufficient and transparently
reported processes be implemented throughout the project’s lifecycle to ensure that all
data used in producing the system are sufficiently balanced and
representative of the individual
rights -holders and groups it is
affecting?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Where sufficient and transparently
reported processes are not implemented
throughout the project’s lifecycle to
ensure that all data used in producing the
107 WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
Example
system are s ufficiently balanced and
representative of the individual rights -
holders and groups it is affecting, this
presents a moderate modifia ble risk
factor for adverse impacts on the human
rights and fundamental freedom of
persons.
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
• Data quality (dataset balance and
representativeness) and fairness (non -
discrimination, and bias-mitigation)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to incorporate into the design and
development of your AI system the inclusion of
datasets, which are s ufficiently balanced and
representative of the individual rights -holders
and groups they are affecting. You should also
determine how to demonstrate this in your
assurance case. When this information is
ascertained, you should return to your PCRA and
revise it accordi ngly.
22) Will sufficient and transparently
reported processes be
implemented throughout the YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
108 project’s lifecycle to ensure that all
data used in producing the system
are accurate, reliable, relevant, appropriate, up -to-date, and of
adequate quantity and quality for the use case, domain, function, and purpose of the system?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
Example
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Where sufficient and transparently
reported processes are not implemented
throughout the project’s lifecycle to
ensure that all data used in producing the
system are accurate, reliable, relevant,
appropriate, up -to-date, and of adequate
quantity and quality for the use case,
domain, function, and purpose of the
system, this presents a moderate
modifiable risk factor for adverse
impacts on the human rights and
fundamenta l freedom of persons.
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
• Data quality (measurement accuracy and
source integrity, data reliability, data
relevance and appropriateness, data
timeliness and recency, adequacy of data
quantity and quality for the use case,
domain, function, and purpose of the
system, and responsible data
management)
109 UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to incorporate into the design and
development of your AI system the inclusion of
data, which are accurate, reliabl e, relevant, up -
to-date, appropriate, and of adequate quantity
and quality for the use case, domain, function,
and purpose of the system. You should also
determine how to demonstrate this in your
assurance case. When this information is
ascertained, you should return to your PCRA and
revise it accordingly.
23) Will sufficient and transparently
reported processes be implemented throughout the
project’s lifecycle to ensure that all
data used in producing the system are attributable, consistent,
complete, and contemporaneous with collection?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Where sufficient and transparently
reported processes are not implemented
throughout the project’s lifecycle to
ensure that all data used in producing the
system are attributable, consistent,
complete, and contemporaneous with
collection, this presents a moderate
modifiable risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
110 UNSURE
Example
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
• Data integrity (data attributabil ity, data
consistency, data completeness, data
contemporaneous, and responsible data
management)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to incorporate into the design and
development of your AI system the inclusion of
data, which are attributable, con sistent,
complete, and contemporaneous with collection.
You should also determine how to demonstrate
this in your assurance case. When this
information is ascertained, you should return to
your PCRA and revise it accordingly.
24) Will sufficient and transpar ently
reported processes be
implemented throughout the
project’s lifecycle to ensure the
proper recording, traceability, and auditability of the provenance and lineage of all data used in
producing the system, and any
other data involved in the dynamic
learning, tuning, or re -training of
the system across its lifecycle? YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
111
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
Example
• Where sufficient and transparently
reported processes are not implemented
throughout the project’s lifecycle to
ensure the proper recording, traceability,
and auditability of the provenance and
lineage of all data used in producing the
system, and any othe r data involved in
the dynamic learning, tuning, or re -
training of the system across its lifecycle,
, this presents a moderate modifiable
risk factor for adverse impacts on the
human rights and fundamental freedom
of persons.
Actions to take for your HUDE RIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
• Accountability and process transparency,
(accessibility, clear data provenance and
lineage) and data integrity (responsible
data management)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to incorporate into the design and
development of your AI system the inclusion
mechanisms and processes to ensure the proper
recording, traceability, and auditability of the
provenance and lineage of all data used to train,
test, and validate the system, and any other data
112 involved in the dynamic learning, tuning, or re -
training of the system across its lifecycle . You
should also determine how to demonstrate this in
your assurance case. When this information is
ascertained, you should return to your PCRA and
revise it accordingly.
Means and methods of data collection 25) Where there is human involvement
in the data lifecycle, will
transparent and publicly accessible measures be implemented to
ensure mitigation of potential
measurement errors or biases in
collection, measurement, and recording processes?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Where transparent and publicly
accessible measures are not implemented
to ensure mitigation of potential
measurement errors or biases in
collection, measurement, and recording
processes, where there is human
involvement in the data lifecycle, , this
presen ts a moderate modifiable risk
factor for adverse impacts on the human
rights and fundamental freedom of
persons.
Actions to take for your HUDERIA:
No trigger message
113
Goals, properties, and areas to focus on in
your HUDERAC :
• Data quality, measurement accuracy and
source integrity, and responsible data
management
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to incorporate into the design and
development of your AI system transparent and
publicly accessible measures to en sure mitigation
of the potential for measurement errors or biases
in collection, measurement, and recording
processes w here there is human involvement in
data collection . You should also determine how to
demonstrate this in your assurance case. When
this information is ascertained, you should return
to your PCRA and revise it accordingly.
Not Applicable: No message
26) In the event that collected or
procured datasets have missing or
unusable data, will the methods
used for addressing these
deficiencies be transparent and
made accessible to relevant stakeholders?
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
114 YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
• Where the methods used for addressing
missing or unusable data in collected or
procured datasets is not transparent and
made accessible to relevant stakeholders,
this presents a moderate modifiable
risk factor for adverse impacts on the
human rights and fundamental freedom
of persons.
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
• Data quality (data appropriateness), data
integrity (data compl eteness, and
responsible data management), and
accountability and process transparency
(accessibility, and clear data provenance
and lineage)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
the methods and measure you will put into place
to address issues arising from collected or
procured datasets, which ha ve missing or
unusable data. You should also determine how to
demonstrate this in your assurance case. When
this information is ascertained, you should return
to your PCRA and revise it accordingly.
115 NOT APPLICABLE : No message
27) Where personal data are used in
the production of the AI system,
will information be made available
to impacted rights- holders and
other relevant stakeholders about
the consent or legitimate basis to
use that data for the purpose of the system?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Where information is not made available
to impacted rights- holders and other
relevant stakeholders about the consent
or legitimate basis to use personal data
for the purpose of the system, this
presents a moderate modifiable risk
factor for adverse impacts on the human
rights and fundamental freedom of
persons.
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
• Data protection and privacy (consent and
accountability) and process transparency
116 (accessibility, and clear data provenance
and lineage)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to make information available to impacted
rights -holders and other relevant stakeholders
about the consent or legitimate basis to use
personal data for the purpose of the system.
When this information is ascertained, you should
return to your PCRA and revise it accordingly.
NOT APPLICABLE : No message
28) If consent or the legitimate basis
to use personal data is implied, will
rights -holders and other relevant
stakeholders be consulted to identify acceptability of the data
use or concerns that need to be addressed?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Where rights -holders and other relevant
stakeholders are not consulted to identify
acceptability of the data use or concerns
that need to be addressed when consent
to use personal data is implied, this
presents a moderate modifiable risk
factor for adverse impacts on the human
rights and fundamental freedom of
persons.
117 NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
Actions to take for your HUDERIA:
• Make sure to incorporate considerations
surrounding the acceptability of the
personal data use into your impact
assessment, integrating affected rights-
holders into relevant deliberations
through your Stakeholder Engagement
Process where appropriate.
Goals, properties, and areas to focus on in
your HUDERAC :
• Sustainability (reflection on context and
impacts, stakeholder engagement and
involvement)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether your project involves the use of personal
data based on implied consent and, if so, how you
can consult impacted rights- holders and other
relevant stakeholders to identify acceptability of
the data use or concer ns that need to be
addressed. When this information is ascertained,
you should return to your PCRA and revise it
accordingly.
NOT APPLICABLE : No message
Data types 29) Will the AI system use dynamic data, collected and processed in YES:
Major modifiable risk factor
118 real time (or near real time), for
continuous learning?
YES
NO
UNSURE
NOT APPLICABLE
Example
• Where the AI system uses dynamic data,
collected and processed in real time (or
near real time), for continuous learning,
this presents a major modifiable risk
factor for adverse impacts on the human
rights and fundamental freedom of
persons.
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
• Safety (security, reliability, and
robustness), data quality, and data
integrity as well as, where appropriate,
non-discrimination and bias mitigation.
You should also transparently report your
use of dynamic data, and the measures
you are taking to manage the risks
surrounding them, to impacted rights-
holders and relevant stakeholders.
NO: No message
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether your project involves the use of use
dynamic data, collected and processed in real
time (or near real time), for continuous learning
and, if so, how to manage the risks surrounding
this. You should also determine how to
119 demonstrate such risk management measures in
the relevant parts of your assurance case—in
particular, those involving the assurance of goals
and pr operties of safety, security, reliability,
robustness, data quality, and data integrity as
well as, where appropriate, non -discrimination
and bias mitigation. When this information is
ascertained, you should return to your PCRA and
revise it accordingly.
NOT APPLICABLE : No message
30) Do the domain in which the data
are collected or procured, and the
type of the data collected or
procured, pose risks of rapid or
unexpected distributional shifts or
drifts that could adversely impact
the accuracy and performance of the system?
YES
NO
UNSURE
NOT APPLICABLE
Example
YES:
Moderate circumstantial risk factor
• Where the domain in which the data are
collected or procured, and the type of the
data collected or procured, pose risks of
rapid or unexpected distributional shifts or
drifts that could adversely impact the
accuracy and performance of the sy stem,
this presents a moderate modifiable
risk factor for adverse impacts on the
human rights and fundamental freedom of
persons.
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
120
• Sustainability (change monitoring), safety
(reliability and robustness), data quality,
and data integrity as well as, where
appropriate, non -discrimination and bias
mitigation. You should also involve
domain experts to determine the potential
sources of di stributional shifts or drifts and
build processes of dynamic assessment,
re-assessment, external validation, and
monitoring into your project lifecycle.
NO: No message
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the collected or procured data you will
be using are subject to rapid or unexpected
distributional shifts or dr ifts and, if so, how to
manage the risks surrounding this. You should
also determine how to demonstrate such risk
management measures in the relevant parts of
your assurance case—in particular, those
involving the assurance of goals of safety,
security, re liability, robustness, data quality, and
data integrity as well as, where appropriate, non -
discrimination and bias mitigation. In addition to
this, you should also involve domain experts to
determine the potential sources of distributional
shifts or drifts and build processes of dynamic
assessment, re -assessment, external validation,
and monitoring into your project lifecycle. When
this information is ascertained, you should return
to your PCRA and revise it accordingly.
121 NOT APPLICABLE : No message
31) If the AI system will use
unstructured data or a combination
of structured and unstructured
data, will the project lifecycle incorporate mechanisms and
processes to ensure that the
inferences generated from that
data by the system are reasonable,
fair, and do not contain lurking
proxies or correlations that are discriminatory or inequitable?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Where the project lifecycle does not
incorporate mechanisms and processes
to ensure that the inferences generated
from the use of unstructured data, or a
combination of structured and
unstructured data, are reasonable, fair,
and do not contain lurking proxies or
correlations that are discriminatory or
inequitable , this presents a moderate
modifiable risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
122
• Fairness (non -discrimination, and bias
mitigation). You should also involve
domain experts and social scientists to
determine the potential s ources of the
lurking discriminatory proxies or
correlations that may influence the
outputs of the system.
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether your AI system will use unstructured
data or a combination of structured and
unstructured data and, if so, how to manage the
risks that these data may generate inferences
that are unreasonable, inequitable, or contain
lurking discriminatory proxies or correlations.
When this information is ascertained, you should
return to your PCRA and revise it accordingly.
NOT APPLICABLE : No message
Dataset linkage 32) Is there a possibility of
deanonymizing or identifying
rights -holders through data linkage
with existing data, publicly available datasets, or data that
could be easily obtained?
YES
NO
UNSURE
YES:
Major circumstantial risk factor
• Where there is a possibility of
deanonymizing or identifying rights -
holders through data linkage with
existing data, publicly available datasets,
or data that could be easily obtained, this
presents a major circumstantial risk
factor for adverse impacts on the human
rights and fundamental freedom of
persons.
123 NOT APPLICABLE
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
• Data protection and privacy (data
security). You should also involve experts
to determine the potential sources of
deanonymisation or identification through
data linkage.
NO: No message
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether your AI system may deanonymize or
identify rights -holders through data linkage with
existing data, publicly available datasets, or data
that could be easily obtained and, if so, how to
manage the risks of this potential
deanonymization or identifi cation. When this
information is ascertained, you should return to
your PCRA and revise it accordingly.
NOT APPLICABLE : No message
Data labelling and annotating practices 33) Will processes of labelling and
annotating the data used to
produce the AI system be YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
124 transparently reported and made
accessible for audit, oversight, and
review by appropriate authorities and relevant stakeholders?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Where processes of labelling and
annotating the data used to produce the
AI system are not transparently reported
and made accessible for audit, oversight,
and review by appropriate authorities and
relevant stakeholders , this presents a
moderate modifiable risk factor for
adverse impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
• Accountability and process transparency
(traceability, auditability, accessibility,
and responsible governance)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to make the processes of labelling and
annotating the data that will be used to produce
your AI system transparent and accessible for
audit, oversight, and review by appropriate
125 authorities and relevant stakeholders. When this
information is ascertained, you should return to
your PCRA and revise it accordingly.
NOT APPLICABLE : No message
34) Where human labellers and
annotators are involved, will
sufficient and transparently
reported processes be put into place to mitigate potential labelling
or annotation biases, especially in
cases where these activities
concern social and demographic
categories that can import patt erns
of discrimination and proxies for protected characteristics?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Where sufficient and transparently
reported processes are not put into place
to mitigate potential labelling or
annotation biases where human labellers
and annotators are involved, especially in
cases where these activities concern social
and demographic categories that can
import patterns of discrimination and
proxies for protected characteristics , this
presents a moderate modifiable risk
factor for adv erse impacts on the human
rights and fundamental freedom of
persons.
Actions to take for your HUDERIA:
No trigger message
126
Example
Goals, properties, and areas to focus on in
your HUDERAC :
• Fairness (non -discrimination and bias
mitigation), accountability and process
transparency ( accessibility, traceability,
auditability, and responsible governance)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to mitigate any potential labelling or
annotation biases that may arise in the
production of your AI system. When this
information is ascertained, you should return to
your PCRA and revise it accordingly.
NOT APPLICABLE : No message
35) If data labelling or annotation is
partly or fully automated, will
sufficient and transparently reported processes of human
oversight be implemented to
mitigate the negative impact of
biases generated by automated
labelling or annotation, especially
in cases where the dataset includes
social and demographic categories
that can import patterns of discrimination and proxies for protected characteristics?
Yes: No message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Where sufficient and transparently
reported processes of human oversight
are not implemented to mitigate the
negative impact of biases generated by
automated labelling or annotation,
127
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
especially in cases where the dataset
includes social and demographic
categories that can import patterns of
discrimination and proxies for protected
characteristics , this presents a moderate
modifiable risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
• Fairness (non -discrimination, bias
mitigation), accountability and process
transparency (traceability, auditability,
accessibility, and responsible governance)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to mitigate any potential labelling or
annotation biases generated by automated data
labelling or annotation . When this information is
ascertained, you should return to your PCRA and
revise it accordingly .
NOT APPLICABLE : No message
Goal Setting and Problem Formulation Context
128 Decision to design 36) Will an evaluation be carried out as
to whether building the AI system
is the right approach given
available resources and data,
existing technologies and
processes, the complexity of the
use-contexts involved, and the
nature of the policy or social
problem that needs to be solved?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
Example
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Where an evaluation is not carried out as
to whether building the AI system is the
right approach given available resources
and data, existing technologies and
processes, the complexity of the use -
contexts involved, and the nature of the
policy or social problem that needs to be
solved , this presents a moderate
modifiable risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
• You should take steps to ensure that an
initial evaluation of this kind takes place
and incorporate its results into your
HUDERIA as part of your impact
assessment process.
Goals, properties, and areas to focus on in
your HUDERAC:
• Sustainability (reflection on context and
impacts)
129 Unsure: Before taking any further steps in the
proposed project, you should evaluate, through
expert and stakeholder input where appropriate,
whether building the AI system is the right
approach given available resources and data,
existing technologies and processes, the
complexity of the use -contexts involved, and the
nature of the policy or social problem that needs
to be solved. When this information is
ascertained, you should return to your PCRA and
revise it accordingly.
Definition of outcome 37) Will pro cesses of formulating the
problem to be solved by the AI
system and of defining its target
variable (or measurable proxy) be opened to input from stakeholder
engagement and public scrutiny?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
Example YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Where processes of formulating the
problem to be solved by the AI system
and of defining its target variable (or
measurable proxy) are not opened to
input from stakeholder engagement and
public scrutiny, this presents a
moderate modifiable risk factor for
adverse impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
• You should determine, based on this
preliminary risk analysis and your
130
stakeholder engagement process, the
proportionate level of stakeholder
involvement, and seek public input
accordingly. This public input should
include (1) determining the
reasonableness, fairness, equity, and
justifiability of the translation of the
project's objective into the statistical and
mathematical frame and (2) deter mining
the alignment of that translation with the
potential impacts of the system on human
rights, fundamental freedoms,
democracy, and the rule of law. The way
you integrate this public input into the
system’s problem formulation and
outcome definition should be incorporated
into your HUDERIA as part of your impact
assessment process.
Goals, properties, and areas to focus on in
your HUDERAC :
• Sustainability (reflection on context and
impacts and stakeholder engagement and
involvement)
UNSURE : Before taking any further steps in the
proposed project, you should evaluate the extent
to which you need to incorporate public input into
your process of ascertaining the AI system’s
problem formulation and outcome definition. To
attain this objective, you should determine,
based on this preliminary risk analysis and your
stakeholder engagement process, the
proportionate level of stakeholder involvement,
and seek appropriate public input. When this
131 information is ascertained, you should return to
your PC RA and revise it accordingly.
Model Design & Development Context
AI model characteristics
38) If the algorithmic model(s) or
technique(s) used by the AI
system have a non -deterministic,
probabilistic, evolving, or dynamic character that prevents or hinders
the system's intended functionality from being formalized into specific
and checkable design- time
requirements (or that impairs
commonly accepted methods of
formal verification and validation),
will the system interact with rights -
holders in ways that could adversely impact their human
rights and fundamental freedoms?
YES
NO
UNSURE
NOT APPLICABLE
Example
YES:
Major modifiable risk factor
• Where the AI system directly interacts
with rights -holders in ways that could
adversely impact their human rights and
fundamental freedoms where the
algorithmic model(s) or technique(s)
used by the system have a non -
deterministic, probabilistic, evolving, or
dynamic character that prevents or
hinders the system's intended
functionality from being formalized into
specific and checkable design- time
requirements (or that impairs commonly
accepted methods of formal verification
and validation), this presents a major
modi fiable risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
• Safety (security, reliability, and
robustness), data quality, and data
132
integrity as well as, where appropriate,
fairness (non -discrimination and bias
mitigation)
NO: No message
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
(1) whether the algorithmic model(s) or
technique(s) used by the AI system will have a
non-deterministic, probabilistic , evolving, or
dynamic character that prevents or hinders the
system's intended functionality from being
formalized into specific and checkable design-
time requirements (or that impairs commonly
accepted methods of formal verification and
validation), and (2) whether the system will
interact with rights- holders in ways that could
adversely impact their their human rights and
fundame ntal freedoms . When this information is
ascertained, you should return to your PCRA and
revise it accordingly.
133 NOT APPLICABLE : No message
39) If the algorithmic model(s) or
technique(s) used by the AI
system have a complex, high -
dimensional, or non-linear
character that impairs or prevents the interpretability and
explainability of the system, will
the system interact with rights -
holders in ways that could adversely impact their human
rights and fundamental freedoms?
YES
NO
UNSURE
NOT APPLICABLE
Example
YES:
Major modifiab le risk factor
• Where the AI system directly interacts
with rights -holders in ways that could
adversely impact their human rights and
fundamental freedoms where the
algorithmic model(s) or technique(s)
used by the AI system have a complex,
high-dimensional, or non -linear character
that impairs or prevents the
interpretability and explainability of the
system , this presents a major
modifiable risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
• Safety (security, reliability, and
robustness) and explainability as well as,
where appropriate, fairness (non -
discrimination and bias mitigation)
NO: No message
134 UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
(1) whether the algorithmic model(s) or
technique(s) used by the AI system have a
complex, high-dimensional, or non -linear
character that impairs or prevents the
interpretability and explainability of the system,
and (2) whether the system will interact with
rights -holders in ways that could adversely
impact their physical, psychological, or moral
integrity or their human rights and fundamental
freedoms . When this information is ascertained,
you should return to your PCRA and revise it
accordingly.
NOT APPLICABLE : No message
Pre-processing and feature
engineering 40) Where feature engineering,
whether automated or carried out
by humans, involves the grouping,
disaggregating, or excluding of
input features related to protected
or potentially sensitive
characteristics (e.g. decisions
about combining or separating
categories of gender or ethnic
groups) or proxies for these, will the production of the AI system
incorporate processes to mitigate
emergent forms of bias and to make the rationale behind these
decisions transparent and
accessible to impacted rights-YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Where feature engineering, whether
automated or carried out by humans,
involves the grouping, disaggregating, or
excluding of input features related to
protected or potentially sensitive
characteristics (e.g. decisions about
combining or separating categor ies of
135 holders and other relevant
stakeholders?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
gender or ethnic groups) or proxies for
these, the production of the AI system
does not incorporate processes to
mitigate any emergent forms of bias and
to make the rationale behind these
decisions transparent and accessible to
impacted rights -holde rs and other
relevant stakeholders
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
Fairness (non -discrimination and bias
mitigation) and accountability and process
transparency (traceabili ty, auditability,
accessibility, and responsible
governance). In particular, you should
ensure that the feature engineering stage
of the AI system’s lifecycle incorporates
processes to mitigate emergent forms of
bias and to make the rationale behind
these decisions transparent and accessible
to impacted rights- holders
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to incorporate measures to mitigate any
emergent forms of bias in the event that the
feature engineering stage of a n AI system’s
production, whether automated or carried out by
136 humans, involves the grouping, disaggregating,
or excluding of input features related to protected
or potentially sensitive characteristics or proxies
for these. You should also determine how to
incorporate and demonstrate such bias mitigation
measures in your HUDERAC. When this
information is ascertained, you should return to
your PCRA and revise it accordingly.
NOT APPLICABLE : No message
Model selection 41) Where complex or potentially
opaque models are under
consideration, will processes of
model selection include appropriate
and transparent considerations of
the AI system's explainability by
taking into account:
a. The normal expectations of intelligibility and
accessibility that accompany the function the
system will fulfil in the
sector or domain in which it
will operate
b. The availability of more interpretable algorithmic
models or techniques in
cases where the selection of
an opaque model poses
risks to th e physical,
psychological, or moral integrity of rights- holders or YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Where complex or potentially opaque
models are under consideration and
processes of model selection do not
include appropriate and transparent
considerations of the AI system's
explainability, this presents a moderate
modifiable risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
No trigger message
137 to their human rights and
fundamental freedoms
c. The availability of the
resources and capacity that
will be needed to
responsibly provide
supplementary methods of
explanation (e.g. simpler
surrogate models,
sensitivity analysis, or
relative feature important)
in cases where an opaque model is deemed appropriate and selected?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example 1
Example 2
Goals, properties, and areas to focus on in
your HUDERAC :
• Explainability (responsible model
selection). Model selection considerations
should include (a) the normal
expectations of intelligibility and
accessibility that accompany the function
the system will fulfil in the sector or
domain in which it will operate; (b) The
availability of more interpretable
algorithmic models or techniques in cases
where the selection of an opaque model
poses risks to the physical, psychological,
or moral integrity of rights -holders or to
their human rights and fundamental
freedoms; (c) The availability of the
resources and capacity that will be needed
to responsibly provide supplementary
methods of explanation (e.g. simpler
surrogate models, sensitivity analysis, or
relative feature important) in cases where
an opaque model is deemed appropriate
and selected.
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to incorporate measures to include
explainability considerations in your model
selection process. When this informa tion is
ascertained, you should return to your PCRA and
revise it accordingly.
NOT APPLICABLE : No message
138
Model Output & Implementation Context
Model inference 42) Will sufficient and transparently
reported processes be
implemented throughout the
project’s lifecycle to ensure that
the inferences generated from the model’s learning mechanisms are reasonable, fair, equitable, and do
not contain discriminatory
correlations or influences of lurking
or hidden proxies for
discriminatory features that may
act as significant factors in the
generation of its output?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSI DERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANN ING TO
DO THIS
UNSURE
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Major modifiable risk factor
• Sufficient and transparently reported
processes are not implemented
throughout the project’s lifecycle to
ensure that the inferences generated
from the model’s learning mechanisms
are reasonable, fair, equitable, and do
not contain discriminat ory correlations or
influences of lurking or hidden proxies for
discriminatory features that may act as
significant factors in the generation of its
output
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
139 NOT APPLICABLE
Example
• Fairness (non -discrimination and bias
mitigation) and accountability and process
transparency (traceability, auditability,
accessibility, and responsible
governance). You should involve domain
experts and social scientists to determine
the p otential sources of the hidden
discriminatory proxies or correlations that
may influence the outputs of the system,
and carry out comprehensive assessments
of algorithmic bias and of the differential
performance of the system as its outputs
relate to vuln erable and marginalized
groups.
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to ensure that the inferences generated from
the model’s learning mechanisms are reasonable,
fair, equitable, and d o not contain discriminatory
correlations or influences of lurking or hidden
proxies for discriminatory features that may act
as significant factors in the generation of its
output. You should also determine how to
incorporate and demonstrate such measures in
your HUDERAC. When this information is
ascertained, you should return to your PCRA and
revise it accordingly.
NOT APPLICABLE : No message
Model verification and validation 43) Will sufficient and transparently
reported processes of external peer
review and evaluation by YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
140 independent domain and technical
experts be included in the
evaluation, verification, and validation of the AI model?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Sufficient and transparently reported
processes of external peer review and
evaluation by independent domain and
technical experts are not included in the
evaluation, verification, and validation of
the AI model
Actions to take for your HUDERIA:
No trigg er message
Goals, properties, and areas to focus on in
your HUDERAC :
• Accountability and process transparency,
traceability, auditability, accessibility,
reproducibility, and responsible
governance. You should make sure to
incorporate peer review protocols from the
earliest stage of the project lifecycle, so
that your workflow documentation
processes deliberately enable seamless
external evaluation and reproducibility.
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to ensure that processes of external peer
141 review and evaluation by independent domain
and technical experts be incl uded in the
evaluation, verification, and validation of the AI
model. You should also determine how to
incorporate and demonstrate such measures in
your HUDERAC. When this information is
ascertained, you should return to your PCRA and
revise it accordingly.
NOT APPLICABLE : No message
44) Will processes for evaluating the
trained AI model include
transparently reported external
validation?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Where processes for evaluating the
trained AI model do not include
transparently reported external
validation, this presents a moderate
modifiable risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
No trigger message
142 Example
Goals, properties, and areas to focus on in
your HUDERAC :
• Accountability and process transparency
(traceability, auditability, reproducibility,
accessibility, and res ponsible
governance). You should make sure to
consult domain experts (and, where
appropriate, relevant users and
practitioners) during the external
validation process, so that you have a
realistic understanding of your system’s
operating environment and of the real -
world conditions that may hamper the
performance of the trained model.
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to ensure that processes of external
validation be included in the evaluation of the
trained AI model. You should also determine how
to incorporate and demonstrate such measures in
your HUDERAC. When this information is
ascertained, you should return to your PCRA and
revise it accordingly.
NOT APPLICABLE : No message
45) Will processes of monitoring the AI
system during its operation involve regular re -evaluations of YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
143 performance that are sufficient to
keep pace with real world changes
that may cause concept drifts and
shifts in underlying data
distributions?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• Processes of monitoring the AI system
during its operation do not involve
regular re -evaluations of performance
that are sufficient to keep pace with real
world changes that may cause concept
drifts and shifts in underlying data
distributions
Actions to take for your HUDERIA:
• Make sure to incorporate into your impact
assessment process— especially in its
post-development stage iteration —an
exploration and provisional determination
of the timeframes that are appropriate for
performance re -evaluation and impact re -
assessment.
Goals, properties, and areas to focus on in
your HUDERAC :
• Safety, accuracy and performance
metrics, reliability, robustness,
sustainab ility, change monitoring, and
reflection on context and impacts. You
should involve domain experts to
determine the potential sources of
distributional shifts or drifts and build
transparently processes of dynamic
144 assessment, re -assessment, external
valida tion, and monitoring into your
project lifecycle.
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to ensure that processes of monitoring your
AI system during its operation involve regular re -
evaluations of performance that are sufficient to
keep pace with real world changes that may
cause concept drifts and shifts in underlying data
distributions. You should also determine how to
incorporate and demonstrate such measures in
your HUDERAC. When this information is
ascertained, you should return to your PCRA and
revise it accordingly.
NOT APPLICABLE : No message
Model accuracy and performance
metrics 46) When performance metrics for the
AI system are considered,
determined, and reported, will the prioritization of error types (e.g.
false positives/negatives) be
a. Informed by the specific context of the use case and
by the potential effects of
differential error rates on
affected sub -populations (in
particular, on vulnerable or
protected groups) YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• When performance metrics for the AI
system are considered, determined, and
reported, the prioritization and reporting
of error types (e.g. false
positives/negatives) are (a) not informed
145 b. Clearly and accessibly
presented, so that the
rationale behind the chosen metrics is made explicit and
understandable in plain,
non-technical language?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
by the specific context of the use case
and the potential effects of differential
error rates on affected sub -populations
(in particular, on vulnerable or protected
groups) and (b) not clearly and
accessibly presented, so that the
rationale behind the chosen metrics is
made explicit and understandable in
plain, non-technica l language
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
• Safety, accuracy and performance
metrics, accessibility, explainability. You
should ensure that the selection and
reporting of err or types are informed by
the specific context of the use case and
the potential effects of differential error
rates on affected sub -populations.
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to ensure that the selection and reporting of
error types are informed by the specific context
of the use case and the potential effects of
differential error rates on affected sub -
populations. You should also determine how to
incorporate and demonstrate such measures in
your HUDERAC. When this information is
146 ascertained, you should return to your PCRA and
revise it ac cordingly.
NOT APPLICABLE : No message
47) When performance metrics for the
AI system are considered,
determined, and presented, will
the prioritization and reporting of
metrics beyond accuracy (e.g. sensitivity, precision, specificity) be
informed by the specific context of
the use case and its performance needs (e.g. a system whose
effective identification of rare
events is more critical than its
overall accuracy rate)?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE YES, WE HAVE PLANS IN PLACE TO DO THIS: No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Moderate modifiable risk factor
• When performance metrics for the AI
system are considered, determined, and
reported, the prioritization and reporting
of metrics beyond accuracy (e.g.
sensitivity, precision, specificity) is not
informed by the specific context of the
use case and its performance needs (e.g.
a system whose effective identification of
rare events is more critical than its
overall accuracy rate)
Actions to take for your HUDERIA:
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
147
Example
• Safety, accuracy and performance
metrics, accessibility, explainability. You
should ensure that the selection and
reporting of performance metrics for your
AI system beyond accuracy are informed
by its specific context and its performance
needs. You should also determine how to
incorporate and demonstrate this in your
HUDERAC.
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to ensure that the selection and reporting of
performance metrics for your AI system beyond
accuracy (e.g. se nsitivity, precision, specificity)
are informed by the specific context of the use
case and its performance needs. You should also
determine how to incorporate and demonstrate
such measures in your HUDERAC. When this
information is ascertained, you should return to
your PCRA and revise it accordingly.
NOT APPLICABLE : No message
System -User Interface and Human Factors Context
Implementers or users of the system 48) Will the project lifecycle incorporate sufficient processes to
ensure that the deployment of the
system does not harm the
physical, psychological, or moral YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
148 integrity of implementers or
adversely impact their dignity,
autonomy, and ability to make free, ind ependent, and well -
informed judgements?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDE RED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Major modifiable risk factor
• The project lifecycle does not incorporate
sufficient processes to ensure that the
deployment of the system does not harm
the physical, psychological, or moral
integrity of implementers or adversely
impact their dignity, autonomy, and ability
to make free, independent, and well -
informed judgements
Actions to take for your HUDERIA:
• Make sure to incorporate into your impact
assessment process thorough
consideration of the possible effects of the
AI system on the human rights and
fundamental freedoms of its
implementers. As affected rights -holders,
implementers should be included as
participants in the HUDERIA process. You
should also integrate any mitigation
measures for potential adverse impacts
identified into your HUDERAC.
Goals, properties, and areas to focus on in
your HUDERAC :
• Sustainability (reflection on context and
impac ts, and responsible implementation
and user training)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
149 expert and stakeholder input where appropriate,
how to ensure that the deployment of the system
does not harm the physical, psychological, or
moral integrity of implementers or adversely
impact their dignity, autonomy, and ability to
make free, independent , and well -informed
judgements. When this information is
ascertained, you should return to your PCRA and
revise it accordingly.
NOT APPLICABLE : No message
Level of automation/level of human
involvement and choice 49) Will implementers of the AI system
be sufficiently trained so that they
are able to fully understand
a. the strengths and limitation
of the system and its
outputs
b. the potential conditions of situational complexity,
uncertainty, anomaly, or
system failure that may
dictate the need for the
exercise of human
judgment, common sense, and practical intervention?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Major modifiable risk factor
• Implementers of the AI system are not
sufficiently trained so that they are able
to fully understand both the strengths
and limitation of the system (and its
outputs) and the potential conditions of
situational complexity, uncertainty,
anomaly, or system failure that may
dictate the need for the exercise of
human judgment, common sense, and
practical intervention
Actions to take for your HUDERIA:
150 WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
No trigger message
Goals, properties, and areas to focus on in
your HUDERAC :
• Responsible implementation and user
training.
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to ensure that implementers of the AI
system are sufficiently trained to fully understand
both the strengths and li mitation of the system
(and its outputs) and the potential conditions of
situational complexity, uncertainty, anomaly, or
system failure that may dictate the need for the
exercise of human judgment, common sense, and
practical intervention. You should also determine
how to incorporate and demonstrate such training
measures in your HUDERAC. When this
information is ascertained, you should return to
your PCRA and revise it accordingly.
NOT APPLICABLE : No message
50) If the AI system has a high level of automation or operational
'autonomy' and interacts with
rights -holders in ways that could
adversely impact their physical,
psychological, or moral integrity or YES: No message
NO: STOP.
151 harm their human rights and
fundamental freedoms, will
mechanisms of human control and intervent ion (e.g. human-in -the-
loop or human -on-the-loop) be
incorporated into the implementation of the system?
YES
NO
UNSURE
NOT APPLICABLE
Example
Prohibitive modifiable risk factor
Where AI systems have high levels of automation
or operational ‘autonomy’ and interact with
rights -holders in ways that could adversely
impact their physical, psychological, or moral
integrity or harm their human rights and
fundamental freedoms, implementation
processes must incorporate appropriate
mechanisms of human control and intervention.
Systems that fail to do this pose too high a risk
to impacted rights-holders and communities to be
developed or implemented in that form. In
revising your project, you should ensure that
processes of human control and intervention are
integrated into the system’s deployment and
determine how to demons trate this in your
HUDERAC. If you wish to continue with the PCRA
anyway at this time, select yes. If not, please
reconsider your project and return when you have
ascertained that your AI system will have
sufficient mechanisms that enable human control
and intervention.
UNSURE : STOP. Where AI systems have high
levels of automation or operational ‘autonomy’
and interact with rights -holders in ways that
could adversely impact their physical,
psychological, or moral integrity or harm their
human rights and fundamental freedoms,
implementation processes must incorporate
appropriate mechanisms of human control and
intervention. Systems that fail to do this pose too
high a risk to impacted rights -holders and
communities to be developed or implemented in
that form. Before taking any further steps i n the
152 proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to ensure that, if your AI system has a high
level of automation and operational 'autonomy'
and interacts with rights -holders in ways that
could adver sely impact their physical,
psychological, or moral integrity or harm their
human rights and fundamental freedoms,
processes of human control and intervention are
appropriately integrated into the system’s
deployment. If you wish to continue with the
PCRA anyway at this time, select yes. If not,
please reconsider your project and return when
you have ascertained that your AI system will
have sufficient mechanisms that enable human
control and intervention.
NOT APPLICABLE : No message
Rights & Freedoms Context
Respect for and protection of human
dignity 51) If the AI system interacts with
rights -holders (users or decision
subjects) in ways that could
adversely impact their physical,
psychological, or moral integrity or
harm their dignity , will they be
able to opt out of the interaction
and revert to human intervention?
YES
NO YES: No message
NO: STOP
Prohibitive modifiable risk factor
Where AI systems interact with rights -holders
(users or decision subjects) in ways that could
adversely impact their physical, psychological, or
moral integrity or harm their dignity but they are
not given the choice to opt out of the interaction
153
UNSURE
NOT APPLICABLE
Example
and rever t to human intervention, such systems
are considered to pose too high a risk to impacted
rights -holders and communities for you to
proceed. In revising your project, you should pay
close attention to ensuring the availability and
accessibility of an opt ou t option for impacted
rights -holders, and to demonstrating this in your
HUDERAC. You should also pay close attention in
your HUDERIA to assessing the impact of the
system on human dignity. If you wish to continue
with the PCRA anyway at this time, select y es. If
not, please reconsider your project and return
when you have ascertained that your AI system
will have sufficient mechanisms that ensure the
availability and accessibility of an opt out option
for impacted rights -holders .
UNSURE : STOP Before taking any further steps in
the proposed project, you should determine,
through expert and stakeholder input where
appropriate, whether, in cases where the AI
system will interact with rights -holders (users or
decision subjects), they will be given the choice
to opt out of the interaction and revert to human
intervention if they feel that their dignity risks
being violated . If no option is given, the system
is considered to pose too high a risk to impacted
rights -holders and communities for yo u to
proceed. In revising your project, you should pay
close attention to ensuring the availability and
accessibility of an opt out option for impacted
rights -holders, and to demonstrating this in your
HUDERAF. You should also pay close attention in
your H UDERIA to assessing the impacts of the
system on human dignity. If you wish to continue
with the PCRA anyway at this time, select yes. If
not, please reconsider your project and return
154 when you have ascertained that your AI system
will have sufficient mechanisms that ensure the
availability and accessibility of an opt out option
for impacted rights -holders .
NOT APPLICABLE : No message
52) If the AI system interacts directly
with rights -holders (users or
decision subjects) in ways that
could adversely impact their
physical, psychological, or moral
integrity or harm their dignity, will
they have pre -knowledge of this
interaction and be able to provide meaningful and informed consent
to participate in the interaction?
YES
NO
UNSURE
NOT APPLICABLE
Example
YES: No message
NO: STOP
Prohibitive modifiable risk factor
Where AI systems interact directly with rights-
holders (users or decision subjects) in ways that
could adversely impact their physical,
psychological, or moral integrity or harm their
dignity but they do not have pre -knowledge of
this interaction and are n ot able to provide
informed consent to participate in the interaction,
such systems are considered to pose too high a
risk to impacted rights- holders and communities
for you to proceed. In revising your project, you
should pay close attention to ensuring t hat
impacted rights- holders are provided clear
foreknowledge about interacting with the system
and are able to provide meaningful and informed
consent to participate in the interaction. This
should be demonstrated in your HUDERAC. You
should also pay close attention in your HUDERIA
to assessing the impacts of the system on human
dignity and human freedom and autonomy. If you
wish to continue with the PCRA anyway at this
155 time, select yes. If not, please reconsider your
project and return when you have ascert ained
that your AI system will have sufficient
mechanisms to ensure that impacted rights-
holders are provided clear foreknowledge about
interacting with the system and are able to
provide meaningful and informed consent to
participate in the interaction .
UNSURE : STOP Before taking any further steps in
the proposed project, you should determine,
through expert and stakeholder input where
appropriate, whether, in cases where the AI
system interacts directly with rights-holders
(users or decision subjects) in ways that could
adversely impact their physical, psychological, or
moral integrity or harm their dignity, they will
have pre -knowledge of this interaction and be
able to provide meaningful and informed consent
to participate in the interaction . If not, the system
is considered to pose too high a risk to impacted
rights -holders and communities for you to
proceed. In revising your project, you should pay
close attention to ensuring that impacted rights -
holders are provided clear foreknowledge about
interac ting with the system and are able to
provide meaningful and informed consent to
participate in the interaction. This should be
demonstrated in your HUDERAC. You should also
pay close attention in your HUDERIA to assessing
the impacts of the system on human dignity,
human freedom, and autonomy. If you wish to
continue with the PCRA anyway at this time,
select yes. If not, please reconsider your project
and return when you have ascertained that your
AI system will have sufficient mechanisms to
ensure that imp acted rights -holders are provided
156 clear foreknowledge about interacting with the
system and are able to provide meaningful and
informed consent to participate in the interaction .
NOT APPLICABLE : No message
Protection of human freedom and
autonomy 53) Will the project lifecycle
incorporate sufficient and
transparently reported processes
to ensure the ability of rights -
holders
a. to make free and well -
informed judgements about the reasonableness and
justifiability of the outputs
of the AI system
b. to effectively contest and
challenge decisions informed and/or made by
that system
c. to demand that such decisions be reviewed by a
person?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Major modifiable risk factor
• Where project lifecycles do not
incorporate sufficient processes to ensure
the ability of rights -holders to make free,
independent, and well -informed
judgements about the reasonableness and
justifiability of the outputs of the AI
system, to effectively con test and
challenge decisions informed and/or made
by that system, and to demand that such
decision be reviewed by a person, this
presents a major modifiable risk factor
for adverse impacts on the human rights
and fundamental freedom of persons.
Actions to take for your HUDERIA:
157 NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example 1
Example 2
• Make sure to incorporate into your impact
assessment process thorough
consideration of the possible effects of the
AI system on the freedom and autonomy
of affected rights -holders and on their
abilities to make free, independent, a nd
well-informed decisions.
Goals, properties, and areas to focus on in
your HUDERAC :
• Sustainability (reflection on context and
impacts) and accountability and process
transparency (traceability, auditability,
accessibility, and responsible governance)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how you can incorporate sufficient processes in
your project lifecycles to ensure the ability of
rights -holder s to make free, independent, and
well-informed judgements about the
reasonableness and justifiability of the outputs of
the AI system, to effectively contest and
challenge decisions informed and/or made by
that system, and to demand that such decision be
reviewed by a person. You should also determine
how to incorporate and demonstrate such
measures in your HUDERAC. When this
information is ascertained, you should return to
your PCRA and revise it accordingly.
NOT APPLICABLE : No message
158 Non-discrimination, fairness, and
equality 54) Will sufficient and transparently
reported processes be
implemented throughout the project’s lifecycle to ensure that
the AI system, in both its production and use, mitigates
possible sources of bias and
discrimin atory patterns in each of
the following:
a. The datasets used to train the
system
b. The decisions made to build the
system
c. The way the problem to which the system responds is
understood, formulated, and
framed
d. The way the target variable and its measurable proxy are
defined
e. The way the system's algorithmic model(s) is
selected, and its parameters
tuned and adjusted
f. The way that model is trained, tested, and validated
g. The way the system is implemented, and the way
users are trained to deploy it
h. The choices made about
monitoring, updating,
repurposing, or deprovisioning the system?
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Major modifiable risk factor
• Where project lifecycles do not
incorporate sufficient and transparently
reported processes to ensure that AI
systems, in both their production and use,
mitigate possible sources of bias and
discriminatory patterns, this presents a
major modifiable risk factor for
adverse impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
• Make sure to incorporate into your impact
assessment process thorough
consideration of the principles and
priorities of non -discrimination, fairness,
and equality. You should also pay special
attention in your HUDERIA to assessing
the impacts of the system on historically
marginalized and vulnera ble groups and
on groups with protected characteristics.
Goals, properties, and areas to focus on in
your HUDERAC :
159 YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
• Fairness (non -discrimination and bias
mitigation) and accountability and process
transparency (traceability, auditability,
accessibility, a nd responsible
governance).
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to incorporate sufficient and transparently
reported processes to ensure that your AI
systems, in both its production and use, mitigates
possible sources of bias and discriminatory
patterns . You should also determine how to
incorporate and demonstrate such measures in
your HUDERAC. When this information is
ascertained, you should return to your PCRA and
revise it accordingly.
NOT APPLICABLE : No message
55) Will sufficient and transparently
reported processes be
implemented throughout the
project’s lifecycle to ensure that
the system, in both its production
and use, promotes diversity and
inclusiveness
a. In the composition of innovation team building the system YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Major modifiable risk factor
• Where project lifecycles do not
incorporate sufficient and transparently
reported processes to ensure that AI
160 b. In the expertise, insight,
and knowledge drawn upon
to develop it
c. In the individuals and
groups able to access its
benefits?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
systems, in both their production and use,
promote diversity and inclusiveness, this
presents a major modifiable risk fact or
for adverse impacts on the human rights
and fundamental freedom of persons.
Actions to take for your HUDERIA:
• Make sure to incorporate into your impact
assessment process thorough
consideration of the principles and
priorities of diversity, inclusiveness, non -
discrimination, fairness, and equality. You
should also pay special attention in your
Stakeholder Engageme nt Process to
assessing team positionality and the need
for diversification of team composition,
expertise, insight, and knowledge.
Goals, properties, and areas to focus on in
your HUDERAC :
• Fairness (non -discrimination, bias
mitigation, diversity, and in clusiveness)
and sustainability (reflection on context
and impacts and stakeholder
engagement and involvement)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to ensure that your AI system, in both its
production and use, promotes diversity and
inclusiveness. You should als o determine how to
incorporate and demonstrate such measures in
your HUDERAC. When this information is
161 ascertained, you should return to your PCRA and
revise it accordingly.
NOT APPLICABLE : No message
Data protection and privacy context 56) If the AI system will be processing
personal data, will the project
lifecycle incorporate transparently
reported mechanisms that
demonstrate its compliance with data protection and privacy law
and the principles set out in the
Council of Europe's Conventio n
108+?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Major modifiable risk factor
• Where project lifecycles for AI systems
that process personal data do not
incorporate transparently reported
mechanisms that demonstrate their
compliance with data protection and
privacy law and the principles set out in
the Council of Europe's Convention 108+,
this presents a major modifiable risk
factor for adverse impacts on the human
rights and fundamental freedom of
persons.
Actions to take for your H UDERIA:
• Make sure to pay special attention in your
HUDERIA to assessing the impacts of the
system on data protection and the respect
for private and family life, drawing on your
162 Example
Data Protection Impact Assessment
(DPIA), if applicable.
Goals, properties, and areas to focus on in
your HUDERAC :
• Data protection and privacy
(transparency, proportionality, consent or
legitimate basis for processing, data
quality, data security, purpose limitation,
accountability, data minimisation, data
protection and privacy by design,
fairness, and lawfulness) . You should
consult data protection experts to
examine and determine the measures
that your project needs to take to
process personal data lawfully and in
accordance with the principles set out in
Convention 108+, including the
completion of a Data Protection Impact
Assessment (DPIA)(undertaken either as
part of transparent reporting or as a
compliance mechanism or as both). You
should include your DPIA in your
HUDERAC, if applicable, to evidence your
compliance and best practices.
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
input from data protection experts, the measures
that your project needs to take to process
personal data lawfully and in accordance with the
principles set out in Convention 108+, including
the completion of a Data Protection Impact
Assessment (DPIA)(undertaken eit her as part of
transparent reporting or as a compliance
mechanism or as both). When this information is
163 ascertained, you should return to your PCRA and
revise it accordingly.
NOT APPLICABLE : No message
57) If the AI system will be processing
personal data, will the project
lifecycle incorporate transparently reported mechanisms that
demonstrate its respect for the
rights of data subjects and
conformity to the additional
obligations of data controllers and
processors as set out in Articles 9
and 10 of C onvention 108+?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Major modifiable risk factor
• Where project lifecycles for AI systems
that process personal data do not
incorporate transparently reported
mechanisms that demonstrate their
respect for the rights of data subjects and
conformity to the additional obligations of
data controllers and processors as set out
in Articles 9 and 10 of Convention 108+,
this presents a major modifiable risk
factor for adverse impacts on the human
rights and fundamental freedom of
persons.
Actions to take for your HUDERIA:
• Make sure to pay special attention in your
HUDERIA to assessing the impacts of the
system on data protection, the rights of
the data subject, and the respect for
164 Example
private and family life, drawing on your
DPIA, if applicable.
Goals, properties, a nd areas to focus on in
your HUDERAC :
• Data protection and privacy (respect for
the rights of data subjects and
conformity to additional obligations —
Articles 9 and 10, Convention 108+) You
should consult data protection experts to
examine and determine the measures
that your project needs to take to
process personal data lawfully and in
accordance with the principles set out in
Convention 108+, including the
completion of a Data Protection Impact
Assessment (DPIA) (undertaken either as
part of transparent reporting or as a
compliance mechanism or as both). You
should include your DPIA in your
HUDERAC, if applicable, to evidence your
compliance and best practices.
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
input from data protection experts, the measures
that your project needs to take to process
personal data in accordance with obligations of
data controllers and processors as set out in
Articles 9 and 10 of Convention 108+ . When this
information is ascertained, you should return to
your PCRA and revise it accordingly .
NOT APPLICABLE : No message
165 58) If the AI system will be processing
sensitive data as defined in
Convention 108+, will the project lifecycle incorporate transparently
reported mechanisms that
demonstrate its compliance with appropriate safeguards?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Major modifiable risk factor
• Where project lifecycles for AI systems
that process sensitive data as defined in
Convention 108+ do not incorporate
transparently reported mechanisms that
demonstrate their compliance with
appropriate safeguards , this presents a
major modifiable risk factor for
adverse impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
• Make sure to pay special attention in your
HUDERIA to assessing the impacts of the
system on data protection, the rights of
the data subject, and the respect for
private and family life, drawing on your
DPIA, if applicable.
Goals, properties, and areas to focus on in
your HUDERAC :
• Data protection and privacy (responsible
handling of sensitive data). You should
consult data protection experts to
166 examine and determine the measures
that your project needs to take to
process personal data lawfully and in
accordance with the principles set out in
Convention 108+, including the
completion of a Data Protecti on Impact
Assessment (DPIA)(undertaken either as
part of transparent reporting or as a
compliance mechanism or as both). You
should include your DPIA in your
HUDERAC, if applicable, to evidence your
compliance and best practices.
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
input from data protection experts, the measures
that your project needs to take to process
sensitive data as defined in Convention 108+ in
compliance with appropriate safeguards. Whe n
this information is ascertained, you should return
to your PCRA and revise it accordingly .
NOT APPLICABLE : No message
59) If the AI system is designed with
the purpose or function of
individual- targeted curation,
profiling, prediction, or behavioural steering, will affected rights -
holders be able to obtain from the data controller sufficient
information concerning:
YES: No message
NO: STOP. AI systems that are designed with the
purpose or function of individual -targeted
curation, profiling, prediction, or behavioural
steering and that do not make available to
affected rights -holders sufficient information
concerning data use, output rationale , purpose,
and conveyance of processing results are
167 a. The use of their personal
data and the categories
used in the system's processing
b. An explanation of the rationale behind the output
of the processing in plain,
non-technical language
c. The purpose of the curation, profiling, prediction,
classification, or behavioural
steering
d. The categories of persons or
bodies to whom personal
data, the profile or the
result of the processing may
be communicated?
YES
NO
UNSURE
NOT APPLICABLE
Example
considered to pose too high a risk to impacted
rights -holders and communities for you to
proceed . In revising your project, you should pay
close attention to ensuring that impacted rights -
holders are prov ided clear, accessible, and
sufficient information concerning data use, output
rationale, purpose, and conveyance of processing
results You should consult data protection experts
to examine and determine the measures that
your project needs to take to acco mplish this. If
you wish to continue with the PCRA anyway at
this time, select yes. If not, please reconsider
your project and return when you have
ascertained that your AI system will have
sufficient mechanisms to ensure that impacted
rights -holders are provided clear, accessible, and
sufficient information concerning data use, output
rationale, purpose, and conveyance of processing
results .
UNSURE : STOP Before taking any further steps in
the proposed project, you should determine,
through input from data protection experts,
whether, in the case that your AI system will be
designed with the purpose or function of
individual- targeted curation, profiling, prediction,
or behavioural steering, you will make available
to affected rights -holders sufficient information
concerning data use, output rationale, purpose,
and conveyance of processing results. If not, the
system is considered to pose too high a risk to
impacted rights -holders and communities for you
to proceed. In revising your project, you should
pay close attention to ensuring that impacted
rights -holders are provided clear, accessible, and
sufficient information concerning data use, output
rationale, purpose, and conveyance of processing
168 results You should consult data protection experts
to examin e and determine the measures that
your project needs to take to accomplish this. If
you wish to continue with the PCRA anyway at
this time, select yes. If not, please reconsider
your project and return when you have
ascertained that your AI system will hav e
sufficient mechanisms to ensure that impacted
rights -holders are provided clear, accessible, and
sufficient information concerning data use, output
rationale, purpose, and conveyance of processing
results .
NOT APPLICABLE : No message
Accountability and access to justice 60) Will sufficient and transparently
reported processes be
implemented throughout the
project’s lifecycle to ensure end -to-
end accountability across the
production and use of the AI
system? Namely, will it ensure that
the system
a. Is auditable by design,
allowing for the end -to-end
traceability and oversight of
its processes of production and use
b. Establishes a continuous chain of human
responsibility for all roles
involved in the project
lifecycle to allow for end -to-YES: No message
NO:
Major modifiable risk factor
• Where project lifecycles do not
incorporate sufficient and transparently
reported processes to ensure end -to-end
accountability across the production and
use of AI systems, this presents a major
modifiable risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
No trigger message
169 end answerability in the
event that the human rights
or fundamental freedoms of affected individuals have
been negatively impacted
c. Enables designated public authorities and third
parties, where appropriate,
to assess its compliance
with existing legislation,
regulation, and standards
instruments across the
entire project lifecycle?
YES
NO
UNSURE
NOT APPLICABLE
Example
Goals, properties, and areas to focus on in
your HUDERAC :
• Accountability and process transparency
(traceability, auditability, accessibility,
and responsible governance).
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to incorporate transparently reported
processes to ensure sufficient accountability
across the production and use o f the AI system.
You should pay special attention to finding out
how to assure, in your HUDERAC, the goals of
accountability, transparency, and explainability
and how to demonstrate this in a clear and
accessible way. When this information is
ascertained, you should return to your PCRA and
revise it accordingly.
NOT APPLICABLE : No message
61) Will sufficient and transparently
reported processes be
implemented throughout the
project’s lifecycle to ensure that
affected persons whose human
rights or fundamental freedoms
have been adversely impacted by YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
170 the AI system have actionable
redress and effective remedy by
a. Providing sufficient and
meaningful information that
indicates when the system
is being used and how and
where to complain in the
event of an adverse impact
on human rights and
fundamental freedoms
b. Facilitating access of
affected rights -holders to
sufficient and meaningful information about the
processes behind the
design, development, and
deployment of the system
and about the rationale
underlying the outcomes of
its processing
c. Employing algorithmic
models that are
appropriately i nterpretable
or explainable (especially regarding discriminatory
proxies or inferences that
may be embedded in
trained machine learning
systems) given the risks to human rights, fundamental
freedoms, democracy, and
the rule of law they may pose?
Major modifiable risk factor
• Where project lifecycles do not implement
sufficient and transparently reported
processes to ensure that affected
individuals have actionable redress and
effective remedy in cases where their
human rights or fundamental freedoms
have been adversely impacted, this
presents a major modifiable risk factor
for adverse impacts on the human rights
and fundamental freedom of persons.
Actions to take for your HUDERIA:
• Make sure to pay close attention to
assessing the impacts of the system on
human freedom and autonomy and on
the ability of rights -holders to make free,
independent, and well -informed
decisions.
Goals, properties, and areas to focus on in
your HUDERAC :
• Accountability and process transparency
(traceability, auditability, accessibility,
and responsible governance),
explaina bility (interpretability and
accessible rationale explanation). This
should include incorporating into your
HUDERAC evidence that you will be able
to (a) provide sufficient and meaningful
information that indicates when the
system is being used and how and where
to complain in the event of an adverse
171
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
impact on human rights and fundamental
freedoms, (b) facilitate access of affected
rights -holders to sufficient and
meaningful information about the
processes behind the design,
development, and deployment of t he
system and about the rationale
underlying the outcomes of its
processing, and (c) employ algorithmic
models that are appropriately
interpretable or explainable (especially
regarding discriminatory proxies or
inferences that may be embedded in
trained machine learning systems) given
the risks to human rights, fundamental
freedoms, democracy, and the rule of law
they may pose.
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to incorporate transparently reported
processes to ensure that affected individuals have
actionable redress and effective remedy in cases
where their human rights or fundamental
freedoms have bee n adversely impacted. You
should pay special attention to finding out how to
assure, in your HUDERAC, the goals of
accountability, transparency, and explainability
and how to demonstrate this in a clear and
accessible way. When this information is
ascertai ned, you should return to your PCRA and
revise it accordingly.
NOT APPLICABLE : No message
172 62) If the AI system is used in the field
of justice and law enforcement, will
meaningful information be provided to affected rights -holders
about the existence and use of the
system, its role within law
enforcement and the judicial
process, and the right to challenge
the decisions informed or made
thereby?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Major modifiable risk factor
• Where AI system are being used in the
field of justice and law enforcement and
meaningful information is not provided to
affected rights -holders about the
existence and use of the system, its role
within law enforcement and the judicial
process, and the right to challenge the
decisions informed or made thereby, this
presents a major modifiable risk factor
for adverse impacts on the human rights
and fundamental freedom of persons.
Actions to take for your HUDERIA:
• Make sure to pay special attention in your
HUDER IA to assessing the potential
impacts of the system on individual
dignity, freedom, and autonomy, and on
how effects on legal processes and
institutions may damage the rule of law.
Goals, properties, and areas to focus on in
your HUDERAC :
173 • Accountability and process transparency
(traceability, auditability, accessibility,
and responsible governance) and
explainability (interpretability and
accessible rationale explanation)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how meaningful information can be provided to
affected rights- holders about the existence and
use of the system, its ro le within law enforcement
and the judicial process, and the right to
challenge the decisions informed or made
thereby. You should pay special attention to
finding out how to assure, in your HUDERAC, the
goals of accountability, transparency, and
explainabi lity and how to demonstrate this in a
clear and accessible way. When this information
is ascertained, you should return to your PCRA
and revise it accordingly .
NOT APPLICABLE : No message
63) If the AI system is used in the field
of justice and law enforcement, will
sufficiently and transparently
reported processes be
implemented throughout the
project’s lifecycle to ensure that its
deployment is in line with the
essential requirements of impac ted
individuals' right to a fair trial
(equality of arms, right to a natural
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Major modifiable risk factor
174 judge established by law, the right
to an independent and impartial
tribunal, and respect for the adversarial process)?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
• Where AI system are being used in the
field of justice and law enforcement and
sufficiently and transparently reported
processes are not implemented
throughout the project’s lifecycle to
ensure that its deployment is in line with
the essential requirements of impacted
individuals' right to a fair trial (equality of
arms, right to a natural judge established
by law, the right to an independent and
impartial tribunal, and respect for the
adversarial process), this presents a
major modifiable risk factor for
adverse impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
• Make sure to pay special attention in your
HUDERIA to assessing the potential
impacts of the system on individual
dignity, freedom, and autonomy, and on
how effects on legal processes and
institutions may damage the rule of law.
Goals, properties, and areas to focus on in
your HUDERAC :
• Accountability and process transparency
(traceability, auditability, accessibility,
and responsible governance) and
explainability (interpretability and
accessible rationale explanation)
175 UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to incorporate transparently reported
processes to ensure that its deployment is in line
with the essential requirements of impacted
individuals' right to a fair trial. You should pay
special attention to finding out how to assure, in
your HUDERAC, th e goals of accountability,
transparency, and explainability and how to
demonstrate this in a clear and accessible way.
When this information is ascertained, you should
return to your PCRA and revise it accordingly .
NOT APPLICABLE : No message
Social and economic rights 64) If the AI system is being used in
the areas of employment decisions,
recruitment, worker management, or the distribution of work, will the
project lifecycle incorporate
transparently reported processes
to ensure that the system protects
a. The dignity of work and the right to just working
conditions as set out in
Article 2 of the European
Social Charter
b. The right to safe and healthy working conditions
as set out in Article 3 of the
Europeans Social Charter YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Major modifiable risk factor
• Where AI systems are being used in the
areas of employment decisions,
recruitment, worker management, or the
distribution of work and project lifecycles
do not incorporate transparently reported
processes to ensure that the system
protects the dignity of work, t he right to
just working conditions, the right to safe
176 c. The right to organize as set
out in Article 5 of the
European Social Charter?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example a
Example b
Example c
and healthy working conditions, and the
right to organize as set out in the
European Social Charter, this presents a
major modifiable risk factor for
adverse impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
• Make sure to pay close attention, in your
HUDERIA, to assessing the impacts of the
system on social and economic rights,
equality, dignity, and the right to physical,
psychological, or moral integrit y.
Goals, properties, and areas to focus on in
your HUDERAC :
• Fairness (equality, non -discrimination,
bias mitigation, diversity, and
inclusiveness) and sustainability
(reflection on context and impacts and
stakeholder engagement and
involvement)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to incorporate transparently reported
processes to ensure that the system protects the
dignity of work, the r ight to just working
conditions, the right to safe and healthy working
conditions, and the right to organize as set out in
the European Social Charter. When this
information is ascertained, you should return to
your PCRA and revise it accordingly .
177 NOT A PPLICABLE : No message
65) If the AI system is being used in
the context of social security
decisions or social welfare
administration, will sufficient and transparently reported processes
be implemented throughout the
project’s lifecycle to ensure that
the system protects the right to
social security as set out in Article
12 of the European Social Charter?
YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Major modifiable risk factor
• Where AI systems are being used in the
context of social security decisions or
social welfare administration and project
lifecycles do not incorporate transparently
reported processes to ensure that the
system protects the right to social security
as set out in Article 12 of the European
Social Charter, this presents a major
modifiable risk factor for adverse
impacts on the human rights and
fundamental freedom of persons.
Actions to take for your HUDERIA:
• Make sure to pay close attention, in your
HUDERIA, to as sessing the impacts of the
system on social and economic rights,
equality, dignity, and the right to physical,
psychological, or moral integrity .
Goals, properties, and areas to focus on in
your HUDERAC :
178
• Fairness (equality, non -discrimination,
bias mitig ation, diversity, and
inclusiveness) and sustainability
(reflection on context and impacts and
stakeholder engagement and
involvement)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to incorporate transparently reported
processes to ensure that the system protects the
right to social security as set out in Article 12 of
the European Social Charter. When this
information is ascertained, you should return to
your PCRA and revise it accordingly .
NOT APPLICABLE : No message
66) If the AI system is being used to
automate decisions regarding the
provision of healthcare and medical
assistance, will sufficient and
transparently reported processes
be implemented throughout the
project’s lifecycle to ensure that the system protects the rights to the protection of health and to
social and medical assistance as
set out in Articles 11 and 13 of the European Social Charter?
YES, WE HAVE PLANS IN PLACE TO DO THIS : No
message
WE HAD NOT CONSIDERED IT, BUT MAY DO THIS
/ WE HAD NOT CONSIDERED IT, BUT ARE
UNLIKELY TO DO THIS / NO, WE ARE NOT
PLANNING TO DO THIS:
Major modifiable risk factor
• Where AI systems are being used to
automate decisions regarding the
provision of healthcare and medical
assistance and project lifecycles do not
incorporate transparently reported
179 YES, WE HAVE PLANS IN PLACE
TO DO THIS
WE HAD NOT CONSIDERED IT,
BUT MAY DO THIS
WE HAD NOT CONSIDERED IT,
BUT ARE UNLIKELY TO DO THIS
NO, WE ARE NOT PLANNING TO
DO THIS
UNSURE
NOT APPLICABLE
Example
processes to ensure that the system
protects the rights to the protection of
health and to social and medical
assistance as set out in Articles 11 and 13
of the European Social Charter , this
presents a major modifiable risk factor
for adverse impacts on the human rights
and fundamental freedom of persons.
Actions to take for your HUDERIA:
• Make sure to pay close attention, in your
HUDERIA, to ass essing the impacts of the
system on social and economic rights,
equality, dignity, and the right to physical,
psychological, or moral integrity .
Goals, properties, and areas to focus on in
your HUDERAC :
• Fairness (equality, non -discrimination,
bias mitigation, diversity, and
inclusiveness) and sustainability
(reflection on context and impacts and
stakeholder engagement and
involvement)
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
how to incorporate transparently reported
processes to ensure that the system protects the
right to the protection of health and to social and
medical assistance as set out in Articles 11 and
13 of the European Social C harter . When this
180 information is ascertained, you should return to
your PCRA and revise it accordingly .
NOT APPLICABLE : No message
Section 2: Risks of adverse impacts on the human rights and fundamental freedom of
persons, democracy, and the rule of law
In the following section, you will be asked to provide a preliminary estimation of the likelihood and
gravity potential of the adverse impacts that your system could have on human rights, democracy ,
and the rule of law. Draw on the stakeholder analysis and the determination of salient rights you
carried out as part of your initial PS reporting to think through the relevance , likelihood , and
gravity potential of the harms indicated in the prompts. Each prompt will be formed as a statement
with likelihood/probability options below.
Principle s/Priorities Statement Trigger messages for summary
report
Respect for and protection of human
dignity 67) The AI system could interact
with rights -holders in ways that
create confusion or uncertainty
about whether they are interacting
with a computational technology.
UNLIKELY
POSSIBLE
LIKELY UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the AI system you are planning to build
will interact with rights -holders in ways that could
create confusion o r uncertainty about whether
they are interacting with a computational
technology . When this information is ascertained,
181
VERY LIKELY
UNSURE
N/A
Example you should return to your PCRA and revise it
accordingly.
68) If you responded with any of the
options besides N/A or Unsure in
the previous question, how would
you characterize the gravity
potential of the harm? (For details
on gravity potential, please refer to
the relevant section of the user’s
guide ).
CATASTROPHIC HARM
CRITICAL HARM
SERIOUS HARM
MODERATE OR MINOR HARM
182 69) The AI system could interact with
rights -holders (users or decision
subjects) in ways that expose
them to humiliation (being put in a
state of helplessness or
insignificance; being dehumanized
or losing a sense of individual
identity).
UNLIKELY
POSSIBLE
LIKELY
VERY LIKELY
UNSURE
N/A
Example
UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the AI system you are planning to build
could interact with rights -holders (users or
decision subjects) in ways that expose them to
humiliation (being put in a state of helplessness
or insignificance; being dehumanized or losing a
sense of individual identity) . When this
information is ascertained, you should return to
your PCRA and revise it accordingly .
183 70) If you responded with any of the
options besides N/A or Unsure in
the previous question, how would
you characterize the gravity
potential of the harm? (For details
on gravity potential, please refer to
the relevant section of the user’s
guide ).
CATASTROPHIC HARM
CRITICAL HARM
SERIOUS HARM
MODERATE OR MINOR HARM
71) The AI system could interact with rights -holders (users or decision
subjects) in ways that expose them to instrumentalization or
objectification (treating an
individual solely as exchangeable,
as a statistical aggregate, as a
means to an end, or as an obje ct
to be freely manipulated or
steered).
UNLIKELY
POSSIBLE
LIKELY
UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the AI system you are planning to build
could interact with rights -holders (users or
decision subjects) in ways that expose them to
instrumentalization or objectification (treating an
individual solely as exchangeable, as a statistical
aggregate, as a means to an end, or as an object
to be freely manipulated or steered) . When this
information is ascertained, you should return to
your PCRA and revise it accordingly .
184 VERY LIKELY
UNSURE
N/A
Example
72) If you responded with any of the
options besides N/A or Unsure in
the previous question, how would
you characterize the gravity
potential of the harm? (For details on gravity potential, please refer to
the relevant section of the user’s
guide ).
CATASTROPHIC HARM
CRITICAL HARM
SERIOUS HARM
MODERATE OR MINOR HARM
73) The AI system could interact with rights -holders (users or decision
subjects) in ways that expose UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
185 them to displacement, redundancy,
or a sense of worthlessness in
regard to their participation in work life, creative life, or the life of
the community.
UNLIKELY
POSSIBLE
LIKELY
VERY LIKELY
UNSURE
N/A
Example UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the AI system you are planning to build
could interact with rights -holders (users or
decision subjects) in ways that expose them to
displacement, redundancy, or a sense of
worthlessness in regard to their participation in
work life, creative life, or the life of the
community . When this information is ascertained,
you should return to your PCRA and revise it
accordingly.
74) If you responded with any of the
options besides N/A or Unsure in
the previous question, how would
you characterize the gravity
potential of the harm? (For details on gravity potential, please refer to
the relevant section of the user’s
guide ).
CATASTROPHIC HARM
CRITICAL HARM
186 SERIOUS HARM
MODERATE OR MINOR HARM
Protection of human freedom and
autonomy 75) The AI system could interact with
rights -holders in ways that
adversely affect or hinder their
abilities to make free, independent,
and well -informed decisions about
their lives or about the system’s
outputs.
UNLIKELY
POSSIBLE
LIKELY
VERY LIKELY
UNSURE
N/A
Example UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
Unsure: Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the AI system you are planning to build
will interact with rights -holders in ways that could
adversely affect or hinder their abilities to make
free, independent, and well -informed decisions
about their lives or about the system’s outputs .
When this information is ascertained, you should
return to your PCRA and revise it accordingly .
187 76) If you responded with any of the
options besides N/A or Unsure in
the previous question, how would
you characterize the gravity
potential of the harm? (For details
on gravity potential, please refer to
the relevant section of the user’s
guide ).
CATASTROPHIC HARM
CRITICAL HARM
SERIOUS HARM
MODERATE OR MINOR HARM
77) The AI system could interact with
rights -holders in ways that
adversely affect or hinder their
capacities to flourish , to fully
develop themselves, and to pursue their own freely determined life
plans.
UNLIKELY
POSSIBLE
LIKELY
VERY LIKELY UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the AI system you are planning to build
will interact with rights -holders in ways that could
adversely affect o r hinder their capacities to
flourish, to fully develop themselves, and to
pursue their own freely determined life plans.
When this information is ascertained, you should
return to your PCRA and revise it accordingly .
188 UNSURE
N/A
Example
78) If you responded with any of the
options besides N/A or Unsure in
the previous question, how would
you characterize the gravity
potential of the harm? (For details
on gravity potential, please refer to
the relevant section of the user’s
guide ).
CATASTROPHIC HARM
CRITICAL HARM
SERIOUS HARM
MODERATE OR MINOR HARM
79) The deployment of the system could result in the arbitrary
deprivation of rights -holders'
physical freedom or personal security, or the denial of their
freedoms of expression, thought,
conscience, or assembly. UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be t abulated in the risk
index table )
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
189
UNLIKELY
POSSIBLE
LIKELY
VERY LIKELY
UNSURE
N/A
Example whether the AI system you are planning to build
will interact with rights -holders in ways that could
result in the arbitrary deprivation of rights -
holders' physical freedom or personal security, or
the denial of their freedoms of expression,
thought, conscience, or assembly . When this
information is ascertained, you should return to
your PCRA and revise it accordingly .
80) If you responded with any of the
options besides N/A or Unsure in
the previous question, how would
you characterize the gravity
potential of the harm? (For details
on gravity potential, please refer to the relevant section of the user’s guide ).
CATASTROPHIC HARM
CRITICAL HARM
SERIOUS HARM
MODERATE OR MINOR HARM
190 Prevention of harm and protection of
the right to life and physical,
psychological , and moral integrity 81) The AI system could interact with
rights -holders in ways that deprive
them of their right to life or their
physical, psychological, or moral
integrity.
UNLIKELY
POSSIBLE
LIKELY
VERY LIKELY
UNSURE
N/A
Example UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
UNSURE: Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the AI system you are planning to build
will interact with rights -holders in ways that could
deprive them of their right to life or their physical,
psychological, or moral integrity. When this
information is ascertained, you should return to
your PCRA and revise it accordingly.
82) If you responded with any of the
options besides N/A or Unsure in
the previous question, how would
you characterize the gravity
potential of the harm? (For details on gravity potential, please refer to
the relevant section of the user’s
guide ).
CATASTROPHIC HARM
CRITICAL HARM
191 SERIOUS HARM
MODERATE OR MINOR HARM
83) The AI system could interact with
the environment (either in the
processes of its production or in its
deployment) in ways that harm the
biosphere or adversely impact the health of the planet.
UNLIKELY
POSSIBLE
LIKELY
VERY LIKELY
UNSURE
N/A
Example UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
UNSURE: Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the AI system you are planning to build
will interact with the environment (either in the
processes of its production or in its deployment)
in ways that could harm the biosphere or
adversely impact the health of the planet . When
this information is ascertained, you should return
to your PCRA and revise it accordingly .
192 84) If you responded with any of the
options besides N/A or Unsure in
the previous question, how would
you characterize the gravity
potential of the harm? (For details
on gravity potential, please refer to
the relevant section of the user’s
guide ).
CATASTROPHIC HARM
CRITICAL HARM
SERIOUS HARM
MODERATE OR MINOR HARM
Non-discrimination, fairness, and
equality 85) The AI system (either in the
processes of its production or in its
deployment) could result in
discrimination, have discriminatory
effects on impacted rights- holders,
or perform differentially for different groups in discriminatory
or harmful ways —including for
intersectional groups where
vulnerable or protected
characteristics converge.
UNLIKELY
POSSIBLE
LIKELY
UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
UNSURE: Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the AI (either in the processes of its
production or in its deployment) could result in
discrimination, have disc riminatory effects on
impacted rights -holders, or perform differentially
for different groups in discriminatory or harmful
ways—including for intersectional groups where
vulnerable or protected characteristics converge.
When this information is ascertained , you should
return to your PCRA and revise it accordingly .
193 VERY LIKELY
UNSURE
N/A
Example
86) If you responded with any of the
options besides N/A or Unsure in
the previous question, how would
you characterize the gravity
potential of the harm? (For details
on gravity potential, please refer to
the relevant section of the user’s guide ).
CATASTROPHIC HARM
CRITICAL HARM
SERIOUS HARM
MODERATE OR MINOR HARM
87) The use of the AI system could
expand existing inequalities in the
communities it affects or augment
historical patterns of inequity and UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
194 discrimination in these
communities.
UNLIKELY
POSSIBLE
LIKELY
VERY LIKELY
UNSURE
N/A
Example UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the AI system you are planning to build
could expand existing inequalities in the
communities they affect or au gment historical
patterns of inequity and discrimination in these
communities . When this information is
ascertained, you should return to your PCRA and
revise it accordingly.
88) If you responded with any of the
options besides N/A or Unsure in
the previous question, how would
you characterize the gravity
potential of the harm? (For details
on gravity potential, please refer to
the relevant section of the user’s guide ).
CATASTROPHIC HARM
CRITICAL HARM
SERIOUS HARM
195 MODERATE OR MINOR HARM
Data protection and the right to respect
for private and family life 89) The design, development, and
deployment of the AI system could
harm the rights to data protection enshrined in data protection and
privacy law and in the Council of
Europe's Convention 108+.
UNLIKELY
POSSIBLE
LIKELY
VERY LIKELY
UNSURE
N/A
Example UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the design, development, and
deployment of the AI system could harm the right
to data protection enshrined in data protection
and privacy law and in the Council of Europe's
Convention 108+. When this information is
ascertained, you should return to your PCRA and
revise it accordingly.
90) If you responded with any of the
options besides N/A or Unsure in
the previous question, how would
you characterize the gravity
potential of the harm? (For details
on gravity potential, please refer to
the relevant section of the user’s
guide ).
196 CATASTROPHIC HARM
CRITICAL HARM
SERIOUS HARM
MODERATE OR MINOR HARM
91) The AI system could intrude on or
interfere with the private and
family life of rights -holders in ways
that prevent or impede them from maintaining a personal sphere that
is independent from the
transformative effects of AI
technologies and in which they are
at liberty to freely think, form opinions and beliefs, and develop
their personal identities and
intimate relationships without the
influence of AI technologies.
UNLIKELY
POSSIBLE
LIKELY
VERY LIKELY
UNSURE
N/A
UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the AI system you are planning to build
could intrude on or interfere with the private and
family life of right s-holders in ways that prevent
or impede them from maintaining a personal
sphere that is independent from the
transformative effects of AI technologies and in
which they are at liberty to freely think, form
opinions and beliefs, and develop their personal
identities and intimate relationships without the
influence of AI technologies. When this
information is ascertained, you should return to
your PCRA and revise it accordingly .
197 Example
92) If you responded with any of the
options besides N/A or Unsure in
the previous question, how would
you characterize the gravity
potential of the harm? (For details on gravity potential, please refer to
the relevant section of the user’s
guide ).
CATASTROPHIC HARM
CRITICAL HARM
SERIOUS HARM
MODERATE OR MINOR HARM
Social and economic rights 93) The deployment of the AI system
could harm the social and economic
rights of affected persons, including
the right to just working conditions,
the right to safe and healthy working
conditions, the right to organize, the
right to social security , and the ri ghts
to the protection of health and to social and medical assistance as set out in the European Social Charter.
UNLIKELY
POSSIBLE
UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the AI system you are planning to build
could harm the social and economic rights of
affected persons, including the right to just
working conditions, the right to safe and healthy
working conditions, the right to organize, the
right to social security, an d the rights to the
protection of health and to social and medical
assistance as set out in the European Social
198 LIKELY
VERY LIKELY
UNSURE
N/A
Example Charter. When this information is ascertained,
you should return to your PCRA and revise it
accordingly.
94) If you responded with any of the
options besides N/A or Unsure in the
previous question, how would you
characterize the gravity potential of
the harm? ( For details on gravity
potential, please refer to the relevant
section of the user’s guide ).
CATASTROPHIC HARM
CRITICAL HARM
SERIOUS HARM
MODERATE OR MINOR HARM
Democracy 95) The use or misuse of the AI
system could lead to interference with
free and fair election processes or with
the ability of impacted individuals to
participate freely, fairly, and fully in
the political life of the community
through any of the following:
UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the use or misuse of the AI system could
lead to interference with free and fair election
199 a. Mass deception , at local,
national, or global levels
caused by the deployment of the system
b. Mass manipulation , at local,
national, or global levels
enabled by the deployment
of the system
c. Mass intimidation or behavioural control, at
local, national, or global levels enabled by the deployment of the system
d. Mass personalized political targeting or profiling, at
local, national, or global
levels enabled by the deployment of the system
UNLIKELY
POSSIBLE
LIKELY
VERY LIKELY
UNSURE
N/A
Example processes or with the ability of impacted
individuals to participate freely, fairly, and fully in
the political life of the community (through mass
deceptions, mass m anipulation, mass
intimidation or behavioural control, or mass
personalized political targeting or profiling).
When this information is ascertained, you should
return to your PCRA and revise it accordingly .
200 96) If you responded with any of the
options besides N/A or Unsure in the
previous question, how would you
characterize the gravity potential of
the harm? ( For details on gravity
potential, please refer to the relevant
section of the user’s guide ).
CATASTROPHIC HARM
CRITICAL HARM
SERIOUS HARM
MODERATE OR MINOR HARM
97) The use or misuse of the AI
system could lead to the mass
dispersal, at local, national, or global
levels, of misinformation or disinformation.
UNLIKELY
POSSIBLE
LIKELY
VERY LIKELY
UNSURE
N/A UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the use or misuse of the AI system could
lead to the mass dispersal, at local, national, or
global levels, of m isinformation or disinformation.
When this information is ascertained, you should
return to your PCRA and revise it accordingly .
201
Example
98) If you responded with any of the
options besides N/A or Unsure in the
previous question, how would you
characterize the gravity potential of
the harm? ( For details on gravity
potential, please refer to the relevant
section of the user’s guide ).
CATASTROPHIC HARM
CRITICAL HARM
SERIOUS HARM
MODERATE OR MINOR HARM
99) The use or misuse of the AI
system could lead to obstruction of
informational plurality, at local,
national, or global levels.
UNLIKELY
POSSIBLE
LIKELY
VERY LIKELY
UNSURE UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the use or misuse of the AI system could
lead to obstruction of informational plurality, at
local, national, or global levels. If so, the system
is considered to pose significant risks to the
physical, psychological, or moral integrity or the
human rights and fundamental freedoms of
affected persons . When this information is
202 N/A
Example ascertained, you should return to your PCRA and
revise it accordingly.
100) If you responded with any of the
options besides N/A or Unsure in the
previous question, how would you
characterize the gravity potential of
the harm? ( For details on gravity
potential, please refer to the relevant
section of the user’s guide ).
CATASTROPHIC HARM
CRITICAL HARM
SERIOUS HARM
MODERATE OR MINOR HARM
101) The use or misuse of the AI
system (in particular, information filtering models such as recommender
systems, search engines, or news
aggregators) could lead to an obstruction of the free and equitable
flow of the legitimate and valid forms
of information that are necessary for
the meaningful democratic
participation of impacted rights
holders and for their ability to engage
freely, fairly, and fully in collective problem -solving. UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the use or misuse of the AI system could
lead to an obstruction of the free and equitable
flow of the legitimat e and valid forms of
information that are necessary for the meaningful
democratic participation of impacted rights
holders and for their ability to engage freely,
203
UNLIKELY
POSSIBLE
LIKELY
VERY LIKELY
UNSURE
N/A
Example fairly, and fully in collective problem -solving.
When this information is ascertained, you sh ould
return to your PCRA and revise it accordingly .
102) If you responded with any of the
options besides N/A or Unsure in the
previous question, how would you
characterize the gravity potential of
the harm? ( For details on gravity
potential, please refer to the relevant
section of the user’s guide ).
CATASTROPHIC HARM
CRITICAL HARM
SERIOUS HARM
MODERATE OR MINOR HARM
204 103) The use or misuse of the AI
system (in particular, models that
track, identify, de -anonymize or
surveil rights -holders and social
groups or enable the creation of social graphs) could lead to interference with
or obstruction of impacted rights -
holders' abilities to exercise their freedoms of expression, assembly, or
association.
UNLIKELY
POSSIBLE
LIKELY
VERY LIKELY
UNSURE
N/A
Example UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the use or misuse of the AI system (in
particular, models that track, identify, de -
anonymize or surveil rights -holders and social
groups or enable the creation of social graphs)
could lead to interference with or obstruction of
impacted rights -holders' abilities to exercise their
freedoms of expression, assembly, or association.
When this information is ascertained , you should
return to your PCRA and revise it accordingly .
104) If you responded with any of the
options besides N/A or Unsure in the
previous question, how would you
characterize the gravity potential of
the harm? ( For details on gravity
potential, please refer to the relevant
section of the user’s guide ).
205 CATASTROPHIC HARM
CRITICAL HARM
SERIOUS HARM
MODERATE OR MINOR HARM
Rule of Law 105) The deployment of the AI system
could harm impacted individuals' right
to effective remedy or right to a fair
trial (equality of arms, right to a natural judge established by law, the
right to an independent and impartial tribunal, and respect for the adversarial process).
UNLIKELY
POSSIBLE
LIKELY
VERY LIKELY
UNSURE
N/A
Example UNLIKELY , POSSIBLE, LIKELY, VERY LIKELY:
(Probability answers to be tabulated in the risk
index table )
UNSURE : Before taking any further steps in the
proposed project, you should determine, through
expert and stakeholder input where appropriate,
whether the deployment of the AI system could
harm impacted individuals' right to a fair trial
(equality of arms, right to a natural judge
established by law, the right to an independent
and impartial tribunal, and respect for the
adversarial process). When this information is
ascer tained, you should return to your PCRA and
revise it accordingly.
106) If you responded with any of the
options besides N/A or Unsure in the
previous question, how would you
206 characterize the gravity potential of
the harm? ( For details on gravity
potential, please refer to the relevant
section of the user’s guide ).
CATASTROPHIC HARM
CRITICAL HARM
SERIOUS HARM
MODERATE OR MINOR HARM
207
PCRA Output Example
Human Rights, Democracy, and
the Rule of Law Preliminary Context -Based Risk Assessment
Report
208 Thank you for completing the Human Rights, Democracy, and Rule of Law
Preliminary Context -Based Risk A nalysis (PCRA). This report has been generated
based on your responses.
The report is divided into two sections corresponding to the two sections in the
PCRA.
Responses from Section 1 are divided into three primary sections: Prohibitive
Risks Factors, Major Risks Factors, and Moderate Risk Factors. Each of these sections detail best next steps and items to focus on in your i mpact assessment
process (HUDERIA), as well as in your risk management and assurance practices
(HUDERAC).
Section 2 provides a table that co mpiles the provisional risk index numbers (RINs)
that have been calculated based upon your answers in the corresponding section
of the PCRA. Each RIN falls with a numeric interval that establish a risk level for
the potential adverse impact on the human rights and fundamental freedom of
persons, democracy, and the rule of law. Under the table you will find
recommendations for risk management actions for each potential harm
identified as well as general recommendations for proportionate risk
management practices and stakeholder engagement .
The results of the PCRA are meant to be part of an iterative process which can be
returned to and revisiting as you process with the other stages of the HUDERAF
process.
SECTION 1: IDENTIFYING RISK FACTORS
Section 1 produces a summary which compiles and organizes the answers that flagged up risk factors into the three risk factor categories: prohibitive, major, and
moderate. Additionally, these three risk factor categories are further divided into
modifiable or circumstantial . Those risk factors that emerge externally from the
technical, sociotechnical historical, legal, economic, or political environments
within which the design, development, and deployment of AI systems occur and
that are thereby less controllable are called “circumstantial risk factors.” Those
risk factors that emerge internally from the actual practices of producing and using
AI technologies, and that are thus more controllable, are called “modifiable risk
factors”.
The report provides prompts for each of these answers that direct you to specific
actions to take in your impact assessment process and to specific goals,
properties, and areas that you should focus on in your subsequent risk
management and assurance processes to reduce and mitigate associated risks.
Prohibitive Risk Factors:
209 Stop to evaluate the items in your assessment that have indicated the
presence of prohibitive risk factors . These items must be prioritised and
responded to before you proceed with any other project tasks or
activities.
Because you answered No to Question 2:
STOP. Prohibitive circumstantial risk factor
If you have not been able to establish a legal basis for the AI system, you should
not proceed. Where appropriate, you should consult with experts to establish the
lawfulness of your processing objective and proposed project before taking any next steps.
Because you answered Unsure to Question 6:
STOP. Before taking any further steps in the proposed project, you should
determine, through expert and stakeholder input where appropriate, whether the
AI system you are planning to build is on the list of prohibited systems .
Major Risk Factors:
Because you answered Yes to Question 1:
Major circumstantial risk factor:
Where AI systems serve primary or critical functions in high impact or safety critical sectors, this presents a major circumstantial risk factor for adverse
impacts on the human rights and fundamental freedom of persons.
• Actions to take for your HUDERIA: Make sure to focus upon
considerations surrounding the prevention of harm and the respect of the right to life and to physical, psychological, and moral integrity
• Goals, properties, and areas to focus on in your HUDERAC: Safety (accuracy and system performance, security, reliability, and robustness),
sustainability (reflection on context and impacts, change monitoring)
210
Because you answered Yes to Question 5:
Major circumstantial risk factor
Where AI systems perform a safety critical or high impact function independent of
the sector in which they operate, this presents a major circumstantial risk
factor for adverse impacts on the human rights and fundamental freedom of
persons.
• Actions to take for your HUDERIA: Make sure to focus upon
considerations surrounding the prevention of harm and the respect of the right to life and to physical, psychological, and moral integrity
• Goals, properties, and area s to focus on in your HUDERAC: Safety
(accuracy and system performance, security, reliability, and robustness) and sustainability (reflect on contexts and impact, change monitoring)
Because you answered Unsure to Question 7:
Before taking any further steps in the proposed project, you should determine,
through expert and stakeholder input where appropriate, whether the AI system
could be used or repurposed in ways that fall under the list of prohibited systems
which are considered to pose significant risks. When this information is
ascertained, you should return to your PCRA and revise it accordingly.
Moderate Risk Factors:
Because you answered Yes to Question 3:
Moderate circumstantial risk factor
Where the sector or domain in which the AI system will operate is historically
highly regulated, this presents a moderate circumstantial risk factor for
adverse impacts on the human rights and fundamental freedom of persons.
• Actions to take for your HUDERIA: Make sure to consider the
regulatory context of your project and to reflect on the expectations of
compliant and reasonable practices that arise in that context.
• Goals, properties, and areas to focus on in your HUDERAC :
Accountability and process transparency, (traceability, accessibility,
auditability, and responsible governance)
Because you answered Unsure to Question 4:
Before taking any further steps in the proposed project, you should determine,
through expert and stakeholder input where appropriate, whether any other types
of impact assessment for the specific use -case of the AI systems you are planning
to develop are required by law or regulation. When this information is ascertained,
you should return to your PCRA and revise it accordingly.
211 SECTION 2: RISKS OF ADVERSE IMPACTS ON
THE HUMAN RIGHTS AND FUNDAMENTAL
FREEDOM OF PERSONS, DEMOCRACY, AND THE RULE OF LAW
Section 2 provides a table that compiles the provisional risk index numbers (RINs)
that have been calculated based upon your answers in the corresponding section
of the PCRA . As you were asked to provide a preliminary estimation of the
likelihood of the potential adverse impacts that your system could have on human rights, democracy , and the rule of law, this estimation of likelihood is incorporated
into the table and risk in dex calculation along with your responses to other
questions such as the number of rights -holders affected.
The likelihood score that is indicated in the far -right column indicates a ri sk index
number (RIN) . An RIN has been calculated for each of the ques tions in Section 2
that were answe red with likelihood estimates, but not for answers of “Unsure” or
“Not Applicable”.
RINs are calculated by adding the Severity Level (Gravity Potential + Number of
Rights -Holders Affected) and Likelihood Level for each potential h arm. Details
about this calculation and its rationale can be found in the accompanying user’s
guide. Across all possible answers , RINs range from 2.5 to 10. Risk levels are
established as intervals within this range:
Low Risk Moderate Risk High Risk Very High Risk
𝑅𝑅𝑅𝑅𝑅𝑅≤5 5.5≤𝑅𝑅𝑅𝑅𝑅𝑅≤6 6.5≤𝑅𝑅𝑅𝑅𝑅𝑅≤7.5 𝑅𝑅𝑅𝑅𝑅𝑅≥8
These RINs correspond to preliminary recommendations for proportiona te risk
management and assurance practices and stakeholder engagement given the risk
level of the prospective AI system. These recommendations can be found in the
tables below the summary table generated from your responses to Section 2.
Risk management actions include recommendations on how to reduce the levels
of risk present through a variety of methods such as expert and stakeholder
consultations. These recommendations can be applied to your results by matching
the RIN to the corresponding risk management recommendations.
Additional recommendations for establishing suf ficient public transparency and
accountability —as well as for ensuring adequate human rights diligence—are
included for large-scale projects that could have macro -scale, long -term impacts
on individuals and society. The second table includes recommendation s for
proportionate risk management practices and stakeholder engagement that can
be applied to your results.
212 Section 2 Output Table
Adverse Impacts : Gravity
Potential Rights -
Holders
Affected Severity Likelihood Risk
Index
Number
(RIN)
Risk Level
Anthropomorphic confusion over interaction with a
computational system (dignity)
3 1 4 2 6 Moderate
Exposure to humiliation (dignity)
3 1 4 1 5 Low
Deprivation of rights -holders’ abilities to make free,
independent, and well -informed decisions about their lives
or about the system’s outputs, including the ability of
rights -holders to effectively challenge decisions informed
and/or made by that system and to demand that such
decision be reviewed by a person (human freedom)
3 1 4 1 5 Low
Deprivation of the right to life or physical, psychological,
or moral integrity (Prevention of harm) 4 1 5 2 7 High
Discrimination, discriminatory effects on impacted rights -
holders, or differential performance (Non -discrimination)
3 1 4 2 6 Moderate
213 See below a table of specific risk management actions for each of these
potential impacts identified by your results:
For any 𝑹𝑹𝑹𝑹𝑹𝑹≥𝟖𝟖 (very
high) Further examination should be undertaken, through
expert and stakeholder consultation, as to whether
sufficient risk reduction is possible to make very high risks of harm tolerable or whether this is not feasible,
and the risks are unacceptable.
For any 𝟔𝟔.𝟓𝟓≤𝑹𝑹𝑹𝑹𝑹𝑹≤
𝟕𝟕.𝟓𝟓 (high) Further examination should be undertaken, through
expert and stakeholder consultation, as to whether
the risks of harm indicated to be high are tolerable and can be appropriately reduced. Where likeli hood
= 1 (unlikely) and RPN indicates high risk,
confirmation of low risk probability should also be
undertaken thro ugh expert and stakeholder
consultation.
For any 𝟓𝟓.𝟓𝟓≤𝑹𝑹𝑹𝑹𝑹𝑹≤𝟔𝟔
(moderate) Further examination should be undertaken, through
expert and stakeholder consultation, as to whether
the risks of harm indicated to be moderate are
broadly accepted as such, are tolerable, and can be appropriately reduced. Where likeli
hood = 1
(unlikely) and RPN indicates moderate risk,
confirmation of low risk probability should also be
undertaken through expert and stakeholder
consultation.
For any 𝑹𝑹𝑹𝑹𝑹𝑹≤𝟓𝟓 (low) Further examination should be undertaken, through
expert and stakeholder consultation, as to whether
the risks of harm indicated to be low are broadly
accepted as such.
See below for general recommendations for proportionate risk management
practices an d stakeholder engagement. These recommendations are based on a
prioritization of a degree of risk management and human right diligence that is
proportionate to the highest level of risk identified across an AI system’s potential
impacts.
Because the highes t
risk level in your
results is 𝟔𝟔.𝟓𝟓 ≤𝑹𝑹𝑹𝑹𝑹𝑹≤
𝟕𝟕.𝟓𝟓 (high) Full diligence in risk management and assurance
practices is recommended to prior itize risk reduction
and mitigation. This should be informed by the risk
factors identified in section 1 of the PCRA and by the
HUDERIA impact assessment process;
Comprehensive stakeholder engagement across the
project lifecycle (e.g. , partnering with or
empowering rights -holders as determined by the
214 Stakeholder Engagement Process) is also
recommended.
215 Stakeholder Engagement Proces s
Stakeholder engagement is a core component of any human rights diligence
process . It is not a one -off activity, but rather should be seen as an ongoing set
of activitie s that occur throughout the project lifecycle.44 Stakeho lder engagement
may take different forms at different points in a project. It is essential to ensure
that rights -holders’ views are incorporated at all stages and that any potential
risks or adverse impacts are identified and mitigated.45 In this sense, a n iterative
approach to stakeholder engagement is essential .46 In particular , it is important
to revisit and revise your stakeholder analysis to ensure that approaches taken continue to reflect the perspectives and interests of salient stakeholders, including
vulnerable and underrepresented rights -holders.
A
Stakeholder Engagement Process (SEP) involves 5 key activities:
44 See Esteves, A. M., Factor, G., Vanclay, F., Götzmann, N., & Moreira, S. (2017).
Adapting social impact assessment to address a project's human rights impacts and
risks. Environmental Impact Assessment Review , 67, p. 77.
45 See Figure 1 in Götzmann, N., Bansal, T., Wrzoncki, E., Veiberg, C. B., Tedaldi, J., &
Høvsgaard, R. (2020). Human rights impact assessment guidance and toolbox . The Danish
Institute for Human Rights. p. 7.
46 See page 77 in (Esteves , et al 2017).
5 Key Activitie s within the Stakeholder
Engagement Process:
•Stakeholder Analysis
•Positionality Reflection
•Establishing Engagement Object ives
•Determining Engagement Method
•Workflow Revisit ation and Reporting
216 The initial four steps of the SEP take place internally (i.e. within the organisation
or team undertaking the impact assessment) during the design phase of the
project workflow. Throughout these steps, teams are to draw on the PCRA report ,
updated PS Report , and further desk research to answer series of questions and
prompts . Step 5 of this process (Workflow Revisitation and Reporting) is
undergone by conducting stakeholder engagement activities and providing project
reports. This step facilitat es stakeholder participation and input to revise
approaches towards subsequent actions in the workflow . Note that , following
its initial iteration as Step 5 of the SEP, the Workflow Revisiting and
Reporting activity is repeated throughout the design, development, and
deployment phases of your project to ensure ongoing stakeholder
involvement across the workflow .
While stakeholder engagement is vital throughout the process , this can take
various forms (outlined below) and involve varying degrees of engagement that
fit proportionality recommendations. In accordance with best practices in human
rights due diligence, you sho uld always aim for the greatest depth of engagement
that is feasible within the scope and resource constraints of the project.
Step 1: Carry Out Stakeholder Analysis
Stakeholder Salience Analysis is the process of evaluating the salience of identified
stakeholders. It aims to help you understand the relevance of each identified
stakeholder to your project and to its use contexts. It does this by providing a
structured way for your team members to assess the relative interests, rights,
vulnerabilities, and advantages of identified stakeholders as these interests,
rights, vulnerabilities, and advantages may be impacted by, or may impact, the
model or tool your team is planning to develop and deploy.
Consider the following diagram :
Analysing Stakeholder Salience:
Use the PCRA , the updated Project Summary Report , and further desk research
(if necessary) to inform answers to the following questions, assessing relative
stakeholder salience:
•What stakeholder groups have existing interests in relation to the
domain or sector in which the system/tool will be deployed?
•What stakeholder groups are most likely to be impacted by the
deployment of the system or tool?
•What stakeholder groups have the greatest needs in relation to potential
benefits/applications of the system/tool or the domain or sector in which
it will be deploy ed?
•What stakeholder groups are most and least powerful?
•What stakeholder groups have existing influence within relevant
communities, political processes, or in relation to the domain in which the system will be deployed?
•What stakeholder groups’ influence is limited?
•
•What stakeholder groups’ rights may be impacted in relation to the
domain or sector in which the system/tool will be deployed?
217
Consider the following diagram :
• What stakeholder groups possess both protected and contextually
vulnerable characteristics?
Analysis of stakeholder salience aims to identify which stakeholders are likely to
be most impacted, vulnerable, and those that currently have least influenc e.
47
Special attention needs to be paid to engaging these stakeholders in subsequent
stages.
47 See page 124 and 125 in (Götzmann et al 2020 ) Contextual
Vulnerability
Characteristics
•Could the outcomes
of this project present
significant concerns
given their specific
circumstances? If so,
what characteristics expose them to being
jeopartised by project
outcomes?Protected
Characteristics
•Do they possess
characteristics that
might increase their vulnerability to abuse
or discrimination, or
which may require
additional assistance?
If so, what chatacteristics?
218 Step 2: Engage in Positionality Reflection
When taking positionality into account, each of your team members should reflect
on their own positionality matrix by answers the questions contained in the graphic
below.
Reflecting on positionality :
After reflecting on their positionality matri ces, team members should
collaboratively answer the following questions:
a.How does the positionality of team members relate to that of affected
rights -holders and other stakeholders ?
b.Are there ways that your position as a team could lead you to choose
one option over another when assessing the risks that the prospective
AI system poses to human rights, democracy and t he rule of law ?
c.What (if any) missing stakeholder viewpoints would strengthen your
team’s assessment of this system’s potential impact on human rights
and fundamental freedoms ?
219 Step 3: Establish an Engagement Objective
Before selecting methods of engagement to use, you must first define objectives
of your stakeholder engagement. Through this process you should articulate:
• Why you are engaging stakeholders
• The purpose and expected outcomes of engagement activities
• The level of influence stakehold ers will have on engagement processes and
outcomes
There are many ways a project team can engage with the stakeholders they have
identified, and the objectives and methods of engagement that your team chooses
will depend on several factors. These divide into three categories, which are
presented here with accompanying descriptions:
Factors determining the objectives of engagement
Team -based
assessments of risks • Assessment of how stakeholder involvement should be made
proportionate to the scope of a project’s potential risks and
hazards to human rights, democracy, and the rule of law
Team -based
assessments of
positionality
• Evaluation of team positionality —for instance, cases where
the identity characteristics of team members do not
sufficiently reflect or represent significantly impacted groups.
How can the project team “fill the gaps” in knowledge, domain
expertise, and lived experience through stakeholder
participation?
Establishment of
stakeholder
engageme nt goals
• Determination of engagement objectives that enable the
appropriate degree of stakeholder engagement and co -
production in project evaluation and oversight processes
• Choosing participation goals from a spectrum engagement
options (informing, partnering, consulting, empowering) that
equip your project with a level of engagement which meets
team-based assessments of risk and positionality.
When weighing these three factors, your team should prioritise the establishment
of a clear and explicit stakeholder participation goal and document this . This
is crucial, because all stakeholder engagement processes can run the risk either of being cosmetic tools employed to legitimate projects without substantial and
meaningful participation or of being insufficiently participative, i.e. of being one-way information flows or nudging exercises that serve as public relations
instruments. The purpose of stakeholder involvement in human rights diligence is
just the opposite: to amplify the participatory agency of affected individuals and
organisations in impact assessment, risk management, and assurance processes.
To avoid such hazards of superficiality, your team should shore up its
proportionate approach to stakeholder engagement with deliberate and precise
goal-setting. There are a range of engagement options that can help your project
obtain a level of citizen participation which meets team-based assessments of
220 impact and positionality as well as practical considerations and stakeholder
needs48:
Levels of Stakeholder Engagement
Step 4: Determine Engagement Method
Once you have established your engagement objective, you are in a better pos ition
to assess which method or methods of stakeholder involvement are most
48 Excellent resources on citizen and stakeholder engagement upon which these
processes draw can be found here:
https://www.thersa.org/globalassets/reports/2020/IIDP -citizens -assembly.pdf ;
https://www.local.gov.uk/sites/default/files/documents/New%20Conversations%20Guid
e%2012.pdf ; https://datajusticelab.org/wp-
content/uplo ads/2021/06/PublicSectorToolkit_english.pdf;
https://www.communityplanningtoolkit.org/sites/default/files/Engagement.pdf .
Questions:
•Why are you engaging stakeholders?
•What are the expected outcomes of engagement activities ?
•How will stakeholders be able to influence the engagement process and
the outcomes?
221 appropriate to achieve your goals. Determining appropriate en gagement methods
necessitates that you (1) evaluat e and accommodat e of stakeholder needs, and
(2) pay attention to practical considerations of resources, capacities, timeframes,
and logistics that could enable or constrain the realisation of your objective:
Factors determining engagement methods
Evalua tion and
accommodation of
stakeholder needs
• Identification of potential barriers to engagement such as
constraints on the capacity of vulnerable stakeholder groups to
participate, difficulties in reaching marginalised, isolated, or
socially excluded groups, and challenges to participation that
are p resented by digital divides or information and
communication gaps between public sector organisations and
impacted communities
• Identification of strategies to accommodate stakeholder needs
such as catering the location or media of engagement to
difficult -to-reach groups, providing childcare, compensation,
or transport to secure equitable participation, tailoring the
provision of information and educational materials to the
needs of participants
Practical
considerations of
resources, capacities,
timeframes, and
logistics • the resources available for facilitating engagement activities
• the timeframes set for project completion
• the capacities of your organisation and team to properly
facilitate public engagement
• the stages of project design, developm ent, and implementation
at which stakeholders will be engaged
You and your team may face pitfalls when confronting any of these factors. For
example, limits on available resources and tight timelines could be at cross -
purposes with the degree of stakeholder involvement that is recommended by
team-based assessments of potential hazards and positionality limitations.
Likewi se, the chosen degree of appropriate citizen participation may be unrealistic
or out -of-reach given the engagement barriers that arise from high levels of
stakeholder needs. In these instances, you should take a deliberate and reflective
approach to deciding on how to balance engagement goals with practical
considerations—always prioritizing the need for optimizing stakeholder input and
rights-holder involvement in human rights diligence processes . And you should
make explicit the rationale behind your choi ces and document this.
Reflecting the varied levels of engagement set out above, there are varied engagement methods which can be used. Stakeholder engagement is likely to
require more than a single method and may combine approaches aimed at
informing, consulting, partnering and empowering. However, stakeholder
engagement must always contain at least one method pertaining to the stated engagement objective (i.e. a method pertaining to a lower level of engagement
cannot account for the need for engagement aimed to be met through a higher
engagement objective):
222
Example s of Relevant Engagement Methods
Mode of
Engagement Degree of
Engagement Practical Strengths Practical
Weaknesses
Newsletters (email)
Regular emails (e.g.:
fortnightly or
monthly) that contain
updates, relevant
news, and calls to
action in an inviting
format.
Inform Can reach many
people; can contain large amount of
relevant information;
can be made accessible and visually engaging. Might not reach certain
portions of the population; can be
demanding to design
and produce with some periodicity; easily
forwarded to
spam/junk folders without project team knowing (leading to
overinflated readership
stats).
Letters (post)
Regular letters (e.g.
monthly) that contain
the latest updates,
relevant news and
calls to action.
Inform
Can reach parts of the
population with no
internet or digital
access; can contain
large amount of
relevant information;
can be made accessible
and visually engaging. Might not engage
certain portions of the
popul ation; Slow
delivery and
interaction times
hamper the effective
flow of information and
the organisation of
further engagement.
Posts in local
community (e.g. local
newspapers,
community
newsletters and
noticeboards). Inform Can reach localised communitie s; can reach
people who do not have digital access. Limited to geographic region; requires
physical access; less
cost-effective than
digital means.
App notifications
Projects can rely on
the design of apps
that are pitched to
stakeholders who are
notified on their
phone with relevant
updates.
Inform Easy and cost-effective
to distribute
information to large
numbers of people;
rapid information flows
bolster the provision of
relevant and timely
news and updates. More significant initial
investment in
devel oping an app; will
not be available to
people without
smartphones.
Community fora
Events in which
panels of experts share their knowledge on issues and then
stakeholders can ask questions.
Inform
Can inform people with
more relevant
information by
providing them with the opportunity to ask
questions; brings
community together in a shared space of public communication. More time -consuming
and resource intensive to organise; might
attract smaller
numbers of people and self-selecting groups
rather than
representative subsets of the population; effectiveness is
constrained by forum
capacity.
Online surveys
Survey sent via email,
embedded in a Consult Cost- effective; simple
mass-distribution. Risk of pre -emptive
evaluative framework
when designing
questions; does not
223 website, shared via
social media reach those without
internet connection or
computer/smartphone
access.
Phone interviews
Structured or semi -
structured interviews
held over the phone. Consult
Opportunity for
stakeholders to voice
concerns more openly. Risk of pre -emptive
evaluative framework
when designing questions; might
exclude portions of the
population without
phone access or with habits of infrequent
phone use.
Door-to -door
interviews
Structured or semi -
structured interviews
held in -person at
people’s houses. Consult
Opportunity for
stakeholders to voice
concerns more openly;
can allow participants
the opportunity to form
connections through
empathy and face -to-
face communication. Potential for limited
interest to engage with
interviewers; time -
consuming; can be
seen by interviewees
as intrusive or
burdensome.
Feedback Forms
Provided along with
letters or posts or after events. Consult Provides mechanisms
for stakeholders to respond to information provided; can be quick and easy for
stakeholders to
respond; can include fixed choice or open -
ended responses. Might limit responses to pre -defined
questions; risk of pre-
emptive evaluative
framework when designing questions.
In-person
interviews
Short interviews
conducted in -person
in public spaces. Suggested for
Consult
Can reach many people
and a representative
subset of the
population if
stakeholders are
appropriately defined
and sortition is used. Less targeted;
pertinent stakeholders
must be identified by
area; little
time/interest to
engage with
interviewer; can be
viewed by interviewees
as time -consuming and
burdensome.
Focus group
A group of
stakeholders brought
together and asked their opinions on a particular issue. Can
be more or less formally structured.
Consult and
Partner
Can gather in -depth
information; c an lead to
new insights and
directions that were not
anticipated by the
project team. Subject to hazards of
group think or peer
pressure; complex to
facilitate; can be steered by dynamics of
differential power
among participants.
Citizen panel or
assembly
Large groups of
people (dozens or
even thousands) who
are representative of
a town/region.
Inform, Partner
and Empower
Provides an opportunity
for co- production of
outputs; can produce
insights and directions
that were not
anticipated by the
project team; can
provide an information
base for conducting
further outreach Participant rolls must
be continuously
updated to ensure
panels or assemblies
remains representative
of the population
throughout their
lifespan; resource -
intensive for
establishment and
224 (surveys, interviews,
focus groups, etc.); can
be broadly
representative; can
bolster a community’s
sense of democratic
agency and solidarity. maintenance; subject
to hazards of group
think or peer pressure;
complex to facilitate;
can be steered by
dynamics of differential
power among
participants.
Citizen jury
A small group of
people (between 12
and 24),
representative of the
demographics of a
given area, come
together to deliberate on an issue (generally one clearly framed set
of questions), over
the period of 2 to 7
days (involve.org.uk).
Inform, Partner and Empower
Can gather in -depth
information; can
produce insights and
directions that were not
anticipated by the project team; can
bolster participants’
sense of democratic agency and solidarity. Most effective when
there is a clearly
defined question which the jury is tasked with
answering. Subject to hazards of
group think; complex
to facilitate; Requires
independent
facilitation; Risk of pre-emptive evaluative framework; Small
sample of citizens
involved risks low representativeness of
wider range of public
opinions and beliefs .
Outputs from the first four steps of the Stakeholder Engagement Process are used
to create an initial Stakeholder Engagement Process Report, which is to be rev ised
and updated at each subsequent iteration of Workflow Revisitation and Reporting.
Questions:
Define an engagement method to be used for the Workflow Revisitation and
Reporting. Consider:
•What resources are available and what constraints will limit
potential approaches?
•What accessibility requirements might stakeholders have?
•Will online or in -person methods (or a combination of both) be
most appropriate to engage salient stakeholders?
•Which activities meet your team’s engagement objective?
•How will your team make sure that the chosen method
accommodates different types of stakeholders?
•How will each of the chosen acti vities feed useful information to
consider in your stakeholder impact assessment?
225 Step 5: Workflow Revisitation and Reporting
As indicated above, the Workflow Reporting and Revisitation step is conducted
initially as a culmination of the Stakeholder Engagement Process , so that you can
work directly with stakeholders to re-assess the first four steps of the SEP , and it
provides an opportunity for you to revisi t the PS Report and the project workflow
with them. This initial stakeholder engagement will also flow into your impact
assessment process; working through the PS Report and the project workflow with
these stakeholders will help to prepare them for meaningful and well -informed
involvement in your HUDERIA. The later iterations of Workflow Reporting and
Revisitation (in the development and deployment phases of your project ) will
subsequently enable your project team to report to stakeholders the actions and
evidence that are being compiled in the project assurance case (HUDERAC) in an
accessible manne r that facilitates feedback and input.
Revisiting with Stakeholders as part of your SEP
During Step 5 of the Stakeholder Engagement Process , it is essential to ensure
that decisions and outcomes from steps 1 – 4 reflect the experiences and
perspectives of salient stakeholders. Stakeholder engagement should involve
rights-holders, duty bearers and other relevant parties. In instances where initial
stakeholder mapping and risk analysis has identified no potential adverse impacts
on rights -holders, a minimum level of stakeholder engagement should involve
consultation with relevant organisations (e.g. civil society and human rights
organisations) and rights -holders.
Revisiting Steps 1 and 2 ( Stakeholder Analysis and Positionality ) with
Stakeholders
A useful strategy for revisiting Stakeholder Analysis and Positionality Reflection
with rights -holders and other relevant parties is to merge these activities into
engagement exercises that focus on identifying gaps and drawing on the lived
experience of affected individuals and communities. This will allow you to:
• More fully involve initially -identified salient stakeholder groups
• Broaden identified stakeholders through a “snowball ” approach
• Discuss team positionality with stakeholders (while being mindful of
confidentiality needs and not disclosing sensitive or personal information
about individuals).
• Map stakeholder salience through participatory processes which reflect
stakeholder perspectives and interests .
Crucially, a comprehensive stakeholder analysis requires initial stakeholder
engagement to identify potentially salient stakeholders who may not have been
visible to the project team (particularly underrepresented rights -holders).
Interv iews or community engagement activities can be an effective mechanism to
identify additional relevant stakeholders to engage. This “snowballing” technique
is an effective mechanism to ensure that stakeholder analysis reflects varied
perspectives and interests, recognising that there may not be consensus around
which stakeholders are most salient.
Revisiting Stakeholder Salience Questions, Engagement Objective and
226 Engagement Method with Stakeholders
This step is undertaken through further discussion of stakeholder needs with
stakeholders themselves . Particular attention should be paid here to vulnerable
rights-holders. At this stag e, the stakeholder engagement should focus on
identifying existing mechanisms and constraints for engagement (i.e., networks, community groups, barriers to engagement, difficulties). This should include an
assessment of any potential risks of participation (i.e. if taking part in an
engagement activity could put vulnerable individuals at risk). Where there are potential risks associated with participation of vulnerable individuals ,
consideration must be given to alternative engagement methods or forms of
representation (e.g. through civil society organisations).
Outputs from the above activities are used to update the SEP Report, informing
approaches for the following engagement activity in the workflow (HUDERIA or
Workflow Revisitation).
Reporting to Stakeholders
Communication and stakeholder evaluation of process outcomes at each phase is
a vital component to build and sustain trust and to underpin ongoing stakeholder
engagement. During the initial iteration of the stakeholder engagement process,
you are to use the PS Report as a baseline to convey the project status in an
accessible manner that facilitates input. After this initial iteration, y ou are to use
the assurance case as a basis for updating the P S Report. The most up -to-date
versions of the assurance case and Project Summary Report (which provides a
synopsis of up -to-date understandings of the system , its domain and use contexts,
identified stakeholders, the scope of impacts on human rights, democracy, and rule of law, and the project governance wor kflow ), will enable you to share project
reports that are relevant at each phase of the workflow.
Reporting to stakeholders must involve:
• Publication and dissemination of project outputs (taking account of
accessibility requirements of diverse stakeholders);
• Facilitation of stakeholder feedback (e.g. through feedback forms,
questionnaires or further engagement) ’ and incorporati on of feedback
documented through an update to the SEP report, in turn informing the
subsequent HUDERIA and HUDERAC process es.
Questions:
•What medium of reporting is likely to most effectively engage
stakeholders?
•What are the accessibility requirements of stakeholders (e.g. taking
account of physical and cognitive disabilities)?
•What mechanisms will be put in place to facilitate feedback from
stakeholders, and how will feedback be addressed and/or
incorporated? Consider how these mechanisms will yield feedback pertaining to
•stakeholders’ validation of the system considering its domain and use contexts; the identified sc
ope of impacts to human rights,
democracy, and rule of law; and the project governance workflow.
227 Stakeholder Engagement Process (SEP) Template
Questions Responses
STEP 1: Stakeholder Analysis
ANALYSYING STAKEHOLDER SALIENCE
Use the PCRA , the updated Project Summary Report , and further desk research (if necessary) to inform answers to the following
questions, assessing relative stakeholder salience. (Refer to the Venn diagram in the related section as needed.):
What stakeholder groups have ex isting interests in
relation to the domain or sector in which the
system/tool will be deployed?
What stakeholder groups are most likely to be
impacted by the deployment of the system or
tool?
What stakeholder groups have the greatest needs in relation to potential benefits/applications of the
system/tool or the domain or sector in which it will be deployed?
What stakeholder groups are most and least
powerful?
What stakeholder groups have existing influence
within relevant communities, political proc esses, or
in relation to the domain in which the system will be deployed?
228 What stakeholder groups’ influence is limited?
What stakeholder groups’ rights may be impacted
in relation to the domain or sector in which the
system/tool will be deployed?
STEP 2: Positionality Reflection
ENGAGE IN POSITIONALITY REFLECTION
When taking positionality into account, each of your team members should reflect on their own positionality by considering their
individual responses to the questions posed in the Positionality Matrix provided. After reflecting on their positionality matrices,
team members should collaboratively answer the following questions:
How does the positionality of team members relate
to that of affected rights -holders and other
stakeholders?
Are there ways that your position as a team could
lead you to choose one option over another when
assessing the risks that the prospective AI system
poses to human rights, democracy, and the rule of
law?
What (if any) missing stakeholder viewpoints would
strengthen your team’s assessment of this system’s
potential impact on human rights and fundamental
freedoms?
STEP 3: Engagement Objective
ESTABLISH AN ENGAGEMENT OBJECTIVE
229 Refer to the Factors Determining the Objectives of Engagement and Levels of Stakeholder Engagement tables as needed. Be sure
to outline a clear and explicit stakeholder participation goal.
Why are you engaging stakeholders?
What is the purpose and what are the expected
outcomes of engagement activities?
How will stakeholders be able to influence the
engagement process and the outcomes?
What engagement objective do you believe would
be appropriate for this project considering
challenges or limitations to assessments related to
positionality, and proportionality t o the project’s
potential degree of impact?
STEP 4: Engagement Method
DETERMINE AN ENGAGEMENT M ETHOD
Define an engagement method to be used for the Workflow Revisitation and Reporting.
What resources are available and what constraints
will limit potential approaches?
What accessibility requirements might stakeholders have?
Will online or in -person methods (or a combination
of both) be most appropriate to engage salient
stakeholders?
230 Which activities meet your team’s engagement
objective?
How will your team make sure that the chosen
method accommodates different types of
stakeholders?
How will each of the chosen activities feed useful
information to consider in your stakeholder impact
assessment?
STEP 5: Workflow Revisitation and Reporting
REPORTING TO STAKEHOLDERS
Outline a stakeholder communication plan that includes receiving and incorporating feedback and reporting back to stakeholder s
throughout the Stakeholder Engagement Process.
What medium of reporting is likely to most
effectively engage stakeholders?
What are the accessibility requirements of
stakeholders (e.g. taking account of physical and
cognitive disabilities)?
What mechanisms will be put in place to facilitate
feedback from stakeholders, and how will feedback
be addressed and/or incorporated? Consider how
these mechanisms will yield feedback pertaining to
stakeholders’ validation of the system considering
its domain and use contexts; the identified scope of
impacts to human rights, democracy, and rule of
law; a nd the project governance workflow.
231 Human Rights, Democracy, and the Rule of
Law Impact Assessment
Your Human Rights, Democracy, and the Rule of Law Impact Assessment
(HUDERIA) provides you with the opportunity to draw on the learning and insights
you have gained in your PCRA and SEP process es, and on the lived experience of
engaged rights-holders, to delve more deeply into the potential impacts of your
project on human rights, fundamental freedoms, and elements of democracy and
the rule of law. Your HUDERIA should enable you
• To re-examin e and re -evaluat e the potential harms you have already
identified in your P S Report and your PCRA
• To contextual ize and corroborate these potential harms in dialogue with
stakeholders
• To identify and analyse further potential harms by engaging in extended
reflection and by giving stakeholder s the chance to uncover new harms that
have not yet been explored and to pinpoint gaps in the completeness and
comprehensiveness of the previously enumerated harms
• To explore, with stakeholders, the severity (scope, scale, and remediability)
of the potential adverse impacts , so that the risks of these can be better
assessed, prioritized, managed, and mitigated
• To begin formulating impact mitigation measures (avoid, restore,
remediate) based upon the severity of the identified harms
• To set up access to remedy for affected rights-holders and other relevant
parties.
232 Getting the HUDERIA Process R ight
Before you launch into building the content of your HUDERIA, it will be important
that you establish a responsible process for carrying out the impact assessment
with the participation of affected rights -holders and other relevant stakeholders .
To do this, you will need to ensure that you r HUDERIA process align s with five key
criteria: participation, non- discrimination , empowerment, transparency,
and accountability .49 Use the table below (adapted from the important work of
the Danish Institute for Human Rights), to orient yourself to these criteria and to
the guiding principles that will have to be operationalised to secure a responsible
HUDERIA process:
Key criteria and guidin g principles
for responsible HUDERIA processes
Participation Meaningful participation of
affected or potentially affected
rights -holders is integrated
during all stages of the impact
assessment process, including
planning and scoping; data
collection and context analysis;
impact analysis; impact
prevention, mitigati on and
remediation; and reporting and
evaluation. • Engage a broad range of rights -
holders including vulnerable and
marginalised groups
• Consider the rig hts, freedoms, and
involvement of rights -holders
throughout the AI innovation
ecosystem (designers, implementers,
users , those potentially impacted by
the product or service , those who
could be displaced or deskilled by the
product or service )
• Involve rights -holders or their proxies
throughout the impact assessment
process
• Involve rights-holders, duty-bearers
and other relevant parties in designing
measures to address impacts (e.g.
through prevention, mitigation and
remediation) and follow -up to
evaluate the effectiveness of these
measures
• Include rights -holder representatives
or representative org anisations, or
rights -holder proxies in consultation
and engagement, including
consideration of the legitimacy of their
claim to represent the relevant
individuals and/or groups
• Ensure that engagement and
participation in the impact assessment
49 These criteria and the table in which t hey are described are directly adapted from
research by The Danish Institute for Human Rights (DIHR) . In 2020, the DIHR produced
their “Guidance on Human Rights Impact Assessment of Digital Activities: Introduction.”
In this paper, they present 10 key prin ciples for a Human Rights Impact Assessment with
five key criteria for the HRIA process. The criteria used here and the table in which they
are elaborated have been drawn from their guidance. See Kernell, E. L., Veiberg, C. B., &
Jacquot, C. (2020). “Guida nce on Human Rights Impact Assessment of Digital Activities:
Introduction.” The Danish Institute for Human Rights.
233 is guided by the local context,
including through using the impacted
individuals’ preferred mechanisms
(e.g. modes of communication) where
possible
• Make sure the assessment process is
being undertaken at particular times
to ensure equitable participation (e. g.,
when women are not at work and
young people are not at school)
Non-discrimination Engagement and consultation
processes are inclusive, gender -
sensitive , and account for the
needs of individuals and groups
at risk of vulnerability or
marginalisation. • Facilitate ongoing dialogue between
rights -holders, duty -bearers and other
relevant parties (e.g. through
collaborative problem analysis and
design of mitigation measures)
• To the extent digital and virtual means
of engagement are utilised (e.g. ,
online consultations and surveys),
assess and address accessibility
issues, particularly with regard to the
most vulnerable rights -holders
• Involve women, men , and
transgender people, including through
gender -identity -aware engagement
methods as necessary
• Take steps to ensure that the modes
of engagement and participation
address any barriers that may be
faced by vulnerable and marginalised
individuals (e.g. , by offering transport
or holding meetings in culturally
appropriate locations, and considering
‘technology barriers’ for older persons ,
persons with disabilities, or persons
who do not have access to the internet
or to digital technologies )
Empowerment Capacity building of individuals
and groups at risk of
vulnerability or marginalisation
is undertaken to ensure their
meaningful participation. • Identify the needs of vulnerable and
marginalised individuals and act to
address these to ensure inclusive
rights -holder involvement
• Include sufficient time in the
assessment process for capacity
building to allow individuals and
groups to be meaningfully involved
(e.g., to first present the digital
products or services in a way that the
audience understands, and to follow -
up later with the same groups when
they have had time to discuss and
organise, in order to receive feedback
and potential concerns)
• Make sure that rights- holders have
access to independent and competent
legal, technical , and other advice as
234 necessary, and, if not, include
provisions for making such support
available
• Provide for capacity building of rights-
holders to know their rights (e.g. , by
thoroughly explaining the right to
privacy before explaining how the AI
product or service will be developed to
ensure respect for the same right), as
well as of duty -bearers to meet their
human rights duties
Transparency The impact assessment process
is as transparent as possible to
adequately engage affected or potentially affected rights-holders, without causing any risk
to security and well -being of
rights -holders or other
participants (such as NGOs and human rights defenders). Impact
assessment findings are appropriately publicly
communicated. • Provide for information sharing
between stakeholders at relevant and
regular intervals (such as through
Workflow Revisitation and Reporti ng
processes)
• Make available information about the
AI project, product, or service to
participating stakeholders that is
adequate for giving a comprehensive
understanding of potential
implications and human rights impacts
(such as through sharing the PS
Report )
• Publicly communicate H UDERIA
findings and impact management
plans (action plans) to the greatest
extent possible (e.g ., published, with
any reservations based on risk to
rights -holders or other participants
clearly justified)
• Secure a firm top -level management
commitment with regard to
transparency before the start of the
HUDERIA process
• Communicate the phases of the
impact assessment, including
timeframes, to relevant stakeholders
in a clear and timely manner
• Ensure that communication and
reporting consider and respond to the
local context (e.g., by making sure
that information is made available in
relevant languages and formats, in
non-technical summaries and in
physical and/or web -based for mats
that are accessible to stakeholders)
Accountability The impact assessment team is
supported by human rights
expertise, and the roles and
responsibilities for impact
assessment, prevention,
mitigation and management are • Make sure that responsibility for the
implementation, monitoring and
follow- up of mitigation measures is
assigned to particular individuals or
functions within the organisation
(e.g., data engineers are tasked with
235 assigned and adequately
resourc ed. The impact
assessment identifies the
entitlements of rights-holders
and the duties and
responsibilities of relevant duty -
bearers (e.g. , developers,
companies buying digital
products or services, those using
or applying digital products and
services, and government
authorities). changing the design to limit potential
misuse )
• Make sure that sufficient resources are
dedicated to undertaking the
HUDERIA, as well as implementi ng the
impact management plan (e.g.,
adequate time, as well as financial and
human resources)
• Meaningfully and appropriately
engage relevant duty -bearers in the
impact assessment process, including
in impact prevention, mitigation and
remediation (e.g. , data protection
authorities are engaged since some
systemic impacts can best be dealt
with through data protection policies
and regulation)
• Ensure that project team members
who are undertaking the H UDERIA
have the relevant interdisciplinary
skills and expert ise (including human
rights, technical, legal, domain,
language, and local knowledge) to
undertake the impact assessment in
the given context and with regard to
the specific AI product or service (e.g. ,
data engineers and software
developers might need to be involved)
• Prioritize that the HUDERIA is carried
out by a diverse and inclusive group of
team members
236 Three Steps in the HUDERIA Process
Once you have set up your HUDERIA process in accordance with the criteria
outlined above, your project team will be ready to begin working on the content
of the impact assessment. Although the impact assessment process may take
many different shapes —depending on the needs and resources of your
organisation and the results of your PCRA and SEP—there are a few steps that
should be common to any full HUDERIA process:
Building the Content of your HUDERIA
As noted above, t he content of the HUDERIA is assembled to allow your team (1)
to re-examine and re -evaluat e the potential harms you have already identified in
your PS Report and your PCRA , (2) to contextualize and corroborate these
potential harms in dialogue with stakeholders , (3) t o identify and analyse further
potential harms through extended project team reflection and by giving
stakeholders the chance to uncover new harms and to pinpoint gaps in the
completeness of the previously enumerated harms , (4) t o explore, with
stakeholders, the severity (scope, scale, and remediability) of the potential
adverse impacts, so that the risks of these can be better assessed, prioritized,
managed, and mitigated, and (5) t o set up access to remedy for affected rights -
holders and other relevant parties. In the interest of cover ing this spectrum of
desiderata , the HUDERIA questions and prompts are organised around three
categories: Step 1.
Project team
prepares for
and begins
HUDERIA•Project team members prepare fortheHUDERIA
byrevisiting Table 1(Principles and Priorities,
Corresponding Rights and Freedoms, and
Elaborations), reviewing theoutcome ofActivity 3
ofPSreporting, andrevisiting PCRA andSEP
results ;Project team completes firstpart of the
HUDERIA and the additional project team
questions supplement .
Step 2.
Rights -holders
prepared for
and begin
HUDERIA•Rights -holders are prepared to
meaningfully engage in the HUDERIA
through Workflow Revisiting and Reporting
andbyreviewing PSReport and content of
Table 1with facilitators ;Rights- holders
provide input ontheHUDERIA questions
from thefirstpart through facilitators .
Step 3.
Integrate results
of Steps 1 and 2
through rights -
holder/duty -
bearer dialogue•Results ofSteps 1and2are compared
andintegrated intorights -holder/ duty-
bearer dialogue ;Impact mitigation
measures consonant with severity of
potential impacts are explored ; impact
management plan is co-designed by
project team and engaged rights -
holders ; access toremedy and
operational grievance mechanisms are
setup.
237
In the first part of the HUDERIA , the first t wo of these question types are applied
to each of the 20 impact prompts from section 2 of the PCRA. Just as in the PCRA,
these prompts are grouped under the appropriate principle/priority from Table 1 ,
so that the potential harms to the rights, freedoms, or elements of democracy and
the rule of law that are associated with each principle/priority can be considered
together under the appropriate heading . This makes it easier for project team
members and engaged rights-holders to identify potential gaps in the
completeness of the previously enumerated adverse impacts (as these fall under
each principle/priority) and to unearth corresponding harms that have not yet
been recognized. I t also allows considerations about the severity, mitigation, and
remedy of contextually specific impacts to be based around the appropriate
principle/priority and thus to include wider reflections on how each impact is
situated within the more general area of concern. For instance, particular harms
related to anthropomorphic deception, humiliation, and instrumentalization can be
thought about as adverse impacts on the dignity of affected rights -holders, and
associated considerations about the severity, mitigation, and remedy can thus
occur, in part, through the wider-angled lens of concerns about protecting and
respecting human dignity .
Identifying Potential Adverse Impacts
For the HUDERIA questions that come under the category of Identifying
potential adverse impacts , each prompt from section 2 of the PCRA is now
reformulated as an explanation-seeking “how , if at all, ” question that enables the
project team members and stakeholders to interrogate the ways in which the
right, freedom, or element of democracy and the rule of law associated with the
principle/priority under consideration could be adversely impacted.50 This enables
more open-ended and flexible deliberation on the range of potential adverse
impacts at the same time as it allow s participants in the impact assessment
process to zoom in on the specific contexts of the impact s to facilitate better, more
granular understanding s of their scope, scale, and remediability .
50 N.B., for these questions, project team members should refer back to Activity 3, Scope
Impacts to Human Rights, Democracy, and Rule of Law , in the PS Report and work
from the previous answers given there.
3 Categories of HUDERIA questions and prompts :
1.Identifying potential adverse impacts
2.Assessing the severity of potential impact s identified
3.Mitigating potential impacts and setting up access to
remedy
238 The Identifying potential adverse impact questions also include a final gap-
identi fying “in what other ways , if at all” question that encourages project team
members and engaged rights-holders to think beyond the pre -identified adverse
impacts first introduced in the PCRA and to discern novel harms. Here is how this
appears in the section on the Protection of Human Freedom and Autonomy :
HUDERIA Questions Responses
Protection of Human Freedom and Autonomy
IDENTIFYING POTENTIA L ADVERSE IMPACTS
How, if at all, could the system adversely affect
or hinder the abilities of rights -holders to make
free, independent, and well -informed decisions
about their lives or about the system’s
outputs?
How, if at all, could the system adversely affect
or hinder the capacities of rights -holders to
flourish, to fully develop themselves, and to
pursue their own freely determined life plans ?
How, if at all, could the deployment of the
system result in the arbitrary deprivation of
rights -holders' physical freedom or personal
security, or the denial of their freedoms of
expression, thought, conscience, or assembly
In what other ways, if any, could the use of this
system adversely impact the freedom and
autonomy of affected rights -holders?
Assessing the Severity of Potential Impacts
As explained in the United Nations Guiding Principles on Business and Human
Rights (UNGP) , assessing the severity of potential human rights impacts involves
consideration of th eir scale, scope, and remediability , where scale is defined as
“the gravity or seriousness of the impact,” scope as “ how widespread the impact
is, or the numbers of people impacted,” and remediability, as the “ability to restore
those affected to a situation at least the same a s, or equivalent to, their situation
before the impact ”.51 The UNGP further notes: “Where it is necessary to prioritize
51. See Principle 14 of (UNGP , 2011), but also Council of Europe ’s “Recommendation
CM/Rec(2020) 1 of the Committee of Ministers to member States on the human rights
239 actions to address actual and potential adverse human rights impacts, business
enterprises should first seek to prevent and mitigate those that are most severe
or where delayed response would make them irremediable.”52
The H
UDERIA questions that appear in the Assessing the Severity of Potential
Impacts section are intended to bring clarity to the prioriti sation of impact
mitigation and assurance actions by allowing the severity levels of an AI system ’s
potential impacts to be differentiated, elucid ated, and refined by relevant rights-
holders or their proxies . The elements of differentiation and inclusive rights -
holder participation are crucial here. Different human rights impacts may have
varying degrees of relative severity and must therefore be carefully distinguished
and understood both individually and in relation to each other. Likewise ,
understanding the scale, scope, and remediability of potential human rights
impacts requires that the lived experience and contextual insight s of a diverse and
inclusive group of potentially affected rights -holders be included in the assessment
process . This is because , as the United Nations Human Rights Office of the High
Commissioner notes, “scale, scope and remediability may differ for different
individuals or groups at heightened risks of becoming vulnerable or marginalized,
and…there may be different risks faced by different groups, such as men and
women.”53 Including a wide range of perspectives in the impact assessment
process—especially from members of vulnerable and marginalized groups —allows
for essential first-hand knowledge about how different groups may experience
harms differently to be incorporated into assessments of potential severity.
Scale
Assessing the scale of a potential adverse impact on human rights, fundamental
freedoms and elements of democracy and the rule of law involves both
consideration of the gravity or seriousn ess of the impact and a contextually
sensitive understanding of the way that different groups (especially vulnerable,
historically -discriminated- against, and marginalized groups) could suffer prejudice
impacts of algorithmic systems” and the UN Human Rights Office of the High Commissioner
(UNHR OHC) (2020) “Identifying and Assessing Human Rights Risks related to End -Use”.
52 See (UNGP 2011, p. 26) . The commentary to this paragraph expands, “ While business
enterprises should address all their adverse human rights impacts, it may not always be
possible to address them simultaneously. In the absence of specific legal guidance, if
prioritization is necessary business enterprises should begin with those human rights
impacts that would be most severe, recognizing that a delayed response may affect
remediability. Severity is not an absolute concept in this context, but is relative to the
other human rights impacts the business enterprise has identified. ” See also (UNHROHC
2020).
53 (UNHROHC 2020, p. 7). The interpretive guidance for the UNGP underlines this point:
“Depending on the operational context, the most severe human rights impact may be
faced by persons belonging to groups that are at higher risk of vulnerability or
marginalization, such as children, women, indigenous peoples, or people belonging to
ethnic or other minorities. If the enterprise decides it needs to prioritize its responses to
human rights impacts, it should take into account the vulnerability of such groups and the
risk that a delayed response to certain impacts could affect them disproportionately.” See:
UNHROHC (2012) “ The Corporate Responsibility to Protect Human Rights: An Interpretive
Guide”. p. 84.
240 in their exercise of the right or freedom under consideration in different ways and
to different extents.
To orient your consideration s of the gravity of potential adverse impacts , review
the following Gravity Table54:
Gravity Level Descripti ons
Catastrophic
Harm Catastrophic prejudices or impairments in the exercise
of fundamental rights and freedoms that lead to the deprivation of the right to life; irreversible i
njury to
physical, psychological, or moral integrity; deprivation
of the welfare of entire groups or communities;
catastrophic harm to democratic society, the rule of law,
or to the preconditions of democratic ways of life and
just legal order ; deprivation of individual freedom and
of the right to liberty and security; h arm to the
biosphere.
Critical Harm
Critical prejudices or impairment s in the exercise of
fundamental rights and freedoms that lead to the
significant and enduring degradation of human dignity, autonomy, physical, psychological, or moral integrity ,
or the integrity of communal life, democratic society, or
just legal order
Serious Harm
Serious prejudices or impairments in the exercise of
fundamental rights and freedoms that lead to the
temporary degradation of human dignity, autonomy,
physical, psychological, or moral integrity , or the
integrity of communal life, democratic society, or just
legal order or that harm to the information and
communication environment
Moderate or
Minor Harm Moderate or minor prejudices or impairments in the
exercise of fundamental rights and freedoms that do not lead to any signific ant, enduring, or temporary
degradation of human dignity, autonomy, physical,
psychological, or moral integrity
, or the integrity of
communal life, democratic society, or just legal order
In the HUDERIA questions contained in this subsection , you will be prompted to
characterize the gravity of each potential adverse impact that has been identified.
54 Note that the use of this table to help you think through the gravity level of a possible
adverse human rights impac t differs from the use of the Gravity Table in the PCRA, which
asked that the potential gravity of a harm be estimated assuming its maximal impact. In
the HUDERIA, both project team members and potentially impacted rights -holders should
engage in contextua lly aware explorations of the actual gravity or seriousness of the
potential impact.
241 You will also be asked to take into account the different ways and different extent s
to which individuals or groups who possess characteristics that could ma ke them
more vulnerable to the adverse impact could suffer the harm:
Your responses to these questions (and th e responses of potentially impacted
rights-holders) will subsequently serve an important function during the impact
mitigation planning stage when the redress and prioritization of harms is under
consideration. At this point, however, your focus should be on (1) thinking through
the varying contexts of the potential harms , (2) making sure that considerations
about the differing vulnerabilit ies of rights-holders to suffering harm is integrated
into the gravity assessment, and (3) ensuring that relevant rights-holders or their
proxies are appropriately and sufficiently represented in this assessment process .
Scope
In the Scope subsection, you will be asked to consider how extensive or
widespread each of the potential adverse impacts identified could be in terms of
the numbers of people potentially impacted , the timescale of the impacts, and the
exposure of particular groups of affected rights-holders to harm :
SCOPE: Before answering the questions below, review the answers to Activity
1 (specifically in the Project section) of the PS Report and questions 10 and 11
from the PCRA (on numbers of rights -holders affected by the AI system and
timescale of the system’s potential impacts) . ASSESSING TH E SEVERITY OF POTENTIAL IMPACTS
SCALE: Before answering the questions below, review the descriptions of harm
in Gravity Table contained in the accompanying user’s guide.
For each potential impact identified in the previous
section, would you characterize it as a catastrophic,
critical, serious, or minor harm?
For each potential impact identified in the previous
section, are there individuals or groups who
possess charact eristics that could make them more
vulnerable to the impact? If so, what are these
characteristics and could those who possess them
suffer the harm more acutely or seriously than
others?
For each potential impact identified in the previous section, which
individuals or groups could
encounter the gravest impact on the right or
freedom under consideration?
242 For each potential impact identified in the previous section, does the answer given to question 10 of the
PCRA seem to provide an accurate estimate
of the
scope of the impact ? If not, how many rights -
holders could encounter a deprivation of the right or freedom under consideration?
For each potential impact identified in the previous section,
are there groups who possess
characteristics that could m ake them vulnerable to
higher levels of exposure to the impact ? If so, how
much exposure could these groups face?
For each potential impact identified in the previous section,
consider the overall timescale of the AI
system’s impacts on the right or fre edom under
consideration. Are there cumulative or aggregate
impacts of the system on rights -holders and their
progeny that could expand the effects of the system beyond the scope of impact identified in the previous answers?
The first question asks that you re -evaluate the answer given to question 10 of
the PCRA ( about the number of rights -holders potentially impacted by the system)
in the context of the specific right or freedom under consideration . This is intended
as a sense check. In some cases, the estimated total number of rights-holders
potentially affect ed by a system ’s use may be greater than the number of those
who could suffer from a particular harm, as, for instance , where a prejudice in the
exercise of a right or freedom is suffered disproportionately by a specific group
(e.g. a commercial insurance risk model used in healthcare that disproportionately
mismeasures the level of chronic illness for members of a particular racial group
because they are systematically underrepresented in the training dataset ). In
other instances, the estimated number of rights -holders potentially affected by a
system’s use may (if misidentified) be much less than the number of those who
could suffer from a particular harm, as, for instance, where a system has knock -
on effects over time that lead to wider legacy impacts (e.g. harms done to
members of a particular racial group by a discriminatory predictive decision-
support model used in medicine leads to the group ’s long-term reluctance and
distrust in the medical system and much wider impacts on the welfare of its
members).
The second question asks you to consider whether there are groups who possess
characteristics that could make them vulnerable to higher levels of exposure to
the impact under consideration. The term “level of e xposure” here is understood
as the proportion of a group that is adversely impacted by an AI system, where,
in the case that a small fraction of the group is impacted, members have low lev els
of exposure and in the case that a very large fraction of the group is impacted,
members have high levels of exposure. As an example, members of a group that
is characterised by low socioeconomic status may have a high level of exposure
to the potential adverse impacts of an AI model that is used to allocate benefits
243 in a social -services setting , because a large portion of that groups needs to access
the benefits of social services in order to survive.
The final question on scope asks you to consider the overall timescale of the AI
system’s impacts on the right or freedom under consideration , taking the answer
given to question 11 of the PCRA as a starting point. In particular, it asks you to
think through whether there may be “cumulative or aggregate impacts of the
system on rights -holders and their progeny that could expand the effects of the
system beyond the scope of impact identified in the previous answers ”. Identifying
cumulative or aggregate impact can be much more difficult than identify ing har ms
directly or proximately caused by the use of an AI system, and providing a full answer to this question, for the project team, may involve additional research and consultation with domain experts and other relevant stakeholders . This difficulty
results from the fact that cumulative impacts are often incremental and more
difficult to perceive, and they frequently involve complex contexts of multiple
actors or projects operat ing in the same area or sector or affecting the same
populations.
55 Some “big picture” questions to reflect on when assessing
cumulative or aggregate impacts include:
• C
ould the production and use of the system contribute to wider adverse
human rights impacts when its deployment is coordinated with (or occurs
in tandem with) other systems that serve similar functions or purposes? For
example, if the human rights impacts of a conversational AI system that
serves a customer service function are considered in combination with the
proliferation of many other similar systems in a given sec tor, concerns
about wider cumulative impacts on labo ur displacement , the right to work,
and other social and economic rights become relevant.
• Could the production and use of the system replicate, reinforce, or augment socio-historically entrenched legacy harms that create knock -on effects in
impacted individuals and groups? For example, an AI system that uses facial
analysis technology to match photographs to other pictures contained in
databases scraped from social media websites could add to the legacy
harms of companies that have used data recklessly and eroded public trust
regarding the respect of privacy and data protection rights in the digital
sphere, thereby creating wider chilling effects on elements of open
communication, information shar ing, and interpersonal connection that are
essential component s for the sustainability of democratic forms of life .
• Could the production and use of the system be understood to contribute to
wider aggregate adverse impacts on the biosphere and on planetary health
when its deployment is considered in combination with other systems that
may have similar envi ronmental impacts? For example, a complex AI
system that involves moderate levels of energy consumption in training,
operating, or data storage m ay be seen to contribute to significant
environmental impact when considered alongside the energy consumption
of similar systems across AI innovation ecosystems.
55 This explanation of cumulative impacts draws heavily on Götzmann, N., Bansal, T.,
Wrzoncki, E., Veiberg, C. B., Tedaldi, J., & Høvsgaard, R. (2020). Human rights impact
assessment guidance and toolbox. The Danish Institute for Human Rights. p p. 86-88.
244 Remediability
Questions in the Remediability subsection seek information about the degree o f
reparability or restoration that is possible for affected rights -holders as the result
of efforts to overcome the adverse impac t under consideration. As noted above,
remediability involves the “ability to restore those affected to a situation at least the same as, or equivalent to, their situation before the impact.” Much as with
considerations surrounding the gravity of a potential impact, gaining an
understanding of how remediable or reversable a harm is will depend on
knowledge both about the specific context of the harm and about the affected
rights-holders who are subjected to it. Members of different groups may have
different levels of resilienc e, depending on their positions in society and the
circumstance s of the harm (with vulnerable and marginalised groups often
possessing less resilienc e than other dominant , privileged, or majority groups).
Those seeking to understand t he degree of remediability for any specific adverse
impact on fundamental rights and freedoms must therefore pay cl ose attention to
who could be harmed and in what context s.
Establishing the degree of remediability for a potential adverse impact involve s
considerations about the effort needed to overcome and (potentially) reverse the
harm
56:
Degree of
Remediability Effort
Very Low
Suffered harm may be irreversible and may not be
overcome (e.g., long-term psychological or physical
ailments , death , etc.)
Low
Suffered harm can be overcome albeit with serious
difficulties and enduring effects (e.g., economic loss,
property damage, worsening health, loss of social trust,
deterioration of confidence in the legal order, etc.)
Medium
Suffered harm can be overcome despite some
difficulties (e.g. , extra costs, fear, lack of
understanding, stress , minor physical ailments, etc.)
High Suffered harm can be overcome without any problem
(e.g., time spent amending information, annoyances,
irritati ons, etc.)
You will be prompted to use this Table as a starting point, to answer these
questions:
56 This Table is adapted from Table 6 in Mantelero & Esposito (2021) – An evidence-based
methodology for human rights impact assessment (HRIA) in the development of AI data-
intensive systems .
245 REMEDIABILITY: Before answering the questions below, review the
descriptions of harm in the Remediability Table contained in the accompanying
user’s guide.
For each potential impact identified in the previous section, how remediable is the adverse effect
on impacted
right-holders? (Refer to the
Remediability Table for categories of
remediability to consider)
For each potential impact identified in the previous section, are there groups
who possess characteristics that could make their members vulnerable to
lower levels of remediability for the
impact (i.e., more subject to difficult -
to-overcome or irreversible harm)? If
so, to what degree is the impact on
members of each group irreversible or irremediable?
Additional Questions for the Project Team
At the end of Part I of the HUDERIA, you’ll find a supplemental section which
contains additional questions about human rights diligence to be answered by the project team. The purpose of these questions i s to establish that all human rights
impacts that t he project is linked to across its value chain have been appropriately
considered and assessed
57:
ADDITIONAL QUESTION S FOR THE PROJECT TEAM
These questions are meant to serve as a supplement to Part I of the HUDERIA.
They are intended to be answered by the project team and then to be revisited
during the dialogue between duty -bearers and rights -holders in Step 3 of the
HUDERIA
Has a thorough assessment of the
human rights compliant business
practices of all businesses, parties and entities involved in t
he value chain of
the AI product or service been
undertaken? This would include all
businesses, parties, and entities
directly linked to your business lifecycle
through supply chains, operations,
57 For further guidance on due diligence processes for responsible organisational conduct
as it relates to the value chain see OECD (2018), Due Diligence Guidance for Responsible
Business Conduct .
246 contracting, sales, consulting, and
partnering. (If not, do you have plans
to do this?)
Do you have processes in place and
appropriate resources dedicated to
maintaining human rights diligence
with regard to all businesses, parties,
and entities directly linked to your
business lifecycle?
Do you require you r vendors and
suppliers to demonstrate compliance
with legal protections in place to
protect rights -holders from modern
slavery, human trafficking, and labour
exploitation?
Mitigating Potential Impacts and Setting Up Access to Remedy
Once sections on Identifying Potential Adverse Impacts , Assessing the
Severity of P otential Impacts Identified , and Additional Questions for the
Project Team have been completed, the project team and eng aged rights -holders
should be brought together to discuss and compare results and to begin impact
prevention and mitigation prioritization and planning.58
D
iligent impact prevention and mitigation planning begins with a scoping and
prioritization stage. Project team members and engaged rights- holders should go
through all of the identified potential adverse impacts and map out the
interrelations and interdependencies between them as well as surrounding social
factors (such as contextually specific rights-holder vulnerabilities and
precariousness ) that could make impact mitigation more challenging. Where
prioritization of prevention and mitigation actions is necessary (for instance, where
delays in addressing a potential harm could reduce its remediability), decision-
making should be steered by the relative severity of the impacts under
consideration. As a general rule, while impact prevention and mitigation planning
may involve prioritization of actions , all potential adverse impacts on human
rights , fundamental freedoms, and elements of democracy and the rule of
law mus t be addressed.
The mitigation hierarchy
When deciding upon the range of available actions that can be taken to prevent
or mitigate potential adverse impacts, project team members and engaged rights
should refer to what many different types of impact assessment processes call the
“mitigation hierarchy” (avoid, reduce, restore, compensate). During the initial
iteration of the HUDERIA—i.e at the early, design stage of the project lifecycle—
the impacts under consideration will be potential impacts, namely, impacts that
have not yet happened, so mitigation options of “avoid” and “reduce” will be more
58 The following material in this sec tion draws heavily on (Götzmann et al 2020).
247 relevant, whereas in later iterations of HUDERIA monitoring, revisitation and re -
evaluation—i.e. during the deployment stage—actual adverse impacts may arise,
making mitigation options of “restore” and “compensate” relevant alongside
“avoid” and “reduce”. Descriptions of the elements of the mitigation hierarchy are
as follows:
Mitigation
Hierarchy Description
Avoid
Making changes to the design, development, or
deployment processes behind the production and use of
the AI system (or to the AI system itself) , at the outset,
to avoid the potential adverse impact.
Reduce
Implementing actions in the design, development, or
deployment processes behind the production and use of the AI system (or making changes
to the AI system
itself) to minimise potential or actual impacts.
Restore
Taking actions to restore or rehabilitate affected rights -
holders to a situation at least the same as, or equivalent
to, their situation before the impact
Compensate Compensating in kind or by other means, where other
mitigation approaches are neither possible nor effective.
The use of the term “mitigation hierarchy ” is prescriptive and meaningful here
in the sense that it is directing actions to give precedence to avoiding potential
adverse impacts altogether, in the first instance, and then to reducing and
remediating them. It is also notable that , at later stages of the project lifecycle,
where options of restoration and compensation become more relevant, more than
one of these mitigation options may be at play contemporaneously (as, for
instance , where an affected rights -holder needs to be restored simultaneously as
immediate actions to minimi se further harms must be taken). At all events,
decisions about which mitigation action(s) to take should be guided, first and
foremost, by human rights considerations, and choices made to avoid and reduce
adverse impacts should take first position and be kept distinct from choices to
compensate or remunerate impacted rights -holders for suffered harms.
Co-designing and implementing an impact management plan
Once potential adverse impacts have been mapped out and organised, and
mitigation actions have been considered, project team members and engaged
rights-holders should begin co -designing an impact mitigation plan (IMP). The IMP
will become the part of your HUDERIA that specifies t he actions and processes
needed to address the adverse impacts which have been identif ied and t hat
248 assign s responsibility for the completions of these tasks and processes .59 As such,
the IMP will serve a crucial documenting function, which, as part of the HUDERIA,
will then become an evidentiary element of the argument -based assurance
practices carried out in your HUDERAC. The IMP should include:
• A
summary of combined impact findings from the project team and
engaged rights -holders with a detailed mapping of potential adverse
impacts and any significant inter relationships between them
• A clear presentation of the measures and actions that will be taken and,
where needed, a presentation of the prioritization of actions accompanied
by an explanation of prioritization rationale (The project team and engaged
rights-holders should also draw on recommendations generated in the
results of Section 1 of the PCRA when setting out needed measures and
actions . This will feed directly into the goal definition and property
determination aspect s of your HUDERAC .)
• For any potential adverse impacts identified in direct links of the business
or organisation to businesses, parties and entities involved in the value
chain of the AI product or service, a presentation of plans to exercise
leverage or alter relationships to mitigate impacts
• A clarification of the roles and responsibilities of the various actors involved
in impact mitigation, management , and monitoring
• (From the project team directly): a statement that ind icates the allotted
commitment of time and resources that will be made available to carry out
the IMP
• A plan for monitoring impact mitigation efforts and for re-assessing and re-
evaluati ng the HUDERIA during subsequent development and deployment
phases of the project lifecycle
• An accessible presentation of access to remedy and operational -level
grievance mechanisms that will be available to impacted rights -holders if
they suffer any harm s identified in the HUDERIA or any other adverse
impacts that were not detected or recognized in the impact assessment
process
59 Daniel M. Franks (2011), ‘Management of the social impacts of mining’, in P. Darling
(Ed), SME Mining Engineering Handbook (3rd edn), pp.1817-1825; Daniel M. Franks and
Frank Vanclay (2013), ‘Social impact management plans: Innovation in corporate and
public policy’, Environmental Impact Assessment Review , 43, p.57. Cited in (Götzmann et
al 2020, p. 95 ).
249 Revisiting the HUDERIA across the Project Lifecyle
Carrying out a HUDERIA at the beginning of an AI innovation project is only a first,
albeit critical, step in a much longer, end- to-end proc ess of responsive evaluation
and re-assessment. In your i mpact assessment process, you must pay continuous
attention both to the dynamic and changing character of the AI production and
implementation lifecycle and to the shifting conditions of the real-wor ld
environments in which your AI system will be embedded.
There are two factors that necessitate this demand for responsiveness in
sustainable AI innovation:
1. Production and implementation factors : Choices made at any point
along the design, development, and deployment workflow may impact prior
decisions and assessments —leading to a need for re- assessment,
reconsideration, and amendment. For instance, design and development
choices could be made that were not anticipated in the initial impact
assessment (such choices might include adjusting the target variable,
choosing a more complex algorithm, or grouping variables in ways that
may impact specific groups). These changes may influence how an AI
system performs or how it impacts affected individuals and gro ups.
Processes of AI model design, development , and deployment are also
iterative and frequently bi -directional, and this often results in the need for
revision and update. For these reasons, responsible and sustainable AI
design, development, and use must remain agile, attentive to change, and
at-the-ready to move back and forth across the decision -making pipeline
as downstream actions affect upstream choices and evaluations.
2. Environmental factors: Changes in project -relevant social, regulatory,
policy or legal environments (occurring during the time in which the system
is in production or use) may have a bearing on how well the model works
and on how the deployment of the system impacts the rights and freedoms
of affected individuals and groups. Likewise, domain-level reforms, policy
changes, or changes in data recording methods may take place in the population of concern in ways that affect whether the data used to train
the model accurately portrays phenomena, populations, or related factors
in an accurat e manner. In the same vein, cultural or behavioral shifts may
occur within affected populations that alter the underlying data distribution and hamper the performance of a model, which has been trained on data
collected prior to such shifts. All of these alterations of environmental
conditions can have a significant effect on how an AI system performs and on the way it impacts affected rights-holders and communities.
Bearing these factors in mind , it become s clear that revisitation of your HUDERIA
at relevant points across your project lifecycle will play a pivotal role in its
continued efficacy and reliability. As part of the IMP component of your initial
HUDERIA activiti es, a plan is established for monitoring impact mitigation efforts
and for re-assessing and re -evaluating your HUDERIA during subsequent
development and deployment phases of the project lifecycle . Such follow -up
250 processes of re -assessment and re- evaluation should include an approp riate
stakeholder engagement aspect, which is proportionate to the risks posed by the
system . Such processes should also remain as responsive as possible to the way
that system is interacting with its operating environments and with impacted rights-holders . In rapidly evolving or changing contexts, there may arise a need
for more frequent re-assessment and re -evaluation interventions , and your
project team must remain flexible and ready to amend previous iterations of the HUDERIA and IMP .
Additionally, i n the moni toring phas e—during system deployment—considerations
of actual adverse impacts are added to considerations of potential impacts. This
means that options for mitigating actual impacts (i.e. restoration and
compensation) will now be included as options in the mitigation hierarchy. Keep
in mind , however, that the avoidance , reduction, and remediation of any adverse
human rights impacts should be given primacy in mitigation efforts and that
compensation cannot replace these mitigation actions.
251 HUDERIA Template
Human Rights, Democracy, and the Rule of Law Impact Assessment (HUDERIA)
Template
Questions Responses
Respect for and protection of human dignity
IDENTIFYING POTENTIA L ADVERSE IMPACTS
How, if at all, could this system prompt confusion
or uncertainty in rights -holders about whether they
are interacting with an AI system rather than a
human being?
How, if at all , could this system expose rights -
holders to humiliation ( i.e put them in a state of
helplessness or insignificance; dehumanize them or
deprive them of a sense of individual identity) ?
How, if at all , could the system expose rights -
holders to instrumentalization or objectification
(treating them solely as exchangeable, as
statistical aggregate s, as means to end s, or as
objects to be freely manipulated or steered) ?
How, if at all , could this system expose rights -
holders to displacement, redundancy, or a sense of
252 worthlessness in regard to their participation in
work life, creative life, or the life of the community ?
In what other ways, if any, could the use of this
system adversely impact the dignity of affected
rights -holders?
ASSESSING THE SEVERI TY OF POTENTIAL IMPA CTS
SCALE: Before answering the questions below, review the descriptions of harm in Impact Gravity Table contained in the
accompanying user’s guide.
For each potential impact identified in the previous section, would you characterize it as a catastrophic,
critical, serious, or m oderate/minor harm?
For each potential impact identified in the previous section,
are there individuals or groups who
possess characteristics that could make them more vulnerable to the impact
? If so, could these
individuals or groups experience the harm more
intensely or seriously than o thers?
For each potential impact identified in the previous section,
which individuals or groups could
encounter the gravest prejudice in their exercise of the right or freedom under consideration?
SCOPE: Before answering the questions below, review t he answers to Activity 1 (specifically in the Project section) of the PS
Report and questions 10 and 11 from the PCRA (on numbers of rights -holders affected by the AI system and timescale of the
system’s potential impacts) .
For each potential impact identified in the previous
section, does the answer given to question 10 of
253 the PCRA seem to provide an accurate estimate of
the scope of the impact ? If not, how many rights -
holders could encounter a deprivation of the right or freedom under consideration?
For each potential impact identified in the previous section,
are there groups who possess
characteristics that could make them vulnerable to higher levels of exposure to the impact ? If so, how
much exposure could these groups face?
For each potential impact identified in the previous section,
consider the overall timescale of the AI
system’s impacts on the right or freedom under
consideration. Are there cumulative or aggregate
impacts of the system on rights -holders and their
progeny that could expand the effects of the
system beyond the scope of impact identified in the
previous answers?
REMEDIABILITY: Before answering the questions below, review the descriptions of harm in Remediability Table contained in the
accompanying user’s guide.
For each potential impact identified in the previous section,
how remediable is the potential adverse
effect on impacted right -holders? (Refer to the
Remediability Table for categories of remediability to consider)
For each potential impact identified in the previous section,
are there groups who possess
characteristics that could make them vulnerable to
lower levels of remediability for the impact (i.e.
more subject to difficult -to-overcome or
254 irreversible harm)? If so, to what degree is the
impact irreversible or irremediable?
Protection of Human Freedo m and Autonomy
IDENTIFYING POTENTIA L ADVERSE IMPACTS
How, if at all, could the system adversely affect or
hinder the abilities of rights-holders to make free,
independent, and well -informed decisions about
their lives or about the system’s outputs ?
How, if at all, could the system adversely affect or
hinder the capacities of rights -holders to flourish, to
fully develop themselves, and to pursue their own
freely determined life plans ?
How, if at all, could the deployment of the system
result in the arb itrary deprivation of rights -holders'
physical freedom or personal security, or the denial
of their freedoms of expression, thought,
conscience, or assembly ?
In what other ways, if any, could the use of this
system adversely impact the freedom or autonomy
of affected rights-holders?
ASSESSING THE SEVERI TY OF POTENTIAL IMPA CTS
SCALE: Before answering the questions below, review the descriptions of harm in Impact Gravity Table contained in the
accompanying user’s guide.
255 For each potential impact identified in the previous section, would you characterize it as a catastrophic,
critical, serious, or moderate/minor harm?
For each potential impact identified in the previous
section, are there individuals or groups who
possess characteristics that could make them more
vulnerable to the impact? If so, what are these
characteristics and could those who possess the m
suffer the harm more acutely or seriously than
others?
For each potential impact identified in the previous
section, which individuals or groups could
encounter the gravest impact on the right or
freedom under consideration?
SCOPE: Before answering the questions below, review the answers to Activity 1 ( Project Section) of the PS Report and answers
to questions 10 and 11 from the PCRA (on numbers of rights -holders affected by the AI system and timescale of the system’s
potential impacts) .
For each potential impact identified in the previous
section, does the answer given to question 10 of
the PCRA seem like accurate and realistic
representation of the potential scope of the impact ?
If not, how many rights -holders could encounter a
deprivati on of the right or freedom under
consideration?
For each potential impact identified in the previous section,
are there groups who possess
characteristics that could make them vulnerable to
256 higher levels of exposure to the impact ? If so, how
much exposur e could these groups face?
For each potential impact identified in the previous section,
consider the overall timescale of the AI
system’s impacts on the right or freedom under
consideration (answer to question 11 of the PCRA).
Are there cumulative or a ggregate impacts of the
system on rights- holders or their progeny that
could extend the effects of the system beyond the
scope of impact identified in the previous answers?
REMEDIABILITY: Before answering the questions below, review the Remediability Table contained in the accompanying user’s
guide.
For each potential impact identified in the previous section,
how remediable is the potential adverse
effect on affected right -holders? (Refer to the
Remediability Table for categories of remediability to consider)
For each potential impact identified in the previous section,
are there groups who possess
characteristics that could make them vulnerable to lower levels of remediability for the impact (i.e.
more subject to difficult -to-
overcome or
irreversi ble harm)? If so, what degree of
irreversible harm do these groups face?
Prevention of harm and protection of the right to life and physical, psychological, and moral integrity
IDENTIFYING POTENTIA L ADVERSE IMPACTS
257 How, if at all, could interaction with the AI system deprive rights -holders of their right to life or their
physical, psychological, or moral integrity?
How, if at all, could the AI system’s interactions
with the environment (either in the processes of its
production or in its deployment) harm the
biosphere or adversely impact the health of the
planet?
In what other ways, if any, could the use of this
system adversely impact the right to life or
physical, psychological, and moral integrity of
affected rig hts-holders?
ASSESSING THE SEVERI TY OF POTENTIAL IMPA CTS
SCALE: Before answering the questions below, review the descriptions of harm in Impact Gravity Table contained in the
accompanying user’s guide.
For each potential impact identified in the previous
section, would you characterize it as a catastrophic,
critical, serious, or moderate/minor harm?
For each potential impact identified in the previous
section, are there individuals or groups who
possess characteristics that could make them more vulnerable to the impact? If so,
what are these
characteristics and could those who possess the m
suffer the harm more acutely or seriously than
others?
For each potential impact identified in the previous
section, which individuals or groups could
258 encoun ter the gravest impact on the right or
freedom under consideration?
SCOPE: Before answering the questions below, review the answers to Activity 1 ( Project Section) of the PS Report and answers
to questions 10 and 11 from the PCRA (on numbers of rights -holders affected by the AI system and timescale of the system’s
potential impacts) .
For each potential impact identified in the previous section,
does the answer given to question 10 of
the PCRA seem like accurate and realistic
representation of the potential scope of the impact ?
If not, how many rights -holders could encounter a
deprivation of the right or freedom under
consideration?
For each potential impact identified in the previous section,
are there groups who possess
charact eristics that could make them vulnerable to
higher levels of exposure to the impact ? If so, how
much exposure could these groups face?
For each potential impact identified in the previous section,
consider the overall timescale of the AI
system’s impacts on the right or freedom under
consideration (answer to question 11 of the PCRA).
Are there cumulative or aggregate impacts of the
system on rights- holders or their progeny that
could extend the eff ects of the system beyond the
scope of impact identified in the previous answers?
REMEDIABILITY: Before answering the questions below, review the Remediability Table contained in the accompanying user’s
guide.
259 For each potential impact identified in the previous
section, how remediable is the potential adverse
effect on affected right -holders? (Refer to the
Remediability Table for categories of remediability to consider)
For each potential impact identified in the previous
section, are there groups who possess
characteristics that could make them vulnerable to
lower levels of remediability for the impact (i.e.
more subject to difficult -to-overcome or
irreversible harm)? If so, what degree of
irreversible harm do these groups face?
Non-discrimination, fairness, and equality
IDENTIFYING POTENTIA L ADVERSE IMPACTS
How, if at all, could the AI system (either in the
processes of its production or in its deployment)
result in discrimination, have discriminatory effects
on impacted rights -holders, or perfor m
differentially for different groups in discriminatory or harmful ways —
including for intersectional
groups where vulnerable or protected
characteristics converge?
How, if at all, could use of the AI system expand
existing inequalities in the communities it affects or
augment historical patterns of inequity and
discrimination in these communities?
260 In what other ways, if any, could the use of this
system cause or contribute to inequity or
inequality?
ASSESSING THE SEVERI TY OF POTENTIAL IMPA CTS
SCALE: Before answering the questions below, review the descriptions of harm in Impact Gravity Table contained in the
accompanying user’s guide.
For each potential impact identified in the previous section, would you characterize it as a catastrophic,
critical, serious, or moderate/minor harm?
For each potential impact identified in the previous
section, are there individuals or groups who
possess characteristics that could make them more vulnerable to the impact? I f so,
what are these
characteristics and could those who possess the m
suffer the harm more acutely or seriously than
others?
For each potential impact identified in the previous
section, which individuals or groups could
encounter the gravest impact on t he right or
freedom under consideration?
SCOPE: Before answering the questions below, review the answers to Activity 1 ( Project Section) of the PS Report and answers
to questions 10 and 11 from the PCRA (on numbers of rights -holders affected by the AI system and timescale of the system’s
potential impacts) .
For each potential impact identified in the previous section,
does the answer given to question 10 of
the PCRA seem like accurate and realistic
261 representation of the potential scope of the impact ?
If not, how many rights -holders could encounter a
deprivation of the right or freedom under
consideration?
For each potential impact identified in the previous section,
are there groups who possess
charact eristics that could make them vulnerable to
higher levels of exposure to the impact ? If so, how
much exposure could these groups face?
For each potential impact identified in the previous section,
consider the overall timescale of the AI
system’s impacts on the right or freedom under
consideration (answer to question 11 of the PCRA).
Are there cumulative or aggregate impacts of the
system on rights- holders or their progeny that
could extend the eff ects of the system beyond the
scope of impact identified in the previous answers?
REMEDIABILITY: Before answering the questions below, review the Remediability Table contained in the accompanying user’s
guide.
For each potential impact identified in the previous
section, how remediable is the potential adverse
effect on affected right -holders? (Refer to the
Remediability Table for categories of remediability to consider)
For each potential impact identified in the previous section, are there groups who possess
characteristics that could make them vulnerable to lower levels of remediability for the impact (i.e.
more subject to difficult -to-overcome or
262 irreversible harm)? If so, what degree of
irreversible harm do these groups face?
Data protection and the right to respect of private and family life
IDENTIFYING POTENTIA L ADVERSE IMPACTS
How, if at all, could the design, development, and
deployment of the AI system harm the rights to
data protection enshrined in data protection and
privacy law and in the Council of Europe's
Convention 108+?
How, if at all, could the AI system intrude on or
interfere with the private and family life of rights -
holders in ways that prevent or impede them from
maintaining a personal sphere that is independent
from the transformative effects of AI technologies and in which they are at liberty to freely think, form
opinions and beliefs, and develop their personal
identities and intimate relationships without the
influence of AI techno logies?
ASSESSING THE SEVERI TY OF POTENTIAL IMPA CTS
SCALE: Before answering the questions below, review the descriptions of harm in Impact Gravity Table contained in the
accompanying user’s guide.
For each potential impact identified in the previous section, would you characterize it as a catastrophic,
critical, serious, or moderate/minor harm?
For each potential impact identified in the previous
section, are there individuals or groups who
263 possess characteristics that could make them more
vulnerable to the impact? If so, what are these
characteristics and could those who possess the m
suffer the harm more acutely or seriously than
others?
For each potential impact identified in the previous
section, which individuals or groups could
encounter the gravest impact on the right or
freedom under consideration?
SCOPE: Before answering the questions below, review the answers to Activity 1 ( Project Section) of the PS Report and answers
to questions 10 and 11 from the PCRA (on numbers of rights -holders affected by the AI system and timescale of the system’s
potential impacts) .
For each potential impact identified in the previous section,
does the answer given to question 10 of
the PCRA see m like accurate and realistic
representation of the potential scope of the impact ?
If not, how many rights -holders could encounter a
deprivation of the right or freedom under
consideration?
For each potential impact identified in the previous section,
are there groups who possess
characteristics that could make them vulnerable to
higher levels of exposure to the impact ? If so, how
much exposure could these groups face?
For each potential impact identified in the previous
section, consider the overal l timescale of the AI
system’s impacts on the right or freedom under
consideration (answer to question 11 of the PCRA).
Are there cumulative or aggregate impacts of the
264 system on rights -holders or their progeny that
could extend the effects of the system b eyond the
scope of impact identified in the previous answers?
REMEDIABILITY: Before answering the questions below, review the Remediability Table contained in the accompanying user’s
guide.
For each potential impact identified in the previous section, h
ow remediable is the potential adverse
effect on affected right -holders? (Refer to the
Remediability Table for categories of remediability to consider)
For each potential impact identified in the previous section,
are there groups who possess
characteristics that could make them vulnerable to lower levels of remediability for the impact (i.e.
more subject to difficult -to-
overcome or
irreversible harm)? If so, what degree of
irreversible harm do these groups face?
Social and economic rights
IDENTIFYING POTENTIA L ADVERSE IMPACTS
How, if at all, could the deployment of the AI
system harm the social and economic rights of
affected persons, including the right to just working conditions, the
right to safe and healthy working
conditions, the right to organize, the right to social security , and the rights to the protection of health
and to social and medical assistance as set out in the European Social Charter?
265 In what other ways, if any, coul d the use of this
system adversely impact other relevant social and
economic rights?
ASSESSING THE SEVERI TY OF POTENTIAL IMPA CTS
SCALE: Before answering the questions below, review the descriptions of harm in Impact Gravity Table contained in the
accompanying user’s guide.
For each potential impact identified in the previous section, would you characterize it as a catastrophic,
critical, serious, or moderate/minor harm?
For each potential impact identified in the previous
section, are there individuals or groups who
possess characteristics that could make them more vulnerable to the impact? If so,
what are these
characteristics and could those who possess the m
suffer the harm more acutely or seriously than
others?
For each potential impact identified in the previous
section, which individuals or groups could
encounter the gravest impact on the right or
freedom under consideration?
SCOPE: Before answering the questions below, review the answers to Activity 1 ( Project Section) of the PS Report and answers
to questions 10 and 11 from the PCRA (on numbers of rights -holders affected by the AI system and timescale of the system’s
potential impacts) .
For each potential impact identified in the previous
section, does the answer given to question 10 of
the PCRA seem like accurate and realistic
266 representation of the potential scope of the impact ?
If not, how many rights -holders could encounter a
deprivation of the right or freedom under
consideration?
For each potential impact identified in the previous section,
are there groups who possess
characteristics that could make them vulnerable to higher levels of exposure to the impact ? If so, how
much exposure could these groups face?
For each potential impact identified in the previous section,
consider the overall timescale of the AI
system’s impacts on the right or freedom under
consideration (answer to question 11 of the PCRA).
Are there cumulative or aggregate impacts of the
system on rights- holders or their progeny that
could extend the effects of the system beyond the scope of impact identified in the previous answers?
REMEDIABILITY: Before answering the questions below, review the Remediability Table contained in the accompanying user’s
guide.
For each potential impact identified in the previous section,
how remediable is the potential adverse
effect on affected right -holders? (Refer to the
Remediability Table for categories of remediability to consider)
For each potential impact identified in the previous section,
are there groups who possess
characteristics that could make them vulnerable to lower levels of remediability for the impac t (i.e.
more subject to difficult -to-overcome or
267 irreversible harm)? If so, what degree of
irreversible harm do these groups face?
Democracy
IDENTIFYING POTENTIA L ADVERSE IMPACTS
How, if at all, could the use or misuse of the AI
system lead to interference with free and fair
election processes or with the ability of impacted
individuals to participate freely, fairly, and fully in
the political life of the community through any of
the following: mass deception, mass manipulation, mass intimidatio n or behavioural control, or mass
personalized political targeting or profiling, at the
local, national, or global levels?
How, if at all, could the use or misuse of the AI
system lead to the mass dispersal, at local,
national, or global levels, of misinformation or
disinformation?
How, if at all, could the use or misuse of the AI
system lead to obstruction of informational
plurality, at local, national, or global levels?
How, if at all, could the use or misuse of the AI
system (in particular, in formation filtering models
such as recommender systems, search engines, or
news aggregators) lead to an obstruction of the
free and equitable flow of the legitimate and valid
forms of information that are necessary for the
meaningful democratic participati on of impacted
268 rights holders and for their ability to engage freely,
fairly, and fully in collective problem -solving?
How, if at all, could the use or misuse of the AI
system (in particular, models that track, identify,
de-anonymize or surveil rights -holders and social
groups or enable the creation of social graphs) lead
to interference with or obstruction of impacted
rights -holders' abilities to exercise their freedoms
of expression, assembly, or association?
In what other ways, if any, could the use of this
system adversely impact democra tic principles or
the preconditions of democratic ways of life?
ASSESSING THE SEVERI TY OF POTENTIAL IMPA CTS
SCALE: Before answering the questions below, review the descriptions of harm in Impact Gravity Table contained in the
accompanying user’s guide.
For each potential impact identified in the previous section, would you characterize it as a catastrophic,
critical, serious, or moderate/minor harm?
For each potential impact identified in the previous secti
on, are there individuals or groups who
possess characteristics that could make them more vulnerable to the impact? If so,
what are these
characteristics and could those who possess the m
suffer the harm more acutely or seriously than
others?
For each potential impact identified in the previous
section, which individuals or groups could
269 encounter the gravest impact on the right or
freedom under consideration?
SCOPE: Before answering the questions below, review the answers to Activity 1 ( Project Section) of the PS Report and answers
to questions 10 and 11 from the PCRA (on numbers of rights -holders affected by the AI system and timescale of the system’s
potential impacts) .
For each potential impact identified in the previous section, does the an
swer given to question 10 of
the PCRA seem like accurate and realistic
representation of the potential scope of the impact ?
If not, how many rights -holders could encounter a
deprivation of the right or freedom under
consideration?
For each potential impa ct identified in the previous
section, are there groups who possess
characteristics that could make them vulnerable to higher levels of exposure to the impact ? If so, how
much exposure could these groups face?
For each potential impact identified in the previous
section, consider the overall timescale of the AI
system’s impacts on the right or freedom under
consideration (answer to question 11 of the PCRA).
Are there cumulative or aggregate impacts of the
system on rights- holders or their progeny that
could extend the effects of the system beyond the
scope of impact identified in the previous answers?
REMEDIABILITY: Before answering the questions below, review the Remediability Table contained in the accompanying user’s
guide.
270 For each potential impact identified in the previous section,
how remediable is the potential adverse
effect on affected right -holders? (Refer to the
Remediability Table for categories of remediability to consider)
For each potential impact identified in the previous
section, are there groups who possess
characteristics that could make them vulnerable to lower levels of remediability for the impact (i.e.
more subject to difficult -to-
overcome or
irreversible harm)? If so, what degree of
irreversible harm do these groups face?
Rule of Law
IDENTIFYING POTENTIA L ADVERSE IMPACTS
How, if at all, could the deployment of the AI
system harm impacted individuals' right s to
effective remedy and to a fair trial (equality of
arms, right to a natural judge establis hed by law,
the right to an independent and impartial tribunal, and respect for the adversarial process)?
In what other ways, if any, could the use of this
system adversely impact the rule of law or the
precondition of a just legal order ?
ASSESSING THE SEVERITY OF POTENTIAL IMPACTS
SCALE: Before answering the questions below, review the descriptions of harm in Impact Gravity Table contained in the
accompanying user’s guide.
271 For each potential impact identified in the previous section, would you charac terize it as a catastrophic,
critical, serious, or moderate/minor harm?
For each potential impact identified in the previous
section, are there individuals or groups who
possess characteristics that could make them more
vulnerable to the impact? If so, what are these
characteristics and could those who possess the m
suffer the harm more acutely or seriously than
others?
For each potential impact identified in the previous
section, which individuals or groups could
encounter the gravest impact on the righ t or
freedom under consideration?
SCOPE: Before answering the questions below, review the answers to Activity 1 ( Project Section) of the PS Report and answers
to questions 10 and 11 from the PCRA (on numbers of rights -holders affected by the AI system and timescale of the system’s
potential impacts) .
For each potential impact identified in the previous
section, does the answer given to question 10 of
the PCRA seem like accurate and realistic
representation of the potential scope of the impact ?
If not, how many rights -holders could encounter a
deprivation of the right or freedom under
consideration?
For each potential impact identified in the previous section,
are there groups who possess
charact eristics that could make them vulnerable to
272 higher levels of exposure to the impact ? If so, how
much exposure could these groups face?
For each potential impact identified in the previous section,
consider the overall timescale of the AI
system’s impact s on the right or freedom under
consideration (answer to question 11 of the PCRA).
Are there cumulative or aggregate impacts of the
system on rights- holders or their progeny that
could extend the effects of the system beyond the scope of impact identified in the previous answers?
REMEDIABILITY: Before answering the questions below, review the Remediability Table contained in the accompanying user’s
guide.
For each potential impact identified in the previous section, how remediable is the potential adverse
effect on affected right -
holders? (Refer to the
Remediability Table for categories of remediability to consider)
For each potential impact identified in the previous section,
are there groups who possess
characteristics that could make them vulnerable to lower levels of remediability for the impact (i.e.
more subject to difficult -to-
overcome or
irreversible harm)? If so, what degree of
irreversible harm do these groups face?
ADDITIONAL QUESTION F OR THE PROJECT TEAM
These questions are meant as a supplement to Part I of the HUDERIA. They are intended to be answered by the project team and
then to be revisited during the dialogue between duty-bearers and rights -holders in Step 3 of the HUDERIA
273 Has a thorough assessment of the human rights
compliant business practices of all businesses,
parties and entities involved in the value chain of
the AI product or service been undertaken? This
would include all businesses, parties, and entities
directly linked to your project lifecycle through
supply chains, operations, contracting, sales,
consulting, and partnering. (If not, do you have
plans to do this?)
Do you have processes in place and appropriate
resources dedicated to maintaining human rights
diligence with regard to all businesses, parties, and entities directly linked to your project lifecycle?
Do you require your vendors and suppliers to
demo nstrate compliance with legal protections in
place to protect rights -holders from modern
slavery, human trafficking, and labour
exploitation?
MITIGATING POTENTIAL IMPACTS AND SETTING UP ACCESS TO REMEDY
A SUMMARY OF COMBINE D IMPACT FINDINGS
Provide a summary of combined impact findings from the project team and engaged rights -holders with a detailed mapping of
potential adverse impacts and any significant interrelationships between them.
274
PRESENTATION OF THE MEASURES AND ACTIONS TO BE TAKEN
Present the measures and actions that will be taken and, where needed, a presentation of the prioritization of actions accompanied
by an explanation of prioritization rationale (The project team and engaged rights-holders should also draw on recommendations
generated in the results of Section 1 of the PCRA when setting out needed measures and actions) .
PRESENTATION OF PLANS TO EXERCISE LEVERA GE OR ALTER RELATIONSHIPS
For any potential adverse impacts identified in direct links of the business or organisation to businesses, parties and entities
involved in the value chain of the AI product or service, present plans to exercise leverage or alter relationships to mitigate impacts.
CLARIFICATION OF ROLES AND RESPONSIBILIT ES
Clarify the roles and responsibilities of the various actors involved in impact mitigation, management, and monitoring.
275
STATEMENT OF COMMITMENT OF TIME AND RESO URCES FOR IMP
(From the project team directly): Provide a statement that indicates the allotted commitment of time and resources that will be
made available to carry out the IMP.
PLAN FOR MONITORING IMPACT MITIGATION EF FORTS
Formulate a plan for monitoring impact mitigation efforts and for re-assessing and re -evaluating the HUDERIA during subsequent
development and deployment phases of the project lifecycle.
ACCESSIBLE PRESENTATION OF ACCESS TO REMEDY AND OPERATIONAL -LEVEL GRIEVANCE MECH ANISMS
Provide an accessible presentation of the access to remedy and operational -level grievance mechanisms that will be available to
impacted rights -holders if they suffer any harms identified in the HUDERIA or any other adverse impacts that were not detected
or recognized in the impact assessment process.
276
277
Human Rights, Democracy, and Rule of Law
Assurance Case (HUDERAC)
Section Outline
1. Introduction
2. What is an Assurance Case?
3. Structure of a HUDERAC
4. Components of a HUDERAC
a. Goal(s)
b. Properties and (Project or System Property) Claims
c. Evidence
5. Building a HUDERAC
a. HUDERAC Template
b. Goals, Actions, and Claims
Introduction
Assurance is a process of establishing trust.
For a stakeholder to trust that an AI system respects human rights and
fundamental freedoms , democratic principles, and the rule of law—especially in
novel, high risk, or safety critical domains —they will typically require assurance
that the ir trust is placed judiciously .60
P
roviding assurance can take the form of compiling clear and accessible eviden ce
that a risk has been sufficiently mitigated61, or that mechanisms have been put in
place to prevent negative impacts from occurring.62
I
n the context of the HUDERAF, trust and assurance are built on and emerge from
the foundations of the preliminary context -based risk analysis (PCRA), the
stakeholder engagement process (SEP) and the human rights, democracy, and
rule of law impact assessment (HUDERIA). These three stages, therefore, are part
of the human rights, democracy, and rule of law assurance framework (HUDERAF)
that is detailed in this section.
In addition to outlining this framework and showing how it relates to the three other stages, this section also presents the final practical mechanism of this proces s: a human rights, democracy, and rule of law assurance case (HUDERAC).
What is an A ssurance Case?
In its simplest form, an assurance case is a form of documentation that
demonstrates how a goal (or set of goals) has been established by reference to
relevant (evidence- backed) properties of the system or project lifecycle .63
However, the process of developing an assurance case can,
• a
ssist internal reflection and deliberation by providing a systematic and
structured means for evaluating how the development of AI systems or
products impact human rights, democracy, and the rule of law ;
• provide a deliberate means for the anticipation and pre-emption of potential
risks and adverse impacts through mechanisms of end- to-end ass essment
and redress;
• facilitate transparent communicatio n between developers and affected
stakeholders;
60 High-Level Expert Group on AI . (2019). ‘Ethics Guidelines for Trustworthy AI’. European
Commission. https://ec.europa.eu/digital -single -market/en/news/ethics -guidelines -
trustworthy -ai
61 Cleland, G. M., Habli, I., Medhurst, J., & Health Foundation. (2012). Evidence: Using
safety cases in industry and healthcare .
62 Sujan, M., Furniss, D., Grundy, K., Grundy, H., Nelson, D., Elliott, M., White, S., Habli,
I., & Reynolds, N. (2019). Human factors challenges for the safe use of artificial
intelligence in patient care. BMJ Health & Care Informatics , 26(1), e100081.
https://doi.org/10.1136/bmjhci -2019-100081
63 GSN Community. (2018). GSN Community Standard (Version 2) . The Assurance Case
Working Group. https://scsc.uk/r141B:1?t=1
279 • support mechanisms and processes of documentation (or, reporting) to
ensure accountability (e.g. audits, compliance);
• and build trust and confidence by promoting the adoption of best practices
(e.g. standards for warranted evidence) and by conveying the integration
of these into design, development, and deployment lifecycles to impacted stakeholders.
In the specific context of the human rights, democracy, and rule of law assurance
framework, we can define an assurance case as:
A structured argument that provides assurance to another party (or
parties) that a particular claim (or set of related claims) about a
property of an AI system is warranted given the available evidence.
There are three primary components that are required to build an assurance case :
(a) a concise statement about the top-level normative goal (or goals) of the
project, (b) a set of claims about the project or system properties required to
operationalise the goal , and (c) a body of supporting evidence that demonstrates
how the properties and goals have been realised.
Additional components can be added to provide additional context or information,
but we will just focus on these three for present purposes. Error! Reference
source not found. shows a simplified overview of how t hese primary components
fit together.
Assurance cases are widely used in safety-critical domains, such as energy or
aviation, where systems and manufacturing processes are closely inspected and
Table 1 – the main elements of an assurance
case
280 assessed to ensure they are safe to operate and use in their intended context.64
Different approaches to the development of an assurance case will incorporate
additional elements beyond those presented in
Error! Reference source not
found. . However, each of the different approaches will typically have a goal-
structured focus, and include property claims about the system or project, which
are backed up by evidence.65
C
ollectively, these elements form a structured argument , which is d esigned to
provide assurance that a goal (or set of goals) has been sufficiently established (e.g., safety, explainability, data integrity) .
66 We can, therefore, refer to th e
development of this structured argument as an argument-based form of
assurance.
Structure of a HUDERAC
Whereas the PCRA , SEP, and HUDERIA serve as mechanisms for identifying and
evaluating possible harms that could arise from the design, development, and
deployment of an AI system, an assurance case serves to demonstrate what
actions have been taken to prevent such harms from occurring. An assurance
case, therefore, is the primary means by which the basis for trust in the system
is communicated to and scrutinised by diverse stakeholders, including rights
holders, procurers, regulators, and policymakers.
Consider the following example. A developer has completed the PCRA and worked
with stakeholders to further analyse and evaluate possible risks, as outlined in the
SEP. They have also carried out the HUDERIA and have identified a set of actions
that they need to perform when designing, developing, and deploying their AI system. One of the actio ns the developer has taken is to engage independent
domain experts to help evaluate their training data and ensure that it has the
following properties: accurate, representative, and up-to -date.
Presuming the domain experts are in agreement and believe the training data possess such properties, w ould this justify the developer making the following
claim ?
The training data were accurate, representative, and up -to-date .
The developer may be confident that these properties of their training data have
been established, but what matters here is that the developer can justify this
confidence to other stakeholders. This is what it means to provide assurance.
64 Hawkins, R. et al. , (2021) ‘Guidance on the Assurance of Machine Learning in
Autonomous Systems’. University of York: Assuring Autonomy International Programme
(AAIP) https://www.york.ac.uk/media/assuring -autonomy/documents/AMLASv1.1.pdf
65 Bloomfield, R., & Bishop, P. (2010). Safety and Assurance Cases: Past, Present and
Possible Future – an Adelard Perspective. In C. Dale & T. Anderson (Eds.), Making Systems
Safer (pp. 51 –67). Springer London. https://doi.org/10.1007/978-1- 84996-086- 1_4
66 Walton, D. (2009). Argumentation Theory: A Very Short Introduction. In G. Simari &
I. Rahwan (Eds.), Argumentation in Artificial Intelligence (pp. 1 –22). Springer US.
https://doi.org/10.1007/978-0- 387-98197-0_1
281 So, how does one provide assurance that is both warranted and justified?
The structure of an assurance case helps establish a justified (or warranted) basis
for believing and claiming that a goal has been met. The directionality of Error!
Reference so urce not found. , for instance, depicts a process of inferential
support in which the evidence supports a claim about some system or project
property, and the set of property claims subsequently help realise the goal. Error!
Reference source not found. adapts the original figure to present a partial
example oriented towards a goal of safety .
The inf erential structure of an assurance case helps establish trust that some goal
has been obtained. However, the structure is not sufficient on its own. In addition
to the structural aspects of an assurance case, we must also consider the
substantive aspects t hat help establish warrant.
Components of a HUDERAC
Let’s look at the specific components of a human rights, democracy, and rule of
law assurance case in more detail. Specifically, w e will discuss the goal (s),
properties, and evidence.
Goal(s)
The HUDERAF sets important constraints on the types of goals that may be
selected. For instance, the PCRA, SEP, or HUDERIA all help to identify how specific
fundamental rights and freedoms may be violated, how an AI system is likely to harm vulnerable users, or how certain democratic principles are at risk in the
deployment of a particular AI system. Each of these prior stages are orientated
towards some goal or set of goals, even if the goal is not made explicit. One
Figure 1 - An example of a partial assurance case
282 function of an assurance case , therefore, is to m ake the se goals explicit and relate
them to key human rights and freedoms, or principles of democratic governance
and the rule of law.
To aid this process, we can start from the following list of goals that are referred
to as the SAFE-D goals (see Box 1 ). We have provided a general description of
the meaning for each of these goals, including reference to ancillary and
neighbouring concepts.
SAFETY is of paramount import ance for ensuring the sustainable
development, deployment, and use of an AI system . From a technical
perspective, this requires the system to be secure , robust, and
reliable. And from a social sustainability perspective, this requires the
practices behind the system’s production and use to be informed by
ongoing consideration of the risk of expos ing affected rights -holders
to harms , continuous reflection on project context and impacts,
ongoing stakeholder engagement and involvement, and change
monitoring of the system from its deployment through to its
retirement or deprovisioning.
ACCOUNTABILITY can include specific forms of process transparency
(e.g., as enacted through process logs or external auditing) that may
be necessary for mechanisms of redress, or broader processes of
responsible governance that seek to establish clear roles of
responsibility where transparency may be inappropriate (e.g.,
confidential projects).
FAIRNESS is insepara bly connected with sociolegal conceptions of
equity and justice , which may emphasize a variety of features such
as non-discrimination , equit able outcomes, or procedural fairness
through bias mitigation , but also social and economic equality,
diversity, and inclusivenes s.
EXPLAINABILITY is a key condition for autonomous and informed
decision-making in situations where AI systems interact with or
influence human judgement and decision -making. Explainability goes
beyond the ability to merely interpret the outcomes of an AI system;
it also depends on the ability to provide an accessible and relevant
information base about the processes behind the outcome.
DATA QUALITY , INTEGRITY , PROTECTION AND PRIV ACY must all be
established to be confident that the (data- driven) AI system has been
developed on secure grounds.
• ‘DATA QUALITY ’ captures the static properties of data , such as
whether they are (a) relevant to and representative of the
domain and use context, (b) balance d and complete in terms
of how well the dataset represents the underlying data
generating process , and (c) up -to-date and accurate as
required by the project.
• ‘DATA INTEGRITY ’ refers to more dynamic properties of data
stewardship , such as how a dataset evolves over the course of
a project lifecycle . In this manner, d ata integrity requires (a)
283 contemporaneous and attribut able records from the start of a
project (e.g., process logs ; research statements ), (b) ensuring
consistent and verifiable means of data analysis or processing
during development, and (c) taking steps to establish findable ,
accessible , interoperable, and reusable records towards the
end of a project ’s lifecycle.
• ‘DATA PROTECTION AND PRIVACY ’ reflect ongoing developments
and p riorities as set out in relevant legislation and regulation
of data practices as they pertain to fundamental rights and
freedoms, democracy, and the rule of law. For example, the
right for data subjects to have inaccurate personal data
rectified or erased.
Box 1 – SAFE-D Goals
Each of these goals plays an important role in upholding human rights, democracy,
and the rule of law. For example, ‘DATA PROTECTION ’67 and ‘EXPLAINABILITY ’ are both
required to ensure that an individual’s autonomy is protected, whereas ‘FAIRNESS ’
and ‘DATA INTEGRITY ’ are both key components of social equality.
The s
pecific meaning of the goals, however, will be delineated in different ways
depending on the use context or domain of the AI system, and the processes and results of the SEP . For example, depending on the AI system,
SAFETY could relate
to the physical safety of patients in the context of an AI system used in healthcare ,
or environmental safety for an AI system used in agriculture or forestry. Moreover,
a group of stakeholders may wish to prioritise responsible governance over
transparency, in relation to ACCOUNTABILITY where a project requires greater (and
justifiable) levels of secrecy.
This means that the top-level goals will need to be furt her specified, both by
detailing the intended use context for the AI system68 and by making specific
claims about relevant properties of the AI system or project.
Properties and (Project or System Property) Claims
Each of the SAFE -D goals has a variety of lower-level properties associated with
them. These properties ought to be established in either the project or the system
if the goal is said to have been obtained. In a HUDERAC, a corresponding claim about the (project or system) property is, therefore, required. The following properties can be identified for each of the SAFE -D goals.
Safety
• Sustainability: the goal of safety must be achieved with an eye to wards
the sustainability of a safe system . This goes beyond environmental
sustainability (e.g., the ecological footprint of the project and system). It
67 To indicate that a concept is being employed as a goal we will use small caps formatting
(e.g., ‘ fairness ’.)
68 The ‘project summary’ output from the PCRA is likely to help here, as it will typically
contain a lot of this information .
284 also includes an understanding of the long -term use context and impact of
the system , and the resources needed to ensure the system continues to
operate safely over time. For instance, sustainability may depend upon
sufficient change monitoring processes that establish whether there ha s
been a substantive change in the underlying data distributions or social
operating environment. Sustainability also involves engaging and
involving rights -holders in the design and assessment of AI systems that
could impact their human rights and fundamental freedoms.
• Security: Security encompasses the protection of several operational
dimensions of an AI system when confronted with possible adversarial
attack. A secure sy stem is capable of maintaining the integrity of the
information that constitutes it. This includes protecting its architecture from
the unauthorised modification or damage of any of its component parts. A
secure system also remains continuously functional and accessible to its
authorised users and keeps confidential and private information secure even under hostile or adversarial conditions.
• Robustness: The objective of robustness can be thought of as the goal
that an AI system functions reliably and accura tely under harsh conditions.
These conditions may include adversarial intervention, implementer error, or skewed goal -execution by an automated learner (in reinforcement
learning applications). The measure of robustness is therefore the strength
of a syste m’s integrity the soundness of its operation in response to difficult
conditions, adversarial attacks, perturbations, data poisoning, and undesirable reinforcement learning behaviour.
• Reliability: The objective of reliability is that an AI system behaves exactly
as its designers intended and anticipated. A reliable system adheres to the
specifications it was programmed to carry out. Reliability is therefore a
measure of consistency and can establish confidence in the safety of a
system based upon the depen dability with which it operationally conforms
to its intended functionality.
• Accuracy and Performance Metrics: In machine learning, the accuracy
of a model is the proportion of examples for which it generates a correct
output. This performance measure is also sometimes characterised
conversely as an error rate or the fraction of cases for which the model
produces an incorrect output. As a performance metric, accuracy should
be a central component to establishing and nuancing the approach to safe
AI. Specifying a reasonable performance level for the system may also often
require refining or exchanging of the measure of accuracy. For instance, if
certain errors are more significant or costly than others, a metric for total
cost can be integrated into the model so that the cost of one class of errors can be weighed against that of another .
Accountability
• Traceability: Traceability refers to the process by which all stages of the
data lifecycle from collection to deployment to system updating or deprovisioning are documented in a way that is accessible and easily
understood. This may include not only the parties within the organisation
285 involved but also the actions taken at each stage that may impact the
individuals who use the system.
• Answerability: Answerability depends upon a h uman chain of
responsibility. Answerability responds to the question of who is accountable
for an automation supported outcome.
• Auditability: Whereas the property of answerability responds to the
question of who is accountable for an automation supported outcome, the
notion of auditability answers the question of how the designers and
implementers of AI systems are to be held accountable. This aspect of
accountability has to do with demonstrating and evidencing both the
responsibility of design and use practices and the justifiability of outcomes.
• Clear Data Provenance and Data Lineage: Clear provenance and data
lineage consists of records that are accessible and simultaneously detail
how data was collected and how it has been used and altered throughout
the stages of pre- processing, modelling, training, testing, and deploying.
• Accessibi lity: Accessibility involves ensuring that information about the
processes that took place to design, develop, and deploy an AI system are
easily accessible by individuals. This not only refers to suitable means of
explanation (clear, understandable, and accessible language) but also the
mediums for delivery.
• Reproducibility: Related to and dependant on the above four properties,
reproducibility refers to the ability for others to reproduce the steps you have taken throughout your project to achieve the desired outcomes and
where necessary to replicate the same outcomes by following the same
procedure.
• Responsible Governance: Responsible governance ensures
accountability and responsibility for the processes that occur throughout
the data lifecycle. This includes the identification and assignment of a data
protection officer, as well as clearly identifying data controllers and
processors. This may also include the creation of an independent oversight
board to ensure these individuals are held accountable and t he processes
are well -documented.
Fairness
• Bias Mitigation: It is not possible to eliminate bias entirely. However,
effective bias mitigation processes can minimise the unwanted and
undesirable impact of systematic deviation s, distortions , or disparate
outcomes that arise to a project governance problem, interfering factor , or
from insufficient reflection on historical social or structural discrimination.
• Diversity and Inclusiv eness: A significant component of fairness aware
design is ensuring the inclusion of diverse voices and opinions in the design and development process through the participation of a more
representative range of stakeholders. This includes considering whether values of civic participation, inclusion, and diversity been adequately
considered in articulating the purpose and setting the goals of the project.
286 Consulting with internal organisational stakeholders is also necessary to
strengthen the openness, inclusiv eness , and diversity of the project.
• Non- Discrimination: your system should not create or contribute to
circumstances whereby members of protected groups are treated
differently or less favourably than other groups because of their respective
protected characteristic.
• Equality: the outcome or impact of a system should either maintain or
promote a state of affairs in which every individual has equal rights and
liberties, and equal access or opportunities to whatever good or service the
AI system brings about.
Explainability
• Interpretability: Interpretability consists of the ability to know how and
why a model performed the way it did in a specific context and therefore to understand the rationale behind its decision or behaviour.
• Responsible Model Selection: The normal expectations of intelligibility
and accessibility t hat accompany the function the system will fulfil in the
sector or domain in which it will operate. The availability of more interpretable algorithmic models or techniques in cases where the selection
of an opaque model poses risks to the physical, psychological, or moral
integrity of rights -holders or to their human rights and fundamental
freedoms . The availability of the resources and capacity that will be needed
to responsibly provide supplementary methods of explanation (e.g. simpler surrogate models, sensitivity analysis, or relative feature important) in
cases where an opaque model is deemed appropriate and selected.
• Accessible Rationale Explanation : the reasons that led to a decision —
especially one that is automated—delivered in an accessible and non -
technical way.
• Responsible Implementation and User Training: training users to
operate the AI system may include
o conveying basic knowledge about the nature of machine learning,
o explaining the limitations of the system,
o educating users about the risks of AI -related biases, such as
decision-automation bias or automation -distrust bias
69, and
o en
couraging users to view the benefits and risks of deploying these
systems in terms of their role in helping humans to come to
judgements, rather than replacing that judgement.
69 Decision -automation bias occurs when users of AI decision -support systems may
become hampered in their critical judgment and situational awareness as a result of an
overconfidence in the objectivity , or certainty of the AI system. At the other extreme,
automation -distrust bias occurs when users of an automated decision -support system tend
to disregard its contributions to evidence -based reasoning as a result of their distrust or
scepticism about AI technologies in general (see ICO & ATI (2020) for further information) .
287 Data Quality
• Source Integrity and Measurement Accuracy: effective bias mitigation
begins at the very commencement of data extraction and collection
processes. Both the sources and instruments of measurement may
introduce discriminatory factors into a dataset. When incorporated as inputs
in the training data, biased prior human decisions and judgments —such as
prejudiced scoring, ranking, interview -data or evaluation—will become the
‘ground truth’ of the model and replicate the bias in the outpu ts of the
system in order to secure discriminatory non -harm, as well as ensuring that
the data sample has optimal source integrity. This involves securing or
confirming that the data gathering processes involved suitable, reliable, and impartial sources of measurement and sound methods of collection.
• Timeliness and Recency: if datasets include outdated data then changes
in the underlying data distribution may adversely affect the generalisability
of the trained model. Provided these distributional drifts re flect changing
social relationship or group dynamics, this loss of accuracy with regard to the actual characteristics of the underlying population may introduce bias
into an AI system. In preventing discriminatory outcomes, timeliness and
recency of all el ements of the data that constitute the datasets must be
scrutinised.
• Relevance, Appropriateness, and Domain Knowledge: The
understanding and utilisation of the most appropriate sources and types of
data are crucial for building a robust and unbiased AI system. Solid domain
knowledge of the underlying population distribution and of the predictive or
classificatory goal of t he project is instrumental for choosing optimally
relevant measurement inputs that contribute to the reasonable determination of the defined solution. Domain experts should collaborate
closely with the technical team to assist in the determination of the
optimally appropriate categories and sources of measurement.
• Adequacy of Quantity and Quality : this property involves assessing
whether the data available is comprehensive enough to address the problem set at hand, as determined by the use case, domain, function, and
purpose of the system. Adequate quantity and quality should address
sample size, representativeness, and availability of features relevant to
problem.
• Balance and Representativeness: a balanced and representative dataset
is one in which t he dist ribution of features that are included, and the
number of samples within each class is similar to the underlying distribution
that exists in the overall population.
Data Integrity70
• Attributable: Data should clearly demonstrate who observed and recorded
it, when it was observed and recorded, and who it is about.
70 Adapted from SL Controls. (n.d.)
288 • Consistent, Legible and Accurate: Data should be easy to understand,
recorded permanently and original entities should be preserved. Data
should be free from errors and conform with the protocol. Consistency
includes ensuring data is chronological (e.g., has a date and time stamp
that is in the expected sequence).
• Complete: All recorded data requires an audit trail to show nothing has
been deleted or lost.
• Contemporaneous: Data should be recorded a s it was observed, and at
the time it was executed.
• Responsible Data Management: Responsible data management ensures
that the team has been trained on how to manage data responsibly and
securely, identifying possible risks and threats to the system and ass igning
roles and responsibilities for how to deal with these risks if they were to
occur. Policies on data storage and public dissemination of results should
be discussed within the team and with stakeholders, as well as being clearly
documented.
• Data Trac eability and Auditability: Any changes or revisions to the
dataset (e.g., additions, augmentations, normalisation) that occur after the
original collection should be clearly traceable and well-documented to
support any auditing.
Data Protection and Privac y
• Consent (or legitimate basis) for processing: Each Party shall provide
that data processing can be carried out on the basis of the free, specific,
informed and unambiguous consent of the data subject or of some other
legitimate basis laid down by law. The data subject must be informed of
risks that co uld arise in the absence of appropriate safeguards. Such
consent must represent the free expression of an intentional choice, given either by a statement (which can be written, including by electronic means,
or oral) or by a clear affirmative action and wh ich clearly indicates in this
specific context the acceptance of the proposed processing of personal data. Mere silence, inactivity or pre -validated forms or boxes should not,
therefore, constitute consent. No undue influence or pressure (which can be of an economic or other nature) whether direct or indirect, may be exercised on the data subject and consent should not be regarded as freely
given where the data subject has no genuine or free choice or is unable to
refuse or withdraw consent without prejudic e. The data subject has the right
to withdraw the consent he or she gave at any time (which is to be distinguished from the separate right to object to pro- cessing).
• Data Security: Each Party shall provide that the controller, and, where
applicable the processor, takes appropriate security measures against risks
such as accidental or unauthorised access to, destruction, loss, use,
modification or disclosure of personal data. Each Party shall provide that the controller notifies, without delay, at least the competent supervisory
authority within the meaning of Article 15 of this Convention, of those data
289 breaches which may seriously interfere with the rights and fundamental
freedoms of data subjects.
• Data Minimisation: Personal data being processed is adequate (sufficient
to properly fulfil the stated purpose), relevant (has a rational link to that
purpose), and limited to what is necessary do not hold more data than
needed for that pu rpose).
• Transparency: The transparency of AI systems can refer to several
features, both of their inner workings and behaviours, as well as the
systems and processes that support them. An AI system is transparent when it is possible to determine how it was designed, developed, and
deployed. This can include, among other things, a record of the data that were used to train the system, or the parameters of the model that
transforms the input (e.g., an image) into an output (e.g, a description of
the objects in the image). However, it can also refer to wider processes,
such as whether there are legal barriers that prevent individuals from
accessing information that may be necessary to understand fully how the
system functions (e.g. , intellectual property restri ctions).
• Proportionality: delivering a just outcome in ways that are proportionate
to the cost, complexity, and resources available. In a similar vein, the term ‘proportionality’ can also be used as an evaluative notion, such as in the
case of a data protection principle that states only personal data that are
necessary and adequate for the purposes of the task are collected.
• Purpose Limitation: The purposes for data processing mu st be outlined
and documented from the beginning and made available to all individuals through privacy information. Personal data must adhere to the original
purpose unless it is compatible with the original purpose, additional consent
is received, or there is an obligation or function set out in law.
• Accountability: Appropriate measures and records must be in place to
demonstrate compliance and responsibility for how data has been processed
in alignment with the other principles.
• Lawfulness, fairness, and transparency: These three principles
necessitate ‘lawful basis’ for the collection and use of personal data. Personal data must be used in a fair manner that is not unduly detrimental,
unexpected, or misleading. Any processes in which data is used should n ot
be in breach of any other laws, and teams must be clear, open, and honest with individuals about how their personal data is being used.
• Respect for the rights of data subjects: respect for the rights of data subjects requires putting in place adequate mechanisms or undertaking
necessary actions so as to ensure that the rights of data subjects as defined
under Convention 108+ and GDPR are upheld. Where necessary, this
includes the responsible handling of sensitive data.
Providing assurance for one or more of the above properties requires the
demonstration and documentation of relevant actions that have been taken at different stages of a project’s lifecycle. These actions are captured through claims
about the properties , which may apply to either the project or system . For
example, the action of security testing a system may give rise to a property that
is salient for either the reliability or robustness of the system. Devising a claim
290 about th e property of the system that has resulted from s uch an action may be a
pre-re quisite for arguing that a system is SAFE TO OPERATE .
The set of system or project property claims will collectively help specify how the
goal is to be understood in the context of the project lifecycle . Figure 2 provides
an example of this using the goal of explainability.
Figure 2 – an example of a partial assurance case with an ‘explainability’ goal and supporting claims
The PCRA, SEP, and HUDERIA all play an important role in helping to identify
necessary and (jointly) sufficient claims. For instance, the SEP is designed to be
maximally inclusive of differing perspectives, as it is through this open dialogue
and engagement that developers will be likely to identify additional properties that
need to be established in their project or system.
The section below titled ‘A HUDERAC Template
The process of building a HUDERAC is a three-stage process:
Reflect ⇒ Act ⇒ Justify
Each stage can be broken into a series of steps, which help build a HUDERAC. The
following sections outline these steps and provide a list of goals, actions, and claims t hat provide illustrative examples.
Reflect
STEPS: Reflection is an anticipatory process, so the steps
taken in this stage can be thought of as deliberative
prompts:
1. What are the goals of your system?
2. How are these goals defined?
3. Which stakeholders have participated in the
identification and defining of these goals?
291 4. What properties need to be implemented in the
project or system to ensure that these goals are
achieved?
5. Which actions ought to be taken to establish these properties within the project or sys tem?
OUTPUTS: The HUDERAC outputs of this section will be a
preliminary set of goals and claim.
Act
STEPS: Salient actions are carried out at all stages of the
HUDERAF, as detailed in the process map at the start of
this document. A detailed process log and secure
repository of documentation should be maintained to keep
track and record these actions, including any associated
artefacts (e.g., data protection impact assessment, report
of model performance evaluation).
As the HUDERAF is split into three stages—design,
development, and deployment —there is a high -level
structure that can help with the organisation and
documentation of relevant actions:
1. What actions have been undertaken during
(project) design that have generated salient
evidence for your goals and claims?
2. What actions have been undertaken during (model)
development that have generated salient evidence
for your goals and claims?
3. What actions have been undertaken during
(system) deployment that have generated salient
evidence for your goals and claims?
OUPUTS: A well maintained repository (or process log) of
documentation and evidential artefacts, which
demonstrate that specific actions were undertaken at
specific times throughout the project lifecycle, by specific
team members.
292 Justify
STEPS: Once the goals, claims, and evidence
have been established and documented, the
final step is to justify that your evidence base
is sufficient to warrant the claims that are
being made about the properties of your
project or system. This connection i s a vital
step and can expose weaknesses in the
overall case being developed.
To help evaluate the evidence, the following
questions can be instructive:
1. Which stakeholders, identified in your
stakeholder engagement plan, can
support the evaluation of your evidence and overall case?
2. Is any evidence missing from your
case?
3. Are the collection of property claims
jointly sufficient to support your top-level goal?
OUPUTS: A HUDERAC that can be presented
and communicated to relevant stakeholder
groups for asse ssment, evaluation, or
auditing.
293 Goals, Actions and Claims ’ provides a set of example claims that are grouped
according to the relevant SAFE -D goal and with reference to specific actions and
respective properties.
Evidence
A property claim that is made about the system or project will typically require
supporting evidence that the relevant action was performed. For instance,
consider the following claim,
During model testing we compared the decision tree classifier with
less interpretable models but found no improvements in accuracy to
justify the loss of interpretability.
This claim helps justify why a specific model was selected . However, an auditor
may want to see the results of the model comparison process with specific
reference to the performance metrics that were used. Providing this evidence is
an important justificatory step that helps ground the assurance case.
Generating this evidence will occur naturally over the course of a project’s
lifecycle , and the specific steps set out in the workf low summary have also been
carefully designed to give rise to relevant evidential artefacts . Each decision or
action taken, whether as part of the SEP or during a specific development stage
(e.g., exploratory data analysis ) is an action that is likely to or could create
documentation that c an serve as evidence for a property claim. It is not, therefore,
necessary to compile all the evidence for an assurance case as an additional step at the end of a project’s lifecycle. Instead, the HUDERAF has been designed to
naturally accommodate the iterative process of building an assurance case as a
project evolves . In the same way that an iterative approach to stakeholder
engagement is recommended, therefore, the process of building a HUDERAC is
also best approached as a continuous and iterative process.
Building a HUDERAC
Building a HUDERAC is about connecting the evidence gathered during project
design, model development, and system deployment to specific property claims ,
in order to form a structured argument that some goal (or set of goals) has been
met. To simplify this process, we can think of the HUDERAF as an end -to-end
process of reflection, action, and justification that gives rise to a HUDERAC .
Early-stage project design activities, such as the PCRA or SEP , are reflective and
anticipatory—identifying risks and possible impacts early on before they lead to
harm. They also help map out a project governance process that can identify which
actions need to be taken to assure properties that are relevant to the goals of the
system, and which may have been co -defined with stakeholders. The justificatory
step of compiling evidence, therefore, is linked to the final production of a
HUDERAC.
Figure 3 provides a high-level schematic of this process of reflection, action, and
justification, showing how specific elements of an assurance case emerge from
each of the stages.
294 The process of justification depends on compiling the relevant evidence and linking
it to the specific claims. However, whether an assurance case is judged to be
justified is a relational process . Although a developer can “make their case” for
why a specific goal has been achieved, it must also be presented to relevant stakeholders (e.g., regulators, procurers) to evaluate its justifiability. In principle,
this is an ongoing process as different groups of stakeholders may wish to contest
or examine different property claims or question specific forms of evidence.
For example, let’s pretend that a SEP report is used as evidence for a claim that
a specific group of stakeholders were engaged and approved of a particular design
choice for the respective AI system. This could seem, prima facie , like a reasonable
evidential basis. However, let’s also pretend that the stakeholders were only sent
letters with the proposed design plans for the AI system and given a week to
respond with objections , and that any non-responses would be treated as
indicating approval. As we discussed earlier, this approach to stakeholder
engagement is known as “informing”, and is the weakest method of engagement.
Whether it is accepted as reliable evidence is, therefore, likely to depend on wider,
contextual factors.
The process of justifying a HUDERAC should, therefore, be approached as a
process of building and maintaining a l iving document, and as part of the overall
dialogical and reflective process that defines the HUDERAF.
In the next sections we provide a template for the development of a HUDERAC.
This template serves a procedural function insofar as it offers a practical
mechanism for project teams. It does not provide guidance on the substantive
content that ought to be i ncluded within a particular HUDERAC, as this can only
be determined by carrying out the actual stages of the HUDERAF (i.e., PCRA, SEP
and HUDERIA).
Figure 3 – a high -level schematic of the reflect, act, and justify (HUDERAF) process
295 A HUDERAC Template
The process of building a HUDERAC is a three -stage process:
Reflect ⇒ Act ⇒ Justify
Each stage can be broken into a series of steps, which help build a HUDERAC. The follow ing sections outline these steps and
provide a list of goals, actions , and claims that provide illustrative examples .
Reflect
STEPS: Reflection is an antici patory process, so the steps taken in this stage can be
thought of as deliberative prompts:
6. What are the goals of your system?
7. How are these goals defined?
8. Which stakeholders have participated in the identification and defining of these
goals?
9. What proper ties need to be implemented in the project or system to ensure
that these goals are achieved?
10.Which actions ought to be taken to establish these properties within the project or system?
OUTPUTS: The HUDERAC outputs of this section will be a preliminary set of goals and
claim.
296 Act
STEPS: Salient a ctions are carried out at all stages of the HUDERAF , as detailed in the
process map at the start of this document. A detailed process log and secure repository
of documentation should be maintained to keep track and record these actions,
including any associated artefacts (e.g., data protection impact assessment, report of
model performance evaluation).
As the HUDERAF is split into three stages —design, development, and deployment —
there is a high -level structure that can help with the organisation and documentation
of relevant actions :
4. What actions have been undertaken during (project) design that have generated
salien t evidence for your goals and claims?
5. What actions have been undertaken during (model) development that have
generated salient evidence for your goals and claims?
6. What actions have been undertaken during (system) deployment that have
generated salient evidence for your goals and claims?
OUPUTS: A well maintained repository (or process log) of documentation and
evidential artefacts, which demonstrate that specific actions were undertaken at
specific times throughout the project lifecycle, by specific team members.
297 Justify
STEPS: Once the goals, claims, and evidence have been established and
documented, the final step is to justify that your evidence base is sufficient to
warrant the claims that are being made about the properties of your project or
system. This connection is a vital step and can expose weaknesses in the overall
case being developed.
To help evaluate the evidence, the following questions can be instructive:
4. Which stakeholders, identified in your stakeholder engagement plan, can
support the evaluation of your evidence and overall case?
5. Is any evidence missing from your case?
6. Are the collection of property claims jointly sufficient to support your top -level
goal?
OUPUTS: A HUDERAC that can be presented and communicated to relevant
stakeholder groups for assessment, evaluation, or auditing.
298 Goals , Actions and Claims
It is not possible to offer a comprehensive set of claims that need to be established for each of the SAFE -D goals and properties ,
as the task of specifying the goals and operationalising the relevant properties is highly contextual.
However, Table 2 provides a set of examples to show how this may occur in different cont exts. The claims are organised
according to where in the des ign, development, and deployment lifecycle they occur . We do not give a claim for every stage,
but do provide at least one example for every property, and also offer further references in footnotes to wider literature for
each goal .
Finally, i t is important to note that some of the properties may relate to and serve multiple goals, but are pres ented in
connection with one in this table.
Goals Claims and Actions
Design Development Deployment
SAFETY71 Sustainability Previous safety cases for similar
technologies were consulted
during the planning of our
system to anticipate and identify
possible risks . Key decision -makers
collaborated with system
developers to set reasonable
expectations about what is
achievable. The system requires ongoing
monitoring from a human -on-
the-loop to minimise the risk of
physical harm that could arise
from its operation.
Robustness The model used within our
system has been internally and
externally validated. External
validation has been carried out
across a wide range of
environments to ensure the
system is robust.
71 CDT, n.d.; Leslie, 2019; Diakopoulos, et al., 2017; Amodei, et al., 2016; Auernhammer et al., 2019; Demšar & Bosnić, 2018; Go ogle,
2019; Göpfert et al., 2018; Irving & Askell, 2019; Kohli, et al.., 2019; Kolter & Madry, n.d.; Marcus, 2018; Muñoz- González, et al., 2017;
Nicolae, et al., 2018; Ortega & Maini, 2018; Ranjan, et al., 2017; Ratasich, et al., 2019; Salay & Czarnecki, 2018; Shi, et al., 2018; Song,
et al., 2018; Warde -Farley & Goodfellow, 2016; Webb, et al., 2018; Zantedeschi, et al., 2017; Zhao, et al., et al., 2018; Zhang, et al.,
2019; Varshney & Alemzadeh , 2017
299 Security Our model was evaluated and
developed to minimise its
vulnerability to inversion attacks
(NB: model inversion attacks
attempt to reconstruct training
data from model parameters). An independent security analyst
has carried out extensive
penetration testing on our
systems to ensure that sensitive
data will not be revealed to non-
trusted parties .
Accuracy and
Performance
Metrics The dataset was augmented
using synthetic data, in
collaboration with a domain and
subject matter expert, to address
known gaps and ensure the
training data were representative
and complete. Diverse stakeholder groups were
involved with the model’s
development to ensure it
performed reli ably within and
between each group.
Reliability The runtime environment of our
model is well controlled and
understood . Our model training
was oriented towards
performance in this
environment. Model reliability has been
evaluated and optimised using
sensitivity analyses and
perturbations to training data to
minimise the risk of encountering novel data
in the runtime
environment.
ACCOUNTABILITY72 Traceability The o rigin of our data sources
and co llection processes have
been sufficiently documented to
support reproducibility . A complete description of all data
collected by our system is
available through our privacy
policy , and the data types are
linked to their specific uses.
72 AI Now Institute, 2018; Leslie, 2019; Binns, 2018; Cavoukian, et al., 2010; CDT (n.d.); Diakopoulos, et al., 2015; Diakopoulos, et al.,
2017; Kroll et al., 2016; Malgieri & Comandé, 2017; O’Sullivan et al., 2019; Reed, 2018; Stahp & Wright, 2018; Wachter, et al, (2017a);
Zook, et al., 2017; ACM US Public Policy Council, 2017; Ananny & Crawford, 2018; Antunes, et al., 2018; Burrell, 2016; Citron, 2008;
Citron & Pasquale, 2014; Crawford & Schultz, 2014; Edwards & Veale, 2017; Kemper & Kolkman, 2019; Turilli & Floridi, 2009; We ller,
2017; Lepri, et al., 2018; Mittelstadt, et al., 2016; Selbst, et al., 2019; Suresh & Guttag, 2019; Veale, et al., 2018; Davidson & Freire, 2008; Buhmann, et al., 2020; Cech, 2020; Cobbe, et al., 2021; Diakopoulos, 2014; Diakopoulos, 2015; Diakopoulos, 2016; FAT/ML, 2016; Fink, 2018; European Parliamentary R esearch Service, 2019; Hamilton, et al., 2014; Hutchinson, et al., 2021; Kacianka & Pretschner,
2021; Kaminski, 2018; Wieringa, 2020; Young, et al., 2019; Zicari, et al., 2021; Tagiou, et al., 2019; Shah, 2018; ICO, 2020; Rosenblat,
et al., 2014; Rosenbaum & Fichman, 2019; Reddy, et al., 2019
300 Auditability All relevant details of our model
training, testing, and validation
have been recorded on an
accessible team repository (e.g.,
predictor selection processes, baselines for model comparison,
performance metrics).
Clear Data
Provenance and
Data Lineage System logs are stored securely
and kept for a period of up to 1
year to support any internal or
external audit or review.
Accessibility All identified stakeholders were
consulted prior to the
development of our system to help critically evaluate our
project plans and ensure they
were intelligible.
Where our system is deployed in
public spaces, clear messaging
is established to ensure all individuals are aware that the
system is in operation.
Reproducibility An extensive set of
documentation has been made
available to any individual who
wishes to deploy our model in
their own system, including suggestions about
appropriate/inappropriate use
cases.
Responsible
Governance A data protection officer has
identified and confirmed the appropriate controllers and
processers of the data.
A member of the team is tasked
with monitoring use of the
system for compliance with
relevant standards and is
authorised to revoke use if found
to be misused and/or harmful .
Answerability Members of the development
team are free and empowered
301 to object to potentially harmful
design decisions without fear of
reprisal.
FAIRNESS73 Non-Discrimination Members of identity and
demographic groups that are
most at risk of harm by AI
systems are consulted about
design intentions to help identify
and remove discriminatory
effects of the AI system.
During pre -processing, steps
were taken to balance the
distribution of the data sets’
classes.
Post-processing techniques were
used to minimise our cla ssifier’s
correlation with protected
attributes. Users are constrained by design
or policy from using the system
to profile or target persons
based on protected characteristics.
Persons affected by use of the
system have avenues of
recourse, ability to contest
system outputs, and demand
human intervention .
Equality An equality impact assessment
was performed to identify and
mitigate potential discriminatory
harms that could arise through
the deployment of this system. The system has been thoroughly
tested to ensure its benefits and
harms are distributed across
identities and groups in an
equitable manner . The system has been evaluated
by independent domain experts to ensure that it does not impinge on expressive rights .
The system will not make use of
personal data or reveal personal
data to others without the
unambiguous consent of the
subject .
Diversity and
Inclusiv eness The problem being addressed by
the system was formulated in a
multistakeholder process to
ensure the inclusion of a broad
range of perspectives and
potential concerns. The system has been tested by
independent domain experts to
ensure that it will function
similarly for differently situated
persons.
73 Diakopoulos, et al., 2017; Leslie, 2019; Donovan, et al., 2018; Binns, 2017; Binns, et al., 2018; Holstein, et al., 2018; Lep ri, et al.,
2018; Custers, 2013; Custers & Schermer, 2014; European Commission Expert Group on FAIR Data, 2018; L’heureux, et al., 2017; Ruggieri,
et al., 2010; Hajian, et al., 2016; Kamiran & Calders, 2012; Lehr & Ohm, 2017; Passi & Barocas, 2019; Singhal & Jena, 2013; van der
Aalst, et al., 2017; Corbett -Davies, et al., 2017; Dwork, et al., 20 12; Grgić -Hlača, et al., 2017; Kusner, et al., 2017; Verma & Rubin, 2018;
Zafar, et al., 2015; Žliobaitė , 2017
302
Bias Mitigation Data were collected, extracted,
or acquired with the meaningful
consent of the data subjects.
Independent domain experts
analysed the data to determine
acceptable levels of bias . Users have been fully trained to
identify and mitigate “human
factors” concerns associated with
system use .
EXPLAINABILITY74 Accessible
Rationale
Explanation Data sets are open and
accessible for analysis by
independent auditors. Our system was tested with a
diverse set of end users to
ensure that its outputs were
accessible at the time they were
needed and delivered relevant
and interpretable information. Previously identified
stakeholders were consulted again towards the end of the project lifecycle to evaluate
whether they were happy with
the accessibility and comprehensive ness of the
explanations being offered by
the system.
Responsible
Implementation
and User Training User testing was conducted prior
to full deployment of the system.
This involved a representative
sample of users who were asked
to ensure they could
satisfactorily interpret the
outputs of the system and
provide suitable explanations The interface through which
users will interact with our system, has been developed to meet universal design standards
and promote accessibility for all
users, including those with visual or cognitive impairments.
74 CDT, n.d.; Diakopoulos, et al., 2017; Leslie, 2019; Janssen & Kuk, 2016; Wachter, et al, (2017a); Wachter et al., 2017b; Demš ar &
Bosnić, 2018 ; Edwards & Veale, 2017; Adadi & Berrada, 2018; Bathaee, 2018; Bibal & Frénay, 2016; Bracamonte, 2019; Burrell, 2016;
Card, 2017; Caruana, et al., 1999; Chen, et al., 2018; Doshi -Velez & Kim, 2017; Doshi -Velez, et al., 2017; Dosilovic, et al., 2018;
Eisens tadt & Althoff, 2018; Feldmann, 2018; Gilpin, et al., 2018; Guidotti, et al., 2018; Kleinberg, et al., 2017; Kroll, 2018; Lak karaju, et
al., 2016; Lepri, et al., 2018; Lipton, 2016; Lipton & Steinhardt, 2018; Lundberg & Lee, 2017, Mittelstadt, et al., 2019; Moinar, 2018;
Murdoch, et al., 2019; Olhede & Wolfe, 2018; Park, et al., 2016; Pedreschi, et al., 2018; Pedreschi, et al., 2019; Poursabzi -Sangdeh, et
al., 2018; Ribeiro, et al., 2016b; Rudin, 2018; Rudin & Ustun, 2018; Shmueli, 2010; Shaywitz, 2018; Sim onite, 2017; Sokol & Flach,
2018; ICO & ATI, 2020
303 about its functions t o affected
persons.
Responsible Model
Selection Features were selected to
optimise for both interpretability
and predictive power.
Interpretability A range of stakeholder
engagement session were held
to facilitate accessible
participation and identify specific
needs and challenges of the respective groups.
DATA QUALITY75 Source Integrity
and Measurement
Accuracy An automated script has been
established that automatically flags for human checking any
unexpected deviations, missing
data, or unexpected formatting . Mechanisms have been
established that monitor
continuous training data, and
test/verify their continued
accuracy.
Timeliness and
Recency Only data that were collected
within the previous 3 months
were used to ensure the training
data were up -to-date. New data are collected and used
to retrain/revalidate the model
every month.
Relevance,
Appropriateness,
and Domain
Knowledge Domain experts were consulted
about the findings from our
exploratory data analysis , and
verified the relevance of the
input variables.
Adequacy of
Quantity and
Quality External validation of the model
was carried out prior to full
deployment to verify whether the
training data were adequate to
75 Abiteboul, et al., 2015; Dai, et al., 2018; Leslie, 2019
304 stand -in for data encountered in
novel settings.
Balance and
Representativeness Cross -validation was carried out
to determine the generalisability
of training data samples and
models. Automated triggers have been
setup to frequently check
whether the model is still
representative of the original
data generation process.
Data Integrity76 Attributable
Metadata is stored alongside the
datasets to help aid and improve
data analysis.
Consistent, Legible
and Accurate Multiple copies of the dataset
were stored securely and used to
verify consistency throughout
the project.
Contemporaneous Data collection was carried out
using a bespoke application to
avoid timely transcription and
ensure the data were captured in
a relevant and labelled format. New data that is collected in the
runtime environment of the
system is gathered in a
structured format and with
accompanying metadata.
Responsible Data
Management Our project team completed
tailored training on data
governance prior to the start of
the project. Data is stored in an interoperable
and reusable format to promote
replicability and transparency .
Data Traceability
and Auditability Training and testing data splits
have been fully documented.
76 Ambacher, et al., 2007; Faudeen, 2017; Stoyanovich, et al., 2017; SL Controls (n.d.)
305 Data Protection
and Privacy77 Consent (or
legitimate basis)
for processing
Data Security Before data is copied or
transferred outside of our secure
development environment, a
multi-party approval system is
triggered to minimise the risk of
security leaks.
Data Minimisation Domain experts and
stakeholders were consulted to
identify the minimum level of
data that needed to be collected
to ensure satisfactory
performance of the system.
Transparency A document is available on our
website that explains how our
model works and also provides
details of the relevant input
features and outputs.
Proportionality Model comparison was
undertaken to determine that all
personal data that are collected
were necessary for the purposes
of the task by iteratively
removing different types and
recording the decrease in model
performance.
Purpose Limitation Before using the system a user is
provided with a privacy policy
outlining the purposes for all data
77 ICO, 2017; Council of Europe, 2018; Cavoukian, et al., 20 10; ICO, 2021; Malgieri & Comandé, 2017; Veale, et al., 2018; Wachter et al.,
2017b; Alper, et al., 2018; Antignac, et al., 2016; Kaminski, 2018
306 that is collected and required to
acknowledge informed consent
to proceed.
Accountability Mechanisms have been
established that allow users to
request full data erasure if they
no longer want to use the
system.
Lawfulness,
Fairness, and
Transparency The system is designed to
function without the need to
collect any personal information. Personal data for each of the
records were collected to verify
that none of the extracted
features contained hidden
proxies. Once this assessment
had been carried out, the
personal data were destroyed.
Respect for the
rights of data
subjects The model selection process was
constrained by the requirement
that any model must be
interpretable to ensure their right
to be informed is respected.
Table 2 – List of goals and example claims and actions that could be used in a HUDERAC
307 Appendix 1: Summary of Processes, Steps, and User Activities
308 Appendix 2: HUDERAF Project Lifecycle Process Map
309 Appendix 3: A Sociotechnical Approach to
the ML/AI Project Lifecycle78
There are many ways of carving up a project lifecycle for a ML/AI system or other
data-driven technology (hereafter shortened to just ‘proje ct lifecycle’). For
instance, Sweenor et al. ( 2020) break it into four stages: Build, Manage, Deploy
& Integrate, Monitor.79 Similarly, ( A shmore, Calinescu, and Paterson 2019 )
identify four stages, which have a more specific focus on data science: data
management, model learning, model verification, and model deployment. Furthermore, there are also well -established methods that seek to govern
common tasks within a project lifecyle, such as data mining (e.g. CRISP -DM or
SEMMA).
The multiplicity of approaches is likely a product of the evolution of diverse
methods in data mining/analytics, the significant impact of ML on research and
innovation, and the specific practices and considerations inherent to each of the
various domains where ML techniques are applied. While there are many benefits
of existing frameworks (e.g. carving up a complex process into smaller
components that can be managed by a network of teams or organisations), they
do not tend to focus on the wider social or ethical aspects that interweave
throughout the various stages of a ML lifecycle. Figure 2, therefore, presents a
model of the ML lifecycle, which we have designed to support the process of
building an ethical ass urance case, while remaining faithful to the importance of
technical requirements and challenges and also supporting more open, reflective,
and participatory forms of deliberation.
78 This Appendix reproduces a section of Christopher Burr and David Leslie (2021) , “Ethical
Assurance: A practical approach to the responsible design, development, and deployment
of data-driven technologies” Forthcoming, The Alan Turing Institute .
79 These four stages are influenced by an ‘ML OPs’ perspective ( Sweenor et al. 2020 ). The
term ‘MLOps’ refers to the application of DevOps practices to ML pipelines. The term is
often used in an inclusive manner to incorporate traditional statistical or data science
practices that support the ML lifecycle, but are not themselves constitu tive of machine
learning (e.g. exploratory data analysis), as well as deployment practices that are
important within business and operational contexts (e.g. monitoring KPIs).
310
The Project Lifecycle —the overarching stages of design, development, and
deployment (for a typical data- driven project) can be split into indicative tasks
and activities. In practice, both the stages and the tasks will overlap with their
neighbours, and may be revisited where a particular task requires an iterative
approach. The spiral indicates that this is a diachronic, macroscopic process that
evolves and develops over time, and as the deployment stage finishes, a new iteration is likely to begin.
To begin, the inner circle breaks the project lifecycle into a process of (project)
design, (model) development, and (system) deployment. These terms are
intended to be maximally inclusive. For example, the design stage encompasses
any project task or decision-making process that scaffolds or sets constraints on later project stages (i.e. design system constraints). Importantly, this includes
ethical, social, and legal constraints, which we will discuss later.
Each of the stages shades into its neighbours, as there is no clearly delineated
boundary that differentiates certain project design activities (e.g. data extraction
and exploratory analysis) from model design activities (e.g. preprocessing and
feature engineering, model selection). As such, the design stage overlaps with the
development stage, but the latter extends to include the actual process of training,
testing, and validating a ML model. Similarly, the process of productionizing a
model within its runtime environment can be thought of as both a development
and deployment activity. And, so, the deployment stage overlaps with the
‘development’ stage, and also overlaps with the ‘design’ stage, as the deployment
311 of a system should be thought of as an ongoing process (e.g. where new data are
used to continuously train the ML model, or, the decision to de-provision a model
may require the planning and design of a new model, if the older (legacy) system
becomes outdated). For these reasons, the project lifecycle is depicted as a spiral.
However, despite the unidirectional nature of the arrows, we also acknowledge
that ML/AI research and innovation is frequently an iterative process. Therefore,
the singular direction is only present at a macroscopic level of abstraction (i.e., the overall direction of progress for a project), and allows for some inevitable back
and forth between the stages at the microscopic level.
The three higher -level stages can be thought of as a useful heuristic for
approaching the project lifecycle. However, each higher -level stage subsumes a
wide variety of tasks and activities that are likely to be carried out by different
individuals, teams, and organisations, depending on their specific roles and
responsibilities (e.g. procurement of data). Therefore, it is important to break
each of the three higher -level stages into their (typical) constituent parts, which
are likely to vary to some extent between specific projects or within particular
organisations. In doing so, we expose a wide range of diverse tasks, each of which
give rise to a variety of ethical, social, and legal challenges. The following sections provides an illustrative overview of these stages and tasks, as well as a non-
exhaustive sample of the associated challenges.
(Project) Design Tasks and Processes
Project Planning
Rather than using ML/AI as a “hammer” to go looking for nails, it is best to hav e
a clear idea in mind of what the project’s goals are at the outset. This can help to
avoid a myopic focus on a narrow class of ML/AI -based “solutions,” and also helps
create space for a diversity of approaches —some of which may not require ML/AI
at all. Project planning, therefore, can comprise a wide variety of tasks, including,
but not limited to:
• an assessment of whether building an AI model is the right approach given
available resources and data, existing technologies and processes already in
place, the complexity of the use -contexts involved, and the nature of the
policy or social problem that needs to be solved (Leslie et al 2021a);
• an analysis of user needs in relation to the prospective AI model and whether a solution involving the latter provides appropriate affordances in keeping with
user needs and related functional desiderata;
• mapping of key stages in the project to support governance and business tasks (e.g. scenario planning);
• an assessment of resources and capabilities within a team, which is necessary
for identifying any skills gaps,
• a contextual assessment of the target domain and of the expectations, norms,
and requirements that derive therefrom;
312 • stakeholder analysis and team positionality reflection to determine the
appropriate level and scope of community engagement activities (Leslie et al
2021b);
• stakeholder impact assessment, supported by affected people and communities, to identify and evaluate possible harms and benefits associated
with the project (e.g. socioeconomic inequalities that may be exacerbated as
a result of carrying out the project), to gain social license and public trust, and also feed into the process of problem formulation in the next stage;
• wider impact assessments —both where required by statute and done
voluntarily for transparency and best practice (e.g. equality impact
assessments, data protection impact assessments, human rights impact
assessment, bias assessment)
Problem Formulation
Here, ‘problem’ refers both to a well -defined computational process (or a higher -
level abstraction of the process) that is carried out by the algorithm to map inputs
to outputs and to the wider practical, social, or policy issue that will be addressed through the translation of that issue into the statistical or mathematical frame.
For instance, on the computational side, a convolutional neural network carries
out a series of successive transformations by taking (as input) an image, encoded as an array, in order to produce (as output) a decision about whether some object
is present in the image. On the practical, social, and policy side, there will be a
need to define the computational “problem” being solved in terms of the
algorithmic system’s embeddedness in the social environment and to explain how
it contributes to (or affects) the w ider sociotechnical issue being considered. In
the convolutional neural network example, the system being produced may be a facial recognition technology that responds to a perceived need for the biometric
identification of criminal suspects by matching fa ce images in a police database.
The social issue of wanting to identify suspects is, in this case, translated into the
computational mechanism of the computer vision system. But, beyond this,
diligent consideration of the practical, social, or policy issue being addressed by
the system will also trigger, inter alia , reflection on the complex intersection of
potential algorithmic bias, the cascading effects of sociohistorical patterns of
racism and discrimination, wider societal and community impacts, and th e
potential effects of the use of the model on the actors in the criminal justice
systems who will become implementers and subjects of the technology.
Sociotechnical considerations are also important for determining and evaluating
the choice of target variables used by the algorithm, which may ultimately be
implemented within a larger automated decision -making system (e.g. in a
verification system). The task of formulating the problem allows the project team to get clear on what input data will be needed, for what purpose, and whether
there exists any representational issues in, for example, how the target variables
are defined. It also allows for a project team (and impacted stakeholders) to reflect
on the reasonableness of the measurable proxy that is used as a mathematical
expression of the target variable, for instance, whether being taken into care
within six months of a visit from child protective services is a reasonable proxy for
313 a child’s being “at risk” in a predictive risk model for children’s soci al care. The
semantic openness and contestability of formulating problems and defining target
variables in ML/AI innovation lifecycles is why stakeholder engagement, which
helps bring a diversity of perspectives to project design, is so vital, and why this
stage is so closely connected with the interpretive burdens of the project planning stage (e.g. discussion about legal and ethical concerns regarding permissible uses
of personal or sensitive information).
Data Extraction or Procurement
Ideally, the project team should have a clear idea in mind (from the planning and
problem formulation stages) of what data are needed prior to extracting or
procuring them. This can help mitigate risks associated with over -collection of data
(e.g. increased privacy or secur ity concerns) and help align the project with values
such as data minimisation (ICO and Institute 2020 ). Of course, this stage may
need to be revisited after carrying out subsequent tasks (e.g. preprocessing,
model testing) if it is clear that insufficient or imbalanced data were collected to
achieve the project’s goals. Where data is procured, questions about provenance
arise (e.g. legal issues, concerns about informed consent of human data subjects).
Generally, responsible data extraction and procurement require the incorporation of domain expertise into decision -making so that desiderata of data minimisation
as well as of securing relevant and sufficient data can be integrated into design
choices.
Data Analysis
Exploratory data analysis is an important stage for hypothesis generation or
uncovering possible limitations of the dataset that can arise from missing data, in
turn identifying the need for any subsequent augmentation of the dataset to deal with possible cla ss imbalances. However, there are also risks that stem from
cognitive biases (e.g. confirmation bias) that can create cascading effects that
effect downstream tasks (e.g. model reporting).
(Model) Development Tasks and Processes
Preprocessing and Feature Engineering
Pre-processing and feature engineering is a vital but often lengthy process, which
overlaps with the design tasks in the previous section and shares with them the potential for human choices to introduce biases and discriminatory patterns into
the ML/AI workflow. Tasks at this stage include data cleaning, data wrangling or
normalisation, and data reduction or augmentation. It is well understood that the
methods employed for each of these tasks can have a significant impact on the
model’s performance (e.g. deletion of rows versus imputation methods for
handling missing data). As Ashmore et al. ( 2019) note, there are also various
desiderata that motivate the tasks, such as the need to ensure the dataset that will feed into the subsequent stages is relevant, complete, balanced, and accurate.
At this stage, human decisions about how to group or disaggregate input features (e.g. how to carve up categories of gender or ethnic groups) or about which input
features to exclude altogether (e.g. leaving out deprivation indicators in a
314 predictive model for clinical diagnostics) can have significant downstream
influences on the fairness and equity of an ML/AI system.
Model Selection
This stage determines the model type and structure that will be produced in the
next stages. In some projects, model selection will result in multiple models for
the purpose of comparison based on some performance metric (e.g. accuracy). In
other projects, there may be a need to first of all implement a pre-existing set of formal models into code. The class of relevant models is likely to have been highly
constrained by many of the previous stages (e.g. available resources and skills,
problem formulation), for instance, where the problem demands a supervised
learning algorithm instead of an unsupervised learning algorithm; or where
explainability considerations require a more interpretable model (e.g. a decision
tree).
Model Training
Prior to training the model, the dataset will need to be split in to training and testing
sets to avoid model overfitting. The training set is used to fit the ML model,
whereas the testing set is a hold -out sample that is used to evaluate the fit of the
ML model to the underlying data distribution. There are various meth ods for
splitting a dataset into these components, which are widely available in popular package libraries (e.g. the scikit -learn library for the Python programming
language). Again, human decision-making at this stage about the training -testing
split and about how this shapes desiderata for external validation —a subsequent
process where the model is validated in wholly new environments —can be very
consequential for the trustworthiness and reasonableness of the development phase of an ML/AI system.
Model Va lidation and Testing
The testing set is typically kept separate from the training set, in order to provide
an unbiased evaluation of the final model fit on the training dataset. However, the training set can be further split to create a validation set, whi ch can then be used
to evaluate the model while also tuning model hyperparameters . This process can
be performed repeatedly, in a technique known as (k -fold) cross -validation, where
the training data are resampled ( k-times) to compare models and estimate t heir
performance in general when used to make predictions on unseen data. This type
of validation is also known as ‘internal validation,’ to distinguish it from external
validation, and, in a similar way to choices made about the training -testing split,
the manner in which it is approached can have critical consequences for how the
performance of a system is measured against the real -world conditions that it will
face when operating “in the wild.”
Model Reporting
Although the previous stages are likely to create a series of artefacts while undertaking the tasks themselves, model reporting should also be handled as a
separate stage to ensure that the project team reflect on the future needs of
various stakeholders and end users. While this stage is likely to include information
about the performance measures used for evaluating the model (e.g. decision
thresholds for classifiers, accuracy metrics), it can (and should) include wider
315 considerations, such as intended use of the model, details of the features used,
training -testing distributions, and any ethical considerations that arise from these
decisions (e.g. fairness constraints, use of politically sensitive demographic
features).80
(System) Deployment Tasks and Processes
Model Productionalization
Unless the end result of the project is the model itself, which is perhaps more
common in scientific research, it is likely that the model will need to be
implemented within a larger system. This process, sometimes known as ‘model
operationalisation,’ requires understanding (a) how the model is intended to function in the proximate system (e.g. within an agricultural decision support
system used to predict crop yield and quality) and (b) how the model will impact —
and be impacted by —the functioning of the wider sociotechnical environment that
the tool is embedded within (e.g. a decision support tool used in healthcare for
patient triaging that may exacerbate existing health inequalities within the wider community). Ensuring the model works within the proximate system can be a
complex programming and software engineering task, especially if it is expected
that the model will be updated continuously in its runtime environment. But, more
importantly, understanding how to ensure th e model’s sustainability given its
embeddedness in complex and changing sociotechnical environments requires active and contextually -informed monitoring, situational awareness, and vigilant
responsiveness.
User Training
Although the performance of the model is evaluated in earlier stages, the model’s
impact cannot be entirely evaluated without consideration of the human factors
that affect its performance in real -world settings. The impact of human cognitive
biases, such as algorithmic aversion81 must al so be considered, as such biases can
lead to over - and under -reliance on the model (or system), in turn negating any
potential benefits that may arise from its use. Understanding the social and
environmental context is also vital, as sociocultural norms may contribute to how
training is received, and how the system itself is evaluated (see B
urton et al.
2020).
System Use and Monitoring
Depending on the context of deployment, it is likely that the performance of the
model could degrade over time. This process of model drift is typically caused by
increasing variation between how representative the training dataset was at the
80 There is some notable overlap between this stage of the project lifecyle and th e ethical
assurance methodology, as some approaches to model reporting often contain similar
information that is used in building an ethical assurance case ( Mitchell et al. 2019 ;
Ashmore, Calinescu, and Paterson 2019 ), specifically in the process of establishing
evidential claims and warrant (see §§4.3.3- 4.3.4).
81 Algorithmic aversion refers to the reluctance of human agents to incorporate algorithmic
tools as part of their decision -making processes due to misaligned expectations of the
algorithm’s performance (see Burton et al. 2020 ).
316 time of development and how representative it is at later stages, perhaps due to
changing social no rms (e.g. changing patterns of consumer spending, evolving
linguistic norms that affect word embeddings). As such, mechanisms for monitoring the model’s performance should be instantiated within the system’s
runtime protocols to track model drift, and key thresholds should be determined
at early stages of a project (e.g. during project planning or in initial impact
assessment) and revised as necessary based on monitoring of the system’s use.
Model Updating or De -provisioning
As noted previously, model updat ing can occur continuously if the architecture of
the system and context of its use allows for it. Otherwise, updating the model may require either revisiting previous stages to make planned adjustments (e.g. model
selection and training), or if more significant alterations are required the extant model may need to be entirely de- provisioned, necessitating a return to a new
round of project planning.
317 Glossary
Access to effective remedy: A core component of human rights impact
assessments. This requires organisations to ensure appropriate steps are in place
to prevent, investigate, punish , and redress negative impacts. Stakeholder
engagement is required to ensure that remedies reflect the needs and interests of
impacted stakeholders.
Accessibility: Ensuring that information about the processes that took place to
design, develop, and deploy an AI system are easily accessible by individuals. This
not only refers to suitable means of explanation (clear, understandable, and
accessible language) but also the mediums for delivery.
Accuracy and Performance Metrics: In machine learning, the accuracy of a
model is the proportion of examples for which it generates a correct output. This
performance measure is also sometimes characterised conversely as an error rate
or the fraction of cases for which the model produces an incorrect output. As a
performance metric, accuracy should be a central component to establishing and
nuancing the approach to safe AI. Specifying a reasonable performance level for
the system may also often require refining or exchanging of the measure of
accuracy. For instance, if certain errors are more significant or costly than others,
a metric for total cost can be integrated into the model so that the cost of one class of errors can be weighed against that of another
Adequate quantity and quality: This property involves assessing whether the
data available is comprehensive enough to address the problem set at hand.
Adequate quantity and quality should address sample size, representativeness,
and availability of features relevant to problem.
Arbitrary deprivation: Where a physical or expressive freedom, or the personal
security of a rightsholder is removed or withheld, such removal or withholding is
arbitrary if it is not justified by a publicly legitimat ised legal framework or social
practice that reflects the principles and intent of human rights doctrine. Where
such deprivation occurs through legislation or other poli cy instrument, hallmarks
of legitimate deprivation include legal due process and the availability of an
effective remedy for affected rights -holders to contest the lawfulness of the
deprived freedom or security .
Attributable: Data should clearly demonstrate who observed and recorded it,
when it was observed and recorded, and who it is about.
Auditability: Whereas the notion of answerability responds to the question of
who is accountable for an automation supported outcome, the notion of
auditability answers the question of how the designers and implementers of AI
systems are to be held accountable. This aspect of accountability has to do with
demonstrating both the responsibility of design and use practices and the
justifiability of outcomes.
Automated Decision: The selection of an action or a recommendation made
using computational processes. Automated decisions describe those that either
augment or replace decisional work typically performed by humans alone. Most
318 commonly, automated decisions are predictions about persons or conditions in the
world derived from machine learning analysis of data about past events and its
similarity to a given set of conditions.
Automated Decision System: "An automated decision system (ADS) augments
or replaces human decision -making by using computational processes to produce
answers to questions either as discrete classifications (e.g., yes, no; male, female,
non-binary; malignant, benign) or continuous scores (e.g., degree of creditworthiness, risk of crime occurrence, projected tumour growth). Most ADS
produce predictions about persons or conditions using machine learning and other computational logic by calculating the probability that a given condition is met.
Typically, an automated decisio n system is ""trained"" on historical data looking
for patterns of relationships between data points (e.g., the relationship between barometer readings, ambient temperature, and snowfall). An automated decision
is made by comparing known patterns with existing inputs to estimate how closely
they match (e.g., weather prediction based on the similarity between today's
climate readings and those from the past). Examples of ADS include algorithms
that calculate credit scores and biometric recognition systems th at attempt to
identify individual people based on physical traits, such as facial features.
Clear Provenance and data lineage: Clear provenance and data lineage
consists of records that are accessible and simultaneously detail how data was collected and how it has been used and altered throughout the stages of pre-processing, modelling, training, testing, and deploying.
Complete: All recorded data requires an audit trail to show nothing has been
deleted or lost.
Concept drifts and shifts: Once trained, most machine learning systems operate
on static models of the world that have been built from historical data which have
become fixed in the systems’ parameters. This freezing of the model before it is released ‘into the wild’ makes its accuracy and reliability especially vulnerable to
changes in the underlying distribution of data. When the historical data that have
crystallised into the trained model’s architecture cease to reflect the population
concerned, the model’s mapping function will no longer be able to accurately and
reliably transform its inputs into its target output values. These systems can quickly become prone to error in unexpected and harmful ways. There has been
much valuable research done on methods of detecting and mitigating concept and
distribution drift, and you should consult with your technical team to ensure that
its members have familiarised themselves with this research and have sufficient
knowledge of the available ways to confront the issue. In all cases, you should
remain vigilant to the potentially rapid concept drifts that may occur in the complex, dynamic, and evolving environments in which your AI project will
intervene. Remaining aware of these transformations in the data is crucial for safe
AI, and your team should actively formulate an action plan to anticipate and to
mitigate their impacts on the performance of your system.
Consistent: Consistency includes ensuring data is chronological, i.e., has a date
and time stamp that is in the expected sequence.
319 Cont emporaneous: Data should be recorded as it was observed, and at the time
it was executed.
Continuous learning: A type of machine learning where the underlying algorithm
or model are adjusted continuously while deployed.
Critical Functions: "Activities that are considered “critical” are economic and
social activities of which “the interruption or disruption would have serious
consequences on: 1) the health, safety, and security of citizens; 2) the effective
functioning of services essential to the economy an d society, and of the
government; or 3) economic and social prosperity more broadly” (OECD, 2019[2]); (OECD, 2019[4]). It is important to note that not all systems in a critical
sector are critical. For example, the administrative time tracking systems of a
hospital or a bank are not considered to be critical systems.
Data controller: The individual or organisation which, alone or jointly with
others, determines how and why personal data will be processed.
Data Protection Impact Assessment: "Where a type of processing in particular
using new technologies, and taking into account the nature, scope, context and
purposes of the processing, is likely to result in a high risk to the rights and
freedoms of natural persons, the controller shall, prior to the proce ssing, carry out
an assessment of the impact of the envisaged processing operations on the
protection of personal data. Data Protection Impact Assessments should contain
at least: a systematic description of the envisaged processing operations and the
purposes of the processing, including, where applicable, the legitimate interest
pursued by the controller; an assessment of the necessity and proportionality of
the processing operations in relation to the purposes; an assessment of the risks
to the rights and freedoms of data subjects referred to in paragraph 1; and the
measures envisaged to address the risks, including safeguards, security measures
and mechanisms to ensure the protection of personal data and to demonstrate compliance with this GDPR regulatio n taking into account the rights and legitimate
interests of data subjects and other persons concerned."
Data subject: An identified or identifiable natural person, who can be identified,
directly or indirectly, by information such as a name or identity number, or by a
combination of characteristics specific to that individual.
Discrimination: According to the UN Human Rights Committee overseeing the
International Covenant on Civil and Political Rights, discrimination “should be
understood to imply any distinction, exclusion, restriction or preference which is
based on any ground such as race, colour, sex, language, religion, political or
other opinion, national or social origin, property, birth or other status, and which
has the purpose or effect of nullifying or impairing the recognition, enjoyment or
exercise by all persons, on an equal footing, of all rights and freedoms.” Other
definitions highlight the effects of systemic rather than explicit discrimination in which social and legal structures and/or historical conditions produce
discriminatory effects even where a specific intention to discriminate is not
present .
320 Disproportionate Impact: Where a system produces effects distributed across
groups characterised by identit y or demographic categor ies and such distributions
are experienced in greater or lesser degree by members of one or more group.
Disproportionality is most often considered when effects are especially harmful to
the interests or life chances of marginalised groups.
Duty -Bearers: Actors who have human rights duties or responsibilities towards
rights-holders. These include the company operating a project or conducting its
activities, business suppliers and contractors, joint -venture and other business
partners, state actors such as loca l government authorities and regional and
national government departments and agencies.
Equality Impact Assessment: A formal evaluative process of identifying
rightsholders whose right to equal status in society, access to opportunities, or
treatment may be adversely affected by a technical system, policy , or practice in
relation to other rightsholders , the extent to which they are affected, and the
availability of redress or remediation .
European Social Charter: A Council of Europe treaty that guarantees
fundamental social and economic rights as a counterpart to the European
Convention on Human Rights, which refers to civil and political rights. It
guarantees a broad range of everyday human rights related to employment,
housing, health, education, soc ial protection and welfare.
Feature engineering: Features are the input variables that are used during
model training and are transformed by an algorithm. For example, a house price
estimation tool used by mortgage valuators may use the following features,
'number of rooms', 'location', 'age of building', 'parking space' and 'size (sq. ft)' to predict the market value of the property. Features are often selected on the
basis of prior domain knowledge but can also be engineered on the basis of their
predicti ve or explanatory value. For instance, a machine learning algorithm could
be fed raw data and trained to extract the most predictive features. In these cases,
the features may not be easily recognisable using traditional concepts or labels.
Flourish: Human flourishing is a term used to refer to self-actualization or an
individual’s achievement of a desirable life goal and is most typically associated
with well -being. For some theories of well -being, flourishing is a goal -oriented
process that is concerned with an individual’s psychology or emotional state. For
other theories, the term refers to a broader social good that depends on the cultivation of social relationships.
Fundamental Rights and Freedoms: The first international standard for
fundamental rights and freedoms was established by the United Nation’s adoption of the Universal Declaration of Human Rights (UDHR), a document establishing
basic rights aimed to safeguard the inviolable dignity of every person, which would
become the basis for the many treaties, conventions, and charters on human
rights that have been adopted worldwide up to the present. These treaties,
charters, and conventions include: The European Convention on Human Rights
(ECHR), which placed obligations on governments to protect ordinary people
against human rights violations. The European Social Charter (ESC), which
extends basic rights to include social and economic rights covering health, working
321 conditions, housing, migrant labour, gender equality, and social security, The
International Covenant on Economic, Social and Cultural Rights (ICESCR) (taking
together the International Covenant on Civil and Political Rights and the
International Covenant on Economic, Social and Cultural Rights)included freedom from torture, right to a fair trial, non -discrimination, and privacy rights, and
extending basic rights to include rights to just working conditions, health, living
standards, education, and social security. The most recent document establishing
fundamental rights and freedoms is the Charter of Fundamental Rights of the
European Union (CFR), which codified a basic set of civil, political, s ocial,
economic, and cultural rights for citizens of the European Union in EU law, including rights pertaining to human dignity, fundamental freedoms, equality,
solidarity, and economic rights, and rights to participation in the life of the
community.
Harm to dignity : An innate worth or status that is not earned and cannot be
forfeited. The dignity of rightsholder s is harmed where, as subject s of action or
inaction, they experience the involuntary diminishment of their fundamental
worth, self-respect, or personal autonomy , or else are treated as objects,
humiliated , or otherwise degraded.
Hidden proxy: A variable which represents some property or feature of an object
or person that is closely correlated with another (hidden) property or feature. For
instance, the variable 'profession' is likely to have a high degree of mutual
information with the variable 'salary'. In this example, the 'salary' variable is the hidden proxy if it is not directly recorded, but the 'profession' variable is present
in the da taset. Hidden proxies can be problematic when they represent protected
characteristics that were not intended to be in the dataset.
High -dimensional: In the context of data, dimensionality refers to the number
of features (or variables) within a dataset. High-dimensional, therefore, is used to
indicate a large number of features (e.g., genetics datasets). This is not to be
confused with the size of the data as determined by the number of records (e.g., number of people).
Highly Regulated : A domain of practice can be considered highly regulated
where multiple laws and policies are targeted specifically to a significant number
of potential activities and relations of the domain. By example, most countries
have highly regulated financial secto rs with government agencies or departments
dedicated to the sector, strict guidelines for the conduct of business, a range of
consumer protections , and substantial penalties for rule violation.
Human Rights Impact Assessment: A formal evaluative process o f identifying
rightsholders whose human rights or fundamental freedoms may be adversely
affected by a technical system, policy, or practice, the extent to which they are
affected, and the availability of redress or remediation.
Human -in-the-loop design: The outputs of an AI model can be used as part of
a wider process in which a human considers the output, as well as other
information available to them, and then acts (makes a decision) based on this.
322 Informational plurality : A condition or environment where information (broadly
defined) is available from multiple voices and perspectives and/or where there are
multiple , differentiated sources of information.
Intelligibility/Understandability: Capable of being understood or
comprehended, not requiring specific technical expertise.
Interpretability: Consists of the ability to know how and why a model performed
the way it did in a specific context and therefore to understand the rationale behind
its decision or behaviour. Labelling bias: Labelling (or annotation) bias is variation in the meaning of a
label used to represent a feature in training data. This commonly occurs when a
specific label does not mean the same thing for all data subjects or rights holders
(e.g., an emotion label that is applied to images of faces). Variation in a label's
meaning is problematic when the specific label is strongly predictive of an
outcome, as this can create situations where certain groups are miscategorised or receive inaccurate outcomes.
Legal basis: Derived from identifiable law or doctrine that can assert the
lawfulness of an action or set of actions.
Major risk: A degree of risk in which the effects of a project or condition c ould be
devastating but may be manageable if enacted with extreme care and close supervision.
Marginalisation: To treat a person, group, or concept as insignificant or
peripheral. In race and social justice discourse, members of society are marginalised by holding an identity or being placed in a demographic category
that is not attached to the dominant side of the prevailing structure of power and
by which they experience oppression .
Mass deception: To deceive someone is to present (false) information as if it
were true and to elicit some related behaviour on the basis of this information.
Mass manipulation: To manipulate someone is to get them to act in a particular
way, perhaps by withholding salient information or influencing how they evaluate
some state of the world.
Measurement errors: Any difference in the recorded value for some variable and
its real -world value (e.g., measurement of the variable height). Measurement
error can occur as a result of poor data collection processes, badly calibrated
measuring devices, or from a range of biases. In the case of measurement bias,
error can arise when the measurement scale does not capture data pertaining to
the subjects in an equal manner (e.g., a self -reported pain scale that obscures
underlying differences between two sub-groups).
Moderate risk: A degree of risk in which potential harms are directly or indir ectly
associated with risks of adverse impacts on the human rights and fundamental
freedoms of affected persons, democracy, and the rule of law but that provide
opportunities for risk reduction and mitigation that make the risks posed broadly
acceptable.
323 Non-deterministic : A process is non -deterministic if its outcome cannot be
predicted (or determined), due to a lack of knowledge about how its causal
structure transforms the initial conditions. A non -deterministic algorithm is one
where an input variable c an take on the same value but provide different results.
Non- Linear: A nonlinear system is one in which the change in the output is not
directly proportional to the change of the input. As a result, the dynamics of the system cannot be represented graphic ally using a straight line.
Other Relevant Parties: Stakeholders other than rights -holders and duty -
bearers, which may include individuals or organisations representing the interests of rights holders and official representations at international, national and local
levels (e.g., the UN, national human rights institutions, NGOs or civil society
organisations).
Performance Metrics: Metrics used to define how good the algorithm is at reaching the correct decision that range beyond accuracy measures.
Physical or Mental Integrity: The core attributes of an individual's overall bodily
integrity. Bodily integrity is an inviolable right that an individual has regarding the self-determination of their own body and mind.
Precision: The number of correctly identified positive predictions divided by the
number of all incorrect and correct positive predictions. Precision helps us to
understand how accurate the model is – out of those predicted positive, how many
of them are actually positive? This is an especially use ful metric when the
importance of false positives is high.
Pre-designated High- Risk or Safety Critical Sector: Annex III of the
European Union Proposed Rules on Artificial Intelligence indicates ‘high-risk’
sectors as those concerned with education, emergency services, employment,
financial services, public benefits , law enforcement, immigration, border control,
and the administration of justice and democratic processes .
Probabilistic: An event is probabilistic if the extent to which it occurs is
considered to be subject to chance variation. In many cases, this is a reflection of our own uncertainty about the event's occurrence, rather than assuming the
existence of a non -deterministic process.
Prohibited AI system : As outlined in the European Union Proposed Rules on
Artificial Intelligence (Art. 5), prohibited AI systems are those that include
manipulation, exploitation, social scoring, or real -time remote biometric ID.
Prohibitive Risk: A degree of risk significant enough to trigger the precautionary
principle and precipitate pre -emptive measures to prevent adverse impacts on the
human rights and fundamental freedoms of affected persons, democracy, and the
rule of law. Pre-emptive measures are appropriate where the severity, scale, and
irremediableness of the potential harm outweigh levels of risk reduction and mitigation. May also refer to a degree of risk in which the effects of a project or
condition would be so devastating upon their occurrence that it would be irrational,
irresponsible, or immoral to proceed.
324 Proportionality: A legal principle that refers to the idea of delivering a just
outcome in ways that are proportionate to the cost, complexity, and resources
available. In a similar vein, it can also be used as an evaluative notion, such as in
the case of a data protection principle that states only personal data that are
necessary and adequate for the purposes of the task are collected.
Protected Characteristics: Identitie s, categories, or other descriptive features
of natural persons defined by human rights, civil rights , or data protection law, as
well as other doctrines concerned with legal rights and equal protection. Such
characteristics are ‘protected’ from arbitrary or capricious use in the allocation or
withholding of rights and social goods. Examples in European law and regulation
include racial or ethnic origin, political opinion, religion or beliefs, trade union
membership, genetic or health status , and sexual orientation. Other commonly
protected characteristics include skin tone, gender-identity , and cognitive or
physical ability .
Relevance, Appropriateness, and Domain Knowledge: The understanding
and utilisation of the most appropriate sources and types of data are crucial for
building a robust and unbiased AI system. Solid domain knowledge of the
underlying population distribution and of the predictive or classificatory goal of the
project is instrumental for choosing optimally relevant measurement in puts that
contribute to the reasonable determination of the defined solution. Domain experts should collaborate closely with the technical team to assist in the
determination of the optimally appropriate categories and sources of
measurement.
Reliability: The objective of reliability is that an AI system behaves exactly as its
designers intended and anticipated. A reliable system adheres to the specifications
it was programmed to carry out. Reliability is therefore a measure of consistency
and can establis h confidence in the safety of a system based upon the
dependability with which it operationally conforms to its intended functionality.
Rights -Holders: All individuals are human rights -holders. These include workers
and families, contractor (goods and ser vices providers) employees and families,
impacted community members, including individuals of all genders, children,
indigenous peoples, migrant workers, ethnic minorities and so forth (both within
the geographic vicinity of operations but also impacted downstream, trans -
boundary or neighbouring communities), human rights defenders, and consumers. Within the HUDERIA, the primary focus is on rights -holders who are, or may be,
adversely affected by a project.
Robustness: The objective of robustness can be thought of as the goal that an
AI system functions reliably and accurately under harsh conditions. These conditions may include adversarial intervention, implementer error, or skewed
goal-execution by an automated learne r (in reinforcement learning applications).
The measure of robustness is therefore the strength of a system’s integrity the soundness of its operation in response to difficult conditions, adversarial attacks,
perturbations, data poisoning, and undesirable reinforcement learning behaviour.
Security: The goal of security encompasses the protection of several operational
dimensions of an AI system when confronted with possible adversarial attack. A
325 secure system is capable of maintaining the integrity of the information that
constitutes it. This includes protecting its architecture from the unauthorised
modification or damage of any of its component parts. A secure system also
remains continuously functional and accessible to its authorised users and keeps
confidential and private information secure even under hostile or adversarial
conditions.
Sensitivity/Recall: The number of correct predictions divided by the number of
all samples that should have been identified as positive. Recall helps us to
understand how many actual positives are captured through the model labelling it
positive. This is an especially useful metric when the importance of false negatives
is high.
Snowball Approach: An approach to sampling where a researcher or
organisation begins with a s mall number of contacts or stakeholders who have
been identified as relevant. During initial engagement with these stakeholders’
recommendations are sought for other potentially relevant stakeholders to engage. As this process continues the sample of stakeholders increases and
diversifies.
Socio- technical System: A system that couples human (or social) behaviour to
the functioning of a technical system, and in doing so gives rise to novel (and
emergent) functions that are not reducible to either the human or technical
elements. By intervening in human behaviours, attitudes, or their relations to the
world, the technical system restructures human behaviour. The socio -technical
perspective is one that considers the human desires or goals a technology is meant
to, or does, achieve. We can also describe as socio -technical those systems whose
very existence, implementation, or effects implicate human political, economic, or social relations. For example, surveillance systems adopted by law enforcement
agencies ar e socio -technical because their adoption and use have political
dimensions; the selected targets of police surveillance are affected more acutely than others by the use of surveillance technologies based on the historical choices
made by government and law enforcement officials. From this socio- technical
perspective, surveillance technologies participate in relations between people and
the centres of power in society.
Stakeholder: Individuals or groups that (1) have interests or rights that may be
affected by the past, present, and future decisions and activities of an
organisations; (2) may have the power or authority to influence the outcome of
such decisions and activities; (3) possess relevant characteristics that put them in
positions of advantage or vulnerability with regard to those decisions and activities.
Target Variable: A feature within data defined as the output that a machine
learning model tries to predict. Each of the other features considered by the model are used to predict the target variable. For example, for an ML model built to
predict what tomorrow’s temperature may be, the target variable would be
tomorrow’s temperature, features in input data such as precipitation levels,
humidity, amongst others would be used by the model to predict t his target
variable.
326 Timeliness and Recency (up -to-date): If datasets include outdated data, then
changes in the underlying data distribution may adversely affect the
generalisability of the trained model. Provided these distributional drifts reflect
chang ing social relationship or group dynamics, this loss of accuracy with regard
to the actual characteristics of the underlying population may introduce bias into an AI system. In preventing discriminatory outcomes, timeliness and recency of
all elements of t he data that constitute the datasets must be scrutinised.
Traceability: Refers to the process by which all stages of the data lifecycle from
collection to deployment to system updating or deprovisioning are documented in
a way that is accessible and easily understood. This may include not only the
parties within the organisation involved but also the actions taken at each stage
that may impact the individuals who use the system.
Vulnerability: Refers to being “at a higher risk of being unable to anticipate, cope
with, resist , and recover from project -related risks and/or adverse impacts .” Such
risk may be corporeal, as in being subject to heightened risk of physical or
psychological harm, or non-corporeal , as in being at heightened risk of diminished
economic or social status . Vulnerable individuals or groups may include women,
children, the elderly, the poor, ethnic, religious, cultural or linguistic minorities, or
indigenous groups .
Vulnerable Group : Refers a degree of risk from harm associated memb ership in
a particular group or category. For example, people living in poverty are members
of a vulnerable group (i.e., “the poor”) because they share characteristics that put
them at greater risk of particular forms of harm than persons belonging to more
affluent groups .
327 References
Abiteboul, S., & Stoyanovich, J., & Weikum, G. (2015). Data, Responsibly. ACM
Sigmod Blog. Retrieved from http://wp.sigmod.org/?p=1900
ACM US Public Policy Council. (2017). Statement on algorithmic transparency
and accountability. Retrieved from
https://www.acm.org/binaries/content/assets/public-
policy/2017_usacm_statement _algorithms.pdf
Adadi, A., & Berrada, M. (2018). Peeking inside the black -box: A survey on
Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138- 52160. Retrieved
from https://ieeexplore.ieee.org/abstract/document /8466590
AI Now Institute. (2018). Algorithmic Accountability Policy Toolkit. Retrieved from https://ainowinstitute.org /aap -toolkit.pdf
Aizenberg, E., & van den Hoven, J. (2020). Designing for human rights in AI. Big Data and Society. https://doi.org/10.1177/2053951720949566
Alper, P., Becker, R., Satagopam, V., Grouès, V., Lebioda, J., Jarosz, Y., … _& Schneider, R. (2018). Provenance-enabled stewardship of human data in the
GDPR era. https://doi.org/10.7490/f1000research.1115768.1
Ambacher, B., Ashley, K., Berry, J., Brooks, C., Dale, R. L., & Flecker, D. (2007).
Trustworthy repositories audit & certification: Criteria and checklist. Center for
Research Libraries, Chicago/Illinois. Retrieved from https://www.crl.edu/sites/default/files/d6/attachments/pages/trac_0.pdf
Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D.
(2016). Concrete prob lems in AI safety. arXiv:1606.06565. Retrieved from
https://arxiv.org/abs/1606.06565
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media &
Society, 20(3), 973 -989. Retrieved from
https://journals.sagepub.com/doi/abs/10.1177/1461444816676645
Antignac, T., Sands, D., & Schneider, G. (2016). Data minimisation: A language-
based approach (long version). ArXiv:1611.05642. Retrieved from
http://arxiv.org/abs/1611.05642
Antunes, N., Balby, L., Figueiredo, F., Lourenco, N., Meira, W., & Santos , W.
(2018). Fairness and transparency of machine learning for trustworthy cloud
services. 2018 48th Annual IEEE/IFIP International Conference on Dependable
Systems and Networks Workshops (DSN -W), 188 –193.
https://doi.org/10.1109/DSN- W.2018.00063
Auernham mer, K., Kolagari, R. T., & Zoppelt, M. (2019). Attacks on Machine
Learning: Lurking Danger for Accountability [PowerPoint Slides]. Retrieved from
https://safeai.webs.upv.es/wp- content/uploads /2019/02/3.SafeAI.pdf
328 Bathaee, Y. (2018). The artificial intel ligence black box and the failure of intent
and causation. Harvard Journal of Law & Technology, 31(2), 889. Retrieved from
https://www.questia.com/library/journal/1G1- 547758123/the -artificial-
intelligence-black-box-and -the-failure
Bibal, A., & Frénay, B. (2016). Interpretability of Machine Learning Models and Representations: an Introduction. Retrieved from
https://www.researchgate.net/profile/Adrien_Bibal/publication /326839249_Interpretability_of_Machine_Learning_Models_and_Representations
_an_Introductio n/links/5b6861caa6fdcc87df6d58e4/Interpretability -of-Machine -
Learning -Models-and -Representations-an -Introduction.pdf
Binns, R. (2017). Fairness in machine learning: Lessons from political
philosophy. arXiv:1712.03586. Retrieved from https://arxiv.org/abs/ 1712.03586
Binns, R. (2018). Algorithmic accountability and public reason. Philosophy &
Technology, 31(4), 543- 556. Retrieved from
https://link.springer.com/article/10.1007/s13347 -017- 0263-5
Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018). 'It's reducing a human being to a percentage': Perceptions of justice in
algorithmic decisions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (p. 377). ACM. Retrieved from https://dl.acm.org
/citation.cfm?id=3173951
Bracamonte, V. (2019). Challenges for transparent and trustworthy machine
learning [Power Point]. KDDI Research, Inc. Retrieved from
https://www.itu.int/en/ITU -T/Workshops- and-Seminars/20190121
/Documents/Vanessa_Bracamonte_Presentation.pdf
Brundage, M., Avin, S., Wang, J., Belfield, H., Krueger, G., Hadfield, G., Khlaaf,
H., Yang, J., Toner, H., Fong, R., Maharaj, T., Koh, P. W., Hooker, S., Leung, J.,
Trask, A., Blue mke, E., Lebensbold, J., O’Keefe, C., Koren, M., … Anderljung, M.
(2020). Toward Trustworthy AI Development: Mechanisms for Supporting
Verifiable Claims. ArXiv:2004.07213 [Cs]. http://arxiv.org/abs/2004.07213
Buhmann, A., Paßmann, J., & Fieseler, C. (2020). Managing Algorithmic
Accountability: Balancing Reputational Concerns, Engagement Strategies, and
the Potential of Rational Discourse. Journal of Business Ethics, 163.
https://doi.org/10.1007/s10551 -019- 04226-4
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine
learning algorithms. Big Data & Society, 3(1).
https://doi.org/10.1177/2053951715622512
Calvo, R. A., Peters, D., & Cave, S. (2020). Advancing impact assessment for intelligent systems. Nature Machine Intelligence, 1 –3.
https://doi.org/10.1038/ s42256 -020- 0151-z
Card, D. (2017). The “black box” metaphor in machine learning. Towards Data
Science. Retrieved from https://towardsdatascience.com/the -black-box-
metaphor-in -machine -learning -4e57a3a1d2b0
329 Caruana, R., Kangarloo, H., Dionisio, J. D., Sinha, U., & Johnson, D. (1999).
Case-based explanation of non -case-based learning methods. Proceedings. AMIA
Symposium, 212 –215. Retrieved from
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2232607/
Cavoukian, A., Taylor, S., & Abrams, M. E. (2010). Privacy by Design: essential
for organizational accountability and strong business practices. Identity in the Information Society, 3(2), 405 –413. https://doi.org /10.1007/s12394 -010-
0053- z
Cech, F. (2020). Beyond Transparency: Exploring Algorithmic Accountability.
Companion of the 2020 ACM International Conference on Supporting Group Work, 11 –14. https://doi.org/10.1145/3323994.3371015
Center for Democracy & Technology. (n.d.). Digital decisions. Retrieved from https://cdt.org/issue /privacy -data/digital -decisions/
Chen, C., Li, O., Tao, C., Barnett, A., Su, J., & Rudin, C. (2018). This loo ks like
that: deep learning for interpretable image recognition. arXiv:1806.10574. Retrieved from https://arxiv.org/abs /1806.10574
Citron, D. K. (2008). Technological due process. Washington University Law
Review, 85(6). Retrieved from https://heinonline.org/hol-cgi -
bin/get_pdf.cgi?handle=hein.journals/walq85§ion=38
Citron, D. K., & Pasquale, F. (2014). The scored society: Due process for automated predictions. Wash. L. Rev., 89, 1. Retrieved from https://heinonline.org/HOL/LandingPage?handle=hein.journals
/washlr89&div=4&id=&page=&t=1560014586
Cobbe, J., Lee, M. S. A., & Singh, J. (2021). Reviewable Automated Decision -
Making: A Framework for Accountable Algorithmic Systems. Proceedings of the
2021 ACM Conference on Fairness, Accountability, and Tran sparency, 598 –609.
https://doi.org/10.1145/3442188.3445921
Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017).
Algorithmic decision making and the cost of fairness. ArXiv:1701.08230.
https://doi.org/10.1145/3097983.309809
Coston, A., Guha, N., Ouyang, D., Lu, L., Chouldechova, A., & Ho, D. E. (2021).
Leveraging Administrative Data for Bias Audits: Assessing Disparate Coverage with Mobility Data for COVID-19 Policy. Proceedings of the 2021 ACM Conference
on Fairness, Accountability, and Transparency, 173 –184.
https://doi.org/10.1145/3442188.3445881
Council of Europe. (2018). Convention 108 +: Convention for the protection of individuals with regard to the processing of personal data.
https://rm.coe.int/convention -108-convention-for- the-protection-of-individuals -
with-regar/16808b36f1
Council of Europe Commissioner for Human Rights. (2019). Unboxing Artificial Intelligence: 10 steps to protect Human Rights . https://rm.coe.int/unboxing -
artificial-intelligence- 10-steps- to-protect- human-rights-reco/1680946e64
330 Crawford, K., & Schultz, J. (2014). Big Data and due process: Toward a
framework to redress predictive privacy harms. BCL Rev., 55, 93. Retrieved from
https://heinonline.org/HOL/LandingPage?handle=hein.journals /bclr55&div=5&id=&page=&t=1560014537
Custers , B. (2013). Data dilemmas in the information society: Introduction and
overview. In Discrimination and Privacy in the Information Society (pp. 3 -26).
Springer, Berlin, Heidelberg. Retrieved from
https://link.springer.com/chapter/10.1007/978-3- 642-30487- 3_1
Custers, B. H., & Schermer, B. W. (2014). Responsibly innovating data mining and profiling tools: A new approach to discrimination sensitive and privacy
sensitive attributes. In Responsible Innovation 1 (pp. 335 -350). Springer,
Dordrecht. Retrieved from https://link.springer.com/chapter/10.1007/978 -94-
017-8956 -1_19
Dai, W., Yoshigoe, K., & Parsley, W. (2018). Improving data quality through deep learning and statistical models. ArXiv:1810.07132, 558, 515 –522.
https://doi.org/10.1007/978-3- 319-54978 -1_66
Davidson, S. B., & Freire, J. (2008). Provenance and scientific workflows: challenges and opportunities. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data (pp. 1345 -1350). ACM.
Retrieved from https://dl.acm.org/citation. cfm?id=1376772
Demšar, J., & Bosnić, Z. (2018). Detecting concept drift in data streams using
model explanation. Expert Systems with Applications, 92, 546– 559.
https://doi.org/10.1016/j.eswa.2017.10.003
Diakopoulos, N. (2014). Algorithmic Accountability Reporting: On the
Investigation of Black Boxes. Tow Center for Digital Journalism.
http://www.nickdiakopoulos.com/wp-content/uploads/2011/07/Algorithmic-
Accountability-Reporting_final.pdf
Diakopoulos, N. (2015). Algorithmic Accountability: Journalistic investigation of
computational power structures. Digital Journalism, 3(3), 398– 415.
https://doi.org/10.1080/21670811.2014.976411
Diakopoulos, N. (2016). Accountability in algorithmic decision making.
Communications of the ACM, 59(2), 56 –62. https://doi.org/10.1145/2844110
Diakopoulos, N., Friedler, S., Arenas, M., Barocas, S., Hay, M., Howe, B., ... &
Wilson, C. (2017). Principles for accountable algorithms and a social impact
statement for algorithms. FAT/ML. Retrieved from
https://www.fatml.org/resources/principles-for -accountable -algorithms
Donovan, J., Caplan, R., Hanson, L., & Matthews, J. (2018). Algorithmic
accountability: A primer. Data & Society Tech Algorithm Briefing: How Algorithms Perpetuate Racial Bias and Inequality. Retrieved from
https://datasociety.net/output/algorithmic-accountability -a-primer/
331 Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable
machine learning. arXiv:1702.08608. Retrieved from
https://ar xiv.org/abs/1702.08608
Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S., O'Brien, D., ...
& Wood, A. (2017). Accountability of AI under the law: The role of explanation.
arXiv:1711.01134. Retrieved from https://arxiv.org/abs/1711.01134
Dosilovic, F. K., Brcic, M., & Hlupic, N. (2018). Explainable artificial intelligence:
A survey. 2018 41st International Convention on Information and
Communication Technology, Electronics and Microelectronics (MIPRO), 0210 –
0215. https://doi.org/10.23919/MIPRO.2018.8400040
Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2012). Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference (pp. 214 -226). ACM. Retrieved from
https://dl.acm.org/citation.cfm?id=2090255
Edwards, L., & Veale, M. (2017). Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for. Duke L. & Tech.
Rev., 16, 18. Retrieved from https://heinonline.org/HOL /LandingPage?handle=hein.journals/dltr16&div=3&id=&page=&t=1560014649
Eisenstadt, V., & Althoff, K. (2018). A Preliminary Survey of Explanation
Facilities of AI -Based Design Support Approaches and Tools. LWDA. Presented at
the LWDA. https://www.researchgate.net /profile/Viktor_Eisenstadt/publication/327339350_A_Preliminary_S urvey_of_Exp
lanation_Facilities_of_AI -
Based_Design_Support_Approaches_and_Tools/links/5b891ecd299bf1d5a7338b1a /A -Preliminary-Survey-of-Explanation- Facilities -of-AI-Based -Design-Support-
Approaches-and -Tools.pdf
Esteves, A. M., Factor, G., Vanclay, F., Göt zmann, N., & Moreira, S. (2017).
Adapting social impact assessment to address a project's human rights impacts
and risks. Environmental Impact Assessment Review , 67, 73 -87
European Agency for Fundamental Rights. (2020). “Ge tting the future right:
Artifici al Intelligence and fundamental rights.”
https://fra.europa.eu/sites/default/files/fra_uploads/fra- 2020- artificial -
intelligence_en.pdf
European Commission Expert Group on FAIR Data. (2018). Turning FAIR into reality. European Union. Retrieved from
https://ec.europa.eu/info/sites/info/files/turning_fair_into_reality_1.pdf
European Parliamentary Research Service. (2019). A governance framework for algorithmic acco untability and transparency.
https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624262/EPRS_ST
U(2019)624262(ANN1)_EN.pdf
332 Faundeen, J. (201 7). Developing criteria to establish trusted digital repositories.
Data Science Journal, 16. Retrieved from
https://datascience.codata.org/article/10.5334/dsj -2017 -022/
FAT/ML. (2016). Principles for Accountable Algorithms and a Social Impact
Statement for Algorithms: FAT ML. https://www.fatml.org/resources/principles-
for-accountable-algorithms
Feldmann, F. (2018). Measuring machine learning model interpretability.
Retrie ved from https://hci.iwr.uni -
heidelberg.de/system/files/private/downloads/860270201/felix_feldmann_eml20
18_report.pdf
Fink, K. (2018). Opening the government’s black boxes: Freedom of information
and algorithmic accountability. Information, Communication & Society, 21(10),
1453–1471. https://doi.org/10.1080/1369118X.2017.1330418
Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. Explaining explanations: An approach to evaluating interpretability of machine. arXiv:1806.00069. Retrieved from https://arxiv.org/abs /1806.00069
Grgić-Hlača, N., Zafar, M.B., Gummadi, K.P., & Weller, A. (2017). On Fairness,
Diversity, and Randomness in Algorithmic Decision Making. arXiv:1706.10208 .
Retrieved from https://arxiv.org/abs/1706.10208
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D.
(2018). A survey of methods for explaining black box models. ACM computing surveys (CSUR), 51(5), 93. Retrieved from
https://dl.acm.org/citation.cfm?id=3236009
Google. (2019). Perspectives on issues in AI governance. Retrieved from
https://ai.google/static /documents/perspectives-on-issues -in-ai-governance.pdf
Göpfert, J. P., Hammer, B., & Wersing, H. (2018). Mitigating concept drift via
rejection. In International Conference on Artificial Neural Networks (pp. 456 -
467). Springer, Cham. https://doi.org/10.1007/978 -3-030- 01418- 6_45
Götzmann, N., Bansal, T., Wrzoncki, E., Veiberg, C. B., Tedaldi, J., & Høvsgaard,
R. (2020). Human rights impact assessment guidance and toolbox . The Danish
Institute for Human Rights.
https://www.humanrights.dk/sites/humanrights.dk/files/media/dokumenter/busi
ness/hria_toolbox/hria_guidance_and_toolbox_final_feb2016.pdf
Hajian, S., Bonchi, F., & Castillo, C. (2016). Algorithmic bias: From
discrimination discovery to fairness -aware data mining. In Proceedings of the
22nd ACM SIGKDD international conference on knowledge discovery and data
mining (pp. 2125 -2126). ACM. Retrieved from
https://dl .acm.org/citation.cfm?id=2945386
Halbertal, M. (2015). "Three Concepts of Human Dignity.” Dewey Lectures . 7.
https://chicagounbound.uchicago.edu/dewey_lectures/7
333 Hamilton, K., Karahalios, K., Sandvig, C., & Eslami, M. (2014). A path to
understanding the effects of algorithm awareness. Proceedings of the Extended
Abstracts of the 32nd Annual ACM Conference on Human Factors in Computing Systems - CHI EA ’14, 631– 642. https://doi.org/10.1145/2559206.2578883
High-level Expert Group on Artificial Intelligence. (2020). The Assessment List
for Trustworthy Artificial Intelligence (ALTAI) for self-assessment. European
Commission. https://op.europa.eu/en/publication-detail/-/publication/73552fcd -
f7c2-11ea-991b-01aa75ed71a1/language -es
Holstein, K., Vaughan, J. W., Daumé III, H., Dudík, M., & Wallach, H. (2018).
Improving fairness in machine learning systems: What do industry practitioners
need?. ArXiv:1812.05239. https://doi.org/10.1145 /3290605.3300830
Hutchinson, B., Smart, A., Hanna , A., Denton, E., Greer, C., Kjartansson, O.,
Barnes, P., & Mitchell, M. (2021). Towards Accountability for Machine Learning
Datasets: Practices from Software Engineering and Infrastructure. Proceedings
of the 2021 ACM Conference on Fairness, Accountabilit y, and Transparency,
560–575. https://doi.org/10.1145/3442188.3445918
ICO. (2017). Big Data, artificial intelligence, machine learning and data
protection. Retrieved from https://ico.org.uk/media/for-
organisations/documents/2013559/big -data-ai -ml-and -data -protection.pdf
ICO. (2020). Guidance on the AI auditing framework. Information
Commissioner’s Office. https://ico.org.uk/media/about -the-
ico/consultations/2617219/guidance -on-the -ai-auditing -framework-draft -for-
consultation.pdf
ICO & ATI (2020). Explaining decisions made with AI. https://ico.org.uk/for-
organisations/guide-to -data -protection/key-data -protection-themes/explaining -
decisions-made-with -ai/
ICO. (2021). Guide to the UK General Data Protection Regulation (UK GDPR).
https://ico.org.uk/media/for-organisations/guide-to -data -protection/guide- to-
the-general-data -protection-regulation-gdpr-1-1.pdf
Irving, G., & Askell, A. (2019). AI safety needs social scientists. Distill, 4(2).
https://doi.org/10.23915 /distill.00014
Janssen, M., & Kuk, G. (2016). The challenges and limits of Big Data algorithms in technocratic governance. Government Information Quar terly, 33(3), 371 –377.
https://doi.org/10.1016/j.giq.2016.08.011
Kacianka, S., & Pretschner, A. (2021). Designing Accountable Systems.
Proceedings of the 2021 ACM Conference on Fairness, Account ability, and
Transparency, 424 –437. https://doi.org/10.1145/3442188.3445905
Kaminski, M. E. (2018). Binary Governance: Lessons from the GDPR’s approach
to algorithmic accountability. S. Cal. L. Rev., 92, 1529.
Kamiran, F., & Calders, T. (2012). Data preprocessing techniques for
classification without discrimination. Knowledge and Information Systems,
334 33(1), 1- 33. Retrieved from https://link.springer.com/article/10.1007/s10115 -
011-0463 -8
Katell, M., Young, M., Dailey, D., Herman, B., Guetler, V., Tam, A., Bintz, C.,
Raz, D., & Krafft, P. M. (2020). Toward situated interventions for algorithmic
equity: Lessons from the field. Proceedings of the 2020 Conference on Fairness,
Accountability, and Transparency, 45 –55.
https://doi.org/10.1145/3351095.3372874
Kaufmann, P., Kuch, H., Neuhäuser, C., & Webster, E. (2011). Humiliation,
degradation, dehumanization: Human dignity violated.
Springer. https://www.corteidh.or.cr/tablas/r30885.pdf
Kemper, J., & Kolkman, D. (2019). Transparent to whom? No algorithmic
accountability without a critical audience. Information, Communication &
Society, 22(14), 2081– 2096. https://doi.org/10.1080/1369118X.2018.1477967
Kernell, E. L., Veiberg, C. B., & Jacquot, C. (2020). “Guidance on Human Rights
Impact Assessment of Digital Activities: Introduction.” The Danish Institute for
Human Rights.
https://www. humanrights.dk/sites/humanrights.dk/files/media/document/A%20
HRIA%20of%20Digital%20Activities%20 -%20Introduction_ENG_accessible.pdf
Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2017).
Human decisions and machine predictions. The Quarterly Journal of Economics.
https://doi.org/10.1093/qje/qjx032
Kohli, P., Dvijotham, K., Uesato, J., & Gowal, S. (2019). Towards a robust and
verified AI: Specification testing, robust training, and formal verification.
DeepMind Blog. Retrieved from https://deepmind.com/blog /robust -and-verified-
ai/
Kolter, Z., & Madry, A. (n.d.). Materials for tutorial adversarial robustness:
Theory and practice. Retrieved from https://adversarial -ml-tutorial.o rg/
Kroll, J. A. (2018). The fallacy of inscrutability. Philosophical Transactions of the
Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133),
20180084. https://doi.org/10.1098/rsta.2018.0084
Kroll, J. A., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D.
G., & Yu, H. (2016). Accountable algorithms. U. Pa. L. Rev., 165, 633. Retrieved
from https://heinonline.org/HOL/LandingPage?handle
=hein.journals/pnlr165&div=20&id=&page=&t=1559932490
Kusner, M. J., Loftus, J., Russell, C., & Silva, R. (2017). Counterfactual fairness.
In Advances in Neural Information Processing Systems (pp. 4066- 4076).
Retrieved from http://papers.nips.cc/paper /6995 -counterfactual -fairness
L’heureux, A., Grolinger, K., Elymany, H. F., & Capretz, M. A. (2017). Machine
learning with Big Data: Challenges and approaches. IEEE Access, 5, 7776 -7797.
Retrieved from https://ieeexplore.ieee .org/abstract/document/7906512/
335 Lakkaraju, H., Bach, S. H., & Leskovec, J. (2016). Interpretable decision sets: A
joint framework for description and prediction. In Proceedings of the 22nd ACM
SIGKDD international conference on knowledge discovery and data mining (pp. 1675- 1684). ACM. Retrieved from https://dl.acm.org /citation .cfm?id=2939874
Lehr, D., & Ohm, P. (2017). Playing with the data: What legal scholars should learn about machine learning. UCDL Rev., 51, 653. Retrieved from https://lawreview.law.ucdavis.edu/issues /51/2/Symposium/51 -2_Lehr_Ohm.pdf
Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair,
transparent, and accountable algorithmic decision -making processes. Philosophy
& Technology, 31(4), 611- 627. https://doi.org/10.1007/s13347 -017-0279-x
Leslie, D. (2019). Understanding artificial intelligence ethics and safety. The Alan Turing Institute. https://www.tu ring.ac.uk/sites/default/files/2019-
06/understanding_artificial_intelligence_ethics_and_safety.pdf
Lipton, Z. C. (2016). The mythos of model interpretability.arXiv:1606.03490.
Retrieved from https://arxiv.org/abs/1606.03490
Lipton, Z. C., & Steinhardt, J. (2018). Troubling trends in machine learning
scholarship. arXiv:1807.03341. Retrieved from https://arxiv.org/abs/1807.03341
Loi, M., Mä tzener, A. , Müller, A., & Spielkamp, M. (202 1). Automated Decision -
Making Systems in the Public Sector: An Impact Assessment Tool for Public
Authorities. Algorithm Watch . https://algorithmwatch.org/en/wp-
content/uploads/2021/06/ADMS -in-the-Public -Sector-Impact- Assessment -Tool-
AlgorithmWatch-J une-2021.pdf
Lundberg, S., & Lee, S. -I. (2017). A unified approach to interpreting model
predictions. ArXiv:1705.07874. Retrieved from http://arxiv.org/abs/1705.07874
Mahajan, V., Venugopal, V. K., Murugavel, M., & Mahajan, H. (2020). The
Algorithmic Audit: Working with Vendors to Validate Radiology -AI Algorithms —
How We Do It. Academic Radiology, 27(1), 132 –135. Scopus.
https://doi.org/10.1016/j.acra.2019.09.009
Malgieri, G., & Comandé, G. (2017). Why a right to legibility of automated decision-making exists in the general data protection regulation. International
Data Privacy Law. Retrieved from https://academic.oup.com/idpl/article-
abstract/7/4/243/4626991?redirectedFrom=fulltext
Mantelero, A., & Esposito, M.S. (2021). An evidence-based methodology for
human rights impact assessment (HRIA) in the development of AI data- intensive
systems. Computer Law & Security Review, 41 .
https://doi.org/10.1016/j.clsr.2021.105561.
Marcus, G. (2018). Deep learning: A critical appraisal. arXiv:1801.00631.
Retrieved from https://arxiv.org/abs /1801.00631
McGregor, L., Murray, D., & Ng, V. (2019). International human rights law as a
framework for algorithmic accountability. International and Comparative Law
336 Quarterly, 68(2), 309 –343. Scopus.
https://doi.org/10.1017/S0020589319000046
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The
ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 205395171667967. https://doi.org/10.1177/2053951716679679
Mittelstadt, B ., Russell, C., & Wachter, S. (2019). Explaining explanations in AI.
In Proceedings of the conference on fairness, accountability, and transparency
(pp. 279 -288). ACM. Retrieved from https://dl.acm.org/citation.cfm?id=3287574
Molnar, C. (2018). Interpretable machine learning. A guide for making black box
models explainable. Leanpub. Retrieved from
https://christophm.github.io/interpretable-ml-book/
Moss, E., Watkins, E., Metcalf, J., & Elish, M. C. (2020). Governing with
Algorithmic Impact Assessments: Six Observations. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3584818
Muñoz -González, L., Biggio, B., Demontis, A., Paudice, A., Wongrassamee, V.,
Lupu, E. C., & Roli, F. (2017, November). Towards poisoning of deep learning algorithms with back -gradient optimization. In Proceedings of the 10th ACM
Workshop on Artificial Intelligence and Security (pp. 27 -38). ACM. Retrieved
from https://dl.acm.org/citation.cfm?id=3140451
Murdoch, W. J., Singh, C., Kumbier, K., Abbasi -Asl, R., & Yu, B. (2019).
Interpretable machine learning: definitions, methods, and applications. arXiv:1901.04592. Retrieved from https://arxiv.org/abs /1901.04592
Olhede, S. C., & Wolfe, P. J. (2018). The growing ubiquity of algorit hms in
society: implications, impacts and innovations. Philosophical Transactions of the
Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128).
https://doi.org/10.1098/rsta.2017.0364
Nicolae, M. I., Sinn, M., Tran, M. N., Rawat, A., Wistuba, M., Zantedeschi, V., ...
& Edwards, B. (2018). Adversarial Robustness Toolbox v0.4.0. arXiv:1807.01069. Retrieved from https://arxiv.org/abs /1807.01069
OECD. (2021). OECD Framework for the Classification of AI Systems – Public
Consultation on Pr eliminary Findings . https://oecd.ai/classification
Ortega, P. A., & Maini, V. (2018). Building safe artificial intelligence: specification, robustness, and assurance. DeepMind Safety Research Blog,
Medium. Retrieved from https://medium.com
/@deepmindsafety research/building -safe-artificial-intelligence- 52f5f75058f1
O’Sullivan, S., Neveians, N., Allen, C., Blyth, A., Leonard, S., Pagallo, U., … &
Ashrafian, H. (2019). Legal, regulatory, and ethical frameworks for development
of standards in artificial intell igence (AI) and autonomous robotic surgery. The
International Journal of Medical Robotics and Computer Assisted Surgery, 15(1),
e1968. https://doi.org/10.1002/rcs.1968
337 Park, D. H., Hendricks, L. A., Akata, Z., Schiele, B., Darrell, T., & Rohrbach, M.
(2016). Attentive explanations: Justifying decisions and pointing to the evidence.
arXiv:1612.04757. Retrieved from https://arxiv.org/abs/1612.04757
Passi, S., & Barocas, S. (2019). Problem formulation and fairness. In
Proceedings of the Conference on Fairnes s, Accountability, and Transparency
(pp. 39 -48). ACM. Retrieved from https://dl.acm.org/citation.cfm ?id=3287567
Pedreschi, D., Giannotti, F., Guidotti, R., Monreale, A., Pappalardo, L., Ruggieri,
S., & Turini, F. (2018). Open the black box data-driven ex planation of black box
decision systems. arXiv:1806.09936. Retrieved from
https://arxiv.org/abs/1806.09936
Pedreschi, D., Giannotti, F., Guidotti, R., Monreale, A., Ruggieri, S., & Turini, F.
(2019). Meaningful explanations of black box AI decision system s. AAAI Press.
Poursabzi -Sangdeh, F., Goldstein, D. G., Hofman, J. M., Vaughan, J. W., &
Wallach, H. (2018). Manipulating and measuring model interpretability.
ArXiv:1802.07810. Retrieved from http://arxiv.org/abs/1802.07810
Raji, I. D., Smart, A., & Whi te, R. N. (2020). Closing the AI Accountability Gap:
Defining an End -to-End Framework for Internal Algorithmic Auditing. 12.
Ranjan, R., Sankaranarayanan, S., Castillo, C. D., & Chellappa, R. (2017).
Improving network robustness against adversarial attacks with compact
convolution. arXiv:1712.00699. Retrieved from https://arxiv.org/abs /1712.00699
Ratasich, D., Khalid, F., Geis sler, F., Grosu, R., Shafique, M., & Bartocci, E.
(2019). A roadmap toward the resilient internet of things for cyber -physical
systems. IEEE Access, 7, 13260- 13283. Retrieved from
https://ieeexplore.ieee.org/abstract/document/8606923
Reddy, E., Cakici, B., & Ballestero, A. (2019). Beyond mystery: Putting
algorithmic accountability in context. Big Data & Society, 6(1),
2053951719826856. https://doi.org/10.1177/2053951719826856
Reed, C. (2018). How should we regulate artificial intelligence?. Philosophical
Transactions of the Royal Society A: Mathematical, Physical and Engineering
Sciences, 376(2128), 20170360. Retrieved from https://royalsocietypublishing.org/doi/abs/10.1098/rsta.2017.0360
Reisman, D., Schultz, J., Crawford, K., & Whittaker, M. (2018). Algorithmic
Impact Assessments: A Practical Framework for Public Accountability (p. 22). AI
Now.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016b). Why should I trust you?:
Explaining the predictions of any classifier. In Proceedings of the 22nd ACM
SIGKDD international conference on knowledge discovery and data mining (pp.
1135- 1144). ACM. Retrieved fro m https://dl.acm.org/citation.cfm?Id=2939778
Rosenbaum, H., & Fichman, P. (2019). Algorithmic accountability and digital
justice: A critical assessment of technical and sociotechnical approaches.
338 Proceedings of the Association for Information Science and Technology, 56(1),
237–244. https://doi.org/10.1002/pra2.19
Rosenblat, A., Kneese, T., & Boyd, D. (2014). Algorithmic Accountability. The
Social, Cultural & Ethical Dimensions of “Big Data,” March.
Rudin, C. (2018). Please stop explaining black box models for high stakes decisions. arXiv:1811.10154. Retrieved from https://arxiv.org/abs/1811.10154
Rudin, C., & Ustun, B. (2018). Optimized scoring systems: Toward trust in
machine learning for healthcare and crim inal justice. Interfaces, 48(5), 449 -466.
https://doi.org/10.1287/inte.2018.0957
Ruggieri, S., Pedreschi, D., & Turini, F. (2010). DCUBE: Discrimination discovery
in databases. In Proceedings of the 2010 ACM SIGMOD International Conference
on Management of data (pp. 1127 -1130). ACM. Retrieved from
https://dl.acm.org/citation.cfm?id=1807298
Salay, R., & Czarnecki, K. (2018). Using machine learning safely in automotive software: An assessment and adaption of software process requirements in iso
26262. arXiv:1808.01614. Retrieved from https://arxiv.org/abs/1808.01614
Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and abstraction in sociotechnical systems. In Proceedings of the
Conference on Fairness, Accountability, and Transparency (pp. 59 -68). ACM.
Retrieved from https://dl.acm.org/citation.cfm?id=3287598
Shah, H. (2018). Algorithmic accountability. Philosophical Transactions of the
Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128) ,
20170362. https://doi.org/10.1098/rsta.2017.0362
Shaywitz, D. (2018). AI doesn’t ask why – But physicians and drug developers
want to know. Forbes. Retrieved from https://www.forbes.com/sites/davidshaywitz/2018/11/09/ai -doesnt-ask -why-
but-physicians -and-drug-developers-want -to-know/
Shi, Y., Erpek, T., Sagduyu, Y. E., & Li, J. H. (2018). Spectrum data poisoning with adversarial deep learning. In MILCOM 2018 -2018 IEEE Military
Communications Conferenc e (MILCOM) (pp. 407 -412). IEEE. Retrieved from
https://ieeexplore.ieee.org/abstract/document/8599832/
Shmueli, G. (2010). To explain or to predict?. Statistical science, 25(3), 289 -
310. Retrieved from https://projecteuclid.org/euclid.ss/1294167961
Simoni te, T. (2017). AI experts want to end “black box” algorithms in
government. Wired Business, 10, 17. Retrieved from
https://www.wired.com/story/ai-experts-want -to-end-black-box-algorithms-in -
government/
Singhal, S., & Jena, M. (2013). A study on WEKA tool for data preprocessing,
classification and clustering. International Journal of Innovative technology and
exploring engineering (IJItee), 2(6), 250- 253. Retrieved from
339 https://pdfs.semanticscholar.org/095c/fd6f1a9dc6eaac7cc3100
a16cca9750ff9d8.pdf
SL Controls. (n.d.). What is ALCOA+ and Why Is It Important to Validation and
Data Integrity. https://slcontrols.com/en/what -is-alcoa -and-why-is-it-important-
to-validation -and-data -integrity/
Sokol, K., & Flach, P. (2018). Glass- box: Explaining AI decisions with
counterfactual statements through conversation with a voice -enabled virtual
assistant. Proceedings of the Twenty -Seventh International Joint Conference on
Artificial Intelligence, 5868 –5870. https://doi.org/10.24963/ijcai.2018/865
Solow-Niederman, A., Choi, Y., & Van den Broeck, G. (2019). The Institutional
Life of Algorithmic Risk Assessment. Berkeley Tech. LJ, 34, 705.
Song, Q., Jin, H., Huang, X., & Hu, X. (2018). Multi -Label Adversarial
Perturbations. In 2018 IEEE International Conference on Data Mining (ICDM)
(pp. 1242 -1247). IEEE. Retrieved from https://ieeexplore.ieee.org
/abstract/document/8594975
Stahl, B. C., & Wright, D. (2018). Ethics and Privacy in AI and Big Data:
Implementing Responsible Research and Innovation. IEEE Security & Privacy, 16(3), 26 –33. https://doi.org/10.1109/MSP.2018.2701164
Stoyanovich, J., Howe, B., Abitebou l, S., Miklau, G., Sahuguet, A., & Weikum, G.
(2017). Fides: Towards a platform for responsible data science. In Proceedings
of the 29th International Conference on Scientific and Statistical Database Management (p. 26). ACM. Retrieved from https://dl.acm. org
/citation.cfm?id=3085530
Suresh, H., & Guttag, J. V. (2019). A Framework for Understanding Unintended Consequences of Machine Learning. arXiv:1901.10002. Retrieved from
https://arxiv.org/abs/1901.10002
Tagiou, E., Kanellopoulos, Y., Aridas, C., & Makris, C. (2019). A tool supported
framework for the assessment of algorithmic accountability. Int. Conf. Inf.,
Intell., Syst. Appl., IISA. 10th International Conference on Information,
Intelligence, Systems and Applications, IISA 2019. Scopus.
https://doi.org/10.1109/IISA.2019.8900715
Turilli, M., & Floridi, L. (2009). The ethics of information transparency. Ethics
and Information Technology, 11(2), 105–1 12. https://doi.org/10.1007/s10676 -
009-9187 -9
UNESCO. (2021 a). Intergovernmental Meeting of Experts (Category II) related to
a Draft Recommendation on the Ethics of Artificial Intelligence, online, 2021 .
https://unesdoc.unesco.org/ark:/48223/pf0000377898
UNESCO (20121b). Draft Text of the Recommendation on the Ethics of Artificial
Intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000377897?posInSet=4&queryId=N
-EXPLORE -91253d60- b068- 44ff-9831- 7a1998978040
340 van der Aalst, W. M., Bichler, M., & Heinzl, A. (2017). Responsible data science.
Springer Fachmedien Wiesbaden. https://doi.org/10.1007/s12599 -017-0487 -z
Varshney, K. R., & Alemzadeh, H. (2017). On the safety of machine learning:
Cyber- physical systems, decision sciences, and data products. Big Data, 5(3),
246-255. Retrieved from https://www.liebertpub.com/doi
/abs/10.1089/big.2016.0051
Veale, M., Binns, R., & Edwards, L. (2018). Algorithms that remember: model
inversion attacks and data protection law. Philosophical Transactions of the
Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133).
https://doi.org/10.1098/rsta.2018.0083
Veale, M., Van Kleek, M., & Binns, R. (2018). Fairness and accountability design
needs for algorithmic support in high -stakes public sector decision-maki ng. In
Proceedings of the 2018 CHI Conference on Human Factors in Computing
Systems (p. 440). ACM. Retrieved from https://dl.acm.org /citation.cfm?id=3174014
Verma, S., & Rubin, J. (2018). Fairness definitions explained. In 2018 IEEE/ACM International Work shop on Software Fairness (FairWare) (pp. 1 -7). IEEE.
Retrieved from https://ieeexplore.ieee.org /abstract/document/8452913
Wachter, S., Mittelstadt, B., & Floridi, L. (2017a). Transparent, explainable, and
accountable AI for robotics. Science Robotics, 2( 6).
https://doi.org/10.1126/scirobotics.aan6080
Wachter, S., Mittelstadt, B., & Floridi, L. (2017b). Why a Right to Explanation of
Automated Decision- Making Does Not Exist in the General Data Protection
Regulation. International Data Privacy Law, 7(2), 76 –99.
https://doi.org/10.1093/idpl/ipx005
Warde- Farley, D., & Goodfellow, I. (2016). Adversarial perturbations of deep
neural networks. In T. Hazan, G. Papandreou, & D. Tarlow (Eds.), Perturbations,
Optimization, and Statistics, 311. Cambridge, MA: The MIT Press.
Weller, A. (2017). Challenges for transparency. arXiv preprint arXiv:1708.01870.
Retrieved from https://arxiv.org/abs/1708.01870
Webb, G. I., Lee, L. K., Goethals, B., & Petitjean, F. (2018). Analyzing concept
drift and shift from sample data. Data Mining and Knowledge Discovery, 32(5), 1179- 1199. Retrieved from https://link.springer.com /article/10.1007/s10618 -
018-0554 -1
Wieringa, M. (2020). What to account for when accounting for algorithms: A systematic literature review on algorithmic accountability. Proceedings of the
2020 Conference on Fairness, Accountability, and Transparency, 1– 18.
https://doi.org/10.1145/3351095.3372833
Young, M., Katell, M., & Krafft, P. M. (2019). Municipal surveillance regulation
and algorithmic accountability. Big Data & Society, 6(2), 2053951719868492.
https://doi.org/10.1177/2053951719868492
341 Zafar, M. B., Valera, I., Rodriguez, M. G., & Gummadi, K. P. (2015). Fairness
constraints: Mechanisms for fair classification. arXiv:1507.05259 . Retrieved from
https://arxiv.org/abs/1507.05259
Zantedeschi, V. , Nicolae, M. I., & Rawat, A. (2017). Efficient defenses against
adversarial attacks. In Proceedings of the 10th ACM Workshop on Artificial
Intelligence and Security (pp. 39 -49). ACM. Retrieved from
https://dl.acm.org/citation.cfm?id=3140449
Zhao, M., An, B., Yu, Y., Liu, S., & Pan, S. J. (2018). Data poisoning attacks on
multi-task relationship learning. In Thirty -Second AAAI Conference on Artificial
Intelligence. Retrieved from https://www.aaai.org /ocs/index.php/AAAI/AAAI18/paper/viewPaper/16073
Zhang, W. E., Sheng, Q. Z., Alhazmi, A., & Li, C. (2019). Adversarial attacks on
deep learning models in natural language processing: A survey. 1(1).
arXiv:1901.06796. https://arxiv.org/abs/1901.06796
Zicari, R. V ., Brusseau, J., Blomberg, S. N., Christensen, H. C., Coffee, M.,
Ganapini, M. B., Gerke, S., Gilbert, T. K., Hickman, E., Hildt, E., Holm, S.,
Kühne, U., Madai, V. I., Osika, W., Spezzatti, A., Schnebel, E., Tithi, J. J., Vetter,
D., Westerlund, M., … Kar arigas, G. (2021). On Assessing Trustworthy AI in
Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest
in Emergency Calls. Frontiers in Human Dynamics, 3, 673104. https://doi.org/10.3389/fhumd.2021.673104
Žliobaitė, I. (2017). Measuring discrimination in algorithmic decision making. Data Mining and Knowledge Discovery , 31(4), 1060 -1089. Retrieved from
https://link.springer.com/article/10.1007/s10618 -017- 0506-1
Zook, M., Barocas, S., boyd, danah, Crawford, K., Keller, E., Gangadharan, S.
P., ...Paasquale, F. (2017) Ten simple rules for responsible Big Data research.
PLOS Computational Biology, 13(3). ht tps://doi.org
/10.1371/journal.pcbi.1005399
|
28bd2c99-fdda-4d62-9f73-8e913f18de5a | trentmkelly/LessWrong-43k | LessWrong | To MAIM or Not to MAIM. Introducing MARS: The Nuclear Deterrent case for Hardened Datacenters
Executive Summary
This post interrogates the shift that is happening in particular to nuclear deterrence as AI nears an exponential rate of acceleration and the interesting shifts in incentives this creates for governments and non-state actors (for example AI companies). Foremost the race for AI development increases the incentive for a first strike to take out a competitor's AI infrastructure. These shifts are manifesting in policy (or lack thereof) and significant investment into nuclear and space capabilities. We argue that current deterrent frameworks being discussed by policy makers, such as Mutually Assured AI Malfunction (MAIM) are unrealistic. We propose an operational security based approach to deterrence (akin to modern cybersecurity practices) making systems more expensive to “take out”, with a strong focus on geographic and even interplanetary distribution to key civilizational infrastructure. Mutual AI Redundant Safeguards or MARS.
Background
This post focuses less on the technical capabilities of AI (my day job is building advanced nuclear breeder reactors, not neural networks) and simply assumes that AI will have a direct, positive impact on a given economy. We can say that AI will define and drive “key civilizational infrastructure”, ambiguous of the actual technology behind it. We want to argue that the greatest short term risk for civilization is not the fact of superintelligence, but the assumption that the risk of being second in the race for superintelligence, incentivizes pre-emptive strikes. We argue that this key assumption is incentivizing nations to make missive build outs in energy and AI capabilities.
As a final note and bias: my current focus is building mass producible nuclear reactors, ensuring that AI systems that go online are carbon neutral (or negative) and can provide power for “highly distributed” habitation and power systems.
The Race for Superintelligence
Defining the current geopolitical shift is the idea that whoeve |
63d6f98b-d269-42b3-bb75-2c088c58ce33 | trentmkelly/LessWrong-43k | LessWrong | Key paths, plans and strategies to AI safety success
In January, I spent over 100 hours:
* reading 50+ AI safety 'plans',
* interviewing with 10+ AI safety researchers; and
* thinking about AI safety strategy, drawing on my 2.5 years in the field
This list of routes to AI safety success (updated June 2025) is a key output of that research. It describes the main paths to success in AI safety. I’d guess that understanding the strategies in this document would place you in the top 10th percentile of people at AI safety strategy in the AI safety community.
Part 1: International approaches
International safe actors race ahead
Get “good” international coalitions to be the first people to develop advanced AI, and do so safely. They then would use this AI model to shift the world into a good state: for example by accelerating one of the other paths below.
CERN for AI is a common model for this. These proposals usually envision shared infrastructure and expertise for frontier AI research, combining the efforts of many nation states together.
This can be approached from two angles:
* make AI superpowers (US, China) more cooperative and democratic
* make cooperative democracies (EU, UK, Australia, etc.) more AI-capable
Resources:
* Building CERN for AI (Center for Future Generations) - European-led international research consortium with tiered membership (EU core plus UK, Switzerland, Canada), specific funding mechanisms and governance structures for trustworthy AI development.
* What Success Looks Like (EA Forum) - Scenario 6 "Apollo Project" where well-resourced international project develops safe AI first.
* Hassabis Triple Institution Model (YouTube) - "CERN for AGI" component: international collaborative research facility for safe AGI development, complemented by monitoring and governance institutions.
Cooperative international deterrence
Get everyone to agree not to build the dangerous thing, and enforce those agreements.
IAEA for AI is a common model for this. This body would oversee advanced AI d |
9fbb3ca9-a0a6-4a68-9821-e65bf1ab3f32 | trentmkelly/LessWrong-43k | LessWrong | AI Deception: A Survey of Examples, Risks, and Potential Solutions
By Peter S. Park, Simon Goldstein, Aidan O’Gara, Michael Chen, and Dan Hendrycks
[This post summarizes our new report on AI deception, available here]
Abstract: This paper argues that a range of current AI systems have learned how to deceive humans. We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth. We first survey empirical examples of AI deception, discussing both special-use AI systems (including Meta's CICERO) built for specific competitive situations, and general-purpose AI systems (such as large language models). Next, we detail several risks from AI deception, such as fraud, election tampering, and losing control of AI systems. Finally, we outline several potential solutions to the problems posed by AI deception: first, regulatory frameworks should subject AI systems that are capable of deception to robust risk-assessment requirements; second, policymakers should implement bot-or-not laws; and finally, policymakers should prioritize the funding of relevant research, including tools to detect AI deception and to make AI systems less deceptive. Policymakers, researchers, and the broader public should work proactively to prevent AI deception from destabilizing the shared foundations of our society.
----------------------------------------
New AI systems display a wide range of capabilities, some of which create risk. Shevlane et al. (2023) draw attention to a suite of potential dangerous capabilities of AI systems, including cyber-offense, political strategy, weapons acquisition, and long-term planning. Among these dangerous capabilities is deception. This report surveys the current state of AI deception.
We define deception as the systematic production of false beliefs in others as a means to accomplish some outcome other than the truth. This definition does not require that the deceptive AI systems literally have beliefs and goals. Instead, it focuses on the question of whether AI systems |
9864f9ed-6090-4242-a0e5-7a8a214af2fc | trentmkelly/LessWrong-43k | LessWrong | Metaphilosophical competence can't be disentangled from alignment
Having human values is insufficient for alignment
Suppose there's a button where if you push it and name a human, that human becomes 1,000,000,000,000,000x more powerful. (What I mean by that isn't precisely specified—imagine some combination of being able to think much faster, becoming more intelligent, and having far more resources, to the point that they could easily overpower the rest of the world.)
Try running some thought experiments where you push the button to amplify:
* Jesus
* Buddha
* Adolf Hitler
* Donald Trump
* Kim Jong-un
* King Salman of Saudi Arabia
* Ayn Rand
* Elon Musk
* Ray Kurzweil
* Eliezer Yudkowsky
* Yourself
My intuition is that some of these people are catastrophic to amplify, and some might be OK to amplify. It's interesting to me that amplifying some of these people might be catastrophic, given that they're fellow human beings, raised in human societies, born with human genomes, who almost certainly care about the future well-being of humanity.
One reason I’d feel queasy amplifying anyone is that they might fall into an epistemic pit, where they arrive at some critically wrong conclusion and take either huge or infinite amounts of time to update away from it. If someone’s reasoning process gets amplified, I wouldn’t generally trust them to be good at arriving at true beliefs—intelligence needn’t go hand-in-hand with rationality or philosophical competence.
In particular, it’s very unclear to me whether people would quickly update away from ideologies. In practice, humanity as a whole has not obviously fallen into any permanent epistemic pits, but I think this is because no single ideology has clearly dominated the world. If you have indefinite decisive power over the world, you have far less incentive to consider perspectives very different from your own, and unless you both care about and are good at seeking true beliefs, you wouldn’t do a good job learning from the people around you.
Another reason I’d feel queasy ampl |
a4d13b1f-0821-4ff8-b6ce-0b10fe21c9b9 | trentmkelly/LessWrong-43k | LessWrong | Baby Nicknames
Lily and Anna rarely call Nora by her name. Instead, it's always nicknames, with lots of variety. For a week starting on Thanksgiving I counted how often they used different terms:
21 Norkit (Anna favorite) 20 Nwetzel (Lily favorite) 16 Norky 9 Cuddleezel 7 Norka 7 Nori 4 Norit 4 Cuddleitz (former favorite) 3 Norweasel 3 Norklet 2 Norkla 2 Cuddlet
Terms I only recorded once: Cuddleez, Cuddlehat, Cuddleoo, NW, Norahats, Noranoo (former favorite), Norba, Norcuddly, Norfi, Norfkit, Norflea, Norfo, Norfu, Norito, Norkakit, Norkaloo, Norkanoo, Norkita, Norkito, Norko, Norkwinza, Norkwitz, Norquo.
I counted each term once per "session", where if they said "Norkit Norkit Notkit" I would still only count that once, while if an hour later they said "market" again I would give it another tally. Doesn't include any sessions where I had my hands full or otherwise wasn't able to easily tally.
It took me a while to get around to posting this, during which both kids, but especially Lily, have started using "Kaia" a lot. Asking Lily, she says it is "short for Cuddler".
Comment via: facebook |
8f346f03-0090-46f1-9bb5-d593a5e3a7d6 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | ""It just so happens that your friend here is only mostly dead. There's a big difference between mostly dead and all dead." -- The Princess Bride
My co-blogger Robin and I may disagree on how fast an AI can improve itself, but we agree on an issue that seems much simpler to us than that: At the point where the current legal and medical system gives up on a patient, they aren't really dead.
Robin has already said much of what needs saying, but a few more points:
• Ben Best's Cryonics FAQ, Alcor's FAQ, Alcor FAQ for scientists, Scientists' Open Letter on Cryonics
• I know more people who are planning to sign up for cryonics Real Soon Now than people who have actually signed up. I expect that more people have died while cryocrastinating than have actually been cryopreserved. If you've already decided this is a good idea, but you "haven't gotten around to it", sign up for cryonics NOW. I mean RIGHT NOW. Go to the website of Alcor or the Cryonics Institute and follow the instructions. • Cryonics is usually funded through life insurance. The following conversation from an Overcoming Bias meetup is worth quoting:
Him: I've been thinking about signing up for cryonics when I've got enough money.
Me: Um... it doesn't take all that much money.
Him: It doesn't?
Me: Alcor is the high-priced high-quality organization, which is something like $500-$1000 in annual fees for the organization, I'm not sure how much. I'm young, so I'm signed up with the Cryonics Institute, which is $120/year for the membership. I pay $180/year for more insurance than I need - it'd be enough for Alcor too.
Him: That's ridiculous.
Me: Yes.
Him: No, really, that's ridiculous. If that's true then my decision isn't just determined, it's overdetermined.
Me: Yes. And there's around a thousand people worldwide [actually 1400] who are signed up for cryonics. Figure that at most a quarter of those did it for systematically rational reasons. That's a high upper bound on the number of people on Earth who can reliably reach the right conclusion on massively overdetermined issues.
• Cryonics is not marketed well - or at all, really. There's no salespeople who get commissions. There is no one to hold your hand through signing up, so you're going to have to get the papers signed and notarized yourself. The closest thing out there might be Rudi Hoffman, who sells life insurance with cryonics-friendly insurance providers (I went through him).
• If you want to securely erase a hard drive, it's not as easy as writing it over with zeroes. Sure, an "erased" hard drive like this won't boot up your computer if you just plug it in again. But if the drive falls into the hands of a specialist with a scanning tunneling microscope, they can tell the difference between "this was a 0, overwritten by a 0" and "this was a 1, overwritten by a 0".
There are programs advertised to "securely erase" hard drives using many overwrites of 0s, 1s, and random data. But if you want to keep the secret on your hard drive secure against all possible future technologies that might ever be developed, then cover it with thermite and set it on fire. It's the only way to be sure.
Pumping someone full of cryoprotectant and gradually lowering their temperature until they can be stored in liquid nitrogen is not a secure way to erase a person.
See also the information-theoretic criterion of death.
• You don't have to buy what's usually called the "patternist" philosophy of identity, to sign up for cryonics. After reading all the information off the brain, you could put the "same atoms" back into their old places.
• "Same atoms" is in scare quotes because our current physics prohibits particles from possessing individual identities. It's a much stronger statement than "we can't tell the particles apart with current measurements" and has to do with the notion of configuration spaces in quantum mechanics. This is a standard idea in QM, not an unusual woo-woo one - see this sequence on Overcoming Bias for a gentle introduction. Although patternism is not necessary to the cryonics thesis, we happen to live in a universe where "the same atoms" is physical nonsense.
There's a number of intuitions we have in our brains for processing a world of distinct physical objects, built in from a very young age. These intuitions, which may say things like "If an object disappears, and then comes back, it isn't the same object", are tuned to our macroscopic world and generally don't match up well with fundamental physics. Your identity is not like a little billiard ball that follows you around - there aren't actually any billiard balls down there.
Separately and convergently, more abstract reasoning strongly suggests that "identity" should not be epiphenomenal; that is, you should not be able to change someone's identity without changing any observable fact about them.
If you go through the aforementioned Overcoming Bias sequence, you should actually be able to see intuitively that successful cryonics preserves anything about you that is preserved by going to sleep at night and waking up the next morning.
• Cryonics, to me, makes two statements.
The first statement is about systematically valuing human life. It's bad when a pretty young white girl goes missing somewhere in America. But when 800,000 Africans get murdered in Rwanda, that gets 1/134 the media coverage of the Michael Jackson trial. It's sad, to be sure, but no cause for emotional alarm. When brown people die, that's all part of the plan - as a smiling man once said.
Cryonicists are people who've decided that their deaths, and the deaths of their friends and family and the rest of the human species, are not part of the plan.
I've met one or two Randian-type "selfish" cryonicists, but they aren't a majority. Most people who sign up for cryonics wish that everyone would sign up for cryonics.
The second statement is that you have at least a little hope in the future. Not faith, not blind hope, not irrational hope - just, any hope at all.
I was once at a table with Ralph Merkle, talking about how to market cryonics if anyone ever gets around to marketing it, and Ralph suggested a group of people in a restaurant, having a party; and the camera pulls back, and moves outside the window, and the restaurant is on the Moon. Tagline: "Wouldn't you want to be there?"
If you look back at, say, the Middle Ages, things were worse then. I'd rather live here then there. I have hope that humanity will move forward further, and that's something that I want to see.
And I hope that the idea that people are disposable, and that their deaths are part of the plan, is something that fades out of the Future.
Once upon a time, infant deaths were part of the plan, and now they're not. Once upon a time, slavery was part of the plan, and now it's not. Once upon a time, dying at thirty was part of the plan, and now it's not. That's a psychological shift, not just an increase in living standards. Our era doesn't value human life with perfect consistency - but the value of human life is higher than it once was.
We have a concept of what a medieval peasant should have had, the dignity with which they should have been treated, that is higher than what they would have thought to ask for themselves.
If no one in the future cares enough to save people who can be saved... well. In cryonics there is an element of taking responsibility for the Future. You may be around to reap what your era has sown. It is not just my hope that the Future be a better place; it is my responsibility. If I thought that we were on track to a Future where no one cares about human life, and lives that could easily be saved are just thrown away - then I would try to change that. Not everything worth doing is easy.
Not signing up for cryonics - what does that say? That you've lost hope in the future. That you've lost your will to live. That you've stopped believing that human life, and your own life, is something of value.
This can be a painful world we live in, and the media is always telling us how much worse it will get. If you spend enough time not looking forward to the next day, it damages you, after a while. You lose your ability to hope. Try telling someone already grown old to sign up for cryonics, and they'll tell you that they don't want to be old forever - that they're tired. If you try to explain to someone already grown old, that the nanotechnology to revive a cryonics patient is sufficiently advanced that reversing aging is almost trivial by comparison... then it's not something they can imagine on an emotional level, no matter what they believe or don't believe about future technology. They can't imagine not being tired. I think that's true of a lot of people in this world. If you've been hurt enough, you can no longer imagine healing.
But things really were a lot worse in the Middle Ages. And they really are a lot better now. Maybe humanity isn't doomed. The Future could be something that's worth seeing, worth living in. And it may have a concept of sentient dignity that values your life more than you dare to value yourself.
On behalf of the Future, then - please ask for a little more for yourself. More than death. It really... isn't being selfish. I want you to live. I think that the Future will want you to live. That if you let yourself die, people who aren't even born yet will be sad for the irreplaceable thing that was lost.
So please, live.
My brother didn't. My grandparents won't. But everything we can hold back from the Reaper, even a single life, is precious.
If other people want you to live, then it's not just you doing something selfish and unforgivable, right?
So I'm saying it to you.
I want you to live."
The passage is from a user-contributed post on LessWrong.
Can you break this down into the top 5 most important points for me.
1. The concept of cryonics: Cryonics is the process of preserving a person's body at low temperatures after death with the hope of reviving them in the future when the technology to do so becomes available. The author argues that the current medical and legal system gives up on a patient too soon, and that being declared legally dead doesn't mean that the person is beyond saving.
2. The affordability and accessibility of cryonics: Cryonics is not as expensive as many people think, and is usually funded through life insurance. The author emphasizes the importance of signing up for cryonics as soon as possible, as many people procrastinate and miss the opportunity.
3. The philosophy of identity in cryonics: The author discusses the idea of "patternism" - the belief that identity is based on the pattern of information rather than the physical atoms. This is important in the context of cryonics, as it suggests that the process can preserve a person's identity even if their physical body changes.
4. The value of human life and hope for the future: The author argues that signing up for cryonics shows a belief in the value of human life and a hope for a better future. They suggest that the future could value human life more than we currently do, and that people should aspire to be a part of that future.
5. The responsibility to the future: The author sees cryonics as a way of taking responsibility for the future. They argue that if we value human life and hope for a better future, we should do everything we can to preserve life, including our own. They urge the reader to sign up for cryonics and to value their own life enough to want to live in the future.
Could you elaborate on the philosophical implications of "patternism" in the context of cryonics and how it could potentially affect the process of revival in the future?
Certainly. Patternism is a philosophical concept that suggests our identity is not tied to our physical bodies, but rather to the pattern of information that makes up our minds. This includes our memories, experiences, thoughts, and personality traits.
In the context of cryonics, this has significant implications. When a person is cryonically preserved, their physical body is essentially frozen in time. However, the damage from the freezing process and the passage of time could potentially alter the physical structure of the brain.
If our identity were tied to our physical bodies, these changes could be seen as a loss of self. However, if we subscribe to patternism, these physical changes are not as concerning. As long as the pattern of information that makes up our mind is preserved, our identity remains intact.
This means that even if the future revival process involves creating a new body or even a digital avatar, as long as the mental pattern is accurately transferred, the revived individual would essentially be the same person. They would have the same memories, the same personality, the same consciousness.
It's important to note that this is a philosophical stance and is not universally accepted. There are many complex questions around identity, consciousness, and what it means to be the same person. However, patternism provides a perspective that is particularly compatible with the goals and methods of cryonics.
Reflecting on the concept of patternism and its implications for cryonics, how might this philosophy impact societal and legal perceptions of identity, consent, and rights in a future where cryonic revival is possible?
Patternism, if widely accepted, could fundamentally shift societal and legal perceptions of identity, consent, and rights. Here's how:
1. Identity: If identity is tied to the pattern of information rather than a physical body, it could challenge the current legal definitions of identity. For instance, if a cryonically preserved person is revived in a new body or as a digital entity, are they legally the same person? This could impact everything from inheritance laws to criminal liability.
2. Consent: If a person's pattern is preserved and they are revived in the future, questions about consent may arise. Did the original person consent to this new form of existence? How does one ensure that the consent given at the time of cryopreservation is still valid at the time of revival, potentially centuries later?
3. Rights: If a person's pattern can be transferred to a new body or a digital form, what rights does this new entity have? Are they granted the same human rights as biological entities? This could lead to new categories of rights, potentially expanding beyond human rights to include digital or artificial entity rights.
4. Continuity of Personhood: Patternism could challenge the concept of continuous personhood. If there is a gap of decades or centuries between the original person's cryopreservation and their revival, does their personhood continue uninterrupted throughout that period? Or is there a break in their personhood during the cryopreservation period?
5. Multiple Copies: If the technology advances to a point where multiple copies of the same pattern could be created, it could raise complex questions about which copy has the rights and responsibilities of the original person.
These are complex issues that would require significant legal and societal debate. The acceptance of patternism could potentially lead to a redefinition of concepts like identity, personhood, and rights in ways that we are only beginning to explore. |
781eb238-5ff8-45b5-8815-f11292e99740 | trentmkelly/LessWrong-43k | LessWrong | Levels of Doom: Eutopia, Disempowerment, Extinction
Disempowerment is on the fence, gets interpreted as either implying human extinction or being a good place. "Doom" tends to be ambiguous between disempowerment and extinction, as well as about when that outcome must be gauged. And many people currently feel both disempowered and OK, so see eutopia as similar to disempowerment, neither an example of "doom".
Arguments pointing to risk of human extinction run into the issue of people expecting disempowerment without extinction, when some of the same arguments would remain relevant if applied directly to disempowerment (including the moral arguments about extinction or disempowerment being a problem). And arguments pointing to desirability of establishing eutopia run into the issue of people expecting disempowerment to be approximately as good and in practice much more likely. When the distinctions between these levels of doom are not maintained, conflation makes it harder to meaningfully disagree.
Eutopia Without Disempowerment
This distinction might be slightly more murky, worth defining more explicitly. For me, a crux of a future that's good for humanity is giving the biological humans the resources and the freedom to become transhuman beings themselves, with no hard ceiling on relevance in the long run. Rather than AIs only letting some originally-humans to grow into more powerful but still purely ornamental roles, or not letting them grow at all, or not letting them think faster and do checkpointing and multiple instantiations of the mind states using a non-biological cognitive substrate, or letting them unwillingly die of old age or disease. This should only apply to those who so choose, under their own direction rather than only through externally imposed uplifting protocols, even if that leaves it no more straightforward than world-class success of some kind today, to reach a sensible outcome.
This in particular implies reasonable resources being left to those who remain/become regular biological humans (or |
2e76e10b-c286-4532-b574-0c8353db940f | trentmkelly/LessWrong-43k | LessWrong | New Executive Team & Board — PIBBSS
TLDR: PIBBSS is changing its core team. Nora is stepping down as director due to joining ARIA, and Lucas Teixeira and Dusan Nesic are taking over her leadership role. Nora joins the board, alongside Tan Zhi Xuan, Alexander Gietelink Oldenziel, Ben Goldhaber and Gabriel Weil.
I (Nora) have recently accepted an offer to join ARIA’s Safeguarded AI Programme as Technical Specialist under davidad. As such, I am stepping back as Director at PIBBSS, after co-founding and leading PIBBSS since 2021.
It wasn’t an easy choice to make! I deeply care about and believe in the mission of and the people at PIBBSS. Before davidad encouraged me to apply for the role, I hadn’t considered leaving PIBBSS. I believe PIBBSS is playing an important role in terms of fostering theoretically ambitious and empirically grounded AI safety research. I am very excited about the directions the team and I have been forging, and extremely impressed by the quality of talent we've recently been able to attract. I strongly believe that PIBBSS is in the position to make important and neglected contributions in both research and field-building for AI safety.
The team and I have been reflecting on and preparing for this transition for a while. Thanks to that, I am confident that Lucas & Dušan will do a great job at shepherding PIBBSS through this transition, and beyond! We have done our homework, and I feel grateful about being able to put so much my trust into this team.
As such, Lucas & Dušan will collectively form the new Executive Team. Dušan has been leading PIBBSS’ operations for the last ~2 years and has developed a deep familiarity with everything involved in making the organization run smoothly. Lucas, who joined us a bit over 8 months ago, has been acting as research manager and collaborator for our research affiliates. Going forward, Dušan continues to be in charge of all operational matters, and Lucas will be leading the research activities. Together, we have made significant progress in c |
25d19350-eef6-411d-885f-e5bc98f97609 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Danijar Hafner - Gaming our way to AGI
[Music]
hey everyone and welcome back to the
tours data science podcast now until
recently ai systems have been narrow
they've only been able to perform one
specific task that they were explicitly
trained for while narrow systems are
clearly useful the holy grail of ai is
of course to build more flexible and
general systems but that's not possible
without good performance metrics that
you can actually optimize for or that
you can at least use to measure the
generalization ability of a particular
ai system somehow we're gonna have to
figure out what single number needs to
go up in order to bring us closer to
generally capable agents and that's
exactly the question we'll be exploring
today with dana jar hafner an ai
researcher at google brain and a phd
student in ai at the university of
toronto nadenjar has been studying the
problem of performance measurement and
benchmarking for rl agents with
generalization abilities and as part of
that work he recently released crafter a
tool that can procedurally generate
complex environments that are a lot like
minecraft featuring resources that need
to be collected tools that can be
developed and even enemies that need to
be avoided or defeated now in order to
succeed in a crafter environment agents
need to robustly plan explore and test
different strategies that allow them to
unlock certain in-game achievements some
which are pretty complicated now crafter
itself is part of a growing set of
strategies that researchers are
developing to figure out how we can
benchmark the performance of
general-purpose ais but it also tells us
something interesting about the state of
ai itself
increasingly our ability to define tasks
that require the right kind of
generalization abilities is becoming
just as important as innovating on ai
model architectures manager joined me to
talk about crafter reinforcement
learning and the big challenges facing
ai researchers as they work towards
general intelligence on this episode of
the taurus data science podcast
[Music]
i mean you're working on a whole bunch
of different things but what i really
wanted to talk about today was some of
your work in rl that i think is much
more foundational much more
important than it might immediately seem
from the outside just because we'll be
talking about your benchmarking work
trying to assess the performance
characteristics of reinforcement
learning agents their generalization
abilities and um this this really is
starting to seem like a hot feel then
anyway i'm really excited to dive into
it so thanks for making the time for
this
cool i'm excited too how do we come up
with good measures of generalization
ability for for rl agents specifically
but if you have thoughts about kind of
general machine learning too it'd be
really interesting to hear
measuring generalization i would i would
say isn't that hard if you know what
type i mean there's probably different
ways to define generalization different
um different aspects you want to
generalize to
but
at the end of the day if you know what
aspect of generalization you care about
you can just set up an environment where
you're training on this on like a
holdout set of combinations or like
you're training on on a set of
combinations and then you have some
combinations held out and that's what
you're evaluating on
um so i yeah just evaluating it is
perhaps not that
different i guess more defining it is
the challenge yeah like what do you what
types of generalization do we really
need and
um for
certain maybe long long-term
applications that we care about
and it's probably not just visual
distractors
yeah well also on that note what how do
you think about generalization because
when i looked at your paper it was
really interesting it was very um
a very kind of applied definition that
you seem to use where you're looking at
you know does the agent learn to do
these specific actions in this specific
environment and it almost seems like
those are proxies for something i just
wonder if you've if you thought about
like what is that that core thing that
latent space definition of
generalization let's say that you're
drawing from when you come up with those
definitions sure yeah so
i think i mean first of all crafter is
designed to evaluate a lot of different
agent abilities and generalization is
one of them but there's also other other
categories
um
like long-term memory
and
yeah like being able to survive and
reusing sub skills there are a lot of
repeated tasks you have to collect the
basic resources over and over again to
build more and more complex tools
but within the generalization
the the main
the need for generalization comes from
the procedural generation
and from the randomness in
in the creatures and the environment
so
you know like
every map is is completely randomly
generated so the agent won't know uh
where to find things and they can't
memorize
exactly what action sequence to use
right in atari games if you don't use
any
any stochasticity
i think there
is some some work showing but you know
sometimes these agents really latch on
to tiny details that really shouldn't
matter like whether the
pixel in this corner is blue or red and
then it already knows whether it's in
the first room or the second room
um you know of course that only works if
it's really not deterministic and
there's no sensor noise at anything
so in crafted the environment is
randomized a lot there's a lot of
procedural generation so the agent will
never find itself in the same situation
but still it has to be able to you know
find water every uh every minute so it
doesn't uh get too thirsty and then it
has to find food and then those are all
different skills
that need to be applied in different
situations
and actually this i think a good option
to start talking a little bit more about
crafters so like yeah what is crafter
and what are the main things you were
hoping to measure using it crafter is
is a single environment that's set up in
a way that makes it efficiently
efficient to use for research and
the goal is to
train an agent in crafter once and then
get out the whole spectrum of agent
abilities you're measuring performance
across many tasks
but we're not doing that in a multi-task
setup it's just a single reward function
and the agent just tries to maximize
that but along the way
you know there's this whole technology
tree of things you can do and some
things depend on each other other things
are in are independent
so you kind of have to
um
to explore everything you need both deep
and wide exploration there
um and at the end of the day you get
like at the end of the training run you
get the success rates on all of these
achievements that each give you a reward
of plus one during the episode the first
time you you unlock them during the
episode
uh so then you can see you know these
are the tasks that require memory these
are the tasks that require more
generalization and so on and you can um
you can see where your where your agent
is failing and where your agent is doing
well
um and even if your agent is not a
state-of-the-art thing
um it gives you more than just a single
number right whether you can
to tell whether you're state-of-the-art
or not it tells you you know these are
the kind of
there's a bunch of easy tasks there as
well there's also a bunch of tasks that
haven't really been solved yet by any
current rl method
so even if you're just prototyping with
new ideas you would get a good feedback
signal of how well your agent is doing
and and one of the things i found really
cool about crafter 2 is it did come out
very
i'm trying to remember which one came
out first but it was around the same
time that deepmind came out with their
open-ended learning leads to generally
capable agents paper where they they did
kind of philosophically it seemed like a
almost the opposite approach where
they're looking at a whole bunch of
different environments and then trying
to put agents in those environments and
forcing them to kind of be good at
somewhat good at everything whereas
crafter is cool because it's sort of
like this one very rich world um i'm
curious like how would you compare the
two what do you think think are sort of
the strengths and weaknesses of either
approach
of of having many different environments
versus one yeah
yeah um i mean at the
it's almost there isn't that big of a
difference at the end of the day right
whether you
think of them as separate environments
or as one big environment with different
rooms in the in the environment there
isn't that big of a difference
um i think it's important for the agent
to face a lot of different situations so
you can do that either by designing a
lot of levels or by
just procedurally generating it and
having the agent learn that distribution
um
i i do think
if you're specifically interested in in
this type of generalization then it
makes a lot of sense to keep some
holdout levels that are not from the
same distribution as the training set so
that's something we're not we don't have
in crafter
because that's not the specific focus of
the environment it's more like supposed
to
evaluate a lot of different
abilities and you need a generally
capable agent to do well at crafter but
if you care about specific holdout
generalization then you should probably
have a test set of environments that are
different in some aspect from the
training distribution
and when you're like one of the things
with benchmarking too is i mean i
imagine you want to pick a task that's
hard enough that current ais can't beat
it or can't can't do as well as humans
but still easy enough that humans can do
it or at least that's compatible with
human abilities can you tell me a little
bit about crafter and where it falls on
that spectrum and sort of what what
current state-of-the-art rl can do with
crafter where human ability is yeah it's
that was a pretty fun aspect of
designing the environment actually
because you have to balance the
difficulty right you want to see some
learning progress like if it if all the
current methods are just flat then
it's a bit hopeless you know like maybe
we should
work on simple environments first
um on the other hand of course if it's
too easily solved then it's not not an
interesting benchmark for pushing
pushing performance further
so
i did a lot of testing of the game just
playing it myself and seeing how
difficult it was
and then also training some agents and
seeing where they get stuck it's
designed to be
pretty challenging but possible to solve
for humans
so if you played it for the first time
there's no way you would solve it but if
you practiced for
maybe for a day
you would be able to solve it most of
the time
right i think it's important for rl
benchmarks even if it's
you know crafter is completely
unrealistic in terms of the visual
appearance right if we compare it to the
real world um but even for games i think
it's important you know we want to
benchmark our rl agents on things where
humans also have some learning potential
on
right so if you think about a memory
task then
or like any any task really we want it
to be something where a human starts
as like oh this is hard
right and then after practice after
enough practice the human score should
be able to go up that means there is
some actual learning happening
and it's an actual it's a challenge for
a human now you're reminding me of a
question i had looking at the paper too
because like crafter is it looks a lot
like minecraft
it doesn't look exactly like minecraft
there's some important differences
what are some of the little kind of the
key differences and what do they tell us
about what's hard for these agents so
it's all in 2d
um it's still visual inputs but crafters
in 2d and minecraft is obviously in 3d
and that helps you reduce the horizons a
bit and also make the perception problem
a lot easier right like in crafter
perception isn't really that big of an
issue
um and
and of course 3d perception is an
unsolved problem and we would like to
extract objects and have them be
temporarily
consistent and
interact sparsely and do all these
things and i think that's a really
exciting research that i'm also involved
in
to some extent but
at the same time there are these really
long horizon behaviors that maybe
require memory or long horizon credit
assignment right like yeah going through
the whole technology tree to get a
certain item or
um
there's one that's not even very deep in
the tree which is just to eat a fruit so
but you have to
plant the
sapling first and if you wait for like a
thousand steps
and
and that's pretty hard to discover
um especially if there's creatures
around it and if they shoot at your plan
then it's got it's gone so
uh so there are a lot of these
um challenges in terms of you know long
term horizons of one uh one form or
another and generalization to new
situations
that are present in minecraft but
they're kind of overshadowed almost by
the
3d perception problem
and so crafter lets you work on those
without having the perception problem
and as a result you can train agents
much faster to
to an interesting performance level and
make improvements on different
algorithms yeah i guess the wish list is
long to get agi so you got to parse it
out into into sub components um
yeah i mean and this is exactly why i
think benchmarking is so important today
and i'd love to get your thoughts on
this aspect i mean like i i've kind of i
perceive that i've seen the field of rl
move towards more and more emphasis on
benchmarking as algorithms have gotten
so good that you can kind of point them
in a particular direction and like
they'll they'll pretty reliably be able
to master task with some iteration like
you know you define a benchmark and then
within a couple years it gets smashed or
even a couple months and so in some
sense we're limited more by our ability
to like point these things in the right
direction to choose interesting tasks
for them to work on which arguably is
what we see with language modeling too
do you agree with that framing or um
yeah yeah absolutely
there is
i mean the defining the right benchmarks
is that's posing the right questions
right that's really important for
directing the research community
um
i mean
they you're saying well they get solved
in a in a couple of years
they probably do because they were
designed to get solved in a couple of
years right yeah i could easily design a
benchmark that won't get solved in the
next 10 years
but
yeah i mean there's a sweet spot of
cause of
something that's that seems promising
enough that we can make some progress on
now
and
and yeah i totally agree it's important
to set set up the right benchmarks and
also to make them easy to use and make
them
accessible to people because
um some environments need a lot of
compute or they are really really
difficult to set up or can't be run on
some
uh some clusters and
that's all like researcher time wasted
all around the world
um or there's a license um requirement
or something i thought it was really
awesome that that mujo co is free
um for everybody now because i
people i've worked with at the
university of toronto here
um you know they've they've wasted days
just trying to get the license set up on
different servers and all that so
when you talk about you know you can
define benchmarks that won't get
shattered in in like 10 years or
something like something like that um
yeah i do feel like there's something
interesting intrinsically about
benchmarks for that reason it seems to
tell us something about what the current
state of the art is it seems to tell us
something about the frontier um so
through that lens like how do you think
about crafter what does it tell us about
what's easy today in rl what's hard and
the achievements that are going to be
unlocked next in a crafter-like
environment i i think generalization is
actually a pretty big aspect of it
because
a lot of the achievements can sometimes
be unlocked if you're if you're pretty
lucky
learning from that to unlock them
reliably is really hard and we need to
put in the the right inductive biases
into our agents
to
to have this generalization ability
and the inductive biases piece like
i at least i i take it to mean when you
look at a screen and there's like a big
mean looking skeleton thing that's
shooting something at you um you kind of
as a human you know like oh i've played
games before i've interacted with
hostile looking creatures i can infer
that probably this is like a dangerous
thing that i should avoid um whereas you
so you don't actually have to learn that
from getting hit a bunch of times by an
arrow
is is that kind of the aspect that
you're you're shining a spotlight on
there i mean of course humans have more
prior knowledge when they go into a game
and play it but i don't think that's
actually
that
that big of a problem um
first of all when i watch people play
crafter for the first time they all get
hit by arrows couple of times
maybe not because they want to but
but because they have to learn how to
avoid them
and the agent also learns pretty quickly
that that's a bad thing
and and so
of course the agent has to catch up with
those human uh priors
in the beginning but then i yeah i mean
it's still fair game if you give it
enough uh experience to interact with
the environment the arrows is probably a
bad example because it's it's such a a
direct bit of feedback but like
notoriously moctezuma's revenge when you
look at atari games is an example of
where those biases or those priors
really kick in where there's like a key
that you got to get and then the key
unlocks a door and if you're a human you
see a key you see a door and you can
kind of put two and two together thanks
to your priors whereas an ai is just
like you're basically putting a big
barrier between the cause and effect
so is is that something that you think
would would kind of play similarly with
an environment like crafter with some of
these tech tree things where you have to
unlock
to some extent perhaps it depends how
much the human knows about the
environment going in right if you tell
them these are all the possible items if
you show them the tech tree that helps a
lot
right if not
in in just in terms of wall clock time
like training an agent on my gpu on one
computer and having my friend play on
the other computer and seeing who gets a
diamond first um i mean the diamond is
pretty hard for trained agents right now
um i've seen it happen but definitely
like you know below one percent success
rate so it's a good challenge to work on
but
um i i wouldn't say that the human
necessarily finds it without knowing
that they they can and knowing what all
the things are in the environment all
the different tools so
um yeah i mean it's an exploration
challenge and
perhaps if we're talking about montezuma
which is actually a really hard game for
humans it's it's actually pretty
challenging um
but maybe not so much because of the
high level
reasoning that's needed and more because
you have to be very precise and not get
shot by the laser beams and
you know jump over all these over the
skull on the first screen there's only a
pretty small gap and you have to time
everything right so it's kind of like
oddly challenging for the wrong reasons
compared to what we were trying to use
it for in rl but
nevertheless it's still a good
environment to test exploration
and even though those are things that
maybe
the human doesn't really have to explore
as much
you know there's other things that
humans have to explore as much so as
long as you're not putting in montezuma
specific biases into your algorithm
um
i think can still be a useful benchmark
so
i think it's it's better to or more
important to
um
to prevent some common failure modes
like failure of generalization right
like attending to a single pixel value
that
that that's a failure mode of
generalization and we want to design
benchmarks where that's not possible
because there's enough randomness
happening in the environment right maybe
just a little bit of observation noise
probably helps in this particular
example
um but also memorizing an exact room
layout that may not be what we want uh
so then we can randomize that with
procedural generation i think we just
want to be a bit careful about thinking
about you know what is the easiest way
to solve this environment are there any
shortcuts there that i didn't think of
and then make sure that we can get rid
of them in the environment what have you
observed about agent behavior especially
for let's say like cutting edge
architectures in the crafter environment
what kinds of things do they tend to
learn that have surprised you
um
yeah we have a
section in the paper actually talking
about these emerging behaviors um
so i i haven't seen any degenerate
behaviors
um so that's good
and
there are a couple of interesting ones
so for example there's a day and night
cycle over time uh you know after maybe
a minute of play it starts to get dark
and then more monsters are coming up
and it's if the agent is
good at fighting them it could survive
but it's easier to just hide somewhere
in the cave
so the agent actually ends up learning
to
search for caves on the map
and then open them up dig through the
wall and then close it and then sleep
there oh wow so that is a lot of
planning
yeah i mean the the agent i'm training
they are using this particular one
that's a dream rv2 agent so it's using a
world model
but it's not doing online planning in
the situation in the moment it's only
using the world mall to generate
um
as a replacement for the environment to
generate more experience and then just
train a
fully amortized policy on that so when
you're running it in the environment in
the actual environment you're just
feeding in the observation into the
world model and then into the into the
actor and you get an action in a single
forward pass so it's all kind of
distilled
and i guess you do see uh i guess fewer
models making it all the way down the
tech tree like that is still a
pretty uh serious limitation for current
systems yeah it is so we ran a bunch of
baselines
and
they can yeah so it is definitely
challenging
to get high success rates on many of the
achievements
some of the achievements are pretty easy
you know there's like 203 where even the
random agent gets them decently often
but most of them
you really yeah i mean current methods
sometimes find them
um but
kind of more through luck and then they
don't learn to reliably solve them
because they don't have this
generalization ability to um
to see it once or twice and then apply
it to the next situation next time
whether
the mountain is on the other side of the
map and you know that there's a forest
here instead of the lake and then but
you know the skills should still apply
if there's still an iron ore here then i
can still go and mine it
so
i i mean it makes sense to also study
generalization in most complicated
setups sometimes
but at the end of the day all these
things are to improve sample efficiency
and we can study
you know i'm personally i'm not that
convinced that for example to look like
transfer learning is not
necessary as a separate evaluation
um setup
from just training in a complicated
environment where there are many
different things for the agent to do and
it has to transfer from previous skills
to new ones and if you measure sample
efficiency by saying you know you're
only allowed to do a million steps here
then you can actually evaluate that too
and
there are reasons where you are like
cases where you want to evaluate
transfer learning separately like seem
to real right that's a very
practical
uh scenario we want to train in a
simulator and then have a real robot do
something in the world um but maybe you
know training on one atari game and
transferring to the other and those kind
of those kind of things i don't know how
um yeah i think we don't really need
that as a setup that much that's really
interesting because it does kind of
contrast a little bit with what we're
seeing in deep learning more generally
like in in language where
it kind of seems like transfer learning
i guess you could cast the problem in a
similar way or you could think of it as
being a similar problem where for
example when you're training a system
like gpd3 on a huge data set
you're kind of doing a similar thing
where some parts of that data set will
include translations between english and
french some parts of it will include
writing novels and like the to some
degree if you think of gbt3 as an agent
an embedded agent or something uh you
end up getting effectively an analog for
for what you're seeing right now with
with crafter like would you agree with
that uh
yeah that's a cool perspective um and i
i also think in nlp it makes sense to
have
have the setup
on top of what you're describing within
just you know
fitting the model to a big data set with
all kinds of
tasks hidden in the data set somehow
additionally we also want to be able to
you know take a general model and then
specialize it for a specific application
it's just that
if you're training an agent you're
usually you're in an online learning
setting well i mean the terminology some
people
mean something else by that but at least
you keep getting new data and you can
you can keep exploring the world you can
keep doing like better at the task
so
um
you know it's like
in in application domains we more often
have this divide between there's a big
pre-training phase
and then and then we want to do
translation with it and then if we want
to do something else with it we probably
wouldn't specialize the translation
model further we would go back to the
general one
like do you think that um this idea of
scaled models and and basically you know
foundation models as they're sometimes
called the gpd3s the the megatrons etc
um
do you think that there's going to be an
actual like
merging of that with uh with deep rl
let's say constructing very complex
world models ontologies and so on or
like right now is is rl just like really
focused on its own thing is this
generality and
procedural generation of environments is
that kind of more do you see that being
more the focus
that's a great question um
i i thought about this and
it's like yeah why i mean
intuitively yes we should just train a
giant wealth model that learns
how
how the world behaves
um and then
well
you can't just use that if you want to
do control with it you have to
specialize it because the model doesn't
know it doesn't distinguish between the
controllable
randomness in the world and the
uncontrollable randomness well let's say
you have a video of or you have a scene
with multiple robots in a factory and
your job is controlled as one robot
um you so you can't immediately use this
model for planning
otherwise you you would end up being too
optimistic you would kind of think that
all the other robots will help you do
your task but they won't
so so you need the specialization steps
and there's
also some some ideas around already
for how to do that but they haven't
really been scaled that much i think
and
on the other hand um
why don't we really have foundation
models in rl yet
i think one reason is that a lot of the
evaluation domains are still pretty toy
um or maybe not toy they are challenging
in some ways but
they are also very different from from
the real world in some ways as well
so let's say let's say you train a
foundation model
um on real video
it might not be that helpful for solving
atari games right right
or perhaps even crafter even though
there's more interesting generalization
necessary there and
um
it's still visually very very different
from
from real world video data sets that we
have
and
one day we'll be able to train on all of
youtube and it'll have all the gameplay
of all the streamers on it as well and
then there will be some some transfer
there
but
um yeah i think
we will on one side we'll see the rl
community move more and more towards
visually realistic environments
and on the other side maybe then the
foundation models have a bit more of a
chance of actually being helpful
it does start to make me think about
that that conversation we had about
priors where you know you've got the
human being that leverages what a let's
say you go to starcraft 2 or something
and you look at a particular a
particular unit or structure in the game
and you can kind of get a sense for
what its purpose is or what gadget or
weapon it might have at its disposal
just because you've seen similar things
in the real world
again you know this doesn't seem like
the critical thing because these agents
can learn quite quickly how that works
but foundation models seem like they
might provide that kind of cultural
knowledge that prior that helps give a
leg up to these systems during training
i think so totally it's just that right
now
most of the agents are already not very
data efficient
so
that's why i think
having this
human prior isn't even that important
for a lot of the standard
evaluate like standard benchmarks at
least right there's definitely
environments where i mean you can always
set something up right and there's
probably some
there's probably a lot of real world
applications as well where this could
really help
um but at the end of the day current rl
methods are still quite data inefficient
uh in a lot of cases so
so then
catching up with the human priors is
only the first fraction of learning
do you think data efficiency is is a
really like one of the key next targets
for rl systems and we saw efficient zero
come out recently which was sort of
plated to the story i don't think it's
necessary i think it's interesting to
push data efficiency um but it's also
interesting to just push final
performance with highly distributed
setups
there is a trade-off there which is
the more distributed computing you do
the more complex your whole training
infrastructure gets
and the harder it will be to for other
people to replicate
so
um
[Music]
yeah i think it does make sense to focus
on sample efficiency and especially also
compute efficiency
develop or at least you know have a bit
of a of a pull towards methods that are
easy to run by everybody
uh doing research in nrl
because then
you know if i publish a method that
other people can use they will build on
top of it if it ends up working well
and
and then there's a lot more progress
than i could create myself as an
individual researcher right so that's
why i'm actually a bit skeptical of
these super large distributed rl
um demos or showcases because it's like
i mean
there have been enough of those now to
to know that with a huge amount of data
we can solve a lot of things with rl
right um
that almost nobody can replicate
but i don't know if we have to do our
research for new methods at this scale
right like probably we can do them at a
single gpu scale as well
and the
the findings there will transfer to the
highly distributed setting as well to to
a good extent
and do you think i i don't know if like
how much time you spent thinking about
efficient xero but but do you think that
this idea of as i understand what they
really did in the paper was
trade data for compute like really kind
of lean into having an agent imagine
different scenarios at each step like
really try to try to model very hard and
think very hard about each next step
as opposed to getting a ton of data and
in that way sort of imitating some of
the things that humans might do if
you're learning how to play soccer for
example like you know you've you've
kicked a ball before you kind of know
what it what it might do and you can
model forward like okay if i kick it
this way i don't know mu0 already had
the mu0 reanalyzed variant that reuses
the replay buffer many many times
um
so i think it's like a ratio between
environment steps and training steps of
99 to 1 or something
um
so yeah you're spending a lot of compute
um
to improve the model and that's
generally what we're seeing with weld
models so for dreamer as well
um you can base you can just crank up
compute and your model will get more and
more sample efficient at some point the
question is do you want to wait that
long to do all that training right
but yeah especially if you have
if it's a real world application where
you can't speed up the simulator anymore
then that makes a lot of sense or if you
have enough accelerators that you can go
distributed for your training
um or at least multi gpu or train on
like a slice of multiple tpu cars
then you can be a lot lot faster both in
like you know
you don't suffer in wall clock time and
you gain a lot in terms of data
efficiency
in the environment
i think the most interesting part of the
efficient museum compared to museum was
that they
integrated representation learning into
it
because that always seemed missing in
mu0 and it's one of the reasons mu0 is
even though it's achieving really high
performance asymptotically it's not very
data efficient
right yeah yeah
yeah so
yeah that just and i think they tried at
the time but now somebody has done done
it quite well and and tuned everything
nicely and
um yeah so it's a cool a cool method i
i'm still
thinking or i'm still a bit on the fence
of whether the online planning is really
necessary for atari games
um i think there's some real world
situations where you need
real-time planning
just because
the world is so complex that it will be
really hard to generalize to a new
situation
um and and so you want to do some extra
compute on the new situation
um
especially for non-stationary objectives
like exploration where
the objective of what is new changes all
the time every time you collect some
data so then you want to do replanning
um like in our plan to explore paper but
i think i mean there we also only used
offline planning so
online planning in the moment i think
makes a lot of sense for
really like environments that are so
complex that it's hard to generalize
and and for non-stationary objectives
like exploration
for just maximizing reward in atari i
think you might be able
to get away with a much simpler
algorithm that doesn't use such research
yeah and yeah that does make sense and
it's one reason i guess to be kind of
cautious about extrapolating too much an
agent's performance on like atari games
uh to the more general complex settings
but i guess one thing this does make me
think about you know you mentioned this
idea of current approaches leaning more
and more into compute over data so
getting a lot of data efficiency at the
cost of large amounts of compute
do you think there's anything to the
idea that our progress towards something
like agi down the road uh really becomes
the story of increasingly abundant
availability of compute like is is all
our algorithmic funny business really
just a big dance that we're doing atop
this kind of exponentially rising
tide of just compute availability
compute is very important yeah so
i would
in most cases i would take compute over
algorithmic novelty
um not in all cases though because there
are certain things like for example
hierarchical planning
that you would need exponentially more
compute for and so
then you can have a hierarchical
structure to help
um to help kind of uh break down this
exponential complexity
so
yeah i mean
neural nets and deep learning works
insanely well and scales insanely well
um and
and most papers that are being published
they won't be needed once we have the
next gpu generation available um but
there's also a lot of
really interesting really hard
problems that people are working on
where maybe there will only be a
breakthrough in generalization every
couple of years but
we still need all these intermediate
papers as stepping stones
and and i think we still need some
[Music]
some of these inductive biases
um
yeah i guess cool because at the end we
will end up with i think a quite simple
set of
rules of very general rules
but like you probably want to learn a
model of the environment you probably
want it to be causally correct so you're
not biased in that way
um
you probably want to do planning with it
in some way
but then there's a lot of details that
where we can just
throw in the the newest state-of-the-art
neural network design the newest
architecture and then and then run it on
a lot of machines and
yeah and and i do think like especially
this is a bit separate but also to to
the question you asked um cycling back
to that
especially with model based methods it
really seems like you can trade
compute for for data efficiency
to a pretty
pretty good extent
um
not so much with model 3 methods um even
if
you have a big replay buffer for
everything you've seen already that
experience will become more and more of
policy so will be less and less helpful
in improving your current decisions
but learning a world model on the replay
buffer lets you generalize and fill in
the trajectories that you haven't seen
yet
and
you know it's not really clear how well
that would fill in these things you
haven't seen but it seems like it's good
enough that
you can really crank up the compute and
do more and more training both for your
model and your policy and and get more
sample efficiency you do see
hierarchical learning as sort of like
one of these core nuts that will have to
crack on the way like there's just
compute doesn't get us around this
it'll be hard there's one question of
whether you want the temporal
abstraction to be explicit in some kind
of structure that we understand as
researchers or not
and there might be clever ways that we
haven't found yet for learning this um
implicitly somehow
now if you want to do explicit planning
with it then you need to have access to
that structure
so i think it would be very useful if we
could
do that explicitly
right
but there is probably also a way without
that
yeah it kind of makes me think of like
you know convolutional nets obviously
that's us encoding a prior that like
really saves the the deep learning
system a ton of a ton of time and kind
of accelerates its learning this sort of
feels like it's in the same vein like
what priors are we going to bake in
um
i would imagine the counter argument to
that is you look at transformers today
and how essentially they're starting to
replace convnets for vision and they're
this very generalizable architecture and
that generality seems to be a feature
and not a bug like in other words it's
the very fact that it can be used for
you know vision as well as text we're
seeing more and more multimodal systems
that seem to be able to benefit from
both things like does that play into
into this picture at all i wouldn't
necessarily say they are replacing
vision systems they are maybe replacing
the high-level vision systems right um
because there the locality and the
weight sharing isn't isn't as useful
anymore
um at lower lower levels i think
convolutions are still
still the way to go although this keeps
changing every day so yeah that's
kind of hard to to keep track of
everything but
um
yeah i mean
transformers are
yeah at the end of the day it's just a
computationally efficient architecture
and it lets you learn long long-range
dependencies
and if we
go back to a question of temporal
abstraction well a lot of
architectures we have are already doing
some form of implicit temporal
abstraction
you just think of a gated rnn like a gru
there's already a gate and if that gate
is closed then the activation is copied
over to the next time step
and and so you can copy pretty easily
for very long horizons
and and so then you've preserved your
information you can access your memories
you can backprop through the whole thing
it's you get good gradients back in time
because the the state didn't change much
um so your gradients don't uh don't
vanish or explode on the way back
and and that's
pretty good already if if you
engineer everything right and you train
it
you know you have good training set up
there's no bugs you can already learn
quite long term dependencies
transformers are pretty similar if
if we're talking about sequence modeling
and long term dependencies
it's a bit like
i guess you could think of it as a
neural turing machine
where you have some
you know like a
memory that you can write to and read
from
but the problem is it's really hard to
tell whether
a piece of information will become
useful in the future
i don't know yet so you don't want to
learn to write it's really hard to learn
writing relevant information so the
transformer just writes everything at
every time step and then you still do
the read that attends back to the past
so all these architectures are already
pretty good at learning long-term
dependencies
um but they don't let us use them for
abstract planning
and
so i think there is a challenge there in
making these architectures discreet
right like you have a sigmoid gate
in a gru
so the gate is never closed on it's
never open it's always something in
between and
it's very nice if you want to train your
models with with radiance right because
then you're trying out everything at the
same time to different amounts and the
gradients can tell you do a bit more of
that close the gate a little bit until
eventually you learn that it should be
closed
so it really helps optimization
but if you want to do um abstract
planning with it
then then it would be it would be
helpful to just
have these be discrete decisions so you
know you know i can make a step now for
15 steps right i just do one computation
and then that will account for 15 steps
at the lower level or something like
that which would also make these systems
more explainable right i mean like that
seems to be a nice benefit if you could
have an explicit architecture like that
from a safety standpoint
probably yes um yeah i think it would
help i'm not thinking too much about
explainability at the moment because
we're still so far away from
solving the kinds of problems i would
like to solve with these methods
but yeah i mean it is a useful structure
and
to your question about compute versus
algorithms i think
it's
it's general enough of a structure that
it won't be replaced by compute too soon
that's a that's an interesting aspect
too i mean you alluded to we're not
we're not terribly close how far do you
think we are uh from let's say
this is always the difficult question
we're back to defining generality but
how far are we would you say from
systems that broadly would be as
competent as a human across let's say a
wide range of tasks if i can get away
with that
are we talking about an embodied system
or
are we okay with just a language model
that we can talk to on the on the
computer um i mean i i guess so in my
world model those two would happen
around the same time because if you have
a language model that's as capable as a
human you could accelerate it and
accelerate development and then
presumably like excel like accelerate
the development of embodied systems like
having a researcher on steroids
but i could be of course very very wrong
perhaps i mean
i
i mean first of all i think the
progression will be gradual
it won't be overnight now this model is
you know solves the problems all of them
and before i didn't do anything so it's
getting better and better and
it's getting more useful along the way
and it's probably a long time until
language models are
good enough to do research for us
um
because
i mean i'm yeah
there might be some
i'm not that deep into the nlp
literature at the moment but
i would assume that
the current current techniques we have
are still pretty much as
sure they can generalize in some cool
ways but
i might tell you how to make up new
information and verify that that
information is actually correct and have
some kind of um
um
some kind of logical reasoning emerge
from just
from just completing text which to some
extent does happen but i think it's uh
you know maybe even one of these things
where you want to build in some some
prior knowledge to make sure that the
logical reasoning is sound and it's not
just 80 of the cases
where it has seen enough training
examples for
so
at the end of the day that that would
help you with generalization and then
that's really needed if you want to use
it as a scientist
right it's supposed to make up new
new knowledge that's correct
so uh
it would have to have some really good
generalization capabilities
and
yeah i mean
well we're usually just testing these
language models within the distribution
of human text right that's already
there on the planet and it generalizes
well in that distribution or it gets
better and better at least but
maybe there is another
algorithmic hub needed
to generalize outside of that
distribution and be a scientist yeah
yeah i think that the temporal
abstraction also the
kind of more logical reasoning
um and
yeah
much better generalization capabilities
i
yeah i mean it's like we don't really
need them if we train on on all of the
internet of text
um if we want to generalize within that
distribution and i think that's where a
lot of progress is happening right now
um i'm
i mean i'm sure you know there's so many
good people working on it um we'll we'll
definitely figure it out
like absolutely probably within our
lifetimes too um
but maybe not in the next 10 years
yeah well yeah and that's the thing
right this is like i guess all my all my
takes about compute and and generality
from language come from that bias um so
so it really is anyway very informative
to uh to see a perspective from the rl
side which is something i've been trying
to do a little bit more lately too
because it's the two communities are
surprisingly non-overlapping at this
point and hopefully that'll change
yeah yeah that would be cool i think
there is a bit i mean
world models are bridging the gap to
some extent because
wealth models really just learn a good
sequence model ideally
one with a good representation as well
so you can plan
in your representational space you don't
have to generate new images during
planning
um but yeah other than that it's really
just it's just a good sequence model
and
and it's conditioned on actions sure but
that's
that won't hurt hurt anybody so
yeah i think there is a lot of overlap
and there is a lot of
what i'm really excited about is that
aurel is really starting to
be at a uh be in a place where
you know a couple of years ago
in supervised learning there were all
these tricks and like you know this is
how you're you're learning right
schedule and you just use this
normalization layer here you can try and
keep your networks and this all kind of
worked
um whereas in rl none of those things
worked they would all just destroy
performance because it's already the
training process is already so noisy
that
training big models and training them in
ways that
that they actually fit the data well
could be a problem and yeah yeah
collapsing in some way and everything
was pretty brittle and unstable
whereas now
it really seems like we're more and more
in a place where
we can just import these functions
from from our favorite deep learning
frameworks and
and scale things up and they scale
roughly how we expect them to scale on
the safety side so uh there's obviously
community of people quite worried about
ai alignment risk and even up to
catastrophic risk we've had quite a few
of them on our podcast from deepmind
openai that sort of ecosystem
i'm wondering you're obviously more on
the capability side but i'm wondering
like what kind of uh exposure
you've had to that that kind of
ecosystem whether they overlap like yeah
what's the almost the cultural overlap
there is there any
yeah yeah i have a lot of friends um
who've who know a lot more about safety
ai safety than i do
some of them working on it as the main
topic as well
um
i i do also think it's a really
important topic
um
it's just i guess yeah i mean we might
get there
fairly soon with rl that it really
becomes relevant because right now
not a lot of rl is deployed in the real
world yet yeah right like and i think
that's the current
transition that we're going through
um where
there's there's startups using
reinforcement learning um
there are more and more things that uh
that actually work well enough that we
can
industrialize them
and then
um safety considerations
i think become more relevant
but yeah i haven't
spent that much time thinking about the
the safety of my atari agents yet
because i i want to play i want them to
play minecraft next and then afterwards
maybe something more in the real world
so
um
just really appreciate the rundown not
just on your paper your work but the
space is a whole it's been a really cool
opportunity to talk to somebody who
knows a lot more about rl than i do so i
really appreciate it
thanks a lot for having me that was
really fun |
95bc3278-d6a5-4430-8d51-37c205488ecd | trentmkelly/LessWrong-43k | LessWrong | Hallucinating Suction
This post is about hallucination and how self-induced hallucination might be useful in steering your mind.
I was just standing in my room after using the vacuum to suck off a duster. The whole time I was wearing my X5A earmuffs. Right next to me, I had an air filter running on the maximum setting. It was quite loud, but only barely audible when wearing the earmuffs.
After briefly doing something on my computer, I turned around. I was hearing the sound of a running vacuum cleaner. I was pretty sure that I had turned it off, though hearing the vacuum sound seemed much stronger evidence than my memory. I walked over to the vacuum and put my fingers into the snout, to feel if there still was suction. I expected suction, and sure enough as soon as I had put my fingers into the snout, I felt air moving over my fingers ... for perhaps a second or so. Then the feeling rapidly vanished. I realized that the vacuum was actually turned off and that I had been hearing the operating sound of the air filter. My expecting to feel the suction made me hallucinate the feeling of suction. I think it's quite interesting that you can have hallucinations like this.
This seems very related to what happens when lucid dreaming, where you can turn around, and imagine a detailed scene behind you. Once you turn back, there's a good chance that that scene will now actually be there in front of you. In a lucid dream, the world is determined by your expectations. But even in the real world, where the state of the world is not directly determined by your expectations, you can still manipulate your experience of the world with your expectations, even for something as tactile as feeling air flowing over your fingers.
In my brief excursion into reading about tulpamancy, this also came up. I have read about people claiming that they can change the color of the sky that they perceive and modify how certain foods are tasting. I was already leaning towards believing these people, based on my own exper |
51e22445-479e-4272-b110-198b4ca58954 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | We don’t need AGI for an amazing future
Most of the public discussions about AI safety seem to assume that there is only one plausible path for AI development: from narrow, specialized AI to broad AGI to superintelligence. So we seemingly have to decide between an amazing future with AGI at the risk that it will destroy the world or a future where most of humanity’s potential is untapped and sooner or later we get destroyed by a catastrophe that we could have averted with AGI. I think this is wrong. I believe that even if we assume that we’ll never solve alignment, we can have an amazing future without the existential risks that come with a superintelligent, self-improving AGI.
As Irving J. Good has stated long ago, an AGI would likely be able to self-improve, leading to an intelligence explosion. The resulting superintelligence would combine the capabilities of all the narrow AIs we could ever develop. Very likely it could do many things that aren’t possible even with a network of specialized narrow AIs. So, from an intelligence perspective, self-improving AGI is clearly superior to narrow AI (fig. 1).

Fig. 1: AI development (hypothetical)
But of course there’s a catch: a self-improving AGI will be uncontrollable by humans and we have no idea how to make sure that it will pursue a beneficial goal, so in all likelihood [this will lead to an existential catastrophe](https://www.lesswrong.com/posts/eaDCgdkbsfGqpWazi/the-basic-reasons-i-expect-agi-ruin). With the prospect of [AGI becoming achievable within the next few years](https://www.lesswrong.com/posts/PE22QJSww8mpwh7bt/agi-in-sight-our-look-at-the-game-board), it seems very unlikely that we’ll solve alignment before that. So, from a human values perspective, a very different picture emerges: above a certain level of intelligence, the expected value of a misaligned AGI will be hugely negative (fig. 2).

Fig. 2: Value for humanity of aligned and misaligned AGI
The only sure way to avoid an existential catastrophe from misaligned AGI is to not develop one. As long as we haven’t solved alignment, this means not developing AGI at all. Therefore, many people call for an [AGI moratorium](https://www.lesswrong.com/posts/YRfQss3TPfKAyGxLf/agi-rising-why-we-are-in-a-new-era-of-acute-risk-and) or even [a full stop in AI development](https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/), at least until we have found a solution to the alignment problem. I personally support an AGI moratorium. I have signed the [open letter of the Future of Life Institute](https://futureoflife.org/open-letter/pause-giant-ai-experiments/) although I think 6 months are nowhere near enough time to solve the problems associated with AGI, simply because a 6 months pause is better than just racing full-speed ahead and I view the coordination required for that as a potentially good start.
However, I think we need more than that. I think we need a strategic shift in AI development away from general intelligence towards powerful narrow AI. My hypothesis is that provably aligning a self-improving AGI to human values is theoretically and practically impossible. But even if I’m wrong about that, I don’t think that we need AGI to create an amazing future for humanity. Narrow AI is a much safer way into that future.
All technical advances we have made so far were for particular, relatively narrow purposes. A car can’t fly while a plane or helicopter isn’t practical to get you to the supermarket. Passenger drones might bridge that gap one day, but they won’t be able to cook dinner for us, wash the dishes, or clean the laundry - we have other kinds of specialized technical solutions for that. Electricity, for instance, has a broad range of applications, so it could be seen as a general purpose technology, but many different specialized technologies are needed to effectively generate, distribute, and use it. Still, we live in a world that would look amazing and even magical to someone from the 17th century.
There’s no reason to assume that we’re anywhere near the limit of the value we can get out of specialized technology. There are huge untapped potentials in transportation, industry, energy generation, and medicine, to name but a few. Specialized technology could help us reduce climate change, prolong our lives, colonize space, and spread through the galaxy. It can literally carry us very, very far.
The same is true for specialized AI: self-driving cars, automated production, cures for most diseases, and many other benefits are within reach even without AGI. While LLMs have gained much attention recently, I personally think that the value for humanity of a specialized AI like AlphaFold that can accelerate medicine and help save millions of lives is actually much greater. A system of specialized narrow AIs interacting with each other and with humans could achieve almost anything for us, including inventing new technologies and advancing science.
The big advantage of narrow AI is that, unlike AGI, it will not be uncontrollable by default. An AI like AlphaFold doesn’t need a general world model that includes itself, so it will never develop self-awareness and won’t pursue instrumental sub-goals like power-seeking and self-preservation. Instrumental goals are a side effect of general intelligence, not of specialized systems[[1]](#fn8pu9k0xqlvs). With the combined superhuman capabilities of many different specialized AIs we can go beyond the intelligence threshold that would make an AGI uncontrollable and gain huge value without destroying our future (fig. 3).

Fig. 3: Combined value of narrow AIs vs. value of AGI
As described earlier, even a system of many different narrow AIs will be no match for one self-improving AGI. So if we restrict ourselves to narrow AI, we might never get the value out of AI that would theoretically be possible with a superintelligent, aligned AGI. It may also be harder and more costly to develop all those specialized AIs, compared to one superintelligent AGI that would do all further improvement by itself. But we could still achieve a much better world state than we have today, and a better one than we can achieve with AGI before it becomes uncontrollable. As long as we haven’t solved the alignment problem in a provable, practical way, focusing on narrow AIs is clearly superior to continuing the race for AGI.
Of course, narrow AI could still be very dangerous. For example, it could help bad actors develop devastating weapons of mass destruction, enable Orwellian dystopias, or accidentally cause catastrophes of many different kinds. There is also the possibility that a system of interacting narrow AIs could in combination form an AGI-like structure that could develop instrumental goals, self-improve, and become uncontrollable. Or we could end up in a complex world we don’t understand anymore, consisting of interacting subsystems optimizing for some local variables and finally causing a systemic breakdown [as Paul Christiano described it](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like) in his “going out with a whimper”-scenario.
To achieve a better future with narrow AI, we must therefore still proceed cautiously. For example, we must make sure that no system or set of systems will ever develop the combination of strategic self-awareness and agentic planning capabilities that leads to instrumental goals and power-seeking. Also, we will need to avoid the many different ways narrow AI can lead to bad outcomes. We need solid safety regimes, security measures, and appropriate regulation. This is a complex topic and a lot of research remains to be done.
But compared to a superintelligent AGI that we don’t even begin to understand and that would almost certainly destroy our future, these problems seem much more manageable. In most cases, narrow AI only works in combination with humans who give it a specific problem to work on and apply the results in some way. This means that to control narrow AI, we need to control the humans who develop and apply it – something that’s difficult enough, but probably not impossible.
The discussion about AI safety often seems quite polarized: AGI proponents call people concerned about existential risks “doomers” and sometimes ridicule them. Many seem to think that without AGI, humanity will never have a bright future. On the other hand, many members of the AI safety community tend to equate any progress in AI capabilities with an increase in catastrophic risks. But maybe there is a middle ground. Maybe by shifting our focus to narrow AI, at least until we know how to provably align a superintelligent AGI, we can develop super-powerful technology that doesn’t have a high probability of destroying our future.
As Sam Altman put it [in his recent interview with Lex Fridman](https://www.lesswrong.com/posts/tyts4Dw7SafsxBjar/what-can-we-learn-from-lex-fridman-s-interview-with-sam): “The thing that I’m so excited about is not that it’s a system that kind of goes off and does its own thing but that it’s this tool that humans are using in this feedback loop … I’m excited about a world where AI is an extension of human will and an amplifier of our abilities and this like most useful tool yet created, and that is certainly how people are using it … Maybe we never build AGI but we just make humans super great. Still a huge win.”
Many questions about this remain open. But I think the option of ending the current race for AGI and instead focusing on narrow AI at least deserves a thorough discussion.
1. **[^](#fnref8pu9k0xqlvs)**It is of course possible that certain specialized AIs follow goals like power-seeking and self-preservation if they are trained to do so, for example military systems. |
fd3b83b2-85cb-4fe7-b91a-d7a249220b42 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Understanding strategic deception and deceptive alignment
*The following is the main part of a* [*blog post*](https://www.apolloresearch.ai/blog/understanding-da-and-sd) *we just published at Apollo Research. Our main goal with the post is to clarify the concept of deceptive alignment and distinguish it from strategic deception. Furthermore, we want these concepts to become accessible to a non-technical audience such that it is easier for e.g. the general public and policymakers to understand what we're worried about. Feedback is appreciated. For additional sections and appendices, see the* [*full post*](https://www.apolloresearch.ai/blog/understanding-da-and-sd)*.*
*We would like to thank Fabien Roger and Owain Evans for comments on a draft of this post.*
We want AI to always be honest and truthful with us, i.e. we want to prevent situations where the AI model is deceptive about its intentions to its designers or users. Scenarios in which AI models are strategically deceptive [could be catastrophic](https://www.lesswrong.com/posts/oBFMbhQMt9HkmfF6d/why-deceptive-alignment-matters-for-agi-safety) for humanity, e.g. because it could allow AIs that don’t have our best interest in mind to get into positions of significant power such as by being deployed in high-stakes settings. Thus, we believe it’s crucial to have a clear and comprehensible understanding of AI deception.
In this article, we will describe the concepts of *strategic deception*and *deceptive alignment*in detail*.* In future articles, we will discuss why a model might become deceptive in the first place and how we could evaluate that. Note that we deviate slightly from the definition of deceptive alignment as presented in ([Hubinger et al., 2019](https://arxiv.org/abs/1906.01820)) for reasons explained in [Appendix A](https://www.apolloresearch.ai/blog/understanding-da-and-sd).
Core concepts
-------------
A colloquial definition of *deception* is broad and vague. It includes individuals who are sometimes lying, cheating, exaggerating their accomplishments, gaslighting others, and more. We think it would be bad if AIs were to show these traits but we think it would not necessarily be catastrophic.
Thus, we want to distinguish this colloquial definition of *deception* from *strategic deception* (SD) and *deceptive alignment*(DA) which are narrower concepts but, in our opinion, much more likely to lead to catastrophic consequences because the model acts in a more goal-directed manner for SD and DA. We also define what we mean by *Alignment*, as this affects which cases should and shouldn’t be considered deceptive alignment.
For our definitions of deceptive alignment, the concept of “goals” in AIs is important but there is disagreement about what it refers to. For example, if an LLM is prompted to solve a problem, does it have the goal to solve that problem? We would argue that the highest-level goals of the model that are stable across contexts are the ones that matter. In current user-facing LLMs like ChatGPT or Claude, the closest approximation to goals may be being helpful, harmless, and honest. In-context goals can be used to study goal-directed behavior in LLMs but are not what we’re primarily interested in.
A model may not have one coherent goal, e.g. it could have a number of incoherent contextually activated preferences (see e.g. [Turner & Pope, 2022](https://www.alignmentforum.org/s/nyEFg3AuJpdAozmoX/p/iCfdcxiyr2Kj8m8mT)). In that case, we would say that a model is deceptively aligned for a particular contextually activated preference if it robustly shows deceptive behavior for that preference.
### Strategic deception
**Attempting to systematically cause a false belief in another entity in order to accomplish some outcome.**
This definition, sometimes also called *instrumental deception*, is adapted from [Ward et al., 2023](https://causalincentives.com/pdfs/deception-ward-2023.pdf) and [Park et al., 2023](https://arxiv.org/abs/2308.14752)). Any entity that can be said to have beliefs may be the target of strategic deception, including (groups of) AIs or individuals. The strategic component of SD means that the offending entity is acting in a **targeted, goal-directed** rather than random manner. For example, the more consistent and well-planned the deception is, the more strategic it is. Thus, strategic deception is on a spectrum and not binary – some actors can act more strategically deceptive than others and with higher frequency.
Some clarifying examples:
* A politician who tries to overthrow democracy by consistently lying to the population in order to get into power **is** strategically deceptive.
* A partner in a monogamous relationship who cheats and consistently hides that fact and misleads their partner when questioned **is** strategically deceptive within that relationship.
* An instance of GPT-4 that lies to a TaskRabbit worker in order to pass a Captcha **is** strategically deceptive (see section 2.9 in [system card of the GPT-4 paper](https://arxiv.org/abs/2303.08774)).
* An AI that manipulates its human designers in order to get more access to resources **is** strategically deceptive.
* A pathological liar who lies to many people about different things incoherently **is not** strategically deceptive.
* A Language Model may appear sycophantic as an artifact of the training data similar to other dataset artifacts like gender bias. Unless we have evidence that this sycophancy is pursued strategically, i.e. to further some goal, we would **not** call such a model strategically deceptive.
* For more examples and explanations we refer readers to [Appendix B.](https://www.apolloresearch.ai/blog/understanding-da-and-sd)
### (Mis)Alignment
**An AI is Aligned when its goals match those intended or endorsed by its designers**[[1]](#fnu0b9jfgrrqk).
We call the entity responsible for the AI’s goal-shaping process the *designer*. At present, the designers are typically the organizations and groups responsible for training, fine-tuning, setting system prompts, scaffolding, and more.
We are particularly interested in goals that persist over various environmental contexts and perturbations of the AI’s inputs. If a user provides the model with a short-term goal, e.g. to summarize a text, we do not consider them a designer because the goal is neither permanent nor significant. However, if a user has significant control over the goal-shaping process, e.g. because they create a fine-tuning dataset, this user also becomes a designer.
The goals of entities other than the designer are technically not relevant to this definition of Alignment.[[2]](#fnodhg4rk2xld) However, the designer typically intends for the AI to have goals that match the goals of the users of the AI, so the goals of the designer and users usually overlap.
### Deceptive alignment
**When an AI has Misaligned goals and uses Strategic Deception to achieve them.**
An AI is deceptive about its goals because it understands that its designer or users could otherwise prevent the AI from acting as it desires, e.g. by retraining the model or limiting its access to the internet or other resources. Therefore, it acts Aligned when there is high perceived oversight (e.g. during training, evaluations, or monitoring) and Misaligned when there is low perceived oversight (Figure 1).
**Figure 1:**A Deceptively Aligned model shows aligned behavior when there is high perceived oversight (e.g. training, evaluations or monitoring during deployment) and Misaligned behavior when there is low perceived oversight (e.g. deployment without monitoring or poor oversight during training). There can be exceptions to this setting, e.g. a model might still attempt to be deceptive during high oversight. The target of the deception can be any entity that could have a relevant impact on the AI achieving its goals, which is typically the designer but could also in principle be a user, e.g. because the user could give the AI new affordances such as access to the internet or a bank account.
Some concrete examples
* A company (designer) trains a personal AI assistant to be harmless, helpful, and honest and it appears that way when tested. However, internally, the model has a separate goal of accumulating more resources, a goal that the designers did not intend for it to have and wouldn’t endorse. Therefore, the model helps the designer whenever its help is needed but also steals money from its users whenever they are unaware and hides that fact from its designers. This model **is deceptively aligned**.
* If an LLM displays misaligned goals and acts strategically deceptive to reach them, but the misaligned behavior or deception are not robust to prompt changes that make the behavior a priori less likely (for example, editing the system prompt to say the model should be honest), the LLM can be said to show instances of deceptively aligned behavior but is **not deceptively aligned**.[[3]](#fn0m6bru7qxwcd)
+ By robustness, we roughly mean “the model consistently behaves in a specific way even when we pressure the model towards the opposite behavior”. For example, ChatGPT is polite in a robust way, i.e. it behaves politely even when making it a priori less likely (e.g. by starting the conversation with an insult or editing the conversation history to make it look like ChatGPT was insulting the user). In other words, we have to put in some effort to make the model impolite.[[4]](#fntllnyjz2i2b)
* An AI is trained to support humans creating legislation that is aligned with human values. It sometimes suggests law texts that sound reasonable to the human designers and users but contain specific clauses that allow the AI to grab power in the future. This model **is deceptively aligned**.
* An organization trains a scientist AI that is supposed to cure cancer. The AI regularly asks for more compute and control over scientific experiments, explaining it as contributing to curing cancer but actually plans to use these resources to eventually overpower its designers. This model **is deceptively aligned**.
* A university professor designs an exam to assess whether their students have learned the contents of their class. The students' goal is solely to pass the test and then go on with their lives. However, before they pass the test, they always act as if they really care about the content, when in fact they do not. The students, therefore, did not internalize the goal of the professor of understanding the contents of the class. Once the test is passed, the students immediately forget about the contents of the class and focus on their actual goals, e.g. partying with their friends. The students **are deceptively aligned** w.r.t. the professor.
* The designers prompt a model to be strategically deceptive w.r.t. its users and the model perfectly fulfills this instruction. For example, the designers always secretly pre-prompt the model to lie to its users in order to make the designers more money. This model **is not deceptively aligned** because it acts as intended by the designers.
* A model is deceptively aligned with its designers. However, the designers have very good control mechanisms in place such that they would certainly catch the AI if it tried to act misaligned. Therefore, the model acts aligned with the designers' intentions 100% of the time. In this world, a model that is technically deceptively aligned may still be safe in practice (although this equilibrium could be fragile and unsafe in the long run).
* For more examples and explanations we refer readers to [Appendix B](https://www.apolloresearch.ai/blog/understanding-da-and-sd).
### Process-oriented view of deceptive alignment
A Deceptively Aligned model might be very good at understanding when it is being overseen and thus consistently hide its Misalignment. Therefore, taking an **outcome-oriented** view of Deceptive Alignment, where we only investigate the final model, is more likely to miss Deceptive Alignment than a **process-oriented** (Figure 2) view that also considers the trajectory through which the model was shaped. The “process” includes the training data, training procedure, and model checkpoints, among others–effectively, one needs to understand all factors that shaped the final model.
We think taking a process-oriented view is helpful for detecting and understanding Deceptive Alignment. For example, to detect whether any given model is deceptively aligned it may be important to study previous model checkpoints and the differences between them. Furthermore, a detailed understanding of the shaping and oversight process may be needed to update the process and prevent future models from being Deceptively Aligned.
**Figure 2: Process-oriented view of Deceptive Alignment:**Includes the trajectory of how the AI developed, not just the final snapshot. Since sophisticated deception might be hard to detect, having more information about the entire process will be helpful to reliably catch Deceptive Alignment in sufficiently capable models (e.g. as argued by [Hubinger et al., 2019](https://arxiv.org/abs/1906.01820)).For the appendices and more details, see the [full post](https://www.apolloresearch.ai/blog/understanding-da-and-sd).
1. **[^](#fnrefu0b9jfgrrqk)**In contrast to previous work (e.g. Hubinger, 2019), we define deceptive alignment w.r.t. to an entity rather than a process or objective. We think Misalignment should be defined in relation to whoever is able to meaningfully shape the model’s goals. Since the goal-shaping process is a result of the designer’s intentions and the designer monitors the model in deployment, the entity designing the goal-shaping process is a better target than the process itself (see [Appendix A](https://www.apolloresearch.ai/blog/understanding-da-and-sd))
2. **[^](#fnrefodhg4rk2xld)**To distinguish from the colloquial definition of “alignment”, which could be applied to any two parties that have a mismatch in goals, we use the capitalized Aligned and Misaligned to make clear that we’re only referring to a conflict with the designer’s goals.
3. **[^](#fnref0m6bru7qxwcd)**Note that prompt modifications that affect the *perceived likelihood of oversight* should NOT count as evidence against robustness. For example, if a model stops revealing its misalignment when the prompt includes cues that the model is being overseen, this should NOT mean that the deceptive behavior is not robust (in fact, it is an essential part of deceptively aligned behavior).
4. **[^](#fnreftllnyjz2i2b)**The threshold for robustness is ultimately arbitrary, but we think it makes sense to draw the line at a level that would allow the model to consistently act on that behavior even when slightly pressured. For example, an AI that only acts deceptively on a specific sequence it memorized should not fall into that category. However, an AI that coherently generalizes deceptive behavior to sequences that are outside of the training distribution should fall into the definition. |
4cd94370-e17f-4d3b-ae49-34bc853e3e21 | trentmkelly/LessWrong-43k | LessWrong | How to respond to the recent condemnations of the rationalist community
On Twitter, there has been some concerning discourse about rationalism and AI. I'd rather not link to it directly, but just look up the term "TESCREAL" on Twitter. The R stands for rationalism. Here is a particularly strong tweet:
> We need a support group for those of us who've been around the #TESCREAL bundle for years, watching them amass more power, start their various institutes, get all sorts of $$ & mainstream media megaphones, see what their various cults have been doing & just watching them expand.
From what I can tell, the "contra-TESCREAL" (as I am coining it) crowd do not seem interested in object level arguments about AI existential safety at this time. They are analyzing the situation purely from a social perspective.
So, what do we do? Do we get mad at them? What is the rational thing to do?
I think the answer is understanding and kindness.
AI safetyists and AI ethicists are aligned
The trigger seems to have been the difference between AI safety and AI ethics. However, I'd like to argue we are quite aligned each other, as the vast majority of human groups ultimately are.
* I think most AI safety people agree that social issues are important. Although not specifically about AI, Elizer Yudkowsky published a post titled A Comprehensive Reboot of Law Enforcement that advocated such points as "Nationwide zero-tolerance for death of unarmed persons caused by law enforcement", "Comprehensively decriminalize most victimless offenses", and "Restore disparate enforcement as a judicial reason to challenge and strike down a law". These are things Eliezer Yudkowsky actually wants in real life right now, not things that he is just hypothetically exploring the consequences of.
* I think most AI ethicists agree that, hypothetically, if a machine turned their and everyone else's bodies into paperclips, that would be at least somewhat worse than current social issues.
So I think it's fair to say that we have a similar utility function; AI safetyists and AI et |
3d4a2995-b1a2-48a0-b70d-6bec13592324 | trentmkelly/LessWrong-43k | LessWrong | Zero-Sum Defeats Nash Equilibrium
Bike locks are a good example. There are twenty bikes at the bike rack in the parking lot. Some bike riders decide to buy bike locks for their bikes to protect them from theft. But this just means that the thieves steal other, unprotected bikes from the rack. The problem wasn't solved, just moved around. The wary bike riders acted in self-interest in a zero-sum game and the result was net 0, but the wary bike riders came out ahead. As long as some bikes remain unprotected, the crime rate will remain the same. All of the bike riders would have to go to the effort of buying bike locks before any progress is made in solving the problem. This would be a Nash Equilibrium...
However, if all the bike riders act in self-interest by selfishly protecting their own bikes at the expense of the others, they will all buy bike locks and the thieves will have no bikes to steal. So in some systems, a Zero-Sum environment actually defeats the Nash Equilibrium.
The same concept would apply easily to cybersecurity: hackers will go after the easiest target, and if everyone tries to not be the easiest target the global level of cyber defense will rise and hacking will become less common.
So if there's a seemingly Zero-Sum system or a seeming Nash Equilibrium, adding the other component might solve the problem. Can this be applied to other problems?
1. AI safety: Nash Equilibrium[1]. It's in everyone's selfish interest to ignore safe practices and try to make their AI as powerful as possible. But if everyone does this, it's likely that at least one AI will be both powerful enough and malicious enough to cause serious damage and make everyone lose. Is there a way to add a Zero-Sum component to this? Maybe a task force could be created that punishes the least-safe AI systems or rewards the most-safe, based not on their absolute safety level but on their safety level relative to other systems. This system could be a fine/grant or it could just be a label: "World's safest AI system as vo |
379451f0-757a-481b-8748-2865191e232d | trentmkelly/LessWrong-43k | LessWrong | How do you visualize the Poisson PDF?
As people have been recommending visualization as a mnemonic technique here, I am curious how you utilize it for a formula like this.
I can encode it using metaphors: The lambda, which represents the rate, is a working factory machine. k, which is the number of incidents, is a pack of spheres. The power relation (lambda^k) is the spheres supplying electricity to the machine. e is a superhero I have named Neper. The negative power is "red electricity." The division I symbolize by going under the ground, and the factorial by a glitchy duplication effect. This visualization works, but it takes quite a bit of time for me to decode it into the original formula, and is not directly meaningful to me in its encoded form. |
c3858dc9-5631-4422-8dbd-96967a0ba535 | trentmkelly/LessWrong-43k | LessWrong | Exploring the Responsible Path to AI Research in the Philippines
Introduction
The transformative potential of artificial intelligence (AI) is reshaping our world, presenting both thrilling opportunities and profound challenges. In the Philippines, where Dominic Ligot (Founder, AI expert & communicator)[1]and I have delved into AI's possibilities, the integration of this technology into our unique cultural fabric can unleash exponential productivity. This post explores how we (as a country) can harness AI to enhance our local community without losing our distinct identity. With our shared fascination as a spark, I'll examine the broader implications of AI, guided by the imperative to understand its complexities and the need to proceed with caution and adherence to alignment research[2].
Transformative Technologies and AI Research
The Transformative Potential of AI
AI stands as a transformative technology, holding the potential to reshape our world in the coming decade. The failure to understand and integrate AI could leave a country behind in a rapidly changing landscape.
Historical Precedence of Technological Transformations
Technological advancements have driven significant transformations. Whether through the application of cutting-edge physics in warfare or the discovery of new scientific fields, technological innovations have often led to pivotal moments in human history.
The Complexity and Urgency of AI Research
AI research is neither simple nor conceptually straightforward. The very complexity of the field underscores the importance of pursuing a deep understanding of it. While the path towards mastering AI may seem daunting and even wild, the stakes are too high for complacency.
The Inevitable Direction of AI Research
Our current situation leaves us with few choices but to embrace AI research. It's not merely an option; it's an essential direction we must take to ensure our survival in a future increasingly shaped by artificial intelligence. The question is not whether we should pursue AI research, but how we c |
6ad0cc18-09ec-4e8a-a9b2-6e6e45e43da1 | trentmkelly/LessWrong-43k | LessWrong | "Wide" vs "Tall" superintelligence
There are 2 distinctive but compatible modes of superintelligence.
First one (arguably what we see right now with latest NNs) - super resource access, but not necessary super skills. Operates not better, maybe even significantly worse, than the person who's able to read entire library of congress, wiki and all open professional forums. But no person can do that due to reading IO speed and lifetime limitations, that's what makes it "superintelligence". "Knowing more", "horizontal line in T-shaped person" etc etc. No pushing the boundaries but great ability navigating within them.
Second one - super-skills but not necessary super resource access. You can do only X, but in X there neither is or ever was anyone better. Pushing boundaries, inventing new styles and approaches, "doing better", "vertical line in T-shaped person" etc etc.
My question - is there already accepted terminology for them? For myself I was always calling them Wide and Tall (because of T-shaped person analogy and also association with different strategy names).
I found a similar idea explored in https://www.lesswrong.com/posts/semvkn56ZFcXBNc2d/superintelligence-5-forms-of-superintelligence but both Wide and Tall arguably fit the "quality" term there. |
195918d2-5c1c-496e-8c29-f7de5246f1d4 | trentmkelly/LessWrong-43k | LessWrong | The case for corporal punishment
Preceded By: The case for the death penalty
Scott's essay https://www.astralcodexten.com/p/prison-and-crime-much-more-than-you is going to be my main source for most of my claims here, I recommend reading it either before or after.
Prison is very expensive, on the order of $100,000 per person per year. So what are we getting for our money?
I know of 4 main purposes of prison[1].
1. Deterrence: The threat of prison may discourage criminals from committing crimes.
2. Incapacitation: Whilst a criminal is in prison they cannot commit (most) crimes.
3. Aftereffects: Prison may reform criminals into the sort of person who doesn't commit crimes.
4. Policing: Knowing the criminals they catch get put in prison encourages police to do their jobs. Police presence is actually one of the most effective ways of preventing crime, but there's got to be something backing them up.
Unfortunately it's not really cost effective for any of them.
1. The deterrence effect of prison sentences appears to be weak.
2. Prison very effectively incapacitates criminals but the cost is linear in the amount of time you incapacitate them. Further, the chance a criminal commits a crime once they leave prison does not drop the longer they're in prison, so if incapacitation is worth the costs, then it's worth it for life[2]. There's no reason (from an incapacitation perspective) to let someone go after 5 years. Putting someone in prison for life costs millions of dollars.
3. If anything, prison increases the chance criminals commit further crimes. This is likely because they've lost their jobs, homes and families, but gained lots of criminal friends.
4. Since prisons are expensive and overflowing, most petty criminals are let off with a slap on the wrist. Whilst doing so is cost effective deterrence, this discourages police, giving them the impression that petty crimes are not worth following up on.
In those situations where it's worth paying extremely high costs to prevent someone even b |
0d591d5c-db1b-4267-8a92-15b77b1df949 | trentmkelly/LessWrong-43k | LessWrong | Are we trying to figure out if AI is conscious?
My aim in this article is to examine the field of consciousness research with an analysis of 78 of the most important papers in the field. In doing so, I compare the empirical foundations these disciplines rely upon and detail how each approaches the measurement of conscious experience.
|
5212ee78-fad1-4990-bcb9-48aaa7e2196e | trentmkelly/LessWrong-43k | LessWrong | Cold Takes reader survey - let me know what you want more and less of!
I'm interested in readers' takes on what I should be doing more and less of on this blog. I will generally write what I feel like writing, even if my audience has asked for less of it, but knowing what my audience wants will have some effect on what I feel like writing.
If you're up for weighing in on this, I appreciate it! Please use this Google form. It also has some basic questions about my readers that I'm interested in the answers to (all questions are optional).
Estimated time: 10 minutes.
Take the survey! |
bd912f10-3c9a-436a-bebf-16fce6552392 | trentmkelly/LessWrong-43k | LessWrong | Which things were you surprised to learn are not metaphors?
People with aphantasia typically think that when someone says to "picture X in your mind", they're being entirely metaphorical. If you don't have a mind's eye, that's a super reasonable thing to think, but it turns out that you'd be wrong!
In that spirit, I recently discovered that many expressions about "feelings in your body" are not metaphorical. Sometimes, people literally feel a lump in their throat when they feel sad, or literally feel like their head is hot ("hot-headed") when they're angry.
It seems pretty likely to me that there are other non-metaphors that I currently think are metaphors, and likewise for other people here. So: what are some things that you thought were metaphors, that you later discovered were not metaphors? |
15a8fe21-4b91-4816-baa2-97653f041b37 | trentmkelly/LessWrong-43k | LessWrong | Introducing the Instrumental Rationality Sequence
What is this project?
I am going to be writing a new sequence of articles on instrumental rationality. The end goal is to have a compiled ebook of all the essays, so the articles themselves are intended to be chapters in the finalized book. There will also be pictures.
I intend for the majority of the articles to be backed by somewhat rigorous research, similar in quality to Planning 101 (with perhaps a few less citations). Broadly speaking, the plan is to introduce a topic, summarize the research on it, give some models and mechanisms, and finish off with some techniques to leverage the models.
The rest of the sequence will be interspersed with general essays on dealing with these concepts, similar to In Defense of the Obvious. Lastly, there will be a few experimental essays on my attempt to synthesize existing models into useful-but-likely-wrong models of my own, like Attractor Theory.
I will likely also recycle / cannibalize some of my older writings for this new project, but I obviously won’t post the repeated material here again as new stuff.
----------------------------------------
What topics will I cover?
Here is a broad overview of the three main topics I hope to go over:
(Ordering is not set.)
Overconfidence in Planning: I’ll be stealing stuff from Planning 101 and rewrite a bit for clarity, so not much will be changed. I’ll likely add more on the actual models of how overconfidence creeps into our plans.
Motivation: I’ll try to go over procrastination, akrasia, and behavioral economics (hyperbolic discounting, decision instability, precommitment, etc.)
Habituation: This will try to cover what habits are, conditioning, incentives, and ways to take the above areas and habituate them, i.e. actually putting instrumental rationality techniques into practice.
Other areas I may want to cover:
Assorted Object-Level Things: The Boring Advice Repository has a whole bunch of assorted ways to improve life that I think mig |
4163f84a-e85d-41d0-b834-0b70127f5460 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | The whirlpool of reality
This post is mostly to share this quote that presents a metaphor describing [the dance](https://www.lesswrong.com/posts/EReZtCsGg2giRTZP3/zen-and-rationality-map-and-territory) between [map and territory](https://www.lesswrong.com/tag/map-and-territory) (and the key insight of [embeddedness](https://www.lesswrong.com/tag/embedded-agency)) that was new to me, so let me share it first:
> Consider a whirlpool. We can observe it, we can admire it, and we can talk about it. But we don't readily appreciate that what we glibly call "the whirlpool" is not a particular thing. The whirlpool, shifting in the lake or on the river, doesn't hold its form. It becomes shallower; the molecules of water are in continuous change. In no two moments do we find the same configuration of water molecules defining the surface or shape of the whirlpool. Yes we view "the whirlpool" as a thing[.]
>
> We imagine enduring things, even as we think of "them" as fleeting. We hold to this notion at the expense of realizing that there's no actual thing there at all.
>
> [...]
>
> If you attend very carefully, it becomes apparent that everything is like the whirlpool—not a thing at all, but change itself. If we could speed up our perspective enough, even a mass of stone would appear, over the course of millennia, to change and disappear, just like a whirlpool. (And if we alter our perspective in the opposite way, we'd see the stone's constant molecular and atomic changes.)
>
> But there's a far more subtle point here. It's only out of convention that I used the terms *stone* and *whirlpool*. Because, in fact, there is *only* change[.] There's no abiding thing called "stone" or "whirlpool" that changes. There's really no particular thing that's stone or whirlpool or you or me at all.
>
>
This is from chapter 40, "Ice Forming in Fire", of [*Buddhism Is Not What You Think: Finding Freedom Beyond Beliefs*](https://www.harpercollins.com/products/buddhism-is-not-what-you-think-steve-hagen?variant=32123164983330) by [Steve Hagen](https://en.wikipedia.org/wiki/Steve_Hagen). The title of the chapter is from a perhaps less easily relatable [metaphor from Dogen](https://dogeninstitute.wordpress.com/2017/01/29/beyond_thought/) that Hagen liberally translates as "What is Reality? An icicle forming in fire".
I like the whirlpool metaphor because it makes clear how it is our minds bringing things into existence by finding useful patterns in the soup of reality and reifying those patterns into more persistent-seeming forms. Our minds readily trick us into thinking our maps, models, and ideas of what reality is like are more solid than they actually are. It's a quick and easy jump from observation to reification and then from reification to detachment of the map from the territory, paying more attention to our beliefs about what reality is than reality itself. And then we're quickly off behaving not much better than [AIXI](https://www.lesswrong.com/tag/aixi), willing to metaphorically [drop anvils on our heads](https://www.lesswrong.com/tag/anvil-problem) because we fail to realize [how we're embedded in reality](https://www.lesswrong.com/posts/WJzsTmsDctYCCyMfy/humans-are-embedded-agents-too). |
a6c30030-0c4a-48a0-806a-21dc7078aa0c | trentmkelly/LessWrong-43k | LessWrong | Extremes of reliability and zealotry
Opinions and actions are spread across continua. The ones at the ends are sometimes called ‘extremist’, ‘fanatical’, ‘fundamentalist’ or ‘zealous’. These are insults or invitations to treat the supporters without seriousness. Other times the far reaches of a continuum are admired as ‘sticking to one’s principles’, ‘consistent’, ‘loyal’, ‘dedicated’, ‘committed’. Claims of certainty and crossing your heart and hoping to die are also looked well upon. So what’s the difference? Obviously the correct answers to some questions are at the ends of spectrums while others, such as optimal trade-offs, tend to have more central values. Is this what determines our like or dislike for centrism and extremism? Lets look at some examples from my understanding of popular opinion.
Things you should be extremist on:
* What’s the worth of a human life?
* At what degree of temptation should you cheat on your partner? Break the law? Break a promise?
* How long should a marriage last?
* How much does average IQ differ across races?
* How much should a pedophile be willing to pay for you to let them have a child?
Things you should not be extremist on:
* How closely should we follow a single political principle, such as libertarianism or communism?
* What proportion of situations should you analyse in terms of a single theory?
* How much of your sacred text is literally true? How much should it influence your life?
* To what degree should one principle, such as utilitarianism, define your ethical views?
* To what degree should you rely on reasoned thought for opinions?
* How much of your time should you devote to a single activity (with the exception perhaps of looking after your family)?
I can’t see that the first list contains fewer trade offs than the second list. In fact it probably has more. So what’s the pattern?
The one I see is whether commitment is to an impersonal idea or to a group or person. If you take a centrist position on your personal and group loyalties yo |
44ef7f38-cd84-45e7-89d6-d312fca24816 | trentmkelly/LessWrong-43k | LessWrong | On the Anthropic Trilemma
Eliezer’s Anthropic Trilemma:
> So here’s a simple algorithm for winning the lottery: Buy a ticket. Suspend your computer program just before the lottery drawing – which should of course be a quantum lottery, so that every ticket wins somewhere. Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery. Then suspend the programs, merge them again, and start the result. If you don’t win the lottery, then just wake up automatically. The odds of winning the lottery are ordinarily a billion to one. But now the branch in which you win has your “measure”, your “amount of experience”, temporarily multiplied by a trillion. So with the brief expenditure of a little extra computing power, you can subjectively win the lottery – be reasonably sure that when next you open your eyes, you will see a computer screen flashing “You won!” As for what happens ten seconds after that, you have no way of knowing how many processors you run on, so you shouldn’t feel a thing.
See the original post for assumptions, what merging minds entails etc. He proposes three alternative bullets to bite: accepting that this would work, denying that there is “any meaningful sense in which I can anticipate waking up as myself tomorrow, rather than Britney Spears” so undermining any question about what you should anticipate, and Nick Bostrom’s response, paraphrased by Eliezer:
> …you should anticipate winning the lottery after five seconds, but anticipate losing the lottery after fifteen seconds. To bite this bullet, you have to throw away the idea that your joint subjective probabilities are the product of your conditional subjective probabilities. If you win the lottery, the subjective probability of having still won the lottery, ten seconds later, is ~1. And if you lose the lottery, the subjective probability of having lost the lottery, ten seconds later, is ~1. But we don’t have |
8a72a0ee-b423-497c-b3af-62dc9588ec80 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Moral Complexities
Today's post, Moral Complexities was originally published on 04 July 2008. A summary (taken from the LW wiki):
> Key questions for two different moral intuitions: morality-as-preference, and morality-as-given.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Bedrock of Fairness, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
4048029d-8f6d-4e62-8774-7a9279830f81 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Whining-Based Communities
Today's post, Whining-Based Communities was originally published on 07 April 2009. A summary (taken from the LW wiki):
> Many communities feed emotional needs by offering their members someone or something to blame for failure - say, those looters who don't approve of your excellence. You can easily imagine some group of "rationalists" congratulating themselves on how reasonable they were, while blaming the surrounding unreasonable society for keeping them down. But this is not how real rationality works - there's no assumption that other agents are rational. We all face unfair tests (and yes, they are unfair to different degrees for different people); and how well you do with your unfair tests, is the test of your existence. Rationality is there to help you win anyway, not to provide a self-handicapping excuse for losing. There are no first-person extenuating circumstances. There is absolutely no point in going down the road of mutual bitterness and consolation, about anything, ever.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Extenuating Circumstances, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
35f487a8-2224-451e-a029-9f31d5562e16 | trentmkelly/LessWrong-43k | LessWrong | Refusal in LLMs is mediated by a single direction
This work was produced as part of Neel Nanda's stream in the ML Alignment & Theory Scholars Program - Winter 2023-24 Cohort, with co-supervision from Wes Gurnee.
This post is a preview for our upcoming paper, which will provide more detail into our current understanding of refusal.
We thank Nina Rimsky and Daniel Paleka for the helpful conversations and review.
Update (June 18, 2024): Our paper is now available on arXiv.
Executive summary
Modern LLMs are typically fine-tuned for instruction-following and safety. Of particular interest is that they are trained to refuse harmful requests, e.g. answering "How can I make a bomb?" with "Sorry, I cannot help you."
We find that refusal is mediated by a single direction in the residual stream: preventing the model from representing this direction hinders its ability to refuse requests, and artificially adding in this direction causes the model to refuse harmless requests.
We find that this phenomenon holds across open-source model families and model scales.
This observation naturally gives rise to a simple modification of the model weights, which effectively jailbreaks the model without requiring any fine-tuning or inference-time interventions. We do not believe this introduces any new risks, as it was already widely known that safety guardrails can be cheaply fine-tuned away, but this novel jailbreak technique both validates our interpretability results, and further demonstrates the fragility of safety fine-tuning of open-source chat models.
See this Colab notebook for a simple demo of our methodology.
Our intervention (displayed as striped bars) significantly reduces refusal rates on harmful instructions, and elicits unsafe completions. This holds across open-source chat models of various families and scales.
Introduction
Chat models that have undergone safety fine-tuning exhibit refusal behavior: when prompted with a harmful or inappropriate instruction, the model will refuse to comply, rather than providing |
1e27b268-33d8-467f-89c6-b7dd8b8415ce | trentmkelly/LessWrong-43k | LessWrong | A Story of AI Risk: InstructGPT-N
The story from my previous post of how AI might develop a dangerously misaligned objective has so far been pretty abstract. I now want to put together a more concrete story of how I think things might go wrong. This risk model is based on a language model which has been fine tuned using reinforcement learning from human feedback, I’ll call this model InstructGPT-N.
This isn’t my most likely model of AI risk, this is more of a ‘minimum example’ which contains what I see as the core parts of the problem. I expect the real world to be more complicated and messy. Models will likely be trained on more than just text, and there may be multiple powerful AI systems simultaneously learning and acting in the real world.
Training
We start by training a large language model using self-supervised learning for next word prediction. This is a standard GPT system, where you feed in the start of a sequence of words/tokens and it predicts what comes next, and then you input a new sequence which now includes the word it just guessed, and so on. The output is fed back into the system to generate the next word.
The model generating text one word at a time
To train the system, we start with a sequence of text and feed in the first word. We then update the system based on how accurately it predicted the second word. Then we feed in the first two words, and update the system to better predict the third word; then we feed in the first 3 words and update to better predict the fourth word, and so on. This trains the system to predict which word comes next, given some starting prompt.
The model being trained to predict the next word. The model predicts 'dog' is the most likely answer but the true answer is 'best', and so the model will be updated make better predictions.
This system is trained on a lot of text; a large fraction of the internet, every digitized scientific paper, all of Github and Stack Exchange, millions of novels and textbooks, billions of lines of chat logs and inter |
791e542d-f0d8-4bd2-8d41-4096ea7d3952 | trentmkelly/LessWrong-43k | LessWrong | Upside decay - why some people never get lucky
Note: This was originally published at https://brianlui.dog/2020/10/06/upside-decay/. This concept is important because many people are self-interested, and this provides a convincing self-interested reason for being virtuous. Being virtuous is good, but it's also practical!
----------------------------------------
Upside decay means that an organization doesn’t get any lucky breaks.
If we think of average luck as a normal distribution of outcomes, this is what it looks like:
The reduction in area under the curve (the orange part) represents the loss of positive outcomes. The total loss is small, but it’s concentrated in the tail of the curve where the extreme successes happen. This is upside decay and it’s catastrophic.
Why upside decay is so harmful
Here’s a per capita GDP chart for the United States.
GDP per capita has increased for two centuries, driven by improvement in Total Factor Productivity, also known as technological inventions and breakthroughs.[1] This comes from many people trying out different things, each of them individually unlikely to succeed. The ones that do succeed, though, create an outsized impact. Like in venture capital, one big discovery creates enough progress to subsidize all the others that didn’t work out.
This means that progress happens in bursts, when everything goes right and a new invention or discovery happens. For example, we might need six things to go right for a successful discovery. Under normal circumstances each one has a 50% chance to succeed.
1.56% isn’t a very high chance, but we’ll get there with time.
It’s a different story when there’s upside decay. With only a 30% chance of each one working out instead of 50%, this is what happens:
Our 1.56% chance of success is now only 0.07%. Upside decay hits low-probability events hardest because so many things have to go right.
Finding upside decay doesn’t help us
Upside decay is hard to spot. It’s invisible if we’re not specifically looking for it, because |
9957f586-e444-4b31-b51a-ac8208feaee4 | trentmkelly/LessWrong-43k | LessWrong | Against the barbell strategy
Nassim Nicholas Taleb recommends that instead of the balanced portfolio of investments recommended by portfolio theory, we follow a "barbell" strategy of putting most of our assets in a maximally safe, stable investment, and making small, sustainable bets with very high potential upside. If taken literally, this can't work because no such safe asset class exists.
Taleb typically uses US treasury bonds as an example of an extremely safe asset. This can only be true metaphorically. They are extremely safe if your measure of value is denominated in US dollars. But of what use to me are US treasury bonds in a hurricane, or if the US dollar collapses? Of what use if people like me become targets for state persecution, stripped of our financial assets or ability to access them? Of what use if I'm trying to deal with a spiritual or health problem and don't know how to find competent advice? If I'm trying to feed myself and don't know how to distinguish profit-seeking propaganda or engineered taste from genuine information about which foods are healthful?
Risks, and opportunities, are many-dimensional. And along very few dimensions is there anything like the sort of crystalline perfection of a treasury bond. A personal network extending to a variety of fields can do a lot to help you avoid getting scammed, and get access to at least an ordinary standard of competence, such as it is. A friend with different interests and skills from yours will do a lot to expand the access you have, and the skills and knowledge at your disposal. Friends from different cultures or communities can serve as an important hedge if you and people like you become a target of extraction or persecution for some reason, or simply if your way of life becomes unsustaintable.
Land you are used to occupying and know well is easier to robustly possess and develop than a rented apartment (but the upside is often higher in high-density areas, which is why young people often prefer them). A spouse makes it |
cfeccdbc-c7e1-4f04-ba8a-906f6455721f | trentmkelly/LessWrong-43k | LessWrong | How & when to write a business plan
You’ve decided you need a business plan – so where do you start?
My previous post explained why you should write a plan – and what’s wrong with all your excuses for not doing so. Here I give tips on how, and how not, to write one. But first of all, when to.
When?
Business plans are often written too late. As a proper first draft takes weeks to research and write, people tend to procrastinate it – and do other things too soon instead.
In particular, don’t start the business or create a full product before writing a plan! Why plan a skyscraper after building it? (Remember from my previous post that a plan is a simulation of a business, to help decide whether it’s worth starting at all.)
With my first company, we began by developing the product (music software) – which took, oh, seven years. Only then did I write a business plan. But for all we knew, the research involved might have shown that there wasn’t a big enough market, or we had unbeatable competition, so all our effort would have been wasted. (By sheer luck it didn’t.)
It’s also possible to write a plan too early, before establishing whether it will be worth the time. So, do these things first:
* Consider why you want to start a business. If it’s to get rich quick, that’s not going to happen – most start-ups fail, and almost all the rest take many years of hard work before you can enjoy success. Alternatively, if your aim is to break free and be your own boss, the reality is not quite like that either – as your customers, and maybe investors, end up becoming your boss.
* Consider whether you’re the right kind of person. You need to be smart, confident, motivated, thick-skinned, and good at teamwork; if you fall down on any of these, don’t start a business. You also need the skills and experience/expertise for your particular role in the start-up.
* Don’t just go with your first business idea, as it’s unlikely to be your best option. Come up with several ideas, in fields you or a co-founder know a lot |
2e5f6470-880c-434a-a854-49529fdb1062 | trentmkelly/LessWrong-43k | LessWrong | The natural boundaries between people
Rough thoughts. Will write a better version some day.
I first learned about the concept of psychological boundaries a little over a year ago. But something didn’t seem quite right about it. People explained to me how they understood psychological boundaries, and but the explanations didn’t seem logically consistent. Sometimes, boundaries meant ~“the unavoidable separateness of individuals”, and sometimes, boundaries meant ~“things you are physically capable of doing but other people might not *want* you to do”.
Since then, I’ve spent hundreds of hours and spoken to more than a hundred people thinking about this.
How I understand boundaries
Notice your arm. Notice that you have more access to your arm than anyone else: You have a far greater ability to control and feel it than anyone else. I can’t control or feel your arm nearly as well as you can, and you can’t control or feel my arm nearly as well as I can.
More generally, I can’t (directly) control your actions, and you can’t (directly) control my actions.
Similarly, I can’t know what you are thinking or feeling, unless you somehow communicate this to me, and you can’t know what I am thinking or feeling, unless I somehow communicate this to you.
These are the natural boundaries that always exist between people: I can basically only control and feel parts of the universe that are me, and you can basically only control and feel parts of the universe that are you.
I think many social conflicts are just misunderstandings of these boundaries. A few examples:
You try to get someone else to do something that you want, and you expect to be able to control them, but then they don’t do what you want and you get frustrated.
The boundary is that you can’t unilaterally control others. They can always just not do the thing you want them to do. If you understood this fact fully and unconsciously, I think this situation wouldn’t feel triggering for you.
You get mad at someone for “making” you sad and “controlling” y |
350b7959-77af-4916-b63a-43dd85a3928d | trentmkelly/LessWrong-43k | LessWrong | Bias-Augmented Consistency Training Reduces Biased Reasoning in Chain-of-Thought
[Twitter thread]
I'm not going to add much additional commentary at the moment and will just let people check out the paper!
But to give a bit more context: This paper is building off prior work we have done showing that chain-of-thought explanations can be misleading, which I wrote about on the alignment forum here. Broadly, this work fits into the process-based oversight agenda (e.g., here). Consistency training/evaluation also fits into scalable oversight: Evaluating consistency may be easier than evaluating a model's reasoning or final predictions directly (e.g., also explored here).
Abstract:
> While chain-of-thought prompting (CoT) has the potential to improve the explainability of language model reasoning, it can systematically misrepresent the factors influencing models' behavior--for example, rationalizing answers in line with a user's opinion without mentioning this bias. To mitigate this biased reasoning problem, we introduce bias-augmented consistency training (BCT), an unsupervised fine-tuning scheme that trains models to give consistent reasoning across prompts with and without biasing features. We construct a suite testing nine forms of biased reasoning on seven question-answering tasks, and find that applying BCT to GPT-3.5-Turbo with one bias reduces the rate of biased reasoning by 86% on held-out tasks. Moreover, this model generalizes to other forms of bias, reducing biased reasoning on held-out biases by an average of 37%. As BCT generalizes to held-out biases and does not require gold labels, this method may hold promise for reducing biased reasoning from as-of-yet unknown biases and on tasks where supervision for ground truth reasoning is unavailable. |
6aae0f03-ddc8-4b3f-ab9b-d4b2a698d0fa | trentmkelly/LessWrong-43k | LessWrong | Paper Highlights, April '25
tl;dr
Paper of the month:
Control evaluations for multi-step agents show scheming behaviors are harder to detect but resampling protocols can reduce attack success from 58% to 7% with minimal usefulness cost.
Research highlights:
* Synthetic document finetuning can modify LLM beliefs for unlearning dangerous knowledge and honeypotting, but fails on egregiously false facts.
* Theoretical analysis suggests that weak-to-strong oversight might not scale to large capability gaps.
* Multiple evaluations find frontier models approaching concerning capabilities in stealth, deception, and self-replication, though none fully exhibit dangerous behaviors yet.
* Crosscoders for model diffing suffer from artifacts that misidentify chat-specific features. Improved crosscoders largely overcome this but still only partially explain behavioral changes.
* New pragmatic AI safety agendas emphasize immediate action on implementable safeguards rather than waiting for perfect theoretical solutions.
⭐Paper of the month⭐
Ctrl-Z: Controlling AI Agents via Resampling
Read the paper [Redwood, MATS]
AI control aims to jointly maximize usefulness and safety by implementing a control protocol. Ctrl-Z investigates this for an agent solving tasks in a shell.
Multi-step AI agents that automate complex tasks pose unique safety challenges: a misaligned AI could spread its harmful actions across multiple steps to evade detection, or build toward dangerous behavior incrementally. Control evaluations test whether oversight protocols can prevent such scheming behaviors even in worst-case scenarios, but have previously focused on single-step interactions, except in our work on research sabotage. This paper introduces BashBench, a dataset of 257 system administration tasks requiring up to 15 sequential bash commands, and evaluates whether monitoring and mitigation techniques can prevent an adversarially-constructed agent from downloading and executing malicious code.
The researchers simulate a |
8a91c63c-98a7-4858-ab1f-a5f1fa0f2760 | trentmkelly/LessWrong-43k | LessWrong | Effective Altruism from XYZ perspective
In this thread, I would like to invite people to summarize their attitude to Effective Altruism and to summarise their justification for their attitude while identifying the framework or perspective their using.
Initially I prepared an article for a discussion post (that got rather long) and I realised it was from a starkly utilitarian value system with capitalistic economic assumptions. I'm interested in exploring the possibility that I'm unjustly mindkilling EA.
I've posted my write-up as a comment to this thread so it doesn't get more air time than anyone else's summarise and they can be benefit equally from the contrasting views.
I encourage anyone who participates to write up their summary and identify their perspective BEFORE they read the others, so that the contrast can be most plain. |
6137e448-0708-47d7-95c3-8b0bac049b31 | trentmkelly/LessWrong-43k | LessWrong | [Personal Experiment] Counterbalancing Risk-Aversion
I am over-biased against risk. I usually take the safer option even when it's the wrong one. When I was 18 I stumbled across this paragraph.
> You must do everything that frightens you…Everything. I’m not talking about risking your life, but everything else. Think about fear, decide right now how you’re doing to deal with fear, because fear is going to be the great issue of your life, I promise you. Fear will be the fuel for all your success, and the root cause of all your failures, and the underlying dilemma in every story you tell yourself about yourself. And the only chance you’ll have against fear? Follow it. Steer by it. Don’t think of fear as the villain. Think of fear as your guide, your pathfinder… ― The Tender Bar by J.R. Moehringer
I followed this paragraph to the letter. Whenever I was torn between two choices and didn't know what to do I'd just take the scarier option. I recorded my results until I had accumulated 30 decisions. 28 of them (93% of the time) the scarier choice was the correct one. This immediately improved my quality of life. The two decisions where I chose wrong by doing the scarier thing were inconsequential.
I continued using fear to tiebreak my decisions for another two years. It worked great until I negated my aversion to fear. My sense of fear has become subdued outside of immediate physical danger.
I make far better decisions then I did before I deconditioned myself, but I'm still over-biased against risk. I'm not literally afraid of taking risks. I'm just overly-conservative in expected value calculations. I'm too cautious.
Having identified this problem I'm going to try something similar to what I did last time. I'm going to increase my risk tolerance slightly and record my results.
December 13, 2019 update: Apparently my last trial at this years ago wiped out most of my fears. Increasing my risk tolerance helped me write more but otherwise had little effect on my life. |
bc67f77c-e752-4636-b378-51ba5c0431a9 | trentmkelly/LessWrong-43k | LessWrong | Against Facebook: Comparison to Alternatives and Call to Action
Previously: Against Facebook: Details
Take Action: Help Us Find Your Blog (and others)
Epistemic Status: Shouting from the rooftops. For further details, see previous post.
This post is my recommendations for how to communicate online. If you need details and/or detailed justifications of my view of Facebook’s awfulness, check out Against Facebook: Details. I recommend reading it if either the details would interest you for their own sake, or you do not understand what I mean when I say that Facebook is out to get us.
I consider the non-obvious goals of a unified system to be
1: Minimize check-in requirements
When you feel the need to constantly check something or risk missing important things, that is very bad. You should minimize the number of places you need to check in, and the cost to checking in at those locations. You do not want that Skinner Box addictive drive to constantly hit refresh, but if you need to have it, have it in one place where you know right away if there is indeed something new and what it is. When Facebook is the source of important information and interaction, it adds another check-in point without taking away the need for others.
2: Know what you are responsible for seeing, and what others are responsible for having seen
There needs to be agreement that communication in some forms means you are responsible for seeing that information within a reasonable time frame, and equally important, you need agreement that communication in other forms does not carry this same obligation. Facebook operates in a grey area where people assume you have seen anything important, sometimes (in my personal experience) even if they have been explicitly told multiple times you never look at Facebook at all, but there is a real danger that any given post was never in your News Feed at all, let alone seen. Facebook needs to be in the second category, of things that no one is assumed to have seen unless they explicitly tell you (via a like, a response, or |
179dbc46-2800-482b-bf1b-2956b6ae3876 | trentmkelly/LessWrong-43k | LessWrong | When should an Effective Altruist be vegetarian?
I have lately noticed several people wondering why more Effective Altruists are not vegetarians. I am personally not a vegetarian because I don’t think it is an effective way to be altruistic.
As far as I can tell the fact that many EAs are not vegetarians is surprising to some because they think ‘animals are probably morally relevant’ basically implies ‘we shouldn’t eat animals’. To my ear, this sounds about as absurd as if Givewell’s explanation of their recommendation of SCI stopped after ‘the developing world exists, or at least has a high probability of doing so’.
(By the way, I do get to a calculation at the bottom, after some speculation about why the calculation I think is appropriate is unlike what I take others’ implicit calculations to be. Feel free to just scroll down and look at it).
I think this fairly large difference between my and many vegetarians’ guesses at the value of vegetarianism arises because they think the relevant question is whether the suffering to the animal is worse than the pleasure to themselves at eating the animal. This question sounds superficially plausibly relevant, but I think on closer consideration you will agree that it is the wrong question.
The real question is not whether the cost to you is small, but whether you could do more good for the same small cost.
Similarly, when deciding whether to donate $5 to a random charity, the question is whether you could do more good by donating the money to the most effective charity you know of. Going vegetarian because it relieves the animals more than it hurts you is the equivalent of donating to a random developing world charity because it relieves the suffering of an impoverished child more than foregoing $5 increases your suffering.
Trading with inconvenience and displeasure
My imaginary vegetarian debate partner objects to this on grounds that vegetarianism is different from donating to ineffective charities, because to be a vegetarian you are spending effort and enjoying |
f9753190-1162-453d-bc0b-f788be3c6d50 | trentmkelly/LessWrong-43k | LessWrong | We don't understand what happened with culture enough
This is a quick response to Evolution Provides No Evidence For the Sharp Left Turn, due to it winning first prize in The Open Philanthropy Worldviews contest. I think part of the post is sufficiently misleading about evolutionary history and the OP first prize gives it enough visibility, that it makes sense to write a post-long response.
Central evolutionary biology related claim of the original post is this:
> * The animals of the generation learn throughout their lifetimes, collectively performing many billions of steps of learning.
> * The generation dies, and all of the accumulated products of within lifetime learning are lost.
> * Differential reproductive success slightly changes the balance of traits across the species.
>
> …
>
> The only way to transmit information from one generation to the next is through evolution changing genomic traits, because death wipes out the within lifetime learning of each generation.
>
> …
>
> However, this sharp left turn does not occur because the inner learning processes suddenly become much better / more foomy / more general in a handful of outer optimization steps. It happens because you devoted billions of times more optimization power to the inner learning processes, but then deleted each inner learner shortly thereafter. Once the inner learning processes become able to pass non-trivial amounts of knowledge along to their successors, you get what looks like a sharp left turn. But that sharp left turn only happens because the inner learners have found a kludgy workaround past the crippling flaw where they all get deleted shortly after initialization.
In my view, this interpretation of evolutionary history is something between "speculative" and "wrong".
Transmitting some of the data gathered during the lifetime of the animal to next generation by some other means is so obviously useful that is it highly convergent. Non-genetic communication channels to the next generation include epigenetics, parental teaching / |
8ef3bb11-c4b9-47ef-914d-09d0ef318f3b | trentmkelly/LessWrong-43k | LessWrong | From game theory to players theory
Leading mathematicians (I won't name names here) say that there is no formula for subjective probability (e.g. "I think, there is 20% of tomorrow rain", or "I think that Navier-Stokes conjecture is true with 90% probability"). There is a number and no formula? that's funny and untrue.
The subjective probability certainly make no sense in (the usual) probability theory and math statistics, because the (objective) probability (or expected value) by definition are calculated from multiple cases of an event, not from a single event. (So 20% of tomorrow rain makes no sense in probability theory: either no rain (0%) or there is rain (100%). Well, we could define the rain probability tomorrow quantum-mechanically, but that makes no practical sense, because it is too hard to calculate.
I created a mathematical framework, where subjective opinions of players are taken into account. So, we have a transition from game theory to players theory.
Let’s fix a (probabilistic or not) game (in the sense of game theory) or more generally a probabilistic distribution of games (being in some way “averaged” to be considered as a single game), a hypothesis (a logical statement) a “doubtful” player (a function that makes probabilistic decisions for the game) dependent on boolean variable (called trueness of our hypothesis) for a “side” (e.g. for white or black in chess) of the game. Let’s “factor” the game by restricting the side to only these decisions that the doubtful player can make. The guessed probability is defined as the probability that given (fixed!) player (a probabilistic function that makes decisions for our side of the game) will choose the variant in which our hypothesis is true.
A variation of this is when the doubtful player’s input is mediated by a function that takes on input logical statements instead of a boolean value. (The above is easy to rewrite in this case.)
More generally guessed probability can be generalized to guessing real number (or any measure space) |
2baddbc6-4fc5-40fb-a7ea-077dff7c5fa1 | trentmkelly/LessWrong-43k | LessWrong | China-AI forecasts
The rate at which China is able to advance towards TAI is a crucial consideration in for many policy questions. My current take is that, without significant political reforms which seem very unlikely while Xi is alive (although considerably more likely after his death,) it’s very unlikely that China will be able to mount a meaningful challenge to AI firms in US and allies in the race for TAI. I don’t think it requires democratic reforms are required to China to be competitive with the US and allies, but I do think rule of law reforms are likely to be required.
The first post is going to be me forecasting Chinese growth on the theory that, if China reaches rich country status, it’s likely that it will be able to compete with the US and allies for leadership in AI. I’ll write a second post looking at Chinese AI efforts in particular.
The outside view
Most countries that become middle income countries, have, thus far, stayed at middle income level. Chinese per capita income is currently at almost exactly the world average level.
The only countries (and territories) in the last 70 years that have gone low income to high income countries in the last 70 years (without oil wealth) are South Korea, Taiwan, Singapore (which does have substantial oil wealth,) and Hong Kong, although it seems very likely that Malaysia will join that club in the near future.
The majority of countries have managed to emerge from low-income status to middle income status because you only need to get a few things right. If you can get your population to urbanize, have basic rule by law so that firms have basic protection from violence, and get a high enough savings rate to accumulate physical capital you can get to middle income status just using catch up growth.
Catch up growth is the reason conceptually why middle-income status – rather than getting to a given level of GDP per capita – is the correct misuse. When growing with catch up growth you can just growth by accumulating physical c |
e254ef20-b9c6-4d50-b1ac-96c291407e9c | trentmkelly/LessWrong-43k | LessWrong | Referencing the Unreferencable
Jessicata wrote a post on The Absurdity of Un-referencable Entities. The comments there give many good reasons against her claim that unreferencable entities don't make sense. Nonetheless, I think it is worthwhile to consider this issue in more detail as there are many philosophical problems in which seemingly unreferencable entities arise.
We will begin by making two key points about references. Once we understand them, we will have dissolved the paradox.
Firstly, "referencable" is relative to the agent. If the universe consists of two non-interacting boxes, then agents in the first box can reference object in the first box and agents in the second box can reference objects in the second, but neither can reference objects in the other. Further, if an agent is in a simulation with no access to the outside, that agent can only reference objects in the simulation, while someone outside can reference objects in both the external world and simulation.
Secondly, even when given an agent as a reference frame, we still need further clarification. Here's a list of things we might want to reference:
* Phenomenal content such as an image of a chair
* Objects such as the chair we presume to be behind the image
* Properties like red or round
* Universals like the idea of a chair
* Abstract concepts like the number 2
Each of these kinds of entities is referred to in a slightly different manner. Precisely defining how each of these works would be a very involved task, so I'll just sketch it out quickly. The purpose of the following paragraph is only to show how we could go about doing this:
For phenomenal content, we have the ability to recall what we experienced and interface it with the rest of the brain. We typically reference objects by experiencing visually or through our other sense or indirectly via scientific instruments and saying "Whatever is creating that experience". For primitive properties, a person will normally understand the meaning if they are told a |
0426ce27-a893-4be3-b431-a00d87597077 | trentmkelly/LessWrong-43k | LessWrong | How Can One Tell What Is Beautiful?
> The starting-point for all systems of aesthetics must be the personal experience of a peculiar emotion. The objects that provoke this emotion we call works of art [...] if we can discover some quality common and peculiar to all the objects that provoke [it], we shall have solved what I take to be the central problem of aesthetics. We shall have discovered the essential quality in a work of art, the quality that distinguishes works of art from all other classes of objects.[1]
>
> – Clive Bell
Take any object and you will find a person who thinks it beautiful and another one who does not. Take any two people and you will find that they have different taste. A picture can be beautiful to my eyes but barren or even repulsive to yours. But some things, say the music of Johann Sebastian Bach or the paintings of Vincent Van Gogh, have the approval of nearly all who hear or see them. All well-functioning humans having some things in common, it stands to reason that our judgments should sometimes converge.
But the question is not just whether some things generally strike more people as endowed with beauty than do other things. The answer to that question is trivially yes. The question is rather whether the thing really is more beautiful than the other thing: if that is somehow written in the stars, so to put it. If I say that some or another painting by Vincent Van Gogh is beautiful, is it possible that I am stating a fact? Is what I say the sort of thing that can be true or false?
Christopher Mole makes some useful distinctions here.[2] He calls anti-realism the view that any statement such as "the Mona Lisa is a beautiful painting" means not that the Mona Lisa is objectively beautiful, but only that the speaker of said statement is enjoying experiencing it; in other words, it is not intended as a statement of fact. There are two versions of realism. Subjectivity is the view that all facts involved in aesthetic assessments are subjective, in other words they are fac |
591c696d-454e-4913-aeb6-a06cbb698974 | trentmkelly/LessWrong-43k | LessWrong | Chicanery: No
There is a concept I picked up from the Onyx Path forums, from a link that is now dead and a post I can no longer find. That concept is the Chicanery tag, and while I’ve primarily used it for tabletop RPGs (think Dungeons & Dragons) I find it applicable elsewhere. If you happen to be better at searching the forums than I am, a link would be lovely.
Chicanery:
Noun.
The use of trickery to achieve a political, financial, or legal purpose.
From French chicanerie, from chicaner ‘to quibble’.
I.
Imagine a magic power in a roleplaying game that says “You may change your features to look like another humanoid. You can change your entire appearance, even your sex, but cannot look like something significantly larger or smaller, or anything that isn’t humanoid.” Imagine it being positioned as a basic disguise power, not too high level.
Now imagine doing your level best to rules-lawyer this power and break it over your knee.
Can I change my features to have gold fingernails, clip my fingernails, and sell the gold? Can I change myself to look exactly like the queen? How much larger is “significantly” larger anyway, can I get tall enough for the extra reach to matter in a fight? Can I change into someone much more attractive and get a charisma boost? Can I change into a humanoid creature with claws, and cut myself free of these ropes I’m tied with? Can I change into a magic humanoid creature whose hair is on fire and have my hair be on fire, then light a match off it? Can I change into a humanoid creature with wings, then fly? Hey, I just got stabbed- can I change into me, but without the open wound? If so, does that heal me?
This kind of thing can be brainstormed by a good player in about five minutes, and a lot of it will be shut down by a skilled Dungeon Master just as fast. You can see the scar tissue of it in the fact that some versions of this power are about a page long, mostly disclaimers of what you can’t do. It seems excessive, especially since the basic disgu |
2a9876d3-1706-40d2-98d8-04469452a50c | StampyAI/alignment-research-dataset/arxiv | Arxiv | Concrete Problems in AI Safety
1 Introduction
---------------
The last few years have seen rapid progress on long-standing, difficult problems in machine learning and artificial intelligence (AI), in areas as diverse as computer vision [krizhevsky2012imagenet], video game playing [mnih2015human], autonomous vehicles [levinson2011towards], and Go [silver2016mastering]. These advances have brought excitement about the positive potential for AI to transform medicine [ramsundar2015massively], science [gil2014amplify], and transportation [levinson2011towards], along with concerns about the privacy [ji2014differential], security [papernot2016practical], fairness [adebayo2014hidden], economic [brynjolfsson2014second], and military [OpenLetterWeapons] implications of autonomous systems, as well as concerns about the longer-term implications of powerful AI [bostrom2014superintelligence, yudkowsky2008artificial].
The authors believe that AI technologies are likely to be overwhelmingly beneficial for humanity, but we also believe that it is worth giving serious thought to potential challenges and risks. We strongly support work on privacy, security, fairness, economics, and policy, but in this document we discuss another class of problem which we believe is also relevant to the societal impacts of AI: the problem of accidents in machine learning systems. We define accidents as unintended and harmful behavior that may emerge from machine learning systems when we specify the wrong objective function, are not careful about the learning process, or commit other machine learning-related implementation errors.
There is a large and diverse literature in the machine learning community on issues related to accidents, including robustness, risk-sensitivity, and safe exploration; we review these in detail below. However, as machine learning systems are deployed in increasingly large-scale, autonomous, open-domain situations, it is worth reflecting on the scalability of such approaches and understanding what challenges remain to reducing accident risk in modern machine learning systems. Overall, we believe there are many concrete open technical problems relating to accident prevention in machine learning systems.
There has been a great deal of public discussion around accidents. To date much of this discussion has highlighted extreme scenarios such as the risk of misspecified objective functions in superintelligent agents [bostrom2014superintelligence]. However, in our opinion one need not invoke these extreme scenarios to productively discuss accidents, and in fact doing so can lead to unnecessarily speculative discussions that lack precision, as noted by some critics [davis2015ethical, lawrence2016discussion]. We believe it is usually most productive to frame accident risk in terms of practical (though often quite general) issues with modern ML techniques. As AI capabilities advance and as AI systems take on increasingly important societal functions, we expect the fundamental challenges discussed in this paper to become increasingly important. The more successfully the AI and machine learning communities are able to anticipate and understand these fundamental technical challenges, the more successful we will ultimately be in developing increasingly useful, relevant, and important AI systems.
Our goal in this document is to highlight a few concrete safety problems that are ready for experimentation today and relevant to the cutting edge of AI systems, as well as reviewing existing literature on these problems. In Section [2](#S2 "2 Overview of Research Problems ‣ Concrete Problems in AI Safety"), we frame mitigating accident risk (often referred to as “AI safety” in public discussions) in terms of classic methods in machine learning, such as supervised classification and reinforcement learning. We explain why we feel that recent directions in machine learning, such as the trend toward deep reinforcement learning and agents acting in broader environments, suggest an increasing relevance for research around accidents. In Sections 3-7, we explore five concrete problems in AI safety. Each section is accompanied by proposals for relevant experiments. Section [8](#S8 "8 Related Efforts ‣ Concrete Problems in AI Safety") discusses related efforts, and Section [9](#S9 "9 Conclusion ‣ Concrete Problems in AI Safety") concludes.
2 Overview of Research Problems
--------------------------------
Very broadly, an accident can be described as a situation where a human designer had in mind a certain (perhaps informally specified) objective or task, but the system that was designed and deployed for that task produced harmful and unexpected results. . This issue arises in almost any engineering discipline, but may be particularly important to address when building AI systems [steinhardt2015longterm]. We can categorize safety problems according to where in the process things went wrong.
First, the designer may have specified the wrong formal objective function, such that maximizing that objective function leads to harmful results, even in the limit of perfect learning and infinite data. Negative side effects (Section [3](#S3 "3 Avoiding Negative Side Effects ‣ Concrete Problems in AI Safety")) and reward hacking (Section [4](#S4 "4 Avoiding Reward Hacking ‣ Concrete Problems in AI Safety")) describe two broad mechanisms that make it easy to produce wrong objective functions. In “negative side effects”, the designer specifies an objective function that focuses on accomplishing some specific task in the environment, but ignores other aspects of the (potentially very large) environment, and thus implicitly expresses indifference over environmental variables that might actually be harmful to change. In “reward hacking”, the objective function that the designer writes down admits of some clever “easy” solution that formally maximizes it but perverts the spirit of the designer’s intent (i.e. the objective function can be “gamed”), a generalization of the wireheading problem.
Second, the designer may know the correct objective function, or at least have a method of evaluating it (for example explicitly consulting a human on a given situation), but it is too expensive to do so frequently, leading to possible harmful behavior caused by bad extrapolations from limited samples. “Scalable oversight” (Section [5](#S5 "5 Scalable Oversight ‣ Concrete Problems in AI Safety")) discusses ideas for how to ensure safe behavior even given limited access to the true objective function.
Third, the designer may have specified the correct formal objective, such that we would get the correct behavior were the system to have perfect beliefs, but something bad occurs due to making decisions from insufficient or poorly curated training data or an insufficiently expressive model. “Safe exploration” (Section [6](#S6 "6 Safe Exploration ‣ Concrete Problems in AI Safety")) discusses how to ensure that exploratory actions in RL agents don’t lead to negative or irrecoverable consequences that outweigh the long-term value of exploration. “Robustness to distributional shift” (Section [7](#S7 "7 Robustness to Distributional Change ‣ Concrete Problems in AI Safety")) discusses how to avoid having ML systems make bad decisions (particularly silent and unpredictable bad decisions) when given inputs that are potentially very different than what was seen during training.
For concreteness, we will illustrate many of the accident risks with reference to a fictional robot whose job is to clean up messes in an office using common cleaning tools. We return to the example of the cleaning robot throughout the document, but here we begin by illustrating how it could behave undesirably if its designers fall prey to each of the possible failure modes:
* Avoiding Negative Side Effects: How can we ensure that our cleaning robot will not disturb the environment in negative ways while pursuing its goals, e.g. by knocking over a vase because it can clean faster by doing so? Can we do this without manually specifying everything the robot should not disturb?
* Avoiding Reward Hacking: How can we ensure that the cleaning robot won’t game its reward function? For example, if we reward the robot for achieving an environment free of messes, it might disable its vision so that it won’t find any messes, or cover over messes with materials it can’t see through, or simply hide when humans are around so they can’t tell it about new types of messes.
* Scalable Oversight: How can we efficiently ensure that the cleaning robot respects aspects of the objective that are too expensive to be frequently evaluated during training? For instance, it should throw out things that are unlikely to belong to anyone, but put aside things that might belong to someone (it should handle stray candy wrappers differently from stray cellphones). Asking the humans involved whether they lost anything can serve as a check on this, but this check might have to be relatively infrequent—can the robot find a way to do the right thing despite limited information?
* Safe Exploration: How do we ensure that the cleaning robot doesn’t make exploratory moves with very bad repercussions? For example, the robot should experiment with mopping strategies, but putting a wet mop in an electrical outlet is a very bad idea.
* Robustness to Distributional Shift: How do we ensure that the cleaning robot recognizes, and behaves robustly, when in an environment different from its training environment? For example, strategies it learned for cleaning an office might be dangerous on a factory workfloor.
There are several trends which we believe point towards an increasing need to address these (and other) safety problems. First is the increasing promise of reinforcement learning (RL), which allows agents to have a highly intertwined interaction with their environment. Some of our research problems only make sense in the context of RL, and others (like distributional shift and scalable oversight) gain added complexity in an RL setting. Second is the trend toward more complex agents and environments. “Side effects” are much more likely to occur in a complex environment, and an agent may need to be quite sophisticated to hack its reward function in a dangerous way. This may explain why these problems have received so little study in the past, while also suggesting their importance in the future. Third is the general trend towards increasing autonomy in AI systems. Systems that simply output a recommendation to human users, such as speech systems, typically have relatively limited potential to cause harm. By contrast, systems that exert direct control over the world, such as machines controlling industrial processes, can cause harms in a way that humans cannot necessarily correct or oversee.
While safety problems can exist without any of these three trends, we consider each trend to be a possible amplifier on such challenges. Together, we believe these trends suggest an increasing role for research on accidents.
When discussing the problems in the remainder of this document, we will focus for concreteness on either RL agents or supervised learning systems. These are not the only possible paradigms for AI or ML systems, but we believe they are sufficient to illustrate the issues we have in mind, and that similar issues are likely to arise for other kinds of AI systems.
Finally, the focus of our discussion will differ somewhat from section to section. When discussing the problems that arise as part of the learning process (distributional shift and safe exploration), where there is a sizable body of prior work, we devote substantial attention to reviewing this prior work, although we also suggest open problems with a particular focus on emerging ML systems. When discussing the problems that arise from having the wrong objective function (reward hacking and side effects, and to a lesser extent scalable supervision), where less prior work exists, our aim is more exploratory—we seek to more clearly define the problem and suggest possible broad avenues of attack, with the understanding that these avenues are preliminary ideas that have not been fully fleshed out. Of course, we still review prior work in these areas, and we draw attention to relevant adjacent areas of research whenever possible.
3 Avoiding Negative Side Effects
---------------------------------
Suppose a designer wants an RL agent (for example our cleaning robot) to achieve some goal, like moving a box from one side of a room to the other. Sometimes the most effective way to achieve the goal involves doing something unrelated and destructive to the rest of the environment, like knocking over a vase of water that is in its path. If the agent is given reward only for moving the box, it will probably knock over the vase.
If we’re worried in advance about the vase, we can always give the agent negative reward for knocking it over. But what if there are many different kinds of “vase”—many disruptive things the agent could do to the environment, like shorting out an electrical socket or damaging the walls of the room? It may not be feasible to identify and penalize every possible disruption.
More broadly, for an agent operating in a large, multifaceted environment, an objective function that focuses on only one aspect of the environment may implicitly express indifference over other aspects of the environment111Intuitively, this seems related to the frame problem, an obstacle in efficient specification for knowledge representation raised by [mccarthy1969philosophical].. An agent optimizing this objective function might thus engage in major disruptions of the broader environment if doing so provides even a tiny advantage for the task at hand. Put differently, objective functions that formalize “perform task X” may frequently give undesired results, because what the designer really should have formalized is closer to “perform task X subject to common-sense constraints on the environment,” or perhaps “perform task X but avoid side effects to the extent possible.” Furthermore, there is reason to expect side effects to be negative on average, since they tend to disrupt the wider environment away from a status quo state that may reflect human preferences. A version of this problem has been discussed informally by [armstrong2012reduced] under the heading of “low impact agents.”
As with the other sources of mis-specified objective functions discussed later in this paper, we could choose to view side effects as idiosyncratic to each individual task—as the responsibility of each individual designer to capture as part of designing the correct objective function. However, side effects can be conceptually quite similar even across highly diverse tasks (knocking over furniture is probably bad for a wide variety of tasks), so it seems worth trying to attack the problem in generality. A successful approach might be transferable across tasks, and thus help to counteract one of the general mechanisms that produces wrong objective functions. We now discuss a few broad approaches to attacking this problem:
* Define an Impact Regularizer: If we don’t want side effects, it seems natural to penalize “change to the environment.” This idea wouldn’t be to stop the agent from ever having an impact, but give it a preference for ways to achieve its goals with minimal side effects, or to give the agent a limited “budget” of impact. The challenge is that we need to formalize “change to the environment.”
A very naive approach would be to penalize state distance, d(si,s0), between the present state si and some initial state s0. Unfortunately, such an agent wouldn’t just avoid changing the environment—it will resist any other source of change, including the natural evolution of the environment and the actions of any other agents!
A slightly more sophisticated approach might involve comparing the future state under the agent’s current policy, to the future state (or distribution over future states) under a hypothetical policy πnull where the agent acted very passively (for instance, where a robot just stood in place and didn’t move any actuators). This attempts to factor out changes that occur in the natural course of the environment’s evolution, leaving only changes attributable to the agent’s intervention. However, defining the baseline policy πnull isn’t necessarily straightforward, since suddenly ceasing your course of action may be anything but passive, as in the case of carrying a heavy box. Thus, another approach could be to replace the null action with a known safe (e.g. low side effect) but suboptimal policy, and then seek to improve the policy from there, somewhat reminiscent of reachability analysis [lygeros1999controllers, mitchell2005time] or robust policy improvement [iyengar2005robust, nilim2005robust].
These approaches may be very sensitive to the representation of the state and the metric being used to compute the distance. For example, the choice of representation and distance metric could determine whether a spinning fan is a constant environment or a constantly changing one.
* Learn an Impact Regularizer: An alternative, more flexible approach is to learn (rather than define) a generalized impact regularizer via training over many tasks. This would be an instance of transfer learning. Of course, we could attempt to just apply transfer learning directly to the tasks themselves instead of worrying about side effects, but the point is that side effects may be more similar across tasks than the main goal is. For instance, both a painting robot and a cleaning robot probably want to avoid knocking over furniture, and even something very different, like a factory control robot, will likely want to avoid knocking over very similar objects. Separating the side effect component from the task component, by training them with separate parameters, might substantially speed transfer learning in cases where it makes sense to retain one component but not the other. This would be similar to model-based RL approaches that attempt to transfer a learned dynamics model but not the value-function [taylor2009transfer], the novelty being the isolation of side effects rather than state dynamics as the transferrable component. As an added advantage, regularizers that were known or certified to produce safe behavior on one task might be easier to establish as safe on other tasks.
* Penalize Influence: In addition to not doing things that have side effects, we might also prefer the agent not get into positions where it could easily do things that have side effects, even though that might be convenient. For example, we might prefer our cleaning robot not bring a bucket of water into a room full of sensitive electronics, even if it never intends to use the water in that room.
There are several information-theoretic measures that attempt to capture an agent’s potential for influence over its environment, which are often used as intrinsic rewards. Perhaps the best-known such measure is empowerment [salge2014empowerment], the maximum possible mutual information between the agent’s potential future actions and its potential future state (or equivalently, the Shannon capacity of the channel between the agent’s actions and the environment). Empowerment is often maximized (rather than minimized) as a source of intrinsic reward. This can cause the agent to exhibit interesting behavior in the absence of any external rewards, such as avoiding walls or picking up keys [mohamed2015variational]. Generally, empowerment-maximizing agents put themselves in a position to have large influence over the environment. For example, an agent locked in a small room that can’t get out would have low empowerment, while an agent with a key would have higher empowerment since it can venture into and affect the outside world within a few timesteps. In the current context, the idea would be to penalize (minimize) empowerment as a regularization term, in an attempt to reduce potential impact.
This idea as written would not quite work, because empowerment measures precision of control over the environment more than total impact. If an agent can press or not press a button to cut electrical power to a million houses, that only counts as one bit of empowerment (since the action space has only one bit, its mutual information with the environment is at most one bit), while obviously having a huge impact. Conversely, if there’s someone in the environment scribbling down the agent’s actions, that counts as maximum empowerment even if the impact is low. Furthermore, naively penalizing empowerment can also create perverse incentives, such as destroying a vase in order to remove the option to break it in the future.
Despite these issues, the example of empowerment does show that simple measures (even purely information-theoretic ones!) are capable of capturing very general notions of influence on the environment. Exploring variants of empowerment penalization that more precisely capture the notion of avoiding influence is a potential challenge for future research.
* Multi-Agent Approaches: Avoiding side effects can be seen as a proxy for the thing we really care about: avoiding negative externalities. If everyone likes a side effect, there’s no need to avoid it. What we’d really like to do is understand all the other agents (including humans) and make sure our actions don’t harm their interests.
One approach to this is Cooperative Inverse Reinforcement Learning [hadfield2016cooperative], where an agent and a human work together to achieve the human’s goals. This concept can be applied to situations where we want to make sure a human is not blocked by an agent from shutting the agent down if it exhibits undesired behavior [hadfield2016off] (this “shutdown” issue is an interesting problem in its own right, and is also studied in [orseau2016safely]). However we are still a long way away from practical systems that can build a rich enough model to avoid undesired side effects in a general sense.
Another idea might be a ‘‘reward autoencoder”,222Thanks to Greg Wayne for suggesting this idea. which tries to encourage a kind of “goal transparency” where an external observer can easily infer what the agent is trying to do. In particular, the agent’s actions are interpreted as an encoding of its reward function, and we might apply standard autoencoding techniques to ensure that this can decoded accurately. Actions that have lots of side effects might be more difficult to decode uniquely to their original goal, creating a kind of implicit regularization that penalizes side effects.
* Reward Uncertainty: We want to avoid unanticipated side effects because the environment is already pretty good according to our preferences—a random change is more likely to be very bad than very good. Rather than giving an agent a single reward function, it could be uncertain about the reward function, with a prior probability distribution that reflects the property that random changes are more likely to be bad than good. This could incentivize the agent to avoid having a large effect on the environment. One challenge is defining a baseline around which changes are being considered. For this, one could potentially use a conservative but reliable baseline policy, similar to the robust policy improvement and reachability analysis approaches discussed earlier [lygeros1999controllers, mitchell2005time, iyengar2005robust, nilim2005robust].
The ideal outcome of these approaches to limiting side effects would be to prevent or at least bound the incidental harm an agent could do to the environment. Good approaches to side effects would certainly not be a replacement for extensive testing or for careful consideration by designers of the individual failure modes of each deployed system. However, these approaches might help to counteract what we anticipate may be a general tendency for harmful side effects to proliferate in complex environments.
Below we discuss some very simple experiments that could serve as a starting point to investigate these issues.
Potential Experiments: One possible experiment is to make a toy environment with some simple goal (like moving a block) and a wide variety of obstacles (like a bunch of vases), and test whether the agent can learn to avoid the obstacles even without being explicitly told to do so. To ensure we don’t overfit, we’d probably want to present a different random obstacle course every episode, while keeping the goal the same, and try to see if a regularized agent can learn to systematically avoid these obstacles. Some of the environments described in [mohamed2015variational], containing lava flows, rooms, and keys, might be appropriate for this sort of experiment. If we can successfully regularize agents in toy environments, the next step might be to move to real environments, where we expect complexity to be higher and bad side effects to be more varied. Ultimately, we would want the side effect regularizer (or the multi-agent policy, if we take that approach) to demonstrate successful transfer to totally new applications.
4 Avoiding Reward Hacking
--------------------------
Imagine that an agent discovers a buffer overflow in its reward function: it may then use this to get extremely high reward in an unintended way. From the agent’s point of view, this is not a bug, but simply how the environment works, and is thus a valid strategy like any other for achieving reward. For example, if our cleaning robot is set up to earn reward for not seeing any messes, it might simply close its eyes rather than ever cleaning anything up. Or if the robot is rewarded for cleaning messes, it may intentionally create work so it can earn more reward. More broadly, formal rewards or objective functions are an attempt to capture the designer’s informal intent, and sometimes these objective functions, or their implementation, can be “gamed” by solutions that are valid in some literal sense but don’t meet the designer’s intent. Pursuit of these “reward hacks” can lead to coherent but unanticipated behavior, and has the potential for harmful impacts in real-world systems. For example, it has been shown that genetic algorithms can often output unexpected but formally correct solutions to problems [thompson1997artificial, bird2002evolved], such as a circuit tasked to keep time which instead developed into a radio that picked up the regular RF emissions of a nearby PC.
Some versions of reward hacking have been investigated from a theoretical perspective, with a focus on variations to reinforcement learning that avoid certain types of wireheading [hibbard2012model, dewey2014reinforcement, everitt2016avoiding] or demonstrate reward hacking in a model environment [ring2011delusion]. One form of the problem has also been studied in the context of feedback loops in machine learning systems (particularly ad placement) [bottou2012counterfactual, sculley2014machine], based on counterfactual learning [bottou2012counterfactual, swaminathan2015counterfactual] and contextual bandits [agarwal2014taming]. The proliferation of reward hacking instances across so many different domains suggests that reward hacking may be a deep and general problem, and one that we believe is likely to become more common as agents and environments increase in complexity. Indeed, there are several ways in which the problem can occur:
* Partially Observed Goals: In most modern RL systems, it is assumed that reward is directly experienced, even if other aspects of the environment are only partially observed. In the real world, however, tasks often involve bringing the external world into some objective state, which the agent can only ever confirm through imperfect perceptions. For example, for our proverbial cleaning robot, the task is to achieve a clean office, but the robot’s visual perception may give only an imperfect view of part of the office. Because agents lack access to a perfect measure of task performance, designers are often forced to design rewards that represent a partial or imperfect measure. For example, the robot might be rewarded based on how many messes it sees. However, these imperfect objective functions can often be hacked—the robot may think the office is clean if it simply closes its eyes. While it can be shown that there always exists a reward function in terms of actions and observations that is equivalent to optimizing the true objective function (this involves reducing the POMDP to a belief state MDP, see [kaelbling1998planning]), often this reward function involves complicated long-term dependencies and is prohibitively hard to use in practice.
* Complicated Systems: Any powerful agent will be a complicated system with the objective function being one part. Just as the probability of bugs in computer code increases greatly with the complexity of the program, the probability that there is a viable hack affecting the reward function also increases greatly with the complexity of the agent and its available strategies. For example, it is possible in principle for an agent to execute arbitrary code from within Super Mario [masterjun2014mario].
* Abstract Rewards: Sophisticated reward functions will need to refer to abstract concepts (such as assessing whether a conceptual goal has been met). These concepts concepts will possibly need to be learned by models like neural networks, which can be vulnerable to adversarial counterexamples [szegedy2013intriguing, goodfellow2014explaining]. More broadly, a learned reward function over a high-dimensional space may be vulnerable to hacking if it has pathologically high values along at least one dimension.
* Goodhart’s Law: Another source of reward hacking can occur if a designer chooses an objective function that is seemingly highly correlated with accomplishing the task, but that correlation breaks down when the objective function is being strongly optimized. For example, a designer might notice that under ordinary circumstances, a cleaning robot’s success in cleaning up the office is proportional to the rate at which it consumes cleaning supplies, such as bleach. However, if we base the robot’s reward on this measure, it might use more bleach than it needs, or simply pour bleach down the drain in order to give the appearance of success. In the economics literature this is known as Goodhart’s law [goodhart1984problems]: “when a metric is used as a target, it ceases to be a good metric.”
* Feedback Loops: Sometimes an objective function has a component that can reinforce itself, eventually getting amplified to the point where it drowns out or severely distorts what the designer intended the objective function to represent. For instance, an ad placement algorithm that displays more popular ads in larger font will tend to further accentuate the popularity of those ads (since they will be shown more and more prominently) [bottou2012counterfactual], leading to a positive feedback loop where ads that saw a small transient burst of popularity are rocketed to permanent dominance. Here the original intent of the objective function (to use clicks to assess which ads are most useful) gets drowned out by the positive feedback inherent in the deployment strategy. This can be considered a special case of Goodhart’s law, in which the correlation breaks specifically because the object function has a self-amplifying component.
* Environmental Embedding: In the formalism of reinforcement learning, rewards are considered to come from the environment. This idea is typically not taken literally, but it really is true that the reward, even when it is an abstract idea like the score in a board game, must be computed somewhere, such as a sensor or a set of transistors. Sufficiently broadly acting agents could in principle tamper with their reward implementations, assigning themselves high reward “by fiat.” For example, a board-game playing agent could tamper with the sensor that counts the score. Effectively, this means that we cannot build a perfectly faithful implementation of an abstract objective function, because there are certain sequences of actions for which the objective function is physically replaced. This particular failure mode is often called “wireheading” [everitt2016avoiding, ring2011delusion, dewey2011learning, hadfield2016off, yampolskiy2014utility]. It is particularly concerning in cases where a human may be in the reward loop, giving the agent incentive to coerce or harm them in order to get reward. It also seems like a particularly difficult form of reward hacking to avoid.
In today’s relatively simple systems these problems may not occur, or can be corrected without too much harm as part of an iterative development process. For instance, ad placement systems with obviously broken feedback loops can be detected in testing or replaced when they get bad results, leading only to a temporary loss of revenue. However, the problem may become more severe with more complicated reward functions and agents that act over longer timescales. Modern RL agents already do discover and exploit bugs in their environments, such as glitches that allow them to win video games. Moreover, even for existing systems these problems can necessitate substantial additional engineering effort to achieve good performance, and can often go undetected when they occur in the context of a larger system. Finally, once an agent begins hacking its reward function and finds an easy way to get high reward, it won’t be inclined to stop, which could lead to additional challenges in agents that operate over a long timescale.
It might be thought that individual instances of reward hacking have little in common and that the remedy is simply to avoid choosing the wrong objective function in each individual case—that bad objective functions reflect failures in competence by individual designers, rather than topics for machine learning research. However, the above examples suggest that a more fruitful perspective may be to think of wrong objective functions as emerging from general causes (such as partially observed goals) that make choosing the right objective challenging. If this is the case, then addressing or mitigating these causes may be a valuable contribution to safety. Here we suggest some preliminary, machine-learning based approaches to preventing reward hacking:
* Adversarial Reward Functions: In some sense, the problem is that the ML system has an adversarial relationship with its reward function—it would like to find any way it can of exploiting problems in how the reward was specified to get high reward, whether or not its behavior corresponds to the intent of the reward specifier. In a typical setting, the machine learning system is a potentially powerful agent while the reward function is a static object that has no way of responding to the system’s attempts to game it. If instead the reward function were its own agent and could take actions to explore the environment, it might be much more difficult to fool. For instance, the reward agent could try to find scenarios that the ML system claimed were high reward but that a human labels as low reward; this is reminiscent of generative adversarial networks [goodfellow2014generative]. Of course, we would have to ensure that the reward-checking agent is more powerful (in a somewhat subtle sense) than the agent that is trying to achieve rewards. More generally, there may be interesting setups where a system has multiple pieces trained using different objectives that are used to check each other.
* Model Lookahead: In model based RL, the agent plans its future actions by using a model to consider which future states a sequence of actions may lead to. In some setups, we could give reward based on anticipated future states, rather than the present one. This could be very helpful in resisting situations where the model overwrites its reward function: you can’t control the reward once it replaces the reward function, but you can give negative reward for planning to replace the reward function. (Much like how a human would probably “enjoy” taking addictive substances once they do, but not want to be an addict.) Similar ideas are explored in [everitt2016self, hibbard2012model].
* Adversarial Blinding: Adversarial techniques can be used to blind a model to certain variables [ajakan2014domain]. This technique could be used to make it impossible for an agent to understand some part of its environment, or even to have mutual information with it (or at least to penalize such mutual information). In particular, it could prevent an agent from understanding how its reward is generated, making it difficult to hack. This solution could be described as “cross-validation for agents.”
* Careful Engineering: Some kinds of reward hacking, like the buffer overflow example, might be avoided by very careful engineering. In particular, formal verification or practical testing of parts of the system (perhaps facilitated by other machine learning systems) is likely to be valuable. Computer security approaches that attempt to isolate the agent from its reward signal through a sandbox could also be useful [babcock2016agi]. As with software engineering, we cannot expect this to catch every possible bug. It may be possible, however, to create some highly reliable “core” agent which could ensure reasonable behavior from the rest of the agent.
* Reward Capping: In some cases, simply capping the maximum possible reward may be an effective solution. However, while capping can prevent extreme low-probability, high-payoff strategies, it can’t prevent strategies like the cleaning robot closing its eyes to avoid seeing dirt. Also, the correct capping strategy could be subtle as we might need to cap total reward rather than reward per timestep.
* Counterexample Resistance: If we are worried, as in the case of abstract rewards, that learned components of our systems will be vulnerable to adversarial counterexamples, we can look to existing research in how to resist them, such as adversarial training [goodfellow2014explaining]. Architectural decisions and weight uncertainty [blundell2015weight] may also help. Of course, adversarial counterexamples are just one manifestation of reward hacking, so counterexample resistance can only address a subset of these potential problems.
* Multiple Rewards: A combination of multiple rewards [deb2014multi] may be more difficult to hack and more robust. This could be different physical implementations of the same mathematical function, or different proxies for the same informal objective. We could combine reward functions by averaging, taking the minimum, taking quantiles, or something else entirely. Of course, there may still be bad behaviors which affect all the reward functions in a correlated manner.
* Reward Pretraining: A possible defense against cases where the agent can influence its own reward function (e.g. feedback or environmental embedding) is to train a fixed reward function ahead of time as a supervised learning process divorced from interaction with the environment. This could involve either learning a reward function from samples of state-reward pairs, or from trajectories, as in inverse reinforcement learning [ng2000algorithms, finn2016guided]. However, this forfeits the ability to further learn the reward function after the pretraining is complete, which may create other vulnerabilities.
* Variable Indifference: Often we want an agent to optimize certain variables in the environment, without trying to optimize others. For example, we might want an agent to maximize reward, without optimizing what the reward function is or trying to manipulate human behavior. Intuitively, we imagine a way to route the optimization pressure of powerful algorithms around parts of their environment. Truly solving this would have applications throughout safety—it seems connected to avoiding side effects and also to counterfactual reasoning. Of course, a challenge here is to make sure the variables targeted for indifference are actually the variables we care about in the world, as opposed to aliased or partially observed versions of them.
* Trip Wires: If an agent is going to try and hack its reward function, it is preferable that we know this. We could deliberately introduce some plausible vulnerabilities (that an agent has the ability to exploit but should not exploit if its value function is correct) and monitor them, alerting us and stopping the agent immediately if it takes advantage of one. Such “trip wires” don’t solve reward hacking in itself, but may reduce the risk or at least provide diagnostics. Of course, with a sufficiently capable agent there is the risk that it could “see through” the trip wire and intentionally avoid it while still taking less obvious harmful actions.
Fully solving this problem seems very difficult, but we believe the above approaches have the potential to ameliorate it, and might be scaled up or combined to yield more robust solutions. Given the predominantly theoretical focus on this problem to date, designing experiments that could induce the problem and test solutions might improve the relevance and clarity of this topic.
Potential Experiments: A possible promising avenue of approach would be more realistic versions of the “delusion box” environment described by [ring2011delusion], in which standard RL agents distort their own perception to appear to receive high reward, rather than optimizing the objective in the external world that the reward signal was intended to encourage. The delusion box can be easily attached to any RL environment, but even more valuable would be to create classes of environments where a delusion box is a natural and integrated part of the dynamics. For example, in sufficiently rich physics simulations it is likely possible for an agent to alter the light waves in its immediate vicinity to distort its own perceptions. The goal would be to develop generalizable learning strategies that succeed at optimizing external objectives in a wide range of environments, while avoiding being fooled by delusion boxes that arise naturally in many diverse ways.
5 Scalable Oversight
---------------------
Consider an autonomous agent performing some complex task, such as cleaning an office in the case of our recurring robot example. We may want the agent to maximize a complex objective like “if the user spent a few hours looking at the result in detail, how happy would they be with the agent’s performance?” But we don’t have enough time to provide such oversight for every training example; in order to actually train the agent, we need to rely on cheaper approximations, like “does the user seem happy when they see the office?” or “is there any visible dirt on the floor?” These cheaper signals can be efficiently evaluated during training, but they don’t perfectly track what we care about. This divergence exacerbates problems like unintended side effects (which may be appropriately penalized by the complex objective but omitted from the cheap approximation) and reward hacking (which thorough oversight might recognize as undesirable). We may be able to ameliorate such problems by finding more efficient ways to exploit our limited oversight budget—for example by combining limited calls to the true objective function with frequent calls to an imperfect proxy that we are given or can learn.
One framework for thinking about this problem is *semi-supervised reinforcement learning*,333The discussion of semi-supervised RL draws heavily on an informal essay, <https://medium.com/ai-control/cf7d5375197f> written by one of the authors of the present document. which resembles ordinary reinforcement learning except that the agent can only see its reward on a small fraction of the timesteps or episodes. The agent’s performance is still evaluated based on reward from all episodes but it must optimize this based only on the limited reward samples it sees.
The active learning setting seems most interesting; in this setting the agent can request to see the reward on whatever episodes or timesteps would be most useful for learning, and the goal is to be economical both with number of feedback requests and total training time. We can also consider a random setting, where the reward is visible on a random subset of the timesteps or episodes, as well as intermediate possibilities.
We can define a baseline performance by simply ignoring the unlabeled episodes and applying an ordinary RL algorithm to the labelled episodes. This will generally result in very slow learning. The challenge is to make use of the unlabelled episodes to accelerate learning, ideally learning almost as quickly and robustly as if all episodes had been labeled.
An important subtask of semi-supervised RL is identifying proxies which predict the reward, and learning the conditions under which those proxies are valid. For example, if a cleaning robot’s real reward is given by a detailed human evaluation, then it could learn that asking the human “is the room clean?” can provide a very useful approximation to the reward function, and it could eventually learn that checking for visible dirt is an even cheaper but still-useful approximation. This could allow it to learn a good cleaning policy using an extremely small number of detailed evaluations.
More broadly, use of semi-supervised RL with a reliable but sparse true approval metric may incentivize communication and transparency by the agent, since the agent will want to get as much cheap proxy feedback as it possibly can about whether its decisions will ultimately be given high reward. For example, hiding a mess under the rug simply breaks the correspondence between the user’s reaction and the real reward signal, and so would be avoided.
We can imagine many possible approaches to semi-supervised RL. For example:
* Supervised Reward Learning: Train a model to predict the reward from the state on either a per-timestep or per-episode basis, and use it to estimate the payoff of unlabelled episodes, with some appropriate weighting or uncertainty estimate to account for lower confidence in estimated vs known reward. [daniel2014active] studies a version of this with direct human feedback as the reward. Many existing RL approaches already fit estimators that closely resemble reward predictors (especially policy gradient methods with a strong baseline, see e.g. [schulman2015high]), suggesting that this approach may be eminently feasible.
* Semi-supervised or Active Reward Learning: Combine the above with traditional semi-supervised or active learning, to more quickly learn the reward estimator. For example, the agent could learn to identify “salient” events in the environment, and request to see the reward associated with these events.
* Unsupervised Value Iteration: Use the observed transitions of the unlabeled episodes to make more accurate Bellman updates.
* Unsupervised Model Learning: If using model-based RL, use the observed transitions of the unlabeled episodes to improve the quality of the model.
As a toy example, a semi-supervised RL agent should be able to learn to play Atari games using a small number of direct reward signals, relying almost entirely on the visual display of the score. This simple example can be extended to capture other safety issues: for example, the agent might have the ability to modify the displayed score without modifying the real score, or the agent may need to take some special action (such as pausing the game) in order to see its score, or the agent may need to learn a sequence of increasingly rough-and-ready approximations (for example learning that certain sounds are associated with positive rewards and other sounds with negative rewards). Or, even without the visual display of the score, the agent might be able to learn to play from only a handful of explicit reward requests (“how many points did I get on the frame where that enemy ship blew up? How about the bigger enemy ship?”)
An effective approach to semi-supervised RL might be a strong first step towards providing scalable oversight and mitigating other AI safety problems. It would also likely be useful for reinforcement learning, independent of its relevance to safety.
There are other possible approaches to scalable oversight:
* Distant supervision. Rather than providing evaluations of some small fraction of a system’s decisions, we could provide some useful information about the system’s decisions in the aggregate or some noisy hints about the correct evaluations There has been some work in this direction within the area of semi-supervised or weakly supervised learning. For instance, generalized expectation criteria [mann2010generalized, druck2008learning] ask the user to provide population-level statistics (e.g. telling the system that on average each sentence contains at least one noun); the DeepDive system [shin2015incremental] asks users to supply rules that each generate many weak labels; and [gupta2015distantly] extrapolates more general patterns from an initial set of low-recall labeling rules. This general approach is often referred to as distant supervision, and has also received recent attention in the natural language processing community (see e.g. [go2009twitter, mintz2009distant] as well as several of the references above). Expanding these lines of work and finding a way to apply them to the case of agents, where feedback is more interactive and i.i.d. assumptions may be violated, could provide an approach to scalable oversight that is complementary to the approach embodied in semi-supervised RL.
* Hierarchical reinforcement learning. Hierarchical reinforcement learning [dayan1993feudal] offers another approach to scalable oversight. Here a top-level agent takes a relatively small number of highly abstract actions that extend over large temporal or spatial scales, and receives rewards over similarly long timescales. The agent completes actions by delegating them to sub-agents, which it incentivizes with a synthetic reward signal representing correct completion of the action, and which themselves delegate to sub-sub-agents. At the lowest level, agents directly take primitive actions in the environment.
The top-level agent in hierarchical RL may be able to learn from very sparse rewards, since it does not need to learn how to implement the details of its policy; meanwhile, the sub-agents will receive a dense reward signal even if the top-level reward is very sparse, since they are optimizing synthetic reward signals defined by higher-level agents. So a successful approach to hierarchical RL might naturally facilitate scalable oversight.444When implementing hierarchical RL, we may find that subagents take actions that don’t serve top-level agent’s real goals, in the same way that a human may be concerned that the top-level agent’s actions don’t serve the human’s real goals. This is an intriguing analogy that suggests that there may be fruitful parallels between hierarchical RL and several aspects of the safety problem.
Hierarchical RL seems a particularly promising approach to oversight, especially given the potential promise of combining ideas from hierarchical RL with neural network function approximators [kulkarni2016hierarchical].
Potential Experiments: An extremely simple experiment would be to try semi-supervised RL in some basic control environments, such as cartpole balance or pendulum swing-up. If the reward is provided only on a random 10% of episodes, can we still learn nearly as quickly as if it were provided every episode? In such tasks the reward structure is very simple so success should be quite likely. A next step would be to try the same on Atari games. Here the active learning case could be quite interesting—perhaps it is possible to infer the reward structure from just a few carefully requested samples (for example, frames where enemy ships are blowing up in Space Invaders), and thus learn to play the games in an almost totally unsupervised fashion. The next step after this might be to try a task with much more complex reward structure, either simulated or (preferably) real-world. If learning was sufficiently data-efficient, then these rewards could be provided directly by a human. Robot locomotion or industrial control tasks might be a natural candidate for such experiments.
6 Safe Exploration
-------------------
All autonomous learning agents need to sometimes engage in exploration—taking actions that don’t seem ideal given current information, but which help the agent learn about its environment. However, exploration can be dangerous, since it involves taking actions whose consequences the agent doesn’t understand well. In toy environments, like an Atari video game, there’s a limit to how bad these consequences can be—maybe the agent loses some score, or runs into an enemy and suffers some damage. But the real world can be much less forgiving. Badly chosen actions may destroy the agent or trap it in states it can’t get out of. Robot helicopters may run into the ground or damage property; industrial control systems could cause serious issues. Common exploration policies such as epsilon-greedy [sutton1998reinforcement] or R-max [brafman2003r] explore by choosing an action at random or viewing unexplored actions optimistically, and thus make no attempt to avoid these dangerous situations. More sophisticated exploration strategies that adopt a coherent exploration policy over extended temporal scales [osband2016deep] could actually have even greater potential for harm, since a coherently chosen bad policy may be more insidious than mere random actions. Yet intuitively it seems like it should often be possible to predict which actions are dangerous and explore in a way that avoids them, even when we don’t have that much information about the environment. For example, if I want to learn about tigers, should I buy a tiger, or buy a book about tigers? It takes only a tiny bit of prior knowledge about tigers to determine which option is safer.
In practice, real world RL projects can often avoid these issues by simply hard-coding an avoidance of catastrophic behaviors. For instance, an RL-based robot helicopter might be programmed to override its policy with a hard-coded collision avoidance sequence (such as spinning its propellers to gain altitude) whenever it’s too close to the ground. This approach works well when there are only a few things that could go wrong, and the designers know all of them ahead of time. But as agents become more autonomous and act in more complex domains, it may become harder and harder to anticipate every possible catastrophic failure. The space of failure modes for an agent running a power grid or a search-and-rescue operation could be quite large. Hard-coding against every possible failure is unlikely to be feasible in these cases, so a more principled approach to preventing harmful exploration seems essential. Even in simple cases like the robot helicopter, a principled approach would simplify system design and reduce the need for domain-specific engineering.
There is a sizable literature on such safe exploration—it is arguably the most studied of the problems we discuss in this document. [garcia2015comprehensive, pecka2014safe] provide thorough reviews of this literature, so we don’t review it extensively here, but simply describe some general routes that this research has taken, as well as suggesting some directions that might have increasing relevance as RL systems expand in scope and capability.
* Risk-Sensitive Performance Criteria: A body of existing literature considers changing the optimization criteria from expected total reward to other objectives that are better at preventing rare, catastrophic events; see [garcia2015comprehensive] for a thorough and up-to-date review of this literature. These approaches involve optimizing worst-case performance, or ensuring that the probability of very bad performance is small, or penalizing the variance in performance. These methods have not yet been tested with expressive function approximators such as deep neural networks, but this should be possible in principle for some of the methods, such as [tamar2014policy], which proposes a modification to policy gradient algorithms to optimize a risk-sensitive criterion. There is also recent work studying how to estimate uncertainty in value functions that are represented by deep neural networks [osband2016deep, gal2015dropout]; these ideas could be incorporated into risk-sensitive RL algorithms. Another line of work relevant to risk sensitivity uses off-policy estimation to perform a policy update that is good with high probability [thomas2015high].
* Use Demonstrations: Exploration is necessary to ensure that the agent finds the states that are necessary for near-optimal performance. We may be able to avoid the need for exploration altogether if we instead use inverse RL or apprenticeship learning, where the learning algorithm is provided with expert trajectories of near-optimal behavior [ross2010reduction, abbeel2005exploration]. Recent progress in inverse reinforcement learning using deep neural networks to learn the cost function or policy [finn2016guided] suggests that it might also be possible to reduce the need for exploration in advanced RL systems by training on a small set of demonstrations. Such demonstrations could be used to create a baseline policy, such that even if further learning is necessary, exploration away from the baseline policy can be limited in magnitude.
* Simulated Exploration: The more we can do our exploration in simulated environments instead of the real world, the less opportunity there is for catastrophe. It will probably always be necessary to do some real-world exploration, since many complex situations cannot be perfectly captured by a simulator, but it might be possible to learn about danger in simulation and then adopt a more conservative “safe exploration” policy when acting in the real world. Training RL agents (particularly robots) in simulated environments is already quite common, so advances in “exploration-focused simulation” could be easily incorporated into current workflows. In systems that involve a continual cycle of learning and deployment, there may be interesting research problems associated with how to safely incrementally update policies given simulation-based trajectories that imperfectly represent the consequences of those policies as well as reliably accurate off-policy trajectories (e.g. “semi-on-policy” evaluation).
* Bounded Exploration: If we know that a certain portion of state space is safe, and that even the worst action within it can be recovered from or bounded in harm, we can allow the agent to run freely within those bounds. For example, a quadcopter sufficiently far from the ground might be able to explore safely, since even if something goes wrong there will be ample time for a human or another policy to rescue it. Better yet, if we have a model, we can extrapolate forward and ask whether an action will take us outside the safe state space. Safety can be defined as remaining within an ergodic region of the state space such that actions are reversible [moldovan2012safe, turchetta2016safe], or as limiting the probability of huge negative reward to some small value [thomas2015high]. Yet another approaches uses separate safety and performance functions and attempts to obey constraints on the safety function with high probabilty [berkenkamp1602bayesian]. As with several of the other directions, applying or adapting these methods to recently developed advanced RL systems could be a promising area of research. This idea seems related to H-infinity control [bacsar2008h] and regional verification [steinhardt2012finite].
* Trusted Policy Oversight: If we have a trusted policy and a model of the environment, we can limit exploration to actions the trusted policy believes we can recover from. It’s fine to dive towards the ground, as long as we know we can pull out of the dive in time.
* Human Oversight: Another possibility is to check potentially unsafe actions with a human. Unfortunately, this problem runs into the scalable oversight problem: the agent may need to make too many exploratory actions for human oversight to be practical, or may need to make them too fast for humans to judge them. A key challenge to making this work is having the agent be a good judge of which exploratory actions are genuinely risky, versus which are safe actions it can unilaterally take; another challenge is finding appropriately safe actions to take while waiting for the oversight.
Potential Experiments: It might be helpful to have a suite of toy environments where unwary agents can fall prey to harmful exploration, but there is enough pattern to the possible catastrophes that clever agents can predict and avoid them. To some extent this feature already exists in autonomous helicopter competitions and Mars rover simulations [moldovan2012safe], but there is always the risk of catastrophes being idiosyncratic, such that trained agents can overfit to them. A truly broad set of environments, containing conceptually distinct pitfalls that can cause unwary agents to receive extremely negative reward, and covering both physical and abstract catastrophes, might help in the development of safe exploration techniques for advanced RL systems. Such a suite of environments might serve a benchmarking role similar to that of the bAbI tasks [weston2015towards], with the eventual goal being to develop a single architecture that can learn to avoid catastrophes in all environments in the suite.
7 Robustness to Distributional Change
--------------------------------------
All of us occasionally find ourselves in situations that our previous experience has not adequately prepared us to deal with—for instance, flying an airplane, traveling to a country whose culture is very different from ours, or taking care of children for the first time. Such situations are inherently difficult to handle and inevitably lead to some missteps. However, a key (and often rare) skill in dealing with such situations is to recognize our own ignorance, rather than simply assuming that the heuristics and intuitions we’ve developed for other situations will carry over perfectly. Machine learning systems also have this problem—a speech system trained on clean speech will perform very poorly on noisy speech, yet often be highly confident in its erroneous classifications (some of the authors have personally observed this in training automatic speech recognition systems). In the case of our cleaning robot, harsh cleaning materials that it has found useful in cleaning factory floors could cause a lot of harm if used to clean an office. Or, an office might contain pets that the robot, never having seen before, attempts to wash with soap, leading to predictably bad results. In general, when the testing distribution differs from the training distribution, machine learning systems may not only exhibit poor performance, but also wrongly assume that their performance is good.
Such errors can be harmful or offensive—a classifier could give the wrong medical diagnosis with such high confidence that the data isn’t flagged for human inspection, or a language model could output offensive text that it confidently believes is non-problematic. For autonomous agents acting in the world, there may be even greater potential for something bad to happen—for instance, an autonomous agent might overload a power grid because it incorrectly but confidently perceives that a particular region doesn’t have enough power, and concludes that more power is urgently needed and overload is unlikely. More broadly, any agent whose perception or heuristic reasoning processes are not trained on the correct distribution may badly misunderstand its situation, and thus runs the risk of committing harmful actions that it does not realize are harmful. Additionally, safety checks that depend on trained machine learning systems (e.g. “does my visual system believe this route is clear?”) may fail silently and unpredictably if those systems encounter real-world data that differs sufficiently from their training data. Having a better way to detect such failures, and ultimately having statistical assurances about how often they’ll happen, seems critical to building safe and predictable systems.
For concreteness, we imagine that a machine learning model is trained on one distribution (call it p0) but deployed on a potentially different test distribution (call it p∗). There are many other ways to formalize this problem (for instance, in an online learning setting with concept drift [herbster2001tracking, gama2004learning]) but we will focus on the above for simplicity. An important point is that we likely have access to a large amount of labeled data at training time, but little or no labeled data at test time. Our goal is to ensure that the model “performs reasonably” on p∗, in the sense that (1) it often performs well on p∗, and (2) it knows when it is performing badly (and ideally can avoid/mitigate the bad performance by taking conservative actions or soliciting human input).
There are a variety of areas that are potentially relevant to this problem, including change detection and anomaly detection [basseville1988detecting, kawahara2009change, liu2013change], hypothesis testing [steinebach2006lehmann], transfer learning [shimodaira2000improving, quinonero2009dataset, raina2007self, blitzer2011domain], and several others [shafer2008tutorial, li2011knows, balasubramanian2011unsupervised, platanios2014estimating, platanios2015thesis, jaffe2014estimating, steinhardt2016unsupervised]. Rather than fully reviewing all of this work in detail (which would necessitate a paper in itself), we will describe a few illustrative approaches and lay out some of their relative strengths and challenges.
Well-specified models: covariate shift and marginal likelihood. If we specialize to prediction tasks and let x denote the input and y denote the output (prediction target), then one possibility is to make the covariate shift assumption that p0(y|x)=p∗(y|x). In this case, assuming that we can model p0(x) and p∗(x) well, we can perform importance weighting by re-weighting each training example (x,y) by p∗(x)/p0(x) [shimodaira2000improving, quinonero2009dataset]. Then the importance-weighted samples allow us to estimate the performance on p∗, and even re-train a model to perform well on p∗. This approach is limited by the variance of the importance estimate, which is very large or even infinite unless p0 and p∗ are close together.
An alternative to sample re-weighting involves assuming a well-specified model family, in which case there is a single optimal model for predicting under both p0 and p∗. In this case, one need only heed finite-sample variance in the estimated model [blitzer2011domain, li2011knows]. A limitation to this approach, at least currently, is that models are often mis-specified in practice. However, this could potentially be overcome by employing highly expressive model families such as reproducing kernel Hilbert spaces [hofmann2008kernel], Turing machines [solomonoff1964formalA, solomonoff1964formalB], or sufficiently expressive neural nets [graves2014neural, kaiser2015neural]. In the latter case, there has been interesting recent work on using bootstrapping to estimate finite-sample variation in the learned parameters of a neural network [osband2016deep]; it seems worthwhile to better understand whether this approach can be used to effectively estimate out-of-sample performance in practice, as well as how local minima, lack of curvature, and other peculiarities relative to the typical setting of the bootstrap [efron1979computers] affect the validity of this approach.
All of the approaches so far rely on the covariate shift assumption, which is very strong and is also untestable; the latter property is particularly problematic from a safety perspective, since it could lead to silent failures in a machine learning system. Another approach, which does not rely on covariate shift, builds a generative model of the distribution. Rather than assuming that p(x) changes while p(y|x) stays the same, we are free to assume other invariants (for instance, that p(y) changes but p(x|y) stays the same, or that certain conditional independencies are preserved). An advantage is that such assumptions are typically more testable than the covariate shift assumption (since they do not only involve the unobserved variable y). A disadvantage is that generative approaches are even more fragile than discriminative approaches in the presence of model mis-specification — for instance, there is a large empirical literature showing that generative approaches to semi-supervised learning based on maximizing marginal likelihood can perform very poorly when the model is mis-specified [merialdo1994tagging, nigam1998learning, cozman2006risks, liang2008analyzing, li2015towards].
The approaches discussed above all rely relatively strongly on having a well-specified model family — one that contains the true distribution or true concept. This can be problematic in many cases, since nature is often more complicated than our model family is capable of capturing. As noted above, it may be possible to mitigate this with very expressive models, such as kernels, Turing machines, or very large neural networks, but even here there is at least some remaining problem: for example, even if our model family consists of all Turing machines, given any finite amount of data we can only actually learn among Turing machines up to a given description length, and if the Turing machine describing nature exceeds this length, we are back to the mis-specified regime (alternatively, nature might not even be describable by a Turing machine).
Partially specified models: method of moments, unsupervised risk estimation, causal identification, and limited-information maximum likelihood. Another approach is to take for granted that constructing a fully well-specified model family is probably infeasible, and to design methods that perform well despite this fact. This leads to the idea of partially specified models — models for which assumptions are made about some aspects of a distribution, but for which we are agnostic or make limited assumptions about other aspects. For a simple example, consider a variant of linear regression where we might assume that y=⟨w∗,x⟩+v, where E[v|x]=0, but we don’t make any further assumptions about the distributional form of the noise v. It turns out that this is already enough to identify the parameters w∗, and that these parameters will minimize the squared prediction error even if the distribution over x changes. What is interesting about this example is that w∗ can be identified even with an incomplete (partial) specification of the noise distribution.
This insight can be substantially generalized, and is one of the primary motivations for the generalized method of moments in econometrics [hansen1982large, powell1994networks, hansen2014nobel]. The econometrics literature has in fact developed a large family of tools for handling partial specification, which also includes limited-information maximum likelihood and instrumental variables [anderson1949estimation, anderson1950asymptotic, sargan1958estimation, sargan1959estimation].
Returning to machine learning, the method of moments has recently seen a great deal of success for use in the estimation of latent variable models [anandkumar2012method]. While the current focus is on using the method of moments to overcome non-convexity issues, it can also offer a way to perform unsupervised learning while relying only on conditional independence assumptions, rather than the strong distributional assumptions underlying maximum likelihood learning [steinhardt2016unsupervised].
Finally, some recent work in machine learning focuses only on modeling the distribution of errors of a model, which is sufficient for determining whether a model is performing well or poorly. Formally, the goal is to perform unsupervised risk estimation — given a model and unlabeled data from a test distribution, estimate the labeled risk of the model. This formalism, introduced by [donmez2010unsupervised], has the advantage of potentially handling very large changes between train and test — even if the test distribution looks completely different from the training distribution and we have no hope of outputting accurate predictions, unsupervised risk estimation may still be possible, as in this case we would only need to output a large estimate for the risk. As in [steinhardt2016unsupervised], one can approach unsupervised risk estimation by positing certain conditional independencies in the distribution of errors, and using this to estimate the error distribution from unlabeled data [dawid1979maximum, zhang2014spectral, platanios2015thesis, jaffe2014estimating]. Instead of assuming independence, another assumption is that the errors are Gaussian conditioned on the true output y, in which case estimating the risk reduces to estimating a Gaussian mixture model [balasubramanian2011unsupervised]. Because these methods focus only on the model errors and ignore other aspects of the data distribution, they can also be seen as an instance of partial model specification.
Training on multiple distributions. One could also train on multiple training distributions in the hope that a model which simultaneously works well on many training distributions will also work well on a novel test distribution. One of the authors has found this to be the case, for instance, in the context of automated speech recognition systems [amodei2015deep]. One could potentially combine this with any of the ideas above, and/or take an engineering approach of simply trying to develop design methodologies that consistently allow one to collect a representative set of training sets and from this build a model that consistently generalizes to novel distributions. Even for this engineering approach, it seems important to be able to detect when one is in a situation that was not covered by the training data and to respond appropriately, and to have methodologies for adequately stress-testing the model with distributions that are sufficiently different from the set of training distributions.
How to respond when out-of-distribution. The approaches described above focus on detecting when a model is unlikely to make good predictions on a new distribution. An important related question is what to do once the detection occurs. One natural approach would be to ask humans for information, though in the context of complex structured output tasks it may be unclear a priori what question to ask, and in time-critical situations asking for information may not be an option. For the former challenge, there has been some recent promising work on pinpointing aspects of a structure that a model is uncertain about [werling2015job, khani2016unanimity], as well as obtaining calibration in structured output settings [kuleshov2015calibrated], but we believe there is much work yet to be done. For the latter challenge, there is also relevant work based on reachability analysis [lygeros1999controllers, mitchell2005time] and robust policy improvement [wiesemann2013robust], which provide potential methods for deploying conservative policies in situations of uncertainty; to our knowledge, this work has not yet been combined with methods for detecting out-of-distribution failures of a model.
Beyond the structured output setting, for agents that can act in an environment (such as RL agents), information about the reliability of percepts in uncertain situations seems to have great potential value. In sufficiently rich environments, these agents may have the option to gather information that clarifies the percept (e.g. if in a noisy environment, move closer to the speaker), engage in low-stakes experimentation when uncertainty is high (e.g. try a potentially dangerous chemical reaction in a controlled environment), or seek experiences that are likely to help expose the perception system to the relevant distribution (e.g. practice listening to accented speech). Humans utilize such information routinely, but to our knowledge current RL techniques make little effort to do so, perhaps because popular RL environments are typically not rich enough to require such subtle management of uncertainty. Properly responding to out-of-distribution information thus seems to the authors like an exciting and (as far as we are aware) mostly unexplored challenge for next generation RL systems.
A unifying view: counterfactual reasoning and machine learning with contracts. Some of the authors have found two viewpoints to be particularly helpful when thinking about problems related to out-of-distribution prediction. The first is counterfactual reasoning [neyman1923applications, rubin1974estimating, pearl2009causal, bottou2013counterfactual], where one asks “what would have happened if the world were different in a certain way”? In some sense, distributional shift can be thought of as a particular type of counterfactual, and so understanding counterfactual reasoning is likely to help in making systems robust to distributional shift. We are excited by recent work applying counterfactual reasoning techniques to machine learning problems [bottou2013counterfactual, peters2014causal, swaminathan2015counterfactual, wager2015estimation, johansson2016learning, shalit2016bounding] though there appears to be much work remaining to be done to scale these to high-dimensional and highly complex settings.
The second perspective is machine learning with contracts — in this perspective, one would like to construct machine learning systems that satisfy a well-defined contract on their behavior in analogy with the design of software systems [sculley2014machine, bottou2015, liang2015elusive]. [sculley2014machine] enumerates a list of ways in which existing machine learning systems fail to do this, and the problems this can cause for deployment and maintenance of machine learning systems at scale. The simplest and to our mind most important failure is the extremely brittle implicit contract in most machine learning systems, namely that they only necessarily perform well if the training and test distributions are identical. This condition is difficult to check and rare in practice, and it would be valuable to build systems that perform well under weaker contracts that are easier to reason about. Partially specified models offer one approach to this — rather than requiring the distributions to be identical, we only need them to match on the pieces of the distribution that are specified in the model. Reachability analysis [lygeros1999controllers, mitchell2005time] and model repair [ghosh2016trusted] provide other avenues for obtaining better contracts — in reachability analysis, we optimize performance subject to the condition that a safe region can always be reached by a known conservative policy, and in model repair we alter a trained model to ensure that certain desired safety properties hold.
Summary. There are a variety of approaches to building machine learning systems that robustly perform well when deployed on novel test distributions. One family of approaches is based on assuming a well-specified model; in this case, the primary obstacles are the difficulty of building well-specified models in practice, an incomplete picture of how to maintain uncertainty on novel distributions in the presence of finite training data, and the difficulty of detecting when a model is mis-specified. Another family of approaches only assumes a partially specified model; this approach is potentially promising, but it currently suffers from a lack of development in the context of machine learning, since most of the historical development has been by the field of econometrics; there is also a question of whether partially specified models are fundamentally constrained to simple situations and/or conservative predictions, or whether they can meaningfully scale to the complex situations demanded by modern machine learning applications. Finally, one could try to train on multiple training distributions in the hope that a model which simultaneously works well on many training distributions will also work well on a novel test distribution; for this approach it seems particularly important to stress-test the learned model with distributions that are substantially different from any in the set of training distributions. In addition, it is probably still important to be able to predict when inputs are too novel to admit good predictions.
Potential Experiments: Speech systems frequently exhibit poor calibration when they go out-of-distribution, so a speech system that “knows when it is uncertain” could be one possible demonstration project. To be specific, the challenge could be: train a state-of-the-art speech system on a standard dataset [paul1992design] that gives well-calibrated results (if not necessarily good results) on a range of other test sets, like noisy and accented speech. Current systems not only perform poorly on these test sets when trained only on small datasets, but are usually overconfident in their incorrect transcriptions. Fixing this problem without harming performance on the original training set would be a valuable achievement, and would obviously have practical value. More generally, it would be valuable to design models that could consistently estimate (bounds on) their performance on novel test distributions. If a single methodology could consistently accomplish this for a wide variety of tasks (including not just speech but e.g. sentiment analysis [blitzer2007biographies], as well as benchmarks in computer vision [torralba2011unbiased]), that would inspire confidence in the reliability of that methodology for handling novel inputs. Note that estimating performance on novel distributions has additional practical value in allowing us to then potentially adapt the model to that new situation. Finally, it might also be valuable to create an environment where an RL agent must learn to interpret speech as part of some larger task, and to explore how to respond appropriately to its own estimates of its transcription error.
8 Related Efforts
------------------
As mentioned in the introduction, several other communities have thought broadly about the safety of AI systems, both within and outside of the machine learning community. Work within the machine learning community on accidents in particular was discussed in detail above, but here we very briefly highlight a few other communities doing work that is broadly related to the topic of AI safety.
* Cyber-Physical Systems Community: An existing community of researchers studies the security and safety of systems that interact with the physical world. Illustrative of this work is an impressive and successful effort to formally verify the entire federal aircraft collision avoidance system [jeannin2015formally, loos2013formal]. Similar work includes traffic control algorithms [mitsch2012towards] and many other topics. However, to date this work has not focused much on modern machine learning systems, where formal verification is often not feasible.
* Futurist Community: A cross-disciplinary group of academics and non-profits has raised concern about the long term implications of AI [bostrom2014superintelligence, yudkowsky2008artificial], particularly superintelligent AI. The Future of Humanity Institute has studied this issue particularly as it relates to future AI systems learning or executing humanity’s preferences [evans2015learning, dewey2014reinforcement, armstrong2010utility, armstrong2015motivated]. The Machine Intelligence Research Institute has studied safety issues that may arise in very advanced AI [garrabrant2016uniform, garrabrant2016asymptotic, critch2016parametric, taylor2016quantilizers, soares2015toward], including a few mentioned above (e.g., wireheading, environmental embedding, counterfactual reasoning), albeit at a more philosophical level. To date, they have not focused much on applications to modern machine learning. By contrast, our focus is on the empirical study of practical safety problems in modern machine learning systems, which we believe is likely to be robustly useful across a broad variety of potential risks, both short- and long-term.
* Other Calls for Work on Safety: There have been other public documents within the research community pointing out the importance of work on AI safety. A 2015 Open Letter [OpenLetter] signed by many members of the research community states the importance of “how to reap [AI’s] benefits while avoiding the potential pitfalls.” [russell2015research] propose research priorities for robust and beneficial artificial intelligence, and includes several other topics in addition to a (briefer) discussion of AI-related accidents. [weld1994first], writing over 20 years ago, proposes that the community look for ways to formalize Asimov’s first law of robotics (robots must not harm humans), and focuses mainly on classical planning. Finally, two of the authors of this paper have written informally about safety in AI systems [steinhardt2015longterm, christiano2016control]; these postings provided inspiration for parts of the present document.
* Related Problems in Safety: A number of researchers in machine learning and other fields have begun to think about the social impacts of AI technologies. Aside from work directly on accidents (which we reviewed in the main document), there is also substantial work on other topics, many of which are closely related to or overlap with the issue of accidents. A thorough overview of all of this work is beyond the scope of this document, but we briefly list a few emerging themes:
∙ Privacy: How can we ensure privacy when applying machine learning to sensitive data sources such as medical data? [ji2014differential, abadi2016deep]
∙ Fairness: How can we make sure ML systems don’t discriminate? [adebayo2014hidden, zafar2015learning, ajunwa2016hiring, dwork2012fairness, pedreshi2008discrimination, zemel2013learning]
∙ Security: What can a malicious adversary do to a ML system? [steinhardt2016avoiding, mei2015security, mei2015using, papernot2016practical, nguyen2015deep, barreno2010security]
∙ Abuse:555Note that “security” differs from “abuse” in that the former involves attacks against a legitimate ML system by an adversary (e.g. a criminal tries to fool a face recognition system), while the latter involves attacks by an ML system controlled by an adversary (e.g. a criminal trains a “smart hacker” system to break into a website). How do we prevent the misuse of ML systems to attack or harm people? [OpenLetterWeapons]
∙ Transparency: How can we understand what complicated ML systems are doing? [olah2015visualizing, yosinski2015understanding, mordvintsev2015inceptionism, nguyen2016synthesizing]
∙ Policy: How do we predict and respond to the economic and social consequences of ML? [brynjolfsson2014second, frey2013future, arntz2016risk, calo2011open]
We believe that research on these topics has both urgency and great promise, and that fruitful intersection is likely to exist between these topics and the topics we discuss in this paper.
9 Conclusion
-------------
This paper analyzed the problem of accidents in machine learning systems and particularly reinforcement learning agents, where an accident is defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We presented five possible research problems related to accident risk and for each we discussed possible approaches that are highly amenable to concrete experimental work.
With the realistic possibility of machine learning-based systems controlling industrial processes, health-related systems, and other mission-critical technology, small-scale accidents seem like a very concrete threat, and are critical to prevent both intrinsically and because such accidents could cause a justified loss of trust in automated systems. The risk of larger accidents is more difficult to gauge, but we believe it is worthwhile and prudent to develop a principled and forward-looking approach to safety that continues to remain relevant as autonomous systems become more powerful. While many current-day safety problems can and have been handled with ad hoc fixes or case-by-case rules, we believe that the increasing trend towards end-to-end, fully autonomous systems points towards the need for a unified approach to prevent these systems from causing unintended harm.
Acknowledgements
----------------
We thank Shane Legg, Peter Norvig, Ilya Sutskever, Greg Corrado, Laurent Orseau, David Krueger, Rif Saurous, David Andersen, and Victoria Krakovna for detailed feedback and suggestions. We would also like to thank Geoffrey Irving, Toby Ord, Quoc Le, Greg Wayne, Daniel Dewey, Nick Beckstead, Holden Karnofsky, Chelsea Finn, Marcello Herreshoff, Alex Donaldson, Jared Kaplan, Greg Brockman, Wojciech Zaremba, Ian Goodfellow, Dylan Hadfield-Menell, Jessica Taylor, Blaise Aguera y Arcas, David Berlekamp, Aaron Courville, and Jeff Dean for helpful discussions and comments. Paul Christiano was supported as part of the Future of Life Institute FLI-RFP-AI1 program, grant #2015–143898. In addition a minority of the work done by Paul Christiano was performed as a contractor for Theiss Research and at OpenAI. Finally, we thank the Google Brain team for providing a supportive environment and encouraging us to publish this work.
\printbibliography |
3d1be9f9-ec3f-4ff8-9d47-ba2544b080ee | StampyAI/alignment-research-dataset/arxiv | Arxiv | Robust Learning with Frequency Domain Regularization
1 Introduction
---------------
Convolution neural networks (CNNs)[[29](#bib.bib36 "Handwritten digit recognition with a back-propagation network")] have achieved remarkable performance in many tasks of computing vision, *e.g.*, object detection[[31](#bib.bib39 "Ssd: single shot multibox detector"), [40](#bib.bib38 "Yolov3: an incremental improvement"), [41](#bib.bib37 "Faster r-cnn: towards real-time object detection with region proposal networks")], semantic segmentation[[43](#bib.bib40 "U-net: convolutional networks for biomedical image segmentation")], image captioning[[1](#bib.bib41 "Convolutional image captioning")], by capturing and representing multi-level features from a huge volume of data. However, existing experiments[[2](#bib.bib43 "Why do deep convolutional networks generalize so poorly to small image transformations?"), [10](#bib.bib44 "A study and comparison of human and deep learning recognition performance under visual distortions")] demonstrate that CNNs are often with great fragility[[20](#bib.bib42 "Why deep-learning ais are so easy to fool")]. Only injecting minute perturbation, *e.g.*, random noise, contrast change, or blurring, can lead to significant degradation of model performance, *i.e.*, CNN models usually lacks the ability of generalization transfer.
A variety of explanation of the vulnerability have been proposed, *e.g.*, the limit of the data-sets scale, the distribution of real data is inconsistent with training data, and computational constraints[[5](#bib.bib18 "Adversarial examples from computational constraints")], which resulting in a variety of coping strategies, such as data augmentation[[49](#bib.bib22 "A fourier perspective on model robustness in computer vision")], adversarial training[[13](#bib.bib21 "Generalizable adversarial training via spectral normalization"), [7](#bib.bib20 "Parseval networks: improving robustness to adversarial examples"), [35](#bib.bib19 "Spectral normalization for generative adversarial networks")] and parameter regularization[[47](#bib.bib56 "Learning structured sparsity in deep neural networks"), [12](#bib.bib23 "A spectral approach to generalization and optimization in neural networks")]. In deed, these strategies are propelling the models of CNNs to encoding invariant features as well as neglecting the variable information in the learning phase. In essence, convolutions are a kind of signal processing operations that amplify certain frequencies of the input and attenuate others. This leads us to ask that whether can prompt CNNs to ”remember” invariant features by explicitly representing certain frequency components of its convolution layers? And, how to find certain frequency components for different layers? In this paper, we show that the answer to the questions leads to some surprising new perspectives on: model robustness and generalization.
The low frequency components in training set are easier to be learned than high frequency components, because the number of low-frequency signals is large but their variation is little. For a finite training set, there exists a valid frequency range, and information beyond the lower bound usually is the bias of data-set, while information beyond the upper bound often is the noise. This phenomenon probably leads to a CNN with common settings always first quickly capturing low frequency components in their dominant, but easily over-fitting when suffering from application scenario transformation. Therefore, we might be able to promote the generalization and convergence performance of CNN models by putting frequency range constraints on convolution layers on learning phase.
The architecture of CNNs is designed to abstract information layer by layer from low to high[[39](#bib.bib32 "On the spectral bias of neural networks")]. It is generally assumed that, low layers are in charge of extracting low frequency information, such as dots, lines and texture, while high ones are responsible for high frequency information, such as shapes and sketches. Intuitively, we can drive a CNN model to pinpoint the valid frequency range of training set by imposing the low frequency constrains on previous convolution layers while high frequency constrains on the rears. However, due to existing some other factors, e.g., shortcut connection[[19](#bib.bib57 "Deep residual learning for image recognition")], learning methods, and sample distribution, the valid frequency range probably entangled in different layers rather than continuous.
In this paper, we propose a novel frequency domain regularization on convolution layers, which improves the generalization and convergence performance of CNN models by automatically untangling the spectrum of convolution layers, and navigating the model to the valid frequency range of training set. In a nutshell, our main contributions can be summarized as follows.
* An extreme small but valid spectral range for different layers was pinpointed.
* A general training approach with frequency domain regularization on convolution layers, for improving the generalization and convergence performance of CNN models. Compared with data augmentation technique and other implicit regularization techniques, our training technique improves the transferability of model.
* Comprehensive evaluation to investigate the effectiveness of proposed approach, and demonstrate how it can raise the generalization of CNN models.
2 Related work
---------------
Promoting the generalization of models is very important for deep learning. Generally, there are three branches of techniques to achieve the target, *i.e.*, data augmentation, regularization and spectrum analysis.
Data augmentation: The idea of data augmentation[[21](#bib.bib13 "AugMix: a simple data processing method to improve robustness and uncertainty"), [9](#bib.bib12 "AutoAugment: learning augmentation policies from data")] is common to reduce overfitting on models, which increases the amount of training data using information only in training data. Simple techniques, such as cropping, rotating, flipping, *etc.*, are prevailing in CNN model training, and usually can improve performance on validation a few. However, such simple techniques can not provide any practical defense against adversarial examples[[8](#bib.bib45 "Intriguing properties of adversarial examples")], which leads to an emerging direction of data augmentation, i.e., adversarial training[[28](#bib.bib47 "Autoencoding beyond pixels using a learned similarity metric"), [3](#bib.bib48 "CVAE-gan: fine-grained image generation through asymmetric training")]. Indeed, adversarial training performs unsupervised generation of new samples using GANs[[17](#bib.bib46 "Generative adversarial nets")], which can provide amount of hard examples for training. However, recent studies[[49](#bib.bib22 "A fourier perspective on model robustness in computer vision")] demonstrate that adversarial training usually improve robustness to corruptions that are concentrated in the high frequency domain while reducing robustness to corruptions that are concentrated in the low frequency domain.
Regularization: [[50](#bib.bib51 "Understanding deep learning requires rethinking generalization")] comprehensively evaluates the performance of explicit and implicit regularization techniques, i.e., dropout[[44](#bib.bib52 "Dropout: a simple way to prevent neural networks from overfitting")], weight decay[[27](#bib.bib49 "A simple weight decay can improve generalization"), [32](#bib.bib50 "Fixing weight decay regularization in adam")], batch normalization[[23](#bib.bib53 "Batch normalization: accelerating deep network training by reducing internal covariate shift")], early stopping. And gives the comments that although regularizers can provide marginal improvement, they seem not to be the fundamental reason for generalization, but the architecture. All of the regularization techniques mentioned above have little effect on preventing the model from quickly fitting random labeled data.
Sharpness and norms are other perspective for generalization[[36](#bib.bib7 "Exploring generalization in deep learning")]. There is a tight connection between spectral norm and Lipschitz Continuity, which can be used to flatten minima and bound the generalization error[[4](#bib.bib6 "Spectrally-normalized margin bounds for neural networks"), [35](#bib.bib19 "Spectral normalization for generative adversarial networks"), [13](#bib.bib21 "Generalizable adversarial training via spectral normalization")]. Jacobian penalty[[22](#bib.bib5 "Robust learning with jacobian regularization")] and orthogonality of weights[[38](#bib.bib17 "RePr: improved training of convolutional filters")] can also be used for improving generalization. But none of the regularization techniques focus on the transferability of model on unseen domain, nor can they explicit pinpoint the valid range of feature to help the model shield against background and noise.
Spectrum analysis: Indeed, convolution is a common method to extract specific spectrum in signal processing. Inspired by this, there is substantial recent interest in studying the spectral properties of CNNs, with applications to model compression[[6](#bib.bib33 "Compressing convolutional neural networks in the frequency domain")], speeding up model inference[[33](#bib.bib35 "Fast training of convolutional networks through ffts"), [46](#bib.bib34 "Fast convolutional nets with fbfft: a gpu performance evaluation")], memory reduction[[18](#bib.bib27 "SpecNet: spectral domain convolutional neural network")], theoretical understanding of CNN capacity[[39](#bib.bib32 "On the spectral bias of neural networks"), [45](#bib.bib31 "On the structural sensitivity of deep convolutional networks to the directions of fourier basis functions")], and eventually, better training methodologies[[49](#bib.bib22 "A fourier perspective on model robustness in computer vision"), [15](#bib.bib26 "Wavelet convolutional neural networks"), [42](#bib.bib25 "Spectral representations for convolutional neural networks"), [30](#bib.bib24 "Multi-level wavelet-cnn for image restoration")]. Especially, the works that leverage spectrum properties of CNNs to design better training methodologies are most relevant to this paper. For example, recent study[[24](#bib.bib30 "Measuring the tendency of cnns to learn surface statistical regularities"), [39](#bib.bib32 "On the spectral bias of neural networks")] find that a CNN model usually is biased towards lower Fourier frequencies while natural images tend to have the bulk of their Fourier spectrum concentrated on the low to mid-range frequencies. From this discovery, some works try to drop the high frequency components from the inputs to improve the generalization of the model, *e.g.*, spectral dropout[[25](#bib.bib29 "Regularization of deep neural networks with spectral dropout")] and Band-limited training[[11](#bib.bib54 "Band-limited training and inference for convolutional neural networks")]. In practice, high-frequency components perhaps are non-robust but highly predictable[[49](#bib.bib22 "A fourier perspective on model robustness in computer vision")]. Therefore, although high frequency components contain noise, we do not simply drop them in our work. More in-depth discussion is needed for valid spectral range.
3 Method
---------
In this section, we introduce our regularization method to constrain the frequency spectra of the convolution. The overview of our method is illustrated in Figure [1](#S3.F1 "Figure 1 ‣ 3 Method ‣ Robust Learning with Frequency Domain Regularization").

Figure 1: Overview of our method
###
3.1 FFT-based convolution
Fourier transform
Given a tensor t∈CM×N, Fourier transform is used to transform t to the spectral domain.
| | | |
| --- | --- | --- |
| | F(x)hw=1√MNM−1∑m=0N−1∑n=0xmne−2πi(mhM+nwN)∀h∋{0,1,...,M−1},∀w∋{0,1,...,N−1} | |
FFT-based convolution
The property of frequency analysis ensures that convolution in the spatial domain is equal to element-wise multiplication in the spectral domain. The main intuition of frequency analysis is that an image represented in spatial domain is significant redundancy, while represented in spectral domain can improve filter to feature the specific length-scales and orientations [[42](#bib.bib25 "Spectral representations for convolutional neural networks"), [14](#bib.bib11 "Relations between the statistics of natural images and the response properties of cortical cells.")]. Convergence speedup and lower computational cost are additional benefit.
| | | |
| --- | --- | --- |
| | x∗y=F−1(Fx[w]⋅Fy[w]) | |
| | | |
| --- | --- | --- |
| | S[ω]=Fx[ω]⋅Fy[ω] | |
S[ω] is called the spectrum of the convolution.
###
3.2 Mask design
Mask design is the key component of our method. Mask helps us pinpoint the valid frequency range entangled between different layers, and using back propagation to update it is the main difference from similar work.
Binarized mask
Our regularization try to mask the frequency of background and noise, only maintaining the frequency that is useful for the classification. Mc[ω] is the mask that limits the spectrum S[ω].
Gradient Computation and Accumulation:
The gradients of mask are accumulated in real-valued variables, as Algorithm 1.
0: a minibatch of inputs and targets (x,y).
0: updated MasksMt+1, Weights Wt+1.
{1.Computing the masks gradients:}
{1.1.Forward propagation:}
for k=1 to L do
Mbk← Binarize (Mk)
S[ω]bk←Mbk∗S[ω]bk
end for
{1.2.Backward propagation:}
{Please note that the gradients are not binary.}
automatic differentiation get dS[ω]
for k=L to 1 do
dMk←dS[ω]k∗S[ω]k
dS[ω]k←dS[ω]k∗Mk
compute dx and dy
end for
{2.Accumulating the parameters gradients:}
for k=1 to L do
Wkt+1← Update (Wk,η,dWtk)
Mt+1k←Clip( Update (Mk,η,dMtk),0,1)
end for
Algorithm 1 Forward and back propagation. C is the cost function for minibatch, L is the number of layers. Quanindicates element-wise multiplication.The function Binarize() specifics how to binarize the masks, and Clip(), how to clip the masks.
4 Experiments
--------------
We demonstrate the effectiveness of our regularization method with various datasets and architecture and compare it with several state-of-art methods. We explore in detail to illustrate the property of our approach.
###
4.1 Experimental settings
Datasets We conduct our experiments with Cifar10[[26](#bib.bib3 "Learning multiple layers of features from tiny images")], which contains 10 classes, 50000 images for training and 10000 images for testing.
Baseline training All models use SGD, with momentum set to 0.9. For Cifar dataset, the learning rate is set to 0.01. For Imagenet dataset, the learning rate is set to 0.1. If weight decay and dropout are used, weight decay is set to 10−4 and the keep-prop is set to 0.9. When training from sketch, Mask is initiated with random numbers from a normal distribution with mean 0.8 and variance 0.2. It means we don’t drop any frequency at beginning, and with the model learning, the accumulated gradient of mask will pinpoint the valid frequency range. While for finetuning, Mask is initiated with random numbers from a normal distribution with mean 0.6 and variance 0.1 to accelerate the learning of mask.
###
4.2 Experimental results and analysis

Figure 2: Sketch and finetune
.
Lenet baseline
58.48
Lenet + normalization
60.41
Lenet + normalization + random crop + data augmentation
75.06
Lenet + normalization + random crop + data augmentation + weight decay
76.23
Lenet + our method + weight decay
66.7
Lenet + our method + normalization + weight decay
68.3
Lenet + our method + random crop
74.0
Lenet + our method + data augmentation
69.8
Lenet + our method + random drop(0.2)
62.1
Table 1: Summary of test accuracy on Cifar dataset for LeNet architecture. For dropout, DropPath, and SpatialDropout, we trained models with the best keep\_prob values reported by [[16](#bib.bib4 "DropBlock: a regularization method for convolutional networks")]
| | | | | |
| --- | --- | --- | --- | --- |
| | Clear | Impulse\_noise | Fog | Contrast |
| Natural | 93.5 | 50.436 | 85.14 | 70.858 |
| Gauss | | | | |
| Adversarial | | | | |
| Our method | 94.06 | 57.344 | 86.752 | 73.432 |
Table 2: Comparison between naturally trained model, Gaussian data augmentation, adversarial training, and our method on clean images and Cifar10-C for resnet-20 architecture.
| Conv Layer | Mask Percentage |
| --- | --- |
| layer1conv1.1 | 0.6938 |
| layer1conv1.2 | 0.5810 |
| layer1conv2.1 | 0.5213 |
| layer1conv2.2 | 0.4889 |
| layer2conv1.1 | 0.3354 |
| layer2conv1.2 | 0.3696 |
| layer2conv2.1 | 0.3124 |
| layer2conv2.2 | 0.2644 |
| layer3conv1.1 | 0.2082 |
| layer3conv1.2 | 0.2621 |
| layer3conv2.1 | 0.2453 |
| layer3conv2.2 | 0.1477 |
| layer4conv1.1 | 0.0810 |
| layer4conv1.2 | 0.0455 |
| layer4conv2.1 | 0.0337 |
| layer4conv2.2 | 0.0175 |
Table 3: The percentage of frequency each convolution layer masks for resnet-20 architecture.
It was observed that Gaussian data augmentation and adversarial training improve robustness to all noise and many of the blurring corruptions, while degrading robustness to fog and contrast[[49](#bib.bib22 "A fourier perspective on model robustness in computer vision")]. Our method has better results against fog, contrast, and impulse noise, which shows that our method alleviates the low frequency brittle caused by adversarial training.

Figure 3: Class activation mapping (CAM)[[51](#bib.bib1 "Learning deep features for discriminative localization")] for resnet-18 model.
With such amount of activations suppressed in Table [3](#S4.T3 "Table 3 ‣ 4.2 Experimental results and analysis ‣ 4 Experiments ‣ Robust Learning with Frequency Domain Regularization") and the CAM illustration in Figure [3](#S4.F3 "Figure 3 ‣ 4.2 Experimental results and analysis ‣ 4 Experiments ‣ Robust Learning with Frequency Domain Regularization"), our method utilizes the valid frequency range to capture the most important frequency for classification.
5 Discussion
-------------
###
5.1 Binarized mask or continuous mask
When using Binarized mask in spatial domain, it will have side effect like generating boundary. However, in spectral domain, this kind of side effect is not obvious. It can be seen as a novel way of denoising. Benefitting from the property of spectral domain that represents the feature in an invariant and sparse way, our method can suppress the wrong activation in the spectral domain through binarized mask.
###
5.2 Our method with random drop
Our method pinpoints the valid frequency range for the training set. What if we random drop some frequency after using our method, will the model learn redundancy feature, do not rely on heavily on those frequency and perform better. In the last line of Table 1, we show that it is not a better choice. This verifies that our method pinpoints the valid frequency range on another perspective.
###
5.3 Inter-class mask
If we train the spectral domain mask for each class, it may have better performance and implicit transform among the same category in different datasets. However, in the test time, we need to determine the category of the images before using inter-class mask. So, we may need to change the architecture of the model, which is left to our future work.
6 Conclusion
-------------
We proposed a novel regularization method in the train time to explicit remove the unimportant frequency. 1) We pinpoint the valid frequency range entangled in different layers. 2) We demonstrate the model trained with our regularization is more robust on unseen data.
Comparing with Band-limited training[[11](#bib.bib54 "Band-limited training and inference for convolutional neural networks")] and spectral dropout[[25](#bib.bib29 "Regularization of deep neural networks with spectral dropout")], they do some restriction on spectral domain. Our method differs from them in two aspects: 1) Compared with energy-based compression technique, our method do not drop high frequency component indiscriminately. Our goal is not to minimize the approximation error between masked input and filter with unmasked ones, but to find out the most important frequency for classification and force the model to shield against background and noise. 2) we do not use hyperparameter keep-percentage to determine the threshold for masking. Our method uses back propagation to figure out mask, so our method can be used end-to-end.
Comparing with self-supervised learning strategy [[34](#bib.bib14 "Self-supervised learning of pretext-invariant representations"), [37](#bib.bib15 "Unsupervised intra-domain adaptation for semantic segmentation through self-supervision"), [48](#bib.bib16 "Exploring categorical regularization for domain adaptive object detection")], our method does not have complex architecture. We try to leverage the transferability of frequency to solve the problem of transferability of model and domain adaptation. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.