Official Journal of the European Union
EN
L series
2024/1689 12.7.2024
REGULATION (EU) 2024/1689 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL
of 13 June 2024
laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)
(Text with EEA relevance)
THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION,
Having regard to the Treaty on the Functioning of the European Union, and in particular Articles 16 and 114
thereof,
Having regard to the proposal from the European Commission,
After transmission of the draft legislative act to the national parliaments,
Having regard to the opinion of the European Economic and Social Committee (1),
Having regard to the opinion of the European Central Bank (2),
Having regard to the opinion of the Committee of the Regions (3),
Acting in accordance with the ordinary legislative procedure (4),
Whereas:
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human-centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
(2) This Regulation should be applied in accordance with the values of the Union enshrined in the Charter, facilitating the protection of natural persons, undertakings, democracy, the rule of law and environmental protection, while boosting innovation and employment and making the Union a leader in the uptake of trustworthy AI.
(3) AI systems can be easily deployed in a large variety of sectors of the economy and many parts of society, including across borders, and can easily circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that AI is trustworthy and safe and is developed and used in accordance with fundamental rights obligations. Diverging national rules may lead to the fragmentation of the internal market and may decrease legal certainty for operators that develop, import or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation, innovation, deployment and the uptake of AI systems and related products and services within the internal market should be prevented by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market on the basis of Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for remote biometric identification for the purpose of law enforcement, of the use of AI systems for risk assessments of natural persons for the purpose of law enforcement and of the use of AI systems of biometric categorisation for the purpose of law enforcement, it is appropriate to base this Regulation, in so far as those specific rules are concerned, on Article 16 TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
(4) AI is a fast evolving family of technologies that contributes to a wide array of economic, environmental and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of AI can provide key competitive advantages to undertakings and support socially and environmentally beneficial outcomes, for example in healthcare, agriculture, food safety, education and training, media, sports, culture, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, environmental monitoring, the conservation and restoration of biodiversity and ecosystems and climate change mitigation and adaptation.
(5)At the same time, depending on the circumstances regarding its specific application, use, and level of
technological development, AI may generate risks and cause harm to public interests and fundamental
rights that are protected by Union law. Such harm might be material or immaterial, including physical,
psychological, societal or economic harm.
(6) Given the major impact that AI can have on society and the need to build trust, it is vital for AI and its regulatory framework to be developed in accordance with Union values as enshrined in Article 2 of the Treaty on European Union (TEU), the fundamental rights and freedoms enshrined in the Treaties and, pursuant to Article 6 TEU, the Charter. As a prerequisite, AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human well-being.
(7) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common rules for high-risk AI systems should be established. Those rules should be consistent with the Charter, non-discriminatory and in line with the Union’s international trade commitments. They should also take into account the European Declaration on Digital Rights and Principles for the Digital Decade and the Ethics guidelines for trustworthy AI of the High-Level Expert Group on Artificial Intelligence (AI HLEG).
(8) A Union legal framework laying down harmonised rules on AI is therefore needed to foster the development, use and uptake of AI in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, including democracy, the rule of law and environmental protection as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market, the putting into service and the use of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. Those rules should be clear and robust in protecting fundamental rights, supportive of new innovative solutions, enabling a European ecosystem of public and private actors creating AI systems in line with Union values and unlocking the potential of the digital transformation across all regions of the Union. By laying down those rules as well as measures in support of innovation with a particular focus on small and medium enterprises (SMEs), including startups, this Regulation supports the objective of promoting the European human-centric approach to AI and being a global leader in the development of secure, trustworthy and ethical AI as stated by the European Council (5), and it ensures the protection of ethical principles, as specifically requested by the European Parliament (6).
(9) Harmonised rules applicable to the placing on the market, the putting into service and the use of high-risk AI systems should be laid down consistently with Regulation (EC) No 765/2008 of the European Parliament and of the Council (7), Decision No 768/2008/EC of the European Parliament and of the Council (8) and Regulation (EU) 2019/1020 of the European Parliament and of the Council (9) (New Legislative Framework). The harmonised rules laid down in this Regulation should apply across sectors and, in line with the New Legislative Framework, should be without prejudice to existing Union law, in particular on data protection, consumer protection, fundamental rights, employment, and protection of workers, and product safety, to which this Regulation is complementary. As a consequence, all rights and remedies provided for by such Union law to consumers, and other persons on whom AI systems may have a negative impact, including as regards the compensation of possible damages pursuant to Council Directive 85/374/EEC (10), remain unaffected and fully applicable. Furthermore, in the context of employment and protection of workers, this Regulation should therefore not affect Union law on social policy and national labour law, in compliance with Union law, concerning employment and working conditions, including health and safety at work and the relationship between employers and workers. This Regulation should also not affect the exercise of fundamental rights as recognised in the Member States and at Union level, including the right or freedom to strike or to take other action covered by the specific industrial relations systems in Member States as well as the right to negotiate, to conclude and enforce collective agreements or to take collective action in accordance with national law. This Regulation should not affect the provisions aiming to improve working conditions in platform work laid down in a Directive of the European Parliament and of the Council on improving working conditions in platform work. Moreover, this Regulation aims to strengthen the effectiveness of such existing rights and remedies by establishing specific requirements and obligations, including in respect of the transparency, technical documentation and record-keeping of AI systems. Furthermore, the obligations placed on various operators involved in the AI value chain under this Regulation should apply without prejudice to national law, in compliance with Union law, having the effect of limiting the use of certain AI systems where such law falls outside the scope of this Regulation or pursues legitimate public interest objectives other than those pursued by this Regulation. For example, national labour law and law on the protection of minors, namely persons below the age of 18, taking into account the UNCRC General Comment No 25 (2021) on children’s rights in relation to the digital environment, insofar as they are not specific to AI systems and pursue other legitimate public interest objectives, should not be affected by this Regulation.
(10)The fundamental right to the protection of personal data is safeguarded in particular by Regulations (EU)
2016/679 (11) and (EU) 2018/1725 (12) of the European Parliament and of the Council and Directive (EU)
2016/680 of the European Parliament and of the Council (13). Directive 2002/58/EC of the European
Parliament and of the Council (14) additionally protects private life and the confidentiality of
communications, including by way of providing conditions for any storing of personal and non-personal
data in, and access from, terminal equipment. Those Union legal acts provide the basis for sustainable and
responsible data processing, including where data sets include a mix of personal and non-personal data.
This Regulation does not seek to affect the application of existing Union law governing the processing of
personal data, including the tasks and powers of the independent supervisory authorities competent to
monitor compliance with those instruments. It also does not affect the obligations of providers and
deployers of AI systems in their role as data controllers or processors stemming from Union or national law
on the protection of personal data in so far as the design, the development or the use of AI systems involves
the processing of personal data. It is also appropriate to clarify that data subjects continue to enjoy all the
rights and guarantees awarded to them by such Union law, including the rights related to solely automated
individual decision-making, including profiling. Harmonised rules for the placing on the market, the
putting into service and the use of AI systems established under this Regulation should facilitate the
effective implementation and enable the exercise of the data subjects’ rights and other remedies guaranteed
under Union law on the protection of personal data and of other fundamental rights.
(11) This Regulation should be without prejudice to the provisions regarding the liability of providers of intermediary services as set out in Regulation (EU) 2022/2065 of the European Parliament and of the Council (15).
(12) The notion of ‘AI system’ in this Regulation should be clearly defined and should be closely aligned with the work of international organisations working on AI to ensure legal certainty, facilitate international convergence and wide acceptance, while providing the flexibility to accommodate the rapid technological developments in this field. Moreover, the definition should be based on key characteristics of AI systems that distinguish them from simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations. A key characteristic of AI systems is their capability to infer. This capability to infer refers to the process of obtaining the outputs, such as predictions, content, recommendations, or decisions, which can influence physical and virtual environments, and to a capability of AI systems to derive models or algorithms, or both, from inputs or data. The techniques that enable inference while building an AI system include machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved. The capacity of an AI system to infer transcends basic data processing by enabling learning, reasoning or modelling. The term ‘machine-based’ refers to the fact that AI systems run on machines. The reference to explicit or implicit objectives underscores that AI systems can operate according to explicitly defined objectives or to implicit objectives. The objectives of the AI system may be different from the intended purpose of the AI system in a specific context. For the purposes of this Regulation, environments should be understood to be the contexts in which the AI systems operate, whereas outputs generated by the AI system reflect different functions performed by AI systems and include predictions, content, recommendations or decisions. AI systems are designed to operate with varying levels of autonomy, meaning that they have some degree of independence of actions from human involvement and of capabilities to operate without human intervention. The adaptiveness that an AI system could exhibit after deployment refers to self-learning capabilities, allowing the system to change while in use. AI systems can be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded).
(13) The notion of ‘deployer’ referred to in this Regulation should be interpreted as any natural or legal person, including a public authority, agency or other body, using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity. Depending on the type of AI system, the use of the system may affect persons other than the deployer.
(14)The notion of ‘biometric data’ used in this Regulation should be interpreted in light of the notion of
biometric data as defined in Article 4, point (14) of Regulation (EU) 2016/679, Article 3, point (18) of
Regulation (EU) 2018/1725 and Article 3, point (13) of Directive (EU) 2016/680. Biometric data can allow
for the authentication, identification or categorisation of natural persons and for the recognition of
emotions of natural persons.
(15) The notion of ‘biometric identification’ referred to in this Regulation should be defined as the automated recognition of physical, physiological and behavioural human features such as the face, eye movement, body shape, voice, prosody, gait, posture, heart rate, blood pressure, odour, keystrokes characteristics, for the purpose of establishing an individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in a reference database, irrespective of whether the individual has given its consent or not. This excludes AI systems intended to be used for biometric verification, which includes authentication, whose sole purpose is to confirm that a specific natural person is the person he or she claims to be and to confirm the identity of a natural person for the sole purpose of having access to a service, unlocking a device or having security access to premises.
(16) The notion of ‘biometric categorisation’ referred to in this Regulation should be defined as assigning natural persons to specific categories on the basis of their biometric data. Such specific categories can relate to aspects such as sex, age, hair colour, eye colour, tattoos, behavioural or personality traits, language, religion, membership of a national minority, sexual or political orientation. This does not include biometric categorisation systems that are a purely ancillary feature intrinsically linked to another commercial service, meaning that the feature cannot, for objective technical reasons, be used without the principal service, and the integration of that feature or functionality is not a means to circumvent the applicability of the rules of this Regulation. For example, filters categorising facial or body features used on online marketplaces could constitute such an ancillary feature as they can be used only in relation to the principal service which consists in selling a product by allowing the consumer to preview the display of the product on him or herself and help the consumer to make a purchase decision. Filters used on online social network services which categorise facial or body features to allow users to add or modify pictures or videos could also be considered to be an ancillary feature, as such filters cannot be used without the principal service of the social network services consisting in the sharing of content online.
(17) The notion of ‘remote biometric identification system’ referred to in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons without their active involvement, typically at a distance, through the comparison of a person’s biometric data with the biometric data contained in a reference database, irrespectively of the particular technology, processes or types of biometric data used. Such remote biometric identification systems are typically used to perceive multiple persons or their behaviour simultaneously in order to facilitate significantly the identification of natural persons without their active involvement. This excludes AI systems intended to be used for biometric verification, which includes authentication, the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be and to confirm the identity of a natural person for the sole purpose of having access to a service, unlocking a device or having security access to premises. That exclusion is justified by the fact that such systems are likely to have a minor impact on fundamental rights of natural persons compared to the remote biometric identification systems which may be used for the processing of the biometric data of a large number of persons without their active involvement. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems concerned by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data has already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned.
(18) The notion of ‘emotion recognition system’ referred to in this Regulation should be defined as an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data. The notion refers to emotions or intentions such as happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement. It does not include physical states, such as pain or fatigue, including, for example, systems used in detecting the state of fatigue of professional pilots or drivers for the purpose of preventing accidents. This does also not include the mere detection of readily apparent expressions, gestures or movements, unless they are used for identifying or inferring emotions. Those expressions can be basic facial expressions, such as a frown or a smile, or gestures such as the movement of hands, arms or head, or characteristics of a person’s voice, such as a raised voice or whispering.
(19) For the purposes of this Regulation the notion of ‘publicly accessible space’ should be understood as referring to any physical space that is accessible to an undetermined number of natural persons, and irrespective of whether the space in question is privately or publicly owned, irrespective of the activity for which the space may be used, such as for commerce, for example, shops, restaurants, cafés; for services, for example, banks, professional activities, hospitality; for sport, for example, swimming pools, gyms, stadiums; for transport, for example, bus, metro and railway stations, airports, means of transport; for entertainment, for example, cinemas, theatres, museums, concert and conference halls; or for leisure or otherwise, for example, public roads and squares, parks, forests, playgrounds. A space should also be classified as being publicly accessible if, regardless of potential capacity or security restrictions, access is subject to certain predetermined conditions which can be fulfilled by an undetermined number of persons, such as the purchase of a ticket or title of transport, prior registration or having a certain age. In contrast, a space should not be considered to be publicly accessible if access is limited to specific and defined natural persons through either Union or national law directly related to public safety or security or through the clear manifestation of will by the person having the relevant authority over the space. The factual possibility of access alone, such as an unlocked door or an open gate in a fence, does not imply that the space is publicly accessible in the presence of indications or circumstances suggesting the contrary, such as signs prohibiting or restricting access. Company and factory premises, as well as offices and workplaces that are intended to be accessed only by relevant employees and service providers, are spaces that are not publicly accessible. Publicly accessible spaces should not include prisons or border control. Some other spaces may comprise both publicly accessible and non-publicly accessible spaces, such as the hallway of a private residential building necessary to access a doctor’s office or an airport. Online spaces are not covered, as they are not physical spaces. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.
(20) In order to obtain the greatest benefits from AI systems while protecting fundamental rights, health and safety and to enable democratic control, AI literacy should equip providers, deployers and affected persons with the necessary notions to make informed decisions regarding AI systems. Those notions may vary with regard to the relevant context and can include understanding the correct application of technical elements during the AI system’s development phase, the measures to be applied during its use, the suitable ways in which to interpret the AI system’s output, and, in the case of affected persons, the knowledge necessary to understand how decisions taken with the assistance of AI will have an impact on them. In the context of the application of this Regulation, AI literacy should provide all relevant actors in the AI value chain with the insights required to ensure the appropriate compliance and its correct enforcement. Furthermore, the wide implementation of AI literacy measures and the introduction of appropriate follow-up actions could contribute to improving working conditions and ultimately sustain the consolidation and innovation path of trustworthy AI in the Union. The European Artificial Intelligence Board (the ‘Board’) should support the Commission to promote AI literacy tools, public awareness and understanding of the benefits, risks, safeguards, rights and obligations in relation to the use of AI systems. In cooperation with the relevant stakeholders, the Commission and the Member States should facilitate the drawing up of voluntary codes of conduct to advance AI literacy among persons dealing with the development, operation and use of AI.
(21) In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to deployers of AI systems established within the Union.
(22) In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are not placed on the market, put into service, or used in the Union. This is the case, for example, where an operator established in the Union contracts certain services to an operator established in a third country in relation to an activity to be performed by an AI system that would qualify as high-risk. In those circumstances, the AI system used in a third country by the operator could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and deployers of AI systems that are established in a third country, to the extent the output produced by those systems is intended to be used in the Union. Nonetheless, to take into account existing arrangements and special needs for future cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of cooperation or international agreements concluded at Union or national level for law enforcement and judicial cooperation with the Union or the Member States, provided that the relevant third country or international organisation provides adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals. Where relevant, this may cover activities of entities entrusted by the third countries to carry out specific tasks in support of such law enforcement and judicial cooperation. Such frameworks for cooperation or agreements have been established bilaterally between Member States and third countries or between the European Union, Europol and other Union agencies and third countries and international organisations. The authorities competent for supervision of the law enforcement and judicial authorities under this Regulation should assess whether those frameworks for cooperation or international agreements include adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals. Recipient national authorities and Union institutions, bodies, offices and agencies making use of such outputs in the Union remain accountable to ensure their use complies with Union law. When those international agreements are revised or new ones are concluded in the future, the contracting parties should make utmost efforts to align those agreements with the requirements of this Regulation.
(23)This Regulation should also apply to Union institutions, bodies, offices and agencies when acting as
a provider or deployer of an AI system.
(24)If, and insofar as, AI systems are placed on the market, put into service, or used with or without
modification of such systems for militar y, defence or national security purposes , those should be excluded
from the scope of this Regulation regardless of which type of entity is carrying out those activities, such as
whether it is a public or private entity . As regards military and defence purposes, such exclusion is justified
both by Article 4(2) TEU and by the specificities of the Member States’ and the common Union defence
policy covered by Chapter 2 of Title V TEU that are subject to public internatio nal law, which is therefore2/20/25, 8:13 PM L_202401689EN.000101.fmx.xml
https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L_202401689 5/110
the more appropriate legal framework for the regulation of AI systems in the context of the use of lethal
force and other AI systems in the context of military and defence activities. As regards national security
purposes, the exclusion is justified both by the fact that national security remains the sole responsibility of
Member States in accordance with Article 4(2) TEU and by the specific nature and operational needs of
national security activities and specific national rules applicable to those activities. Nonetheless, if an AI
system developed, placed on the market, put into service or used for military, defence or national security
purposes is used outside those temporarily or permanently for other purposes, for example, civilian or
humanitarian purposes, law enforcement or public security purposes, such a system would fall within the
scope of this Regulation. In that case, the entity using the AI system for other than military, defence or
national security purposes should ensure the compliance of the AI system with this Regulation, unless the
system is already compliant with this Regulation. AI systems placed on the market or put into service for
an excluded purpose, namely military, defence or national security, and one or more non-excluded
purposes, such as civilian purposes or law enforcement, fall within the scope of this Regulation and
providers of those systems should ensure compliance with this Regulation. In those cases, the fact that an
AI system may fall within the scope of this Regulation should not affect the possibility of entities carrying
out national security, defence and military activities, regardless of the type of entity carrying out those
activities, to use AI systems for national security, military and defence purposes, the use of which is
excluded from the scope of this Regulation. An AI system placed on the market for civilian or law
enforcement purposes which is used with or without modification for military, defence or national security
purposes should not fall within the scope of this Regulation, regardless of the type of entity carrying out
those activities.
(25)This Regulation should support innovation, should respect freedom of science, and should not undermine
research and development activity. It is therefore necessary to exclude from its scope AI systems and
models specifically developed and put into service for the sole purpose of scientific research and
development. Moreover , it is necessary to ensure that this Regulation does not otherwise affect scientific
research and development activity on AI systems or models prior to being placed on the market or put into
service. As regards product-oriented research, testing and development activity regarding AI systems or
models, the provisions of this Regulation should also not apply prior to those systems and models being
put into service or placed on the market. That exclusion is without prejudice to the obligation to comply
with this Regulation where an AI system falling into the scope of this Regulation is placed on the market or
put into service as a result of such research and development activity and to the application of provisions
on AI regulatory sandboxes and testing in real world conditions. Furthermore, without prejudice to the
exclusion of AI systems specifically developed and put into service for the sole purpose of scientific
research and development, any other AI system that may be used for the conduct of any research and
development activity should remain subject to the provisions of this Regulation. In any event, any research
and development activity should be carried out in accordance with recognised ethical and professional
standards for scientific research and should be conducted in accordance with applicable Union law.
(26)In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined
risk-based approach should be followed. That approach should tailor the type and content of such rules to
the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain
unacceptable AI practices, to lay down requirements for high-risk AI systems and obligations for the
relevant operators, and to lay down transparency obligations for certain AI systems.
(27)While the risk-based approach is the basis for a proportionate and effective set of binding rules, it is
important to recall the 2019 Ethics guidelines for trustworthy AI developed by the independent AI HLEG
appointed by the Commission. In those guidelines, the AI HLEG developed seven non-binding ethical
principles for AI which are intended to help ensure that AI is trustworthy and ethically sound. The seven
principles include human agency and oversight; technical robustness and safety; privacy and data
governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being
and accountability. Without prejudice to the legally binding requirements of this Regulation and any
other applicable Union law, those guidelines contribute to the design of coherent, trustworthy and
human-centric AI, in line with the Charter and with the values on which the Union is founded. According to the
guidelines of the AI HLEG, human agency and oversight means that AI systems are developed and used as
a tool that serves people, respects human dignity and personal autonomy, and that is functioning in a way
that can be appropriately controlled and overseen by humans. Technical robustness and safety means that
AI systems are developed and used in a way that allows robustness in the case of problems and resilience
against attempts to alter the use or performance of the AI system so as to allow unlawful use by third
parties, and minimise unintended harm. Privacy and data governance means that AI systems are developed
and used in accordance with privacy and data protection rules, while processing data that meets high
standards in terms of quality and integrity. Transparency means that AI systems are developed and used in
a way that allows appropriate traceability and explainability, while making humans aware that they
communicate or interact with an AI system, as well as duly informing deployers of the capabilities and
limitations of that AI system and affected persons about their rights. Diversity, non-discrimination and
fairness means that AI systems are developed and used in a way that includes diverse actors and promotes
equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair
biases that are prohibited by Union or national law. Social and environmental well-being means that AI
systems are developed and used in a sustainable and environmentally friendly manner as well as in a way
to benefit all human beings, while monitoring and assessing the long-term impacts on the individual,
society and democracy. The application of those principles should be translated, when possible, in the
design and use of AI models. They should in any case serve as a basis for the drafting of codes of conduct
under this Regulation. All stakeholders, including industry, academia, civil society and standardisation
organisations, are encouraged to take into account, as appropriate, the ethical principles for the
development of voluntary best practices and standards.
(28)Aside from the many beneficial uses of AI, it can also be misused and provide novel and powerful tools for
manipulative, exploitative and social control practices. Such practices are particularly harmful and abusive
and should be prohibited because they contradict Union values of respect for human dignity, freedom,
equality, democracy and the rule of law and fundamental rights enshrined in the Charter, including the right
to non-discrimination, to data protection and to privacy and the rights of the child.
(29)AI-enabled manipulative techniques can be used to persuade persons to engage in unwanted behaviours, or
to deceive them by nudging them into decisions in a way that subverts and impairs their autonomy,
decision-making and free choices. The placing on the market, the putting into service or the use of certain
AI systems with the objective to or the effect of materially distorting human behaviour, whereby
significant harms, in particular having sufficiently important adverse impacts on physical, psychological
health or financial interests are likely to occur, are particularly dangerous and should therefore be
prohibited. Such AI systems deploy subliminal components such as audio, image, video stimuli that
persons cannot perceive, as those stimuli are beyond human perception, or other manipulative or deceptive
techniques that subvert or impair person's autonomy, decision-making or free choice in ways that people
are not consciously aware of those techniques or, where they are aware of them, can still be deceived or are
not able to control or resist them. This could be facilitated, for example, by machine-brain interfaces or
virtual reality as they allow for a higher degree of control of what stimuli are presented to persons, insofar
as they may materially distort their behaviour in a significantly harmful manner. In addition, AI systems
may also otherwise exploit the vulnerabilities of a person or a specific group of persons due to their age,
disability within the meaning of Directive (EU) 2019/882 of the European Parliament and of the
Council (16), or a specific social or economic situation that is likely to make those persons more vulnerable
to exploitation such as persons living in extreme poverty, ethnic or religious minorities. Such AI systems
can be placed on the market, put into service or used with the objective to or the effect of materially
distorting the behaviour of a person and in a manner that causes or is reasonably likely to cause significant
harm to that or another person or groups of persons, including harms that may be accumulated over time
and should therefore be prohibited. It may not be possible to assume that there is an intention to distort
behaviour where the distortion results from factors external to the AI system which are outside the control
of the provider or the deployer, namely factors that may not be reasonably foreseeable and therefore not
possible for the provider or the deployer of the AI system to mitigate. In any case, it is not necessary for the
provider or the deployer to have the intention to cause significant harm, provided that such harm results
from the manipulative or exploitative AI-enabled practices. The prohibitions for such AI practices are
complementary to the provisions contained in Directive 2005/29/EC of the European Parliament and of the
Council (17), in particular unfair commercial practices leading to economic or financial harms to consumers
are prohibited under all circumstances, irrespective of whether they are put in place through AI systems or
otherwise. The prohibitions of manipulative and exploitative practices in this Regulation should not affect
lawful practices in the context of medical treatment such as psychological treatment of a mental disease or
physical rehabilitation, when those practices are carried out in accordance with the applicable law and
medical standards, for example explicit consent of the individuals or their legal representatives. In addition,
common and legitimate commercial practices, for example in the field of advertising, that comply with the
applicable law should not, in themselves, be regarded as constituting harmful manipulative AI-enabled
practices.
(30)Biometric categorisation systems that are based on natural persons' biometric data, such as an individual
person's face or fingerprint, to deduce or infer an individual's political opinions, trade union membership,
religious or philosophical beliefs, race, sex life or sexual orientation should be prohibited. That prohibition
should not cover the lawful labelling, filtering or categorisation of biometric data sets acquired in line with
Union or national law according to biometric data, such as the sorting of images according to hair colour or
eye colour, which can for example be used in the area of law enforcement.
(31)AI systems providing social scoring of natural persons by public or private actors may lead to
discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and
non-discrimination and the values of equality and justice. Such AI systems evaluate or classify natural
persons or groups thereof on the basis of multiple data points related to their social behaviour in multiple
contexts or known, inferred or predicted personal or personality characteristics over certain periods of
time. The social score obtained from such AI systems may lead to the detrimental or unfavourable
treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context
in which the data was originally generated or collected or to a detrimental treatment that is disproportionate
or unjustified to the gravity of their social behaviour . AI systems entailing such unacceptable scoring
practices and leading to such detrimental or unfavourable outcomes should therefore be prohibited. That
prohibition should not affect lawful evaluation practices of natural persons that are carried out for
a specific purpose in accordance with Union and national law.
(32)The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly
accessible spaces for the purpose of law enforcement is particularly intrusive to the rights and freedoms of
the concerned persons, to the extent that it may affect the private life of a large part of the population,
evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly
and other fundamental rights. Technical inaccuracies of AI systems intended for the remote biometric
identification of natural persons can lead to biased results and entail discriminatory effects. Such possible
biased results and discriminatory effects are particularly relevant with regard to age, ethnicity, race, sex or
disabilities. In addition, the immediacy of the impact and the limited opportunities for further checks or
corrections in relation to the use of such systems operating in real-time carry heightened risks for the rights
and freedoms of the persons concerned in the context of, or impacted by, law enforcement activities.
(33)The use of those systems for the purpose of law enforcement should therefore be prohibited, except in
exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve
a substantial public interest, the importance of which outweighs the risks. Those situations involve the
search for certain victims of crime including missing persons; certain threats to the life or to the physical
safety of natural persons or of a terrorist attack; and the localisation or identification of perpetrators or
suspects of the criminal offences listed in an annex to this Regulation, where those criminal offences are
punishable in the Member State concerned by a custodial sentence or a detention order for a maximum
period of at least four years and as they are defined in the law of that Member State. Such a threshold for
the custodial sentence or detention order in accordance with national law contributes to ensuring that the
offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification
systems. Moreover, the list of criminal offences provided in an annex to this Regulation is based on the 32
criminal offences listed in the Council Framework Decision 2002/584/JHA (18), taking into account that
some of those offences are, in practice, likely to be more relevant than others, in that the recourse to ‘real-
time’ remote biometric identification could, foreseeably , be necessary and proportionate to highly varying
degrees for the practical pursuit of the localisation or identification of a perpetrator or suspect of the
different criminal offences listed and having regard to the likely differences in the seriousness, probability
and scale of the harm or possible negative consequences. An imminent threat to life or the physical safety
of natural persons could also result from a serious disruption of critical infrastructure, as defined in
Article 2, point (4) of Directive (EU) 2022/2557 of the European Parliament and of the Council (19), where
the disruption or destruction of such critical infrastructure would result in an imminent threat to life or the
physical safety of a person, including through serious harm to the provision of basic supplies to the
population or to the exercise of the core function of the State. In addition, this Regulation should preserve
the ability for law enforcement, border control, immigration or asylum authorities to carry out identity
checks in the presence of the person concerned in accordance with the conditions set out in Union and
national law for such checks. In particular, law enforcement, border control, immigration or asylum
authorities should be able to use information systems, in accordance with Union or national law, to identify
persons who, during an identity check, either refuse to be identified or are unable to state or prove their
identity, without being required by this Regulation to obtain prior authorisation. This could be, for
example, a person involved in a crime, being unwilling, or unable due to an accident or a medical
condition, to disclose their identity to law enforcement authorities.
(34)In order to ensure that those systems are used in a responsible and proportionate manner, it is also
important to establish that, in each of those exhaustively listed and narrowly defined situations, certain
elements should be taken into account, in particular as regards the nature of the situation giving rise to the
request and the consequences of the use for the rights and freedoms of all persons concerned and the
safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric
identification systems in publicly accessible spaces for the purpose of law enforcement should be deployed
only to confirm the specifically targeted individual's identity and should be limited to what is strictly
necessary concerning the period of time, as well as the geographic and personal scope, having regard in
particular to the evidence or indications regarding the threats, the victims or perpetrator. The use of the
real-time remote biometric identification system in publicly accessible spaces should be authorised only if
the relevant law enforcement authority has completed a fundamental rights impact assessment and, unless
provided otherwise in this Regulation, has registered the system in the database as set out in this
Regulation. The reference database of persons should be appropriate for each use case in each of the
situations mentioned above.
(35)Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the
purpose of law enforcement should be subject to an express and specific authorisation by a judicial
authority or by an independent administrative authority of a Member State whose decision is binding. Such
authorisation should, in principle, be obtained prior to the use of the AI system with a view to identifying
a person or persons. Exceptions to that rule should be allowed in duly justified situations on grounds of
urgency, namely in situations where the need to use the systems concerned is such as to make it effectively
and objectively impossible to obtain an authorisation before commencing the use of the AI system. In such
situations of urgency, the use of the AI system should be restricted to the absolute minimum necessary and
should be subject to appropriate safeguards and conditions, as determined in national law and specified in
the context of each individual urgent use case by the law enforcement authority itself. In addition, the law
enforcement authority should in such situations request such authorisation while providing the reasons for
not having been able to request it earlier, without undue delay and at the latest within 24 hours. If such an
authorisation is rejected, the use of real-time biometric identification systems linked to that authorisation
should cease with immediate effect and all the data related to such use should be discarded and deleted.
Such data includes input data directly acquired by an AI system in the course of the use of such system as
well as the results and outputs of the use linked to that authorisation. It should not include input that is
legally acquired in accordance with another Union or national law. In any case, no decision producing an
adverse legal effect on a person should be taken based solely on the output of the remote biometric
identification system.
(36)In order to carry out their tasks in accordance with the requirements set out in this Regulation as well as in
national rules, the relevant market surveillance authority and the national data protection authority should
be notified of each use of the real-time biometric identification system. Market surveillance authorities and
the national data protection authorities that have been notified should submit to the Commission an annual
report on the use of real-time biometric identification systems.
(37)Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation, that such
use in the territory of a Member State in accordance with this Regulation should only be possible where
and in as far as the Member State concerned has decided to expressly provide for the possibility to
authorise such use in its detailed rules of national law. Consequently, Member States remain free under this
Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of
some of the objectives capable of justifying authorised use identified in this Regulation. Such national
rules should be notified to the Commission within 30 days of their adoption.
(38)The use of AI systems for real-time remote biometric identification of natural persons in publicly
accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data.
The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on
Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data
contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of
biometric data involved in an exhaustive manner. Therefore, such use and processing should be possible
only in as far as it is compatible with the framework set by this Regulation, without there being scope,
outside that framework, for the competent authorities, where they act for purpose of law enforcement, to
use such systems and process such data in connection thereto on the grounds listed in Article 10 of
Directive (EU) 2016/680. In that context, this Regulation is not intended to provide the legal basis for the
processing of personal data under Article 8 of Directive (EU) 2016/680. However, the use of real-time
remote biometric identification systems in publicly accessible spaces for purposes other than law
enforcement, including by competent authorities, should not be covered by the specific framework
regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other
than law enforcement should therefore not be subject to the requirement of an authorisation under this
Regulation and the applicable detailed rules of national law that may give effect to that authorisation.
(39)Any processing of biometric data and other personal data involved in the use of AI systems for biometric
identification, other than in connection to the use of real-time remote biometric identification systems in
publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, should
continue to comply with all requirements resulting from Article 10 of Directive (EU) 2016/680. For
purposes other than law enforcement, Article 9(1) of Regulation (EU) 2016/679 and Article 10(1) of
Regulation (EU) 2018/1725 prohibit the processing of biometric data subject to limited exceptions as
provided in those Articles. In the application of Article 9(1) of Regulation (EU) 2016/679, the use of
remote biometric identification for purposes other than law enforcement has already been subject to
prohibition decisions by national data protection authorities.
(40)In accordance with Article 6a of Protocol No 21 on the position of the United Kingdom and Ireland in
respect of the area of freedom, security and justice, as annexed to the TEU and to the TFEU, Ireland is not
bound by the rules laid down in Article 5(1), first subparagraph, point (g), to the extent it applies to the use
of biometric categorisation systems for activities in the field of police cooperation and judicial cooperation
in criminal matters, Article 5(1), first subparagraph, point (d), to the extent it applies to the use of AI
systems covered by that provision, Article 5(1), first subparagraph, point (h), Article 5(2) to (6) and
Article 26(10) of this Regulation adopted on the basis of Article 16 TFEU which relate to the processing of
personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or
Chapter 5 of Title V of Part Three of the TFEU, where Ireland is not bound by the rules governing the
forms of judicial cooperation in criminal matters or police cooperation which require compliance with the
provisions laid down on the basis of Article 16 TFEU.
(41)In accordance with Articles 2 and 2a of Protocol No 22 on the position of Denmark, annexed to the TEU
and to the TFEU, Denmark is not bound by rules laid down in Article 5(1), first subparagraph, point (g), to
the extent it applies to the use of biometric categorisation systems for activities in the field of police
cooperation and judicial cooperation in criminal matters, Article 5(1), first subparagraph, point (d), to the
extent it applies to the use of AI systems covered by that provision, Article 5(1), first subparagraph, point
(h), (2) to (6) and Article 26(10) of this Regulation adopted on the basis of Article 16 TFEU, or subject to
their application, which relate to the processing of personal data by the Member States when carrying out
activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU.
(42)In line with the presumption of innocence, natural persons in the Union should always be judged on their
actual behaviour. Natural persons should never be judged on AI-predicted behaviour based solely on their
profiling, personality traits or characteristics, such as nationality, place of birth, place of residence, number
of children, level of debt or type of car, without a reasonable suspicion of that person being involved in
a criminal activity based on objective verifiable facts and without human assessment thereof. Therefore,
risk assessments carried out with regard to natural persons in order to assess the likelihood of their
offending or to predict the occurrence of an actual or potential criminal offence based solely on profiling
them or on assessing their personality traits and characteristics should be prohibited. In any case, that
prohibition does not refer to or touch upon risk analytics that are not based on the profiling of individuals
or on the personality traits and characteristics of individuals, such as AI systems using risk analytics to
assess the likelihood of financial fraud by undertakings on the basis of suspicious transactions or risk
analytic tools to predict the likelihood of the localisation of narcotics or illicit goods by customs
authorities, for example on the basis of known trafficking routes.
(43)The placing on the market, the putting into service for that specific purpose, or the use of AI systems that
create or expand facial recognition databases through the untargeted scraping of facial images from the
internet or CCTV footage, should be prohibited because that practice adds to the feeling of mass
surveillance and can lead to gross violations of fundamental rights, including the right to privacy.
(44)There are serious concerns about the scientific basis of AI systems aiming to identify or infer emotions,
particularly as expression of emotions vary considerably across cultures and situations, and even within
a single individual. Among the key shortcomings of such systems are the limited reliability, the lack of
specificity and the limited generalisability. Therefore, AI systems identifying or inferring emotions or
intentions of natural persons on the basis of their biometric data may lead to discriminatory outcomes and
can be intrusive to the rights and freedoms of the concerned persons. Considering the imbalance of power
in the context of work or education, combined with the intrusive nature of these systems, such systems
could lead to detrimental or unfavourable treatment of certain natural persons or whole groups thereof.
Therefore, the placing on the market, the putting into service, or the use of AI systems intended to be used
to detect the emotional state of individuals in situations related to the workplace and education should be
prohibited. That prohibition should not cover AI systems placed on the market strictly for medical or safety
reasons, such as systems intended for therapeutical use.
(45)Practices that are prohibited by Union law, including data protection law, non-discrimination law,
consumer protection law, and competition law, should not be affected by this Regulation.
(46)High-risk AI systems should only be placed on the Union market, put into service or used if they comply
with certain mandatory requirements. Those requirements should ensure that high-risk AI systems
available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to
important Union public interests as recognised and protected by Union law. On the basis of the New
Legislative Framework, as clarified in the Commission notice ‘The “Blue Guide” on the implementation of
EU product rules 2022’ (20), the general rule is that more than one legal act of Union harmonisation
legislation, such as Regulations (EU) 2017/745 (21) and (EU) 2017/746 (22) of the European Parliament and
of the Council or Directive 2006/42/EC of the European Parliament and of the Council (23), may be
applicable to one product, since the making available or putting into service can take place only when the
product complies with all applicable Union harmonisation legislation. To ensure consistency and avoid
unnecessary administrative burdens or costs, providers of a product that contains one or more high-risk AI
systems, to which the requirements of this Regulation and of the Union harmonisation legislation listed in
an annex to this Regulation apply, should have flexibility with regard to operational decisions on how to
ensure compliance of a product that contains one or more AI systems with all applicable requirements of
the Union harmonisation legislation in an optimal manner. AI systems identified as high-risk should be
limited to those that have a significant harmful impact on the health, safety and fundamental rights of
persons in the Union and such limitation should minimise any potential restriction to international trade.
(47)AI systems could have an adverse impact on the health and safety of persons, in particular when such
systems operate as safety components of products. Consistent with the objectives of Union harmonisation
legislation to facilitate the free movement of products in the internal market and to ensure that only safe
and otherwise compliant products find their way into the market, it is important that the safety risks that
may be generated by a product as a whole due to its digital components, including AI systems, are duly
prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of
manufacturing or personal assistance and care, should be able to safely operate and perform their functions
in complex environments. Similarly, in the health sector where the stakes for life and health are particularly
high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be
reliable and accurate.
(48)The extent of the adverse impact caused by the AI system on the fundamental rights protected by the
Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right
to human dignity, respect for private and family life, protection of personal data, freedom of expression and
information, freedom of assembly and of association, the right to non-discrimination, the right to
education, consumer protection, workers’ rights, the rights of persons with disabilities, gender equality,
intellectual property rights, the right to an effective remedy and to a fair trial, the right of defence and the
presumption of innocence, and the right to good administration. In addition to those rights, it is important
to highlight the fact that children have specific rights as enshrined in Article 24 of the Charter and in the
United Nations Convention on the Rights of the Child, further developed in the UNCRC General Comment
No 25 as regards the digital environment, both of which require consideration of the children’s
vulnerabilities and provision of such protection and care as necessary for their well-being. The
fundamental right to a high level of environmental protection enshrined in the Charter and implemented in
Union policies should also be considered when assessing the severity of the harm that an AI system can
cause, including in relation to the health and safety of persons.
(49)As regards high-risk AI systems that are safety components of products or systems, or which are
themselves products or systems falling within the scope of Regulation (EC) No 300/2008 of the European
Parliament and of the Council (24), Regulation (EU) No 167/2013 of the European Parliament and of the
Council (25), Regulation (EU) No 168/2013 of the European Parliament and of the Council (26), Directive
2014/90/EU of the European Parliament and of the Council (27), Directive (EU) 2016/797 of the European
Parliament and of the Council (28), Regulation (EU) 2018/858 of the European Parliament and of the
Council (29), Regulation (EU) 2018/1139 of the European Parliament and of the Council (30), and
Regulation (EU) 2019/2144 of the European Parliament and of the Council (31), it is appropriate to amend
those acts to ensure that the Commission takes into account, on the basis of the technical and regulatory
specificities of each sector, and without interfering with existing governance, conformity assessment and
enforcement mechanisms and authorities established therein, the mandatory requirements for high-risk AI
systems laid down in this Regulation when adopting any relevant delegated or implementing acts on the
basis of those acts.
(50)As regards AI systems that are safety components of products, or which are themselves products, falling
within the scope of certain Union harmonisation legislation listed in an annex to this Regulation, it is
appropriate to classify them as high-risk under this Regulation if the product concerned undergoes the
conformity assessment procedure with a third-party conformity assessment body pursuant to that relevant
Union harmonisation legislation. In particular, such products are machinery, toys, lifts, equipment and
protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure
equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical
devices, in vitro diagnostic medical devices, automotive and aviation.
(51)The classification of an AI system as high-risk pursuant to this Regulation should not necessarily mean that
the product whose safety component is the AI system, or the AI system itself as a product, is considered to
be high-risk under the criteria establishe d in the relevant Union harmonisation legislation that applies to the
product. This is, in particular, the case for Regulations (EU) 2017/745 and (EU) 2017/746, where a third-
party conformity assessment is provided for medium-risk and high-risk products.
(52)As regards stand-alone AI systems, namely high-risk AI systems other than those that are safety
components of products, or that are themselves products, it is appropriate to classify them as high-risk if, in
light of their intended purpose, they pose a high risk of harm to the health and safety or the fundamental
rights of persons, taking into account both the severity of the possible harm and its probability of
occurrence and they are used in a number of specifically pre-defined areas specified in this Regulation. The
identification of those systems is based on the same methodology and criteria envisaged also for any future
amendments of the list of high-risk AI systems that the Commission should be empowered to adopt, via
delegated acts, to take into account the rapid pace of technological development, as well as the potential
changes in the use of AI systems.
(53)It is also important to clarify that there may be specific cases in which AI systems referred to in pre-defined
areas specified in this Regulation do not lead to a significant risk of harm to the legal interests protected
under those areas because they do not materially influence the decision-making or do not harm those
interests substantially. For the purposes of this Regulation, an AI system that does not materially influence
the outcome of decision-making should be understood to be an AI system that does not have an impact on
the substance, and thereby the outcome, of decision-making, whether human or automated. An AI system
that does not materially influence the outcome of decision-making could include situations in which one or
more of the following conditions are fulfilled. The first such condition should be that the AI system is
intended to perform a narrow procedural task, such as an AI system that transforms unstructured data into
structured data, an AI system that classifies incoming documents into categories or an AI system that is
used to detect duplicates among a large number of applications. Those tasks are of such narrow and limited
nature that they pose only limited risks which are not increased through the use of an AI system in
a context that is listed as a high-risk use in an annex to this Regulation. The second condition should be
that the task performed by the AI system is intended to improve the result of a previously completed
human activity that may be relevant for the purposes of the high-risk uses listed in an annex to this
Regulation. Considering those characteristics, the AI system provides only an additional layer to a human
activity with consequently lowered risk. That condition would, for example, apply to AI systems that are
intended to improve the language used in previously drafted documents, for example in relation to
professional tone, academic style of language or by aligning text to a certain brand messaging. The third
condition should be that the AI system is intended to detect decision-making patterns or deviations from
prior decision-making patterns. The risk would be lowered because the use of the AI system follows
a previously completed human assessment which it is not meant to replace or influence, without proper
human review. Such AI systems include for instance those that, given a certain grading pattern of a teacher,
can be used to check ex post whether the teacher may have deviated from the grading pattern so as to flag
potential inconsistencies or anomalies. The fourth condition should be that the AI system is intended to
perform a task that is only preparatory to an assessment relevant for the purposes of the AI systems listed
in an annex to this Regulation, thus making the possible impact of the output of the system very low in
terms of representing a risk for the assessment to follow. That condition covers, inter alia, smart solutions
for file handling, which include various functions from indexing, searching, text and speech processing or
linking data to other data sources, or AI systems used for translation of initial documents. In any case, AI
systems used in high-risk use-cases listed in an annex to this Regulation should be considered to pose
significant risks of harm to the health, safety or fundamental rights if the AI system implies profiling
within the meaning of Article 4, point (4) of Regulation (EU) 2016/679 or Article 3, point (4) of Directive
(EU) 2016/680 or Article 3, point (5) of Regulation (EU) 2018/1725. To ensure traceability and
transparency, a provider who considers that an AI system is not high-risk on the basis of the conditions
referred to above should draw up documentation of the assessment before that system is placed on the
market or put into service and should provide that documentation to national competent authorities upon
request. Such a provider should be obliged to register the AI system in the EU database established under
this Regulation. With a view to providing further guidance for the practical implementation of the
conditions under which the AI systems listed in an annex to this Regulation are, on an exceptional basis,
non-high-risk, the Commission should, after consulting the Board, provide guidelines specifying that
practical implementation, completed by a comprehensive list of practical examples of use cases of AI
systems that are high-risk and use cases that are not.
(54)As biometric data constitutes a special category of personal data, it is appropriate to classify as high-risk
several critical-use cases of biometric systems, insofar as their use is permitted under relevant Union and
national law. Technical inaccuracies of AI systems intended for the remote biometric identification of
natural persons can lead to biased results and entail discriminatory effects. The risk of such biased results
and discriminatory effects is particularly relevant with regard to age, ethnicity, race, sex or disabilities.
Remote biometric identification systems should therefore be classified as high-risk in view of the risks that
they pose. Such a classification excludes AI systems intended to be used for biometric verification,
including authentication, the sole purpose of which is to confirm that a specific natural person is who that
person claims to be and to confirm the identity of a natural person for the sole purpose of having access to
a service, unlocking a device or having secure access to premises. In addition, AI systems intended to be
used for biometric categorisation according to sensitive attributes or characteristics protected under
Article 9(1) of Regulation (EU) 2016/679 on the basis of biometric data, in so far as these are not
prohibited under this Regulation, and emotion recognition systems that are not prohibited under this
Regulation, should be classified as high-risk. Biometric systems which are intended to be used solely for
the purpose of enabling cybersecurity and personal data protection measures should not be considered to be
high-risk AI systems.
(55)As regards the management and operation of critical infrastructure, it is appropriate to classify as high-risk
the AI systems intended to be used as safety components in the management and operation of critical
digital infrastructure as listed in point (8) of the Annex to Directive (EU) 2022/2557, road traffic and the
supply of water, gas, heating and electricity, since their failure or malfunctioning may put at risk the life
and health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of social
and economic activities. Safety components of critical infrastructure, including critical digital
infrastructure, are systems used to directly protect the physical integrity of critical infrastructure or the
health and safety of persons and property but which are not necessary in order for the system to function.
The failure or malfunctioning of such components might directly lead to risks to the physical integrity of
critical infrastructure and thus to risks to health and safety of persons and property. Components intended
to be used solely for cybersecurity purposes should not qualify as safety components. Examples of safety
components of such critical infrastructure may include systems for monitoring water pressure or fire alarm
controlling systems in cloud computing centres.
(56)The deployment of AI systems in education is important to promote high-quality digital education and
training and to allow all learners and teachers to acquire and share the necessary digital skills and
competences, including media literacy, and critical thinking, to take an active part in the economy, society,
and in democratic processes. However, AI systems used in education or vocational training, in particular
for determining access or admission, for assigning persons to educational and vocational training
institutions or programmes at all levels, for evaluating learning outcomes of persons, for assessing the
appropriate level of education for an individual and materially influencing the level of education and
training that individuals will receive or will be able to access or for monitoring and detecting prohibited
behaviour of students during tests should be classified as high-risk AI systems, since they may determine
the educational and professional course of a person’s life and therefore may affect that person’s ability to
secure a livelihood. When improperly designed and used, such systems may be particularly intrusive and
may violate the right to education and training as well as the right not to be discriminated against and
perpetuate historical patterns of discrimination, for example against women, certain age groups, persons
with disabilities, or persons of certain racial or ethnic origins or sexual orientation.
(57)AI systems used in employment, workers management and access to self-employment, in particular for the
recruitment and selection of persons, for making decisions affecting terms of the work-related relationship,
promotion and termination of work-related contractual relationships, for allocating tasks on the basis of
individual behaviour, personal traits or characteristics and for monitoring or evaluation of persons in work-
related contractual relationships, should also be classified as high-risk, since those systems may have an
appreciable impact on future career prospects, livelihoods of those persons and workers’ rights. Relevant
work-related contractual relationships should, in a meaningful manner, involve employees and persons
providing services through platforms as referred to in the Commission Work Programme 2021. Throughout
the recruitment process and in the evaluation, promotion, or retention of persons in work-related
contractual relationships, such systems may perpetuate historical patterns of discrimination, for example
against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins
or sexual orientation. AI systems used to monitor the performance and behaviour of such persons may also
undermine their fundamental rights to data protection and privacy.
(58)Another area in which the use of AI systems deserves special consideration is the access to and enjoyment
of certain essential private and public services and benefits necessary for people to fully participate in
society or to improve one’s standard of living. In particular, natural persons applying for or receiving
essential public assistance benefits and services from public authorities, namely healthcare services, social
security benefits, social services providing protection in cases such as maternity, illness, industrial
accidents, dependency or old age and loss of employment and social and housing assistance, are typically
dependent on those benefits and services and in a vulnerable position in relation to the responsible
authorities. If AI systems are used for determining whether such benefits and services should be granted,
denied, reduced, revoked or reclaimed by authorities, including whether beneficiaries are legitimately
entitled to such benefits or services, those systems may have a significant impact on persons’ livelihood
and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human
dignity or an effective remedy and should therefore be classified as high-risk. Nonetheless, this Regulation
should not hamper the development and use of innovative approaches in the public administration, which
would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do
not entail a high risk to legal and natural persons. In addition, AI systems used to evaluate the credit score
or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine
those persons’ access to financial resources or essential services such as housing, electricity, and
telecommunication services. AI systems used for those purposes may lead to discrimination between
persons or groups and may perpetuate historical patterns of discrimination, such as that based on racial or
ethnic origins, gender, disabilities, age or sexual orientation, or may create new forms of discriminatory
impacts. However, AI systems provided for by Union law for the purpose of detecting fraud in the offering
of financial services and for prudential purposes to calculate credit institutions’ and insurance
undertakings’ capital requirements should not be considered to be high-risk under this Regulation.
Moreover, AI systems intended to be used for risk assessment and pricing in relation to natural persons for
health and life insurance can also have a significant impact on persons’ livelihood and if not duly designed,
developed and used, can infringe their fundamental rights and can lead to serious consequences for
people’s life and health, including financial exclusion and discrimination. Finally, AI systems used to
evaluate and classify emergency calls by natural persons or to dispatch or establish priority in the
dispatching of emergency first response services, including by police, firefighters and medical aid, as well
as of emergency healthcare patient triage systems, should also be classified as high-risk since they make
decisions in very critical situations for the life and health of persons and their property.
(59)Given their role and responsibility, actions by law enforcement authorities involving certain uses of AI
systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest
or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights
guaranteed in the Charter. In particular, if the AI system is not trained with high-quality data, does not meet
adequate requirements in terms of its performance, its accuracy or robustness, or is not properly designed
and tested before being put on the market or otherwise put into service, it may single out people in
a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural
fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence
and the presumption of innocence, could be hampered, in particular, where such AI systems are not
sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk,
insofar as their use is permitted under relevant Union and national law, a number of AI systems intended to
be used in the law enforcement context where accuracy, reliability and transparency are particularly
important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In
view of the nature of the activities and the risks relating thereto, those high-risk AI systems should include
in particular AI systems intended to be used by or on behalf of law enforcement authorities or by Union
institutions, bodies, offices, or agencies in support of law enforcement authorities for assessing the risk of
a natural person to become a victim of criminal offences, as polygraphs and similar tools, for the
evaluation of the reliability of evidence in the course of investigation or prosecution of criminal
offences, and, insofar as not prohibited under this Regulation, for assessing the risk of a natural person
offending or reoffending not solely on the basis of the profiling of natural persons or the assessment of
personality traits and characteristics or the past criminal behaviour of natural persons or groups, for
profiling in the course of detection, investigation or prosecution of criminal offences. AI systems
specifically intended to be used for administrative proceedings by tax and customs authorities as well as by
financial intelligence units carrying out administrative tasks analysing information pursuant to Union anti-
money laundering law should not be classified as high-risk AI systems used by law enforcement authorities
for the purpose of prevention, detection, investigation and prosecution of criminal offences. The use of AI
tools by law enforcement and other relevant authorities should not become a factor of inequality, or
exclusion. The impact of the use of AI tools on the defence rights of suspects should not be ignored, in
particular the difficulty in obtaining meaningful information on the functioning of those systems and the
resulting difficulty in challenging their results in court, in particular by natural persons under investigation.
(60)AI systems used in migration, asylum and border control management affect persons who are often in
a particularly vulnerable position and who are dependent on the outcome of the actions of the competent
public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in
those contexts are therefore particularly important to guarantee respect for the fundamental rights of the
affected persons, in particular their rights to free movement, non-discrimination, protection of private life
and personal data, international protection and good administration. It is therefore appropriate to classify as
high-risk, insofar as their use is permitted under relevant Union and national law, AI systems intended to be
used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies
charged with tasks in the fields of migration, asylum and border control management as polygraphs and
similar tools, for assessing certain risks posed by natural persons entering the territory of a Member State
or applying for visa or asylum, for assisting competent public authorities for the examination, including
related assessment of the reliability of evidence, of applications for asylum, visa and residence permits and
associated complaints with regard to the objective to establish the eligibility of the natural persons applying
for a status, for the purpose of detecting, recognising or identifying natural persons in the context of
migration, asylum and border control management, with the exception of verification of travel documents.
AI systems in the area of migration, asylum and border control management covered by this Regulation
should comply with the relevant procedural requirements set by the Regulation (EC) No 810/2009 of the
European Parliament and of the Council (32), the Directive 2013/32/EU of the European Parliament and of
the Council (33), and other relevant Union law. AI systems in migration, asylum and border
control management should, in no circumstances, be used by Member States or Union institutions, bodies,
offices or agencies as a means to circumvent their international obligations under the UN Convention
relating to the Status of Refugees done at Geneva on 28 July 1951 as amended by the Protocol of
31 January 1967. Nor should they be used in any way to infringe on the principle of non-refoulement, or to
deny safe and effective legal avenues into the territory of the Union, including the right to international
protection.
(61)Certain AI systems intended for the administration of justice and democratic processes should be classified
as high-risk, considering their potentially significant impact on democracy, the rule of law, individual
freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of
potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to be used
by a judicial authority or on its behalf to assist judicial authorities in researching and interpreting facts and
the law and in applying the law to a concrete set of facts. AI systems intended to be used by alternative
dispute resolution bodies for those purposes should also be considered to be high-risk when the outcomes
of the alternative dispute resolution proceedings produce legal effects for the parties. The use of AI tools
can support the decision-making power of judges or judicial independence, but should not replace it: the
final decision-making must remain a human-driven activity. The classification of AI systems as high-risk
should not, however, extend to AI systems intended for purely ancillary administrative activities that do not
affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation
of judicial decisions, documents or data, communication between personnel, administrative tasks.
(62)Without prejudice to the rules provided for in Regulation (EU) 2024/900 of the European Parliament and
of the Council (34), and in order to address the risks of undue external interference with the right to vote
enshrined in Article 39 of the Charter , and of adverse effects on democracy and the rule of law, AI systems
intended to be used to influence the outcome of an election or referendum or the voting behaviour of
natural persons in the exercise of their vote in elections or referenda should be classified as high-risk AI
systems with the exception of AI systems whose output natural persons are not directly exposed to, such as
tools used to organise, optimise and structure political campaigns from an administrative and logistical
point of view.
(63)The fact that an AI system is classified as a high-risk AI system under this Regulation should not be
interpreted as indicating that the use of the system is lawful under other acts of Union law or under national
law compatible with Union law, such as on the protection of personal data, on the use of polygraphs and
similar tools or other systems to detect the emotional state of natural persons. Any such use should
continue to occur solely in accordance with the applicable requirements resulting from the Charter and
from the applicable acts of secondary Union law and national law. This Regulation should not be
understood as providing for the legal ground for processing of personal data, including special categories
of personal data, where relevant, unless it is specifically otherwise provided for in this Regulation.
(64)To mitigate the risks from high-risk AI systems placed on the market or put into service and to ensure
a high level of trustworthiness, certain mandatory requirements should apply to high-risk AI systems,
taking into account the intended purpose and the context of use of the AI system and according to the risk-
management system to be established by the provider. The measures adopted by the providers to comply
with the mandatory requirements of this Regulation should take into account the generally acknowledged
state of the art on AI, be proportionate and effective to meet the objectives of this Regulation. Based on the
New Legislative Framework, as clarified in Commission notice ‘The “Blue Guide” on the implementation
of EU product rules 2022’, the general rule is that more than one legal act of Union harmonisation
legislation may be applicable to one product, since the making available or putting into service can take
place only when the product complies with all applicable Union harmonisation legislation. The hazards of
AI systems covered by the requirements of this Regulation concern different aspects than the existing
Union harmonisation legislation and therefore the requirements of this Regulation would complement the
existing body of the Union harmonisation legislation. For example, machinery or medical devices products
incorporating an AI system might present risks not addressed by the essential health and safety
requirements set out in the relevant Union harmonised legislation, as that sectoral law does not deal with
risks specific to AI systems. This calls for a simultaneous and complementary application of the various
legislative acts. To ensure consistency and to avoid an unnecessary administrative burden and unnecessary
costs, providers of a product that contains one or more high-risk AI systems, to which the requirements of
this Regulation and of the Union harmonisation legislation based on the New Legislative Framework and
listed in an annex to this Regulation apply , should have flexibility with regard to operational decisions on
how to ensure compliance of a product that contains one or more AI systems with all the applicable
requirements of that Union harmonised legislation in an optimal manner. That flexibility could mean, for
example, a decision by the provider to integrate a part of the necessary testing and reporting processes,
information and documentation required under this Regulation into already existing documentation and
procedures required under existing Union harmonisation legislation based on the New Legislative
Framework and listed in an annex to this Regulation. This should not, in any way, undermine the obligation
of the provider to comply with all the applicable requirements.
(65)The risk-management system should consist of a continuous, iterative process that is planned and run
throughout the entire lifecycle of a high-risk AI system. That process should be aimed at identifying and
mitigating the relevant risks of AI systems on health, safety and fundamental rights. The risk-management
system should be regularly reviewed and updated to ensure its continuing effectiveness, as well as
justification and documentation of any significant decisions and actions taken subject to this Regulation.
This process should ensure that the provider identifies risks or adverse impacts and implements mitigation
measures for the known and reasonably foreseeable risks of AI systems to the health, safety and
fundamental rights in light of their intended purpose and reasonably foreseeable misuse, including the
possible risks arising from the interaction between the AI system and the environment within which it
operates. The risk-management system should adopt the most appropriate risk-management measures in
light of the state of the art in AI. When identifying the most appropriate risk-management measures, the
provider should document and explain the choices made and, when relevant, involve experts and external
stakeholders. In identifying the reasonably foreseeable misuse of high-risk AI systems, the provider should
cover uses of AI systems which, while not directly covered by the intended purpose and provided for in the
instructions for use, may nevertheless be reasonably expected to result from readily predictable human
behaviour in the context of the specific characteristics and use of a particular AI system. Any known or
foreseeable circumstances related to the use of the high-risk AI system in accordance with its intended
purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and
safety or fundamental rights should be included in the instructions for use that are provided by the
provider . This is to ensure that the deployer is aware and takes them into account when using the high-risk
AI system. Identifying and implementing risk mitigation measures for foreseeable misuse under this
Regulation should not require specific additional training for the high-risk AI system by the provider to
address foreseeable misuse. The providers, however, are encouraged to consider such additional training
measures to mitigate reasonably foreseeable misuses as necessary and appropriate.
(66) Requirements should apply to high-risk AI systems as regards risk management, the quality and relevance
of data sets used, technical documentation and record-keeping, transparency and the provision of
information to deployers, human oversight, and robustness, accuracy and cybersecurity. Those
requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights. As no
other less trade-restrictive measures are reasonably available, those requirements are not unjustified
restrictions to trade.
(67) High-quality data and access to high-quality data plays a vital role in providing structure and in ensuring
the performance of many AI systems, especially when techniques involving the training of models are
used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not
become a source of discrimination prohibited by Union law. High-quality data sets for training, validation
and testing require the implementation of appropriate data governance and management practices. Data
sets for training, validation and testing, including the labels, should be relevant, sufficiently representative,
and to the best extent possible free of errors and complete in view of the intended purpose of the system. In
order to facilitate compliance with Union data protection law, such as Regulation (EU) 2016/679, data
governance and management practices should include, in the case of personal data, transparency about the
original purpose of the data collection. The data sets should also have the appropriate statistical properties,
including as regards the persons or groups of persons in relation to whom the high-risk AI system is
intended to be used, with specific attention to the mitigation of possible biases in the data sets, that are
likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to
discrimination prohibited under Union law, especially where data outputs influence inputs for future
operations (feedback loops). Biases can for example be inherent in underlying data sets, especially when
historical data is being used, or generated when the systems are implemented in real world settings. Results
provided by AI systems could be influenced by such inherent biases that are inclined to gradually increase
and thereby perpetuate and amplify existing discrimination, in particular for persons belonging to certain
vulnerable groups, including racial or ethnic groups. The requirement for the data sets to be to the best
extent possible complete and free of errors should not affect the use of privacy-preserving techniques in the
context of the development and testing of AI systems. In particular, data sets should take into account, to
the extent required by their intended purpose, the features, characteristics or elements that are particular to
the specific geographical, contextual, behavioural or functional setting within which the AI system is
intended to be used. The requirements related to data governance can be complied with by having recourse
to third parties that offer certified compliance services including verification of data governance, data set
integrity, and data training, validation and testing practices, as far as compliance with the data requirements
of this Regulation is ensured.
(68) For the development and assessment of high-risk AI systems, certain actors, such as providers, notified
bodies and other relevant entities, such as European Digital Innovation Hubs, testing and experimentation
facilities and researchers, should be able to access and use high-quality data sets within the fields of
activities of those actors which are related to this Regulation. European common data spaces established by
the Commission and the facilitation of data sharing between businesses and with government in the public
interest will be instrumental to provide trustful, accountable and non-discriminatory access to high-quality
data for the training, validation and testing of AI systems. For example, in health, the European health data
space will facilitate non-discriminatory access to health data and the training of AI algorithms on those
data sets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with an
appropriate institutional governance. Relevant competent authorities, including sectoral ones, providing or
supporting the access to data may also support the provision of high-quality data for the training, validation
and testing of AI systems.
(69) The right to privacy and to protection of personal data must be guaranteed throughout the entire lifecycle
of the AI system. In this regard, the principles of data minimisation and data protection by design and by
default, as set out in Union data protection law, are applicable when personal data are processed. Measures
taken by providers to ensure compliance with those principles may include not only anonymisation and
encryption, but also the use of technology that permits algorithms to be brought to the data and allows
training of AI systems without the transmission between parties or copying of the raw or structured data
themselves, without prejudice to the requirements on data governance provided for in this Regulation.
(70) In order to protect the right of others from the discrimination that might result from the bias in AI systems,
the providers should, exceptionally, to the extent that it is strictly necessary for the purpose of ensuring bias
detection and correction in relation to the high-risk AI systems, subject to appropriate safeguards for the
fundamental rights and freedoms of natural persons and following the application of all applicable
conditions laid down under this Regulation in addition to the conditions laid down in Regulations
(EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, be able to process also special
categories of personal data, as a matter of substantial public interest within the meaning of Article 9(2),
point (g), of Regulation (EU) 2016/679 and Article 10(2), point (g), of Regulation (EU) 2018/1725.
(71) Having comprehensible information on how high-risk AI systems have been developed and how they
perform throughout their lifetime is essential to enable traceability of those systems, verify compliance
with the requirements under this Regulation, as well as monitoring of their operations and post-market
monitoring. This requires keeping records and the availability of technical documentation, containing
information which is necessary to assess the compliance of the AI system with the relevant requirements
and facilitate post-market monitoring. Such information should include the general characteristics,
capabilities and limitations of the system, algorithms, data, training, testing and validation processes used,
as well as documentation on the relevant risk-management system, drawn up in a clear and comprehensive
form. The technical documentation should be kept up to date, as appropriate, throughout the lifetime of the
AI system. Furthermore, high-risk AI systems should technically allow for the automatic recording of
events, by means of logs, over the duration of the lifetime of the system.
(72) To address concerns related to opacity and complexity of certain AI systems and help deployers to fulfil
their obligations under this Regulation, transparency should be required for high-risk AI systems before
they are placed on the market or put into service. High-risk AI systems should be designed in a manner to
enable deployers to understand how the AI system works, evaluate its functionality, and comprehend its
strengths and limitations. High-risk AI systems should be accompanied by appropriate information in the
form of instructions of use. Such information should include the characteristics, capabilities and limitations
of performance of the AI system. Those would cover information on possible known and foreseeable
circumstances related to the use of the high-risk AI system, including deployer action that may influence
system behaviour and performance, under which the AI system can lead to risks to health, safety , and
fundamental rights, on the changes that have been pre-determined and assessed for conformity by the
provider and on the relevant human oversight measures, including the measures to facilitate the
interpretation of the outputs of the AI system by the deployers. Transparency, including the accompanying
instructions for use, should assist deployers in the use of the system and support informed decision making
by them. Deployers should, inter alia, be in a better position to make the correct choice of the system that
they intend to use in light of the obligations applicable to them, be educated about the intended and
precluded uses, and use the AI system correctly and as appropriate. In order to enhance legibility and
accessibility of the information included in the instructions of use, where appropriate, illustrative
examples, for instance on the limitations and on the intended and precluded uses of the AI system, should
be included. Providers should ensure that all documentation, including the instructions for use, contains
meaningful, comprehensive, accessible and understandable information, taking into account the needs and
foreseeable knowledge of the target deployers. Instructions for use should be made available in a language
which can be easily understood by target deployers, as determined by the Member State concerned.
(73) High-risk AI systems should be designed and developed in such a way that natural persons can oversee
their functioning, ensure that they are used as intended and that their impacts are addressed over the
system’s lifecycle. To that end, appropriate human oversight measures should be identified by the provider
of the system before its placing on the market or putting into service. In particular, where appropriate, such
measures should guarantee that the system is subject to in-built operational constraints that cannot be
overridden by the system itself and is responsive to the human operator, and that the natural persons to
whom human oversight has been assigned have the necessary competence, training and authority to carry
out that role. It is also essential, as appropriate, to ensure that high-risk AI systems include mechanisms to
guide and inform a natural person to whom human oversight has been assigned to make informed decisions
if, when and how to intervene in order to avoid negative consequences or risks, or stop the system if it does
not perform as intended. Considering the significant consequences for persons in the case of an incorrect
match by certain biometric identification systems, it is appropriate to provide for an enhanced human
oversight requirement for those systems so that no action or decision may be taken by the deployer on the
basis of the identification resulting from the system unless this has been separately verified and confirmed
by at least two natural persons. Those persons could be from one or more entities and include the person
operating or using the system. This requirement should not pose unnecessary burdens or delays, and it could
be sufficient that the separate verifications by the different persons are automatically recorded in the logs
generated by the system. Given the specificities of the areas of law enforcement, migration, border control
and asylum, this requirement should not apply where Union or national law considers the application of
that requirement to be disproportionate.
(74) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level
of accuracy, robustness and cybersecurity, in light of their intended purpose and in accordance with the
generally acknowledged state of the art. The Commission and relevant organisations and stakeholders are
encouraged to take due consideration of the mitigation of risks and the negative impacts of the AI system.
The expected level of performance metrics should be declared in the accompanying instructions of use.
Providers are urged to communicate that information to deployers in a clear and easily understandable way,
free of misunderstandings or misleading statements. Union law on legal metrology, including Directives
2014/31/EU (35) and 2014/32/EU (36) of the European Parliament and of the Council, aims to ensure the
accuracy of measurements and to help the transparency and fairness of commercial transactions. In that
context, in cooperation with relevant stakeholders and organisations, such as metrology and benchmarking
authorities, the Commission should encourage, as appropriate, the development of benchmarks and
measurement methodologies for AI systems. In doing so, the Commission should take note of, and
collaborate with, international partners working on metrology and relevant measurement indicators relating to AI.
(75) Technical robustness is a key requirement for high-risk AI systems. They should be resilient in relation to
harmful or otherwise undesirable behaviour that may result from limitations within the systems or the
environment in which the systems operate (e.g. errors, faults, inconsistencies, unexpected situations).
Therefore, technical and organisational measures should be taken to ensure robustness of high-risk AI
systems, for example by designing and developing appropriate technical solutions to prevent or minimise
harmful or otherwise undesirable behaviour. Those technical solutions may include, for instance, mechanisms
enabling the system to safely interrupt its operation (fail-safe plans) in the presence of certain anomalies or
when operation takes place outside certain predetermined boundaries. Failure to protect against these risks
could lead to safety impacts or negatively affect the fundamental rights, for example due to erroneous
decisions or wrong or biased outputs generated by the AI system.
(76) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their
use, behaviour, performance or compromise their security properties by malicious third parties exploiting
the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI-specific assets, such as
training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks or membership inference),
or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure
a level of cybersecurity appropriate to the risks, suitable measures, such as security controls, should
therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the
underlying ICT infrastructure.
(77) Without prejudice to the requirements related to robustness and accuracy set out in this Regulation,
high-risk AI systems which fall within the scope of a regulation of the European Parliament and of the Council
on horizontal cybersecurity requirements for products with digital elements, in accordance with that
regulation may demonstrate compliance with the cybersecurity requirements of this Regulation by
fulfilling the essential cybersecurity requirements set out in that regulation. When high-risk AI systems
fulfil the essential requirements of a regulation of the European Parliament and of the Council on
horizontal cybersecurity requirements for products with digital elements, they should be deemed compliant
with the cybersecurity requirements set out in this Regulation in so far as the achievement of those
requirements is demonstrated in the EU declaration of conformity or parts thereof issued under that
regulation. To that end, the assessment of the cybersecurity risks associated with a product with digital
elements classified as a high-risk AI system according to this Regulation, carried out under a regulation of
the European Parliament and of the Council on horizontal cybersecurity requirements for products with
digital elements, should consider risks to the cyber resilience of an AI system as regards attempts by
unauthorised third parties to alter its use, behaviour or performance, including AI-specific vulnerabilities
such as data poisoning or adversarial attacks, as well as, as relevant, risks to fundamental rights as required
by this Regulation.
(78) The conformity assessment procedure provided by this Regulation should apply in relation to the essential
cybersecurity requirements of a product with digital elements covered by a regulation of the European
Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements
and classified as a high-risk AI system under this Regulation. However, this rule should not result in
reducing the necessary level of assurance for critical products with digital elements covered by a regulation
of the European Parliament and of the Council on horizontal cybersecurity requirements for products with
digital elements. Therefore, by way of derogation from this rule, high-risk AI systems that fall within the
scope of this Regulation and are also qualified as important and critical products with digital elements
pursuant to a regulation of the European Parliament and of the Council on horizontal cybersecurity
requirements for products with digital elements and to which the conformity assessment procedure based
on internal control set out in an annex to this Regulation applies, are subject to the conformity assessment
provisions of a regulation of the European Parliament and of the Council on horizontal cybersecurity
requirements for products with digital elements insofar as the essential cybersecurity requirements of that
regulation are concerned. In this case, for all the other aspects covered by this Regulation the respective
provisions on conformity assessment based on internal control set out in an annex to this Regulation should
apply. Building on the knowledge and expertise of ENISA on cybersecurity policy and the tasks assigned
to ENISA under Regulation (EU) 2019/881 of the European Parliament and of the Council (37), the
Commission should cooperate with ENISA on issues related to cybersecurity of AI systems.
(79) It is appropriate that a specific natural or legal person, defined as the provider, takes responsibility for the
placing on the market or the putting into service of a high-risk AI system, regardless of whether that natural
or legal person is the person who designed or developed the system.
(80) As signatories to the United Nations Convention on the Rights of Persons with Disabilities, the Union and
the Member States are legally obliged to protect persons with disabilities from discrimination and promote
their equality, to ensure that persons with disabilities have access, on an equal basis with others, to
information and communications technologies and systems, and to ensure respect for privacy for persons
with disabilities. Given the growing importance and use of AI systems, the application of universal design
principles to all new technologies and services should ensure full and equal access for everyone potentially
affected by or using AI technologies, including persons with disabilities, in a way that takes full account of
their inherent dignity and diversity . It is therefore essential that providers ensure full compliance with
accessibility requirements, including Directive (EU) 2016/2102 of the European Parliament and of the
Council (38) and Directive (EU) 2019/882. Providers should ensure compliance with these requirements by
design. Therefore, the necessary measures should be integrated as much as possible into the design of the
high-risk AI system.
(81) The provider should establish a sound quality management system, ensure the accomplishment of the
required conformity assessment procedure, draw up the relevant documentation and establish a robust post-
market monitoring system. Providers of high-risk AI systems that are subject to obligations regarding
quality management systems under relevant sectoral Union law should have the possibility to include the
elements of the quality management system provided for in this Regulation as part of the existing quality
management system provided for in that other sectoral Union law. The complementarity between this
Regulation and existing sectoral Union law should also be taken into accoun t in future standardisation
activities or guidance adopted by the Commission. Public authorities which put into service high-risk AI
systems for their own use may adopt and implement the rules for the quality management system as part of
the quality management system adopted at a national or regional level, as appropriate, taking into account
the specificities of the sector and the competences and organisation of the public authority concerned.
(82) To enable enforcement of this Regulation and create a level playing field for operators, and, taking into
account the different forms of making available of digital products, it is important to ensure that, under all
circumstances, a person established in the Union can provide authorities with all the necessary information
on the compliance of an AI system. Therefore, prior to making their AI systems available in the Union,
providers established in third countries should, by written mandate, appoint an authorised representative
established in the Union. This authorised representative plays a pivotal role in ensuring the compliance of
the high-risk AI systems placed on the market or put into service in the Union by those providers who are
not established in the Union and in serving as their contact person established in the Union.
(83) In light of the nature and complexity of the value chain for AI systems and in line with the New Legislative
Framework, it is essential to ensure legal certainty and facilitate the compliance with this Regulation.
Therefore, it is necessary to clarify the role and the specific obligations of relevant operators along that
value chain, such as importers and distributors who may contribute to the development of AI systems. In
certain situations, those operators could act in more than one role at the same time and should therefore
fulfil cumulatively all relevant obligations associated with those roles. For example, an operator could act
as a distributor and an importer at the same time.
(84) To ensure legal certainty, it is necessary to clarify that, under certain specific conditions, any distributor,
importer, deployer or other third party should be considered to be a provider of a high-risk AI system and
therefore assume all the relevant obligations. This would be the case if that party puts its name or
trademark on a high-risk AI system already placed on the market or put into service, without prejudice to
contractual arrangements stipulating that the obligations are allocated otherwise. This would also be the
case if that party makes a substantial modification to a high-risk AI system that has already been placed on
the market or has already been put into service in a way that it remains a high-risk AI system in accordance
with this Regulation, or if it modifies the intended purpose of an AI system, including a general-purpose AI
system, which has not been classified as high-risk and has already been placed on the market or put into
service, in a way that the AI system becomes a high-risk AI system in accordance with this Regulation.
Those provisions should apply without prejudice to more specific provisions established in certain Union
harmonisation legislation based on the New Legislative Framework, together with which this Regulation
should apply. For example, Article 16(2) of Regulation (EU) 2017/745, establishing that certain changes
should not be considered to be modifications of a device that could affect its compliance with the
applicable requirements, should continue to apply to high-risk AI systems that are medical devices within
the meaning of that Regulation.
(85) General-purpose AI systems may be used as high-risk AI systems by themselves or be components of other
high-risk AI systems. Therefore, due to their particular nature and in order to ensure a fair sharing of
responsibilities along the AI value chain, the providers of such systems should, irrespective of whether they
may be used as high-risk AI systems as such by other providers or as components of high-risk AI systems
and unless provided otherwise under this Regulation, closely cooperate with the providers of the relevant
high-risk AI systems to enable their compliance with the relevant obligations under this Regulation and
with the competent authorities established under this Regulation.
(86) Where, under the conditions laid down in this Regulation, the provider that initially placed the AI system
on the market or put it into service should no longer be considered to be the provider for the purposes of
this Regulation, and when that provider has not expressly excluded the change of the AI system into
a high-risk AI system, the former provider should nonetheless closely cooperate and make available the
necessary information and provide the reasonably expected technical access and other assistance that are
required for the fulfilment of the obligations set out in this Regulation, in particular regarding the
compliance with the conformity assessment of high-risk AI systems.
(87) In addition, where a high-risk AI system that is a safety component of a product which falls within the
scope of Union harmonisation legislation based on the New Legislative Framework is not placed on the
market or put into service independently from the product, the product manufacturer defined in that
legislation should comply with the obligations of the provider established in this Regulation and should, in
particular, ensure that the AI system embedded in the final product complies with the requirements of this
Regulation.
(88) Along the AI value chain, multiple parties often supply AI systems, tools and services but also components
or processes that are incorporated by the provider into the AI system with various objectives, including the
model training, model retraining, model testing and evaluation, integration into software, or other aspects
of model development. Those parties have an important role to play in the value chain towards the provider
of the high-risk AI system into which their AI systems, tools, services, components or processes are
integrated, and should provide by written agreement this provider with the necessary information,
capabilities, technical access and other assistance based on the generally acknowledged state of the art, in
order to enable the provider to fully comply with the obligations set out in this Regulation, without
compromising their own intellectual property rights or trade secrets.
(89) Third parties making accessible to the public tools, services, processes, or AI components other than
general-purpose AI models, should not be mandated to comply with requirements targeting the
responsibilities along the AI value chain, in particular towards the provider that has used or integrated
them, when those tools, services, processes, or AI components are made accessible under a free and open-
source licence. Developers of free and open-source tools, services, processes, or AI components other than
general-purpose AI models should be encouraged to implement widely adopted documentation practices,
such as model cards and data sheets, as a way to accelerate information sharing along the AI value chain,
allowing the promotion of trustworthy AI systems in the Union.
(90) The Commission could develop and recommend voluntary model contractual terms between providers of
high-risk AI systems and third parties that supply tools, services, components or processes that are used or
integrated in high-risk AI systems, to facilitate the cooperation along the value chain. When developing
voluntary model contractual terms, the Commission should also take into account possible contractual
requirements applicable in specific sectors or business cases.
(91) Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their
use, including as regards the need to ensure proper monitoring of the performance of an AI system in
a real-life setting, it is appropriate to set specific responsibilities for deployers. Deployers should in
particular take appropriate technical and organisational measures to ensure they use high-risk AI systems in
accordance with the instructions of use and certain other obligations should be provided for with regard to
monitoring of the functioning of the AI systems and with regard to record-keeping, as appropriate.
Furthermore, deployers should ensure that the persons assigned to implement the instructions for use and
human oversight as set out in this Regulation have the necessary competence, in particular an adequate
level of AI literacy, training and authority to properly fulfil those tasks. Those obligations should be
without prejudice to other deployer obligations in relation to high-risk AI systems under Union or national
law.
(92) This Regulation is without prejudice to obligations for employers to inform or to inform and consult
workers or their representatives under Union or national law and practice, including Directive 2002/14/EC
of the European Parliament and of the Council (39), on decisions to put into service or use AI systems. It
remains necessary to ensure information of workers and their representatives on the planned deployment of
high-risk AI systems at the workplace where the conditions for those information or information and
consultation obligations in other legal instruments are not fulfilled. Moreover, such an information right is
ancillary and necessary to the objective of protecting fundamental rights that underlies this Regulation.
Therefore, an information requirement to that effect should be laid down in this Regulation, without
affecting any existing rights of workers.
(93)Whilst risks related to AI systems can result from the way such systems are designed, risks can as well
stem from how such AI systems are used. Deployers of high-risk AI systems therefore play a critical role in
ensuring that fundamental rights are protected, complementing the obligations of the provider when
developing the AI system. Deployers are best placed to understand how the high-risk AI system will be
used concretely and can therefore identify potential significant risks that were not foreseen in the
development phase, due to a more precise knowledge of the context of use, the persons or groups of
persons likely to be affected, including vulnerable groups. Deployers of high-risk AI systems listed in an
annex to this Regulation also play a critical role in informing natural persons and should, when they make
decisions or assist in making decisions related to natural persons, where applicable, inform the natural
persons that they are subject to the use of the high-risk AI system. This information should include the
intended purpose and the type of decisions it makes. The deployer should also inform the natural persons
about their right to an explanation provided under this Regulation. With regard to high-risk AI systems
used for law enforcement purposes, that obligation should be implemented in accordance with Article 13 of
Directive (EU) 2016/680.
(94)Any processing of biometric data involved in the use of AI systems for biometric identification for the
purpose of law enforcement needs to comply with Article 10 of Directive (EU) 2016/680, that allows such
processing only where strictly necessary, subject to appropriate safeguards for the rights and freedoms of
the data subject, and where authorised by Union or Member State law. Such use, when authorised, also
needs to respect the principles laid down in Article 4(1) of Directive (EU) 2016/680 including lawfulness,
fairness and transparency, purpose limitation, accuracy and storage limitation.
(95)Without prejudice to applicable Union law, in particular Regulation (EU) 2016/679 and Directive (EU)
2016/680, considering the intrusive nature of post-remote biometric identification systems, the use of post-
remote biometric identification systems should be subject to safeguards. Post-remote biometric
identification systems should always be used in a way that is proportionate, legitimate and strictly
necessary, and thus targeted, in terms of the individuals to be identified, the location, temporal scope and
based on a closed data set of legally acquired video footage. In any case, post-remote biometric
identification systems should not be used in the framework of law enforcement to lead to indiscriminate
surveillance. The conditions for post-remote biometric identification should in any case not provide a basis
to circumvent the conditions of the prohibition and strict exceptions for real-time remote biometric
identification.
2/20/25, 8:13 PM L_202401689EN.000101.fmx.xml
https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L_202401689 20/110
(96)In order to efficiently ensure that fundamental rights are protected, deployers of high-risk AI systems that
are bodies governed by public law, or private entities providing public services and deployers of certain
high-risk AI systems listed in an annex to this Regulation, such as banking or insurance entities, should
carry out a fundamental rights impact assessment prior to putting it into use. Services important for
individuals that are of public nature may also be provided by private entities. Private entities providing
such public services are linked to tasks in the public interest such as in the areas of education, healthcare,
social services, housing, administration of justice. The aim of the fundamental rights impact assessment is
for the deployer to identify the specific risks to the rights of individuals or groups of individuals likely to
be affected and to identify measures to be taken in the case of a materialisation of those risks. The impact
assessment should be performed prior to deploying the high-risk AI system, and should be updated when
the deployer considers that any of the relevant factors have changed. The impact assessment should
identify the deployer’s relevant processes in which the high-risk AI system will be used in line with its
intended purpose, and should include a description of the period of time and frequency in which the system
is intended to be used as well as of specific categories of natural persons and groups who are likely to be
affected in the specific context of use. The assessment should also include the identification of specific
risks of harm likely to have an impact on the fundamental rights of those persons or groups. While
performing this assessment, the deployer should take into account information relevant to a proper
assessment of the impact, including but not limited to the information given by the provider of the high-
risk AI system in the instructions for use. In light of the risks identified, deployers should determine
measures to be taken in the case of a materialisation of those risks, including for example governance
arrangements in that specific context of use, such as arrangements for human oversight according to the
instructions for use, or complaint handling and redress procedures, as they could be instrumental in
mitigating risks to fundamental rights in concrete use-cases. After performing that impact assessment, the
deployer should notify the relevant market surveillance authority. Where appropriate, to collect relevant
information necessary to perform the impact assessment, deployers of high-risk AI systems, in particular
when AI systems are used in the public sector, could involve relevant stakeholders, including the
representatives of groups of persons likely to be affected by the AI system, independent experts, and civil
society organisations in conducting such impact assessments and designing measures to be taken in the
case of materialisation of the risks. The European Artificial Intelligence Office (AI Office) should develop
a template for a questionnaire in order to facilitate compliance and reduce the administrative burden for
deployers.
(97)The notion of general-purpose AI models should be clearly defined and set apart from the notion of AI
systems to enable legal certainty. The definition should be based on the key functional characteristics of
a general-purpose AI model, in particular the generality and the capability to competently perform a wide
range of distinct tasks. These models are typically trained on large amounts of data, through various
methods, such as self-supervised, unsupervised or reinforcement learning. General-purpose AI models may
be placed on the market in various ways, including through libraries, application programming interfaces
(APIs), as direct download, or as physical copy. These models may be further modified or fine-tuned into
new models. Although AI models are essential components of AI systems, they do not constitute AI
systems on their own. AI models require the addition of further components, such as for example a user
interface, to become AI systems. AI models are typically integrated into and form part of AI systems. This
Regulation provides specific rules for general-purpose AI models and for general-purpose AI models that
pose systemic risks, which should apply also when these models are integrated or form part of an AI
system. It should be understood that the obligations for the providers of general-purpose AI models should
apply once the general-purpose AI models are placed on the market. When the provider of a general-
purpose AI model integrates its own model into its own AI system that is made available on the market or
put into service, that model should be considered to be placed on the market and, therefore, the obligations
in this Regulation for models should continue to apply in addition to those for AI systems. The obligations
laid down for models should in any case not apply when such a model is used for purely internal
processes that are not essential for providing a product or a service to third parties and the rights of natural
persons are not affected. Considering their potentially significant negative effects, the general-purpose AI
models with systemic risk should always be subject to the relevant obligations under this Regulation. The
definition should not cover AI models used before their placing on the market for the sole purpose of
research, development and prototyping activities. This is without prejudice to the obligation to comply
with this Regulation when, following such activities, a model is placed on the market.
(98)Whereas the generality of a model could, inter alia, also be determined by a number of parameters, models
with at least a billion parameters and trained with a large amount of data using self-supervision at scale
should be considered to display significant generality and to competently perform a wide range of
distinctive tasks.
(99)Large generative AI models are a typical example of a general-purpose AI model, given that they allow for
flexible generation of content, such as in the form of text, audio, images or video, that can readily
accommodate a wide range of distinctive tasks.
(100) When a general-purpose AI model is integrated into or forms part of an AI system, this system should be
considered to be a general-purpose AI system when, due to this integration, this system has the capability to
serve a variety of purposes. A general-purpose AI system can be used directly, or it may be integrated into
other AI systems.
(101) Providers of general-purpose AI models have a particular role and responsibility along the AI value chain,
as the models they provide may form the basis for a range of downstream systems, often provided by
downstream providers that necessitate a good understanding of the models and their capabilities, both to
enable the integration of such models into their products, and to fulfil their obligations under this or other
regulations. Therefore, proportionate transparency measures should be laid down, including the drawing
up and keeping up to date of documentation, and the provision of information on the general-purpose AI
model for its usage by the downstream providers. Technical documentation should be prepared and kept
up to date by the general-purpose AI model provider for the purpose of making it available, upon request,
to the AI Office and the national competent authorities. The minimal set of elements to be included in
such documentation should be set out in specific annexes to this Regulation. The Commission should be
empowered to amend those annexes by means of delegated acts in light of evolving technological
developments.
(102) Software and data, including models, released under a free and open-source licence that allows them to be
openly shared and where users can freely access, use, modify and redistribute them or modified versions
thereof, can contribute to research and innovation in the market and can provide significant growth
opportunities for the Union economy. General-purpose AI models released under free and open-source
licences should be considered to ensure high levels of transparency and openness if their parameters,
including the weights, the information on the model architecture, and the information on model usage are
made publicly available. The licence should be considered to be free and open-source also when it allows
users to run, copy, distribute, study, change and improve software and data, including models, under the
condition that the original provider of the model is credited and the identical or comparable terms of
distribution are respected.
(103) Free and open-source AI components cover the software and data, including models and general-purpose
AI models, tools, services or processes of an AI system. Free and open-source AI components can be
provided through different channels, including their development on open repositories. For the purposes
of this Regulation, AI components that are provided against a price or otherwise monetised, including
through the provision of technical support or other services, including through a software platform, related
to the AI component, or the use of personal data for reasons other than exclusively for improving the
security, compatibility or interoperability of the software, with the exception of transactions between
microenterprises, should not benefit from the exceptions provided to free and open-source AI components.
The fact of making AI components available through open repositories should not, in itself, constitute
a monetisation.
(104) The providers of general-purpose AI models that are released under a free and open-source licence, and
whose parameters, including the weights, the information on the model architecture, and the information
on model usage, are made publicly available should be subject to exceptions as regards the transparency-
related requirements imposed on general-purpose AI models, unless they can be considered to present
a systemic risk, in which case the circumstance that the model is transparent and accompanied by an
open-source licence should not be considered to be a sufficient reason to exclude compliance with the
obligations under this Regulation. In any case, given that the release of general-purpose AI models under
free and open-source licence does not necessarily reveal substantial information on the data set used for
the training or fine-tuning of the model and on how compliance with copyright law was thereby ensured, the
exception provided for general-purpose AI models from compliance with the transparency-related
requirements should not concern the obligation to produce a summary about the content used for model
training and the obligation to put in place a policy to comply with Union copyright law, in particular to
identify and comply with the reservation of rights pursuant to Article 4(3) of Directive (EU) 2019/790 of
the European Parliament and of the Council (40).
(105) General-purpose AI models, in particular large generative AI models, capable of generating text, images,
and other content, present unique innovation opportunities but also challenges to artists, authors, and other
creators and the way their creative content is created, distributed, used and consumed. The development
and training of such models require access to vast amounts of text, images, videos and other data. Text
and data mining techniques may be used extensively in this context for the retrieval and analysis of such
content, which may be protected by copyright and related rights. Any use of copyright protected content
requires the authorisation of the rightsholder concerned unless relevant copyright exceptions and
limitations apply . Directive (EU) 2019/790 introduced exceptions and limitations allowing reproductions
and extractions of works or other subject matter, for the purpose of text and data mining, under certain
conditions. Under these rules, rightsholders may choose to reserve their rights over their works or other
subject matter to prevent text and data mining, unless this is done for the purposes of scientific research.
Where the right to opt out has been expressly reserved in an appropriate manner, providers of general-
purpose AI models need to obtain an authorisation from rightsholders if they want to carry out text and
data mining over such works.
(106) Providers that place general-purpose AI models on the Union market should ensure compliance with the
relevant obligations in this Regulation. To that end, providers of general-purpose AI models should put in
place a policy to comply with Union law on copyright and related rights, in particular to identify and
comply with the reservation of rights expressed by rightsholders pursuant to Article 4(3) of Directive (EU)
2019/790. Any provider placing a general-purpose AI model on the Union market should comply with this
obligation, regardless of the jurisdiction in which the copyright-relevant acts underpinning the training of
those general-purpose AI models take place. This is necessary to ensure a level playing field among
providers of general-purpose AI models where no provider should be able to gain a competitive advantage
in the Union market by applying lower copyright standards than those provided in the Union.
(107) In order to increase transparency on the data that is used in the pre-training and training of general-
purpose AI models, including text and data protected by copyright law, it is adequate that providers of
such models draw up and make publicly available a sufficiently detailed summary of the content used for
training the general-purpose AI model. While taking into due account the need to protect trade secrets and
confidential business information, this summary should be generally comprehensive in its scope instead of
technically detailed to facilitate parties with legitimate interests, including copyright holders, to exercise
and enforce their rights under Union law, for example by listing the main data collections or sets that went
into training the model, such as large private or public databases or data archives, and by providing
a narrative explanation about other data sources used. It is appropriate for the AI Office to provide
a template for the summary, which should be simple, effective, and allow the provider to provide the
required summary in narrative form.
(108) With regard to the obligations imposed on providers of general-purpose AI models to put in place a policy
to comply with Union copyright law and make publicly available a summary of the content used for the
training, the AI Office should monitor whether the provider has fulfilled those obligations without
verifying or proceeding to a work-by-work assessment of the training data in terms of copyright
compliance. This Regulation does not affect the enforcement of copyright rules as provided for under
Union law.
(109) Compliance with the obligations applicable to the providers of general-purpose AI models should be
commensurate and proportionate to the type of model provider, excluding the need for compliance for
persons who develop or use models for non-professional or scientific research purposes, who should
nevertheless be encouraged to voluntarily comply with these requirements. Without prejudice to Union
copyright law, compliance with those obligations should take due account of the size of the provider and
allow simplified ways of compliance for SMEs, including start-ups, that should not represent an excessive
cost and not discourage the use of such models. In the case of a modification or fine-tuning of a model,
the obligations for providers of general-purpose AI models should be limited to that modification or fine-
tuning, for example by complementing the already existing technical documentation with information on
the modifications, including new training data sources, as a means to comply with the value chain
obligations provided in this Regulation.
(110)General-purpose AI models could pose systemic risks which include, but are not limited to, any actual or
reasonably foreseeable negative effects in relation to major accidents, disruptions of critical sectors and
serious consequences to public health and safety; any actual or reasonably foreseeable negative effects on
democratic processes, public and economic security; the dissemination of illegal, false, or discriminatory
content. Systemic risks should be understood to increase with model capabilities and model reach, can
arise along the entire lifecycle of the model, and are influenced by conditions of misuse, model reliability,
model fairness and model security, the level of autonomy of the model, its access to tools, novel or
combined modalities, release and distribution strategies, the potential to remove guardrails and other
factors. In particular, international approaches have so far identified the need to pay attention to risks from
potential intentional misuse or unintended issues of control relating to alignment with human intent;
chemical, biological, radiological, and nuclear risks, such as the ways in which barriers to entry can be
lowered, including for weapons development, design acquisition, or use; offensive cyber capabilities, such
as the ways in which vulnerability discovery, exploitation, or operational use can be enabled; the effects of
interaction and tool use, including for example the capacity to control physical systems and interfere with
critical infrastructure; risks from models making copies of themselves or ‘self-replicating’ or training
other models; the ways in which models can give rise to harmful bias and discrimination with risks to
individuals, communities or societies; the facilitation of disinformation or harming privacy with threats to
democratic values and human rights; risk that a particular event could lead to a chain reaction with
considerable negative effects that could affect up to an entire city, an entire domain activity or an entire
community.
(111)It is appropriate to establish a methodology for the classification of general-purpose AI models as general-
purpose AI models with systemic risk. Since systemic risks result from particularly high capabilities,
a general-purpose AI model should be considered to present systemic risks if it has high-impact
capabilities, evaluated on the basis of appropriate technical tools and methodologies, or significant impact
on the internal market due to its reach. High-impact capabilities in general-purpose AI models means
capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI
models. The full range of capabilities in a model could be better understood after its placing on the market
or when deployers interact with the model. According to the state of the art at the time of entry into force
of this Regulation, the cumulative amount of computation used for the training of the general-purpose AI
model measured in floating point operations is one of the relevant approximations for model capabilities.
The cumulative amount of computation used for training includes the computation used across the
activities and methods that are intended to enhance the capabilities of the model prior to deployment, such
as pre-training, synthetic data generation and fine-tuning. Therefore, an initial threshold of floating point
operations should be set, which, if met by a general-purpose AI model, leads to a presumption that the
model is a general-purpose AI model with systemic risks. This threshold should be adjusted over time to
reflect technological and industrial changes, such as algorithmic improvements or increased hardware
efficiency, and should be supplemented with benchmarks and indicators for model capability. To inform
this, the AI Office should engage with the scientific community, industry, civil society and other experts.
Thresholds, as well as tools and benchmarks for the assessment of high-impact capabilities, should be
strong predictors of generality, its capabilities and associated systemic risk of general-purpose AI models,
and could take into account the way the model will be placed on the market or the number of users it may
affect. To complement this system, there should be a possibility for the Commission to take individual
decisions designating a general-purpose AI model as a general-purpose AI model with systemic risk if it is
found that such model has capabilities or an impact equivalent to those captured by the set threshold. That
decision should be taken on the basis of an overall assessment of the criteria for the designation of
a general-purpose AI model with systemic risk set out in an annex to this Regulation, such as quality or
size of the training data set, number of business and end users, its input and output modalities, its level of
autonomy and scalability, or the tools it has access to. Upon a reasoned request of a provider whose model
has been designated as a general-purpose AI model with systemic risk, the Commission should take the
request into account and may decide to reassess whether the general-purpose AI model can still be
considered to present systemic risks.
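The compute-based presumption described in this recital can be illustrated with a short sketch. The concrete figures used here are assumptions for illustration only: the "6 × parameters × training tokens" rule of thumb is a common engineering approximation for dense-transformer training compute, and the 10^25 floating point operations figure reflects the initial threshold set elsewhere in the Regulation (Article 51), which the Commission may adjust over time; neither the formula nor any specific model size appears in this recital.

```python
# Hedged sketch: checking an estimated cumulative training compute against the
# Regulation's presumption threshold. The 6*N*D approximation and the 1e25 FLOP
# figure are assumptions for illustration, not part of this recital's text.

PRESUMPTION_THRESHOLD_FLOPS = 1e25  # initial threshold, subject to adjustment


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate cumulative training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_training_tokens


def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True when the estimated compute meets or exceeds the presumption threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= PRESUMPTION_THRESHOLD_FLOPS


# Hypothetical examples: a 70e9-parameter model trained on 15e12 tokens uses
# roughly 6.3e24 FLOPs (below 1e25); a 400e9-parameter model on the same data
# uses roughly 3.6e25 FLOPs (above 1e25).
print(presumed_systemic_risk(70e9, 15e12))   # False
print(presumed_systemic_risk(400e9, 15e12))  # True
```

Because providers plan compute allocation before training starts, such an estimate can be made ahead of time, which is the basis for the notification duty described in the following recital.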
(112)It is also necessary to clarify a procedure for the classification of a general-purpose AI model with
systemic risks. A general-purpose AI model that meets the applicable threshold for high-impact
capabilities should be presumed to be a general-purpose AI model with systemic risk. The provider
should notify the AI Office at the latest two weeks after the requirements are met or it becomes known
that a general-purpose AI model will meet the requirements that lead to the presumption. This is especially
relevant in relation to the threshold of floating point operations because training of general-purpose AI
models takes considerable planning which includes the upfront allocation of compute resources and,
therefore, providers of general-purpose AI models are able to know if their model would meet the
threshold before the training is completed. In the context of that notification, the provider should be able
to demonstrate that, because of its specific characteristics, a general-purpose AI model exceptionally does
not present systemic risks, and that it thus should not be classified as a general-purpose AI model with
systemic risks. That information is valuable for the AI Office to anticipate the placing on the market of
general-purpose AI models with systemic risks and the providers can start to engage with the AI Office
early on. That information is especially important with regard to general-purpose AI models that are
planned to be released as open-source, given that, after the open-source model release, necessary measures
to ensure compliance with the obligations under this Regulation may be more difficult to implement.
(113)If the Commission becomes aware of the fact that a general-purpose AI model meets the requirements to
classify as a general-purpose AI model with systemic risk, which previously had either not been known or
of which the relevant provider has failed to notify the Commission, the Commission should be
empowered to designate it so. A system of qualified alerts should ensure that the AI Office is made aware
by the scientific panel of general-purpose AI models that should possibly be classified as general-purpose
AI models with systemic risk, in addition to the monitoring activities of the AI Office.
(114)The providers of general-purpose AI models presenting systemic risks should be subject, in addition to the
obligations provided for providers of general-purpose AI models, to obligations aimed at identifying and
mitigating those risks and ensuring an adequate level of cybersecurity protection, regardless of whether it
is provided as a standalone model or embedded in an AI system or a product. To achieve those objectives,
this Regulation should require providers to perform the necessary model evaluations, in particular prior to
its first placing on the market, including conducting and documenting adversarial testing of models, also,
as appropriate, through internal or independent external testing. In addition, providers of general-purpose
AI models with systemic risks should continuously assess and mitigate systemic risks, including for
example by putting in place risk-management policies, such as accountability and governance processes,
implementing post-market monitoring, taking appropriate measures along the entire model’s lifecycle and
cooperating with relevant actors along the AI value chain.
(115)Providers of general-purpose AI models with systemic risks should assess and mitigate possible systemic
risks. If, despite efforts to identify and prevent risks related to a general-purpose AI model that may
present systemic risks, the development or use of the model causes a serious incident, the general-purpose
AI model provider should without undue delay keep track of the incident and report any relevant
information and possible corrective measures to the Commission and national competent authorities.
Furthermore, providers should ensure an adequate level of cybersecurity protection for the model and its
physical infrastructure, if appropriate, along the entire model lifecycle. Cybersecurity protection related to
systemic risks associated with malicious use or attacks should duly consider accidental model leakage,
unauthorised releases, circumvention of safety measures, and defence against cyberattacks, unauthorised
access or model theft. That protection could be facilitated by securing model weights, algorithms, servers,
and data sets, such as through operational security measures for information security, specific
cybersecurity policies, adequate technical and established solutions, and cyber and physical access
controls, appropriate to the relevant circumstances and the risks involved.
(116)The AI Office should encourage and facilitate the drawing up, review and adaptation of codes of practice,
taking into account international approaches. All providers of general-purpose AI models could be invited
to participate. To ensure that the codes of practice reflect the state of the art and duly take into account
a diverse set of perspectives, the AI Office should collaborate with relevant national competent
authorities, and could, where appropriate, consult with civil society organisations and other relevant
stakeholders and experts, including the Scientific Panel, for the drawing up of such codes. Codes of
practice should cover obligations for providers of general-purpose AI models and of general-purpose AI
models presenting systemic risks. In addition, as regards systemic risks, codes of practice should help to
establish a risk taxonomy of the type and nature of the systemic risks at Union level, including their
sources. Codes of practice should also be focused on specific risk assessment and mitigation measures.
(117)The codes of practice should represent a central tool for the proper compliance with the obligations
provided for under this Regulation for providers of general-purpose AI models. Providers should be able
to rely on codes of practice to demonstrate compliance with the obligations. By means of implementing
acts, the Commission may decide to approve a code of practice and give it a general validity within the
Union, or, alternatively, to provide common rules for the implementation of the relevant obligations, if, by
the time this Regulation becomes applicable, a code of practice cannot be finalised or is not deemed
adequate by the AI Office. Once a harmonised standard is published and assessed as suitable to cover the
relevant obligations by the AI Office, compliance with a European harmonised standard should grant
providers the presumption of conformity. Providers of general-purpose AI models should furthermore be
able to demonstrate compliance using alternative adequate means, if codes of practice or harmonised
standards are not available, or they choose not to rely on those.
(118)This Regulation regulates AI systems and AI models by imposing certain requirements and obligations for
relevant market actors that are placing them on the market, putting into service or use in the Union,
thereby complementing obligations for providers of intermediary services that embed such systems or
models into their services regulated by Regulation (EU) 2022/2065. To the extent that such systems or
models are embedded into designated very large online platforms or very large online search engines, they
are subject to the risk-management framework provided for in Regulation (EU) 2022/2065. Consequently,
the corresponding obligations of this Regulation should be presumed to be fulfilled, unless significant
systemic risks not covered by Regulation (EU) 2022/2065 emerge and are identified in such models.
Within this framework, providers of very large online platforms and very large online search engines are
obliged to assess potential systemic risks stemming from the design, functioning and use of their services,
including how the design of algorithmic systems used in the service may contribute to such risks, as well
as systemic risks stemming from potential misuses. Those providers are also obliged to take appropriate
mitigating measures in observance of fundamental rights.
(119) Considering the quick pace of innovation and the technological evolution of digital services in scope of
different instruments of Union law in particular having in mind the usage and the perception of their
recipients, the AI systems subject to this Regulation may be provided as intermediary services or parts
thereof within the meaning of Regulation (EU) 2022/2065, which should be interpreted in a technology-
neutral manner. For example, AI systems may be used to provide online search engines, in particular, to
the extent that an AI system such as an online chatbot performs searches of, in principle, all websites, then
incorporates the results into its existing knowledge and uses the updated knowledge to generate a single
output that combines different sources of information.
(120) Furthermore, obligations placed on providers and deployers of certain AI systems in this Regulation to
enable the detection and disclosure that the outputs of those systems are artificially generated or
manipulated are particularly relevant to facilitate the effective implementation of Regulation (EU)
2022/2065. This applies in particular as regards the obligations of providers of very large online platforms
or very large online search engines to identify and mitigate systemic risks that may arise from the
dissemination of content that has been artificially generated or manipulated, in particular the risk of the
actual or foreseeable negative effects on democratic processes, civic discourse and electoral processes, including
through disinformation.
(121) Standardisation should play a key role to provide technical solutions to providers to ensure compliance
with this Regulation, in line with the state of the art, to promote innovation as well as competitiveness and
growth in the single market. Compliance with harmonised standards as defined in Article 2, point (1)(c),
of Regulation (EU) No 1025/2012 of the European Parliament and of the Council (41), which are normally
expected to reflect the state of the art, should be a means for providers to demonstrate conformity with the
requirements of this Regulation. A balanced representation of interests involving all relevant stakeholders
in the development of standards, in particular SMEs, consumer organisations and environmental and
social stakeholders in accordance with Articles 5 and 6 of Regulation (EU) No 1025/2012 should
therefore be encouraged. In order to facilitate compliance, the standardisation requests should be issued
by the Commission without undue delay. When preparing the standardisation request, the Commission
should consult the advisory forum and the Board in order to collect relevant expertise. However, in the
absence of relevant references to harmonised standards, the Commission should be able to establish, via
implementing acts, and after consultation of the advisory forum, common specifications for certain
requirements under this Regulation. The common specification should be an exceptional fall back solution
to facilitate the provider’s obligation to comply with the requirements of this Regulation, when the
standardisation request has not been accepted by any of the European standardisation organisations, or
when the relevant harmonised standards insufficiently address fundamental rights concerns, or when the
harmonised standards do not comply with the request, or when there are delays in the adoption of an
appropriate harmonised standard. Where such a delay in the adoption of a harmonised standard is due to
the technical complexity of that standard, this should be considered by the Commission before
contemplating the establishment of common specifications. When developing common specifications, the
Commission is encouraged to cooperate with international partners and international standardisation
bodies.
(122) It is appropriate that, without prejudice to the use of harmonised standards and common specifications,
providers of a high-risk AI system that has been trained and tested on data reflecting the specific
geographical, behavioural, contextual or functional setting within which the AI system is intended to be
used, should be presumed to comply with the relevant measure provided for under the requirement on data
governance set out in this Regulation. Without prejudice to the requirements related to robustness and
accuracy set out in this Regulation, in accordance with Article 54(3) of Regulation (EU) 2019/881, high-
risk AI systems that have been certified or for which a statement of conformity has been issued under
a cybersecurity scheme pursuant to that Regulation and the references of which have been published in
the Official Journal of the European Union should be presumed to comply with the cybersecurity
requirement of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts
thereof cover the cybersecurity requirement of this Regulation. This remains without prejudice to the
voluntary nature of that cybersecurity scheme.
(123) In order to ensure a high level of trustworthiness of high-risk AI systems, those systems should be subject
to a conformity assessment prior to their placing on the market or putting into service.
(124) It is appropriate that, in order to minimise the burden on operators and avoid any possible duplication, for
high-risk AI systems related to products which are covered by existing Union harmonisation legislation
based on the New Legislative Framework, the compliance of those AI systems with the requirements of
this Regulation should be assessed as part of the conformity assessment already provided for in that law.
The applicability of the requirements of this Regulation should thus not affect the specific logic,
methodology or general structure of conformity assessment under the relevant Union harmonisation
legislation.
(125) Given the complexity of high-risk AI systems and the risks that are associated with them, it is important to
develop an adequate conformity assessment procedure for high-risk AI systems involving notified bodies,
so-called third-party conformity assessment. However, given the current experience of professional pre-
market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to
limit, at least in an initial phase of application of this Regulation, the scope of application of third-party
conformity assessment for high-risk AI systems other than those related to products. Therefore, the
conformity assessment of such systems should be carried out as a general rule by the provider under its
own responsibility, with the only exception of AI systems intended to be used for biometrics.
(126) In order to carry out third-party conformity assessments when so required, notified bodies should be
notified under this Regulation by the national competent authorities, provided that they comply with a set
of requirements, in particular on independence, competence, absence of conflicts of interests and suitable
cybersecurity requirements. Notification of those bodies should be sent by national competent authorities
to the Commission and the other Member States by means of the electronic notification tool developed
and managed by the Commission pursuant to Article R23 of Annex I to Decision No 768/2008/EC.
(127) In line with Union commitments under the World Trade Organization Agreement on Technical Barriers to
Trade, it is adequate to facilitate the mutual recognition of conformity assessment results produced by
competent conformity assessment bodies, independent of the territory in which they are established,
provided that those conformity assessment bodies established under the law of a third country meet the
applicable requirements of this Regulation and the Union has concluded an agreement to that extent. In
this context, the Commission should actively explore possible international instruments for that purpose
and in particular pursue the conclusion of mutual recognition agreements with third countries.
(128) In line with the commonly established notion of substantial modification for products regulated by Union
harmonisation legislation, it is appropriate that whenever a change occurs which may affect the
compliance of a high-risk AI system with this Regulation (e.g. change of operating system or software
architecture), or when the intended purpose of the system changes, that AI system should be considered to
be a new AI system which should undergo a new conformity assessment. However, changes occurring to
the algorithm and the performance of AI systems which continue to ‘learn’ after being placed on the
market or put into service, namely automatically adapting how functions are carried out, should not
constitute a substantial modification, provided that those changes have been pre-determined by the
provider and assessed at the moment of the conformity assessment.
(129) High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that
they can move freely within the internal market. For high-risk AI systems embedded in a product,
a physical CE marking should be affixed, and may be complemented by a digital CE marking. For high-
risk AI systems only provided digitally, a digital CE marking should be used. Member States should not
create unjustified obstacles to the placing on the market or the putting into service of high-risk AI systems
that comply with the requirements laid down in this Regulation and bear the CE marking.
(130) Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety
of persons, the protection of the environment and climate change and for society as a whole. It is thus
appropriate that under exceptional reasons of public security or protection of life and health of natural
persons, environmental protection and the protection of key industrial and infrastructural assets, market
surveillance authorities could authorise the placing on the market or the putting into service of AI systems
which have not undergone a conformity assessment. In duly justified situations, as provided for in this
Regulation, law enforcement authorities or civil protection authorities may put a specific high-risk AI
system into service without the authorisation of the market surveillance authority, provided that such
authorisation is requested during or after the use without undue delay.
(131) In order to facilitate the work of the Commission and the Member States in the AI field as well as to
increase the transparency towards the public, providers of high-risk AI systems other than those related to
products falling within the scope of relevant existing Union harmonisation legislation, as well as providers
who consider that an AI system listed in the high-risk use cases in an annex to this Regulation is not high-
risk on the basis of a derogation, should be required to register themselves and information about their AI
system in an EU database, to be established and managed by the Commission. Before using an AI system
listed in the high-risk use cases in an annex to this Regulation, deployers of high-risk AI systems that are
public authorities, agencies or bodies, should register themselves in such database and select the system
that they envisage to use. Other deployers should be entitled to do so voluntarily. This section of the EU
database should be publicly accessible, free of charge, the information should be easily navigable,
understandable and machine-readable. The EU database should also be user-friendly, for example by
providing search functionalities, including through keywords, allowing the general public to find relevant
information to be submitted upon the registration of high-risk AI systems and on the use case of high-risk
AI systems, set out in an annex to this Regulation, to which the high-risk AI systems correspond. Any
substantial modification of high-risk AI systems should also be registered in the EU database. For high-
risk AI systems in the area of law enforcement, migration, asylum and border control management, the
registration obligations should be fulfilled in a secure non-public section of the EU database. Access to the
secure non-public section should be strictly limited to the Commission as well as to market surveillance
authorities with regard to their national section of that database. High-risk AI systems in the area of
critical infrastructure should only be registered at national level. The Commission should be the controller
of the EU database, in accordance with Regulation (EU) 2018/1725. In order to ensure the full
functionality of the EU database, when deployed, the procedure for setting the database should include the
development of functional specifications by the Commission and an independent audit report. The
Commission should take into account cybersecurity risks when carrying out its tasks as data controller on
the EU database. In order to maximise the availability and use of the EU database by the public, the EU
database, including the information made available through it, should comply with requirements under the
Directive (EU) 2019/882.
(132) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks
of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain
circumstances, the use of these systems should therefore be subject to specific transparency obligations
without prejudice to the requirements and obligations for high-risk AI system s and subject to targeted
exceptions to take into account the special need of law enforcement. In particular, natural persons should
be notified that they are interacting with an AI system, unless this is obvious from the point of view of
a natural person who is reasonably well-informed, observant and circumspect taking into account the
circumstances and the context of use. When implementing that obligation, the characteristics of natural
persons belonging to vulnerable groups due to their age or disability should be taken into account to the
extent the AI system is intended to interact with those groups as well. Moreover, natural persons should be
notified when they are exposed to AI systems that, by processing their biometric data, can identify or infer
the emotions or intentions of those persons or assign them to specific categories. Such specific categories
can relate to aspects such as sex, age, hair colour, eye colour, tattoos, personal traits, ethnic origin,
personal preferences and interests. Such information and notifications should be provided in accessible
formats for persons with disabilities.
(133) A variety of AI systems can generate large quantities of synthetic content that becomes increasingly hard
for humans to distinguish from human-generated and authentic content. The wide availability and
increasing capabilities of those systems have a significant impact on the integrity and trust in the
information ecosystem, raising new risks of misinformation and manipulation at scale, fraud,
impersonation and consumer deception. In light of those impacts, the fast technological pace and the need
for new methods and techniques to trace origin of information, it is appropriate to require providers of
those systems to embed technical solutions that enable marking in a machine-readable format and
detection that the output has been generated or manipulated by an AI system and not a human. Such
techniques and methods should be sufficiently reliable, interoperable, effective and robust as far as this is
technically feasible, taking into account available techniques or a combination of such techniques, such as
watermarks, metadata identifications, cryptographic methods for proving provenance and authenticity of
content, logging methods, fingerprints or other techniques, as may be appropriate. When implementing
this obligation, providers should also take into account the specificities and the limitations of the different
types of content and the relevant technological and market developments in the field, as reflected in the
generally acknowledged state of the art. Such techniques and methods can be implemented at the level of
the AI system or at the level of the AI model, including general-purpose AI models generating content,
thereby facilitating fulfilment of this obligation by the downstream provider of the AI system. To remain
proportionate, it is appropriate to envisage that this marking obligation should not cover AI systems
performing primarily an assistive function for standard editing or AI systems not substantially altering the
input data provided by the deployer or the semantics thereof.
(134) Further to the technical solutions employed by the providers of the AI system, deployers who use an AI
system to generate or manipulate image, audio or video content that appreciably resembles existing
persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful
(deep fakes), should also clearly and distinguishably disclose that the content has been artificially created
or manipulated by labelling the AI output accordingly and disclosing its artificial origin. Compliance with
this transparency obligation should not be interpreted as indicating that the use of the AI system or its
output impedes the right to freedom of expression and the right to freedom of the arts and sciences
guaranteed in the Charter, in particular where the content is part of an evidently creative, satirical, artistic,
fictional or analogous work or programme, subject to appropriate safeguards for the rights and freedoms
of third parties. In those cases, the transparency obligation for deep fakes set out in this Regulation is
limited to disclosure of the existence of such generated or manipulated content in an appropriate manner
that does not hamper the display or enjoyment of the work, including its normal exploitation and use,
while maintaining the utility and quality of the work. In addition, it is also appropriate to envisage
a similar disclosure obligation in relation to AI-generated or manipulated text to the extent it is published
with the purpose of informing the public on matters of public interest unless the AI-generated content has
undergone a process of human review or editorial control and a natural or legal person holds editorial
responsibility for the publication of the content.
(135) Without prejudice to the mandatory nature and full applicability of the transparency obligations, the
Commission may also encourage and facilitate the drawing up of codes of practice at Union level to
facilitate the effective implementation of the obligations regarding the detection and labelling of
artificially generated or manipulated content, including to support practical arrangements for making, as
appropriate, the detection mechanisms accessible and facilitating cooperation with other actors along the
value chain, disseminating content or checking its authenticity and provenance to enable the public to
effectively distinguish AI-generated content.
(136) The obligations placed on providers and deployers of certain AI systems in this Regulation to enable the
detection and disclosure that the outputs of those systems are artificially generated or manipulated are
particularly relevant to facilitate the effective implementation of Regulation (EU) 2022/2065. This applies
in particular as regards the obligations of providers of very large online platforms or very large online
search engines to identify and mitigate systemic risks that may arise from the dissemination of content
that has been artificially generated or manipulated, in particular the risk of the actual or foreseeable
negative effects on democratic processes, civic discourse and electoral processes, including through
disinformation. The requirement to label content generated by AI systems under this Regulation is without
prejudice to the obligation in Article 16(6) of Regulation (EU) 2022/2065 for providers of hosting services
to process notices on illegal content received pursuant to Article 16(1) of that Regulation and should not
influence the assessment and the decision on the illegality of the specific content. That assessment should
be performed solely with reference to the rules governing the legality of the content.
(137) Compliance with the transparency obligations for the AI systems covered by this Regulation should not be
interpreted as indicating that the use of the AI system or its output is lawful under this Regulation or other
Union and Member State law and should be without prejudice to other transparency obligations for
deployers of AI systems laid down in Union or national law.
(138) AI is a rapidly developing family of technologies that requires regulatory oversight and a safe and
controlled space for experimentation, while ensuring responsible innovation and integration of appropriate
safeguards and risk mitigation measures. To ensure a legal framework that promotes innovation, is future-
proof and resilient to disruption, Member States should ensure that their national competent authorities
establish at least one AI regulatory sandbox at national level to facilitate the development and testing of
innovative AI systems under strict regulatory oversight before these systems are placed on the market or
otherwise put into service. Member States could also fulfil this obligation through participating in already
existing regulatory sandboxes or establishing jointly a sandbox with one or more Member States’
competent authorities, insofar as this participation provides an equivalent level of national coverage for the
participating Member States. AI regulatory sandboxes could be established in physical, digital or hybrid
form and may accommodate physical as well as digital products. Establishing authorities should also
ensure that the AI regulatory sandboxes have the adequate resources for their functioning, including
financial and human resources.
(139) The objectives of the AI regulatory sandboxes should be to foster AI innovation by establishing
a controlled experimentation and testing environment in the development and pre-marketing phase with
a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union
and national law. Moreover, the AI regulatory sandboxes should aim to enhance legal certainty for
innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks
and the impacts of AI use, to facilitate regulatory learning for authorities and undertakings, including with
a view to future adaptions of the legal framework, to support cooperation and the sharing of best practices
with the authorities involved in the AI regulatory sandbox, and to accelerate access to markets, including
by removing barriers for SMEs, including start-ups. AI regulatory sandboxes should be widely available
throughout the Union, and particular attention should be given to their accessibility for SMEs, including
start-ups. The participation in the AI regulatory sandbox should focus on issues that raise legal uncertainty
for providers and prospective providers to innovate, experiment with AI in the Union and contribute to
evidence-based regulatory learning. The supervision of the AI systems in the AI regulatory sandbox
should therefore cover their development, training, testing and validation before the systems are placed on
the market or put into service, as well as the notion and occurrence of substantial modification that may
require a new conformity assessment procedure. Any significant risks identified during the development
and testing of such AI systems should result in adequate mitigation and, failing that, in the suspension of
the development and testing process. Where appropriate, national competent authorities establishing AI
regulatory sandboxes should cooperate with other relevant authorities, including those supervising the
protection of fundamental rights, and could allow for the involvement of other actors within the AI
ecosystem such as national or European standardisation organisations, notified bodies, testing and
experimentation facilities, research and experimentation labs, European Digital Innovation Hubs and
relevant stakeholder and civil society organisations. To ensure uniform implementation across the Union
and economies of scale, it is appropriate to establish common rules for the AI regulatory sandboxes’
implementation and a framework for cooperation between the relevant authorities involved in the
supervision of the sandboxes. AI regulatory sandboxes established under this Regulation should be
without prejudice to other law allowing for the establishment of other sandboxes aiming to ensure
compliance with law other than this Regulation. Where appropriate, relevant competent authorities in
charge of those other regulatory sandboxes should consider the benefits of using those sandboxes also for
the purpose of ensuring compliance of AI systems with this Regulation. Upon agreement between the
national competent authorities and the participants in the AI regulatory sandbox, testing in real world
conditions may also be operated and supervised in the framework of the AI regulatory sandbox.
(140) This Regulation should provide the legal basis for the providers and prospective providers in the AI
regulatory sandbox to use personal data collected for other purposes for developing certain AI systems in
the public interest within the AI regulatory sandbox, only under specified conditions, in accordance with
Article 6(4) and Article 9(2), point (g), of Regulation (EU) 2016/679, and Articles 5, 6 and 10 of
Regulation (EU) 2018/1725, and without prejudice to Article 4(2) and Article 10 of Directive (EU)
2016/680. All other obligations of data controllers and rights of data subjects under Regulations (EU)
2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680 remain applicable. In particular, this
Regulation should not provide a legal basis in the meaning of Article 22(2), point (b) of Regulation (EU)
2016/679 and Article 24(2), point (b) of Regulation (EU) 2018/1725. Providers and prospective providers
in the AI regulatory sandbox should ensure appropriate safeguards and cooperate with the competent
authorities, including by following their guidance and acting expeditiously and in good faith to adequately
mitigate any identified significant risks to safety, health, and fundamental rights that may arise during the
development, testing and experimentation in that sandbox.
(141) In order to accelerate the process of development and the placing on the market of the high-risk AI
systems listed in an annex to this Regulation, it is important that providers or prospective providers of
such systems may also benefit from a specific regime for testing those systems in real world conditions,
without participating in an AI regulatory sandbox. However, in such cases, taking into account the
possible consequences of such testing on individuals, it should be ensured that appropriate and sufficient
guarantees and conditions are introduced by this Regulation for providers or prospective providers. Such
guarantees should include, inter alia, requesting informed consent of natural persons to participate in
testing in real world conditions, with the exception of law enforcement where the seeking of informed
consent would prevent the AI system from being tested. Consent of subjects to participate in such testing
under this Regulation is distinct from, and without prejudice to, consent of data subjects for the processing
of their personal data under the relevant data protection law. It is also important to minimise the risks and
enable oversight by competent authorities and therefore require prospective providers to have a real-world
testing plan submitted to the competent market surveillance authority, register the testing in dedicated sections
in the EU database subject to some limited exceptions, set limitations on the period for which the testing
can be done and require additional safeguards for persons belonging to certain vulnerable groups, as well
as a written agreement defining the roles and responsibilities of prospective providers and deployers and
effective oversight by competent personnel involved in the real world testing. Furthermore, it is
appropriate to envisage additional safeguards to ensure that the predictions, recommendations or decisions
of the AI system can be effectively reversed and disregarded and that personal data is protected and is
deleted when the subjects have withdrawn their consent to participate in the testing without prejudice to
their rights as data subjects under the Union data protection law. As regards transfer of data, it is also
appropriate to envisage that data collected and processed for the purpose of testing in real-world
conditions should be transferred to third countries only where appropriate and applicable safeguards under
Union law are implemented, in particular in accordance with bases for transfer of personal data under
Union law on data protection, while for non-personal data appropriate safeguards are put in place in
accordance with Union law, such as Regulations (EU) 2022/868 (42) and (EU) 2023/2854 (43) of the
European Parliament and of the Council.
(142) To ensure that AI leads to socially and environmentally beneficial outcomes, Member States are
encouraged to support and promote research and development of AI solutions in support of socially and
environmentally beneficial outcomes, such as AI-based solutions to increase accessibility for persons with
disabilities, tackle socio-economic inequalities, or meet environmental targets, by allocating sufficient
resources, including public and Union funding, and, where appropriate and provided that the eligibility
and selection criteria are fulfilled, considering in particular projects which pursue such objectives. Such
projects should be based on the principle of interdisciplinary cooperation betwe en AI developers, experts
on inequality and non-discrimination, accessibility, consumer, environmental, and digital rights, as well as
academics.
(143) In order to promote and protect innovation, it is important that the interests of SMEs, including start-ups,
that are providers or deployers of AI systems are taken into particular account. To that end, Member States
should develop initiatives, which are targeted at those operators, including on awareness raising and
information communication. Member States should provide SMEs, including start-ups, that have
a registered office or a branch in the Union, with priority access to the AI regulatory sandboxes provided
that they fulfil the eligibility conditions and selection criteria and without precluding other providers and
prospective providers from accessing the sandboxes provided the same conditions and criteria are fulfilled.
Member States should utilise existing channels and, where appropriate, establish new dedicated channels
for communication with SMEs, including start-ups, deployers, other innovators and, as appropriate, local
public authorities, to support SMEs throughout their development path by providing guidance and
responding to queries about the implementation of this Regulation. Where appropriate, these channels
should work together to create synergies and ensure homogeneity in their guidance to SMEs, including
start-ups, and deployers. Additionally, Member States should facilitate the participation of SMEs and
other relevant stakeholders in the standardisation development processes. Moreover, the specific interests
and needs of providers that are SMEs, including start-ups, should be taken into account when notified
bodies set conformity assessment fees. The Commission should regularly assess the certification and
compliance costs for SMEs, including start-ups, through transparent consultations and should work with
Member States to lower such costs. For example, translation costs related to mandatory documentation
and communication with authorities may constitute a significant cost for providers and other operators, in
particular those of a smaller scale. Member States should possibly ensure that one of the languages
determined and accepted by them for relevant providers’ documentation and for communication with
operators is one which is broadly understood by the largest possible number of cross-border deployers. In
order to address the specific needs of SMEs, including start-ups, the Commission should provide
standardised templates for the areas covered by this Regulation, upon request of the Board. Additionally,
the Commission should complement Member States’ efforts by providing a single information platform
with easy-to-use information with regards to this Regulation for all providers and deployers, by organising
appropriate communication campaigns to raise awareness about the obligations arising from this
Regulation, and by evaluating and promoting the convergence of best practices in public procurement
procedures in relation to AI systems. Medium-sized enterprises which until recently qualified as small
enterprises within the meaning of the Annex to Commission Recommendation 2003/361/EC (44) should
have access to those support measures, as those new medium-sized enterprises may sometimes lack the
legal resources and training necessary to ensure proper understanding of, and compliance with, this
Regulation.
(144) In order to promote and protect innovation, the AI-on-demand platform, all relevant Union funding
programmes and projects, such as Digital Europe Programme, Horizon Europe, implemented by the
Commission and the Member States at Union or national level should, as appropriate, contribute to the
achievement of the objectives of this Regulation.
(145) In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the
market as well as to facilitate compliance of providers, in particular SMEs, including start-ups, and
notified bodies with their obligations under this Regulation, the AI-on-demand platform, the European
Digital Innovation Hubs and the testing and experimentation facilities established by the Commission and
the Member States at Union or national level should contribute to the implementation of this Regulation.
Within their respective mission and fields of competence, the AI-on-demand platform, the European
Digital Innovation Hubs and the testing and experimentation facilities are able to provide in particular
technical and scientific support to providers and notified bodies.
(146) Moreover, in light of the very small size of some operators and in order to ensure proportionality
regarding costs of innovation, it is appropriate to allow microenterprises to fulfil one of the most costly
obligations, namely to establish a quality management system, in a simplified manner which would
reduce the administrative burden and the costs for those enterprises without affecting the level of
protection and the need for compliance with the requirements for high-risk AI systems. The Commission
should develop guidelines to specify the elements of the quality management system to be fulfilled in this
simplified manner by microenterprises.
(147) It is appropriate that the Commission facilitates, to the extent possible, access to testing and
experimentation facilities to bodies, groups or laboratories established or accredited pursuant to any
relevant Union harmonisation legislation and which fulfil tasks in the context of conformity assessment of
products or devices covered by that Union harmonisation legislation. This is, in particular, the case as
regards expert panels, expert laboratories and reference laboratories in the field of medical devices
pursuant to Regulations (EU) 2017/745 and (EU) 2017/746.
(148) This Regulation should establish a governance framework that both allows to coordinate and support the
application of this Regulation at national level, as well as build capabilities at Union level and integrate
stakeholders in the field of AI. The effective implementation and enforcement of this Regulation require
a governance framework that allows to coordinate and build up central expertise at Union level. The AI
Office was established by Commission Decision (45) and has as its mission to develop Union expertise
and capabilities in the field of AI and to contribute to the implementation of Union law on AI.
Member States should facilitate the tasks of the AI Office with a view to support the development of
Union expertise and capabilities at Union level and to strengthen the functioning of the digital single
market. Furthermore, a Board composed of representatives of the Member States, a scientific panel to
integrate the scientific community and an advisory forum to contribute stakeholder input to the
implementation of this Regulation, at Union and national level, should be established. The development of
Union expertise and capabilities should also include making use of existing resources and expertise, in
particular through synergies with structures built up in the context of the Union level enforcement of other
law and synergies with related initiatives at Union level, such as the EuroHPC Joint Undertaking and the
AI testing and experimentation facilities under the Digital Europe Programme.
(149) In order to facilitate a smooth, effective and harmonised implementation of this Regulation a Board
should be established. The Board should reflect the various interests of the AI eco-system and be
composed of representatives of the Member States. The Board should be responsible for a number of
advisory tasks, including issuing opinions, recommendations, advice or contributing to guidance on
matters related to the implementation of this Regulation, including on enforcement matters, technical
specifications or existing standards regarding the requirements established in this Regulation and
providing advice to the Commission and the Member States and their national competent authorities on
specific questions related to AI. In order to give some flexibility to Member States in the designation of
their representatives in the Board, such representatives may be any persons belonging to public entities
who should have the relevant competences and powers to facilitate coordination at national level and
contribute to the achievement of the Board’s tasks. The Board should establish two standing sub-groups to
provide a platform for cooperation and exchange among market surveillance authorities and notifying
authorities on issues related, respectively, to market surveillance and notified bodies. The standing
subgroup for market surveillance should act as the administrative cooperation group (ADCO) for this
Regulation within the meaning of Article 30 of Regulation (EU) 2019/1020. In accordance with Article 33
of that Regulation, the Commission should support the activities of the standing subgroup for market
surveillance by undertaking market evaluations or studies, in particular with a view to identifying aspects
of this Regulation requiring specific and urgent coordination among market surveillance authorities. The
Board may establish other standing or temporary sub-groups as appropriate for the purpose of examining
specific issues. The Board should also cooperate, as appropriate, with relevant Union bodies, expert
groups and networks active in the context of relevant Union law, including in particular those active under
relevant Union law on data, digital products and services.
(150) With a view to ensuring the involvement of stakeholders in the implementation and application of this
Regulation, an advisory forum should be established to advise and provide technical expertise to the
Board and the Commission. To ensure a varied and balanced stakeholder representation between
commercial and non-commercial interest and, within the category of commercial interests, with regards to
SMEs and other undertakings, the advisory forum should comprise inter alia industry, start-ups, SMEs,
academia, civil society, including the social partners, as well as the Fundamental Rights Agency, ENISA,
the European Committee for Standardization (CEN), the European Committee for Electrotechnical
Standardization (CENELEC) and the European Telecommunications Standards Institute (ETSI).
(151) To support the implementation and enforcement of this Regulation, in particular the monitoring activities
of the AI Office as regards general-purpose AI models, a scientific panel of independent experts should be
established. The independent experts constituting the scientific panel should be selected on the basis of
up-to-date scientific or technical expertise in the field of AI and should perform their tasks with
impartiality, objectivity and ensure the confidentiality of information and data obtained in carrying out
their tasks and activities. To allow the reinforcement of national capacities necessary for the effective
enforcement of this Regulation, Member States should be able to request support from the pool of experts
constituting the scientific panel for their enforcement activities.
(152) In order to support adequate enforcement as regards AI systems and reinforce the capacities of the
Member States, Union AI testing support structures should be established and made available to the
Member States.
(153) Member States hold a key role in the application and enforcement of this Regulation. In that respect, each
Member State should designate at least one notifying authority and at least one market surveillance
authority as national competent authorities for the purpose of supervising the application and
implementation of this Regulation. Member States may decide to appoint any kind of public entity to
perform the tasks of the national competent authorities within the meaning of this Regulation, in
accordance with their specific national organisational characteristics and needs. In order to increase
organisation efficiency on the side of Member States and to set a single point of contact vis-à-vis the
public and other counterparts at Member State and Union levels, each Member State should designate
a market surveillance authority to act as a single point of contact.
(154) The national competent authorities should exercise their powers independently, impartially and without
bias, so as to safeguard the principles of objectivity of their activities and tasks and to ensure the
application and implementation of this Regulation. The members of these authorities should refrain from
any action incompatible with their duties and should be subject to confidentiality rules under this
Regulation.
(155) In order to ensure that providers of high-risk AI systems can take into account the experience on the use of
high-risk AI systems for improving their systems and the design and development process or can take any
possible corrective action in a timely manner, all providers should have a post-market monitoring system
in place. Where relevant, post-market monitoring should include an analysis of the interaction with other
AI systems including other devices and software. Post-market monitoring should not cover sensitive
operational data of deployers which are law enforcement authorities. This system is also key to ensure that
the possible risks emerging from AI systems which continue to ‘learn’ after being placed on the market or
put into service can be more efficiently and timely addressed. In this context, providers should also be
required to have a system in place to report to the relevant authorities any serious incidents resulting from
the use of their AI systems, meaning incidents or malfunctioning leading to death or serious damage to
health, serious and irreversible disruption of the management and operation of critical infrastructure,
infringements of obligations under Union law intended to protect fundamental rights or serious damage to
property or the environment.
(156) In order to ensure an appropriate and effective enforcement of the requirements and obligations set out by
this Regulation, which is Union harmonisation legislation, the system of market surveillance and
compliance of products established by Regulation (EU) 2019/1020 should apply in its entirety. Market
surveillance authorities designated pursuant to this Regulation should have all enforcement powers laid
down in this Regulation and in Regulation (EU) 2019/1020 and should exercise their powers and carry out
their duties independently, impartially and without bias. Although the majority of AI systems are not
subject to specific requirements and obligations under this Regulation, market surveillance authorities
may take measures in relation to all AI systems when they present a risk in accordance with this
Regulation. Due to the specific nature of Union institutions, agencies and bodies falling within the scope
of this Regulation, it is appropriate to designate the European Data Protection Supervisor as a competent
market surveillance authority for them. This should be without prejudice to the designation of national
competent authorities by the Member States. Market surveillance activities should not affect the ability of
the supervised entities to carry out their tasks independently, when such independence is required by
Union law .
(157) This Regulation is without prejudice to the competences, tasks, powers and independence of relevant
national public authorities or bodies which supervise the application of Union law protecting fundamental
rights, including equality bodies and data protection authorities. Where necessary for their mandate, those
national public authorities or bodies should also have access to any documentation created under this
Regulation. A specific safeguard procedure should be set for ensuring adequate and timely enforcement
against AI systems presenting a risk to health, safety and fundamental rights. The procedure for such AI
systems presenting a risk should be applied to high-risk AI systems presenting a risk, prohibited systems
which have been placed on the market, put into service or used in violation of the prohibited practices laid
down in this Regulation and AI systems which have been made available in violation of the transparency
requirements laid down in this Regulation and present a risk.
(158) Union financial services law includes internal governance and risk-management rules and requirements
which are applicable to regulated financial institutions in the course of provision of those services,
including when they make use of AI systems. In order to ensure coherent application and enforcement of
the obligations under this Regulation and relevant rules and requirements of the Union financial services
legal acts, the competent authorities for the supervision and enforcement of those legal acts, in particular2/20/25, 8:13 PM L_202401689EN.000101.fmx.xml
competent authorities as defined in Regulation (EU) No 575/2013 of the European Parliament and of the
Council (46) and Directives 2008/48/EC (47), 2009/138/EC (48), 2013/36/EU (49), 2014/17/EU (50) and
(EU) 2016/97 (51) of the European Parliament and of the Council, should be designated, within their
respective competences, as competent authorities for the purpose of supervising the implementation of
this Regulation, including for market surveillance activities, as regards AI systems provided or used by
regulated and supervised financial institutions unless Member States decide to designate another authority
to fulfil these market surveillance tasks. Those competent authorities should have all powers under this
Regulation and Regulation (EU) 2019/1020 to enforce the requirements and obligations of this
Regulation, including powers to carry out ex post market surveillance activities that can be integrated, as
appropriate, into their existing supervisory mechanisms and procedures under the relevant Union financial
services law. It is appropriate to envisage that, when acting as market surveillance authorities under this
Regulation, the national authorities responsible for the supervision of credit institutions regulated under
Directive 2013/36/EU, which are participating in the Single Supervisory Mechanism established by
Council Regulation (EU) No 1024/2013 (52), should report, without delay, to the European Central Bank
any information identified in the course of their market surveillance activities that may be of potential
interest for the European Central Bank’s prudential supervisory tasks as specified in that Regulation. To
further enhance the consistency between this Regulation and the rules applicable to credit institutions
regulated under Directive 2013/36/EU, it is also appropriate to integrate some of the providers’ procedural
obligations in relation to risk management, post marketing monitoring and documentation into the
existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited
derogations should also be envisaged in relation to the quality management system of providers and the
monitoring obligation placed on deployers of high-risk AI systems to the extent that these apply to credit
institutions regulated by Directive 2013/36/EU. The same regime should apply to insurance and
reinsurance undertakings and insurance holding companies under Directive 2009/138/EC and the insurance
intermediaries under Directive (EU) 2016/97 and other types of financial institutions subject to
requirements regarding internal governance, arrangements or processes established pursuant to the
relevant Union financial services law to ensure consistency and equal treatment in the financial sector.
(159) Each market surveillance authority for high-risk AI systems in the area of biometrics, as listed in an annex
to this Regulation insofar as those systems are used for the purposes of law enforcement, migration,
asylum and border control management, or the administration of justice and democratic processes, should
have effective investigative and corrective powers, including at least the power to obtain access to all
personal data that are being processed and to all information necessary for the performance of its tasks.
The market surveillance authorities should be able to exercise their powers by acting with complete
independence. Any limitations of their access to sensitive operational data under this Regulation should be
without prejudice to the powers conferred to them by Directive (EU) 2016/680. No exclusion on
disclosing data to national data protection authorities under this Regulation should affect the current or
future powers of those authorities beyond the scope of this Regulation.
(160) The market surveillance authorities and the Commission should be able to propose joint activities,
including joint investigations, to be conducted by market surveillance authorities or market surveillance
authorities jointly with the Commission, that have the aim of promoting compliance, identifying non-
compliance, raising awareness and providing guidance in relation to this Regulation with respect to
specific categories of high-risk AI systems that are found to present a serious risk across two or more
Member States. Joint activities to promote compliance should be carried out in accordance with Article 9
of Regulation (EU) 2019/1020. The AI Office should provide coordination support for joint
investigations.
(161) It is necessary to clarify the responsibilities and competences at Union and national level as regards AI
systems that are built on general-purpose AI models. To avoid overlapping competences, where an AI
system is based on a general-purpose AI model and the model and system are provided by the same
provider , the supervision should take place at Union level through the AI Office, which should have the
powers of a market surveillance authority within the meaning of Regulation (EU) 2019/1020 for this
purpose. In all other cases, national market surveillance authorities remain responsible for the supervision
of AI systems. However, for general-purpose AI systems that can be used directly by deployers for at least
one purpose that is classified as high-risk, market surveillance authorities should cooperate with the AI
Office to carry out evaluations of compliance and inform the Board and other market surveillance
authorities accordingly . Furthermore, market surveillance authorities should be able to request assistance
from the AI Office where the market surveillance authority is unable to conclude an investigation on
a high-risk AI system because of its inability to access certain information related to the general-purpose
AI model on which the high-risk AI system is built. In such cases, the procedure regarding mutual
assistance in cross-border cases in Chapter VI of Regulation (EU) 2019/1020 should apply mutatis
mutandis .
(162) To make best use of the centralised Union expertise and synergies at Union level, the powers of
supervision and enforcement of the obligations on providers of general-purpose AI models should be
a competence of the Commission. The AI Office should be able to carry out all necessary actions to
monitor the effective implementation of this Regulation as regards general-purpose AI models. It should
be able to investigate possible infringements of the rules on providers of general-purpose AI models both
on its own initiative, following the results of its monitoring activities, or upon request from market
surveillance authorities in line with the conditions set out in this Regulation. To support effective
monitoring of the AI Office, it should provide for the possibility that downstream providers lodge
complaints about possible infringements of the rules on providers of general-purpose AI models and
systems.
(163) With a view to complementing the governance systems for general-purpose AI models, the scientific panel
should support the monitoring activities of the AI Office and may, in certain cases, provide qualified alerts
to the AI Office which trigger follow-ups, such as investigations. This should be the case where the
scientific panel has reason to suspect that a general-purpose AI model poses a concrete and identifiable
risk at Union level. Furthermore, this should be the case where the scientific panel has reason to suspect
that a general-purpose AI model meets the criteria that would lead to a classification as general-purpose
AI model with systemic risk. To equip the scientific panel with the information necessary for the
performance of those tasks, there should be a mechanism whereby the scientific panel can request the
Commission to require documentation or information from a provider.
(164) The AI Office should be able to take the necessary actions to monitor the effective implementation of and
compliance with the obligations for providers of general-purpose AI models laid down in this Regulation.
The AI Office should be able to investigate possible infringements in accordance with the powers
provided for in this Regulation, including by requesting documentation and information, by conducting
evaluations, as well as by requesting measures from providers of general-purpose AI models. When
conducting evaluations, in order to make use of independent expertise, the AI Office should be able to
involve independent experts to carry out the evaluations on its behalf. Compliance with the obligations
should be enforceable, inter alia, through requests to take appropriate measures, including risk mitigation
measures in the case of identified systemic risks as well as restricting the making available on the market,
withdrawing or recalling the model. As a safeguard, where needed beyond the procedural rights provided
for in this Regulation, providers of general-purpose AI models should have the procedural rights provided
for in Article 18 of Regulation (EU) 2019/1020, which should apply mutatis mutandis, without prejudice
to more specific procedural rights provided for by this Regulation.
(165) The development of AI systems other than high-risk AI systems in accordance with the requirements of
this Regulation may lead to a larger uptake of ethical and trustworthy AI in the Union. Providers of AI
systems that are not high-risk should be encouraged to create codes of conduct, including related
governance mechanisms, intended to foster the voluntary application of some or all of the mandatory
requirements applicable to high-risk AI systems, adapted in light of the intended purpose of the systems
and the lower risk involved and taking into account the available technical solutions and industry best
practices such as model and data cards. Providers and, as appropriate, deployers of all AI systems, high-
risk or not, and AI models should also be encouraged to apply on a voluntary basis additional
requirements related, for example, to the elements of the Union’s Ethics Guidelines for Trustworthy AI,
environmental sustainability, AI literacy measures, inclusive and diverse design and development of AI
systems, including attention to vulnerable persons and accessibility to persons with disability,
stakeholders’ participation with the involvement, as appropriate, of relevant stakeholders such as business
and civil society organisations, academia, research organisations, trade unions and consumer protection
organisations in the design and development of AI systems, and diversity of the development teams,
including gender balance. To ensure that the voluntary codes of conduct are effective, they should be
based on clear objectives and key performance indicators to measure the achievement of those objectives.
They should also be developed in an inclusive way, as appropriate, with the involvement of relevant
stakeholders such as business and civil society organisations, academia, research organisations, trade
unions and consumer protection organisations. The Commission may develop initiatives, including of
a sectoral nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data
for AI development, including on data access infrastructure, semantic and technical interoperability of
different types of data.
(166) It is important that AI systems related to products that are not high-risk in accordance with this Regulation
and thus are not required to comply with the requirements set out for high-risk AI systems are
nevertheless safe when placed on the market or put into service. To contribute to this objective,
Regulation (EU) 2023/988 of the European Parliament and of the Council (53) would apply as a safety net.
(167) In order to ensure trustful and constructive cooperation of competent authorities on Union and national
level, all parties involved in the application of this Regulation should respect the confidentiality of
information and data obtained in carrying out their tasks, in accordance with Union or national law. They
should carry out their tasks and activities in such a manner as to protect, in particular, intellectual property
rights, confidential business information and trade secrets, the effective implementation of this
Regulation, public and national security interests, the integrity of criminal and administrative proceedings,
and the integrity of classified information.
(168) Compliance with this Regulation should be enforceable by means of the imposition of penalties and other
enforcement measures. Member States should take all necessary measures to ensure that the provisions of
this Regulation are implemented, including by laying down effective, proportionate and dissuasive
penalties for their infringement, and to respect the ne bis in idem principle. In order to strengthen and
harmonise administrative penalties for infringement of this Regulation, the upper limits for setting the
administrative fines for certain specific infringements should be laid down. When assessing the amount of
the fines, Member States should, in each individual case, take into account all relevant circumstances of
the specific situation, with due regard in particular to the nature, gravity and duration of the infringement
and of its consequences and to the size of the provider, in particular if the provider is an SME, including
a start-up. The European Data Protection Supervisor should have the power to impose fines on Union
institutions, agencies and bodies falling within the scope of this Regulation.
(169) Compliance with the obligations on providers of general-purpose AI models imposed under this
Regulation should be enforceable, inter alia, by means of fines. To that end, appropriate levels of fines
should also be laid down for infringement of those obligations, including the failure to comply with
measures requested by the Commission in accordance with this Regulation, subject to appropriate
limitation periods in accordance with the principle of proportionality . All decisions taken by the
Commission under this Regulation are subject to review by the Court of Justice of the European Union in
accordance with the TFEU, including the unlimited jurisdiction of the Court of Justice with regard to
penalties pursuant to Article 261 TFEU.
(170) Union and national law already provide effective remedies to natural and legal persons whose rights and
freedoms are adversely affected by the use of AI systems. Without prejudice to those remedies, any
natural or legal person that has grounds to consider that there has been an infringement of this Regulation
should be entitled to lodge a complaint to the relevant market surveillance authority.
(171) Affected persons should have the right to obtain an explanation where a deployer’s decision is based
mainly upon the output from certain high-risk AI systems that fall within the scope of this Regulation and
where that decision produces legal effects or similarly significantly affects those persons in a way that
they consider to have an adverse impact on their health, safety or fundamental rights. That explanation
should be clear and meaningful and should provide a basis on which the affected persons are able to
exercise their rights. The right to obtain an explanation should not apply to the use of AI systems for
which exceptions or restrictions follow from Union or national law and should apply only to the extent
this right is not already provided for under Union law .
(172) Persons acting as whistleblowers on the infringements of this Regulation should be protected under Union law. Directive (EU) 2019/1937 of the European Parliament and of the Council (54) should therefore apply to the reporting of infringements of this Regulation and the protection of persons reporting such infringements.
(173) In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the conditions under which an AI system is not to be considered to be high-risk, the list of high-risk AI systems, the provisions regarding technical documentation, the content of the EU declaration of conformity, the provisions regarding the conformity assessment procedures, the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply, the threshold, benchmarks and indicators, including by supplementing those benchmarks and indicators, in the rules for the classification of general-purpose AI models with systemic risk, the criteria for the designation of general-purpose AI models with systemic risk, the technical documentation for providers of general-purpose AI models and the transparency information for providers of general-purpose AI models. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making (55). In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts.
(174) Given the rapid technological developments and the technical expertise required to effectively apply this Regulation, the Commission should evaluate and review this Regulation by 2 August 2029 and every four years thereafter and report to the European Parliament and the Council. In addition, taking into account the implications for the scope of this Regulation, the Commission should carry out an assessment of the need to amend the list of high-risk AI systems and the list of prohibited practices once a year. Moreover, by 2 August 2028 and every four years thereafter, the Commission should evaluate and report to the European Parliament and to the Council on the need to amend the list of high-risk areas headings in the annex to this Regulation, the AI systems within the scope of the transparency obligations, the effectiveness of the supervision and governance system and the progress on the development of standardisation deliverables on energy efficient development of general-purpose AI models, including the need for further measures or actions. Finally, by 2 August 2028 and every three years thereafter, the Commission should evaluate the impact and effectiveness of voluntary codes of conduct to foster the application of the requirements provided for high-risk AI systems in the case of AI systems other than high-risk AI systems and possibly other additional requirements for such AI systems.
2/20/25, 8:13 PM L_202401689EN.000101.fmx.xml
https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L_202401689 35/110
(175) In order to ensure uniform conditions for the implementation of this Regulation, implementing powers should be conferred on the Commission. Those powers should be exercised in accordance with Regulation (EU) No 182/2011 of the European Parliament and of the Council (56).
(176) Since the objective of this Regulation, namely to improve the functioning of the internal market and to promote the uptake of human centric and trustworthy AI, while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection against harmful effects of AI systems in the Union and supporting innovation, cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the principle of proportionality as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve that objective.
(177) In order to ensure legal certainty, ensure an appropriate adaptation period for operators and avoid disruption to the market, including by ensuring continuity of the use of AI systems, it is appropriate that this Regulation applies to the high-risk AI systems that have been placed on the market or put into service before the general date of application thereof, only if, from that date, those systems are subject to significant changes in their design or intended purpose. It is appropriate to clarify that, in this respect, the concept of significant change should be understood as equivalent in substance to the notion of substantial modification, which is used with regard only to high-risk AI systems pursuant to this Regulation. On an exceptional basis and in light of public accountability, operators of AI systems which are components of the large-scale IT systems established by the legal acts listed in an annex to this Regulation and operators of high-risk AI systems that are intended to be used by public authorities should, respectively, take the necessary steps to comply with the requirements of this Regulation by the end of 2030 and by 2 August 2030.
(178) Providers of high-risk AI systems are encouraged to start to comply, on a voluntary basis, with the relevant obligations of this Regulation already during the transitional period.
(179) This Regulation should apply from 2 August 2026. However, taking into account the unacceptable risk associated with the use of AI in certain ways, the prohibitions as well as the general provisions of this Regulation should already apply from 2 February 2025. While the full effect of those prohibitions follows with the establishment of the governance and enforcement of this Regulation, anticipating the application of the prohibitions is important to take account of unacceptable risks and to have an effect on other procedures, such as in civil law. Moreover, the infrastructure related to the governance and the conformity assessment system should be operational before 2 August 2026, therefore the provisions on notified bodies and governance structure should apply from 2 August 2025. Given the rapid pace of technological advancements and adoption of general-purpose AI models, obligations for providers of general-purpose AI models should apply from 2 August 2025. Codes of practice should be ready by 2 May 2025 in view of enabling providers to demonstrate compliance on time. The AI Office should ensure that classification rules and procedures are up to date in light of technological developments. In addition, Member States should lay down and notify to the Commission the rules on penalties, including administrative fines, and ensure that they are properly and effectively implemented by the date of application of this Regulation. Therefore the provisions on penalties should apply from 2 August 2025.
(180) The European Data Protection Supervisor and the European Data Protection Board were consulted in accordance with Article 42(1) and (2) of Regulation (EU) 2018/1725 and delivered their joint opinion on 18 June 2021,
HAVE ADOPTED THIS REGULATION:
CHAPTER I
GENERAL PROVISIONS
Article 1
Subject matter
1. The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.
2. This Regulation lays down:
(a) harmonised rules for the placing on the market, the putting into service, and the use of AI systems in the Union;
(b) prohibitions of certain AI practices;
(c) specific requirements for high-risk AI systems and obligations for operators of such systems;
(d) harmonised transparency rules for certain AI systems;
(e) harmonised rules for the placing on the market of general-purpose AI models;
(f) rules on market monitoring, market surveillance, governance and enforcement;
(g) measures to support innovation, with a particular focus on SMEs, including start-ups.
Article 2
Scope
1. This Regulation applies to:
(a) providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country;
(b) deployers of AI systems that have their place of establishment or are located within the Union;
(c) providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union;
(d) importers and distributors of AI systems;
(e) product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
(f) authorised representatives of providers, which are not established in the Union;
(g) affected persons that are located in the Union.
2. For AI systems classified as high-risk AI systems in accordance with Article 6(1) related to products covered by the Union harmonisation legislation listed in Section B of Annex I, only Article 6(1), Articles 102 to 109 and Article 112 apply. Article 57 applies only in so far as the requirements for high-risk AI systems under this Regulation have been integrated in that Union harmonisation legislation.
3. This Regulation does not apply to areas outside the scope of Union law, and shall not, in any event, affect the competences of the Member States concerning national security, regardless of the type of entity entrusted by the Member States with carrying out tasks in relation to those competences.
This Regulation does not apply to AI systems where and in so far as they are placed on the market, put into service, or used with or without modification exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.
This Regulation does not apply to AI systems which are not placed on the market or put into service in the Union, where the output is used in the Union exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.
4. This Regulation applies neither to public authorities in a third country nor to international organisations
falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use
AI systems in the framework of international cooperation or agreements for law enforcement and judicial
cooperation with the Union or with one or more Member States, provided that such a third country or
international organisation provides adequate safeguards with respect to the protection of fundamental rights and
freedoms of individuals.
5. This Regulation shall not affect the application of the provisions on the liability of providers of
intermediary services as set out in Chapter II of Regulation (EU) 2022/2065.
6. This Regulation does not apply to AI systems or AI models, including their output, specifically developed
and put into service for the sole purpose of scientific research and development.
7. Union law on the protection of personal data, privacy and the confidentiality of communications applies to
personal data processed in connection with the rights and obligations laid down in this Regulation. This
Regulation shall not affect Regulation (EU) 2016/679 or (EU) 2018/1725, or Directive 2002/58/EC or (EU)
2016/680, without prejudice to Article 10(5) and Article 59 of this Regulation.
8. This Regulation does not apply to any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service. Such activities shall be conducted in accordance with applicable Union law. Testing in real world conditions shall not be covered by that exclusion.
9. This Regulation is without prejudice to the rules laid down by other Union legal acts related to consumer protection and product safety.
10. This Regulation does not apply to obligations of deployers who are natural persons using AI systems in the course of a purely personal non-professional activity.
11. This Regulation does not preclude the Union or Member States from maintaining or introducing laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers, or from encouraging or allowing the application of collective agreements which are more favourable to workers.
12. This Regulation does not apply to AI systems released under free and open-source licences, unless they are
placed on the market or put into service as high-risk AI systems or as an AI system that falls under Article 5 or
50.
Article 3
Definitions
For the purposes of this Regulation, the following definitions apply:
(1) ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;
(2) ‘risk’ means the combination of the probability of an occurrence of harm and the severity of that harm;
(3) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge;
(4) ‘deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity;
(5) ‘authorised representative’ means a natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation;
(6) ‘importer’ means a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country;
(7) ‘distributor’ means a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market;
(8) ‘operator’ means a provider, product manufacturer, deployer, authorised representative, importer or distributor;
(9) ‘placing on the market’ means the first making available of an AI system or a general-purpose AI model on the Union market;
(10) ‘making available on the market’ means the supply of an AI system or a general-purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge;
(11) ‘putting into service’ means the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose;
(12) ‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation;
(13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems;
(14) ‘safety component’ means a component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property;
(15) ‘instructions for use’ means the information provided by the provider to inform the deployer of, in particular, an AI system’s intended purpose and proper use;
(16) ‘recall of an AI system’ means any measure aiming to achieve the return to the provider or taking out of service or disabling the use of an AI system made available to deployers;
(17) ‘withdrawal of an AI system’ means any measure aiming to prevent an AI system in the supply chain being made available on the market;
(18) ‘performance of an AI system’ means the ability of an AI system to achieve its intended purpose;
(19) ‘notifying authority’ means the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring;
(20) ‘conformity assessment’ means the process of demonstrating whether the requirements set out in Chapter III, Section 2 relating to a high-risk AI system have been fulfilled;
(21) ‘conformity assessment body’ means a body that performs third-party conformity assessment activities, including testing, certification and inspection;
(22) ‘notified body’ means a conformity assessment body notified in accordance with this Regulation and other relevant Union harmonisation legislation;
(23) ‘substantial modification’ means a change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment carried out by the provider and as a result of which the compliance of the AI system with the requirements set out in Chapter III, Section 2 is affected or results in a modification to the intended purpose for which the AI system has been assessed;
(24) ‘CE marking’ means a marking by which a provider indicates that an AI system is in conformity with the requirements set out in Chapter III, Section 2 and other applicable Union harmonisation legislation providing for its affixing;
(25) ‘post-market monitoring system’ means all activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions;
(26) ‘market surveillance authority’ means the national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020;
(27) ‘harmonised standard’ means a harmonised standard as defined in Article 2(1), point (c), of Regulation (EU) No 1025/2012;
(28) ‘common specification’ means a set of technical specifications as defined in Article 2, point (4), of Regulation (EU) No 1025/2012, providing means to comply with certain requirements established under this Regulation;
(29) ‘training data’ means data used for training an AI system through fitting its learnable parameters;
(30) ‘validation data’ means data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process in order, inter alia, to prevent underfitting or overfitting;
(31) ‘validation data set’ means a separate data set or part of the training data set, either as a fixed or variable split;
(32) ‘testing data’ means data used for providing an independent evaluation of the AI system in order to confirm the expected performance of that system before its placing on the market or putting into service;
(33) ‘input data’ means data provided to or directly acquired by an AI system on the basis of which the system produces an output;
(34) ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data;
(35) ‘biometric identification’ means the automated recognition of physical, physiological, behavioural, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database;
(36) ‘biometric verification’ means the automated, one-to-one verification, including authentication, of the identity of natural persons by comparing their biometric data to previously provided biometric data;
(37) ‘special categories of personal data’ means the categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725;
(38) ‘sensitive operational data’ means operational data related to activities of prevention, detection, investigation or prosecution of criminal offences, the disclosure of which could jeopardise the integrity of criminal proceedings;
(39) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data;
(40) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data, unless it is ancillary to another commercial service and strictly necessary for objective technical reasons;
(41) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database;
(42) ‘real-time remote biometric identification system’ means a remote biometric identification system, whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay, comprising not only instant identification, but also limited short delays in order to avoid circumvention;
(43) ‘post-remote biometric identification system’ means a remote biometric identification system other than a real-time remote biometric identification system;
(44) ‘publicly accessible space’ means any publicly or privately owned physical place accessible to an undetermined number of natural persons, regardless of whether certain conditions for access may apply, and regardless of the potential capacity restrictions;
(45) ‘law enforcement authority’ means:
(a) any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; or
(b) any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security;
(46) ‘law enforcement’ means activities carried out by law enforcement authorities or on their behalf for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security;
(47) ‘AI Office’ means the Commission’s function of contributing to the implementation, monitoring and supervision of AI systems and general-purpose AI models, and AI governance, provided for in Commission Decision of 24 January 2024; references in this Regulation to the AI Office shall be construed as references to the Commission;
(48) ‘national competent authority’ means a notifying authority or a market surveillance authority; as regards AI systems put into service or used by Union institutions, agencies, offices and bodies, references to national competent authorities or market surveillance authorities in this Regulation shall be construed as references to the European Data Protection Supervisor;
(49) ‘serious incident’ means an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:
(a) the death of a person, or serious harm to a person’s health;
(b) a serious and irreversible disruption of the management or operation of critical infrastructure;
(c) the infringement of obligations under Union law intended to protect fundamental rights;
(d) serious harm to property or the environment;
(50) ‘personal data’ means personal data as defined in Article 4, point (1), of Regulation (EU) 2016/679;
(51) ‘non-personal data’ means data other than personal data as defined in Article 4, point (1), of Regulation (EU) 2016/679;
(52) ‘profiling’ means profiling as defined in Article 4, point (4), of Regulation (EU) 2016/679;
(53) ‘real-world testing plan’ means a document that describes the objectives, methodology, geographical, population and temporal scope, monitoring, organisation and conduct of testing in real-world conditions;
(54) ‘sandbox plan’ means a document agreed between the participating provider and the competent authority describing the objectives, conditions, timeframe, methodology and requirements for the activities carried out within the sandbox;
(55) ‘AI regulatory sandbox’ means a controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real-world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision;
(56) ‘AI literacy’ means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause;
(57) ‘testing in real-world conditions’ means the temporary testing of an AI system for its intended purpose in real-world conditions outside a laboratory or otherwise simulated environment, with a view to gathering reliable and robust data and to assessing and verifying the conformity of the AI system with the requirements of this Regulation and it does not qualify as placing the AI system on the market or putting it into service within the meaning of this Regulation, provided that all the conditions laid down in Article 57 or 60 are fulfilled;
(58) ‘subject’, for the purpose of real-world testing, means a natural person who participates in testing in real-world conditions;
(59) ‘informed consent’ means a subject’s freely given, specific, unambiguous and voluntary expression of his or her willingness to participate in a particular testing in real-world conditions, after having been informed of all aspects of the testing that are relevant to the subject’s decision to participate;
(60) ‘deep fake’ means AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful;
(61) ‘widespread infringement’ means any act or omission contrary to Union law protecting the interest of individuals, which:
(a) has harmed or is likely to harm the collective interests of individuals residing in at least two Member States other than the Member State in which:
(i) the act or omission originated or took place;
(ii) the provider concerned, or, where applicable, its authorised representative is located or established; or
(iii) the deployer is established, when the infringement is committed by the deployer;
(b) has caused, causes or is likely to cause harm to the collective interests of individuals and has common features, including the same unlawful practice or the same interest being infringed, and is occurring concurrently, committed by the same operator, in at least three Member States;
(62) ‘critical infrastructure’ means critical infrastructure as defined in Article 2, point (4), of Directive (EU) 2022/2557;
(63) ‘general-purpose AI model’ means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market;
(64) ‘high-impact capabilities’ means capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models;
(65) ‘systemic risk’ means a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain;
(66) ‘general-purpose AI system’ means an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems;
(67) ‘floating-point operation’ means any mathematical operation or assignment involving floating-point numbers, which are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent of a fixed base;
(68) ‘downstream provider’ means a provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the AI model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.
Article 4
AI literacy
Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI
literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking
into account their technical knowledge, experience, education and training and the context the AI systems are to
be used in, and considering the persons or groups of persons on whom the AI systems are to be used.
CHAPTER II
PROHIBITED AI PRACTICES
Article 5
Prohibited AI practices
1. The following AI practices shall be prohibited:
(a)the placing on the market, the putting into service or the use of an AI system that deploys subliminal
techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the
objective, or the effect of materially distorting the behaviour of a person or a group of persons by
appreciably impairing their ability to make an informed decision, thereby causing them to take a decision
that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that
person, another person or group of persons significant harm;
(b)the placing on the market, the putting into service or the use of an AI system that exploits any of the
vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific
social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that
person or a person belonging to that group in a manner that causes or is reasonably likely to cause that
person or another person significant harm;
2/20/25, 8:13 PM L_202401689EN.000101.fmx.xml
https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L_202401689 41/110
(c)the placing on the market, the putting into service or the use of AI systems for the evaluation or
classification of natural persons or groups of persons over a certain period of time based on their social
behaviour or known, inferred or predicted personal or personality characteristics, with the social score
leading to either or both of the following:
(i)detrimental or unfavourable treatment of certain natural persons or groups of persons in social contexts
that are unrelated to the contexts in which the data was originally generated or collected;
(ii)detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified
or disproportionate to their social behaviour or its gravity;
(d)the placing on the market, the putting into service for this specific purpose, or the use of an AI system for
making risk assessments of natural persons in order to assess or predict the risk of a natural person
committing a criminal offence, based solely on the profiling of a natural person or on assessing their
personality traits and characteristics; this prohibition shall not apply to AI systems used to support the
human assessment of the involvement of a person in a criminal activity, which is already based on
objective and verifiable facts directly linked to a criminal activity;
(e)the placing on the market, the putting into service for this specific purpose, or the use of AI systems that
create or expand facial recognition databases through the untargeted scraping of facial images from the
internet or CCTV footage;
(f)the placing on the market, the putting into service for this specific purpose, or the use of AI systems to
infer emotions of a natural person in the areas of workplace and education institutions, except where the
use of the AI system is intended to be put in place or into the market for medical or safety reasons;
(g)the placing on the market, the putting into service for this specific purpose, or the use of biometric
categorisation systems that categorise individually natural persons based on their biometric data to deduce
or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or
sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric
datasets, such as images, based on biometric data or categorizing of biometric data in the area of law
enforcement;
(h)the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes
of law enforcement, unless and in so far as such use is strictly necessary for one of the following
objectives:
(i)the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation
of human beings, as well as the search for missing persons;
(ii)the prevention of a specific, substantial and imminent threat to the life or physical safety of natural
persons or a genuine and present or genuine and foreseeable threat of a terrorist attack;
(iii)the localisation or identification of a person suspected of having committed a criminal offence, for the
purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for
offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence
or a detention order for a maximum period of at least four years.
Point (h) of the first subparagraph is without prejudice to Article 9 of Regulation (EU) 2016/679 for the
processing of biometric data for purposes other than law enforcement.
2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes
of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), shall be
deployed for the purposes set out in that point only to confirm the identity of the specifically targeted
individual, and it shall take into account the following elements:
(a)the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale
of the harm that would be caused if the system were not used;
(b)the consequences of the use of the system for the rights and freedoms of all persons concerned, in
particular the seriousness, probability and scale of those consequences.
In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the
purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h),
of this Article shall comply with necessary and proportionate safeguards and conditions in relation to the use in
accordance with the national law authorising the use thereof, in particular as regards the temporal, geographic
and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible
spaces shall be authorised only if the law enforcement authority has completed a fundamental rights impact
assessment as provided for in Article 27 and has registered the system in the EU database according to
Article 49. However, in duly justified cases of urgency, the use of such systems may be commenced without the
registration in the EU database, provided that such registration is completed without undue delay.
3. For the purposes of paragraph 1, first subparagraph, point (h) and paragraph 2, each use for the purposes of
law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be
subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose
decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in
accordance with the detailed rules of national law referred to in paragraph 5. However, in a duly justified
situation of urgency, the use of such system may be commenced without an authorisation provided that such
authorisation is requested without undue delay, at the latest within 24 hours. If such authorisation is rejected,
the use shall be stopped with immediate effect and all the data, as well as the results and outputs of that use
shall be immediately discarded and deleted.
The competent judicial authority or an independent administrative authority whose decision is binding shall
grant the authorisation only where it is satisfied, on the basis of objective evidence or clear indications
presented to it, that the use of the ‘real-time’ remote biometric identification system concerned is necessary for,
and proportionate to, achieving one of the objectives specified in paragraph 1, first subparagraph, point (h), as
identified in the request and, in particular, remains limited to what is strictly necessary concerning the period of
time as well as the geographic and personal scope. In deciding on the request, that authority shall take into
account the elements referred to in paragraph 2. No decision that produces an adverse legal effect on a person
may be taken based solely on the output of the ‘real-time’ remote biometric identification system.
4. Without prejudice to paragraph 3, each use of a ‘real-time’ remote biometric identification system in
publicly accessible spaces for law enforcement purposes shall be notified to the relevant market surveillance
authority and the national data protection authority in accordance with the national rules referred to in
paragraph 5. The notification shall, as a minimum, contain the information specified under paragraph 6 and
shall not include sensitive operational data.
5. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-
time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement
within the limits and under the conditions listed in paragraph 1, first subparagraph, point (h), and paragraphs 2
and 3. Member States concerned shall lay down in their national law the necessary detailed rules for the
request, issuance and exercise of, as well as supervision and reporting relating to, the authorisations referred to
in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, first
subparagraph, point (h), including which of the criminal offences referred to in point (h)(iii) thereof, the
competent authorities may be authorised to use those systems for the purposes of law enforcement. Member
States shall notify those rules to the Commission at the latest 30 days following the adoption thereof. Member
States may introduce, in accordance with Union law, more restrictive laws on the use of remote biometric
identification systems.
6. National market surveillance authorities and the national data protection authorities of Member States that
have been notified of the use of ‘real-time’ remote biometric identification systems in publicly accessible
spaces for law enforcement purposes pursuant to paragraph 4 shall submit to the Commission annual reports on
such use. For that purpose, the Commission shall provide Member States and national market surveillance and
data protection authorities with a template, including information on the number of the decisions taken by
competent judicial authorities or an independent administrative authority whose decision is binding upon
requests for authorisations in accordance with paragraph 3 and their result.
7. The Commission shall publish annual reports on the use of real-time remote biometric identification
systems in publicly accessible spaces for law enforcement purposes, based on aggregated data in Member
States on the basis of the annual reports referred to in paragraph 6. Those annual reports shall not include
sensitive operational data of the related law enforcement activities.
8. This Article shall not affect the prohibitions that apply where an AI practice infringes other Union law.
CHAPTER III
HIGH-RISK AI SYSTEMS
SECTION 1
Classification of AI systems as high-risk
Article 6
Classification rules for high-risk AI systems
1. Irrespective of whether an AI system is placed on the market or put into service independently of the
products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the
following conditions are fulfilled:
(a)the AI system is intended to be used as a safety component of a product, or the AI system is itself
a product, covered by the Union harmonisation legislation listed in Annex I;
(b)the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as
a product, is required to under go a third-party conformity assessment, with a view to the placing on the
market or the putting into service of that product pursuant to the Union harmonisation legislation listed in
Annex I.
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall
be considered to be high-risk.
3. By derogation from paragraph 2, an AI system referred to in Annex III shall not be considered to be high-
risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural
persons, including by not materially influencing the outcome of decision making.
The first subparagraph shall apply where any of the following conditions is fulfilled:
(a)the AI system is intended to perform a narrow procedural task;
(b)the AI system is intended to improve the result of a previously completed human activity;
(c)the AI system is intended to detect decision-making patterns or deviations from prior decision-making
patterns and is not meant to replace or influence the previously completed human assessment, without
proper human review; or
(d)the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the
use cases listed in Annex III.
Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be
high-risk where the AI system performs profiling of natural persons.
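Outside the legal text, the classification logic of paragraphs 1 to 3 above can be summarised in a short sketch. All function and field names below are illustrative assumptions, not terms defined by this Regulation, and the sketch omits the documentation duty of paragraph 4; it is a reading aid, not legal advice:

```python
# Illustrative sketch of the Article 6(1)-(3) classification rules.
from types import SimpleNamespace

def is_high_risk(ai_system) -> bool:
    """Illustrative reading of Article 6(1)-(3); names are assumptions."""
    # Article 6(1): both conditions must be fulfilled.
    if (ai_system.annex_i_safety_component_or_product
            and ai_system.third_party_conformity_assessment_required):
        return True
    # Article 6(2): Annex III use cases are high-risk by default.
    if ai_system.annex_iii_use_case:
        # Profiling of natural persons is always high-risk (Art. 6(3), last subpara.).
        if ai_system.performs_profiling:
            return True
        # Article 6(3) derogation: any one condition suffices.
        derogation = (ai_system.narrow_procedural_task
                      or ai_system.improves_completed_human_activity
                      or ai_system.detects_patterns_without_replacing_review
                      or ai_system.preparatory_task_only)
        return not derogation
    return False

# Example: an Annex III system that performs profiling stays high-risk.
example = SimpleNamespace(
    annex_i_safety_component_or_product=False,
    third_party_conformity_assessment_required=False,
    annex_iii_use_case=True,
    performs_profiling=True,
    narrow_procedural_task=False,
    improves_completed_human_activity=False,
    detects_patterns_without_replacing_review=False,
    preparatory_task_only=False,
)
assert is_high_risk(example)
```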
4. A provider who considers that an AI system referred to in Annex III is not high-risk shall document its
assessment before that system is placed on the market or put into service. Such provider shall be subject to the
registration obligation set out in Article 49(2). Upon request of national competent authorities, the provider
shall provide the documentation of the assessment.
5. The Commission shall, after consulting the European Artificial Intelligence Board (the ‘Board’), and no
later than 2 February 2026, provide guidelines specifying the practical implementation of this Article in line
with Article 96 together with a comprehensive list of practical examples of use cases of AI systems that are
high-risk and not high-risk.
6. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend
paragraph 3, second subparagraph, of this Article by adding new conditions to those laid down therein, or by
modifying them, where there is concrete and reliable evidence of the existence of AI systems that fall under the
scope of Annex III, but do not pose a significant risk of harm to the health, safety or fundamental rights of
natural persons.
7. The Commission shall adopt delegated acts in accordance with Article 97 in order to amend paragraph 3,
second subparagraph, of this Article by deleting any of the conditions laid down therein, where there is concrete
and reliable evidence that this is necessary to maintain the level of protection of health, safety and fundamental
rights provided for by this Regulation.
8. Any amendment to the conditions laid down in paragraph 3, second subparagraph, adopted in accordance
with paragraphs 6 and 7 of this Article shall not decrease the overall level of protection of health, safety and
fundamental rights provided for by this Regulation and shall ensure consistency with the delegated acts adopted
pursuant to Article 7(1), and take account of market and technological developments.
Article 7
Amendments to Annex III
1. The Commission is empowered to adopt delegated acts in accordance with Article 97 to amend Annex III
by adding or modifying use-cases of high-risk AI systems where both of the following conditions are fulfilled:
(a)the AI systems are intended to be used in any of the areas listed in Annex III;
(b)the AI systems pose a risk of harm to health and safety , or an adverse impact on fundamental rights, and
that risk is equivalent to, or greater than, the risk of harm or of adverse impact posed by the high-risk AI
systems already referred to in Annex III.
2. When assessing the condition under paragraph 1, point (b), the Commission shall take into account the
following criteria:
(a)the intended purpose of the AI system;
(b)the extent to which an AI system has been used or is likely to be used;
(c)the nature and amount of the data processed and used by the AI system, in particular whether special
categories of personal data are processed;
(d)the extent to which the AI system acts autonomously and the possibility for a human to override a decision
or recommendations that may lead to potential harm;
(e)the extent to which the use of an AI system has already caused harm to health and safety, has had an
adverse impact on fundamental rights or has given rise to significant concerns in relation to the likelihood
of such harm or adverse impact, as demonstrated, for example, by reports or documented allegations
submitted to national competent authorities or by other reports, as appropriate;
(f)the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its
ability to affect multiple persons or to disproportionately affect a particular group of persons;
(g)the extent to which persons who are potentially harmed or suffer an adverse impact are dependent on the
outcome produced with an AI system, in particular because for practical or legal reasons it is not
reasonably possible to opt-out from that outcome;
(h)the extent to which there is an imbalance of power, or the persons who are potentially harmed or suffer an
adverse impact are in a vulnerable position in relation to the deployer of an AI system, in particular due to
status, authority, knowledge, economic or social circumstances, or age;
(i)the extent to which the outcome produced involving an AI system is easily corrigible or reversible, taking
into account the technical solutions available to correct or reverse it, whereby outcomes having an adverse
impact on health, safety or fundamental rights, shall not be considered to be easily corrigible or reversible;
(j)the magnitude and likelihood of benefit of the deployment of the AI system for individuals, groups, or
society at large, including possible improvements in product safety;
(k)the extent to which existing Union law provides for:
(i)effective measures of redress in relation to the risks posed by an AI system, with the exclusion of
claims for damages;
(ii)effective measures to prevent or substantially minimise those risks.
3. The Commission is empowered to adopt delegated acts in accordance with Article 97 to amend the list in
Annex III by removing high-risk AI systems where both of the following conditions are fulfilled:
(a)the high-risk AI system concerned no longer poses any significant risks to fundamental rights, health or
safety, taking into account the criteria listed in paragraph 2;
(b)the deletion does not decrease the overall level of protection of health, safety and fundamental rights under
Union law.
SECTION 2
Requirements for high-risk AI systems
Article 8
Compliance with the requirements
1. High-risk AI systems shall comply with the requirements laid down in this Section, taking into account their
intended purpose as well as the generally acknowledged state of the art on AI and AI-related technologies. The
risk management system referred to in Article 9 shall be taken into account when ensuring compliance with
those requirements.
2. Where a product contains an AI system, to which the requirements of this Regulation as well as
requirements of the Union harmonisation legislation listed in Section A of Annex I apply , providers shall be
responsible for ensuring that their product is fully compliant with all applicable requirements under applicable
Union harmonisation legislation. In ensuring the compliance of high-risk AI systems referred to in paragraph 1
with the requirements set out in this Section, and in order to ensure consistency, avoid duplication and minimise
additional burdens, providers shall have a choice of integrating, as appropriate, the necessary testing and
reporting processes, information and documentation they provide with regard to their product into
documentation and procedures that already exist and are required under the Union harmonisation legislation
listed in Section A of Annex I.
Article 9
Risk management system
1. A risk management system shall be established, implemented, documented and maintained in relation to
high-risk AI systems.
2. The risk management system shall be understood as a continuous iterative process planned and run
throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It
shall comprise the following steps:
(a)the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI
system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance
with its intended purpose;
(b)the estimation and evaluation of the risks that may emerge when the high-risk AI system is used in
accordance with its intended purpose, and under conditions of reasonably foreseeable misuse;
(c)the evaluation of other risks possibly arising, based on the analysis of data gathered from the post-market
monitoring system referred to in Article 72;
(d)the adoption of appropriate and targeted risk management measures designed to address the risks identified
pursuant to point (a).
3. The risks referred to in this Article shall concern only those which may be reasonably mitigated or
eliminated through the development or design of the high-risk AI system, or the provision of adequate technical
information.
4. The risk management measures referred to in paragraph 2, point (d), shall give due consideration to the
effects and possible interaction resulting from the combined application of the requirements set out in this
Section, with a view to minimising risks more effectively while achieving an appropriate balance in
implementing the measures to fulfil those requirements.
5. The risk management measures referred to in paragraph 2, point (d), shall be such that the relevant residual
risk associated with each hazard, as well as the overall residual risk of the high-risk AI systems is judged to be
acceptable.
In identifying the most appropriate risk management measures, the following shall be ensured:
(a)elimination or reduction of risks identified and evaluated pursuant to paragraph 2 in as far as technically
feasible through adequate design and development of the high-risk AI system;
(b)where appropriate, implementation of adequate mitigation and control measures addressing risks that
cannot be eliminated;
(c)provision of information required pursuant to Article 13 and, where appropriate, training to deployers.
With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration
shall be given to the technical knowledge, experience, education, the training to be expected by the deployer,
and the presumable context in which the system is intended to be used.
6. High-risk AI systems shall be tested for the purpose of identifying the most appropriate and targeted risk
management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended
purpose and that they are in compliance with the requirements set out in this Section.
7. Testing procedures may include testing in real-world conditions in accordance with Article 60.
8. The testing of high-risk AI systems shall be performed, as appropriate, at any time throughout the
development process, and, in any event, prior to their being placed on the market or put into service. Testing
shall be carried out against prior defined metrics and probabilistic thresholds that are appropriate to the
intended purpose of the high-risk AI system.
9. When implementing the risk management system as provided for in paragraphs 1 to 7, providers shall give
consideration to whether in view of its intended purpose the high-risk AI system is likely to have an adverse
impact on persons under the age of 18 and, as appropriate, other vulnerable groups.
10. For providers of high-risk AI systems that are subject to requirements regarding internal risk management
processes under other relevant provisions of Union law, the aspects provided in paragraphs 1 to 9 may be part
of, or combined with, the risk management procedures established pursuant to that law .
Article 10
Data and data governance
1. High-risk AI systems which make use of techniques involving the training of AI models with data shall be
developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in
paragraphs 2 to 5 whenever such data sets are used.
2. Training, validation and testing data sets shall be subject to data governance and management practices
appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular:
(a)the relevant design choices;
(b)data collection processes and the origin of data, and in the case of personal data, the original purpose of the
data collection;
(c)relevant data-preparation processing operations, such as annotation, labelling, cleaning, updating,
enrichment and aggregation;
(d)the formulation of assumptions, in particular with respect to the information that the data are supposed to
measure and represent;
(e)an assessment of the availability, quantity and suitability of the data sets that are needed;
(f)examination in view of possible biases that are likely to affect the health and safety of persons, have
a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially
where data outputs influence inputs for future operations;
(g)appropriate measures to detect, prevent and mitigate possible biases identified according to point (f);
(h)the identification of relevant data gaps or shortcomings that prevent compliance with this Regulation, and
how those gaps and shortcomings can be addressed.
3. Training, validation and testing data sets shall be relevant, sufficiently representative, and to the best extent
possible, free of errors and complete in view of the intended purpose. They shall have the appropriate statistical
properties, including, where applicable, as regards the persons or groups of persons in relation to whom the
high-risk AI system is intended to be used. Those characteristics of the data sets may be met at the level of
individual data sets or at the level of a combination thereof.
4. Data sets shall take into account, to the extent required by the intended purpose, the characteristics or
elements that are particular to the specific geographical, contextual, behavioural or functional setting within
which the high-risk AI system is intended to be used.
5. To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation
to the high-risk AI systems in accordance with paragraph 2, points (f) and (g) of this Article, the providers of
such systems may exceptionally process special categories of personal data, subject to appropriate safeguards
for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations
(EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in
order for such processing to occur:
(a)the bias detection and correction cannot be effectively fulfilled by processing other data, including
synthetic or anonymised data;
(b)the special categories of personal data are subject to technical limitations on the re-use of the personal data,
and state-of-the-art security and privacy-preserving measures, including pseudonymisation;
(c)the special categories of personal data are subject to measures to ensure that the personal data processed
are secured, protected, subject to suitable safeguards, including strict controls and documentation of the
access, to avoid misuse and ensure that only authorised persons have access to those personal data with
appropriate confidentiality obligations;
(d)the special categories of personal data are not to be transmitted, transferred or otherwise accessed by other
parties;
(e)the special categories of personal data are deleted once the bias has been corrected or the personal data has
reached the end of its retention period, whichever comes first;
(f)the records of processing activities pursuant to Regulations (EU) 2016/679 and (EU) 2018/1725 and
Directive (EU) 2016/680 include the reasons why the processing of special categories of personal data was
strictly necessary to detect and correct biases, and why that objective could not be achieved by processing
other data.
6. For the development of high-risk AI systems not using techniques involving the training of AI models,
paragraphs 2 to 5 apply only to the testing data sets.
Article 11
Technical documentation
1. The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the
market or put into service and shall be kept up to date.
The technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system
complies with the requirements set out in this Section and to provide national competent authorities and
notified bodies with the necessary information in a clear and comprehensive form to assess the compliance of
the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV. SMEs,
including start-ups, may provide the elements of the technical documentation specified in Annex IV in
a simplified manner. To that end, the Commission shall establish a simplified technical documentation form
targeted at the needs of small and microenterprises. Where an SME, including a start-up, opts to provide the
information required in Annex IV in a simplified manner, it shall use the form referred to in this paragraph.
Notified bodies shall accept the form for the purposes of the conformity assessment.
2. Where a high-risk AI system related to a product covered by the Union harmonisation legislation listed in
Section A of Annex I is placed on the market or put into service, a single set of technical documentation shall
be drawn up containing all the information set out in paragraph 1, as well as the information required under
those legal acts.
3. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend
Annex IV, where necessary , to ensure that, in light of technical progress, the technical documentation provides
all the information necessary to assess the compliance of the system with the requirements set out in this
Section.
Article 12
Record-keeping
1. High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime
of the system.
2. In order to ensure a level of traceability of the functioning of a high-risk AI system that is appropriate to the
intended purpose of the system, logging capabilities shall enable the recording of events relevant for:
(a)identifying situations that may result in the high-risk AI system presenting a risk within the meaning of
Article 79(1) or in a substantial modification;
(b)facilitating the post-market monitoring referred to in Article 72; and
(c)monitoring the operation of high-risk AI systems referred to in Article 26(5).
3. For high-risk AI systems referred to in point 1(a) of Annex III, the logging capabilities shall provide, at
a minimum:
(a)recording of the period of each use of the system (start date and time and end date and time of each use);
(b)the reference database against which input data has been checked by the system;
(c)the input data for which the search has led to a match;
(d)the identification of the natural persons involved in the verification of the results, as referred to in
Article 14(5).
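The minimum logging content required by points (a) to (d) can be read as a concrete record schema. The following is an illustrative sketch only, not part of the Regulation; the class name and field names are assumptions chosen for this example, and the Regulation prescribes the information to be logged, not any data format:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class BiometricIdLogRecord:
    """Illustrative record covering the minimum fields of Article 12(3).

    All names here are assumptions for illustration purposes only.
    """
    use_start: datetime          # point (a): start date and time of the use
    use_end: datetime            # point (a): end date and time of the use
    reference_database: str      # point (b): database the input data was checked against
    matched_inputs: list = field(default_factory=list)     # point (c): inputs that led to a match
    verifying_persons: list = field(default_factory=list)  # point (d): persons verifying results, cf. Article 14(5)

record = BiometricIdLogRecord(
    use_start=datetime(2025, 3, 1, 9, 0),
    use_end=datetime(2025, 3, 1, 9, 5),
    reference_database="watchlist-v4",
    matched_inputs=["probe-0172"],
    verifying_persons=["officer-A", "officer-B"],
)
```

Listing two verifying persons reflects the separate verification by at least two natural persons required by Article 14(5) for these systems.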
Article 13
Transparency and provision of information to deployers
1. High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is
sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately. An
appropriate type and degree of transparency shall be ensured with a view to achieving compliance with the
relevant obligations of the provider and deployer set out in Section 3.
2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or
otherwise that include concise, complete, correct and clear information that is relevant, accessible and
comprehensible to deployers.
3. The instructions for use shall contain at least the following information:
(a)the identity and the contact details of the provider and, where applicable, of its authorised representative;
(b)the characteristics, capabilities and limitations of performance of the high-risk AI system, including:
(i)its intended purpose;
(ii)the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15
against which the high-risk AI system has been tested and validated and which can be expected, and
any known and foreseeable circumstances that may have an impact on that expected level of accuracy,
robustness and cybersecurity;
(iii)any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance
with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to
risks to the health and safety or fundamental rights referred to in Article 9(2);
(iv)where applicable, the technical capabilities and characteristics of the high-risk AI system to provide
information that is relevant to explain its output;
(v)when appropriate, its performance regarding specific persons or groups of persons on which the
system is intended to be used;
(vi)when appropriate, specifications for the input data, or any other relevant information in terms of the
training, validation and testing data sets used, taking into account the intended purpose of the high-risk
AI system;
(vii)where applicable, information to enable deployers to interpret the output of the high-risk AI system
and use it appropriately;
(c)the changes to the high-risk AI system and its performance which have been pre-determined by the
provider at the moment of the initial conformity assessment, if any;
(d)the human oversight measures referred to in Article 14, including the technical measures put in place to
facilitate the interpretation of the outputs of the high-risk AI systems by the deployers;
(e)the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any
necessary maintenance and care measures, including their frequency, to ensure the proper functioning of
that AI system, including as regards software updates;
(f)where relevant, a description of the mechanisms included within the high-risk AI system that allows
deployers to properly collect, store and interpret the logs in accordance with Article 12.
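Treating Article 13(3) as a completeness checklist, a provider might verify a draft instructions-for-use document against the required points. This is an illustrative sketch only; the section keys are assumptions, not terms defined by the Regulation:

```python
# Illustrative checklist for Article 13(3); all keys are assumed names.
REQUIRED_SECTIONS = [
    "provider_identity",            # point (a)
    "characteristics_and_limits",   # point (b): intended purpose, accuracy, risks, etc.
    "predetermined_changes",        # point (c)
    "human_oversight_measures",     # point (d)
    "resources_and_maintenance",    # point (e)
    "log_collection_mechanisms",    # point (f), where relevant
]

def missing_sections(instructions: dict) -> list:
    """Return the Article 13(3) sections absent from a draft instructions-for-use document."""
    return [s for s in REQUIRED_SECTIONS if s not in instructions]

draft = {"provider_identity": "...", "characteristics_and_limits": "..."}
# missing_sections(draft) lists the four sections still to be drafted
```

Such a check can only flag missing headings, not assess whether the information given is concise, complete, correct and clear as paragraph 2 requires.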
Article 14
Human oversight
1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-
machine interface tools, that they can be effectively overseen by natural persons during the period in which
they are in use.
2. Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may
emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of
reasonably foreseeable misuse, in particular where such risks persist despite the application of other
requirements set out in this Section.
3. The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the
high-risk AI system, and shall be ensured through either one or both of the following types of measures:
(a)measures identified and built, when technically feasible, into the high-risk AI system by the provider
before it is placed on the market or put into service;
(b)measures identified by the provider before placing the high-risk AI system on the market or putting it into
service and that are appropriate to be implemented by the deployer.
4. For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the
deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate
and proportionate:
(a)to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly
monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and
unexpected performance;
(b)to remain aware of the possible tendency of automatically relying or over-relying on the output produced
by a high-risk AI system (automation bias), in particular for high-risk AI systems used to provide
information or recommendations for decisions to be taken by natural persons;
(c)to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation
tools and methods available;
(d)to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override
or reverse the output of the high-risk AI system;
(e)to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or
a similar procedure that allows the system to come to a halt in a safe state.
5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 of
this Article shall be such as to ensure that, in addition, no action or decision is taken by the deployer on the
basis of the identification resulting from the system unless that identification has been separately verified and
confirmed by at least two natural persons with the necessary competence, training and authority.
The requirement for a separate verification by at least two natural persons shall not apply to high-risk AI
systems used for the purposes of law enforcement, migration, border control or asylum, where Union or
national law considers the application of this requirement to be disproportionate.
Article 15
Accuracy, robustness and cybersecurity
1. High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level
of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their
lifecycle.
2. To address the technical aspects of how to measure the appropriate levels of accuracy and robustness set out
in paragraph 1 and any other relevant performance metrics, the Commission shall, in cooperation with relevant
stakeholders and organisations such as metrology and benchmarking authorities, encourage, as appropriate, the
development of benchmarks and measurement methodologies.
3. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the
accompanying instructions of use.
4. High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may
occur within the system or the environment in which the system operates, in particular due to their interaction
with natural persons or other systems. Technical and organisational measures shall be taken in this regard.
The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may
include backup or fail-safe plans.
High-risk AI systems that continue to learn after being placed on the market or put into service shall be
developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs
influencing input for future operations (feedback loops), and as to ensure that any such feedback loops are duly
addressed with appropriate mitigation measures.
5. High-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use,
outputs or performance by exploiting system vulnerabilities.
The technical solutions aiming to ensure the cybersecurity of high-risk AI systems shall be appropriate to the
relevant circumstances and the risks.
The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to
prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data
poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the AI
model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws.
SECTION 3
Obligations of providers and deployers of high-risk AI systems and other parties
Article 16
Obligations of providers of high-risk AI systems
Providers of high-risk AI systems shall:
(a)ensure that their high-risk AI systems are compliant with the requirements set out in Section 2;
(b)indicate on the high-risk AI system or, where that is not possible, on its packaging or its accompanying
documentation, as applicable, their name, registered trade name or registered trade mark, the address at
which they can be contacted;
(c)have a quality management system in place which complies with Article 17;
(d)keep the documentation referred to in Article 18;
(e)when under their control, keep the logs automatically generated by their high-risk AI systems as referred to
in Article 19;
(f)ensure that the high-risk AI system undergoes the relevant conformity assessment procedure as referred to
in Article 43, prior to its being placed on the market or put into service;
(g)draw up an EU declaration of conformity in accordance with Article 47;
(h)affix the CE marking to the high-risk AI system or, where that is not possible, on its packaging or its
accompanying documentation, to indicate conformity with this Regulation, in accordance with Article 48;
(i)comply with the registration obligations referred to in Article 49(1);
(j)take the necessary corrective actions and provide information as required in Article 20;
(k)upon a reasoned request of a national competent authority, demonstrate the conformity of the high-risk AI
system with the requirements set out in Section 2;
(l)ensure that the high-risk AI system complies with accessibility requirements in accordance with Directives
(EU) 2016/2102 and (EU) 2019/882.
Article 17
Quality management system
1. Providers of high-risk AI systems shall put a quality management system in place that ensures compliance
with this Regulation. That system shall be documented in a systematic and orderly manner in the form of
written policies, procedures and instructions, and shall include at least the following aspects:
(a)a strategy for regulatory compliance, including compliance with conformity assessment procedures and
procedures for the management of modifications to the high-risk AI system;
(b)techniques, procedures and systematic actions to be used for the design, design control and design
verification of the high-risk AI system;
(c)techniques, procedures and systematic actions to be used for the development, quality control and quality
assurance of the high-risk AI system;
(d)examination, test and validation procedures to be carried out before, during and after the development of
the high-risk AI system, and the frequency with which they have to be carried out;
(e)technical specifications, including standards, to be applied and, where the relevant harmonised standards
are not applied in full or do not cover all of the relevant requirements set out in Section 2, the means to be
used to ensure that the high-risk AI system complies with those requirements;
(f)systems and procedures for data management, including data acquisition, data collection, data analysis,
data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other
operation regarding the data that is performed before and for the purpose of the placing on the market or
the putting into service of high-risk AI systems;
(g)the risk management system referred to in Article 9;
(h)the setting-up, implementation and maintenance of a post-market monitoring system, in accordance with
Article 72;
(i)procedures related to the reporting of a serious incident in accordance with Article 73;
(j)the handling of communication with national competent authorities, other relevant authorities, including
those providing or supporting the access to data, notified bodies, other operators, customers or other
interested parties;
(k)systems and procedures for record-keeping of all relevant documentation and information;
(l)resource management, including security-of-supply related measures;
(m)an accountability framework setting out the responsibilities of the management and other staff with regard
to all the aspects listed in this paragraph.
2. The implementation of the aspects referred to in paragraph 1 shall be proportionate to the size of the
provider’s organisation. Providers shall, in any event, respect the degree of rigour and the level of protection
required to ensure the compliance of their high-risk AI systems with this Regulation.
3. Providers of high-risk AI systems that are subject to obligations regarding quality management systems or
an equivalent function under relevant sectoral Union law may include the aspects listed in paragraph 1 as part
of the quality management systems pursuant to that law.
4. For providers that are financial institutions subject to requirements regarding their internal governance,
arrangements or processes under Union financial services law, the obligation to put in place a quality
management system, with the exception of paragraph 1, points (g), (h) and (i) of this Article, shall be deemed to
be fulfilled by complying with the rules on internal governance arrangements or processes pursuant to the
relevant Union financial services law. To that end, any harmonised standards referred to in Article 40 shall be
taken into account.
Article 18
Documentation keeping
1. The provider shall, for a period ending 10 years after the high-risk AI system has been placed on the market
or put into service, keep at the disposal of the national competent authorities:
(a)the technical documentation referred to in Article 11;
(b)the documentation concerning the quality management system referred to in Article 17;
(c)the documentation concerning the changes approved by notified bodies, where applicable;
(d)the decisions and other documents issued by the notified bodies, where applicable;
(e)the EU declaration of conformity referred to in Article 47.
2. Each Member State shall determine conditions under which the documentation referred to in paragraph 1
remains at the disposal of the national competent authorities for the period indicated in that paragraph for the
cases when a provider or its authorised representative established on its territory goes bankrupt or ceases its
activity prior to the end of that period.
3. Providers that are financial institutions subject to requirements regarding their internal governance,
arrangements or processes under Union financial services law shall maintain the technical documentation as
part of the documentation kept under the relevant Union financial services law.
Article 19
Automatically generated logs
1. Providers of high-risk AI systems shall keep the logs referred to in Article 12(1), automatically generated by
their high-risk AI systems, to the extent such logs are under their control. Without prejudice to applicable Union
or national law, the logs shall be kept for a period appropriate to the intended purpose of the high-risk AI
system, of at least six months, unless provided otherwise in the applicable Union or national law, in particular
in Union law on the protection of personal data.
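The retention rule in paragraph 1 combines a six-month floor with a purpose-dependent period. The following sketch is illustrative only; it assumes six months may be approximated as 183 days, which a real implementation would have to replace with the legally correct computation under the applicable Union or national law:

```python
from datetime import date, timedelta

# Assumption for illustration: six months approximated as 183 days.
SIX_MONTHS = timedelta(days=183)

def earliest_deletion_date(log_created: date, purpose_retention: timedelta) -> date:
    """Earliest date a provider-controlled log could be deleted under Article 19(1):
    at least six months after creation, longer where the intended purpose of the
    system (or other applicable law, e.g. on personal data protection) requires it."""
    return log_created + max(SIX_MONTHS, purpose_retention)

d = earliest_deletion_date(date(2025, 1, 1), timedelta(days=30))
```

Note that the six months is a floor, not a target: where the intended purpose of the system calls for a longer period, the longer period applies.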
2. Providers that are financial institutions subject to requirements regarding their internal governance,
arrangements or processes under Union financial services law shall maintain the logs automatically generated
by their high-risk AI systems as part of the documentation kept under the relevant financial services law.
Article 20
Corrective actions and duty of information
1. Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system that
they have placed on the market or put into service is not in conformity with this Regulation shall immediately
take the necessary corrective actions to bring that system into conformity, to withdraw it, to disable it, or to
recall it, as appropriate. They shall inform the distributors of the high-risk AI system concerned and, where
applicable, the deployers, the authorised representative and importers accordingly.
2. Where the high-risk AI system presents a risk within the meaning of Article 79(1) and the provider becomes
aware of that risk, it shall immediately investigate the causes, in collaboration with the reporting deployer,
where applicable, and inform the market surveillance authorities competent for the high-risk AI system
concerned and, where applicable, the notified body that issued a certificate for that high-risk AI system in
accordance with Article 44, in particular, of the nature of the non-compliance and of any relevant corrective
action taken.
Article 21
Cooperation with competent authorities
1. Providers of high-risk AI systems shall, upon a reasoned request by a competent authority, provide that
authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI
system with the requirements set out in Section 2, in a language which can be easily understood by the
authority in one of the official languages of the institutions of the Union as indicated by the Member State
concerned.
2. Upon a reasoned request by a competent authority, providers shall also give the requesting competent
authority, as applicable, access to the automatically generated logs of the high-risk AI system referred to in
Article 12(1), to the extent such logs are under their control.
3. Any information obtained by a competent authority pursuant to this Article shall be treated in accordance
with the confidentiality obligations set out in Article 78.
Article 22
Authorised representatives of providers of high-risk AI systems
1. Prior to making their high-risk AI systems available on the Union market, providers established in third
countries shall, by written mandate, appoint an authorised representative which is established in the Union.
2. The provider shall enable its authorised representative to perform the tasks specified in the mandate
received from the provider.
3. The authorised representative shall perform the tasks specified in the mandate received from the provider. It
shall provide a copy of the mandate to the market surveillance authorities upon request, in one of the official
languages of the institutions of the Union, as indicated by the competent authority. For the purposes of this
Regulation, the mandate shall empower the authorised representative to carry out the following tasks:
(a)verify that the EU declaration of conformity referred to in Article 47 and the technical documentation
referred to in Article 11 have been drawn up and that an appropriate conformity assessment procedure has
been carried out by the provider;
(b)keep at the disposal of the competent authorities and national authorities or bodies referred to in
Article 74(10), for a period of 10 years after the high-risk AI system has been placed on the market or put
into service, the contact details of the provider that appointed the authorised representative, a copy of the
EU declaration of conformity referred to in Article 47, the technical documentation and, if applicable, the
certificate issued by the notified body;
(c)provide a competent authority, upon a reasoned request, with all the information and documentation,
including that referred to in point (b) of this subparagraph, necessary to demonstrate the conformity of
a high-risk AI system with the requirements set out in Section 2, including access to the logs, as referred to
in Article 12(1), automatically generated by the high-risk AI system, to the extent such logs are under the
control of the provider;
(d)cooperate with competent authorities, upon a reasoned request, in any action the latter take in relation to
the high-risk AI system, in particular to reduce and mitigate the risks posed by the high-risk AI system;
(e)where applicable, comply with the registration obligations referred to in Article 49(1), or, if the registration
is carried out by the provider itself, ensure that the information referred to in point 3 of Section A of
Annex VIII is correct.
The mandate shall empower the authorised representative to be addressed, in addition to or instead of the
provider, by the competent authorities, on all issues related to ensuring compliance with this Regulation.
4. The authorised representative shall terminate the mandate if it considers or has reason to consider the
provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall immediately
inform the relevant market surveillance authority, as well as, where applicable, the relevant notified body, about
the termination of the mandate and the reasons therefor.
Article 23
Obligations of importers
1. Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity
with this Regulation by verifying that:
(a)the relevant conformity assessment procedure referred to in Article 43 has been carried out by the provider
of the high-risk AI system;
(b)the provider has drawn up the technical documentation in accordance with Article 11 and Annex IV;
(c)the system bears the required CE marking and is accompanied by the EU declaration of conformity
referred to in Article 47 and instructions for use;
(d)the provider has appointed an authorised representative in accordance with Article 22(1).
2. Where an importer has sufficient reason to consider that a high-risk AI system is not in conformity with this
Regulation, or is falsified, or accompanied by falsified documentation, it shall not place the system on the
market until it has been brought into conformity. Where the high-risk AI system presents a risk within the
meaning of Article 79(1), the importer shall inform the provider of the system, the authorised representative
and the market surveillance authorities to that effect.
3. Importers shall indicate their name, registered trade name or registered trade mark, and the address at which
they can be contacted on the high-risk AI system and on its packaging or its accompanying documentation,
where applicable.
4. Importers shall ensure that, while a high-risk AI system is under their responsibility, storage or transport
conditions, where applicable, do not jeopardise its compliance with the requirements set out in Section 2.
5. Importers shall keep, for a period of 10 years after the high-risk AI system has been placed on the market or
put into service, a copy of the certificate issued by the notified body, where applicable, of the instructions for
use, and of the EU declaration of conformity referred to in Article 47.
6. Importers shall provide the relevant competent authorities, upon a reasoned request, with all the necessary
information and documentation, including that referred to in paragraph 5, to demonstrate the conformity of
a high-risk AI system with the requirements set out in Section 2 in a language which can be easily understood
by them. For this purpose, they shall also ensure that the technical documentation can be made available to
those authorities.
7. Importers shall cooperate with the relevant competent authorities in any action those authorities take in
relation to a high-risk AI system placed on the market by the importers, in particular to reduce and mitigate the
risks posed by it.
Article 24
Obligations of distributors
1. Before making a high-risk AI system available on the market, distributors shall verify that it bears the
required CE marking, that it is accompanied by a copy of the EU declaration of conformity referred to in
Article 47 and instructions for use, and that the provider and the importer of that system, as applicable, have
complied with their respective obligations as laid down in Article 16, points (b) and (c) and Article 23(3).
2. Where a distributor considers or has reason to consider, on the basis of the information in its possession,
that a high-risk AI system is not in conformity with the requirements set out in Section 2, it shall not make the
high-risk AI system available on the market until the system has been brought into conformity with those
requirements. Furthermore, where the high-risk AI system presents a risk within the meaning of Article 79(1),
the distributor shall inform the provider or the importer of the system, as applicable, to that effect.
3. Distributors shall ensure that, while a high-risk AI system is under their responsibility, storage or transport
conditions, where applicable, do not jeopardise the compliance of the system with the requirements set out in
Section 2.
4. A distributor that considers or has reason to consider, on the basis of the information in its possession,
a high-risk AI system which it has made available on the market not to be in conformity with the requirements
set out in Section 2, shall take the corrective actions necessary to bring that system into conformity with those
requirements, to withdraw it or recall it, or shall ensure that the provider, the importer or any relevant operator,
as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning
of Article 79(1), the distributor shall immediately inform the provider or importer of the system and the
authorities competent for the high-risk AI system concerned, giving details, in particular, of the non-compliance
and of any corrective actions taken.
5. Upon a reasoned request from a relevant competent authority, distributors of a high-risk AI system shall
provide that authority with all the information and documentation regarding their actions pursuant to
paragraphs 1 to 4 necessary to demonstrate the conformity of that system with the requirements set out in
Section 2.
6. Distributors shall cooperate with the relevant competent authorities in any action those authorities take in
relation to a high-risk AI system made available on the market by the distributors, in particular to reduce or
mitigate the risk posed by it.
Article 25
Responsibilities along the AI value chain
1. Any distributor, importer, deployer or other third-party shall be considered to be a provider of a high-risk AI
system for the purposes of this Regulation and shall be subject to the obligations of the provider under
Article 16, in any of the following circumstances:
(a)they put their name or trademark on a high-risk AI system already placed on the market or put into service,
without prejudice to contractual arrangements stipulating that the obligations are otherwise allocated;
(b)they make a substantial modification to a high-risk AI system that has already been placed on the market or
has already been put into service in such a way that it remains a high-risk AI system pursuant to Article 6;
(c)they modify the intended purpose of an AI system, including a general-purpose AI system, which has not
been classified as high-risk and has already been placed on the market or put into service in such a way
that the AI system concerned becomes a high-risk AI system in accordance with Article 6.
2. Where the circumstances referred to in paragraph 1 occur, the provider that initially placed the AI system on
the market or put it into service shall no longer be considered to be a provider of that specific AI system for the
purposes of this Regulation. That initial provider shall closely cooperate with new providers and shall make
available the necessary information and provide the reasonably expected technical access and other assistance
that are required for the fulfilment of the obligations set out in this Regulation, in particular regarding the
compliance with the conformity assessment of high-risk AI systems. This paragraph shall not apply in cases
where the initial provider has clearly specified that its AI system is not to be changed into a high-risk AI system
and therefore does not fall under the obligation to hand over the documentation.
3. In the case of high-risk AI systems that are safety components of products covered by the Union
harmonisation legislation listed in Section A of Annex I, the product manufacturer shall be considered to be the
provider of the high-risk AI system, and shall be subject to the obligations under Article 16 under either of the
following circumstances:
(a)the high-risk AI system is placed on the market together with the product under the name or trademark of
the product manufacturer;
(b)the high-risk AI system is put into service under the name or trademark of the product manufacturer after
the product has been placed on the market.
4. The provider of a high-risk AI system and the third party that supplies an AI system, tools, services,
components, or processes that are used or integrated in a high-risk AI system shall, by written agreement,
specify the necessary information, capabilities, technical access and other assistance based on the generally
acknowledged state of the art, in order to enable the provider of the high-risk AI system to fully comply with
the obligations set out in this Regulation. This paragraph shall not apply to third parties making accessible to
the public tools, services, processes, or components, other than general-purpose AI models, under a free and
open-source licence.
The AI Office may develop and recommend voluntary model terms for contracts between providers of high-risk
AI systems and third parties that supply tools, services, components or processes that are used for or integrated
into high-risk AI systems. When developing those voluntary model terms, the AI Office shall take into account
possible contractual requirements applicable in specific sectors or business cases. The voluntary model terms
shall be published and be available free of charge in an easily usable electronic format.
5. Paragraphs 2 and 3 are without prejudice to the need to observe and protect intellectual property rights,
confidential business information and trade secrets in accordance with Union and national law.
Article 26
Obligations of deployers of high-risk AI systems
1. Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure
they use such systems in accordance with the instructions for use accompanying the systems, pursuant to
paragraphs 3 and 6.
2. Deployers shall assign human oversight to natural persons who have the necessary competence, training
and authority, as well as the necessary support.
3. The obligations set out in paragraphs 1 and 2 are without prejudice to other deployer obligations under Union or national law and to the deployer's freedom to organise its own resources and activities for the purpose of implementing the human oversight measures indicated by the provider.
4. Without prejudice to paragraphs 1 and 2, to the extent the deployer exercises control over the input data,
that deployer shall ensure that input data is relevant and sufficiently representative in view of the intended
purpose of the high-risk AI system.
5. Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities.
2/20/25, 8:13 PM L_202401689EN.000101.fmx.xml
https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L_202401689 54/110
For deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the monitoring obligation set out in the first subparagraph shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to the relevant financial service law.
6. Deployers of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system
to the extent such logs are under their control, for a period appropriate to the intended purpose of the high-risk
AI system, of at least six months, unless provided otherwise in applicable Union or national law, in particular in
Union law on the protection of personal data.
Deployers that are financial institutions subject to requirements regarding their internal governance,
arrangements or processes under Union financial services law shall maintain the logs as part of the
documentation kept pursuant to the relevant Union financial service law.
7. Before putting into service or using a high-risk AI system at the workplace, deployers who are employers
shall inform workers’ representatives and the affected workers that they will be subject to the use of the high-
risk AI system. This information shall be provided, where applicable, in accordance with the rules and
procedures laid down in Union and national law and practice on information of workers and their
representatives.
8. Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or
agencies shall comply with the registration obligations referred to in Article 49. When such deployers find that
the high-risk AI system that they envisage using has not been registered in the EU database referred to in
Article 71, they shall not use that system and shall inform the provider or the distributor.
9. Where applicable, deployers of high-risk AI systems shall use the information provided under Article 13 of
this Regulation to comply with their obligation to carry out a data protection impact assessment under
Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680.
10. Without prejudice to Directive (EU) 2016/680, in the framework of an investigation for the targeted search
of a person suspected or convicted of having committed a criminal offence, the deployer of a high-risk AI
system for post-remote biometric identification shall request an authorisation, ex ante, or without undue delay
and no later than 48 hours, by a judicial authority or an administrative authority whose decision is binding and
subject to judicial review , for the use of that system, except when it is used for the initial identification of
a potential suspect based on objective and verifiable facts directly linked to the offence. Each use shall be
limited to what is strictly necessary for the investigation of a specific criminal offence.
If the authorisation requested pursuant to the first subparagraph is rejected, the use of the post-remote biometric
identification system linked to that requested authorisation shall be stopped with immediate effect and the
personal data linked to the use of the high-risk AI system for which the authorisation was requested shall be
deleted.
In no case shall such high-risk AI system for post-remote biometric identification be used for law enforcement
purposes in an untargeted way, without any link to a criminal offence, a criminal proceeding, a genuine and
present or genuine and foreseeable threat of a criminal offence, or the search for a specific missing person. It
shall be ensured that no decision that produces an adverse legal effect on a person may be taken by the law
enforcement authorities based solely on the output of such post-remote biometric identification systems.
This paragraph is without prejudice to Article 9 of Regulation (EU) 2016/679 and Article 10 of
Directive (EU) 2016/680 for the processing of biometric data.
Regardless of the purpose or deployer, each use of such high-risk AI systems shall be documented in the
relevant police file and shall be made available to the relevant market surveillance authority and the national
data protection authority upon request, excluding the disclosure of sensitive operational data related to law
enforcement. This subparagraph shall be without prejudice to the powers conferred by Directive (EU) 2016/680
on supervisory authorities.
Deployers shall submit annual reports to the relevant market surveillance and national data protection
authorities on their use of post-remote biometric identification systems, excluding the disclosure of sensitive
operational data related to law enforcement. The reports may be aggregated to cover more than one
deployment.
Member States may introduce, in accordance with Union law, more restrictive laws on the use of post-remote
biometric identification systems.
11. Without prejudice to Article 50 of this Regulation, deployers of high-risk AI systems referred to in
Annex III that make decisions or assist in making decisions related to natural persons shall inform the natural
persons that they are subject to the use of the high-risk AI system. For high-risk AI systems used for law
enforcement purposes, Article 13 of Directive (EU) 2016/680 shall apply.
12. Deployers shall cooperate with the relevant competent authorities in any action those authorities take in
relation to the high-risk AI system in order to implement this Regulation.
Article 27
Fundamental rights impact assessment for high-risk AI systems
1. Prior to deploying a high-risk AI system referred to in Article 6(2), with the exception of high-risk AI
systems intended to be used in the area listed in point 2 of Annex III, deployers that are bodies governed by
public law, or are private entities providing public services, and deployers of high-risk AI systems referred to in
points 5(b) and (c) of Annex III, shall perform an assessment of the impact on fundamental rights that the use
of such system may produce. For that purpose, deployers shall perform an assessment consisting of:
(a) a description of the deployer's processes in which the high-risk AI system will be used in line with its intended purpose;
(b) a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used;
(c) the categories of natural persons and groups likely to be affected by its use in the specific context;
(d) the specific risks of harm likely to have an impact on the categories of natural persons or groups of persons identified pursuant to point (c) of this paragraph, taking into account the information given by the provider pursuant to Article 13;
(e) a description of the implementation of human oversight measures, according to the instructions for use;
(f) the measures to be taken in the case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms.
2. The obligation laid down in paragraph 1 applies to the first use of the high-risk AI system. The deployer may, in similar cases, rely on previously conducted fundamental rights impact assessments or existing impact assessments carried out by the provider. If, during the use of the high-risk AI system, the deployer considers that any of the elements listed in paragraph 1 has changed or is no longer up to date, the deployer shall take the necessary steps to update the information.
3. Once the assessment referred to in paragraph 1 of this Article has been performed, the deployer shall notify
the market surveillance authority of its results, submitting the filled-out template referred to in paragraph 5 of
this Article as part of the notification. In the case referred to in Article 46(1), deployers may be exempt from
that obligation to notify.
4. If any of the obligations laid down in this Article is already met through the data protection impact
assessment conducted pursuant to Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU)
2016/680, the fundamental rights impact assessment referred to in paragraph 1 of this Article shall complement
that data protection impact assessment.
5. The AI Office shall develop a template for a questionnaire, including through an automated tool, to
facilitate deployers in complying with their obligations under this Article in a simplified manner.
SECTION 4
Notifying authorities and notified bodies
Article 28
Notifying authorities
1. Each Member State shall designate or establish at least one notifying authority responsible for setting up
and carrying out the necessary procedures for the assessment, designation and notification of conformity
assessment bodies and for their monitoring. Those procedures shall be developed in cooperation between the
notifying authorities of all Member States.
2. Member States may decide that the assessment and monitoring referred to in paragraph 1 is to be carried
out by a national accreditation body within the meaning of, and in accordance with, Regulation (EC)
No 765/2008.
3. Notifying authorities shall be established, organised and operated in such a way that no conflict of interest
arises with conformity assessment bodies, and that the objectivity and impartiality of their activities are
safeguarded.
4. Notifying authorities shall be organised in such a way that decisions relating to the notification of
conformity assessment bodies are taken by competent persons different from those who carried out the
assessment of those bodies.
5. Notifying authorities shall offer or provide neither any activities that conformity assessment bodies
perform, nor any consultancy services on a commercial or competitive basis.
6. Notifying authorities shall safeguard the confidentiality of the information that they obtain, in accordance
with Article 78.
7. Notifying authorities shall have an adequate number of competent personnel at their disposal for the proper performance of their tasks. Competent personnel shall have the necessary expertise, where applicable, for their
function, in fields such as information technologies, AI and law, including the supervision of fundamental
rights.
Article 29
Application of a conformity assessment body for notification
1. Conformity assessment bodies shall submit an application for notification to the notifying authority of the
Member State in which they are established.
2. The application for notification shall be accompanied by a description of the conformity assessment
activities, the conformity assessment module or modules and the types of AI systems for which the conformity
assessment body claims to be competent, as well as by an accreditation certificate, where one exists, issued by
a national accreditation body attesting that the conformity assessment body fulfils the requirements laid down
in Article 31.
Any valid document related to existing designations of the applicant notified body under any other Union
harmonisation legislation shall be added.
3. Where the conformity assessment body concerned cannot provide an accreditation certificate, it shall
provide the notifying authority with all the documentary evidence necessary for the verification, recognition
and regular monitoring of its compliance with the requirements laid down in Article 31.
4. For notified bodies which are designated under any other Union harmonisation legislation, all documents and certificates linked to those designations may be used to support their designation procedure under this Regulation, as appropriate. The notified body shall update the documentation referred to in paragraphs 2 and 3 of this Article whenever relevant changes occur, in order to enable the authority responsible for notified bodies to monitor and verify continuous compliance with all the requirements laid down in Article 31.
Article 30
Notification procedure
1. Notifying authorities may notify only conformity assessment bodies which have satisfied the requirements
laid down in Article 31.
2. Notifying authorities shall notify the Commission and the other Member States, using the electronic
notification tool developed and managed by the Commission, of each conformity assessment body referred to
in paragraph 1.
3. The notification referred to in paragraph 2 of this Article shall include full details of the conformity
assessment activities, the conformity assessment module or modules, the types of AI systems concerned, and
the relevant attestation of competence. Where a notification is not based on an accreditation certificate as
referred to in Article 29(2), the notifying authority shall provide the Commission and the other Member States
with documentary evidence which attests to the competence of the conformity assessment body and to the
arrangements in place to ensure that that body will be monitored regularly and will continue to satisfy the
requirements laid down in Article 31.
4. The conformity assessment body concerned may perform the activities of a notified body only where no objections are raised by the Commission or the other Member States within two weeks of a notification by a notifying authority where it includes an accreditation certificate referred to in Article 29(2), or within two months of a notification by the notifying authority where it includes documentary evidence referred to in Article 29(3).
5. Where objections are raised, the Commission shall, without delay, enter into consultations with the relevant Member States and the conformity assessment body. In view thereof, the Commission shall decide whether the authorisation is justified. The Commission shall address its decision to the Member State concerned and to the relevant conformity assessment body.
Article 31
Requirements relating to notified bodies
1. A notified body shall be established under the national law of a Member State and shall have legal
personality.
2. Notified bodies shall satisfy the organisational, quality management, resources and process requirements
that are necessary to fulfil their tasks, as well as suitable cybersecurity requirements.
3. The organisational structure, allocation of responsibilities, reporting lines and operation of notified bodies
shall ensure confidence in their performance, and in the results of the conformity assessment activities that the
notified bodies conduct.
4. Notified bodies shall be independent of the provider of a high-risk AI system in relation to which they perform conformity assessment activities. Notified bodies shall also be independent of any other operator having an economic interest in high-risk AI systems assessed, as well as of any competitors of the provider. This shall not preclude the use of assessed high-risk AI systems that are necessary for the operations of the conformity assessment body, or the use of such high-risk AI systems for personal purposes.
5. Neither a conformity assessment body, its top-level management nor the personnel responsible for carrying out its conformity assessment tasks shall be directly involved in the design, development, marketing or use of high-risk AI systems, nor shall they represent the parties engaged in those activities. They shall not engage in any activity that might conflict with their independence of judgement or integrity in relation to conformity assessment activities for which they are notified. This shall, in particular, apply to consultancy services.
6. Notified bodies shall be organised and operated so as to safeguard the independence, objectivity and
impartiality of their activities. Notified bodies shall document and implement a structure and procedures to
safeguard impartiality and to promote and apply the principles of impartiality throughout their organisation,
personnel and assessment activities.
7. Notified bodies shall have documented procedures in place ensuring that their personnel, committees,
subsidiaries, subcontractors and any associated body or personnel of external bodies maintain, in accordance
with Article 78, the confidentiality of the information which comes into their possession during the
performance of conformity assessment activities, except when its disclosure is required by law. The staff of
notified bodies shall be bound to observe professional secrecy with regard to all information obtained in
carrying out their tasks under this Regulation, except in relation to the notifying authorities of the Member
State in which their activities are carried out.
8. Notified bodies shall have procedures for the performance of activities which take due account of the size
of a provider, the sector in which it operates, its structure, and the degree of complexity of the AI system
concerned.
9. Notified bodies shall take out appropriate liability insurance for their conformity assessment activities,
unless liability is assumed by the Member State in which they are established in accordance with national law
or that Member State is itself directly responsible for the conformity assessment.
10. Notified bodies shall be capable of carrying out all their tasks under this Regulation with the highest
degree of professional integrity and the requisite competence in the specific field, whether those tasks are
carried out by notified bodies themselves or on their behalf and under their responsibility.
11. Notified bodies shall have sufficient internal competences to be able effectively to evaluate the tasks
conducted by external parties on their behalf. The notified body shall have permanent availability of sufficient
administrative, technical, legal and scientific personnel who possess experience and knowledge relating to the
relevant types of AI systems, data and data computing, and relating to the requirements set out in Section 2.
12. Notified bodies shall participate in coordination activities as referred to in Article 38. They shall also take
part directly, or be represented in, European standardisation organisations, or ensure that they are aware and up
to date in respect of relevant standards.
Article 32
Presumption of conformity with requirements relating to notified bodies
Where a conformity assessment body demonstrates its conformity with the criteria laid down in the relevant
harmonised standards or parts thereof, the references of which have been published in the Official Journal of
the European Union, it shall be presumed to comply with the requirements set out in Article 31 in so far as the
applicable harmonised standards cover those requirements.
Article 33
Subsidiaries of notified bodies and subcontracting
1. Where a notified body subcontracts specific tasks connected with the conformity assessment or has recourse to a subsidiary, it shall ensure that the subcontractor or the subsidiary meets the requirements laid down in Article 31, and shall inform the notifying authority accordingly.
2. Notified bodies shall take full responsibility for the tasks performed by any subcontractors or subsidiaries.
3. Activities may be subcontracted or carried out by a subsidiary only with the agreement of the provider .
Notified bodies shall make a list of their subsidiaries publicly available.
4. The relevant documents concerning the assessment of the qualifications of the subcontractor or the
subsidiary and the work carried out by them under this Regulation shall be kept at the disposal of the notifying
authority for a period of five years from the termination date of the subcontracting.
Article 34
Operational obligations of notified bodies
1. Notified bodies shall verify the conformity of high-risk AI systems in accordance with the conformity
assessment procedures set out in Article 43.
2. Notified bodies shall avoid unnecessary burdens for providers when performing their activities, and take
due account of the size of the provider, the sector in which it operates, its structure and the degree of
complexity of the high-risk AI system concerned, in particular in view of minimising administrative burdens
and compliance costs for micro- and small enterprises within the meaning of Recommendation 2003/361/EC.
The notified body shall, nevertheless, respect the degree of rigour and the level of protection required for the
compliance of the high-risk AI system with the requirements of this Regulation.
3. Notified bodies shall make available and submit upon request all relevant documentation, including the
providers’ documentation, to the notifying authority referred to in Article 28 to allow that authority to conduct
its assessment, designation, notification and monitoring activities, and to facilitate the assessment outlined in
this Section.
Article 35
Identification numbers and lists of notified bodies
1. The Commission shall assign a single identification number to each notified body, even where a body is
notified under more than one Union act.
2. The Commission shall make publicly available the list of the bodies notified under this Regulation,
including their identification numbers and the activities for which they have been notified. The Commission
shall ensure that the list is kept up to date.
Article 36
Changes to notifications
1. The notifying authority shall notify the Commission and the other Member States of any relevant changes
to the notification of a notified body via the electronic notification tool referred to in Article 30(2).
2. The procedures laid down in Articles 29 and 30 shall apply to extensions of the scope of the notification.
For changes to the notification other than extensions of its scope, the procedures laid down in paragraphs 3 to 9 shall apply.
3. Where a notified body decides to cease its conformity assessment activities, it shall inform the notifying authority and the providers concerned as soon as possible and, in the case of a planned cessation, at least one year before ceasing its activities. The certificates of the notified body may remain valid for a period of nine months after cessation of the notified body's activities, on condition that another notified body has confirmed in writing that it will assume responsibilities for the high-risk AI systems covered by those certificates. The latter notified body shall complete a full assessment of the high-risk AI systems affected by the end of that nine-month period before issuing new certificates for those systems. Where the notified body has ceased its activity, the notifying authority shall withdraw the designation.
4. Where a notifying authority has sufficient reason to consider that a notified body no longer meets the
requirements laid down in Article 31, or that it is failing to fulfil its obligations, the notifying authority shall
without delay investigate the matter with the utmost diligence. In that context, it shall inform the notified body
concerned about the objections raised and give it the possibility to make its views known. If the notifying
authority comes to the conclusion that the notified body no longer meets the requirements laid down in
Article 31 or that it is failing to fulfil its obligations, it shall restrict, suspend or withdraw the designation as
appropriate, depending on the seriousness of the failure to meet those requirements or fulfil those obligations. It
shall immediately inform the Commission and the other Member States accordingly.
5. Where its designation has been suspended, restricted, or fully or partially withdrawn, the notified body shall
inform the providers concerned within 10 days.
6. In the event of the restriction, suspension or withdrawal of a designation, the notifying authority shall take
appropriate steps to ensure that the files of the notified body concerned are kept, and to make them available to
notifying authorities in other Member States and to market surveillance authorities at their request.
7. In the event of the restriction, suspension or withdrawal of a designation, the notifying authority shall:
(a) assess the impact on the certificates issued by the notified body;
(b) submit a report on its findings to the Commission and the other Member States within three months of having notified the changes to the designation;
(c) require the notified body to suspend or withdraw, within a reasonable period of time determined by the authority, any certificates which were unduly issued, in order to ensure the continuing conformity of high-risk AI systems on the market;
(d) inform the Commission and the Member States about certificates the suspension or withdrawal of which it has required;
(e) provide the national competent authorities of the Member State in which the provider has its registered place of business with all relevant information about the certificates of which it has required the suspension or withdrawal; that authority shall take the appropriate measures, where necessary, to avoid a potential risk to health, safety or fundamental rights.
8. With the exception of certificates unduly issued, and where a designation has been suspended or restricted,
the certificates shall remain valid in one of the following circumstances:
(a) the notifying authority has confirmed, within one month of the suspension or restriction, that there is no
risk to health, safety or fundamental rights in relation to certificates affected by the suspension or
restriction, and the notifying authority has outlined a timeline for actions to remedy the suspension or
restriction; or
(b) the notifying authority has confirmed that no certificates relevant to the suspension will be issued,
amended or re-issued during the course of the suspension or restriction, and states whether the notified
body has the capability of continuing to monitor and remain responsible for existing certificates issued for
the period of the suspension or restriction; in the event that the notifying authority determines that the
notified body does not have the capability to support existing certificates issued, the provider of the system
covered by the certificate shall confirm in writing to the national competent authorities of the Member
State in which it has its registered place of business, within three months of the suspension or restriction,
that another qualified notified body is temporarily assuming the functions of the notified body to monitor
and remain responsible for the certificates during the period of suspension or restriction.
9. With the exception of certificates unduly issued, and where a designation has been withdrawn, the certificates shall remain valid for a period of nine months under the following circumstances:
(a) the national competent authority of the Member State in which the provider of the high-risk AI system covered by the certificate has its registered place of business has confirmed that there is no risk to health, safety or fundamental rights associated with the high-risk AI systems concerned; and
(b) another notified body has confirmed in writing that it will assume immediate responsibility for those AI systems and completes its assessment within 12 months of the withdrawal of the designation.
In the circumstances referred to in the first subparagraph, the national competent authority of the Member State
in which the provider of the system covered by the certificate has its place of business may extend the
provisional validity of the certificates for additional periods of three months, which shall not exceed 12 months
in total.
The national competent authority or the notified body assuming the functions of the notified body affected by
the change of designation shall immediately inform the Commission, the other Member States and the other
notified bodies thereof.
Article 37
Challenge to the competence of notified bodies
1. The Commission shall, where necessary, investigate all cases where there are reasons to doubt the
competence of a notified body or the continued fulfilment by a notified body of the requirements laid down in
Article 31 and of its applicable responsibilities.
2. The notifying authority shall provide the Commission, on request, with all relevant information relating to
the notification or the maintenance of the competence of the notified body concerned.
3. The Commission shall ensure that all sensitive information obtained in the course of its investigations
pursuant to this Article is treated confidentially in accordance with Article 78.
4. Where the Commission ascertains that a notified body does not meet or no longer meets the requirements
for its notification, it shall inform the notifying Member State accordingly and request it to take the necessary
corrective measures, including the suspension or withdrawal of the notification if necessary. Where the Member State fails to take the necessary corrective measures, the Commission may, by means of an implementing act, suspend, restrict or withdraw the designation. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).
Article 38
Coordination of notified bodies
1. The Commission shall ensure that, with regard to high-risk AI systems, appropriate coordination and
cooperation between notified bodies active in the conformity assessment procedures pursuant to this Regulation
are put in place and properly operated in the form of a sectoral group of notified bodies.
2. Each notifying authority shall ensure that the bodies notified by it participate in the work of a group
referred to in paragraph 1, directly or through designated representatives.
3. The Commission shall provide for the exchange of knowledge and best practices between notifying
authorities.
Article 39
Conformity assessment bodies of third countries
Conformity assessment bodies established under the law of a third country with which the Union has concluded
an agreement may be authorised to carry out the activities of notified bodies under this Regulation, provided
that they meet the requirements laid down in Article 31 or they ensure an equivalent level of compliance.
SECTION 5
Standards, conformity assessment, certificates, registration
Article 40
Harmonised standards and standardisation deliverables
1. High-risk AI systems or general-purpose AI models which are in conformity with harmonised standards or
parts thereof the references of which have been published in the Official Journal of the European Union in
accordance with Regulation (EU) No 1025/2012 shall be presumed to be in conformity with the requirements
set out in Section 2 of this Chapter or, as applicable, with the obligations set out in Chapter V, Sections 2 and
3, of this Regulation, to the extent that those standards cover those requirements or obligations.
2. In accordance with Article 10 of Regulation (EU) No 1025/2012, the Commission shall issue, without
undue delay, standardisation requests covering all requirements set out in Section 2 of this Chapter and, as
applicable, standardisation requests covering obligations set out in Chapter V, Sections 2 and 3, of this
Regulation. The standardisation request shall also ask for deliverables on reporting and documentation
processes to improve AI systems’ resource performance, such as reducing the high-risk AI system’s
consumption of energy and of other resources during its lifecycle, and on the energy-efficient development of
general-purpose AI models. When preparing a standardisation request, the Commission shall consult the Board
and relevant stakeholders, including the advisory forum.
When issuing a standardisation request to European standardisation organisations, the Commission shall
specify that standards have to be clear, consistent, including with the standards developed in the various sectors
for products covered by the existing Union harmonisation legislation listed in Annex I, and aiming to ensure
that high-risk AI systems or general-purpose AI models placed on the market or put into service in the Union
meet the relevant requirements or obligations laid down in this Regulation.
The Commission shall request the European standardisation organisations to provide evidence of their best
efforts to fulfil the objectives referred to in the first and the second subparagraph of this paragraph in
accordance with Article 24 of Regulation (EU) No 1025/2012.
3. The participants in the standardisation process shall seek to promote investment and innovation in AI,
including through increasing legal certainty, as well as the competitiveness and growth of the Union market, to
contribute to strengthening global cooperation on standardisation and taking into account existing international
standards in the field of AI that are consistent with Union values, fundamental rights and interests, and to
enhance multi-stakeholder governance ensuring a balanced representation of interests and the effective
participation of all relevant stakeholders in accordance with Articles 5, 6, and 7 of Regulation (EU)
No 1025/2012.
Article 41
Common specifications
1. The Commission may adopt implementing acts establishing common specifications for the requirements
set out in Section 2 of this Chapter or, as applicable, for the obligations set out in Sections 2 and 3 of
Chapter V where the following conditions have been fulfilled:
(a) the Commission has requested, pursuant to Article 10(1) of Regulation (EU) No 1025/2012, one or more
European standardisation organisations to draft a harmonised standard for the requirements set out in
Section 2 of this Chapter, or, as applicable, for the obligations set out in Sections 2 and 3 of Chapter V, and:
(i) the request has not been accepted by any of the European standardisation organisations; or
(ii) the harmonised standards addressing that request are not delivered within the deadline set in
accordance with Article 10(1) of Regulation (EU) No 1025/2012; or
(iii) the relevant harmonised standards insufficiently address fundamental rights concerns; or
(iv) the harmonised standards do not comply with the request; and
(b) no reference to harmonised standards covering the requirements referred to in Section 2 of this Chapter or,
as applicable, the obligations referred to in Sections 2 and 3 of Chapter V has been published in the Official
Journal of the European Union in accordance with Regulation (EU) No 1025/2012, and no such reference
is expected to be published within a reasonable period.
When drafting the common specifications, the Commission shall consult the advisory forum referred to in
Article 67.
The implementing acts referred to in the first subparagraph of this paragraph shall be adopted in accordance
with the examination procedure referred to in Article 98(2).
2. Before preparing a draft implementing act, the Commission shall inform the committee referred to in
Article 22 of Regulation (EU) No 1025/2012 that it considers the conditions laid down in paragraph 1 of this
Article to be fulfilled.
3. High-risk AI systems or general-purpose AI models which are in conformity with the common
specifications referred to in paragraph 1, or parts of those specifications, shall be presumed to be in conformity
with the requirements set out in Section 2 of this Chapter or, as applicable, to comply with the obligations
referred to in Sections 2 and 3 of Chapter V, to the extent those common specifications cover those
requirements or those obligations.
4. Where a harmonised standard is adopted by a European standardisation organisation and proposed to the
Commission for the publication of its reference in the Official Journal of the European Union, the Commission
shall assess the harmonised standard in accordance with Regulation (EU) No 1025/2012. When reference to
a harmonised standard is published in the Official Journal of the European Union, the Commission shall repeal
the implementing acts referred to in paragraph 1, or parts thereof which cover the same requirements set out in
Section 2 of this Chapter or, as applicable, the same obligations set out in Sections 2 and 3 of Chapter V.
5. Where providers of high-risk AI systems or general-purpose AI models do not comply with the common
specifications referred to in paragraph 1, they shall duly justify that they have adopted technical solutions that
meet the requirements referred to in Section 2 of this Chapter or, as applicable, comply with the obligations set
out in Sections 2 and 3 of Chapter V to a level at least equivalent thereto.
6. Where a Member State considers that a common specification does not entirely meet the requirements set
out in Section 2 or, as applicable, comply with obligations set out in Sections 2 and 3 of Chapter V, it shall
inform the Commission thereof with a detailed explanation. The Commission shall assess that information and,
if appropriate, amend the implementing act establishing the common specification concerned.
Article 42
Presumption of conformity with certain requirements
1. High-risk AI systems that have been trained and tested on data reflecting the specific geographical,
behavioural, contextual or functional setting within which they are intended to be used shall be presumed to
comply with the relevant requirements laid down in Article 10(4).
2. High-risk AI systems that have been certified or for which a statement of conformity has been issued under
a cybersecurity scheme pursuant to Regulation (EU) 2019/881 and the references of which have been published
in the Official Journal of the European Union shall be presumed to comply with the cybersecurity requirements
set out in Article 15 of this Regulation in so far as the cybersecurity certificate or statement of conformity or
parts thereof cover those requirements.
Article 43
Conformity assessment
1. For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-
risk AI system with the requirements set out in Section 2, the provider has applied harmonised standards
referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider
shall opt for one of the following conformity assessment procedures based on:
(a) the internal control referred to in Annex VI; or
(b) the assessment of the quality management system and the assessment of the technical documentation, with
the involvement of a notified body, referred to in Annex VII.
In demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the
provider shall follow the conformity assessment procedure set out in Annex VII where:
(a) harmonised standards referred to in Article 40 do not exist, and common specifications referred to in
Article 41 are not available;
(b) the provider has not applied, or has applied only part of, the harmonised standard;
(c) the common specifications referred to in point (a) exist, but the provider has not applied them;
(d) one or more of the harmonised standards referred to in point (a) has been published with a restriction, and
only on the part of the standard that was restricted.
For the purposes of the conformity assessment procedure referred to in Annex VII, the provider may choose
any of the notified bodies. However, where the high-risk AI system is intended to be put into service by law
enforcement, immigration or asylum authorities or by Union institutions, bodies, offices or agencies, the market
surveillance authority referred to in Article 74(8) or (9), as applicable, shall act as a notified body.
2. For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity
assessment procedure based on internal control as referred to in Annex VI, which does not provide for the
involvement of a notified body.
3. For high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I, the
provider shall follow the relevant conformity assessment procedure as required under those legal acts. The
requirements set out in Section 2 of this Chapter shall apply to those high-risk AI systems and shall be part of
that assessment. Points 4.3., 4.4., 4.5. and the fifth paragraph of point 4.6 of Annex VII shall also apply.
For the purposes of that assessment, notified bodies which have been notified under those legal acts shall be
entitled to control the conformity of the high-risk AI systems with the requirements set out in Section 2,
provided that the compliance of those notified bodies with requirements laid down in Article 31(4), (5), (10)
and (11) has been assessed in the context of the notification procedure under those legal acts.
Where a legal act listed in Section A of Annex I enables the product manufacturer to opt out from a third-party
conformity assessment, provided that that manufacturer has applied all harmonised standards covering all the
relevant requirements, that manufacturer may use that option only if it has also applied harmonised standards
or, where applicable, common specifications referred to in Article 41, covering all requirements set out in
Section 2 of this Chapter.
4. High-risk AI systems that have already been subject to a conformity assessment procedure shall undergo
a new conformity assessment procedure in the event of a substantial modification, regardless of whether the
modified system is intended to be further distributed or continues to be used by the current deployer.
For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to
the high-risk AI system and its performance that have been pre-determined by the provider at the moment of
the initial conformity assessment and are part of the information contained in the technical documentation
referred to in point 2(f) of Annex IV, shall not constitute a substantial modification.
5. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend
Annexes VI and VII by updating them in light of technical progress.
6. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend
paragraphs 1 and 2 of this Article in order to subject high-risk AI systems referred to in points 2 to 8 of
Annex III to the conformity assessment procedure referred to in Annex VII or parts thereof. The Commission
shall adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure
based on internal control referred to in Annex VI in preventing or minimising the risks to health and safety and
protection of fundamental rights posed by such systems, as well as the availability of adequate capacities and
resources among notified bodies.
Article 44
Certificates
1. Certificates issued by notified bodies in accordance with Annex VII shall be drawn up in a language which
can be easily understood by the relevant authorities in the Member State in which the notified body is
established.
2. Certificates shall be valid for the period they indicate, which shall not exceed five years for AI systems
covered by Annex I, and four years for AI systems covered by Annex III. At the request of the provider, the
validity of a certificate may be extended for further periods, each not exceeding five years for AI systems
covered by Annex I, and four years for AI systems covered by Annex III, based on a re-assessment in
accordance with the applicable conformity assessment procedures. Any supplement to a certificate shall remain
valid, provided that the certificate which it supplements is valid.
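Purely as an illustrative sketch, and not part of the Regulation, the validity caps in paragraph 2 can be expressed as a short date computation. Only the five-year/four-year split is taken from the text; the function names, dictionary keys and example dates below are invented:

```python
from datetime import date

# Article 44(2): certificate validity may not exceed 5 years for AI
# systems covered by Annex I and 4 years for those covered by Annex III.
MAX_YEARS = {"annex_i": 5, "annex_iii": 4}

def latest_expiry(issued: date, annex: str, extensions: int = 0) -> date:
    """Latest expiry date a certificate may indicate (hypothetical helper).

    Each re-assessment-based extension is capped at the same period as
    the initial validity.
    """
    years = MAX_YEARS[annex] * (1 + extensions)
    # Naive year arithmetic; a 29 February issue date would need
    # special handling in real code.
    return issued.replace(year=issued.year + years)

# Example: an Annex III certificate issued 1 March 2026 may indicate
# 1 March 2030 at the latest, or 1 March 2034 after one extension.
```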
3. Where a notified body finds that an AI system no longer meets the requirements set out in Section 2, it
shall, taking account of the principle of proportionality, suspend or withdraw the certificate issued or impose
restrictions on it, unless compliance with those requirements is ensured by appropriate corrective action taken
by the provider of the system within an appropriate deadline set by the notified body. The notified body shall
give reasons for its decision.
An appeal procedure against decisions of the notified bodies, including on conformity certificates issued, shall
be available.
Article 45
Information obligations of notified bodies
1. Notified bodies shall inform the notifying authority of the following:
(a) any Union technical documentation assessment certificates, any supplements to those certificates, and any
quality management system approvals issued in accordance with the requirements of Annex VII;
(b) any refusal, restriction, suspension or withdrawal of a Union technical documentation assessment
certificate or a quality management system approval issued in accordance with the requirements of
Annex VII;
(c) any circumstances affecting the scope of or conditions for notification;
(d) any request for information which they have received from market surveillance authorities regarding
conformity assessment activities;
(e) on request, conformity assessment activities performed within the scope of their notification and any other
activity performed, including cross-border activities and subcontracting.
2. Each notified body shall inform the other notified bodies of:
(a) quality management system approvals which it has refused, suspended or withdrawn, and, upon request, of
quality system approvals which it has issued;
(b) Union technical documentation assessment certificates or any supplements thereto which it has refused,
withdrawn, suspended or otherwise restricted, and, upon request, of the certificates and/or supplements
thereto which it has issued.
3. Each notified body shall provide the other notified bodies carrying out similar conformity assessment
activities covering the same types of AI systems with relevant information on issues relating to negative and, on
request, positive conformity assessment results.
4. Notified bodies shall safeguard the confidentiality of the information that they obtain, in accordance with
Article 78.
Article 46
Derogation from conformity assessment procedure
1. By way of derogation from Article 43 and upon a duly justified request, any market surveillance authority
may authorise the placing on the market or the putting into service of specific high-risk AI systems within the
territory of the Member State concerned, for exceptional reasons of public security or the protection of life and
health of persons, environmental protection or the protection of key industrial and infrastructural assets. That
authorisation shall be for a limited period while the necessary conformity assessment procedures are being
carried out, taking into account the exceptional reasons justifying the derogation. The completion of those
procedures shall be undertaken without undue delay.
2. In a duly justified situation of urgency for exceptional reasons of public security or in the case of specific,
substantial and imminent threat to the life or physical safety of natural persons, law-enforcement authorities or
civil protection authorities may put a specific high-risk AI system into service without the authorisation referred
to in paragraph 1, provided that such authorisation is requested during or after the use without undue delay. If
the authorisation referred to in paragraph 1 is refused, the use of the high-risk AI system shall be stopped with
immediate effect and all the results and outputs of such use shall be immediately discarded.
3. The authorisation referred to in paragraph 1 shall be issued only if the market surveillance authority
concludes that the high-risk AI system complies with the requirements of Section 2. The market surveillance
authority shall inform the Commission and the other Member States of any authorisation issued pursuant to
paragraphs 1 and 2. This obligation shall not cover sensitive operational data in relation to the activities of law-
enforcement authorities.
4. Where, within 15 calendar days of receipt of the information referred to in paragraph 3, no objection has
been raised by either a Member State or the Commission in respect of an authorisation issued by a market
surveillance authority of a Member State in accordance with paragraph 1, that authorisation shall be deemed
justified.
5. Where, within 15 calendar days of receipt of the notification referred to in paragraph 3, objections are
raised by a Member State against an authorisation issued by a market surveillance authority of another Member
State, or where the Commission considers the authorisation to be contrary to Union law, or the conclusion of
the Member States regarding the compliance of the system as referred to in paragraph 3 to be unfounded, the
Commission shall, without delay, enter into consultations with the relevant Member State. The operators
concerned shall be consulted and have the possibility to present their views. Having regard thereto, the
Commission shall decide whether the authorisation is justified. The Commission shall address its decision to
the Member State concerned and to the relevant operators.
6. Where the Commission considers the authorisation unjustified, it shall be withdrawn by the market
surveillance authority of the Member State concerned.
7. For high-risk AI systems related to products covered by Union harmonisation legislation listed in Section
A of Annex I, only the derogations from the conformity assessment established in that Union harmonisation
legislation shall apply.
Article 47
EU declaration of conformity
1. The provider shall draw up a written machine readable, physical or electronically signed EU declaration of
conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10
years after the high-risk AI system has been placed on the market or put into service. The EU declaration of
conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration
of conformity shall be submitted to the relevant national competent authorities upon request.
2. The EU declaration of conformity shall state that the high-risk AI system concerned meets the requirements
set out in Section 2. The EU declaration of conformity shall contain the information set out in Annex V, and
shall be translated into a language that can be easily understood by the national competent authorities of the
Member States in which the high-risk AI system is placed on the market or made available.
3. Where high-risk AI systems are subject to other Union harmonisation legislation which also requires an EU
declaration of conformity, a single EU declaration of conformity shall be drawn up in respect of all Union law
applicable to the high-risk AI system. The declaration shall contain all the information required to identify the
Union harmonisation legislation to which the declaration relates.
4. By drawing up the EU declaration of conformity, the provider shall assume responsibility for compliance
with the requirements set out in Section 2. The provider shall keep the EU declaration of conformity up-to-date
as appropriate.
5. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend
Annex V by updating the content of the EU declaration of conformity set out in that Annex, in order to
introduce elements that become necessary in light of technical progress.
Article 48
CE marking
1. The CE marking shall be subject to the general principles set out in Article 30 of Regulation (EC)
No 765/2008.
2. For high-risk AI systems provided digitally, a digital CE marking shall be used, only if it can easily be
accessed via the interface from which that system is accessed or via an easily accessible machine-readable code
or other electronic means.
3. The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems. Where that is not
possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the
packaging or to the accompanying documentation, as appropriate.
4. Where applicable, the CE marking shall be followed by the identification number of the notified body
responsible for the conformity assessment procedures set out in Article 43. The identification number of the
notified body shall be affixed by the body itself or, under its instructions, by the provider or by the provider’s
authorised representative. The identification number shall also be indicated in any promotional material which
mentions that the high-risk AI system fulfils the requirements for CE marking.
5. Where high-risk AI systems are subject to other Union law which also provides for the affixing of the CE
marking, the CE marking shall indicate that the high-risk AI system also fulfils the requirements of that other
law.
Article 49
Registration
1. Before placing on the market or putting into service a high-risk AI system listed in Annex III, with the
exception of high-risk AI systems referred to in point 2 of Annex III, the provider or, where applicable, the
authorised representative shall register themselves and their system in the EU database referred to in Article 71.
2. Before placing on the market or putting into service an AI system for which the provider has concluded that
it is not high-risk according to Article 6(3), that provider or, where applicable, the authorised representative
shall register themselves and that system in the EU database referred to in Article 71.
3. Before putting into service or using a high-risk AI system listed in Annex III, with the exception of high-
risk AI systems listed in point 2 of Annex III, deployers that are public authorities, Union institutions, bodies,
offices or agencies or persons acting on their behalf shall register themselves, select the system and register its
use in the EU database referred to in Article 71.
4. For high-risk AI systems referred to in points 1, 6 and 7 of Annex III, in the areas of law enforcement,
migration, asylum and border control management, the registration referred to in paragraphs 1, 2 and 3 of this
Article shall be in a secure non-public section of the EU database referred to in Article 71 and shall include
only the following information, as applicable, referred to in:
(a) Section A, points 1 to 10, of Annex VIII, with the exception of points 6, 8 and 9;
(b) Section B, points 1 to 5, and points 8 and 9 of Annex VIII;
(c) Section C, points 1 to 3, of Annex VIII;
(d) points 1, 2, 3 and 5, of Annex IX.
Only the Commission and national authorities referred to in Article 74(8) shall have access to the respective
restricted sections of the EU database listed in the first subparagraph of this paragraph.
5. High-risk AI systems referred to in point 2 of Annex III shall be registered at national level.
CHAPTER IV
TRANSPARENCY OBLIGATIONS FOR PROVIDERS AND DEPLOYERS OF CERTAIN AI
SYSTEMS
Article 50
Transparency obligations for providers and deployers of certain AI systems
1. Providers shall ensure that AI systems intended to interact directly with natural persons are designed and
developed in such a way that the natural persons concerned are informed that they are interacting with an AI
system, unless this is obvious from the point of view of a natural person who is reasonably well-informed,
observant and circumspect, taking into account the circumstances and the context of use. This obligation shall
not apply to AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences, subject
to appropriate safeguards for the rights and freedoms of third parties, unless those systems are available for the
public to report a criminal offence.
2. Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or
text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and
detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective,
interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and
limitations of various types of content, the costs of implementation and the generally acknowledged state of the
art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI
systems perform an assistive function for standard editing or do not substantially alter the input data provided
by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute
criminal offences.
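Paragraph 2 leaves the marking technique to the state of the art: it requires machine-readability and detectability but names no specific format, and recognised approaches include watermarking and provenance metadata. Purely as a hypothetical illustration, assuming a JSON sidecar format invented for this sketch (every field name below is made up):

```python
import json

def mark_ai_generated(content: str, provider: str, model: str) -> str:
    """Attach a machine-readable provenance record to generated text.

    Invented format for illustration only; an actual provider would
    follow a recognised technical standard instead.
    """
    return json.dumps({
        "provenance": {
            "ai_generated": True,  # the detectable flag
            "provider": provider,
            "model": model,
        },
        "content": content,
    })

def is_marked_ai_generated(payload: str) -> bool:
    """Detect the marker without assuming the payload is well-formed."""
    try:
        data = json.loads(payload)
    except (json.JSONDecodeError, TypeError):
        return False
    if not isinstance(data, dict):
        return False
    prov = data.get("provenance")
    return isinstance(prov, dict) and bool(prov.get("ai_generated"))
```

The detector deliberately treats anything unparseable or unmarked as "not marked", mirroring the provision's asymmetry: only artificially generated output carries the flag.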
3. Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural
persons exposed thereto of the operation of the system, and shall process the personal data in accordance with
Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation
shall not apply to AI systems used for biometric categorisation and emotion recognition, which are permitted by
law to detect, prevent or investigate criminal offences, subject to appropriate safeguards for the rights and
freedoms of third parties, and in accordance with Union law.
4. Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep
fake, shall disclose that the content has been artificially generated or manipulated. This obligation shall not
apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences. Where
the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the
transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or
manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.
Deployers of an AI system that generates or manipulates text which is published with the purpose of informing
the public on matters of public interest shall disclose that the text has been artificially generated or
manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or
prosecute criminal offences or where the AI-generated content has undergone a process of human review or
editorial control and where a natural or legal person holds editorial responsibility for the publication of the
content.
5. The information referred to in paragraphs 1 to 4 shall be provided to the natural persons concerned in
a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The information
shall conform to the applicable accessibility requirements.
6. Paragraphs 1 to 4 shall not affect the requirements and obligations set out in Chapter III, and shall be
without prejudice to other transparency obligations laid down in Union or national law for deployers of AI
systems.
7. The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level to facilitate
the effective implementation of the obligations regarding the detection and labelling of artificially generated or
manipulated content. The Commission may adopt implementing acts to approve those codes of practice in
accordance with the procedure laid down in Article 56(6). If it deems the code is not adequate, the Commission
may adopt an implementing act specifying common rules for the implementation of those obligations in
accordance with the examination procedure laid down in Article 98(2).
CHAPTER V
GENERAL-PURPOSE AI MODELS
SECTION 1
Classification rules
Article 51
Classification of general-purpose AI models as general-purpose AI models with systemic risk
1. A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets
any of the following conditions:
(a) it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies,
including indicators and benchmarks;
(b) based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it
has capabilities or an impact equivalent to those set out in point (a) having regard to the criteria set out in
Annex XIII.
2. A general-purpose AI model shall be presumed to have high impact capabilities pursuant to paragraph 1,
point (a), when the cumulative amount of computation used for its training measured in floating point
operations is greater than 10²⁵.
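For illustration only (this estimate is not part of the Regulation), the cumulative training compute referred to in paragraph 2 can be sketched with the common heuristic of roughly 6 floating point operations per model parameter per training token; the model sizes and token counts below are hypothetical:

```python
# Illustrative sketch only -- not part of the Regulation. The 6*N*D
# approximation (compute ~ 6 x parameters x training tokens) is a common
# heuristic; the parameter and token figures below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # Article 51(2): 10^25 floating point operations


def training_flop_estimate(parameters: float, training_tokens: float) -> float:
    """Rough cumulative training compute: ~6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens


def presumed_high_impact(flop: float) -> bool:
    """Presumption of high impact capabilities under Article 51(2)."""
    return flop > SYSTEMIC_RISK_THRESHOLD_FLOP


# Hypothetical model: 70e9 parameters trained on 15e12 tokens (~6.3e24 FLOPs).
print(presumed_high_impact(training_flop_estimate(70e9, 15e12)))   # False

# Hypothetical larger run: 1e12 parameters on 20e12 tokens (~1.2e26 FLOPs).
print(presumed_high_impact(training_flop_estimate(1e12, 20e12)))   # True
```

Note that the presumption is rebuttable under Article 52(2), and the threshold itself may be amended by delegated act under paragraph 3.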
3. The Commission shall adopt delegated acts in accordance with Article 97 to amend the thresholds listed in
paragraphs 1 and 2 of this Article, as well as to supplement benchmarks and indicators in light of evolving
technological developments, such as algorithmic improvements or increased hardware efficiency, when
necessary, for these thresholds to reflect the state of the art.
Article 52
Procedure
1. Where a general-purpose AI model meets the condition referred to in Article 51(1), point (a), the relevant
provider shall notify the Commission without delay and in any event within two weeks after that requirement is
met or it becomes known that it will be met. That notification shall include the information necessary to
demonstrate that the relevant requirement has been met. If the Commission becomes aware of a general-
purpose AI model presenting systemic risks of which it has not been notified, it may decide to designate it as
a model with systemic risk.
2. The provider of a general-purpose AI model that meets the condition referred to in Article 51(1), point (a),
may present, with its notification, sufficiently substantiated arguments to demonstrate that, exceptionally,
although it meets that requirement, the general-purpose AI model does not present, due to its specific
characteristics, systemic risks and therefore should not be classified as a general-purpose AI model with
systemic risk.
3. Where the Commission concludes that the arguments submitted pursuant to paragraph 2 are not sufficiently
substantiated and the relevant provider was not able to demonstrate that the general-purpose AI model does not
present, due to its specific characteristics, systemic risks, it shall reject those arguments, and the general-
purpose AI model shall be considered to be a general-purpose AI model with systemic risk.
4. The Commission may designate a general-purpose AI model as presenting systemic risks, ex officio or
following a qualified alert from the scientific panel pursuant to Article 90(1), point (a), on the basis of criteria
set out in Annex XIII.
The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend
Annex XIII by specifying and updating the criteria set out in that Annex.
5. Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model
with systemic risk pursuant to paragraph 4, the Commission shall take the request into account and may decide
to reassess whether the general-purpose AI model can still be considered to present systemic risks on the basis
of the criteria set out in Annex XIII. Such a request shall contain objective, detailed and new reasons that have
arisen since the designation decision. Providers may request reassessment at the earliest six months after the
designation decision. Where the Commission, following its reassessment, decides to maintain the designation
as a general-purpose AI model with systemic risk, providers may request reassessment at the earliest six months
after that decision.
6. The Commission shall ensure that a list of general-purpose AI models with systemic risk is published and
shall keep that list up to date, without prejudice to the need to observe and protect intellectual property rights
and confidential business information or trade secrets in accordance with Union and national law.
SECTION 2
Obligations for providers of general-purpose AI models
Article 53
Obligations for providers of general-purpose AI models
1. Providers of general-purpose AI models shall:
(a)draw up and keep up-to-date the technical documentation of the model, including its training and testing
process and the results of its evaluation, which shall contain, at a minimum, the information set out in
Annex XI for the purpose of providing it, upon request, to the AI Office and the national competent
authorities;
(b)draw up, keep up-to-date and make available information and documentation to providers of AI systems
who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need
to observe and protect intellectual property rights and confidential business information or trade secrets in
accordance with Union and national law, the information and documentation shall:
(i)enable providers of AI systems to have a good understanding of the capabilities and limitations of the
general-purpose AI model and to comply with their obligations pursuant to this Regulation; and
(ii)contain, at a minimum, the elements set out in Annex XII;
(c)put in place a policy to comply with Union law on copyright and related rights, and in particular to identify
and comply with, including through state-of-the-art technologies, a reservation of rights expressed pursuant
to Article 4(3) of Directive (EU) 2019/790;
(d)draw up and make publicly available a sufficiently detailed summary about the content used for training of
the general-purpose AI model, according to a template provided by the AI Office.
2. The obligations set out in paragraph 1, points (a) and (b), shall not apply to providers of AI models that are
released under a free and open-source licence that allows for the access, usage, modification, and distribution of
the model, and whose parameters, including the weights, the information on the model architecture, and the
information on model usage, are made publicly available. This exception shall not apply to general-purpose AI
models with systemic risks.
3. Providers of general-purpose AI models shall cooperate as necessary with the Commission and the national
competent authorities in the exercise of their competences and powers pursuant to this Regulation.
4. Providers of general-purpose AI models may rely on codes of practice within the meaning of Article 56 to
demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard
is published. Compliance with European harmonised standards grants providers the presumption of conformity
to the extent that those standards cover those obligations. Providers of general-purpose AI models who do not
adhere to an approved code of practice or do not comply with a European harmonised standard shall
demonstrate alternative adequate means of compliance for assessment by the Commission.
5. For the purpose of facilitating compliance with Annex XI, in particular points 2(d) and (e) thereof, the
Commission is empowered to adopt delegated acts in accordance with Article 97 to detail measurement and
calculation methodologies with a view to allowing for comparable and verifiable documentation.
6. The Commission is empowered to adopt delegated acts in accordance with Article 97(2) to amend
Annexes XI and XII in light of evolving technological developments.
7. Any information or documentation obtained pursuant to this Article, including trade secrets, shall be treated
in accordance with the confidentiality obligations set out in Article 78.
Article 54
Authorised representatives of providers of general-purpose AI models
1. Prior to placing a general-purpose AI model on the Union market, providers established in third countries
shall, by written mandate, appoint an authorised representative which is established in the Union.
2. The provider shall enable its authorised representative to perform the tasks specified in the mandate
received from the provider.
3. The authorised representative shall perform the tasks specified in the mandate received from the provider. It
shall provide a copy of the mandate to the AI Office upon request, in one of the official languages of the
institutions of the Union. For the purposes of this Regulation, the mandate shall empower the authorised
representative to carry out the following tasks:
(a)verify that the technical documentation specified in Annex XI has been drawn up and all obligations
referred to in Article 53 and, where applicable, Article 55 have been fulfilled by the provider;
(b)keep a copy of the technical documentation specified in Annex XI at the disposal of the AI Office and
national competent authorities, for a period of 10 years after the general-purpose AI model has been placed
on the market, and the contact details of the provider that appointed the authorised representative;
(c)provide the AI Office, upon a reasoned request, with all the information and documentation, including that
referred to in point (b), necessary to demonstrate compliance with the obligations in this Chapter;
(d)cooperate with the AI Office and competent authorities, upon a reasoned request, in any action they take in
relation to the general-purpose AI model, including when the model is integrated into AI systems placed on
the market or put into service in the Union.
4. The mandate shall empower the authorised representative to be addressed, in addition to or instead of the
provider, by the AI Office or the competent authorities, on all issues related to ensuring compliance with this
Regulation.
5. The authorised representative shall terminate the mandate if it considers or has reason to consider the
provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall also
immediately inform the AI Office about the termination of the mandate and the reasons therefor.
6. The obligation set out in this Article shall not apply to providers of general-purpose AI models that are
released under a free and open-source licence that allows for the access, usage, modification, and distribution of
the model, and whose parameters, including the weights, the information on the model architecture, and the
information on model usage, are made publicly available, unless the general-purpose AI models present
systemic risks.
SECTION 3
Obligations of providers of general-purpose AI models with systemic risk
Article 55
Obligations of providers of general-purpose AI models with systemic risk
1. In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with
systemic risk shall:
(a)perform model evaluation in accordance with standardised protocols and tools reflecting the state of the
art, including conducting and documenting adversarial testing of the model with a view to identifying and
mitigating systemic risks;
(b)assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the
development, the placing on the market, or the use of general-purpose AI models with systemic risk;
(c)keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national
competent authorities, relevant information about serious incidents and possible corrective measures to
address them;
(d)ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk
and the physical infrastructure of the model.
2. Providers of general-purpose AI models with systemic risk may rely on codes of practice within the
meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article,
until a harmonised standard is published. Compliance with European harmonised standards grants providers the
presumption of conformity to the extent that those standards cover those obligations. Providers of general-
purpose AI models with systemic risks who do not adhere to an approved code of practice or do not comply
with a European harmonised standard shall demonstrate alternative adequate means of compliance for
assessment by the Commission.
3. Any information or documentation obtained pursuant to this Article, including trade secrets, shall be treated
in accordance with the confidentiality obligations set out in Article 78.
SECTION 4
Codes of practice
Article 56
Codes of practice
1. The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level in order to
contribute to the proper application of this Regulation, taking into account international approaches.
2. The AI Office and the Board shall aim to ensure that the codes of practice cover at least the obligations
provided for in Articles 53 and 55, including the following issues:
(a)the means to ensure that the information referred to in Article 53(1), points (a) and (b), is kept up to date in
light of market and technological developments;
(b)the adequate level of detail for the summary about the content used for training;
(c)the identification of the type and nature of the systemic risks at Union level, including their sources, where
appropriate;
(d)the measures, procedures and modalities for the assessment and management of the systemic risks at Union
level, including the documentation thereof, which shall be proportionate to the risks, take into
consideration their severity and probability and take into account the specific challenges of tackling those
risks in light of the possible ways in which such risks may emerge and materialise along the AI value
chain.
3. The AI Office may invite all providers of general-purpose AI models, as well as relevant national competent
authorities, to participate in the drawing-up of codes of practice. Civil society organisations, industry, academia
and other relevant stakeholders, such as downstream providers and independent experts, may support the
process.
4. The AI Office and the Board shall aim to ensure that the codes of practice clearly set out their specific
objectives and contain commitments or measures, including key performance indicators as appropriate, to
ensure the achievement of those objectives, and that they take due account of the needs and interests of all
interested parties, including affected persons, at Union level.
5. The AI Office shall aim to ensure that participants to the codes of practice report regularly to the AI Office
on the implementation of the commitments and the measures taken and their outcomes, including as measured
against the key performance indicators as appropriate. Key performance indicators and reporting commitments
shall reflect differences in size and capacity between various participants.
6. The AI Office and the Board shall regularly monitor and evaluate the achievement of the objectives of the
codes of practice by the participants and their contribution to the proper application of this Regulation. The AI
Office and the Board shall assess whether the codes of practice cover the obligations provided for in Articles 53
and 55, and shall regularly monitor and evaluate the achievement of their objectives. They shall publish their
assessment of the adequacy of the codes of practice.
The Commission may, by way of an implementing act, approve a code of practice and give it a general validity
within the Union. That implementing act shall be adopted in accordance with the examination procedure
referred to in Article 98(2).
7. The AI Office may invite all providers of general-purpose AI models to adhere to the codes of practice. For
providers of general-purpose AI models not presenting systemic risks this adherence may be limited to the
obligations provided for in Article 53, unless they declare explicitly their interest to join the full code.
8. The AI Office shall, as appropriate, also encourage and facilitate the review and adaptation of the codes of
practice, in particular in light of emerging standards. The AI Office shall assist in the assessment of available
standards.
9. Codes of practice shall be ready at the latest by 2 May 2025. The AI Office shall take the necessary steps,
including inviting providers pursuant to paragraph 7.
If, by 2 August 2025, a code of practice cannot be finalised, or if the AI Office deems it is not adequate
following its assessment under paragraph 6 of this Article, the Commission may provide, by means of
implementing acts, common rules for the implementation of the obligations provided for in Articles 53 and 55,
including the issues set out in paragraph 2 of this Article. Those implementing acts shall be adopted in
accordance with the examination procedure referred to in Article 98(2).
CHAPTER VI
MEASURES IN SUPPORT OF INNOVATION
Article 57
AI regulatory sandboxes
1. Member States shall ensure that their competent authorities establish at least one AI regulatory sandbox at
national level, which shall be operational by 2 August 2026. That sandbox may also be established jointly with
the competent authorities of other Member States. The Commission may provide technical support, advice and
tools for the establishment and operation of AI regulatory sandboxes.
The obligation under the first subparagraph may also be fulfilled by participating in an existing sandbox
in so far as that participation provides an equivalent level of national coverage for the participating Member
States.
2. Additional AI regulatory sandboxes at regional or local level, or established jointly with the competent
authorities of other Member States may also be established.
3. The European Data Protection Supervisor may also establish an AI regulatory sandbox for Union
institutions, bodies, offices and agencies, and may exercise the roles and the tasks of national competent
authorities in accordance with this Chapter.
4. Member States shall ensure that the competent authorities referred to in paragraphs 1 and 2 allocate
sufficient resources to comply with this Article effectively and in a timely manner. Where appropriate, national
competent authorities shall cooperate with other relevant authorities, and may allow for the involvement of
other actors within the AI ecosystem. This Article shall not affect other regulatory sandboxes established under
Union or national law. Member States shall ensure an appropriate level of cooperation between the authorities
supervising those other sandboxes and the national competent authorities.
5. AI regulatory sandboxes established under paragraph 1 shall provide for a controlled environment that
fosters innovation and facilitates the development, training, testing and validation of innovative AI systems for
a limited time before their being placed on the market or put into service pursuant to a specific sandbox plan
agreed between the providers or prospective providers and the competent authority. Such sandboxes may
include testing in real world conditions supervised therein.
6. Competent authorities shall provide, as appropriate, guidance, supervision and support within the AI
regulatory sandbox with a view to identifying risks, in particular to fundamental rights, health and safety,
testing, mitigation measures, and their effectiveness in relation to the obligations and requirements of this
Regulation and, where relevant, other Union and national law supervised within the sandbox.
7. Competent authorities shall provide providers and prospective providers participating in the AI regulatory
sandbox with guidance on regulatory expectations and how to fulfil the requirements and obligations set out in
this Regulation.
Upon request of the provider or prospective provider of the AI system, the competent authority shall provide
a written proof of the activities successfully carried out in the sandbox. The competent authority shall also
provide an exit report detailing the activities carried out in the sandbox and the related results and learning
outcomes. Providers may use such documentation to demonstrate their compliance with this Regulation
through the conformity assessment process or relevant market surveillance activities. In this regard, the exit
reports and the written proof provided by the national competent authority shall be taken positively into
account by market surveillance authorities and notified bodies, with a view to accelerating conformity
assessment procedures to a reasonable extent.
8. Subject to the confidentiality provisions in Article 78, and with the agreement of the provider or prospective
provider, the Commission and the Board shall be authorised to access the exit reports and shall take them into
account, as appropriate, when exercising their tasks under this Regulation. If both the provider or prospective
provider and the national competent authority explicitly agree, the exit report may be made publicly available
through the single information platform referred to in this Article.
9. The establishment of AI regulatory sandboxes shall aim to contribute to the following objectives:
(a)improving legal certainty to achieve regulatory compliance with this Regulation or, where relevant, other
applicable Union and national law;
(b)supporting the sharing of best practices through cooperation with the authorities involved in the AI
regulatory sandbox;
(c)fostering innovation and competitiveness and facilitating the development of an AI ecosystem;
(d)contributing to evidence-based regulatory learning;
(e)facilitating and accelerating access to the Union market for AI systems, in particular when provided by
SMEs, including start-ups.
10. National competent authorities shall ensure that, to the extent the innovative AI systems involve the
processing of personal data or otherwise fall under the supervisory remit of other national authorities or
competent authorities providing or supporting access to data, the national data protection authorities and those
other national or competent authorities are associated with the operation of the AI regulatory sandbox and
involved in the supervision of those aspects to the extent of their respective tasks and powers.
11. The AI regulatory sandboxes shall not affect the supervisory or corrective powers of the competent
authorities supervising the sandboxes, including at regional or local level. Any significant risks to health and
safety and fundamental rights identified during the development and testing of such AI systems shall result in
an adequate mitigation. National competent authorities shall have the power to temporarily or permanently
suspend the testing process, or the participation in the sandbox if no effective mitigation is possible, and shall
inform the AI Office of such decision. National competent authorities shall exercise their supervisory powers
within the limits of the relevant law, using their discretionary powers when implementing legal provisions in
respect of a specific AI regulatory sandbox project, with the objective of supporting innovation in AI in the
Union.
12. Providers and prospective providers participating in the AI regulatory sandbox shall remain liable under
applicable Union and national liability law for any damage inflicted on third parties as a result of the
experimentation taking place in the sandbox. However, provided that the prospective providers observe the
specific plan and the terms and conditions for their participation and follow in good faith the guidance given by
the national competent authority, no administrative fines shall be imposed by the authorities for infringements
of this Regulation. Where other competent authorities responsible for other Union and national law were
actively involved in the supervision of the AI system in the sandbox and provided guidance for compliance, no
administrative fines shall be imposed regarding that law.
13. The AI regulatory sandboxes shall be designed and implemented in such a way that, where relevant, they
facilitate cross-border cooperation between national competent authorities.
14. National competent authorities shall coordinate their activities and cooperate within the framework of the
Board.
15. National competent authorities shall inform the AI Office and the Board of the establishment of a sandbox,
and may ask them for support and guidance. The AI Office shall make publicly available a list of planned and
existing sandboxes and keep it up to date in order to encourage more interaction in the AI regulatory sandboxes
and cross-border cooperation.
16. National competent authorities shall submit annual reports to the AI Office and to the Board, from one
year after the establishment of the AI regulatory sandbox and every year thereafter until its termination, and
a final report. Those reports shall provide information on the progress and results of the implementation of
those sandboxes, including best practices, incidents, lessons learnt and recommendations on their setup and,
where relevant, on the application and possible revision of this Regulation, including its delegated and
implementing acts, and on the application of other Union law supervised by the competent authorities within
the sandbox. The national competent authorities shall make those annual reports or abstracts thereof available
to the public, online. The Commission shall, where appropriate, take the annual reports into account when
exercising its tasks under this Regulation.
17. The Commission shall develop a single and dedicated interface containing all relevant information related
to AI regulatory sandboxes to allow stakeholders to interact with AI regulatory sandboxes and to raise enquiries
with competent authorities, and to seek non-binding guidance on the conformity of innovative products,
services, business models embedding AI technologies, in accordance with Article 62(1), point (c). The
Commission shall proactively coordinate with national competent authorities, where relevant.
Article 58
Detailed arrangements for, and functioning of, AI regulatory sandboxes
1. In order to avoid fragmentation across the Union, the Commission shall adopt implementing acts specifying
the detailed arrangements for the establishment, development, implementation, operation and supervision of the
AI regulatory sandboxes. The implementing acts shall include common principles on the following issues:
(a)eligibility and selection criteria for participation in the AI regulatory sandbox;
(b)procedures for the application, participation, monitoring, exiting from and termination of the AI regulatory
sandbox, including the sandbox plan and the exit report;
(c)the terms and conditions applicable to the participants.
Those implementing acts shall be adopted in accordance with the examination procedure referred to in
Article 98(2).
2. The implementing acts referred to in paragraph 1 shall ensure:
(a)that AI regulatory sandboxes are open to any applying provider or prospective provider of an AI system
who fulfils eligibility and selection criteria, which shall be transparent and fair, and that national competent
authorities inform applicants of their decision within three months of the application;
(b)that AI regulatory sandboxes allow broad and equal access and keep up with demand for participation;
providers and prospective providers may also submit applications in partnerships with deployers and other
relevant third parties;
(c)that the detailed arrangements for, and conditions concerning AI regulatory sandboxes support, to the best
extent possible, flexibility for national competent authorities to establish and operate their AI regulatory
sandboxes;
(d)that access to the AI regulatory sandboxes is free of charge for SMEs, including start-ups, without
prejudice to exceptional costs that national competent authorities may recover in a fair and proportionate
manner;
(e)that they facilitate providers and prospective providers, by means of the learning outcomes of the AI
regulatory sandboxes, in complying with conformity assessment obligations under this Regulation and the
voluntary application of the codes of conduct referred to in Article 95;
(f)that AI regulatory sandboxes facilitate the involvement of other relevant actors within the AI ecosystem,
such as notified bodies and standardisation organisations, SMEs, including start-ups, enterprises,
innovators, testing and experimentation facilities, research and experimentation labs and European Digital
Innovation Hubs, centres of excellence, individual researchers, in order to allow and facilitate cooperation
with the public and private sectors;
(g)that procedures, processes and administrative requirements for application, selection, participation and
exiting the AI regulatory sandbox are simple, easily intelligible, and clearly communicated in order to
facilitate the participation of SMEs, including start-ups, with limited legal and administrative capacities
and are streamlined across the Union, in order to avoid fragmentation and that participation in an AI
regulatory sandbox established by a Member State, or by the European Data Protection Supervisor is
mutually and uniformly recognised and carries the same legal effects across the Union;
(h)that participation in the AI regulatory sandbox is limited to a period that is appropriate to the complexity
and scale of the project and that may be extended by the national competent authority;
(i)that AI regulatory sandboxes facilitate the development of tools and infrastructure for testing,
benchmarking, assessing and explaining dimensions of AI systems relevant for regulatory learning, such as
accuracy, robustness and cybersecurity, as well as measures to mitigate risks to fundamental rights and
society at large.
3. Prospective providers in the AI regulatory sandboxes, in particular SMEs and start-ups, shall be directed,
where relevant, to pre-deployment services such as guidance on the implementation of this Regulation, to other
value-adding services such as help with standardisation documents and certification, testing and
experimentation facilities, European Digital Innovation Hubs and centres of excellence.
4. Where national competent authorities consider authorising testing in real world conditions supervised
within the framework of an AI regulatory sandbox to be established under this Article, they shall specifically
agree the terms and conditions of such testing and, in particular, the appropriate safeguards with the
participants, with a view to protecting fundamental rights, health and safety. Where appropriate, they shall
cooperate with other national competent authorities with a view to ensuring consistent practices across
the Union.
Article 59
Further processing of personal data for developing certain AI systems in the public interest in the AI
regulatory sandbox
1. In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely
for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following
conditions are met:
(a)AI systems shall be developed for safeguarding substantial public interest by a public authority or another
natural or legal person and in one or more of the following areas:
(i)public safety and public health, including disease detection, diagnosis, prevention, control and
treatment and improvement of health care systems;
(ii)a high level of protection and improvement of the quality of the environment, protection of
biodiversity, protection against pollution, green transition measures, climate change mitigation and
adaptation measures;
(iii)energy sustainability;
(iv)safety and resilience of transport systems and mobility, critical infrastructure and networks;
(v)efficiency and quality of public administration and public services;
(b)the data processed are necessary for complying with one or more of the requirements referred to in
Chapter III, Section 2 where those requirements cannot effectively be fulfilled by processing anonymised,
synthetic or other non-personal data;
(c)there are effective monitoring mechanisms to identify if any high risks to the rights and freedoms of the
data subjects, as referred to in Article 35 of Regulation (EU) 2016/679 and in Article 39 of Regulation
(EU) 2018/1725, may arise during the sandbox experimentation, as well as response mechanisms to
promptly mitigate those risks and, where necessary, stop the processing;
(d)any personal data to be processed in the context of the sandbox are in a functionally separate, isolated and
protected data processing environment under the control of the prospective provider and only authorised
persons have access to those data;
(e)providers can further share the originally collected data only in accordance with Union data protection law;
any personal data created in the sandbox cannot be shared outside the sandbox;
(f)any processing of personal data in the context of the sandbox neither leads to measures or decisions
affecting the data subjects nor does it affect the application of their rights laid down in Union law on the
protection of personal data;
(g)any personal data processed in the context of the sandbox are protected by means of appropriate technical
and organisational measures and deleted once the participation in the sandbox has terminated or the
personal data has reached the end of its retention period;
(h)the logs of the processing of personal data in the context of the sandbox are kept for the duration of the
participation in the sandbox, unless provided otherwise by Union or national law;
(i)a complete and detailed description of the process and rationale behind the training, testing and validation
of the AI system is kept together with the testing results as part of the technical documentation referred to
in Annex IV;
(j)a short summary of the AI project developed in the sandbox, its objectives and expected results is
published on the website of the competent authorities; this obligation shall not cover sensitive operational
data in relation to the activities of law enforcement, border control, immigration or asylum authorities.
2. For the purposes of the prevention, investigation, detection or prosecution of criminal offences or the
execution of criminal penalties, including safeguarding against and preventing threats to public security, under
the control and responsibility of law enforcement authorities, the processing of personal data in AI regulatory
sandboxes shall be based on a specific Union or national law and subject to the same cumulative conditions as
referred to in paragraph 1.
3. Paragraph 1 is without prejudice to Union or national law which excludes processing of personal data for
other purposes than those explicitly mentioned in that law, as well as to Union or national law laying down the
basis for the processing of personal data which is necessary for the purpose of developing, testing or training of
innovative AI systems or any other legal basis, in compliance with Union law on the protection of personal
data.
Article 60
Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes
1. Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes may be conducted
by providers or prospective providers of high-risk AI systems listed in Annex III, in accordance with this
Article and the real-world testing plan referred to in this Article, without prejudice to the prohibitions under
Article 5.
The Commission shall, by means of implementing acts, specify the detailed elements of the real-world testing
plan. Those implementing acts shall be adopted in accordance with the examination procedure referred to in
Article 98(2).
This paragraph shall be without prejudice to Union or national law on the testing in real world conditions of
high-risk AI systems related to products covered by Union harmonisation legislation listed in Annex I.
2. Providers or prospective providers may conduct testing of high-risk AI systems referred to in Annex III in
real world conditions at any time before the placing on the market or the putting into service of the AI system
on their own or in partnership with one or more deployers or prospective deployers.
3. The testing of high-risk AI systems in real world conditions under this Article shall be without prejudice to
any ethical review that is required by Union or national law.
4. Providers or prospective providers may conduct the testing in real world conditions only where all of the
following conditions are met:
(a)the provider or prospective provider has drawn up a real-world testing plan and submitted it to the market
surveillance authority in the Member State where the testing in real world conditions is to be conducted;
(b)the market surveillance authority in the Member State where the testing in real world conditions is to be
conducted has approved the testing in real world conditions and the real-world testing plan; where the
market surveillance authority has not provided an answer within 30 days, the testing in real world
conditions and the real-world testing plan shall be understood to have been approved; where national law
does not provide for a tacit approval, the testing in real world conditions shall remain subject to an
authorisation;
(c)the provider or prospective provider, with the exception of providers or prospective providers of high-risk
AI systems referred to in points 1, 6 and 7 of Annex III in the areas of law enforcement, migration, asylum
and border control management, and high-risk AI systems referred to in point 2 of Annex III has registered
the testing in real world conditions in accordance with Article 71(4) with a Union-wide unique single
identification number and with the information specified in Annex IX; the provider or prospective provider
of high-risk AI systems referred to in points 1, 6 and 7 of Annex III in the areas of law enforcement,
migration, asylum and border control management, has registered the testing in real-world conditions in
the secure non-public section of the EU database according to Article 49(4), point (d), with a Union-wide
unique single identification number and with the information specified therein; the provider or prospective
provider of high-risk AI systems referred to in point 2 of Annex III has registered the testing in real-world
conditions in accordance with Article 49(5);
(d)the provider or prospective provider conducting the testing in real world conditions is established in the
Union or has appointed a legal representative who is established in the Union;
(e)data collected and processed for the purpose of the testing in real world conditions shall be transferred to
third countries only provided that appropriate and applicable safeguards under Union law are implemented;
(f)the testing in real world conditions does not last longer than necessary to achieve its objectives and in any
case not longer than six months, which may be extended for an additional period of six months, subject to
prior notification by the provider or prospective provider to the market surveillance authority, accompanied
by an explanation of the need for such an extension;
(g)the subjects of the testing in real world conditions who are persons belonging to vulnerable groups due to
their age or disability, are appropriately protected;
(h)where a provider or prospective provider organises the testing in real world conditions in cooperation with
one or more deployers or prospective deployers, the latter have been informed of all aspects of the testing
that are relevant to their decision to participate, and given the relevant instructions for use of the AI system
referred to in Article 13; the provider or prospective provider and the deployer or prospective deployer
shall conclude an agreement specifying their roles and responsibilities with a view to ensuring compliance
with the provisions for testing in real world conditions under this Regulation and under other applicable
Union and national law;
(i)the subjects of the testing in real world conditions have given informed consent in accordance with
Article 61, or in the case of law enforcement, where the seeking of informed consent would prevent the AI
system from being tested, the testing itself and the outcome of the testing in the real world conditions shall
not have any negative effect on the subjects, and their personal data shall be deleted after the test is
performed;
(j)the testing in real world conditions is effectively overseen by the provider or prospective provider, as well
as by deployers or prospective deployers through persons who are suitably qualified in the relevant field
and have the necessary capacity, training and authority to perform their tasks;
(k)the predictions, recommendations or decisions of the AI system can be effectively reversed and
disregarded.
5. Any subjects of the testing in real world conditions, or their legally designated representative, as
appropriate, may, without any resulting detriment and without having to provide any justification, withdraw
from the testing at any time by revoking their informed consent and may request the immediate and permanent
deletion of their personal data. The withdrawal of the informed consent shall not affect the activities already
carried out.
6. In accordance with Article 75, Member States shall confer on their market surveillance authorities the
powers of requiring providers and prospective providers to provide information, of carrying out unannounced
remote or on-site inspections, and of performing checks on the conduct of the testing in real world conditions
and the related high-risk AI systems. Market surveillance authorities shall use those powers to ensure the safe
development of testing in real world conditions.
7. Any serious incident identified in the course of the testing in real world conditions shall be reported to the
national market surveillance authority in accordance with Article 73. The provider or prospective provider shall
adopt immediate mitigation measures or, failing that, shall suspend the testing in real world conditions until
such mitigation takes place, or otherwise terminate it. The provider or prospective provider shall establish
a procedure for the prompt recall of the AI system upon such termination of the testing in real world conditions.
8. Providers or prospective providers shall notify the national market surveillance authority in the Member
State where the testing in real world conditions is to be conducted of the suspension or termination of the
testing in real world conditions and of the final outcomes.
9. The provider or prospective provider shall be liable under applicable Union and national liability law for
any damage caused in the course of their testing in real world conditions.
Article 61
Informed consent to participate in testing in real world conditions outside AI regulatory sandboxes
1. For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be
obtained from the subjects of testing prior to their participation in such testing and after their having been duly
informed with concise, clear, relevant, and understandable information regarding:
(a)the nature and objectives of the testing in real world conditions and the possible inconvenience that may be
linked to their participation;
(b)the conditions under which the testing in real world conditions is to be conducted, including the expected
duration of the subject or subjects’ participation;
(c)their rights, and the guarantees regarding their participation, in particular their right to refuse to participate
in, and the right to withdraw from, testing in real world conditions at any time without any resulting
detriment and without having to provide any justification;
(d)the arrangements for requesting the reversal or the disregarding of the predictions, recommendations or
decisions of the AI system;
(e)the Union-wide unique single identification number of the testing in real world conditions in accordance
with Article 60(4), point (c), and the contact details of the provider or its legal representative from whom
further information can be obtained.
2. The informed consent shall be dated and documented and a copy shall be given to the subjects of testing or
their legal representative.
Article 62
Measures for providers and deployers, in particular SMEs, including start-ups
1. Member States shall undertake the following actions:
(a)provide SMEs, including start-ups, having a registered office or a branch in the Union, with priority access
to the AI regulatory sandboxes, to the extent that they fulfil the eligibility conditions and selection criteria;
the priority access shall not preclude other SMEs, including start-ups, other than those referred to in this
paragraph from access to the AI regulatory sandbox, provided that they also fulfil the eligibility conditions
and selection criteria;
(b)organise specific awareness raising and training activities on the application of this Regulation tailored to
the needs of SMEs including start-ups, deployers and, as appropriate, local public authorities;
(c)utilise existing dedicated channels and where appropriate, establish new ones for communication with
SMEs including start-ups, deployers, other innovators and, as appropriate, local public authorities to
provide advice and respond to queries about the implementation of this Regulation, including as regards
participation in AI regulatory sandboxes;
(d)facilitate the participation of SMEs and other relevant stakeholders in the standardisation development
process.
2. The specific interests and needs of the SME providers, including start-ups, shall be taken into account when
setting the fees for conformity assessment under Article 43, reducing those fees proportionately to their size,
market size and other relevant indicators.
3. The AI Office shall undertake the following actions:
(a)provide standardised templates for areas covered by this Regulation, as specified by the Board in its
request;
(b)develop and maintain a single information platform providing easy to use information in relation to this
Regulation for all operators across the Union;
(c)organise appropriate communication campaigns to raise awareness about the obligations arising from this
Regulation;
(d)evaluate and promote the convergence of best practices in public procurement procedures in relation to AI
systems.
Article 63
Derogations for specific operators
1. Microenterprises within the meaning of Recommendation 2003/361/EC may comply with certain elements
of the quality management system required by Article 17 of this Regulation in a simplified manner, provided
that they do not have partner enterprises or linked enterprises within the meaning of that Recommendation. For
that purpose, the Commission shall develop guidelines on the elements of the quality management system
which may be complied with in a simplified manner considering the needs of microenterprises, without
affecting the level of protection or the need for compliance with the requirements in respect of high-risk AI
systems.
2. Paragraph 1 of this Article shall not be interpreted as exempting those operators from fulfilling any other
requirements or obligations laid down in this Regulation, including those established in Articles 9, 10, 11, 12,
13, 14, 15, 72 and 73.
CHAPTER VII
GOVERNANCE
SECTION 1
Governance at Union level
Article 64
AI Office
1. The Commission shall develop Union expertise and capabilities in the field of AI through the AI Office.
2. Member States shall facilitate the tasks entrusted to the AI Office, as reflected in this Regulation.
Article 65
Establishment and structure of the European Artificial Intelligence Board
1. A European Artificial Intelligence Board (the ‘Board’) is hereby established.
2. The Board shall be composed of one representative per Member State. The European Data Protection
Supervisor shall participate as observer. The AI Office shall also attend the Board's meetings, without taking
part in the votes. Other national and Union authorities, bodies or experts may be invited to the meetings by the
Board on a case by case basis, where the issues discussed are of relevance for them.
3. Each representative shall be designated by their Member State for a period of three years, renewable once.
4. Member States shall ensure that their representatives on the Board:
(a)have the relevant competences and powers in their Member State so as to contribute actively to the
achievement of the Board's tasks referred to in Article 66;
(b)are designated as a single contact point vis-à-vis the Board and, where appropriate, taking into account
Member States’ needs, as a single contact point for stakeholders;
(c)are empowered to facilitate consistency and coordination between national competent authorities in their
Member State as regards the implementation of this Regulation, including through the collection of
relevant data and information for the purpose of fulfilling their tasks on the Board.
5. The designated representatives of the Member States shall adopt the Board's rules of procedure by a two-
thirds majority. The rules of procedure shall, in particular, lay down procedures for the selection process, the
duration of the mandate of, and specifications of the tasks of, the Chair, detailed arrangements for voting, and
the organisation of the Board's activities and those of its sub-groups.
6. The Board shall establish two standing sub-groups to provide a platform for cooperation and exchange
among market surveillance authorities and notifying authorities about issues related to market surveillance and
notified bodies respectively.
The standing sub-group for market surveillance should act as the administrative cooperation group (ADCO) for
this Regulation within the meaning of Article 30 of Regulation (EU) 2019/1020.
The Board may establish other standing or temporary sub-groups as appropriate for the purpose of examining
specific issues. Where appropriate, representatives of the advisory forum referred to in Article 67 may be
invited to such sub-groups or to specific meetings of those subgroups as observers.
7. The Board shall be organised and operated so as to safeguard the objectivity and impartiality of its
activities.
8. The Board shall be chaired by one of the representatives of the Member States. The AI Office shall provide
the secretariat for the Board, convene the meetings upon request of the Chair, and prepare the agenda in
accordance with the tasks of the Board pursuant to this Regulation and its rules of procedure.
Article 66
Tasks of the Board
The Board shall advise and assist the Commission and the Member States in order to facilitate the consistent
and effective application of this Regulation. To that end, the Board may in particular:
(a)contribute to the coordination among national competent authorities responsible for the application of this
Regulation and, in cooperation with and subject to the agreement of the market surveillance authorities
concerned, support joint activities of market surveillance authorities referred to in Article 74(11);
(b)collect and share technical and regulatory expertise and best practices among Member States;
(c)provide advice on the implementation of this Regulation, in particular as regards the enforcement of rules
on general-purpose AI models;
(d)contribute to the harmonisation of administrative practices in the Member States, including in relation to
the derogation from the conformity assessment procedures referred to in Article 46, the functioning of AI
regulatory sandboxes, and testing in real world conditions referred to in Articles 57, 59 and 60;
(e)at the request of the Commission or on its own initiative, issue recommendations and written opinions on
any relevant matters related to the implementation of this Regulation and to its consistent and effective
application, including:
(i)on the development and application of codes of conduct and codes of practice pursuant to this
Regulation, as well as of the Commission's guidelines;
(ii)the evaluation and review of this Regulation pursuant to Article 112, including as regards the serious
incident reports referred to in Article 73, and the functioning of the EU database referred to in
Article 71, the preparation of the delegated or implementing acts, and as regards possible alignments
of this Regulation with the Union harmonisation legislation listed in Annex I;
(iii)on technical specifications or existing standards regarding the requirements set out in Chapter III,
Section 2;
(iv)on the use of harmonised standards or common specifications referred to in Articles 40 and 41;
(v)trends, such as European global competitiveness in AI, the uptake of AI in the Union, and the
development of digital skills;
(vi)trends on the evolving typology of AI value chains, in particular on the resulting implications in terms
of accountability;
(vii)on the potential need for amendment to Annex III in accordance with Article 7, and on the potential
need for possible revision of Article 5 pursuant to Article 112, taking into account relevant available
evidence and the latest developments in technology;
(f)support the Commission in promoting AI literacy, public awareness and understanding of the benefits,
risks, safeguards and rights and obligations in relation to the use of AI systems;
(g)facilitate the development of common criteria and a shared understanding among market operators and
competent authorities of the relevant concepts provided for in this Regulation, including by contributing to
the development of benchmarks;
(h)cooperate, as appropriate, with other Union institutions, bodies, offices and agencies, as well as relevant
Union expert groups and networks, in particular in the fields of product safety, cybersecurity, competition,
digital and media services, financial services, consumer protection, data and fundamental rights protection;
(i)contribute to effective cooperation with the competent authorities of third countries and with international
organisations;
(j)assist national competent authorities and the Commission in developing the organisational and technical
expertise required for the implementation of this Regulation, including by contributing to the assessment of
training needs for staff of Member States involved in implementing this Regulation;
(k)assist the AI Office in supporting national competent authorities in the establishment and development of
AI regulatory sandboxes, and facilitate cooperation and information-sharing among AI regulatory
sandboxes;
(l)contribute to, and provide relevant advice on, the development of guidance documents;
(m)advise the Commission in relation to international matters on AI;
(n)provide opinions to the Commission on the qualified alerts regarding general-purpose AI models;
(o)receive opinions by the Member States on qualified alerts regarding general-purpose AI models, and on
national experiences and practices on the monitoring and enforcement of AI systems, in particular systems
integrating the general-purpose AI models.
Article 67
Advisory forum
1. An advisory forum shall be established to provide technical expertise and advise the Board and the
Commission, and to contribute to their tasks under this Regulation.
2. The membership of the advisory forum shall represent a balanced selection of stakeholders, including
industry, start-ups, SMEs, civil society and academia. The membership of the advisory forum shall be balanced
with regard to commercial and non-commercial interests and, within the category of commercial interests, with
regard to SMEs and other undertakings.
3. The Commission shall appoint the members of the advisory forum, in accordance with the criteria set out in
paragraph 2, from amongst stakeholders with recognised expertise in the field of AI.
4. The term of office of the members of the advisory forum shall be two years, which may be extended by up
to no more than four years.
5. The Fundamental Rights Agency, ENISA, the European Committee for Standardization (CEN), the
European Committee for Electrotechnical Standardization (CENELEC), and the European Telecommunications
Standards Institute (ETSI) shall be permanent members of the advisory forum.
6. The advisory forum shall draw up its rules of procedure. It shall elect two co-chairs from among its
members, in accordance with criteria set out in paragraph 2. The term of office of the co-chairs shall be two
years, renewable once.
7. The advisory forum shall hold meetings at least twice a year. The advisory forum may invite experts and
other stakeholders to its meetings.
8. The advisory forum may prepare opinions, recommendations and written contributions at the request of the
Board or the Commission.
9. The advisory forum may establish standing or temporary sub-groups as appropriate for the purpose of
examining specific questions related to the objectives of this Regulation.
10. The advisory forum shall prepare an annual report on its activities. That report shall be made publicly
available.
Article 68
Scientific panel of independent experts
1. The Commission shall, by means of an implementing act, make provisions on the establishment of
a scientific panel of independent experts (the ‘scientific panel’) intended to support the enforcement activities
under this Regulation. That implementing act shall be adopted in accordance with the examination procedure
referred to in Article 98(2).
2. The scientific panel shall consist of experts selected by the Commission on the basis of up-to-date scientific
or technical expertise in the field of AI necessary for the tasks set out in paragraph 3, and shall be able to
demonstrate meeting all of the following conditions:
(a)having particular expertise and competence and scientific or technical expertise in the field of AI;
(b)independence from any provider of AI systems or general-purpose AI models;
(c)an ability to carry out activities diligently, accurately and objectively.
The Commission, in consultation with the Board, shall determine the number of experts on the panel in
accordance with the required needs and shall ensure fair gender and geographical representation.
3. The scientific panel shall advise and support the AI Office, in particular with regard to the following tasks:
(a)supporting the implementation and enforcement of this Regulation as regards general-purpose AI models
and systems, in particular by:
(i)alerting the AI Office of possible systemic risks at Union level of general-purpose AI models, in
accordance with Article 90;
(ii)contributing to the development of tools and methodologies for evaluating capabilities of general-
purpose AI models and systems, including through benchmarks;
(iii)providing advice on the classification of general-purpose AI models with systemic risk;
(iv)providing advice on the classification of various general-purpose AI models and systems;
(v)contributing to the development of tools and templates;
(b)supporting the work of market surveillance authorities, at their request;
(c)supporting cross-border market surveillance activities as referred to in Article 74(11), without prejudice to
the powers of market surveillance authorities;
(d)supporting the AI Office in carrying out its duties in the context of the Union safeguard procedure pursuant
to Article 81.
4. The experts on the scientific panel shall perform their tasks with impartiality and objectivity, and shall
ensure the confidentiality of information and data obtained in carrying out their tasks and activities. They shall
neither seek nor take instructions from anyone when exercising their tasks under paragraph 3. Each expert shall
draw up a declaration of interests, which shall be made publicly available. The AI Office shall establish systems
and procedures to actively manage and prevent potential conflicts of interest.
5. The implementing act referred to in paragraph 1 shall include provisions on the conditions, procedures and
detailed arrangements for the scientific panel and its members to issue alerts, and to request the assistance of
the AI Office for the performance of the tasks of the scientific panel.
Article 69
Access to the pool of experts by the Member States
1. Member States may call upon experts of the scientific panel to support their enforcement activities under
this Regulation.
2. The Member States may be required to pay fees for the advice and support provided by the experts. The
structure and the level of fees as well as the scale and structure of recoverable costs shall be set out in the
implementing act referred to in Article 68(1), taking into account the objectives of the adequate implementation
of this Regulation, cost-effectiveness and the necessity of ensuring effective access to experts for all
Member States.
3. The Commission shall facilitate timely access to the experts by the Member States, as needed, and ensure
that the combination of support activities carried out by Union AI testing support pursuant to Article 84 and
experts pursuant to this Article is efficiently organised and provides the best possible added value.
SECTION 2
National competent authorities
Article 70
Designation of national competent authorities and single points of contact
1. Each Member State shall establish or designate as national competent authorities at least one notifying
authority and at least one market surveillance authority for the purposes of this Regulation. Those national
competent authorities shall exercise their powers independently, impartially and without bias so as to safeguard
the objectivity of their activities and tasks, and to ensure the application and implementation of this Regulation.
The members of those authorities shall refrain from any action incompatible with their duties. Provided that
those principles are observed, such activities and tasks may be performed by one or more designated
authorities, in accordance with the organisational needs of the Member State.
2. Member States shall communicate to the Commission the identity of the notifying authorities and the
market surveillance authorities and the tasks of those authorities, as well as any subsequent changes thereto.
Member States shall make publicly available information on how competent authorities and single points of
contact can be contacted, through electronic communication means by 2 August 2025. Member States shall
designate a market surveillance authority to act as the single point of contact for this Regulation, and shall
notify the Commission of the identity of the single point of contact. The Commission shall make a list of the
single points of contact publicly available.
3. Member States shall ensure that their national competent authorities are provided with adequate technical,
financial and human resources, and with infrastructure to fulfil their tasks effectively under this Regulation. In
particular, the national competent authorities shall have a sufficient number of personnel permanently available whose competences and expertise shall include an in-depth understanding of AI technologies, data and data computing, personal data protection, cybersecurity, fundamental rights, health and safety risks and knowledge of existing standards and legal requirements. Member States shall assess and, if necessary, update competence
and resource requirements referred to in this paragraph on an annual basis.
4. National competent authorities shall take appropriate measures to ensure an adequate level of cybersecurity.
5. When performing their tasks, the national competent authorities shall act in accordance with the
confidentiality obligations set out in Article 78.
6. By 2 August 2025, and once every two years thereafter, Member States shall report to the Commission on
the status of the financial and human resources of the national competent authorities, with an assessment of
their adequacy. The Commission shall transmit that information to the Board for discussion and possible
recommendations.
7. The Commission shall facilitate the exchange of experience between national competent authorities.
8. National competent authorities may provide guidance and advice on the implementation of this Regulation,
in particular to SMEs including start-ups, taking into account the guidance and advice of the Board and the
Commission, as appropriate. Whenever national competent authorities intend to provide guidance and advice
with regard to an AI system in areas covered by other Union law, the national competent authorities under that
Union law shall be consulted, as appropriate.
9. Where Union institutions, bodies, offices or agencies fall within the scope of this Regulation, the European
Data Protection Supervisor shall act as the competent authority for their supervision.
CHAPTER VIII
EU DATABASE FOR HIGH-RISK AI SYSTEMS
Article 71
EU database for high-risk AI systems listed in Annex III
1. The Commission shall, in collaboration with the Member States, set up and maintain an EU database
containing information referred to in paragraphs 2 and 3 of this Article concerning high-risk AI systems
referred to in Article 6(2) which are registered in accordance with Articles 49 and 60 and AI systems that are
not considered as high-risk pursuant to Article 6(3) and which are registered in accordance with Article 6(4)
and Article 49. When setting the functional specifications of such database, the Commission shall consult the
relevant experts, and when updating the functional specifications of such database, the Commission shall
consult the Board.
2. The data listed in Sections A and B of Annex VIII shall be entered into the EU database by the provider or,
where applicable, by the authorised representative.
3. The data listed in Section C of Annex VIII shall be entered into the EU database by the deployer who is, or
who acts on behalf of, a public authority, agency or body, in accordance with Article 49(3) and (4).
4. With the exception of the section referred to in Article 49(4) and Article 60(4), point (c), the information
contained in the EU database registered in accordance with Article 49 shall be accessible and publicly available
in a user-friendly manner. The information should be easily navigable and machine-readable. The information
registered in accordance with Article 60 shall be accessible only to market surveillance authorities and the
Commission, unless the prospective provider or provider has given consent for also making the information accessible to the public.
5. The EU database shall contain personal data only in so far as necessary for collecting and processing
information in accordance with this Regulation. That information shall include the names and contact details of
natural persons who are responsible for registering the system and have the legal authority to represent the
provider or the deployer, as applicable.
6. The Commission shall be the controller of the EU database. It shall make available to providers, prospective
providers and deployers adequate technical and administrative support. The EU database shall comply with the
applicable accessibility requirements.
CHAPTER IX
POST-MARKET MONITORING, INFORMATION SHARING AND MARKET SURVEILLANCE
SECTION 1
Post-market monitoring
Article 72
Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems
1. Providers shall establish and document a post-market monitoring system in a manner that is proportionate
to the nature of the AI technologies and the risks of the high-risk AI system.
2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant
data which may be provided by deployers or which may be collected through other sources on the performance
of high-risk AI systems throughout their lifetime, and which allow the provider to evaluate the continuous
compliance of AI systems with the requirements set out in Chapter III, Section 2. Where relevant, post-market
monitoring shall include an analysis of the interaction with other AI systems. This obligation shall not cover
sensitive operational data of deployers which are law-enforcement authorities.
3. The post-market monitoring system shall be based on a post-market monitoring plan. The post-market
monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall
adopt an implementing act laying down detailed provisions establishing a template for the post-market
monitoring plan and the list of elements to be included in the plan by 2 February 2026. That implementing act
shall be adopted in accordance with the examination procedure referred to in Article 98(2).
4. For high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I,
where a post-market monitoring system and plan are already established under that legislation, in order to
ensure consistency, avoid duplications and minimise additional burdens, providers shall have a choice of integrating, as appropriate, the necessary elements described in paragraphs 1, 2 and 3 using the template referred to in paragraph 3 into systems and plans already existing under that legislation, provided that it achieves
an equivalent level of protection.
The first subparagraph of this paragraph shall also apply to high-risk AI systems referred to in point 5 of
Annex III placed on the market or put into service by financial institutions that are subject to requirements
under Union financial services law regarding their internal governance, arrangements or processes.
SECTION 2
Sharing of information on serious incidents
Article 73
Reporting of serious incidents
1. Providers of high-risk AI systems placed on the Union market shall report any serious incident to the
market surveillance authorities of the Member States where that incident occurred.
2. The report referred to in paragraph 1 shall be made immediately after the provider has established a causal
link between the AI system and the serious incident or the reasonable likelihood of such a link, and, in any
event, not later than 15 days after the provider or, where applicable, the deployer, becomes aware of the serious
incident.
The period for the reporting referred to in the first subparagraph shall take account of the severity of the serious
incident.
3. Notwithstanding paragraph 2 of this Article, in the event of a widespread infringement or a serious incident
as defined in Article 3, point (49)(b), the report referred to in paragraph 1 of this Article shall be provided
immediately, and not later than two days after the provider or, where applicable, the deployer becomes aware of
that incident.
4. Notwithstanding paragraph 2, in the event of the death of a person, the report shall be provided immediately
after the provider or the deployer has established, or as soon as it suspects, a causal relationship between the
high-risk AI system and the serious incident, but not later than 10 days after the date on which the provider or,
where applicable, the deployer becomes aware of the serious incident.
5. Where necessary to ensure timely reporting, the provider or, where applicable, the deployer, may submit an
initial report that is incomplete, followed by a complete report.
6. Following the reporting of a serious incident pursuant to paragraph 1, the provider shall, without delay,
perform the necessary investigations in relation to the serious incident and the AI system concerned. This shall
include a risk assessment of the incident, and corrective action.
The provider shall cooperate with the competent authorities, and where relevant with the notified body
concerned, during the investigations referred to in the first subparagraph, and shall not perform any
investigation which involves altering the AI system concerned in a way which may affect any subsequent
evaluation of the causes of the incident, prior to informing the competent authorities of such action.
7. Upon receiving a notification related to a serious incident referred to in Article 3, point (49)(c), the relevant
market surveillance authority shall inform the national public authorities or bodies referred to in Article 77(1).
The Commission shall develop dedicated guidance to facilitate compliance with the obligations set out in
paragraph 1 of this Article. That guidance shall be issued by 2 August 2025, and shall be assessed regularly.
8. The market surveillance authority shall take appropriate measures, as provided for in Article 19 of
Regulation (EU) 2019/1020, within seven days from the date it received the notification referred to in
paragraph 1 of this Article, and shall follow the notification procedures as provided in that Regulation.
9. For high-risk AI systems referred to in Annex III that are placed on the market or put into service by
providers that are subject to Union legislative instruments laying down reporting obligations equivalent to those
set out in this Regulation, the notification of serious incidents shall be limited to those referred to in Article 3,
point (49)(c).
10. For high-risk AI systems which are safety components of devices, or are themselves devices, covered by
Regulations (EU) 2017/745 and (EU) 2017/746, the notification of serious incidents shall be limited to those
referred to in Article 3, point (49)(c) of this Regulation, and shall be made to the national competent authority
chosen for that purpose by the Member States where the incident occurred.
11. National competent authorities shall immediately notify the Commission of any serious incident, whether
or not they have taken action on it, in accordance with Article 20 of Regulation (EU) 2019/1020.
SECTION 3
Enforcement
Article 74
Market surveillance and control of AI systems in the Union market
1. Regulation (EU) 2019/1020 shall apply to AI systems covered by this Regulation. For the purposes of the
effective enforcement of this Regulation:
(a)any reference to an economic operator under Regulation (EU) 2019/1020 shall be understood as including
all operators identified in Article 2(1) of this Regulation;
(b)any reference to a product under Regulation (EU) 2019/1020 shall be understood as including all AI
systems falling within the scope of this Regulation.
2. As part of their reporting obligations under Article 34(4) of Regulation (EU) 2019/1020, the market
surveillance authorities shall report annually to the Commission and relevant national competition authorities
any information identified in the course of market surveillance activities that may be of potential interest for the
application of Union law on competition rules. They shall also annually report to the Commission about the use
of prohibited practices that occurred during that year and about the measures taken.
3. For high-risk AI systems related to products covered by the Union harmonisation legislation listed in
Section A of Annex I, the market surveillance authority for the purposes of this Regulation shall be the
authority responsible for market surveillance activities designated under those legal acts.
By derogation from the first subparagraph, and in appropriate circumstances, Member States may designate
another relevant authority to act as a market surveillance authority, provided they ensure coordination with the
relevant sectoral market surveillance authorities responsible for the enforcement of the Union harmonisation
legislation listed in Annex I.
4. The procedures referred to in Articles 79 to 83 of this Regulation shall not apply to AI systems related to
products covered by the Union harmonisation legislation listed in Section A of Annex I, where such legal acts
already provide for procedures ensuring an equivalent level of protection and having the same objective. In
such cases, the relevant sectoral procedures shall apply instead.
5. Without prejudice to the powers of market surveillance authorities under Article 14 of Regulation (EU)
2019/1020, for the purpose of ensurin g the effective enforcement of this Regulation, market surveillance
authorities may exercise the powers referred to in Article 14(4), points (d) and (j), of that Regulation remotely,
as appropriate.
6. For high-risk AI systems placed on the market, put into service, or used by financial institutions regulated
by Union financial services law, the market surveillance authority for the purposes of this Regulation shall be
the relevant national authority responsible for the financial supervision of those institutions under that
legislation in so far as the placing on the market, putting into service, or the use of the AI system is in direct
connection with the provision of those financial services.
7. By way of derogation from paragraph 6, in appropriate circumstances, and provided that coordination is
ensured, another relevant authority may be identified by the Member State as market surveillance authority for
the purposes of this Regulation.
National market surveillance authorities supervising regulated credit institutions regulated under Directive
2013/36/EU, which are participating in the Single Supervisory Mechanism established by Regulation (EU)
No 1024/2013, should report, without delay, to the European Central Bank any information identified in the
course of their market surveillance activities that may be of potential interest for the prudential supervisory
tasks of the European Central Bank specified in that Regulation.
8. For high-risk AI systems listed in point 1 of Annex III to this Regulation, in so far as the systems are used
for law enforcement purposes, border management and justice and democracy, and for high-risk AI systems
listed in points 6, 7 and 8 of Annex III to this Regulation, Member States shall designate as market surveillance
authorities for the purposes of this Regulation either the competent data protection supervisory authorities
under Regulation (EU) 2016/679 or Directive (EU) 2016/680, or any other authority designated pursuant to the
same conditions laid down in Articles 41 to 44 of Directive (EU) 2016/680. Market surveillance activities shall
in no way affect the independence of judicial authorities, or otherwise interfere with their activities when acting
in their judicial capacity.
9. Where Union institutions, bodies, offices or agencies fall within the scope of this Regulation, the European
Data Protection Supervisor shall act as their market surveillance authority, except in relation to the Court of Justice of the European Union acting in its judicial capacity.
10. Member States shall facilitate coordination between market surveillance authorities designated under this
Regulation and other relevant national authorities or bodies which supervise the application of Union
harmonisation legislation listed in Annex I, or in other Union law, that might be relevant for the high-risk AI
systems referred to in Annex III.
11. Market surveillance authorities and the Commission shall be able to propose joint activities, including
joint investigations, to be conducted by either market surveillance authorities or market surveillance authorities
jointly with the Commission, that have the aim of promoting compliance, identifying non-compliance, raising
awareness or providing guidance in relation to this Regulation with respect to specific categories of high-risk
AI systems that are found to present a serious risk across two or more Member States in accordance with Article 9 of Regulation (EU) 2019/1020. The AI Office shall provide coordination support for joint
investigations.
12. Without prejudice to the powers provided for under Regulation (EU) 2019/1020, and where relevant and
limited to what is necessary to fulfil their tasks, the market surveillance authorities shall be granted full access
by providers to the documentation as well as the training, validation and testing data sets used for the
development of high-risk AI systems, including, where appropriate and subject to security safeguards, through
application programming interfaces (API) or other relevant technical means and tools enabling remote access.
13. Market surveillance authorities shall be granted access to the source code of the high-risk AI system upon
a reasoned request and only when both of the following conditions are fulfilled:
(a)access to source code is necessary to assess the conformity of a high-risk AI system with the requirements
set out in Chapter III, Section 2; and
(b)testing or auditing procedures and verifications based on the data and documentation provided by the
provider have been exhausted or proved insufficient.
14. Any information or documentation obtained by market surveillance authorities shall be treated in
accordance with the confidentiality obligations set out in Article 78.
Article 75
Mutual assistance, market surveillance and control of general-purpose AI systems
1. Where an AI system is based on a general-purpose AI model, and the model and the system are developed
by the same provider, the AI Office shall have powers to monitor and supervise compliance of that AI system
with obligations under this Regulation. To carry out its monitoring and supervision tasks, the AI Office shall
have all the powers of a market surveillance authority provided for in this Section and Regulation (EU)
2019/1020.
2. Where the relevant market surveillance authorities have sufficient reason to consider general-purpose AI
systems that can be used directly by deployers for at least one purpose that is classified as high-risk pursuant to
this Regulation to be non-compliant with the requirements laid down in this Regulation, they shall cooperate
with the AI Office to carry out compliance evaluations, and shall inform the Board and other market
surveillance authorities accordingly.
3. Where a market surveillance authority is unable to conclude its investigation of the high-risk AI system
because of its inability to access certain information related to the general-purpose AI model despite having
made all appropriate efforts to obtain that information, it may submit a reasoned request to the AI Office, by
which access to that information shall be enforced. In that case, the AI Office shall supply to the applicant
authority without delay, and in any event within 30 days, any information that the AI Office considers to be
relevant in order to establish whether a high-risk AI system is non-compliant. Market surveillance authorities
shall safeguard the confidentiality of the information that they obtain in accordance with Article 78 of this
Regulation. The procedure provided for in Chapter VI of Regulation (EU) 2019/1020 shall apply
mutatis mutandis.
Article 76
Supervision of testing in real world conditions by market surveillance authorities
1. Market surveillance authorities shall have competences and powers to ensure that testing in real world
conditions is in accordance with this Regulation.
2. Where testing in real world conditions is conducted for AI systems that are supervised within an AI
regulatory sandbox under Article 58, the market surveillance authorities shall verify the compliance with
Article 60 as part of their supervisory role for the AI regulatory sandbox. Those authorities may, as appropriate,
allow the testing in real world conditions to be conducted by the provider or prospective provider, in derogation
from the conditions set out in Article 60(4), points (f) and (g).
3. Where a market surveillance authority has been informed by the prospective provider, the provider or any
third party of a serious incident or has other grounds for considering that the conditions set out in Articles 60
and 61 are not met, it may take either of the following decisions on its territory, as appropriate:
(a)to suspend or terminate the testing in real world conditions;
(b)to require the provider or prospective provider and the deployer or prospective deployer to modify any
aspect of the testing in real world conditions.
4. Where a market surveillance authority has taken a decision referred to in paragraph 3 of this Article, or has
issued an objection within the meaning of Article 60(4), point (b), the decision or the objection shall indicate
the grounds therefor and how the provider or prospective provider can challenge the decision or objection.
5. Where applicable, where a market surveillance authority has taken a decision referred to in paragraph 3, it
shall communicate the grounds therefor to the market surveillance authorities of other Member States in which
the AI system has been tested in accordance with the testing plan.
Article 77
Powers of authorities protecting fundamental rights
1. National public authorities or bodies which supervise or enforce the respect of obligations under Union law
protecting fundamental rights, including the right to non-discrimination, in relation to the use of high-risk AI
systems referred to in Annex III shall have the power to request and access any documentation created or
maintained under this Regulation in accessible language and format when access to that documentation is
necessary for effectively fulfilling their mandates within the limits of their jurisdiction. The relevant public
authority or body shall inform the market surveillance authority of the Member State concerned of any such
request.
2. By 2 November 2024, each Member State shall identify the public authorities or bodies referred to in
paragraph 1 and make a list of them publicly available. Member States shall notify the list to the Commission
and to the other Member States, and shall keep the list up to date.
3. Where the documentation referred to in paragraph 1 is insufficient to ascertain whether an infringement of
obligations under Union law protecting fundamental rights has occurred, the public authority or body referred
to in paragraph 1 may make a reasoned request to the market surveillance authority to organise testing of the
high-risk AI system through technical means. The market surveillance authority shall organise the testing with
the close involvement of the requesting public authority or body within a reasonable time following the request.
4. Any information or documentation obtained by the national public authorities or bodies referred to in
paragraph 1 of this Article pursuant to this Article shall be treated in accordance with the confidentiality
obligations set out in Article 78.
Article 78
Confidentiality
1. The Commission, market surveillance authorities and notified bodies and any other natural or legal person
involved in the application of this Regulation shall, in accordance with Union or national law, respect the
confidentiality of information and data obtained in carrying out their tasks and activities in such a manner as to
protect, in particular:
(a)the intellectual property rights and confidential business information or trade secrets of a natural or legal
person, including source code, except in the cases referred to in Article 5 of Directive (EU) 2016/943 of the
European Parliament and of the Council (57);
(b)the effective implementation of this Regulation, in particular for the purposes of inspections, investigations
or audits;
(c)public and national security interests;
(d)the conduct of criminal or administrative proceedings;
(e)information classified pursuant to Union or national law.
2. The authorities involved in the application of this Regulation pursuant to paragraph 1 shall request only data
that is strictly necessary for the assessment of the risk posed by AI systems and for the exercise of their powers
in accordance with this Regulation and with Regulation (EU) 2019/1020. They shall put in place adequate and
effective cybersecurity measures to protect the security and confidentiality of the information and data
obtained, and shall delete the data collected as soon as it is no longer needed for the purpose for which it was
obtained, in accordance with applicable Union or national law.
3. Without prejudice to paragraphs 1 and 2, information exchanged on a confidential basis between the
national competent authorities or between national competent authorities and the Commission shall not be
disclosed without prior consultation of the originating national competent authority and the deployer when
high-risk AI systems referred to in point 1, 6 or 7 of Annex III are used by law enforcement, border control,
immigration or asylum authorities and when such disclosure would jeopardise public and national security
interests. This exchange of information shall not cover sensitive operational data in relation to the activities of
law enforcement, border control, immigration or asylum authorities.
When the law enforcement, immigration or asylum authorities are providers of high-risk AI systems referred to
in point 1, 6 or 7 of Annex III, the technical documentation referred to in Annex IV shall remain within the
premises of those authorities. Those authorities shall ensure that the market surveillance authorities referred to
in Article 74(8) and (9), as applicable, can, upon request, immediately access the documentation or obtain
a copy thereof. Only staff of the market surveillance authority holding the appropriate level of security
clearance shall be allowed to access that documentation or any copy thereof.
4. Paragraphs 1, 2 and 3 shall not affect the rights or obligations of the Commission, Member States and their
relevant authorities, as well as those of notified bodies, with regard to the exchange of information and the
dissemination of warnings, including in the context of cross-border cooperation, nor shall they affect the
obligations of the parties concerned to provide information under criminal law of the Member States.
5. The Commission and Member States may exchange, where necessary and in accordance with relevant
provisions of international and trade agreements, confidential information with regulatory authorities of third
countries with which they have concluded bilateral or multilateral confidentiality arrangements guaranteeing an adequate level of confidentiality.
Article 79
Procedure at national level for dealing with AI systems presenting a risk
1. AI systems presenting a risk shall be understood as a ‘product presenting a risk’ as defined in Article 3,
point 19 of Regulation (EU) 2019/1020, in so far as they present risks to the health or safety, or to fundamental
rights, of persons.
2. Where the market surveillance authority of a Member State has sufficient reason to consider an AI system
to present a risk as referred to in paragraph 1 of this Article, it shall carry out an evaluation of the AI system
concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation.
Particular attention shall be given to AI systems presenting a risk to vulnerable groups. Where risks to
fundamental rights are identified, the market surveillance authority shall also inform and fully cooperate with
the relevant national public authorities or bodies referred to in Article 77(1). The relevant operators shall
cooperate as necessary with the market surveillance authority and with the other national public authorities or
bodies referred to in Article 77(1).
Where, in the course of that evaluation, the market surveillance authority or, where applicable, the market
surveillance authority in cooperation with the national public authority referred to in Article 77(1), finds that
the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall
without undue delay require the relevant operator to take all appropriate corrective actions to bring the AI
system into compliance, to withdraw the AI system from the market, or to recall it within a period the market
surveillance authority may prescribe, and in any event within the shorter of 15 working days, or as provided for
in the relevant Union harmonisation legislation.
The market surveillance authority shall inform the relevant notified body accordingly. Article 18 of Regulation
(EU) 2019/1020 shall apply to the measures referred to in the second subparagraph of this paragraph.
3. Where the market surveillance authority considers that the non-compliance is not restricted to its national
territory , it shall inform the Commission and the other Member States without undue delay of the results of the
evaluation and of the actions which it has required the operator to take.
4. The operator shall ensure that all appropriate corrective action is taken in respect of all the AI systems
concerned that it has made available on the Union market.
5. Where the operator of an AI system does not take adequate corrective action within the period referred to in
paragraph 2, the market surveillance authority shall take all appropriate provisional measures to prohibit or
restrict the AI system’s being made available on its national market or put into service, to withdraw the product
or the standalone AI system from that market or to recall it. That authority shall without undue delay notify the
Commission and the other Member States of those measures.
6. The notification referred to in paragraph 5 shall include all available details, in particular the information
necessary for the identification of the non-compliant AI system, the origin of the AI system and the supply
chain, the nature of the non-compliance alleged and the risk involved, the nature and duration of the national
measures taken and the arguments put forward by the relevant operator. In particular, the market surveillance
authorities shall indicate whether the non-compliance is due to one or more of the following:
(a)non-compliance with the prohibition of the AI practices referred to in Article 5;
(b)a failure of a high-risk AI system to meet requirements set out in Chapter III, Section 2;
(c)shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41
conferring a presumption of conformity;
(d)non-compliance with Article 50.
7. The market surveillance authorities other than the market surveillance authority of the Member State initiating the procedure shall, without undue delay, inform the Commission and the other Member States of any measures adopted and of any additional information at their disposal relating to the non-compliance of the AI system concerned, and, in the event of disagreement with the notified national measure, of their objections.
8. Where, within three months of receipt of the notification referred to in paragraph 5 of this Article, no
objection has been raised by either a market surveillance authority of a Member State or by the Commission in
respect of a provisional measure taken by a market surveillance authority of another Member State, that
measure shall be deemed justified. This shall be without prejudice to the procedural rights of the concerned
operator in accordance with Article 18 of Regulation (EU) 2019/1020. The three-month period referred to in
this paragraph shall be reduced to 30 days in the event of non-compliance with the prohibition of the AI
practices referred to in Article 5 of this Regulation.
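The deadline rule in paragraph 8 (three months, shortened to 30 days for Article 5 cases) can be sketched in code. This is an illustrative sketch only, not part of the Regulation or legal advice: the function name and the simplified "add three calendar months" arithmetic are assumptions, and actual Union rules on periods and time limits (Regulation (EEC, Euratom) No 1182/71) are more nuanced.

```python
# Illustrative sketch only: the objection window of Article 79(8).
# Function name and the simplified calendar-month arithmetic are assumptions.
from datetime import date, timedelta

def objection_deadline(notification_received: date, article_5_case: bool) -> date:
    """Last day for raising objections to a provisional national measure.

    Three months from receipt of the notification under Article 79(5),
    reduced to 30 days where the non-compliance concerns the Article 5
    prohibitions.
    """
    if article_5_case:
        return notification_received + timedelta(days=30)
    # Naive "three calendar months" (ignores month-end edge cases).
    month = notification_received.month + 3
    year = notification_received.year + (month - 1) // 12
    return notification_received.replace(year=year, month=(month - 1) % 12 + 1)

print(objection_deadline(date(2025, 1, 15), article_5_case=False))  # 2025-04-15
print(objection_deadline(date(2025, 1, 15), article_5_case=True))   # 2025-02-14
```

If no objection is raised within the window, the provisional measure is deemed justified, so in practice this date is the point after which paragraph 8's deeming rule takes effect.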
9. The market surveillance authorities shall ensure that appropriate restrictive measures are taken in respect of the product or the AI system concerned, such as withdrawal of the product or the AI system from their market, without undue delay.
Article 80
Procedure for dealing with AI systems classified by the provider as non-high-risk in application of Annex III
1. Where a market surveillance authority has sufficient reason to consider that an AI system classified by the provider as non-high-risk pursuant to Article 6(3) is indeed high-risk, the market surveillance authority shall carry out an evaluation of the AI system concerned in respect of its classification as a high-risk AI system based on the conditions set out in Article 6(3) and the Commission guidelines.
2. Where, in the course of that evaluation, the market surveillance authority finds that the AI system concerned
is high-risk, it shall without undue delay require the relevant provider to take all necessary actions to bring the
AI system into compliance with the requirements and obligations laid down in this Regulation, as well as take
appropriate corrective action within a period the market surveillance authority may prescribe.
3. Where the market surveillance authority considers that the use of the AI system concerned is not restricted to its national territory, it shall inform the Commission and the other Member States without undue delay of the results of the evaluation and of the actions which it has required the provider to take.
4. The provider shall ensure that all necessary action is taken to bring the AI system into compliance with the
requirements and obligations laid down in this Regulation. Where the provider of an AI system concerned does
not bring the AI system into compliance with those requirements and obligations within the period referred to
in paragraph 2 of this Article, the provider shall be subject to fines in accordance with Article 99.
5. The provider shall ensure that all appropriate corrective action is taken in respect of all the AI systems
concerned that it has made available on the Union market.
6. Where the provider of the AI system concerned does not take adequate corrective action within the period referred to in paragraph 2 of this Article, Article 79(5) to (9) shall apply.
7. Where, in the course of the evaluation pursuant to paragraph 1 of this Article, the market surveillance
authority establishes that the AI system was misclassified by the provider as non-high-risk in order to
circumvent the application of requirements in Chapter III, Section 2, the provider shall be subject to fines in
accordance with Article 99.
8. In exercising their power to monitor the application of this Article, and in accordance with Article 11 of
Regulation (EU) 2019/1020, market surveillance authorities may perform appropriate checks, taking into
account in particular information stored in the EU database referred to in Article 71 of this Regulation.
Article 81
Union safeguard procedure
1. Where, within three months of receipt of the notification referred to in Article 79(5), or within 30 days in the case of non-compliance with the prohibition of the AI practices referred to in Article 5, objections are raised by the market surveillance authority of a Member State to a measure taken by another market surveillance authority, or where the Commission considers the measure to be contrary to Union law, the Commission shall without undue delay enter into consultation with the market surveillance authority of the relevant Member State and the operator or operators, and shall evaluate the national measure. On the basis of the results of that evaluation, the Commission shall, within six months, or within 60 days in the case of non-compliance with the prohibition of the AI practices referred to in Article 5, starting from the notification referred to in Article 79(5), decide whether the national measure is justified and shall notify its decision to the market surveillance authority of the Member State concerned. The Commission shall also inform all other market surveillance authorities of its decision.
2. Where the Commission considers the measure taken by the relevant Member State to be justified, all Member States shall ensure that they take appropriate restrictive measures in respect of the AI system concerned, such as requiring the withdrawal of the AI system from their market without undue delay, and shall inform the Commission accordingly. Where the Commission considers the national measure to be unjustified, the Member State concerned shall withdraw the measure and shall inform the Commission accordingly.
3. Where the national measure is considered justified and the non-compliance of the AI system is attributed to
shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 of this
Regulation, the Commission shall apply the procedure provided for in Article 11 of Regulation (EU)
No 1025/2012.
Article 82
Compliant AI systems which present a risk
1. Where, having performed an evaluation under Article 79, after consulting the relevant national public authority referred to in Article 77(1), the market surveillance authority of a Member State finds that although a high-risk AI system complies with this Regulation, it nevertheless presents a risk to the health or safety of persons, to fundamental rights, or to other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk without undue delay, within a period it may prescribe.
2. The provider or other relevant operator shall ensure that corrective action is taken in respect of all the AI
systems concerned that it has made available on the Union market within the timeline prescribed by the market
surveillance authority of the Member State referred to in paragraph 1.
3. The Member States shall immediately inform the Commission and the other Member States of a finding
under paragraph 1. That information shall include all available details, in particular the data necessary for the
identification of the AI system concerned, the origin and the supply chain of the AI system, the nature of the
risk involved and the nature and duration of the national measures taken.
4. The Commission shall without undue delay enter into consultation with the Member States concerned and the relevant operators, and shall evaluate the national measures taken. On the basis of the results of that evaluation, the Commission shall decide whether the measure is justified and, where necessary, propose other appropriate measures.
5. The Commission shall immediately communicate its decision to the Member States concerned and to the
relevant operators. It shall also inform the other Member States.
Article 83
Formal non-compliance
1. Where the market surveillance authority of a Member State makes one of the following findings, it shall
require the relevant provider to put an end to the non-compliance concerned, within a period it may prescribe:
(a) the CE marking has been affixed in violation of Article 48;
(b) the CE marking has not been affixed;
(c) the EU declaration of conformity referred to in Article 47 has not been drawn up;
(d) the EU declaration of conformity referred to in Article 47 has not been drawn up correctly;
(e) the registration in the EU database referred to in Article 71 has not been carried out;
(f) where applicable, no authorised representative has been appointed;
(g) technical documentation is not available.
2. Where the non-compliance referred to in paragraph 1 persists, the market surveillance authority of the Member State concerned shall take appropriate and proportionate measures to restrict or prohibit the high-risk AI system being made available on the market or to ensure that it is recalled or withdrawn from the market without delay.
Article 84
Union AI testing support structures
1. The Commission shall designate one or more Union AI testing support structures to perform the tasks listed
under Article 21(6) of Regulation (EU) 2019/1020 in the area of AI.
2. Without prejudice to the tasks referred to in paragraph 1, Union AI testing support structures shall also
provide independent technical or scientific advice at the request of the Board, the Commission, or of market
surveillance authorities.
SECTION 4
Remedies
Article 85
Right to lodge a complaint with a market surveillance authority
Without prejudice to other administrative or judicial remedies, any natural or legal person having grounds to
consider that there has been an infringement of the provisions of this Regulation may submit complaints to the
relevant market surveillance authority.
In accordance with Regulation (EU) 2019/1020, such complaints shall be taken into account for the purpose of conducting market surveillance activities, and shall be handled in line with the dedicated procedures established therefor by the market surveillance authorities.
Article 86
Right to explanation of individual decision-making
1. Any affected person subject to a decision which is taken by the deployer on the basis of the output from
a high-risk AI system listed in Annex III, with the exception of systems listed under point 2 thereof, and which
produces legal effects or similarly significantly affects that person in a way that they consider to have an
adverse impact on their health, safety or fundamental rights shall have the right to obtain from the deployer
clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main
elements of the decision taken.
2. Paragraph 1 shall not apply to the use of AI systems for which exceptions from, or restrictions to, the
obligation under that paragraph follow from Union or national law in compliance with Union law.
3. This Article shall apply only to the extent that the right referred to in paragraph 1 is not otherwise provided
for under Union law.
Article 87
Reporting of infringements and protection of reporting persons
Directive (EU) 2019/1937 shall apply to the reporting of infringements of this Regulation and the protection of
persons reporting such infringements.
SECTION 5
Supervision, investigation, enforcement and monitoring in respect of providers of general-purpose AI
models
Article 88
Enforcement of the obligations of providers of general-purpose AI models
1. The Commission shall have exclusive powers to supervise and enforce Chapter V, taking into account the
procedural guarantees under Article 94. The Commission shall entrust the implementation of these tasks to the
AI Office, without prejudice to the powers of organisation of the Commission and the division of competences
between Member States and the Union based on the Treaties.
2. Without prejudice to Article 75(3), market surveillance authorities may request the Commission to exercise
the powers laid down in this Section, where that is necessary and proportionate to assist with the fulfilment of
their tasks under this Regulation.
Article 89
Monitoring actions
1. For the purpose of carrying out the tasks assigned to it under this Section, the AI Office may take the
necessary actions to monitor the effective implementation and compliance with this Regulation by providers of
general-purpose AI models, including their adherence to approved codes of practice.
2. Downstream providers shall have the right to lodge a complaint alleging an infringement of this Regulation.
A complaint shall be duly reasoned and indicate at least:
(a) the point of contact of the provider of the general-purpose AI model concerned;
(b) a description of the relevant facts, the provisions of this Regulation concerned, and the reason why the downstream provider considers that the provider of the general-purpose AI model concerned infringed this Regulation;
(c) any other information that the downstream provider that sent the request considers relevant, including, where appropriate, information gathered on its own initiative.
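The minimum content of such a complaint maps naturally onto a simple record type. The following is a hypothetical sketch only: the Regulation prescribes no format, and the class, field, and method names are illustrative inventions chosen to mirror points (a) to (c) of Article 89(2).

```python
# Hypothetical sketch of the minimum content of an Article 89(2) complaint.
# Class, field and method names are illustrative; the Regulation fixes no format.
from dataclasses import dataclass

@dataclass
class DownstreamComplaint:
    provider_contact: str       # point (a): contact of the GPAI model provider
    facts: str                  # point (b): description of the relevant facts
    provisions_concerned: list  # point (b): provisions of this Regulation at issue
    reasons: str                # point (b): why an infringement is alleged
    other_information: str = "" # point (c): any other relevant information (optional)

    def is_duly_reasoned(self) -> bool:
        """A complaint must at least cover points (a) and (b)."""
        return bool(self.provider_contact and self.facts
                    and self.provisions_concerned and self.reasons)

complaint = DownstreamComplaint(
    provider_contact="contact@example-gpai-provider.eu",  # hypothetical address
    facts="Documentation required for downstream integration was not provided.",
    provisions_concerned=["Article 53(1), point (b)"],
    reasons="The provider did not supply the information needed by downstream providers.",
)
print(complaint.is_duly_reasoned())  # True
```

Point (c) is optional extra material, which is why `other_information` defaults to an empty string while the point (a) and (b) fields are mandatory.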
Article 90
Alerts of systemic risks by the scientific panel
1. The scientific panel may provide a qualified alert to the AI Office where it has reason to suspect that:
(a) a general-purpose AI model poses concrete identifiable risk at Union level; or
(b) a general-purpose AI model meets the conditions referred to in Article 51.
2. Upon such qualified alert, the Commission, through the AI Office and after having informed the Board, may exercise the powers laid down in this Section for the purpose of assessing the matter. The AI Office shall inform the Board of any measure according to Articles 91 to 94.
3. A qualified alert shall be duly reasoned and indicate at least:
(a) the point of contact of the provider of the general-purpose AI model with systemic risk concerned;
(b) a description of the relevant facts and the reasons for the alert by the scientific panel;
(c) any other information that the scientific panel considers to be relevant, including, where appropriate, information gathered on its own initiative.
Article 91
Power to request documentation and information
1. The Commission may request the provider of the general-purpose AI model concerned to provide the
documentation drawn up by the provider in accordance with Articles 53 and 55, or any additional information
that is necessary for the purpose of assessing compliance of the provider with this Regulation.
2. Before sending the request for information, the AI Office may initiate a structured dialogue with the
provider of the general-purpose AI model.
3. Upon a duly substantiated request from the scientific panel, the Commission may issue a request for
information to a provider of a general-purpose AI model, where the access to information is necessary and
proportionate for the fulfilment of the tasks of the scientific panel under Article 68(2).
4. The request for information shall state the legal basis and the purpose of the request, specify what
information is required, set a period within which the information is to be provided, and indicate the fines
provided for in Article 101 for supplying incorrect, incomplete or misleading information.
5. The provider of the general-purpose AI model concerned, or its representative shall supply the information
requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the
persons authorised to represent them by law or by their statutes, shall supply the information requested on
behalf of the provider of the general-purpose AI model concerned. Lawyers duly authorised to act may supply
information on behalf of their clients. The clients shall nevertheless remain fully responsible if the information
supplied is incomplete, incorrect or misleading.
Article 92
Power to conduct evaluations
1. The AI Office, after consulting the Board, may conduct evaluations of the general-purpose AI model
concerned:
(a) to assess compliance of the provider with obligations under this Regulation, where the information gathered pursuant to Article 91 is insufficient; or
(b) to investigate systemic risks at Union level of general-purpose AI models with systemic risk, in particular following a qualified alert from the scientific panel in accordance with Article 90(1), point (a).
2. The Commission may decide to appoint independent experts to carry out evaluations on its behalf, including from the scientific panel established pursuant to Article 68. Independent experts appointed for this task shall meet the criteria outlined in Article 68(2).
3. For the purposes of paragraph 1, the Commission may request access to the general-purpose AI model
concerned through APIs or further appropriate technical means and tools, including source code.
4. The request for access shall state the legal basis, the purpose and reasons of the request and set the period
within which the access is to be provided, and the fines provided for in Article 101 for failure to provide access.
5. The providers of the general-purpose AI model concerned or its representative shall supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, shall provide the access requested on behalf of the provider of the general-purpose AI model concerned.
6. The Commission shall adopt implementing acts setting out the detailed arrangements and the conditions for
the evaluations, including the detailed arrangements for involving independent experts, and the procedure for
the selection thereof. Those implementing acts shall be adopted in accordance with the examination procedure
referred to in Article 98(2).
7. Prior to requesting access to the general-purpose AI model concerned, the AI Office may initiate a structured dialogue with the provider of the general-purpose AI model to gather more information on the internal testing of the model, internal safeguards for preventing systemic risks, and other internal procedures and measures the provider has taken to mitigate such risks.
Article 93
Power to request measures
1. Where necessary and appropriate, the Commission may request providers to:
(a) take appropriate measures to comply with the obligations set out in Articles 53 and 54;
(b) implement mitigation measures, where the evaluation carried out in accordance with Article 92 has given rise to serious and substantiated concern of a systemic risk at Union level;
(c) restrict the making available on the market, withdraw or recall the model.
2. Before a measure is requested, the AI Office may initiate a structured dialogue with the provider of the
general-purpose AI model.
3. If, during the structured dialogue referred to in paragraph 2, the provider of the general-purpose AI model
with systemic risk offers commitments to implement mitigation measures to address a systemic risk at Union
level, the Commission may, by decision, make those commitments binding and declare that there are no further
grounds for action.
Article 94
Procedural rights of economic operators of the general-purpose AI model
Article 18 of Regulation (EU) 2019/1020 shall apply mutatis mutandis to the providers of the general-purpose
AI model, without prejudice to more specific procedural rights provided for in this Regulation.
CHAPTER X
CODES OF CONDUCT AND GUIDELINES
Article 95
Codes of conduct for voluntary application of specific requirements
1. The AI Office and the Member States shall encourage and facilitate the drawing up of codes of conduct, including related governance mechanisms, intended to foster the voluntary application to AI systems, other than high-risk AI systems, of some or all of the requirements set out in Chapter III, Section 2 taking into account the available technical solutions and industry best practices allowing for the application of such requirements.
2. The AI Office and the Member States shall facilitate the drawing up of codes of conduct concerning the
voluntary application, including by deployers, of specific requirements to all AI systems, on the basis of clear
objectives and key performance indicators to measure the achievement of those objectives, including elements
such as, but not limited to:
(a) applicable elements provided for in Union ethical guidelines for trustworthy AI;
(b) assessing and minimising the impact of AI systems on environmental sustainability, including as regards energy-efficient programming and techniques for the efficient design, training and use of AI;
(c) promoting AI literacy, in particular that of persons dealing with the development, operation and use of AI;
(d) facilitating an inclusive and diverse design of AI systems, including through the establishment of inclusive and diverse development teams and the promotion of stakeholders’ participation in that process;
(e) assessing and preventing the negative impact of AI systems on vulnerable persons or groups of vulnerable persons, including as regards accessibility for persons with a disability, as well as on gender equality.
3. Codes of conduct may be drawn up by individual providers or deployers of AI systems or by organisations
representing them or by both, including with the involvement of any interested stakeholders and their
representative organisations, including civil society organisations and academia. Codes of conduct may cover
one or more AI systems taking into account the similarity of the intended purpose of the relevant systems.
4. The AI Office and the Member States shall take into account the specific interests and needs of SMEs, including start-ups, when encouraging and facilitating the drawing up of codes of conduct.
Article 96
Guidelines from the Commission on the implementation of this Regulation
1. The Commission shall develop guidelines on the practical implementation of this Regulation, and in
particular on:
(a) the application of the requirements and obligations referred to in Articles 8 to 15 and in Article 25;
(b) the prohibited practices referred to in Article 5;
(c) the practical implementation of the provisions related to substantial modification;
(d) the practical implementation of transparency obligations laid down in Article 50;
(e) detailed information on the relationship of this Regulation with the Union harmonisation legislation listed in Annex I, as well as with other relevant Union law, including as regards consistency in their enforcement;
(f) the application of the definition of an AI system as set out in Article 3, point (1).
When issuing such guidelines, the Commission shall pay particular attention to the needs of SMEs including start-ups, of local public authorities and of the sectors most likely to be affected by this Regulation.
The guidelines referred to in the first subparagraph of this paragraph shall take due account of the generally acknowledged state of the art on AI, as well as of relevant harmonised standards and common specifications that are referred to in Articles 40 and 41, or of those harmonised standards or technical specifications that are set out pursuant to Union harmonisation law.
2. At the request of the Member States or the AI Office, or on its own initiative, the Commission shall update
guidelines previously adopted when deemed necessary.
CHAPTER XI
DELEGATION OF POWER AND COMMITTEE PROCEDURE
Article 97
Exercise of the delegation
1. The power to adopt delegated acts is conferred on the Commission subject to the conditions laid down in
this Article.
2. The power to adopt delegated acts referred to in Article 6(6) and (7), Article 7(1) and (3), Article 11(3),
Article 43(5) and (6), Article 47(5), Article 51(3), Article 52(4) and Article 53(5) and (6) shall be conferred on
the Commission for a period of five years from 1 August 2024. The Commission shall draw up a report in
respect of the delegation of power not later than nine months before the end of the five-year period. The
delegation of power shall be tacitly extended for periods of an identical duration, unless the European
Parliament or the Council opposes such extension not later than three months before the end of each period.
3. The delegation of power referred to in Article 6(6) and (7), Article 7(1) and (3), Article 11(3), Article 43(5)
and (6), Article 47(5), Article 51(3), Article 52(4) and Article 53(5) and (6) may be revoked at any time by the
European Parliament or by the Council. A decision of revocation shall put an end to the delegation of power
specified in that decision. It shall take effect the day following that of its publication in the Official Journal of
the European Union or at a later date specified therein. It shall not affect the validity of any delegated acts
already in force.
4. Before adopting a delegated act, the Commission shall consult experts designated by each Member State in
accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-
Making.
5. As soon as it adopts a delegated act, the Commission shall notify it simultaneously to the European
Parliament and to the Council.
6. Any delegated act adopted pursuant to Article 6(6) or (7), Article 7(1) or (3), Article 11(3), Article 43(5) or
(6), Article 47(5), Article 51(3), Article 52(4) or Article 53(5) or (6) shall enter into force only if no objection
has been expressed by either the European Parliament or the Council within a period of three months of
notification of that act to the European Parliament and the Council or if, before the expiry of that period, the
European Parliament and the Council have both informed the Commission that they will not object. That period
shall be extended by three months at the initiative of the European Parliament or of the Council.
Article 98
Committee procedure
1. The Commission shall be assisted by a committee. That committee shall be a committee within the meaning of Regulation (EU) No 182/2011.
2. Where reference is made to this paragraph, Article 5 of Regulation (EU) No 182/2011 shall apply.
CHAPTER XII
PENALTIES
Article 99
Penalties
1. In accordance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties and other enforcement measures, which may also include warnings and non-monetary measures, applicable to infringements of this Regulation by operators, and shall take all measures necessary to ensure that they are properly and effectively implemented, thereby taking into account the guidelines issued by the Commission pursuant to Article 96. The penalties provided for shall be effective, proportionate and dissuasive. They shall take into account the interests of SMEs, including start-ups, and their economic viability.
2. The Member States shall, without delay and at the latest by the date of entry into application, notify the Commission of the rules on penalties and of other enforcement measures referred to in paragraph 1, and shall notify it, without delay, of any subsequent amendment to them.
3. Non-compliance with the prohibition of the AI practices referred to in Article 5 shall be subject to administrative fines of up to EUR 35 000 000 or, if the offender is an undertaking, up to 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
4. Non-compliance with any of the following provisions related to operators or notified bodies, other than those laid down in Article 5, shall be subject to administrative fines of up to EUR 15 000 000 or, if the offender is an undertaking, up to 3 % of its total worldwide annual turnover for the preceding financial year, whichever is higher:
(a) obligations of providers pursuant to Article 16;
(b) obligations of authorised representatives pursuant to Article 22;
(c) obligations of importers pursuant to Article 23;
(d) obligations of distributors pursuant to Article 24;
(e) obligations of deployers pursuant to Article 26;
(f) requirements and obligations of notified bodies pursuant to Article 31, Article 33(1), (3) and (4) or Article 34;
(g) transparency obligations for providers and deployers pursuant to Article 50.
5. The supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities in reply to a request shall be subject to administrative fines of up to EUR 7 500 000 or, if the offender is an undertaking, up to 1 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
6. In the case of SMEs, including start-ups, each fine referred to in this Article shall be up to the percentages or amount referred to in paragraphs 3, 4 and 5, whichever thereof is lower.
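The ceilings in paragraphs 3 to 6 follow a simple higher-of / lower-of pattern that can be made concrete in code. This is an illustrative sketch, not legal advice: the amounts and percentages come from the text above, but the function name is an assumption and real fine-setting weighs all the circumstances listed in paragraph 7.

```python
# Illustrative sketch (not legal advice) of the fine ceilings in Article 99(3)-(6).
# The function name is an assumption; amounts and percentages come from the text.

def fine_ceiling_eur(fixed_cap: float, turnover_pct: float,
                     worldwide_turnover: float, sme: bool = False) -> float:
    """Maximum administrative fine for a given infringement category.

    For undertakings the ceiling is the HIGHER of the fixed amount and the
    percentage of total worldwide annual turnover for the preceding financial
    year (paragraphs 3 to 5); for SMEs, including start-ups, it is the LOWER
    of the two (paragraph 6).
    """
    pct_cap = worldwide_turnover * turnover_pct / 100
    return min(fixed_cap, pct_cap) if sme else max(fixed_cap, pct_cap)

# Article 5 prohibitions: EUR 35 000 000 or 7 % of turnover, whichever is higher.
print(fine_ceiling_eur(35_000_000, 7, 1_000_000_000))         # 70000000.0
# The same infringement by an SME: whichever of the two is lower.
print(fine_ceiling_eur(35_000_000, 7, 10_000_000, sme=True))  # 700000.0
```

The same helper covers paragraphs 4 and 5 by substituting EUR 15 000 000 / 3 % and EUR 7 500 000 / 1 % respectively.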
7. When deciding whether to impose an administrative fine and when deciding on the amount of the
administrative fine in each individual case, all relevant circumstances of the specific situation shall be taken
into account and, as appropriate, regard shall be given to the following:
(a) the nature, gravity and duration of the infringement and of its consequences, taking into account the purpose of the AI system, as well as, where appropriate, the number of affected persons and the level of damage suffered by them;
(b) whether administrative fines have already been applied by other market surveillance authorities to the same operator for the same infringement;
(c) whether administrative fines have already been applied by other authorities to the same operator for infringements of other Union or national law, when such infringements result from the same activity or omission constituting a relevant infringement of this Regulation;
(d) the size, the annual turnover and market share of the operator committing the infringement;
(e) any other aggravating or mitigating factor applicable to the circumstances of the case, such as financial benefits gained, or losses avoided, directly or indirectly, from the infringement;
(f) the degree of cooperation with the national competent authorities, in order to remedy the infringement and mitigate the possible adverse effects of the infringement;
(g) the degree of responsibility of the operator taking into account the technical and organisational measures implemented by it;
(h) the manner in which the infringement became known to the national competent authorities, in particular whether, and if so to what extent, the operator notified the infringement;
(i) the intentional or negligent character of the infringement;
(j) any action taken by the operator to mitigate the harm suffered by the affected persons.
8. Each Member State shall lay down rules on to what extent administrative fines may be imposed on public
authorities and bodies established in that Member State.
9. Depending on the legal system of the Member States, the rules on administrative fines may be applied in
such a manner that the fines are imposed by competent national courts or by other bodies, as applicable in those
Member States. The application of such rules in those Member States shall have an equivalent effect.
10. The exercise of powers under this Article shall be subject to appropriate procedural safeguards in
accordance with Union and national law, including effective judicial remedies and due process.
11. Member States shall, on an annual basis, report to the Commission about the administrative fines they
have issued during that year, in accordance with this Article, and about any related litigation or judicial
proceedings.
Article 100
Administrative fines on Union institutions, bodies, offices and agencies
1. The European Data Protection Supervisor may impose administrative fines on Union institutions, bodies,
offices and agencies falling within the scope of this Regulation. When deciding whether to impose an
administrative fine and when deciding on the amount of the administrative fine in each individual case, all
relevant circumstances of the specific situation shall be taken into account and due regard shall be given to the
following:
(a)the nature, gravity and duration of the infringement and of its consequences, taking into account the purpose of the AI system concerned, as well as, where appropriate, the number of affected persons and the level of damage suffered by them;
(b)the degree of responsibility of the Union institution, body, office or agency, taking into account technical and organisational measures implemented by them;
(c)any action taken by the Union institution, body, office or agency to mitigate the damage suffered by
affected persons;
(d)the degree of cooperation with the European Data Protection Supervisor in order to remedy the
infringement and mitigate the possible adverse effects of the infringement, including compliance with any
of the measures previously ordered by the European Data Protection Supervisor against the Union
institution, body, office or agency concerned with regard to the same subject matter;
(e)any similar previous infringements by the Union institution, body, office or agency;
(f)the manner in which the infringement became known to the European Data Protection Supervisor, in particular whether, and if so to what extent, the Union institution, body, office or agency notified the infringement;
(g)the annual budget of the Union institution, body, office or agency.
2. Non-compliance with the prohibition of the AI practices referred to in Article 5 shall be subject to
administrative fines of up to EUR 1 500 000.
3. The non-compliance of the AI system with any requirements or obligations under this Regulation, other
than those laid down in Article 5, shall be subject to administrative fines of up to EUR 750 000.
4. Before taking decisions pursuant to this Article, the European Data Protection Supervisor shall give the
Union institution, body, office or agency which is the subject of the proceedings conducted by the European
Data Protection Supervisor the opportunity of being heard on the matter regarding the possible infringement.
The European Data Protection Supervisor shall base his or her decisions only on elements and circumstances
on which the parties concerned have been able to comment. Complainants, if any, shall be associated closely
with the proceedings.
5. The rights of defence of the parties concerned shall be fully respected in the proceedings. They shall be
entitled to have access to the European Data Protection Supervisor’s file, subject to the legitimate interest of
individuals or undertakings in the protection of their personal data or business secrets.
6. Funds collected by imposition of fines in this Article shall contribute to the general budget of the Union.
The fines shall not affect the effective operation of the Union institution, body, office or agency fined.
7. The European Data Protection Supervisor shall, on an annual basis, notify the Commission of the
administrative fines it has imposed pursuant to this Article and of any litigation or judicial proceedings it has
initiated.
Article 101
Fines for providers of general-purpose AI models
1. The Commission may impose on providers of general-purpose AI models fines not exceeding 3 % of their annual total worldwide turnover in the preceding financial year or EUR 15 000 000, whichever is higher, when the Commission finds that the provider intentionally or negligently:
(a)infringed the relevant provisions of this Regulation;
(b)failed to comply with a request for a document or for information pursuant to Article 91, or supplied
incorrect, incomplete or misleading information;
(c)failed to comply with a measure requested under Article 93;
(d)failed to make available to the Commission access to the general-purpose AI model or general-purpose AI
model with systemic risk with a view to conducting an evaluation pursuant to Article 92.
In fixing the amount of the fine or periodic penalty payment, regard shall be had to the nature, gravity and
duration of the infringement, taking due account of the principles of proportionality and appropriateness. The
Commission shall also take into account commitments made in accordance with Article 93(3) or made in relevant
codes of practice in accordance with Article 56.
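The ceiling in paragraph 1 is the higher of a turnover-based amount and a fixed amount. As a purely illustrative sketch (the function name is the editor's, not the Regulation's, and the actual fine is set case by case below this ceiling):

```python
from decimal import Decimal

def art101_fine_ceiling(annual_worldwide_turnover_eur: Decimal) -> Decimal:
    """Illustrative ceiling under Article 101(1): the higher of 3 % of the
    provider's total worldwide annual turnover in the preceding financial
    year and EUR 15 000 000."""
    turnover_based = annual_worldwide_turnover_eur * Decimal("0.03")
    return max(turnover_based, Decimal("15000000"))

# EUR 1 billion turnover: 3 % (EUR 30 million) exceeds the EUR 15 million
# floor, so the ceiling is EUR 30 million.
print(art101_fine_ceiling(Decimal("1000000000")))  # 30000000.00
```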
2. Before adopting the decision pursuant to paragraph 1, the Commission shall communicate its preliminary
findings to the provider of the general-purpose AI model and give it an opportunity to be heard.
3. Fines imposed in accordance with this Article shall be effective, proportionate and dissuasive.
4. Information on fines imposed under this Article shall also be communicated to the Board as appropriate.
5. The Court of Justice of the European Union shall have unlimited jurisdiction to review decisions of the
Commission fixing a fine under this Article. It may cancel, reduce or increase the fine imposed.
6. The Commission shall adopt implementing acts containing detailed arrangements and procedural
safeguards for proceedings in view of the possible adoption of decisions pursuant to paragraph 1 of this Article.
Those implementing acts shall be adopted in accordance with the examination procedure referred to in
Article 98(2).
CHAPTER XIII
FINAL PROVISIONS
Article 102
Amendment to Regulation (EC) No 300/2008
In Article 4(3) of Regulation (EC) No 300/2008, the following subparagraph is added:
‘When adopting detailed measures related to technical specifications and procedures for approval and use of
security equipment concerning Artificial Intelligence systems within the meaning of Regulation (EU)
2024/1689 of the European Parliament and of the Council (*), the requirements set out in Chapter III, Section 2,
of that Regulation shall be taken into account.
Article 103
Amendment to Regulation (EU) No 167/2013
In Article 17(5) of Regulation (EU) No 167/2013, the following subparagraph is added:
‘When adopting delegated acts pursuant to the first subparagraph concerning artificial intelligence systems
which are safety components within the meaning of Regulation (EU) 2024/1689 of the European Parliament
and of the Council (*), the requirements set out in Chapter III, Section 2, of that Regulation shall be taken into
account.
Article 104
Amendment to Regulation (EU) No 168/2013
In Article 22(5) of Regulation (EU) No 168/2013, the following subparagraph is added:
‘When adopting delegated acts pursuant to the first subparagraph concerning Artificial Intelligence systems
which are safety components within the meaning of Regulation (EU) 2024/1689 of the European Parliament
and of the Council (*), the requirements set out in Chapter III, Section 2, of that Regulation shall be taken into
account.
Article 105
Amendment to Directive 2014/90/EU
In Article 8 of Directive 2014/90/EU, the following paragraph is added:
‘5. For Artificial Intelligence systems which are safety components within the meaning of Regulation (EU)
2024/1689 of the European Parliament and of the Council (*), when carrying out its activities pursuant to
paragraph 1 and when adopting technical specifications and testing standards in accordance with paragraphs 2
and 3, the Commission shall take into account the requirements set out in Chapter III, Section 2, of that
Regulation.
Article 106
Amendment to Directive (EU) 2016/797
In Article 5 of Directive (EU) 2016/797, the following paragraph is added:
‘12. When adopting delegated acts pursuant to paragraph 1 and implementing acts pursuant to paragraph 11
concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU)
2024/1689 of the European Parliament and of the Council (*), the requirements set out in Chapter III, Section 2,
of that Regulation shall be taken into account.
Article 107
Amendment to Regulation (EU) 2018/858
In Article 5 of Regulation (EU) 2018/858 the following paragraph is added:
‘4. When adopting delegated acts pursuant to paragraph 3 concerning Artificial Intelligence systems which are
safety components within the meaning of Regulation (EU) 2024/1689 of the European Parliament and of the
Council (*), the requirements set out in Chapter III, Section 2, of that Regulation shall be taken into account.
Article 108
Amendments to Regulation (EU) 2018/1139
Regulation (EU) 2018/1139 is amended as follows:
(1)in Article 17, the following paragraph is added:
‘3. Without prejudice to paragraph 2, when adopting implementing acts pursuant to paragraph 1
concerning Artificial Intelligence systems which are safety components within the meaning of Regulation
(EU) 2024/1689 of the European Parliament and of the Council (*), the requirements set out in Chapter III,
Section 2, of that Regulation shall be taken into account.
(*) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on
artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU)
2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L,
2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/2024/1689/oj).’;
(2)in Article 19, the following paragraph is added:
‘4. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence
systems which are safety components within the meaning of Regulation (EU) 2024/1689, the requirements
set out in Chapter III, Section 2, of that Regulation shall be taken into account.’;
(3)in Article 43, the following paragraph is added:
‘4. When adopting implementing acts pursuant to paragraph 1 concerning Artificial Intelligence systems
which are safety components within the meaning of Regulation (EU) 2024/1689, the requirements set out
in Chapter III, Section 2, of that Regulation shall be taken into account.’;
(4)in Article 47, the following paragraph is added:
‘3. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence
systems which are safety components within the meaning of Regulation (EU) 2024/1689, the requirements
set out in Chapter III, Section 2, of that Regulation shall be taken into account.’;
(5)in Article 57, the following subparagraph is added:
‘When adopting those implementing acts concerning Artificial Intelligence systems which are safety
components within the meaning of Regulation (EU) 2024/1689, the requirements set out in Chapter III,
Section 2, of that Regulation shall be taken into account.’;
(6)in Article 58, the following paragraph is added:
‘3. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence
systems which are safety components within the meaning of Regulation (EU) 2024/1689, the requirements
set out in Chapter III, Section 2, of that Regulation shall be taken into account.’.
Article 109
Amendment to Regulation (EU) 2019/2144
In Article 11 of Regulation (EU) 2019/2144, the following paragraph is added:
‘3. When adopting the implementing acts pursuant to paragraph 2, concerning artificial intelligence systems
which are safety components within the meaning of Regulation (EU) 2024/1689 of the European Parliament and of the Council (*), the requirements set out in Chapter III, Section 2, of that Regulation shall be taken into account.
account.
Article 110
Amendment to Directive (EU) 2020/1828
In Annex I to Directive (EU) 2020/1828 of the European Parliament and of the Council (58), the following point
is added:
‘(68) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down
harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU)
No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives
2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689,
12.7.2024, ELI: http://data.europa.eu/eli/reg/2024/1689/oj ).’.
Article 111
AI systems already placed on the market or put into service and general-purpose AI models already placed on the market
1. Without prejudice to the application of Article 5 as referred to in Article 113(3), point (a), AI systems which
are components of the large-scale IT systems established by the legal acts listed in Annex X that have been
placed on the market or put into service before 2 August 2027 shall be brought into compliance with this
Regulation by 31 December 2030.
The requirements laid down in this Regulation shall be taken into account in the evaluation of each large-scale
IT system established by the legal acts listed in Annex X to be undertaken as provided for in those legal acts
and where those legal acts are replaced or amended.
2. Without prejudice to the application of Article 5 as referred to in Article 113(3), point (a), this Regulation
shall apply to operators of high-risk AI systems, other than the systems referred to in paragraph 1 of this
Article, that have been placed on the market or put into service before 2 August 2026, only if, as from that date,
those systems are subject to significant changes in their designs. In any case, the providers and deployers of
high-risk AI systems intended to be used by public authorities shall take the necessary steps to comply with the
requirements and obligations of this Regulation by 2 August 2030.
3. Providers of general-purpose AI models that have been placed on the market before 2 August 2025 shall
take the necessary steps in order to comply with the obligations laid down in this Regulation by 2 August 2027.
Article 112
Evaluation and review
1. The Commission shall assess the need for amendment of the list set out in Annex III and of the list of
prohibited AI practices laid down in Article 5, once a year following the entry into force of this Regulation, and
until the end of the period of the delegation of power laid down in Article 97. The Commission shall submit the
findings of that assessment to the European Parliament and the Council.
2. By 2 August 2028 and every four years thereafter, the Commission shall evaluate and report to the
European Parliament and to the Council on the following:
(a)the need for amendments extending existing area headings or adding new area headings in Annex III;
(b)amendments to the list of AI systems requiring additional transparency measures in Article 50;
(c)amendments enhancing the effectiveness of the supervision and governance system.
3. By 2 August 2029 and every four years thereafter, the Commission shall submit a report on the evaluation
and review of this Regulation to the European Parliament and to the Council. The report shall include an
assessment with regard to the structure of enforcement and the possible need for a Union agency to resolve any
identified shortcomings. On the basis of the findings, that report shall, where appropriate, be accompanied by
a proposal for amendment of this Regulation. The reports shall be made public.
4. The reports referred to in paragraph 2 shall pay specific attention to the following:
(a)the status of the financial, technical and human resources of the national competent authorities in order to
effectively perform the tasks assigned to them under this Regulation;
(b)the state of penalties, in particular administrative fines as referred to in Article 99(1), applied by Member
States for infringements of this Regulation;
(c)adopted harmonised standards and common specifications developed to support this Regulation;
(d)the number of undertakings that enter the market after the entry into application of this Regulation, and
how many of them are SMEs.
5. By 2 August 2028, the Commission shall evaluate the functioning of the AI Office, whether the AI Office
has been given sufficient powers and competences to fulfil its tasks, and whether it would be relevant and needed for the proper implementation and enforcement of this Regulation to upgrade the AI Office and its enforcement competences and to increase its resources. The Commission shall submit a report on its evaluation
enforcement competences and to increase its resources. The Commission shall submit a report on its evaluation
to the European Parliament and to the Council.
6. By 2 August 2028 and every four years thereafter, the Commission shall submit a report on the review of the progress on the development of standardisation deliverables on the energy-efficient development of general-purpose AI models, and assess the need for further measures or actions, including binding measures or actions.
The report shall be submitted to the European Parliament and to the Council, and it shall be made public.
7. By 2 August 2028 and every three years thereafter, the Commission shall evaluate the impact and effectiveness of voluntary codes of conduct to foster the application of the requirements set out in Chapter III, Section 2 for AI systems other than high-risk AI systems and possibly other additional requirements for AI systems other than high-risk AI systems, including as regards environmental sustainability.
8. For the purposes of paragraphs 1 to 7, the Board, the Member States and national competent authorities
shall provide the Commission with information upon its request and without undue delay.
9. In carrying out the evaluations and reviews referred to in paragraphs 1 to 7, the Commission shall take into
account the positions and findings of the Board, of the European Parliament, of the Council, and of other
relevant bodies or sources.
10. The Commission shall, if necessary, submit appropriate proposals to amend this Regulation, in particular taking into account developments in technology, the effect of AI systems on health and safety, and on fundamental rights, and in light of the state of progress in the information society.
11. To guide the evaluations and reviews referred to in paragraphs 1 to 7 of this Article, the AI Office shall
undertake to develop an objective and participative methodology for the evaluation of risk levels based on the
criteria outlined in the relevant Articles and the inclusion of new systems in:
(a)the list set out in Annex III, including the extension of existing area headings or the addition of new area
headings in that Annex;
(b)the list of prohibited practices set out in Article 5; and
(c)the list of AI systems requiring additional transparency measures pursuant to Article 50.
12. Any amendment to this Regulation pursuant to paragraph 10, or relevant delegated or implementing acts,
which concerns sectoral Union harmonisation legislation listed in Section B of Annex I shall take into account
the regulatory specificities of each sector, and the existing governance, conformity assessment and enforcement
mechanisms and authorities established therein.
13. By 2 August 2031, the Commission shall carry out an assessment of the enforcement of this Regulation
and shall report on it to the European Parliament, the Council and the European Economic and Social
Committee, taking into account the first years of application of this Regulation. On the basis of the findings,
that report shall, where appropriate, be accompanied by a proposal for amendment of this Regulation with
regard to the structure of enforcement and the need for a Union agency to resolve any identified shortcomings.
Article 113
Entry into force and application
This Regulation shall enter into force on the twentieth day following that of its publication in the Official
Journal of the European Union.
It shall apply from 2 August 2026.
However:
(a)Chapters I and II shall apply from 2 February 2025;
(b)Chapter III, Section 4, Chapter V, Chapter VII and Chapter XII and Article 78 shall apply from 2 August
2025, with the exception of Article 101;
(c)Article 6(1) and the corresponding obligations in this Regulation shall apply from 2 August 2027.
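Purely as an editor's illustration (not part of the legislative text), the entry-into-force date follows mechanically from the first paragraph: publication in the Official Journal was 12 July 2024, so the twentieth day following publication is:

```python
from datetime import date, timedelta

# Publication date in the Official Journal (OJ L, 2024/1689): 12 July 2024.
publication = date(2024, 7, 12)

# "the twentieth day following that of its publication"
entry_into_force = publication + timedelta(days=20)
print(entry_into_force)  # 2024-08-01
```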
This Regulation shall be binding in its entirety and directly applicable in all Member States.
Done at Brussels, 13 June 2024.
For the European Parliament
The President
R. METSOLA
For the Council
The President
M. MICHEL
(1) OJ C 517, 22.12.2021, p. 56.
(2) OJ C 115, 11.3.2022, p. 5.
(3) OJ C 97, 28.2.2022, p. 60.
(4) Position of the European Parliament of 13 March 2024 (not yet published in the Official Journal) and decision of the Council of 21 May
2024.
(5) European Council, Special meeting of the European Council (1 and 2 October 2020) — Conclusions, EUCO 13/20, 2020, p. 6.
(6) European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of
artificial intelligence, robotics and related technologies, 2020/2012(INL).
(7) Regulation (EC) No 765/2008 of the European Parliament and of the Council of 9 July 2008 setting out the requirements for
accreditation and repealing Regulation (EEC) No 339/93 (OJ L 218, 13.8.2008, p. 30).
(8) Decision No 768/2008/EC of the European Parliament and of the Council of 9 July 2008 on a common framework for the marketing of
products, and repealing Council Decision 93/465/EEC (OJ L 218, 13.8.2008, p. 82).
(9) Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of
products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 (OJ L 169, 25.6.2019, p. 1).
(10) Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the
Member States concerning liability for defective products (OJ L 210, 7.8.1985, p. 29).
(11) Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with
regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data
Protection Regulation) (OJ L 119, 4.5.2016, p. 1).
(12) Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons
with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such
data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39).
(13) Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with
regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution
of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework
Decision 2008/977/JHA (OJ L 119, 4.5.2016, p. 89).
(14) Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and
the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications) (OJ L 201,
31.7.2002, p. 37).
(15) Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital
Services and amending Directive 2000/31/EC (Digital Services Act) (OJ L 277, 27.10.2022, p. 1).
(16) Directive (EU) 2019/882 of the European Parliament and of the Council of 17 April 2019 on the accessibility requirements for
products and services (OJ L 151, 7.6.2019, p. 70).
(17) Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer
commercial practices in the internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC and 2002/65/EC
of the European Parliament and of the Council and Regulation (EC) No 2006/2004 of the European Parliament and of the Council (‘Unfair
Commercial Practices Directive’) (OJ L 149, 11.6.2005, p. 22).
(18) Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between
Member States (OJ L 190, 18.7.2002, p. 1).
(19) Directive (EU) 2022/2557 of the European Parliament and of the Council of 14 December 2022 on the resilience of critical entities and
repealing Council Directive 2008/114/EC (OJ L 333, 27.12.2022, p. 164).
(20) OJ C 247, 29.6.2022, p. 1.
(21) Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive
2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and
93/42/EEC (OJ L 117, 5.5.2017, p. 1).
(22) Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and
repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).
(23) Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive
95/16/EC (OJ L 157, 9.6.2006, p. 24).
(24) Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules in the field of civil
aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72).
(25) Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market
surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1).
(26) Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval and market
surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52).
(27) Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and repealing Council
Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146).
(28) Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system
within the European Union (OJ L 138, 26.5.2016, p. 44).
(29) Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of
motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations
(EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1).
(30) Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the field of civil
aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) No 1008/2008,
(EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament and of the Council, and
repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European Parliament and of the Council and Council Regulation
(EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1).
(31) Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for
motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general
safety and the protection of vehicle occupants and vulnerable road users, amending Regulation (EU) 2018/858 of the European Parliament
and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the European Parliament and
of the Council and Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU)
No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011, (EU) No 65/2012,
(EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1).
(32) Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on
Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1).
(33) Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and
withdrawing international protection (OJ L 180, 29.6.2013, p. 60).
(34) Regulation (EU) 2024/900 of the European Parliament and of the Council of 13 March 2024 on the transparency and targeting of
political advertising (OJ L, 2024/900, 20.3.2024, ELI: http://data.europa.eu/eli/reg/2024/900/oj).
(35) Directive 2014/31/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the
Member States relating to the making available on the market of non-automatic weighing instruments (OJ L 96, 29.3.2014, p. 107).
(36) Directive 2014/32/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the
Member States relating to the making available on the market of measuring instruments (OJ L 96, 29.3.2014, p. 149).
(37) Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency
for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation
(EU) No 526/2013 (Cybersecurity Act) (OJ L 151, 7.6.2019, p. 15).
(38) Directive (EU) 2016/2102 of the European Parliament and of the Council of 26 October 2016 on the accessibility of the websites and
mobile applications of public sector bodies (OJ L 327, 2.12.2016, p. 1).
(39) Directive 2002/14/EC of the European Parliament and of the Council of 11 March 2002 establishing a general framework for
informing and consulting employees in the European Community (OJ L 80, 23.3.2002, p. 29).
(40) Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital
Single Market and amending Directives 96/9/EC and 2001/29/EC (OJ L 130, 17.5.2019, p. 92).
(41) Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation,
amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC,
2007/23/EC, 2009/23/EC and 2009/105/EC of the European Parliament and of the Council and repealing Council Decision 87/95/EEC and
Decision No 1673/2006/EC of the European Parliament and of the Council (OJ L 316, 14.11.2012, p. 12).
(42) Regulation (EU) 2022/868 of the European Parliament and of the Council of 30 May 2022 on European data governance and
amending Regulation (EU) 2018/1724 (Data Governance Act) (OJ L 152, 3.6.2022, p. 1).
(43) Regulation (EU) 2023/2854 of the European Parliament and of the Council of 13 December 2023 on harmonised rules on fair access to
and use of data and amending Regulation (EU) 2017/2394 and Directive (EU) 2020/1828 (Data Act) (OJ L, 2023/2854, 22.12.2023,
ELI: http://data.europa.eu/eli/reg/2023/2854/oj).
(44) Commission Recommendation of 6 May 2003 concerning the definition of micro, small and medium-sized enterprises (OJ L 124,
20.5.2003, p. 36).
(45) Commission Decision of 24.1.2024 establishing the European Artificial Intelligence Office C(2024) 390.
(46) Regulation (EU) No 575/2013 of the European Parliament and of the Council of 26 June 2013 on prudential requirements for credit
institutions and investment firms and amending Regulation (EU) No 648/2012 (OJ L 176, 27.6.2013, p. 1).
(47) Directive 2008/48/EC of the European Parliament and of the Council of 23 April 2008 on credit agreements for consumers and
repealing Council Directive 87/102/EEC (OJ L 133, 22.5.2008, p. 66).
(48) Directive 2009/138/EC of the European Parliament and of the Council of 25 November 2009 on the taking-up and pursuit of the
business of Insurance and Reinsurance (Solvency II) (OJ L 335, 17.12.2009, p. 1).
(49) Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions
and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing
Directives 2006/48/EC and 2006/49/EC (OJ L 176, 27.6.2013, p. 338).
(50) Directive 2014/17/EU of the European Parliament and of the Council of 4 February 2014 on credit agreements for consumers relating
to residential immovable property and amending Directives 2008/48/EC and 2013/36/EU and Regulation (EU) No 1093/2010 (OJ L 60,
28.2.2014, p. 34).
(51) Directive (EU) 2016/97 of the European Parliament and of the Council of 20 January 2016 on insurance distribution (OJ L 26,
2.2.2016, p. 19).
(52) Council Regulation (EU) No 1024/2013 of 15 October 2013 conferring specific tasks on the European Central Bank concerning
policies relating to the prudential supervision of credit institutions (OJ L 287, 29.10.2013, p. 63).
(53) Regulation (EU) 2023/988 of the European Parliament and of the Council of 10 May 2023 on general product safety, amending
Regulation (EU) No 1025/2012 of the European Parliament and of the Council and Directive (EU) 2020/1828 of the European Parliament
and of the Council, and repealing Directive 2001/95/EC of the European Parliament and of the Council and Council Directive 87/357/EEC
(OJ L 135, 23.5.2023, p. 1).
(54) Directive (EU) 2019/1937 of the European Parliament and of the Council of 23 October 2019 on the protection of persons who report
breaches of Union law (OJ L 305, 26.11.2019, p. 17).
(55) OJ L 123, 12.5.2016, p. 1.
(56) Regulation (EU) No 182/2011 of the European Parliament and of the Council of 16 February 2011 laying down the rules and general
principles concerning mechanisms for control by Member States of the Commission’s exercise of implementing powers (OJ L 55,
28.2.2011, p. 13).
(57) Directive (EU) 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how
and business information (trade secrets) against their unlawful acquisition, use and disclosure (OJ L 157, 15.6.2016, p. 1).
(58) Directive (EU) 2020/1828 of the European Parliament and of the Council of 25 November 2020 on representative actions for the
protection of the collective interests of consumers and repealing Directive 2009/22/EC (OJ L 409, 4.12.2020, p. 1).
ANNEX I
List of Union harmonisation legislation
Section A. List of Union harmonisation legislation based on the New Legislative Framework
1. Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery,
and amending Directive 95/16/EC (OJ L 157, 9.6.2006, p. 24);
2. Directive 2009/48/EC of the European Parliament and of the Council of 18 June 2009 on the safety of
toys (OJ L 170, 30.6.2009, p. 1);
3. Directive 2013/53/EU of the European Parliament and of the Council of 20 November 2013 on
recreational craft and personal watercraft and repealing Directive 94/25/EC (OJ L 354, 28.12.2013, p. 90);
2/20/25, 8:13 PM L_202401689EN.000101.fmx.xml
https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L_202401689 99/110
4. Directive 2014/33/EU of the European Parliament and of the Council of 26 February 2014 on the
harmonisation of the laws of the Member States relating to lifts and safety components for lifts
(OJ L 96, 29.3.2014, p. 251);
5. Directive 2014/34/EU of the European Parliament and of the Council of 26 February 2014 on the
harmonisation of the laws of the Member States relating to equipment and protective systems intended
for use in potentially explosive atmospheres (OJ L 96, 29.3.2014, p. 309);
6. Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on the
harmonisation of the laws of the Member States relating to the making available on the market of
radio equipment and repealing Directive 1999/5/EC (OJ L 153, 22.5.2014, p. 62);
7. Directive 2014/68/EU of the European Parliament and of the Council of 15 May 2014 on the
harmonisation of the laws of the Member States relating to the making available on the market of
pressure equipment (OJ L 189, 27.6.2014, p. 164);
8. Regulation (EU) 2016/424 of the European Parliament and of the Council of 9 March 2016 on
cableway installations and repealing Directive 2000/9/EC (OJ L 81, 31.3.2016, p. 1);
9. Regulation (EU) 2016/425 of the European Parliament and of the Council of 9 March 2016 on
personal protective equipment and repealing Council Directive 89/686/EEC (OJ L 81, 31.3.2016, p. 51);
10. Regulation (EU) 2016/426 of the European Parliament and of the Council of 9 March 2016 on
appliances burning gaseous fuels and repealing Directive 2009/142/EC (OJ L 81, 31.3.2016, p. 99);
11. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical
devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC)
No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1);
12. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro
diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU
(OJ L 117, 5.5.2017, p. 176).
Section B. List of other Union harmonisation legislation
13. Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on
common rules in the field of civil aviation security and repealing Regulation (EC) No 2320/2002
(OJ L 97, 9.4.2008, p. 72);
14. Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on
the approval and market surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60,
2.3.2013, p. 52);
15. Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on
the approval and market surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1);
16. Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine
equipment and repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146);
17. Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the
interoperability of the rail system within the European Union (OJ L 138, 26.5.2016, p. 44);
18. Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the
approval and market surveillance of motor vehicles and their trailers, and of systems, components and
separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC)
No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1);
19. Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on
type-approval requirements for motor vehicles and their trailers, and systems, components and
separate technical units intended for such vehicles, as regards their general safety and the protection of
vehicle occupants and vulnerable road users, amending Regulation (EU) 2018/858 of the European
Parliament and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC)
No 661/2009 of the European Parliament and of the Council and Commission Regulations
(EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010,
(EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011,
(EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and
(EU) 2015/166 (OJ L 325, 16.12.2019, p. 1);
20. Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common
rules in the field of civil aviation and establishing a European Union Aviation Safety Agency, and
amending Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU)
No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament and of the
Council, and repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European
Parliament and of the Council and Council Regulation (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1),
in so far as the design, production and placing on the market of aircraft referred to in Article 2(1),
points (a) and (b) thereof, where it concerns unmanned aircraft and their engines, propellers, parts and
equipment to control them remotely, are concerned.
ANNEX II
List of criminal offences referred to in Article 5(1), first subparagraph, point (h)(iii)
Criminal offences referred to in Article 5(1), first subparagraph, point (h)(iii):
— terrorism,
— trafficking in human beings,
— sexual exploitation of children, and child pornography,
— illicit trafficking in narcotic drugs or psychotropic substances,
— illicit trafficking in weapons, munitions or explosives,
— murder, grievous bodily injury,
— illicit trade in human organs or tissue,
— illicit trafficking in nuclear or radioactive materials,
— kidnapping, illegal restraint or hostage-taking,
— crimes within the jurisdiction of the International Criminal Court,
— unlawful seizure of aircraft or ships,
— rape,
— environmental crime,
— organised or armed robbery,
— sabotage,
— participation in a criminal organisation involved in one or more of the offences listed above.
ANNEX III
High-risk AI systems referred to in Article 6(2)
High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas:
1.Biometrics, in so far as their use is permitted under relevant Union or national law:
(a)remote biometric identification systems.
This shall not include AI systems intended to be used for biometric verification the sole purpose of
which is to confirm that a specific natural person is the person he or she claims to be;
(b)AI systems intended to be used for biometric categorisation, according to sensitive or protected
attributes or characteristics based on the inference of those attributes or characteristics;
(c)AI systems intended to be used for emotion recognition.
2.Critical infrastructure: AI systems intended to be used as safety components in the management and
operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity.
3.Education and vocational training:
(a)AI systems intended to be used to determine access or admission or to assign natural persons to
educational and vocational training institutions at all levels;
(b)AI systems intended to be used to evaluate learning outcomes, including when those outcomes are
used to steer the learning process of natural persons in educational and vocational training institutions
at all levels;
(c)AI systems intended to be used for the purpose of assessing the appropriate level of education that an
individual will receive or will be able to access, in the context of or within educational and vocational
training institutions at all levels;
(d)AI systems intended to be used for monitoring and detecting prohibited behaviour of students during
tests in the context of or within educational and vocational training institutions at all levels.
4.Employment, workers’ management and access to self-employment:
(a)AI systems intended to be used for the recruitment or selection of natural persons, in particular to place
targeted job advertisements, to analyse and filter job applications, and to evaluate candidates;
(b)AI systems intended to be used to make decisions affecting terms of work-related relationships, the
promotion or termination of work-related contractual relationships, to allocate tasks based on
individual behaviour or personal traits or characteristics or to monitor and evaluate the performance
and behaviour of persons in such relationships.
5.Access to and enjoyment of essential private services and essential public services and benefits:
(a)AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the
eligibility of natural persons for essential public assistance benefits and services, including healthcare
services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
(b)AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their
credit score, with the exception of AI systems used for the purpose of detecting financial fraud;
(c)AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case
of life and health insurance;
(d)AI systems intended to evaluate and classify emergency calls by natural persons or to be used to
dispatch, or to establish priority in the dispatching of, emergency first response services, including by
police, firefighters and medical aid, as well as of emergency healthcare patient triage systems.
6.Law enforcement, in so far as their use is permitted under relevant Union or national law:
(a)AI systems intended to be used by or on behalf of law enforcement authorities, or by Union
institutions, bodies, offices or agencies in support of law enforcement authorities or on their behalf to
assess the risk of a natural person becoming the victim of criminal offences;
(b)AI systems intended to be used by or on behalf of law enforcement authorities or by Union
institutions, bodies, offices or agencies in support of law enforcement authorities as polygraphs or
similar tools;
(c)AI systems intended to be used by or on behalf of law enforcement authorities, or by Union
institutions, bodies, offices or agencies, in support of law enforcement authorities to evaluate the
reliability of evidence in the course of the investigation or prosecution of criminal offences;
(d)AI systems intended to be used by law enforcement authorities or on their behalf or by Union
institutions, bodies, offices or agencies in support of law enforcement authorities for assessing the risk
of a natural person offending or re-offending not solely on the basis of the profiling of natural persons
as referred to in Article 3(4) of Directive (EU) 2016/680, or to assess personality traits and
characteristics or past criminal behaviour of natural persons or groups;
(e)AI systems intended to be used by or on behalf of law enforcement authorities or by Union
institutions, bodies, offices or agencies in support of law enforcement authorities for the profiling of
natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of the detection,
investigation or prosecution of criminal offences.
7.Migration, asylum and border control management, in so far as their use is permitted under relevant Union
or national law:
(a)AI systems intended to be used by or on behalf of competent public authorities or by Union
institutions, bodies, offices or agencies as polygraphs or similar tools;
(b)AI systems intended to be used by or on behalf of competent public authorities or by Union
institutions, bodies, offices or agencies to assess a risk, including a security risk, a risk of irregular
migration, or a health risk, posed by a natural person who intends to enter or who has entered into the
territory of a Member State;
(c)AI systems intended to be used by or on behalf of competent public authorities or by Union
institutions, bodies, offices or agencies to assist competent public authorities for the examination of
applications for asylum, visa or residence permits and for associated complaints with regard to the
eligibility of the natural persons applying for a status, including related assessments of the reliability of
evidence;
(d)AI systems intended to be used by or on behalf of competent public authorities, or by Union
institutions, bodies, offices or agencies, in the context of migration, asylum or border control
management, for the purpose of detecting, recognising or identifying natural persons, with the
exception of the verification of travel documents.
8.Administration of justice and democratic processes:
(a)AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in
researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to
be used in a similar way in alternative dispute resolution;
(b)AI systems intended to be used for influencing the outcome of an election or referendum or the voting
behaviour of natural persons in the exercise of their vote in elections or referenda. This does not
include AI systems to the output of which natural persons are not directly exposed, such as tools used
to organise, optimise or structure political campaigns from an administrative or logistical point of view.
ANNEX IV
Technical documentation referred to in Article 11(1)
The technical documentation referred to in Article 11(1) shall contain at least the following information, as
applicable to the relevant AI system:
1.A general description of the AI system including:
(a)its intended purpose, the name of the provider and the version of the system reflecting its relation to
previous versions;
(b)how the AI system interacts with, or can be used to interact with, hardware or software, including with
other AI systems, that are not part of the AI system itself, where applicable;
(c)the versions of relevant software or firmware, and any requirements related to version updates;
(d)the description of all the forms in which the AI system is placed on the market or put into service, such
as software packages embedded into hardware, downloads, or APIs;
(e)the description of the hardware on which the AI system is intended to run;
(f)where the AI system is a component of products, photographs or illustrations showing external
features, the marking and internal layout of those products;
(g)a basic description of the user-interface provided to the deployer;
(h)instructions for use for the deployer, and a basic description of the user-interface provided to the
deployer, where applicable;
2.A detailed description of the elements of the AI system and of the process for its development, including:
(a)the methods and steps performed for the development of the AI system, including, where relevant,
recourse to pre-trained systems or tools provided by third parties and how those were used, integrated
or modified by the provider;
(b)the design specifications of the system, namely the general logic of the AI system and of the
algorithms; the key design choices including the rationale and assumptions made, including with
regard to persons or groups of persons in respect of whom the system is intended to be used; the main
classification choices; what the system is designed to optimise for, and the relevance of the different
parameters; the description of the expected output and output quality of the system; the decisions about
any possible trade-off made regarding the technical solutions adopted to comply with the requirements
set out in Chapter III, Section 2;
(c)the description of the system architecture explaining how software components build on or feed into
each other and integrate into the overall processing; the computational resources used to develop, train,
test and validate the AI system;
(d)where relevant, the data requirements in terms of datasheets describing the training methodologies and
techniques and the training data sets used, including a general description of these data sets,
information about their provenance, scope and main characteristics; how the data was obtained and
selected; labelling procedures (e.g. for supervised learning), data cleaning methodologies (e.g. outliers
detection);
(e)assessment of the human oversight measures needed in accordance with Article 14, including an
assessment of the technical measures needed to facilitate the interpretation of the outputs of AI systems
by the deployers, in accordance with Article 13(3), point (d);
(f)where applicable, a detailed description of pre-determined changes to the AI system and its
performance, together with all the relevant information related to the technical solutions adopted to
ensure continuous compliance of the AI system with the relevant requirements set out in Chapter III,
Section 2;
(g)the validation and testing procedures used, including information about the validation and testing data
used and their main characteristics; metrics used to measure accuracy, robustness and compliance with
other relevant requirements set out in Chapter III, Section 2, as well as potentially discriminatory
impacts; test logs and all test reports dated and signed by the responsible persons, including with
regard to pre-determined changes as referred to under point (f);
(h)cybersecurity measures put in place;
3.Detailed information about the monitoring, functioning and control of the AI system, in particular with
regard to: its capabilities and limitations in performance, including the degrees of accuracy for specific
persons or groups of persons on which the system is intended to be used and the overall expected level of
accuracy in relation to its intended purpose; the foreseeable unintended outcomes and sources of risks to
health and safety, fundamental rights and discrimination in view of the intended purpose of the AI system;
the human oversight measures needed in accordance with Article 14, including the technical measures put
in place to facilitate the interpretation of the outputs of AI systems by the deployers; specifications on input
data, as appropriate;
4.A description of the appropriateness of the performance metrics for the specific AI system;
5.A detailed description of the risk management system in accordance with Article 9;
6.A description of relevant changes made by the provider to the system through its lifecycle;
7.A list of the harmonised standards applied in full or in part, the references of which have been published in
the Official Journal of the European Union; where no such harmonised standards have been applied,
a detailed description of the solutions adopted to meet the requirements set out in Chapter III, Section 2,
including a list of other relevant standards and technical specifications applied;
8.A copy of the EU declaration of conformity referred to in Article 47;
9.A detailed description of the system in place to evaluate the AI system performance in the post-market
phase in accordance with Article 72, including the post-market monitoring plan referred to in Article 72(3).
ANNEX V
EU declaration of conformity
The EU declaration of conformity referred to in Article 47 shall contain all of the following information:
1.AI system name and type and any additional unambiguous reference allowing the identification and
traceability of the AI system;
2.The name and address of the provider or, where applicable, of their authorised representative;
3.A statement that the EU declaration of conformity referred to in Article 47 is issued under the sole
responsibility of the provider;
4.A statement that the AI system is in conformity with this Regulation and, if applicable, with any other
relevant Union law that provides for the issuing of the EU declaration of conformity referred to in
Article 47;
5.Where an AI system involves the processing of personal data, a statement that that AI system complies
with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680;
6.References to any relevant harmonised standards used or any other common specification in relation to
which conformity is declared;
7.Where applicable, the name and identification number of the notified body, a description of the conformity
assessment procedure performed, and identification of the certificate issued;
8.The place and date of issue of the declaration, the name and function of the person who signed it, as well as
an indication for, or on behalf of whom, that person signed, a signature.
ANNEX VI
Conformity assessment procedure based on internal control
1. The conformity assessment procedure based on internal control is the conformity assessment procedure
based on points 2, 3 and 4.
2. The provider verifies that the established quality management system is in compliance with the
requirements of Article 17.
3. The provider examines the information contained in the technical documentation in order to assess the
compliance of the AI system with the relevant essential requirements set out in Chapter III, Section 2.
4. The provider also verifies that the design and development process of the AI system and its post-market
monitoring as referred to in Article 72 is consistent with the technical documentation.
ANNEX VII
Conformity based on an assessment of the quality management system and an assessment of the
technical documentation
1. Introduction
Conformity based on an assessment of the quality management system and an assessment of the technical
documentation is the conformity assessment procedure based on points 2 to 5.
2. Overview
The approved quality management system for the design, development and testing of AI systems pursuant to
Article 17 shall be examined in accordance with point 3 and shall be subject to surveillance as specified in
point 5. The technical documentation of the AI system shall be examined in accordance with point 4.
3. Quality management system
3.1.The application of the provider shall include:
(a)the name and address of the provider and, if the application is lodged by an authorised
representative, also their name and address;
(b)the list of AI systems covered under the same quality management system;
(c)the technical documentation for each AI system covered under the same quality management
system;
(d)the documentation concerning the quality management system which shall cover all the aspects
listed under Article 17;
(e)a description of the procedures in place to ensure that the quality management system remains
adequate and effective;
(f)a written declaration that the same application has not been lodged with any other notified body.
3.2.The quality management system shall be assessed by the notified body, which shall determine whether
it satisfies the requirements referred to in Article 17.
The decision shall be notified to the provider or its authorised representative.
The notification shall contain the conclusions of the assessment of the quality management system and
the reasoned assessment decision.
3.3.The quality management system as approved shall continue to be implemented and maintained by the
provider so that it remains adequate and efficient.
3.4.Any intended change to the approved quality management system or the list of AI systems covered by
the latter shall be brought to the attention of the notified body by the provider.
The proposed changes shall be examined by the notified body, which shall decide whether the
modified quality management system continues to satisfy the requirements referred to in point 3.2 or
whether a reassessment is necessary.
The notified body shall notify the provider of its decision. The notification shall contain the
conclusions of the examination of the changes and the reasoned assessment decision.
4. Control of the technical documentation
4.1.In addition to the application referred to in point 3, an application with a notified body of their choice
shall be lodged by the provider for the assessment of the technical documentation relating to the AI
system which the provider intends to place on the market or put into service and which is covered by
the quality management system referred to under point 3.
4.2.The application shall include:
(a)the name and address of the provider;
(b)a written declaration that the same application has not been lodged with any other notified body;
(c)the technical documentation referred to in Annex IV.
4.3.The technical documentation shall be examined by the notified body. Where relevant, and limited to
what is necessary to fulfil its tasks, the notified body shall be granted full access to the training,
validation, and testing data sets used, including, where appropriate and subject to security safeguards,
through API or other relevant technical means and tools enabling remote access.
4.4.In examining the technical documentation, the notified body may require that the provider supply
further evidence or carry out further tests so as to enable a proper assessment of the conformity of the
AI system with the requirements set out in Chapter III, Section 2. Where the notified body is not
satisfied with the tests carried out by the provider, the notified body shall itself directly carry out
adequate tests, as appropriate.
4.5.Where necessary to assess the conformity of the high-risk AI system with the requirements set out in
Chapter III, Section 2, after all other reasonable means to verify conformity have been exhausted and
have proven to be insufficient, and upon a reasoned request, the notified body shall also be granted
access to the training and trained models of the AI system, including its relevant parameters. Such
access shall be subject to existing Union law on the protection of intellectual property and trade
secrets.
4.6.The decision of the notified body shall be notified to the provider or its authorised representative. The
notification shall contain the conclusions of the assessment of the technical documentation and the
reasoned assessment decision.
Where the AI system is in conformity with the requirements set out in Chapter III, Section 2, the
notified body shall issue a Union technical documentation assessment certificate. The certificate shall
indicate the name and address of the provider, the conclusions of the examination, the conditions (if
any) for its validity and the data necessary for the identification of the AI system.
The certificate and its annexes shall contain all relevant information to allow the conformity of the AI
system to be evaluated, and to allow for control of the AI system while in use, where applicable.
Where the AI system is not in conformity with the requirements set out in Chapter III, Section 2, the
notified body shall refuse to issue a Union technical documentation assessment certificate and shall
inform the applicant accordingly, giving detailed reasons for its refusal.
Where the AI system does not meet the requirement relating to the data used to train it, re-training of
the AI system will be needed prior to the application for a new conformity assessment. In this case, the
reasoned assessment decision of the notified body refusing to issue the Union technical documentation
assessment certificate shall contain specific considerations on the quality of the data used to train the AI
system, in particular on the reasons for non-compliance.
4.7.Any change to the AI system that could affect the compliance of the AI system with the requirements
or its intended purpose shall be assessed by the notified body which issued the Union technical
documentation assessment certificate. The provider shall inform such notified body of its intention to
introduce any of the abovementioned changes, or if it otherwise becomes aware of the occurrence of
such changes. The intended changes shall be assessed by the notified body, which shall decide whether
those changes require a new conformity assessment in accordance with Article 43(4) or whether they
could be addressed by means of a supplement to the Union technical documentation assessment
certificate. In the latter case, the notified body shall assess the changes, notify the provider of its
decision and, where the changes are approved, issue to the provider a supplement to the Union
technical documentation assessment certificate.
5. Surveillance of the approved quality management system.
5.1.The purpose of the surveillance carried out by the notified body referred to in Point 3 is to make sure
that the provider duly complies with the terms and conditions of the approved quality management
system.
5.2.For assessment purposes, the provider shall allow the notified body to access the premises where the
design, development, testing of the AI systems is taking place. The provider shall further share with
the notified body all necessary information.
5.3.The notified body shall carry out periodic audits to make sure that the provider maintains and applies
the quality management system and shall provide the provider with an audit report. In the context of
those audits, the notified body may carry out additional tests of the AI systems for which a Union
technical documentation assessment certificate was issued.
ANNEX VIII
Information to be submitted upon the r egistration of high-risk AI systems in accordance with Article 49
Section A — Information to be submitted by providers of high-risk AI systems in accordance with
Article 49(1)
The following information shall be provided and thereafter kept up to date with regard to high-risk AI systems
to be registered in accordance with Article 49(1):
1.The name, address and contact details of the provider;
2.Where submission of information is carried out by another person on behalf of the provider, the name,
address and contact details of that person;
3.The name, address and contact details of the authorised representative, where applicable;
4.The AI system trade name and any additional unambiguous reference allowing the identification and
traceability of the AI system;
5.A description of the intended purpose of the AI system and of the components and functions supported
through this AI system;
6.A basic and concise description of the information used by the system (data, inputs) and its operating logic;
7.The status of the AI system (on the market, or in service; no longer placed on the market/in service,
recalled);
8.The type, number and expiry date of the certificate issued by the notified body and the name or
identification number of that notified body, where applicable;
9.A scanned copy of the certificate referred to in point 8, where applicable;
10.Any Member States in which the AI system has been placed on the market, put into service or made
available in the Union;
11.A copy of the EU declaration of conformity referred to in Article 47;
12.Electronic instructions for use; this information shall not be provided for high-risk AI systems in the areas
of law enforcement or migration, asylum and border control management referred to in Annex III, points 1,
6 and 7;
13.A URL for additional information (optional).
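By way of illustration only (not part of the Regulation), the Section A entry above can be read as a fixed record schema. A minimal sketch in Python follows; the class name and all field names are assumptions for illustration, since the EU database defines its own submission format:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HighRiskAIRegistration:
    """Illustrative record mirroring Annex VIII, Section A (names are assumed)."""
    provider_name: str                          # point 1
    provider_address: str                       # point 1
    provider_contact: str                       # point 1
    system_trade_name: str                      # point 4
    intended_purpose: str                       # point 5
    input_data_description: str                 # point 6: data, inputs, operating logic
    status: str                                 # point 7: e.g. "on the market"
    member_states: list[str] = field(default_factory=list)  # point 10
    certificate_number: Optional[str] = None    # point 8, where applicable
    instructions_url: Optional[str] = None      # point 12; withheld for Annex III pts 1, 6, 7
    info_url: Optional[str] = None              # point 13 (optional)
```

Optional fields default to None because points 8, 9 and 12 apply only "where applicable".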
Section B — Information to be submitted by providers of high-risk AI systems in accordance with
Article 49(2)
The following information shall be provided and thereafter kept up to date with regard to AI systems to be
registered in accordance with Article 49(2):
1.The name, address and contact details of the provider;
2.Where submission of information is carried out by another person on behalf of the provider, the name,
address and contact details of that person;
3.The name, address and contact details of the authorised representative, where applicable;
4.The AI system trade name and any additional unambiguous reference allowing the identification and
traceability of the AI system;
5.A description of the intended purpose of the AI system;
6.The condition or conditions under Article 6(3) based on which the AI system is considered to be not-high-risk;
7.A short summary of the grounds on which the AI system is considered to be not-high-risk in application of
the procedure under Article 6(3);
8.The status of the AI system (on the market, or in service; no longer placed on the market/in service,
recalled);
9.Any Member States in which the AI system has been placed on the market, put into service or made
available in the Union.
Section C — Information to be submitted by deployers of high-risk AI systems in accordance with
Article 49(3)
The following information shall be provided and thereafter kept up to date with regard to high-risk AI systems
to be registered in accordance with Article 49(3):
1.The name, address and contact details of the deployer;
2.The name, address and contact details of the person submitting information on behalf of the deployer;
3.The URL of the entry of the AI system in the EU database by its provider;
4.A summary of the findings of the fundamental rights impact assessment conducted in accordance with
Article 27;
5.A summary of the data protection impact assessment carried out in accordance with Article 35 of
Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680 as specified in Article 26(8) of this
Regulation, where applicable.
ANNEX IX
Information to be submitted upon the r egistration of high-risk AI systems listed in Annex III in r elation
to testing in r eal world conditions in accordance with Article 60
The following information shall be provided and thereafter kept up to date with regard to testing in real world
conditions to be registered in accordance with Article 60:
1.A Union-wide unique single identification number of the testing in real world conditions;
2.The name and contact details of the provider or prospective provider and of the deployers involved in the
testing in real world conditions;
3.A brief description of the AI system, its intended purpose, and other information necessary for the
identification of the system;
4.A summary of the main characteristics of the plan for testing in real world conditions;
5.Information on the suspension or termination of the testing in real world conditions.
ANNEX X
Union legislative acts on large-scale IT systems in the area of Freedom, Security and Justice
1. Schengen Information System
(a)Regulation (EU) 2018/1860 of the European Parliament and of the Council of 28 November 2018 on the
use of the Schengen Information System for the return of illegally staying third-country nationals (OJ
L 312, 7.12.2018, p. 1).
(b)Regulation (EU) 2018/1861 of the European Parliament and of the Council of 28 November 2018 on the
establishment, operation and use of the Schengen Information System (SIS) in the field of border checks,
and amending the Convention implementing the Schengen Agreement, and amending and repealing
Regulation (EC) No 1987/2006 (OJ L 312, 7.12.2018, p. 14).
(c)Regulation (EU) 2018/1862 of the European Parliament and of the Council of 28 November 2018 on the
establishment, operation and use of the Schengen Information System (SIS) in the field of police
cooperation and judicial cooperation in criminal matters, amending and repealing Council Decision
2007/533/JHA, and repealing Regulation (EC) No 1986/2006 of the European Parliament and of the
Council and Commission Decision 2010/261/EU (OJ L 312, 7.12.2018, p. 56).
2. Visa Information System
(a)Regulation (EU) 2021/1133 of the European Parliament and of the Council of 7 July 2021 amending
Regulations (EU) No 603/2013, (EU) 2016/794, (EU) 2018/1862, (EU) 2019/816 and (EU) 2019/818 as
regards the establishment of the conditions for accessing other EU information systems for the purposes of
the Visa Information System (OJ L 248, 13.7.2021, p. 1).
(b)Regulation (EU) 2021/1134 of the European Parliament and of the Council of 7 July 2021 amending
Regulations (EC) No 767/2008, (EC) No 810/2009, (EU) 2016/399, (EU) 2017/2226, (EU) 2018/1240,
(EU) 2018/1860, (EU) 2018/1861, (EU) 2019/817 and (EU) 2019/1896 of the European Parliament and of
the Council and repealing Council Decisions 2004/512/EC and 2008/633/JHA, for the purpose of
reforming the Visa Information System (OJ L 248, 13.7.2021, p. 11).
3. Eurodac
Regulation (EU) 2024/1358 of the European Parliament and of the Council of 14 May 2024 on the
establishment of ‘Eurodac’ for the comparison of biometric data in order to effectively apply Regulations (EU)
2024/1315 and (EU) 2024/1350 of the European Parliament and of the Council and Council Directive
2001/55/EC and to identify illegally staying third-country nationals and stateless persons and on requests for
the comparison with Eurodac data by Member States’ law enforcement authorities and Europol for law
enforcement purposes, amending Regulations (EU) 2018/1240 and (EU) 2019/818 of the European Parliament
and of the Council and repealing Regulation (EU) No 603/2013 of the European Parliament and of the Council
(OJ L, 2024/1358, 22.5.2024, ELI: http://data.europa.eu/eli/reg/2024/1358/oj ).
4. Entry/Exit System
Regulation (EU) 2017/2226 of the European Parliament and of the Council of 30 November 2017 establishing
an Entry/Exit System (EES) to register entry and exit data and refusal of entry data of third-country nationals
crossing the external borders of the Member States and determining the conditions for access to the EES for
law enforcement purposes, and amending the Convention implementing the Schengen Agreement and
Regulations (EC) No 767/2008 and (EU) No 1077/2011 (OJ L 327, 9.12.2017, p. 20).
5. European Travel Information and Authorisation System
(a)Regulation (EU) 2018/1240 of the European Parliament and of the Council of 12 September 2018
establishing a European Travel Information and Authorisation System (ETIAS) and amending Regulations
(EU) No 1077/2011, (EU) No 515/2014, (EU) 2016/399, (EU) 2016/1624 and (EU) 2017/2226 (OJ L 236,
19.9.2018, p. 1).
(b)Regulation (EU) 2018/1241 of the European Parliament and of the Council of 12 September 2018
amending Regulation (EU) 2016/794 for the purpose of establishing a European Travel Information and
Authorisation System (ETIAS) (OJ L 236, 19.9.2018, p. 72).
6. European Criminal Records Information System on third-country nationals and stateless persons
Regulation (EU) 2019/816 of the European Parliament and of the Council of 17 April 2019 establishing
a centralised system for the identification of Member States holding conviction information on third-country
nationals and stateless persons (ECRIS-TCN) to supplement the European Criminal Records Information
System and amending Regulation (EU) 2018/1726 (OJ L 135, 22.5.2019, p. 1).
7. Interoperability
(a)Regulation (EU) 2019/817 of the European Parliament and of the Council of 20 May 2019 on establishing
a framework for interoperability between EU information systems in the field of borders and visa and
amending Regulations (EC) No 767/2008, (EU) 2016/399, (EU) 2017/2226, (EU) 2018/1240,
(EU) 2018/1726 and (EU) 2018/1861 of the European Parliament and of the Council and Council
Decisions 2004/512/EC and 2008/633/JHA (OJ L 135, 22.5.2019, p. 27 ).
(b)Regulation (EU) 2019/818 of the European Parliament and of the Council of 20 May 2019 on establishing
a framework for interoperability between EU information systems in the field of police and judicial
cooperation, asylum and migration and amending Regulations (EU) 2018/1726, (EU) 2018/1862 and
(EU) 2019/816 ( OJ L 135, 22.5.2019, p. 85 ).
ANNEX XI
Technical documentation referred to in Article 53(1), point (a) — technical documentation for providers
of general-purpose AI models
Section 1
Information to be provided by all providers of general-purpose AI models
The technical documentation referred to in Article 53(1), point (a) shall contain at least the following
information as appropriate to the size and risk profile of the model:
1.A general description of the general-purpose AI model including:
(a)the tasks that the model is intended to perform and the type and nature of AI systems in which it can be
integrated;
(b)the acceptable use policies applicable;
(c)the date of release and methods of distribution;
(d)the architecture and number of parameters;
(e)the modality (e.g. text, image) and format of inputs and outputs;
(f)the licence.
2.A detailed description of the elements of the model referred to in point 1, and relevant information of the
process for the development, including the following elements:
(a)the technical means (e.g. instructions of use, infrastructure, tools) required for the general-purpose AI
model to be integrated in AI systems;
(b)the design specifications of the model and training process, including training methodologies and
techniques, the key design choices including the rationale and assumptions made; what the model is
designed to optimise for and the relevance of the different parameters, as applicable;
(c)information on the data used for training, testing and validation, where applicable, including the type
and provenance of data and curation methodologies (e.g. cleaning, filtering, etc.), the number of data
points, their scope and main characteristics; how the data was obtained and selected as well as all other
measures to detect the unsuitability of data sources and methods to detect identifiable biases, where
applicable;
(d)the computational resources used to train the model (e.g. number of floating point operations), training
time, and other relevant details related to the training;
(e)known or estimated energy consumption of the model.
With regard to point (e), where the energy consumption of the model is unknown, the energy consumption
may be based on information about computational resources used.
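By way of illustration only (not part of the Regulation), the fallback described above, estimating energy consumption from computational resources, amounts to a back-of-the-envelope calculation. In the sketch below the hardware-efficiency figure and the example numbers are assumptions, not values prescribed anywhere in the Regulation:

```python
def estimate_training_energy_kwh(total_flops: float, flops_per_joule: float) -> float:
    """Rough energy estimate from training compute.

    flops_per_joule is the assumed effective hardware efficiency
    (floating point operations per joule, including overheads).
    """
    joules = total_flops / flops_per_joule
    return joules / 3.6e6  # 1 kWh = 3.6e6 J

# Illustrative figures: 1e23 FLOP at an assumed 1e10 FLOP/J
energy = estimate_training_energy_kwh(1e23, 1e10)
```

The result scales linearly with compute and inversely with the assumed hardware efficiency, which is why a compute-based estimate can stand in when metered energy data is unavailable.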
Section 2
Additional information to be provided by providers of general-purpose AI models with systemic risk
1.A detailed description of the evaluation strategies, including evaluation results, on the basis of
available public evaluation protocols and tools or otherwise of other evaluation methodologies.
Evaluation strategies shall include evaluation criteria, metrics and the methodology on the
identification of limitations.
2.Where applicable, a detailed description of the measures put in place for the purpose of conducting
internal and/or external adversarial testing (e.g. red teaming), model adaptations, including alignment
and fine-tuning.
3.Where applicable, a detailed description of the system architecture explaining how software
components build or feed into each other and integrate into the overall processing.
ANNEX XII
Transparency information referred to in Article 53(1), point (b) — technical documentation for providers
of general-purpose AI models to downstream providers that integrate the model into their AI system
The information referred to in Article 53(1), point (b) shall contain at least the following:
1.A general description of the general-purpose AI model including:
(a)the tasks that the model is intended to perform and the type and nature of AI systems into which it can
be integrated;
(b)the acceptable use policies applicable;
(c)the date of release and methods of distribution;
(d)how the model interacts, or can be used to interact, with hardware or software that is not part of the
model itself, where applicable;
(e)the versions of relevant software related to the use of the general-purpose AI model, where applicable;
(f)the architecture and number of parameters;
(g)the modality (e.g. text, image) and format of inputs and outputs;
(h)the licence for the model.
2.A description of the elements of the model and of the process for its development, including:
(a)the technical means (e.g. instructions for use, infrastructure, tools) required for the general-purpose AI
model to be integrated into AI systems;
(b)the modality (e.g. text, image, etc.) and format of the inputs and outputs and their maximum size (e.g.
context window length, etc.);
(c)information on the data used for training, testing and validation, where applicable, including the type
and provenance of data and curation methodologies.
ANNEX XIII
Criteria for the designation of general-purpose AI models with systemic risk referred to in Article 51
For the purpose of determining that a general-purpose AI model has capabilities or an impact equivalent to
those set out in Article 51(1), point (a), the Commission shall take into account the following criteria:
(a)the number of parameters of the model;
(b)the quality or size of the data set, for example measured through tokens;
(c)the amount of computation used for training the model, measured in floating point operations or indicated
by a combination of other variables such as estimated cost of training, estimated time required for the
training, or estimated energy consumption for the training;
(d)the input and output modalities of the model, such as text to text (large language models), text to image,
multi-modality, and the state of the art thresholds for determining high-impact capabilities for each
modality, and the specific type of inputs and outputs (e.g. biological sequences);
(e)the benchmarks and evaluations of capabilities of the model, including considering the number of tasks
without additional training, adaptability to learn new, distinct tasks, its level of autonomy and scalability,
the tools it has access to;
(f)whether it has a high impact on the internal market due to its reach, which shall be presumed when it has
been made available to at least 10 000 registered business users established in the Union;
(g)the number of registered end-users.
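By way of illustration only (not part of the Regulation), criteria (a) to (c) above are connected by a widely used approximation for dense transformer training compute, roughly 6 × parameters × training tokens in floating point operations. The formula and example figures below are assumptions for illustration; Article 51(2) sets a presumption threshold of 10^25 FLOPs but does not prescribe any estimation formula:

```python
def training_flops(num_parameters: float, num_tokens: float) -> float:
    """Common ~6*N*D heuristic for dense transformer training compute.

    This approximation is an assumption for illustration, not a rule
    stated in the Regulation.
    """
    return 6.0 * num_parameters * num_tokens

# Example: a 70e9-parameter model trained on 2e12 tokens
flops = training_flops(70e9, 2e12)  # about 8.4e23 FLOP
```

Under this heuristic, parameter count and dataset size (criteria (a) and (b)) jointly determine the compute figure in criterion (c), which is why the three are assessed together.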
ELI: http://data.europa.eu/eli/reg/2024/1689/oj
ISSN 1977-0677 (electronic edition)
|
caneda.pdf
|
Canada’s Proposed Artificial Intelligence and Data Act (AIDA): A Critical Review

Derek Brown
JD Candidate, Seattle University School of Law
July 24th, 2023

ABSTRACT

This research paper provides an in-depth examination of the forthcoming Canadian Artificial Intelligence and Data Act (AIDA), emphasizing its likely implications for the growth of AI. The study reveals pervasive ambiguity within the Act’s clauses, thereby complicating its interpretation and enforcement. This ambiguity presents particular risks for researchers, small businesses, and private individuals who, due to the unclear definitions of risk and harm, could potentially incur substantial penalties or imprisonment. Furthermore, the absence of explicit definitions and standards potentially empowers Innovation, Science, and Economic Development Canada (ISED) to institute and enforce broad AI regulations without a transparent public deliberation or approval mechanism. The paper proposes a series of solutions for policymakers to mitigate these issues, concluding that rectifying the detected ambiguities and establishing an efficient regulatory infrastructure is essential to maintain a healthy equilibrium between effective oversight and the promotion of innovation within the AI landscape.

1. INTRODUCTION

In June 2022, the Canadian federal government proposed the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27. AIDA seeks to create Canada’s first non-sectoral AI regulations,
Electronic copy available at: https://ssrn.com/abstract=4687995
updating and supplementing general-purpose privacy protections and sectoral AI regulations.1 AIDA would introduce measures to improve AI transparency, mitigate the risks of algorithmic bias, and require risk analysis and reporting, as well as establish criminal penalties for violating these measures. As of June 2023, the bill is under consideration by the Standing Committee on Industry and Technology. Given the rapidly growing use of AI systems in a variety of contexts, it is crucial to examine the potential implications and societal impact that may arise from its implementation.

(a) Existing AI and Data Regulations

Canada’s primary data regulation is the Personal Information Protection and Electronic Documents Act (PIPEDA).2 PIPEDA regulates commercial entities’ use of personal data, including names, addresses, phone numbers, email addresses, financial information, and social insurance numbers. PIPEDA also encompasses less obvious forms of personal information, such as opinions, preferences, and transaction histories.

Under PIPEDA, individuals have several rights concerning their data. They have the right to know why their information is being collected, how it will be used, and to whom it may be disclosed. Individuals also have the right to access their personal information held by an organization, request corrections if it is inaccurate or incomplete, and withdraw consent for its collection, use, or disclosure. PIPEDA grants individuals the right to file complaints with the Privacy Commissioner of Canada if they believe an organization has violated their privacy rights, and provides recourse in cases of non-compliance.

Canada has also adopted standards for the development of AI systems for use by government agencies.
In 2019, the Treasury Board issued the Directive on Automated Decision-Making (DADM), which outlines the requirements for federal institutions in Canada when deploying automated decision systems.3 DADM requires developers to conduct an algorithmic impact

1 Lisa R Lifshitz, Canada’s First AI Act Proposed (July 2022), online: American Bar Association Business Law Section <https://www.americanbar.org/groups/business_law/resources/business-law-today/2022-july/canada-s-first-ai-act-proposed/> [https://perma.cc/593G-PRBC].
2 Personal Information Protection and Electronic Documents Act, SC 2000, c 5 (Can).
3 Treasury Board of Canada, Directive on Automated Decision-Making (April 2023), online: <https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592> [https://perma.cc/LVB2-6MA7].
assessment (AIA), an online questionnaire used to identify the risks of a particular automated system. Depending on the outcome of the AIA, the DADM requires different degrees of transparency and consent, outcome and bias monitoring, data governance controls, training, and other legal requirements. Notably, DADM requires that federal software systems give clients a recourse mechanism to review automated decisions. Non-compliance with the DADM may result in consequences as determined by the Treasury Board of Canada Secretariat.

Canada additionally has sectoral regulations and directives that potentially limit how companies may use AI. For example, Health Canada jointly issued “Good Machine Learning Practice for Medical Device Development: Guiding Principles” together with the US Food and Drug Administration (FDA), which outlines basic best practices for using ML models, such as maintaining distinct training and test sets and evaluating potential bias in datasets.4 While these sorts of regulations do not have the direct force of law, they could easily be leveraged in an administrative action (e.g. by the Office of the Privacy Commissioner of Canada or the Canadian Human Rights Commission) to argue that a corporation did not exercise reasonable care.

(b) Intent of the AIDA

The rapid adoption of AI technologies into decision systems poses novel risks that are not addressed by PIPEDA or other data protection regulations:

(i) Data Subject Rights

Since AI training data is considered anonymized, it is not covered by PIPEDA. However, even anonymized data carries some risk of de-anonymization, especially when fed into a complex AI model. To mitigate these risks, users should have a right to understand what purpose their data is being used for and control the use of their data.
4 Health Canada, Good Machine Learning Practice for Medical Device Development: Guiding Principles (October 2021), online: <https://www.canada.ca/en/health-canada/services/drugs-health-products/medical-devices/good-machine-learning-practice-medical-device-development.html> [https://perma.cc/LS4U-4ZSJ].

(ii) Transparency
Since AI systems can be non-deterministic (and potentially prone to errors), users may wish to scrutinize decisions made by AI systems. Users, therefore, need to be aware of when AI is used to make high-impact decisions.

(iii) Algorithmic Bias

One risk of AI systems is that they may produce biased outcomes, either because the training data is itself biased or because of poor model design. Entities using AI systems to guide decisions need to carefully monitor and test their models to ensure that they are not producing biased outcomes.

(iv) Data Governance and Intellectual Property Concerns

AI systems often require a large dataset to adequately train, so developers sometimes turn to mass collection techniques (like scraping) to obtain data. Mass-collected datasets may contain copyrighted material or data for which inadequate consent was obtained. Entities need to thoroughly track data governance in training their models.

AIDA is intended to address several of these risks by imposing ex-ante requirements on entities using AI systems, as well as potential civil and criminal liability for knowingly causing harm using an AI system. In this paper, I evaluate AIDA’s effectiveness in mitigating these risks.

2. SCOPE OF THE AIDA

(a) Regulated Activities

To stay within the powers vested in the Canadian federal government, the AIDA specifically regulates the development and use of AI systems in trade or commerce:

regulated activity means any of the following activities carried out in the course of international or interprovincial trade and commerce:

(a) processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system;
(b) designing, developing or making available for use an artificial intelligence system or managing its operations. (activité réglementée)5

Non-application
This Act does not apply with respect to a government institution as defined in section 3 of the Privacy Act.6

This definition has a few problems in terms of achieving the goals of the AIDA, as well as providing a clear definition of what activities the AIDA will cover:

Concern 1: AIDA does not cover government entities

The AIDA exempts government entities, “such as federal departments … Crown corporations nor to any system used by the Department of National Defence, Canadian Security Intelligence Service (CSIS), the Communications Security Establishment (CSE), or any other person who is responsible for federal or provincial departments or agencies” and “who is prescribed by regulation”.7 Government entities potentially use AI in significant and potentially harmful ways, and therefore ought to be covered under the AIDA to minimize potential harms.8

Concern 2: Liability of data publishers is unclear

The definition of regulated activity does not provide clear guidance on what it means to “make data available for the purpose of using an artificial intelligence system.” Depending on how this is interpreted, the AIDA might be over- or under-inclusive. Consider a data broker that publishes anonymized bulk data without explicitly labeling the data as being “for the purpose of developing an artificial intelligence system.” If this activity is taken to be outside the scope of AIDA, this creates a significant loophole for entities to avoid compliance. Conversely, if this activity is to be covered by AIDA, this creates an arbitrary bright-line standard for when a

5 C-27, Digital Charter Implementation Act, 1st Sess, 44th Parl, 2021, cl. 39(5).
6 C-27, Digital Charter Implementation Act, 1st Sess, 44th Parl, 2021, cl. 39(2).
7 Christelle Tessono, Yuan Stevens, Momin M. Malik, Sonja Solomun, Supriya Dwivedi, and Sam Andrey, AI Oversight, Accountability and Protecting Human Rights (November 2022), online: Cybersecure Policy Exchange <https://www.cybersecurepolicy.ca/aida> [https://perma.cc/F5KF-PVR5].
8 Id.
particular dataset becomes useful for developing an artificial intelligence system. To resolve this ambiguity, lawmakers should strike section (a) from the definition. The AIDA already holds entities operating AI systems (those covered under section (b)) accountable for the compliance of their training data. These entities, in turn, can hold data providers responsible through contractual means. This would make enforcement of the AIDA simpler, since the AIDA would only cover data actually being used for the creation of an AI system.

Concern 3: Liability of AI SaaS vendors is unclear

Many entities, known as software-as-a-service (SaaS) vendors, offer tools that enable other entities to build AI systems more efficiently. These tools allow customers to import data and to design and develop their own AI models running on servers operated by the SaaS vendor. Although the SaaS vendor operates these tools, it has a limited ability to control or assure the compliance of the resultant AI systems; in fact, many SaaS vendors provide encryption features that prevent them from being able to access their customers’ data and AI models.9 Under the current definition, SaaS vendors might fall under the purview of the regulation, as they aid in the design and development, and manage the operations, of the AI system. This potentially places a burden on SaaS vendors, who primarily serve as facilitators in the AI development process and have little ability to enforce compliance with AIDA.

Lawmakers can alleviate this potential burden by establishing a regulatory safe harbor for AI SaaS vendors on a system-by-system basis. The proposed safe harbor would provide legal protection and exemption to SaaS vendors who offer software tools or platforms enabling developers to build their own AI systems or models. Without the safe harbor rule, SaaS vendors would be required to make assurances on behalf of their customers, imposing additional

9 See generally: Kristin E. Lauter, Private AI: Machine Learning on Encrypted Data (2021), Cryptology ePrint Archive, online: <https://eprint.iacr.org/2021/324>. See also d’Aliberti et al, AWS Machine Learning Blog: Enable fully homomorphic encryption with Amazon SageMaker endpoints for secure, real-time inferencing (March 2023), online: AWS Machine Learning Blog <https://aws.amazon.com/blogs/machine-learning/enable-fully-homomorphic-encryption-with-amazon-sagemaker-endpoints-for-secure-real-time-inferencing/> [https://perma.cc/HXF2-VDD7].
Electronic copy available at: https://ssrn.com/abstract=4687995
roadblocks for entities wishing to access AI tooling, and therefore potentially excluding smaller entities from the AI marketplace.

Concern 4: Liability of "off-the-shelf" AI customers is unclear

Another significant concern stemming from the definition of regulated activities pertains to the potential liability of entities who purchase and operate "off-the-shelf" AI systems developed and provided by vendors. The inclusion of the phrases "makes available for use" and "manages its operation" in the definition implies that customers of "off-the-shelf" AI systems would be responsible for implementing AIDA controls and liable for AIDA violations, even though they may lack expertise in, or control over, the AI systems' source code. To help promote access to AI systems, lawmakers should establish a safe harbor for customers of vendor-provided AI systems, holding the software vendor accountable instead when the customer operates the system according to the manufacturer's instructions.

Concern 5: Imposing liability on open-source projects may hinder innovation

Software innovation is largely dependent on open-source projects: communities where developers work together to create tools and technologies free for public use. Open source has numerous benefits: it promotes the interoperability of software systems through the development of standards; democratizes access to technologies that would ordinarily be accessible only to large companies; and provides researchers the opportunity to analyze software for bias, security issues, and more. Promoting open-source development is thus crucial to ensuring that AI technologies are equitable and safe.

The development of an AI system requires two ingredients: an AI model, which is a software application capable of learning from data, and a dataset the model can learn from. The developer of an AI system "trains" the model by running a set of expensive computations on the dataset, resulting in a trained model.
The trained model can then perform tasks such as estimation, prediction, or content generation at a minuscule fraction of the computational cost of the training process.
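The train-once, infer-cheaply asymmetry described above can be sketched with a toy linear model standing in for an AI system. This is illustrative only: real training runs are vastly more expensive, but the shape of the cost gap is the same.

```python
# Toy illustration of the cost asymmetry between training and inference.
# A least-squares fit stands in for "training"; a single dot product
# stands in for "inference" with the trained model.
import time
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50_000, 200))            # the dataset the model learns from
y = X @ rng.normal(size=200) + 0.1 * rng.normal(size=50_000)

t0 = time.perf_counter()
w, *_ = np.linalg.lstsq(X, y, rcond=None)     # "training": computation over all data
train_time = time.perf_counter() - t0

t0 = time.perf_counter()
prediction = X[0] @ w                         # "inference": one cheap dot product
infer_time = time.perf_counter() - t0

print(f"training took roughly {train_time / infer_time:.0f}x longer than one prediction")
```

This asymmetry is why published trained models matter so much to open-source communities: downstream developers can skip the expensive step entirely.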
According to an interpretation by Innovation, Science and Economic Development Canada, the AIDA does not apply to the distribution of AI models, but does apply to the distribution of datasets and trained models:

Would the AIDA impact access to open source software, or open access AI systems? An AI system generally requires a model, as well as the use of datasets to train the model to perform certain tasks. It is common for researchers to publish models or other tools as open source software, which can then be used by anyone to develop AI systems based on their own data and objectives. As these models alone do not constitute a complete AI system, the distribution of open source software would not be subject to obligations regarding "making available for use." However, these obligations would apply to a person making available for use a fully-functioning high-impact AI system, including if it was made available through open access.10

Given the decentralized nature of open-source communities, open-source projects will have a difficult time meeting the ex-ante requirements of the AIDA. To avoid liability, projects will therefore cease publishing datasets and models. This would significantly harm innovation, as datasets are difficult and time-consuming to produce, and training models requires significant computing power and expense.

To encourage innovation and provide equitable access to the benefits of AI, lawmakers should consider creating a similar safe harbor for open-source projects. Open-source and open-access projects do not carry the same risks as consumer-facing AI systems, since they are intended primarily to be used by developers in the development of other software systems.
10 Innovation, Science and Economic Development Canada, Artificial Intelligence and Data Act (AIDA) Companion Document (2023), online: ISED Canada <https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document> [https://perma.cc/UTD7-JZMD].
Developers using open-source projects in the production of consumer-facing applications should then be held accountable for ensuring the project meets AIDA requirements.

(b) What constitutes an AI system?

The AIDA applies to the operation of "artificial intelligence systems":

Artificial intelligence system means a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.11

Concern 1: AIDA isn't technologically neutral

As Tessono et al. point out, the AIDA's definition of an AI system is rooted in specific technologies.12 This definition is not future-proof, as new AI technologies and techniques may be introduced over time. Moreover, the listed technologies are highly abstract concepts, and therefore subject to interpretation. Because of these concerns, Tessono et al. advocate for abandoning this definition in favor of one that is technologically neutral and future-proof.13

The risks that the AIDA is trying to mitigate are not specific to the technology being used. For instance, human-programmed algorithms (which fall outside the current scope of the AIDA) can be extremely complex, and therefore non-transparent, biased, and harmful. Instead of trying to tackle the difficult task of discriminating between AI and non-AI technologies, regulators should broaden the scope of the regulation to address all "automated decision systems," which is defined in the related Consumer Privacy Protection Act as follows:

automated decision system means any technology that assists or replaces the judgment of human decisionmakers through the use of a rules-based system, regression analysis,

11 Bill C-27, Digital Charter Implementation Act, 1st Sess, 4th Parl, 2022, cl. 39(2). 12 Christelle Tessono, Yuan Stevens, Momin M.
Malik, Sonja Solomun, Supriya Dwivedi, and Sam Andrey, AI Oversight, Accountability and Protecting Human Rights (November 2022), online: Cybersecure Policy Exchange <https://www.cybersecurepolicy.ca/aida> [https://perma.cc/F5KF-PVR5]. 13 Id.
predictive analytics, machine learning, deep learning, a neural network or other technique. (système décisionnel automatisé)14

By broadening the scope of regulation, the AIDA can better accomplish its intent of protecting individual rights in the face of complex technological systems, and reduce unnecessary litigation around the definition of AI systems.

(c) What constitutes a "high-impact" system?

Many of the obligations in the AIDA are scoped to the development of "high-impact" systems, which are defined as:

high-impact system means an artificial intelligence system that meets the criteria for a high-impact system that are established in regulations. (système à incidence élevée)

The categorization of systems based on risk level is reminiscent of the proposed EU AI Act (AIA), which divides systems into categories including "unacceptable risk", "high risk", "limited risk" and "minimal risk" depending on the usage category.15

Concern 1: Relies on regulatory discretion

The definition of a "high-impact system" is left entirely to regulators. As discussed in Part 9, there are potential risks in conferring this broad authority on regulators.

Concern 2: Proving a system is "low-impact" is intractable

Classifying systems into "high" and "low" impact necessitates a determination of which systems pose a risk to individual rights. Given the opaque nature of models, most systems that process human data have the potential to contain bias. Even if the set of input data to a model is constrained to a set of seemingly unobjectionable parameters, the model may perpetuate existing biases. For example:

14 Bill C-27, Digital Charter Implementation Act, 1st Sess, 4th Parl, 2022, cl. 2(2). 15 European Commission Digital Strategy, Regulatory Framework on Artificial Intelligence (June 2023), online: <https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai> [https://perma.cc/GNW6-DCYL].
● The Consumer Financial Protection Bureau has determined that geography and surname information are a sufficient proxy for ethnicity, implying that ML algorithms could "learn" racial bias from surname alone.16

● An internal tool used by Amazon to evaluate resumes exhibited a bias against women, as it was trained on past sexist hiring decisions.17

Instead of drawing a rigid bright line between "high" and "low" impact systems in regulations, lawmakers should place the burden on the industry as a whole to determine the "proportionate" degree of care required for a particular type of system. This shift would ensure that all systems undergo a thorough risk assessment, and would give regulators flexibility to audit and enforce the AIDA in cases where "low-impact" systems end up having a significant impact on individual rights.

3. RISK ASSESSMENT AND MONITORING

Various provisions of the AIDA require that entities perform a risk assessment to determine the potential impacts of AI systems:

Assessment — high-impact system

7 A person who is responsible for an artificial intelligence system must, in accordance with the regulations, assess whether it is a high-impact system.

Measures related to risks

8 A person who is responsible for a high-impact system must, in accordance with the regulations, establish measures to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system.

16 Consumer Financial Protection Bureau, Using Publicly Available Information to Proxy for Unidentified Race and Ethnicity, Research Reports (September 2014), online: <https://files.consumerfinance.gov/f/201409_cfpb_report_proxy-methodology.pdf> [https://perma.cc/V86A-V8XY]. 17 Jeffrey Dastin, Amazon scraps secret AI recruiting tool that showed bias against women (October 2018), online: Reuters <https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G> [https://perma.cc/3FZ8-BV3W].
Monitoring of mitigation measures

9 A person who is responsible for a high-impact system must, in accordance with the regulations, establish measures to monitor compliance with the mitigation measures they are required to establish under section 8 and the effectiveness of those mitigation measures.18

Various other regulatory frameworks require that entities conduct a risk assessment before implementing an AI system. Under the EU AI Act, entities are required to conduct a Conformity Assessment, which looks at the data governance, design, and development of a system.19 Canada has adopted a similar requirement for AI systems used by government agencies: under the Directive on Automated Decision-Making (DADM), agencies must answer 85 questions, which determine the risk level and required controls, before implementing an AI system.20

Concern: Lack of adequate specificity regarding risks evaluated

In comparison to these other regulatory frameworks, the AIDA provides little guidance regarding the required elements of a risk assessment. In the companion document to the AIDA, Innovation, Science and Economic Development Canada (ISED) suggests only that organizations are required to consider:

● Evidence of risks of harm to health and safety, or a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences;

● The severity of potential harms;

● The scale of use;

● The nature of harms or adverse impacts that have already taken place;

18 Bill C-27, Digital Charter Implementation Act, 1st Sess, 4th Parl, 2022, cl. 39(7). 19 Emily Jones, Introduction to the Conformity Assessment under the Draft EU AI Act and How It Compares to DPIAs (August 2022), online: Future of Privacy Forum <https://fpf.org/blog/introduction-to-the-conformity-assessment-under-the-draft-eu-ai-act-and-how-it-compares-to-dpias/> [https://perma.cc/2MD4-UKDB].
20 Treasury Board of Canada, Directive on Automated Decision-Making (April 2023), online: <https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592> [https://perma.cc/LVB2-6MA7].
● The extent to which, for practical or legal reasons, it is not reasonably possible to opt out from that system;

● Imbalances of economic or social circumstances, or the age of impacted persons; and

● The degree to which the risks are adequately regulated under another law.21

By contrast, EU and US regulations have a set of clear statutory requirements for conducting risk assessments. Proposed US regulations reference the NIST AI Risk Management Framework,22 a 36-page framework outlining:

● What stages of development an AI system should go through, and who should be included in each step of the process;

● A list of common risks, and how to think through measuring and mitigating these risks; and

● A playbook of measures to take to ensure compliance with the framework, along with examples and additional resources for implementation.23

Canada's guidelines for government use of AI under the Directive on Automated Decision-Making are also much more specific and concrete than the proposed AIDA. The DADM provides an Algorithmic Impact Assessment (AIA) questionnaire and guidelines based on the determined risk level.

Although administrators could promulgate more specific regulations regarding risk assessment pursuant to their authority under the AIDA, there are significant advantages to codifying requirements in statute. Administrative processes do not allow for the same level of public input as the legislative process does, denying people's right to contribute to the development of AI regulations.24 Moreover, the development of a risk assessment framework likely requires the support of standards organizations, including adequate staffing and funding to develop policy frameworks,

21 Government of Canada, Innovation, Science and Economic Development Canada, Artificial Intelligence and Data Act (AIDA) Companion Document (2023). 22 National Institute of Standards and Technology, AI Risk Management Framework (2022).
23 National Institute of Standards and Technology, NIST AI RMF Playbook (2023). 24 Scassa, Regulating AI in Canada: A Critical Look at the Proposed Artificial Intelligence and Data Act, 101 Can. B. Rev. 1 (2023).
which is best accomplished through a legislative process.25 These concerns are discussed further in Part 9.

4. DATA GOVERNANCE

The AIDA requires that entities implement processes to manage the anonymization and use of anonymized data:

Anonymized data

6 A person who carries out any regulated activity and who processes or makes available for use anonymized data in the course of that activity must, in accordance with the regulations, establish measures with respect to

(a) the manner in which data is anonymized; and

(b) the use or management of anonymized data.26

Furthermore, the AIDA establishes criminal liability for those who use illegally obtained information in the development of an AI system:

Possession or use of personal information

38 Every person commits an offence if, for the purpose of designing, developing, using or making available for use an artificial intelligence system, the person possesses — within the meaning of subsection 4(3) of the Criminal Code — or uses personal information, knowing or believing that the information is obtained or derived, directly or indirectly, as a result of

(a) the commission in Canada of an offence under an Act of Parliament or a provincial legislature; or

(b) an act or omission anywhere that, if it had occurred in Canada, would have constituted such an offence.27

25 Id. 26 Bill C-27, Digital Charter Implementation Act, 1st Sess, 4th Parl, 2022, cl. 39(6). 27 Id at cl. 39(38).
Concern 1: Required measures are vague

The language of the statute is extremely vague, requiring only "measures with respect to the manner in which data is anonymized." As discussed further in Part 9, a lack of clear regulatory authority and guidance reduces the legitimacy and efficacy of the AIDA. Legislators should instead require specific, concrete measures. To provide some examples:

● Persons building AI systems should determine the acceptable level of statistical de-anonymization risk, and implement monitoring to ensure that data remains sufficiently anonymized based on this statistical model.

● Persons building AI systems should persist only the minimal data needed to achieve the desired function of the system.

● Persons building AI systems must take adequate measures to secure and encrypt anonymized data and adhere to the principle of least privilege.

Concern 2: Liability for illegally obtained third-party data requires proving mens rea

The AIDA establishes liability only when a person knows or believes the information was obtained illegally. This requires prosecutors to establish the elements of mens rea, making enforcement challenging. Legislators could strengthen the protections of the AIDA and make enforcement easier by creating a positive obligation for entities to validate the governance of their data. Following the model of the GDPR, entities could be mandated to provide specific affirmative representations about their data procedures and to require third-party vendors to enter into standard contractual clauses (SCCs) for certification.28 This would make enforcement of the AIDA far simpler, as a lack of certification or an adequate SCC would be sufficient to prove a regulatory violation.
28 See: Directorate-General for Justice and Consumers, Standard Contractual Clauses (SCC) (June 2021), online: <https://commission.europa.eu/law/law-topic/data-protection/international-dimension-data-protection/standard-contractual-clauses-scc_en> [https://perma.cc/HLB8-XF5U].
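The first measure suggested above (monitoring a statistical de-anonymization risk threshold) could be approximated with a k-anonymity check. This is a simplified sketch: the field names, records, and threshold are hypothetical, and a production system would use a richer statistical risk model.

```python
from collections import Counter

def min_group_size(records, quasi_identifiers):
    """Smallest group size over the quasi-identifier columns. A dataset is
    k-anonymous when every combination of quasi-identifier values is shared
    by at least k records, limiting re-identification risk."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Hypothetical anonymized records: names removed, but the combination of
# age band and postal prefix can still single out an individual.
records = [
    {"age_band": "30-39", "postal_prefix": "K1A", "outcome": "approved"},
    {"age_band": "30-39", "postal_prefix": "K1A", "outcome": "denied"},
    {"age_band": "40-49", "postal_prefix": "M5V", "outcome": "approved"},
]

K_THRESHOLD = 2  # acceptable risk level chosen under the entity's risk model
if min_group_size(records, ["age_band", "postal_prefix"]) < K_THRESHOLD:
    print("re-identification risk too high: generalize or suppress fields")
```

A concrete, checkable threshold of this kind is what turns "measures with respect to the manner in which data is anonymized" into something a regulator can actually audit.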
5. TRANSPARENCY

The AIDA additionally requires that organizations be transparent with consumers, by making certain disclosures:

Publication of description — making system available for use

11 (1) A person who makes available for use a high-impact system must, in the time and manner that may be prescribed by regulation, publish on a publicly available website a plain-language description of the system that includes an explanation of

(a) how the system is intended to be used;

(b) the types of content that it is intended to generate and the decisions, recommendations or predictions that it is intended to make;

(c) the mitigation measures established under section 8 in respect of it; and

(d) any other information that may be prescribed by regulation.

Publication of description — managing operation of system

(2) A person who manages the operation of a high-impact system must, in the time and manner that may be prescribed by regulation, publish on a publicly available website a plain-language description of the system that includes an explanation of

(a) how the system is used;

(b) the types of content that it generates and the decisions, recommendations or predictions that it makes;

(c) the mitigation measures established under section 8 in respect of it; and

(d) any other information that may be prescribed by regulation.29

Concern 1: Hidden disclosures are inadequate to mitigate transparency risks

It is important to make users aware that content or decisions are made by AI so that they can exercise additional scrutiny in relying upon the facts presented. In the context of everyday systems, users are unlikely to read attached disclosures to determine whether the content is AI-generated. The regulation should therefore be augmented to require conspicuous notice that decisions or content are AI generated, to reduce the risk that users unknowingly rely on AI content. In particularly high-risk cases, it may even make sense to require affirmative consent to this notice.

29 Bill C-27, Digital Charter Implementation Act, 1st Sess, 4th Parl, 2022, cl. 39(6).

6. BIAS

AIDA requires that entities take steps to measure and mitigate the risks of biased output:

Measures related to risk

8 A person who is responsible for a high-impact system must, in accordance with the regulations, establish measures to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system.

biased output means content that is generated, or a decision, recommendation or prediction that is made, by an artificial intelligence system and that adversely differentiates, directly or indirectly and without justification, in relation to an individual on one or more of the prohibited grounds of discrimination set out in section 3 of the Canadian Human Rights Act, or on a combination of such prohibited grounds. It does not include content, or a decision, recommendation or prediction, the purpose and effect of which are to prevent disadvantages that are likely to be suffered by, or to eliminate or reduce disadvantages that are suffered by, any group of individuals when those disadvantages would be based on or related to the prohibited grounds. (résultat biaisé)30

Concern 1: AIDA lacks adequate audit mechanisms

Identifying bias in an AI system is an especially difficult task for a few reasons:

1. Technical impossibility - Many systems do not collect or persist information about a person's status in protected classes, making it impossible to perform statistical monitoring of the system to determine if a process is biased.

30 Bill C-27, Digital Charter Implementation Act, 1st Sess, 4th Parl, 2022, cl. 39(11).
2. Collecting demographic data undermines privacy and anti-discrimination safeguards - System operators and users alike may not wish for the system to collect information about protected status, in order to minimize data collected and preserve privacy.

3. Coded biases are difficult to measure - Systems can be biased in ways that do not directly relate to a decision about a particular person. For example, an LLM may write more favorably about one gender than another in generative tasks. Detecting these sorts of biases requires a complex and thorough analysis of a model's outputs.

4. Coded biases are pervasive - There are many different types of bias, so it can be difficult for a developer to build a thorough monitoring and testing scheme.

Because of these factors, auditing an AI system post hoc to determine if it is biased is an extremely difficult task (especially for external auditors), forcing regulators to rely on ex-ante monitoring and mitigation measures. While the AIDA requires that entities retain "general records" describing the bias monitoring schemes in "general terms," this is likely insufficient for a regulator to determine if a system has been adequately monitored. Legislators should therefore bolster the record-keeping requirements to improve auditability, requiring entities to retain the specific code and procedures used to perform monitoring (including versions thereof), and a record of each attempted monitoring test and its result.

Concern 2: To what extent does AIDA create an obligation to correct for historical bias?

When developing an AI algorithm, historical data is often used to train the model and make predictions or decisions. If the historical data includes instances of discrimination or biased practices, the algorithm may unintentionally learn and encode those biases. For example, if a hiring algorithm is trained on past hiring data that exhibits gender or racial bias, the algorithm may learn to favor certain groups over others.
This happens because the algorithm identifies patterns in the data and generalizes them. If the historical data reflects biased decisions or discriminatory practices, the algorithm can perpetuate and amplify those biases, leading to unfair outcomes. Subtle forms of discrimination can be especially complicated to mitigate. Large language models (LLMs) may encode discriminatory stereotypes or biases in the language they produce, which
can be a difficult task to address given the prevalence of these biases in online content.31 Under the current wording of the statute, it is unclear whether an entity is responsible for mitigating these forms of bias if the bias is ancillary to the function of the system.

7. HARM

The AIDA establishes liability for entities who knowingly or recklessly cause physical, psychological, or economic harm by making an AI system available for use. It additionally requires that entities notify the Minister if a system is likely to result, or does result, in harm:

Notification of material harm

12 A person who is responsible for a high-impact system must, in accordance with the regulations and as soon as feasible, notify the Minister if the use of the system results or is likely to result in material harm.

…

Making system available for use

39 Every person commits an offence if the person

(a) without lawful excuse and knowing that or being reckless as to whether the use of an artificial intelligence system is likely to cause serious physical or psychological harm to an individual or substantial damage to an individual's property, makes the artificial intelligence system available for use and the use of the system causes such harm or damage; or

(b) with intent to defraud the public and to cause substantial economic loss to an individual, makes an artificial intelligence system available for use and its use causes that loss.32

31 Weidinger et al, "Ethical and social risks of harm from Language Models" (2021), online: arXiv <https://arxiv.org/abs/2112.04359>. 32 Bill C-27, Digital Charter Implementation Act, 1st Sess, 4th Parl, 2022, cl. 39(12).
Concern 1: AIDA doesn't weigh benefits against harms

The AIDA imposes liability on entities that cause "serious physical or psychological harm," without weighing the benefits of an AI system against these harms. The absolute imposition of liability for any harm caused may deter the development of systems that operate in critical environments. For example, AI systems have the potential to significantly improve patient outcomes in the medical field: they may provide faster, more accurate diagnoses, and suggest more effective treatment plans than doctors alone. However, medical AI systems may occasionally cause harm through misdiagnosis or mistreatment, though at a potentially similar or lower rate than human doctors. The AIDA ought to encourage these developments by clarifying that harm should be considered in the context of potential benefits, as well as the oversight humans have in applying predictions and decisions made by AI.

Concern 2: AIDA may inadequately consider community harms

Without a clear definition of "harm," it is unclear whether entities would be held accountable for less direct forms of harm. For instance, in "Bill C-27 and AI in Content Moderation: The Good, The Bad, and The Ugly," Delaney suggests that social media networks may not be held accountable for "harmful" content moderation policies, since these types of harms don't fit the narrow definition of "harm" provided in the AIDA.33 While it is an impossible task to enumerate all the possible harms of AI, additional criteria are needed within the text of the AIDA to ensure the scope of "harm" is predictable for businesses and regulators alike.

Concern 3: Notification requirements are vague

The AIDA additionally requires that entities notify the Minister of Innovation, Science and Industry if a system is likely to, or does, cause harm. This requirement is vague and potentially creates a massive obligation for entities wishing to use AI.
Legislators should clarify this requirement further in statute: what types of harms are expected to be reported, how likely do

33 Ben Delaney, Bill C-27 and AI in Content Moderation: The Good, The Bad, and The Ugly (January 2023), online: McGill Business Law Platform <https://www.mcgill.ca/business-law/article/bill-c-27-and-ai-content-moderation-good-bad-and-ugly> [https://perma.cc/6Q65-TJ6U].
these harms need to be, and how widespread do the harms need to be to trigger reporting requirements?

Additionally, the AIDA doesn't establish requirements to notify impacted users of an AI system. Users have a right to know when an AI system is producing harmful results, as this allows them to adapt their usage of the system to minimize risk to themselves. Future revisions of the statute should include a requirement to notify potentially impacted users, in addition to the Minister of Innovation, Science and Industry.

8. INTELLECTUAL PROPERTY

Generative AI algorithms are often trained on vast amounts of data, including copyrighted content, without compensating or obtaining authorization from the creators. This raises significant ethical and legal concerns. Many AI models are trained on publicly available datasets, which can include copyrighted works such as books, movies, music, and visual art. While the intention is often to create a diverse and representative training set, the unauthorized use of copyrighted content without proper attribution or compensation undermines the rights of creators.

By using copyrighted material as training data, generative AI models can inadvertently reproduce elements, styles, or even entire works without permission, potentially infringing upon the original creator's intellectual property. This practice poses a challenge to the fair remuneration and recognition of artists and authors, highlighting the need for clearer guidelines and ethical considerations in the development and use of generative AI technologies.

Concern 1: AIDA doesn't specifically address copyright concerns

Despite the significance of these issues, the AIDA doesn't provide specific AI copyright regulations. Hypothetically, the Minister of Innovation, Science and Industry could elect to promulgate rules regarding AI copyright (see Part 9), but has no mandate to do so.
To ensure that these issues are adequately deliberated and addressed, the legislature should draft and adopt specific copyright provisions in the AIDA.
9. PROMULGATION AND ADMINISTRATION

The AIDA is described as an "agile" regulatory framework; its provisions are very general, placing significant responsibility on agencies to determine the specific requirements of the AIDA.34 Proponents argue that this dynamic approach is required to keep up with advancements in AI.35 Critics argue that it would permit agencies to enact regulations without adequate public deliberation, oversight, or approval.36

Concern: The AIDA doesn't provide adequate requirements for deliberation and public approval

The AIDA vests extensive authority in Innovation, Science and Economic Development Canada (ISED), without defining a clear process for promulgating regulations. Given the relatively early stage of AI development and the inherent complexity of the field, there are concerns about whether ISED possesses the necessary expertise and experience to effectively develop AI technology standards. The rapid advancements and evolving nature of AI require a nuanced understanding of the technology, its capabilities, and potential risks. Without a clear process for gathering input from various stakeholders and experts in the AI community, there is a risk that the regulations may not adequately address the intricacies and potential implications of AI systems, which could hinder innovation and inadvertently stifle the growth of the AI industry in Canada.37 Striking the right balance between protecting against AI-related harm and fostering innovation requires a comprehensive approach that considers the diverse perspectives and expertise of those involved in the field. The AIDA should create clear guidelines for how definitions and regulations would be published, deliberated upon, and finally promulgated.

34 Teresa Scassa, Regulating AI in Canada: A Critical Look at the Proposed Artificial Intelligence and Data Act, 101 Can. B. Rev. 1 (2023).
35 Gillian Hadfield, Maggie Arai, and Isaac Gazendam, AI regulation in Canada is moving forward. Here’s what needs to come next (May 2023), online: University of Toronto Schwartz Reisman Institute for Technology and Society <https://srinstitute.utoronto.ca/news/ai-regulation-in-canada-is-moving-forward-heres-what-needs-to-come-next> [https://perma.cc/XZ3K-QZSD].
36 Blair Attard-Frost, “Generative AI Systems: Impacts on Artists & Creators and Related Gaps in the Artificial Intelligence and Data Act” (June 2023), online: SSRN <https://ssrn.com/abstract=4468637>.
37 Teresa Scassa, Regulating AI in Canada: A Critical Look at the Proposed Artificial Intelligence and Data Act, 101 Can. B. Rev. 1 (2023).
10. ENFORCEMENT
Responsibility for implementation and enforcement of the AIDA would fall on the Minister of Innovation, Science and Industry, though the AIDA establishes the role of AI and Data Commissioner to assist the Minister in this task.38
The AIDA gives the Minister of Innovation, Science and Industry various administrative powers to enforce the act:
● The Minister can audit AI systems if they have reasonable cause to believe the systems violate provisions of the AIDA. The person being audited bears the cost of this audit.39
● The Minister can order an entity to cease using an AI system if they have reasonable cause to believe that it poses an imminent risk of harm.40
● The Minister may share information discovered during an audit with the Privacy Commissioner, Human Rights Commission, and Commissioner of Competition.41
● The Minister may establish and levy administrative monetary penalties.42
In addition, the AIDA establishes civil and criminal liability for various offenses under the act:
Offense: Failing to adhere to ex-ante requirements:
● Anonymization measures
● Risk assessment
● Establishing measures to mitigate risks
● Establishing monitoring of risk mitigation
● Keeping general and specific records
Penalty: Business entity: up to $10m or 3% of gross global revenues in the given fiscal year, whichever is greater. Individual: a fine at the discretion of the court; the maximum on summary conviction is $50k.
38 Innovation, Science and Economic Development Canada, Artificial Intelligence and Data Act (March 2023), online: ISED <https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act> [https://perma.cc/UTD7-JZMD].
39 Bill C-27, Digital Charter Implementation Act, 1st Sess, 4th Parl, 202, cl. 39(15).
40 Id at cl. 39(17).
41 Id at cl. 39(25).
42 Id at cl. 39(29).
● Transparency requirements
● Causing harm
Offense: Obstruction, or providing false or misleading statements to the Minister or Auditor
Offense: Knowingly using personal information obtained illegally in the development of an AI system
Offense: Knowingly causing physical, psychological, or economic harm using an AI system
Penalty (for the above three offenses): Business entity: up to $25m or 5% of gross global revenues in the given fiscal year, whichever is greater. Individual: a fine at the discretion of the court, and up to five years less a day in prison; the maximum on summary conviction is $100,000 and two years less a day.
Concern 1: Chilling effect on AI research
The stringent penalties outlined in the AIDA could lead to a chilling effect on the development of AI systems for high-risk applications. Startups, often operating with limited resources and financial capacity, may struggle if fines amount to a significant portion of their revenue or lead to bankruptcy. Similarly, individuals involved in AI innovation may face severe financial and personal consequences, including the possibility of an unlimited fine and imprisonment, which could stifle their willingness to engage in AI research and development. This disproportionate burden on small entities and individuals may suppress innovation, limit competition, and hinder the overall growth of the AI ecosystem in Canada.
Additionally, if regulators employ solely punitive measures, they risk creating a “whack-a-mole” scenario where new, unregulated entities replace those that face penalties. Instead, regulators should be incentivized to work collaboratively with entities rather than punishing them into bankruptcy. The AIDA should lay the groundwork for regulators to create
positive relationships with entities by encouraging proactive reporting and engagement, the development of implementation guidelines, and a strong culture of compliance. The AIDA should therefore include additional provisions to limit punitive fines and encourage collaboration with entities. Legislators could include specific fines and sentencing guidelines designed to ensure that startups and individuals aren’t disproportionately punished under the AIDA. Additionally, legislators could consider including notice-and-cure periods, which would foster a collaborative approach to mitigating AI risks.
Concern 2: Lack of a private right of action
Introducing a private right of action alongside the AIDA would greatly enhance the enforceability and effectiveness of the law. While the current legislation empowers the Minister of Innovation, Science and Industry to take legal action against entities, the sheer volume of entities engaged in AI-related activities poses a challenge for effective enforcement by a single governmental entity. By allowing private individuals or organizations to bring lawsuits in cases of AI system harm or illegal data use, the burden of enforcement can be shared among a broader range of actors. A private right of action would also create a powerful deterrent: the prospect of facing legal action from affected individuals or organizations would incentivize entities to prioritize responsible AI practices and due diligence in data acquisition, further promoting the protection of individuals’ rights and mitigating potential harm caused by AI systems.
11. CONCLUSION
The Artificial Intelligence and Data Act (AIDA) represents a commendable effort to establish regulatory oversight over the AI sector. AI technologies present novel risks to Canadians, and it is important that Canada’s legislature takes action to update laws and regulations to mitigate these risks.
Given the pace of AI development, it is essential that regulations strike a balance between specificity and flexibility. However, the draft AIDA currently leans too far in the direction of flexibility; many aspects of the statute and definitions are ambiguous, creating obstacles for
organizations trying to comply with the statute, and for regulators attempting to enforce it. Moreover, delegating such significant implementation decisions to regulators potentially undermines the public’s right to contribute to the formulation of AI regulations. In subsequent drafts, the legislature should take care to address these ambiguities, so that the AIDA can be an effective means of protecting Canadians’ rights.
|
guide-to-the-general-data-protection-regulation-gdpr-1-0.pdf
|
Guide to the
General Data Protection
Regulation (GDPR)
Data protection
Introduction
What's new
Key definitions
What is personal data?
Principles
Lawfulness, fairness and transparency
Purpose limitation
Data minimisation
Accuracy
Storage limitation
Integrity and confidentiality (security)
Accountability principle
Lawful basis for processing
Consent
Contract
Legal obligation
Vital interests
Public task
Legitimate interests
Special category data
Criminal offence data
Individual rights
Right to be informed
Right of access
Right to rectification
Right to erasure
Right to restrict processing
Right to data portability
Right to object
Rights related to automated decision making including profiling
Accountability and governance
Contracts
Documentation
Data protection by design and default
Data protection impact assessments
Data protection officers
Codes of conduct
Certification
Guide to the data protection fee
Security
Encryption
Passwords in online services
Personal data breaches
International transfers
Exemptions
Applications
Children
02 August 2018 - 1.0.248
Introduction
The Guide to the GDPR explains the provisions of the GDPR to help organisations comply with its
requirements. It is for those who have day-to-day responsibility for data protection.
The GDPR forms part of the data protection regime in the UK, together with the new Data Protection Act
2018 (DPA 2018). The main provisions of this apply, like the GDPR, from 25 May 2018.
This guide refers to the DPA 2018 where it is relevant and includes links to relevant sections of the
GDPR itself, to other ICO guidance and to guidance produced by the EU’s Article 29 Working Party - now
the European Data Protection Board (EDPB).
We intend the guide to cover the key points that organisations need to know. From now on we will continue
to develop new guidance and review our resources to take into account what organisations tell us they
need. In the longer term we aim to publish more guidance under the umbrella of a new Guide to Data
Protection, which will cover the GDPR and DPA 2018, and include law enforcement, the applied GDPR
and other relevant provisions.
Further reading
Data protection self assessment toolkit
For organisations
For a more detailed understanding of the GDPR it’s also helpful to read the guidelines produced
by the EU’s Article 29 Working Party – which has now been renamed the European Data
Protection Board (EDPB). The EDPB includes representatives of the data protection authorities
from each EU member state, and the ICO is the UK’s representative. The ICO has been directly
involved in drafting many of these. We have linked to relevant EU guidelines throughout the
Guide to GDPR.
We produced many guidance documents on the previous Data Protection Act 1998. Even though
that Act is no longer in force, some of them contain practical examples and advice which may
still be helpful in applying the new legislation. While we are building our new Guide to Data
Protection we will keep those documents accessible on our website, with the proviso that they
cannot be taken as guidance on the DPA 2018.
We previously produced an Introduction to the Data Protection Bill as it was going through
Parliament. We will update this document to reflect the final text of the DPA 2018 and publish it
as soon as possible.
We also published a guide to the law enforcement provisions in Part 3 of the Data Protection Bill,
which implement the EU Law Enforcement Directive. We will update this to reflect the relevant
provisions in the DPA 2018.
What's new
We will update this page monthly to highlight and link to what’s new in our Guide to the GDPR.
September 2018
We have expanded our guidance on Exemptions.
August 2018
We have expanded our guidance on International transfers.
May 2018
The European Data Protection Board (EDPB) has published draft guidelines on certification and
identifying certification criteria in accordance with Articles 42 and 43 of the Regulation 2016/679 for
consultation. The consultation will end on 12 July.
We have published detailed guidance on children and the GDPR.
We have published detailed guidance on determining what is personal data.
We have expanded our guidance on data protection by design and default, and published detailed
guidance on automated decision-making and profiling.
We have published a new page on codes of conduct, and a new page on certification.
We have published detailed guidance on the right to be informed.
We have published detailed guidance on Data Protection Impact Assessments (DPIAs).
We have expanded the pages on the right of access and the right to object.
We have published detailed guidance on consent.
We have expanded the page on the right to data portability.
April 2018
We have expanded the page on Accountability and governance.
We have expanded the page on Security.
We have updated all of the lawful basis pages to include a link to the lawful basis interactive guidance
tool.
March 2018
We have published detailed guidance on DPIAs for consultation. The consultation will end on 13 April
2018. We have also updated the guide page on DPIAs to include the guide level content from the
detailed guidance.
We have published detailed guidance on legitimate interests.
We have expanded the pages on:
Data protection impact assessments
Data protection officers
The right to be informed
The right to erasure
The right to rectification
The right to restrict processing
February 2018
The consultation period for the Article 29 Working Party guidelines on consent has now ended and
comments are being reviewed. The latest timetable is for the guidelines to be finalised for adoption on
10-11 April.
The consultation period for the Article 29 Working Party guidelines on transparency has now ended.
Following the consultation period, the Article 29 Working Party has adopted final guidelines
on Automated individual decision-making and Profiling and personal data breach notification . These
have been added to the Guide.
We have published our Guide to the data protection fee.
We have updated the page on Children to include the guide level content from the detailed guidance on
Children and the GDPR which is out for public consultation.
January 2018
We have published more detailed guidance on documentation.
We have expanded the page on personal data breaches.
We have also added four new pages in the lawful basis section, covering contract, legal obligation, vital
interests and public task.
December 2017
We have published detailed guidance on Children and the GDPR for public consultation. The consultation
closes on 28 February 2018.
The sections on Lawful basis for processing and Rights related to automated individual decision
making including profiling contain new expanded guidance. We have updated the section
on Documentation with additional guidance and documentation templates. We have also added new
sections on legitimate interests, special category data and criminal offence data, and updated the
section on consent.
The Article 29 Working Party has published the following guidance, which is now included in the Guide.
Consent
Transparency
It is inviting comments on these guidelines until 23 January 2018.
The consultation for the Article 29 Working Party guidelines on breach notification and automated
decision-making and profiling ended on 28 November. We are reviewing the comments received
together with other members of the Article 29 Working Party and expect the guidelines to be finalised in
early 2018.
November 2017
The Article 29 Working Party has published guidelines on imposing administrative fines.
We have replaced the Overview of the GDPR with the Guide to the GDPR. The Guide currently contains
similar content to the Overview, but we have expanded the sections on Consent and Contracts and
Liabilities on the basis of the guidance on these topics which we have previously published for
consultation.
The Guide to the GDPR is not yet a finished product; it is a framework on which we will build upcoming
GDPR guidance and it reflects how future GDPR guidance will be presented. We will be publishing more
detailed guidance on some topics and we will link to these from the Guide. We will do the same for
guidelines from the Article 29 Working Party.
October 2017
The Article 29 Working Party has published the following guidance, which is now included in our
overview.
Breach notification
Automated individual decision-making and Profiling
The Article 29 Working Party has also adopted guidelines on administrative fines and these are expected
to be published soon.
In the Rights related to automated decision making and profiling we have updated the next steps for the
ICO.
In the Key areas to consider we have updated the next steps in regard to the ICO’s consent guidance.
The deadline for responses to our draft GDPR guidance on contracts and liabilities for controllers and
processors has now passed. We are analysing the feedback and this will feed into the final version.
September 2017
We have put out for consultation our draft GDPR guidance on contracts and liabilities for controllers and
processors.
July 2017
In the Key areas to consider we have updated the next steps in regard to the ICO’s consent guidance
and the Article 29 Working Party’s Europe-wide consent guidelines.
June 2017
The Article 29 Working Party’s consultation on their guidelines on high risk processing and data
protection impact assessments closed on 23 May. We await the adoption of the final version.
May 2017
We have updated our GDPR 12 steps to take now document.
We have added a Getting ready for GDPR checklist to our self-assessment toolkit.
April 2017
We have published our profiling discussion paper for feedback.
March 2017
We have published our draft consent guidance for public consultation.
January 2017
The Article 29 Working Party has published the following guidance, which is now included in our overview:
Data portability
Lead supervisory authorities
Data protection officers
Key definitions
Who does the GDPR apply to?
The GDPR applies to ‘controllers’ and ‘processors’.
A controller determines the purposes and means of processing personal data.
A processor is responsible for processing personal data on behalf of a controller.
If you are a processor, the GDPR places specific legal obligations on you; for example, you are
required to maintain records of personal data and processing activities. You will have legal liability if
you are responsible for a breach.
However, if you are a controller, you are not relieved of your obligations where a processor is
involved – the GDPR places further obligations on you to ensure your contracts with processors
comply with the GDPR.
The GDPR applies to processing carried out by organisations operating within the EU. It also applies
to organisations outside the EU that offer goods or services to individuals in the EU.
The GDPR does not apply to certain activities including processing covered by the Law Enforcement
Directive, processing for national security purposes and processing carried out by individuals purely
for personal/household activities.
Further Reading
Relevant provisions in the GDPR - Articles 3, 28-31 and Recitals 22-25, 81-82
External link
What is personal data?
At a glance
Understanding whether you are processing personal data is critical to understanding whether the
GDPR applies to your activities.
Personal data is information that relates to an identified or identifiable individual.
What identifies an individual could be as simple as a name or a number or could include other
identifiers such as an IP address or a cookie identifier, or other factors.
If it is possible to identify an individual directly from the information you are processing, then that
information may be personal data.
If you cannot directly identify an individual from that information, then you need to consider whether
the individual is still identifiable. You should take into account the information you are processing
together with all the means reasonably likely to be used by either you or any other person to identify
that individual.
Even if an individual is identified or identifiable, directly or indirectly, from the data you are
processing, it is not personal data unless it ‘relates to’ the individual.
When considering whether information ‘relates to’ an individual, you need to take into account a
range of factors, including the content of the information, the purpose or purposes for which you are
processing it and the likely impact or effect of that processing on the individual.
It is possible that the same information is personal data for one controller’s purposes but is not
personal data for the purposes of another controller.
Information which has had identifiers removed or replaced in order to pseudonymise the data is still
personal data for the purposes of GDPR.
Information which is truly anonymous is not covered by the GDPR.
If information that seems to relate to a particular individual is inaccurate (ie it is factually incorrect or
is about a different individual), the information is still personal data, as it relates to that individual.
In brief
What is personal data?
The GDPR applies to the processing of personal data that is:
wholly or partly by automated means; or
the processing other than by automated means of personal data which forms part of, or is
intended to form part of, a filing system.
Personal data only includes information relating to natural persons who:
can be identified or who are identifiable, directly from the information in question; or
who can be indirectly identified from that information in combination with other information.
Personal data may also include special categories of personal data or criminal conviction and
offences data. These are considered to be more sensitive and you may only process them in more
limited circumstances.
Pseudonymised data can help reduce privacy risks by making it more difficult to identify individuals,
but it is still personal data.
If personal data can be truly anonymised then the anonymised data is not subject to the GDPR. It is
important to understand what personal data is in order to understand if the data has been
anonymised.
Information about a deceased person does not constitute personal data and therefore is not subject
to the GDPR.
Information about companies or public authorities is not personal data.
However, information about individuals acting as sole traders, employees, partners and company
directors where they are individually identifiable and the information relates to them as an individual
may constitute personal data.
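The distinction drawn above between pseudonymised data (still personal data) and truly anonymised data can be illustrated with a short sketch. This is only a hypothetical example using a keyed hash; the key name and record fields are illustrative, not drawn from the guidance:

```python
import hashlib
import hmac

# Hypothetical secret key, held separately by the controller.
SECRET_KEY = b"controller-held-key"

def pseudonymise(name: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same name always maps to the same token, so records remain
    linkable - and anyone holding SECRET_KEY can re-identify the
    individual. The output is therefore pseudonymised, not anonymised,
    and is still personal data.
    """
    return hmac.new(SECRET_KEY, name.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Example", "visits": 12}
pseudonymised = {"id": pseudonymise(record["name"]), "visits": record["visits"]}
```

The point of the sketch is that removing the visible name does not break the link to the individual: the token is stable, and the key holder can reverse the mapping, which is exactly why pseudonymised data stays within the scope of the GDPR.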
What are identifiers and related factors?
An individual is ‘identified’ or ‘identifiable’ if you can distinguish them from other individuals.
A name is perhaps the most common means of identifying someone. However, whether any potential
identifier actually identifies an individual depends on the context.
A combination of identifiers may be needed to identify an individual.
The GDPR provides a non-exhaustive list of identifiers, including:
name;
identification number;
location data; and
an online identifier.
‘Online identifiers’ includes IP addresses and cookie identifiers which may be personal data.
Other factors can identify an individual.
Can we identify an individual directly from the information we have?
If, by looking solely at the information you are processing you can distinguish an individual from
other individuals, that individual will be identified (or identifiable).
You don’t have to know someone’s name for them to be directly identifiable; a combination of other
identifiers may be sufficient to identify the individual.
If an individual is directly identifiable from the information, this may constitute personal data.
Can we identify an individual indirectly from the information we have (together with other
available information)?
It is important to be aware that information you hold may indirectly identify an individual and
therefore could constitute personal data.
Even if you may need additional information to be able to identify someone, they may still be
identifiable.
That additional information may be information you already hold, or it may be information that you
need to obtain from another source.
In some circumstances there may be a slight hypothetical possibility that someone might be able to
reconstruct the data in such a way that identifies the individual. However, this is not necessarily
sufficient to make the individual identifiable in terms of GDPR. You must consider all the factors at
stake.
When considering whether individuals can be identified, you may have to assess the means that
could be used by an interested and sufficiently determined person.
You have a continuing obligation to consider whether the likelihood of identification has changed over
time (for example as a result of technological developments).
What is the meaning of ‘relates to’?
Information must ‘relate to’ the identifiable individual to be personal data.
This means that it does more than simply identifying them – it must concern the individual in some
way.
To decide whether or not data relates to an individual, you may need to consider:
the content of the data – is it directly about the individual or their activities?
the purpose you will process the data for; and
the results of or effects on the individual from processing the data.
Data can reference an identifiable individual and not be personal data about that individual, as the
information does not relate to them.
There will be circumstances where it may be difficult to determine whether data is personal data. If
this is the case, as a matter of good practice, you should treat the information with care, ensure that
you have a clear reason for processing the data and, in particular, ensure you hold and dispose of it
securely.
Inaccurate information may still be personal data if it relates to an identifiable individual.
What happens when different organisations process the same data for different purposes?
It is possible that although data does not relate to an identifiable individual for one controller, in the
hands of another controller it does.
This is particularly the case where, for the purposes of one controller, the identity of the individuals is
irrelevant and the data therefore does not relate to them.
However, when used for a different purpose, or in conjunction with additional information available to
another controller, the data does relate to the identifiable individual.
It is therefore necessary to consider carefully the purpose for which the controller is using the data in
order to decide whether it relates to an individual.
You should take care when you make an analysis of this nature.
Further Reading
Relevant provisions in the GDPR - See Articles 2, 4, 9, 10 and Recitals 1, 2, 26, 51
External link
In more detail – ICO guidance
We have published detailed guidance on determining what is personal data.
Principles
At a glance
The GDPR sets out seven key principles:
Lawfulness, fairness and transparency
Purpose limitation
Data minimisation
Accuracy
Storage limitation
Integrity and confidentiality (security)
Accountability
These principles should lie at the heart of your approach to processing personal data.
In brief
What’s new under the GDPR?
What are the principles?
Why are the principles important?
What’s new under the GDPR?
The principles are broadly similar to the principles in the Data Protection Act 1998 (the 1998 Act).
1998 Act: | GDPR:
Principle 1 – fair and lawful | Principle (a) – lawfulness, fairness and transparency
Principle 2 – purposes | Principle (b) – purpose limitation
Principle 3 – adequacy | Principle (c) – data minimisation
Principle 4 – accuracy | Principle (d) – accuracy
Principle 5 – retention | Principle (e) – storage limitation
Principle 6 – rights | No principle – separate provisions in Chapter III
Principle 7 – security | Principle (f) – integrity and confidentiality
Principle 8 – international transfers | No principle – separate provisions in Chapter V
(no equivalent) | Accountability principle
However there are a few key changes. Most obviously:
there is no principle for individuals’ rights. This is now dealt with separately in Chapter III of the
GDPR;
there is no principle for international transfers of personal data. This is now dealt with separately in
Chapter V of the GDPR; and
there is a new accountability principle. This specifically requires you to take responsibility for
complying with the principles, and to have appropriate processes and records in place to demonstrate
that you comply.
What are the principles?
Article 5 of the GDPR sets out seven key principles which lie at the heart of the general data protection
regime.
Article 5(1) requires that personal data shall be:
“(a) processed lawfully, fairly and in a transparent manner in relation to individuals (‘lawfulness,
fairness and transparency’);
(b) collected for specified, explicit and legitimate purposes and not further processed in a manner
that is incompatible with those purposes; further processing for archiving purposes in the public
interest, scientific or historical research purposes or statistical purposes shall not be considered to
be incompatible with the initial purposes (‘purpose limitation’);
(c) adequate, relevant and limited to what is necessary in relation to the purposes for which they
are processed (‘data minimisation’);
(d) accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure
that personal data that are inaccurate, having regard to the purposes for which they are processed,
are erased or rectified without delay (‘accuracy’);
(e) kept in a form which permits identification of data subjects for no longer than is necessary for
the purposes for which the personal data are processed; personal data may be stored for longer
periods insofar as the personal data will be processed solely for archiving purposes in the public
interest, scientific or historical research purposes or statistical purposes subject to implementation of
the appropriate technical and organisational measures required by the GDPR in order to safeguard
the rights and freedoms of individuals (‘storage limitation’);
(f) processed in a manner that ensures appropriate security of the personal data, including
protection against unauthorised or unlawful processing and against accidental loss, destruction or
damage, using appropriate technical or organisational measures (‘integrity and confidentiality’).”
Article 5(2) adds that:
“The controller shall be responsible for, and be able to demonstrate compliance with, paragraph 1
(‘accountability’).”
For more detail on each principle, please read the relevant page of this guide.
Why are the principles important?
The principles lie at the heart of the GDPR. They are set out right at the start of the legislation, and
inform everything that follows. They don’t give hard and fast rules, but rather embody the spirit of the
general data protection regime - and as such there are very limited exceptions.
Compliance with the spirit of these key principles is therefore a fundamental building block for good data
protection practice. It is also key to your compliance with the detailed provisions of the GDPR.
Failure to comply with the principles may leave you open to substantial fines. Article 83(5)(a) states that
infringements of the basic principles for processing personal data are subject to the highest tier of
administrative fines. This could mean a fine of up to €20 million, or 4% of your total worldwide annual
turnover, whichever is higher.
Further Reading
Relevant provisions in the GDPR - See Article 5 and Recital 39, and Chapter III (rights), Chapter V
(international transfers) and Article 83 (fines)
External link
Further reading
Read our individual rights and international transfers guidance
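The higher-tier fine ceiling mentioned above ("up to €20 million, or 4% of total worldwide annual turnover, whichever is higher") is a simple greater-of calculation. The sketch below only computes that statutory cap; actual fines are set case by case by supervisory authorities, and the function name is illustrative:

```python
def article_83_5_fine_cap(worldwide_annual_turnover_eur: float) -> float:
    """Maximum administrative fine under Article 83(5) GDPR:
    EUR 20 million or 4% of total worldwide annual turnover,
    whichever is higher."""
    return max(20_000_000.0, worldwide_annual_turnover_eur * 4 / 100)

# For an undertaking with EUR 1 billion turnover, the 4% limb governs.
cap_large = article_83_5_fine_cap(1_000_000_000)  # EUR 40 million
# For a smaller undertaking, the fixed EUR 20 million limb governs.
cap_small = article_83_5_fine_cap(100_000_000)    # EUR 20 million
```

The "whichever is higher" wording means the fixed €20 million figure acts as a floor on the cap: the percentage limb only matters for undertakings whose worldwide turnover exceeds €500 million.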
Lawfulness, fairness and transparency
At a glance
You must identify valid grounds under the GDPR (known as a ‘lawful basis’) for collecting and using
personal data.
You must ensure that you do not do anything with the data in breach of any other laws.
You must use personal data in a way that is fair. This means you must not process the data in a way
that is unduly detrimental, unexpected or misleading to the individuals concerned.
You must be clear, open and honest with people from the start about how you will use their personal
data.
Checklist

Lawfulness

☐ We have identified an appropriate lawful basis (or bases) for our processing.
☐ If we are processing special category data or criminal offence data, we have identified a
condition for processing this type of data.
☐ We don’t do anything generally unlawful with personal data.

Fairness

☐ We have considered how the processing may affect the individuals concerned and can justify
any adverse impact.
☐ We only handle people’s data in ways they would reasonably expect, or we can explain why
any unexpected processing is justified.
☐ We do not deceive or mislead people when we collect their personal data.

Transparency

☐ We are open and honest, and comply with the transparency obligations of the right to be
informed.

In brief

What’s new under the GDPR?
What is the lawfulness, fairness and transparency principle?
What is lawfulness?
What is fairness?
What is transparency?
What’s new under the GDPR?
The lawfulness, fairness and transparency principle is broadly similar to the first principle of the 1998
Act. Fairness is still fundamental. You still need to process personal data fairly and lawfully, but the
requirement to be transparent about what you do with people’s data is now more clearly signposted.
As with the 1998 Act, you still need to identify valid grounds to process people’s data. This is now known
as a ‘lawful basis’ rather than a ‘condition for processing’, but the principle is the same. Identifying a
lawful basis is essential for you to comply with the ‘lawfulness’ aspect of this principle.
The concept of ‘fair processing information’ is no longer incorporated into the concept of fairness.
Although transparency is still a fundamental part of this overarching principle, the detail of transparency
obligations is now set out in separate provisions on a new ‘right to be informed’.
What is the lawfulness, fairness and transparency principle?
Article 5(1) of the GDPR says:

“1. Personal data shall be:

(a) processed lawfully, fairly and in a transparent manner in relation to the data subject (‘lawfulness,
fairness, transparency’)”

There are more detailed provisions on lawfulness and having a ‘lawful basis for processing’ set out in
Articles 6 to 10.

There are more detailed transparency obligations set out in Articles 13 and 14, as part of the ‘right to be
informed’.

The three elements of lawfulness, fairness and transparency overlap, but you must make sure you
satisfy all three. It’s not enough to show your processing is lawful if it is fundamentally unfair to or
hidden from the individuals concerned.

What is lawfulness?

For processing of personal data to be lawful, you need to identify specific grounds for the processing.
This is called a ‘lawful basis’ for processing, and there are six options which depend on your purpose
and your relationship with the individual. There are also specific additional conditions for processing
some especially sensitive types of data. For more information, see the lawful basis section of this guide.

If no lawful basis applies then your processing will be unlawful and in breach of this principle.
Lawfulness also means that you don’t do anything with the personal data which is unlawful in a more
general sense. This includes statute and common law obligations, whether criminal or civil. If processing
involves committing a criminal offence, it will obviously be unlawful. However, processing may also be
unlawful if it results in:
a breach of a duty of confidence;
your organisation exceeding its legal powers or exercising those powers improperly;
an infringement of copyright;
a breach of an enforceable contractual agreement;
a breach of industry-specific legislation or regulations; or
a breach of the Human Rights Act 1998.
These are just examples, and this list is not exhaustive. You may need to take your own legal advice on
other relevant legal requirements.
Although processing personal data in breach of copyright or industry regulations (for example) will
involve unlawful processing in breach of this principle, this does not mean that the ICO can pursue
allegations which are primarily about breaches of copyright, financial regulations or other laws outside
our remit and expertise as data protection regulator. In this situation there are likely to be other legal or
regulatory routes of redress where the issues can be considered in a more appropriate forum.
If you have processed personal data unlawfully, the GDPR gives individuals the right to erase that data
or restrict your processing of it.
What is fairness?
Processing of personal data must always be fair as well as lawful. If any aspect of your processing is
unfair you will be in breach of this principle – even if you can show that you have a lawful basis for the
processing.
In general, fairness means that you should only handle personal data in ways that people would
reasonably expect and not use it in ways that have unjustified adverse effects on them. You need to
stop and think not just about how you can use personal data, but also about whether you should.
Assessing whether you are processing information fairly depends partly on how you obtain it. In
particular, if anyone is deceived or misled when the personal data is obtained, then this is unlikely to be
fair.
In order to assess whether or not you are processing personal data fairly, you must consider more
generally how it affects the interests of the people concerned – as a group and individually. If you have
obtained and used the information fairly in relation to most of the people it relates to but unfairly in
relation to one individual, there will still be a breach of this principle.
Personal data may sometimes be used in a way that negatively affects an individual without this
necessarily being unfair. What matters is whether or not such detriment is justified.
Example

Where personal data is collected to assess tax liability or to impose a fine for breaking the speed
limit, the information is being used in a way that may cause detriment to the individuals concerned,
but the proper use of personal data for these purposes will not be unfair.

You should also ensure that you treat individuals fairly when they seek to exercise their rights over their
data. This ties in with your obligation to facilitate the exercise of individuals’ rights. Read our guidance
on rights for more information.

What is transparency?

Transparency is fundamentally linked to fairness. Transparent processing is about being clear, open and
honest with people from the start about who you are, and how and why you use their personal data.

Transparency is always important, but especially in situations where individuals have a choice about
whether they wish to enter into a relationship with you. If individuals know at the outset what you will
use their information for, they will be able to make an informed decision about whether to enter into a
relationship, or perhaps to try to renegotiate the terms of that relationship.

Transparency is important even when you have no direct relationship with the individual and collect their
personal data from another source. In some cases, it can be even more important - as individuals may
have no idea that you are collecting and using their personal data, and this affects their ability to assert
their rights over their data. This is sometimes known as ‘invisible processing’.

You must ensure that you tell individuals about your processing in a way that is easily accessible and
easy to understand. You must use clear and plain language.

For more detail on your transparency obligations and the privacy information you must provide to
individuals, see our guidance on the right to be informed.

Further Reading

Relevant provisions in the GDPR - See Article 5(1)(a) and Recital 39 (principles), Article 6
(lawful bases), Article 9 (special category data), Article 10 (criminal offences data), Articles
13 and 14 (the right to be informed) and Article 17(1)(d) (the right to erasure)

Read our guidance on:

Lawful basis for processing
The right to be informed
Individuals’ rights
Purpose limitation
At a glance
You must be clear about what your purposes for processing are from the start.
You need to record your purposes as part of your documentation obligations and specify them in your
privacy information for individuals.
You can only use the personal data for a new purpose if either this is compatible with your original
purpose, you get consent, or you have a clear basis in law.
Checklist

☐ We have clearly identified our purpose or purposes for processing.
☐ We have documented those purposes.
☐ We include details of our purposes in our privacy information for individuals.
☐ We regularly review our processing and, where necessary, update our documentation and our
privacy information for individuals.
☐ If we plan to use personal data for a new purpose, we check that this is compatible with our
original purpose or we get specific consent for the new purpose.

In brief

What’s new under the GDPR?
What is the purpose limitation principle?
Why do we need to specify our purposes?
How do we specify our purposes?
Once we collect data for a specified purpose, can we use it for other purposes?
What is a 'compatible' purpose?

What’s new under the GDPR?

The purpose limitation principle is very similar to the second principle of the 1998 Act, with a few small
differences.

As with the 1998 Act, you still need to specify your purpose or purposes for processing at the outset.
However, under the GDPR you do this by complying with your documentation and transparency
obligations, rather than through registration with the ICO.
The purpose limitation principle still prevents you from using personal data for new purposes if they are
‘incompatible’ with your original purpose for collecting the data, but the GDPR contains more detail on
assessing compatibility.
Instead of an exemption for research purposes, the GDPR purpose limitation principle specifically says
that it does not prevent further processing for:
archiving purposes in the public interest;
scientific or historical research purposes; or
statistical purposes.
What is the purpose limitation principle?
Article 5(1)(b) says:

“1. Personal data shall be:

(b) collected for specified, explicit and legitimate purposes and not further processed in a manner
that is incompatible with those purposes; further processing for archiving purposes in the public
interest, scientific or historical research purposes or statistical purposes shall, in accordance with
Article 89(1), not be considered to be incompatible with the initial purposes.”

In practice, this means that you must:

be clear from the outset why you are collecting personal data and what you intend to do with it;
comply with your documentation obligations to specify your purposes;
comply with your transparency obligations to inform individuals about your purposes; and
ensure that if you plan to use or disclose personal data for any purpose that is additional to or
different from the originally specified purpose, the new use is fair, lawful and transparent.

Why do we need to specify our purposes?

This requirement aims to ensure that you are clear and open about your reasons for obtaining personal
data, and that what you do with the data is in line with the reasonable expectations of the individuals
concerned.

Specifying your purposes from the outset helps you to be accountable for your processing, and helps
you avoid ‘function creep’. It also helps individuals understand how you use their data, make decisions
about whether they are happy to share their details, and assert their rights over data where
appropriate. It is fundamental to building public trust in how you use personal data.

There are clear links with other principles – in particular, the fairness, lawfulness and transparency
principle. Being clear about why you are processing personal data will help you to ensure your
processing is fair, lawful and transparent. And if you use data for unfair, unlawful or ‘invisible’ reasons,
it’s likely to be a breach of both principles.
Specifying your purposes is necessary to comply with your accountability obligations.
How do we specify our purposes?
If you comply with your documentation and transparency obligations, you are likely to comply with the
requirement to specify your purposes without doing anything more:
You need to specify your purpose or purposes for processing personal data within the documentation
you are required to keep as part of your records of processing (documentation) obligations under
Article 30.
You also need to specify your purposes in your privacy information for individuals.
However, you should also remember that whatever you document, and whatever you tell people, this
cannot make fundamentally unfair processing fair and lawful.
If you are a small organisation and you are exempt from some documentation requirements, you may
not need to formally document all of your purposes to comply with the purpose limitation principle.
Listing your purposes in the privacy information you provide to individuals will be enough. However, it is
still good practice to document all of your purposes. For more information, read our documentation
guidance.
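The documentation obligation described above lends itself to a structured record. A minimal sketch in Python; the field names and values are purely illustrative and are not a prescribed Article 30 format:

```python
# Illustrative only: the fields mirror the kinds of details an Article 30
# record of processing typically captures, but this is not a mandated schema.
record_of_processing = {
    "purpose": "Administering staff payroll",
    "lawful_basis": "Contract (Article 6(1)(b))",
    "categories_of_data": ["names", "bank details", "salary"],
    "categories_of_data_subjects": ["employees"],
    "recipients": ["payroll provider"],
    "retention_period": "6 years after employment ends",
}

# The same purpose should also appear in the privacy information given to
# individuals, keeping documentation and transparency obligations aligned.
print(record_of_processing["purpose"])
```

Keeping the documented purpose and the purpose stated in privacy information in one place makes it easier to spot when they drift apart.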
If you have not provided privacy information because you are only using personal data for an obvious
purpose that individuals already know about, the “specified purpose” should be taken to be the obvious
purpose.
You should regularly review your processing, documentation and privacy information to check that your
purposes have not evolved over time beyond those you originally specified (‘function creep’).
Once we collect personal data for a specified purpose, can we use it for other purposes?
The GDPR does not ban this altogether, but there are restrictions. In essence, if your purposes change
over time or you want to use data for a new purpose which you did not originally anticipate, you can
only go ahead if:
the new purpose is compatible with the original purpose;
you get the individual’s specific consent for the new purpose; or
you can point to a clear legal provision requiring or allowing the new processing in the public interest
– for example, a new function for a public authority.
If your new purpose is compatible, you don’t need a new lawful basis for the further processing.
However, you should remember that if you originally collected the data on the basis of consent, you
usually need to get fresh consent to ensure your new processing is fair and lawful. See our lawful basis
guidance for more information.
You also need to make sure that you update your privacy information to ensure that your processing is
still transparent.
What is a ‘compatible’ purpose?02 August 2018 - 1.0.248 23
The GDPR specifically says that the following purposes should be considered to be compatible purposes:
archiving purposes in the public interest;
scientific or historical research purposes; and
statistical purposes.
Otherwise, the GDPR says that to decide whether a new purpose is compatible (or as the GDPR says,
“not incompatible”) with your original purpose you should take into account:
any link between your original purpose and the new purpose;
the context in which you originally collected the personal data – in particular, your relationship with
the individual and what they would reasonably expect;
the nature of the personal data – eg is it particularly sensitive;
the possible consequences for individuals of the new processing; and
whether there are appropriate safeguards - eg encryption or pseudonymisation.
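One of the safeguards mentioned above, pseudonymisation, can be sketched with a keyed hash. This is a minimal illustration, not a complete implementation: key management, storage and access controls are out of scope, and the key shown is a hypothetical placeholder.

```python
import hashlib
import hmac

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    The mapping is repeatable for the key holder, but the original
    value cannot be read back from the pseudonym itself."""
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

key = b"example-key-held-separately"  # hypothetical; store real keys securely
token = pseudonymise("alice@example.com", key)
print(token[:16])  # a stable pseudonym used in place of the email address
```

Note that pseudonymised data is still personal data under the GDPR, because the key holder can re-identify individuals.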
As a general rule, if the new purpose is either very different from the original purpose, would be
unexpected, or would have an unjustified impact on the individual, it is likely to be incompatible with
your original purpose. In practice, you are likely to need to ask for specific consent to use or disclose
data for this type of purpose.
There are clear links here with the lawfulness, fairness and transparency principle. In practice, if your
intended processing is fair, you are unlikely to breach the purpose limitation principle on the basis of
incompatibility.
Example

A GP discloses his patient list to his wife, who runs a travel agency, so that she can offer special
holiday deals to patients needing recuperation. Disclosing the information for this purpose would be
incompatible with the purposes for which it was obtained.

Further Reading

Relevant provisions in the GDPR - See Article 5(1)(b), Recital 39 (principles), Article 6(4) and
Recital 50 (compatibility) and Article 30 (documentation)

Read our guidance on:

Documentation
The right to be informed
Lawful basis for processing
Data minimisation
At a glance
You must ensure the personal data you are processing is:
adequate – sufficient to properly fulfil your stated purpose;
relevant – has a rational link to that purpose; and
limited to what is necessary – you do not hold more than you need for that purpose.
Checklist

☐ We only collect personal data we actually need for our specified purposes.
☐ We have sufficient personal data to properly fulfil those purposes.
☐ We periodically review the data we hold, and delete anything we don’t need.

In brief

What’s new under the GDPR?
What is the data minimisation principle?
How do we decide what is adequate, relevant and limited?
When could we be processing too much personal data?
When could we be processing inadequate personal data?
What about the adequacy and relevance of opinions?

What’s new under the GDPR?

Very little. The data minimisation principle is almost identical to the third principle (adequacy) of the
1998 Act.

The main difference in practice is that you must be prepared to demonstrate you have appropriate data
minimisation practices in line with new accountability obligations, and there are links to the new rights of
erasure and rectification.

What is the data minimisation principle?

Article 5(1)(c) says:
“1. Personal data shall be:

(c) adequate, relevant and limited to what is necessary in relation to the purposes for which they
are processed (‘data minimisation’)”

So you should identify the minimum amount of personal data you need to fulfil your purpose. You should
hold that much information, but no more.

This is the first of three principles about data standards, along with accuracy and storage limitation.

The accountability principle means that you need to be able to demonstrate that you have appropriate
processes to ensure that you only collect and hold the personal data you need.

Also bear in mind that the GDPR says individuals have the right to complete any incomplete data which
is inadequate for your purpose, under the right to rectification. They also have the right to get you to
delete any data that is not necessary for your purpose, under the right to erasure (right to be forgotten).

How do we decide what is adequate, relevant and limited?

The GDPR does not define these terms. Clearly, though, this will depend on your specified purpose for
collecting and using the personal data. It may also differ from one individual to another.

So, to assess whether you are holding the right amount of personal data, you must first be clear about
why you need it.

For special category data or criminal offence data, it is particularly important to make sure you collect
and retain only the minimum amount of information.

You may need to consider this separately for each individual, or for each group of individuals sharing
relevant characteristics. You should in particular consider any specific factors that an individual brings to
your attention – for example, as part of an objection, request for rectification of incomplete data, or
request for erasure of unnecessary data.

You should periodically review your processing to check that the personal data you hold is still relevant
and adequate for your purposes, and delete anything you no longer need. This is closely linked with the
storage limitation principle.

When could we be processing too much personal data?

You should not have more personal data than you need to achieve your purpose. Nor should the data
include irrelevant details.
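The periodic review described above, checking what you hold and deleting anything no longer needed, can be sketched as a simple filter over held records. The record fields, dates and the 365-day retention period here are all hypothetical; a real retention period depends on your purpose.

```python
from datetime import date

# Hypothetical records: each holds a data subject and when the data
# was last needed for the stated purpose.
held_records = [
    {"subject": "A", "last_needed": date(2018, 1, 10)},
    {"subject": "B", "last_needed": date(2016, 3, 5)},
]

def still_needed(record, today, retention_days=365):
    """Keep only data still necessary for the stated purpose.
    The 365-day retention period is illustrative, not a legal rule."""
    return (today - record["last_needed"]).days <= retention_days

today = date(2018, 8, 2)
kept = [r for r in held_records if still_needed(r, today)]
to_delete = [r for r in held_records if not still_needed(r, today)]
print([r["subject"] for r in kept])       # ['A']
print([r["subject"] for r in to_delete])  # ['B']
```

Running a review like this on a schedule, and recording that it happened, also helps demonstrate compliance under the accountability principle.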
Example

A debt collection agency is engaged to find a particular debtor. It collects information on several
people with a similar name to the debtor. During the enquiry some of these people are discounted.
The agency should delete most of their personal data, keeping only the minimum data needed to
form a basic record of a person they have removed from their search. It is appropriate to keep this
small amount of information so that these people are not contacted again about debts which do not
belong to them.

If you need to process particular information about certain individuals only, you should collect it just for
those individuals – the information is likely to be excessive and irrelevant in relation to other people.

Example

A recruitment agency places workers in a variety of jobs. It sends applicants a general
questionnaire, which includes specific questions about health conditions that are only relevant to
particular manual occupations. It would be irrelevant and excessive to obtain such information from
an individual who was applying for an office job.

You must not collect personal data on the off-chance that it might be useful in the future. However, you
may be able to hold information for a foreseeable event that may never occur if you can justify it.

Example

An employer holds details of the blood groups of some of its employees. These employees do
hazardous work and the information is needed in case of accident. The employer has in place safety
procedures to help prevent accidents so it may be that this data is never needed, but it still needs to
hold this information in case of emergency.

If the employer holds the blood groups of the rest of the workforce, though, such information is
likely to be irrelevant and excessive as they do not engage in the same hazardous work.

If you are holding more data than is actually necessary for your purpose, this is likely to be unlawful (as
most of the lawful bases have a necessity element) as well as a breach of the data minimisation
principle. Individuals will also have the right to erasure.

When could we be processing inadequate personal data?

If the processing you carry out is not helping you to achieve your purpose then the personal data you
have is probably inadequate. You should not process personal data if it is insufficient for its intended
purpose.

In some circumstances you may need to collect more personal data than you had originally anticipated
using, so that you have enough information for the purpose in question.
Data may also be inadequate if you are making decisions about someone based on an incomplete
understanding of the facts. In particular, if an individual asks you to supplement incomplete data under
their right to rectification, this could indicate that the data might be inadequate for your purpose.
Obviously it makes no business sense to have inadequate personal data – but you must be careful not
to go too far the other way and collect more than you need.
What about the adequacy and relevance of opinions?
A record of an opinion is not necessarily inadequate or irrelevant personal data just because the
individual disagrees with it or thinks it has not taken account of information they think is important.
However, in order to be adequate, your records should make clear that it is opinion rather than fact. The
record of the opinion (or of the context it is held in) should also contain enough information to enable a
reader to interpret it correctly. For example, it should state the date and the author’s name and position.
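The requirement above, that a record of an opinion be identifiable as opinion and carry enough context to interpret it, suggests a record shape like the following. The structure and field names are illustrative only, not a prescribed format:

```python
# Illustrative structure: the fields mirror the guidance above - clearly
# flagged as opinion, with the date and the author's name and position.
opinion_record = {
    "type": "opinion",            # distinguishes opinion from fact
    "statement": "The candidate communicated well at interview",
    "author": "J. Smith",
    "position": "Hiring manager",
    "date": "2018-07-15",
    "basis": "45-minute panel interview",  # evidence the opinion rests on
}

print(opinion_record["type"], "-", opinion_record["author"])
```

Recording the basis alongside the opinion also makes it easier to state the circumstances or evidence where the opinion is controversial or has significant impact.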
If an opinion is likely to be controversial or very sensitive, or if it will have a significant impact when
used or disclosed, it is even more important to state the circumstances or the evidence it is based on. If
a record contains an opinion that summarises more detailed records held elsewhere, you should make
this clear.
For more information about the accuracy of opinions, see our guidance on the accuracy principle.
Example

A group of individuals set up a club. At the outset the club has only a handful of members, who all
know each other, and the club’s activities are administered using only basic information about the
members’ names and email addresses. The club proves to be very popular and its membership
grows rapidly. It becomes necessary to collect additional information about members so that the
club can identify them properly, and so that it can keep track of their membership status,
subscription payments etc.

Example

A GP’s record may hold only a letter from a consultant and it will be the hospital file that contains
greater detail. In this case, the record of the consultant’s opinion should contain enough information
to enable detailed records to be traced.

Further Reading

Relevant provisions in the GDPR - Article 5(1)(c) and Recital 39, and Article 16 (right to
rectification) and Article 17 (right to erasure)

Read our guidance on:

The storage limitation principle
The accountability principle
The right to rectification
The right to erasure
Accuracy
At a glance
You should take all reasonable steps to ensure the personal data you hold is not incorrect or
misleading as to any matter of fact.
You may need to keep the personal data updated, although this will depend on what you are using it
for.
If you discover that personal data is incorrect or misleading, you must take reasonable steps to
correct or erase it as soon as possible.
You must carefully consider any challenges to the accuracy of personal data.
Checklist

☐ We ensure the accuracy of any personal data we create.
☐ We have appropriate processes in place to check the accuracy of the data we collect, and we
record the source of that data.
☐ We have a process in place to identify when we need to keep the data updated to properly
fulfil our purpose, and we update it as necessary.
☐ If we need to keep a record of a mistake, we clearly identify it as a mistake.
☐ Our records clearly identify any matters of opinion, and where appropriate whose opinion it is
and any relevant changes to the underlying facts.
☐ We comply with the individual’s right to rectification and carefully consider any challenges to
the accuracy of the personal data.
☐ As a matter of good practice, we keep a note of any challenges to the accuracy of the
personal data.

In brief

What’s new under the GDPR?
What is the accuracy principle?
When is personal data ‘accurate’ or ‘inaccurate’?
What about records of mistakes?
What about accuracy of opinions?
Does personal data always have to be up to date?
What steps do we need to take to ensure accuracy?
What should we do if an individual challenges the accuracy of their personal data?
What’s new under the GDPR?
The accuracy principle is very similar to the fourth principle of the 1998 Act, with a couple of
differences:
The GDPR principle includes a clearer proactive obligation to take reasonable steps to delete or
correct inaccurate personal data.
The GDPR does not explicitly distinguish between personal data that you create and personal data
that someone else provides.
However, the ICO does not consider that this requires a major change in approach. The main difference
in practice is that individuals have a stronger right to have inaccurate personal data corrected under the
right to rectification.
What is the accuracy principle?
Article 5(1)(d) says:

“1. Personal data shall be:

(d) accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure
that personal data that are inaccurate, having regard to the purposes for which they are processed,
are erased or rectified without delay (‘accuracy’)”

This is the second of three principles about data standards, along with data minimisation and storage
limitation.

There are clear links here to the right to rectification, which gives individuals the right to have inaccurate
personal data corrected.

In practice, this means that you must:

take reasonable steps to ensure the accuracy of any personal data;
ensure that the source and status of personal data is clear;
carefully consider any challenges to the accuracy of information; and
consider whether it is necessary to periodically update the information.

When is personal data ‘accurate’ or ‘inaccurate’?
The GDPR does not define the word ‘accurate’. However, the Data Protection Act 2018 does say that
‘inaccurate’ means “incorrect or misleading as to any matter of fact”. It will usually be obvious whether
personal data is accurate.
You must always be clear about what you intend the record of the personal data to show. What you use
it for may affect whether it is accurate or not. For example, just because personal data has changed
doesn’t mean that a historical record is inaccurate – but you must be clear that it is a historical record.
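One way to keep a changed record accurate is to mark old entries explicitly as historical, so they remain a true record of the past. A minimal sketch with hypothetical fields, using the house-move situation described in this guidance:

```python
# Hypothetical address history: the current entry is distinguished from
# historical ones, so the old data stays accurate as a record of the past.
address_history = [
    {"city": "London", "valid_from": "2010-01-01",
     "valid_to": "2018-06-30", "current": False},
    {"city": "Manchester", "valid_from": "2018-07-01",
     "valid_to": None, "current": True},
]

current = next(a for a in address_history if a["current"])
print(current["city"])  # Manchester
```

The London entry is not inaccurate here, because the record is explicit that it describes where the individual once lived, not where they live now.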
What about records of mistakes?
There is often confusion about whether it is appropriate to keep records of things that happened which
should not have happened. Individuals understandably do not want their records to be tarnished by, for
example, a penalty or other charge that was later cancelled or refunded.
However, you may legitimately need your records to accurately reflect the order of events – in this
example, that a charge was imposed, but later cancelled or refunded. Keeping a record of the mistake
and its correction might also be in the individual’s best interests.
It is acceptable to keep records of mistakes, provided those records are not misleading about the facts.
Example
If an individual moves house from London to Manchester a record saying that they currently live in
London will obviously be inaccurate. However a record saying that the individual once lived in
London remains accurate, even though they no longer live there.
Example
The Postcode Address File (PAF) contains UK property postal addresses. It is structured to reflect the
way the Royal Mail delivers post. So it is common for someone to have a postal address linked to a
town in one county (eg Stoke-on-Trent in Staffordshire) even if they actually live in another county
(eg Cheshire) and pay council tax to that council. The PAF file is not intended to accurately reflect
county boundaries.
Example
A misdiagnosis of a medical condition continues to be held as part of a patient’s medical records
even after the diagnosis is corrected, because it is relevant for the purpose of explaining treatment
given to the patient, or for other health problems.
You may need to add a note to make clear that a mistake was made.
What about accuracy of opinions?
A record of an opinion is not necessarily inaccurate personal data just because the individual disagrees
with it, or it is later proved to be wrong. Opinions are, by their very nature, subjective and not intended
to record matters of fact.
However, in order to be accurate, your records must make clear that it is an opinion, and, where
appropriate, whose opinion it is. If it becomes clear that an opinion was based on inaccurate data, you
should also record this fact in order to ensure your records are not misleading.
Example
An individual finds that, because of an error, their account with their existing energy supplier has
been closed and an account opened with a new supplier. Understandably aggrieved, they believe the
original account should be reinstated and no record kept of the unauthorised transfer. Although this
reaction is understandable, if their existing supplier did close their account, and another supplier
opened a new account, then records reflecting what actually happened will be accurate. In such
cases it makes sense to ensure that the record clearly shows that an error occurred.
Example
An individual is dismissed for alleged misconduct. An Employment Tribunal finds that the dismissal
was unfair and the individual is reinstated. The individual demands that the employer deletes all
references to misconduct. However, the record of the dismissal is accurate. The Tribunal’s decision
was that the employee should not have been dismissed on those grounds. The employer should
ensure its records reflect this.
If an individual challenges the accuracy of an opinion, it is good practice to add a note recording the
challenge and the reasons behind it.
How much weight is actually placed on an opinion is likely to depend on the experience and reliability of
the person whose opinion it is, and what they base their opinion on. An opinion formed during a brief
meeting will probably be given less weight than one derived from considerable dealings with the
individual. However, this is not really an issue of accuracy. Instead, you need to consider whether the
personal data is “adequate” for your purposes, in line with the data minimisation principle.
Note that some records that may appear to be opinions do not contain an opinion at all. For example,
many financial institutions use credit scores to help them decide whether to provide credit. A credit
score is a number that summarises the historical credit information on a credit report and provides a
numerical predictor of the risk involved in granting an individual credit. Credit scores are based on a
statistical analysis of individuals’ personal data, rather than on a subjective opinion about their
creditworthiness. However, you must ensure the accuracy (and adequacy) of the underlying data.
Example
An area of particular sensitivity is medical opinion, where doctors routinely record their opinions
about possible diagnoses. It is often impossible to conclude with certainty, perhaps until time has
passed or tests have been done, whether a patient is suffering from a particular condition. An initial
diagnosis (which is an informed opinion) may prove to be incorrect after more extensive
examination or further tests. However, if the patient’s records reflect the doctor’s diagnosis at the
time, the records are not inaccurate, because they accurately reflect that doctor’s opinion at a
particular time. Moreover, the record of the doctor’s initial diagnosis may help those treating the
patient later, and in data protection terms is required in order to comply with the ‘adequacy’ element
of the data minimisation principle.
Does personal data always have to be up to date?
This depends on what you use the information for. If you use the information for a purpose that relies on
it remaining current, you should keep it up to date. For example, you should update your employee
payroll records when there is a pay rise. Similarly, you should update your records for customers’
changes of address so that goods are delivered to the correct location.
In other cases, it will be equally obvious that you do not need to update information.
Example
An individual places a one-off order with an organisation. The organisation will probably have good
reason to retain a record of the order for a certain period for accounting reasons and because of
possible complaints. However, this does not mean that it has to regularly check that the customer is
still living at the same address.
You do not need to update personal data if this would defeat the purpose of the processing. For
example, if you hold personal data only for statistical, historical or other research reasons, updating the
data might defeat that purpose.
In some cases it is reasonable to rely on the individual to tell you when their personal data has changed,
such as when they change address or other contact details. It may be sensible to periodically ask
individuals to update their own details, but you do not need to take extreme measures to ensure your
records are up to date, unless there is a corresponding privacy risk which justifies this.
However, if an individual informs the organisation of a new address, it should update its records. And if a
mailing is returned with the message ‘not at this address’ marked on the envelope – or any other
information comes to light which suggests the address is no longer accurate – the organisation should
update its records to indicate that the address is no longer current.
What steps do we need to take to ensure accuracy?
Where you use your own resources to compile personal data about an individual, then you must make
sure the information is correct. You should take particular care if the information could have serious
implications for the individual. If, for example, you give an employee a pay increase on the basis of an
annual increment and a performance bonus, then there is no excuse for getting the new salary figure
wrong in your payroll records.
We recognise that it may be impractical to check the accuracy of personal data someone else provides.
In order to ensure that your records are not inaccurate or misleading in this case, you must:
accurately record the information provided;
accurately record the source of the information;
take reasonable steps in the circumstances to ensure the accuracy of the information; and
carefully consider any challenges to the accuracy of the information.
What is a ‘reasonable step’ will depend on the circumstances and, in particular, the nature of the
personal data and what you will use it for. The more important it is that the personal data is accurate,
the greater the effort you should put into ensuring its accuracy. So if you are using the data to make
decisions that may significantly affect the individual concerned or others, you need to put more effort
into ensuring accuracy. This may mean you have to get independent confirmation that the data is
accurate. For example, where the precise details of job applicants’ education, qualifications and
work experience are essential for the particular role, employers would need to check them and
obtain authoritative verification.
Example
An organisation keeps addresses and contact details of previous customers for marketing purposes.
It does not have to use data matching or tracing services to ensure its records are up to date – and
it may actually be difficult to show that the processing involved in data matching or tracing for these
purposes is fair, lawful and transparent.
If your information source is someone you know to be reliable, or is a well-known organisation, it is
usually reasonable to assume that they have given you accurate information. However, in some
circumstances you need to double-check – for example if inaccurate information could have serious
consequences, or if common sense suggests there may be a mistake.
Even if you originally took all reasonable steps to ensure the accuracy of the data, if you later get any
new information which suggests it may be wrong or misleading, you should reconsider whether it is
accurate and take steps to erase, update or correct it in light of that new information as soon as
possible.
Example
An organisation recruiting a driver will want proof that the individuals they interview are entitled to
drive the type of vehicle involved. The fact that an applicant states in his work history that he
worked as a Father Christmas in a department store 20 years ago does not need to be checked for
this particular job.
Example
A business that is closing down recommends a member of staff to another organisation. Assuming
the two employers know each other, it may be reasonable for the organisation to which the
recommendation is made to accept assurances about the individual’s work experience at face value.
However, if a particular skill or qualification is needed for the new job role, the organisation needs to
make appropriate checks.
Example
An individual sends an email to her mobile phone company requesting that it changes its records
about her willingness to receive marketing material. The company amends its records accordingly
without making any checks. However, when the customer emails again asking the company to send
her bills to a new address, they carry out additional security checks before making the requested
change.
What should we do if an individual challenges the accuracy of their personal data?
If this happens, you should consider whether the information is accurate and, if it is not, you should
delete or correct it.
Remember that individuals have the absolute right to have incorrect personal data rectified – see the
right to rectification for more information.
Individuals don’t have the right to erasure just because data is inaccurate. However, the accuracy
principle requires you to take all reasonable steps to erase or rectify inaccurate data without delay, and
it may be reasonable to erase the data in some cases. If an individual asks you to delete inaccurate
data it is therefore good practice to consider this request.
Further Reading
Relevant provisions in the GDPR - See Article 5(1)(c) and Article 16 (the right to rectification)
and Article 17 (the right to erasure)
External link
Further reading
Read our guidance on:
The right to rectification
The right to erasure
Storage limitation
At a glance
You must not keep personal data for longer than you need it.
You need to think about – and be able to justify – how long you keep personal data. This will depend
on your purposes for holding the data.
You need a policy setting standard retention periods wherever possible, to comply with
documentation requirements.
You should also periodically review the data you hold, and erase or anonymise it when you no longer
need it.
You must carefully consider any challenges to your retention of data. Individuals have a right to
erasure if you no longer need the data.
You can keep personal data for longer if you are only keeping it for public interest archiving, scientific
or historical research, or statistical purposes.
Checklist
☐ We know what personal data we hold and why we need it.
☐ We carefully consider and can justify how long we keep personal data.
☐ We have a policy with standard retention periods where possible, in line with documentation
obligations.
☐ We regularly review our information and erase or anonymise personal data when we no
longer need it.
☐ We have appropriate processes in place to comply with individuals’ requests for erasure under
‘the right to be forgotten’.
☐ We clearly identify any personal data that we need to keep for public interest archiving,
scientific or historical research, or statistical purposes.
Other resources
For more detailed checklists and practice advice on retention, please use the ICO’s self-assessment
toolkit - records management checklist
In brief
What’s new under the GDPR?
What is the storage limitation principle?
Why is storage limitation important?
Do we need a retention policy?
How should we set retention periods?
When should we review our retention?
What should we do with personal data that we no longer need?
How long can we keep personal data for archiving, research or statistical purposes?
How does this apply to data sharing?
What’s new under the GDPR?
The storage limitation principle is broadly similar to the fifth principle (retention) of the 1998 Act. The
key point remains that you must not keep data for longer than you need it.
Although there is no underlying change, the GDPR principle does highlight that you can keep
anonymised data for as long as you want. In other words, you can either delete or anonymise the
personal data once you no longer need it.
Instead of an exemption for research purposes, the GDPR principle specifically says that you can keep
personal data for longer if you are only keeping it for public interest archiving, scientific or historical
research, or statistical purposes (and you have appropriate safeguards).
New documentation provisions mean that you must now have a policy setting standard retention periods
where possible.
There are also clear links to the new right to erasure (right to be forgotten). In practice, this means you
must now review whether you still need to keep personal data if an individual asks you to delete it.
What is the storage limitation principle?
Article 5(1)(e) says:
“1. Personal data shall be:
(e) kept in a form which permits identification of data subjects for no longer than is necessary for
the purposes for which the personal data are processed; personal data may be stored for longer
periods insofar as the personal data will be processed solely for archiving purposes in the public
interest, scientific or historical research purposes or statistical purposes in accordance with
Article 89(1) subject to implementation of the appropriate technical and organisational measures
required by this Regulation in order to safeguard the rights and freedoms of the data subject
(‘storage limitation’)”
So, even if you collect and use personal data fairly and lawfully, you cannot keep it for longer than you
actually need it.
There are close links here with the data minimisation and accuracy principles.
The GDPR does not set specific time limits for different types of data. This is up to you, and will depend
on how long you need the data for your specified purposes.
Why is storage limitation important?
Ensuring that you erase or anonymise personal data when you no longer need it will reduce the risk that
it becomes irrelevant, excessive, inaccurate or out of date. Apart from helping you to comply with the
data minimisation and accuracy principles, this also reduces the risk that you will use such data in error
– to the detriment of all concerned.
Personal data held for too long will, by definition, be unnecessary. You are unlikely to have a lawful basis
for retention.
From a more practical perspective, it is inefficient to hold more personal data than you need, and there
may be unnecessary costs associated with storage and security.
Remember that you must also respond to subject access requests for any personal data you hold. This
may be more difficult if you are holding old data for longer than you need.
Good practice around storage limitation - with clear policies on retention periods and erasure - is also
likely to reduce the burden of dealing with queries about retention and individual requests for erasure.
Do we need a retention policy?
Retention policies or retention schedules list the types of record or information you hold, what you use it
for, and how long you intend to keep it. They help you establish and document standard retention
periods for different categories of personal data.
A retention schedule may form part of a broader ‘information asset register’ (IAR), or your general
processing documentation.
To comply with documentation requirements , you need to establish and document standard retention
periods for different categories of information you hold wherever possible. It is also advisable to have a
system for ensuring that your organisation keeps to these retention periods in practice, and for
reviewing retention at appropriate intervals. Your policy must also be flexible enough to allow for early
deletion if appropriate. For example, if you are not actually using a record, you should reconsider
whether you need to retain it.
If you are a small organisation undertaking occasional low-risk processing, you may not need a
documented retention policy.
However, if you don’t have a retention policy (or if it doesn’t cover all of the personal data you hold),
you must still regularly review the data you hold, and delete or anonymise anything you no longer need.
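One way to establish and document standard retention periods is to keep the schedule in machine-readable form, so the same source of truth drives both your documentation and any automated checks. A minimal sketch (Python; the categories, periods and purposes are invented examples, not recommended figures):

```python
from datetime import timedelta

# Illustrative retention schedule -- the categories, periods and purposes are
# invented examples; set and justify your own based on your processing purposes.
RETENTION_SCHEDULE = {
    "payroll_records": {"period": timedelta(days=6 * 365), "purpose": "tax and audit"},
    "recruitment_unsuccessful": {"period": timedelta(days=180), "purpose": "defend tribunal claims"},
    "marketing_contacts": {"period": timedelta(days=2 * 365), "purpose": "direct marketing"},
}

def retention_period(category: str) -> timedelta:
    """Look up the documented period; an undocumented category forces a manual review."""
    try:
        return RETENTION_SCHEDULE[category]["period"]
    except KeyError:
        raise LookupError(f"No documented retention period for {category!r}; review required")

print(retention_period("payroll_records").days)  # 2190
```

Recording the purpose alongside each period makes it easier to justify the retention, and to spot early deletion candidates when a purpose no longer applies.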
Further reading – records management and retention schedules
The National Archives (TNA) publishes practical guidance for public authorities on a range of
records management topics, including retention and disposal. This guidance can help you comply
with the storage limitation principle (even if you are not a public authority):
Disposing of records
FOI Records Management Code – Guide 8: Disposal of records
The Keeper of the Records of Scotland also publishes guidance on Scottish public authorities’
records management obligations, including specific guidance on retention schedules.
How should we set retention periods?
The GDPR does not dictate how long you should keep personal data. It is up to you to justify this, based
on your purposes for processing. You are in the best position to judge how long you need it.
You must also be able to justify why you need to keep personal data in a form that permits identification
of individuals. If you do not need to identify individuals, you should anonymise the data so that
identification is no longer possible.
For example:
You should consider your stated purposes for processing the personal data. You can keep it as long
as one of those purposes still applies, but you should not keep data indefinitely ‘just in case’, or if
there is only a small possibility that you will use it.
Example
A bank holds personal data about its customers. This includes details of each customer’s address,
date of birth and mother’s maiden name. The bank uses this information as part of its security
procedures. It is appropriate for the bank to retain this data for as long as the customer has an
account with the bank. Even after the account has been closed, the bank may need to continue
holding some of this information for legal or operational reasons for a further set time.
Example
A bank may need to retain images from a CCTV system installed to prevent fraud at an ATM for
several weeks, since a suspicious transaction may not come to light until the victim gets their bank
statement. In contrast, a pub may only need to retain images from its CCTV system for a short
period, because incidents will come to light very quickly. However, if a crime is reported to the
police, the pub will need to retain images until the police have time to collect them.
You should consider whether you need to keep a record of a relationship with the individual once that
relationship ends. You may not need to delete all personal data when the relationship ends. You may
need to keep some information so that you can confirm that the relationship existed – and that it has
ended – as well as some of its details.
Example
A tracing agency holds personal data about a debtor so that it can find that individual on behalf of a
creditor. Once it has found the individual and reported to the creditor, there may be no need to
retain the information about the debtor – the agency should remove it from their systems unless
there are good reasons for keeping it. Such reasons could include if the agency has also been asked
to collect the debt, or because the agency is authorised to use the information to trace debtors on
behalf of other creditors.
Example
A business may need to keep some personal data about a previous customer so that they can deal
with any complaints the customer might make about the services they provided.
Example
An employer should review the personal data it holds about an employee when they leave the
organisation’s employment. It will need to retain enough data to enable the organisation to deal
with, for example, providing references or pension arrangements. However, it should delete
personal data that it is unlikely to need again from its records – such as the employee’s emergency
contact details, previous addresses, or death-in-service beneficiary details.
Example
A business receives a notice from a former customer requiring it to stop processing the customer’s
personal data for direct marketing. It is appropriate for the business to retain enough information
about the former customer for it to stop including that person in future direct marketing activities.
You should consider whether you need to keep information to defend possible future legal claims.
However, you could still delete information that could not possibly be relevant to such a claim. Unless
there is some other reason for keeping it, personal data should be deleted when such a claim could
no longer arise.
You should consider any legal or regulatory requirements. There are various legal requirements and
professional guidelines about keeping certain kinds of records – such as information needed for
income tax and audit purposes, or information on aspects of health and safety. If you keep personal
data to comply with a requirement like this, you will not be considered to have kept the information
for longer than necessary.
You should consider any relevant industry standards or guidelines. For example, we have agreed that
credit reference agencies are permitted to keep consumer credit data for six years. Industry
guidelines are a good starting point for standard retention periods and are likely to take a considered
approach. However, they do not guarantee compliance. You must still be able to explain why those
periods are justified, and keep them under review.
You must remember to take a proportionate approach, balancing your needs with the impact of
retention on individuals’ privacy. Don’t forget that your retention of the data must also always be fair
and lawful.
When should we review our retention?
You should review whether you still need personal data at the end of any standard retention period, and
erase or anonymise it unless there is a clear justification for keeping it for longer. Automated systems
can flag records for review, or delete information after a pre-determined period. This is particularly
useful if you hold many records of the same type.
It is also good practice to review your retention of personal data at regular intervals before this,
especially if the standard retention period is lengthy or there is potential for a significant impact on
individuals.
If you don’t have a set retention period for the personal data, you must regularly review whether you
still need it.
However, there is no firm rule about how regular these reviews must be. Your resources may be a
relevant factor here, along with the privacy risk to individuals. The important thing to remember is that
you must be able to justify your retention and how often you review it.
You must also review whether you still need personal data if the individual asks you to. Individuals have
the absolute right to erasure of personal data that you no longer need for your specified purposes.
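An automated review of the kind described above might be sketched as follows (Python; the record layout and the retention period are invented for illustration). Records held beyond their documented period are flagged for review rather than silently retained:

```python
from datetime import date, timedelta

# Illustrative only: the record structure and period below are invented.
def flag_for_review(records, retention, today=None):
    """Return the records held beyond their documented retention period."""
    today = today or date.today()
    return [
        r for r in records
        if today - r["collected"] > retention[r["category"]]
    ]

retention = {"customer_order": timedelta(days=6 * 365)}
records = [
    {"id": 1, "category": "customer_order", "collected": date(2010, 1, 1)},
    {"id": 2, "category": "customer_order", "collected": date(2018, 1, 1)},
]

overdue = flag_for_review(records, retention, today=date(2018, 8, 2))
print([r["id"] for r in overdue])  # [1] -- due for erasure, anonymisation or review
```

Flagging rather than auto-deleting leaves room for the human judgement the guidance calls for: a clear justification may exist for keeping a particular record longer.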
Example
An employer receives several applications for a job vacancy. Unless there is a clear business reason
for doing so, the employer should not keep recruitment records for unsuccessful applicants beyond
the statutory period in which a claim arising from the recruitment process may be brought.
What should we do with personal data that we no longer need?
You can either erase (delete) it, or anonymise it.
You need to remember that there is a significant difference between permanently deleting personal
data, and taking it offline. If personal data is stored offline, this should reduce its availability and the risk
of misuse or mistake. However, you are still processing personal data. You should only store it offline
(rather than delete it) if you can still justify holding it. You must be prepared to respond to subject
access requests for personal data stored offline, and you must still comply with all the other principles
and rights.
The word ‘deletion’ can mean different things in relation to electronic data, and we recognise it is not
always possible to delete or erase all traces of the data. The key issue is to ensure you put the data
beyond use. If it is appropriate to delete personal data from a live system, you should also delete it
from any back-up of the information on that system.
Alternatively, you can anonymise the data so that it is no longer “in a form which permits identification
of data subjects”.
Personal data that has been pseudonymised – eg key-coded – will usually still permit identification.
Pseudonymisation can be a useful tool for compliance with other principles such as data minimisation
and security, but the storage limitation principle still applies.
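The difference between pseudonymisation and anonymisation can be sketched in a few lines (Python; a deliberately minimal illustration, not a production technique). While the key table exists, the code can be linked back to the individual, so the data still permits identification; destroying the key is what removes that link:

```python
import secrets

# Minimal sketch: pseudonymisation keeps a key table (which should be stored
# securely and separately), so re-identification remains possible and the
# key-coded data is still personal data.
key_table = {}

def pseudonymise(name: str) -> str:
    code = "P-" + secrets.token_hex(4)   # random code replaces the identifier
    key_table[code] = name
    return code

def reidentify(code: str) -> str:
    return key_table[code]               # possible while the key table exists

code = pseudonymise("Alice Example")
assert reidentify(code) == "Alice Example"  # still identifiable via the key

# Anonymisation destroys the link entirely; only then is the retained data no
# longer "in a form which permits identification of data subjects".
key_table.clear()
```

In practice the key table would live in a separate, access-controlled store; keeping it alongside the coded data would defeat the point of the exercise.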
How long can we keep personal data for archiving, research or statistical purposes?
You can keep personal data indefinitely if you are holding it only for:
archiving purposes in the public interest;
scientific or historical research purposes; or
statistical purposes.
Although the general rule is that you cannot hold personal data indefinitely ‘just in case’ it might be
useful in future, there is an inbuilt exception if you are keeping it for these archiving, research or
statistical purposes.
You must have appropriate safeguards in place to protect individuals. For example, pseudonymisation
may be appropriate in some cases.
This must be your only purpose. If you justify indefinite retention on this basis, you cannot later use that
data for another purpose - in particular for any decisions affecting particular individuals. This does not
prevent other organisations from accessing public archives, but they must ensure their own collection
and use of the personal data complies with the principles.
Further reading
We produced detailed guidance on the issues surrounding deletion under the 1998 Act. This will be
updated for the GDPR in due course, but in the meantime still offers useful guidance on the practical
issues surrounding deletion:
Deleting personal data
How does this apply to data sharing?
If you share personal data with other organisations, you should agree between you what happens once
you no longer need to share the data. In some cases, it may be best to return the shared data to the
organisation that supplied it without keeping a copy. In other cases, all of the organisations involved
should delete their copies of the personal data.
The organisations involved in an information-sharing initiative may each need to set their own retention
periods, because some may have good reasons to retain personal data for longer than others. However,
if you all only hold the data for the purposes of the data-sharing initiative and it is no longer needed for
that initiative, then all organisations with copies of the information should delete it.
Example
Personal data about the customers of Company A is shared with Company B, which is negotiating to
buy Company A’s business. The companies arrange for Company B to keep the information
confidential, and use it only in connection with the proposed transaction. The sale does not go ahead
and Company B returns the customer information to Company A without keeping a copy.
Further Reading
Relevant provisions in the GDPR - See Articles 5(1)(e), 17(1)(a), 30(1)(f) and 89, and Recital 39
External link
Further reading – ICO guidance
Read our guidance on documentation and the right to erasure
Integrity and confidentiality (security)
You must ensure that you have appropriate security measures in place to protect the personal data you
hold.
This is the ‘integrity and confidentiality’ principle of the GDPR – also known as the security principle.
For more information, see the security section of this guide.
Accountability principle
The accountability principle requires you to take responsibility for what you do with personal data and
how you comply with the other principles.
You must have appropriate measures and records in place to be able to demonstrate your compliance.
For more information, see the accountability and governance section of this guide.
Lawful basis for processing
At a glance
You must have a valid lawful basis in order to process personal data.
There are six available lawful bases for processing. No single basis is ’better’ or more important than
the others – which basis is most appropriate to use will depend on your purpose and relationship with
the individual.
Most lawful bases require that processing is ‘necessary’. If you can reasonably achieve the same
purpose without the processing, you won’t have a lawful basis.
You must determine your lawful basis before you begin processing, and you should document it.
Take care to get it right first time - you should not swap to a different lawful basis at a later date
without good reason. In particular, you cannot usually swap from consent to a different basis.
Your privacy notice should include your lawful basis for processing as well as the purposes of the
processing.
If your purposes change, you may be able to continue processing under the original lawful basis if
your new purpose is compatible with your initial purpose (unless your original lawful basis was
consent).
If you are processing special category data you need to identify both a lawful basis for general
processing and an additional condition for processing this type of data.
If you are processing criminal conviction data or data about offences you need to identify both a
lawful basis for general processing and an additional condition for processing this type of data.
Checklist
☐ We have reviewed the purposes of our processing activities, and selected the most
appropriate lawful basis (or bases) for each activity.
☐ We have checked that the processing is necessary for the relevant purpose, and are satisfied
that there is no other reasonable way to achieve that purpose.
☐ We have documented our decision on which lawful basis applies to help us demonstrate
compliance.
☐ We have included information about both the purposes of the processing and the lawful basis
for the processing in our privacy notice.
☐ Where we process special category data, we have also identified a condition for processing
special category data, and have documented this.
☐ Where we process criminal offence data, we have also identified a condition for processing
this data, and have documented this.
In brief
What’s new?
Why is the lawful basis for processing important?
What are the lawful bases?
When is processing ‘necessary’?
How do we decide which lawful basis applies?
When should we decide on our lawful basis?
What happens if we have a new purpose?
How should we document our lawful basis?
What do we need to tell people?
What about special category data?
What about criminal conviction data?
What's new?
The requirement to have a lawful basis in order to process personal data is not new. It replaces and
mirrors the previous requirement to satisfy one of the ‘conditions for processing’ under the Data
Protection Act 1998 (the 1998 Act). However, the GDPR places more emphasis on being accountable for
and transparent about your lawful basis for processing.
The six lawful bases for processing are broadly similar to the old conditions for processing, although
there are some differences. You now need to review your existing processing, identify the most
appropriate lawful basis, and check that it applies. In many cases it is likely to be the same as your
existing condition for processing.
The biggest change is for public authorities, who now need to consider the new ‘public task’ basis first
for most of their processing, and have more limited scope to rely on consent or legitimate interests.
You can choose a new lawful basis if you find that your old condition for processing is no longer
appropriate under the GDPR, or decide that a different basis is more appropriate. You should try to get
this right first time. Once the GDPR is in effect, it will be much harder to swap between lawful bases at
will if you find that your original basis was invalid. You will be in breach of the GDPR if you did not
clearly identify the appropriate lawful basis (or bases, if more than one applies) from the start.
The GDPR brings in new accountability and transparency requirements. You should therefore make sure
you clearly document your lawful basis so that you can demonstrate your compliance in line with Articles
5(2) and 24.
You must now inform people upfront about your lawful basis for processing their personal data. You
need therefore to communicate this information to individuals by 25 May 2018, and ensure that you
include it in all future privacy notices.
Further Reading
Why is the lawful basis for processing important?
The first principle requires that you process all personal data lawfully, fairly and in a transparent
manner. Processing is only lawful if you have a lawful basis under Article 6. And to comply with the
accountability principle in Article 5(2), you must be able to demonstrate that a lawful basis applies.
If no lawful basis applies to your processing, your processing will be unlawful and in breach of the first
principle. Individuals also have the right to erase personal data which has been processed unlawfully.
The individual’s right to be informed under Articles 13 and 14 requires you to provide people with
information about your lawful basis for processing. This means you need to include these details in your
privacy notice.
The lawful basis for your processing can also affect which rights are available to individuals. For
example, some rights will not apply:
However, an individual always has the right to object to processing for the purposes of direct marketing,
whatever lawful basis applies.
The remaining rights are not always absolute, and there are other rights which may be affected in other
ways. For example, your lawful basis may affect how provisions relating to automated decisions and
profiling apply, and if you are relying on legitimate interests you need more detail in your privacy notice
to comply with the right to be informed.
Please read the section of this Guide on individuals’ rights for full details.
Further Reading
Relevant provisions in the GDPR - See Article 6 and Recital 171, and Article 5(2)
External link
What are the lawful bases for processing?
The lawful bases for processing are set out in Article 6 of the GDPR. At least one of these must apply
whenever you process personal data:
(a) Consent: the individual has given clear consent for you to process their personal data for a specific
purpose.
(b) Contract: the processing is necessary for a contract you have with the individual, or because they
have asked you to take specific steps before entering into a contract.

Relevant provisions in the GDPR - See Article 6 and Recitals 39, 40, and Chapter III (Rights of the
data subject)
External link
(c) Legal obligation: the processing is necessary for you to comply with the law (not including
contractual obligations).
(d) Vital interests: the processing is necessary to protect someone’s life.
(e) Public task: the processing is necessary for you to perform a task in the public interest or for your
official functions, and the task or function has a clear basis in law.
(f) Legitimate interests: the processing is necessary for your legitimate interests or the legitimate
interests of a third party unless there is a good reason to protect the individual’s personal data which
overrides those legitimate interests. (This cannot apply if you are a public authority processing data to
perform your official tasks.)
For more detail on each lawful basis, read the relevant page of this guide.
Further Reading
When is processing ‘necessary’?
Many of the lawful bases for processing depend on the processing being “necessary”. This does not
mean that processing always has to be essential. However, it must be a targeted and proportionate way
of achieving the purpose. The lawful basis will not apply if you can reasonably achieve the purpose by
some other less intrusive means.
It is not enough to argue that processing is necessary because you have chosen to operate your
business in a particular way. The question is whether the processing is necessary for the stated
purpose, not whether it is a necessary part of your chosen method of pursuing that purpose.
How do we decide which lawful basis applies?
This depends on your specific purposes and the context of the processing. You should consider which
lawful basis best fits the circumstances. You might consider that more than one basis applies, in which
case you should identify and document all of them from the start.
You must not adopt a one-size-fits-all approach. No one basis should be seen as always better, safer or
more important than the others, and there is no hierarchy in the order of the list in the GDPR.
You may need to consider a variety of factors, including:
What is your purpose – what are you trying to achieve?
Can you reasonably achieve it in a different way?
Do you have a choice over whether or not to process the data?
Are you a public authority?
Several of the lawful bases relate to a particular specified purpose – a legal obligation, a contract with
the individual, protecting someone’s vital interests, or performing your public tasks. If you are
processing for these purposes then the appropriate lawful basis may well be obvious, so it is helpful to
consider these first.

Relevant provisions in the GDPR - See Article 6(1), Article 6(2) and Recital 40
External link
If you are a public authority and can demonstrate that the processing is to perform your tasks as set
down in UK law, then you are able to use the public task basis. If not, you may still be able to consider
consent or legitimate interests in some cases, depending on the nature of the processing and your
relationship with the individual. There is no absolute ban on public authorities using consent or legitimate
interests as their lawful basis, but the GDPR does restrict public authorities’ use of these two bases.
The Data Protection Act 2018 says that ‘public authority’ here means a public authority under the
Freedom of Information Act or Freedom of Information (Scotland) Act – with the exception of parish and
community councils.
If you are processing for purposes other than legal obligation, contract, vital interests or public task,
then the appropriate lawful basis may not be so clear cut. In many cases you are likely to have a choice
between using legitimate interests or consent. You need to give some thought to the wider context,
including:
Who does the processing benefit?
Would individuals expect this processing to take place?
What is your relationship with the individual?
Are you in a position of power over them?
What is the impact of the processing on the individual?
Are they vulnerable?
Are some of the individuals concerned likely to object?
Are you able to stop the processing at any time on request?
You may prefer to consider legitimate interests as your lawful basis if you wish to keep control over the
processing and take responsibility for demonstrating that it is in line with people’s reasonable
expectations and wouldn’t have an unwarranted impact on them. On the other hand, if you prefer to
give individuals full control over and responsibility for their data (including the ability to change their
mind as to whether it can continue to be processed), you may want to consider relying on individuals’
consent.

Example
A university that wants to process personal data may consider a variety of lawful bases depending
on what it wants to do with the data.
Universities are classified as public authorities, so the public task basis is likely to apply to much of
their processing, depending on the detail of their constitutions and legal powers. If the processing is
separate from their tasks as a public authority, then the university may instead wish to consider
whether consent or legitimate interests are appropriate in the particular circumstances, considering
the factors set out above. For example, a university might rely on public task for processing
personal data for teaching and research purposes; but a mixture of legitimate interests and consent
for alumni relations and fundraising purposes.
The university however needs to consider its basis carefully – it is the controller’s responsibility to
be able to demonstrate which lawful basis applies to the particular processing purpose.
Further Reading
When should we decide on our lawful basis?
You must determine your lawful basis before starting to process personal data. It’s important to get this
right first time. If you find at a later date that your chosen basis was actually inappropriate, it will be
difficult to simply swap to a different one. Even if a different basis could have applied from the start,
retrospectively switching lawful basis is likely to be inherently unfair to the individual and lead to
breaches of accountability and transparency requirements.
It is therefore important to thoroughly assess upfront which basis is appropriate and document this. It
may be possible that more than one basis applies to the processing because you have more than one
purpose, and if this is the case then you should make this clear from the start.
If there is a genuine change in circumstances or you have a new and unanticipated purpose which
means there is a good reason to review your lawful basis and make a change, you need to inform the
individual and document the change.

In more detail – ICO guidance
We have produced the lawful basis interactive guidance tool, to give more tailored guidance on
which lawful basis is likely to be most appropriate for your processing activities.
Key provisions in the Data Protection Act 2018 - see section 7 (Meaning of ‘public authority’ and
‘public body’)
External link
Example
A company decided to process on the basis of consent, and obtained consent from individuals. An
individual subsequently decided to withdraw their consent to the processing of their data, as is their
right. However, the company wanted to keep processing the data so decided to continue the
processing on the basis of legitimate interests.
Even if it could have originally relied on legitimate interests, the company cannot do so at a later
date – it cannot switch basis when it realised that the original chosen basis was inappropriate (in this
case, because it did not want to offer the individual genuine ongoing control). It should have made
clear to the individual from the start that it was processing on the basis of legitimate interests.
Leading the individual to believe they had a choice is inherently unfair if that choice will be
irrelevant. The company must therefore stop processing when the individual withdraws consent.
Further Reading
What happens if we have a new purpose?
If your purposes change over time or you have a new purpose which you did not originally anticipate,
you may not need a new lawful basis as long as your new purpose is compatible with the original
purpose.
However, the GDPR specifically says this does not apply to processing based on consent. Consent must
always be specific and informed. You need to either get fresh consent which specifically covers the new
purpose, or find a different basis for the new purpose. If you do get specific consent for the new
purpose, you do not need to show it is compatible.
In other cases, in order to assess whether the new purpose is compatible with the original purpose you
should take into account:
any link between your initial purpose and the new purpose;
the context in which you collected the data – in particular, your relationship with the individual and
what they would reasonably expect;
the nature of the personal data – eg is it special category data or criminal offence data;
the possible consequences for individuals of the new processing; and
whether there are appropriate safeguards - eg encryption or pseudonymisation.
This list is not exhaustive and what you need to look at depends on the particular circumstances.
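As a purely illustrative sketch (not an ICO-prescribed method, and with hypothetical names throughout), the compatibility factors above can be structured as a documented checklist, so that each factor is considered and any gap in the assessment is visible:

```python
# Illustrative sketch only: compatibility is a contextual judgement, not a
# mechanical test. This simply structures the factors listed above so that
# each one is considered and the reasoning is documented.
COMPATIBILITY_FACTORS = [
    "Link between the initial purpose and the new purpose",
    "Context of collection and the individual's reasonable expectations",
    "Nature of the data (special category or criminal offence data?)",
    "Possible consequences of the new processing for individuals",
    "Safeguards in place (e.g. encryption or pseudonymisation)",
]

def undocumented_factors(notes: dict) -> list:
    """Return the factors that have not yet been considered and documented."""
    return [f for f in COMPATIBILITY_FACTORS if not notes.get(f)]

# Hypothetical partially completed assessment for a proposed new purpose.
notes = {
    "Link between the initial purpose and the new purpose": "Closely related.",
    "Nature of the data (special category or criminal offence data?)": "No.",
}
missing = undocumented_factors(notes)
# Any factor left in `missing` still needs to be assessed before proceeding.
```

The list is deliberately open-ended, mirroring the guidance: what you need to look at depends on the particular circumstances, so further factors can be appended.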
As a general rule, if the new purpose is very different from the original purpose, would be unexpected,
or would have an unjustified impact on the individual, it is unlikely to be compatible with your original
purpose for collecting the data. You need to identify and document a new lawful basis to process the
data for that new purpose.
The GDPR specifically says that further processing for the following purposes should be considered to be
compatible lawful processing operations:
archiving purposes in the public interest;
scientific research purposes; and
statistical purposes.
There is a link here to the ‘purpose limitation’ principle in Article 5, which states that “personal data shall
be collected for specified, explicit and legitimate purposes and not further processed in a manner that is
incompatible with those purposes”.
Even if the processing for a new purpose is lawful, you will also need to consider whether it is fair and
transparent, and give individuals information about the new purpose.
Further Reading
Relevant provisions in the GDPR - See Article 6(1) and Recitals 39 and 40
External link
How should we document our lawful basis?
The principle of accountability requires you to be able to demonstrate that you are complying with the
GDPR, and have appropriate policies and processes. This means that you need to be able to show that
you have properly considered which lawful basis applies to each processing purpose and can justify your
decision.
You need therefore to keep a record of which basis you are relying on for each processing purpose, and
a justification for why you believe it applies. There is no standard form for this, as long as you ensure
that what you record is sufficient to demonstrate that a lawful basis applies. This will help you comply
with accountability obligations, and will also help you when writing your privacy notices.
It is your responsibility to ensure that you can demonstrate which lawful basis applies to the particular
processing purpose.
Read the accountability section of this guide for more on this topic. There is also further guidance on
documenting consent or legitimate interests assessments in the relevant pages of the guide.
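Since there is no standard form for this record, one minimal sketch (hypothetical structure and field names, not a prescribed format) is a simple register with one entry per processing purpose:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical record structure: the GDPR mandates no particular format,
# only that you can demonstrate which lawful basis applies and justify it.
@dataclass
class LawfulBasisRecord:
    purpose: str                 # what you are trying to achieve
    lawful_basis: str            # consent, contract, legal obligation,
                                 # vital interests, public task, or legitimate interests
    justification: str           # why you believe this basis applies
    special_category_condition: Optional[str] = None   # Article 9 condition, if relevant
    criminal_offence_condition: Optional[str] = None   # Article 10 condition, if relevant
    date_assessed: date = field(default_factory=date.today)

# One entry per processing purpose, decided and documented upfront.
register = [
    LawfulBasisRecord(
        purpose="Payroll and salary reporting",
        lawful_basis="legal obligation",
        justification="Required to disclose employee salary details to HMRC.",
    ),
    LawfulBasisRecord(
        purpose="Marketing newsletter",
        lawful_basis="consent",
        justification="Individuals opt in; we offer a genuine free choice.",
    ),
]

# The same register can feed both accountability records and privacy notices.
for record in register:
    print(f"{record.purpose}: {record.lawful_basis}")
```

Keeping the justification alongside the basis makes it easier to show, if challenged, that the basis was properly considered at the outset rather than chosen retrospectively.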
Further Reading
What do we need to tell people?
You need to include information about your lawful basis (or bases, if more than one applies) in your
privacy notice. Under the transparency provisions of the GDPR, the information you need to give people
includes:
your intended purposes for processing the personal data; and
the lawful basis for the processing.
This applies whether you collect the personal data directly from the individual or you collect their data
from another source.
Read the ‘right to be informed’ section of this guide for more on the transparency requirements of the
GDPR.
Further Reading
Relevant provisions in the GDPR - See Article 6(4), Article 5(1)(b) and Recital 50, Recital 61
External link
Relevant provisions in the GDPR - See Articles 5(2) and 24
External link
Relevant provisions in the GDPR - See Article 13(1)(c), Article 14(1)(c) and Recital 39
External link
What about special category data?
If you are processing special category data, you need to identify both a lawful basis for processing and
a special category condition for processing in compliance with Article 9. You should document both your
lawful basis for processing and your special category condition so that you can demonstrate compliance
and accountability.
Further guidance can be found in the section on special category data.
What about criminal offence data?
If you are processing data about criminal convictions, criminal offences or related security measures,
you need both a lawful basis for processing and a separate condition for processing this data in
compliance with Article 10. You should document both your lawful basis for processing and your criminal
offence data condition so that you can demonstrate compliance and accountability.
Further guidance can be found in the section on criminal offence data.
In more detail – ICO guidance
We have produced the lawful basis interactive guidance tool, to give tailored guidance on which
lawful basis is likely to be most appropriate for your processing activities.
Consent
At a glance
The GDPR sets a high standard for consent. But you often won’t need consent. If consent is difficult,
look for a different lawful basis.
Consent means offering individuals real choice and control. Genuine consent should put individuals in
charge, build trust and engagement, and enhance your reputation.
Check your consent practices and your existing consents. Refresh your consents if they don’t meet
the GDPR standard.
Consent requires a positive opt-in. Don’t use pre-ticked boxes or any other method of default
consent.
Explicit consent requires a very clear and specific statement of consent.
Keep your consent requests separate from other terms and conditions.
Be specific and ‘granular’ so that you get separate consent for separate things. Vague or blanket
consent is not enough.
Be clear and concise.
Name any third party controllers who will rely on the consent.
Make it easy for people to withdraw consent and tell them how.
Keep evidence of consent – who, when, how, and what you told people.
Keep consent under review, and refresh it if anything changes.
Avoid making consent to processing a precondition of a service.
Public authorities and employers will need to take extra care to show that consent is freely given, and
should avoid over-reliance on consent.
Checklists
Asking for consent
☐ We have checked that consent is the most appropriate lawful basis for processing.
☐ We have made the request for consent prominent and separate from our terms and
conditions.
☐ We ask people to positively opt in.
☐ We don’t use pre-ticked boxes or any other type of default consent.
☐ We use clear, plain language that is easy to understand.
☐ We specify why we want the data and what we’re going to do with it.
☐ We give separate distinct (‘granular’) options to consent separately to different purposes and
types of processing.
☐ We name our organisation and any third party controllers who will be relying on the consent.
☐ We tell individuals they can withdraw their consent.
☐ We ensure that individuals can refuse to consent without detriment.
☐ We avoid making consent a precondition of a service.
☐ If we offer online services directly to children, we only seek consent if we have
age-verification measures (and parental-consent measures for younger children) in place.

Recording consent

☐ We keep a record of when and how we got consent from the individual.
☐ We keep a record of exactly what they were told at the time.

Managing consent

☐ We regularly review consents to check that the relationship, the processing and the purposes
have not changed.
☐ We have processes in place to refresh consent at appropriate intervals, including any parental
consents.
☐ We consider using privacy dashboards or other preference-management tools as a matter of
good practice.
☐ We make it easy for individuals to withdraw their consent at any time, and publicise how to do
so.
☐ We act on withdrawals of consent as soon as we can.
☐ We don’t penalise individuals who wish to withdraw consent.

In brief

What's new?
Why is consent important?
When is consent appropriate?
What is valid consent?
How should we obtain, record and manage consent?
What's new?
The GDPR sets a high standard for consent, but the biggest change is what this means in practice for
your consent mechanisms.
The GDPR is clearer that an indication of consent must be unambiguous and involve a clear affirmative
action (an opt-in). It specifically bans pre-ticked opt-in boxes. It also requires distinct (‘granular’)
consent options for distinct processing operations. Consent should be separate from other terms and
conditions and should not generally be a precondition of signing up to a service.
You must keep clear records to demonstrate consent.
The GDPR gives a specific right to withdraw consent. You need to tell people about their right to
withdraw, and offer them easy ways to withdraw consent at any time.
Public authorities, employers and other organisations in a position of power may find it more difficult to
show valid freely given consent.
You need to review existing consents and your consent mechanisms to check they meet the GDPR
standard. If they do, there is no need to obtain fresh consent.
Why is consent important?
Consent is one lawful basis for processing, and explicit consent can also legitimise use of special
category data. Consent may also be relevant where the individual has exercised their right to
restriction, and explicit consent can legitimise automated decision-making and overseas transfers of
data.
Genuine consent should put individuals in control, build trust and engagement, and enhance your
reputation.
Relying on inappropriate or invalid consent could destroy trust and harm your reputation – and may
leave you open to large fines.
When is consent appropriate?
Consent is one lawful basis for processing, but there are alternatives. Consent is not inherently better or
more important than these alternatives. If consent is difficult, you should consider using an alternative.
Consent is appropriate if you can offer people real choice and control over how you use their data, and
want to build their trust and engagement. But if you cannot offer a genuine choice, consent is not
appropriate. If you would still process the personal data without consent, asking for consent is
misleading and inherently unfair.
If you make consent a precondition of a service, it is unlikely to be the most appropriate lawful basis.
Public authorities, employers and other organisations in a position of power over individuals should
avoid relying on consent unless they are confident they can demonstrate it is freely given.
What is valid consent?
Consent must be freely given; this means giving people genuine ongoing choice and control over how
you use their data.
Consent should be obvious and require a positive action to opt in. Consent requests must be prominent,
unbundled from other terms and conditions, concise and easy to understand, and user-friendly.
Consent must specifically cover the controller’s name, the purposes of the processing and the types of
processing activity.
Explicit consent must be expressly confirmed in words, rather than by any other positive action.
There is no set time limit for consent. How long it lasts will depend on the context. You should review
and refresh consent as appropriate.
How should we obtain, record and manage consent?
Make your consent request prominent, concise, separate from other terms and conditions, and easy to
understand. Include:
the name of your organisation;
the name of any third party controllers who will rely on the consent;
why you want the data;
what you will do with it; and
that individuals can withdraw consent at any time.
You must ask people to actively opt in. Don’t use pre-ticked boxes, opt-out boxes or other default
settings. Wherever possible, give separate (‘granular’) options to consent to different purposes and
different types of processing.
Keep records to evidence consent – who consented, when, how, and what they were told.
Make it easy for people to withdraw consent at any time they choose. Consider using preference-
management tools.
Keep consents under review and refresh them if anything changes. Build regular consent reviews into
your business processes.
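The record-keeping and withdrawal points above can be sketched as a minimal consent record (a hypothetical illustration of the who/when/how/what evidence, not a prescribed data model):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical sketch only: the GDPR requires evidence of who consented,
# when, how, and what they were told, but prescribes no particular format.
@dataclass
class ConsentRecord:
    individual_id: str
    purpose: str                  # granular: one record per distinct purpose
    obtained_at: datetime
    method: str                   # e.g. "web form opt-in checkbox (not pre-ticked)"
    statement_shown: str          # exactly what the individual was told
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        # Act on withdrawals as soon as possible; processing must then stop.
        self.withdrawn_at = datetime.now(timezone.utc)

    @property
    def is_active(self) -> bool:
        return self.withdrawn_at is None

record = ConsentRecord(
    individual_id="subject-123",
    purpose="email newsletter",
    obtained_at=datetime(2018, 5, 25, tzinfo=timezone.utc),
    method="web form opt-in checkbox (not pre-ticked)",
    statement_shown=("We will email you our monthly newsletter. "
                     "You can withdraw consent at any time."),
)
record.withdraw()  # the individual later exercises their right to withdraw
```

Storing the exact statement shown at the time of opt-in is what lets you later evidence that consent was specific and informed for that purpose.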
Further Reading
Relevant provisions in the GDPR - See Articles 4(11), 6(1)(a), 7, 8, 9(2)(a) and Recitals 32, 38,
40, 42, 43, 171
External link
In more detail - ICO guidance
We have produced more detailed guidance on consent.
We have produced an interactive guidance tool to give tailored guidance on which lawful basis is
likely to be most appropriate for your processing activities.
In more detail - European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state. It
adopts guidelines for complying with the requirements of the GDPR.
WP29 adopted Guidelines on consent, which have been endorsed by the EDPB.
Contract
At a glance
You can rely on this lawful basis if you need to process someone’s personal data:
to fulfil your contractual obligations to them; or
because they have asked you to do something before entering into a contract (eg provide a
quote).
The processing must be necessary. If you could reasonably do what they want without processing
their personal data, this basis will not apply.
You should document your decision to rely on this lawful basis and ensure that you can justify your
reasoning.
In brief
What’s new?
What does the GDPR say?
When is the lawful basis for contracts likely to apply?
When is processing ‘necessary’ for a contract?
What else should we consider?
What's new?
Very little. The lawful basis for processing necessary for contracts is almost identical to the old condition
for processing in paragraph 2 of Schedule 2 of the 1998 Act.
You need to review your existing processing so that you can document where you rely on this basis and
inform individuals. But in practice, if you are confident that your existing approach complied with the
1998 Act, you are unlikely to need to change your existing basis for processing.
What does the GDPR say?
Article 6(1)(b) gives you a lawful basis for processing where:

“processing is necessary for the performance of a contract to which the data subject is party or in
order to take steps at the request of the data subject prior to entering into a contract”

When is the lawful basis for contracts likely to apply?
You have a lawful basis for processing if:
you have a contract with the individual and you need to process their personal data to comply with
your obligations under the contract.
you haven’t yet got a contract with the individual, but they have asked you to do something as a first
step (eg provide a quote) and you need to process their personal data to do what they ask.
It does not apply if you need to process one person’s details but the contract is with someone else.
It does not apply if you take pre-contractual steps on your own initiative or at the request of a third
party.
Note that, in this context, a contract does not have to be a formal signed document, or even written
down, as long as there is an agreement which meets the requirements of contract law. Broadly
speaking, this means that the terms have been offered and accepted, you both intend them to be legally
binding, and there is an element of exchange (usually an exchange of goods or services for money, but
this can be anything of value). However, this is not a full explanation of contract law, and if in doubt you
should seek your own legal advice.
When is processing ‘necessary’ for a contract?
‘Necessary’ does not mean that the processing must be essential for the purposes of performing a
contract or taking relevant pre-contractual steps. However, it must be a targeted and proportionate way
of achieving that purpose. This lawful basis does not apply if there are other reasonable and less
intrusive ways to meet your contractual obligations or take the steps requested.
The processing must be necessary to deliver your side of the contract with this particular person. If the
processing is only necessary to maintain your business model more generally, this lawful basis will not
apply and you should consider another lawful basis, such as legitimate interests.
Example
An individual shopping around for car insurance requests a quotation. The insurer needs to process
certain data in order to prepare the quotation, such as the make and age of the car.
This does not mean that processing which is not necessary for the contract is automatically unlawful, but
rather that you need to look for a different lawful basis.
What else should we consider?
If the processing is necessary for a contract with the individual, processing is lawful on this basis and
you do not need to get separate consent.
If processing of special category data is necessary for the contract, you also need to identify a separate
condition for processing this data. Read our guidance on special category data for more information.
If the contract is with a child under 18, you need to consider whether they have the necessary
competence to enter into a contract. If you have doubts about their competence, you may wish to
consider an alternative basis such as legitimate interests, which can help you to demonstrate that the
child’s rights and interests are properly considered and protected. Read our guidance on children and
the GDPR for more information.
If the processing is not necessary for the contract, you need to consider another lawful basis such as
legitimate interests or consent. Note that if you want to rely on consent you will not generally be able to
make the processing a condition of the contract. Read our guidance on consent for more information.
If you are processing on the basis of contract, the individual’s right to object and right not to be subject
to a decision based solely on automated processing will not apply. However, the individual will have a
right to data portability. Read our guidance on individual rights for more information.
Remember to document your decision that processing is necessary for the contract, and include
information about your purposes and lawful basis in your privacy notice.
Further Reading
Example
When a data subject makes an online purchase, a controller processes the address of the individual
in order to deliver the goods. This is necessary in order to perform the contract.
However, the profiling of an individual’s interests and preferences based on items purchased is not
necessary for the performance of the contract and the controller cannot rely on Article 6(1)(b) as
the lawful basis for this processing. Even if this type of targeted advertising is a useful part of your
customer relationship and is a necessary part of your business model, it is not necessary to perform
the contract itself.
Relevant provisions in the GDPR - See Article 6(1)(b) and Recital 44
External link
In more detail - ICO guidance
We have produced the lawful basis interactive guidance tool, to give tailored guidance on which
lawful basis is likely to be most appropriate for your processing activities.
Legal obligation
At a glance
You can rely on this lawful basis if you need to process the personal data to comply with a common
law or statutory obligation.
This does not apply to contractual obligations.
The processing must be necessary. If you can reasonably comply without processing the personal
data, this basis does not apply.
You should document your decision to rely on this lawful basis and ensure that you can justify your
reasoning.
You should be able to either identify the specific legal provision or an appropriate source of advice or
guidance that clearly sets out your obligation.
In brief
What’s new?
What does the GDPR say?
When is the lawful basis for legal obligations likely to apply?
When is processing ‘necessary’ for compliance?
What else should we consider?
What’s new?
Very little. The lawful basis for processing necessary for compliance with a legal obligation is almost
identical to the old condition for processing in paragraph 3 of Schedule 2 of the 1998 Act.
You need to review your existing processing so that you can document where you rely on this basis and
inform individuals. But in practice, if you are confident that your existing approach complied with the
1998 Act, you are unlikely to need to change your existing basis for processing.
What does the GDPR say?
Article 6(1)(c) provides a lawful basis for processing where:
“processing is necessary for compliance with a legal obligation to which the controller is subject.”
When is the lawful basis for legal obligations likely to apply?
In short, when you are obliged to process the personal data to comply with the law.
Article 6(3) requires that the legal obligation must be laid down by UK or EU law. Recital 41 confirms that
this does not have to be an explicit statutory obligation, as long as the application of the law is
foreseeable to those individuals subject to it. So it includes clear common law obligations.
This does not mean that there must be a legal obligation specifically requiring the specific processing
activity. The point is that your overall purpose must be to comply with a legal obligation which has a
sufficiently clear basis in either common law or statute.
You should be able to identify the obligation in question, either by reference to the specific legal
provision or else by pointing to an appropriate source of advice or guidance that sets it out clearly. For
example, you can refer to a government website or to industry guidance that explains generally
applicable legal obligations.
Regulatory requirements also qualify as a legal obligation for these purposes where there is a statutory
basis underpinning the regulatory regime which requires regulated organisations to comply.
Example
An employer needs to process personal data to comply with its legal obligation to disclose employee
salary details to HMRC.
The employer can point to the HMRC website where the requirements are set out to demonstrate
this obligation. In this situation it is not necessary to cite each specific piece of legislation.
Example
A financial institution relies on the legal obligation imposed by Part 7 of the Proceeds of Crime Act
2002 to process personal data in order to submit a Suspicious Activity Report to the National Crime
Agency when it knows or suspects that a person is engaged in, or attempting, money laundering.
Example
A court order may require you to process personal data for a particular purpose and this also
qualifies as a legal obligation.
Example
The Competition and Markets Authority (CMA) has powers under the Enterprise Act 2002 to make
orders to remedy adverse effects on competition, some of which may require the processing of
personal data.
A retail energy supplier passes customer data to the Gas and Electricity Markets Authority to comply
with the CMA’s Energy Market Investigation (Database) Order 2016. The supplier may rely on legal
obligation as the lawful basis for this processing.
A contractual obligation does not comprise a legal obligation in this context. You cannot contract out of
the requirement for a lawful basis. However, you can look for a different lawful basis. If the contract is
with the individual you can consider the lawful basis for contracts. For contracts with other parties, you
may want to consider legitimate interests.
When is processing ‘necessary’ for compliance?
Although the processing need not be essential for you to comply with the legal obligation, it must be a
reasonable and proportionate way of achieving compliance. You cannot rely on this lawful basis if you
have discretion over whether to process the personal data, or if there is another reasonable way to
comply.
It is likely to be clear from the law in question whether the processing is actually necessary for
compliance.
What else should we consider?
If you are processing on the basis of legal obligation, the individual has no right to erasure, right to data
portability, or right to object. Read our guidance on individual rights for more information.
Remember to:
document your decision that processing is necessary for compliance with a legal obligation;
identify an appropriate source for the obligation in question; and
include information about your purposes and lawful basis in your privacy notice.
Further Reading
Relevant provisions in the GDPR - See Article 6(1)(c), Recitals 41, 45
External link
In more detail - ICO guidance
We have produced the lawful basis interactive guidance tool, to give tailored guidance on which
lawful basis is likely to be most appropriate for your processing activities.
Vital interests
At a glance
You are likely to be able to rely on vital interests as your lawful basis if you need to process the
personal data to protect someone’s life.
The processing must be necessary. If you can reasonably protect the person’s vital interests in
another less intrusive way, this basis will not apply.
You cannot rely on vital interests for health data or other special category data if the individual is
capable of giving consent, even if they refuse their consent.
You should consider whether you are likely to rely on this basis, and if so document the
circumstances where it will be relevant and ensure you can justify your reasoning.
In brief
What’s new?
What does the GDPR say?
What are ‘vital interests’?
When is the vital interests basis likely to apply?
What else should we consider?
What’s new?
The lawful basis for vital interests is very similar to the old condition for processing in paragraph 4 of
Schedule 2 of the 1998 Act. One key difference is that anyone’s vital interests can now provide a basis
for processing, not just those of the data subject themselves.
You need to review your existing processing to identify if you have any ongoing processing for this
reason, or are likely to need to process for this reason in future. You should then document where you
rely on this basis and inform individuals if relevant.
What does the GDPR say?
Article 6(1)(d) provides a lawful basis for processing where:
“processing is necessary in order to protect the vital interests of the data subject or of another
natural person”.
Recital 46 provides some further guidance:
“The processing of personal data should also be regarded as lawful where it is necessary to protect
an interest which is essential for the life of the data subject or that of another natural person.
Processing of personal data based on the vital interest of another natural person should in principle
take place only where the processing cannot be manifestly based on another legal basis…”
What are ‘vital interests’?
It’s clear from Recital 46 that vital interests are intended to cover only interests that are essential for
someone’s life. So this lawful basis is very limited in its scope, and generally only applies to matters of
life and death.
When is the vital interests basis likely to apply?
It is likely to be particularly relevant for emergency medical care, when you need to process personal
data for medical purposes but the individual is incapable of giving consent to the processing.
Example
An individual is admitted to the A&E department of a hospital with life-threatening injuries following
a serious road accident. The disclosure to the hospital of the individual’s medical history is
necessary in order to protect his/her vital interests.
It is less likely to be appropriate for medical care that is planned in advance. Another lawful basis such
as public task or legitimate interests is likely to be more appropriate in this case.
Processing of one individual’s personal data to protect the vital interests of others is likely to happen
more rarely. It may be relevant, for example, if it is necessary to process a parent’s personal data to
protect the vital interests of a child.
Vital interests is also less likely to be the appropriate basis for processing on a larger scale. Recital 46
does suggest that vital interests might apply where you are processing on humanitarian grounds such as
monitoring epidemics, or where there is a natural or man-made disaster causing a humanitarian
emergency.
However, if you are processing one person’s personal data to protect someone else’s life, Recital 46 also
indicates that you should generally try to use an alternative lawful basis, unless none is obviously
available. For example, in many cases you could consider legitimate interests, which will give you a
framework to balance the rights and interests of the data subject(s) with the vital interests of the person
or people you are trying to protect.
What else should we consider?
In most cases the protection of vital interests is likely to arise in the context of health data. This is one
of the special categories of data, which means you will also need to identify a condition for processing
special category data under Article 9.
There is a specific condition at Article 9(2)(c) for processing special category data where necessary to
protect someone’s vital interests. However, this only applies if the data subject is physically or legally
incapable of giving consent. This means explicit consent is more appropriate in many cases, and you
cannot in practice rely on vital interests for special category data (including health data) if the data
subject refuses consent, unless they are not competent to do so.
Further Reading
Relevant provisions in the GDPR - See Article 6(1)(d), Article 9(2)(c), Recital 46
External link
In more detail - ICO guidance
We have produced the lawful basis interactive guidance tool, to give tailored guidance on which
lawful basis is likely to be most appropriate for your processing activities.
Public task
At a glance
You can rely on this lawful basis if you need to process personal data:
‘in the exercise of official authority’. This covers public functions and powers that are set out in
law; or
to perform a specific task in the public interest that is set out in law.
It is most relevant to public authorities, but it can apply to any organisation that exercises official
authority or carries out tasks in the public interest.
You do not need a specific statutory power to process personal data, but your underlying task,
function or power must have a clear basis in law.
The processing must be necessary. If you could reasonably perform your tasks or exercise your
powers in a less intrusive way, this lawful basis does not apply.
Document your decision to rely on this basis to help you demonstrate compliance if required. You
should be able to specify the relevant task, function or power, and identify its statutory or common
law basis.
In brief
What’s new under the GDPR?
What is the ‘public task’ basis?
What does ‘laid down by law’ mean?
Who can rely on this basis?
When can we rely on this basis?
What else should we consider?
What's new under the GDPR?
The public task basis in Article 6(1)(e) may appear new, but it is similar to the old condition for
processing for functions of a public nature in Schedule 2 of the Data Protection Act 1998.
One key difference is that the GDPR says that the relevant task or function must have a clear basis in
law.
The GDPR is also clear that public authorities can no longer rely on legitimate interests for processing
carried out in performance of their tasks. In the past, some of this type of processing may have been
done on the basis of legitimate interests. If you are a public authority, this means you may now need to
consider the public task basis for more of your processing.
The GDPR also brings in new accountability requirements. You should document your lawful basis so that
you can demonstrate that it applies. In particular, you should be able to identify a clear basis in either
statute or common law for the relevant task, function or power for which you are using the personal
data.
You must also update your privacy notice to include your lawful basis, and communicate this to
individuals.
What is the ‘public task’ basis?
Article 6(1)(e) gives you a lawful basis for processing where:
“processing is necessary for the performance of a task carried out in the public interest or in the
exercise of official authority vested in the controller”
This can apply if you are either:
carrying out a specific task in the public interest which is laid down by law; or
exercising official authority (for example, a public body’s tasks, functions, duties or powers) which is
laid down by law.
If you can show you are exercising official authority, including use of discretionary powers, there is no
additional public interest test. However, you must be able to demonstrate that the processing is
‘necessary’ for that purpose.
‘Necessary’ means that the processing must be a targeted and proportionate way of achieving your
purpose. You do not have a lawful basis for processing if there is another reasonable and less intrusive
way to achieve the same result.
In this guide we use the term ‘public task’ to help describe and label this lawful basis. However, this is
not a term used in the GDPR itself. Your focus should be on demonstrating either that you are carrying
out a task in the public interest, or that you are exercising official authority.
In particular, there is no direct link to the concept of ‘public task’ in the Re-use of Public Sector
Information Regulations 2015 (RPSI). There is some overlap, as a public sector body’s core role and
functions for RPSI purposes may be a useful starting point in demonstrating official authority for these
purposes. However, you shouldn’t assume that it is an identical test. See our Guide to RPSI for more on
public task in the context of RPSI.
What does ‘laid down by law’ mean?
Article 6(3) requires that the relevant task or authority must be laid down by domestic or EU law. This
will most often be a statutory function. However, Recital 41 clarifies that this does not have to be an
explicit statutory provision, as long as the application of the law is clear and foreseeable. This means
that it includes clear common law tasks, functions or powers as well as those set out in statute or
statutory guidance.
You do not need specific legal authority for the particular processing activity. The point is that your
overall purpose must be to perform a public interest task or exercise official authority, and that overall
task or authority has a sufficiently clear basis in law.
Who can rely on this basis?
Any organisation that is exercising official authority or carrying out a specific task in the public interest.
The focus is on the nature of the function, not the nature of the organisation.
Example
Private water companies are likely to be able to rely on the public task basis even if they do not fall
within the definition of a public authority in the Data Protection Act 2018. This is because they are
considered to be carrying out functions of public administration and they exercise special legal
powers to carry out utility services in the public interest. See our guidance on Public authorities
under the EIR for more details.
However, if you are a private sector organisation you are likely to be able to consider the legitimate
interests basis as an alternative.
See the main lawful basis page of this guide for more on how to choose the most appropriate basis.
When can we rely on this basis?
Section 8 of the Data Protection Act 2018 (DPA 2018) says that the public task basis will cover
processing necessary for:
the administration of justice;
parliamentary functions;
statutory functions;
governmental functions; or
activities that support or promote democratic engagement.
However, this is not intended as an exhaustive list. If you have other official non-statutory functions or
public interest tasks you can still rely on the public task basis, as long as the underlying legal basis for
that function or task is clear and foreseeable.
For accountability purposes, you should be able to specify the relevant task, function or power, and
identify its basis in common law or statute. You should also ensure that you can demonstrate there is no
other reasonable and less intrusive means to achieve your purpose.
What else should we consider?
Individuals’ rights to erasure and data portability do not apply if you are processing on the basis of
public task. However, individuals do have a right to object. See our guidance on individual rights for
more information.
You should consider an alternative lawful basis if you are not confident that processing is necessary for
a relevant task, function or power which is clearly set out in law.
If you are a public authority (as defined in the Data Protection Act 2018), your ability to rely on consent
or legitimate interests as an alternative basis is more limited, but they may be available in some
circumstances. In particular, legitimate interests is still available for processing which falls outside your
tasks as a public authority. Other lawful bases may also be relevant. See our guidance on the other
lawful bases for more information.
Remember that the GDPR specifically says that further processing for certain purposes should be
considered to be compatible with your original purpose. This means that if you originally processed the
personal data for a relevant task or function, you do not need a separate lawful basis for any further
processing for:
archiving purposes in the public interest;
scientific research purposes; or
statistical purposes.
If you are processing special category data, you also need to identify an additional condition for
processing this type of data. The Data Protection Act 2018 includes specific conditions for parliamentary,
statutory or governmental functions in the substantial public interest. Read the special category data
page of this guide for our latest guidance on these provisions.
To help you meet your accountability and transparency obligations, remember to:
document your decision that the processing is necessary for you to perform a task in the public
interest or exercise your official authority;
identify the relevant task or authority and its basis in common law or statute; and
include basic information about your purposes and lawful basis in your privacy notice.
Further reading
Relevant provisions in the GDPR - See Article 6(1)(e) and 6(3), and Recitals 41, 45 and 50
External link
Relevant provisions in the Data Protection Act 2018 - See sections 7 and 8, and Schedule 1 paras 6
and 7
External link
In more detail – ICO guidance
We are planning to develop more detailed guidance on this topic.
We have produced the lawful basis interactive guidance tool, to give tailored guidance on which
lawful basis is likely to be most appropriate for your processing activities.
Legitimate interests
At a glance
Legitimate interests is the most flexible lawful basis for processing, but you cannot assume it will
always be the most appropriate.
It is likely to be most appropriate where you use people’s data in ways they would reasonably expect
and which have a minimal privacy impact, or where there is a compelling justification for the
processing.
If you choose to rely on legitimate interests, you are taking on extra responsibility for considering
and protecting people’s rights and interests.
Public authorities can only rely on legitimate interests if they are processing for a legitimate reason
other than performing their tasks as a public authority.
There are three elements to the legitimate interests basis. It helps to think of this as a three-part
test. You need to:
identify a legitimate interest;
show that the processing is necessary to achieve it; and
balance it against the individual’s interests, rights and freedoms.
The legitimate interests can be your own interests or the interests of third parties. They can include
commercial interests, individual interests or broader societal benefits.
The processing must be necessary. If you can reasonably achieve the same result in another less
intrusive way, legitimate interests will not apply.
You must balance your interests against the individual’s. If they would not reasonably expect the
processing, or if it would cause unjustified harm, their interests are likely to override your legitimate
interests.
Keep a record of your legitimate interests assessment (LIA) to help you demonstrate compliance if
required.
You must include details of your legitimate interests in your privacy information.
Checklists
☐ We have checked that legitimate interests is the most appropriate basis.
☐ We understand our responsibility to protect the individual’s interests.
☐ We have conducted a legitimate interests assessment (LIA) and kept a record of it, to ensure
that we can justify our decision.
☐ We have identified the relevant legitimate interests.
☐ We have checked that the processing is necessary and there is no less intrusive way to
achieve the same result.
☐ We have done a balancing test, and are confident that the individual’s interests do not
override those legitimate interests.
☐ We only use individuals’ data in ways they would reasonably expect, unless we have a very
good reason.
☐ We are not using people’s data in ways they would find intrusive or which could cause them
harm, unless we have a very good reason.
☐ If we process children’s data, we take extra care to make sure we protect their interests.
☐ We have considered safeguards to reduce the impact where possible.
☐ We have considered whether we can offer an opt out.
☐ If our LIA identifies a significant privacy impact, we have considered whether we also need to
conduct a DPIA.
☐ We keep our LIA under review, and repeat it if circumstances change.
☐ We include information about our legitimate interests in our privacy information.
In brief
What's new under the GDPR?
What is the 'legitimate interests' basis?
When can we rely on legitimate interests?
How can we apply legitimate interests in practice?
What else do we need to consider?
What’s new under the GDPR?
The concept of legitimate interests as a lawful basis for processing is essentially the same as the
equivalent Schedule 2 condition in the 1998 Act, with some changes in detail.
You can now consider the legitimate interests of any third party, including wider benefits to society. And
when weighing against the individual’s interests, the focus is wider than the emphasis on ‘unwarranted
prejudice’ to the individual in the 1998 Act. For example, unexpected processing is likely to affect
whether the individual’s interests override your legitimate interests, even without specific harm.
The GDPR is clearer that you must give particular weight to protecting children’s data.
Public authorities are more limited in their ability to rely on legitimate interests, and should consider the
‘public task’ basis instead for any processing they do to perform their tasks as a public authority.
Legitimate interests may still be available for other legitimate processing outside of those tasks.
The biggest change is that you need to document your decisions on legitimate interests so that you can
demonstrate compliance under the new GDPR accountability principle. You must also include more
information in your privacy information.
In the run up to 25 May 2018, you need to review your existing processing to identify your lawful basis
and document where you rely on legitimate interests, update your privacy information , and
communicate it to individuals.
What is the ‘legitimate interests’ basis?
Article 6(1)(f) gives you a lawful basis for processing where:
“processing is necessary for the purposes of the legitimate interests pursued by the controller or by
a third party except where such interests are overridden by the interests or fundamental rights and
freedoms of the data subject which require protection of personal data, in particular where the data
subject is a child.”
This can be broken down into a three-part test:
1. Purpose test: are you pursuing a legitimate interest?
2. Necessity test: is the processing necessary for that purpose?
3. Balancing test: do the individual’s interests override the legitimate interest?
A wide range of interests may be legitimate interests. They can be your own interests or the interests of
third parties, and commercial interests as well as wider societal benefits. They may be compelling or
trivial, but trivial interests may be more easily overridden in the balancing test.
The GDPR specifically mentions use of client or employee data, marketing, fraud prevention, intra-group
transfers, or IT security as potential legitimate interests, but this is not an exhaustive list. It also says
that you have a legitimate interest in disclosing information about possible criminal acts or security
threats to the authorities.
‘Necessary’ means that the processing must be a targeted and proportionate way of achieving your
purpose. You cannot rely on legitimate interests if there is another reasonable and less intrusive way to
achieve the same result.
You must balance your interests against the individual’s interests. In particular, if they would not
reasonably expect you to use data in that way, or it would cause them unwarranted harm, their interests
are likely to override yours. However, your interests do not always have to align with the individual’s
interests. If there is a conflict, your interests can still prevail as long as there is a clear justification for
the impact on the individual.
When can we rely on legitimate interests?
Legitimate interests is the most flexible lawful basis, but you cannot assume it will always be appropriate
for all of your processing.
If you choose to rely on legitimate interests, you take on extra responsibility for ensuring people’s rights
and interests are fully considered and protected.
Legitimate interests is most likely to be an appropriate basis where you use data in ways that people
would reasonably expect and that have a minimal privacy impact. Where there is an impact on
individuals, it may still apply if you can show there is an even more compelling benefit to the processing
and the impact is justified.
You can rely on legitimate interests for marketing activities if you can show that how you use people’s
data is proportionate, has a minimal privacy impact, and people would not be surprised or likely to
object – but only if you don’t need consent under PECR. See our Guide to PECR for more on when you
need consent for electronic marketing.
You can consider legitimate interests for processing children’s data, but you must take extra care to
make sure their interests are protected. See our detailed guidance on children and the GDPR .
You may be able to rely on legitimate interests in order to lawfully disclose personal data to a third
party. You should consider why they want the information, whether they actually need it, and what they
will do with it. You need to demonstrate that the disclosure is justified, but it will be their responsibility to
determine their lawful basis for their own processing.
You should avoid using legitimate interests if you are using personal data in ways people do not
understand and would not reasonably expect, or if you think some people would object if you explained
it to them. You should also avoid this basis for processing that could cause harm, unless you are
confident there is nevertheless a compelling reason to go ahead which justifies the impact.
If you are a public authority, you cannot rely on legitimate interests for any processing you do to
perform your tasks as a public authority. However, if you have other legitimate purposes outside the
scope of your tasks as a public authority, you can consider legitimate interests where appropriate. This
will be particularly relevant for public authorities with commercial interests.
See our guidance page on the lawful basis for more information on the alternatives to legitimate
interests, and how to decide which basis to choose.
How can we apply legitimate interests in practice?
If you want to rely on legitimate interests, you can use the three-part test to assess whether it applies.
We refer to this as a legitimate interests assessment (LIA) and you should do it before you start the
processing.
An LIA is a type of light-touch risk assessment based on the specific context and circumstances. It will
help you ensure that your processing is lawful. Recording your LIA will also help you demonstrate
compliance in line with your accountability obligations under Articles 5(2) and 24. In some cases an LIA
will be quite short, but in others there will be more to consider.
First, identify the legitimate interest(s). Consider:
Why do you want to process the data – what are you trying to achieve?
Who benefits from the processing? In what way?
Are there any wider public benefits to the processing?
How important are those benefits?
What would the impact be if you couldn’t go ahead?
Would your use of the data be unethical or unlawful in any way?
Second, apply the necessity test. Consider:
Does this processing actually help to further that interest?
Is it a reasonable way to go about it?
Is there another less intrusive way to achieve the same result?
Third, do a balancing test. Consider the impact of your processing and whether this overrides the
interest you have identified. You might find it helpful to think about the following:
What is the nature of your relationship with the individual?
Is any of the data particularly sensitive or private?
Would people expect you to use their data in this way?
Are you happy to explain it to them?
Are some people likely to object or find it intrusive?
What is the possible impact on the individual?
How big an impact might it have on them?
Are you processing children’s data?
Are any of the individuals vulnerable in any other way?
Can you adopt any safeguards to minimise the impact?
Can you offer an opt-out?
You then need to make a decision about whether you still think legitimate interests is an appropriate
basis. There’s no foolproof formula for the outcome of the balancing test – but you must be confident
that your legitimate interests are not overridden by the risks you have identified.
Keep a record of your LIA and the outcome. There is no standard format for this, but it’s important to
record your thinking to help show you have proper decision-making processes in place and to justify the
outcome.
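Since the guidance prescribes no standard format for recording an LIA, one illustrative way a team might capture the three-part test as a structured record is sketched below. The class, field names, and example answers are all hypothetical and are not taken from the ICO guidance; they simply mirror the purpose, necessity and balancing questions above.

```python
from dataclasses import dataclass, field

@dataclass
class LIARecord:
    """Illustrative legitimate interests assessment (LIA) record.

    Hypothetical structure: the ICO guidance requires only that the
    reasoning and outcome are recorded, not any particular format.
    """
    purpose: str                      # purpose test: the legitimate interest pursued
    necessity: str                    # necessity test: why the processing is needed
    less_intrusive_alternative: bool  # is there a less intrusive way to achieve it?
    individuals_would_expect: bool    # balancing: would people reasonably expect this?
    unjustified_harm: bool            # balancing: risk of unwarranted harm?
    safeguards: list = field(default_factory=list)  # mitigations considered

    def basis_available(self) -> bool:
        # Legitimate interests fails if a less intrusive route exists,
        # or if the balancing factors weigh against the controller.
        return (not self.less_intrusive_alternative
                and self.individuals_would_expect
                and not self.unjustified_harm)

lia = LIARecord(
    purpose="fraud prevention on customer accounts",
    necessity="transaction history needed to detect anomalous activity",
    less_intrusive_alternative=False,
    individuals_would_expect=True,
    unjustified_harm=False,
    safeguards=["data minimisation", "retention limits"],
)
print(lia.basis_available())  # True under these illustrative answers
```

A record like this captures the thinking behind each answer alongside the outcome, which is the point of keeping the LIA: not the format, but evidence of a proper decision-making process.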
Keep your LIA under review and refresh it if there is a significant change in the purpose, nature or
context of the processing.
If you are not sure about the outcome of the balancing test, it may be safer to look for another lawful
basis. Legitimate interests will not often be the most appropriate basis for processing which is
unexpected or high risk.
If your LIA identifies significant risks, consider whether you need to do a DPIA to assess the risk and
potential mitigation in more detail. See our guidance on DPIAs for more on this.
What else do we need to consider?
You must tell people in your privacy information that you are relying on legitimate interests, and explain
what these interests are.
If you want to process the personal data for a new purpose, you may be able to continue processing
under legitimate interests as long as your new purpose is compatible with your original purpose. We
would still recommend that you conduct a new LIA, as this will help you demonstrate compatibility.
If you rely on legitimate interests, the right to data portability does not apply.
If you are relying on legitimate interests for direct marketing, the right to object is absolute and you
must stop processing when someone objects. For other purposes, you must stop unless you can show
that your legitimate interests are compelling enough to override the individual’s rights. See our guidance
on individual rights for more on this.
Further Reading
Relevant provisions in the GDPR - See Article 6(1)(f) and Recitals 47, 48 and 49
External link
In more detail – ICO guidance
We have produced more detailed guidance on legitimate interests
We have produced the lawful basis interactive guidance tool, to give tailored guidance on which
lawful basis is likely to be most appropriate for your processing activities.
In more detail - European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state. It
adopts guidelines for complying with the requirements of the GDPR.
There are no immediate plans for EDPB guidance on legitimate interests under the GDPR, but WP29
Opinion 06/2014 (9 April 2014) gives detailed guidance on the key elements of the similar legitimate
interests provisions under the previous Data Protection Directive 95/46/EC.
Special category data
At a glance
Special category data is personal data which the GDPR says is more sensitive, and so needs more
protection.
In order to lawfully process special category data, you must identify both a lawful basis under Article
6 and a separate condition for processing special category data under Article 9. These do not have to
be linked.
There are ten conditions for processing special category data in the GDPR itself, but the Data
Protection Act 2018 introduces additional conditions and safeguards.
You must determine your condition for processing special category data before you begin this
processing under the GDPR, and you should document it.
In brief
What's new?
What's different about special category data?
What are the conditions for processing special category data?
What's new?
Special category data is broadly similar to the concept of sensitive personal data under the 1998 Act.
The requirement to identify a specific condition for processing this type of data is also very similar.
One change is that the GDPR includes genetic data and some biometric data in the definition. Another is
that it does not include personal data relating to criminal offences and convictions, as there are separate
and specific safeguards for this type of data in Article 10.
The conditions for processing special category data under the GDPR in the UK are broadly similar to the
Schedule 3 conditions under the 1998 Act for the processing of sensitive personal data. More detailed
guidance on the new special category conditions in the Data Protection Act 2018 - and how they differ
from existing Schedule 3 conditions - will follow in due course.
What’s different about special category data?
You must still have a lawful basis for your processing under Article 6, in exactly the same way as for any
other personal data. The difference is that you will also need to satisfy a specific condition under Article
9.
This is because special category data is more sensitive, and so needs more protection. For example,
information about an individual’s:
race;
ethnic origin;
politics;
religion;
trade union membership;
genetics;
biometrics (where used for ID purposes);
health;
sex life; or
sexual orientation.
In particular, this type of data could create more significant risks to a person’s fundamental rights and
freedoms. For example, by putting them at risk of unlawful discrimination.
Your choice of lawful basis under Article 6 does not dictate which special category condition you must
apply, and vice versa. For example, if you use consent as your lawful basis, you are not restricted to
using explicit consent for special category processing under Article 9. You should choose whichever
special category condition is the most appropriate in the circumstances – although in many cases there
may well be an obvious link between the two. For example, if your lawful basis is vital interests, it is
highly likely that the Article 9 condition for vital interests will also be appropriate.
What are the conditions for processing special category data?
The conditions are listed in Article 9(2) of the GDPR:
(a) the data subject has given explicit consent to the processing of those personal data for one or
more specified purposes, except where Union or Member State law provide that the prohibition
referred to in paragraph 1 may not be lifted by the data subject;
(b) processing is necessary for the purposes of carrying out the obligations and exercising specific
rights of the controller or of the data subject in the field of employment and social security and
social protection law in so far as it is authorised by Union or Member State law or a collective
agreement pursuant to Member State law providing for appropriate safeguards for the fundamental
rights and the interests of the data subject;
(c) processing is necessary to protect the vital interests of the data subject or of another natural
person where the data subject is physically or legally incapable of giving consent;
(d) processing is carried out in the course of its legitimate activities with appropriate safeguards by
a foundation, association or any other not-for-profit body with a political, philosophical, religious or
trade union aim and on condition that the processing relates solely to the members or to former
members of the body or to persons who have regular contact with it in connection with its purposes
and that the personal data are not disclosed outside that body without the consent of the data
subjects;
(e) processing relates to personal data which are manifestly made public by the data subject;
(f) processing is necessary for the establishment, exercise or defence of legal claims or whenever
courts are acting in their judicial capacity;
(g) processing is necessary for reasons of substantial public interest, on the basis of Union or
Member State law which shall be proportionate to the aim pursued, respect the essence of the right
to data protection and provide for suitable and specific measures to safeguard the fundamental
rights and the interests of the data subject;
(h) processing is necessary for the purposes of preventive or occupational medicine, for the
assessment of the working capacity of the employee, medical diagnosis, the provision of health or
social care or treatment or the management of health or social care systems and services on the
basis of Union or Member State law or pursuant to contract with a health professional and subject to
the conditions and safeguards referred to in paragraph 3;
(i) processing is necessary for reasons of public interest in the area of public health, such as
protecting against serious cross-border threats to health or ensuring high standards of quality and
safety of health care and of medicinal products or medical devices, on the basis of Union or Member
State law which provides for suitable and specific measures to safeguard the rights and freedoms of
the data subject, in particular professional secrecy;
(j) processing is necessary for archiving purposes in the public interest, scientific or historical
research purposes or statistical purposes in accordance with Article 89(1) based on Union or
Member State law which shall be proportionate to the aim pursued, respect the essence of the right
to data protection and provide for suitable and specific measures to safeguard the fundamental
rights and the interests of the data subject.
You need to read these alongside the Data Protection Act 2018, which adds more specific conditions and
safeguards:
Schedule 1 Part 1 contains specific conditions for the various employment, health and research
purposes under Articles 9(2)(b), (h), (i) and (j).
Schedule 1 Part 2 contains specific ‘substantial public interest’ conditions for Article 9(2)(g).
In some cases you must also have an ‘appropriate policy document’ in place to rely on these
conditions.
Now that the detail of these provisions has been finalised, we are working on more detailed guidance in
this area.
Further reading
Relevant provisions in the GDPR - See Article 9(2) and Recital 51
External link
Relevant provisions in the Data Protection Act 2018 - See sections 10 and 11 and Schedule 1
External link
Criminal offence data
At a glance
To process personal data about criminal convictions or offences, you must have both a lawful basis
under Article 6 and either legal authority or official authority for the processing under Article 10.
The Data Protection Act 2018 deals with this type of data in a similar way to special category data,
and sets out specific conditions providing lawful authority for processing it.
You can also process this type of data if you have official authority to do so because you are
processing the data in an official capacity.
You cannot keep a comprehensive register of criminal convictions unless you do so in an official
capacity.
You must determine your condition for lawful processing of offence data (or identify your official
authority for the processing) before you begin the processing, and you should document this.
In brief
What's new?
What is criminal offence data?
What's different about criminal offence data?
What does Article 10 say?
What’s new?
The GDPR rules for sensitive (special category) data do not apply to information about criminal
allegations, proceedings or convictions. Instead, there are separate safeguards for personal data
relating to criminal convictions and offences, or related security measures, set out in Article 10.
Article 10 also specifies that you can only keep a comprehensive register of criminal convictions if you
are doing so under the control of official authority.
What is criminal offence data?
Article 10 applies to personal data relating to criminal convictions and offences, or related security
measures. In this guidance, we refer to this as criminal offence data.
This concept of criminal offence data includes the type of data about criminal allegations, proceedings or
convictions that would have been sensitive personal data under the 1998 Act. However, it is potentially
broader than this. In particular, Article 10 specifically extends to personal data linked to related security
measures.
What’s different about criminal offence data?
You must still have a lawful basis for your processing under Article 6, in exactly the same way as for any
other personal data. The difference is that if you are processing personal criminal offence data, you will
also need to comply with Article 10.
What does Article 10 say?
Article 10 says:
“Processing of personal data relating to criminal convictions and offences or related security
measures based on Article 6(1) shall be carried out only under the control of official authority or
when the processing is authorised by Union or Member State law providing for appropriate
safeguards for the rights and freedoms of data subjects. Any comprehensive register of criminal
convictions shall be kept only under the control of official authority.”
This means you must either:
process the data in an official capacity; or
meet a specific condition in Schedule 1 of the Data Protection Act 2018, and comply with the
additional safeguards set out in that Act. Now that the detail of these provisions has been finalised,
we are working on more detailed guidance in this area.
Even if you have a condition for processing offence data, you can only keep a comprehensive register of
criminal convictions if you are doing so in an official capacity.
Further reading
Relevant provisions in the GDPR - see Article 10
External link
Relevant provisions in the Data Protection Act 2018 - See sections 10 and 11, and Schedule 1
External link
Individual rights
The GDPR provides the following rights for individuals:
1. The right to be informed
2. The right of access
3. The right to rectification
4. The right to erasure
5. The right to restrict processing
6. The right to data portability
7. The right to object
8. Rights in relation to automated decision making and profiling
This part of the guide explains these rights.
Right to be informed
At a glance
Individuals have the right to be informed about the collection and use of their personal data. This is a
key transparency requirement under the GDPR.
You must provide individuals with information including: your purposes for processing their personal
data, your retention periods for that personal data, and who it will be shared with. We call this
‘privacy information’.
You must provide privacy information to individuals at the time you collect their personal data from
them.
If you obtain personal data from other sources, you must provide individuals with privacy information
within a reasonable period of obtaining the data and no later than one month.
There are a few circumstances when you do not need to provide people with privacy information,
such as if an individual already has the information or if it would involve a disproportionate effort to
provide it to them.
The information you provide to people must be concise, transparent, intelligible, easily accessible,
and it must use clear and plain language.
It is often most effective to provide privacy information to people using a combination of different
techniques including layering, dashboards, and just-in-time notices.
User testing is a good way to get feedback on how effective the delivery of your privacy information
is.
You must regularly review, and where necessary, update your privacy information. You must bring
any new uses of an individual’s personal data to their attention before you start the processing.
Getting the right to be informed correct can help you to comply with other aspects of the GDPR and
build trust with people, but getting it wrong can leave you open to fines and lead to reputational
damage.
Checklists
What to provide
We provide individuals with all the following privacy information:
☐ The name and contact details of our organisation.
☐ The name and contact details of our representative (if applicable).
☐ The contact details of our data protection officer (if applicable).
☐ The purposes of the processing.
☐ The lawful basis for the processing.
☐ The legitimate interests for the processing (if applicable).
☐ The categories of personal data obtained (if the personal data is not obtained from the
individual it relates to).
☐ The recipients or categories of recipients of the personal data.
☐ The details of transfers of the personal data to any third countries or international
organisations (if applicable).
☐ The retention periods for the personal data.
☐ The rights available to individuals in respect of the processing.
☐ The right to withdraw consent (if applicable).
☐ The right to lodge a complaint with a supervisory authority.
☐ The source of the personal data (if the personal data is not obtained from the individual it
relates to).
☐ The details of whether individuals are under a statutory or contractual obligation to provide
the personal data (if applicable, and if the personal data is collected from the individual it relates
to).
☐ The details of the existence of automated decision-making, including profiling (if applicable).
When to provide it
☐ We provide individuals with privacy information at the time we collect their personal data from
them.
If we obtain personal data from a source other than the individual it relates to, we provide them
with privacy information:
☐ within a reasonable period of obtaining the personal data and no later than one month;
☐ if we plan to communicate with the individual, at the latest, when the first communication
takes place; or
☐ if we plan to disclose the data to someone else, at the latest, when the data is disclosed.
How to provide it
We provide the information in a way that is:
☐ concise;
☐ transparent;
☐ intelligible;
☐ easily accessible; and
☐ uses clear and plain language.
Changes to the information
☐ We regularly review and, where necessary, update our privacy information.
☐ If we plan to use personal data for a new purpose, we update our privacy information and
communicate the changes to individuals before starting any new processing.
Best practice – drafting the information
☐ We undertake an information audit to find out what personal data we hold and what we do
with it.
☐ We put ourselves in the position of the people we’re collecting information about.
☐ We carry out user testing to evaluate how effective our privacy information is.
Best practice – delivering the information
When providing our privacy information to individuals, we use a combination of appropriate
techniques, such as:
☐ a layered approach;
☐ dashboards;
☐ just-in-time notices;
☐ icons; and
☐ mobile and smart device functionalities.
In brief
What’s new under the GDPR?
What is the right to be informed and why is it important?
What privacy information should we provide to individuals?
When should we provide privacy information to individuals?
How should we draft our privacy information?
How should we provide privacy information to individuals?
Should we test, review and update our privacy information?
What’s new under the GDPR?
The GDPR is more specific about the information you need to provide to people about what you do with
their personal data.
You must actively provide this information to individuals in a way that is easy to access, read and
understand.
You should review your current approach for providing privacy information to check it meets the
standards of the GDPR.
What is the right to be informed and why is it important?
The right to be informed covers some of the key transparency requirements of the GDPR. It is about
providing individuals with clear and concise information about what you do with their personal data.
Articles 13 and 14 of the GDPR specify what individuals have the right to be informed about. We call this
‘privacy information’.
Using an effective approach can help you to comply with other aspects of the GDPR, foster trust with
individuals and obtain more useful information from them.
Getting this wrong can leave you open to fines and lead to reputational damage.
What privacy information should we provide to individuals?
The table below summarises the information that you must provide. What you need to tell people differs
slightly depending on whether you collect personal data from the individual it relates to or obtain it from
another source.
What information do we need to provide?                Collected from    Obtained from
                                                       the individual    other sources
The name and contact details of your organisation            ✓                 ✓
The name and contact details of your representative          ✓                 ✓
The contact details of your data protection officer          ✓                 ✓
The purposes of the processing                               ✓                 ✓
The lawful basis for the processing                          ✓                 ✓
The legitimate interests for the processing                  ✓                 ✓
The categories of personal data obtained                                       ✓
The recipients or categories of recipients of the
personal data                                                ✓                 ✓
The details of transfers of the personal data to any
third countries or international organisations               ✓                 ✓
The retention periods for the personal data                  ✓                 ✓
The rights available to individuals in respect of the
processing                                                   ✓                 ✓
The right to withdraw consent                                ✓                 ✓
The right to lodge a complaint with a supervisory
authority                                                    ✓                 ✓
The source of the personal data                                                ✓
The details of whether individuals are under a
statutory or contractual obligation to provide the
personal data                                                ✓
The details of the existence of automated
decision-making, including profiling                         ✓                 ✓
When should we provide privacy information to individuals?
When you collect personal data from the individual it relates to, you must provide them with privacy
information at the time you obtain their data.
When you obtain personal data from a source other than the individual it relates to, you need to provide
the individual with privacy information:
within a reasonable period of obtaining the personal data and no later than one month;
if you use data to communicate with the individual, at the latest, when the first communication takes
place; or
if you envisage disclosure to someone else, at the latest, when you disclose the data.
You must actively provide privacy information to individuals. You can meet this requirement by putting
the information on your website, but you must make individuals aware of it and give them an easy way
to access it.
When collecting personal data from individuals, you do not need to provide them with any information
that they already have.
When obtaining personal data from other sources, you do not need to provide individuals with privacy
information if:
the individual already has the information;
providing the information to the individual would be impossible;
providing the information to the individual would involve a disproportionate effort;
providing the information to the individual would render impossible or seriously impair the
achievement of the objectives of the processing;
you are required by law to obtain or disclose the personal data; or
you are subject to an obligation of professional secrecy regulated by law that covers the personal
data.
How should we draft our privacy information?
An information audit or data mapping exercise can help you find out what personal data you hold and
what you do with it.
You should think about the intended audience for your privacy information and put yourself in their
position.
If you collect or obtain children’s personal data, you must take particular care to ensure that the
information you provide them with is appropriately written, using clear and plain language.
For all audiences, you must provide information to them in a way that is:
concise;
transparent;
intelligible;
easily accessible; and
uses clear and plain language.
How should we provide privacy information to individuals?
There are a number of techniques you can use to provide people with privacy information. You can use:
A layered approach – typically, short notices containing key privacy information that have
additional layers of more detailed information.
Dashboards – preference management tools that inform people how you use their data and allow
them to manage what happens with it.
Just-in-time notices – relevant and focused privacy information delivered at the time you collect
individual pieces of information about people.
Icons – small, meaningful symbols that indicate the existence of a particular type of data
processing.
Mobile and smart device functionalities – including pop-ups, voice alerts and mobile device
gestures.
Consider the context in which you are collecting personal data. It is good practice to use the same
medium you use to collect personal data to deliver privacy information.
Taking a blended approach, using more than one of these techniques, is often the most effective way to
provide privacy information.
Should we test, review and update our privacy information?
It is good practice to carry out user testing on your draft privacy information to get feedback on how
easy it is to access and understand.
After it is finalised, undertake regular reviews to check it remains accurate and up to date.
If you plan to use personal data for any new purposes, you must update your privacy information and
proactively bring any changes to people’s attention.
The right to be informed in practice
If you sell personal data to (or share it with) other organisations:
As part of the privacy information you provide, you must tell people who you are giving their
information to, unless you are relying on an exception or an exemption.
You can tell people the names of the organisations or the categories that they fall within; choose the
option that is most meaningful.
It is good practice to use a dashboard to let people manage who their data is sold to, or shared with,
where they have a choice.
If you buy personal data from other organisations:
You must provide people with your own privacy information, unless you are relying on an exception
or an exemption.
If you think that it is impossible to provide privacy information to individuals, or it would involve a
disproportionate effort, you must carry out a DPIA to find ways to mitigate the risks of the
processing.
If your purpose for using the personal data is different to that for which it was originally obtained,
you must tell people about this, as well as what your lawful basis is for the processing.
Provide people with your privacy information within a reasonable period of buying the data, and no
later than one month.
If you obtain personal data from publicly accessible sources:
You still have to provide people with privacy information, unless you are relying on an exception or
an exemption.
If you think that it is impossible to provide privacy information to individuals, or it would involve a
disproportionate effort, you must carry out a DPIA to find ways to mitigate the risks of the
processing.
Be very clear with individuals about any unexpected or intrusive uses of personal data, such as
combining information about them from a number of different sources.
Provide people with privacy information within a reasonable period of obtaining the data, and no later
than one month.
If you apply Artificial Intelligence (AI) to personal data:
Be upfront about it and explain your purposes for using AI.
If the purposes for processing are unclear at the outset, give people an indication of what you are
going to do with their data. As your processing purposes become clearer, update your privacy
information and actively communicate this to people.
Inform people about any new uses of personal data before you actually start the processing.
If you use AI to make solely automated decisions about people with legal or similarly significant
effects, tell them what information you use, why it is relevant and what the likely impact is going to
be.
Consider using just-in-time notices and dashboards which can help to keep people informed and let
them control further uses of their personal data.
Further Reading
Relevant provisions in the GDPR – See Articles 12-14, and Recitals 58 and 60-62
External link
In more detail – ICO guidance
We have published detailed guidance on the right to be informed.
In more detail – European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state. It
adopts guidelines for complying with the requirements of the GDPR.
WP29 adopted guidelines on Transparency, which have been endorsed by the EDPB.
Right of access
At a glance
Individuals have the right to access their personal data.
This is commonly referred to as subject access.
Individuals can make a subject access request verbally or in writing.
You have one month to respond to a request.
You cannot charge a fee to deal with a request in most circumstances.
Checklists
Preparing for subject access requests
☐ We know how to recognise a subject access request and we understand when the right of
access applies.
☐ We have a policy for how to record requests we receive verbally.
☐ We understand when we can refuse a request and are aware of the information we need to
provide to individuals when we do so.
☐ We understand the nature of the supplementary information we need to provide in response
to a subject access request.
Complying with subject access requests
☐ We have processes in place to ensure that we respond to a subject access request without
undue delay and within one month of receipt.
☐ We are aware of the circumstances when we can extend the time limit to respond to a
request.
☐ We understand that there is a particular emphasis on using clear and plain language if we are
disclosing information to a child.
☐ We understand what we need to consider if a request includes information about others.
In brief
What is the right of access?
What is an individual entitled to?
How do we recognise a request?
Should we provide a specially designed form for individuals to make a subject access request?
How should we provide the data to individuals?
Do we have to explain the contents of the information we send to the individual?
Can we charge a fee?
How long do we have to comply?
Can we extend the time for a response?
Can we ask an individual for ID?
What about requests for large amounts of personal data?
What about requests made on behalf of others?
What about requests for information about children?
What about data held by credit reference agencies?
What should we do if the data includes information about other people?
If we use a processor, does this mean they would have to deal with any subject access requests we
receive?
Can we refuse to comply with a request?
What should we do if we refuse to comply with a request?
Can I require an individual to make a subject access request?
What is the right of access?
The right of access, commonly referred to as subject access, gives individuals the right to obtain a copy
of their personal data as well as other supplementary information. It helps individuals to understand how
and why you are using their data, and check you are doing it lawfully.
What is an individual entitled to?
Individuals have the right to obtain the following from you:
confirmation that you are processing their personal data;
a copy of their personal data; and
other supplementary information – this largely corresponds to the information that you should
provide in a privacy notice (see ‘Supplementary information’ below).
Personal data of the individual
An individual is only entitled to their own personal data, and not to information relating to other people
(unless the information is also about them or they are acting on behalf of someone). Therefore, it is
important that you establish whether the information requested falls within the definition of personal
data. For further information about the definition of personal data please see our guidance on what is
personal data .
Other information
In addition to a copy of their personal data, you also have to provide individuals with the following
information:
the purposes of your processing;
the categories of personal data concerned;
the recipients or categories of recipient you disclose the personal data to;
your retention period for storing the personal data or, where this is not possible, your criteria for
determining how long you will store it;
the existence of their right to request rectification, erasure or restriction or to object to such
processing;
the right to lodge a complaint with the ICO or another supervisory authority;
information about the source of the data, where it was not obtained directly from the individual;
the existence of automated decision-making (including profiling); and
the safeguards you provide if you transfer personal data to a third country or international
organisation.
You may be providing much of this information already in your privacy notice.
How do we recognise a request?
The GDPR does not specify how to make a valid request. Therefore, an individual can make a subject
access request to you verbally or in writing. It can also be made to any part of your organisation
(including by social media) and does not have to be to a specific person or contact point.
A request does not have to include the phrase 'subject access request' or refer to Article 15 of the GDPR, as
long as it is clear that the individual is asking for their own personal data.
This presents a challenge as any of your employees could receive a valid request. However, you have a
legal responsibility to identify that an individual has made a request to you and handle it accordingly.
Therefore you may need to consider which of your staff who regularly interact with individuals may need
specific training to identify a request.
Additionally, it is good practice to have a policy for recording details of the requests you receive,
particularly those made by telephone or in person. You may wish to check with the requester that you
have understood their request, as this can help avoid later disputes about how you have interpreted the
request. We also recommend that you keep a log of verbal requests.
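A log of this kind need not be elaborate. As a minimal sketch, assuming a Python-based record-keeping system (the field names are illustrative only and are not prescribed by the GDPR):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class RequestLogEntry:
    """One record per request received, however it arrived."""
    requester: str              # who made the request
    channel: str                # e.g. "telephone", "in person", "email", "social media"
    received_on: datetime       # when the request was received
    summary: str                # your understanding of what was asked for
    received_by: str            # staff member who took the request
    confirmed_with_requester: bool = False  # have you checked your interpretation?

# Hypothetical verbal request taken at a front desk
entry = RequestLogEntry(
    requester="J. Smith",
    channel="telephone",
    received_on=datetime(2018, 9, 3, 11, 30),
    summary="Copy of all personal data held in the complaints system",
    received_by="front desk",
)
```

Recording the channel and a summary of your interpretation is what supports the good-practice points above: it evidences verbal requests and gives you something concrete to confirm back to the requester.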
Should we provide a specially designed form for individuals to make a subject access request?
Standard forms can make it easier both for you to recognise a subject access request and for the
individual to include all the details you might need to locate the information they want.
Recital 59 of the GDPR recommends that organisations ‘provide means for requests to be made
electronically, especially where personal data are processed by electronic means’. You should therefore
consider designing a subject access form that individuals can complete and submit to you electronically.
However, even if you have a form, you should note that a subject access request is valid if it is
submitted by any means, so you will still need to comply with any requests you receive in a letter, a
standard email or verbally.
Therefore, although you may invite individuals to use a form, you must make it clear that this is not
compulsory, and you must not use the form as a way of extending the one month time limit for responding.
How should we provide the data to individuals?
If an individual makes a request electronically, you should provide the information in a commonly used
electronic format, unless the individual requests otherwise.
The GDPR includes a best practice recommendation that, where possible, organisations should be able
to provide remote access to a secure self-service system which would provide the individual with direct
access to his or her information (Recital 63). This will not be appropriate for all organisations, but there
are some sectors where this may work well.
However, providing remote access should not adversely affect the rights and freedoms of others –
including trade secrets or intellectual property.
We have received a request but need to amend the data before sending out the response. Should
we send out the “old” version?
It is our view that a subject access request relates to the data held at the time the request was
received. However, in many cases, routine use of the data may result in it being amended or even
deleted while you are dealing with the request. So it would be reasonable for you to supply information
you hold when you send out a response, even if this is different to that held when you received the
request.
However, it is not acceptable to amend or delete the data if you would not otherwise have done so.
Under the Data Protection Act 2018 (DPA 2018), it is an offence to make any amendment with the
intention of preventing its disclosure.
Do we have to explain the contents of the information we send to the individual?
The GDPR requires that the information you provide to an individual is in a concise, transparent,
intelligible and easily accessible form, using clear and plain language. This will be particularly important
where the information is addressed to a child.
At its most basic, this means that the additional information you provide in response to a request (see
the ‘Other information’ section above) should be capable of being understood by the average person (or
child). However, you are not required to ensure that the information is provided in a form that can
be understood by the particular individual making the request.
For further information about requests made by a child please see the ‘What about requests for
information about children?’ section below.
Example
An individual makes a request for their personal data. When preparing the response, you notice that
a lot of it is in coded form. For example, attendance at a particular training session is logged as “A”,
while non-attendance at a similar event is logged as “M”. Also, some of the information is in the form
of handwritten notes that are difficult to read. Without access to your key or index to explain this
information, it would be impossible for anyone outside your organisation to understand. In this case,
you are required to explain the meaning of the coded information. However, although it is good
practice to do so, you are not required to decipher the poorly written notes, as the GDPR does not
require you to make information legible.
Can we charge a fee?
In most cases you cannot charge a fee to comply with a subject access request.
However, as noted above, where the request is manifestly unfounded or excessive you may charge a
“reasonable fee” for the administrative costs of complying with the request.
You can also charge a reasonable fee if an individual requests further copies of their data following a
request. You must base the fee on the administrative costs of providing further copies.
How long do we have to comply?
You must act on the subject access request without undue delay and at the latest within one month of
receipt.
You should calculate the time limit from the day after you receive the request (whether the day after is a
working day or not) until the corresponding calendar date in the next month.
If this is not possible because the following month is shorter (and there is no corresponding calendar
date), the date for response is the last day of the following month.
If the corresponding date falls on a weekend or a public holiday, you have until the next working day to
respond.
Example
You receive a subject access request from someone whose English comprehension skills are quite
poor. You send a response and they ask you to translate the information you sent them. You are not
required to do this even if the person who receives it cannot understand all of it because it can be
understood by the average person. However, it is good practice for you to help individuals
understand the information you hold about them.
Example
An organisation receives a request on 3 September. The time limit will start from the next day (4
September). This gives the organisation until 4 October to comply with the request.
This means that the exact number of days you have to comply with a request varies, depending on the
month in which the request was made.
For practical purposes, if a consistent number of days is required (eg for operational or system
purposes), it may be helpful to adopt a 28-day period to ensure compliance is always within a calendar
month.
Can we extend the time for a response?
You can extend the time to respond by a further two months if the request is complex or you have
received a number of requests from the individual. You must let the individual know within one month of
receiving their request and explain why the extension is necessary.
However, it is the ICO's view that it is unlikely to be reasonable to extend the time limit if:
it is manifestly unfounded or excessive;
an exemption applies; or
you are requesting proof of identity before considering the request.
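The counting rules above (the clock starts the day after receipt, the deadline is the corresponding calendar date in the following month, falling back to that month's last day, rolling forward over weekends and public holidays, with up to two further months for complex or numerous requests) can be sketched as code. This is an illustration of the rules as described in this guidance, not an official calculator; the `holidays` parameter is an assumption you would populate with the relevant public holidays:

```python
import calendar
from datetime import date, timedelta

def response_deadline(received, months=1, holidays=frozenset()):
    """Deadline under the day-counting rules described above.

    months=1 is the standard period; months=3 models the standard month
    plus a two-month extension for complex or numerous requests.
    """
    start = received + timedelta(days=1)     # clock starts the day after receipt
    y = start.year + (start.month - 1 + months) // 12
    m = (start.month - 1 + months) % 12 + 1
    last_day = calendar.monthrange(y, m)[1]  # shorter month: use its last day
    deadline = date(y, m, min(start.day, last_day))
    while deadline.weekday() >= 5 or deadline in holidays:
        deadline += timedelta(days=1)        # roll forward to the next working day
    return deadline
```

This reproduces the document's own worked examples: a request received on 3 September 2018 gives a deadline of 4 October, and one received on 30 March gives 30 April (or the next working day if that falls on a weekend or public holiday).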
Can we ask an individual for ID?
If you have doubts about the identity of the person making the request you can ask for more
information. However, it is important that you only request information that is necessary to confirm who
they are. The key to this is proportionality.
You need to let the individual know as soon as possible that you need more information from them to
confirm their identity before responding to their request. The period for responding to the request begins
when you receive the additional information.
What about requests for large amounts of personal data?
If you process a large amount of information about an individual you can ask them for more information
to clarify their request. You should only ask for information that you reasonably need to find the
personal data covered by the request.
You need to let the individual know as soon as possible that you need more information from them
before responding to their request. The period for responding to the request begins when you receive
the additional information. However, if an individual refuses to provide any additional information, you
must still endeavour to comply with their request ie by making reasonable searches for the information
covered by the request.
Example
An organisation receives a request on 30 March. The time limit starts from the next day (31 March).
As there is no equivalent date in April, the organisation has until 30 April to comply with the request.
If 30 April falls on a weekend, or is a public holiday, the organisation has until the end of the next
working day to comply.
What about requests made on behalf of others?
The GDPR does not prevent an individual making a subject access request via a third party. Often, this
will be a solicitor acting on behalf of a client, but it could simply be that an individual feels comfortable
allowing someone else to act for them. In these cases, you need to be satisfied that the third party
making the request is entitled to act on behalf of the individual, but it is the third party’s responsibility to
provide evidence of this entitlement. This might be a written authority to make the request or it might be
a more general power of attorney.
If you think an individual may not understand what information would be disclosed to a third party who
has made a subject access request on their behalf, you may send the response directly to the individual
rather than to the third party. The individual may then choose to share the information with the third
party after having had a chance to review it.
There are cases where an individual does not have the mental capacity to manage their own affairs.
Although there are no specific provisions in the GDPR, the Mental Capacity Act 2005 or in the Adults with
Incapacity (Scotland) Act 2000 enabling a third party to exercise subject access rights on behalf of such
an individual, it is reasonable to assume that an attorney with authority to manage the property and
affairs of an individual will have the appropriate authority. The same applies to a person appointed to
make decisions about such matters:
in England and Wales, by the Court of Protection;
in Scotland, by the Sheriff Court; and
in Northern Ireland, by the High Court (Office of Care and Protection).
Example
A building society has an elderly customer who visits a particular branch to make weekly
withdrawals from one of her accounts. Over the past few years, she has always been accompanied
by her daughter who is also a customer of the branch. The daughter makes a subject access
request on behalf of her mother and explains that her mother does not feel up to making the
request herself as she does not understand the ins and outs of data protection. As the information
held by the building society is mostly financial, it is rightly cautious about giving customer
information to a third party. If the daughter had a general power of attorney, the society would be
happy to comply. They ask the daughter whether she has such a power, but she does not.
Bearing in mind that the branch staff know the daughter and have some knowledge of the
relationship she has with her mother, they might consider complying with the request by making a
voluntary disclosure. However, the building society is not obliged to do so, and it would not be
unreasonable to require more formal authority.
What about requests for information about children?
Even if a child is too young to understand the implications of subject access rights, it is still the right of
the child rather than of anyone else such as a parent or guardian. So it is the child who has a right of
access to the information held about them, even though in the case of young children these rights are
likely to be exercised by those with parental responsibility for them.
Before responding to a subject access request for information held about a child, you should consider
whether the child is mature enough to understand their rights. If you are confident that the child can
understand their rights, then you should usually respond directly to the child. You may, however, allow
the parent to exercise the child’s rights on their behalf if the child authorises this, or if it is evident that
this is in the best interests of the child.
What matters is that the child is able to understand (in broad terms) what it means to make a subject
access request and how to interpret the information they receive as a result of doing so. When
considering borderline cases, you should take into account, among other things:
the child’s level of maturity and their ability to make decisions like this;
the nature of the personal data;
any court orders relating to parental access or responsibility that may apply;
any duty of confidence owed to the child or young person;
any consequences of allowing those with parental responsibility access to the child’s or young
person’s information. This is particularly important if there have been allegations of abuse or ill
treatment;
any detriment to the child or young person if individuals with parental responsibility cannot access
this information; and
any views the child or young person has on whether their parents should have access to information
about them.
In Scotland, a person aged 12 years or over is presumed to be of sufficient age and maturity to be able
to exercise their right of access, unless the contrary is shown. This presumption does not apply in
England and Wales or in Northern Ireland, where competence is assessed depending upon the level of
understanding of the child, but it does indicate an approach that will be reasonable in many cases.
For further information on situations where the request has been made by a child, see our guidance on
children and the GDPR .
What about data held by credit reference agencies?
In the DPA 2018 there are special provisions about the access to personal data held by credit reference
agencies. Unless otherwise specified, a subject access request to a credit reference agency only applies
to information relating to the individual’s financial standing. Credit reference agencies must also inform
individuals of their rights under s.159 of the Consumer Credit Act.
What should we do if the data includes information about other people?
Responding to a subject access request may involve providing information that relates both to the
individual making the request and to another individual.
The DPA 2018 says that you do not have to comply with the request if it would mean disclosing
information about another individual who can be identified from that information, except if:
the other individual has consented to the disclosure; or
it is reasonable to comply with the request without that individual’s consent.
In determining whether it is reasonable to disclose the information, you must take into account all of the
relevant circumstances, including:
the type of information that you would disclose;
any duty of confidentiality you owe to the other individual;
any steps you have taken to seek consent from the other individual;
whether the other individual is capable of giving consent; and
any express refusal of consent by the other individual.
So, although you may sometimes be able to disclose information relating to a third party, you need to
decide whether it is appropriate to do so in each case. This decision will involve balancing the data
subject’s right of access against the other individual’s rights. If the other person consents to you
disclosing the information about them, then it would be unreasonable not to do so. However, if there is
no such consent, you must decide whether to disclose the information anyway.
For the avoidance of doubt, you cannot refuse to provide access to personal data about an individual
simply because you obtained that data from a third party. The rules about third party data apply only to
personal data which includes both information about the individual who is the subject of the request and
information about someone else.
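The balancing exercise above is a human judgment, but it can help to capture the statutory factors in a structured record so that each decision is documented consistently. A minimal sketch, assuming a Python-based workflow (the field names and the "steer" labels are hypothetical, not taken from the DPA 2018, and the output is only a prompt for review, never a final determination):

```python
from dataclasses import dataclass

@dataclass
class ThirdPartyDataReview:
    """The circumstances the DPA 2018 says you must take into account."""
    information_type: str     # the type of information you would disclose
    duty_of_confidence: bool  # any duty of confidentiality owed to the other individual
    consent_sought: bool      # steps taken to seek consent from the other individual
    consent_given: bool       # the other individual has consented to the disclosure
    capable_of_consent: bool  # whether they are capable of giving consent
    express_refusal: bool     # any express refusal of consent

    def steer(self) -> str:
        # If consent has been given, withholding would be unreasonable.
        if self.consent_given:
            return "disclose"
        # Refusal or a duty of confidence weighs against disclosure, but the
        # final decision still balances the requester's right of access.
        if self.express_refusal or self.duty_of_confidence:
            return "review: factors weigh against disclosure"
        return "review: assess whether disclosure without consent is reasonable"

# Hypothetical case: the other individual has consented
example = ThirdPartyDataReview(
    information_type="financial",
    duty_of_confidence=False,
    consent_sought=True,
    consent_given=True,
    capable_of_consent=True,
    express_refusal=False,
)
```

Recording the factors this way also evidences, after the fact, that you took all the relevant circumstances into account.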
If we use a processor, does this mean they would have to deal with any subject access requests
we receive?
Responsibility for complying with a subject access request lies with you as the controller. You need to
ensure that you have contractual arrangements in place to guarantee that subject access requests are
dealt with properly, irrespective of whether they are sent to you or to the processor. More information
about contracts and liabilities between controllers and processors can be found here.
You are not able to extend the one month time limit on the basis that you have to rely on a processor to
provide the information that you need to respond. As mentioned above, you can only extend the time
limit by two months if the request is complex or you have received a number of requests from the
individual.
Can we refuse to comply with a request?
You can refuse to comply with a subject access request if it is manifestly unfounded or excessive, taking
into account whether the request is repetitive in nature.
If you consider that a request is manifestly unfounded or excessive you can:
request a "reasonable fee" to deal with the request; or
refuse to deal with the request.
In either case you need to justify your decision.
You should base the reasonable fee on the administrative costs of complying with the request. If you
decide to charge a fee you should contact the individual promptly and inform them. You do not need to
comply with the request until you have received the fee.
What should we do if we refuse to comply with a request?
You must inform the individual without undue delay and within one month of receipt of the
request.
You should inform the individual about:
the reasons you are not taking action;
their right to make a complaint to the ICO or another supervisory authority; and
their ability to seek to enforce this right through a judicial remedy.
You should also provide this information if you request a reasonable fee or need additional information
to identify the individual.
Can I require an individual to make a subject access request?
In the DPA 2018 it is a criminal offence, in certain circumstances and in relation to certain information,
to require an individual to make a subject access request. We will provide further guidance on this
offence in due course.
In more detail – Data Protection Act 2018
There are other exemptions from the right of access in the DPA 2018. These exemptions will apply
in certain circumstances, broadly associated with why you are processing the data. We will
provide guidance on the application of these exemptions in due course.
Further Reading
Relevant provisions in the GDPR - See Articles 12, 15 and Recitals 63, 64
Right to rectification
At a glance
The GDPR includes a right for individuals to have inaccurate personal data rectified, or completed if it
is incomplete.
An individual can make a request for rectification verbally or in writing.
You have one calendar month to respond to a request.
In certain circumstances you can refuse a request for rectification.
This right is closely linked to the controller’s obligations under the accuracy principle of the GDPR
(Article 5(1)(d)).
Checklists
Preparing for requests for rectification
☐ We know how to recognise a request for rectification and we understand when this right
applies.
☐ We have a policy for how to record requests we receive verbally.
☐ We understand when we can refuse a request and are aware of the information we need to
provide to individuals when we do so.
Complying with requests for rectification
☐ We have processes in place to ensure that we respond to a request for rectification without
undue delay and within one month of receipt.
☐ We are aware of the circumstances when we can extend the time limit to respond to a
request.
☐ We have appropriate systems to rectify or complete information, or provide a supplementary
statement.
☐ We have procedures in place to inform any recipients if we rectify any data we have shared
with them.
In brief
What is the right to rectification?
Under Article 16 of the GDPR individuals have the right to have inaccurate personal data rectified. An
individual may also be able to have incomplete personal data completed – although this will depend on
the purposes for the processing. This may involve providing a supplementary statement to the
incomplete data.
This right has close links to the accuracy principle of the GDPR (Article 5(1)(d)). However, although you
may have already taken steps to ensure that the personal data was accurate when you obtained it, this
right imposes a specific obligation to reconsider the accuracy upon request.
What do we need to do?
If you receive a request for rectification you should take reasonable steps to satisfy yourself that the
data is accurate and to rectify the data if necessary. You should take into account the arguments and
evidence provided by the data subject.
What steps are reasonable will depend, in particular, on the nature of the personal data and what it will
be used for. The more important it is that the personal data is accurate, the greater the effort you
should put into checking its accuracy and, if necessary, taking steps to rectify it. For example, you
should make a greater effort to rectify inaccurate personal data if it is used to make significant decisions
that will affect an individual or others, rather than trivial ones.
You may also take into account any steps you have already taken to verify the accuracy of the data
prior to the challenge by the data subject.
When is data inaccurate?
The GDPR does not give a definition of the term accuracy. However, the Data Protection Act 2018 (DPA
2018) states that personal data is inaccurate if it is incorrect or misleading as to any matter of fact.
What should we do about data that records a mistake?
Determining whether personal data is inaccurate can be more complex if the data refers to a mistake
that has subsequently been resolved. It may be possible to argue that the record of the mistake is, in
itself, accurate and should be kept. In such circumstances, both the fact that a mistake was made and
the correct information should be included in the individual's data.
What should we do about data that records a disputed opinion?
Example
If a patient is diagnosed by a GP as suffering from a particular illness or condition, but it is later
proved that this is not the case, it is likely that their medical records should record both the initial
diagnosis (even though it was later proved to be incorrect) and the final findings. Whilst the medical
record shows a misdiagnosis, it is an accurate record of the patient's medical treatment. As long as
the medical record contains the up-to-date findings, and this is made clear in the record, it would be
difficult to argue that the record is inaccurate and should be rectified.
It is also complex if the data in question records an opinion. Opinions are, by their very nature,
subjective, and it can be difficult to conclude that the record of an opinion is inaccurate. As long as the
record shows clearly that the information is an opinion and, where appropriate, whose opinion it is, it
may be difficult to say that it is inaccurate and needs to be rectified.
What should we do while we are considering the accuracy?
Under Article 18 an individual has the right to request restriction of the processing of their personal data
where they contest its accuracy and you are checking it. As a matter of good practice, you should
restrict the processing of the personal data in question whilst you are verifying its accuracy, whether or
not the individual has exercised their right to restriction. For more information, see our guidance on the
right to restriction .
What should we do if we are satisfied that the data is accurate?
You should let the individual know if you are satisfied that the personal data is accurate, and tell them
that you will not be amending the data. You should explain your decision, and inform them of their right
to make a complaint to the ICO or another supervisory authority; and their ability to seek to enforce
their rights through a judicial remedy.
It is also good practice to place a note on your system indicating that the individual challenges the
accuracy of the data and their reasons for doing so.
Can we refuse to comply with the request for rectification for other reasons?
You can refuse to comply with a request for rectification if the request is manifestly unfounded or
excessive, taking into account whether the request is repetitive in nature.
If you consider that a request is manifestly unfounded or excessive you can:
request a "reasonable fee" to deal with the request; or
refuse to deal with the request.
In either case you will need to justify your decision.
You should base the reasonable fee on the administrative costs of complying with the request. If you
decide to charge a fee you should contact the individual without undue delay and within one month. You
do not need to comply with the request until you have received the fee.
What should we do if we refuse to comply with a request for rectification?
You must inform the individual without undue delay and within one month of receipt of the request
about:
the reasons you are not taking action;
their right to make a complaint to the ICO or another supervisory authority; and
their ability to seek to enforce this right through a judicial remedy.
You should also provide this information if you request a reasonable fee or need additional information
to identify the individual.
In more detail – Data Protection Act 2018
There are other exemptions from the right to rectification contained in the DPA 2018. These
exemptions will apply in certain circumstances, broadly associated with why you are processing the
data. We will provide guidance on the application of these exemptions in due course.
How can we recognise a request?
The GDPR does not specify how to make a valid request. Therefore, an individual can make a request
for rectification verbally or in writing. It can also be made to any part of your organisation and does not
have to be to a specific person or contact point.
A request to rectify personal data does not need to mention the phrase ‘request for rectification’ or
refer to Article 16 of the GDPR to be valid. As long as the individual has challenged the accuracy of
their data and has asked you to correct it, or has asked that you take steps to complete data held about
them that is incomplete, this will be a valid request under Article 16.
This presents a challenge as any of your employees could receive a valid verbal request. However, you
have a legal responsibility to identify that an individual has made a request to you and handle it
accordingly. Therefore you may need to consider which of your staff who regularly interact with
individuals may need specific training to identify a request.
Additionally, it is good practice to have a policy for recording details of the requests you receive,
particularly those made by telephone or in person. You may wish to check with the requester that you
have understood their request, as this can help avoid later disputes about how you have interpreted the
request. We also recommend that you keep a log of verbal requests.
Can we charge a fee?
No, in most cases you cannot charge a fee to comply with a request for rectification.
However, as noted above, if the request is manifestly unfounded or excessive you may charge a
“reasonable fee” for the administrative costs of complying with the request.
How long do we have to comply?
You must act upon the request without undue delay and at the latest within one month of receipt.
You should calculate the time limit from the day after you receive the request (whether the day after is a
working day or not) until the corresponding calendar date in the next month.
If this is not possible because the following month is shorter (and there is no corresponding calendar
date), the date for response is the last day of the following month.
If the corresponding date falls on a weekend or a public holiday, you will have until the next working day
to respond.
This means that the exact number of days you have to comply with a request varies, depending on the
month in which the request was made.
For practical purposes, if a consistent number of days is required (eg for operational or system
purposes), it may be helpful to adopt a 28-day period to ensure compliance is always within a calendar
month.
Example
An organisation receives a request on 3 September. The time limit will start from the next day (4
September). This gives the organisation until 4 October to comply with the request.
Example
An organisation receives a request on 30 March. The time limit starts from the next day (31 March).
As there is no equivalent date in April, the organisation has until 30 April to comply with the request.
If 30 April falls on a weekend, or is a public holiday, the organisation has until the end of the next
working day to comply.
Can we extend the time to respond to a request?
You can extend the time to respond by a further two months if the request is complex or you have
received a number of requests from the individual. You must let the individual know without undue delay
and within one month of receiving their request and explain why the extension is necessary.
The circumstances in which you can extend the time to respond can include further consideration of the
accuracy of disputed data - although you can only do this in complex cases - and the result may be that
at the end of the extended time period you inform the individual that you consider the data in question
to be accurate.
However, it is the ICO's view that it is unlikely to be reasonable to extend the time limit if:
it is manifestly unfounded or excessive;
an exemption applies; or
you are requesting proof of identity before considering the request.
Can we ask an individual for ID?
If you have doubts about the identity of the person making the request you can ask for more
information. However, it is important that you only request information that is necessary to confirm who
they are. The key to this is proportionality. You should take into account what data you hold, the nature
of the data, and what you are using it for.
You must let the individual know without undue delay and within one month that you need more
information from them to confirm their identity. You do not need to comply with the request until you
have received the additional information.
Do we have to tell other organisations if we rectify personal data?
If you have disclosed the personal data to others, you must contact each recipient and inform them of
the rectification or completion of the personal data - unless this proves impossible or involves
disproportionate effort. If asked to, you must also inform the individual about these recipients.
The GDPR defines a recipient as a natural or legal person, public authority, agency or other body to
which the personal data are disclosed. The definition includes controllers, processors and persons who,
under the direct authority of the controller or processor, are authorised to process personal data.
Further Reading
Relevant provisions in the GDPR - See Articles 5, 12, 16 and 19
External link
Right to erasure
At a glance
The GDPR introduces a right for individuals to have personal data erased.
The right to erasure is also known as ‘the right to be forgotten’.
Individuals can make a request for erasure verbally or in writing.
You have one month to respond to a request.
The right is not absolute and only applies in certain circumstances.
This right is not the only way in which the GDPR places an obligation on you to consider whether to
delete personal data.
Checklists
Preparing for requests for erasure
☐ We know how to recognise a request for erasure and we understand when the right applies.
☐ We have a policy for how to record requests we receive verbally.
☐ We understand when we can refuse a request and are aware of the information we need to
provide to individuals when we do so.
Complying with requests for erasure
☐ We have processes in place to ensure that we respond to a request for erasure without undue
delay and within one month of receipt.
☐ We are aware of the circumstances when we can extend the time limit to respond to a
request.
☐ We understand that there is a particular emphasis on the right to erasure if the request relates
to data collected from children.
☐ We have procedures in place to inform any recipients if we erase any data we have shared
with them.
☐ We have appropriate methods in place to erase information.
In brief
What is the right to erasure?
Under Article 17 of the GDPR individuals have the right to have personal data erased. This is also known
as the ‘right to be forgotten’. The right is not absolute and only applies in certain circumstances.
When does the right to erasure apply?
Individuals have the right to have their personal data erased if:
the personal data is no longer necessary for the purpose which you originally collected or processed
it for;
you are relying on consent as your lawful basis for holding the data, and the individual withdraws
their consent;
you are relying on legitimate interests as your basis for processing, the individual objects to the
processing of their data, and there is no overriding legitimate interest to continue this processing;
you are processing the personal data for direct marketing purposes and the individual objects to that
processing;
you have processed the personal data unlawfully (ie in breach of the lawfulness requirement of the
first principle);
you have to do it to comply with a legal obligation; or
you have processed the personal data to offer information society services to a child.
How does the right to erasure apply to data collected from children?
There is an emphasis on the right to have personal data erased if the request relates to data collected
from children. This reflects the enhanced protection of children’s information, especially in online
environments, under the GDPR.
Therefore, if you process data collected from children, you should give particular weight to any request
for erasure if the processing of the data is based upon consent given by a child – especially any
processing of their personal data on the internet. This is still the case when the data subject is no longer
a child, because a child may not have been fully aware of the risks involved in the processing at the
time of consent.
For further details about the right to erasure and children’s personal data please read our guidance on
children's privacy .
Do we have to tell other organisations about the erasure of personal data?
The GDPR specifies two circumstances where you should tell other organisations about the erasure of
personal data:
the personal data has been disclosed to others; or
the personal data has been made public in an online environment (for example on social networks,
forums or websites).
If you have disclosed the personal data to others, you must contact each recipient and inform them of
the erasure, unless this proves impossible or involves disproportionate effort. If asked to, you must also
inform the individuals about these recipients.
The GDPR defines a recipient as a natural or legal person, public authority, agency or other body to
which the personal data are disclosed. The definition includes controllers, processors and persons who,
under the direct authority of the controller or processor, are authorised to process personal data.
Where personal data has been made public in an online environment reasonable steps should be taken
to inform other controllers who are processing the personal data to erase links to, copies or replication
of that data. When deciding what steps are reasonable you should take into account available
technology and the cost of implementation.
Do we have to erase personal data from backup systems?
If a valid erasure request is received and no exemption applies then you will have to take steps to
ensure erasure from backup systems as well as live systems. Those steps will depend on your particular
circumstances, your retention schedule (particularly in the context of its backups), and the technical
mechanisms that are available to you.
You must be absolutely clear with individuals as to what will happen to their data when their erasure
request is fulfilled, including in respect of backup systems.
It may be that the erasure request can be instantly fulfilled in respect of live systems, but that the data
will remain within the backup environment for a certain period of time until it is overwritten.
The key issue is to put the backup data ‘beyond use’, even if it cannot be immediately overwritten. You
must ensure that you do not use the data within the backup for any other purpose, ie that the backup is
simply held on your systems until it is replaced in line with an established schedule. Provided this is the
case it may be unlikely that the retention of personal data within the backup would pose a significant
risk, although this will be context specific. For more information on what we mean by ‘putting data
beyond use’ see our old guidance under the 1998 Act on deleting personal data (this will be updated in
due course).
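As a rough illustration of putting backup data 'beyond use', a backup catalogue might flag erased data subjects so their data cannot be read back before the snapshot is overwritten on schedule. This is a minimal sketch; the class and method names are hypothetical, not part of the guidance:

```python
from dataclasses import dataclass, field

@dataclass
class BackupSnapshot:
    """Hypothetical backup snapshot held until overwritten on a retention schedule."""
    snapshot_id: str
    beyond_use: set = field(default_factory=set)  # subjects erased from live systems

    def mark_beyond_use(self, subject_id: str) -> None:
        # Live-system erasure is done; flag the backup copy so it is never
        # used for any purpose before the scheduled overwrite.
        self.beyond_use.add(subject_id)

    def read(self, subject_id: str) -> str:
        if subject_id in self.beyond_use:
            raise PermissionError(f"data for {subject_id} is beyond use")
        return f"data for {subject_id}"
```

The point of the flag is that the backup is simply stored, never consulted, until it is replaced in line with the established schedule.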
When does the right to erasure not apply?
The right to erasure does not apply if processing is necessary for one of the following reasons:
to exercise the right of freedom of expression and information;
to comply with a legal obligation;
for the performance of a task carried out in the public interest or in the exercise of official authority;
for archiving purposes in the public interest, scientific research, historical research or statistical
purposes where erasure is likely to render impossible or seriously impair the achievement of that
processing; or
for the establishment, exercise or defence of legal claims.
The GDPR also specifies two circumstances where the right to erasure will not apply to special category
data:
if the processing is necessary for public health purposes in the public interest (eg protecting against
serious cross-border threats to health, or ensuring high standards of quality and safety of health care
and of medicinal products or medical devices); or
if the processing is necessary for the purposes of preventative or occupational medicine (eg where
the processing is necessary for the working capacity of an employee; for medical diagnosis; for the
provision of health or social care; or for the management of health or social care systems or
services). This only applies where the data is being processed by or under the responsibility of a
professional subject to a legal obligation of professional secrecy (eg a health professional).
For more information about special categories of data please see our Guide to the GDPR .
Can we refuse to comply with a request for other reasons?
You can refuse to comply with a request for erasure if it is manifestly unfounded or excessive, taking
into account whether the request is repetitive in nature.
If you consider that a request is manifestly unfounded or excessive you can:
request a "reasonable fee" to deal with the request; or
refuse to deal with the request.
In either case you will need to justify your decision.
You should base the reasonable fee on the administrative costs of complying with the request. If you
decide to charge a fee you should contact the individual promptly and inform them. You do not need to
comply with the request until you have received the fee.
What should we do if we refuse to comply with a request for erasure?
You must inform the individual without undue delay and within one month of receipt of the request.
You should inform the individual about:
the reasons you are not taking action;
their right to make a complaint to the ICO or another supervisory authority; and
their ability to seek to enforce this right through a judicial remedy.
You should also provide this information if you request a reasonable fee or need additional information
to identify the individual.
In more detail – Data Protection Act 2018
There are other exemptions from the right to erasure in the DPA 2018. These exemptions will apply
in certain circumstances, broadly associated with why you are processing the data. We will provide
further guidance on the application of these exemptions in due course.
How do we recognise a request?
The GDPR does not specify how to make a valid request. Therefore, an individual can make a request
for erasure verbally or in writing. It can also be made to any part of your organisation and does not
have to be to a specific person or contact point.
A request does not have to include the phrase 'request for erasure' or refer to Article 17 of the GDPR,
as long as one of the conditions listed above applies.
This presents a challenge, as any of your employees could receive a valid verbal request. However, you
have a legal responsibility to identify that an individual has made a request to you and handle it
accordingly. Therefore you may need to consider which of your staff regularly interact with individuals
and may need specific training to identify a request.
Additionally, it is good practice to have a policy for recording details of the requests you receive,
particularly those made by telephone or in person. You may wish to check with the requester that you
have understood their request, as this can help avoid later disputes about how you have interpreted the
request. We also recommend that you keep a log of verbal requests.
Can we charge a fee?
No, in most cases you cannot charge a fee to comply with a request for erasure.
However, as noted above, where the request is manifestly unfounded or excessive you may charge a
“reasonable fee” for the administrative costs of complying with the request.
How long do we have to comply?
You must act upon the request without undue delay and at the latest within one month of receipt.
You should calculate the time limit from the day after you receive the request (whether the day after is a
working day or not) until the corresponding calendar date in the next month.
If this is not possible because the following month is shorter (and there is no corresponding calendar
date), the date for response is the last day of the following month.
If the corresponding date falls on a weekend or a public holiday, you will have until the next working day
to respond.
This means that the exact number of days you have to comply with a request varies, depending on the
month in which the request is made.
Example
An organisation receives a request on 3 September. The time limit will start from the next day (4
September). This gives the organisation until 4 October to comply with the request.
Example
An organisation receives a request on 30 March. The time limit starts from the next day (31 March).
As there is no equivalent date in April, the organisation has until 30 April to comply with the request.
If 30 April falls on a weekend, or is a public holiday, the organisation has until the end of the next
working day to comply.
For practical purposes, if a consistent number of days is required (eg for operational or system
purposes), it may be helpful to adopt a 28-day period to ensure compliance is always within a calendar
month.
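The deadline calculation described above can be sketched in code. This is an illustrative sketch only; the function name and the holiday set are hypothetical, and a real implementation would need the applicable public-holiday calendar:

```python
from datetime import date, timedelta
import calendar

def response_deadline(received: date, holidays: frozenset = frozenset()) -> date:
    """Illustrative sketch of the one-month time-limit calculation."""
    start = received + timedelta(days=1)  # the time limit runs from the day after receipt
    # Corresponding calendar date in the next month...
    if start.month == 12:
        year, month = start.year + 1, 1
    else:
        year, month = start.year, start.month + 1
    # ...or, if the next month has no corresponding date, its last day.
    last_day = calendar.monthrange(year, month)[1]
    deadline = date(year, month, min(start.day, last_day))
    # A deadline on a weekend or public holiday rolls to the next working day.
    while deadline.weekday() >= 5 or deadline in holidays:
        deadline += timedelta(days=1)
    return deadline
```

For the examples above: a request received on 3 September 2018 gives a deadline of 4 October 2018, and one received on 30 March 2018 gives 30 April 2018.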
Can we extend the time for a response?
You can extend the time to respond by a further two months if the request is complex or you have
received a number of requests from the individual. You must let the individual know without undue delay
and within one month of receiving their request and explain why the extension is necessary.
However, it is the ICO's view that it is unlikely to be reasonable to extend the time limit if:
it is manifestly unfounded or excessive;
an exemption applies; or
you are requesting proof of identity before considering the request.
Can we ask an individual for ID?
If you have doubts about the identity of the person making the request you can ask for more
information. However, it is important that you only request information that is necessary to confirm who
they are. The key to this is proportionality. You should take into account what data you hold, the nature
of the data, and what you are using it for.
You must let the individual know without undue delay and within one month that you need more
information from them to confirm their identity. You do not need to comply with the request until you
have received the additional information.
Further Reading
Relevant provisions in the GDPR - See Articles 6, 9, 12, 17 and Recitals 65, 66
External link
Right to restrict processing
At a glance
Individuals have the right to request the restriction or suppression of their personal data.
This is not an absolute right and only applies in certain circumstances.
When processing is restricted, you are permitted to store the personal data, but not use it.
An individual can make a request for restriction verbally or in writing.
You have one calendar month to respond to a request.
This right has close links to the right to rectification (Article 16) and the right to object (Article 21).
Checklists
Preparing for requests for restriction
☐ We know how to recognise a request for restriction and we understand when the right applies.
☐ We have a policy in place for how to record requests we receive verbally.
☐ We understand when we can refuse a request and are aware of the information we need to
provide to individuals when we do so.
Complying with requests for restriction
☐ We have processes in place to ensure that we respond to a request for restriction without
undue delay and within one month of receipt.
☐ We are aware of the circumstances when we can extend the time limit to respond to a
request.
☐ We have appropriate methods in place to restrict the processing of personal data on our
systems.
☐ We have appropriate methods in place to indicate on our systems that further processing has
been restricted.
☐ We understand the circumstances when we can process personal data that has been
restricted.
☐ We have procedures in place to inform any recipients if we restrict any data we have shared
with them.
☐ We understand that we need to tell individuals before we lift a restriction on processing.
In brief
What is the right to restrict processing?
Article 18 of the GDPR gives individuals the right to restrict the processing of their personal data in
certain circumstances. This means that an individual can limit the way that an organisation uses their
data. This is an alternative to requesting the erasure of their data.
Individuals have the right to restrict the processing of their personal data where they have a particular
reason for wanting the restriction. This may be because they have issues with the content of the
information you hold or how you have processed their data. In most cases you will not be required to
restrict an individual’s personal data indefinitely, but will need to have the restriction in place for a
certain period of time.
When does the right to restrict processing apply?
Individuals have the right to request you restrict the processing of their personal data in the following
circumstances:
the individual contests the accuracy of their personal data and you are verifying the accuracy of the
data;
the data has been unlawfully processed (ie in breach of the lawfulness requirement of the first
principle of the GDPR) and the individual opposes erasure and requests restriction instead;
you no longer need the personal data but the individual needs you to keep it in order to establish,
exercise or defend a legal claim; or
the individual has objected to you processing their data under Article 21(1), and you are considering
whether your legitimate grounds override those of the individual.
Although this is distinct from the right to rectification and the right to object, there are close links
between those rights and the right to restrict processing:
if an individual has challenged the accuracy of their data and asked for you to rectify it (Article 16),
they also have a right to request you restrict processing while you consider their rectification request;
or
if an individual exercises their right to object under Article 21(1), they also have a right to request
you restrict processing while you consider their objection request.
Therefore, as a matter of good practice you should automatically restrict the processing whilst you are
considering its accuracy or the legitimate grounds for processing the personal data in question.
How do we restrict processing?
You need to have processes in place that enable you to restrict personal data if required. It is important
to note that the definition of processing includes a broad range of operations including collection,
structuring, dissemination and erasure of data. Therefore, you should use methods of restriction that are
appropriate for the type of processing you are carrying out.
The GDPR suggests a number of different methods that could be used to restrict data, such as:
temporarily moving the data to another processing system;
making the data unavailable to users; or
temporarily removing published data from a website.
It is particularly important that you consider how you store personal data that you no longer need to
process but the individual has requested you restrict (effectively requesting that you do not erase the
data).
If you are using an automated filing system, you need to use technical measures to ensure that any
further processing cannot take place and that the data cannot be changed whilst the restriction is in
place. You should also note on your system that the processing of this data has been restricted.
Can we do anything with restricted data?
You must not process the restricted data in any way except to store it unless:
you have the individual’s consent;
it is for the establishment, exercise or defence of legal claims;
it is for the protection of the rights of another person (natural or legal); or
it is for reasons of important public interest.
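On an automated filing system, one way to enforce this is a guard that permits storage but blocks any other processing of restricted data unless one of the grounds above applies. A minimal sketch, with hypothetical class and ground names:

```python
from dataclasses import dataclass

# Grounds on which restricted data may still be processed (per the list above).
ALLOWED_GROUNDS = {"consent", "legal_claims", "protect_others", "public_interest"}

@dataclass
class PersonalRecord:
    data: dict
    restricted: bool = False  # noted on the system when processing is restricted

    def process(self, ground: str = ""):
        # Storage is always permitted; any other processing of restricted
        # data requires one of the allowed grounds.
        if self.restricted and ground not in ALLOWED_GROUNDS:
            raise PermissionError("processing is restricted; storage only")
        return self.data
```

The restriction flag also satisfies the point above about noting on your system that processing has been restricted.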
Do we have to tell other organisations about the restriction of personal data?
Yes. If you have disclosed the personal data in question to others, you must contact each recipient and
inform them of the restriction of the personal data - unless this proves impossible or involves
disproportionate effort. If asked to, you must also inform the individual about these recipients.
The GDPR defines a recipient as a natural or legal person, public authority, agency or other body to
which the personal data are disclosed. The definition includes controllers, processors and persons who,
under the direct authority of the controller or processor, are authorised to process personal data.
When can we lift the restriction?
In many cases the restriction of processing is only temporary, specifically when the restriction is on the
grounds that:
the individual has disputed the accuracy of the personal data and you are investigating this; or
the individual has objected to you processing their data on the basis that it is necessary for the
performance of a task carried out in the public interest or the purposes of your legitimate interests,
and you are considering whether your legitimate grounds override those of the individual.
Once you have made a decision on the accuracy of the data, or whether your legitimate grounds
override those of the individual, you may decide to lift the restriction.
If you do this, you must inform the individual before you lift the restriction.
As noted above, these two conditions are linked to the right to rectification (Article 16) and the right to
object (Article 21). This means that if you are informing the individual that you are lifting the restriction
(on the grounds that you are satisfied that the data is accurate, or that your legitimate grounds override
theirs) you should also inform them of the reasons for your refusal to act upon their rights under Articles
16 or 21. You will also need to inform them of their right to make a complaint to the ICO or another
supervisory authority; and their ability to seek a judicial remedy.
Can we refuse to comply with a request for restriction?
You can refuse to comply with a request for restriction if the request is manifestly unfounded or
excessive, taking into account whether the request is repetitive in nature.
If you consider that a request is manifestly unfounded or excessive you can:
request a "reasonable fee" to deal with the request; or
refuse to deal with the request.
In either case you will need to justify your decision.
You should base the reasonable fee on the administrative costs of complying with the request. If you
decide to charge a fee you should contact the individual promptly and inform them. You do not need to
comply with the request until you have received the fee.
What should we do if we refuse to comply with a request for restriction?
You must inform the individual without undue delay and within one month of receipt of the request.
You should inform the individual about:
the reasons you are not taking action;
their right to make a complaint to the ICO or another supervisory authority; and
their ability to seek to enforce this right through a judicial remedy.
You should also provide this information if you request a reasonable fee or need additional information
to identify the individual.
How do we recognise a request?
The GDPR does not specify how to make a valid request. Therefore, an individual can make a request
for restriction verbally or in writing. It can also be made to any part of your organisation and does not
have to be to a specific person or contact point.
A request does not have to include the phrase 'request for restriction' or refer to Article 18 of the GDPR,
as long as one of the conditions listed above applies.
This presents a challenge, as any of your employees could receive a valid verbal request. However, you
have a legal responsibility to identify that an individual has made a request to you and handle it
accordingly. Therefore you may need to consider which of your staff regularly interact with individuals
and may need specific training to identify a request.
Additionally, it is good practice to have a policy for recording details of the requests you receive,
particularly those made by telephone or in person. You may wish to check with the requester that you
have understood their request, as this can help avoid later disputes about how you have interpreted the
request. We also recommend that you keep a log of verbal requests.
In more detail – Data Protection Act 2018
There are other exemptions from the right to restriction contained in the Data Protection Act 2018.
These exemptions will apply in certain circumstances, broadly associated with why you are
processing the data. We will provide further guidance on the application of these exemptions in due
course.
Can we charge a fee?
No, in most cases you cannot charge a fee to comply with a request for restriction.
However, as noted above, where the request is manifestly unfounded or excessive you may charge a
“reasonable fee” for the administrative costs of complying with the request.
How long do we have to comply?
You must act upon the request without undue delay and at the latest within one month of receipt.
You should calculate the time limit from the day after you receive the request (whether the day after is a
working day or not) until the corresponding calendar date in the next month.
If this is not possible because the following month is shorter (and there is no corresponding calendar
date), the date for response is the last day of the following month.
If the corresponding date falls on a weekend or a public holiday, you will have until the next working day
to respond.
This means that the exact number of days you have to comply with a request varies, depending on the
month in which the request was made.
Example
An organisation receives a request on 3 September. The time limit will start from the next day (4
September). This gives the organisation until 4 October to comply with the request.
Example
An organisation receives a request on 30 March. The time limit starts from the next day (31 March).
As there is no equivalent date in April, the organisation has until 30 April to comply with the request.
If 30 April falls on a weekend, or is a public holiday, the organisation has until the end of the next
working day to comply.
For practical purposes, if a consistent number of days is required (eg for operational or system
purposes), it may be helpful to adopt a 28-day period to ensure compliance is always within a calendar
month.
Can we extend the time for a response?
You can extend the time to respond by a further two months if the request is complex or you have
received a number of requests from the individual. You must let the individual know within one month of
receiving their request and explain why the extension is necessary.
However, it is the ICO's view that it is unlikely to be reasonable to extend the time limit if:
it is manifestly unfounded or excessive;
an exemption applies; or
you are requesting proof of identity before considering the request.
Can we ask an individual for ID?
If you have doubts about the identity of the person making the request you can ask for more
information. However, it is important that you only request information that is necessary to confirm who
they are. The key to this is proportionality. You should take into account what data you hold, the nature
of the data, and what you are using it for.
You must let the individual know without undue delay and within one month that you need more
information from them to confirm their identity. You do not need to comply with the request until you
have received the additional information.
Further Reading
Relevant provisions in the GDPR - See Articles 18, 19 and Recital 67
External link
Right to data portability
At a glance
The right to data portability allows individuals to obtain and reuse their personal data for their own
purposes across different services.
It allows them to move, copy or transfer personal data easily from one IT environment to another in
a safe and secure way, without affecting its usability.
Doing this enables individuals to take advantage of applications and services that can use this data to
find them a better deal or help them understand their spending habits.
The right only applies to information an individual has provided to a controller.
Some organisations in the UK already offer data portability through midata and similar initiatives
which allow individuals to view, access and use their personal consumption and transaction data in a
way that is portable and safe.
Checklists
Preparing for requests for data portability
☐ We know how to recognise a request for data portability and we understand when the right
applies.
☐ We have a policy for how to record requests we receive verbally.
☐ We understand when we can refuse a request and are aware of the information we need to
provide to individuals when we do so.
Complying with requests for data portability
☐ We can transmit personal data in structured, commonly used and machine readable formats.
☐ We use secure methods to transmit personal data.
☐ We have processes in place to ensure that we respond to a request for data portability without
undue delay and within one month of receipt.
☐ We are aware of the circumstances when we can extend the time limit to respond to a
request.
In brief
What is the right to data portability?
The right to data portability gives individuals the right to receive personal data they have provided to a
controller in a structured, commonly used and machine readable format. It also gives them the right to
request that a controller transmits this data directly to another controller.
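By way of illustration, 'structured, commonly used and machine readable' could mean serialising the data the individual provided as JSON or CSV. The function below is a hypothetical sketch, not a prescribed format:

```python
import csv
import io
import json

def export_portable(records: list) -> dict:
    """Serialise provided personal data into two common machine-readable formats."""
    # Collect every field name that appears across the records.
    fieldnames = sorted({key for record in records for key in record})
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)
    return {"json": json.dumps(records, indent=2), "csv": buf.getvalue()}
```

Either output preserves the structure of the data, so another controller (or the individual) can reuse it without affecting its usability.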
When does the right apply?
The right to data portability only applies when:
your lawful basis for processing this information is consent or the performance of a contract; and
you are carrying out the processing by automated means (ie excluding paper files).
What does the right apply to?
Information is only within the scope of the right to data portability if it is personal data of the individual
that they have provided to you.
What does ‘provided to a controller’ mean?
Sometimes the personal data an individual has provided to you will be easy to identify (eg their mailing
address, username, age). However, the meaning of data ‘provided to’ you is not limited to this. It is also
personal data resulting from observation of an individual’s activities (eg where using a device or
service).
This may include:
history of website usage or search activities;
traffic and location data; or
‘raw’ data processed by connected objects such as smart meters and wearable devices.
It does not include any additional data that you have created based on the data an individual has
provided to you. For example, if you use the data they have provided to create a user profile then this
data would not be in scope of data portability.
You should however note that if this ‘inferred’ or ‘derived’ data is personal data, you still need to provide
it to an individual if they make a subject access request. Bearing this in mind, if it is clear that the
individual is seeking access to the inferred/derived data, as part of a wider portability request, it would
be good practice to include this data in your response.
Does the right apply to anonymous or pseudonymous data?
The right to data portability only applies to personal data. This means that it does not apply to genuinely
anonymous data. However, pseudonymous data that can be clearly linked back to an individual (eg
where that individual provides the respective identifier) is within scope of the right.
What happens if the personal data includes information about others?
If the requested information includes information about others (eg third party data) you need to
consider whether transmitting that data would adversely affect the rights and freedoms of those third
parties.
Generally speaking, providing third party data to the individual making the portability request should not
be a problem, assuming that the requestor provided this data to you within their information in the first
place. However, you should always consider whether there will be an adverse effect on the rights and
freedoms of third parties, in particular when you are transmitting data directly to another controller.
If the requested data has been provided to you by multiple data subjects (eg a joint bank account) you
need to be satisfied that all parties agree to the portability request. This means that you may have to
seek agreement from all the parties involved.
What is an individual entitled to?
The right to data portability entitles an individual to:
receive a copy of their personal data; and/or
have their personal data transmitted from one controller to another controller.
Individuals have the right to receive their personal data and store it for further personal use. This allows
the individual to manage and reuse their personal data. For example, an individual wants to retrieve
their contact list from a webmail application to build a wedding list or to store their data in a personal
data store.
You can achieve this by either:
directly transmitting the requested data to the individual; or
providing access to an automated tool that allows the individual to extract the requested data
themselves.
This does not create an obligation for you to allow individuals more general and routine access to your
systems – only for the extraction of their data following a portability request.
You may have a preferred method of providing the information requested depending on the amount and
complexity of the data requested. In either case, you need to ensure that the method is secure.
What are the limits when transmitting personal data to another controller?
Individuals have the right to ask you to transmit their personal data directly to another controller without
hindrance. If it is technically feasible, you should do this.
You should consider the technical feasibility of a transmission on a request by request basis. The
right to data portability does not create an obligation for you to adopt or maintain processing systems
which are technically compatible with those of other organisations (GDPR Recital 68). However, you
should take a reasonable approach, and this should not generally create a barrier to transmission.
Without hindrance means that you should not put in place any legal, technical or financial obstacles
which slow down or prevent the transmission of the personal data to the individual, or to another
organisation.
However, there may be legitimate reasons why you cannot undertake the transmission. For example, if
the transmission would adversely affect the rights and freedoms of others. It is however your
responsibility to justify why these reasons are legitimate and why they are not a ‘hindrance’ to the
transmission.
Do we have responsibility for the personal data we transmit to others?
If you provide information directly to an individual or to another organisation in response to a data
portability request, you are not responsible for any subsequent processing carried out by the individual
or the other organisation. However, you are responsible for the transmission of the data and need to
take appropriate measures to ensure that it is transmitted securely and to the right destination.
If you provide data to an individual, it is possible that they will store the information in a system with
less security than your own. Therefore, you should make individuals aware of this so that they can take
steps to protect the information they have received.
You also need to ensure that you comply with the other provisions in the GDPR. For example, whilst
there is no specific obligation under the right to data portability to check and verify the quality of the
data you transmit, you should already have taken reasonable steps to ensure the accuracy of this data
in order to comply with the requirements of the accuracy principle of the GDPR.
How should we provide the data?
You should provide the personal data in a format that is:
structured;
commonly used; and
machine-readable.
Although these terms are not defined in the GDPR these three characteristics can help you decide
whether the format you intend to use is appropriate.
You can also find relevant information in the ‘Open Data Handbook’, published by Open Knowledge
International. The handbook is a guide to ‘open data’, information that is free to access and can be
re-used for any purpose – particularly information held by the public sector. The handbook contains a
number of definitions that are relevant to the right to data portability, and this guidance includes some of
these below.
What does ‘structured’ mean?
Structured data allows for easier transfer and increased usability.
The Open Data Handbook defines ‘structured data’ as:
‘data where the structural relation between elements is explicit in the way the data is stored on a computer disk.’
This means that software must be able to extract specific elements of the data. An example of a structured format is a spreadsheet, where the data is organised into rows and columns, ie it is ‘structured’. In practice, some of the personal data you process will already be in structured form. In many cases, if a format is structured it is also machine-readable.
What does ‘commonly used’ mean?
This simply means that the format you choose must be widely-used and well-established.
However, just because a format is ‘commonly used’ does not mean it is appropriate for data portability.
You have to consider whether it is ‘structured’, and ‘machine-readable’ as well. Although you may be
using common software applications, which save data in commonly-used formats, these may not be
sufficient to meet the requirements of data portability.
What does ‘machine-readable’ mean?
The Open Data Handbook states that ‘machine readable’ data is:
‘Data in a data format that can be automatically read and processed by a computer.’
Furthermore, Regulation 2 of the Re-use of Public Sector Information Regulations 2015 defines ‘machine-readable format’ as:
‘A file format structured so that software applications can easily identify, recognise and extract specific data, including individual statements of fact, and their internal structure.’
Machine-readable data can be made directly available to applications that request that data over the web. This is undertaken by means of an application programming interface (“API”). If you are able to implement such a system then you can facilitate data exchanges with individuals and respond to data portability requests in an easy manner.
Should we use an ‘interoperable’ format?
Although you are not required to use an interoperable format, this is encouraged by the GDPR, which seeks to promote the concept of interoperability. Recital 68 says:
‘Data controllers should be encouraged to develop interoperable formats that enable data portability.’
Interoperability allows different systems to share information and resources. An ‘interoperable format’ is a type of format that allows data to be exchanged between different systems and be understandable to both.
At the same time, you are not expected to maintain systems that are technically compatible with those of other organisations. Data portability is intended to produce interoperable systems, not compatible ones.
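As a sketch of the kind of API described above, the following Python example (using only the standard library; the endpoint path and the data it serves are invented for illustration, not taken from the guidance) shows an individual's data being made available as machine-readable JSON over HTTP:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory store of data an individual provided to a service.
PORTABLE_DATA = {"user-42": {"name": "Jo Bloggs", "email": "jo@example.com"}}

class PortabilityHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Paths like /portability/user-42 return that individual's data as JSON.
        user_id = self.path.rsplit("/", 1)[-1]
        record = PORTABLE_DATA.get(user_id)
        if record is None:
            self.send_error(404)
            return
        body = json.dumps(record).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # keep the demo quiet
        pass

# Serve on an ephemeral local port, then fetch the record back as a client would.
server = HTTPServer(("127.0.0.1", 0), PortabilityHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/portability/user-42"
with urllib.request.urlopen(url) as resp:
    received = json.load(resp)
server.shutdown()
```

A real deployment would of course need authentication and transport security; the sketch only illustrates the "machine-readable data over the web" idea.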
What formats can we use?
You may already be using an appropriate format within your networks and systems, and/or you may be
required to use a particular format due to the particular industry or sector you are part of. Provided it
meets the requirements of being structured, commonly-used and machine readable then it could be
appropriate for a data portability request.
The GDPR does not require you to use open formats internally. Your processing systems may indeed
use proprietary formats which individuals may not be able to access if you provide data to them in these
formats. In these cases you need to perform some additional processing on the personal data in order
to put it into the type of format required by the GDPR.
Where no specific format is in common use within your industry or sector, you should provide personal
data using open formats such as CSV, XML and JSON. You may also find that these formats are the
easiest for you to use when answering data portability requests.
For further information on CSV, XML and JSON, please see below.
What is CSV?
CSV stands for ‘Comma Separated Values’. It is defined by the Open Data Handbook as:
‘a standard format for spreadsheet data. Data is represented in a plain text file, with each data row on a new line and commas separating the values on each row. As a very simple open format it is easy to consume and is widely used for publishing open data.’
CSV is used to exchange data and is widely supported by software applications. Although CSV is not standardised it is nevertheless structured, commonly used and machine-readable and is therefore an appropriate format for you to use when responding to a data portability request.
What is XML?
XML stands for ‘Extensible Markup Language’. It is defined by the Open Data Handbook as:
‘a simple and powerful standard for representing structured data.’
It is a file format that is intended to be both human-readable and machine-readable. Unlike CSV, XML is defined by a set of open standards maintained by the World Wide Web Consortium (“W3C”). It is widely used for documents, but can also be used to represent data structures such as those used in web services.
This means XML can be processed by APIs, facilitating data exchange. For example, you may develop or implement an API to exchange personal data in XML format with another organisation. In the context of data portability, this can allow you to transmit personal data to an individual’s personal data store, or to another organisation if the individual has asked you to do so.
What is JSON?
JSON stands for ‘JavaScript Object Notation’. The Open Data Handbook defines JSON as:
‘a simple but powerful format for data. It can describe complex data structures, is highly machine-readable as well as reasonably human-readable, and is independent of platform and programming language, and is therefore a popular format for data interchange between programs and systems.’
It is a file format based on JavaScript syntax and is widely used as a data interchange format. As with XML, it can be read by humans or machines. It is also a standardised open format, specified in ECMA-404 and RFC 7159.
Are these the only formats we can use?
CSV, XML and JSON are three examples of structured, commonly used and machine-readable formats that are appropriate for data portability. However, this does not mean you are obliged to use them. Other formats exist that also meet the requirements of data portability.
You should however consider the nature of the portability request. If the individual cannot make use of the format, even if it is structured, commonly-used and machine-readable, then the data will be of no use to them.
Example
The RDF or ‘Resource Description Framework’ format is also a structured, commonly-used, machine-readable format. It is an open standard published by the W3C and is intended to provide interoperability between applications exchanging information.
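To illustrate the three formats discussed above, the same record can be serialised using only the Python standard library. The record and its field names are invented for the example:

```python
import csv
import io
import json
import xml.etree.ElementTree as ET

# Hypothetical record an individual provided to a service.
record = {"name": "Jo Bloggs", "email": "jo@example.com", "joined": "2017-05-01"}

# CSV: one header row, one data row.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=record)
writer.writeheader()
writer.writerow(record)
csv_out = buf.getvalue()

# JSON: a direct serialisation of the same structure.
json_out = json.dumps(record, indent=2)

# XML: each field becomes a child element of a <record> element.
root = ET.Element("record")
for key, value in record.items():
    ET.SubElement(root, key).text = value
xml_out = ET.tostring(root, encoding="unicode")
```

All three outputs are structured and machine-readable; which is most useful will depend on what the individual, or the receiving organisation, can actually consume.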
Further reading
The Open Data Handbook is published by Open Knowledge International and is a guide to ‘open data’. The Handbook is updated regularly and you can read it here:
http://opendatahandbook.org
The W3C recommendation for XML is available here:
http://www.w3.org/TR/2008/REC-xml-20081126/
The IETF specification of the JSON data interchange format (RFC 7159) is available here:
https://tools.ietf.org/html/rfc7159
W3C’s list of specifications for RDF is available here:
http://www.w3.org/standards/techs/rdf#w3c_all
What responsibilities do we have when we receive personal data because of a data portability request?
When you receive personal data that has been transmitted as part of a data portability request, you need to process this data in line with data protection requirements.
In deciding whether to accept and retain personal data, you should consider whether the data is relevant and not excessive in relation to the purposes for which you will process it. You also need to consider whether the data contains any third party information.
As a new controller, you need to ensure that you have an appropriate lawful basis for processing any third party data and that this processing does not adversely affect the rights and freedoms of those third parties. If you have received personal data which you have no reason to keep, you should delete it as soon as possible. When you accept and retain data, it becomes your responsibility to ensure that you comply with the requirements of the GDPR.
In particular, if you receive third party data you should not use it for your own purposes. You should keep the third party data under the sole control of the individual who has made the portability request, and use it only for their purposes.
Example
An individual enters into a contract with a controller for the provision of a service. The controller relies on Article 6(1)(b) to process the individual’s personal data. The controller receives information from a data portability request that includes information about third parties. The controller has a legitimate interest to process the third party data under Article 6(1)(f) so that it can provide this service to the individual. However, it should not then use this data to send direct marketing to the third parties.
When can we refuse to comply with a request for data portability?
You can refuse to comply with a request for data portability if it is manifestly unfounded or excessive, taking into account whether the request is repetitive in nature.
If you consider that a request is manifestly unfounded or excessive you can:
request a "reasonable fee" to deal with the request; or
refuse to deal with the request.
In either case you will need to justify your decision.
You should base the reasonable fee on the administrative costs of complying with the request. If you
decide to charge a fee you should contact the individual promptly and inform them. You do not need to
comply with the request until you have received the fee.
What should we do if we refuse to comply with a request for data portability?
You must inform the individual without undue delay and within one month of receipt of the
request.
You should inform the individual about:
the reasons you are not taking action;
their right to make a complaint to the ICO or another supervisory authority; and
their ability to seek to enforce this right through a judicial remedy.
You should also provide this information if you request a reasonable fee or need additional information
to identify the individual.
How do we recognise a request?
The GDPR does not specify how individuals should make data portability requests. Therefore, requests
could be made verbally or in writing. They can also be made to any part of your organisation and do not
have to be to a specific person or contact point.
A request does not have to include the phrase 'request for data portability' or a reference to ‘Article 20 of the GDPR’, as long as one of the conditions listed above applies.
This presents a challenge as any of your employees could receive a valid request. However, you have a
legal responsibility to identify that an individual has made a request to you and handle it accordingly.
Therefore you may need to consider which of your staff who regularly interact with individuals may need
specific training to identify a request.
Additionally, it is good practice to have a policy for recording details of the requests you receive,
particularly those made by telephone or in person. You may wish to check with the requester that you
have understood their request, as this can help avoid later disputes about how you have interpreted the
request. We also recommend that you keep a log of verbal requests.
In practice, you may already have processes in place to enable your staff to recognise subject access
requests, such as training or established procedures. You could consider adapting them to ensure yourIn more detail – Data Protection Act 2018
There are other exemptions from the right to data portability contained in the Data Protection Act
2018. These exemptions will apply in certain circumstances, broadly associated with why you are
processing the data. We will provide further guidance on the application of these exemptions in due
course.02 August 2018 - 1.0.248 136
staff also recognise data portability requests.
Can we charge a fee?
No, in most cases you cannot charge a fee to comply with a request for data portability.
However, as noted above, if the request is manifestly unfounded or excessive you may charge a
“reasonable fee” for the administrative costs of complying with the request.
How long do we have to comply?
You must act upon the request without undue delay and at the latest within one month of receipt.
You should calculate the time limit from the day after you receive the request (whether the day after is a
working day or not) until the corresponding calendar date in the next month.
If this is not possible because the following month is shorter (and there is no corresponding calendar
date), the date for response is the last day of the following month.
If the corresponding date falls on a weekend or a public holiday, you will have until the next working day
to respond.
This means that the exact number of days you have to comply with a request varies, depending on the
month in which the request was made.
For practical purposes, if a consistent number of days is required (eg for operational or system
purposes), it may be helpful to adopt a 28-day period to ensure compliance is always within a calendar
month.
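The calendar rules above can be sketched in code. The following Python function is a minimal illustration of the calculation; the function name and the way public holidays are supplied are assumptions for the example, not part of the guidance:

```python
from datetime import date, timedelta

def portability_deadline(received: date, holidays: frozenset = frozenset()) -> date:
    """Deadline for responding under the one-calendar-month rule.

    The clock starts the day after receipt; the deadline is the corresponding
    calendar date in the next month, or the last day of that month if no such
    date exists, rolled forward past weekends and any supplied public holidays.
    """
    start = received + timedelta(days=1)
    year, month = (start.year + 1, 1) if start.month == 12 else (start.year, start.month + 1)
    # Last day of the target month, for cases like 31 March + 1 month.
    next_first = date(year + 1, 1, 1) if month == 12 else date(year, month + 1, 1)
    last_day = (next_first - timedelta(days=1)).day
    deadline = date(year, month, min(start.day, last_day))
    # If the deadline falls on a weekend or public holiday, the next
    # working day applies.
    while deadline.weekday() >= 5 or deadline in holidays:
        deadline += timedelta(days=1)
    return deadline
```

For the worked examples in this section: a request received on 3 September 2018 gives a deadline of 4 October 2018, and one received on 30 March 2018 gives 30 April 2018.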
Example
An organisation receives a request on 3 September. The time limit will start from the next day (4 September). This gives the organisation until 4 October to comply with the request.
Example
An organisation receives a request on 30 March. The time limit starts from the next day (31 March). As there is no equivalent date in April, the organisation has until 30 April to comply with the request. If 30 April falls on a weekend, or is a public holiday, the organisation has until the end of the next working day to comply.
Can we extend the time for a response?
You can extend the time to respond by a further two months if the request is complex or you have
received a number of requests from the individual. You must let the individual know within one month of
receiving their request and explain why the extension is necessary.
However, it is the ICO's view that it is unlikely to be reasonable to extend the time limit if:
the request is manifestly unfounded or excessive;
an exemption applies; or
you are requesting proof of identity before considering the request.
Can we ask an individual for ID?
If you have doubts about the identity of the person making the request you can ask for more
information. However, it is important that you only request information that is necessary to confirm who
they are. The key to this is proportionality. You should take into account what data you hold, the nature
of the data, and what you are using it for.
You need to let the individual know as soon as possible that you need more information from them to
confirm their identity before responding to their request. The period for responding to the request begins
when you receive the additional information.
Further Reading
Relevant provisions in the GDPR - See Articles 13, 20 and Recital 68
External link
In more detail – European Data Protection Board
The European Data Protection Board (EDPB) includes representatives from the data
protection authorities of each EU member state. It adopts guidelines for complying with the
requirements of the GDPR.
The EDPB has published guidelines and FAQs on data portability for organisations.
Right to object
At a glance
The GDPR gives individuals the right to object to the processing of their personal data in certain
circumstances.
Individuals have an absolute right to stop their data being used for direct marketing.
In other cases where the right to object applies you may be able to continue processing if you can
show that you have a compelling reason for doing so.
You must tell individuals about their right to object.
An individual can make an objection verbally or in writing.
You have one calendar month to respond to an objection.
Checklists
Preparing for objections to processing
☐ We know how to recognise an objection and we understand when the right applies.
☐ We have a policy in place for how to record objections we receive verbally.
☐ We understand when we can refuse an objection and are aware of the information we need to
provide to individuals when we do so.
☐ We have clear information in our privacy notice about individuals’ right to object, which is
presented separately from other information on their rights.
☐ We understand when we need to inform individuals of their right to object in addition to
including it in our privacy notice.
Complying with requests which object to processing
☐ We have processes in place to ensure that we respond to an objection without undue delay
and within one month of receipt.
☐ We are aware of the circumstances when we can extend the time limit to respond to an
objection.
☐ We have appropriate methods in place to erase, suppress or otherwise cease processing
personal data.
What is the right to object?
Article 21 of the GDPR gives individuals the right to object to the processing of their personal data. This
effectively allows individuals to ask you to stop processing their personal data.
The right to object only applies in certain circumstances. Whether it applies depends on your purposes
for processing and your lawful basis for processing.
When does the right to object apply?
Individuals have the absolute right to object to the processing of their personal data if it is for direct
marketing purposes.
Individuals can also object if the processing is for:
a task carried out in the public interest;
the exercise of official authority vested in you; or
your legitimate interests (or those of a third party).
In these circumstances the right to object is not absolute.
If you are processing data for scientific or historical research, or statistical purposes, the right to object
is more limited.
These various grounds are discussed further below.
Direct marketing
An individual can ask you to stop processing their personal data for direct marketing at any time. This
includes any profiling of data that is related to direct marketing.
This is an absolute right and there are no exemptions or grounds for you to refuse. Therefore, when you
receive an objection to processing for direct marketing, you must stop processing the individual’s data
for this purpose.
However, this does not automatically mean that you need to erase the individual’s personal data, and in
most cases it will be preferable to suppress their details. Suppression involves retaining just enough
information about them to ensure that their preference not to receive direct marketing is respected in
future.
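As an illustration, suppression can be as simple as keeping a minimal "do not contact" record alongside, or instead of, the full profile. A minimal Python sketch follows; the class and method names are invented for the example:

```python
class MarketingList:
    """Toy contact list illustrating suppression rather than erasure."""

    def __init__(self):
        self.contacts = {}       # email -> profile data used for marketing
        self.suppressed = set()  # emails of individuals who objected

    def object_to_marketing(self, email: str) -> None:
        # Remove the marketing profile, but retain just enough (the email
        # itself) to ensure the objection is respected in future.
        self.contacts.pop(email, None)
        self.suppressed.add(email)

    def may_contact(self, email: str) -> bool:
        # Checked before any direct marketing is sent.
        return email not in self.suppressed
```

Erasing the address entirely would risk it being re-added later (eg from a purchased list), whereas the suppression entry blocks that.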
Processing based upon public task or legitimate interests
An individual can also object where you are relying on one of the following lawful bases:
‘public task’ (for the performance of a task carried out in the public interest),
‘public task’ (for the exercise of official authority vested in you), or
legitimate interests.
An individual must give specific reasons why they are objecting to the processing of their data. These
reasons should be based upon their particular situation.
In these circumstances this is not an absolute right, and you can continue processing if:
you can demonstrate compelling legitimate grounds for the processing, which override the interests,
rights and freedoms of the individual; or
the processing is for the establishment, exercise or defence of legal claims.
If you are deciding whether you have compelling legitimate grounds which override the interests of an
individual, you should consider the reasons why they have objected to the processing of their data. In
particular, if an individual objects on the grounds that the processing is causing them substantial damage
or distress (eg the processing is causing them financial loss), the grounds for their objection will have
more weight. In making a decision on this, you need to balance the individual’s interests, rights and
freedoms with your own legitimate grounds. During this process, remember that the responsibility rests with you to demonstrate that your legitimate grounds override those of the individual.
If you are satisfied that you do not need to stop processing the personal data in question you should let
the individual know. You should explain your decision, and inform them of their right to make a
complaint to the ICO or another supervisory authority; and their ability to seek to enforce their rights
through a judicial remedy.
Research purposes
Where you are processing personal data for scientific or historical research, or statistical purposes, the
right to object is more restricted.
Article 21(4) states:
‘Where personal data are processed for scientific or historical research purposes or statistical purposes pursuant to Article 89(1), the data subject, on grounds relating to his or her personal situation, shall have the right to object to processing of personal data concerning him or her, unless the processing is necessary for the performance of a task carried out for reasons of public interest.’
Effectively this means that if you are processing data for these purposes and have appropriate safeguards in place (eg data minimisation and pseudonymisation where possible) the individual only has a right to object if your lawful basis for processing is:
public task (on the basis that it is necessary for the exercise of official authority vested in you), or
legitimate interests.
The individual does not have a right to object if your lawful basis for processing is public task because it is necessary for the performance of a task carried out in the public interest.
Article 21(4) therefore differentiates between the two parts of the public task lawful basis (performance of a task carried out in the public interest or in the exercise of official authority vested in you).
This may cause difficulties if you are relying on the public task lawful basis for processing. It may not always be clear whether you are carrying out the processing solely as a task in the public interest, or in
the exercise of official authority. Indeed, it may be difficult to differentiate between the two.
As such, it is good practice that if you are relying upon the public task lawful basis and receive an
objection, you should consider the objection on its own merits and go on to consider the steps outlined
in the next paragraph, rather than refusing it outright. If you do intend to refuse an objection on the
basis that you are carrying out research or statistical work solely for the performance of a public task
carried out in the public interest you should be clear in your privacy notice that you are only carrying
out this processing on this basis.
If you do receive an objection you may be able to continue processing, if you can demonstrate that you
have a compelling legitimate reason or the processing is necessary for legal claims. You need to go
through the steps outlined in the previous section to demonstrate this.
As noted above, if you are satisfied that you do not need to stop processing you should let the individual
know. You should provide an explanation for your decision, and inform them of their right to make a
complaint to the ICO or another supervisory authority, as well as their ability to seek to enforce their
rights through a judicial remedy.
Do we need to tell individuals about the right to object?
The GDPR is clear that you must inform individuals of their right to object at the latest at the time of
your first communication with them where:
you process personal data for direct marketing purposes, or
your lawful basis for processing is:
public task (for the performance of a task carried out in the public interest),
public task (for the exercise of official authority vested in you), or
legitimate interests.
If one of these conditions applies, you should explicitly bring the right to object to the individual’s
attention. You should present this information clearly and separately from any other information.
If you are processing personal data for research or statistical purposes you should include information
about the right to object (along with information about the other rights of the individual) in your privacy
notice.
Do we always need to erase personal data to comply with an objection?
Where you have received an objection to the processing of personal data and you have no grounds to
refuse, you need to stop processing the data.
This may mean that you need to erase personal data as the definition of processing under the GDPR is
broad, and includes storing data. However, as noted above, this will not always be the most appropriate
action to take.
Erasure may not be appropriate if you process the data for other purposes as you need to retain the
data for those purposes. For example, when an individual objects to the processing of their data for
direct marketing, you can place their details onto a suppression list to ensure that you continue to
comply with their objection. However, you need to ensure that the data is clearly marked so that it is not
processed for purposes the individual has objected to.
Can we refuse to comply with an objection for other reasons?
You can also refuse to comply with an objection if the request is manifestly unfounded or excessive,
taking into account whether the request is repetitive in nature.
If you consider that an objection is manifestly unfounded or excessive you can:
request a "reasonable fee" to deal with it; or
refuse to deal with it.
In either case you will need to justify your decision.
You should base the reasonable fee on the administrative costs of complying with the request. If you
decide to charge a fee you should contact the individual promptly and inform them. You do not need to
comply with the request until you have received the fee.
What should we do if we refuse to comply with an objection?
You must inform the individual without undue delay and within one month of receipt of the request.
You should inform the individual about:
the reasons you are not taking action;
their right to make a complaint to the ICO or another supervisory authority; and
their ability to seek to enforce this right through a judicial remedy.
You should also provide this information if you request a reasonable fee or need additional information
to identify the individual.
How do we recognise an objection?
The GDPR does not specify how to make a valid objection. Therefore, an objection to processing can be
made verbally or in writing. It can also be made to any part of your organisation and does not have to
be to a specific person or contact point.
A request does not have to include the phrase 'objection to processing' or refer to Article 21 of the GDPR, as long as one of the conditions listed above applies.
This presents a challenge as any of your employees could receive a valid verbal objection. However,
you have a legal responsibility to identify that an individual has made an objection to you and to handle
it accordingly. Therefore you may need to consider which of your staff who regularly interact with
individuals may need specific training to identify an objection.
In more detail – Data Protection Act 2018
There are other exemptions from the right to object contained in the Data Protection Act 2018. These exemptions will apply in certain circumstances, broadly associated with why you are processing the data. We will provide further guidance on the application of these exemptions in due course.
Additionally, it is good practice to have a policy for recording details of the objections you receive,
particularly those made by telephone or in person. You may wish to check with the requester that you
have understood their request, as this can help avoid later disputes about how you have interpreted the
objection. We also recommend that you keep a log of verbal objections.
Can we charge a fee?
No, in most cases you cannot charge a fee to comply with an objection to processing.
However, as noted above, where the objection is manifestly unfounded or excessive you may charge a
“reasonable fee” for the administrative costs of complying with the request.
How long do we have to comply?
You must act upon the objection without undue delay and at the latest within one month of receipt.
You should calculate the time limit from the day after you receive the objection (whether the day after is
a working day or not) until the corresponding calendar date in the next month.
If this is not possible because the following month is shorter (and there is no corresponding calendar
date), the date for response is the last day of the following month.
If the corresponding date falls on a weekend or a public holiday, you will have until the next working day
to respond.
This means that the exact number of days you have to comply with an objection varies, depending on
the month in which it was made.
For practical purposes, if a consistent number of days is required (eg for operational or system
purposes), it may be helpful to adopt a 28-day period to ensure compliance is always within a calendar
month.
Example
An organisation receives an objection on 3 September. The time limit will start from the next day (4
September). This gives the organisation until 4 October to comply with the objection.
Example
An organisation receives an objection on 30 March. The time limit starts from the next day (31
March). As there is no equivalent date in April, the organisation has until 30 April to comply with the
objection.
If 30 April falls on a weekend, or is a public holiday, the organisation has until the end of the next
working day to comply.
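The deadline rules above can be sketched in code. This is only an illustrative sketch: the `response_deadline` helper is a hypothetical name, and the empty public-holiday set is a placeholder you would populate for your own jurisdiction.

```python
import calendar
from datetime import date, timedelta

# Placeholder: fill with the public holidays relevant to your jurisdiction.
PUBLIC_HOLIDAYS: set[date] = set()

def response_deadline(received: date) -> date:
    """Corresponding calendar date in the next month, counted from the day
    after receipt; clamped for shorter months; rolled past weekends and
    public holidays to the next working day."""
    start = received + timedelta(days=1)  # the time limit runs from the next day
    year = start.year + (1 if start.month == 12 else 0)
    month = 1 if start.month == 12 else start.month + 1
    last_day = calendar.monthrange(year, month)[1]
    # If the following month has no corresponding date, use its last day.
    deadline = date(year, month, min(start.day, last_day))
    while deadline.weekday() >= 5 or deadline in PUBLIC_HOLIDAYS:
        deadline += timedelta(days=1)  # roll forward to the next working day
    return deadline
```

This reproduces the worked examples: an objection received on 3 September gives a 4 October deadline, and one received on 30 March gives a 30 April deadline.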
Can we extend the time for a response?
You can extend the time to respond to an objection by a further two months if the request is complex or
you have received a number of requests from the individual. You must let the individual know within one
month of receiving their objection and explain why the extension is necessary.
However, it is the ICO's view that it is unlikely to be reasonable to extend the time limit if:
the request is manifestly unfounded or excessive;
an exemption applies; or
you are requesting proof of identity before considering the request.
Can we ask an individual for ID?
If you have doubts about the identity of the person making the objection you can ask for more
information. However, it is important that you only request information that is necessary to confirm who
they are. The key to this is proportionality. You should take into account what data you hold, the nature
of the data, and what you are using it for.
You need to let the individual know as soon as possible that you need more information from them to
confirm their identity before responding to their objection. The period for responding to the objection
begins when you receive the additional information.
Further Reading
Relevant provisions in the GDPR - See Articles 6, 12, 21, 89 and Recitals 69 and 70
External link
Rights related to automated decision making
including profiling
At a glance
The GDPR has provisions on:
automated individual decision-making (making a decision solely by automated means without any
human involvement); and
profiling (automated processing of personal data to evaluate certain things about an individual).
Profiling can be part of an automated decision-making process.
The GDPR applies to all automated individual decision-making and profiling.
Article 22 of the GDPR has additional rules to protect individuals if you are carrying out solely
automated decision-making that has legal or similarly significant effects on them.
You can only carry out this type of decision-making where the decision is:
necessary for the entry into or performance of a contract; or
authorised by Union or Member state law applicable to the controller; or
based on the individual’s explicit consent.
You must identify whether any of your processing falls under Article 22 and, if so, make sure that
you:
give individuals information about the processing;
introduce simple ways for them to request human intervention or challenge a decision;
carry out regular checks to make sure that your systems are working as intended.
Checklists
All automated individual decision-making and profiling
To comply with the GDPR...
☐ We have a lawful basis to carry out profiling and/or automated decision-making and document
this in our data protection policy.
☐ We send individuals a link to our privacy statement when we have obtained their personal
data indirectly.
☐ We explain how people can access details of the information we used to create their profile.
☐ We tell people who provide us with their personal data how they can object to profiling,
including profiling for marketing purposes.
☐ We have procedures for customers to access the personal data input into the profiles so they
can review and edit for any accuracy issues.
☐ We have additional checks in place for our profiling/automated decision-making systems to
protect any vulnerable groups (including children).
☐ We only collect the minimum amount of data needed and have a clear retention policy for the
profiles we create.
As a model of best practice...
☐ We carry out a DPIA to consider and address the risks before we start any new automated
decision-making or profiling.
☐ We tell our customers about the profiling and automated decision-making we carry out, what
information we use to create the profiles and where we get this information from.
☐ We use anonymised data in our profiling activities.
Solely automated individual decision-making, including profiling with legal or similarly significant effects (Article 22)
To comply with the GDPR...
☐ We carry out a DPIA to identify the risks to individuals, show how we are going to deal with
them and what measures we have in place to meet GDPR requirements.
☐ We carry out processing under Article 22(1) for contractual purposes and we can demonstrate
why it’s necessary.
OR
☐ We carry out processing under Article 22(1) because we have the individual’s explicit consent
recorded. We can show when and how we obtained consent. We tell individuals how they can
withdraw consent and have a simple way for them to do this.
OR
☐ We carry out processing under Article 22(1) because we are authorised or required to do so.
This is the most appropriate way to achieve our aims.
☐ We don’t use special category data in our automated decision-making systems unless we
have a lawful basis to do so, and we can demonstrate what that basis is. We delete any special
category data accidentally created.
☐ We explain that we use automated decision-making processes, including profiling. We explain
what information we use, why we use it and what the effects might be.
☐ We have a simple way for people to ask us to reconsider an automated decision.
☐ We have identified staff in our organisation who are authorised to carry out reviews and
change decisions.
☐ We regularly check our systems for accuracy and bias and feed any changes back into the
design process.
As a model of best practice...
☐ We use visuals to explain what information we collect/use and why this is relevant to the
process.
☐ We have signed up to [standard] a set of ethical principles to build trust with our customers.
This is available on our website and on paper.
In brief
What’s new under the GDPR?
What is automated individual decision-making and profiling?
What does the GDPR say about automated individual decision-making and profiling?
When can we carry out this type of processing?
What else do we need to consider?
What if Article 22 doesn’t apply to our processing?
What’s new under the GDPR?
Profiling is now specifically defined in the GDPR.
Solely automated individual decision-making, including profiling with legal or similarly significant
effects is restricted.
There are three grounds for this type of processing that lift the restriction.
Where one of these grounds applies, you must introduce additional safeguards to protect data
subjects. These work in a similar way to existing rights under the 1998 Data Protection Act.
The GDPR requires you to give individuals specific information about automated individual decision-
making, including profiling.
There are additional restrictions on using special category and children’s personal data.
What is automated individual decision-making and profiling?
Automated individual decision-making is a decision made by automated means without any human
involvement.
Examples of this include:
an online decision to award a loan; and
a recruitment aptitude test which uses pre-programmed algorithms and criteria.
Automated individual decision-making does not have to involve profiling, although it often will do.
The GDPR says that profiling is:
“Any form of automated processing of personal data consisting of the use of personal data to
evaluate certain personal aspects relating to a natural person, in particular to analyse or predict
aspects concerning that natural person’s performance at work, economic situation, health, personal
preferences, interests, reliability, behaviour, location or movements.”
[Article 4(4)]
Organisations obtain personal information about individuals from a variety of different sources. Internet
searches, buying habits, lifestyle and behaviour data gathered from mobile phones, social networks,
video surveillance systems and the Internet of Things are examples of the types of data organisations
might collect.
Information is analysed to classify people into different groups or sectors, using algorithms and
machine-learning. This analysis identifies links between different behaviours and characteristics to
create profiles for individuals. There is more information about algorithms and machine-learning in our
paper on big data, artificial intelligence, machine learning and data protection.
Based on the traits of others who appear similar, organisations use profiling to:
find something out about individuals’ preferences;
predict their behaviour; and/or
make decisions about them.
This can be very useful for organisations and individuals in many sectors, including healthcare,
education, financial services and marketing.
Automated individual decision-making and profiling can lead to quicker and more consistent decisions.
But if they are used irresponsibly there are significant risks for individuals. The GDPR provisions are
designed to address these risks.
What does the GDPR say about automated individual decision-making and profiling?
The GDPR restricts you from making solely automated decisions, including those based on profiling, that
have a legal or similarly significant effect on individuals.
For something to be solely automated there must be no human involvement in the decision-making
process.
“The data subject shall have the right not to be subject to a decision based solely on automated
processing, including profiling, which produces legal effects concerning him or her or similarly
significantly affects him or her.”
[Article 22(1)]
The restriction only covers solely automated individual decision-making that produces legal or similarly
significant effects. These types of effect are not defined in the GDPR, but the decision must have a
serious negative impact on an individual to be caught by this provision.
A legal effect is something that adversely affects someone’s legal rights. Similarly significant effects are
more difficult to define but would include, for example, automatic refusal of an online credit application,
and e-recruiting practices without human intervention.
When can we carry out this type of processing?
Solely automated individual decision-making - including profiling - with legal or similarly significant
effects is restricted, although this restriction can be lifted in certain circumstances.
You can only carry out solely automated decision-making with legal or similarly significant effects if the
decision is:
necessary for entering into or performance of a contract between an organisation and the individual;
authorised by law (for example, for the purposes of fraud or tax evasion monitoring and prevention); or
based on the individual’s explicit consent.
If you’re using special category personal data you can only carry out processing described in Article
22(1) if:
you have the individual’s explicit consent; or
the processing is necessary for reasons of substantial public interest.
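The grounds above, including the stricter rule for special category data, can be expressed as a simple pre-check. This is only an illustrative sketch: the class and field names are invented for this example, and such a check supports, rather than replaces, a proper legal assessment.

```python
from dataclasses import dataclass

# Hypothetical Article 22 pre-check; field names are illustrative only.
@dataclass
class Article22Assessment:
    necessary_for_contract: bool = False    # entering into/performance of a contract
    authorised_by_law: bool = False         # Union or Member State law
    explicit_consent: bool = False          # individual's explicit consent
    special_category_data: bool = False     # e.g. health data
    substantial_public_interest: bool = False

    def restriction_lifted(self) -> bool:
        """True only if a lawful ground applies, with the extra conditions
        required when special category data is involved."""
        ground = (self.necessary_for_contract
                  or self.authorised_by_law
                  or self.explicit_consent)
        if not ground:
            return False
        if self.special_category_data:
            # Special category data: only explicit consent or substantial
            # public interest will do.
            return self.explicit_consent or self.substantial_public_interest
        return True
```

For example, processing that is necessary for a contract but uses special category data without explicit consent or a substantial public interest would fail this check.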
What else do we need to consider?
Because this type of processing is considered to be high-risk the GDPR requires you to carry out a Data
Protection Impact Assessment (DPIA) to show that you have identified and assessed what those risks
are and how you will address them.
As well as restricting the circumstances in which you can carry out solely automated individual decision-
making (as described in Article 22(1)) the GDPR also:
requires you to give individuals specific information about the processing;
obliges you to take steps to prevent errors, bias and discrimination; and
gives individuals rights to challenge and request a review of the decision.
These provisions are designed to increase individuals’ understanding of how you might be using their
personal data.
You must:
provide meaningful information about the logic involved in the decision-making process, as well as
the significance and the envisaged consequences for the individual;
use appropriate mathematical or statistical procedures;
ensure that individuals can:
obtain human intervention;
express their point of view; and
obtain an explanation of the decision and challenge it;
put appropriate technical and organisational measures in place, so that you can correct inaccuracies
and minimise the risk of errors;
secure personal data in a way that is proportionate to the risk to the interests and rights of the
individual, and that prevents discriminatory effects.
What if Article 22 doesn’t apply to our processing?
Article 22 applies to solely automated individual decision-making, including profiling, with legal or
similarly significant effects.
If your processing does not match this definition then you can continue to carry out profiling and
automated decision-making.
But you must still comply with the GDPR principles.
You must identify and record your lawful basis for the processing .
You need to have processes in place so people can exercise their rights .
Individuals have a right to object to profiling in certain circumstances. You must bring details of this right
specifically to their attention.
Further Reading
Relevant provisions in the GDPR - See Articles 4(4), 9, 12, 13, 14, 15, 21, 22, 35(1) and (3)
External link
In more detail – ICO guidance
We have published detailed guidance on automated decision-making and profiling .
Privacy notices, transparency and control
Big data, artificial intelligence, machine learning and data protection
In more detail – European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state. It
adopts guidelines for complying with the requirements of the GDPR.
WP29 has adopted guidelines on Automated individual decision-making and Profiling , which have
been endorsed by the EDPB.
Other relevant guidelines published by WP29 and endorsed by the EDPB include:
WP29 guidelines on Data Protection Impact Assessment
Accountability and governance
At a glance
Accountability is one of the data protection principles - it makes you responsible for complying with
the GDPR and says that you must be able to demonstrate your compliance.
You need to put in place appropriate technical and organisational measures to meet the requirements
of accountability.
There are a number of measures that you can, and in some cases must, take including:
adopting and implementing data protection policies;
taking a ‘data protection by design and default’ approach;
putting written contracts in place with organisations that process personal data on your behalf;
maintaining documentation of your processing activities;
implementing appropriate security measures;
recording and, where necessary, reporting personal data breaches;
carrying out data protection impact assessments for uses of personal data that are likely to result
in high risk to individuals’ interests;
appointing a data protection officer; and
adhering to relevant codes of conduct and signing up to certification schemes.
Accountability obligations are ongoing. You must review and, where necessary, update the measures
you put in place.
If you implement a privacy management framework this can help you embed your accountability
measures and create a culture of privacy across your organisation.
Being accountable can help you to build trust with individuals and may help you mitigate enforcement
action.
Checklist
☐ We take responsibility for complying with the GDPR, at the highest management level and
throughout our organisation.
☐ We keep evidence of the steps we take to comply with the GDPR.
We put in place appropriate technical and organisational measures, such as:
☐ adopting and implementing data protection policies (where proportionate);
☐ taking a ‘data protection by design and default’ approach - putting appropriate data
protection measures in place throughout the entire lifecycle of our processing operations;
☐ putting written contracts in place with organisations that process personal data on our
behalf;
☐ maintaining documentation of our processing activities;
☐ implementing appropriate security measures;
☐ recording and, where necessary, reporting personal data breaches;
☐ carrying out data protection impact assessments for uses of personal data that are likely
to result in high risk to individuals’ interests;
☐ appointing a data protection officer (where necessary); and
☐ adhering to relevant codes of conduct and signing up to certification schemes (where
possible).
☐ We review and update our accountability measures at appropriate intervals.
In brief
What’s new under the GDPR?
What is accountability?
Why is accountability important?
What do we need to do?
Should we implement data protection policies?
Should we adopt a ‘data protection by design and default’ approach?
Do we need to use contracts?
What documentation should we maintain?
What security measures should we put in place?
How do we record and report personal data breaches?
Should we carry out data protection impact assessments (DPIAs)?
Should we assign a data protection officer (DPO)?
Should we adhere to codes of conduct and certification schemes?
What else should we consider?
What's new under the GDPR?
One of the biggest changes introduced by the GDPR is around accountability – a new data protection
principle that says organisations are responsible for, and must be able to demonstrate, compliance with
the other principles. Although these obligations were implicit in the Data Protection Act 1998 (1998 Act),
the GDPR makes them explicit.
You now need to be proactive about data protection, and evidence the steps you take to meet your
obligations and protect people’s rights. Good practice tools that the ICO has championed for a long time,
such as privacy impact assessments and privacy by design, are now formally recognised and legally
required in some circumstances.
Organisations that already adopt a best practice approach to compliance with the 1998 Act should not
find it too difficult to adapt to the new requirements. But you should review the measures you take to
comply with the 1998 Act, update them for the GDPR if necessary, and stand ready to demonstrate your
compliance under the GDPR.
Further Reading
Relevant provisions in the GDPR - See Articles 5 and 24, and Recitals 39 and 74
External link
What is accountability?
There are two key elements. First, the accountability principle makes it clear that you are responsible
for complying with the GDPR. Second, you must be able to demonstrate your compliance.
Article 5(2) of the GDPR says:
“The controller shall be responsible for, and be able to demonstrate compliance with, paragraph 1
[the other data protection principles]”
Further Reading
Relevant provisions in the GDPR - See Article 5 and Recital 39
External link
Further reading – ICO guidance
Principles
Why is accountability important?
Taking responsibility for what you do with personal data, and demonstrating the steps you have taken to
protect people’s rights not only results in better legal compliance, it also offers you a competitive edge.
Accountability is a real opportunity for you to show, and prove, how you respect people’s privacy. This
can help you to develop and sustain people’s trust.
Furthermore, if something does go wrong, then being able to show that you actively considered the risks
and put in place measures and safeguards can help you provide mitigation against any potential
enforcement action. On the other hand, if you can’t show good data protection practices, it may leave
you open to fines and reputational damage.
Further Reading
Relevant provisions in the GDPR - See Article 83
External link
What do we need to do?
Accountability is not a box-ticking exercise. Being responsible for compliance with the GDPR means
that you need to be proactive and organised about your approach to data protection, while
demonstrating your compliance means that you must be able to evidence the steps you take to
comply.
To achieve this, if you are a larger organisation you may choose to put in place a privacy management
framework. This can help you create a culture of commitment to data protection, by embedding
systematic and demonstrable compliance across your organisation. Amongst other things, your
framework should include:
robust programme controls informed by the requirements of the GDPR;
appropriate reporting structures; and
assessment and evaluation procedures.
If you are a smaller organisation you will most likely benefit from a smaller scale approach to
accountability. Amongst other things you should:
ensure a good level of understanding and awareness of data protection amongst your staff;
implement comprehensive but proportionate policies and procedures for handling personal data; and
keep records of what you do and why.
Article 24(1) of the GDPR says that:
you must implement technical and organisational measures to ensure, and demonstrate, compliance
with the GDPR;
the measures should be risk-based and proportionate; and
you need to review and update the measures as necessary.
While the GDPR does not specify an exhaustive list of things you need to do to be accountable, it does
set out several different measures you can take that will help you get there. These are summarised
under the headings below, with links to the relevant parts of the guide. Some measures you are obliged
to take and some are voluntary. It will differ depending on what personal data you have and what you
do with it. These measures can form the basis of your programme controls if you opt to put in place a
privacy management framework across your organisation.
Should we implement data protection policies?
For many organisations, putting in place relevant policies is a fundamental part of their approach to data
protection compliance. The GDPR explicitly says that, where proportionate, implementing data protection
policies is one of the measures you can take to ensure, and demonstrate, compliance.
What you have policies for, and their level of detail, depends on what you do with personal data. If, for
instance, you handle large volumes of personal data, or particularly sensitive information such as
special category data, then you should take greater care to ensure that your policies are robust and
comprehensive.
As well as drafting data protection policies, you should also be able to show that you have implemented
and adhered to them. This could include awareness raising, training, monitoring and audits – all tasks
that your data protection officer can undertake (see below for more on data protection officers).
Further Reading
Relevant provisions in the GDPR - See Article 24(2) and Recital 78
External link
Should we adopt a ‘data protection by design and default’ approach?
Privacy by design has long been seen as a good practice approach when designing new products,
processes and systems that use personal data. Under the heading ‘data protection by design and by
default’, the GDPR legally requires you to take this approach.
Data protection by design and default is an integral element of being accountable. It is about embedding
data protection into everything you do, throughout all your processing operations. The GDPR suggests
measures that may be appropriate such as minimising the data you collect, applying pseudonymisation
techniques, and improving security features.
Integrating data protection considerations into your operations helps you to comply with your
obligations, while documenting the decisions you take (often in data protection impact assessments –
see below) demonstrates this.
Further Reading
Relevant provisions in the GDPR - See Article 25 and Recital 78
External link
Further reading – ICO guidance
Data protection by design and default
Anonymisation code of practice
Do we need to use contracts?
Whenever a controller uses a processor to handle personal data on their behalf, it needs to put in place
a written contract that sets out each party’s responsibilities and liabilities.
Contracts must include certain specific terms as a minimum, such as requiring the processor to take
appropriate measures to ensure the security of processing and obliging it to assist the controller in
allowing individuals to exercise their rights under the GDPR.
Using clear and comprehensive contracts with your processors helps to ensure that everyone
understands their data protection obligations and is a good way to demonstrate this formally.
Further Reading
Relevant provisions in the GDPR - See Article 28 and Recital 81
External link
Further reading – ICO guidance
Contracts
What documentation should we maintain?
Under Article 30 of the GDPR, most organisations are required to maintain a record of their processing
activities, covering areas such as processing purposes, data sharing and retention.
Documenting this information is a great way to take stock of what you do with personal data. Knowing
what information you have, where it is and what you do with it makes it much easier for you to comply
with other aspects of the GDPR such as making sure that the information you hold about people is
accurate and secure.
As well as your record of processing activities under Article 30, you also need to document other things
to show your compliance with the GDPR. For instance, you need to keep records of consent and any
personal data breaches.
Further Reading
Relevant provisions in the GDPR - See Articles 7(1), 30 and 33(5), and Recitals 42 and 82
External link
Further reading – ICO guidance
Documentation
Consent
Personal data breaches
What security measures should we put in place?
The GDPR repeats the requirement to implement technical and organisational measures to comply with
the GDPR in the context of security. It says that these measures should ensure a level of security
appropriate to the risk.
You need to implement security measures if you are handling any type of personal data, but what you
put in place depends on your particular circumstances. You need to ensure the confidentiality, integrity
and availability of the systems and services you use to process personal data.
Amongst other things, this may include information security policies, access controls, security
monitoring, and recovery plans.
Further Reading
Relevant provisions in the GDPR - See Articles 5(f) and 32, and Recitals 39 and 83
External link
Further reading – ICO guidance
Security
How do we record and report personal data breaches?
You must report certain types of personal data breach to the relevant supervisory authority (for the UK,
this is the ICO), and in some circumstances, to the affected individuals as well.
Additionally, the GDPR says that you must keep a record of any personal data breaches, regardless of
whether you need to report them or not.
You need to be able to detect, investigate, report (both internally and externally) and document any
breaches. Having robust policies, procedures and reporting structures helps you do this.
Further Reading
Relevant provisions in the GDPR - See Articles 33-34 and Recitals 85-88
External link
Further reading – ICO guidance
Personal data breaches
Further reading – European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state. It
adopts guidelines for complying with the requirements of the GDPR.
WP29 adopted guidelines on Personal data breach notification, which have been endorsed by the
EDPB.
Should we carry out data protection impact assessments (DPIAs)?
A DPIA is an essential accountability tool and a key part of taking a data protection by design approach
to what you do. It helps you to identify and minimise the data protection risks of any new projects you
undertake.
A DPIA is a legal requirement before carrying out processing likely to result in high risk to individuals’
interests.
When done properly, a DPIA helps you assess how to comply with the requirements of the GDPR, while
also acting as documented evidence of your decision-making and the steps you took.
Further Reading
Relevant provisions in the GDPR - See Articles 35-36, and Recitals 84 and 89-95
External link
Further reading – ICO guidance
Data protection impact assessments
Further reading – European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state. It
adopts guidelines for complying with the requirements of the GDPR.
WP29 adopted guidelines on data protection impact assessments, which have been endorsed by the
EDPB.
Should we assign a data protection officer (DPO)?
Some organisations are required to appoint a DPO. A DPO’s tasks include advising you about the GDPR,
monitoring compliance and training staff.
Your DPO must report to your highest level of management, operate independently, and have adequate
resources to carry out their tasks.
Even if you’re not obliged to appoint a DPO, it is very important that you have sufficient staff, skills, and
appropriate reporting structures in place to meet your obligations under the GDPR.
Further Reading
Relevant provisions in the GDPR - See Articles 37-39, and Recital 97
External link
Further reading – ICO guidance
Data protection officers
Further reading – European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state. It
adopts guidelines for complying with the requirements of the GDPR.
WP29 adopted guidelines on data protection officers, which have been endorsed by the EDPB.
Should we adhere to codes of conduct and certification schemes?
Under the GDPR, trade associations and representative bodies may draw up codes of conduct covering
topics such as fair and transparent processing, pseudonymisation, and the exercise of people’s
rights.
In addition, supervisory authorities or accredited certification bodies can issue certification of the data
protection compliance of products and services.
Both codes of conduct and certification are voluntary, but they are an excellent way of verifying and
demonstrating that you comply with the GDPR.
Further Reading
Relevant provisions in the GDPR - See Articles 40-43, and Recitals 98 and 100
External link
Further reading – ICO guidance
Codes of conduct and certification
What else should we consider?
The above measures can help to support an accountable approach to data protection, but accountability
is not limited to them. You need to be able to prove what steps you have taken to comply. In practice this means
keeping records of what you do and justifying your decisions.
Accountability is not just about being answerable to the regulator; you must also demonstrate your
compliance to individuals. Amongst other things, individuals have the right to be informed about what
personal data you collect, why you use it and who you share it with. Additionally, if you use techniques
such as artificial intelligence and machine learning to make decisions about people, in certain cases
individuals have the right to hold you to account by requesting explanations of those decisions and
contesting them. You therefore need to find effective ways to provide information to people about what
you do with their personal data, and explain and review automated decisions.
The obligations that accountability places on you are ongoing – you cannot simply sign off a particular
processing operation as ‘accountable’ and move on. You must review the measures you implement at
appropriate intervals to ensure that they remain effective. You should update measures that are no
longer fit for purpose. If you regularly change what you do with personal data, or the types of
information that you collect, you should review and update your measures frequently, remembering to
document what you do and why.
Example
A company wants to use the personal data it holds for a new purpose. It carries out an assessment
in line with Article 6(4) of the GDPR, and determines that the new purpose is compatible with the
original purpose for which it collected the personal data. Although this provision of the GDPR does
not specify that the company must document its compatibility assessment, it knows that to be
accountable, it needs to be able to prove that its handling of personal data is compliant with the
GDPR. The company therefore keeps a record of the compatibility assessment, including its
rationale for the decision and the appropriate safeguards it put in place.
Further Reading
Relevant provisions in the GDPR - See Articles 12-14, 22 and 24(1), and Recitals 39, 58-61 and 71
External link
Further reading – ICO guidance
Right to be informed
Rights related to automated decision making including profiling
Data protection self assessment
Contracts
At a glance
Whenever a controller uses a processor it needs to have a written contract in place.
The contract is important so that both parties understand their responsibilities and liabilities.
The GDPR sets out what needs to be included in the contract.
In the future, standard contract clauses may be provided by the European Commission or the ICO,
and may form part of certification schemes. However at the moment no standard clauses have been
drafted.
Controllers are liable for their compliance with the GDPR and must only appoint processors who can
provide ‘sufficient guarantees’ that the requirements of the GDPR will be met and the rights of data
subjects protected. In the future, using a processor which adheres to an approved code of conduct or
certification scheme may help controllers to satisfy this requirement – though again, no such
schemes are currently available.
Processors must only act on the documented instructions of a controller. They will however have
some direct responsibilities under the GDPR and may be subject to fines or other sanctions if they
don’t comply.
Checklists
Controller and processor contracts checklist
Our contracts include the following compulsory details:
☐ the subject matter and duration of the processing;
☐ the nature and purpose of the processing;
☐ the type of personal data and categories of data subject; and
☐ the obligations and rights of the controller.
Our contracts include the following compulsory terms:
☐ the processor must only act on the written instructions of the controller (unless required by
law to act without such instructions);
☐ the processor must ensure that people processing the data are subject to a duty of
confidence;
☐ the processor must take appropriate measures to ensure the security of processing;
☐ the processor must only engage a sub-processor with the prior consent of the data controller
and a written contract;
☐ the processor must assist the data controller in providing subject access and allowing data
subjects to exercise their rights under the GDPR;
☐ the processor must assist the data controller in meeting its GDPR obligations in relation to the
security of processing, the notification of personal data breaches and data protection impact
assessments;
☐ the processor must delete or return all personal data to the controller as requested at the end
of the contract; and
☐ the processor must submit to audits and inspections, provide the controller with whatever
information it needs to ensure that they are both meeting their Article 28 obligations, and tell the
controller immediately if it is asked to do something infringing the GDPR or other data protection
law of the EU or a member state.
As a matter of good practice, our contracts:
☐ state that nothing within the contract relieves the processor of its own direct responsibilities
and liabilities under the GDPR; and
☐ reflect any indemnity that has been agreed.
Processors’ responsibilities and liabilities checklist
In addition to the Article 28.3 contractual obligations set out in the controller and processor contracts
checklist, a processor has the following direct responsibilities under the GDPR. The processor must:
☐ only act on the written instructions of the controller (Article 29);
☐ not use a sub-processor without the prior written authorisation of the controller (Article 28.2);
☐ co-operate with supervisory authorities (such as the ICO) in accordance with Article 31;
☐ ensure the security of its processing in accordance with Article 32;
☐ keep records of its processing activities in accordance with Article 30.2;
☐ notify any personal data breaches to the controller in accordance with Article 33;
☐ employ a data protection officer if required in accordance with Article 37; and
☐ appoint (in writing) a representative within the European Union if required in accordance with
Article 27.
A processor should also be aware that:
☐ it may be subject to investigative and corrective powers of supervisory authorities (such as
the ICO) under Article 58 of the GDPR;
☐ if it fails to meet its obligations, it may be subject to an administrative fine under Article 83 of
the GDPR;
☐ if it fails to meet its GDPR obligations it may be subject to a penalty under Article 84 of the
GDPR; and
☐ if it fails to meet its GDPR obligations it may have to pay compensation under Article 82 of the
GDPR.
In brief
What's new?
The GDPR makes written contracts between controllers and processors a general requirement, rather
than just a way of demonstrating compliance with the seventh data protection principle (appropriate
security measures) under the DPA.
These contracts must now include certain specific terms, as a minimum.
These terms are designed to ensure that processing carried out by a processor meets all the
requirements of the GDPR (not just those related to keeping personal data secure).
The GDPR allows for standard contractual clauses from the EU Commission or a supervisory authority
(such as the ICO) to be used in contracts between controllers and processors - though none have
been drafted so far.
The GDPR envisages that adherence by a processor to an approved code of conduct or certification
scheme may be used to help controllers demonstrate that they have chosen a suitable processor.
Standard contractual clauses may form part of such a code or scheme, though again, no schemes
are currently available.
The GDPR gives processors responsibilities and liabilities in their own right, and processors as well as
controllers may now be liable to pay damages or be subject to fines or other penalties.
When is a contract needed?
Whenever a controller uses a processor (a third party who processes personal data on behalf of the
controller) it needs to have a written contract in place. Similarly, if a processor employs another
processor it needs to have a written contract in place.
Why are contracts between controllers and processors important?
Contracts between controllers and processors ensure that they both understand their obligations,
responsibilities and liabilities. They help them to comply with the GDPR, and help controllers to
demonstrate their compliance with the GDPR. The use of contracts by controllers and processors may
also increase data subjects’ confidence in the handling of their personal data.
What needs to be included in the contract?
Contracts must set out the subject matter and duration of the processing, the nature and purpose of the
processing, the type of personal data and categories of data subject, and the obligations and rights of
the controller.
Contracts must also include as a minimum the following terms, requiring the processor to:
only act on the written instructions of the controller;
ensure that people processing the data are subject to a duty of confidence;
take appropriate measures to ensure the security of processing;
only engage sub-processors with the prior consent of the controller and under a written contract;
assist the controller in providing subject access and allowing data subjects to exercise their rights
under the GDPR;
assist the controller in meeting its GDPR obligations in relation to the security of processing, the
notification of personal data breaches and data protection impact assessments;
delete or return all personal data to the controller as requested at the end of the contract; and
submit to audits and inspections, provide the controller with whatever information it needs to ensure
that they are both meeting their Article 28 obligations, and tell the controller immediately if it is asked
to do something infringing the GDPR or other data protection law of the EU or a member state.
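The minimum terms above amount to a fixed checklist that a draft contract either covers or does not. As an illustrative aid only (not an official tool; the clause labels below are hypothetical shorthand for the Article 28(3) terms), the gap check could be sketched like this:

```python
# Hypothetical compliance aid: compare a draft contract's clause list
# against shorthand labels for the minimum Article 28(3) terms above.
REQUIRED_TERMS = {
    "written instructions only",
    "duty of confidence",
    "security of processing",
    "sub-processors need prior consent and written contract",
    "assist with data subject rights",
    "assist with security, breach notification and DPIAs",
    "delete or return data at end of contract",
    "submit to audits and inspections",
}

def missing_terms(contract_clauses: set[str]) -> set[str]:
    """Return the minimum terms not yet covered by the draft."""
    return REQUIRED_TERMS - contract_clauses

draft = {"written instructions only", "duty of confidence"}
print(sorted(missing_terms(draft)))  # six terms still to add
```

Set difference makes the check order-independent: any clause not matched against the required list is reported, however the draft is organised.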
Can standard contractual clauses be used?
The GDPR allows standard contractual clauses from the EU Commission or a Supervisory Authority
(such as the ICO) to be used in contracts between controllers and processors. However, no standard
clauses are currently available.
The GDPR also allows these standard contractual clauses to form part of a code of conduct or
certification mechanism to demonstrate compliant processing. However, no schemes are currently
available.
What responsibilities and liabilities do processors have in their own right?
A processor must only act on the documented instructions of a controller. If a processor determines the
purpose and means of processing (rather than acting only on the instructions of the controller) then it
will be considered to be a controller and will have the same liability as a controller.
In addition to its contractual obligations to the controller, under the GDPR a processor also has the
following direct responsibilities:
not to use a sub-processor without the prior written authorisation of the data controller;
to co-operate with supervisory authorities (such as the ICO);
to ensure the security of its processing;
to keep records of processing activities;
to notify any personal data breaches to the data controller;
to employ a data protection officer if required; and
to appoint (in writing) a representative within the European Union if needed.
If a processor fails to meet any of these obligations, or acts outside or against the instructions of the
controller, then it may be liable to pay damages in legal proceedings, or be subject to fines or other
penalties or corrective measures.
If a processor uses a sub-processor then it will, as the original processor, remain directly liable to the
controller for the performance of the sub-processor’s obligations.
Further Reading
Relevant provisions in the GDPR - see Articles 28-36 and Recitals 81-83
External link
In more detail – ICO guidance
The deadline for responses to our draft GDPR guidance on contracts and liabilities for controllers and
processors has now passed. We are analysing the feedback and this will feed into the final version.
Documentation
At a glance
The GDPR contains explicit provisions about documenting your processing activities.
You must maintain records of several things, such as processing purposes, data sharing and
retention.
You may be required to make the records available to the ICO on request.
Documentation can help you comply with other aspects of the GDPR and improve your data
governance.
Controllers and processors both have documentation obligations.
For small and medium-sized organisations, documentation requirements are limited to certain types
of processing activities.
Information audits or data-mapping exercises can feed into the documentation of your processing
activities.
Records must be kept in writing.
Most organisations will benefit from maintaining their records electronically.
Records must be kept up to date and reflect your current processing activities.
We have produced some basic templates to help you document your processing activities.
Checklists
Documentation of processing activities – requirements
☐ If we are a controller for the personal data we process, we document all the applicable
information under Article 30(1) of the GDPR.
☐ If we are a processor for the personal data we process, we document all the applicable
information under Article 30(2) of the GDPR.
If we process special category or criminal conviction and offence data, we document:
☐ the condition for processing we rely on in the Data Protection Act 2018 (DPA 2018);
☐ the lawful basis for our processing; and
☐ whether we retain and erase the personal data in accordance with our policy document, where
required in Schedule 1 of the DPA 2018.
☐ We document our processing activities in writing.
☐ We document our processing activities in a granular way with meaningful links between the
different pieces of information.
☐ We conduct regular reviews of the personal data we process and update our documentation
accordingly.
Documentation of processing activities – best practice
When preparing to document our processing activities we:
☐ do information audits to find out what personal data our organisation holds;
☐ distribute questionnaires and talk to staff across the organisation to get a more complete
picture of our processing activities; and
☐ review our policies, procedures, contracts and agreements to address areas such as
retention, security and data sharing.
As part of our record of processing activities we document, or link to documentation, on:
☐ information required for privacy notices;
☐ records of consent;
☐ controller-processor contracts;
☐ the location of personal data;
☐ Data Protection Impact Assessment reports; and
☐ records of personal data breaches.
☐ We document our processing activities in electronic form so we can add, remove and amend
information easily.
In brief
What’s new under the GDPR?
What is documentation?
Who needs to document their processing activities?
What do we need to document under Article 30 of the GDPR?
Should we document anything else?
How do we document our processing activities?
What’s new under the GDPR?
The documentation of processing activities is a new requirement under the GDPR.
There are some similarities between documentation under the GDPR and the information you
provided to the ICO as part of registration under the Data Protection Act 1998.
You need to make sure that you have in place a record of your processing activities by 25 May 2018.
What is documentation?
Most organisations are required to maintain a record of their processing activities, covering areas
such as processing purposes, data sharing and retention; we call this documentation.
Documenting your processing activities is important, not only because it is itself a legal requirement,
but also because it can support good data governance and help you demonstrate your compliance
with other aspects of the GDPR.
Who needs to document their processing activities?
Controllers and processors each have their own documentation obligations.
If you have 250 or more employees, you must document all your processing activities.
There is a limited exemption for small and medium-sized organisations. If you have fewer than 250
employees, you only need to document processing activities that:
are not occasional; or
could result in a risk to the rights and freedoms of individuals; or
involve the processing of special categories of data or criminal conviction and offence data.
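The exemption works as a simple decision rule: document everything at 250 or more employees, and below that threshold document any processing meeting one of the three criteria above. A minimal sketch (a hypothetical helper, not part of any official tooling):

```python
def must_document(employee_count: int,
                  occasional: bool,
                  risky: bool,
                  special_or_criminal: bool) -> bool:
    """Sketch of the Article 30(5) exemption test described above.

    Organisations with 250+ employees always document. Smaller ones
    document processing that is not occasional, could result in a risk
    to individuals' rights and freedoms, or involves special category
    or criminal conviction and offence data.
    """
    if employee_count >= 250:
        return True
    return (not occasional) or risky or special_or_criminal

# A small firm's regular payroll run is not occasional, so it still
# falls outside the exemption and must be documented.
print(must_document(40, occasional=False, risky=False,
                    special_or_criminal=False))
```

Note the criteria are joined by "or": satisfying any one of them removes the exemption.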
What do we need to document under Article 30 of the GDPR?
You must document the following information:
The name and contact details of your organisation (and where applicable, of other controllers, your
representative and your data protection officer).
The purposes of your processing.
A description of the categories of individuals and categories of personal data.
The categories of recipients of personal data.
Details of your transfers to third countries including documenting the transfer mechanism safeguards
in place.
Retention schedules.
A description of your technical and organisational security measures.
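The Article 30 fields above lend themselves to a structured record, which also makes the guidance's later point about keeping documentation granular and electronic easier to follow. A sketch under the assumption that you model each processing activity as one record (field names here are illustrative, not prescribed by the GDPR):

```python
from dataclasses import dataclass, field

# Illustrative structure mirroring the Article 30(1) record fields
# listed above; names and example values are hypothetical.
@dataclass
class ProcessingRecord:
    controller_name: str
    controller_contact: str
    purposes: list[str]
    categories_of_individuals: list[str]
    categories_of_personal_data: list[str]
    categories_of_recipients: list[str]
    third_country_transfers: list[str] = field(default_factory=list)
    transfer_safeguards: str = ""
    retention_schedule: str = ""
    security_measures: list[str] = field(default_factory=list)

record = ProcessingRecord(
    controller_name="Example Ltd",
    controller_contact="dpo@example.com",
    purposes=["payroll"],
    categories_of_individuals=["employees"],
    categories_of_personal_data=["contact details", "bank details"],
    categories_of_recipients=["payroll provider"],
    retention_schedule="6 years after employment ends",
    security_measures=["encryption at rest", "access controls"],
)
print(record.purposes)
```

One record per purpose or activity keeps the register granular, and optional fields default to empty where an activity has no transfers or documented safeguards yet.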
Should we document anything else?
As part of your record of processing activities, it can be useful to document (or link to documentation of)
other aspects of your compliance with the GDPR and the UK’s Data Protection Act 2018. Such
documentation may include:
information required for privacy notices, such as:
the lawful basis for the processing
the legitimate interests for the processing
individuals’ rights
the existence of automated decision-making, including profiling
the source of the personal data;
records of consent;
controller-processor contracts;
the location of personal data;
Data Protection Impact Assessment reports;
records of personal data breaches;
information required for processing special category data or criminal conviction and offence data
under the Data Protection Act 2018, covering:
the condition for processing in the Data Protection Act;
the lawful basis for the processing in the GDPR; and
your retention and erasure policy document.
How do we document our processing activities?
Doing an information audit or data-mapping exercise can help you find out what personal data your
organisation holds and where it is.
You can find out why personal data is used, who it is shared with and how long it is kept by
distributing questionnaires to relevant areas of your organisation, meeting directly with key business
functions, and reviewing policies, procedures, contracts and agreements.
When documenting your findings, the records you keep must be in writing. The information must be
documented in a granular and meaningful way.
We have developed basic templates to help you document your processing activities.
Further Reading
Documentation template for controllers
For organisations
File (31.22K)
Documentation template for processors
For organisations
File (19.48K)
Relevant provisions in the GDPR – See Article 30 and Recital 82
External link
Relevant provisions in the Data Protection Act 2018 – See Schedule 1
External link
In more detail – ICO guidance
We have produced more detailed guidance on documentation .
In more detail - European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state. It
adopts guidelines for complying with the requirements of the GDPR.
WP29 published a position paper on Article 30(5) (the exemption for small and medium-sized
organisations), which has been endorsed by the EDPB.
Data protection by design and default
At a glance
The GDPR requires you to put in place appropriate technical and organisational measures to
implement the data protection principles and safeguard individual rights. This is ‘data protection by
design and by default’.
In essence, this means you have to integrate or ‘bake in’ data protection into your processing
activities and business practices, from the design stage right through the lifecycle.
This concept is not new. Previously known as ‘privacy by design’, it has always been part of
data protection law. The key change with the GDPR is that it is now a legal requirement.
Data protection by design is about considering data protection and privacy issues upfront in
everything you do. It can help you ensure that you comply with the GDPR’s fundamental principles
and requirements, and forms part of the focus on accountability.
Checklists
☐ We consider data protection issues as part of the design and implementation of systems,
services, products and business practices.
☐ We make data protection an essential component of the core functionality of our processing
systems and services.
☐ We anticipate risks and privacy-invasive events before they occur, and take steps to prevent
harm to individuals.
☐ We only process the personal data that we need for our purpose(s), and only use the
data for those purposes.
☐ We ensure that personal data is automatically protected in any IT system, service, product,
and/or business practice, so that individuals should not have to take any specific action to
protect their privacy.
☐ We provide the identity and contact information of those responsible for data protection both
within our organisation and to individuals.
☐ We adopt a ‘plain language’ policy for any public documents so that individuals easily
understand what we are doing with their personal data.
☐ We provide individuals with tools so they can determine how we are using their personal data,
and whether our policies are being properly enforced.
☐ We offer strong privacy defaults, user-friendly options and controls, and respect user
preferences.
☐ We only use data processors that provide sufficient guarantees of their technical and
organisational measures for data protection by design.
☐ When we use other systems, services or products in our processing activities, we make sure
that we only use those whose designers and manufacturers take data protection issues into
account.
☐ We use privacy-enhancing technologies (PETs) to assist us in complying with our data
protection by design obligations.
In brief
What’s new in the GDPR?
What does the GDPR say about data protection by design and by default?
What is data protection by design?
What is data protection by default?
Who is responsible for complying with data protection by design and by default?
What are we required to do?
When should we do this?
What are the underlying concepts of data protection by design and by default?
How do we do this in practice?
How do data protection by design and by default link to data protection impact assessments (DPIAs)?
What is the role of privacy-enhancing technologies (PETs)?
What about international transfers?
What is the role of certification?
What additional guidance is available?
What’s new in the GDPR?
The GDPR introduces new obligations that require you to integrate data protection concerns into every
aspect of your processing activities. This approach is ‘data protection by design and by default’. These
are key elements of the GDPR’s risk-based approach and its focus on accountability, ie being able to
demonstrate how you are complying with its requirements.
However, data protection by design and by default is not new. It is essentially the GDPR’s version of
‘privacy by design’, an approach that the ICO has championed for many years. Although privacy by
design and data protection by design are not precisely the same, there are well-established privacy by
design principles and practices that can apply in this context.
Some organisations already adopt a ‘privacy by design’ approach as a matter of good practice. If this is
the case for you, then you are well placed to meet the requirements of data protection by design and by
default, although you may still need to review your processes and procedures to ensure that you are
meeting your obligations.
The biggest change is that whilst privacy by design was good practice under the Data Protection Act
1998 (the 1998 Act), data protection by design and by default are legal requirements under the GDPR.
What does the GDPR say about data protection by design and by default?
Articles 25(1) and 25(2) of the GDPR outline your obligations concerning data protection by design and
by default.
Article 25(1) specifies the requirements for data protection by design:
‘Taking into account the state of the art, the cost of implementation and the nature, scope, context
and purposes of processing as well as the risks of varying likelihood and severity for rights and
freedoms of natural persons posed by the processing, the controller shall, both at the time of the
determination of the means for processing and at the time of the processing itself, implement
appropriate technical and organisational measures, such as pseudonymisation, which are designed
to implement data-protection principles, such as data minimisation, in an effective manner and to
integrate the necessary safeguards into the processing in order to meet the requirements of this
Regulation and protect the rights of data subjects.’
Article 25(2) specifies the requirements for data protection by default:
‘The controller shall implement appropriate technical and organisational measures for ensuring that,
by default, only personal data which are necessary for each specific purpose of the processing are
processed. That obligation applies to the amount of personal data collected, the extent of their
processing, the period of their storage and their accessibility. In particular, such measures shall
ensure that by default personal data are not made accessible without the individual's intervention to
an indefinite number of natural persons.’
Article 25(3) states that if you adhere to an approved certification under Article 42, you can use this as
one way of demonstrating your compliance with these requirements.
Further Reading
Relevant provisions in the GDPR - Article 25 and Recital 78
External link
What is data protection by design?
Data protection by design is ultimately an approach that ensures you consider privacy and data
protection issues at the design phase of any system, service, product or process and then throughout
the lifecycle.
As expressed by the GDPR, it requires you to:
put in place appropriate technical and organisational measures designed to implement the data
protection principles; and
integrate safeguards into your processing so that you meet the GDPR's requirements and protect
individual rights.
In essence this means you have to integrate or ‘bake in’ data protection into your processing activities
and business practices.
Data protection by design has broad application. Examples include:
developing new IT systems, services, products and processes that involve processing personal data;
developing organisational policies, processes, business practices and/or strategies that have privacy
implications;
physical design;
embarking on data sharing initiatives; or
using personal data for new purposes.
The underlying concepts of data protection by design are not new. Under the name ‘privacy by design’
they have existed for many years. Data protection by design essentially inserts the privacy by design
approach into data protection law.
Under the 1998 Act, the ICO supported this approach as it helped you to comply with your data
protection obligations. It is now a legal requirement.
What is data protection by default?
Data protection by default requires you to ensure that you only process the data that is necessary to
achieve your specific purpose. It links to the fundamental data protection principles of data minimisation
and purpose limitation.
You have to process some personal data to achieve your purpose(s). Data protection by default means
you need to specify this data before the processing starts, appropriately inform individuals and only
process the data you need for your purpose. It does not require you to adopt a ‘default to off’ solution.
What you need to do depends on the circumstances of your processing and the risks posed to
individuals.
Nevertheless, you must consider things like:
adopting a ‘privacy-first’ approach with any default settings of systems and applications;
ensuring you do not provide an illusory choice to individuals relating to the data you will process;
not processing additional data unless the individual decides you can;
ensuring that personal data is not automatically made publicly available to others unless the
individual decides to make it so; and
providing individuals with sufficient controls and options to exercise their rights.
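The first two points above translate directly into how default settings are initialised: nothing is shared or collected until the individual acts. A minimal sketch, assuming a hypothetical user-settings object (names are illustrative, not from any real product):

```python
from dataclasses import dataclass

# Illustrative only: defaults follow the 'privacy-first' points above,
# so no data is shared or collected unless the individual opts in.
@dataclass
class UserPrivacySettings:
    profile_public: bool = False      # not publicly available by default
    analytics_opt_in: bool = False    # no additional data unless chosen
    marketing_opt_in: bool = False
    location_tracking: bool = False

    def enable_sharing(self) -> None:
        # Visibility changes only on the individual's explicit action.
        self.profile_public = True

settings = UserPrivacySettings()
print(settings.profile_public)  # defaults keep the profile private
settings.enable_sharing()
print(settings.profile_public)
```

The point is not the class itself but the direction of the defaults: every flag starts at the most protective value, and each relaxation is an explicit, recordable user decision.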
Who is responsible for complying with data protection by design and by default?
Article 25 specifies that, as the controller, you have responsibility for complying with data protection by
design and by default. Depending on your circumstances, you may have different requirements for
different areas within your organisation. For example:
your senior management, eg developing a culture of ‘privacy awareness’ and ensuring you develop
policies and procedures with data protection in mind;
your software engineers, system architects and application developers, eg those who design
systems, products and services should take account of data protection requirements and assist you in
complying with your obligations; and
your business practices, eg you should ensure that you embed data protection by design in all your
internal processes and procedures.
This may not apply to all organisations, of course. However, data protection by design is about adopting
an organisation-wide approach to data protection, and ‘baking in’ privacy considerations into any
processing activity you undertake. It doesn’t only apply to organisations that have their
own software developers and systems architects.
In considering whether to impose a penalty, the ICO will take into account the technical and
organisational measures you have put in place in respect of data protection by design. Additionally,
under the Data Protection Act 2018 (DPA 2018) we can issue an Enforcement Notice against you for any
failings in respect of Article 25.
What about data processors?
If you use another organisation to process personal data on your behalf, then that organisation is a data
processor under the GDPR.
Article 25 does not mention data processors specifically. However, Article 28 specifies the considerations
you must take whenever you are selecting a processor. For example, you must only use processors that
provide:
‘sufficient guarantees to implement appropriate technical and organisational measures in such a
manner that the processing will meet the requirements of this Regulation and ensure the protection
of the rights of the data subject’
This requirement covers both data protection by design in Article 25 as well as your security obligations
under Article 32. Your processor cannot necessarily assist you with your data protection by design
obligations (unlike with security measures); however, you must only use processors that provide
sufficient guarantees to meet the GDPR’s requirements.
What about other parties?
Data protection by design and by default can also impact organisations other than controllers and
processors. Depending on your processing activity, other parties may be involved, even if this is just
where you purchase a product or service that you then use in your processing. Examples include
manufacturers, product developers, application developers and service providers.
Recital 78 extends the concepts of data protection by design to other organisations, although it does not
place a requirement on them to comply – that remains with you as the controller.
Therefore, when considering what products and services you need for your processing, you should look
to choose those where the designers and developers have taken data protection into account. This can
help to ensure that your processing adheres to the data protection by design requirements.
If you are a developer or designer of products, services and applications, the GDPR places no specific
obligations on you about how you design and build these products. (You may have specific obligations as
a controller in your own right, eg for any employee data.) However, you should note that controllers are
required to consider data protection by design when selecting services and products for use in their data
processing activities – therefore if you design these products with data protection in mind, you may be
in a better position.
Further Reading
What are we required to do?
You must put in place appropriate technical and organisational measures designed to implement the
data protection principles and safeguard individual rights.
There is no ‘one size fits all’ method to do this, and no one set of measures that you should put in place.
It depends on your circumstances.
The key is that you consider data protection issues from the start of any processing activity, and adopt
appropriate policies and measures that meet the requirements of data protection by design and by
default.
Some examples of how you can do this include:
minimising the processing of personal data;
pseudonymising personal data as soon as possible;
ensuring transparency in respect of the functions and processing of personal data;
enabling individuals to monitor the processing; and
creating (and improving) security features.
This is not an exhaustive list. Complying with data protection by design and by default may require you
to do much more than the above.
‘When developing, designing, selecting and using applications, services and products that are based
on the processing of personal data or process personal data to fulfil their task, producers of the
products, services and applications should be encouraged to take into account the right to data
protection when developing and designing such products, services and applications and, with regard
to the state of the art, to make sure that controllers and processors are able to fulfil their data
protection obligations.’
Relevant provisions in the GDPR - Articles 25 and 28, and Recitals 78, 79, 81 and 82
External link02 August 2018 - 1.0.248 178
However, we cannot provide a complete guide to all aspects of data protection by design and by default
in all circumstances. This guidance identifies the main points for you to consider. Depending on the
processing you are doing, you may need to obtain specialist advice that goes beyond the scope of this
guidance.
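One of the measures listed above, pseudonymising personal data as soon as possible, can be sketched in code. The sketch below is a minimal illustration only, not ICO-endorsed code; the function and field names are hypothetical, and a real deployment would need proper key management, since whoever holds the key can re-identify individuals and it must therefore be stored separately from the pseudonymised data.

```python
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed pseudonym.

    Using HMAC (rather than a plain hash) means the pseudonym cannot be
    reversed by brute-forcing common names or email addresses without
    the key, while the same identifier still maps to the same pseudonym
    so records can be linked.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Illustrative example: strip the direct identifier at the point of collection.
record = {"email": "[email protected]", "purchase_total": 42.50}
key = b"keep-this-key-in-a-separate-secure-store"
pseudonymised_record = {
    "subject_ref": pseudonymise(record["email"], key),
    "purchase_total": record["purchase_total"],
}
```

The pseudonymised record can then be used for analysis without exposing the email address; only the holder of the key can link it back to the individual.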
Further Reading
Relevant provisions in the GDPR - Recital 78
External link
When should we do this?
You should begin data protection by design at the initial phase of any system, service, product, or
process. You should start by considering your intended processing activities, the risks that these may
pose to individuals, and the possible measures available to ensure that you comply with the data
protection principles and protect individual rights. These considerations must cover:
the state of the art and costs of implementation of any measures;
the nature, scope, context and purposes of your processing; and
the risks that your processing poses to the rights and freedoms of individuals.
This is similar to the information risk assessment you should do when considering your security
measures.
These considerations lead into the second step, where you put in place actual technical and
organisational measures to implement the data protection principles and integrate safeguards into your
processing.
This is why there is no single solution or process that applies to every organisation or every processing
activity, although there are a number of commonalities that may apply to your specific circumstances as
described below.
The GDPR requires you to take these actions:
‘at the time of the determination of the means of the processing’ – in other words, when you are at
the design phase of any processing activity; and
‘at the time of the processing itself’ – ie during the lifecycle of your processing activity.
What are the underlying concepts of data protection by design and by default?
The underlying concepts are essentially expressed in the seven ‘foundational principles’ of privacy by
design, as developed by the Information and Privacy Commissioner of Ontario.
Although privacy by design is not necessarily equivalent to data protection by design, these foundational
principles can nevertheless underpin any approach you take.
‘Proactive not reactive; preventative not remedial’
You should take a proactive approach to data protection and anticipate privacy issues and risks before
they happen, instead of waiting until after the fact. This doesn’t just apply in the context of systems
design – it involves developing a culture of ‘privacy awareness’ across your organisation.
‘Privacy as the default setting’
You should design any system, service, product, and/or business practice to protect personal data
automatically. With privacy built into the system, the individual does not have to take any steps to
protect their data – their privacy remains intact without them having to do anything.
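As a sketch of what ‘privacy as the default setting’ can mean at a technical level, the hypothetical settings object below defaults every optional disclosure to off, so an individual’s data is protected unless they actively choose otherwise. The field names are illustrative assumptions, not taken from the guidance.

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    """User-facing settings with privacy-protective defaults.

    Every optional form of processing defaults to off (opt-in), so the
    individual's privacy remains intact if they take no action at all.
    """
    profile_public: bool = False        # profile hidden unless the user opts in
    share_with_partners: bool = False   # no third-party sharing by default
    marketing_emails: bool = False      # no marketing unless requested
    analytics_tracking: bool = False    # no behavioural tracking by default

# A newly created account starts in the most protective configuration.
defaults = PrivacySettings()
```

The design choice is that any non-essential processing requires a deliberate opt-in, rather than requiring the individual to find and switch off pre-enabled options.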
‘Privacy embedded into design’
Embed data protection into the design of any systems, services, products and business practices. You
should ensure data protection forms part of the core functions of any system or service – essentially, it
becomes integral to these systems and services.
‘Full functionality – positive sum, not zero sum’
Also referred to as ‘win-win’, this principle is essentially about avoiding trade-offs, such as the belief
that in any system or service it is only possible to have privacy or security, not privacy and security.
Instead, you should look to incorporate all legitimate objectives whilst ensuring you comply with your
obligations.
‘End-to-end security – full lifecycle protection’
Put in place strong security measures from the beginning, and extend this security throughout the ‘data
lifecycle’ – ie process the data securely and then destroy it securely when you no longer need it.
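One simple way to approach full-lifecycle protection in practice is to attach a retention period to each record and delete records once it expires. The sketch below is a hypothetical illustration of that idea only; real secure destruction also has to cover backups and any copies held by processors.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=365)  # illustrative retention period

def purge_expired(records: list, now: datetime) -> list:
    """Keep only records still within their retention period.

    Each record carries the timestamp at which it was collected; anything
    older than the retention period is dropped, implementing 'destroy it
    securely when you no longer need it' at the data layer.
    """
    return [r for r in records if now - r["collected_at"] <= RETENTION]

now = datetime(2018, 8, 2)
records = [
    {"id": 1, "collected_at": datetime(2018, 1, 1)},   # within retention
    {"id": 2, "collected_at": datetime(2016, 1, 1)},   # expired
]
kept = purge_expired(records, now)
```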
‘Visibility and transparency – keep it open’
Ensure that whatever business practice or technology you use operates according to its stated promises
and objectives, and is independently verifiable. It is also about ensuring visibility and transparency to
individuals, such as making sure they know what data you process and for what purpose(s) you process
it.
‘Respect for user privacy – keep it user-centric’
Keep the interest of individuals paramount in the design and implementation of any system or service,
eg by offering strong privacy defaults, providing individuals with controls, and ensuring appropriate
notice is given.
How do we do this in practice?
One means of putting these concepts into practice is to develop a set of practical, actionable guidelines
that you can use in your organisation, framed by your assessment of the risks posed and the measures
available to you. You could base these upon the seven foundational principles.
However, how you go about doing this depends on your circumstances – who you are, what you are
doing, the resources you have available, and the nature of the data you process. You may not need to
have a set of documents and organisational controls in place, although in some situations you will be
required to have certain documents available concerning your processing.
The key is to take an organisational approach that achieves certain outcomes, such as ensuring that:
you consider data protection issues as part of the design and implementation of systems, services,
products and business practices;
you make data protection an essential component of the core functionality of your processing
systems and services;
you only process the personal data that you need in relation to your purpose(s), and that you only
use the data for those purposes;
personal data is automatically protected in any IT system, service, product, and/or business practice,
so that individuals should not have to take any specific action to protect their privacy;
the identity and contact information of those responsible for data protection are available both within
your organisation and to individuals;
you adopt a ‘plain language’ policy for any public documents so that individuals easily understand
what you are doing with their personal data;
you provide individuals with tools so they can determine how you are using their personal data, and
whether you are properly enforcing your policies; and
you offer strong privacy defaults, user-friendly options and controls, and respect user preferences.
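The data-minimisation outcome above – only processing the personal data you need for your purpose(s) – can be sketched as an allow-list applied at the point of collection. The purpose-to-fields mapping and field names below are hypothetical examples, not prescribed by the guidance.

```python
# Fields genuinely needed for each purpose (illustrative mapping).
FIELDS_NEEDED = {
    "order_fulfilment": {"name", "delivery_address"},
    "newsletter": {"email"},
}

def minimise(submitted: dict, purpose: str) -> dict:
    """Discard any submitted field not needed for the stated purpose."""
    allowed = FIELDS_NEEDED[purpose]
    return {k: v for k, v in submitted.items() if k in allowed}

# Even if a form collects extra fields, only the necessary ones are stored.
form_data = {"name": "A. Person", "delivery_address": "1 High St",
             "email": "[email protected]", "date_of_birth": "1980-01-01"}
stored = minimise(form_data, "order_fulfilment")
```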
Many of these relate to other obligations in the GDPR, such as transparency requirements,
documentation, Data Protection Officers and DPIAs. This shows the broad nature of data protection by
design and how it applies to all aspects of your processing. Our guidance on these topics will help you
when you consider the measures you need to put in place for data protection by design and by default.
In more detail – ICO guidance
Read our sections on the data protection principles , individual rights , accountability and governance ,
documentation , data protection impact assessments , data protection officers and security in the
Guide to the GDPR.
In more detail – European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state. It
adopts guidelines for complying with the requirements of the GDPR.
WP29 has produced guidelines on transparency , data protection officers , and data protection impact
assessments , which have been endorsed by the EDPB.
Further reading
We will produce further guidance on how you can implement data protection by design soon.
However, the Information and Privacy Commissioner of Ontario has published guidance on how
organisations can ‘operationalise’ privacy by design , which may assist you.
How do data protection by design and by default link to data protection impact assessments
(DPIAs)?
A DPIA is a tool that you can use to identify and reduce the data protection risks of your processing
activities. DPIAs can also help you to design more efficient and effective processes for handling personal
data.
DPIAs are an integral part of data protection by design and by default. For example, they can determine
the type of technical and organisational measures you need in order to ensure your processing complies
with the data protection principles.
However, a DPIA is only required in certain circumstances, such as where the processing is likely to
result in a high risk to rights and freedoms, though it is good practice to undertake a DPIA anyway. In
contrast, data protection by design is a broader concept: it applies across your organisation and requires
you to take certain considerations into account even before you decide whether your processing is likely
to result in a high risk.
In more detail – ICO guidance
Read our guidance on DPIAs in the Guide to the GDPR.
We have also produced more detailed guidance on DPIAs , including a template that you can use and
a list of processing operations that we consider require DPIAs to be undertaken.
In more detail – European Data Protection Board
WP29 produced guidelines on data protection impact assessments , which have been endorsed by
the EDPB.
What is the role of privacy-enhancing technologies (PETs)?
Privacy-enhancing technologies (PETs) are technologies that embody fundamental data protection
principles by minimising personal data use, maximising data security, and empowering individuals. A
useful definition from the European Union Agency for Network and Information Security (ENISA) refers
to PETs as:
‘software and hardware solutions, ie systems encompassing technical processes, methods or
knowledge to achieve specific privacy or data protection functionality or to protect against risks of
privacy of an individual or a group of natural persons.’
PETs link closely to the concept of privacy by design, and therefore apply to the technical measures you
can put in place. They can assist you in complying with the data protection principles and are a means of
implementing data protection by design within your organisation on a technical level.
Further reading
We will provide further guidance on PETs in the near future. ENISA has also published research
reports on PETs that may assist you.
What about international transfers?
Data protection by design also applies in the context of international transfers, in cases where you intend
to transfer personal data overseas to a third country that does not have an adequacy decision.
You need to ensure that, whatever mechanism you use, appropriate safeguards are in place for these
transfers. As detailed in Recital 108, these safeguards need to include compliance with data protection
by design and by default.
Further Reading
Relevant provisions in the GDPR - Article 47 and Recital 108
External link
In more detail – ICO guidance
Read our guidance on international transfers .
What is the role of certification?
Article 25(3) says that:
‘An approved certification mechanism pursuant to Article 42 may be used as an element to
demonstrate compliance with the requirements set out in paragraphs 1 and 2 of this Article.’
This means that an approved certification mechanism, once one is available, can assist you in showing
how you are complying with, and implementing, data protection by design and by default.
In more detail – European Data Protection Board
The EDPB published for consultation draft guidelines on certification and identifying certification
criteria in accordance with Articles 42 and 43 of Regulation 2016/679 on 30 May 2018. The
consultation closed on 12 July 2018.
What additional guidance is available?
The ICO will soon publish more detailed guidance about data protection by design and privacy-enhancing
technologies, as well as about how these concepts apply in the context of the age appropriate design
code of practice under section 123 of the DPA 2018.
In the meantime, there are a number of publications about the privacy by design approach. We have
summarised some of these below.
Further reading
The Information and Privacy Commissioner of Ontario (IPC) originated the concept of privacy
by design in the 1990s. The IPC has a number of relevant publications about the concept and how
you can implement it in your organisation, including:
the original seven foundational principles of privacy by design (external link, PDF);
a primer on privacy by design , published in 2013 (external link, PDF); and
guidance on Operationalizing privacy by design , published in 2012 (external link, PDF).
The European Union Agency for Network and Information Security (ENISA) has also
published research and guidance on privacy by design, including:
a research report on privacy and data protection by design (external link);
a research report on privacy by design and big data (external link); and
a subsection on privacy-enhancing technologies (external link)
The Norwegian data protection authority (Datatilsynet) has produced guidance on how
software developers can implement data protection by design and by default.
Data protection impact assessments
Click here for information about consulting the ICO about your data protection impact assessment.
At a glance
A Data Protection Impact Assessment (DPIA) is a process to help you identify and minimise the data
protection risks of a project.
You must do a DPIA for processing that is likely to result in a high risk to individuals. This
includes some specified types of processing. You can use our screening checklists to help you decide
when to do a DPIA.
It is also good practice to do a DPIA for any other major project which requires the processing of
personal data.
Your DPIA must:
describe the nature, scope, context and purposes of the processing;
assess necessity, proportionality and compliance measures;
identify and assess risks to individuals; and
identify any additional measures to mitigate those risks.
To assess the level of risk, you must consider both the likelihood and the severity of any impact on
individuals. High risk could result from either a high probability of some harm, or a lower possibility
of serious harm.
You should consult your data protection officer (if you have one) and, where appropriate, individuals
and relevant experts. Any processors may also need to assist you.
If you identify a high risk that you cannot mitigate, you must consult the ICO before starting the
processing.
The ICO will give written advice within eight weeks, or 14 weeks in complex cases. If appropriate, we
may issue a formal warning not to process the data, or ban the processing altogether.
Checklists
DPIA awareness checklist
☐ We provide training so that our staff understand the need to consider a DPIA at the early
stages of any plan involving personal data.
☐ Our existing policies, processes and procedures include references to DPIA requirements.
☐ We understand the types of processing that require a DPIA, and use the screening checklist to
identify the need for a DPIA, where necessary.
☐ We have created and documented a DPIA process.
☐ We provide training for relevant staff on how to carry out a DPIA.
DPIA screening checklist
☐ We always carry out a DPIA if we plan to:
☐ Use systematic and extensive profiling or automated decision-making to make significant
decisions about people.
☐ Process special category data or criminal offence data on a large scale.
☐ Systematically monitor a publicly accessible place on a large scale.
☐ Use new technologies.
☐ Use profiling, automated decision-making or special category data to help make decisions
on someone’s access to a service, opportunity or benefit.
☐ Carry out profiling on a large scale.
☐ Process biometric or genetic data.
☐ Combine, compare or match data from multiple sources.
☐ Process personal data without providing a privacy notice directly to the individual.
☐ Process personal data in a way which involves tracking individuals’ online or offline
location or behaviour.
☐ Process children’s personal data for profiling or automated decision-making or for
marketing purposes, or offer online services directly to them.
☐ Process personal data which could result in a risk of physical harm in the event of a
security breach.
☐ We consider whether to do a DPIA if we plan to carry out any other:
☐ Evaluation or scoring.
☐ Automated decision-making with significant effects.
☐ Systematic processing of sensitive data or data of a highly personal nature.
☐ Processing on a large scale.
☐ Processing of data concerning vulnerable data subjects.
☐ Innovative technological or organisational solutions.
☐ Processing involving preventing data subjects from exercising a right or using a service or
contract.
☐ We consider carrying out a DPIA in any major project involving the use of personal data.
☐ If we decide not to carry out a DPIA, we document our reasons.
☐ We carry out a new DPIA if there is a change to the nature, scope, context or purposes of our
processing.
DPIA process checklist
☐ We describe the nature, scope, context and purposes of the processing.
☐ We ask our data processors to help us understand and document their processing activities
and identify any associated risks.
☐ We consider how best to consult individuals (or their representatives) and other relevant
stakeholders.
☐ We ask for the advice of our data protection officer.
☐ We check that the processing is necessary for and proportionate to our purposes, and
describe how we will ensure data protection compliance.
☐ We do an objective assessment of the likelihood and severity of any risks to individuals’ rights
and interests.
☐ We identify measures we can put in place to eliminate or reduce high risks.
☐ We record our decision-making in the outcome of the DPIA, including any difference of opinion
with our DPO or individuals consulted.
☐ We implement the measures we identified, and integrate them into our project plan.
☐ We consult the ICO before processing, if we cannot mitigate high risks.
☐ We keep our DPIAs under review and revisit them when necessary.
In brief
What’s new under the GDPR?
What is a DPIA?
When do we need a DPIA?
How do we carry out a DPIA?
Do we need to consult the ICO?
What’s new under the GDPR?
The GDPR introduces a new obligation to do a DPIA before carrying out types of processing likely to
result in high risk to individuals’ interests. If your DPIA identifies a high risk that you cannot mitigate,
you must consult the ICO.
This is a key element of the new focus on accountability and data protection by design.
Some organisations already carry out privacy impact assessments (PIAs) as a matter of good practice.
If so, the concept will be familiar, but you still need to review your processes to make sure they comply
with GDPR requirements. DPIAs are now mandatory in some cases, and there are specific legal
requirements for content and process.
If you do not already have a PIA process, you need to design a new DPIA process and embed this into
your organisation’s policies and procedures.
In the run-up to 25 May 2018, you also need to review your existing processing operations and decide
whether you need to do a DPIA, or review your PIA, for anything which is likely to be high risk. You do
not need to do a DPIA if you have already considered the relevant risks and safeguards in another way,
unless there has been a significant change to the nature, scope, context or purposes of the processing
since that previous assessment.
What is a DPIA?
A DPIA is a way for you to systematically and comprehensively analyse your processing and help you
identify and minimise data protection risks.
DPIAs should consider compliance risks, but also broader risks to the rights and freedoms of individuals,
including the potential for any significant social or economic disadvantage. The focus is on the potential
for harm - to individuals or to society at large, whether it is physical, material or non-material.
To assess the level of risk, a DPIA must consider both the likelihood and the severity of any impact on
individuals.
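Assessing likelihood and severity together is often operationalised as a simple risk matrix. The numeric scale and thresholds below are a hypothetical illustration; neither the GDPR nor this guidance prescribes a particular scoring scheme.

```python
def risk_level(likelihood: int, severity: int) -> str:
    """Classify overall risk from 1-3 likelihood and severity scores.

    Either axis can push the result to 'high': a high probability of
    some harm, or a lower possibility of serious harm, may both amount
    to high risk.
    """
    score = likelihood * severity
    if score >= 6 or severity == 3:
        return "high"
    if score >= 3:
        return "medium"
    return "low"
```

A DPIA would record the scores and reasoning for each identified risk, so that any remaining ‘high’ results trigger mitigation measures or consultation with the ICO.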
A DPIA does not have to eradicate the risks altogether, but should help to minimise risks and assess
whether or not remaining risks are justified.
DPIAs are a legal requirement for processing that is likely to be high risk. But an effective DPIA can also
bring broader compliance, financial and reputational benefits, helping you demonstrate accountability
and building trust and engagement with individuals.
A DPIA may cover a single processing operation or a group of similar processing operations. A group of
controllers can do a joint DPIA.
It’s important to embed DPIAs into your organisational processes and ensure the outcome can influence
your plans. A DPIA is not a one-off exercise and you should see it as an ongoing process, and regularly
review it.
When do we need a DPIA?
You must do a DPIA before you begin any type of processing which is “likely to result in a high risk”.
This means that, although you have not yet assessed the actual level of risk, you need to screen for
factors that point to the potential for a widespread or serious impact on individuals.
In particular, the GDPR says you must do a DPIA if you plan to:
use systematic and extensive profiling with significant effects;
process special category or criminal offence data on a large scale; or
systematically monitor publicly accessible places on a large scale.
The ICO also requires you to do a DPIA if you plan to:
use new technologies;
use profiling or special category data to decide on access to services;
profile individuals on a large scale;
process biometric data;
process genetic data;
match data or combine datasets from different sources;
collect personal data from a source other than the individual without providing them with a privacy
notice (‘invisible processing’);
track individuals’ location or behaviour;
profile children or target marketing or online services at them; or
process data that might endanger the individual’s physical health or safety in the event of a security
breach.
You should also think carefully about doing a DPIA for any other processing that is large scale, involves
profiling or monitoring, decides on access to services or opportunities, or involves sensitive data or
vulnerable individuals.
Even if there is no specific indication of likely high risk, it is good practice to do a DPIA for any major
new project involving the use of personal data. You can use or adapt the checklists to help you carry out
this screening exercise.
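A screening exercise like the one above can be recorded programmatically, for example as a checklist of indicators where any match flags the project as requiring a DPIA. The indicator names below paraphrase the list in this section and are illustrative only; they are not a substitute for the full screening checklists.

```python
# Indicators paraphrased from the screening list above (illustrative).
HIGH_RISK_INDICATORS = {
    "systematic_extensive_profiling",
    "large_scale_special_category_data",
    "systematic_public_monitoring",
    "new_technologies",
    "biometric_or_genetic_data",
    "invisible_processing",
    "tracking_location_or_behaviour",
    "children_profiling_or_marketing",
}

def dpia_required(project_indicators: set) -> bool:
    """A DPIA is needed if any high-risk indicator applies to the project.

    A False result does not end the exercise: it is still good practice
    to do a DPIA for any major project using personal data, and a
    decision not to should be documented.
    """
    return bool(project_indicators & HIGH_RISK_INDICATORS)
```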
How do we carry out a DPIA?
A DPIA should begin early in the life of a project, before you start your processing, and run alongside
the planning and development process. It should include these steps:
You must seek the advice of your data protection officer (if you have one). You should also consult with
individuals and other stakeholders throughout this process.
The process is designed to be flexible and scalable. You can use or adapt our sample DPIA template ,
or create your own. If you want to create your own, you may want to refer to the European guidelines
which set out Criteria for an acceptable DPIA .
Although publishing a DPIA is not a requirement of the GDPR, you should actively consider the benefits
of publication. As well as demonstrating compliance, publication can help engender trust and confidence.
We would therefore recommend that you publish your DPIAs, where possible, removing sensitive details
if necessary.
Do we need to consult the ICO?
You don’t need to send every DPIA to the ICO and we expect the percentage sent to us to be small. But
you must consult the ICO if your DPIA identifies a high risk and you cannot take measures to reduce
that risk. You cannot begin the processing until you have consulted us.
If you want your project to proceed smoothly, investing time in producing a comprehensive DPIA may
prevent delays later if you have to consult the ICO.
You need to email us and attach a copy of your DPIA.
Once we have the information we need, we will generally respond within eight weeks (although we can
extend this by a further six weeks in complex cases).
We will provide you with a written response advising you whether the risks are acceptable, or whether
you need to take further action. In some cases we may advise you not to carry out the processing
because we consider it would be in breach of the GDPR. In appropriate cases we may issue a formal
warning or take action to ban the processing altogether.
Further Reading
Key provisions in the GDPR - See Articles 35 and 36 and Recitals 74-77, 84, 89-92, 94 and 95
External link
Further reading – ICO guidance
We have published more detailed guidance on DPIAs .
Further reading – European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state. It
adopts guidelines for complying with the requirements of the GDPR.
WP29 published Guidelines on Data Protection Impact Assessment (DPIA) and determining whether
processing is “likely to result in a high risk” for the purposes of Regulation 2016/679 (WP248), which
have been endorsed by the EDPB.
Other relevant guidelines include:
Guidelines on Data Protection Officers (‘DPOs’) (WP243)
Guidelines on automated individual decision-making and profiling for the purposes of Regulation
2016/679 (WP251)
Data protection officers
At a glance
The GDPR introduces a duty for you to appoint a data protection officer (DPO) if you are a public
authority or body, or if you carry out certain types of processing activities.
DPOs assist you to monitor internal compliance, inform and advise on your data protection
obligations, provide advice regarding Data Protection Impact Assessments (DPIAs) and act as a
contact point for data subjects and the supervisory authority.
The DPO must be independent, an expert in data protection, adequately resourced, and report to the
highest management level.
A DPO can be an existing employee or externally appointed.
In some cases several organisations can appoint a single DPO between them.
DPOs can help you demonstrate compliance and are part of the enhanced focus on accountability.
Checklists
Appointing a DPO
☐ We are a public authority or body and have appointed a DPO (except if we are a court acting
in our judicial capacity).
☐ We are not a public authority or body, but we know whether the nature of our processing
activities requires the appointment of a DPO.
☐ We have appointed a DPO based on their professional qualities and expert knowledge of data
protection law and practices.
☐ We aren’t required to appoint a DPO under the GDPR but we have decided to do so
voluntarily. We understand that the same duties and responsibilities apply as if the appointment
had been mandatory. We support our DPO to the same standards.
Position of the DPO
☐ Our DPO reports directly to our highest level of management and is given the required
independence to perform their tasks.
☐ We involve our DPO, in a timely manner, in all issues relating to the protection of personal
data.
☐ Our DPO is sufficiently well resourced to be able to perform their tasks.
☐ We do not penalise the DPO for performing their duties.
☐ We ensure that any other tasks or duties we assign our DPO do not result in a conflict of
interests with their role as a DPO.
In brief
Do we need to appoint a Data Protection Officer?
Under the GDPR, you must appoint a DPO if:
you are a public authority or body (except for courts acting in their judicial capacity);
your core activities require large scale, regular and systematic monitoring of individuals (for
example, online behaviour tracking); or
your core activities consist of large scale processing of special categories of data or data relating to
criminal convictions and offences.
This applies to both controllers and processors. You can appoint a DPO if you wish, even if you aren’t
required to. If you decide to voluntarily appoint a DPO you should be aware that the same requirements
of the position and tasks apply had the appointment been mandatory.
Regardless of whether the GDPR obliges you to appoint a DPO, you must ensure that your organisation
has sufficient staff and resources to discharge your obligations under the GDPR. However, a DPO can
help you operate within the law by advising and helping to monitor compliance. In this way, a DPO can
be seen to play a key role in your organisation’s data protection governance structure and to help
improve accountability.
If you decide that you don’t need to appoint a DPO, either voluntarily or because you don’t meet the
above criteria, it’s a good idea to record this decision to help demonstrate compliance with the
accountability principle.
Tasks of the DPO
☐ Our DPO is tasked with monitoring compliance with the GDPR and other data protection laws,
our data protection policies, awareness-raising, training, and audits.
☐ We will take account of our DPO’s advice and the information they provide on our data
protection obligations.
☐ When carrying out a DPIA, we seek the advice of our DPO who also monitors the process.
☐ Our DPO acts as a contact point for the ICO. They co-operate with the ICO, including during
prior consultations under Article 36, and will consult on any other matter.
☐ When performing their tasks, our DPO has due regard to the risk associated with processing
operations, and takes into account the nature, scope, context and purposes of processing.
Accessibility of the DPO
☐ Our DPO is easily accessible as a point of contact for our employees, individuals and the ICO.
☐ We have published the contact details of the DPO and communicated them to the ICO.
Further Reading
What is the definition of a public authority?
Section 7 of the Data Protection Act 2018 defines what a ‘public authority’ and a ‘public body’ are for
the purposes of the GDPR.
What are ‘core activities’?
The other two conditions that require you to appoint a DPO only apply when:
your core activities consist of processing activities, which, by virtue of their nature, scope and / or
their purposes, require the regular and systematic monitoring of individuals on a large scale; or
your core activities consist of processing on a large scale of special category data, or data relating to
criminal convictions and offences.
Your core activities are the primary business activities of your organisation. So, if you need to process
personal data to achieve your key objectives, this is a core activity. This is different to processing
personal data for other secondary purposes, which may be something you do all the time (eg payroll or
HR information), but which is not part of carrying out your primary objectives.
Example
For most organisations, processing personal data for HR purposes will be a secondary function to
their main business activities and so will not be part of their core activities.
However, an HR service provider necessarily processes personal data as part of its core activities to
provide HR functions for its client organisations. At the same time, it will also process HR information
for its own employees, which will be regarded as an ancillary function and not part of its core
activities.
What does ‘regular and systematic monitoring of data subjects on a large scale’ mean?
There are two key elements to this condition requiring you to appoint a DPO. Although the GDPR does
not define ‘regular and systematic monitoring’ or ‘large scale’, the Article 29 Working Party
(WP29) provided some guidance on these terms in its guidelines on DPOs. WP29 has been replaced by
the European Data Protection Board (EDPB), which has endorsed these guidelines.
‘Regular and systematic’ monitoring of data subjects includes all forms of tracking and profiling, both
online and offline. An example of this is for the purposes of behavioural advertising.
Example
A large retail website uses algorithms to monitor the searches and purchases of its users and, based
on this information, it offers recommendations to them. As this takes place continuously and
according to predefined criteria, it can be considered as regular and systematic monitoring of data
subjects on a large scale.
When determining if processing is on a large scale, the guidelines say you should take the following
factors into consideration:
the numbers of data subjects concerned;
the volume of personal data being processed;
the range of different data items being processed;
the geographical extent of the activity; and
the duration or permanence of the processing activity.
What does processing special category data and personal data relating to criminal convictions
and offences on a large scale mean?
Processing special category data or criminal conviction or offences data carries more risk than other
personal data. So when you process this type of data on a large scale you are required to appoint a
DPO, who can provide more oversight. Again, the factors relevant to large-scale processing can
include:
the numbers of data subjects;
the volume of personal data being processed;
the range of different data items being processed;
the geographical extent of the activity; and
the duration or permanence of the activity.
Example
A health insurance company processes a wide range of personal data about a large number of
individuals, including medical conditions and other health information. This can be considered as
processing special category data on a large scale.
What professional qualities should the DPO have?
The GDPR says that you should appoint a DPO on the basis of their professional qualities, and in
particular, experience and expert knowledge of data protection law.
It doesn’t specify the precise credentials they are expected to have, but it does say that this should
be proportionate to the type of processing you carry out, taking into consideration the level of
protection the personal data requires.
So, where the processing of personal data is particularly complex or risky, the knowledge and
abilities of the DPO should be correspondingly advanced enough to provide effective oversight.
It would be an advantage for your DPO to also have a good knowledge of your industry or sector, as
well as your data protection needs and processing activities.
What are the tasks of the DPO?
The DPO’s tasks are defined in Article 39 as:
to inform and advise you and your employees about your obligations to comply with the GDPR and
other data protection laws;
to monitor compliance with the GDPR and other data protection laws, and with your data protection
policies, including managing internal data protection activities; raising awareness of data protection
issues, training staff and conducting internal audits;
to advise on, and to monitor, data protection impact assessments;
to cooperate with the supervisory authority; and
to be the first point of contact for supervisory authorities and for individuals whose data is processed
(employees, customers etc).
It’s important to remember that the DPO’s tasks cover all personal data processing activities, not just
those that require their appointment under Article 37(1).
When carrying out their tasks the DPO is required to take into account the risk associated with the
processing you are undertaking. They must have regard to the nature, scope, context and purposes
of the processing.
The DPO should prioritise and focus on the more risky activities, for example where special category
data is being processed, or where the potential impact on individuals could be damaging. Therefore,
DPOs should provide risk-based advice to your organisation.
If you decide not to follow the advice given by your DPO, you should document your reasons to help
demonstrate your accountability.
Can we assign other tasks to the DPO?
The GDPR says that you can assign further tasks and duties, so long as they don’t result in a conflict of
interests with the DPO’s primary tasks.
Basically this means the DPO cannot hold a position within your organisation that leads him or her to
determine the purposes and the means of the processing of personal data. At the same time, the DPO
shouldn’t be expected to manage competing objectives that could result in data protection taking a
secondary role to business interests.
Example
As an example of assigning other tasks, Article 30 requires that organisations must maintain records
of processing operations. There is nothing preventing this task being allocated to the DPO.
Can the DPO be an existing employee?
Yes. As long as the professional duties of the employee are compatible with the duties of the DPO and
do not lead to a conflict of interests, you can appoint an existing employee as your DPO, rather than you
having to create a new post.
Can we contract out the role of the DPO?
You can contract out the role of DPO externally, based on a service contract with an individual or an
organisation. It’s important to be aware that an externally-appointed DPO should have the same
position, tasks and duties as an internally-appointed one.
Can we share a DPO with other organisations?
You may appoint a single DPO to act for a group of companies or public authorities.
If your DPO covers several organisations, they must still be able to perform their tasks effectively,
taking into account the structure and size of those organisations. This means you should consider if
one DPO can realistically cover a large or complex collection of organisations. You need to ensure
they have the necessary resources to carry out their role and be supported with a team, if this is
appropriate.
Your DPO must be easily accessible, so their contact details should be readily available to your
employees, to the ICO, and people whose personal data you process.
Can we have more than one DPO?
The GDPR clearly provides that an organisation must appoint a single DPO to carry out the tasks
required in Article 39, but this doesn’t prevent it appointing other data protection specialists as part of
a team to help support the DPO.
You need to determine the best way to set up your organisation’s DPO function and whether this
necessitates a data protection team. However, there must be an individual designated as the DPO for
the purposes of the GDPR who meets the requirements set out in Articles 37-39.
Examples
A company’s head of marketing plans an advertising campaign, including which of the company’s
customers to target, what method of communication and the personal details to use. This person
cannot also be the company’s DPO, as the decision-making is likely to lead to a conflict of interests
between the campaign’s aims and the company’s data protection obligations.
On the other hand, a public authority could appoint its existing FOI officer / records manager as its
DPO. There is no conflict of interests here as these roles are about ensuring information rights
compliance, rather than making decisions about the purposes of processing.
If you have a team, you should clearly set out the roles and responsibilities of its members and how
it relates to the DPO.
If you hire data protection specialists other than a DPO, it’s important that they are not referred to as
your DPO, which is a specific role with particular requirements under the GDPR.
What do we have to do to support the DPO?
You must ensure that:
the DPO is involved, closely and in a timely manner, in all data protection matters;
the DPO reports to the highest management level of your organisation, ie board level;
the DPO operates independently and is not dismissed or penalised for performing their tasks;
you provide adequate resources (sufficient time, financial, infrastructure, and, where appropriate,
staff) to enable the DPO to meet their GDPR obligations, and to maintain their expert level of
knowledge;
you give the DPO appropriate access to personal data and processing activities;
you give the DPO appropriate access to other services within your organisation so that they can
receive essential support, input or information;
you seek the advice of your DPO when carrying out a DPIA; and
you record the details of your DPO as part of your records of processing activities.
This shows the importance of the DPO to your organisation and that you must provide sufficient support
so they can carry out their role independently. Part of this is the requirement for your DPO to report to
the highest level of management. This doesn’t mean the DPO has to be line managed at this level but
they must have direct access to give advice to senior managers who are making decisions about
personal data processing.
What details do we have to publish about the DPO?
The GDPR requires you to:
publish the contact details of your DPO; and
provide them to the ICO.
This is to enable individuals, your employees and the ICO to contact the DPO as needed. You aren’t
required to include the name of the DPO when publishing their contact details but you can choose to
provide this if you think it’s necessary or helpful.
You’re also required to provide your DPO’s contact details in the following circumstances:
when consulting the ICO under Article 36 about a DPIA; and
when providing privacy information to individuals under Articles 13 and 14.
However, remember you do have to provide your DPO’s name if you report a personal data breach to
the ICO and to those individuals affected by it.
Is the DPO responsible for compliance?
The DPO isn’t personally liable for data protection compliance. As the controller or processor it remains
your responsibility to comply with the GDPR. Nevertheless, the DPO clearly plays a crucial role in
helping you to fulfil your organisation’s data protection obligations.
Further Reading
Relevant provisions in the GDPR - See Articles 35-36, 37-39, 83 and Recital 97
External link
In more detail - ICO guidance
See the following section of the Guide to GDPR: Accountability and governance
See our Guide to freedom of information
In more detail – European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state. It
adopts guidelines for complying with the requirements of the GDPR.
WP29 published guidelines on DPOs and DPO FAQs, which have been endorsed by the EDPB.
Codes of conduct
At a glance
The GDPR recommends that you use approved codes of conduct to help you to apply the GDPR
effectively.
Codes of conduct will reflect the needs of different processing sectors and micro, small and medium
sized enterprises.
Trade associations or bodies representing a sector can create codes of conduct to help their sector
comply with the GDPR in an efficient and cost effective way.
Signing up to a code of conduct is voluntary. However, if there is an approved code of conduct,
relevant to your processing, you may wish to consider signing up. It can also help show compliance
to the ICO, the public and in your business to business relationships.
In brief
Codes of conduct help you to apply the GDPR effectively and allow you to demonstrate your compliance.
Who is responsible for codes of conduct?
Trade associations or bodies representing a sector can create codes of conduct, in consultation with
relevant stakeholders, including the public where feasible. They can amend or extend existing codes to
comply with the GDPR requirements. They have to submit the draft code to us for approval.
We will assess whether a monitoring body is independent and has expertise in the subject matter/sector.
Approved bodies will monitor compliance with the code (except for codes covering public authorities)
and help ensure that the code is appropriately robust and trustworthy.
We will:
check that codes covering UK processing include appropriate safeguards;
set out the monitoring body accreditation criteria;
accredit monitoring bodies;
approve and publish codes; and
maintain a public register of all approved UK codes.
If a code covers more than one EU country, the relevant supervisory authority will submit it to the
European Data Protection Board (EDPB), who will submit their opinion on the code to the European
Commission. The Commission may decide that a code is valid across all EU countries.
If a code covers personal data transfers to countries outside of the EU, the European Commission can
use legislation to give a code general validity within the Union.
What should codes of conduct address?
Codes of conduct should help you comply with the law, and may cover topics such as:
fair and transparent processing;
legitimate interests pursued by controllers in specific contexts;
the collection of personal data;
the pseudonymisation of personal data;
the information provided to individuals and the exercise of individuals’ rights;
the information provided to and the protection of children (including mechanisms for obtaining
parental consent);
technical and organisational measures, including data protection by design and by default and
security measures;
breach notification;
data transfers outside the EU; or
dispute resolution procedures.
Codes of conduct can collectively address the specific needs of micro, small and medium enterprises
and help them to work together to apply GDPR requirements to the specific issues in their sector. Codes
are expected to provide added value for their sector, as they will tailor the GDPR requirements to the
sector or area of data processing. They could be a cost effective means to enable compliance with GDPR
for a sector and its members.
Why sign up to a code of conduct?
Adhering to a code of conduct shows that you:
follow the GDPR requirements for data protection; and
are addressing the level of risk relevant to your sector and the type of processing you are doing. For
example, in a ‘high risk’ sector, such as processing children’s or health data, the code may contain
more demanding requirements.
Adhering to a code of conduct can help you to:
be more transparent and accountable - enabling businesses or individuals to distinguish which
processing activities, products, and services meet GDPR data protection requirements and which
they can trust with their personal data;
have a competitive advantage;
create effective safeguards to mitigate the risk around data processing and the rights and freedoms
of individuals;
help with specific data protection areas, such as international transfers;
improve standards by establishing best practice;
mitigate against enforcement action; and
demonstrate that you have appropriate safeguards to transfer data to countries outside the EU.
What are the practical implications for our organisation?
You can sign up to a code of conduct relevant to your data processing activities or sector. This could
be an extension or an amendment to a current code, or be a brand new code.
When you sign up to a code of conduct, you will need to demonstrate to the code’s monitoring body
that you meet the code’s requirements. These requirements will reflect your sector and the size of
your organisation.
Your customers will be able to view your code membership via the code’s webpage, the ICO’s public
register of UK approved codes of conduct and the EDPB’s public register for all codes of conduct in
the EU.
Once you are assessed as adhering to the code, your compliance with the code will be monitored on
a regular basis. This monitoring provides assurance that the code can be trusted. Your membership
can be withdrawn if you no longer meet the requirements of the code, and the monitoring body will
notify us of this.
You can help reduce the risk of a fine by signing up to a code of conduct. This is because adherence
to a code of conduct will serve as a mitigating factor when a supervisory authority is considering
enforcement action via an administrative fine.
When contracting work to third parties, you may wish to consider whether they have signed up to a
code of conduct, as part of meeting your due diligence requirements under the GDPR.
Further Reading
Relevant provisions in the GDPR - See Articles 40-41 and 83 and Recitals 77, 98, 99 and 168
External link
European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state. It
adopts guidelines for complying with the requirements of the GDPR.
The EDPB are drafting guidelines on codes of conduct and monitoring bodies to cover the provisions
in Articles 40-41 and on codes of conduct as appropriate safeguards for international transfers of
personal data (Article 46(2)(e)).
Certification
At a glance
Member states, supervisory authorities (such as the ICO), the European Data Protection Board
(EDPB) and the Commission will promote certification.
Certification schemes will be a way to comply with the GDPR and enhance your transparency.
Certification schemes should reflect the needs of micro, small and medium sized enterprises.
Certification schemes under GDPR will be approved by the ICO and delivered by approved third
party assessors.
Signing up to a certification scheme is voluntary. However, if there is an approved certification
scheme that covers your processing activity, you may wish to consider working towards it. It can help
you demonstrate compliance to the regulator, the public and in your business to business
relationships.
In brief
Who is responsible for certification?
Member states, supervisory authorities (such as the ICO), the European Data Protection Board (EDPB)
and the Commission will promote certification as a means to enhance transparency and compliance with
the Regulation.
In the UK the certification framework will involve:
the ICO publishing accreditation requirements for certification bodies to meet;
the UK’s national accreditation body, UKAS, accrediting certification bodies and maintaining a public
register;
the ICO approving and publishing certification criteria for certification schemes;
accredited certification bodies (third party assessors) issuing certification; and
controllers and processors applying for certification and using certifications.
The ICO has no plans to accredit certification bodies or carry out certification at this time, although the
GDPR does allow this.
Currently there are no approved certification schemes or accredited certification bodies for issuing
GDPR certificates. Once the certification bodies have been accredited to issue GDPR certificates, you will
find this information on the ICO’s and UKAS’s websites.
Across EU member states, the EDPB will collate all EU certification schemes in a public register. There is
also scope for a European Data Protection Seal.
What is the purpose of certification?
Certification is a way of demonstrating that your processing of personal data complies with the GDPR
requirements, in line with the accountability principle. It could help you demonstrate to the ICO that you
have a systematic and comprehensive approach to compliance. Certification can also help demonstrate
data protection in a practical way to businesses, individuals and regulators. Your customers can use
certification as a means to quickly assess the level of data protection of your particular product or
service.
The GDPR says that certification is also a means to:
demonstrate compliance with the provisions on data protection by design and by default (Article
25(3));
demonstrate that you have appropriate technical and organisational measures to ensure data
security (Article 32(3)); and
support transfers of personal data to third countries or international organisations (Article
46(2)(f)).
Why should we apply for certification of our processing?
Applying for certification is voluntary. However, if there is an approved certification scheme that covers
your processing activity, you may wish to consider working towards it as a way of demonstrating that
you comply with the GDPR.
Obtaining certification for your processing can help you to:
be more transparent and accountable - enabling businesses or individuals to distinguish which
processing activities, products and services meet GDPR data protection requirements and which
they can trust with their personal data;
have a competitive advantage;
create effective safeguards to mitigate the risk around data processing and the rights and freedoms
of individuals;
improve standards by establishing best practice;
help with international transfers; and
mitigate against enforcement action.
What are the practical implications for us?
As a controller or processor, you could obtain certification for your processing operations, products
and services. Certification bodies will act as independent assessors, providing an external steer and
expertise in data protection. You will need to provide them with all the necessary information and
access to your processing activities to enable them to conduct the certification procedure.
Certification is valid for a maximum of three years, subject to periodic reviews. These independent
reviews provide assurance that the certification can be trusted. However, certifications can be
withdrawn if you no longer meet the requirements of the certification, and the certification body will
notify us of this.
Your customers can view your certification in a public register of certificates issued by certification
bodies.
Certification can help you demonstrate compliance, but does not reduce your data protection
responsibilities. Whilst certification will be considered as a mitigating factor when the ICO is
considering imposing a fine, non-compliance with a certification scheme can also be a reason for
issuing a fine.
When contracting work to third parties, you may wish to consider whether they hold a GDPR
certificate for their processing operations, as part of meeting your due diligence requirements under
the GDPR.
Further Reading
Relevant provisions in the GDPR - See Articles 42-43 and 83 and Recitals 81 and 100
External link
In more detail - European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state. It
adopts guidelines for complying with the requirements of the GDPR.
The EDPB published for consultation draft guidelines on certification and identifying certification
criteria in accordance with Articles 42 and 43 of the Regulation 2016/679 on 30 May 2018. The
consultation ended on 12 July 2018 and the responses are being considered.
The EDPB are also drafting guidelines on certification as an appropriate safeguard for international
transfers of personal data (Article 46(2)(f)).
In more detail – Article 29
The WP29 draft guidelines on accreditation for certification bodies were published for consultation.
The consultation closed on 30 March 2018 and the responses are being considered:
http://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=614486
Guide to the data protection fee
On 25 May 2018, the Data Protection (Charges and Information) Regulations 2018 (the 2018
Regulations) came into force, changing the way we fund our data protection work.
Under the 2018 Regulations, organisations that determine the purpose for which personal data is
processed (controllers) must pay a data protection fee unless they are exempt.
The new data protection fee replaces the requirement to ‘notify’ (or register), which was in the Data
Protection Act 1998 (the 1998 Act).
Although the 2018 Regulations come into effect on 25 May 2018, this doesn’t mean everyone now has to
pay the new fee. Controllers who have a current registration (or notification) under the 1998 Act do not
have to pay the new fee until that registration has expired.
Further Reading
The data protection fee - a guide for controllers
For organisations
PDF (103.28K)
Security
At a glance
A key principle of the GDPR is that you process personal data securely by means of ‘appropriate
technical and organisational measures’ – this is the ‘security principle’.
Doing this requires you to consider things like risk analysis, organisational policies, and physical and
technical measures.
You also have to take into account additional requirements about the security of your processing –
and these also apply to data processors.
You can consider the state of the art and costs of implementation when deciding what measures to
take – but they must be appropriate both to your circumstances and the risk your processing poses.
Where appropriate, you should look to use measures such as pseudonymisation and encryption.
Your measures must ensure the ‘confidentiality, integrity and availability’ of your systems and
services and the personal data you process within them.
The measures must also enable you to restore access and availability to personal data in a timely
manner in the event of a physical or technical incident.
You also need to ensure that you have appropriate processes in place to test the effectiveness of
your measures, and undertake any required improvements.
Checklists
☐ We undertake an analysis of the risks presented by our processing, and use this to assess the
appropriate level of security we need to put in place.
☐ When deciding what measures to implement, we take account of the state of the art and costs
of implementation.
☐ We have an information security policy (or equivalent) and take steps to make sure the policy
is implemented.
☐ Where necessary, we have additional policies and ensure that controls are in place to enforce
them.
☐ We make sure that we regularly review our information security policies and measures and,
where necessary, improve them.
☐ We have put in place basic technical controls such as those specified by established
frameworks like Cyber Essentials.
☐ We understand that we may also need to put other technical measures in place depending on
our circumstances and the type of personal data we process.
☐ We use encryption and/or pseudonymisation where it is appropriate to do so.
☐ We understand the requirements of confidentiality, integrity and availability for the personal02 August 2018 - 1.0.248 207
In brief
What’s new?
What does the GDPR say about security?
Why should we worry about information security?
What do we need to protect with our security measures?
What level of security is required?
What organisational measures do we need to consider?
What technical measures do we need to consider?
What if we operate in a sector that has its own security requirements?
What do we do when a data processor is involved?
Should we use pseudonymisation and encryption?
What are ‘confidentiality, integrity, availability’ and ‘resilience’?
What are the requirements for restoring availability and access to personal data?
Are we required to ensure our security measures are effective?
What about codes of conduct and certification?
What about our staff?
What’s new?
The GDPR requires you to process personal data securely. This is not a new data protection obligation. It
replaces and mirrors the previous requirement to have ‘appropriate technical and organisational
measures’ under the Data Protection Act 1998 (the 1998 Act).
However, the GDPR provides more specifics about what you have to do about the security of your
processing and how you should assess your information risk and put appropriate security measures in
place. Whilst these are broadly equivalent to what was considered good and best practice under the
1998 Act, they are now a legal requirement.data we process.
☐ We make sure that we can restore access to personal data in the event of any incidents, such
as by establishing an appropriate backup process.
☐ We conduct regular testing and reviews of our measures to ensure they remain effective, and
act on the results of those tests where they highlight areas for improvement.
☐ Where appropriate, we implement measures that adhere to an approved code of conduct or
certification mechanism.
☐ We ensure that any data processor we use also implements appropriate technical and
organisational measures.
What does the GDPR say about security?
Article 5(1)(f) of the GDPR concerns the ‘integrity and confidentiality’ of personal data. It says that
personal data shall be:
‘Processed in a manner that ensures appropriate security of the personal data, including protection
against unauthorised or unlawful processing and against accidental loss, destruction or damage,
using appropriate technical or organisational measures’
You can refer to this as the GDPR’s ‘security principle’. It concerns the broad concept of information
security.
This means that you must have appropriate security to prevent the personal data you hold being
accidentally or deliberately compromised. You should remember that while information security is
sometimes considered as cybersecurity (the protection of your networks and information systems from
attack), it also covers other things like physical and organisational security measures.
You need to consider the security principle alongside Article 32 of the GDPR, which provides more
specifics on the security of your processing. Article 32(1) states:
‘Taking into account the state of the art, the costs of implementation and the nature, scope, context
and purposes of processing as well as the risk of varying likelihood and severity for the rights and
freedoms of natural persons, the controller and the processor shall implement appropriate technical
and organisational measures to ensure a level of security appropriate to the risk’
Further Reading
Relevant provisions in the GDPR - See Articles 5(1)(f) and 32, and Recitals 39 and 83
Why should we worry about information security?
Poor information security leaves your systems and services at risk and may cause real harm and
distress to individuals – lives may even be endangered in some extreme cases.
Some examples of the harm caused by the loss or abuse of personal data include:
identity fraud;
fake credit card transactions;
targeting of individuals by fraudsters, potentially made more convincing by compromised personal
data;
witnesses put at risk of physical harm or intimidation;
offenders at risk from vigilantes;
exposure of the addresses of service personnel, police and prison officers, and those at risk of
domestic violence;
fake applications for tax credits; and
mortgage fraud.
Although these consequences do not always happen, you should recognise that individuals are still
entitled to be protected from less serious kinds of harm, for example embarrassment or inconvenience.
Information security is important, not only because it is itself a legal requirement, but also because it
can support good data governance and help you demonstrate your compliance with other aspects of the
GDPR.
The ICO is also required to consider the technical and organisational measures you had in place when
considering an administrative fine.
What do our security measures need to protect?
The security principle goes beyond the way you store or transmit information. Every aspect of your
processing of personal data is covered, not just cybersecurity. This means the security measures you
put in place should seek to ensure that:
the data can be accessed, altered, disclosed or deleted only by those you have authorised to do so
(and that those people only act within the scope of the authority you give them);
the data you hold is accurate and complete in relation to why you are processing it; and
the data remains accessible and usable, ie, if personal data is accidentally lost, altered or destroyed,
you should be able to recover it and therefore prevent any damage or distress to the individuals
concerned.
These are known as ‘confidentiality, integrity and availability’ and under the GDPR, they form part of
your obligations.
What level of security is required?
The GDPR does not define the security measures that you should have in place. It requires you to have
a level of security that is ‘appropriate’ to the risks presented by your processing. You need to consider
this in relation to the state of the art and costs of implementation, as well as the nature, scope, context
and purpose of your processing.
This reflects both the GDPR’s risk-based approach, and that there is no ‘one size fits all’ solution to
information security. It means that what’s ‘appropriate’ for you will depend on your own circumstances,
the processing you’re doing, and the risks it presents to your organisation.
So, before deciding what measures are appropriate, you need to assess your information risk. You
should review the personal data you hold and the way you use it in order to assess how valuable,
sensitive or confidential it is – as well as the damage or distress that may be caused if the data was
compromised. You should also take account of factors such as:
the nature and extent of your organisation’s premises and computer systems;
the number of staff you have and the extent of their access to personal data; and
any personal data held or used by a data processor acting on your behalf.
Further Reading
We cannot provide a complete guide to all aspects of security in all circumstances for all organisations,
but this guidance is intended to identify the main points for you to consider.
What organisational measures do we need to consider?
Carrying out an information risk assessment is one example of an organisational measure, but you will
need to take other measures as well. You should aim to build a culture of security awareness within your
organisation. You should identify a person with day-to-day responsibility for information security within
your organisation and make sure this person has the appropriate resources and authority to do their job
effectively.
Clear accountability for security will ensure that you do not overlook these issues, and that your overall
security posture does not become flawed or out of date.
Although an information security policy is an example of an appropriate organisational measure, you
may not need a ‘formal’ policy document or an associated set of policies in specific areas. It depends on
your size and the amount and nature of the personal data you process, and the way you use that data.
However, having a policy does enable you to demonstrate how you are taking steps to comply with the
security principle.
Whether or not you have such a policy, you still need to consider security and other related matters
such as:
co-ordination between key people in your organisation (eg the security manager will need to know
about commissioning and disposing of any IT equipment);
access to premises or equipment given to anyone outside your organisation (eg for computer
maintenance) and the additional security considerations this will generate;
business continuity arrangements that identify how you will protect and recover any personal data
you hold; and
periodic checks to ensure that your security measures remain appropriate and up to date.
Further Reading
Relevant provisions in the GDPR - See Article 32(2) and Recital 83
Example
The Chief Executive of a medium-sized organisation asks the Director of Resources to ensure that
appropriate security measures are in place, and that regular reports are made to the board.
The Resources Department takes responsibility for designing and implementing the organisation’s
security policy, writing procedures for staff to follow, organising staff training, checking whether
security measures are actually being adhered to and investigating security incidents.
What technical measures do we need to consider?
Technical measures are sometimes thought of as the protection of personal data held in computers and
networks. Whilst these are of obvious importance, many security incidents can be due to the theft or
loss of equipment, the abandonment of old computers or hard-copy records being lost, stolen or
incorrectly disposed of. Technical measures therefore include both physical and computer or IT security.
When considering physical security, you should look at factors such as:
the quality of doors and locks, and the protection of your premises by such means as alarms,
security lighting or CCTV;
how you control access to your premises, and how visitors are supervised;
how you dispose of any paper and electronic waste; and
how you keep IT equipment, particularly mobile devices, secure.
In the IT context, technical measures may sometimes be referred to as ‘cybersecurity’. This is a
complex technical area that is constantly evolving, with new threats and vulnerabilities always emerging.
It may therefore be sensible to assume that your systems are vulnerable and take steps to protect
them.
When considering cybersecurity, you should look at factors such as:
system security – the security of your network and information systems, including those which
process personal data;
data security – the security of the data you hold within your systems, eg ensuring appropriate access
controls are in place and that data is held securely;
online security – eg the security of your website and any other online service or application that you
use; and
device security – including policies on Bring-your-own-Device (BYOD) if you offer it.
Depending on the sophistication of your systems, your usage requirements and the technical expertise
of your staff, you may need to obtain specialist information security advice that goes beyond the scope
of this guidance. However, it’s also the case that you may not need a great deal of time and resources to
secure your systems and the personal data they process.
Whatever you do, you should remember the following:
your cybersecurity measures need to be appropriate to the size and use of your network and
information systems;
you should take into account the state of technological development, but you are also able to
consider the costs of implementation;
your security must be appropriate to your business practices. For example, if you offer staff the
ability to work from home, you need to put measures in place to ensure that this does not
compromise your security; and
your measures must be appropriate to the nature of the personal data you hold and the harm that
might result from any compromise.
A good starting point is to make sure that you’re in line with the requirements of Cyber Essentials – a
government scheme that includes a set of basic technical controls you can put in place relatively easily.
You should however be aware that you may have to go beyond these requirements, depending on your
processing activities. Cyber Essentials is only intended to provide a ‘base’ set of controls, and won’t
address the circumstances of every organisation or the risks posed by every processing operation.
A list of helpful sources of information about cybersecurity is provided below.
What if we operate in a sector that has its own security requirements?
Some industries have specific security requirements or require you to adhere to certain frameworks or
standards. These may be set collectively, for example by industry bodies or trade associations, or could
be set by other regulators. If you operate in these sectors, you need to be aware of their requirements,
particularly if specific technical measures are specified.
Although following these requirements will not necessarily equate to compliance with the GDPR’s
security principle, the ICO will nevertheless consider these carefully in any considerations of regulatory
action. It can be the case that they specify certain measures that you should have, and that those
measures contribute to your overall security posture.
Other resources
The Cyber Essentials scheme
In more detail – ICO guidance
Under the 1998 Act, the ICO published a number of more detailed guidance pieces on different
aspects of IT security. We will be updating each of these to reflect the GDPR’s requirements in due
course. However, until that time they may still provide you with assistance or things to consider.
IT security top tips – for further general information on IT security;
IT asset disposal for organisations (pdf) – guidance to help organisations securely dispose of old
computers and other IT equipment;
A practical guide to IT security – ideal for the small business (pdf);
Protecting personal data in online services – learning from the mistakes of others (pdf) – detailed
technical guidance on common technical errors the ICO has seen in its casework;
Bring your own device (BYOD) (pdf) – guidance for organisations who want to allow staff to use
personal devices to process personal data;
Cloud computing (pdf) – guidance covering how security requirements apply to personal data
processed in the cloud; and
Encryption – advice on the use of encryption to protect personal data.
What do we do when a data processor is involved?
If one or more organisations process personal data on your behalf, then these are data processors
under the GDPR. This can have the potential to cause security problems – as a data controller you are
responsible for ensuring compliance with the GDPR and this includes what the processor does with the
data. However, in addition to this, the GDPR’s security requirements also apply to any processor you
use.
This means that:
you must choose a data processor that provides sufficient guarantees about its security measures;
your written contract must stipulate that the processor takes all measures required under Article 32 –
basically, the contract has to require the processor to undertake the same security measures that
you would have to take if you were doing the processing yourself; and
you should ensure that your contract includes a requirement that the processor makes available all
information necessary to demonstrate compliance. This may include allowing for you to audit and
inspect the processor, either yourself or an authorised third party.
At the same time, your processor can assist you in ensuring compliance with your security obligations.
For example, if you lack the resource or technical expertise to implement certain measures, engaging a
processor that has these resources can assist you in making sure personal data is processed securely,
provided that your contractual arrangements are appropriate.
Further Reading
Should we use pseudonymisation and encryption?
Pseudonymisation and encryption are specified in the GDPR as two examples of measures that may be
appropriate for you to implement. This does not mean that you are obliged to use these measures. It
depends on the nature, scope, context and purposes of your processing, and the risks posed to
individuals.
However, there is a wide range of solutions that allow you to implement both without great cost or
difficulty. For example, for a number of years the ICO has considered encryption to be an appropriate
technical measure given its widespread availability and relatively low cost of implementation. This
position has not altered due to the GDPR — if you are storing personal data, or transmitting it over the
internet, we recommend that you use encryption and have a suitable policy in place, taking account of
the residual risks involved.
When considering what to put in place, you should undertake a risk analysis and document your
findings.
Example
If you are processing payment card data, you are obliged to comply with the Payment Card
Industry Data Security Standard (PCI-DSS). The PCI-DSS outlines a number of specific technical and
organisational measures that the payment card industry considers applicable whenever such data is
being processed.
Although compliance with the PCI-DSS is not necessarily equivalent to compliance with the GDPR’s
security principle, if you process card data and suffer a personal data breach, the ICO will consider
the extent to which you have put in place measures that PCI-DSS requires, particularly if the breach
related to a lack of a particular control or process mandated by the standard.
Relevant provisions in the GDPR - See Articles 28 and 32, and Recitals 81 and 83
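As an illustration of what pseudonymisation can look like in practice, direct identifiers can be replaced with a keyed hash, so records remain linkable without revealing the underlying value. This is a minimal Python sketch under stated assumptions: the key name and record fields are hypothetical, and a real deployment would store the key separately from the pseudonymised data.

```python
import hashlib
import hmac

# Hypothetical key, for illustration only; in practice this must be
# generated randomly and held apart from the pseudonymised data set.
SECRET_KEY = b"example-only-key"

def pseudonymise(identifier: str) -> str:
    # A keyed hash (HMAC-SHA256) cannot be reversed or recomputed
    # without the key, unlike a plain unkeyed hash of a guessable value.
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "alice@example.com", "purchases": 3}
safe_record = {
    "subject": pseudonymise(record["email"]),  # stable token, same input -> same token
    "purchases": record["purchases"],
}
```

Because the same identifier always maps to the same token, pseudonymised records can still be joined for analysis, while anyone without the key cannot recover the original identifier.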
Further Reading
What are ‘confidentiality, integrity, availability’ and ‘resilience’?
Collectively known as the ‘CIA triad’, confidentiality, integrity and availability are the three key elements
of information security. If any of the three elements is compromised, then there can be serious
consequences, both for you as a data controller, and for the individuals whose data you process.
The information security measures you implement should seek to guarantee all three both for the
systems themselves and any data they process.
The CIA triad has existed for a number of years and its concepts are well-known to security
professionals.
You are also required to have the ability to ensure the ‘resilience’ of your processing systems and
services. Resilience refers to:
whether your systems can continue operating under adverse conditions, such as those that may
result from a physical or technical incident; and
your ability to restore them to an effective state.
This refers to things like business continuity plans, disaster recovery, and cyber resilience. Again, there
is a wide range of solutions available here, and what is appropriate for you depends on your
circumstances.
Further Reading
Relevant provisions in the GDPR - See Article 32(1)(a) and Recital 83
In more detail – ICO guidance
We have published detailed guidance on encryption under the 1998 Act. Much of this guidance still
applies; however, we are also working to update it to reflect the GDPR.
Relevant provisions in the GDPR - See Article 32(1)(b) and Recital 83
What are the requirements for restoring availability and access to personal data?
You must have the ability to restore the availability and access to personal data in the event of a
physical or technical incident in a ‘timely manner’.
The GDPR does not define what a ‘timely manner’ should be. This therefore depends on:
who you are;
what systems you have; and
the risk that may be posed to individuals if the personal data you process is unavailable for a period
of time.
The key point is that you have taken this into account during your information risk assessment and
selection of security measures. For example, by ensuring that you have an appropriate backup process
in place you will have some level of assurance that if your systems do suffer a physical or technical
incident you can restore them, and therefore the personal data they hold, as soon as reasonably
possible.
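The backup-and-restore idea described above can be sketched in Python. This is an illustrative toy in which local directories stand in for separate devices and an off-site location; a real backup scheme would use distinct media and sites, and the file names here are hypothetical.

```python
import os
import shutil
import tempfile

def back_up(source: str, destinations: list) -> None:
    # Copy the data set to each destination; in a real '3-2-1' scheme the
    # destinations would be separate devices with at least one off-site.
    for dest in destinations:
        shutil.copytree(source, dest, dirs_exist_ok=True)

def restore(backup: str, target: str) -> None:
    # Rebuild the live data set from a surviving backup copy.
    shutil.copytree(backup, target, dirs_exist_ok=True)

work = tempfile.mkdtemp()
live = os.path.join(work, "live")
os.makedirs(live)
with open(os.path.join(live, "customers.csv"), "w") as f:
    f.write("id,name\n1,Alice\n")

copies = [os.path.join(work, d) for d in ("backup_local", "backup_offsite")]
back_up(live, copies)

shutil.rmtree(live)        # simulate a physical or technical incident
restore(copies[1], live)   # restore from the surviving 'off-site' copy
```

The point of the sketch is the workflow, not the mechanism: availability depends on having a tested restore path, not merely on copies existing.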
Further Reading
Example
An organisation takes regular backups of its systems and the personal data held within them. It
follows the well-known ‘3-2-1’ backup strategy: three copies, with two stored on different devices
and one stored off-site.
The organisation is targeted by a ransomware attack that results in the data being encrypted. This
means that it is no longer able to access the personal data it holds.
Depending on the nature of the organisation and the data it processes, this lack of availability can
have significant consequences on individuals – and would therefore be a personal data breach under
the GDPR.
The ransomware has spread throughout the organisation’s systems, meaning that two of the
backups are also unavailable. However, the third backup, being stored off-site, allows the
organisation to restore its systems in a timely manner. There may still be a loss of personal data
depending on when the off-site backup was taken, but having the ability to restore the systems
means that whilst there will be some disruption to the service, the organisation is nevertheless
able to comply with this requirement of the GDPR.
Relevant provisions in the GDPR - See Article 32(1)(c) and Recital 83
Are we required to ensure our security measures are effective?
Yes, the GDPR specifically requires you to have a process for regularly testing, assessing and evaluating
the effectiveness of any measures you put in place. What these tests look like, and how regularly you
do them, will depend on your own circumstances. However, it’s important to note that the requirement in
the GDPR concerns your measures in their entirety; therefore, whatever ‘scope’ you choose for this
testing should be appropriate to what you are doing, how you are doing it, and the data that you are
processing.
Technically, you can undertake this through a number of techniques, such as vulnerability scanning and
penetration testing. These are essentially ‘stress tests’ of your network and information systems, which
are designed to reveal areas of potential risk and things that you can improve.
In some industries, you are required to undertake tests of security measures on a regular basis. The
GDPR now makes this an obligation for all organisations. Importantly, it does not specify the type of
testing, nor how regularly you should undertake it. It depends on your organisation and the personal
data you are processing.
You can undertake testing internally or externally. In some cases it is recommended that both take
place.
Whatever form of testing you undertake, you should document the results and make sure that you act
upon any recommendations, or have a valid reason for not doing so, and implement appropriate
safeguards. This is particularly important if your testing reveals potential critical flaws that could result in
a personal data breach.
Further Reading
Relevant provisions in the GDPR - See Article 32(1)(d) and Recital 83
What about codes of conduct and certification?
If your security measures include a product or service that adheres to a GDPR code of conduct (once
any have been approved) or certification (once any have been issued), you may be able to use this as
an element to demonstrate your compliance with the security principle. It is important that you check
carefully that the code or certification is appropriately issued in accordance with the GDPR.
Further Reading
Relevant provisions in the GDPR - See Article 32(3) and Recital 83
In more detail - European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state. It
adopts guidelines for complying with the requirements of the GDPR.
What about our staff?
The GDPR requires you to ensure that anyone acting under your authority with access to personal data
does not process that data unless you have instructed them to do so. It is therefore vital that your staff
understand the importance of protecting personal data, are familiar with your security policy and put its
procedures into practice.
You should provide appropriate initial and refresher training, including:
your responsibilities as a data controller under the GDPR;
staff responsibilities for protecting personal data – including the possibility that they may commit
criminal offences if they deliberately try to access or disclose these data without authority;
the proper procedures to identify callers;
the dangers of people trying to obtain personal data by deception (eg by pretending to be the
individual whom the data concerns, or enabling staff to recognise ‘phishing’ attacks), or by
persuading your staff to alter information when they should not do so; and
any restrictions you place on the personal use of your systems by staff (eg to avoid virus infection or
spam).
Your staff training will only be effective if the individuals delivering it are themselves reliable and
knowledgeable.
Further Reading
Relevant provisions in the GDPR - See Article 32(4) and Recital 83
In more detail - European Data Protection Board
The EDPB will be producing specific guidance on certification in the coming months.
The EDPB published for consultation draft guidelines on certification and identifying certification
criteria in accordance with Articles 42 and 43 of Regulation 2016/679 on 30 May 2018. The
consultation closed on 12 July 2018.
The EDPB is also drafting guidelines on certification as an appropriate safeguard for international
transfers of personal data (Article 46(2)(f)).
Other resources
The NCSC has detailed technical guidance in a number of areas that will be relevant to you
whenever you process personal data. Some examples include:
10 Steps to Cyber Security – The 10 Steps define and communicate an Information Risk
Management Regime which can provide protection against cyber-attacks.
The Cyber Essentials scheme – this provides a set of basic technical controls that you can
implement to guard against common cyber threats.
Risk management collection – a collection of guidance on how to assess cyber risk.
The government has produced relevant guidance on cybersecurity:
CyberAware – a cross-government awareness campaign developed by the Home Office, the
Department for Digital, Culture, Media and Sport (‘DCMS’) and the NCSC.
‘Cybersecurity – what small businesses need to know’ – produced by DCMS and the
Department for Business, Energy and Industrial Strategy (‘BEIS’).
Technical guidance produced by the European Union Agency for Network and Information Security
(ENISA) may also assist you:
Data protection section at ENISA’s website
In more detail – ICO guidance
The ICO and NCSC have jointly produced guidance on security outcomes.
Encryption
At a glance
The GDPR requires you to implement appropriate technical and organisational measures to ensure
you process personal data securely.
Article 32 of the GDPR includes encryption as an example of an appropriate technical measure,
depending on the nature and risks of your processing activities.
Encryption is a widely-available measure with relatively low costs of implementation. There is a large
variety of solutions available.
You should have an encryption policy in place that governs how and when you implement encryption,
and you should also train your staff in the use and importance of encryption.
When storing or transmitting personal data, you should use encryption and ensure that your
encryption solution meets current standards.
You should be aware of the residual risks of encryption, and have steps in place to address these.
Checklists
Encryption
☐ We understand that encryption can be an appropriate technical measure to ensure that we
process personal data securely.
☐ We have an appropriate policy in place governing our use of encryption.
☐ We ensure that we educate our staff on the use and importance of encryption.
☐ We have assessed the nature and scope of our processing activities and have implemented
encryption solution(s) to protect the personal data we store and/or transmit.
☐ We understand the residual risks that remain, even after we have implemented our
encryption solution(s).
☐ Our encryption solution(s) meet current standards such as FIPS 140-2 and FIPS 197.
☐ We ensure that we keep our encryption solution(s) under review in the light of technological
developments.
☐ We have considered the types of processing we undertake, and whether encryption can be
used in this processing.
In brief
What's new?
What is encryption?
Encryption and data storage
Encryption and data transfer
What types of encryption are there?
How should we implement encryption?
What's new?
The GDPR’s security principle requires you to put in place appropriate technical and organisational
measures to ensure you process personal data securely.
Article 32 of the GDPR provides further considerations for the security of your processing. This
includes specifying encryption as an example of an appropriate technical measure, depending on the
risks involved and the specific circumstances of your processing. The ICO has seen numerous
incidents of personal data being subject to unauthorised or unlawful processing, loss, damage or
destruction. In many cases, the damage and distress caused by these incidents may have been
reduced or even avoided had the personal data been encrypted.
It is also the case that encryption solutions are widely available and can be deployed at relatively low
cost.
It is possible that, where data is lost or destroyed and it was not encrypted, regulatory action may be
pursued (depending on the context of each incident).
What is encryption?
Encryption is a mathematical function that encodes data in such a way that only authorised users can
access it.
It is a way of safeguarding against unauthorised or unlawful processing of personal data, and is one
way in which you can demonstrate compliance with the security principle.
Encryption protects information stored on mobile and static devices and in transmission, and there
are a number of different encryption options available.
You should consider encryption alongside other technical and organisational measures, taking into
account the benefits and risks it can offer.
You should have a policy in place governing the use of encryption, including appropriate staff
education.
You should also be aware of any sector-specific guidance that applies to you, as this may require you
to use encryption.
Encryption and data storage
Encrypting data whilst it is being stored provides effective protection against unauthorised or unlawful
processing.
Most modern operating systems have full-disk encryption built-in.
You can also encrypt individual files or create encrypted containers.
Some applications and databases can be configured to store data in encrypted form.
Storing encrypted data still poses residual risks. You will need to address these depending on the
context, such as by means of an organisational policy and staff training.
Encryption and data transfer
Encrypting personal data whilst it is being transferred provides effective protection against
interception by a third party.
You should use encrypted communications channels when transmitting any personal data over an
untrusted network.
You can also encrypt the data itself prior to transmission, so that it remains protected even when
sent over an insecure channel.
A secure channel only provides assurance that the content cannot be understood if it is intercepted
in transit. Without additional measures, such as encrypting the data itself prior to transmission, the
data will only be protected whilst in transit.
Encrypted data transfer still poses residual risks. You will need to address these depending on the
context, such as by means of an organisational policy and staff training.
What types of encryption are there?
The two types of encryption in widespread use today are symmetric and asymmetric encryption.
With symmetric encryption, the same key is used for encryption and decryption. Conversely, with
asymmetric encryption, different keys are used for encryption and decryption.
When using symmetric encryption, it is critical to ensure that the key is transferred securely.
The technique of cryptographic hashing is sometimes equated to encryption, but it is important to
understand that encryption and hashing are not identical concepts, and are used for different
purposes.
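The distinction between symmetric encryption and hashing can be shown in a short sketch. This uses a deliberately insecure XOR cipher purely to illustrate the symmetric property (one key performs both encryption and decryption); a real deployment would use a vetted algorithm such as AES, not XOR.

```python
import hashlib

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher for illustration only: applying the same key
    # a second time reverses the transformation. Do not use in practice.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret-key"
plaintext = b"alice@example.com"

ciphertext = xor_cipher(plaintext, key)   # encrypt with the key
recovered = xor_cipher(ciphertext, key)   # decrypt with the SAME key
assert recovered == plaintext

# Hashing, by contrast, is one-way: there is no key and no decryption,
# only a fixed-length digest that can be recomputed and compared.
digest = hashlib.sha256(plaintext).hexdigest()
```

Asymmetric encryption differs from the symmetric case above in that a public key encrypts and a separate private key decrypts, which removes the need to transfer a shared key securely.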
How should we implement encryption?
When implementing encryption it is important to consider four things: choosing the right algorithm,
choosing the right key size, choosing the right software, and keeping the key secure.
Over time, vulnerabilities may be discovered in encryption algorithms that can eventually make them
insecure. You should regularly assess whether your encryption method remains appropriate.
It is important to ensure that the key size is sufficiently large to protect against an attack over the
lifetime of the data. You should therefore assess whether your key sizes remain appropriate.
The encryption software you use is also crucial. You should ensure that any solution you implement
meets current standards such as FIPS 140-2 and FIPS 197.
Advice on appropriate encryption solutions is available from a number of organisations, including the
National Cyber Security Centre (NCSC).
You should also ensure that you keep your keys secure, and have processes in place to generate
new keys when necessary to do so.
Passwords in online services
At a glance
Although the GDPR does not say anything specific about passwords, you are required to process
personal data securely by means of appropriate technical and organisational measures.
Passwords are a commonly-used means of protecting access to systems that process personal data.
Therefore, any password setup that you implement must be appropriate to the particular
circumstances of this processing.
You should consider whether there are any better alternatives to using passwords.
Any password system you deploy must protect against theft of stored passwords and ‘brute-force’ or
guessing attacks.
There are a number of additional considerations you will need to take account of when designing your
password system, such as the use of an appropriate hashing algorithm to store your passwords,
protecting the means by which users enter their passwords, defending against common attacks and
the use of two-factor authentication.
In brief
What is required under the GDPR?
Choosing the right authentication scheme
What should I consider when implementing a password system?
What is required under the GDPR?
The GDPR does not say anything specific about passwords. However, Article 5(1)(f) states that personal
data shall be:
‘Processed in a manner that ensures appropriate security of the personal data, including protection
against unauthorised or unlawful processing and against accidental loss, destruction or damage,
using appropriate technical or organisational measures.’
This is the GDPR’s ‘integrity and confidentiality’ principle, or, more simply, the ‘security’ principle. So,
although there are no provisions on passwords, the security principle requires you to take appropriate
technical and organisational measures to prevent unauthorised processing of personal data you hold.
This means that when you are considering a password setup to protect access to a system that
processes personal data, that setup must be ‘appropriate’.
What are the other considerations?
Although the GDPR does not define what is ‘appropriate’, it does provide further considerations in Article
32, ‘security of processing’:
‘Taking into account the state of the art, the costs of implementation, and the nature, scope, context
and purposes of processing as well as the risk of varying likelihood and severity for the rights and
freedoms of natural persons, the controller and the processor shall implement appropriate technical
and organisational measures to ensure a level of security appropriate to the risk.’
This means that when considering any measures, you can consider the state of technological
development and the cost of implementation – but the measures themselves must ensure a level of
security appropriate to the nature of the data being protected and the harm that could be caused by
unauthorised access.
This means that you cannot simply set up a password system and then forget about it – there must be a
periodic review process.
You must also ensure that you are aware of the state of technological development in this area and must
ensure that your processes and technologies are robust against evolving threats. For example,
advances in processing power can reduce the effectiveness of cryptography, particular design choices
can become outdated, and so on.
You must also consider whether there might be better alternatives to passwords that can be used to
secure a system.
Article 25 of the GDPR also requires you to adopt a data protection by design approach. This means that
whenever you develop systems and services that are involved in your processing, you should ensure
that you take account of data protection considerations at the initial design stage and throughout the
lifecycle. This applies to any password system you intend to use.
At the same time, a properly implemented password system can be one element used to demonstrate
compliance with your obligations under data protection by design.
Further reading
Relevant provisions in the GDPR - See Articles 5(1)(f), 25, 32 and Recitals 39, 78 and 83
In more detail – ICO guidance
Read our sections on security and data protection by design in the Guide to the GDPR.
Choosing the right authentication scheme
One of the biggest challenges you face when dealing with personal data online is ensuring that such data
can be accessed only by those with the correct permissions - in other words, authenticating, and
authorising, the individual who is trying to gain access.
It is commonly accepted that there are three main ways of authenticating people to a system – checking
for:
something the individual has (such as a smart card);
something the individual is (this is usually a biometric measure, such as a fingerprint); or
something the individual knows.
Of these, the most commonly used is something the individual knows. In most cases something they
know is taken to be a password.
Passwords remain the most popular way that individuals authenticate to online services. The reason for
this is that a password is generally the simplest method to deploy and the most familiar for individuals.
Despite this, passwords carry well-known risks. The biggest risk is that people have generally seen
passwords as a mathematical problem that can be solved by increasing complexity rules. This fails to
take into account natural human behaviour which is to make passwords more easily memorable,
regardless of the cost to security.
A rigid focus on password strength rules, with no consideration of how people usually choose
passwords, means that you can make inappropriate choices when setting up and maintaining your
authentication system. This could place the wider security of your systems or your users at risk.
Are passwords the best choice?
The success of using a password to properly authenticate a user of your service relies on the fact that
their password remains a shared secret between you and them. When a password is shared amongst
users or can be easily guessed by an attacker it can become extremely difficult to tell the difference
between an authorised user and an imposter with stolen or guessed credentials.
The proliferation of online services requiring individuals to create an account has meant that some have
become overwhelmed with access credentials and defaulted to reusing a short and memorable password
(often coupled with the same email address as a username) across multiple websites. The risk here is
that if one service suffers a personal data breach and access credentials are compromised, these can be
tested against other online services to gain access – a technique known as ‘credential stuffing’.
Example
In 2012, the social networking site LinkedIn was hacked. It was thought at the time that passwords
for around 6.5 million user accounts were stolen by cybercriminals. However, in May 2016, following
the advertisement for sale on the dark web of 165 million user accounts and passwords, LinkedIn
confirmed that the 2012 attack had actually resulted in the theft of email addresses and hashed
passwords of approximately 165 million users.
The vast majority of the passwords were subsequently cracked and posted online less than a day
after the further distribution, largely due to the use of SHA1 without a salt as the hashing algorithm.
Due to the reuse of passwords across online services, a number of subsequent account takeovers at
other services were attributed to the LinkedIn hack.
Before designing and implementing a new password system, you should consider whether it is
necessary to do so, or whether there is a better alternative that can provide secure access.
One common alternative to designing and implementing your own solution is to utilise a single sign on
(SSO) system. While this has its advantages (not least a reduction in the number of passwords that a
user has to remember) you must ensure that you are happy with the level of security that is offered by
that system. You must also consider what will happen if the SSO is compromised, as this will most likely
also result in your users’ accounts being compromised.
What makes a secure and useable password system?
A good password system is one that provides you with sufficient assurance that the individual attempting
to log in is the user they claim to be. In practice, this means a good password system should protect
against two types of attack:
firstly, it should be as difficult as possible for attackers to access stored passwords in a useable form;
and
secondly, it should protect against attackers trying to brute force or guess a valid password and
username combination.
Your system should also make it as easy as possible for users to create secure and unique passwords
that they can remember or store easily. It should not place an undue burden on individuals to make
sure that their account is secure. Putting such barriers in place can result in users making less secure
password choices.
The advice provided in this guidance is a good starting point for most systems where personal data is
being protected. It will be updated as necessary, but you should consider whether you need to apply a
higher level of security given your particular circumstances.
You should ensure that you stay up to date with the current capabilities of attackers who might try to
compromise password systems. You should also consider advice from other sources, such as the
National Cyber Security Centre (NCSC) and GetSafeOnline.
What should I consider when implementing a password system?
How should we store passwords?
How should our users enter their passwords?
What requirements should we set for user passwords?
What should we do about password expirations and resets?
What defences can we put in place against attacks?
What else do we need to consider?
Further reading
Guidance on passwords from the NCSC:
Passwords: simplifying your approach
Using passwords to protect your data
Guidance on passwords from GetSafeOnline:
Password protocol and control
How should we store passwords?
Do not store passwords in plaintext - make sure you use a suitable hashing algorithm, or another
mechanism that offers an equivalent level of protection against an attacker deriving the original
password.
Well-known hashing algorithms such as MD5 and SHA1 are not suitable for hashing passwords. Both
algorithms have known security weaknesses which can be exploited, and you should not use these for
password protection in any circumstances. You should also consider avoiding other fast algorithms. Use
a hashing algorithm that has been specifically designed for passwords, such as bcrypt, scrypt or
PBKDF2, with a salt of appropriate length.
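As a hedged illustration of this advice, the sketch below uses PBKDF2 (via Python’s standard `hashlib` module) with a random per-password salt. The iteration count is illustrative only and should be set following current guidance, and the function names are our own:

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Hash a password with PBKDF2-HMAC-SHA256 and a random 16-byte salt."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes,
                    *, iterations: int = 600_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(candidate, expected)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```

Note that only the salt and digest are stored; the password itself is never written down, so a stolen database forces an attacker to run the slow derivation for every guess.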
It is important that you review the hashing algorithms you use, as over time they can become outdated.
Guidance on algorithms is available from a number of organisations such as the National Institute of
Standards and Technology (NIST) and the European Union Agency for Network and Information Security
(ENISA). You should also be aware of any sector-specific guidelines that are available and may be
applicable to you, eg from the European Payments Council.
You may also need to make sure that you can replace any algorithm that becomes obsolete.
You should also ensure that the architecture around your password system does not allow for any
inadvertent leaking of passwords in plaintext.
Example
In 2018, Twitter and GitHub discovered that errors in their logging systems had led to plaintext
passwords for users being stored in log files. Although the log files were not exposed to anyone
outside of the organisations, both Twitter and GitHub recommended or required that users changed
their passwords.
How should our users enter their passwords?
You should ensure that your login pages are protected with HTTPS, or some other equivalent level of
protection. Failure to do so will mean that anyone who is in a position to intercept network traffic can
obtain passwords and may be able to carry out replay attacks. You should also consider that many
browsers now mark pages that require secure input (such as login pages) as insecure if they are
delivered over HTTP.
Make sure that password hashing is carried out server-side, rather than client-side. Hashing client-side
will remove the protection afforded by hashing in the first place, unless other mitigations are put in
place. This is a complicated area with a number of factors to consider. At the most basic level, if you are
hashing client-side and an attacker obtains your password database, then those hashes can be
presented directly to the server for a successful login.
Also, you should not prevent users from pasting passwords into the password field. Preventing pasting is
often seen as a security measure, but doing so can impede people from using password managers
effectively. The NCSC takes the same view, as expressed in a blog post discussing this issue in much
more detail. Any attacks that are facilitated by allowing pasting can be defended against with proper
rate limiting (see ‘What defences can we put in place against attacks?’ below for more details).
Further reading
Information on the status of a number of hashing functions can be found in NIST Special Publication
800-131A Revision 1 – Transitions: Recommendations for transitioning the use of cryptographic
algorithms and key lengths (2015).
ENISA’s 2014 ‘Algorithms, key size and parameters’ report provides further information on the
status of cryptographic hash functions. You should note that although SHA1 is listed as acceptable
for legacy use, this was only until the SHA3 hashing function was finalised, which took place in 2015.
SHA1 is now regarded as unsuitable for use.
The European Payments Council’s guidance on the use of cryptographic algorithms provides
additional information if your organisation is part of this sector.
Further reading – ICO guidance
Read our guidance on encryption for more information about secure data transfer and HTTPS.
Further reading
Read the NCSC’s ‘Let them paste passwords’ blog post for more information on why you should
allow your users to paste passwords into password fields.
What requirements should we set for user passwords?
There are three general requirements for any password system that you will need to consider:
password length—you should set a suitable minimum password length (this should be no less than 10
characters), but not a maximum length. If you are correctly hashing your passwords, then the output
should be the same length for every password, and therefore the only limit to password length should
be the way your website is coded. If you absolutely must set a maximum length due to the limitations
of your website code, then tell users what it is before they try to enter a password;
special characters—you should allow the use of special characters, but don’t mandate it. If you must
disallow special characters (or spaces) make sure this is made clear before the user creates their
password; and
password blacklisting—do not allow your users to use a common, weak password. Screen passwords
against a ‘password blacklist’ of the most commonly used passwords, leaked passwords from website
breaches and common words or phrases that relate to the service. Update this list on a yearly basis.
Explain to users that this is what you are doing, and that this is why a password has been rejected.
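A minimal sketch of these three checks might look like the following Python. The blacklist shown is purely illustrative, and a real deployment would screen against a much larger list of breached and common passwords:

```python
# Hypothetical helper enforcing the three requirements above: minimum
# length, no composition mandates, and blacklist screening.
MIN_LENGTH = 10
BLACKLIST = {"password12", "123456789!", "qwertyuiop"}  # illustrative; use a real breach list

def check_password(password: str) -> tuple[bool, str]:
    """Return (accepted, message) so the reason for rejection can be shown."""
    if len(password) < MIN_LENGTH:
        return False, f"Passwords must be at least {MIN_LENGTH} characters long."
    if password.lower() in BLACKLIST:
        return False, "That password is too common; please choose another."
    # Deliberately no special-character or other composition rules.
    return True, "OK"

assert check_password("tr0ub4dor")[0] is False      # too short
assert check_password("password12")[0] is False     # blacklisted
assert check_password("a long memorable phrase")[0] is True
```

Returning the reason alongside the verdict makes it straightforward to tell users why a password was rejected, as recommended above.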
Other than the three requirements listed above, do not set restrictions on how users should create a
password. Current research (see ‘Further reading’ below) indicates that doing so will cause people to
reuse passwords across accounts, to create weak passwords with obvious substitutions or to forget their
passwords. All this places unnecessary stress on your reset process.
Properly set up and configured password strength meters can be a good way to easily communicate the
requirements listed above to your users, and research has shown that good meters can assist users in
choosing strong passwords. If you decide to use one, make sure it properly reflects what constitutes a
strong or weak password.
Example
A password blacklist could be a feature of the software you use. Other lists are available online, e.g.
SecLists and haveibeenpwned's password list.
It is also possible to find easy implementations, such as NIST Bad Passwords, which uses SecLists.
Further reading
Microsoft’s password guidance contains advice on passwords in the context of several Microsoft
platforms. It includes guidance for IT administrators as well as users, and details a number of
common password attacks and highlights a number of issues including the risks of placing
restrictions on how users create passwords.
Advice from the Federal Trade Commission (FTC) also discusses these issues.
For more information on password strength meters, read this analysis from Sophos as well as the
significant amount of research from Carnegie Mellon University.
Finally, remind your users that they should not reuse passwords from other sites. In most circumstances
you should not have any idea what your users’ passwords are. However, some companies will actively
track compromised credentials that are traded on the dark web and will check these credentials against
the hashes they hold on their systems to see if there is a match. If you decide that this is something you
want to do you need to carefully consider the potential legal implications of obtaining such lists, and you
will need to explain very clearly how you use that data to your users (especially where the use of such
data has led to a password reset or an account lockout).
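One widely used approach here is the k-anonymity ‘range’ lookup offered by the haveibeenpwned Pwned Passwords API, in which only the first five characters of the password’s SHA-1 digest are ever sent to the service. The sketch below shows the client-side half only (the network call itself is omitted), and the function name is our own:

```python
import hashlib

def pwned_range_query(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 digest for a k-anonymity range lookup.

    The Pwned Passwords API accepts only the 5-character prefix
    (GET https://api.pwnedpasswords.com/range/<prefix>) and returns
    candidate suffixes with breach counts, so the full password hash
    never leaves your system. You then check locally whether the
    35-character suffix appears in the response.
    """
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = pwned_range_query("hunter2")
assert len(prefix) == 5 and len(suffix) == 35
```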
What should we do about password expirations and resets?
You should only set password expirations if they are absolutely necessary for your particular
circumstances. Regular expiry often causes people to change a single strong password for a series of
weak passwords. As a general rule, get your users to create a strong initial password and only change
them if there are pressing reasons, such as a personal data breach.
When deploying a password reset process you should ensure that it is secure. Do not send passwords
over email, even if they are temporary – use one-time links, and ensure that you do not leak the
credentials in any referrer headers. You should also not be in a position where a member of your staff is
able to ‘read out’ a user’s password to them, eg over the phone in a service call—this indicates that you
are storing passwords in plaintext, which is, as described above, not appropriate. If you require a
password to validate a user over the phone, set a separate phone password for the account.
You should also time limit any password reset credentials. The majority of users will probably reset their
password immediately, but set a limit that fits your observed user behaviour.
What defences can we put in place against attacks?
Ensure that you are rate limiting or ‘throttling’ the number and frequency of incorrect login attempts.
The precise number of attempts and the consequence of exceeding these limits will be for you to decide
based on the specific circumstances of your organisation, but limiting to a certain number per hour, day
and month is a good idea. This will help to deter both bulk attackers and people targeting individual
accounts.
Further reading
Read the FTC’s advice about the potential issues with mandatory password changes.
Example
NIST guidance recommends that accounts with internet access should be limited to 100
consecutive failed attempts on a single account within a 30 day period, unless otherwise specified in
the system being deployed.
There are additional considerations when implementing your rate limits:
you should be aware that some attackers will deliberately work within your limits to avoid detection,
and will still achieve a reasonable success rate, especially with targeted guessing;
set your limits based on observed behaviour of both attackers and your users;
be aware that overly-aggressive rate limiting can be used as a denial of service attack; and
remember that a number of successful or unsuccessful access attempts to a range of different user
accounts from the same device or IP address might also be indicative of a bulk attack.
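As a sketch only, a simple per-account throttle along these lines might look like the following Python; the limit and window shown are illustrative, not recommendations:

```python
import time
from collections import defaultdict, deque

class LoginThrottle:
    """Sketch of per-account throttling of failed login attempts.

    The defaults echo the NIST example above (100 failures in 30 days),
    but should be tuned to the observed behaviour of your users and
    attackers, and complemented by per-device/per-IP checks.
    """
    def __init__(self, max_failures: int = 100, window_seconds: int = 30 * 24 * 3600):
        self.max_failures = max_failures
        self.window = window_seconds
        self._failures: dict[str, deque] = defaultdict(deque)  # account -> failure timestamps

    def record_failure(self, account: str) -> None:
        self._failures[account].append(time.time())

    def is_locked(self, account: str) -> bool:
        cutoff = time.time() - self.window
        attempts = self._failures[account]
        while attempts and attempts[0] < cutoff:   # discard attempts outside the window
            attempts.popleft()
        return len(attempts) >= self.max_failures

throttle = LoginThrottle(max_failures=3, window_seconds=3600)
for _ in range(3):
    throttle.record_failure("bob")
assert throttle.is_locked("bob")
assert not throttle.is_locked("alice")
```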
You should also consider whether other methods of preventing attacks might be appropriate. Examples
of these methods could include, but are not limited to:
the use of ‘CAPTCHAs’;
whitelisting IP addresses; and
time limits or time delays after failed authentications.
What else do we need to consider?
You will need to address how your system will respond to an attacker who has legitimate credentials for
a user, or for multiple users. There is a distinct possibility that you will encounter this scenario given that
both password reuse and website breaches are relatively common occurrences.
Techniques for recognising common user behaviour are becoming more advanced, and you could use
these to develop a risk-based approach to verifying an authentication attempt. For example, if a user
logs in from a new device or IP address you might consider requesting a second authentication factor
and informing the user by another contact method of the login attempt. It is however important to
remember that collecting additional data from users in order to defend against authentication attacks
could itself constitute processing personal data and should operate in compliance with the GDPR. This
does not mean you cannot process this data, but you must ensure that you have considered the data
protection implications of doing so.
You should consider providing your users with the facility to review a list of unsuccessful login attempts.
This will allow people who might be specifically targeted to check for potential attacks manually.
However, this will only be useful if you pay attention to reports from individuals that their accounts are
being attacked.
You should also consider implementing two-factor or multifactor authentication wherever it is possible to
do so - to take the most common example, a password and a one-time token generator. This will be
more important where the personal data that can be accessed is of a sensitive nature, or could cause
significant harm if it were compromised.
Other examples of a second factor that could be used include biometrics (fingerprints being the most
common and easy to implement), smart cards or U2F keys and devices. You will however need to
ensure that any processing of biometric data for the purposes of uniquely identifying an individual is
done in accordance with the GDPR’s requirements for special category data, and/or an appropriate
processing condition in Schedule 1 of the Data Protection Act 2018.
In more detail – ICO guidance
Read Protecting personal data in online services: learning from the mistakes of others (PDF) for
more information.
For more information on special category data, read the section on key definitions in the Guide to
the GDPR.
Further reading
Additional guidance on digital identities, hashing functions and algorithms and passwords in general
includes:
NIST’s Special Publication 800-63 on digital identity guidelines;
NIST’s policy on hashing functions;
ENISA’s 2014 report into ‘Algorithms, key size and parameters’;
The International Working Group on Data Protection in Telecommunications (the ‘Berlin Group’)
published a Working Paper on biometrics in online authentication in 2016 (PDF);
Guidance on cryptographic algorithms from the European Payments Council;
OWASP cheat sheet on password storage;
The NCSC’s password guidance;
Additional NCSC guidance on the use of multi-factor authentication in online services. Although
primarily aimed at large organisations, this guidance summarises the considerations involved in
implementing an ‘extra factor’ for authentication, including the options for those factors; and
Cynosure Prime’s analysis of 320 million leaked passwords from the HaveIBeenPwned website.
Personal data breaches
At a glance
The GDPR introduces a duty on all organisations to report certain types of personal data breach to
the relevant supervisory authority. You must do this within 72 hours of becoming aware of the
breach, where feasible.
If the breach is likely to result in a high risk of adversely affecting individuals’ rights and freedoms,
you must also inform those individuals without undue delay.
You should ensure you have robust breach detection, investigation and internal reporting procedures
in place. This will facilitate decision-making about whether or not you need to notify the relevant
supervisory authority and the affected individuals.
You must also keep a record of any personal data breaches, regardless of whether you are required
to notify.
Checklists
Preparing for a personal data breach
☐ We know how to recognise a personal data breach.
☐ We understand that a personal data breach isn’t only about loss or theft of personal data.
☐ We have prepared a response plan for addressing any personal data breaches that occur.
☐ We have allocated responsibility for managing breaches to a dedicated person or team.
☐ Our staff know how to escalate a security incident to the appropriate person or team in our
organisation to determine whether a breach has occurred.
Responding to a personal data breach
☐ We have in place a process to assess the likely risk to individuals as a result of a breach.
☐ We know who is the relevant supervisory authority for our processing activities.
☐ We have a process to notify the ICO of a breach within 72 hours of becoming aware of it,
even if we do not have all the details yet.
☐ We know what information we must give the ICO about a breach.
☐ We have a process to inform affected individuals about a breach when it is likely to result in a
high risk to their rights and freedoms.
☐ We know we must inform affected individuals without undue delay.
☐ We know what information about a breach we must provide to individuals, and that we should
provide advice to help them protect themselves from its effects.
☐ We document all breaches, even if they don’t all need to be reported.
In brief
What is a personal data breach?
A personal data breach means a breach of security leading to the accidental or unlawful destruction,
loss, alteration, unauthorised disclosure of, or access to, personal data. This includes breaches that are
the result of both accidental and deliberate causes. It also means that a breach is more than just about
losing personal data.
Example
Personal data breaches can include:
access by an unauthorised third party;
deliberate or accidental action (or inaction) by a controller or processor;
sending personal data to an incorrect recipient;
computing devices containing personal data being lost or stolen;
alteration of personal data without permission; and
loss of availability of personal data.
A personal data breach can be broadly defined as a security incident that has affected the
confidentiality, integrity or availability of personal data. In short, there will be a personal data breach
whenever any personal data is lost, destroyed, corrupted or disclosed; if someone accesses the data or
passes it on without proper authorisation; or if the data is made unavailable, for example, when it has
been encrypted by ransomware, or accidentally lost or destroyed.
Recital 87 of the GDPR makes clear that when a security incident takes place, you should quickly
establish whether a personal data breach has occurred and, if so, promptly take steps to address it,
including telling the ICO if required.
What breaches do we need to notify the ICO about?
When a personal data breach has occurred, you need to establish the likelihood and severity of the
resulting risk to people’s rights and freedoms. If it’s likely that there will be a risk then you must notify
the ICO; if it’s unlikely then you don’t have to report it. However, if you decide you don’t need to report
the breach, you need to be able to justify this decision, so you should document it.
In assessing risk to rights and freedoms, it’s important to focus on the potential negative consequences
for individuals. Recital 85 of the GDPR explains that:
“A personal data breach may, if not addressed in an appropriate and timely manner, result in
physical, material or non-material damage to natural persons such as loss of control over their
personal data or limitation of their rights, discrimination, identity theft or fraud, financial loss,
unauthorised reversal of pseudonymisation, damage to reputation, loss of confidentiality of personal
data protected by professional secrecy or any other significant economic or social disadvantage to
the natural person concerned.”
This means that a breach can have a range of adverse effects on individuals, which include emotional
distress, and physical and material damage. Some personal data breaches will not lead to risks beyond
possible inconvenience to those who need the data to do their job. Other breaches can significantly
affect individuals whose personal data has been compromised. You need to assess this case by case,
looking at all relevant factors.
So, on becoming aware of a breach, you should try to contain it and assess the potential adverse
consequences for individuals, based on how serious or substantial these are, and how likely they are to
happen.
Example
The theft of a customer database, the data of which may be used to commit identity fraud, would
need to be notified, given the impact this is likely to have on those individuals who could suffer
financial loss or other consequences. On the other hand, you would not normally need to notify the
ICO, for example, about the loss or inappropriate alteration of a staff telephone list.
For more details about assessing risk, please see section IV of the Article 29 Working Party (WP29)
guidelines on personal data breach notification. WP29 has been replaced by the European Data
Protection Board (EDPB) which has endorsed these guidelines.
What role do processors have?
If your organisation uses a data processor, and this processor suffers a breach, then under Article 33(2)
it must inform you without undue delay as soon as it becomes aware.
This requirement allows you to take steps to address the breach and meet your breach-reporting
obligations under the GDPR.
If you use a processor, the requirements on breach reporting should be detailed in the contract between
you and your processor, as required under Article 28. For more details about contracts, please see our
draft GDPR guidance on contracts and liabilities between controllers and processors .
How much time do we have to report a breach?
You must report a notifiable breach to the ICO without undue delay, but not later than 72 hours after
becoming aware of it. If you take longer than this, you must give reasons for the delay.
Section II of the WP29 Guidelines on personal data breach notification gives more details of when a
controller can be considered to have “become aware” of a breach.
What information must a breach notification to the supervisory authority contain?
When reporting a breach, the GDPR says you must provide:
a description of the nature of the personal data breach including, where possible:
the categories and approximate number of individuals concerned; and
the categories and approximate number of personal data records concerned;
the name and contact details of the data protection officer (if your organisation has one) or other
contact point where more information can be obtained;
a description of the likely consequences of the personal data breach; and
a description of the measures taken, or proposed to be taken, to deal with the personal data breach,
including, where appropriate, the measures taken to mitigate any possible adverse effects.
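The items above can be thought of as a structured record. The following Python sketch mirrors them; the field names are our own assumptions, not ICO form fields:

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative record of the items a notification must contain
# under Article 33(3). Field names are our own, not an ICO schema.
@dataclass
class BreachNotification:
    nature_of_breach: str                       # what happened
    categories_of_individuals: List[str]        # e.g. ["customers"]
    approx_individuals_affected: Optional[int]  # "where possible"
    categories_of_records: List[str]            # e.g. ["contact details"]
    approx_records_affected: Optional[int]      # "where possible"
    contact_point: str                          # DPO or other contact
    likely_consequences: str
    measures_taken_or_proposed: str

    def missing_fields(self) -> List[str]:
        """Descriptive items still to be supplied (the GDPR allows
        these to follow in phases, without undue further delay)."""
        required = {
            "nature_of_breach": self.nature_of_breach,
            "contact_point": self.contact_point,
            "likely_consequences": self.likely_consequences,
            "measures_taken_or_proposed": self.measures_taken_or_proposed,
        }
        return [name for name, value in required.items() if not value]
```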
What if we don’t have all the required information available yet?
The GDPR recognises that it will not always be possible to investigate a breach fully within 72 hours to
understand exactly what has happened and what needs to be done to mitigate it. So Article 33(4) allows
you to provide the required information in phases, as long as this is done without undue further delay.
However, we expect controllers to prioritise the investigation, give it adequate resources, and expedite it
urgently. You must still notify us of the breach when you become aware of it, and submit further
information as soon as possible. If you know you won’t be able to provide full details within 72 hours, it
is a good idea to explain the delay to us and tell us when you expect to submit more information.
Example
Your organisation (the controller) contracts an IT services firm (the processor) to archive and store
customer records. The IT firm detects an attack on its network that results in personal data about its
clients being unlawfully accessed. As this is a personal data breach, the IT firm promptly notifies you
that the breach has taken place. You in turn notify the ICO.
How do we notify a breach to the ICO?
To notify the ICO of a personal data breach, please see our pages on reporting a breach.
Remember, in the case of a breach affecting individuals in different EU countries, the ICO may not be
the lead supervisory authority. This means that as part of your breach response plan, you should
establish which European data protection agency would be your lead supervisory authority for the
processing activities that have been subject to the breach. For more guidance on determining who your
lead authority is, please see the WP29 guidance on identifying your lead authority, which has been
endorsed by the EDPB.
When do we need to tell individuals about a breach?
If a breach is likely to result in a high risk to the rights and freedoms of individuals, the GDPR says you
must inform those concerned directly and without undue delay. In other words, this should take place as
soon as possible.
A ‘high risk’ means the threshold for informing individuals is higher than for notifying the ICO. Again,
you will need to assess both the severity of the potential or actual impact on individuals as a result of a
breach and the likelihood of this occurring. If the impact of the breach is more severe, the risk is higher;
if the likelihood of the consequences is greater, then again the risk is higher. In such cases, you will
need to promptly inform those affected, particularly if there is a need to mitigate an immediate risk of
damage to them. One of the main reasons for informing individuals is to help them take steps to protect
themselves from the effects of a breach.
Example
You detect an intrusion into your network and become aware that files containing personal data have
been accessed, but you don’t know how the attacker gained entry, to what extent that data was
accessed, or whether the attacker also copied the data from your system.
You notify the ICO within 72 hours of becoming aware of the breach, explaining that you don’t yet
have all the relevant details, but that you expect to have the results of your investigation within a
few days. Once your investigation uncovers details about the incident, you give the ICO more
information about the breach without delay.
Example
A hospital suffers a breach that results in an accidental disclosure of patient records. There is likely
to be a significant impact on the affected individuals because of the sensitivity of the data and their
confidential medical details becoming known to others. This is likely to result in a high risk to their
rights and freedoms, so they would need to be informed about the breach.
A university experiences a breach when a member of staff accidentally deletes a record of alumni
contact details. The details are later re-created from a backup. This is unlikely to result in a high risk
to the rights and freedoms of those individuals. They don’t need to be informed about the breach.
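The two-factor test illustrated by these examples (severity of impact, likelihood of consequences) can be sketched as a simple rule. The ordinal scales and thresholds below are our own illustrative assumptions; the GDPR sets no numeric scale, and a real assessment is a case-by-case judgement:

```python
# Illustrative ordinal scales only; not ICO thresholds.
SEVERITY = {"low": 1, "medium": 2, "high": 3}
LIKELIHOOD = {"unlikely": 1, "possible": 2, "likely": 3}

def must_notify_ico(severity: str, likelihood: str) -> bool:
    """Notify the ICO unless the breach is unlikely to result
    in any risk to rights and freedoms."""
    return not (SEVERITY[severity] == 1 and LIKELIHOOD[likelihood] == 1)

def must_inform_individuals(severity: str, likelihood: str) -> bool:
    """Higher 'high risk' threshold for telling individuals directly."""
    return SEVERITY[severity] == 3 and LIKELIHOOD[likelihood] >= 2
```

Under these assumed scales, the hospital example would cross both thresholds, while the deleted-and-restored alumni list would cross neither.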
If you decide not to notify individuals, you will still need to notify the ICO unless you can demonstrate
that the breach is unlikely to result in a risk to rights and freedoms. You should also remember that the
ICO has the power to compel you to inform affected individuals if we consider there is a high risk. In
any event, you should document your decision-making process in line with the requirements of the
accountability principle.
What information must we provide to individuals when telling them about a breach?
You need to describe, in clear and plain language, the nature of the personal data breach and, at least:
the name and contact details of your data protection officer (if your organisation has one) or other
contact point where more information can be obtained;
a description of the likely consequences of the personal data breach; and
a description of the measures taken, or proposed to be taken, to deal with the personal data breach,
including, where appropriate, the measures taken to mitigate any possible adverse effects.
Does the GDPR require us to take any other steps in response to a breach?
You should ensure that you record all breaches, regardless of whether or not they need to be reported
to the ICO.
Article 33(5) requires you to document the facts relating to the breach, its effects and the remedial
action taken. This is part of your overall obligation to comply with the accountability principle, and allows
us to verify your organisation’s compliance with its notification duties under the GDPR.
As with any security incident, you should investigate whether or not the breach was a result of human
error or a systemic issue and see how a recurrence can be prevented – whether this is through better
processes, further training or other corrective steps.
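A breach register of the kind Article 33(5) requires could be sketched as follows; the entry fields are our own assumptions about what "facts, effects and remedial action" might look like in practice:

```python
from datetime import datetime, timezone

# Illustrative internal breach register: every breach is documented,
# whether or not it was reported to the ICO.
def log_breach(register, facts, effects, remedial_action, reported_to_ico):
    entry = {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "facts": facts,
        "effects": effects,
        "remedial_action": remedial_action,
        "reported_to_ico": reported_to_ico,  # recorded either way
    }
    register.append(entry)
    return entry
```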
What else should we take into account?
The following aren’t specific GDPR requirements, but you may need to take them into account when
you’ve experienced a breach.
It is important to be aware that you may have additional notification obligations under other laws if you
experience a personal data breach. For example:
If you are a communications service provider, you must notify the ICO of any personal data breach
within 24 hours under the Privacy and Electronic Communications Regulations (PECR). You should use
our PECR breach notification form, rather than the GDPR process. Please see our pages on PECR for
more details.
If you are a UK trust service provider, you must notify the ICO of a security breach, which may
include a personal data breach, within 24 hours under the Electronic Identification and Trust Services
(eIDAS) Regulation. Where this includes a personal data breach you can use our eIDAS breach
notification form or the GDPR breach-reporting process. However, if you report it to us under the
GDPR, this still must be done within 24 hours. Please read our Guide to eIDAS for more information.
If your organisation is an operator of essential services or a digital service provider, you will have
incident-reporting obligations under the NIS Directive. These are separate from personal data breach
notification under the GDPR. If you suffer an incident that’s also a personal data breach, you will still
need to report it to the ICO separately, and you should use the GDPR process for doing so.
You may also need to consider notifying third parties such as the police, insurers, professional bodies, or
bank or credit card companies who can help reduce the risk of financial loss to individuals.
The EDPB, which has replaced WP29, may issue guidelines, recommendations and best practice advice
that may include further guidance on personal data breaches. You should look out for any such future
guidance. Likewise, you should be aware of any recommendations issued under relevant codes of
conduct or sector-specific requirements that your organisation may be subject to.
What happens if we fail to notify?
Failing to notify a breach when required to do so can result in a significant fine of up to 10 million euros
or 2 per cent of your global turnover. The fine can be combined with the ICO’s other corrective powers
under Article 58. So it’s important to have a robust breach-reporting process in place to ensure you
detect breaches, notify the ICO on time, and provide the necessary details.
Further Reading
Relevant provisions in the GDPR - See Articles 33, 34, 58, 83 and Recitals 75, 85-88
External link
In more detail - ICO guidance
Security
Accountability and governance
Draft GDPR guidance on contracts and liabilities between controllers and processors
Guide to PECR
Notification of PECR security breaches
Guide to eIDAS
We are also working to update existing Data Protection Act 1998 guidance to reflect GDPR
provisions. In the meantime, our existing guidance on encryption and A practical guide to IT
security: ideal for the small business are good starting points.
In more detail - European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state. It
adopts guidelines for complying with the requirements of the GDPR.
WP29 published the following guidelines which have been endorsed by the EDPB:
Guidelines on personal data breach notification
Guidelines on lead supervisory authorities
Other resources
Lead supervisory authority FAQs
Report a security breach
For organisations
International transfers
At a glance
The GDPR primarily applies to controllers and processors located in the European Economic Area (the
EEA) with some exceptions.
Individuals risk losing the protection of the GDPR if their personal data is transferred outside of the
EEA.
On that basis, the GDPR restricts transfers of personal data outside the EEA, or the protection of the
GDPR, unless the rights of the individuals in respect of their personal data are protected in another
way, or one of a limited number of exceptions applies.
A transfer of personal data outside the protection of the GDPR (which we refer to as a ‘restricted
transfer’), most often involves a transfer from inside the EEA to a country outside the EEA.
If you wish to make a restricted transfer, you should answer the following questions in order, until you
reach a provision which permits your restricted transfer:
1. Are we planning to make a restricted transfer of personal data outside of the EEA?
If no, you can make the transfer. If yes, go to Q2.
2. Do we need to make a restricted transfer of personal data in order to meet our purposes?
If no, you can make the transfer without any personal data. If yes, go to Q3.
3. Has the EU made an ‘adequacy decision’ in relation to the country or territory where the receiver
is located, or a sector which covers the receiver?
If yes, you can make the transfer. If no, go to Q4.
4. Have we put in place one of the ‘appropriate safeguards’ referred to in the GDPR?
If yes, you can make the transfer. If no, go to Q5.
5. Does an exception provided for in the GDPR apply?
If yes, you can make the transfer. If no, you cannot make the transfer in accordance with the
GDPR.
If you reach the end without finding a provision which permits the restricted transfer, you will be
unable to make that restricted transfer in accordance with the GDPR.
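The question sequence above is effectively a decision procedure, which can be encoded directly. In this Python sketch each parameter is a controller's answer to one question; the function name and return shape are our own assumptions:

```python
# Illustrative encoding of the restricted-transfer question sequence.
# Returns (may_proceed, provision_relied_on).
def may_transfer(restricted, data_needed, adequacy_decision,
                 appropriate_safeguards, article_49_exception):
    if not restricted:
        return True, "not a restricted transfer"
    if not data_needed:
        return True, "make the transfer without personal data"
    if adequacy_decision:
        return True, "adequacy decision"
    if appropriate_safeguards:
        return True, "appropriate safeguards"
    if article_49_exception:
        return True, "Article 49 exception"
    return False, "no provision permits the transfer"
```

Note the ordering matters: an adequacy decision is checked before safeguards, and the Article 49 exceptions are a last resort, matching the sequence of the questions above.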
In brief
What are the restrictions on international transfers?
The GDPR restricts the transfer of personal data to countries outside the EEA, or international
organisations. These restrictions apply to all transfers, no matter the size of transfer or how often you
carry them out.
Further Reading
Are we making a transfer of personal data outside the EEA?
1) Are we making a restricted transfer?
You are making a restricted transfer if:
the GDPR applies to your processing of the personal data you are transferring. The scope of the
GDPR is set out in Article 2 (what is processing of personal data) and Article 3 (where the GDPR
applies). Please see the section of the guide What is personal data. We will be providing guidance on
where the GDPR applies later this year. In general, the GDPR applies if you are processing personal
data in the EEA, and may apply in specific circumstances if you are outside the EEA and processing
personal data about individuals in the EEA;
you are sending personal data, or making it accessible, to a receiver to which the GDPR does not
apply, usually because they are located in a country outside the EEA; and
the receiver is a separate organisation or individual. The receiver cannot be employed by you or by
your company. It can be a company in the same group.
In more detail - European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state and
each EEA state. It adopts guidelines for complying with the requirements of the GDPR.
The EDPB is currently working on its guidance in relation to International Transfers, and we will
update our guide as this is published.
Relevant provisions in the GDPR – see Article 44 and Recitals 101-102
External link
Example
A UK company uses a centralised human resources service in the United States provided by its
parent company. The UK company passes information about its employees to its parent company in
connection with the HR service. This is a restricted transfer.
Transfer does not mean the same as transit. If personal data is just electronically routed through a
non-EEA country but the transfer is actually from one EEA country to another EEA country, then it is not
a restricted transfer.
You are making a restricted transfer if you collect information about individuals on paper, which is not
ordered or structured in any way, and you send this to a service company located outside of the EEA,
to:
put into digital form; or
add to a highly structured manual filing system relating to individuals.
Putting personal data on to a website will often result in a restricted transfer. The restricted transfer
takes place when someone outside the EEA accesses that personal data via the website.
If you load personal data onto a UK server which is then available through a website, and you plan or
anticipate that the website may be accessed from outside the EEA, you should treat this as a restricted
transfer.
2) Is it to a country outside the EEA?
The EEA countries consist of the EU member states and the EFTA States.
The EU member states are Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark,
Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg,
Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden and the United
Kingdom.
Example
A UK company sells holidays in Australia. It sends the personal data of customers who have bought
the holidays to the hotels they have chosen in Australia in order to secure their bookings. This is a
restricted transfer.
Example
Personal data is transferred from a controller in France to a controller in Ireland (both countries in
the EEA) via a server in Australia. There is no intention that the personal data will be accessed or
manipulated while it is in Australia. Therefore the transfer is only to Ireland.
Example
A UK insurance broker sends a set of notes about individual customers to a company in a non-EEA
country. These notes are handwritten and are not stored on computer or in any particular order. The
non-EEA company adds the notes to a computer customer management system. This is a restricted
transfer.
The EEA states are Iceland, Norway and Liechtenstein. The EEA Joint Committee has made a decision
that the GDPR applies to those countries and transfers to those countries are not restricted.
Further Reading
Do we need to make a restricted transfer of personal data to outside the EEA?
Before making a restricted transfer you should consider whether you can achieve your aims without
actually sending personal data.
If you make the data anonymous so that it is never possible to identify individuals (even when combined
with other information which is available to the receiver), it is not personal data. This means that the
restrictions do not apply and you are free to transfer the anonymised data outside the EEA.
Further Reading
How do we make a restricted transfer in accordance with the GDPR?
You must work through the following questions, in order.
If by the last question, you are still unable to make the restricted transfer, then it will be in breach of the
GDPR.
Has the EU Commission made an ‘adequacy decision’ about the country or international
organisation?
If you are making a restricted transfer then you need to know whether it is covered by an EU
Commission “adequacy decision”.
This decision is a finding by the Commission that the legal framework in place in that country, territory
or sector provides ‘adequate’ protection for individuals’ rights and freedoms for their personal data.
Adequacy decisions made prior to the GDPR remain in force unless there is a further Commission decision
which decides otherwise. The Commission plans to review these decisions at least once every four
years.
If it is covered by an adequacy decision, you may go ahead with the restricted transfer. Of course, you
must still comply with the rest of the GDPR.
All EU Commission adequacy decisions to date also cover restricted transfers made from EEA states.
The EEA Joint Committee will need to make a formal decision to adopt any future EU Commission
adequacy decisions, for them to cover restricted transfers from EEA states.
Relevant provisions in the GDPR – see Article 44 and Recital 101
External link
Relevant provisions in the GDPR – see Article 44 and Recital 26
External link
1) What ‘adequacy decisions’ have there been?
As at July 2018 the Commission has made a full finding of adequacy about the following countries and
territories:
Andorra, Argentina, Guernsey, Isle of Man, Israel, Jersey, New Zealand, Switzerland and Uruguay.
The Commission has made partial findings of adequacy about Canada and the USA.
The adequacy finding for Canada only covers data that is subject to Canada's Personal Information
Protection and Electronic Documents Act (PIPEDA). Not all data is subject to PIPEDA. For more details
please see the Commission's FAQs on the adequacy finding on the Canadian PIPEDA.
The adequacy finding for the USA is only for personal data transfers covered by the EU-US Privacy
Shield framework.
The Privacy Shield places requirements on US companies certified by the scheme to protect personal
data and provides for redress mechanisms for individuals. US Government departments such as the
Department of Commerce oversee certification under the scheme.
If you want to transfer personal data to a US organisation under the Privacy Shield, you need to:
check on the Privacy Shield list to see whether the organisation has a current certification; and
make sure the certification covers the type of data you want to transfer.
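The two checks above could be sketched against a local snapshot of the Privacy Shield list. The dictionary shape here is our own assumption; the authoritative list lives on the Privacy Shield website:

```python
# Illustrative check: the organisation must hold a current certification
# that covers the type of data to be transferred.
def shield_permits(organisation, data_type, shield_list):
    cert = shield_list.get(organisation)
    if not cert or not cert.get("current"):
        return False
    return data_type in cert.get("covers", [])
```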
We are expecting an adequacy decision for Japan soon.
You can view an up to date list of the countries which have an adequacy finding on the European
Commission's data protection website. You should check back regularly for any changes.
2) What if there is no adequacy decision?
You should move on to the next section Is the transfer covered by appropriate safeguards?
Further Reading
Relevant provisions in the GDPR – see Article 45 and Recitals 103-107 and 169
External link
In more detail - ICO guidance
Using the privacy shield to transfer data to the US
Other resources
See the Privacy Shield website for more information.
Is the restricted transfer covered by appropriate safeguards?
If there is no ‘adequacy decision’ about the country, territory or sector for your restricted transfer, you
should then find out whether you can make the transfer subject to ‘appropriate safeguards’, which are
listed in the GDPR.
These appropriate safeguards ensure that both you and the receiver of the transfer are legally required
to protect individuals’ rights and freedoms for their personal data.
If it is covered by appropriate safeguards, you may go ahead with the restricted transfer. Of course,
you must still comply with the rest of the GDPR.
Each appropriate safeguard is set out below:
1. A legally binding and enforceable instrument between public authorities or bodies
You can make a restricted transfer if you are a public authority or body and you are transferring to
another public authority or body, and you have both signed a contract or another legal instrument which
is legally binding and enforceable. This contract or instrument must include enforceable rights and
effective remedies for individuals whose personal data is transferred.
This is not an appropriate safeguard if either you or the receiver are a private body or an individual.
If you are a public authority or body which does not have the power to enter into legally binding and
enforceable arrangements, you may consider an administrative arrangement which includes enforceable
and effective individual rights.
Further Reading
2. Binding corporate rules
You can make a restricted transfer if both you and the receiver have signed up to a group document
called binding corporate rules (BCRs).
BCRs are an internal code of conduct operating within a multinational group, which applies to restricted
transfers of personal data from the group's EEA entities to non-EEA group entities.
This may be a corporate group or a group of undertakings or enterprises engaged in a joint economic
activity, such as franchises or joint ventures.
You must submit BCRs for approval to an EEA supervisory authority in an EEA country where one of the
companies is based. Usually this is where the EEA head office is located, but it does not need to be. The
criteria for choosing the lead authority for BCRs is laid down in the “Working Document Setting Forth a
Co-Operation Procedure for the approval of “Binding Corporate Rules” for controllers and processors
under the GDPR” (see “In more detail” below).
One or two other supervisory authorities will be involved in the review and approval of BCRs (depending
on how many EEA countries you are making restricted transfers from). These will be supervisory
authorities where other companies signing up to those BCRs are located.
Relevant provisions in the GDPR – see Article 46 and Recitals 108-109 and 114
External link
The concept of using BCRs to provide adequate safeguards for making restricted transfers was
developed by the Article 29 Working Party in a series of working documents. These form a ‘toolkit’ for
organisations. The documents, including application forms and guidance have all been revised and
updated in line with GDPR (see “In more detail” below).
Further Reading
3. Standard data protection clauses adopted by the Commission
You can make a restricted transfer if you and the receiver have entered into a contract incorporating
standard data protection clauses adopted by the Commission.
These are known as the ‘standard contractual clauses’ (sometimes as ‘model clauses’). There are four
sets which the Commission adopted under the Directive. They must be entered into by the data exporter
(based in the EEA) and the data importer (outside the EEA).
The clauses contain contractual obligations on the data exporter and the data importer, and rights for
the individuals whose personal data is transferred. Individuals can directly enforce those rights against
the data importer and the data exporter.
There are two sets of standard contractual clauses for restricted transfers between a controller and
controller, and two sets between a controller and processor. The earlier set of clauses between a
controller and processor can no longer be used for new contracts; it is only valid for contracts
entered into prior to 2010.
The Commission plans to update the existing standard contractual clauses for the GDPR. Until then, you
can still enter into contracts which include the Directive-based standard contractual clauses. Please keep
checking the websites of the ICO and the Commission for further information.
Existing contracts incorporating standard contractual clauses can continue to be used for restricted
transfers (even once the Commission has adopted GDPR standard contractual clauses).
In more detail - European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state and
each EEA state. It adopts guidelines for complying with the requirements of the GDPR.
WP29 adopted the following guidelines, which have been endorsed by the EDPB:
Table of elements and principles for controller BCRs (WP256)
Table of elements and principles for processor BCRs (WP257)
Co-Operation Procedure for the approval of “Binding Corporate Rules” (WP263.01)
Application Form BCR-C (WP264)
Application Form BCR-P (WP265)
Relevant provisions in the GDPR – see Articles 46-47 and Recitals 108-110 and 114
External link
If you are entering into a new contract, you must use the standard contractual clauses in their
entirety and without amendment. You can include additional clauses on business related issues,
provided that they do not contradict the standard contractual clauses. You can also add parties (i.e.
additional data importers or exporters) provided they are also bound by the standard contractual
clauses.
If you are making a restricted transfer from a controller to another controller, you can choose which set
of clauses to use, depending on which best suits your business arrangements.
If you are making a restricted transfer from a controller to a processor, you also need to comply with
the GDPR requirements about using processors .
Further Reading
Example
A family books a holiday in Australia with a UK travel company. The UK travel company sends
details of the booking to the Australian hotel.
Each company is a separate controller, as it is processing the personal data for its own purposes and
making its own decisions.
The contract between the UK travel company and the hotel should use controller to controller
standard contractual clauses.
In more detail
The Commission published the following standard contractual clauses:
2001 controller to controller
2004 controller to controller
2010 controller to processor
Relevant provisions in the GDPR – see Article 46 and Recitals 108-109 and 114
External link
4. Standard data protection clauses adopted by a supervisory authority and approved by
the Commission.
You can make a restricted transfer from the UK if you enter into a contract incorporating standard data
protection clauses adopted by the ICO.
However, neither the ICO nor any other EEA supervisory authority has yet adopted any standard data
protection clauses.
They are likely to be similar to those adopted by the Commission (above), but will be first adopted by
the supervisory authority and then approved by the Commission.
We will add more details about using this option in due course.
Further Reading
5. An approved code of conduct together with binding and enforceable commitments of the
receiver outside the EEA
You can make a restricted transfer if the receiver has signed up to a code of conduct, which has been
approved by a supervisory authority. The code of conduct must include appropriate safeguards to
protect the rights of individuals whose personal data is transferred, and which can be directly enforced.
The GDPR endorses the use of approved codes of conduct to demonstrate compliance with its
requirements.
This option is newly introduced by the GDPR and no approved codes of conduct are yet in use. We will
add more details about this option in due course.
Further Reading
6. Certification under an approved certification mechanism together with binding and
enforceable commitments of the receiver outside the EEA
You can make a restricted transfer if the receiver has a certification, under a scheme approved by a
supervisory authority. The certification scheme must include appropriate safeguards to protect the rights
of individuals whose personal data is transferred, and which can be directly enforced.
The GDPR also endorses the use of approved certification mechanisms to demonstrate compliance with
its requirements.
Relevant provisions in the GDPR – see Article 46 and Recitals 108-109 and 114
External link
Relevant provisions in the GDPR – see Article 46 and Recitals 108-109 and 114
External link
In more detail - European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state and
each EEA state. It adopts guidelines for complying with the requirements of the GDPR.
The EDPB is producing guidance on codes of conduct, in general and in relation to restricted
transfers, which will be published in due course.
This option is newly introduced by the GDPR and no approved certification schemes are yet in use. We
will add more details about this option in due course.
Further Reading
7. Contractual clauses authorised by a supervisory authority
You can make a restricted transfer if you and the receiver have entered into a bespoke contract
governing a specific restricted transfer which has been individually authorised by the supervisory
authority of the country from which the personal data is being exported. If you are making a restricted
transfer from the UK, the ICO will have had to have approved the contract.
At present the ICO is not authorising any such bespoke contracts, until guidance has been produced by
the EDPB.
8. Administrative arrangements between public authorities or bodies which include
enforceable and effective rights for the individuals whose personal data is transferred, and
which have been authorised by a supervisory authority
You can make a restricted transfer if:
you are a public authority or body making a transfer to one or more public authorities or bodies;
at least one of the public authorities or bodies does not have the power to use any of the other
appropriate safeguards (set out above). For example, it cannot enter into a binding contract;
you and the receiver have entered into an administrative arrangement (usually a document) setting
out appropriate safeguards regarding the personal data to be transferred and which provides for
effective and enforceable rights for the individuals whose personal data is transferred; and
the administrative arrangement has been individually authorised by the supervisory authority in the
country (or countries) from which you are making the restricted transfer. If the restricted transfer is
to be made from the UK, the ICO must approve it.
This is not an appropriate safeguard for restricted transfers between a public and private body.
This option is newly introduced by the GDPR and no approved administrative arrangements are yet in
use. We will add more details about this option in due course.Relevant provisions in the GDPR – see Article 46 and Recitals 108-109 and 114
External link
In more detail - European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party
(WP29), includes representatives from the data protection authorities of each EU member state and
each EEA state. It adopts guidelines for complying with the requirements of the GDPR.
The EDPB is producing guidance on certification schemes, in general and in relation to restricted
transfers, which will be published in due course.02 August 2018 - 1.0.248 250
Further Reading
What if the restricted transfer is not covered by appropriate safeguards?
If it the restricted transfer is not covered by appropriate safeguards, then you need to consider the next
question: Is the restricted transfer covered by an exception?
Is the restricted transfer covered by an exception?
If you are making a restricted transfer that is not covered by an adequacy decision, nor an appropriate
safeguard, then you can only make that transfer if it is covered by one of the ‘exceptions’ set out in
Article 49 of the GDPR.
You should only use these as true ‘exceptions’ from the general rule that you should not make a
restricted transfer unless it is covered by an adequacy decision or there are appropriate safeguards in
place.
If it is covered by an exception, you may go ahead with the restricted transfer. Of course, you must still
comply with the rest of the GDPR.
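The order of these questions matters: adequacy decision first, then appropriate safeguards, and only then the exceptions. For readers encoding this check, the sequence can be sketched as a small decision function (purely illustrative; the function name and return strings are invented and are not part of this guidance):

```python
# Illustrative sketch only: the GDPR's restricted-transfer questions,
# asked in the order the guidance sets out. Names are invented.

def may_make_restricted_transfer(adequacy_decision: bool,
                                 appropriate_safeguard: bool,
                                 article_49_exception: bool) -> str:
    if adequacy_decision:
        return "proceed: covered by an adequacy decision"
    if appropriate_safeguard:
        return "proceed: covered by an appropriate safeguard"
    if article_49_exception:
        # Exceptions are a last resort, read narrowly and case by case.
        return "proceed: covered by an Article 49 exception"
    return "do not make the restricted transfer"

# An exception is only reached when the first two answers are 'no'.
print(may_make_restricted_transfer(False, False, True))
```

Whichever route applies, the rest of the GDPR still has to be complied with; the sketch only orders the questions.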
Each exception is set out below:
Exception 1. Has the individual given his or her explicit consent to the restricted transfer?
Please see the section on consent as to what is required for a valid explicit consent under the GDPR.
As a valid consent must be both specific and informed, you must provide the individual with precise
details about the restricted transfer. You cannot obtain a valid consent for restricted transfers in general.
You should tell the individual:
the identity of the receiver, or the categories of receiver;
the country or countries to which the data is to be transferred;
why you need to make a restricted transfer;
the type of data;
the individual’s right to withdraw consent; and
the possible risks involved in making a transfer to a country which does not provide adequate
protection for personal data and without any other appropriate safeguards in place. For example, you
might explain that there will be no local supervisory authority, and no (or only limited) individual data
protection or privacy rights.
Given the high threshold for a valid consent, and that the consent must be capable of being withdrawn,
this may mean that using consent is not a feasible solution.
Exception 2. Do you have a contract with the individual? Is the restricted transfer
necessary for you to perform that contract?
Are you about to enter into a contract with the individual? Is the restricted transfer
necessary for you to take steps requested by the individual in order to enter into that
contract?
This exception explicitly states that it can only be used for occasional restricted transfers. This means that the restricted transfer may happen more than once but not regularly. If you are regularly making restricted transfers, you should be putting in place an appropriate safeguard.
The transfer must also be necessary, which means that you cannot perform the core purpose of the contract, or the core purpose of the steps needed to enter into the contract, without making the restricted transfer. It does not cover a transfer made simply so that you can use a cloud-based IT system.
Public authorities cannot rely on this exception when exercising their public powers.
Exception 3. Do you have (or are you entering into) a contract with an individual which benefits another individual whose data is being transferred? Is that transfer necessary for you to either enter into that contract or perform that contract?

As set out in Exception 2, you may only use this exception for occasional transfers, and the transfer must be necessary for you to perform the core purposes of the contract or to enter into that contract.
You may rely on both Exceptions 2 and 3: Exception 2 for the individual entering into the contract and Exception 3 for other people benefiting from that contract, often family members.

Example
A UK travel company offering bespoke travel arrangements may rely on this exception to send personal data to a hotel in Peru, provided that it does not regularly arrange for its customers to stay at that hotel. If it did, it should consider using an appropriate safeguard, such as the standard contractual clauses.
It is only necessary to send limited personal data for this purpose, such as the name of the guest, the room required and the length of stay.
Example of necessary steps being taken at the individual’s request in order to enter into a contract: before the package is confirmed (and the contract entered into), the individual wishes to reserve a room in the Peruvian hotel. The UK travel company has to send the Peruvian hotel the name of the customer in order to hold the room.

Example
Following the Exception 2 example, Exception 3 may apply if the customer is buying the travel package for themselves and their family. Once the customer has bought the package with the UK travel company, it may be necessary to send the names of the family members to the Peruvian hotel in order to book the rooms.

Exceptions 2 and 3 are not identical. You cannot rely on Exception 3 for any restricted transfers needed for steps taken prior to entering into the contract.
Public authorities cannot rely on this exception when exercising their public powers.

Exception 4: You need to make the restricted transfer for important reasons of public interest.
There must be an EU or UK law which states or implies that this type of transfer is allowed for important reasons of public interest, which may be in the spirit of reciprocity for international co-operation. For example, an international agreement or convention (which the UK or EU has signed) that recognises certain objectives and provides for international co-operation (such as the 2005 International Convention for the Suppression of Acts of Nuclear Terrorism).
This can be relied upon by both public and private entities.
If a request is made by a non-EEA authority, requesting a restricted transfer under this exception, and there is an international agreement such as a mutual legal assistance treaty (MLAT), you should consider referring the request to the existing MLAT or agreement.
You should not rely on this exception for systematic transfers. Instead, you should consider one of the appropriate safeguards. You should only use it in specific situations, and each time you should satisfy yourself that the transfer is necessary for an important reason of public interest.

Exception 5: You need to make the restricted transfer to establish if you have a legal claim, to make a legal claim or to defend a legal claim.
This exception explicitly states that you can only use it for occasional transfers. This means that the transfer may happen more than once but not regularly. If you are regularly transferring personal data, you should put in place an appropriate safeguard.
The transfer must be necessary, so there must be a close connection between the need for the transfer and the relevant legal claim.
The claim must have a basis in law, and a formal, legally defined process, but it is not limited to judicial or administrative procedures. This means that you can interpret what is a legal claim quite widely, to cover, for example:
all judicial legal claims, in civil law (including contract law) and criminal law. The court procedure does not need to have been started, and it covers out-of-court procedures. It covers formal pre-trial discovery procedures.
administrative or regulatory procedures, such as to defend an investigation (or potential investigation) in anti-trust law or financial services regulation, or to seek approval for a merger.
You cannot rely on this exception if there is only the mere possibility that a legal claim or other formal
proceedings may be brought in the future.
Public authorities can rely on this exception, in relation to the exercise of their powers.
Exception 6: You need to make the restricted transfer to protect the vital interests of an
individual. He or she must be physically or legally incapable of giving consent.
This applies in a medical emergency where the transfer is needed in order to give the medical care
required. The imminent risk of serious harm to the individual must outweigh any data protection
concerns.
You cannot rely on this exception to carry out general medical research.
If the individual is physically and legally capable of giving consent, then you cannot rely on this
exception.
For detail as to what is considered a ‘vital interest’ under the GDPR, please see the section on vital interests as a condition of processing special category data.
For detail as to what is ‘consent’ under the GDPR, please see the section on consent.
Exception 7: You are making the restricted transfer from a public register.
The register must be created under UK or EU law and must be open to either:
the public in general; or
any person who can demonstrate a legitimate interest.
For example, registers of companies, associations, criminal convictions, land registers or public vehicle
registers. The whole of the register cannot be transferred, nor whole categories of personal data.
The transfer must comply with any general laws which apply to disclosures from the public register. If
the register has been established at law and access is only given to those with a legitimate interest, part
of that assessment must take into account the data protection rights of the individuals whose personal
data is to be transferred. This may include consideration of the risk to that personal data by transferring
it to a country with less protection.
This does not cover registers run by private companies, such as credit reference databases.
Exception 8: you are making a one-off restricted transfer and it is in your compelling
legitimate interests.
If you cannot rely on any of the other exceptions, there is one final exception to consider. This exception
should not be relied on lightly and never routinely as it is only for truly exceptional circumstances.
For this exception to apply to your restricted transfer:
1. There must be no adequacy decision which applies.
2. You are unable to use any of the other appropriate safeguards. You must give serious consideration to this, even if it would involve significant investment from you.
3. None of the other exceptions apply. Again, you must give serious consideration to the other exceptions. It may be that you can obtain explicit consent with some effort or investment.
4. Your transfer must not be repetitive – that is, it may happen more than once but not regularly.
5. The personal data must only relate to a limited number of individuals. There is no absolute threshold for this. The number of individuals involved should be part of the balancing exercise you must undertake in point 7 below.
6. The transfer must be necessary for your compelling legitimate interests. Please see the section of the guide on legitimate interests as a lawful basis for processing, but bearing in mind that this exception requires a higher standard, as it must be a compelling legitimate interest. An example is a transfer of personal data to protect a company’s IT systems from serious immediate harm.
7. On balance, your compelling legitimate interests outweigh the rights and freedoms of the individuals.
8. You have made a full assessment of the circumstances surrounding the transfer and provided suitable safeguards to protect the personal data. Suitable safeguards might be strict confidentiality agreements, a requirement for data to be deleted soon after transfer, technical controls to prevent the use of the data for other purposes, or sending pseudonymised or encrypted data. This must be recorded in full in your documentation of your processing activities.
9. You have informed the ICO of the transfer. We will ask to see full details of all the steps you have taken as set out above.
10. You have informed the individual of the transfer and explained your compelling legitimate interest to them.
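Because conditions 1–10 are cumulative, a single unmet condition defeats Exception 8. That all-or-nothing character can be sketched as follows (an illustrative sketch only; the condition labels are paraphrases invented for this example, not official wording):

```python
# Illustrative only: Exception 8 requires ALL of the listed conditions.
# The labels paraphrase conditions 1-10 above and are not official wording.

EXCEPTION_8_CONDITIONS = [
    "no adequacy decision applies",
    "no appropriate safeguard can be used",
    "no other exception applies",
    "transfer is not repetitive",
    "data relates to a limited number of individuals",
    "necessary for compelling legitimate interests",
    "interests outweigh the individuals' rights and freedoms",
    "full assessment made and suitable safeguards provided",
    "ICO informed of the transfer",
    "individuals informed of the transfer",
]

def exception_8_applies(answers: dict) -> bool:
    # One unmet (or unanswered) condition defeats the exception.
    return all(answers.get(c, False) for c in EXCEPTION_8_CONDITIONS)

answers = {c: True for c in EXCEPTION_8_CONDITIONS}
answers["ICO informed of the transfer"] = False
print(exception_8_applies(answers))  # False: the ICO has not been informed
```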
Further Reading
Relevant provisions in the GDPR – see Article 49 and Recitals 111-112
External link
In more detail - European Data Protection Board
The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party (WP29), includes representatives from the data protection authorities of each EU member state and each EEA state. It adopts guidelines for complying with the requirements of the GDPR.
The EDPB adopted Guidelines 2/2018 on derogations of Article 49 under Regulation 2016/679.
Exemptions
At a glance
The GDPR and the Data Protection Act 2018 set out exemptions from some of the rights and
obligations in some circumstances.
Whether or not you can rely on an exemption often depends on why you process personal data.
You should not routinely rely on exemptions; you should consider them on a case-by-case basis.
You should justify and document your reasons for relying on an exemption.
If no exemption covers what you do with personal data, you need to comply with the GDPR as
normal.
Checklists

Exemptions
☐ We consider whether we can rely on an exemption on a case-by-case basis.
☐ Where appropriate, we carefully consider the extent to which the relevant GDPR requirements would be likely to prevent, seriously impair, or prejudice the achievement of our processing purposes.
☐ We justify and document our reasons for relying on an exemption.
☐ When an exemption does not apply (or no longer applies) to our processing of personal data, we comply with the GDPR’s requirements as normal.

In brief
What’s new under the GDPR and the Data Protection Act 2018?
What are exemptions?
How do exemptions work?
What exemptions are available?

What’s new under the GDPR and the Data Protection Act 2018?
Not much has changed. Most of the exemptions in the Data Protection Act 1998 (the 1998 Act) are included as exceptions built in to certain GDPR provisions or exemptions in the Data Protection Act 2018 (the DPA 2018).
The ‘domestic purposes’ exemption in the 1998 Act is not replicated. This is because the GDPR does not
apply to personal data processed in the course of a purely personal or household activity, with no
connection to a professional or commercial activity.
If you used to rely on certain exemptions under the 1998 Act, the things you are exempt from may
have changed slightly under the GDPR and the DPA 2018. You should check what is covered by the
exemptions in the DPA 2018 and ensure that your use of any of the exemptions is appropriate and
compliant.
What are exemptions?
In some circumstances, the DPA 2018 provides an exemption from particular GDPR provisions. If an
exemption applies, you may not have to comply with all the usual rights and obligations.
There are several different exemptions; these are detailed in Schedules 2-4 of the DPA 2018. They add
to and complement a number of exceptions already built in to certain GDPR provisions.
This part of the Guide focuses on the exemptions in Schedules 2-4 of the DPA 2018. We give guidance
on the exceptions built in to the GDPR in the parts of the Guide that relate to the relevant provisions.
The exemptions in the DPA 2018 can relieve you of some of your obligations for things such as:
the right to be informed;
the right of access;
dealing with other individual rights;
reporting personal data breaches; and
complying with the principles.
Some exemptions apply to only one of the above, but others can exempt you from several things.
Some things are not exemptions. This is simply because they are not covered by the GDPR. Here are
some examples:
Domestic purposes – personal data processed in the course of a purely personal or household
activity, with no connection to a professional or commercial activity, is outside the GDPR’s scope. This
means that if you only use personal data for such things as writing to friends and family or taking
pictures for your own enjoyment, you are not subject to the GDPR.
Law enforcement – the processing of personal data by competent authorities for law enforcement
purposes is outside the GDPR’s scope (e.g. the Police investigating a crime). Instead, this type of
processing is subject to the rules in Part 3 of the DPA 2018. See our Guide to Law Enforcement
Processing for further information.
National security – personal data processed for the purposes of safeguarding national security or
defence is outside the GDPR’s scope. However, it is covered by Part 2, Chapter 3 of the DPA 2018
(the ‘applied GDPR’), which contains an exemption for national security and defence.
How do exemptions work?
Whether or not you can rely on an exemption generally depends on your purposes for processing
personal data.
Some exemptions apply simply because you have a particular purpose. But others only apply to the
extent that complying with the GDPR would:
be likely to prejudice your purpose (e.g. have a damaging or detrimental effect on what you are
doing); or
prevent or seriously impair you from processing personal data in a way that is required or necessary
for your purpose.
Exemptions should not routinely be relied upon or applied in a blanket fashion. You must consider each
exemption on a case-by-case basis.
If an exemption does apply, sometimes you will be obliged to rely on it (for instance, if complying with
GDPR would break another law), but sometimes you can choose whether or not to rely on it.
In line with the accountability principle, you should justify and document your reasons for relying on an
exemption so you can demonstrate your compliance.
If you cannot identify an exemption that covers what you are doing with personal data, you must
comply with the GDPR as normal.
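The two-stage test described above — a qualifying purpose, plus (for prejudice-based exemptions) the extent test — together with the accountability duty to record the reasoning, could be sketched like this (an illustrative sketch; none of these names come from the legislation):

```python
# Illustrative sketch of the case-by-case exemption test described above.
# All names are invented; this is not an official decision tool.

def exemption_available(qualifying_purpose: bool,
                        prejudice_based: bool,
                        compliance_would_prejudice: bool,
                        reason: str,
                        record: list) -> bool:
    """Decide one case and document the justification (accountability)."""
    if not qualifying_purpose:
        decision = False
    elif prejudice_based and not compliance_would_prejudice:
        # The purpose qualifies, but compliance causes no prejudice,
        # so the GDPR must be complied with as normal.
        decision = False
    else:
        decision = True
    record.append({"reason": reason, "exemption_applied": decision})
    return decision

record = []
# Qualifying purpose, but no prejudice from complying: no exemption.
print(exemption_available(True, True, False, "example disclosure", record))
```

The `record` list stands in for the documentation of your reasons that the accountability principle expects; each case gets its own entry rather than a blanket decision.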
What exemptions are available?
Crime, law and public protection
Crime and taxation: general
Crime and taxation: risk assessment
Information required to be disclosed by law or in connection with legal proceedings
Legal professional privilege
Self incrimination
Disclosure prohibited or restricted by an enactment
Immigration
Functions designed to protect the public
Audit functions
Bank of England functions
Regulation, parliament and the judiciary
Regulatory functions relating to legal services, the health service and children’s services
Other regulatory functions
Parliamentary privilege
Judicial appointments, independence and proceedings
Crown honours, dignities and appointments
Journalism, research and archiving
Journalism, academia, art and literature
Research and statistics
Archiving in the public interest
Health, social work, education and child abuse
Health data – processed by a court
Health data – an individual’s expectations and wishes
Health data – serious harm
Health data – restriction of the right of access
Social work data – processed by a court
Social work data – an individual’s expectations and wishes
Social work data – serious harm
Social work data – restriction of the right of access
Education data – processed by a court
Education data – serious harm
Education data – restriction of the right of access
Child abuse data
Finance, management and negotiations
Corporate finance
Management forecasts
Negotiations
References and exams
Confidential references
Exam scripts and exam marks
Subject access requests – information about other people
Protection of the rights of others
Crime and taxation: general
There are two parts to this exemption. The first part can apply if you process personal data for the
purposes of:
the prevention and detection of crime;
the apprehension or prosecution of offenders; or
the assessment or collection of a tax or duty or an imposition of a similar nature.
It exempts you from the GDPR’s provisions on:
the right to be informed;
all the other individual rights, except rights related to automated individual decision-making including
profiling;
notifying individuals of personal data breaches;
the lawfulness, fairness and transparency principle, except the requirement for processing to be
lawful;
the purpose limitation principle; and
all the other principles, but only so far as they relate to the right to be informed and the other
individual rights.
But the exemption only applies to the extent that complying with these provisions would be likely to
prejudice your purposes of processing. If this is not so, you must comply with the GDPR as normal.
The second part of this exemption applies when another controller obtains personal data processed for
any of the purposes mentioned above for the purposes of discharging statutory functions. The controller
that obtains the personal data is exempt from the GDPR provisions below to the same extent that the
original controller was exempt:
The right to be informed.
The right of access.
All the principles, but only so far as they relate to the right to be informed and the right of access.
Note that if you are a competent authority processing personal data for law enforcement purposes (e.g.
the Police conducting a criminal investigation), your processing is subject to the rules of Part 3 of the
DPA 2018. See our Guide to Law Enforcement Processing for information on how individual rights may
be restricted when personal data is processed for law enforcement purposes by competent authorities.
Example
A bank conducts an investigation into suspected financial fraud. The bank wants to pass its investigation file, including the personal data of several customers, to the National Crime Agency (NCA) for further investigation. The bank’s investigation and proposed disclosure to the NCA are for the purposes of the prevention and detection of crime. The bank decides that, were it to inform the individuals in question about this processing of their personal data, this would be likely to prejudice the investigation because they might abscond or destroy evidence. So the bank relies on the crime and taxation exemption and, in this case, does not comply with the right to be informed.

Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 1, Paragraph 2
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), 15(1)-(3), 16, 17(1) and (2), 18(1), 19, 20(1) and (2), 21(1), and 34(1) and (4)
External link

Crime and taxation: risk assessment
This exemption can apply to personal data in a classification applied to an individual as part of a risk assessment system.
The risk assessment system must be operated by a government department, local authority, or another authority administering housing benefit, for the purposes of:
the assessment or collection of a tax or duty; or
the prevention or detection of crime or the apprehension or prosecution of offenders, where the offence involves the unlawful use of public money or an unlawful claim for payment out of public money.
It exempts you from the GDPR’s provisions on:
the right to be informed;
the right of access;
all the principles, but only so far as they relate to the right to be informed and the right of access.
But the exemption only applies to the extent that complying with these provisions would prevent the risk assessment system from operating effectively. If this is not so, you must comply with these provisions as normal.

Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 1, Paragraph 3
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), and 15(1)-(3)
External link

Information required to be disclosed by law or in connection with legal proceedings
This exemption has three parts. The first part can apply if you are required by law to make personal data available to the public.
It exempts you from the GDPR’s provisions on:
the right to be informed;
all the other individual rights, except rights related to automated individual decision-making including
profiling;
the lawfulness, fairness and transparency principle, except the requirement for processing to be
lawful;
the purpose limitation principle; and
all the other principles, but only so far as they relate to the right to be informed and the other
individual rights.
But the exemption only applies to the extent that complying with these provisions would prevent you meeting your legal obligation to make personal data publicly available.

Example
The Registrar of Companies is legally obliged to maintain a public register of certain information about companies, including the names and (subject to certain restrictions) addresses of company directors. A director asks to exercise his right to erasure by having his name and address removed from the register. The request does not need to be complied with as it would prevent the Registrar meeting his legal obligation to make that information publicly available.

The second part of this exemption can apply if you are required by law, or court order, to disclose personal data to a third party. It exempts you from the same provisions as above, but only to the extent that complying with those provisions would prevent you disclosing the personal data.

Example
An employer receives a court order to hand over the personnel file of one of its employees to an insurance company for the assessment of a claim. Normally, the employer would not be able to disclose this information because doing so would be incompatible with the original purposes for collecting the data (contravening the purpose limitation principle). However, on this occasion the employer is exempt from the purpose limitation principle’s requirements, because complying with them would prevent the employer making a disclosure that it is required to make by court order.

The third part of this exemption can apply if it is necessary for you to disclose personal data for the purposes of, or in connection with:
legal proceedings, including prospective legal proceedings;
obtaining legal advice; or
establishing, exercising or defending legal rights.
It exempts you from the same provisions as above, but only to the extent that complying with them would prevent you disclosing the personal data. If complying with these provisions would not prevent the disclosure, you cannot rely on the exemption.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 1, Paragraph 5
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), 15(1)-(3), 16, 17(1)-(2), 18(1), 19, 20(1)-(2), and 21(1)
External link

Legal professional privilege
This exemption applies if you process personal data:
to which a claim to legal professional privilege (or confidentiality of communications in Scotland) could be maintained in legal proceedings; or
in respect of which a duty of confidentiality is owed by a professional legal adviser to his client.
It exempts you from the GDPR’s provisions on:
the right to be informed;
the right of access; and
all the principles, but only so far as they relate to the right to be informed and the right of access.

Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 4, Paragraph 19
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), and 15(1)-(3)
External link

Self incrimination
This exemption can apply if complying with the GDPR provisions below would reveal evidence that you have committed an offence.
It exempts you from the GDPR’s provisions on:
the right to be informed;
the right of access; and
all the principles, but only so far as they relate to the right to be informed and the right of access.
But the exemption only applies to the extent that complying with these provisions would expose you to proceedings for the offence.
This exemption does not apply to an offence under the DPA 2018 or an offence regarding false statements made otherwise than on oath.
But any information you do provide to an individual in response to a subject access request is not admissible against you in proceedings for an offence under the DPA 2018.

Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 4, Paragraph 20
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), and 15(1)-(3)
External link

Disclosure prohibited or restricted by an enactment
Five separate exemptions apply to personal data that is prohibited or restricted from disclosure by an enactment.
Each of them exempts you from the GDPR’s provisions on:
the right of access; and
all the principles, but only so far as they relate to the right of access.
But the exemptions only apply to personal data restricted or prohibited from disclosure by certain specific provisions of enactments covering:
human fertilisation and embryology;
adoption;
special educational needs;
parental orders; and
children’s hearings.
If you think any of these exemptions might apply to your processing of personal data, see Schedule 4 of the DPA 2018 for full details of the enactments that are covered.

Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemptions) - Schedule 4
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5 and 15(1)-(3)
External link
Immigration
There are two parts to this exemption. The first part can apply if you process personal data for the
purposes of maintaining effective immigration control, including investigatory/detection work (the
immigration purposes).
It exempts you from the GDPR’s provisions on:
the right to be informed;
the right of access;
the right to erasure;
the right to restrict processing;
the right to object;
all the principles, but only so far as they relate to the rights to be informed, of access, to erasure, to
restrict processing and to object.
But the exemption only applies to the extent that applying these provisions would be likely to prejudice
processing for the immigration purposes. If not, the exemption does not apply.
The second part of this exemption applies when personal data processed by any controller is obtained
and processed by another controller for the immigration purposes. The controller that discloses the
personal data is exempt from the GDPR’s provisions on:
the right to be informed;
the right of access;
all the principles, but only so far as they relate to the right to be informed and the right of access.
The exemption only applies to the same extent that the second controller is exempt from these
provisions.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 1, Paragraph 4
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), 15(1)-(3), 17(1)-(2), 18(1), and 21(1)
External link

Functions designed to protect the public
This exemption can apply if you process personal data for the purposes of discharging one of six functions designed to protect the public.
The first four functions must: be conferred on a person by enactment; be a function of the Crown, a Minister of the Crown or a government department; or be of a public nature and exercised in the public interest. These functions are:
1. to protect the public against financial loss due to the seriously improper conduct (or unfitness, or incompetence) of financial services providers, or in the management of bodies corporate, or due to the conduct of bankrupts;
2. to protect the public against seriously improper conduct (or unfitness, or incompetence);
3. to protect charities or community interest companies against misconduct or mismanagement in their administration, to protect the property of charities or community interest companies from loss or misapplication, or to recover the property of charities or community interest companies; or
4. to secure workers’ health, safety and welfare or to protect others against health and safety risks in connection with (or arising from) someone at work.
The fifth function must be conferred by enactment on: the Parliamentary Commissioner for Administration; the Commissioner for Local Administration in England; the Health Service Commissioner for England; the Public Services Ombudsman for Wales; the Northern Ireland Public Services Ombudsman; the Prison Ombudsman for Northern Ireland; or the Scottish Public Services Ombudsman. This function is:
5. to protect the public from maladministration, or a failure in services provided by a public body, or from the failure to provide a service that it is a function of a public body to provide.
The sixth function must be conferred by enactment on the Competition and Markets Authority. This function is:
6. to protect members of the public from business conduct adversely affecting them, to regulate conduct (or agreements) preventing, restricting or distorting commercial competition, or to regulate undertakings abusing a dominant market position.
If you process personal data for any of the above functions, you are exempt from the GDPR’s provisions
on:
the right to be informed;
all the other individual rights, except rights related to automated individual decision-making including
profiling; and
all the principles, but only so far as they relate to the right to be informed and the other individual
rights.
But the exemption only applies to the extent that complying with these provisions would be likely to
prejudice the proper discharge of your functions. If you can comply with these provisions and discharge
your functions as normal, you must do so.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 1,
Paragraph 7
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4),
15(1)-(3), 16, 17(1)-(2), 18(1), 19, 20(1)-(2), and 21(1)
External link
Audit functions
This exemption can apply if you process personal data for the purposes of discharging a function
conferred by enactment on:
the Comptroller and Auditor General;
the Auditor General for Scotland;
the Auditor General for Wales; or
the Comptroller and Auditor General for Northern Ireland.
It exempts you from the GDPR’s provisions on:
the right to be informed;
all the other individual rights, except rights related to automated individual decision-making including
profiling; and
all the principles, but only so far as they relate to the right to be informed and the other individual
rights.
But the exemption only applies to the extent that complying with these provisions would be likely to
prejudice the proper discharge of your functions. If it does not, you must comply with the GDPR as
normal.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 1, Paragraph 8
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), 15(1)-(3), 16, 17(1)-(2), 18(1), 19, 20(1)-(2), and 21(1)
External link
Bank of England functions
This exemption can apply if you process personal data for the purposes of discharging a function of the Bank of England:
in its capacity as a monetary authority;
that is a public function (within the meaning of Section 349 of the Financial Services and Markets Act 2000); or
that is conferred on the Prudential Regulation Authority by enactment.
It exempts you from the GDPR’s provisions on:
the right to be informed;
all the other individual rights, except rights related to automated individual decision-making including profiling; and
all the principles, but only so far as they relate to the right to be informed and the other individual
rights.
But the exemption only applies to the extent that complying with these provisions would be likely to
prejudice the proper discharge of your functions. If this is not so, the exemption does not apply.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 1, Paragraph 9
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), 15(1)-(3), 16, 17(1)-(2), 18(1), 19, 20(1)-(2), and 21(1)
External link
Regulatory functions relating to legal services, the health service and children’s services
This exemption can apply if you process personal data for the purposes of discharging a function of:
the Legal Services Board;
considering a complaint under:
Part 6 of the Legal Services Act 2007,
Section 14 of the NHS Redress Act 2006,
Section 113(1) or (2), or Section 114(1) or (3) of the Health and Social Care (Community Health and Standards) Act 2003,
Section 24D or 26 of the Children Act 1989, or
Part 2A of the Public Services Ombudsman (Wales) Act 2005; or
considering a complaint or representations under Chapter 1, Part 10 of the Social Services and Well-being (Wales) Act 2014.
It exempts you from the GDPR’s provisions on:
the right to be informed;
all the other individual rights, except rights related to automated individual decision-making including profiling; and
all the principles, but only so far as they relate to the right to be informed and the other individual rights.
But the exemption only applies to the extent that complying with these provisions would be likely to prejudice the proper discharge of your functions. If you can comply with these provisions and discharge your functions as normal, you cannot rely on the exemption.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 2, Paragraph 10
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), 15(1)-(3), 16, 17(1)-(2), 18(1), 19, 20(1)-(2), and 21(1)
External link
Other regulatory functions
This exemption can apply if you process personal data for the purpose of discharging a regulatory function conferred under specific, listed legislation on any one of 14 bodies and persons. These are:
the Information Commissioner;
the Scottish Information Commissioner;
the Pensions Ombudsman;
the Board of the Pension Protection Fund;
the Ombudsman for the Board of the Pension Protection Fund;
the Pensions Regulator;
the Financial Conduct Authority;
the Financial Ombudsman;
the investigator of complaints against the financial regulators;
a consumer protection enforcer (other than the Competition and Markets Authority);
the monitoring officer of a relevant authority;
the monitoring officer of a relevant Welsh authority;
the Public Services Ombudsman for Wales; or
the Charity Commission.
It exempts you from the GDPR’s provisions on:
the right to be informed;
all the other individual rights, except rights related to automated individual decision-making including profiling; and
all the principles, but only so far as they relate to the right to be informed and the other individual rights.
But the exemption only applies to the extent that complying with these provisions would be likely to prejudice the proper discharge of your function. If this is not so, you must comply with these provisions as you normally would.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 2, Paragraphs 11-12
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), 15(1)-(3), 16, 17(1)-(2), 18(1), 19, 20(1)-(2), and 21(1)
External link
Parliamentary privilege
This exemption can apply if it is required to avoid the privileges of either House of Parliament being infringed.
It exempts you from the GDPR’s provisions on:
the right to be informed;
all the other individual rights, except rights related to automated individual decision-making including profiling;
the communication of personal data breaches to individuals; and
all the principles, but only so far as they relate to the right to be informed and the other individual rights.
But if you can comply with these provisions without infringing parliamentary privilege, you must do so.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 2, Paragraph 13
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), 15(1)-(3), 16, 17(1)-(2), 18(1), 19, 20(1)-(2), 21(1), and 34(1) and (4)
External link
Judicial appointments, independence and proceedings
This exemption applies if you process personal data:
for the purposes of assessing a person’s suitability for judicial office or the office of Queen’s Counsel;
as an individual acting in a judicial capacity; or
as a court or tribunal acting in its judicial capacity.
It exempts you from the GDPR’s provisions on:
the right to be informed;
all the other individual rights, except rights related to automated individual decision-making including profiling; and
all the principles, but only so far as they relate to the right to be informed and the other individual
rights.
Additionally, even if you do not process personal data for the reasons above, you are also exempt from
the same provisions of the GDPR to the extent that complying with them would be likely to prejudice
judicial independence or judicial proceedings.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 2, Paragraph 14
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), 15(1)-(3), 16, 17(1)-(2), 18(1), 19, 20(1)-(2), and 21(1)
External link
Crown honours, dignities and appointments
This exemption applies if you process personal data for the purposes of:
conferring any honour or dignity by the Crown; or
assessing a person’s suitability for any of the following offices:
archbishops and diocesan and suffragan bishops in the Church of England,
deans of cathedrals of the Church of England,
deans and canons of the two Royal Peculiars,
the First and Second Church Estates Commissioners,
lord-lieutenants,
Masters of Trinity College and Churchill College, Cambridge,
the Provost of Eton,
the Poet Laureate, or
the Astronomer Royal.
It exempts you from the GDPR’s provisions on:
the right to be informed;
all the other individual rights, except rights related to automated individual decision-making including profiling; and
all the principles, but only so far as they relate to the right to be informed and the other individual rights.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 2, Paragraph 15
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), 15(1)-(3), 16, 17(1)-(2), 18(1), 19, 20(1)-(2), and 21(1)
External link
Journalism, academia, art and literature
This exemption can apply if you process personal data for:
journalistic purposes;
academic purposes;
artistic purposes; or
literary purposes.
Together, these are known as the ‘special purposes’.
The exemption relieves you from your obligations regarding the GDPR’s provisions on:
all the principles, except the security and accountability principles;
the lawful bases;
the conditions for consent;
children’s consent;
the conditions for processing special categories of personal data and data about criminal convictions and offences;
processing not requiring identification;
the right to be informed;
all the other individual rights, except rights related to automated individual decision-making including profiling;
the communication of personal data breaches to individuals;
consultation with the ICO for high risk processing;
international transfers of personal data; and
cooperation and consistency between supervisory authorities.
But the exemption only applies to the extent that:
as controller for the processing of personal data, you reasonably believe that compliance with these provisions would be incompatible with the special purposes (this must be more than just an inconvenience);
the processing is being carried out with a view to the publication of some journalistic, academic, artistic or literary material; and
you reasonably believe that the publication of the material would be in the public interest, taking into account the special importance of the general public interest in freedom of expression, any specific public interest in the particular subject, and the potential to harm individuals.
When deciding whether it is reasonable to believe that publication would be in the public interest, you
must (if relevant) have regard to:
the BBC Editorial Guidelines;
the Ofcom Broadcasting Code; and
the Editors’ Code of Practice.
We expect you to be able to explain why the exemption is required in each case, and how and by whom
this was considered at the time. The ICO does not have to agree with your view – but we must be
satisfied that you had a reasonable belief.
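The three qualifying conditions are cumulative: failing any one of them defeats the exemption. A minimal sketch of that structure, with hypothetical names of our own rather than anything from the DPA 2018 or ICO guidance, and with each limb standing in for a reasoned, documented judgement rather than a simple flag:

```python
# Illustrative checklist for the special purposes exemption; all three
# conditions must hold. Each boolean stands in for a documented, reasoned
# belief, not a box-ticking exercise.

def special_purposes_exemption_available(
        incompatible_with_special_purposes: bool,  # more than mere inconvenience
        with_a_view_to_publication: bool,
        publication_in_public_interest: bool) -> bool:
    return (incompatible_with_special_purposes
            and with_a_view_to_publication
            and publication_in_public_interest)

# Failing any single limb defeats the exemption.
assert special_purposes_exemption_available(True, True, False) is False
assert special_purposes_exemption_available(True, True, True) is True
```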
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 5, Paragraph 26
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5(1)(a)-(e), 6, 7, 8(1)-(2), 9, 10, 11(2), 13(1)-(3), 14(1)-(4), 15(1)-(3), 16, 17(1)-(2), 18(1)(a)-(b) and (d), 19, 20(1)-(2), 21(1), 34(1) and (4), 36, 44, and 60-67
External link
Research and statistics
This exemption can apply if you process personal data for:
scientific or historical research purposes; or
statistical purposes.
It does not apply to the processing of personal data for commercial research purposes such as market research or customer satisfaction surveys.
It exempts you from the GDPR’s provisions on:
the right of access;
the right to rectification;
the right to restrict processing; and
the right to object.
The GDPR also provides exceptions from its provisions on the right to be informed (for indirectly collected data) and the right to erasure.
But the exemption and the exceptions only apply:
to the extent that complying with the provisions above would prevent or seriously impair the achievement of the purposes for processing;
if the processing is subject to appropriate safeguards for individuals’ rights and freedoms (see Article 89(1) of the GDPR – among other things, you must implement data minimisation measures);
if the processing is not likely to cause substantial damage or substantial distress to an individual;
if the processing is not used for measures or decisions about particular individuals, except for
approved medical research; and
as regards the right of access, the research results are not made available in a way that identifies
individuals.
Additionally, the GDPR contains specific provisions that adapt the application of the purpose limitation
and storage limitation principles when you process personal data for scientific or historical research
purposes, or statistical purposes. See the Guide pages on these principles for more detail.
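Because every one of the conditions above must be satisfied, it can help to treat them as a checklist and record which, if any, are unmet. The sketch below is illustrative only; the function and condition names are our own, not terms from the DPA 2018 or the GDPR:

```python
# Illustrative sketch: the cumulative conditions gating the research and
# statistics exemption. An empty result means the exemption can apply
# (to the extent claimed); otherwise the listed conditions block it.

def research_exemption_blockers(conditions: dict[str, bool]) -> list[str]:
    """Return the names of any unmet conditions."""
    required = [
        "compliance_would_prevent_or_seriously_impair_purposes",
        "article_89_1_safeguards_in_place",          # incl. data minimisation
        "no_substantial_damage_or_distress_likely",
        "not_used_for_decisions_about_individuals",  # save approved medical research
        "results_do_not_identify_individuals",       # right of access only
    ]
    return [name for name in required if not conditions.get(name, False)]

checks = {
    "compliance_would_prevent_or_seriously_impair_purposes": True,
    "article_89_1_safeguards_in_place": True,
    "no_substantial_damage_or_distress_likely": True,
    "not_used_for_decisions_about_individuals": True,
    "results_do_not_identify_individuals": True,
}
assert research_exemption_blockers(checks) == []
checks["article_89_1_safeguards_in_place"] = False
assert research_exemption_blockers(checks) == ["article_89_1_safeguards_in_place"]
```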
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 6, Paragraph 27
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5(1)(b) and (e), 14(1)-(4), 15(1)-(3), 16, 18(1) and 21(1)
External link
Archiving in the public interest
This exemption can apply if you process personal data for archiving purposes in the public interest.
It exempts you from the GDPR’s provisions on:
the right of access;
the right to rectification;
the right to restrict processing;
the obligation to notify others regarding rectification, erasure or restriction;
the right to data portability; and
the right to object.
The GDPR also provides exceptions from its provisions on the right to be informed (for indirectly collected data) and the right to erasure.
But the exemption and the exceptions only apply:
to the extent that complying with the provisions above would prevent or seriously impair the achievement of the purposes for processing;
if the processing is subject to appropriate safeguards for individuals’ rights and freedoms (see Article 89(1) of the GDPR – among other things, you must implement data minimisation measures);
if the processing is not likely to cause substantial damage or substantial distress to an individual; and
if the processing is not used for measures or decisions about particular individuals, except for approved medical research.
Additionally, the GDPR contains specific provisions that adapt the application of the purpose limitation and storage limitation principles when you process personal data for archiving purposes in the public
interest. See the Guide pages on these principles for more detail.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 6, Paragraph 28
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5(1)(b) and (e), 14(1)-(4), 15(1)-(3), 16, 18(1), 19, 20(1) and 21(1)
External link
Relevant provisions in the GDPR (the appropriate safeguards) - Article 89(1) and Recital 156
External link
Relevant provisions in the Data Protection Act 2018 (safeguards) - Section 19
External link
Health data – processed by a court
This exemption can apply to health data (personal data concerning health) that is processed by a court.
It exempts you from the GDPR’s provisions on:
the right to be informed;
all the other individual rights, except rights related to automated individual decision-making including profiling; and
all the principles, but only so far as they relate to the right to be informed and the other individual rights.
But the exemption only applies if the health data is:
supplied in a report or evidence given to the court in the course of proceedings; and
those proceedings are subject to certain specific statutory rules that allow the data to be withheld from the individual it relates to.
If you think this exemption might apply to your processing of personal data, see paragraph 3(2) of Schedule 3, Part 2 of the DPA 2018 for full details of the statutory rules.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 3, Part 2, Paragraph 3
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), 15(1)-(3), 16, 17(1)-(2), 18(1), 20(1)-(2), and 21(1)
External link
Health data – an individual’s expectations and wishes
This exemption can apply if you receive a request (in exercise of a power conferred by an enactment or
rule of law) for health data from:
someone with parental responsibility for an individual aged under 18 (or 16 in Scotland); or
someone appointed by the court to manage the affairs of an individual who is incapable of managing
their own affairs.
It exempts you from the GDPR’s provisions on:
the right to be informed;
all the other individual rights, except rights related to automated individual decision-making including
profiling; and
all the principles, but only so far as they relate to the right to be informed and the other individual
rights.
But the exemption only applies to the extent that complying with the request would disclose information
that:
the individual provided in the expectation that it would not be disclosed to the requestor, unless the
individual has since expressly indicated that they no longer have that expectation;
was obtained as part of an examination or investigation to which the individual consented in the
expectation that the information would not be disclosed in this way, unless the individual has since
expressly indicated that they no longer have that expectation; or
the individual has expressly indicated should not be disclosed in this way.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 3, Part 2, Paragraph 4
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), 15(1)-(3), 16, 17(1)-(2), 18(1), 20(1)-(2), and 21(1)
External link
Health data – serious harm
This exemption can apply if you receive a subject access request for health data.
It exempts you from the GDPR’s provisions on the right of access regarding your processing of health data.
But the exemption only applies to the extent that compliance with the right of access would be likely to cause serious harm to the physical or mental health of any individual. This is known as the ‘serious harm test’ for health data.
You can only rely on this exemption if:
you are a health professional; or
within the last six months you have obtained an opinion from an appropriate health professional that
the serious harm test for health data is met. Even if you have done this, you still cannot rely on the
exemption if it would be reasonable in all the circumstances to re-consult the appropriate health
professional.
If you think this exemption might apply to a subject access request you have received, see paragraph
2(1) of Schedule 3, Part 2 of the DPA 2018 for full details of who is considered an appropriate health
professional.
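The reliance conditions above form a short decision tree: a health professional can rely on the exemption directly, while anyone else needs a recent opinion and must still re-consult where that would be reasonable. As an illustrative sketch only (the names below are our own, and the serious harm test itself must of course also be met):

```python
# Illustrative decision sketch of who may rely on the serious harm
# exemption for health data. Hypothetical names; assumes the serious
# harm test itself is met.

def may_rely_on_serious_harm_exemption(
        controller_is_health_professional: bool,
        opinion_obtained_within_six_months: bool,
        reasonable_to_reconsult: bool) -> bool:
    if controller_is_health_professional:
        return True
    # Otherwise a recent opinion from an appropriate health professional
    # is needed, and it cannot be relied on where it would be reasonable
    # in all the circumstances to re-consult that professional.
    return opinion_obtained_within_six_months and not reasonable_to_reconsult

# A recent opinion does not help if re-consultation would be reasonable.
assert may_rely_on_serious_harm_exemption(False, True, True) is False
assert may_rely_on_serious_harm_exemption(False, True, False) is True
```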
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 3, Part 2, Paragraph 5
External link
Relevant provisions in the GDPR (the exempt provisions) - Article 15(1)-(3)
External link
Health data – restriction of the right of access
This is a restriction rather than an exemption. It applies if you receive a subject access request for health data.
It restricts you from disclosing health data in response to a subject access request, unless:
you are a health professional; or
within the last six months you have obtained an opinion from an appropriate health professional that the serious harm test for health data is not met. Even if you have done this, you must re-consult the appropriate health professional if it would be reasonable in all the circumstances.
This restriction does not apply if you are satisfied that the health data has already been seen by, or is known by, the individual it is about.
If you think this restriction could apply to a subject access request you have received, see paragraph 2(1) of Schedule 3, Part 2 of the DPA 2018 for full details of who is considered an appropriate health professional.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 3, Part 2, Paragraph 6
External link
Relevant provisions in the GDPR (the restricted provisions) - Article 15(1)-(3)
External link
Social work data – processed by a court
This exemption can apply to social work data (personal data that isn’t health or education data)
processed by a court. If you are unsure whether the data you process is social work data, see
paragraphs 7(1) and 8 of Schedule 3, Part 3 of the DPA 2018 for full details of what this is.
The exemption relieves you from your obligations regarding the GDPR’s provisions on:
the right to be informed;
all the other individual rights, except rights related to automated individual decision-making including
profiling; and
all the principles, but only so far as they relate to the right to be informed and the other individual
rights.
But the exemption only applies if the social work data is:
supplied in a report or evidence given to the court in the course of proceedings; and
those proceedings are subject to certain specific statutory rules that allow the social work data to be
withheld from the individual it relates to.
If you think this exemption might apply to your processing of personal data, see paragraph 9(2) of
Schedule 3, Part 3 of the DPA 2018 for full details of the statutory rules.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 3, Part 3, Paragraph 9
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), 15(1)-(3), 16, 17(1)-(2), 18(1), 20(1)-(2), and 21(1)
External link
Social work data – an individual’s expectations and wishes
This exemption can apply if you receive a request (in exercise of a power conferred by an enactment or rule of law) for social work data concerning an individual from:
someone with parental responsibility for an individual aged under 18 (or 16 in Scotland); or
someone appointed by the court to manage the affairs of an individual who is incapable of managing their own affairs.
It exempts you from the GDPR’s provisions on:
the right to be informed;
all the other individual rights, except rights related to automated individual decision-making including profiling; and
all the principles, but only so far as they relate to the right to be informed and the other individual
rights.
But the exemption only applies to the extent that complying with the request would disclose information
that:
the individual provided in the expectation that it would not be disclosed to the requestor, unless the
individual has since expressly indicated that they no longer have that expectation;
was obtained as part of an examination or investigation to which the individual consented in the
expectation that the information would not be disclosed in this way, unless the individual has since
expressly indicated that they no longer have that expectation; or
the individual has expressly indicated should not be disclosed in this way.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 3, Part 3, Paragraph 10
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), 15(1)-(3), 16, 17(1)-(2), 18(1), 20(1)-(2), and 21(1)
External link
Social work data – serious harm
This exemption can apply if you receive a subject access request for social work data.
It exempts you from the GDPR’s provisions on the right of access regarding your processing of social work data.
But the exemption only applies to the extent that complying with the right of access would be likely to prejudice carrying out social work because it would be likely to cause serious harm to the physical or mental health of any individual. This is known as the ‘serious harm test’ for social work data.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 3, Part 3, Paragraph 11
External link
Relevant provisions in the GDPR (the exempt provisions) - Article 15(1)-(3)
External link
Social work data – restriction of the right of access
This is a restriction rather than an exemption. It applies if you process social work data as a local authority in Scotland (as defined by the Social Work (Scotland) Act 1968), and you receive a subject access request for that data.
It restricts you from disclosing social work data in response to a subject access request if:
the data came from the Principal Reporter (as defined by the Children’s Hearings (Scotland) Act
2011) in the course of his statutory duties; and
the individual whom the data is about is not entitled to receive it from the Principal Reporter.
If there is a question as to whether you need to comply with a subject access request in this situation,
you must inform the Principal Reporter within 14 days of the question arising.
You must not disclose the social work data in response to the subject access request unless the Principal
Reporter has told you they think the serious harm test for social work data is not met.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 3, Part 3, Paragraph 12
External link
Relevant provisions in the GDPR (the restricted provisions) - Article 15(1)-(3)
External link
Education data – processed by a court
This exemption can apply to education data (personal data in an educational record) processed by a court. If you are unsure whether the data you process is ‘education data’, see paragraphs 13-17 of Schedule 3, Part 4 of the DPA 2018 for full details of what this is.
The exemption relieves you from your obligations regarding the GDPR’s provisions on:
the right to be informed;
all the other individual rights, except rights related to automated individual decision-making including profiling; and
all the principles, but only so far as they relate to the right to be informed and the other individual rights.
But the exemption only applies if the education data is:
supplied in a report or evidence given to the court in the course of proceedings; and
those proceedings are subject to certain specific statutory rules that allow the education data to be withheld from the individual it relates to.
If you think this exemption might apply to your processing of personal data, see paragraph 18(2) of Schedule 3, Part 4 of the DPA 2018 for full details of the statutory rules.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 3, Part 4, Paragraph 18
External link
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), 15(1)-(3), 16, 17(1)-(2), 18(1), 20(1)-(2), and 21(1)
External link
Education data – serious harm
This exemption can apply if you receive a subject access request for education data.
It exempts you from the GDPR’s provisions on the right of access regarding your processing of education data.
But the exemption only applies to the extent that complying with the right of access would be likely to cause serious harm to the physical or mental health of any individual. This is known as the ‘serious harm test’ for education data.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 3, Part 4, Paragraph 19
External link
Relevant provisions in the GDPR (the exempt provisions) - Article 15(1)-(3)
External link
Education data – restriction of the right of access
This is a restriction rather than an exemption. It applies if you process education data as an education authority in Scotland (as defined by the Education (Scotland) Act 1980), and you receive a subject access request for that data.
It restricts you from disclosing education data in response to a subject access request if:
you believe that the data came from the Principal Reporter (as defined by the Children’s Hearings (Scotland) Act 2011) in the course of his statutory duties; and
the individual whom the data is about is not entitled to receive it from the Principal Reporter.
If there is a question as to whether you need to comply with a subject access request in this situation, you must inform the Principal Reporter within 14 days of the question arising.
You must not disclose the education data in response to the subject access request unless the Principal Reporter has told you they think the serious harm test for education data is not met.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 3, Part 4, Paragraph 20
External link
Child abuse data
This exemption can apply if you receive a request (in exercise of a power conferred by an enactment or
rule of law) for child abuse data. If you are unsure whether the data you process is ‘child abuse data’,
see paragraph 21(3) of Schedule 3, Part 5 of the DPA 2018 for a definition.
The exemption applies if the request is from:
someone with parental responsibility for an individual aged under 18; or
someone appointed by a court to manage the affairs of an individual who is incapable of managing
their own affairs.
It exempts you from the GDPR’s provisions on the right of access.
But the exemption only applies to the extent that complying with the request would not be in the best
interests of the individual who the child abuse data is about.
This exemption can only apply in England, Wales and Northern Ireland. It cannot apply in Scotland.
Corporate finance
This exemption can apply if you process personal data in connection with a corporate finance service
(e.g. if you underwrite financial instruments or give corporate finance advice to undertakings) that you
are permitted to provide (as set out in the Financial Services and Markets Act 2000).
It exempts you from the GDPR’s provisions on:
the right to be informed;
the right of access; and
all the principles, but only so far as they relate to the right to be informed and the right of access.
But the exemption only applies to the extent that complying with the provisions above would:
be likely to affect the price of an instrument; or
have a prejudicial effect on the orderly functioning of financial markets (or the efficient allocation of
capital within the economy), and you reasonably believe that complying with the provisions above
could affect someone’s decision whether to:
deal in, subscribe for or issue a financial instrument; or
act in a way likely to have an effect on a business activity (e.g. an effect on an undertaking’s
capital structure, the legal or beneficial ownership of a business or asset, or a person’s industrial
strategy).
Relevant provisions in the GDPR (the restricted provisions) - Article 15(1)-(3)
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 3, Part 5
Relevant provisions in the GDPR (the exempt provisions) - Article 15(1)-(3)
Management forecasts
This exemption can apply if you process personal data for the purposes of management forecasting or
management planning in relation to a business or other activity.
It exempts you from the GDPR’s provisions on:
the right to be informed;
the right of access; and
all the principles, but only so far as they relate to the right to be informed and the right of access.
But the exemption only applies to the extent that compliance with the above provisions would be likely
to prejudice the conduct of the business or activity.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 4, Paragraph 21
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), and 15(1)-(3)
Example
The senior management of an organisation is planning a re-organisation. This is likely to involve
making certain employees redundant, and this possibility is included in management plans. Before
the plans are revealed to the workforce, an employee makes a subject access request. In
responding to that request, the organisation does not have to reveal its plans to make him
redundant if doing so would be likely to prejudice the conduct of the business (perhaps by causing
staff unrest before the management’s plans are announced).
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 4, Paragraph 23
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), and 15(1)-(3)
Negotiations
This exemption can apply to personal data in records of your intentions relating to any negotiations with
an individual.
It exempts you from the GDPR’s provisions on:
the right to be informed;
the right of access; and
all the principles, but only so far as they relate to the right to be informed and the right of access.
But it only applies to the extent that complying with the above provisions would be likely to prejudice
negotiations with that individual.
Example
An individual makes a claim to his insurance company. The claim is for compensation for personal
injuries he sustained in an accident. The insurance company disputes the seriousness of the injuries
and the amount of compensation it should pay. An internal paper sets out the company’s position on
these matters, including the maximum sum it would be willing to pay to avoid the claim going to
court. If the individual makes a subject access request to the insurance company, it would not have
to send him the internal paper, because doing so would be likely to prejudice the negotiations to
settle the claim.
Further reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 4, Paragraph 22
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), and 15(1)-(3)
Confidential references
This exemption applies if you give or receive a confidential reference for the purposes of prospective or
actual:
education, training or employment of an individual;
placement of an individual as a volunteer;
appointment of an individual to office; or
provision by an individual of any service.
It exempts you from the GDPR’s provisions on:
the right to be informed;
the right of access; and
all the principles, but only so far as they relate to the right to be informed and the right of access.
Example
Company A provides an employment reference in confidence for one of its employees to company
B. If the employee makes a subject access request to company A or company B, the reference will
be exempt from disclosure. This is because the exemption applies to the reference regardless of
whether it is in the hands of the company that gives it or receives it.
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 4, Paragraph 24
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 13(1)-(3), 14(1)-(4), and 15(1)-(3)
Exam scripts and exam marks
This exemption can apply to personal data in exam scripts.
It exempts you from the GDPR’s provisions on:
the right to be informed;
the right of access; and
all the principles, but only so far as they relate to the right to be informed and the right of access.
But it only applies to the information recorded by candidates. This means candidates do not have the
right to copies of their answers to the exam questions.
However, the information recorded by the person marking the exam is not exempt from the above
provisions. If an individual makes a subject access request for this information before the results are
announced, special rules apply to how long you have to comply with the request. You must provide the
information:
within five months of receiving the request; or
within 40 days of announcing the exam results, if this is earlier.
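Because the deadline is the earlier of two dates, it can be computed mechanically. The following Python sketch is illustrative: the function names are ours, and "five months" is read as five calendar months.

```python
from datetime import date, timedelta
from typing import Optional

def add_months(d: date, months: int) -> date:
    # Advance by whole calendar months, clamping the day when the
    # target month is shorter (e.g. 31 Jan + 1 month -> 28/29 Feb).
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    for day in (d.day, 30, 29, 28):
        try:
            return date(year, month, day)
        except ValueError:
            continue
    raise AssertionError("unreachable: day 28 is valid in every month")

def exam_sar_deadline(request_received: date,
                      results_announced: Optional[date]) -> date:
    # The earlier of: five months from receiving the request, or
    # 40 days from announcing the results (if they are announced).
    five_months = add_months(request_received, 5)
    if results_announced is None:
        return five_months
    return min(five_months, results_announced + timedelta(days=40))
```

For instance, for a request received on 1 March with results announced on 1 June, 40 days after the announcement (11 July) falls before 1 August, so the earlier date applies.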
Further Reading
Relevant provisions in the Data Protection Act 2018 (the exemption) - Schedule 2, Part 4, Paragraph 25
Relevant provisions in the GDPR (the exempt provisions) - Articles 5, 12(3)-(4), 13(1)-(3), 14(1)-(4), and 15(1)-(3)
Protection of the rights of others
Paragraphs 16 and 17 of Schedule 2, Part 3 of the DPA 2018 provide an exemption that can apply if you
receive a subject access request for information containing the personal data of more than one
individual.
See our Guide page on the right of access for guidance on what to do if you receive a request for
information that includes the personal data of other people.
Applications
To assist organisations in applying the requirements of the GDPR in different contexts, we are working to
produce guidance in a number of areas: for example, children’s data, CCTV and big data.
This section will expand when our work on this guidance is complete.
Children
At a glance
Children need particular protection when you are collecting and processing their personal data
because they may be less aware of the risks involved.
If you process children’s personal data then you should think about the need to protect them from
the outset, and design your systems and processes with this in mind.
Compliance with the data protection principles and in particular fairness should be central to all your
processing of children’s personal data.
You need to have a lawful basis for processing a child’s personal data. Consent is one possible lawful
basis for processing, but it is not the only option. Sometimes using an alternative basis is more
appropriate and provides better protection for the child.
If you are relying on consent as your lawful basis for processing, when offering an online service
directly to a child, in the UK only children aged 13 or over are able to provide their own consent.
For children under this age you need to get consent from whoever holds parental responsibility for
the child - unless the online service you offer is a preventive or counselling service.
Children merit specific protection when you use their personal data for marketing purposes or
creating personality or user profiles.
You should not usually make decisions based solely on automated processing about children if this
will have a legal or similarly significant effect on them.
You should write clear privacy notices for children so that they are able to understand what will
happen to their personal data, and what rights they have.
Children have the same rights as adults over their personal data. These include the rights to access
their personal data; request rectification; object to processing and have their personal data erased.
An individual’s right to erasure is particularly relevant if they gave their consent to processing when
they were a child.
Checklists
General
☐ We comply with all the requirements of the GDPR, not just those specifically relating to
children and included in this checklist.
☐ We design our processing with children in mind from the outset, and use a data protection by
design and by default approach.
☐ We make sure that our processing is fair and complies with the data protection principles.
☐ As a matter of good practice, we use DPIAs to help us assess and mitigate the risks to
children.
☐ If our processing is likely to result in a high risk to the rights and freedoms of children then we
always do a DPIA.
☐ As a matter of good practice, we take children’s views into account when designing our
processing.
Bases for processing a child’s personal data
☐ When relying on consent, we make sure that the child understands what they are consenting
to, and we do not exploit any imbalance of power in the relationship between us.
☐ When relying on ‘necessary for the performance of a contract’, we consider the child’s
competence to understand what they are agreeing to, and to enter into a contract.
☐ When relying upon ‘legitimate interests’, we take responsibility for identifying the risks and
consequences of the processing, and put age-appropriate safeguards in place.
Offering an Information Society Service (ISS) directly to a child, on the basis of consent
☐ If we decide not to offer our ISS (online service) directly to children, then we mitigate the risk
of them gaining access, using measures that are proportionate to the risks inherent in the
processing.
☐ When offering ISS to UK children on the basis of consent, we make reasonable efforts (taking
into account the available technology and the risks inherent in the processing) to ensure that
anyone who provides their own consent is at least 13 years old.
☐ When offering ISS to UK children on the basis of consent, we obtain parental consent to the
processing for children who are under the age of 13, and make reasonable efforts (taking into
account the available technology and risks inherent in the processing) to verify that the person
providing consent holds parental responsibility for the child.
☐ When targeting wider European markets we comply with the age limits applicable in each
Member State.
☐ We regularly review available age verification and parental responsibility verification
mechanisms to ensure we are using appropriate current technology to reduce risk in the
processing of children’s personal data.
☐ We don’t seek parental consent when offering online preventive or counselling services to a
child.
Marketing
☐ When considering targeting marketing at children we take into account their reduced ability to
recognise and critically assess the purposes behind the processing and the potential
consequences of providing their personal data.
☐ We take into account sector specific guidance on marketing, such as that issued by the
Advertising Standards Authority, to make sure that children’s personal data is not used in a way
that might lead to their exploitation.
☐ We stop processing a child’s personal data for the purposes of direct marketing if they ask us
to.
☐ We comply with the direct marketing requirements of the Privacy and Electronic
Communications Regulations (PECR).
Solely automated decision making (including profiling)
☐ We don’t usually use children’s personal data to make solely automated decisions about them
if these will have a legal, or similarly significant effect upon them.
☐ If we do use children’s personal data to make such decisions then we make sure that one of
the exceptions in Article 22(2) applies and that suitable, child appropriate, measures are in place
to safeguard the child’s rights, freedoms and legitimate interests.
☐ In the context of behavioural advertising, when deciding whether a solely automated decision
has a similarly significant effect upon a child, we take into account: the choices and behaviours
that we are seeking to influence; the way in which these might affect the child; and the child’s
increased vulnerability to this form of advertising; using wider evidence on these matters to
support our assessment.
☐ We stop any profiling of a child that is related to direct marketing if they ask us to.
Data Sharing
☐ We follow the approach in the ICO’s Data Sharing Code of Practice.
Privacy notices
☐ Our privacy notices are clear, and presented in plain, age-appropriate language.
☐ We use child friendly ways of presenting privacy information, such as: diagrams, cartoons,
graphics and videos, dashboards, layered and just-in-time notices, icons and symbols.
☐ We explain to children why we require the personal data we have asked for, and what we will
do with it, in a way which they can understand.
☐ As a matter of good practice, we explain the risks inherent in the processing, and how we
intend to safeguard against them, in a child friendly way, so that children (and their parents)
understand the implications of sharing their personal data.
☐ We tell children what rights they have over their personal data in language they can
understand.
In brief
What's new?
A child’s personal data merits particular protection under the GDPR.
If you rely on consent as your lawful basis for processing personal data when offering an ISS directly to
children, in the UK only children aged 13 or over are able to provide their own consent. You may therefore
need to verify that anyone giving their own consent in these circumstances is old enough to do so. For
children under this age you need to get consent from whoever holds parental responsibility for them -
unless the ISS you offer is an online preventive or counselling service. You must also make reasonable
efforts (using available technology) to verify that the person giving consent does, in fact, hold parental
responsibility for the child.
Children also merit specific protection when you are collecting their personal data and using it for
marketing purposes or creating personality or user profiles.
You should not usually make decisions about children based solely on automated processing if this will
have a legal or similarly significant effect on them. The circumstances in which the GDPR allows you to
make such decisions are limited and only apply if you have suitable measures to protect the interests of
the child in place.
You must write clear and age-appropriate privacy notices for children.
The right to have personal data erased is particularly relevant when the individual gave their consent to
processing when they were a child.
What should our general approach to processing children’s personal data be?
Children need particular protection when you are collecting and processing their personal data because
they may be less aware of the risks involved.
If you process children’s personal data, or think that you might, then you should consider the need to
protect them from the outset, and design your systems and processes with this in mind.
The child’s data protection rights
☐ We design the processes by which a child can exercise their data protection rights with the
child in mind, and make them easy for children to access and understand.
☐ We allow competent children to exercise their own data protection rights.
☐ If our original processing was based on consent provided when the individual was a child, then
we comply with requests for erasure whenever we can.
☐ We design our processes so that, as far as possible, it is as easy for a child to get their
personal data erased as it was for them to provide it in the first place.
Fairness, and compliance with the data protection principles, should be central to all your processing of
children’s personal data.
It is good practice to consider children’s views when designing your processing.
What do we need to consider when choosing a basis for processing children’s personal data?
As with adults, you need to have a lawful basis for processing a child’s personal data and you need to
decide what that basis is before you start processing. You can use any of the lawful bases for processing
set out in the GDPR when processing children’s personal data. But for some bases there are additional
things you need to think about when your data subject is a child.
If you wish to rely upon consent as your lawful basis for processing, then you need to ensure that the
child can understand what they are consenting to, otherwise the consent is not ‘informed’ and therefore
is invalid. There are also some additional rules for online consent.
If you wish to rely upon ‘performance of a contract’ as your lawful basis for processing, then you must
consider the child’s competence to agree to the contract and to understand the implications of the
processing.
If you wish to rely upon legitimate interests as your lawful basis for processing you must balance your
own (or a third party’s) legitimate interests in processing the personal data against the interests and
fundamental rights and freedoms of the child. This involves a judgement as to the nature and purpose of
the processing and the potential risks it poses to children. It also requires you to take appropriate
measures to safeguard against those risks.
What are the rules about an ISS and consent?
Consent is not the only basis for processing children’s personal data in the context of an ISS.
If you rely upon consent as your lawful basis for processing personal data when offering an ISS directly
to children, in the UK only children aged 13 or over can consent for themselves. You therefore need to
make reasonable efforts to verify that anyone giving their own consent in this context is old enough to
do so.
For children under this age you need to get consent from whoever holds parental responsibility for them
- unless the ISS you offer is an online preventive or counselling service. You must make reasonable
efforts (using available technology) to verify that the person giving consent does, in fact, hold parental
responsibility for the child.
You should regularly review the steps you are taking to protect children’s personal data and consider
whether you are able to implement more effective verification mechanisms when obtaining consent for
processing.
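The consent routing described above reduces to a small decision rule. Here is an illustrative Python sketch; the function name and return strings are ours, and the child's age is assumed to have already been established:

```python
def uk_iss_consent_route(age: int, preventive_or_counselling: bool) -> str:
    # Whose consent is needed when relying on consent to offer an
    # online service (ISS) directly to a child in the UK.
    if age >= 13:
        # Children aged 13 or over can consent for themselves; make
        # reasonable efforts to verify their age.
        return "child's own consent (verify age)"
    if preventive_or_counselling:
        # Online preventive or counselling services do not require
        # parental consent.
        return "child's own consent (no parental consent needed)"
    # Otherwise obtain consent from a holder of parental
    # responsibility, and make reasonable efforts to verify it.
    return "parental consent (verify parental responsibility)"
```

Note that services targeting wider European markets must apply each Member State's own age limit, which this UK-only sketch does not cover.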
What if we want to target children with marketing?
Children merit specific protection when you are using their personal data for marketing purposes. You
should not exploit any lack of understanding or vulnerability.
They have the same right as adults to object to you processing their personal data for direct marketing.
So you must stop doing this if a child (or someone acting on their behalf) asks you to do so.
If you wish to send electronic marketing messages to children then you also need to comply with the
Privacy and Electronic Communications Regulations 2003.
What if we want to profile children or make automated decisions about them?
In most circumstances you should not make decisions about children that are based solely on
automated processing (including profiling) if these have a legal effect on the child, or similarly
significantly affect them. If you do make such decisions you need to make sure that you put suitable
measures in place to protect the rights, freedoms and legitimate interests of the child.
If you profile children then you must provide them with clear information about what you are doing with
their personal data. You should not exploit any lack of understanding or vulnerability.
You should generally avoid profiling children for marketing purposes. You must respect a child’s absolute
right to object to profiling that is related to direct marketing, and stop doing this if they ask you to.
It is possible for behavioural advertising to ‘similarly significantly affect’ a child. It depends on the
nature of the choices and behaviour it seeks to influence.
What about data-sharing and children’s personal data?
If you want to share children’s personal data with third parties then you need to follow the advice in our
data sharing Code of Practice. We also recommend that you do a DPIA.
How do the exemptions apply to children’s personal data?
The exemptions apply to children’s personal data in the same way as they apply to adults’ personal
data. They may allow you to process children’s personal data in ways that the GDPR would not
otherwise allow. You need to consider and apply the specific provisions of the individual exemption.
How does the right to be informed apply to children?
You must provide children with the same information about what you do with their personal data as you
give adults. It is good practice to also explain the risks inherent in the processing and the safeguards
you have put in place.
You should write in a concise, clear and plain style for any information you are directing to children. It
should be age-appropriate and presented in a way that appeals to a young audience.
What rights do children have?
Children have the same rights as adults over their personal data which they can exercise as long as
they are competent to do so. Where a child is not considered to be competent, an adult with parental
responsibility may usually exercise the child’s data protection rights on their behalf.
How does the right to erasure apply to children?
Children have the same right to have their personal data erased as adults. This right is particularly
relevant when an individual originally gave their consent to processing when they were a child, without
being fully aware of the risks.
One of the specified circumstances in which the right to erasure applies is when you collected the
personal data of a child under the lawful basis of consent, when offering an ISS directly to a child.
It should generally be as easy for a child to exercise their right to erasure as it was for them to provide
their personal data in the first place.
Further reading
We have published detailed guidance on children and the GDPR.
Principles
of
Artificial Intelligence
NILS J. NILSSON
Stanford University
MORGAN KAUFMANN
PUBLISHERS, INC.
Library of Congress Cataloging-in-Publication Data
Nilsson, Nils J., 1933-
Principles of artificial intelligence.
Reprint. Originally published: Palo Alto, Calif. :
Tioga Pub. Co., © 1980.
Bibliography: p.
Includes indexes.
1. Artificial intelligence. I. Title.
Q335.N515 1986 006.3 86-2815 ISBN 0-934613-10-9
Copyright © 1980 Morgan Kaufmann Publishers, Inc.
All rights reserved. No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in any form or by any means,
electronic, mechanical, photocopying, recording, or otherwise, without
the prior written permission of
the publisher. Printed in the United States
of America. Library of Congress Catalog Card Number 86-2815.
The figures listed below are from "Problem-Solving Methods in Artificial
Intelligence" by Nils J. Nilsson, copyright © 1971 McGraw-Hill
Book Company. Used with permission of McGraw-Hill Book Company.
Figures 1.4, 1.5, 1.6, 1.13, 2.6, 2.7, 2.8, 2.9, 2.12, 2.13, 3.8, 3.9, 3.10, 3.11,
3.12, 5.8, 5.9, 5.10, 5.11, 5.12, 5.13, and 5.14.
ISBN 0-934613-10-9
(Previously published by Tioga Publishing Co. under ISBN 0-935382-01-1)
for Kristen and Lars
PREFACE
Previous treatments of Artificial Intelligence (AI) divide the subject
into its major areas of application, namely, natural language processing,
automatic programming, robotics, machine vision, automatic theorem
proving, intelligent data retrieval systems, etc. The major difficulty with
this approach is that these application areas are now so extensive that
each could, at best, be only superficially treated in a book of this length.
Instead, I have attempted here to describe fundamental AI ideas that
underlie many of these applications. My organization of these ideas is
not, then, based on the subject matter of their application, but is, instead,
based on general computational concepts involving the kinds of data
structures used, the types of operations performed on these data
structures, and the properties of control strategies used by AI systems. I stress,
in particular, the important roles played in AI by generalized production
systems and the predicate calculus.
The notes on which the book is based evolved in courses and seminars
at Stanford University and at the University of Massachusetts at
Amherst. Although certain topics treated in my previous book, Problem-
solving Methods in Artificial Intelligence, are covered here as well, this
book contains many additional topics such as rule-based systems, robot problem-solving systems, and structured-object representations.
One of the goals of this book is to fill a gap between theory and
practice. AI theoreticians have little difficulty in communicating with
each other; this book is not intended to contribute to that communication. Neither is the book a handbook of current AI programming
technology; other sources are available for that purpose. As it stands, the
book could be supplemented either by more theoretical treatments of
certain subjects, for AI theory courses, or by project and laboratory
sessions, for more practically oriented courses.
The book is designed as a text for a senior or first-year graduate course
in AI. It is assumed that the reader has a good background in the fundamentals of computer science; knowledge of a list-processing
language, such as LISP, would be helpful. A course organized around this book could comfortably occupy a full semester. If separate practical or
theoretical material is added, the time required might be an entire year. A
one-quarter course would be somewhat hurried unless some material
(perhaps parts of chapter 6 and chapter 8) is omitted.
The exercises at the end of each chapter are designed to be thought-
provoking. Some expand on subjects briefly mentioned in the text.
Instructors may find it useful to use selected exercises as a basis for class
discussion. Pertinent references are briefly discussed at the end of every
chapter. These citations should provide the interested student with
adequate entry points to much of the most important literature in the
field.
I look forward someday to revising this book—to correct its inevitable
errors, and to add new results and points of view. Toward that end, I
solicit correspondence from readers.
Nils J. Nilsson
ACKNOWLEDGEMENTS
Several organizations supported and encouraged the research, teaching,
and discussions that led to this book. The Information Systems
Program, Marvin Denicoff, Director, of the Office of Naval Research,
provided research support under contract no. N00014-77-C-0222 with
SRI International. During the academic year 1976-77, I was a part-time
visiting professor in the Computer Science Department at Stanford
University. From September 1977 to January 1978, I spent the Winter
Semester at the Computer and Information Sciences Department of the University of Massachusetts at Amherst. The students and faculty of
these departments were immensely helpful in the development of this book.
I want to give special thanks to my home organization, SRI International,
for the use of its facilities and for its liberal attitude toward
book-writing. I also want to thank all my friends and colleagues in the
Artificial Intelligence Center at SRI. One could not find a more dynamic,
intellectually stimulating, and constructively critical setting in which to
work and write.
Though this book carries the name of a single author, it has been
influenced by several people. It is a pleasure to thank here everyone who
helped guide me toward a better presentation. Some of those who
provided particularly detailed and extensive suggestions are: Doug Appelt, Michael Arbib, Wolfgang Bibel, Woody Bledsoe, John Brown,
Lew Creary, Randy Davis, Jon Doyle, Ed Feigenbaum, Richard Fikes,
Northrup Fowler, Peter Friedland, Anne Gardner, David Gelperin,
Peter Hart, Pat Hayes, Gary Hendrix, Doug Lenat, Vic Lesser, John
Lowrance, Jack Minker, Tom Mitchell, Bob Moore, Allen Newell, Earl Sacerdoti, Len Schubert, Herb Simon, Reid Smith, Elliot Soloway, Mark
Stefik, Mabry Tyson, and Richard Waldinger.
I also want to thank Robin Roy, Judy Fetler, and Georgia Navarro, for
patient and accurate typing; Sally Seitz for heroic insertion of typesetting
instructions into the manuscript; and Helen Tognetti for creative
copy-editing.
Most importantly, my efforts would not have been equal to this task
had they not been generously supported, encouraged, and understood by
my wife, Karen.
CREDITS
The manuscript for this book was prepared on a Digital Equipment
Corporation KL-10 computer at SRI International. The computer
manuscript file was processed for automatic photo-typesetting by W. A.
Barrett's TYPET system on a Hewlett-Packard 3000 computer. The main
typeface is Times Roman.
Book design: Ian Bastelier
Cover design: Andrea Hendrick
Illustrations: Maria Masterson
Typesetting: Typothetae, Palo Alto, CA
Page makeup: Vera Allen Composition, Castro Valley, CA
Printing and binding: R. R. Donnelley and Sons Company
PROLOGUE
Many human mental activities such as writing computer programs,
doing mathematics, engaging in commonsense reasoning, understanding
language, and even driving an automobile are said to demand "intelligence." Over the past few decades, several computer systems have been
built that can perform tasks such as these. Specifically, there are computer systems that can diagnose diseases, plan the synthesis of
complex organic chemical compounds, solve differential equations in
symbolic form, analyze electronic circuits, understand limited amounts of human speech and natural language text, or write small computer
programs to meet formal specifications. We might say that such systems possess some degree of artificial intelligence.
Most of the work on building these kinds of systems has taken place in
the field called Artificial Intelligence (AI). This work has had largely an
empirical and engineering orientation. Drawing from a loosely structured but growing body of computational techniques, AI systems are developed, undergo experimentation, and are improved. This process
has produced and refined several general AI principles of wide applicability.
This book is about some of the more important, core AI ideas. We
concentrate on those that find application in several different problem
areas. In order to emphasize their generality, we explain these principles
abstractly rather than discuss them in the context of specific applications,
such as automatic programming or natural language processing. We
illustrate their use with several small examples but omit detailed case studies of large-scale applications. (To treat each of these applications in detail would certainly require a separate book.) An abstract understanding
of the basic ideas should facilitate understanding specific AI systems
(including strengths and weaknesses) and should also prove a sound basis
for designing new systems.
AI has also embraced the larger scientific goal of constructing an
information-processing theory of intelligence. If such a science of
intelligence could be developed, it could guide the design of intelligent
machines as well as explicate intelligent behavior as it occurs in humans
and other animals. Since the development of such a general theory is still
very much a goal, rather than an accomplishment of AI, we limit our
attention here to those principles that are relevant to the engineering goal of building intelligent machines. Even with this more limited outlook,
our discussion of AI ideas might well be of interest to cognitive
psychologists and others attempting to understand natural intelligence.
As we have already mentioned, AI methods and techniques have been
applied in several different problem areas. To help motivate our subsequent discussions, we next describe some of these applications.
0.1. SOME APPLICATIONS OF ARTIFICIAL INTELLIGENCE
0.1.1. NATURAL LANGUAGE PROCESSING
When humans communicate with each other using language, they
employ, almost effortlessly, extremely complex and still little understood
processes. It has been very difficult to develop computer systems capable of generating and "understanding" even fragments of a natural language,
such as English. One source of the difficulty is that language has evolved
as a communication medium between intelligent beings. Its primary use
is for transmitting a bit of "mental structure" from one brain to another under circumstances in which each brain possesses large, highly similar, surrounding mental structures that serve as a common context. Furthermore, part of these similar, contextual mental structures allows each
participant to know that the other also possesses this common structure
and that the other can and will perform certain processes using it during
communication "acts." The evolution of language use has apparently
exploited the opportunity for participants to use their considerable
computational resources and shared knowledge to generate and under
stand highly condensed and streamlined messages: A word to the wise from the wise is sufficient. Thus generating and understanding language
is an encoding and decoding problem of fantastic complexity.
A computer system capable of understanding a message in natural
language would seem, then, to require (no less than would a human) both
the contextual knowledge and the processes for making the inferences
(from this contextual knowledge and from the message) assumed by the
message generator. Some progress has been made toward computer systems of this sort, for understanding spoken and written fragments of
language. Fundamental to the development of such systems are certain
AI ideas about structures for representing contextual knowledge and
certain techniques for making inferences from that knowledge. Although we do not treat the language-processing problem as such in this book, we
do describe some important methods for knowledge representation and processing that do find application in language-processing systems.
0.1.2. INTELLIGENT RETRIEVAL FROM DATABASES
Database systems are computer systems that store a large body of facts
about some subject in such a way that they can be used to answer users'
questions about that subject. To take a specific example, suppose the facts
are the personnel records of a large corporation. Example items in such a
database might be representations for such facts as "Joe Smith works in
the Purchasing Department," "Joe Smith was hired on October 8, 1976,"
"The Purchasing Department has 17 employees," "John Jones is the
manager of the Purchasing Department," etc.
The design of database systems is an active subspecialty of computer
science, and many techniques have been developed to enable the efficient representation, storage, and retrieval of large numbers of facts. From our point of view, the subject becomes interesting when we want to retrieve
answers that require deductive reasoning with the facts in the database.
There are several problems that confront the designer of such an
intelligent information retrieval system. First, there is the immense
problem of building a system that can understand queries stated in a
natural language like English. Second, even if the language-understanding problem is dodged by specifying some formal, machine-understand
able query language, the problem remains of how to deduce answers from stored facts. Third, understanding the query and deducing an answer may require knowledge beyond that explicitly represented in the
subject domain database. Common knowledge (typically omitted in the
subject domain database) is often required. For example, from the
personnel facts mentioned above, an intelligent system ought to be able
to deduce the answer "John Jones" to the query "Who is Joe Smith's
boss?" Such a system would have to know somehow that the manager of a
department is the boss of the people who work in that department. How
common knowledge should be represented and used is one of the system
design problems that invites the methods of Artificial Intelligence.
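The deduction just described can be sketched in a few lines of Python. The facts and the single piece of common knowledge are the ones from the example above; the tuple representation and the function name `boss_of` are illustrative choices, not the machinery of any actual database system:

```python
# A minimal sketch of deductive retrieval over a fact database.
# Facts are stored as (relation, arg1, arg2) tuples.

facts = {
    ("works-in", "Joe Smith", "Purchasing"),
    ("hired-on", "Joe Smith", "1976-10-08"),
    ("employee-count", "Purchasing", 17),
    ("manager-of", "John Jones", "Purchasing"),
}

def boss_of(person):
    """Deduce a boss using common knowledge not stored as an explicit
    fact: the manager of a department is the boss of everyone who
    works in that department."""
    for rel, who, dept in facts:
        if rel == "works-in" and who == person:
            for rel2, mgr, dept2 in facts:
                if rel2 == "manager-of" and dept2 == dept:
                    return mgr
    return None

print(boss_of("Joe Smith"))  # deduces "John Jones"
```

Note that no "boss" fact appears anywhere in the database; the answer exists only by virtue of the rule embodied in `boss_of`.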
0.1.3. EXPERT CONSULTING SYSTEMS
AI methods have also been employed in the development of automatic
consulting systems. These systems provide human users with expert conclusions about specialized subject areas. Automatic consulting sys
tems have been built that can diagnose diseases, evaluate potential ore
deposits, suggest structures for complex organic chemicals, and even provide advice about how to use other computer systems.
A key problem in the development of expert consulting systems is how
to represent and use the knowledge that human experts in these subjects
obviously possess and use. This problem is made more difficult by the
fact that the expert knowledge in many important fields is often imprecise, uncertain, or anecdotal (though human experts use such
knowledge to arrive at useful conclusions).
Many expert consulting systems employ the AI technique of rule-based
deduction. In such systems, expert knowledge is represented as a large set
of simple rules, and these rules are used to guide the dialogue between
the system and the user and to deduce conclusions. Rule-based deduction
is one of the major topics of this book.
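A minimal rendering of rule-based deduction might look as follows. Expert knowledge is a list of simple if-then rules, and a forward-chaining loop applies them until nothing new can be concluded. The medical-flavored rule contents are invented purely for illustration:

```python
# A sketch of rule-based deduction: each rule pairs a set of required
# conditions with a conclusion; deduction fires rules repeatedly
# until no rule adds a new conclusion.

rules = [
    ({"fever", "cough"}, "respiratory-infection"),
    ({"respiratory-infection", "chest-pain"}, "suspect-pneumonia"),
]

def deduce(findings):
    """Forward-chain: fire any rule whose conditions all hold."""
    known = set(findings)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

print(deduce({"fever", "cough", "chest-pain"}))
```

Here the second rule can fire only after the first has added its conclusion, which is the essence of chaining.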
0.1.4. THEOREM PROVING
Finding a proof (or disproof) for a conjectured theorem in mathematics can certainly be regarded as an intellectual task. Not only does it require the ability to make deductions from hypotheses, but it also demands intuitive skills such as guessing about which lemmas should be proved first in order to help prove the main theorem. A skilled mathematician uses what he might call judgment (based on a large amount of specialized
knowledge) to guess accurately about which previously proven theorems in a subject area will be useful in the present proof and to break his main
problem down into subproblems to work on independently. Several
automatic theorem proving programs have been developed that possess
some of these same skills to a limited degree.
The study of theorem proving has been significant in the development
of AI methods. The formalization of the deductive process using the
language of predicate logic, for example, helps us to understand more
clearly some of the components of reasoning. Many informal tasks,
including medical diagnosis and information retrieval, can be formalized
as theorem-proving problems. For these reasons, theorem proving is an
extremely important topic in the study of AI methods.
0.1.5. ROBOTICS
The problem of controlling the physical actions of a mobile robot
might not seem to require much intelligence. Even small children are
able to navigate successfully through their environment and to manipu
late items, such as light switches, toy blocks, eating utensils, etc. However, these same tasks, performed almost unconsciously by humans, require many of the same abilities used in solving more intellectually demanding problems when performed by a machine.
Research on robots or robotics has helped to develop many AI ideas. It
has led to several techniques for modeling states of the world and for
describing the process of change from one world state to another. It has led to a better understanding of how to generate plans for action
sequences and how to monitor the execution of these plans. Complex
robot control problems have forced us to develop methods for planning at high levels of abstraction, ignoring details, and then planning at lower and lower levels, where details become important. We have frequent
occasion in this book to use examples of robot problem solving to
illustrate important ideas.
0.1.6. AUTOMATIC PROGRAMMING
The task of writing a computer program is related both to theorem
proving and to robotics. Much of the basic research in automatic
programming, theorem proving, and robot problem solving overlaps. In
a sense, existing compilers already do "automatic programming." They
take in a complete source code specification of what a program is to
accomplish, and they write an object code program to do it. What we
mean here by automatic programming might be described as a "super-
compiler," or a program that could take in a very high-level description
of what the program is to accomplish and produce a program. The high-level description might be a precise statement in a formal language,
such as the predicate calculus, or it might be a loose description, say, in
English, that would require further dialogue between the system and the
user in order to resolve ambiguities.
The task of automatically writing a program to achieve a stated result is
closely related to the task of proving that a given program achieves a
stated result. The latter is called program verification. Many automatic
programming systems produce a verification of the output program as an
added benefit.
One of the important contributions of research in automatic programming has been the notion of debugging as a problem-solving strategy. It
has been found that it is often much more efficient to produce an
inexpensive, errorful solution to a programming or robot control problem and then modify it (to make it work correctly), than to insist on a first solution completely free of defects.
0.1.7. COMBINATORIAL AND SCHEDULING PROBLEMS
An interesting class of problems is concerned with specifying optimal
schedules or combinations. Many of these problems can be attacked by
the methods discussed in this book. A classical example is the traveling
salesman's problem, where the problem is to find a minimum distance
tour, starting at one of several cities, visiting each city precisely once, and
returning to the starting city. The problem generalizes to one of finding a
minimum cost path over the edges of a graph containing n nodes such
that the path visits each of the n nodes precisely once.
Many puzzles have this same general character. Another example is
the 8-queens problem, where the problem is to place eight queens on a
standard chessboard in such a way that no queen can capture any of the
others; that is, there can be no more than one queen in any row, column
or diagonal. In most problems of this type, the domain of possible
combinations or sequences from which to choose an answer is very large.
Routine attempts at solving these types of problems soon generate a
combinatorial explosion of possibilities that exhaust even the capacities of
large computers.
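A standard way to delay the combinatorial explosion is backtracking search over partial solutions, which we can sketch for the 8-queens problem described above. Rather than enumerating whole placements, the search extends a placement one row at a time and abandons any partial placement that already contains a capture. (The particular function names and the column-per-row encoding are illustrative choices.)

```python
# Backtracking search for n-queens: place one queen per row, pruning
# any partial placement in which two queens attack each other.

def safe(cols, col):
    """May a queen go in the next row at column `col`, given queens
    already placed at columns `cols` (one per earlier row)?"""
    row = len(cols)
    return all(c != col and abs(c - col) != abs(r - row)
               for r, c in enumerate(cols))

def place(n, cols=()):
    """Return one complete placement of n queens, or None."""
    if len(cols) == n:
        return cols
    for col in range(n):
        if safe(cols, col):
            solution = place(n, cols + (col,))
            if solution is not None:
                return solution
    return None

print(place(8))  # one of the 92 solutions, as a column index per row
```

Pruning dead partial placements early is what makes the search tractable; exhaustively testing all 64-choose-8 placements would be hopeless by comparison.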
Several of these problems (including the traveling salesman problem)
are members of a class that computational theorists call NP-complete.
Computational theorists rank the difficulty of various problems on how
the worst case for the time taken (or number of steps taken) using the
theoretically best method grows with some measure of the problem size.
(For example, the number of cities would be a measure of the size of a
traveling salesman problem.) Thus, problem difficulty may grow linearly,
polynomially, or exponentially, for example, with problem size.
The time taken by the best methods currently known for solving
NP-complete problems grows exponentially with problem size. It is not
yet known whether faster methods (involving only polynomial time, say)
exist, but it has been proven that if a faster method exists for one of the
NP-complete problems, then this method can be converted to similarly
faster methods for all the rest of the NP-complete problems. In the
meantime, we must make do with exponential-time methods.
AI researchers have worked on methods for solving several types of
combinatorial problems. Their efforts have been directed at making the
time-versus-problem-size curve grow as slowly as possible, even when it
must grow exponentially. Several methods have been developed for
delaying and moderating the inevitable combinatorial explosion. Again, knowledge about the problem domain is the key to more efficient
solution methods. Many of the methods developed to deal with combinatorial problems are also useful on other, less combinatorially severe
problems.
0.1.8. PERCEPTION PROBLEMS
Attempts have been made to fit computer systems with television
inputs to enable them to "see" their surroundings or to fit them with microphone inputs to enable them to "hear" speaking voices. From these
experiments, it has been learned that useful processing of complex input
data requires "understanding" and that understanding requires a large base of knowledge about the things being perceived.
The process of perception studied in Artificial Intelligence usually involves a set of operations. A visual scene, say, is encoded by sensors and
represented as a matrix of intensity values. These are processed by
detectors that search for primitive picture components such as line
segments, simple curves, corners, etc. These, in turn, are processed to
infer information about the three-dimensional character of the scene in
terms of its surfaces and shapes. The ultimate goal is to represent the scene by some appropriate model. This model might consist of a
high-level description such as "A hill with a tree on top with cattle
grazing."
The point of the whole perception process is to produce a condensed
representation to substitute for the unmanageably immense, raw input
data. Obviously, the nature and quality of the final representation depend on the goals of the perceiving system. If colors are important,
they must be noticed; if spatial relationships and measurements are
important, they must be judged accurately. Different systems have different goals, but all must reduce the tremendous amount of sensory data at the input to a manageable and meaningful description.
The main difficulty in perceiving a scene is the enormous number of
possible candidate descriptions in which the system might be interested.
If it were not for this fact, one could conceivably build a number of
detectors to decide the category of a scene. The scene's category could then serve as its description. For example, perhaps a detector could be
built that could test a scene to see if it belonged to the category "A hill
with a tree on top with cattle grazing." But why should this detector be
selected instead of the countless others that might have been used?
The strategy of making hypotheses about various levels of description
and then testing these hypotheses seems to offer an approach to this problem. Systems have been constructed that process suitable represen
tations of a scene to develop hypotheses about the components of a
description. These hypotheses are then tested by detectors that are specialized to the component descriptions. The outcomes of these tests, in
turn, are used to develop better hypotheses, etc.
This hypothesize-and-test paradigm is applied at many levels of the
perception process. Several aligned segments suggest a straight line; a
line detector can be employed to test it. Adjacent rectangles suggest the
faces of a solid prismatic object; an object detector can be employed to
test it.
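The line-detection step just mentioned can be rendered as a toy hypothesize-and-test loop. The least-squares fit and the tolerance threshold below are illustrative choices, not taken from any particular vision system:

```python
# A toy hypothesize-and-test cycle: aligned segments suggest a
# straight-line hypothesis (a least-squares fit), which a detector
# then tests by checking that every point lies near the fitted line.

def hypothesize_line(points):
    """Fit y = a*x + b through the points (the hypothesis)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def line_detector(points, hypothesis, tolerance=0.1):
    """Test the hypothesis: do all points lie near the line?"""
    a, b = hypothesis
    return all(abs(y - (a * x + b)) <= tolerance for x, y in points)

segments = [(0, 1.0), (1, 2.05), (2, 2.95), (3, 4.0)]
hyp = hypothesize_line(segments)
print(line_detector(segments, hyp))  # hypothesis confirmed: True
```

A confirmed hypothesis at one level (a line) then becomes evidence for hypotheses at the next level (a face, an object), as described in the text.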
The process of hypothesis formation requires a large amount of
knowledge about the expected scenes. Some AI researchers have
suggested that this knowledge be organized in special structures called
frames or schemas. For example, when a robot enters a room through a
doorway, it activates a room schema, which loads into working memory a
number of expectations about what might be seen next. Suppose the
robot perceives a rectangular form. This form, in the context of a room schema, might suggest a window. The window schema might contain the
knowledge that windows typically do not touch the floor. A special
detector, applied to the scene, confirms this expectation, thus raising
confidence in the window hypothesis. We discuss some of the fun
damental ideas underlying frame-structured representations and inference processes later in the book.
0.2. OVERVIEW
The book is divided into nine chapters and a prospectus. In chapter 1, we introduce a generalized production system and emphasize its importance as a basic building block of AI systems. Several distinctions among
production systems and their control strategies are introduced. These
distinctions are used throughout the book to help classify different AI
systems.
The major emphasis in chapters 2 and 3 is on the search strategies that
are useful in the control of AI systems. Chapter 2 concerns itself with
heuristic methods for searching the graphs that are implicitly defined by
many AI systems. Chapter 3 generalizes these search techniques to
extended versions of these graphs, called AND/OR graphs, and to the
graphs that arise in analyzing certain games.
In chapter 4, we introduce the predicate calculus and describe the
important role that it plays in AI systems. Various rules of inference,
including resolution, are described. Systems for proving theorems using
resolution are discussed in chapter 5. We indicate how several different
kinds of problems can be posed as theorem-proving problems.
Chapter 6 examines some of the inadequacies of simple resolution
systems and describes some alternatives, called rule-based deduction
systems, that are more suitable for many AI applications. To illustrate
how these deduction systems might be used, several small examples,
ranging from information retrieval to automatic programming, are
presented.
In chapters 7 and 8, we present methods for synthesizing sequences of
actions that achieve prescribed goals. These methods are illustrated by
considering simple problems in robot planning and automatic programming. Chapter 7 introduces some of the more basic ideas, and chapter 8
elaborates on the subjects of complex goal interactions and hierarchical
planning.
Chapter 9 discusses some representational formalisms in which the
structure of the representation itself is used to aid retrieval processes and
to make certain common deductions more immediate. Two examples are semantic networks and the so-called frame-based representations. Our point of view toward such representations is that they can best be understood as a form of predicate calculus.
Last, in the prospectus, we review some outstanding AI problems that
are not yet sufficiently well understood to be included in the main part of a textbook. It is hoped that a discussion of these problems will provide perspective about the current status of
the field and useful directions for
future research.
0.3. BIBLIOGRAPHICAL AND HISTORICAL REMARKS
In this section, and in similar sections at the end of each chapter, we
discuss very briefly some of the relevant literature. The material cited is
listed alphabetically by first author in the bibliography at the end of the
book. Many of these citations will be useful to readers who wish to probe
more deeply into either theoretical or applications topics. For completeness, we have occasionally referenced unpublished memoranda and
reports. Authors (or their home institutions) will sometimes provide
copies of such material upon request.
Several books have been written about AI and its applications. The
book by Slagle (1971) describes many early AI systems. Nilsson's (1971) book on problem solving in AI concentrates on search methods and applications of resolution theorem proving. An introductory book by
Jackson (1974) treats these problem-solving ideas and also describes
applications to natural language processing and image
analysis. The book
by Hunt (1975) treats pattern recognition, as well as other AI topics.
Introductory articles about AI topics appear in a book edited by Barr and
Feigenbaum (1980). Nilsson's (1974) survey describes the field in the
early 1970s and contains many references. Michie's (1974) book contains
several of his articles on AI.
Raphael's (1975) book and Winston's (1977) book are easy-to-read and
elementary treatments of AI ideas. The latter contains an excellent
introduction to AI programming methods. A book edited by Bundy
(1978) contains material used in an introductory AI course given at the
University of Edinburgh. A general discussion of AI and its connection
with human intelligence is contained in Boden (1977). McCorduck
(1979) has written an interesting book about the history of artificial intelligence. Marr's (1977) essay and Simon's (1969) book discuss AI
research as a scientific endeavor. Cohen (1979) discusses the relationships
between artistic imagery and visual cognition.
The most authoritative and complete account of mechanisms of
human problem solving from an AI perspective is the book by Newell
and Simon (1972). The book edited by Norman and Rumelhart (1975)
contains articles describing computer models of human memory, and a
psychology text by Lindsay and Norman (1972) is written from an
information-processing viewpoint. A multidisciplinary journal, Cognitive
Science, contains articles on information-processing aspects of human
cognition, perception, and language.
0.3.1. NATURAL LANGUAGE PROCESSING
Grosz (1979) presents a good survey of current techniques and
problems in natural language processing. A collection of important
papers on this topic is contained in a book edited by Rustin (1973). One
of the first successful AI systems for understanding limited fragments of
natural language is described in a book by Winograd (1972).
The book by Newell et al. (1973) describes the five-year goals of a
research project to develop a speech understanding system; the major
results of this research are described in papers by Medress et al. (1977),
and Klatt (1977); reports by Reddy et al. (1977), Woods et al. (1976), and Bernstein (1976); and a book edited by Walker (1978).
A forthcoming book by Winograd (1980a) will present the foundations
of computational mechanisms in natural language processing. Some
interface systems for subsets of natural language are described in an
article edited by Waltz (1977).
Proceedings of biannual conferences on Theoretical Issues in Natural
Language Processing (TINLAP) contain several important papers.
Work in language processing draws on several disciplines besides AI, most notably computational linguistics, philosophy, and cognitive psychology.
0.3.2. INTELLIGENT RETRIEVAL FROM DATABASES
Two excellent books on database systems are those of Date (1977) and
Wiederhold (1977). An important paper by Codd (1970) formalizes a
relational model for database management. Papers describing various
applications of AI and logic to database organization and retrieval are
contained in a book edited by Gallaire and Minker (1978). The article
edited by Waltz (1977) contains several descriptions of systems for
querying databases using simplified natural language.
0.3.3. EXPERT CONSULTING SYSTEMS
Expert consulting systems have been developed for a variety of
domains. The most prominent applications of AI ideas to medical
consulting are those of Pople (1977), for internal medicine; Weiss et al.
(1978), for the glaucomas; and Shortliffe (1976) and Davis (1976), for
bacterial infection diagnosis and therapy.
A consulting system to aid a geologist in evaluating potential mineral
deposits is described by Duda et al. (1978a, 1978b, 1979). Several expert
systems developed at Stanford University are summarized by Feigenbaum
(1977). The most highly developed of these, DENDRAL, computes
structural descriptions of complex organic chemicals from their mass spectrograms and related data [Buchanan and Feigenbaum (1978)].
Other important expert systems are those of Sussman and Stallman
(1975) [see also Stallman and Sussman (1977)] for analyzing the
performance of electronic circuits; and Genesereth (1978, 1979), for
helping casual users of the MACSYMA mathematical formula manipulation system [Martin and Fateman (1971)].
0.3.4. THEOREM PROVING
Early applications of AI ideas to proving theorems were made by
Gelernter (1959) to plane geometry; and by Newell, Shaw, and Simon
(1957) to propositional logic. The resolution principle of Robinson
(1965) greatly accelerated work on automatic theorem proving. Resolution theorem proving is thoroughly explained in books by Chang and Lee
(1973), Loveland (1978), and Robinson (1979).
Bledsoe and his co-workers have developed impressive theorem-proving systems for analysis [Ballantyne and Bledsoe (1977)], for topology
[Bledsoe and Bruell (1974)], and for set theory [Bledsoe (1971)]. Wos and
his co-workers have achieved excellent results with resolution-based
systems [McCharen et al. (1976); Winker and Wos (1978); Winker
(1979)]. Boyer and Moore (1979) have developed a theorem-proving
system that proves theorems about recursive functions and makes strong
use of induction.
Regular workshops are held on automatic deduction. An informal
proceedings was issued for the Fourth Workshop [see WAD in the
Bibliography].
0.3.5. ROBOTICS
Much of the theoretical research in robotics was conducted through
robot projects at MIT, Stanford University, Stanford Research Institute, and the University of Edinburgh in the late 1960s and early 1970s. This
work has been described in several papers and reports. Good accounts
are available for the MIT work by Winston (1972); for the Stanford
Research Institute work by Raphael et al. (1971) and Raphael (1976,
chapter 8); for the Stanford University work by McCarthy et al. (1969);
and for the Edinburgh work by Ambler et al. (1975).
Practical applications of robotics in industrial automation are becoming commonplace. A paper by Abraham (1977) describes a prototype
robot system for assembling small electric motors. Automatic visual sensing with a solid-state TV camera
is used to guide manipulators in the
system. Rosen and Nitzan (1977) discuss the use of vision and other
sensors in industrial automation. For a sample of advanced work in
robotics applications see Nitzan (1979), Binford et al. (1978), Nevins and
Whitney (1977), Will and Grossman (1975), Takeyasu et al. (1977),
Okhotsimski et al. (1979), and Cassinis (1979). International symposia on
industrial robots are held regularly.
0.3.6. AUTOMATIC PROGRAMMING
One of the earliest attempts to use AI ideas for automatically
synthesizing computer programs was by Simon (1963, 1972b). Pioneering papers by Waldinger and Lee (1969) and by Green (1969a) showed how small programs could be synthesized using theorem-proving techniques.
Surveys by Biermann (1976) and by Hammer and Ruth (1979) discuss
several approaches to automatic programming. The PSI project of Green
(1976) includes several components, one of which is a rule-based system
for synthesizing programs from descriptions of abstract algorithms
[Barstow (1979)]. Rich and Shrobe (1979) describe a programmer's
apprentice system for assisting a human programmer.
The related topic of program verification is surveyed by London
(1979). [See also the discussion by Constable (1979) in the same volume.]
The formal verification of properties of programs was discussed early in
the history of computing by Goldstine and von Neumann (1947) and by
Turing (1950). Program verification was mentioned by McCarthy (1962)
as one of the applications of a proposed mathematical science of
computation. Work by Floyd (1967) and Naur (1966) explicitly introduced the idea of invariant assertions. A collection of papers in a book
by Manna and Waldinger (1977) describe logic-based methods for
program verification, synthesis, and debugging.
0.3.7. COMBINATORIAL AND SCHEDULING PROBLEMS
Scheduling problems are usually studied in operations research. Good
general references are the books by Wagner (1975) and by Hillier and
Lieberman (1974). For a discussion of NP-complete problems and other
topics in the mathematical analysis of algorithms, see the book by Aho,
Hopcroft, and Ullman (1974). Lauriere (1978) presents a computer
language and a system for solving combinatorial problems using AI
methods.
0.3.8. PERCEPTION PROBLEMS
Many good papers on the problems of visual perception by machine
are contained in volumes edited by Hansen and Riseman (1978) and by
Winston (1975). Representative systems for processing visual images
include those of Barrow and Tenenbaum (1976) and Shirai (1978). An important paper by Marr (1976) theorizes about the computational and
representational mechanisms of human vision. Kanade (1977) reviews
some of the important general aspects of vision systems, and Agin (1977)
surveys some of the uses of vision systems in industrial automation.
A book by Duda and Hart (1973) describes some of the fundamentals
of computer vision. International Joint Conferences on Pattern Recognition are regularly held and proceedings are published by the IEEE. The Information Processing Techniques Office of the U. S. Defense Advanced Research Projects Agency sponsors Image Understanding Workshops; proceedings of these workshops are available.
0.3.9. OTHER APPLICATIONS
Applications of AI ideas have been made in other areas as well.
Latombe (1977) and Sussman (1977) describe systems for automatic
design; Brown (1977) discusses applications in education; and Gelernter et al. (1977) and Wipke, Ouchi, and Krishnan (1978) have developed systems for organic chemical synthesis.
0.3.10. IMPORTANT SOURCE MATERIALS
In addition to the books already mentioned, several volumes of
collected papers are cited at the beginning of the bibliography. These
include a series of nine volumes called Machine Intelligence (MI) and a
volume entitled Computers and Thought (CT) of important early papers
edited by Feigenbaum and Feldman (1963).
The international journal Artificial Intelligence is a primary publication medium for papers in the field. AI papers are also published in the
Journal of the Association for Computing Machinery (JACM), the
Communications of the Association for Computing Machinery (CACM),
and in various publications of the Institute of Electrical and Electronics
Engineers (IEEE).
International Joint Conferences on Artificial Intelligence (IJCAI) have
been held biennially since 1969. The Association for Computing
Machinery (ACM) publishes a newsletter devoted to AI called the
SIGART Newsletter. In Britain, the Society for the Study of Artificial
Intelligence and Simulation of Behavior publishes the AISB Quarterly
and holds biannual summer conferences. The Canadian Society for
Computational Studies of Intelligence (CSCSI/SCEIO) publishes an
occasional newsletter.
Some of the topics treated in this book assume some familiarity with
the programming language LISP. For a readable introduction, see the
book by Weissman (1967). Friedman (1974) is an entertaining programmed instruction manual. For a more technical treatment, see the
book by Allen (1978).
CHAPTER 1
PRODUCTION SYSTEMS AND AI
Most AI systems display a more or less rigid separation between the
standard computational components of data, operations, and control.
That is, if these systems are described at an appropriate level, one can
often identify a central entity that might be called a global database that is
manipulated by certain well-defined operations, all under the control of
some global control strategy. We stress the importance of identifying an
appropriate level of description; near the machine-code level, any neat separation into distinct components can disappear; at the top level, the
complete AI system can consist of several database/operations/control
modules interacting in a complex fashion. Our point is that a system
consisting of separate database, operations, and control components
represents an appropriate metaphorical building block for constructing
lucid descriptions of AI systems.
1.1. PRODUCTION SYSTEMS
Various generalizations of the computational formalism known as a
production system involve a clean separation of these computational
components and thus seem to capture the essence of operation of many
AI systems. The major elements of an AI production system are a global
database, a set of production rules, and a control system.
The global database is the central data structure used by an AI
production system. Depending on the application, this database may be
as simple as a small matrix of numbers or as complex as a large, relational,
indexed file structure. (The reader should not confuse the phrase "global
database," as it is used in this book, with the databases of database
systems.)
The production rules operate on the global database. Each rule has a
precondition that is either satisfied or not by the global database. If the
precondition is satisfied, the rule can be applied. Application of the rule
changes the database. The control system chooses which applicable rule
should be applied and ceases computation when a termination condition
on the global database is satisfied.
There are several differences between this production system structure
and conventional computational systems that use hierarchically organized programs. The global database can be accessed by all of the rules; no
part of it is local to any of them in particular. Rules do not "call" other
rules; communication between rules occurs only through the global
database. These features of production systems are compatible with the
evolutionary development of large AI systems requiring extensive
knowledge. One difficulty with using conventional systems of hierarchically organized programs in AI applications is that additions or changes to the knowledge base might require extensive changes to the various existing programs, data structures, and subroutine organization. The
production system design is much more modular, and changes to the
database, to the control system, or to the rules can be made relatively independently.
We shall distinguish several varieties of production systems. These
differ in the kinds of control systems they use, in properties of their rules
and databases, and in the ways in which they are applied to specific
problems.
As a short example of what we mean by an AI production system, we
shall illustrate how one is used to solve a simple puzzle.
1.1.1. THE 8-PUZZLE
Many AI applications involve composing a sequence of operations.
Controlling the actions of a robot and automatic programming are two
examples. A simple and perhaps familiar problem of this sort, useful for
illustrating basic ideas, is the 8-puzzle. The 8-puzzle consists of eight
numbered, movable tiles set in a 3 X 3 frame. One cell of the frame is
always empty, thus making it possible to move an adjacent numbered tile
into the empty cell—or, we could say, to move the empty cell. Such a
puzzle is illustrated in Figure 1.1. Two configurations of tiles are given. Consider the problem of changing the initial configuration into the goal
configuration. A solution to the problem is an appropriate sequence of
moves, such as "move tile 6 down, move tile 8 down, ..., etc."
To solve a problem using a production system, we must specify the
global database, the rules, and the control strategy. Transforming a
problem statement into these three components of a production system is
often called the representation problem in AI. Usually there are several ways to so represent a problem. Selecting a good representation is one of
the important arts involved in applying AI techniques to practical
problems.
For the 8-puzzle and certain other problems, we can easily identify
elements of the problem that correspond to these three components.
These elements are the problem states, moves, and goal. In the 8-puzzle,
each tile configuration is a problem state. The set of all possible
configurations is the space of problem states or the problem space. Many
of the problems in which we are interested have very large problem
spaces. The 8-puzzle has a relatively small space; there are only 362,880
(that is, 9!) different configurations of the 8 tiles and the blank space. (This space happens to be partitioned into two disjoint subspaces of
181,440 states each.)
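Both counts can be checked mechanically. The sketch below is ours, not the book's: it represents a configuration as a flat 9-tuple (0 standing for the blank) and enumerates one of the two subspaces by breadth-first search.

```python
from math import factorial

def successors(s):
    """All states reachable from s by one blank move (s is a 9-tuple, 0 = blank)."""
    i = s.index(0)
    row, col = divmod(i, 3)
    out = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = 3 * r + c
            t = list(s)
            t[i], t[j] = t[j], t[i]
            out.append(tuple(t))
    return out

def reachable_from(start):
    """Breadth-first enumeration of every configuration reachable from start."""
    seen = {start}
    frontier = {start}
    while frontier:
        frontier = {t for s in frontier for t in successors(s)} - seen
        seen |= frontier
    return seen

total = factorial(9)                              # 362,880 configurations in all
half = reachable_from((1, 2, 3, 8, 0, 4, 7, 6, 5))
print(total, len(half))                           # 362880 181440
```

The other half of the space consists of the configurations of opposite permutation parity, which no sequence of moves can reach.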
Once the problem states have been conceptually identified, we must
construct a computer representation, or description, of them. This
description is then used as the global database of a production system.
For the 8-puzzle, a straightforward description is a 3 X 3 array or matrix
of numbers. The initial global database is this description of the initial
problem state. Virtually any kind of data structure can be used to describe states. These include symbol strings, vectors, sets, arrays, trees,
and lists. Sometimes, as in the 8-puzzle, the form of the data structure
bears a close resemblance to some physical property of the problem being
solved.
    Initial:        Goal:

    2 8 3           1 2 3
    1 6 4           8   4
    7   5           7 6 5

Fig. 1.1 Initial and goal configurations for the 8-puzzle.
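As an illustration of such a description, here is one possible encoding of the two configurations as flat tuples; the tile layouts are our reading of Figure 1.1, which is damaged in this copy:

```python
# One possible computer description of 8-puzzle states: a flat tuple of
# nine numbers in row-major order, with 0 standing for the empty cell.
# The configurations below are our reading of Figure 1.1.
INITIAL = (2, 8, 3,
           1, 6, 4,
           7, 0, 5)
GOAL    = (1, 2, 3,
           8, 0, 4,
           7, 6, 5)

def show(state):
    """Render a state as the 3 x 3 matrix the text describes."""
    return "\n".join(" ".join(str(x) if x else "_" for x in state[r*3:r*3+3])
                     for r in range(3))

print(show(INITIAL))
```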
A move transforms one problem state into another state. The 8-puzzle
is conveniently interpreted as having the following four moves: Move
empty space (blank) to the left, move blank up, move blank to the right,
and move blank down. These moves are modeled by production rules
that operate on the state descriptions in the appropriate manner. The
rules each have preconditions that must be satisfied by a state description
in order for them to be applicable to that state description. Thus, the
precondition for the rule associated with "move blank up" is derived from the requirement that the blank space must not already be in the top
row.
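A hedged sketch of how the four moves might be written as precondition/action pairs over the flat-tuple state description; the factoring through a single `make_rule` helper is our own choice, not the book's:

```python
# Each move is a production rule: a precondition on the state paired with
# an action that produces the new state (state is a 9-tuple, 0 = blank).
def swap(state, i, j):
    t = list(state)
    t[i], t[j] = t[j], t[i]
    return tuple(t)

def make_rule(dr, dc):
    def precondition(state):                  # e.g. for "up": blank not in top row
        r, c = divmod(state.index(0), 3)
        return 0 <= r + dr < 3 and 0 <= c + dc < 3
    def action(state):
        i = state.index(0)
        return swap(state, i, i + 3 * dr + dc)
    return precondition, action

RULES = {"up": make_rule(-1, 0), "down": make_rule(1, 0),
         "left": make_rule(0, -1), "right": make_rule(0, 1)}

state = (2, 8, 3, 1, 6, 4, 7, 0, 5)           # blank in the bottom row
pre, act = RULES["down"]
print(pre(state))                             # False: blank already at the bottom
```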
In the 8-puzzle, we are asked to produce a particular problem state,
namely, the goal state shown in Figure 1.1. We can also deal with problems for which the goal is to achieve any one of an explicit list of
problem states. A further generalization is to specify some true/false
condition on states to serve as a
goal condition. Then the goal would be to
achieve any state satisfying this condition. Such a condition implicitly defines some set of goal states. For example, in the 8-puzzle, we might
want to achieve any tile configuration for which the sum of
the numbers
labeling the tiles in the first row is 6. In our language of states, moves, and
goals, a solution to a problem is a sequence of moves that transforms an
initial state into a goal state.
The problem goal condition forms the basis for the termination
condition of the production system. The control strategy repeatedly
applies rules to state descriptions until a description of a goal state is
produced. It also keeps track of the rules that have been applied so that it
can compose them into the sequence representing the problem solution.
In certain problems, we want the solution to be subject to certain
additional constraints. For example, we may want the solution to our
8-puzzle problem to have the smallest number of moves. In general we
ascribe a cost to each move and then attempt to find a solution having
minimal cost. These elaborations can easily be handled by methods we
describe later on.
1.1.2. THE BASIC PROCEDURE
The basic production system algorithm for solving a problem like the
8-puzzle can be written in nondeterministic form as follows:
Procedure PRODUCTION
1 DATA ← initial database
2 until DATA satisfies the termination condition, do:
3 begin
4 select some rule, R, in the set of rules that can be applied to DATA
5 DATA ← result of applying R to DATA
6 end
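Rendered directly in code, the procedure might look as follows. The nondeterminism of statement 4 is packaged as a caller-supplied `select` function, and the counting rules at the bottom are a toy instance of ours, purely for illustration:

```python
# Procedure PRODUCTION as a loop. Each rule is a (precondition, action)
# pair; `select` embodies the control strategy left unspecified in the text.
def production(data, rules, select, terminated):
    while not terminated(data):
        applicable = [r for r in rules if r[0](data)]   # rules whose precondition holds
        rule = select(applicable, data)                 # statement 4: control strategy
        data = rule[1](data)                            # statement 5: apply the rule
    return data

# Toy instance: count from 0 to 10 by applying "add 1" or "add 2".
rules = [(lambda d: d + 1 <= 10, lambda d: d + 1),
         (lambda d: d + 2 <= 10, lambda d: d + 2)]
result = production(0, rules, lambda apps, d: apps[0], lambda d: d == 10)
print(result)  # 10
```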
1.1.3. CONTROL
The above procedure is nondeterministic because we have not yet
specified precisely how we are going to select an applicable rule in
statement 4. Selecting rules and keeping track of those sequences of rules already tried and the databases they produced constitute what we call the
control strategy for production systems. In most AI applications, the
information available to the control strategy is not sufficient to permit selection of the most appropriate rule on every pass through step 4. The operation of AI production systems can thus be characterized as a search
process in which rules are tried until some sequence of them is found that
produces a database satisfying the termination condition. Efficient control strategies require enough knowledge about the problem being
solved so that the rule selected in step 4 has a good chance of being the
most appropriate one.
We distinguish two major kinds of control strategies: irrevocable and
tentative. In an irrevocable control regime, an applicable rule is selected
and applied irrevocably without provision for reconsideration later. In a
tentative control regime, an applicable rule is selected (either arbitrarily
or perhaps with some good reason), the rule is applied, but provision is
made to return later to this point in the computation to apply some other
rule.
We further distinguish two different types of tentative control regimes.
In one, which we call backtracking, a backtracking point is established
when a rule is selected. Should subsequent computation encounter
difficulty in producing a solution, the state of the computation reverts to
the previous backtracking point, where another rule is applied instead, and the process continues.
In the second type of tentative control regime, which we call
graph-search control, provision is made for keeping track of the effects of
several sequences of rules simultaneously. Various kinds of graph structures and graph searching procedures are used in this type of control.
1.1.4. EXAMPLES OF CONTROL REGIMES
1.1.4.1. Irrevocable. At first thought, it might seem that an irrevocable
control regime would never be appropriate for production systems
expected to solve problems requiring search. Trial-and-error methods
seem to be inherent in solving puzzles, for example. One might argue that if a control strategy of a production system possessed sufficient know
ledge about a puzzle to select irrevocably an appropriate rule to apply to each state description, then it would have the puzzle's solution built into it and, if
so, can hardly be said to have "solved" the puzzle, for it already
knew the solution. Such an argument fails to acknowledge the distinction
between the explicit local knowledge, about how to proceed toward a goal
from any state, and the implicit global knowledge, of the complete
solution. When infallible local knowledge is available, an irrevocable production system can use it to construct the explicit global knowledge of
a solution (without having the explicit global knowledge originally).
Outside of AI, one of the most common examples of the use of local
knowledge to construct a global solution is in the "hill-climbing" process
of finding the maximum of a function. At any point, we proceed in the
direction of the steepest gradient (the local knowledge) to find eventually
a maximum of the function (the global knowledge). For certain kinds of
functions (those with a single maximum and certain other properties),
knowledge of the direction of the steepest gradient is sufficient to find a solution.
We can use the hill-climbing process directly in an irrevocable
production system. We need only some real-valued function on the
global databases. The control strategy uses this function to select a rule. It
selects (irrevocably) the applicable rule that produces a database giving
the largest increase in the value of the function. Our hill-climbing
function must be such that it attains its highest value for a database satisfying the termination condition.
Applying hill-climbing to the 8-puzzle, we might use, as a function of
the state description, the negative of the number of tiles "out of place," as
compared to the goal state description. For example, the value of this
function for the initial state in Figure 1.1 is −4, and the value for the goal
state is 0. We can easily compute the value of this function for any state
description.
From the initial state, we achieve maximum increase in the value of
this function by moving the blank up, so our production system selects
the corresponding rule. In Figure 1.2 we show the sequence of states
traversed by such a production system in solving this puzzle. The value of
our hill-climbing function for each state description is circled. The figure
shows that one of the rule applications along the path did not increase the
value of our function. If none of the applicable rules permits an increase
in the value of our function, a rule is selected (arbitrarily) that does not
diminish the value. If there are no such rules, the process halts.
Fig. 1.2 Hill-climbing values for states of the 8-puzzle.
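The whole irrevocable regime can be sketched as follows. The configurations are our reading of Figure 1.1, and breaking ties by a fixed move order (up, left, right, down) is our own assumption; the text leaves that choice arbitrary:

```python
INITIAL = (2, 8, 3, 1, 6, 4, 7, 0, 5)    # our reading of Figure 1.1
GOAL    = (1, 2, 3, 8, 0, 4, 7, 6, 5)

def f(state):
    """Negative of the number of tiles out of place (the blank is not counted)."""
    return -sum(1 for a, b in zip(state, GOAL) if a and a != b)

def moves(state):
    """Successor states, generated in the fixed order up, left, right, down."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc, ok in ((-1, 0, r > 0), (0, -1, c > 0),
                       (0, 1, c < 2), (1, 0, r < 2)):
        if ok:
            j = i + 3 * dr + dc
            t = list(state)
            t[i], t[j] = t[j], t[i]
            yield tuple(t)

state, steps = INITIAL, 0
while state != GOAL and steps < 20:
    best = max(moves(state), key=f)      # ties resolved by move order (our choice)
    if f(best) < f(state):               # no non-diminishing rule: halt
        break
    state, steps = best, steps + 1
print(steps, f(state))  # 5 0
```

With this tie-breaking the run reaches the goal in five moves, passing through one step on which the function value does not increase, just as the text describes.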
For the instance of the 8-puzzle in Figure 1.2, the hill-climbing
strategy allowed us to find a path to a goal state. In general, however,
hill-climbing functions can have multiple local maxima, which frustrates
hill-climbing methods. For example, suppose the goal state is
    1 2 3
    7   4
    8 6 5
and the initial state is
    1 2 5
    7   4
    8 6 3
Any applicable rule applied to the initial state description lowers the
value of our hill-climbing function. In this case the initial state description is at a local (but not a global) maximum of the function.
Other types of hill-climbing frustrations also occur: The process may
get stuck on "plateaus" and "ridges." Of course, these difficulties could
be solved if we could devise a better behaved hill-climbing function—one that had just one global maximum and no plateaus, for
example. Easily computable functions for problems of interest in AI
typically have some of the difficulties we have mentioned. Thus, the use
of hill-climbing methods to guide rule selection in irrevocable production systems is quite limited.
Even though the control strategy cannot always select the best rule to
apply at any stage, there are times when an irrevocable regime is
appropriate. For example, if the application of what might turn out to be an inappropriate rule does not foreclose a subsequent application of an
appropriate rule, nothing (other than making superfluous rule applications) is risked by applying rules irrevocably. We shall see some examples
of this possibility later.
1.1.4.2. Backtracking. In many problems of interest, applying an
inappropriate rule may prevent or substantially delay successful termination. In these cases, we want a control strategy that can try a rule and, if
it later discovers that this rule was inappropriate, can go back and try
another one instead.
The backtracking process is one way in which the control strategy can
be tentative. A rule is selected, and if it doesn't lead to a solution, the
intervening steps are "forgotten," and another rule is selected instead.
Formally, the backtracking strategy can be used regardless of how much or how little knowledge is available to bring to bear on rule selection. If no
knowledge is available, rules can be selected according to some arbitrary
scheme. Ultimately, control will backtrack to select the appropriate rule. Obviously, if good rule-selection knowledge can be used, backing up to
consider alternative rules will occur less often, and the whole process will
be more efficient.
As an example, let us apply the backtracking strategy to our 8-puzzle
example of Figure 1.1 where rules are selected according to the arbitrary
scheme of first attempting to move the blank square left, then up, then
right, then down. Backing up will occur (a) whenever
we generate a state
description that already occurs on the path back to the initial state description, (b) whenever we have applied an arbitrarily set number of rules without having generated a goal state description, or (c) whenever
there are no (more) applicable rules. In (b) above, the number chosen is the depth bound of this backtracking process. In Figure 1.3 we show a
sequence of tentative rule applications and backups to illustrate how backtracking might be applied to the 8-puzzle. In Figure 1.3, each state description is labeled by a (circled) number to indicate its order in the
sequence of state descriptions produced by the production system. We
cannot depict the entire search for a solution in the figure; it is too
extensive. Eventually though, a solution path will be found, because all
possible paths (of length less than 6) will be explored. Note that if the depth bound is set too low, the process may not find a solution.
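This depth-bounded backtracking strategy can be sketched as a recursive search, using the same arbitrary left, up, right, down ordering; the puzzle configurations are again our reading of Figure 1.1:

```python
# Backups occur on a repeat of a state already on the path, on exhausting
# the depth bound, or when no rule applies—conditions (a), (b), (c).
INITIAL = (2, 8, 3, 1, 6, 4, 7, 0, 5)    # our reading of Figure 1.1
GOAL    = (1, 2, 3, 8, 0, 4, 7, 6, 5)

def apply_move(state, dr, dc):
    i = state.index(0)
    r, c = divmod(i, 3)
    if not (0 <= r + dr < 3 and 0 <= c + dc < 3):
        return None                      # precondition fails
    j = i + 3 * dr + dc
    t = list(state)
    t[i], t[j] = t[j], t[i]
    return tuple(t)

MOVES = [("left", 0, -1), ("up", -1, 0), ("right", 0, 1), ("down", 1, 0)]

def backtrack(state, path, bound):
    """Return a list of move names reaching GOAL, or None (forcing a backup)."""
    if state == GOAL:
        return []
    if bound == 0 or state in path:      # depth bound hit, or loop to an earlier state
        return None
    for name, dr, dc in MOVES:
        nxt = apply_move(state, dr, dc)
        if nxt is not None:
            rest = backtrack(nxt, path + [state], bound - 1)
            if rest is not None:
                return [name] + rest
    return None

solution = backtrack(INITIAL, [], 6)
print(solution)
```

Because a five-move solution exists and the search is exhaustive within the bound, a bound of six suffices; a smaller bound may fail, as the text warns.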
The backtracking process is more efficient if rule selection is not
arbitrary but is instead guided by information about what might be the
best
move. If this information is reasonably reliable, then the appropriate
rule will usually be selected and there will be little need for backing up. In
the 8-puzzle, for example, we might use a hill-climbing function as the means for selecting a rule. Whereas hill-climbing with an irrevocable
control regime might get stuck on local maxima, backtracking allows
alternative paths to be pursued.
1.1.4.3. Graph Search. Graphs (or, more specifically, trees) are extremely
useful structures for keeping track of the effects of several sequences of
rules. We will be discussing these structures in much more detail in
chapters 2 and 3, giving only a short example here of their use.
[Figure residue omitted; the original shows the numbered sequence of state descriptions produced by the backtracking strategy. Its marginal annotations read: "This state occurs on the path back to the initial state, so we retract the last move and apply 'move blank right' instead"; "Again, this repeats one on the path, so we retract the last move and apply 'move blank down' to state (6) instead"; "We have now applied six rules without reaching the goal, so we retract the last move. There are no more untried rules to apply to the previous state, so we retract the next-to-the-last move also and apply 'move blank down' to state (5)"; and "Again, we have applied six rules without reaching a goal, so, etc."]
Fig. 1.3 A backtracking control strategy applied to the 8-puzzle.
Suppose we decide to use a graph-search control regime in solving the
8-puzzle problem posed in Figure 1.1. We can keep track of the various
rules applied and the databases produced by a structure called a search
tree. An example of such a tree is in Figure 1.4. At the top or root of the
tree is a description of the initial configuration. The various rules that can
be applied correspond to links or directed arcs to descendant nodes,
representing those states that can be reached by just one move from the
initial state. A graph-search control strategy grows such a tree until a
database is produced that satisfies the termination condition.
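A minimal graph-search control regime for the 8-puzzle can be sketched with breadth-first growth of the tree; recording a single parent per state keeps the structure a tree. The configurations are our reading of Figure 1.1:

```python
from collections import deque

INITIAL = (2, 8, 3, 1, 6, 4, 7, 0, 5)    # our reading of Figure 1.1
GOAL    = (1, 2, 3, 8, 0, 4, 7, 6, 5)

def successors(state):
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= r + dr < 3 and 0 <= c + dc < 3:
            j = i + 3 * dr + dc
            t = list(state)
            t[i], t[j] = t[j], t[i]
            yield tuple(t)

def graph_search(start, goal):
    """Grow the search tree breadth-first until the goal state appears."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s == goal:
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]            # state sequence from start to goal
        for t in successors(s):
            if t not in parent:          # keep the tree a tree: one parent per state
                parent[t] = s
                queue.append(t)

path = graph_search(INITIAL, GOAL)
print(len(path) - 1)  # 5: breadth-first growth finds a shortest solution
```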
In Figure 1.4, we show all applicable rules being applied to every state
description. This sort of indecision on the part of the control system is usually grossly inefficient because the resulting tree grows too rapidly. An
intelligent control strategy would grow a much narrower tree, using its
special knowledge to focus the growth more directly toward the goal. We
shall be discussing several methods for achieving such focusing in
chapter 2.
Even though we use graphs of
this sort only with graph-search control
regimes, it is useful to notice that an irrevocable control regime
corresponds to following just a single path down through the search tree.
(We have already seen that such a simple strategy can sometimes be
usefully employed.) A backtracking regime does not maintain the entire
search tree structure; it merely keeps track of the path that it is working
on currently, modifying it when necessary.
1.1.5. PROBLEMS OF REPRESENTATION
Efficient problem solution requires more than an efficient control
strategy. It requires selecting good representations for problem states,
moves, and goal conditions. The representation of a problem has a great
influence on the effort needed to solve it. Obviously one prefers
representations with small state spaces. There are many examples of
seemingly difficult puzzles that, when represented appropriately, have
trivially small state spaces. Sometimes a given state space can be
collapsed by recognizing that certain rules can be discarded or that rules
can be combined to make more powerful ones. Even when such simple transformations cannot be achieved, it is possible that a complete
reformulation of the problem (changing the very notion of what
a state is,
for example) will result in a smaller space.
Fig. 1.4 A search tree for the 8-puzzle.
The processes required to represent problems initially and to improve
given representations are still poorly understood. It seems that desirable
shifts in a problem's representation depend on experience gained in
attempts to solve it in a given representation. This experience allows us to
recognize the occurrence of simplifying notions, such as symmetries, or useful sequences of rules that ought to be combined into macro-rules.
For example, an initial representation of the
8-puzzle might specify the
32 rules corresponding to: move tile 1 left, move tile 1 right, move tile 1
up, move tile 1 down, move tile 2 left, etc. Of course, most of these rules
are never applicable to any given state description. After this fact
becomes apparent to a problem solver, he would perhaps hit upon the
better representation involving moving just the blank space.
We shall next examine two more example problems to illustrate how
they might be represented for solution by a production system.
1.1.6. SOME EXAMPLE PROBLEM REPRESENTATIONS
A wide variety of problems can be set up for solution by our
production system approach. The formulations that we use in the
following examples do not necessarily represent the only ways in which
these problems can be solved. The reader may be able to think of good
alternatives.
1.1.6.1. A Traveling Salesman Problem. A salesman must visit each of
the 5 cities shown in the map of Figure 1.5. There is a road between every
pair of cities, and the distance is given next to the road. Starting at city A,
the problem is to find a route of minimal distance that visits each of the
cities only once and returns to A.
Fig. 1.5 A map for the traveling salesman problem.
[Figure residue partially legible: the root (A) has successors (A B), (A C), (A D), (A E); the path (A C D), (A C D E), (A C D E B), (A C D E B A) reaches the goal.]
Fig. 1.6 A search tree for the traveling salesman problem.
To set up this problem we specify the following:
The global database shall be a list of the cities
visited so far. Thus the initial database is
described by the list (A). We do not allow lists that name any city more than once,
except that after all of the other cities have been named, A can be named again.
The rules correspond to the decisions (a) go
to city A next, (b) go to city B next, ..., and
(e) go to city E next. A rule is not applicable
to a database unless it transforms it into some
legal one. Thus the rule corresponding to "go
to city A next" is not applicable to any list not
already naming all of the cities.
Any global database beginning and ending
with A and naming all of the other cities
satisfies the termination condition. Notice that
we can use the distance chart of Figure 1.5 to
compute the total distance for any trip. Any
trip proposed as a solution must be of
minimal distance.
Figure 1.6 shows part of the search tree that might be generated by a
graph-search control strategy in solving this problem. The numbers next
to the edges of the tree are the increments of distance added to the trip by
applying the corresponding rule.
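The formulation above can be sketched in code. The road lengths of Figure 1.5 are not legible in our copy, so the distance table below is hypothetical; only the production-system formulation matters here:

```python
from itertools import permutations

# Hypothetical symmetric road lengths for cities A-E (Figure 1.5 is not
# legible in our copy); each unordered pair is keyed by a frozenset.
DIST = {frozenset(p): d for p, d in {
    ("A", "B"): 7, ("A", "C"): 6, ("A", "D"): 10, ("A", "E"): 13,
    ("B", "C"): 7, ("B", "D"): 10, ("B", "E"): 10,
    ("C", "D"): 5, ("C", "E"): 9, ("D", "E"): 6}.items()}
CITIES = "ABCDE"

def applicable(db, city):
    """Rule 'go to city X next' applies only if it yields a legal list."""
    if city == "A":
        return set(db) == set(CITIES)    # A may be named again only at the end
    return city not in db

def terminated(db):
    return db[0] == db[-1] == "A" and set(db) == set(CITIES)

def cost(db):
    return sum(DIST[frozenset(p)] for p in zip(db, db[1:]))

def best_tour(db=("A",)):
    """Search over all rule sequences, keeping the minimal-distance tour."""
    if terminated(db):
        return db, cost(db)
    tours = [best_tour(db + (c,)) for c in CITIES if applicable(db, c)]
    return min(tours, key=lambda t: t[1])

tour, length = best_tour()
print(tour, length)
```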
1.1.6.2. A Syntax Analysis Problem. Another problem we might want
to solve using a production system approach is whether an arbitrary
sequence of symbols is a sentence in a language; that is, could it have been
generated by a grammar. Deciding whether a symbol string is a sentence is called the parsing problem, and production systems can be used to do parsing.
Suppose we are given a simple context-free grammar that defines a
language. As an example, let the grammar contain the following terminal
symbols,
of approves new president company sale the
and the following non-terminal symbols,
S NP VP PP P V DNP DET A N.
The grammar is defined by the following rewrite rules:
DNP VP → S
V DNP → VP
P DNP → PP
DNP PP → DNP
DET NP → DNP
A NP → NP
N → NP
of → P
approves → V
new → A
president → N
company → N
sale → N
the → DET
This grammar is too simple to be useful in analyzing most English
sentences, but it could be expanded to make it a bit more realistic.
Suppose we wanted to determine whether or not the following string of
symbols is a sentence in the language:
The president of the new company approves the sale
To set up this problem, we specify the following:
The global database shall consist of a string of
symbols. The initial database is the given
string of symbols that we want to test.
The production rules are derived from the
rewrite rules of the grammar. The right-hand
side of a grammar rule can replace any
occurrence of the left-hand side in a database.
For example, the grammar rule
DNP VP → S is used to change any
database containing the subsequence
DNP VP to one in which this subsequence is replaced by S. A rule is not applicable if the database does not contain the left-hand
side of the corresponding grammar rule. Also,
a rule may be applicable to a database in
different ways, corresponding to different occurrences of the left-hand side of the grammar rule in the database.
Only that database consisting of the single
symbol S satisfies the termination condition.
Part of a search tree for this problem is shown in Figure 1.7. In this
simple example, aside from different possible orderings of rule applica
tions, there is very little branching in the tree.
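This reduction process can be sketched as a search over rule applications. The rule set below is our reconstruction of the grammar from a damaged copy, so treat it as illustrative:

```python
# The grammar as we read it; each entry replaces an occurrence of the key
# sequence (the left-hand side) by the value symbol (the right-hand side).
RULES = {
    ("DNP", "VP"): "S", ("V", "DNP"): "VP", ("P", "DNP"): "PP",
    ("DNP", "PP"): "DNP", ("DET", "NP"): "DNP",
    ("A", "NP"): "NP", ("N",): "NP",
    ("the",): "DET", ("of",): "P", ("approves",): "V", ("new",): "A",
    ("president",): "N", ("company",): "N", ("sale",): "N",
}

def parses(symbols):
    """Depth-first search over all rule applications; True iff the string
    can be rewritten to the single symbol S."""
    seen = set()
    def search(s):
        if s == ("S",):
            return True
        if s in seen:
            return False
        seen.add(s)
        for lhs, rhs in RULES.items():
            n = len(lhs)
            for i in range(len(s) - n + 1):
                if s[i:i + n] == lhs and search(s[:i] + (rhs,) + s[i + n:]):
                    return True
        return False
    return search(tuple(symbols))

sentence = "the president of the new company approves the sale".split()
print(parses(sentence))  # True
```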
1.1.7. BACKWARD AND BIDIRECTIONAL PRODUCTION
SYSTEMS
We might say that our production system for solving the 8-puzzle
worked forward from the initial state to a goal state. Thus, we could call it
a forward production system. We could also have solved the problem in a
backward direction, by starting at the goal state, applying inverse moves,
and working toward the initial state. Each inverse move would produce a
subgoal state from which the immediately superordinate goal state could
be reached by one forward move. A production system for solving the
8-puzzle in this manner would merely reverse the roles of states and goals
and would use rules that correspond to inverse moves.
Setting up a backward-directed production system in the case of the
8-puzzle is simple because the goal is described by an explicit state. We
can also set up backward-directed production systems when the goal is
described by a condition. We discuss this situation later, after introducing
an appropriate language (predicate logic) for talking about goals de
scribed by conditions.
Initial: The president of the new company approves the sale
(This sequence of rules replaces terminal symbols by non-terminal symbols.)
DET N P DET A N V DET N
(Another sequence produces the following string:)
DNP PP VP
DNP VP
S (Goal)
Fig. 1.7 A search tree for the syntax analysis problem.
Although there is no formal difference between a production system
that works on a problem in a forward direction and one that works in a
backward direction, it is often convenient to make this distinction
explicit. When a problem has intuitively clear states and goals and when
we choose to employ descriptions of these states as the global database of
a production system, we say that the system is a forward production
system. Rules are applied to the state descriptions to produce new state
descriptions, and these rules are called F-rules. If, instead, we choose to
employ problem goal descriptions as the global database, we shall say
that the system is a backward production system. Then, rules are applied
to goal descriptions to produce subgoal descriptions, and these rules will
be called B-rules.
In the 8-puzzle, with a single initial state and a single goal state, it
makes no difference whether the problem is solved in the forward or the backward direction. The computational effort is the same for both
directions. There are occasions, however, when it is more efficient to solve
a problem in one direction rather than the other. Suppose, for example,
that there were a large number of explicit goal states and one initial state.
It would not be very efficient to try to solve such a problem in the
backward direction; we do not know a priori which goal state is "closest"
to the initial state, and we would have to begin a search from all of them.
The most efficient solution direction, in general, depends on the structure
of the state space.
It is often a good idea to attempt a solution to a problem searching
bidirectionally (that is, both forward and backward simultaneously). We
can achieve this effect with production systems also. To do so, we must
incorporate both state descriptions and goal descriptions into the global
database. F-rules are applied to the state description part, while B-rules
are applied to the goal description part. In this type of search, the
termination condition to be used by the control system (to decide when
the problem is solved) must be stated as some type of matching condition
between the state description part and the goal description part of the
global database. The control system must also decide at every stage whether to apply an applicable F-rule or an applicable B-rule.
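The bidirectional scheme just described can be sketched in a few lines. The toy problem below is our own (reach 97 from 1 using the F-rules x+1 and 2x, with their inverses as B-rules); the termination condition is a matching condition between the forward and backward frontiers, as in the text.

```python
from collections import deque

def bidirectional_search(start, goal, f_rules, b_rules):
    """Breadth-first search forward from the state description and
    backward from the goal description, stopping when the two parts
    of the global database match."""
    f_seen, b_seen = {start: [start]}, {goal: [goal]}
    f_front, b_front = deque([start]), deque([goal])
    while f_front and b_front:
        s = f_front.popleft()          # apply F-rules to the state part
        for rule in f_rules:
            for t in rule(s):
                if t not in f_seen:
                    f_seen[t] = f_seen[s] + [t]
                    if t in b_seen:    # matching condition: paths meet
                        return f_seen[t] + b_seen[t][-2::-1]
                    f_front.append(t)
        g = b_front.popleft()          # apply B-rules to the goal part
        for rule in b_rules:
            for t in rule(g):
                if t not in b_seen:
                    b_seen[t] = b_seen[g] + [t]
                    if t in f_seen:
                        return f_seen[t] + b_seen[t][-2::-1]
                    b_front.append(t)
    return None

f_rules = [lambda x: [x + 1], lambda x: [x * 2]]
b_rules = [lambda x: [x - 1] if x > 1 else [],
           lambda x: [x // 2] if x % 2 == 0 else []]
path = bidirectional_search(1, 97, f_rules, b_rules)
```

The returned path splices the forward path to the meeting point onto the reversed backward path from the goal.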
1.2. SPECIALIZED PRODUCTION SYSTEMS
1.2.1. COMMUTATIVE PRODUCTION SYSTEMS
Under certain conditions, the order in which a set of applicable rules is
applied to a database is unimportant. When these conditions are satisfied,
a production system improves its efficiency by avoiding needless exploration of redundant solution paths that are all equivalent except for rule ordering.
In Figure 1.8 we have three rules, R1, R2, and R3, that are applicable to the database denoted by S0. After applying any one of these rules, all three rules are still applicable to the resulting databases; after applying any pair in sequence, all three are still applicable. Furthermore, Figure 1.8 demonstrates that the same database, namely SG, is achieved regardless of the order in which the rules in the set {R1, R2, R3} are applied.
We say that a production system is commutative if it has the following properties with respect to any database D:

(a) Each member of the set of rules applicable to D is also applicable to any database produced by applying an applicable rule to D.

(b) If the goal condition is satisfied by D, then it is also satisfied by any database produced by applying any applicable rule to D.

(c) The database that results from applying to D any sequence composed of rules applicable to D is invariant under permutations of the sequence.
The rule applications in Figure 1.8 possess this commutative property.
In producing the database denoted by SG in Figure 1.8, we clearly need
consider only one of the many paths shown. Methods for avoiding
exploration of redundant paths are obviously of great importance for
commutative systems.
Note that commutativity of a system does not mean that the entire
sequence of rules used to transform a given database into one satisfying a
certain condition can be reordered. After a rule is applied to a database,
additional rules might become applicable. Only those rules that are
initially applicable to a database can be organized into an arbitrary
sequence and applied to that database to produce a result independent of order. This distinction is important.
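Property (c) can be checked mechanically on a small example. In the toy system below (our own assumption, not from the text), the database is a set of facts and each rule simply adds its own facts, so no rule can disable another; the sketch verifies that every permutation of the initially applicable rules reaches the same database SG, as in Figure 1.8.

```python
from itertools import permutations

# Toy commutative system: each rule adds a fixed set of facts.
rules = {'R1': {'p'}, 'R2': {'q'}, 'R3': {'r'}}

def apply_rule(db, name):
    return db | rules[name]

s0 = frozenset({'s0'})

# Property (c): applying the initially applicable rules in any order
# produces the same final database.
results = set()
for order in permutations(rules):
    db = s0
    for name in order:
        db = apply_rule(db, name)
    results.add(frozenset(db))

assert len(results) == 1      # every permutation reaches the same SG
sg = next(iter(results))
```

Properties (a) and (b) hold here trivially, since applying a rule never removes facts and so never makes another rule inapplicable.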
Fig. 1.8 Equivalent paths in a graph.
Commutative production systems are an important subclass enjoying
special properties. For example, an irrevocable control regime can always
be used in a commutative system because the application of a rule never
needs to be taken back or undone. Any rule that was applicable to an earlier database is still applicable to the current one. There is no need to
provide a mechanism for applying alternative sequences of rules.
Applying an inappropriate rule delays, but never prevents, termination;
after termination, extraneous rules can be removed from the solution
sequence. We have occasion later to investigate commutative systems in
more detail.
It is interesting to note that there is a simple way to transform any
production system into a commutative one. Suppose we have already
represented a problem for solution by a production system. Imagine that
this production system has
a global database, rules that can modify it, and
a graph-search control strategy that generates a search tree of global
databases. Now consider another production system whose global
database is the entire search tree of the first. The rules of the new production system represent the various ways in which a search tree can
be modified by the action of the control strategy of the first production
system. Clearly, any rules of the second system that are applicable at any
stage remain applicable thereafter. The second system explicitly embodies in its commutative properties the nondeterministic tentativeness that we conferred upon the control strategy of the first system. Employing this conversion results in a more complex global database and rule set
and in a simpler sort of control regime (irrevocable). This change in
representation simply shifts the system description to a lower level.
1.2.2. DECOMPOSABLE PRODUCTION SYSTEMS
Commutativity is not the only condition whose fulfillment permits a
certain freedom in the order in which rules are applied.
Consider, for example, a system whose initial database is (C,B,Z), whose production rules are based on the following rewrite rules,

R1: C → (D,L)
R2: C → (B,M)
R3: B → (M,M)
R4: Z → (B,B,M)

and whose termination condition is that the database contain only Ms.
A graph-search control regime might explore many equivalent paths in
producing a database containing only Ms. Two of these are shown in
Figure 1.9. Redundant paths can lead to inefficiencies because the control
strategy might attempt to explore all of them, but worse than this, in
exploring paths that do not terminate successfully, the system may
nevertheless do much useful work that ultimately is wasted. (Many of the
rule applications in the right-hand branch of the tree in Figure 1.9 are ones needed in a solution.)
One way to avoid the exploration of these redundant paths is to
recognize that the initial database can be decomposed or split into
separate components that can be processed independently. In our
example, the initial database can be split into the components C, B, and
Z. Production rules can be applied to each of these components
independently (possibly in parallel); the results of these applications can
also be split, and so on, until each component database contains only Ms.
AI production systems often have global databases that are decomposable in this manner. Metaphorically, we might imagine that such a
global database is a "molecule" consisting of individual "atoms" bound
together in some way. If the applicability conditions of the rules involve
tests on individual atoms only, and if the effects of the rules are to replace a qualifying atom by some new molecule (that, in turn, is
composed of atoms), then we might as well split the molecule into its
atomic components and work on each part separately and independently.
Each rule application affects only that component of the global database
used to establish the precondition of the rule. Since some of the rules are
being applied essentially in parallel, their order is unimportant.
In order to decompose a database, we must also be able to decompose
the termination condition. That is, if we are to work on each component separately, we must be able to express the global termination condition
using the termination conditions of each of the components. The most important case occurs when the global termination condition can be
expressed as the conjunction of the same termination condition for each
component database. Unless otherwise stated, we shall always assume
this case.
Fig. 1.9 Solution sequences for a rewriting problem. One sequence applies R2 to the initial database (C,B,Z) and then rewrites each B and Z, reaching the goal database (M,M,M,M,M,M,M,M,M,M); another sequence applies R1, producing D and L components to which no rules apply, so that path cannot terminate successfully.
Production systems that are able to decompose their global databases
and termination conditions are called decomposable. The basic procedure
for a decomposable production system might look something like the
following:
Procedure SPLIT
1  DATA ← initial database
2  {Di} ← decomposition of DATA; the individual Di are
   now regarded as separate databases
3  until all {Di} satisfy the termination condition, do:
4  begin
5    select D* from among those {Di} that do not
     satisfy the termination condition
6    remove D* from {Di}
7    select some rule R in the set of rules that can be
     applied to D*
8    D ← result of applying R to D*
9    {di} ← decomposition of D
10   append {di} to {Di}
11 end
The control strategy for SPLIT must select a component database, D*, in Step 5 and must select a rule, R, to apply in Step 7. Whatever the form of this strategy, in order to satisfy Step 3, it must ultimately select all the elements in {Di}. For any D* selected, though, it need only select one applicable rule.
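Procedure SPLIT can be sketched directly for the rewriting problem of this section. One assumption is ours: since SPLIT as stated never backtracks, the control strategy below always picks the first listed rule, and we order C → (B,M) ahead of C → (D,L) so the search avoids the dead-end D and L components.

```python
# Rewrite rules from the text, keyed by the symbol they rewrite.
RULES = {
    'C': [('B', 'M'), ('D', 'L')],   # C -> (B,M) listed first (our choice)
    'B': [('M', 'M')],
    'Z': [('B', 'B', 'M')],
}

def terminal(component):
    return component == 'M'

def split(initial):
    """A sketch of procedure SPLIT with a fixed control strategy:
    always select the first non-terminal component and the first
    rule listed for it."""
    components = list(initial)                       # step 2: decompose
    while not all(terminal(d) for d in components):  # step 3
        # steps 5-6: select and remove a non-terminal component D*
        i = next(j for j, d in enumerate(components) if not terminal(d))
        d_star = components.pop(i)
        rule = RULES[d_star][0]                      # step 7: select a rule
        components.extend(rule)   # steps 8-10: apply, decompose, append
    return components

result = split(('C', 'B', 'Z'))
```

Starting from (C,B,Z), the run performs one C-rewrite, four B-rewrites, and one Z-rewrite, terminating with the ten-M database of Figure 1.9.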
Even though processing component databases in parallel is possible,
we are typically interested in control strategies that process them in some
serial order. There are two major ways to order the components: (a) the
components can either be arranged in some fixed order at the time they
are generated, or (b) they can be dynamically reordered during processing. In the former mode, each component is processed to completion
before processing begins on the next. Of course, when a production rule
is applied to a component, a database may result that can itself be split.
The components of this database are processed in order also. Typically, a
backtracking strategy for making rule selections is used in conjunction
with this fixed-order strategy for processing components.
More flexible control strategies for decomposable production systems
allow the component databases to be reordered dynamically as the
processing unfolds. Structures called AND/OR graphs are useful for
depicting the activity of production systems under this control regime.
We show an example AND/OR tree for our rewrite problem in Figure
1.10. Just as with ordinary graphs, an AND/OR graph consists of nodes
labeled by global databases. Nodes labeled by compound databases have
sets of successor nodes each labeled by one of the components. These
successor nodes are called AND nodes because in order to process the
compound database to termination, all of the component databases must
be processed to termination. Sets of AND nodes are so indicated in our
illustrations by a circular mark linking their incoming arcs.
Fig. 1.10 An AND/OR tree for a rewriting problem.
Rules can be applied to component databases. Nodes labeled by these
component databases have successor nodes labeled by the results of rule
applications. These successor nodes are called OR nodes because in order
to process a component database to termination, the database resulting
from just one of the rule applications must be processed to termination.
In Figure 1.10, any node corresponding to a component database
satisfying the termination condition (in this case consisting of the symbol
M) is enclosed in a double box. Such nodes are called terminal nodes.
(We could also have drawn the tree of Figure 1.10 as a graph. For
example, the database (M,M) occurs as four nodes in Figure 1.10, and
these could have been collapsed into one.)
A solution to this rewriting problem can be illustrated by a subgraph of
the AND/OR graph. Such a solution subgraph is shown by darkened branches in Figure 1.10. It is a graph whose "tip nodes" correspond to databases that each satisfy the termination condition. We shall discuss
strategies for searching AND/OR graphs to find solution graphs in
chapter 3.
We next discuss how decomposable production systems can be used on
some example problems.
1.2.2.1. Chemical Structure Generation. An important problem in
organic chemistry involves determining the structure of a complex
organic compound, given certain experimental data such as a mass
spectrogram of a sample of the compound. A large AI system called
DENDRAL can propose plausible structures for rather complex compounds. An important part of the DENDRAL system involves the generation of candidate structures, given the chemical formula of the
compound. A full explanation of how these candidate structures are
generated
is beyond the scope of our present discussion, but we can give a
brief description of how the process works for a simple hydrocarbon.
The system for generating candidate structures can be viewed as a
production system. The global database is a "partially structured"
compound. The production system operates on this database to increase
its degree of structure: Initially, the database describes no chemical
structure and contains merely the chemical formula; at intermediate
stages, the database describes some of the structure of the compound; at
the end of the process, the database contains a representation of the
entire structure of the compound.
We can use a decomposable production system for this problem
because the databases are decomposable into segments, some of which
are unstructured chemical formulas of part of the original compound.
The production rules are "structure-proposing" rules that convert
databases representing unstructured chemical formulas into databases
representing partial structures. Any database that contains no unstructured formulas satisfies the termination condition.
Briefly, we can illustrate how the structure-proposing rules work by a
simple example. Let us suppose that we are given the chemical formula C5H12. Our production system proposes some candidate structures for
this compound. (Not all of the proposed structures will be chemically
possible. At this stage of the process we are merely describing how we
could generate structures that are plausible, given only simple valence
bond considerations. The actual DENDRAL system drastically prunes the
candidates by using other chemical knowledge as well as features of the
mass spectrogram.)
The initial database is simply the formula C5H12. In this case, the rules
propose the following partial structures:
(Diagrams of the proposed partial structures are not reproduced here; each pairs a structured carbon fragment with an unstructured residue such as |C2H7|, |C2H6|, or |C2H5|.)
In the partial structures above, the formulas within vertical bars (| |) are unstructured. These can be split from the structured part of the database, and relevant structure-proposing rules can be applied to each of them independently. For example, the rules propose the following structure for the formula |C2H5|:
  H H
  | |
H—C—C—
  | |
  H H
A partial AND/OR tree for our C5H12 problem is shown in Figure
1.11. Each solution tree corresponds to a candidate structure. The one
indicated by dark lines corresponds to the following structure:
  H H H H H
  | | | | |
H—C—C—C—C—C—H   (pentane)
  | | | | |
  H H H H H
1.2.2.2. Symbolic Integration. In the problem of symbolic integration we want an automatic process that will accept any indefinite integral as input, say, ∫ x sin 3x dx, and deliver the answer (1/9) sin 3x - (1/3) x cos 3x as output. We allow a table containing such simple integral forms as:
∫ u du = u^2/2
∫ sin u du = -cos u
∫ a^u du = a^u log_a e
etc.
Solutions to symbolic integration problems can then be attempted by a
production system that converts the given integral into expressions
involving only instances of those integral forms given in the table.
The production rules can be based on the integration by parts rule, the
decomposition of an integral of a sum rule, and other transformation
rules such as those involving algebraic and trigonometric substitutions. A
production rule based on integration by parts would transform the expression ∫ u dv into the expression uv - ∫ v du. If there is an option about which part of the original integrand is to be u and which is to be dv, then a separate rule instantiation covers each alternative.
The decomposition rule states that the integral of a sum can be replaced by the sum of the integrals of the summands. Another rule, called the factoring rule, allows us to replace the expression ∫ k f(x) dx by the expression k ∫ f(x) dx. Other rules are based on the processes shown in Figure 1.12.
Fig. 1.11 An AND/OR tree for a chemical structure problem.
Any expression involving the sum of integrals can be split into the
separate integrals. Each of these can be processed separately, so we see
that our production system is decomposable.
The utility of these various rules depends strongly on the form of the
integrand. In a symbolic integration system called SAINT (Slagle, 1963),
the integrands were classified according to various features that they
possessed. For each class of integrand, the various rules were selected
according to their heuristic applicability.
In Figure 1.13 we show an AND/OR tree that illustrates a possible search performed by a decomposable production system. The problem is to integrate ∫ x^4 (1 - x^2)^(-5/2) dx.
Fig. 1.12 Examples of integration rules: algebraic substitutions, trigonometric substitutions, division of the numerator by the denominator, and completing the square (e.g., rewriting the integral of dx/(x^2 - 4x + 13)^2 as the integral of dx/[(x - 2)^2 + 9]^2).
Fig. 1.13 An AND/OR tree for an integration problem.
The nodes of the tree represent expressions to be integrated. Expressions corresponding to basic integrals in an integral table satisfy the termination condition and are enclosed in double boxes. The darkened arcs
indicate a solution tree for this problem. From this solution tree and from
the integrals obtained from the integral table, we compute the answer:
arcsin x + (1/3) tan^3 (arcsin x) - tan (arcsin x)
1.3. COMMENTS ON THE DIFFERENT TYPES OF
PRODUCTION SYSTEMS
In summary, we shall be discussing two major types of AI production
systems in this book, namely, the ordinary type, described by procedure
PRODUCTION, and the decomposable type, described by procedure
SPLIT. Depending on the way a problem is represented for solution by a
production system, either of these types might be used in a forward or
backward direction. They might be controlled by irrevocable or tentative control regimes. The taxonomy of production systems based on these
distinctions will help greatly in organizing various AI systems and
concepts into a coherent framework.
It is important to note that we are drawing distinctions only between
different kinds of
AI systems; we are not making any distinctions between
different kinds of problems. We shall see instances later in which the same
problem can be represented and solved by entirely different kinds of
systems.
We will present many more examples of problem representation.
Setting up global databases, rules, and termination conditions for any
given problem is still a bit of an art and can best be taught by example.
Since most of the examples used so far have been elementary puzzles and
problems, the reader might well wonder whether production systems are
really powerful enough to form the basis of intelligent systems. Later we
shall consider some more realistic and difficult problems to show the broad utility of these organizations.
Efficient AI systems require knowledge of the problem domain. We
can naturally subdivide this knowledge into three broad categories
corresponding to the global database, the rules, and the control subdivisions of production systems. The knowledge about a problem that is
represented in the global database is sometimes called declarative
knowledge. In an intelligent information retrieval system, for example,
the declarative knowledge would include the main database of specific
facts. The knowledge about a problem that is represented in the rules is
often called procedural knowledge. In intelligent information retrieval,
the procedural knowledge would include general information that allows
us to manipulate the declarative knowledge. The knowledge about a
problem that is represented by the control strategy is often called the
control knowledge. Control knowledge includes knowledge about a
variety of processes, strategies, and structures used to coordinate the
entire problem-solving process. The central problem considered in this
book is how best to organize problem knowledge into its declarative,
procedural, and control components for use by AI production systems.
Our first concern, to be treated in some detail in the next two chapters, is
with control—especially graph-searching control regimes. Then we move
on to consider the uses of the predicate calculus in Artificial Intelligence.
1.4. BIBLIOGRAPHICAL AND HISTORICAL
REMARKS
1.4.1. PRODUCTION SYSTEMS
The term production system has been used rather loosely in AI,
although it usually refers to more specialized types of computational
systems than those discussed in this book. Production systems derive
from a computational formalism proposed by Post (1943) that was based
on string replacement rules. The closely related idea of a Markov
algorithm [Markov (1954), Galler and Perlis (1970)] involves imposing an
order on the replacement rules and using this order to decide which
applicable rule to apply next. Newell and Simon (1972) use string-modifying production rules, with a simple control strategy, to model certain types of human problem-solving behavior [see also Newell (1973)]. Rychener (1976) proposes an AI programming language based on string-modifying production rules.
Generalizations of these production system formalisms have been
used in AI and called, variously, production systems, rule-based systems,
blackboard systems, and pattern-directed inference systems. The volume
edited by Waterman and Hayes-Roth (1978) provides many examples of
these sorts of systems [see also Hayes-Roth and Waterman (1977)]. A
paper by Davis and King (1977) thoroughly discusses production systems
in AI.
Our notion of a production system involves no restrictions on the form
of the global database, the rules, or the control strategy. We introduce the
idea of tentative control regimes to allow a form of controlled nondeterminism in rule application. Thus generalized, production systems can be
used to describe the operation of many important AI systems.
Our observation that rule application order can be unimportant in
commutative and decomposable production systems is related to Church-
Rosser theorems of abstract algebra. [See, for example, Rosen (1973), and
Ehrig and Rosen (1977, 1980).]
The notion of a decomposable production system encompasses a
technique often called problem reduction in AI. [See Nilsson (1971).] The
problem reduction idea usually involves replacing a problem goal by a set of subgoals such that if the subgoals are solved, the main goal is also solved. Explaining problem reduction in terms of decomposable production systems allows us to be indefinite about whether we are decomposing problem goals or problem states. Slagle (1963) used structures that he called AND/OR goal trees to deal with problem decomposition; Amarel (1967) proposed similar structures. Since then,
AND/OR trees and graphs have been used frequently in AI. Additional
references for AND/OR graph methods are given in chapter 3.
The problem of finding good representations for problems has been
treated by only a few researchers. Amarel (1968) has written a classic
paper on the subject; it takes the reader through a series of progressively
better representations for the missionaries-and-cannibals problem. [See Exercise 1.1.] Simon (1977) described a system called UNDERSTAND for
converting natural language (English) descriptions of problems into
representations suitable for problem solution.
1.4.2. CONTROL STRATEGIES
Hill-climbing is used in control theory and systems analysis as one method for finding the maximum (steepest ascent) or minimum (steepest descent) of a function. See Athans et al. (1974, pp. 126ff) for a discussion.
In computer science, Golomb and Baumert (1965) suggested backtracking as a selection mechanism. Various AI programming languages use backtracking as a built-in search strategy [Bobrow and Raphael (1974)].
The literature on heuristic graph searching is extensive; several references are cited in the next two chapters.
1.4.3. EXAMPLE PROBLEMS
Problem-solving programs have sharpened their techniques on a
variety of puzzles and games. Some good general books of puzzles are
those of Gardner (1959, 1961), who edits a puzzle column in Scientific
American. Also see the books of puzzles by Dudeney (1958, 1967), a
famous British puzzle inventor, a book of logical puzzles by Smullyan
(1978), and a book on how to solve problems by Wickelgren (1974). The
8-puzzle is a small version of the 15-puzzle, which is discussed by
Gardner (1964, 1965a,b,c) and by Ball (1931, pp. 224-228).
The traveling-salesman problem arises in operations research [see
Wagner (1975), and Hillier and Lieberman (1974)]. A method for finding
optimal tours has been proposed by Held and Karp (1970, 1971), and a
method for finding "approximately" optimum tours has been proposed
by Lin (1965).
A good general reference on formal languages, grammars, and syntax
analysis is Hopcroft and Ullman (1969).
The technique for proposing chemical structures is based on the
DENDRAL system of Feigenbaum et al. (1971). The symbolic integration
example is based on the SAINT system of Slagle (1963). A more powerful
symbolic integration system, SIN, was developed later by Moses (1967).
Moses (1971) discusses the history of techniques for symbolic integration.
EXERCISES
1.1 Specify a global database, rules, and a termination condition for a
production system to solve the missionaries and cannibals problem:
Three missionaries and three cannibals come
to a river. There is a boat on their side of the
river that can be used by either one or two
persons. How should they use this boat to
cross the river in such a way that cannibals
never outnumber missionaries on either side
of the river?
Specify a hill-climbing function over the global databases. Illustrate how
an irrevocable control strategy and a backtracking control strategy would
use this function in attempting to solve this problem.
1.2 Specify a global database, rules, and a termination condition for a
production system to solve the following water-jug problem:
Given a 5-liter jug filled with water and an
empty 2-liter jug, how can one obtain
precisely 1 liter in the 2-liter jug? Water may
either be discarded or poured from one jug
into another; however, no more than the
initial 5 liters is available.
1.3 Describe how the rewrite rules of section 1.1.6. can be used in a
production system that generates sentences. What is the global database
and the termination condition for such a system? Use the system to
generate five grammatical (even if not meaningful) sentences.
1.4 My friend, Tom, claims to be a descendant of Paul Revere. Which
would be the easier way to verify Tom's claim: By showing that Revere is
one of Tom's ancestors or by showing that Tom is one of Revere's
descendants? Why?
1.5 Suppose a rule R of a commutative production system is applied to a
database D to produce D'. Show that if R has an inverse, the set of rules
applicable to D' is identical to the set of rules applicable to D.
1.6 A certain production system has as its global database a set of
integers. A database can be transformed by adding to the set the product of any pair of its elements. Show that this production system is commutative.
1.7 Describe how a production system can be used to convert a decimal
number into a binary one. Illustrate its operation by converting 141.
1.8 Critically discuss the following thesis: Backtracking (or depth-first
graph-search) control strategies should be used when there are multiple
paths between problem states because these strategies tend to avoid
exploring all of the paths.
1.9 In using a backtracking strategy with procedure SPLIT, should the
selection made in step 5 be a backtracking point? Discuss. If step 5 is not a
backtracking point, are there any differences between procedure SPLIT
under backtracking and procedure PRODUCTION under backtracking?
CHAPTER 2
SEARCH STRATEGIES FOR AI
PRODUCTION SYSTEMS
In this chapter we examine some control strategies for AI production
systems. Referring to the basic procedure for production systems given
on page 21, the fundamental control problem is to select an applicable
rule to apply in step 4. For decomposable production systems (page 39),
the control problem is to select a component database in step 5 and an
applicable rule to apply in step 7. Other subsidiary but important tasks of
the control system include checking rule applicability conditions, testing
for termination, and keeping track of the rules that have been applied.
An important characteristic of computations for selecting rules is the
amount of information, or "knowledge," about the problem at hand that
these computations use. At the uninformed extreme, the selection is made completely arbitrarily, without regard to any information about the problem at hand. For example, an applicable rule could be selected
completely at random. At the informed extreme, the control strategy is guided by problem knowledge great enough for it to select a "correct"
rule every time.
The overall computational efficiency of an AI production system
depends upon where along the informed/uninformed spectrum the
control strategy falls. We can separate the computational costs of a
production system into two major categories: rule application costs and
control costs. A completely uninformed control system incurs only a
small control strategy cost because merely arbitrary rule selection need
not depend on costly computations. However, such a strategy results in
high rule application costs because it generally needs to try a large
number of rules to
find a solution. To inform a control system completely
about the problem domains of interest in AI typically involves a high-cost
control strategy, in terms of the storage and computations required.
Fig. 2.1 Computational costs of AI production systems, as control strategy informedness ranges from zero to complete.
Completely informed control strategies, however, result in minimal rule
application costs; they guide the production system directly to a solution.
These tendencies are shown informally in Figure 2.1.
The overall computational cost of an AI production system is the
combined rule application cost and control strategy cost. Part of the art of
designing efficient AI systems is deciding how to balance these two costs.
In any given problem, optimum production system efficiency might be
obtained from less than completely informed control strategies. (The cost
of a completely informed strategy may simply be too high.)
Another important aspect of AI system design involves the use of
techniques that allow the control strategy to use a large amount of
problem information without incurring excessive control costs. Such
techniques help to decrease the slope of the control strategy cost curve of
Figure 2.1, lowering the overall cost of the production system.
The behavior of the control system as it makes rule selections can be
regarded as a search process. Some examples of the ways in which the
control system might search for a solution were given in chapter 1. There,
we discussed the hill-climbing method of irrevocable rule selection, exploring a surface for a maximum, and the backtracking and graph-
search regimes, search processes that permitted tentative rule selection.
Our main concern in the present chapter is tentative control regimes,
even though the irrevocable ones have important applications, especially
with commutative production systems. Some of the search methods that
we develop for tentative control regimes can be adapted for use with
certain types of commutative production systems using irrevocable control regimes. We begin our discussion of tentative control by
describing backtracking methods.
2.1. BACKTRACKING STRATEGIES
In chapter 1 we presented a general description of the backtracking
control strategy and illustrated its use on the 8-puzzle. For problems
requiring only a small amount of search, backtracking control strategies
are often perfectly adequate and efficient. Compared with graph-search
control regimes, backtracking strategies are typically simpler to implement and require less storage.
A simple recursive procedure captures the essence of the operation of a
production system under backtracking control. This procedure, which we
call BACKTRACK, takes a single argument, DATA, initially set equal to
the global database of the production system. Upon successful termination, the procedure returns a list of rules that, if applied in sequence to
the initial database, produces a database satisfying the termination
condition. If the procedure halts without finding such a list of rules, it
returns FAIL. The BACKTRACK procedure is defined as follows:
Recursive procedure BACKTRACK(DATA)

1 if TERM(DATA), return NIL; TERM is a
predicate true for arguments that satisfy
the termination condition of the production
system. Upon successful termination, NIL,
the empty list, is returned.

2 if DEADEND(DATA), return FAIL; DEADEND
is a predicate true for arguments that are
known not to be on a path to a solution. In
this case, the procedure returns the symbol
FAIL.

3 RULES ← APPRULES(DATA); APPRULES is a
function that computes the rules applicable to
its argument and orders them (either arbitrarily
or according to heuristic merit).

4 LOOP: if NULL(RULES), return FAIL;
if there are no (more) rules to apply, the
procedure fails.

5 R ← FIRST(RULES); the best of the applicable
rules is selected.

6 RULES ← TAIL(RULES); the list of applicable
rules is diminished by removing the one just
selected.

7 RDATA ← R(DATA); rule R is applied to
produce a new database.

8 PATH ← BACKTRACK(RDATA); BACKTRACK is
called recursively on the new database.

9 if PATH = FAIL, go LOOP; if the
recursive call fails, try another rule.

10 return CONS(R, PATH); otherwise, pass the
successful list of rules up, by adding R to
the front of the list.
We can make several comments about this procedure. First, it
terminates successfully (in step 1) only if it produces a database satisfying the termination condition. The list of rules used in producing this database is built up in step 10. Unsuccessful terminations can occur in
steps 2 and 4. When an unsuccessful termination occurs within a
recursive call, the procedure backtracks to a higher level. Step 2 performs a test to check whether or not a solution is even possible from the database in question. In step 4, the procedure fails if it has already tried
all applicable rules.
Procedure BACKTRACK may never terminate; it may generate new
nonterminal databases indefinitely or it may cycle. Both of these cases
can be arbitrarily prevented by imposing a depth bound on the recursion.
Any recursive call fails when its depth exceeds this bound. Cycling can be
more straightforwardly prevented by maintaining a list of the databases produced so far and by checking new ones to see that they do not match
any on the list. Later we present a slightly more complicated procedure that makes these tests.
In step 3, the procedure orders the rules that are applicable to the
database in question. Here, any available heuristic information about the
problem domain is used. Those rules that are "guessed," using the
heuristic information, most appropriate for that database occur early in
the ordering. The applicable rules can be ordered arbitrarily if no
ordering information is available, although, in that case, extensive backtracking may cause the procedure to be prohibitively inefficient. By
definition, if a "correct" rule is always first in the ordering, no backtracking will occur at all.
We have used a specific procedure, BACKTRACK, to explain how
backtracking control strategies operate. Several practical concerns—such
as the need to avoid recopying large, complex global databases—would
dictate implementations of the backtracking strategy that are more
efficient than the procedure given here.
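For concreteness, the procedure can be transcribed almost line for line into Python. The sketch below is a modern rendering, not from the original text: TERM, DEADEND, and APPRULES are supplied as function arguments, and the symbol FAIL is represented by None.

```python
def backtrack(data, term, deadend, apprules):
    """Sketch of BACKTRACK: returns a list of rules leading from `data`
    to a database satisfying `term`, or None (FAIL) if none is found."""
    if term(data):                  # step 1: success; NIL is the empty list
        return []
    if deadend(data):               # step 2: no solution below this point
        return None
    for rule in apprules(data):     # steps 3-6: try rules in given order
        rdata = rule(data)          # step 7: apply the rule
        path = backtrack(rdata, term, deadend, apprules)  # step 8
        if path is not None:        # step 9: on failure, try the next rule
            return [rule] + path    # step 10: add R to the front of the list
    return None                     # step 4: no (more) rules to apply
```

As in the text, this version has neither a depth bound nor a cycle check, so on some problems it may fail to terminate.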
Another illustrative example of how the backtracking strategy is
applied to a simple problem is perhaps useful. Suppose we are given the
problem of placing 4 queens on a 4 × 4 chess board so that none can capture any other. For our global database, we use a 4 × 4 array with
marked cells corresponding to squares occupied by queens. The termi
nation condition, expressed by the predicate TERM, is satisfied for a
database if and only if it has precisely 4 queen marks and the marks correspond to queens located so that they cannot capture each other.
There are many alternative formulations possible for the production
rules. A useful one for our purposes involves the following rule schema,
R_ij, for 1 ≤ i, j ≤ 4:

R_ij
Precondition:
i = 1: There are no queen marks in the array.
1 < i ≤ 4: There is a queen mark in row i − 1
of the array.
Effect:
Puts a queen mark in row i, column j of the array.

Thus, the first queen mark added to the array must be in row 1, the
second must be in row 2, etc.
To use the BACKTRACK procedure to solve the 4-queens problem,
we have still to specify both the predicate DEADEND and an ordering
relation for applicable rules. Suppose we arbitrarily say that R_ij is ahead
of R_ik in the ordering only when j < k. The predicate DEADEND might
be defined so that it is satisfied for databases where it is obvious that no
solution is possible; for example, certainly no solution is possible for any
database containing a pair of queen marks in mutually capturing
positions. (The reader is encouraged to try working through BACKTRACK by hand using this simple test for DEADEND.) Altogether, the
algorithm backtracks 22 times before finding a solution; even the very
first rule applied must ultimately be taken back.
A more efficient algorithm (with less backtracking) can be obtained if
we use a more informed rule ordering. One simple, but useful ordering
for this problem involves using the function diag(i, j), defined to be the
length of the longest diagonal passing through cell (i, j). Let R_ij be ahead
of R_mn in the ordering if diag(i, j) < diag(m, n). (For equal values of
diag, use the same order as before.) Using this ordering relation, the rules
that are applicable to the initial database would be ordered as follows:
(R_12, R_13, R_11, R_14). The reader might verify that this ordering scheme
solves the 4-queens problem with only 2 backtracks.
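This informed ordering can be sketched in Python. The encoding below is our own, not the book's: a database is represented as the list of chosen columns, one per filled row, and the DEADEND test is folded into a `safe` check made before each placement.

```python
def diag(i, j, n=4):
    # length of the longest diagonal through cell (i, j), 1-indexed
    return max(n - abs(i - j), n - abs(i + j - (n + 1)))

def safe(queens, row, col):
    # no queen already placed shares a column or a diagonal with (row, col)
    return all(c != col and abs(c - col) != row - r
               for r, c in enumerate(queens, start=1))

def place_queens(queens=(), n=4):
    """Backtracking with columns tried in order of increasing diag value."""
    row = len(queens) + 1
    if row > n:
        return list(queens)                 # all n queens placed
    for j in sorted(range(1, n + 1), key=lambda col: diag(row, col, n)):
        if safe(queens, row, j):
            result = place_queens(tuple(queens) + (j,), n)
            if result is not None:
                return result
    return None                             # backtrack
```

Python's `sorted` is stable, so equal diag values keep the original j < k order, matching the text's tie-breaking rule. In this encoding only two candidate placements are rejected on the way to a solution, consistent with the two backtracks noted in the text.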
As previously mentioned, we need a slightly more complex algorithm
to avoid cycles. All databases on a path back to the initial one must be
checked to insure that none are revisited. In order to implement this
backtracking strategy as a recursive procedure, the entire chain of
databases must be an argument of the procedure. Again, practical
implementations of AI backtracking production systems use various techniques to avoid the need for explicitly listing all of these databases in
their entirety.
Let us call our cycle-avoiding algorithm
BACKTRACK1. It takes a list
of databases as its argument; when first called, this list contains the initial
database as its single element. Upon successful termination, BACKTRACK1 returns a sequence of rules that can be applied to the initial
database to produce one that satisfies the termination condition. The
BACKTRACK1 algorithm is defined as follows:
Recursive procedure BACKTRACK1(DATALIST)

1 DATA ← FIRST(DATALIST); DATALIST
is a list of all databases on a path back
to the initial one. DATA is the most
recent one produced.

2 if MEMBER(DATA, TAIL(DATALIST)), return
FAIL; the procedure fails if it revisits
an earlier database.

3 if TERM(DATA), return NIL

4 if DEADEND(DATA), return FAIL

5 if LENGTH(DATALIST) > BOUND, return
FAIL; the procedure fails if too many
rules have been applied. BOUND is a global
variable specified before the procedure is
first called.

6 RULES ← APPRULES(DATA)

7 LOOP: if NULL(RULES), return FAIL

8 R ← FIRST(RULES)

9 RULES ← TAIL(RULES)

10 RDATA ← R(DATA)

11 RDATALIST ← CONS(RDATA, DATALIST); the
list of databases visited so far is extended
by adding RDATA.

12 PATH ← BACKTRACK1(RDATALIST)

13 if PATH = FAIL, go LOOP

14 return CONS(R, PATH)
The 8-puzzle example of backtracking in chapter 1 used BOUND = 7
and also checked to see if a tile configuration had been visited previously.
Note that the recursive algorithm does not remember all databases that it
visited previously. Backtracking involves "forgetting" all databases
whose paths lead to failures. The algorithm remembers only those
databases on the current path back to the initial one.
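BACKTRACK1 admits the same kind of direct transcription. In the sketch below (our rendering, with BOUND passed as an argument rather than a global variable, and None again standing for FAIL), `datalist[0]` is the most recent database and `datalist[-1]` the initial one.

```python
def backtrack1(datalist, term, deadend, apprules, bound):
    """Sketch of BACKTRACK1: depth-limited backtracking with a cycle
    check; returns a rule list on success, None (FAIL) otherwise."""
    data = datalist[0]                       # step 1
    if data in datalist[1:]:                 # step 2: revisited database
        return None
    if term(data):                           # step 3
        return []
    if deadend(data):                        # step 4
        return None
    if len(datalist) > bound:                # step 5: depth bound exceeded
        return None
    for rule in apprules(data):              # steps 6-9
        rdata = rule(data)                   # step 10
        path = backtrack1([rdata] + datalist,  # steps 11-12
                          term, deadend, apprules, bound)
        if path is not None:                 # step 13
            return [rule] + path             # step 14
    return None
```

Note that the path of databases is "remembered" simply by consing each new database onto the argument list; failed branches are forgotten when the recursion unwinds, exactly as described above.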
The backtracking strategies just described "fail back" one level at a
time. If a level n recursive call of BACKTRACK fails, control returns to
level n − 1 where another rule is tried. But sometimes the reason, or
blame, for the failure at level n can be traced to rule choices made many
levels above. In these cases it would be obviously futile to try another rule
choice at level n − 1; predictably, any such choice there would again lead
to a failure. What is needed, then, is a way to jump several levels at a time,
all the way back to one where a different rule choice will make a useful
difference.
To see an example of this multilevel backtracking phenomenon,
consider using BACKTRACK to solve the 8-queens problem. In this
problem, we must place 8 queens on an 8 × 8 board so that none of them
can capture any others.
Suppose we are at a stage of the algorithm in which the database just
produced is illustrated by the array in Figure 2.2. (In fact, the BACKTRACK algorithm would produce precisely this array using the arbitrary
rule ordering that we originally discussed.) The algorithm must now
attempt to place a queen in row 6. Note that no cell in row 6 is
satisfactory; each attempt to place a queen in that row would fail. In such
a circumstance, BACKTRACK would attempt to relocate the queen in
row 5, moving it eventually to column 8. But a more detailed analysis of
the reasons for the row-6 failures would reveal that all of them would
have still occurred regardless of the position of the queen in row 5. The
row-6 failures were predestined by the positions of the first 4 queens.
Therefore, since there is no point in relocating queen 5, we can jump over
one recursive level, back to the point where we were selecting row-4
locations. Some AI systems have used backtracking strategies that are
able to analyze failures in this manner and to back up to the appropriate
point.
Fig. 2.2 Queen positions during a stage of BACKTRACK. (Five queens placed in rows 1 through 5; board detail not reproduced here.)
2.2. GRAPH-SEARCH STRATEGIES
In backtracking strategies, the control system effectively forgets any
trial paths that result in failures. Only the path currently being extended
is stored explicitly. A more flexible procedure would involve the explicit
storage of all trial paths so that any of them could be candidates for
further extension.
For example, in Figure 2.3 we show an initial database, DB1, to which
rules Rl and R2, say, are applicable; suppose the control system selects
and applies Rl producing database DB2; then suppose the control
system selects applicable rule R3 and applies it to DB2, to produce DB3 ;
and at this point, suppose the control system decides that this path is not promising and backs up to apply rule R2 to DB1, to produce database
DB4. As stated, a backtracking strategy would erase the records of
DB2 and DB3. But if the control system were to maintain this record, then,
should a path through DB4 ultimately prove futile, it could resume work
immediately from either DB2 or DB3. In order to achieve this sort of
flexibility, a control system must keep an explicit record of a graph of
databases linked by rule applications. We say that control systems that
operate in this manner use graph-search strategies.
In our discussions of graph-search strategies, we speak as if the various
databases produced by rule applications are actually represented, each in
its entirety, as nodes in a graph or tree. Because these databases are
usually very large structures, it would be impractical to store each of them
explicitly. Fortunately, there are ways in which the effect of explicit
storage of all of the databases can be achieved, by explicitly storing just the initial database and records of incremental changes from which any of the other databases can rapidly be computed.
2.2.1. GRAPH NOTATION
We can think of a graph-search control strategy as a means of finding a
path in a graph from a node representing the initial database to one
representing a database that satisfies the termination condition of the
production system. Graph-searching algorithms are thus of special
interest to us. Before describing these algorithms, we first review some
graph-theory terminology.
A graph consists of a (not necessarily finite) set of nodes. Certain pairs
of nodes are connected by arcs, and these arcs are directed from one member of the pair to the other. Such a graph is called a directed graph. For our purposes, the nodes are labeled by databases, and the arcs are
labeled by rules. If an arc is directed from node n_i to node n_j, then
node n_j is said to be a successor of node n_i, and node n_i is said to be
a parent of node n_j. In the graphs that are of interest to us, a node can
have only a finite number of successors. (Our production systems have
only a finite number of applicable rules.) A pair of nodes may be successors of each
other; in this case the pair of directed arcs is sometimes replaced by an
edge.
Fig. 2.3 A tree of databases. (DB1 yields DB2 under R1 and DB4 under R2; DB2 yields DB3 under R3.)
A tree is a special case of a graph in which each node has at most one
parent. A node in the tree having no parent is called a root node. A node
in the tree having no successors is called a tip node. We say that the root
node is of depth zero. The depth of any other node in the tree is defined to
be the depth of its parent plus 1.
A sequence of nodes (n_i1, n_i2, ..., n_ik), with each n_ij a successor of
n_i(j−1) for j = 2, ..., k, is called a path of length k from node n_i1 to node
n_ik. If a path exists from node n_i to node n_j, then node n_j is said to be
accessible from node n_i. Node n_j is then a descendant of node n_i, and
node n_i is an ancestor of node n_j. We see that the problem of finding a
sequence of rules transforming one database into another is equivalent to
the problem of finding a path in a graph.

Often it is convenient to assign positive costs to arcs, to represent the
cost of applying the corresponding rule. We use the notation c(n_i, n_j) to
denote the cost of an arc directed from node n_i to node n_j. It will be
important in some of our later arguments to assume that these costs are
all greater than some arbitrarily small positive number, ε. The cost of a
path between two nodes is then the sum of the costs of all of the arcs
connecting the nodes on the path. In some problems, we want to find that path having minimal cost between two nodes.
In the simplest type of problem, we desire to find a path (perhaps
having minimal cost) between a given node s, representing the initial
database, and another given node t, representing some other database.
The more usual situation, though, involves finding a path between a node
s and any member of a set of nodes {t_i} that represent databases
satisfying the termination condition. We call the set {t_i} the goal set, and
each node t_i in {t_i} is a goal node.
A graph may be specified either explicitly or implicitly. In an explicit
specification, the nodes and arcs (with associated costs) are explicitly
given by a table. The table might list every node in the graph, its
successors, and the costs of the associated arcs. Obviously, an explicit
specification is impractical for large graphs and impossible for those
having an infinite set of nodes.
In our applications, the control strategy generates (makes explicit) part
of an implicitly specified graph. This implicit specification is given by the
start node, s, representing the initial database, and the rules that alter
databases. It will be convenient to introduce the notion of a successor
operator that is applied to a node to give all of the successors of that node
(and the costs of the associated arcs). We call this process of applying the
successor operator to a node, expanding the node. The successor operator
depends in an obvious way on the rules. Expanding s, the successors of s,
ad infinitum, makes explicit the graph that is implicitly defined by s and
the successor operator. A graph-search control strategy, then, can be
viewed as a process of making explicit a portion of an implicit graph
sufficient to include a goal node.
2.2.2. A GENERAL GRAPH-SEARCHING PROCEDURE*
The process of explicitly generating part of an implicitly defined graph
can be informally defined as follows.
Procedure GRAPHSEARCH
1 Create a search graph, G, consisting solely of the
start node, s. Put s on a list called OPEN.
2 Create a list called CLOSED that is initially empty.
3 LOOP: if OPEN is empty, exit with failure.
4 Select the first node on OPEN, remove it from OPEN,
and put it on CLOSED. Call this node n.
5 If n is a goal node, exit successfully with the solution
obtained by tracing a path along the pointers from n to s in G. (Pointers are established in step 7.)
6 Expand node n, generating the set, M, of its successors
and install them as successors of n in G.
7 Establish a pointer to n from those members of M that
were not already in G (i.e., not already on either
OPEN or CLOSED). Add these members of M to
OPEN. For each member of M that was already on
OPEN or CLOSED, decide whether or not to redirect
its pointer to n. (See text.) For each member of
M already on CLOSED, decide for each of its
descendants in G whether or not to redirect its
pointer. (See text.)

8 Reorder the list OPEN, either according to some
arbitrary scheme or according to heuristic merit.

9 Go LOOP

*Note added to the fourth and subsequent printings of this book: Step 6 of the graph-searching
procedure described in this section has been changed slightly to correct an error kindly pointed
out to the author by Maurice Karnaugh of IBM.
This procedure is sufficiently general to encompass a wide variety of
special graph-searching algorithms. The procedure generates an
explicit graph, G, called the search graph and a subset, T, of G called
the search tree. Each node in G is also in T. The search tree is defined by
the pointers that are set up in step 7. Each node (except s) in G has a pointer directed to just one of its parents in G, which defines its unique
parent in T. Each possible path to a node discovered by the algorithm is preserved explicitly in G; a single distinguished path to any node is defined by T. Roughly speaking, the nodes on OPEN are the tip nodes of the search tree, and the nodes on CLOSED are the nontip nodes. More
precisely, at step 3 of the procedure, the nodes on OPEN are those (tip)
nodes of the search tree that have not yet been selected for expansion.
The nodes on CLOSED are either tip nodes selected for expansion that
generated no successors in the search graph or nontip nodes of the
search tree.
The procedure orders the nodes on OPEN in step 8 so that the "best"
of these is selected for expansion in step 4. This ordering can be based on
a variety of heuristic ideas (discussed below) or on various arbitrary
criteria. Whenever the node selected for expansion is a goal node, the
process terminates successfully. The successful path from start node to
goal node can then be recovered (in reverse) by tracing the pointers back
from the goal node to s. The process terminates unsuccessfully whenever
the search tree has no remaining tip nodes that have not yet been selected
for expansion. (Some nodes may have no successors at all, so it is possible for the list
OPEN, ultimately, to become empty.) In the case of
unsuccessful termination, the goal node(s) must have been inaccessible
from the start node.
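A minimal Python sketch of GRAPHSEARCH under simplifying assumptions may be helpful here. In this rendering (ours, not the book's), the pointer redirection of step 7 is omitted, so the version below keeps the first path found to each node; the `reorder` argument stands in for step 8.

```python
def graphsearch(start, is_goal, successors, reorder=lambda open_list: open_list):
    """Sketch of GRAPHSEARCH without step-7 pointer redirection.
    `successors(n)` yields the nodes reachable from n by one rule."""
    open_list = [start]                 # step 1: OPEN holds only s
    closed = set()                      # step 2: CLOSED starts empty
    parent = {start: None}              # pointers established in step 7
    while open_list:                    # step 3: fail when OPEN is empty
        n = open_list.pop(0)            # step 4: select first node on OPEN
        closed.add(n)
        if is_goal(n):                  # step 5: trace pointers back to s
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return list(reversed(path))
        for m in successors(n):         # step 6: expand n
            if m not in parent:         # step 7: point new nodes at n
                parent[m] = n
                open_list.append(m)
        open_list = reorder(open_list)  # step 8: arbitrary or heuristic
    return None                         # exit with failure
```

With the default identity `reorder`, the first-in, first-out discipline of `pop(0)` plus `append` makes this a breadth-first search.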
Step 7 of the procedure requires some additional explanation. If the
implicit graph being searched was a tree, we could be sure that none of
the successors generated in step 6 had been generated previously: Every
node (except the root node) of a tree is the successor of only one node and
thus is generated once only when its unique parent is expanded. Thus, in this special case, the members of M in steps 6 and 7 are not already on either OPEN or CLOSED. In this case, each member of M is added to
OPEN and is installed in the search tree as a successor of n. The search
graph is the search tree throughout the execution of the algorithm, and
there is no need to change parents of the nodes in T.
If the implicit graph being searched is not a tree, it is possible that some
of the members of M have already been generated, that is, they may already be on OPEN or CLOSED. The problem of determining whether
a newly generated database is identical to one generated before can be
computationally expensive. For this reason, some search processes avoid
making this test, with the result that the search tree may contain several nodes labeled by the same database. Node repetitions, of course, lead to
redundant successor computations. Hence, there is a tradeoff between
the computational cost of testing for matching databases and the computational cost of generating a larger search tree (containing multiple nodes labeled by identical databases). In steps 6 and 7 of procedure
GRAPHSEARCH, we are assuming that it is worthwhile to test for node
identities.
When the search process generates a node that it had generated before,
it finds a (perhaps better) path to it other than the one already recorded in
the search tree. We desire that the search tree preserve the least costly
path found so far from s to any of its nodes. (The cost of a path from s to n in the search tree can be computed by summing the arc costs encountered in the tree while tracing back from n to s. In problems for which no arc
costs are given, we assume that the arcs have unit cost.) When a newly found path is less costly than an older one, the search tree is adjusted by
changing the parentage of the regenerated node to its more recent parent.
If a node n on CLOSED has its parentage in T changed, a less costly
path has been found to n. The less costly path may be part of less costly
paths to some of the successors of n in the search graph, G; in this case,
a change might be in order to the parentage in T of the successors of n
in G. Because G is finite, the process of propagating the costs of the new
paths downward to the successors of n in G is straightforward and
finite. After this computation, the search tree is adjusted to record these
paths, if appropriate.
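The cascading redirection just described can be sketched as follows. The names are our own: `parent` and `cost` map each node to its tree parent and best-known path cost, and `children` maps each node to its successors in the search graph G; unit arc costs are assumed, as in the example that follows.

```python
def redirect(parent, cost, children, m, n, arc_cost=1):
    """Step-7 bookkeeping sketch: node m, already in the search graph,
    has just been regenerated as a successor of n. If the path through
    n is cheaper, repoint m's tree parent to n and propagate the saving
    to m's descendants in G."""
    new_cost = cost[n] + arc_cost
    if new_cost < cost[m]:
        parent[m] = n                       # adjust m's parent in T
        cost[m] = new_cost
        for child in children.get(m, []):   # descendants of m in G
            redirect(parent, cost, children, child, m, arc_cost)
```

Because G is finite and each call strictly lowers a stored cost, the propagation terminates, mirroring the argument in the text.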
A simple example will serve to show how such search tree adjustments
are accomplished. Suppose a search process has generated the search
graph and search tree shown in Figure 2.4. The dark arrows along certain
arcs in this search graph are the pointers that define parents of nodes in
the search tree. The solid nodes are on CLOSED, and the other nodes are
on OPEN at the time the algorithm selects node 1 for expansion. (We
assume unit arc costs.) When node 1 is expanded, its single successor,
node 2, is generated. But node 2, with parent node 3 in the search tree, had previously been generated, and node 2 is also on CLOSED with successor nodes 4 and 5. Note, however, that node 4's parent in the search tree is node 6, because the shortest (least costly) path from s to node 4 in the search graph is through node 6. Since the algorithm now discovers a path to node 2 through node 1 that is less costly than the previous path
through node 3, the parent of node 2 in the search tree is changed from
node 3 to node 1. The costs of the paths to the descendants of node 2 in the search graph (namely, the paths to nodes 4 and 5) are recomputed.
These costs are now also lower than before, with the result that the parent
of node 4 is changed from node 6 to node 2. The adjusted search tree is
defined by the pointers on the arcs of the search graph of Figure 2.5.
As described, the GRAPHSEARCH algorithm generates all of the
successors of a node at once. It is possible to modify the algorithm so that a node is selected for expansion and successors are generated one at a time [see, for example, Michie and Ross (1970)]. The modified algorithm
does not put a node on CLOSED until all of its successors have been
generated. Since the process of applying rules to a database to produce
new databases is typically computationally expensive, the modified
algorithm is often preferable even though it is slightly more difficult to
describe. To facilitate explaining some general properties of graph-
searching procedures, we continue to use that version of the algorithm in
which all successors are generated simultaneously.
Fig. 2.4 A search graph and search tree before expanding node 1.
Fig. 2.5 A search graph and search tree after expanding node 1.
2.3. UNINFORMED GRAPH-SEARCH PROCEDURES
If no heuristic information from the problem domain is used in
ordering the nodes on OPEN, some arbitrary scheme must be used in
step 8 of the algorithm. The resulting search procedure is called
uninformed. In AI, we are typically not interested in uninformed
procedures, but we describe two types here for purposes of comparison:
depth-first search and breadth-first search.
The first type of uninformed search orders the nodes on OPEN in
descending order of their depth in the search tree. The deepest nodes are
put first in the list. Nodes of equal depth are ordered arbitrarily. The
search that results from such an ordering is called depth-first search
because the deepest node in the search tree is always selected for
expansion. To prevent the search process from running away along some
fruitless path forever, a depth bound is provided. No node whose depth
in the search tree exceeds this bound is ever generated. (The process can
be made to terminate virtually as soon as a goal node is generated by
putting goal nodes at the very beginning of OPEN; but, of course, this
procedure would involve a goal test during step 8 of GRAPHSEARCH.
If the result is saved, then the goal test in step 5 need only look up the
result instead of repeating a possibly costly computation.)
The depth-first procedure generates new databases in an order similar
to that generated by an uninformed backtracking control strategy. The
correspondence would be exact if the graph-search process generated
only one successor at a time. Usually, the backtracking implementation is
preferred to the depth-first version of GRAPHSEARCH because backtracking is simpler to implement and involves less storage. (Backtracking
strategies save only one path to a goal node; they do not save the entire
record of the search as do depth-first graph-search strategies.)
The search tree generated by a depth-first search process in an 8-puzzle
problem is illustrated in Figure 2.6. The nodes are labeled with their corresponding databases and are numbered in the order in which they
are selected for expansion. We assume a depth bound of five. The dark
path shows a solution involving five rule applications. We see that a
depth-first search process progresses along one path until it reaches the
depth bound, then it begins to consider alternative paths of the same depth, or
less, that differ only in the last step; then those that differ in the
last two steps; etc.
The second type of uninformed search procedure orders the nodes on
OPEN in increasing order of their depth in the search tree. (Again, to
promote earlier termination, goal nodes should be put immediately at the very beginning of OPEN.) The search that results from such an ordering is called
breadth-first because expansion of nodes in the search tree
proceeds along "contours" of equal depth. In Figure 2.7, we show the search tree generated by a breadth-first search in the 8-puzzle problem.
The numbers next to each node indicate the order in which nodes are
selected for expansion. Note that the goal node is selected immediately
after it is generated.
Later we show that breadth-first search is guaranteed to find a
shortest-length path to a goal node, providing a path exists at all. (If no
path
exists, the method will exit with failure for finite graphs or will never
terminate for infinite graphs.)
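The two uninformed orderings differ only in where newly generated nodes are placed on OPEN, which the following self-contained sketch (our own simplification, carrying the path along with each node instead of using pointers) makes explicit.

```python
def search(start, is_goal, successors, depth_first, bound=10):
    """Uninformed search: OPEN is ordered purely by depth in the tree.
    depth_first=True puts deepest nodes first (with a depth bound);
    depth_first=False puts shallowest nodes first (breadth-first)."""
    open_list = [(start, [start])]        # (node, path from start)
    closed = set()
    while open_list:
        node, path = open_list.pop(0)
        if node in closed:                # duplicates may reach OPEN
            continue
        closed.add(node)
        if is_goal(node):
            return path
        if depth_first and len(path) > bound:
            continue                      # depth bound: do not expand
        children = [(m, path + [m])
                    for m in successors(node) if m not in closed]
        if depth_first:
            open_list = children + open_list   # deepest nodes first
        else:
            open_list = open_list + children   # shallowest nodes first
    return None
```

On a graph where the first successor leads down a long branch, the depth-first ordering returns a longer path than the breadth-first one, which always finds a shortest-length path when one exists.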
69
SEARCH STRATEGIES FOR AI PRODUCTION SYSTEMS
Fig. 2.6 The search tree generated by a depth-first search in the 8-puzzle (tile configurations not reproduced here).

Fig. 2.7 The search tree generated by a breadth-first search in the 8-puzzle (tile configurations not reproduced here).
2.4. HEURISTIC GRAPH-SEARCH PROCEDURES
The uninformed search methods, whether breadth-first or depth-first,
are exhaustive methods for finding paths to a goal node. In principle,
these methods provide a solution to the path-finding problem, but they
are often infeasible to use to control AI production systems because the
search expands too many nodes before a path is found. Since there are
always practical limits on the amount of time and storage available to
expend on the search, more efficient alternatives to uninformed search
must be found.
For many tasks it is possible to use task-dependent information to help
reduce search. Information of this sort is usually called heuristic information, and search procedures using it are called heuristic search methods. It
is often possible to specify heuristics that reduce search effort (below that
expended by, say, breadth-first search) without sacrificing the guarantee
of finding a minimal length path. Some heuristics greatly reduce search
effort but do not guarantee finding minimal cost paths. In most practical
problems, we are interested in minimizing some combination of the cost
of the path and the cost of the search required to obtain the path.
Furthermore, we are usually interested in search methods that minimize
this combination averaged over all problems likely to be encountered. If
the averaged combination cost of search method 1 is lower than the
averaged combination cost of search method 2, then search method 1 is
said to have more heuristic power than search method 2. Note that
according to our definition, it is not necessary (though it is a common
misconception) that a search method with more heuristic power give up
any guarantee for finding a minimal cost path.
Averaged combination costs are never actually computed, both because it is difficult to decide on the way to combine path cost and search
effort cost and because it would be difficult to define a probability
distribution over the set of problems to be encountered. Therefore, the
matter of deciding whether one search method has more heuristic power than another is usually left to informed intuition, gained from actual
experience with the methods.
2.4.1. USE OF EVALUATION FUNCTIONS
Heuristic information can be used to order the nodes on OPEN in step
8 of GRAPHSEARCH so that search expands along those sectors of the
frontier thought to be most promising. In order to apply such an ordering
procedure, we need a method for computing the "promise" of a node.
One important method uses a real-valued function over the nodes called
an evaluation function. Evaluation functions have been based on a variety
of ideas: Attempts have been made to define the probability that a node
is on the best path; distance or difference metrics between an arbitrary
node and the goal set have been suggested; or in board games or puzzles,
a configuration is often scored points on the basis of those features that it possesses that are thought to be related to its promise as a step toward the goal.
Suppose we denote the evaluation function by the symbol f. Then f(n)
gives the value of the function at node n. For the moment we let f be any
arbitrary function; later, we propose that it be an estimate of the cost of a
minimal cost path from the start node to a goal node constrained to go
through node n.
We use the function f to order the nodes on OPEN in step 8 of
GRAPHSEARCH. By convention, the nodes on OPEN are ordered in
increasing order of their f values. Ties among f values are ordered
arbitrarily, but always in favor of goal nodes. Supposedly, a node having
a low evaluation is more likely to be on an optimal path.
The way in which GRAPHSEARCH uses an evaluation function to
order nodes can be illustrated by considering again our 8-puzzle
example. We use the simple evaluation function:
f(n) = d(n) + W(n)
where d(n) is the depth of node n in the search tree and W(n) counts the
number of misplaced tiles in the database associated with node n. Thus
the start node configuration
283
164
7 5
has an f value equal to 0 + 4 = 4.
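The computation of W(n) can be sketched as follows (an illustrative Python fragment, not from the text; the tuple encoding, the name W, and the use of 0 to mark the blank are our own conventions):

```python
# Goal configuration used in this chapter's 8-puzzle examples; 0 is the blank.
GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))

def W(board):
    """Count tiles (excluding the blank) that are not in their goal square."""
    return sum(1
               for i in range(3) for j in range(3)
               if board[i][j] != 0 and board[i][j] != GOAL[i][j])

start = ((2, 8, 3),
         (1, 6, 4),
         (7, 0, 5))

depth = 0                    # d(start) = 0 for the start node
print(depth + W(start))      # f(start) = 0 + 4 = 4
```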
The results of applying GRAPHSEARCH to the 8-puzzle using this
evaluation function are summarized in Figure 2.8. The value of f for each
node is circled; the uncircled numbers show the order in which nodes are
[Fig. 2.8 A search tree using an evaluation function. (The tree diagram is unrecoverable from the scan.)]
expanded. We see that the same solution path is found here as was found
by the other search methods, although the use of the evaluation function
has resulted in substantially fewer nodes being expanded. (If we simply
use the evaluation function f(n) = d(n), we get the breadth-first search
process.)
The choice of evaluation function critically determines search results.
The use of an evaluation function that fails to recognize the true promise
of some nodes can result in nonminimal cost paths; whereas, the use of an
evaluation function that overestimates the promise of all nodes (such as
the evaluation function yielding breadth-first search) results in expansion of too many nodes. In the next few sections, we develop some theoretical results about the performance of GRAPHSEARCH when it uses a
particular kind of evaluation function.
2.4.2. ALGORITHM A
Let us define the evaluation function f so that its value, f(n), at any
node n estimates the sum of the cost of the minimal cost path from the
start node
s to node n plus the cost of a minimal cost path from node n to a
goal node. That is, f(n) is an estimate of the cost of a minimal cost path
constrained to go through node n. That node on OPEN having the
smallest value of f is then the node estimated to impose the least severe
constraint; hence it is appropriate that it be expanded next.
Before demonstrating some of the properties of this evaluation
function, we first introduce some helpful notation. Let the function
k(ni, nj) give the actual cost of a minimal cost path between two arbitrary
nodes ni and nj. (The function k is undefined for nodes having no path
between them.) The cost of a minimal cost path from node n to some
particular goal node, ti, is then given by k(n, ti). We let h*(n) be the
minimum of all of the k(n, ti) over the entire set of goal nodes {ti}.
Thus, h*(n) is the cost of the minimal cost path from n to a goal node,
and any path from node n to a goal node that achieves h*(n) is an optimal
path from n to a goal. (The function h* is undefined for any node n that
has no accessible goal node.)
Often we are interested in knowing the cost k(s, n) of an optimal path
from a given start node, s, to some arbitrary node n. It will simplify our
notation somewhat to introduce a new function g* for this purpose. The
function g* is defined as
g*(n) = k(s, n),
for all n accessible from s.
We next define the function f* so that its value f*(n) at any node n is
the actual cost of an optimal path from node s to node n plus the cost of an
optimal path from node n to a goal node, that is,
f*(n) = g*(n) + h*(n).
The value of f*(n) is then the cost of an optimal path from s constrained
to go through node n. (Note that f*(s) = h*(s) is the actual cost of an
unconstrained optimal path from s to a goal.)
We desire our evaluation function f to be an estimate of f*. Our
estimate can be given by
f(n) = g(n) + h(n),
where g is an estimate of g* and h is an estimate of h*. An obvious choice
for g(n) is the cost of the path in the search tree from s to n given by
summing the arc costs encountered while tracing the pointers from n to s.
(This path is the lowest cost path from s to n found so far by the search
algorithm. The value of g(n) for certain nodes may decrease if the search
tree is altered in step 7.) Notice that this definition implies
g(n) ≥ g*(n). For the estimate h(n), of h*(n), we rely on heuristic
information from the problem domain. Such information might be similar to that used in the function W(n) in the 8-puzzle example. We
call h the heuristic function and will discuss it in more detail later.
Suppose we now use as an evaluation function
f(n) = g(n) + h(n).
We call the GRAPHSEARCH algorithm using this evaluation function
for ordering nodes, algorithm A. Note that when h = 0 and g = d (the
depth of a node in the search tree), algorithm A is identical to
breadth-first search. We claimed earlier that the breadth-first algorithm is guaranteed to find a minimal length path to a goal. We now show that if h
is a lower bound on h* (that is, if h(n) ≤ h*(n) for all nodes n), then
algorithm A will find an optimal path to a goal. When algorithm A uses an
h function that is a lower bound on h*, we call it algorithm A* (read
"A-star"). Since h = 0 is certainly a lower bound on h*, the fact that the
breadth-first algorithm finds minimal length paths follows directly as a
special case of this more general result for algorithm A*.
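A minimal sketch of algorithm A in Python may help fix ideas (not from the text; the graph, its arc costs, and the h table are invented for illustration, and this h is a lower bound on h*, so the sketch behaves as A*):

```python
import heapq

def algorithm_a(start, goals, successors, h):
    """Best-first search ordering OPEN by f(n) = g(n) + h(n)."""
    open_heap = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, n, path = heapq.heappop(open_heap)
        if n in goals:
            return path, g
        for m, cost in successors(n):
            g_new = g + cost
            if g_new < best_g.get(m, float("inf")):  # cheaper path: redirect
                best_g[m] = g_new
                heapq.heappush(open_heap, (g_new + h(m), g_new, m, path + [m]))
    return None, float("inf")

# Hypothetical explicit graph with positive arc costs.
graph = {"s": [("a", 1), ("b", 4)],
         "a": [("b", 2), ("t", 5)],
         "b": [("t", 1)],
         "t": []}
h = {"s": 3, "a": 3, "b": 1, "t": 0}.get          # here h(n) <= h*(n)
path, cost = algorithm_a("s", {"t"}, lambda n: graph[n], h)
print(path, cost)   # ['s', 'a', 'b', 't'] 4
```

The OPEN/CLOSED bookkeeping of GRAPHSEARCH and the convention of breaking f-value ties in favor of goal nodes are elided here for brevity.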
2.4.3. THE ADMISSIBILITY OF A*
Let us say that a search algorithm is admissible if, for any graph, it
always terminates in an optimal path from s to a goal node whenever a
path from s to a goal node exists. In this section we show informally that
A* is admissible.
To show that an algorithm is admissible, it is necessary to show, at least,
that it terminates whenever a goal node is accessible. The GRAPHSEARCH algorithm terminates (if at all) either in step 3 or in step 5. Notice that in every cycle through the loop of the algorithm, a node is removed from OPEN and that only a finite number of new successors are
added to
OPEN. For finite graphs, we ultimately run out of new
successors, and thus, unless the algorithm terminates successfully in step
5 by finding a goal node, it will terminate in step 3 after eventually
depleting OPEN. Therefore,
RESULT 1: GRAPHSEARCH always terminates for finite
graphs.
Next we would like to show that if a path from s to a goal node exists,
A* will terminate even for infinite graphs. To do so, let us suppose the
opposite, that A* does not terminate. Termination is prevented only if
new nodes are forever added to OPEN. But in this case we can show that
even the smallest of the f values of the nodes on OPEN will grow
impossibly large.
Let d*(n) be the length of the shortest path in the implicit graph being
searched from s to any node n in the search tree produced by A*. Then,
since the cost of each arc in the graph is at least some small positive
number e, g*(n) ≥ d*(n)e. (Recall that g*(n) is the cost of the optimal
path from s to n, and that g(n) is the cost of the path in the search tree
from s to node n.) Clearly, g(n) ≥ g*(n), and thus g(n) ≥ d*(n)e. If
h(n) ≥ 0 (which we henceforth assume), f(n) ≥ g(n), and thus
f(n) ≥ d*(n)e. In particular, for every node n on OPEN, the value of
f(n) is at least as large as d*(n)e. Even though A* selects for expansion
that node on OPEN whose f value is smallest, the node selected will
ultimately have an arbitrarily large value of d*, and therefore also of f, if
A* does not terminate.
Now, to show that A* must eventually terminate, we show that before
termination of A*, there is always a node n' on OPEN such that
f(n') ≤ f*(s). Let the ordered sequence (s = n0, n1, ..., nk), where nk is
a goal node, be an optimal path from s to a goal node. Then, for any time
before A* terminates, let n' be the first node in this sequence that is on
OPEN. (There must be at least one such node, because s is on OPEN at
the beginning and if nk is on CLOSED, A* has terminated.) By the
definition of f for A*, we have
f(n') = g(n') + h(n').
We know that A* has already found an optimal path to n', since n' is on an
optimal path to a goal and all of the ancestors on this path are on
CLOSED. Therefore, g(n') = g*(n') and
f(n') = g*(n') + h(n').
Since we are assuming h(n') ≤ h*(n'), we can write
f(n') ≤ g*(n') + h*(n') = f*(n').
But the f* value of any node on an optimal path is equal to f*(s), the
minimal cost, and therefore f(n') ≤ f*(s). Thus, we have:
RESULT 2: At any time before A* terminates, there
exists on OPEN a node n' that is on
an optimal path from s to a goal node, with
f(n') ≤ f*(s).
Combining this result with our previous argument, that even the
smallest f values of the nodes on OPEN of a nonterminating A* become
unbounded, shows that A* must terminate even for infinite graphs. Thus,
RESULT 3: If there is a path from s to a goal node,
A* terminates.
RESULT 3 has an interesting corollary, namely, that any node, n, on
OPEN with f(n) < f*(s) will eventually be selected for expansion by
A*. We leave the proof as an exercise for the reader.
Now it is a simple matter to show that A* is admissible. First, we note
again that A* can either terminate by finding a goal node in step 5 or,
after depleting OPEN, in step 3. But OPEN can never become empty
before termination if there is a path from s to a goal node because, by
RESULT 2, there will always be a node on OPEN (and on an optimal
path). Therefore, A* must terminate by finding a goal node.
Next we would like to show that A* only terminates by finding an
optimal path to a goal node. Suppose A* were to terminate at some goal
node, t, without finding an optimal path, that is, f(t) = g(t) > f*(s).
But, by RESULT 2, there existed just before termination a node, n', on
OPEN and on an optimal path with f(n') ≤ f*(s) < f(t). Thus, at this
stage, A* would have selected n' for expansion rather than t, contradicting our supposition that A* terminated. Therefore, we finally have
RESULT 4: Algorithm A* is admissible. (That is, if
there is a path from s to a goal node, A*
terminates by finding an optimal path.)
Each node selected for expansion by A* has an interesting property
that follows directly from RESULT 2: Its f value is never greater than the
cost, f*(s), of an optimal path. This result will be important to us later.
To show that it is true, let n be any node selected for expansion by A*. If n
is a goal node, we have f(n) = f*(s) by RESULT 4; so suppose n is not a
goal node. Now A* selected n before termination, so at this time (by
RESULT 2) we know that there existed on OPEN some node n' on an
optimal path from s to a goal with f(n') ≤ f*(s). If n = n', our result is
established. Otherwise, we know that A* chose to expand n rather than
n'; therefore it must have been the case that
f(n) ≤ f(n') ≤ f*(s).
Therefore, we have
RESULT 5: For any node n selected for expansion by
A*, f(n) ≤ f*(s).
2.4.4. COMPARISON OF A* ALGORITHMS
The precision of our heuristic function h depends on the amount of
heuristic knowledge it possesses about the problem domain. Clearly,
using h(n) = 0 reflects complete absence of any heuristic information
about the problem, even though such an estimate is a lower bound on
h*(n) and therefore leads to an admissible algorithm.
Let us compare two versions of A*, namely, A1 and A2, using the
following evaluation functions:
f1(n) = g1(n) + h1(n)
and
f2(n) = g2(n) + h2(n),
where h1 and h2 are both lower bounds on h*. We say that algorithm A2
is more informed than algorithm A1 if, for all nongoal nodes n,
h2(n) > h1(n). This definition seems intuitively reasonable, since with h
bounded from above by h* for admissibility, one suspects that using
larger values of h (and thus values closer to h*) requires more accurate
heuristic information.
As an example, consider the 8-puzzle solved in Figure 2.8. There we
used the evaluation function f(n) = d(n) + W(n). We can interpret
the search process of that example as an application of A* with
h(n) = W(n) and unit arc costs. (Note that W(n) is a lower bound on
the number of steps remaining to the goal.) It is reasonable to say that A*
with h(n) = W(n) is more informed than breadth-first search, which
uses h(n) = 0.
We would expect intuitively that the more informed algorithm
typically would need to expand fewer nodes to find a minimal cost path.
In the case of the 8-puzzle, this observation is supported by comparing
Figure 2.7 with Figure 2.8. Of course, merely because one algorithm
expands fewer nodes than another does not imply that it is more efficient.
The more informed algorithm may indeed have to make more costly
computations, which would destroy efficiency. Nevertheless, the number
of nodes expanded by an algorithm is one of the factors that determines efficiency, and it is a factor that permits simple comparisons.
Suppose that A2 is more informed than A1 and that both A1 and A2 are
versions of A*. Suppose that A1 and A2 are used to search an implicit
graph having a path from a given node s to a goal node. Both, of course, will terminate in an optimal path. We will show that, at termination, if
node n in G was expanded by A2, it was also expanded by A1. Thus, A1
always expands at least as many nodes as does the more informed A2.
We prove this result using induction on the depth of a node in the A2
search tree at termination. First, we prove that if A2 expands a node n
having zero depth in its search tree, then so will A1. But, in this case,
n = s. If s is a goal node, neither algorithm expands any nodes. If s is not a
goal node, both algorithms expand node s. Continuing the inductive
argument, we assume (the induction hypothesis) that A1 expands all the
nodes expanded by A2 having depth k, or less, in the A2 search tree. We
must now prove that any node n expanded by A2 and of depth k + 1 in
the A2 search tree is also expanded by A1. By the induction hypothesis,
any ancestor of n in the A2 search tree is also expanded by A1. Thus, node
n is in the A1 search tree, and there is a path from s to n in the A1 search
tree that is no more costly than the cost of the path from s to n in the A2
search tree; that is,
g1(n) ≤ g2(n).
Let us suppose the opposite of what we are trying to prove, namely,
that A1 did not expand node n expanded by A2. Certainly, at termination
of A1, node n must be on OPEN for A1, because A1 expanded a parent of
node n. Since A1 terminated in a minimal cost path without expanding
node n, we know that
f1(n) ≥ f*(s);
thus,
g1(n) + h1(n) ≥ f*(s),
or
h1(n) ≥ f*(s) - g1(n).
Since we have already shown that g1(n) ≤ g2(n), we have
h1(n) ≥ f*(s) - g2(n).
But, by RESULT 5, since A2 expanded node n, we have
f2(n) ≤ f*(s),
or
g2(n) + h2(n) ≤ f*(s),
or
h2(n) ≤ f*(s) - g2(n).
Comparing this inequality for h2(n) with the earlier one for h1(n) (i.e.,
h1(n) ≥ f*(s) - g2(n)) reveals that, at least at node n, h1 must be as
large as h2, which violates the assumption that A2 is more informed than
A1. Thus, we have
RESULT 6: If A1 and A2 are two
versions of A* such that A2 is
more informed than A1, then at the
termination of their searches on any graph
having a path from s to a goal node,
every node expanded by A2 is also
expanded by A1. It follows that A1
expands at least as many nodes as does A2.
2.4.5. THE MONOTONE RESTRICTION
Describing the GRAPHSEARCH procedure, we noted that when a
node n is expanded, some of its successors may already be on OPEN or
CLOSED. The search tree may then need to be adjusted so that it defines
the least costly paths in G from node s to the descendants of node n. In
addition to the burden of adjusting the search tree, it is often computationally quite expensive to test whether a node has been generated before. We now show that given a rather mild and reasonable restriction
on h, when A* selects a node for expansion it has already found an
optimal path to that node. Thus, with this restriction, there is no need for
A* to test to see if a newly generated node is already on CLOSED, and
there is no need to change the parentage in the search tree of any
successors of this node in the search graph.
A heuristic function, h, is said to satisfy the monotone restriction if for
all nodes ni and nj, such that nj is a successor of ni,
h(ni) - h(nj) ≤ c(ni, nj),
with
h(t) = 0 for any goal node t.
If we write the monotone restriction in the form
h(ni) ≤ h(nj) + c(ni, nj),
it is seen to be similar to a triangle inequality. It specifies that the estimate of the optimal cost to a goal from node ni not be more than the cost of the
arc from ni to nj plus the estimate of the optimal cost from nj to a goal.
We might say that the monotone restriction imposes the rather reasonable condition that the heuristic function be locally consistent with the
arc costs.
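On an explicit graph the restriction is easy to check mechanically. The following fragment is an illustrative sketch, not from the text; the arcs, costs, and h values are invented:

```python
def satisfies_monotone(arcs, h, goals):
    """arcs: list of (ni, nj, cost) triples, one per graph arc.
    Checks h(ni) <= cost(ni, nj) + h(nj) on every arc, and h = 0 at goals."""
    return (all(h[ni] <= cost + h[nj] for ni, nj, cost in arcs)
            and all(h[t] == 0 for t in goals))

arcs = [("s", "a", 2), ("a", "t", 3), ("s", "t", 6)]
print(satisfies_monotone(arcs, {"s": 5, "a": 3, "t": 0}, {"t"}))  # True
print(satisfies_monotone(arcs, {"s": 6, "a": 3, "t": 0}, {"t"}))  # False: 6 > 2 + 3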
In the 8-puzzle, it is easily verified that h(n) = W(n) satisfies the
monotone restriction. If the function h is changed in any manner during
the search process, then the monotone restriction might not be satisfied.
We now show that, given the monotone restriction, when A* expands a
node, it has found an optimal path to that node. Let n be any node
selected for expansion by A*. If n = s, A* has trivially found an optimal
path to s; so let us suppose that n is not s. Let the sequence P = (s = n0,
n1, n2, ..., nk = n) be an optimal path from s to n. Let node ni be the last
node in this sequence that is on CLOSED at the time A* selects n for
expansion. (Node s is on CLOSED, but node nk is not, because it is just
now being selected for expansion.) Thus, node ni+1 in the sequence P is
on OPEN at the time A* selects node n.
Using the monotone restriction, we have that
g*(ni) + h(ni) ≤ g*(ni) + c(ni, ni+1) + h(ni+1).
Since ni and ni+1 are on an optimal path,
g*(ni+1) = g*(ni) + c(ni, ni+1);
therefore
[g*(ni) + h(ni)] ≤ [g*(ni+1) + h(ni+1)].
By transitivity, we then have
g*(ni+1) + h(ni+1) ≤ g*(nk) + h(nk),
or
f(ni+1) ≤ g*(n) + h(n).
Therefore, at the time A* selected node n in preference to node ni+1, it
must have been the case that g(n) ≤ g*(n); otherwise, f(n) would have
been greater than f(ni+1). Since g(m) ≥ g*(m) for all nodes m in the
search tree, we have
RESULT 7: If the monotone restriction is satisfied,
then A* has already found an optimal path
to any node it selects for expansion. That is,
if A* selects n for expansion, and if the
monotone restriction is satisfied,
g(n) = g*(n).
The monotone restriction also implies another interesting result,
namely, that the f values of the sequence of nodes expanded by A* are
nondecreasing. Suppose node n2 is expanded immediately after node n1.
If n2 was on OPEN at the time n1 was expanded, we have (trivially) that
f(n1) ≤ f(n2). Suppose n2 is not on OPEN at the time n1 is expanded.
(Node n2 is not on CLOSED either, because we are assuming that it has
not been expanded yet.) Then, if n2 is expanded immediately after n1, it
must have been added to OPEN by the process of expanding n1.
Therefore, n2 is a successor of n1. Under these conditions, when n2 is
selected for expansion we have
f(n2) = g(n2) + h(n2)
= g*(n2) + h(n2) (RESULT 7)
= g*(n1) + c(n1, n2) + h(n2)
= g(n1) + c(n1, n2) + h(n2). (RESULT 7)
Since the monotone restriction implies
c(n1, n2) + h(n2) ≥ h(n1),
we have
f(n2) ≥ g(n1) + h(n1) = f(n1).
Since this fact is true for any adjacent pair of nodes in the sequence of
nodes expanded by A*, we have
RESULT 8: If the monotone restriction is satisfied,
the f values of the sequence of nodes
expanded by A* are nondecreasing.
When the monotone restriction is not satisfied, it is possible that some
node has a smaller f value at expansion than that of a previously
expanded node. We can exploit this observation to improve the efficiency of A* under this condition. By RESULT 5, when node n is
expanded, f(n) ≤ f*(s). Suppose, during the execution of A*, we
maintain a global variable, F, as the maximum of the f values of all nodes
so far expanded. Certainly F ≤ f*(s) at all times. If ever a node, n, on
OPEN has f(n) < F, we know by the corollary to RESULT 3 that it will
eventually be expanded. In fact, there may be several nodes on OPEN
whose f values are strictly less than F. Rather than choose, from these,
that node with the smallest f value, we might rather choose that node with
the smallest g value. (All of them must eventually be expanded anyway.)
The effect of this altered node selection rule is to enhance the chances
that the first path discovered to a node will be an optimal path. Thus, even when the monotone restriction is not satisfied, this alteration will
diminish the need for pointer redirection in step 7 of the algorithm. (Note
that when the monotone restriction is satisfied, RESULT 8 implies that
there will never be a node on OPEN whose f value is less than F.)
2.4.6. THE HEURISTIC POWER OF EVALUATION
FUNCTIONS
The selection of the heuristic function is crucial in determining the
heuristic power of search algorithm A. Using h = 0 assures admissibility
but results in a breadth-first search and is thus usually inefficient. Setting
h equal to the highest possible lower bound on h* expands the fewest
nodes consistent with maintaining admissibility.
Often, heuristic power can be gained at the expense of admissibility by
using some function for h that is not a lower bound on h*. This added
heuristic power then allows us to solve much harder problems. In the
8-puzzle, the function h(n) = W(n) (where W(n) is the number of tiles
in the wrong place) is a lower bound on h*(n), but it does not provide a
very good estimate of the difficulty (in terms of number of steps to the
goal) of a tile configuration. A better estimate is the function
h(n) = P(n), where P(n) is the sum of the distances that each tile is
from "home" (ignoring intervening pieces). Even this estimate is too
coarse, however, in that it does not accurately appraise the difficulty of
exchanging the positions of two adjacent tiles.
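The distance measure P(n) can be sketched as follows (illustrative Python, not from the text; the goal layout is the one used in this chapter's 8-puzzle examples, and 0 marks the blank):

```python
# Goal square of each tile in the goal configuration 1 2 3 / 8 _ 4 / 7 6 5.
GOAL_POS = {1: (0, 0), 2: (0, 1), 3: (0, 2),
            8: (1, 0),            4: (1, 2),
            7: (2, 0), 6: (2, 1), 5: (2, 2)}

def P(board):
    """Sum, over the tiles, of each tile's distance from home
    (row steps plus column steps, ignoring intervening pieces)."""
    total = 0
    for i in range(3):
        for j in range(3):
            tile = board[i][j]
            if tile != 0:
                gi, gj = GOAL_POS[tile]
                total += abs(i - gi) + abs(j - gj)
    return total

start = ((2, 8, 3),
         (1, 6, 4),
         (7, 0, 5))
print(P(start))   # 5
```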
An estimate that works quite well for the 8-puzzle is
h(n) = P(n) + 3S(n).
The quantity S(n) is a sequence score obtained by checking around the
noncentral squares in turn, allotting 2 for every tile not followed by its
proper successor and allotting 0 for every other tile; a piece in the center
scores one. We note that this h function does not provide a lower bound
for h*. With this heuristic function used in the evaluation function
f(n) = g(n) + h(n), we can easily solve much more difficult 8-puzzles
than the one we solved earlier. In Figure 2.9 we show the search tree
resulting from applying GRAPHSEARCH with this evaluation function
to the problem of transforming
2 1 6
4 8
7 5 3
into
1 2 3
8 4
7 6 5
Fig. 2.9 A search tree for the 8-puzzle.
Again, the f values of each node are circled in the figure, and the
uncircled numbers show the order in which nodes are expanded. (In the
search depicted in Figure 2.9, ties among minimal f values are resolved
by selecting the deepest node in the search tree.)
The solution path found happens to be of minimal length (18 steps),
although, since the h function is not a lower bound for h*, we were not
guaranteed of finding an optimal path. Note that this h function results in
a focused search, directed toward the goal; only a very limited spread occurred, near the start.
Another factor that determines the heuristic power of search algorithms is the amount of effort involved in calculating the heuristic
function. The best function would be one identically equal to h*,
resulting in an absolute minimum number of node expansions. (Such an
h could, for example, be determined as a result of a separate complete
search at every node; but this obviously would not reduce the total computational effort.) Sometimes an h function that is not a lower bound
on h* is easier to compute than one that is a lower bound. In these cases,
the heuristic power might be doubly improved: the total
number of nodes expanded can be reduced (at the expense of admissibility) and the computational effort is reduced.
In certain cases the heuristic power of a given heuristic function can be
increased simply by multiplying it by some positive constant greater than
one. If this constant is very large, the situation is as if g(n) = 0. In many
problems we merely desire to find some path to a goal node and are
unconcerned about the cost of the resulting path. (We are, of course, concerned about the amount of search effort required to
find a path.) In
such situations, we might think that g could be ignored completely since,
at any stage during the search, we don't care about the costs of the paths
developed thus far. We care only about the remaining search effort required
to find a goal node. This search effort, while possibly dependent
on the h values of the nodes on OPEN, would seem to be independent of
the g values of these nodes. Therefore, for such problems, we might be led to use f = h as the evaluation function.
To ensure that some path to a goal will eventually be found, g should
be included in f even when it is not essential to find a path of minimal
cost. Such insurance is necessary whenever h is not a perfect estimator; if
the node with minimum h were always expanded, the search process
might expand deceptive nodes forever without ever reaching a goal node.
Including g tends to add a breadth-first component to the search and thus
ensures that no part of the implicit graph will go permanently unsearched.
The relative weights of g and h in the evaluation function can be
controlled by using f = g + wh, where w is a positive number. Very large
values of w overemphasize the heuristic component, while very small
values of w give the search a predominantly breadth-first character.
Experimental evidence suggests that search efficiency is often enhanced by allowing the value of
w to vary inversely with the depth of a node in
the search tree. At shallow depths, the search relies mainly on the
heuristic component, while at greater depths, the search becomes
increasingly breadth-first, to ensure that some path to a goal will
eventually be found.
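One way to realize such a depth-dependent weighting can be sketched as follows (illustrative only; the particular decay schedule w0/(1 + depth) is our invention, not a recommendation from the text):

```python
def weighted_f(g, h, depth, w0=5.0):
    """Evaluate f = g + w*h, with the weight w shrinking as depth grows."""
    w = w0 / (1.0 + depth)       # heuristic dominates at shallow depths
    return g + w * h

print(weighted_f(3, 2, 0))   # 13.0  (w = 5: strongly heuristic)
print(weighted_f(3, 2, 4))   # 5.0   (w = 1: plain f = g + h)
```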
To summarize, there are three important factors influencing the
heuristic power of Algorithm A:
(a) the cost of the path,
(b) the number of nodes expanded in finding the path, and
(c) the computational effort required to compute h.
The selection of a suitable heuristic function permits one to balance these
factors to maximize heuristic power.
2.5. RELATED ALGORITHMS
2.5.1. BIDIRECTIONAL SEARCH
Some problems can be solved using production systems whose rules
can be used in either a forward or a backward direction. An interesting
possibility is to search in both directions simultaneously. The graph-
searching process that models such a bidirectional production system can
be viewed as one in which search proceeds outward simultaneously from
both the start node and from a set of goal nodes. The process terminates
when (and if) the two search frontiers meet in some appropriate fashion.
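A breadth-first version of this process can be sketched as follows (illustrative Python, not from the text; the grid graph and the level-at-a-time frontier policy are our own choices):

```python
def bidirectional_bfs(start, goal, neighbors):
    """Grow breadth-first frontiers from both ends until they meet;
    returns the length of a shortest path, or None if there is none."""
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}      # exact distances found so far
    frontier_f, frontier_b = [start], [goal]
    while frontier_f and frontier_b:
        # advance the smaller frontier by one full level
        forward = len(frontier_f) <= len(frontier_b)
        dist = dist_f if forward else dist_b
        frontier = frontier_f if forward else frontier_b
        next_level = []
        for n in frontier:
            for m in neighbors(n):
                if m not in dist:
                    dist[m] = dist[n] + 1
                    next_level.append(m)
        if forward:
            frontier_f = next_level
        else:
            frontier_b = next_level
        meeting = dist_f.keys() & dist_b.keys()  # frontiers intersect?
        if meeting:
            return min(dist_f[m] + dist_b[m] for m in meeting)
    return None

def grid_neighbors(p):                           # 4-connected 10-by-10 grid
    x, y = p
    return [(x + dx, y + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 10 and 0 <= y + dy < 10]

print(bidirectional_bfs((0, 0), (9, 9), grid_neighbors))   # 18
```

Advancing whole levels and taking the minimum over all meeting nodes keeps the returned length exact; stopping at the very first intersecting node, without that check, can overestimate.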
[Fig. 2.10 Bidirectional and unidirectional breadth-first searches. (Diagram unrecoverable from the scan; it contrasts the unidirectional search frontier at termination with the two bidirectional search frontiers at termination.)]
[Fig. 2.11 Forward search misses backward search. (Diagram unrecoverable from the scan; it shows the forward and backward search frontiers passing each other.)]
Breadth-first versions of bidirectional graph-searching processes compare favorably with breadth-first unidirectional search. In Figure 2.10 we
compare two searches over a two-dimensional grid of nodes. We see that
the bidirectional process expands many fewer nodes than does the
unidirectional one.
The situation is more complex, however, when comparing bidirectional and unidirectional heuristic searches. If the heuristic functions
used by the bidirectional process are even slightly inaccurate, the search frontiers may pass each other without intersecting. In such a case, the
bidirectional search process may expand twice as many nodes as would
the unidirectional one. This situation is illustrated in Figure 2.11.
2.5.2. STAGED SEARCH
The use of heuristic information as discussed so far can substantially
reduce the amount of search effort required to find acceptable paths. Its
use, therefore, also allows much larger graphs to be searched than would
be the case otherwise. Even so, occasions may arise when available
storage is exhausted before a satisfactory path is found. Rather than
abandon the search process completely in such cases, it may be desirable to prune the search graph, freeing needed storage space so that the search can press deeper.
The search process can then continue in stages, punctuated by pruning
operations that reclaim storage space. At the end of each stage, some subset of the nodes on OPEN, for example those having the smallest values of f,
are marked for retention. The best paths to these nodes are remembered,
and the rest of the search graph is thrown away. Search then resumes with
these best nodes. This process continues until either a goal node is found or until resources are exhausted. Of course, even if A* is used in each
stage and if the whole process does terminate in a path, there is now no
guarantee that it is an optimal path.
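The pruning step described here, retaining only the OPEN nodes with the smallest f values, can be sketched as follows; the representation of OPEN entries as (f, state) pairs is an illustrative assumption.

```python
import heapq

def prune_open(open_list, keep):
    """Staged-search pruning: retain only the `keep` entries of OPEN having
    the smallest f values; everything else is discarded to free storage.
    Entries are (f, state) pairs, so tuples compare on f first."""
    return heapq.nsmallest(keep, open_list)
```

For example, `prune_open([(7, 'a'), (3, 'b'), (5, 'c'), (9, 'd')], 2)` retains `[(3, 'b'), (5, 'c')]`; search then resumes from these best nodes.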
2.5.3. LIMITATION OF SUCCESSORS
One technique that may save search effort is the disposal immediately
after expansion of all successors except a few having the smallest values
of f. Of course, the nodes thrown away may be on the best (or the only!) paths to a goal, so the worth of any such pruning method for a particular
problem can be determined only by experience.
Knowledge about the problem domain may sometimes be adequate to
recognize that certain nodes cannot possibly be on a path to a goal node.
(Such nodes satisfy a predicate like the DEADEND predicate used in the
backtracking algorithm.) These nodes can be pruned from the search
graph by modifying algorithm A to include this test. Alternatively, we could assign such nodes a very high h value so that they would never be selected for expansion.
There are also search problems for which the successors of a node can
be enumerated and their h values computed before the corresponding databases themselves are explicitly calculated. Furthermore, it may be advantageous to delay calculating the database associated with a node until it itself is expanded; then the process never calculates any successors not expanded by the algorithm.
2.6. MEASURES OF PERFORMANCE
The heuristic power of a searching technique depends heavily on the
particular factors specific to a given problem. Estimating heuristic power involves judgments based on experience rather than calculation.
Certain measures of performance can be calculated, however, and though
they do not completely determine heuristic power, they are useful in
comparing various search techniques.
One such measure is called penetrance. The penetrance, P, of a search
is the extent to which the search has focused toward a goal, rather than
wandered off in irrelevant directions. It is simply defined as
P = L/T
where L is the length of the path found to the goal and T is the total
number of nodes generated during the search (including the goal node
but not including the start node). For example, if the successor operator is
so precise that the only nodes generated are those on a path toward the
goal, P will attain its maximum value of 1. Uninformed search is
characterized by values of P much less than 1. Thus, penetrance measures
the extent to which the tree generated by the search is "elongated" rather
than "bushy."
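The definition P = L/T is direct to compute. A minimal sketch; the numeric examples are illustrative, not taken from the text.

```python
def penetrance(path_length, nodes_generated):
    """P = L / T: length of the path found to the goal divided by the
    total number of nodes generated during the search (goal node
    included, start node excluded)."""
    return path_length / nodes_generated
```

A perfectly informed search that generates only the nodes on the path gives `penetrance(10, 10) == 1.0`; a bushier search such as `penetrance(10, 200)` gives 0.05.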
The penetrance value of a search depends on the difficulty of the
problem being searched as well as on the efficiency of the search method.
A given search method might have a high penetrance value when the
optimal solution path is short and a much lower one when it is long.
(Increasing the length of the solution path L usually causes T to increase
even faster.)
Another measure, the effective branching factor, B, is more nearly
independent of the length of the optimal solution path. Its definition is
based on a tree having (a) a depth equal to the path length and (b) a total
number of nodes equal to the number generated during the search. The
effective branching factor is the constant number of successors that
would be possessed by each node in such a tree. Therefore, B is related to
path length L and to the total number of nodes generated, T, by the
expressions:

B + B^2 + ... + B^L = T

B(B^L - 1)/(B - 1) = T.
Although B cannot be written explicitly as a function of L and T, a plot
of B versus T for various values of L is given in Figure 2.12. A value of B
near unity corresponds to a search that is highly focused toward the goal,
with very little branching in other directions. On the other hand, a
"bushy" search graph would have a high B value. Penetrance can be
related to B and path length by the expression
P = L(B - 1)/[B(B^L - 1)]. In Figure 2.13 we illustrate how penetrance
varies with path length for various values of B.
To the extent that the effective branching factor is reasonably
independent of path length, it can be used to give a prediction of how
many nodes might be generated in searches of various lengths. For
example, we can use Figure 2.12 to calculate that the use of the evaluation
function f = g + P + 3S results in a B value equal to 1.08 for the
8-puzzle problem illustrated in Figure 2.9. Suppose we wanted to
estimate how many nodes would be generated using this same evaluation
function in solving a more difficult 8-puzzle problem, say, one requiring
30 steps. From Figure 2.12, we note that the 30-step puzzle would involve
the generation of about 120 nodes, assuming that the branching factor
remained constant. This estimate, incidentally, is not inconsistent with
the experimental results of Doran and Michie (1966) on a wide variety of
8-puzzle problems.
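The two relations above can be used both ways: to predict T from B and L, and to recover B numerically from an observed L and T (since there is no closed form). A sketch, reproducing the 30-step estimate from the text; the bisection bounds are assumptions for illustration.

```python
def total_nodes(b, length):
    """T = B(B^L - 1)/(B - 1): the node count of a uniform tree having
    branching factor B and depth L."""
    return b * (b ** length - 1) / (b - 1)

def effective_branching(length, total):
    """Recover B from L and T by bisection, since B cannot be written
    explicitly as a function of L and T; total_nodes is monotone in B."""
    lo, hi = 1.0 + 1e-9, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if total_nodes(mid, length) < total:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

With B = 1.08 and L = 30, `total_nodes` predicts roughly 120 generated nodes, matching the estimate in the text, and `effective_branching(30, total_nodes(1.08, 30))` recovers 1.08.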
Fig. 2.12 B versus T for various values of L.
Fig. 2.13 P versus L for various values of B.
2.7. BIBLIOGRAPHICAL AND HISTORICAL
REMARKS
The book by Horowitz and Sahni (1978) contains a thorough discussion of backtracking and other search methods. Gaschnig (1979) presents experimental efficiency comparisons of backtracking and related algorithms. In some problems involving constraint satisfaction, relaxation
techniques can be employed to reduce search effort; these methods are
discussed by Waltz (1975), Montanari (1974), and Mackworth (1977).
Graph-search procedures of the sort that we termed uninformed have
arisen in a variety of contexts. Dijkstra (1959) and Moore (1959) both
proposed essentially breadth-first procedures. Dynamic programming
[Bellman and Dreyfus (1962)] is a type of breadth-first search process.
Our GRAPHSEARCH procedure differs from many previous ones in
that we do not transfer nodes from CLOSED back to OPEN when they
are revisited. [We redirect pointers in the search tree instead.]
The use of heuristic information to increase search efficiency has been
studied both in AI and in operations research. In AI, heuristic search was
a main theme of the work of Newell, Shaw, and Simon (1957, 1960). The
use of evaluation functions to direct search in graphs was proposed by
Doran and Michie (1966), from whom we take our 8-puzzle examples.
A general theory of the use of evaluation functions to guide search was
presented in a paper by Hart, Nilsson, and Raphael (1968). Our
description of A* and its properties is based on that paper. [The fact that
A* expands no more nodes than other algorithms that are no more
informed than A* was originally mistakenly thought to depend on a restriction similar to the monotone restriction. This error, originally pointed out by R. Coleman, was corrected in Hart, Nilsson, and Raphael
(1972). Corrections and refinements were also proposed by Gelperin
(1977).] VanderBrug (1976) presents an interesting geometric interpretation of heuristic search processes.
Pohl has proposed several generalizations of A*, including a scheme
for bidirectional search [Pohl (1971)], and a method that changes the
relative weighting of h and g as search proceeds [Pohl (1973)]. Our use of
the monotone restriction is based on Pohl (1977). (The earlier consistency
restriction, of Hart, Nilsson, and Raphael, is stronger than needed and
harder to establish than the monotone restriction.) Pohl (1970,1977) and
Harris (1974) analyze some of the effects of errors in the heuristic
function on search, and Martelli (1977) analyzes the complexity of
heuristic search algorithms. [The node selection rule described on page
84 is based on Martelli's paper.] Simon and Kadane (1975) describe search methods designed to find any solution rather than insisting on
optimal solutions. Michie and Ross (1970) describe a heuristic search
process that generates just one successor at a time.
The staged search variant was investigated by Doran and Michie
(1966) and by Doran (1967). A process involving staged search has been
used rather effectively in systems for speech understanding [Lowerre
(1976)] and visual scene interpretation [Rubin (1978)]. Jackson (1974, p.
104) discusses an application to the 15-puzzle (by A. K. Chandra) of an
interesting search process that uses "mileposts."
Doran and Michie (1966) proposed the penetrance measure for
judging the efficiency of a given search. Slagle and Dixon (1969)
proposed another measure that they called the "depth ratio." Our
"effective branching factor" was motivated by these earlier measures.
Heuristic search finds many applications, sometimes outside of the
context of conventional AI systems. Montanari (1970) makes use of
heuristic search in chromosome matching, and Kanal (1979) discusses an
application in pattern classification.
EXERCISES
2.1 Consider a sliding block puzzle with the following initial configuration:

B B B W W W E

There are three black tiles (B), three white tiles (W), and an empty cell (E). The puzzle has the following moves:
(a) A tile may move into an adjacent empty
cell with unit cost.
(b) A tile may hop over at most two other
tiles into an empty cell with a cost equal to
the number of tiles hopped over.
The goal of the puzzle is to have all of the white tiles to the left of all of the
black tiles (without regard for the position of the blank cell).
Specify a heuristic function, h, for this problem and show the search
tree produced by algorithm A using this heuristic function. Can you tell
whether or not your h function satisfies the monotone restriction? Does it
satisfy the monotone restriction for the nodes in your search tree?
96
EXERCISES
2.2 Propose two (non-zero) h functions for the traveling salesman
problem of section 1.1.6. Is either of these h functions a lower bound on
h*? In your opinion, which of them would result in more efficient search?
Apply algorithm A with these h functions to the five-city problem shown
in Figure 1.5.
2.3 Assume unit costs for each rule application in the formulation of the
4-queens problem of section 2.1. Describe the general characteristics of
the h* function for this problem. Can you think of any h functions that
would be useful for guiding search?
2.4 Describe how to modify procedure GRAPHSEARCH so that only
one successor of a node (at a time) is generated in step 6. The modified
procedure must make two selections: which node to expand and which successor to generate. (In controlling a production system, the modified procedure must select a database and an applicable rule.)
2.5 Prove, as a corollary to RESULT 3, that any node, n, on OPEN with f(n) < f*(s), will eventually be selected for expansion by A*.
2.6 Explain why algorithm A* remains admissible if it removes from OPEN any node n for which f(n) > F, where F is an upper bound on f*(s).
2.7 Use the evaluation function f(n) = d(n) + W(n) (defined in
section 2.4.1) with algorithm A to search backward from the goal node of
Figure 2.8 to the start node. Where would the backward search meet the forward search?
2.8 Discuss ways in which an h function might be improved during a
search.
CHAPTER 3

SEARCH STRATEGIES FOR DECOMPOSABLE PRODUCTION SYSTEMS
In chapter 1, we introduced decomposable production systems and
structures called AND/OR trees, for controlling their operation. In this
chapter we describe some heuristic strategies for searching AND/OR
trees and graphs. We also describe some search techniques for graphs used in game-playing systems.
3.1. SEARCHING AND/OR GRAPHS
Recall that the AND or the OR label given to a node in an AND/OR
tree depends upon that node's relation to its parent. In one case, a parent
node labeled by a compound database has a set of AND successor nodes,
each labeling one of the component databases. In the other case, a parent
node labeled by a component database has a set of OR successor nodes, each labeling the database resulting from the application of alternative
rules to the component database.
We are generally concerned with AND/OR graphs rather than with
the special case of
trees, because different sequences of rule applications
may generate identical databases. For example, a node could be labeled
by a component database resulting both from having split a compound
one and from having applied a rule to another one. In this case, it would
be called an OR node with respect to one parent and an AND node with
respect to the other parent. For this reason, we do not generally refer to
the nodes of an AND/OR graph as being AND nodes or OR nodes;
instead, we introduce some more general notation, appropriate for
graphs. We continue to call these structures AND/OR graphs, however,
and use the terms AND nodes and OR nodes when discussing AND/OR
trees.
We define AND/OR graphs here as hypergraphs. Instead of arcs connecting pairs of nodes, there are hyperarcs connecting a parent node with a set of successor nodes. These hyperarcs are called connectors. Each k-connector is directed from a parent node to a set of k successor nodes. (If all of the connectors are 1-connectors, we have the special case of an ordinary graph.)
In Figure 3.1, we show an example of an AND/OR graph. Note that node n0 has a 1-connector directed to successor n1 and a 2-connector directed to the set of successors {n4, n5}. For k > 1, k-connectors are denoted in our illustrations by a curved line joining the arcs from parent to elements of the successor set. (Using our earlier terminology, we could have regarded nodes n4 and n5 as a set of AND nodes, and we could have regarded node n1 as an OR node, relative to their common parent n0; but note that node n8, for example, belongs to a set of AND nodes relative to its parent n5 but is an OR node relative to its parent n4.)
Fig. 3.1 An AND/OR graph.
In an AND/OR tree, each node has at most one parent. In trees and
graphs we call a node without any parent a root node. In graphs, we call a
node having no successors a leaf node (a tip node for trees).
A decomposable production system defines an implicit AND/OR
graph. The initial database corresponds to a distinguished node in the
graph called the start node. The start node has an outgoing connector to a
set of successor nodes corresponding to the components of the initial
database (if it can be decomposed). Each production rule corresponds to
a connector in the implicit graph. The nodes to which such a connector is
directed correspond to component databases resulting after rule applica
tion and decomposition into components. There is a set of terminal nodes
in the implicit graph corresponding to databases satisfying the termination condition of the production system. The task of the production
system can be regarded as
finding a solution graph from the start node to
the terminal nodes.
Roughly speaking, a solution graph from node n to node set N of an
AND/OR graph is analogous to a path in an ordinary graph. It can be
obtained by starting with node n and selecting exactly one outgoing
connector. From each successor node to which this connector is directed,
we continue to select one outgoing connector, and so on, until eventually
every successor thus produced is an element of the set N. In Figure 3.2, we show two different solution graphs from node n0 to {n7, n8} in the graph of Figure 3.1.
We can give a precise recursive definition of a solution graph. The
definition assumes that our AND/OR graphs contain no cycles, that is, it
assumes that there is no node in the graph having a successor that is also
its ancestor. The nodes thus form a partial order which guarantees
termination of the recursive procedures we use. We henceforth make this
assumption of acyclicity.
Fig. 3.2 Two solution graphs.
Let G' denote a solution graph from node n to a set N of nodes of an
AND/OR graph G. G' is a subgraph of G.
If n is an element of N, G' consists of the single node n;

otherwise, if n has an outgoing connector, K, directed to nodes {n1, ..., nk} such that there is a solution graph to N from each of the ni, i = 1, ..., k, then G' consists of node n, the connector K, the nodes {n1, ..., nk}, and the solution graphs to N from each of the nodes in {n1, ..., nk};

otherwise, there is no solution graph from n to N.
Analogous to the use of arc costs in ordinary graphs, it is often useful to assign costs to connectors in AND/OR graphs. (These costs model the costs of rule applications; again we need to assume that each cost is greater than some small positive number, ε.) The connector costs can then be used to calculate the cost of a solution graph. Let the cost of a solution graph from any node n to N be denoted by k(n, N). The cost k(n, N) can be recursively calculated as follows:

If n is an element of N, k(n, N) = 0.

Otherwise, n has an outgoing connector to a set of successor nodes {n1, ..., nk} in the solution graph. Let the cost of this connector be c. Then,

k(n, N) = c + k(n1, N) + ... + k(nk, N).
We see that the cost of a solution graph, G', from n to N is the cost of the outgoing connector from n (in G') plus the sum of the costs of the solution graphs from the successors of n (in G') to N. This recursive definition is satisfactory because we are assuming acyclic graphs.
Note that our definition of the cost of a solution graph might count the
costs of some connectors in the solution graph more than once. In
general, the cost of an outgoing connector from some node m is counted
in the cost of a solution graph from n to N just as many times as there are paths from n to m in the solution graph. Thus, the costs of the two solution graphs in Figure 3.2 are 8 and 7 if the cost of each k-connector is k.
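The recursive cost definition, including the double counting of connectors reached along more than one path, can be sketched directly. The small graph in the usage example is hypothetical, not Figure 3.1.

```python
def solution_cost(n, N, chosen):
    """k(n, N) for a solution graph given by `chosen`, which maps each
    non-leaf node to its single outgoing connector as a
    (connector_cost, successor_tuple) pair. A connector's cost is
    counted once for every path that reaches its node."""
    if n in N:
        return 0
    cost, successors = chosen[n]
    return cost + sum(solution_cost(s, N, chosen) for s in successors)
```

In the hypothetical graph `{"n": (1, ("a", "b")), "a": (1, ("m",)), "b": (1, ("m",)), "m": (2, ("t",))}`, node m is reached from n through both a and b, so its connector cost of 2 is counted twice: `solution_cost("n", {"t"}, ...)` gives 1 + (1 + 2) + (1 + 2) = 7.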
Beyond merely finding any solution graph from the start node to a set
of terminal nodes, we may want to find one having minimal cost. We call
such a solution graph an optimal solution graph. Let the cost of an
optimal solution graph from n to a set of terminal nodes be denoted by the function h*(n).
3.2. AO*: A HEURISTIC SEARCH PROCEDURE FOR
AND/OR GRAPHS
As with ordinary graphs, we define the process of expanding a node as
the application of a successor operator that generates all of the successors
of a node (through all outgoing connectors). We might now define a
breadth-first search algorithm for searching implicit AND/OR graphs to
find solution graphs. Again, since breadth-first procedures are uninformed about the problem domain, they are typically not sufficiently efficient for AI applications. We are naturally led to ask whether some search procedure using an evaluation function with a heuristic component can be devised for AND/OR graphs.
We now describe a search procedure that uses a heuristic function h(n) that is an estimate of h*(n), the cost of an optimal solution graph from node n to a set of terminal nodes. Just as with GRAPHSEARCH, simplifications in the statement of the procedure are possible if h satisfies certain restrictions.
Let us impose a monotone restriction on h; that is, for every connector in the implicit graph directed from node n to successors n1, ..., nk, we assume:

h(n) ≤ c + h(n1) + ... + h(nk),

where c is the cost of the connector. This restriction is analogous to the monotone restriction on heuristic functions for ordinary graphs. If h(n) = 0 for n in the set of terminal nodes, then the monotone restriction implies that h is a lower bound on h*, that is, h(n) ≤ h*(n) for all nodes n.
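For an explicitly stored graph, the monotone restriction is easy to verify mechanically. A sketch; the encoding of connectors as lists of (cost, successor_tuple) pairs is an assumption for illustration.

```python
def satisfies_monotone(connectors, h, terminals):
    """Check h(n) <= c + h(n1) + ... + h(nk) for every connector, and
    h(n) = 0 for every terminal node."""
    if any(h[t] != 0 for t in terminals):
        return False
    return all(h[n] <= c + sum(h[s] for s in succs)
               for n, conns in connectors.items()
               for c, succs in conns)
```

For example, on the hypothetical graph `{"s": [(1, ("a",)), (2, ("b", "c"))], "a": [(3, ("t",))], "b": [(1, ("t",))], "c": [(1, ("t",))]}` the estimates `{"s": 0, "a": 2, "b": 1, "c": 1, "t": 0}` pass the check, while raising h("a") to 5 violates the connector from a.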
Our heuristic search procedure for AND/OR graphs can now be stated
as follows:
Procedure AO*

1 Create a search graph, G, consisting solely of the start node, s. Associate with node s a cost q(s) = h(s). If s is a terminal node, label s SOLVED.

2 until s is labeled SOLVED, do:

3 begin

4 Compute a partial solution graph, G', in G by tracing down the marked connectors in G from s. (Connectors of G will be marked in a subsequent step.)

5 Select any nonterminal leaf node, n, of G'. (We discuss later how this selection might be made.)

6 Expand node n, generating all of its successors, and install these in G as successors of n. For each successor, nj, not already occurring in G, associate the cost q(nj) = h(nj). Label SOLVED any of these successors that are terminal nodes. (See text for discussion of what to do in case node n has no successors.)

7 Create a singleton set of nodes, S, containing just node n.

8 until S is empty, do:

9 begin

10 Remove from S a node m such that m has no descendants in G occurring in S.

11 Revise the cost q(m) for m, as follows: for each connector directed from m to a set of nodes {n1i, ..., nki}, compute qi(m) = ci + q(n1i) + ... + q(nki). [The q(nji) have either just been computed in a previous pass through this inner loop or (if this is the first pass) they were computed in step 6.] Set q(m) to the minimum over all outgoing connectors of qi(m) and mark the connector through which this minimum is achieved, erasing the previous marking if different. If all of the successor nodes through this connector are labeled SOLVED, then label node m SOLVED.

12 If m has been labeled SOLVED or if the revised cost of m is different than its just previous cost, then add to S all those parents of m such that m is one of their successors through a marked connector.

13 end

14 end
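Procedure AO* can be condensed into a runnable sketch for a small, explicitly stored acyclic graph. The graph, node names, and (cost, successors) encoding below are hypothetical illustrations; the procedure proper works over an implicit graph revealed only by expansion.

```python
# Hypothetical acyclic AND/OR graph: CONNECTORS[n] lists the outgoing
# connectors of n as (connector_cost, successor_tuple) pairs.
CONNECTORS = {
    "s": [(1, ("a",)), (2, ("b", "c"))],
    "a": [(3, ("t1",))],
    "b": [(1, ("t1",))],
    "c": [(1, ("t2",))],
}
H = {"s": 0, "a": 2, "b": 1, "c": 1, "t1": 0, "t2": 0}  # assumed monotone lower bound
TERMINALS = {"t1", "t2"}

def descendant(x, y):
    """True if y is reachable from x through some chain of connectors."""
    stack, seen = [x], set()
    while stack:
        for _, succs in CONNECTORS.get(stack.pop(), []):
            for s in succs:
                if s == y:
                    return True
                if s not in seen:
                    seen.add(s)
                    stack.append(s)
    return False

def ao_star(start):
    q = {start: H[start]}                 # step 1: q(s) = h(s)
    solved = {start} & TERMINALS
    marked, expanded = {}, set()

    def leaves(n, out):                   # step 4: trace marked connectors
        if n in expanded:
            for s in CONNECTORS[n][marked[n]][1]:
                leaves(s, out)
        elif n not in TERMINALS:
            out.append(n)

    while start not in solved:            # step 2
        front = []
        leaves(start, front)
        n = front[0]                      # step 5: select a nonterminal leaf
        expanded.add(n)
        for _, succs in CONNECTORS[n]:    # step 6: expand; new nodes get q = h
            for s in succs:
                if s not in q:
                    q[s] = H[s]
                    if s in TERMINALS:
                        solved.add(s)
        S = [n]                           # step 7
        while S:                          # steps 8-13
            m = next(x for x in S         # step 10: no descendants left in S
                     if not any(descendant(x, y) for y in S if y != x))
            S.remove(m)
            new_q, best = min((c + sum(q[s] for s in succs), i)
                              for i, (c, succs) in enumerate(CONNECTORS[m]))
            changed = new_q != q[m] or marked.get(m) != best
            q[m], marked[m] = new_q, best # step 11: revise cost, mark connector
            if all(s in solved for s in CONNECTORS[m][best][1]):
                solved.add(m)
            if m in solved or changed:    # step 12: propagate to marked parents
                for p in expanded:
                    if m in CONNECTORS[p][marked[p]][1] and p not in S:
                        S.append(p)
    return q[start]
```

On this hypothetical graph the optimal solution-graph cost from s is min(1 + 3, 2 + 1 + 1) = 4, and `ao_star("s")` converges to that value at termination.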
Algorithm AO* can best be understood as a repetition of the following
two major operations. First, a top-down, graph-growing operation (steps
4-6) finds the best partial solution graph by tracing down through the
marked connectors. These (previously computed) marks indicate the
current best partial solution graph from each node in the search graph.
(Before the algorithm terminates, the best partial solution graph does not
yet have all of its leaf nodes terminal, which is why it is called partial.)
One of the nonterminal leaf nodes of this best partial solution graph is
expanded, and a cost is assigned to its successors.
The second major operation in AO* is a bottom-up, cost-revising,
connector-marking, SOLVED-labeling procedure (steps 7-12). Starting
with the node just expanded, the procedure revises its cost (using the
newly computed costs of its successors) and marks the outgoing connec
tor on the estimated best "path" to terminal nodes. This revised cost
estimate is propagated upward in the graph. (Acyclicity of our graphs guarantees no loops in this upward propagation.) The revised cost, q(n), is an updated estimate of the cost of an optimal solution graph from n to a set of terminal nodes. Only the ancestors of nodes having their costs revised can possibly have their costs revised, so only these need be considered. Because we are assuming the monotone restriction on h, cost revisions can only be cost increases. Therefore, not all ancestors need
have cost revisions, but only those ancestors having best partial solution
graphs containing descendants with revised costs (hence step 12).
When the AND/OR graph is an AND/OR tree, the bottom-up
operation can be simplified somewhat (because then each node has only
one parent).
To avoid making algorithm AO* appear more complex than it already
does, we ignored the possibility (in step 6) that the node selected for
expansion might not have any successors. This case is easily handled in
step 11 by associating a very high q value with any node, m, having
no successors (or, more generally, any node recognized as not belonging
to any solution graph). The bottom-up operation will then propagate this
high cost upward, which eliminates any chance that a graph containing
this node might be selected as an estimated best solution graph.
Suppose some node n has a finite number of descendants in the
implicit AND/OR graph and that these do not comprise a solution graph
from n to a set of terminal nodes. Then, eventually, the revised cost, q(n),
for node n will have a very high value. The assignment of a very high
value, q(s), to the start node can therefore be taken to signal that there is
no solution graph from the start node.
It is possible to prove that if there is a solution graph from a given node
to a set of terminal nodes, and if h(n) ≤ h*(n) for all nodes, and if h
satisfies the monotone restriction, then algorithm AO* will terminate in
an optimal solution graph. (This optimal solution graph can be obtained
by tracing down from s through the marked connectors at termination.
The cost of this optimal solution graph is equal to the q value of s at
termination.) Thus, we can say that algorithm AO* with these restrictions
is admissible. We omit the proof of this result here; the interested reader
is referred to Martelli and Montanari (1973).
A breadth-first algorithm can be obtained from AO* by using h = 0.
Because such an h function satisfies the monotone restriction (and is a
lower bound on h *), the breadth-first algorithm using it is admissible.
As an example of the use of AO*, let us consider again the graph of
Figure 3.1. Suppose that the following estimates are available:
h(n0) = 0, h(n1) = 2, h(n2) = 4, h(n3) = 4,
h(n4) = 1, h(n5) = 1, h(n6) = 2, h(n7) = 0,
h(n8) = 0.
Let nodes n7 and n8 be terminal nodes, and let the cost of each k-connector be k. Note that our h function provides a lower bound on h* and satisfies the monotone restriction.
The search graphs obtained after various cycles through the outer loop of AO* are shown in Figure 3.3. In each graph, the revised q values are shown next to each node; heavy arrows are used to mark connectors, and nodes labeled SOLVED are indicated by solid circles. During the first cycle, we expand node n0; next we expand node n1, then node n5, and then node n4. After node n4 is expanded, node n0 is labeled SOLVED. The solution graph (with minimal cost equal to 5) is obtained by tracing down through the marked connectors.
We have not yet discussed how AO* selects (in step 5) a nonterminal
leaf node of the estimated best partial solution graph to expand. Perhaps
it would be efficient to select that leaf node most likely to change the
estimate of the best partial solution graph. If the estimate of the best
partial solution graph never changes, AO* must eventually expand all of the nonterminal leaf nodes of this graph anyway. However, if the estimate is eventually going to change to some more nearly optimal graph, the sooner AO* makes this change, the better. Possibly the expansion of that leaf node having the highest h value would be most likely to result in a changed estimate.
As with algorithms A and A* for ordinary graphs, AO* may be
modified in a variety of ways to render it more practical in special
situations. First, rather than recompute a new estimated best partial
solution graph after every node expansion, one might instead expand one
Fig. 3.3 Search graphs after various cycles of AO*.
or more leaf nodes and some number of their descendants all at once, and
then recompute an estimated best partial solution graph. This strategy
reduces the overhead expense of frequent bottom-up operations but incurs the risk that some node expansions may not be on the best solution graph.
A staged-search strategy may also be used for AND/OR graphs. To
employ it, one periodically reclaims needed storage space by discarding
some of the AND/OR search graph. One might, for example, determine
a few of those partial solution graphs within the entire search graph having the
largest estimated costs. These can then be discarded periodically (with the risk, of course, of discarding one that might turn out to be
the top of an optimal solution graph.)
3.3. SOME RELATIONSHIPS BETWEEN
DECOMPOSABLE AND COMMUTATIVE
SYSTEMS
In chapter 1 we mentioned that several problems could be solved by
production systems working in either forward or backward directions.
(Whether one chooses to call a given direction forward or backward is
often arbitrary.) Here we illustrate that certain types of commutative
systems are dual to decomposable ones.
Suppose that we have a production system based on the following
rewrite rules:
R1: T → A, B
R2: T → B, C
R3: A → D
R4: B → E, F
R5: B → G
R6: C → G
These rules are to be applied to a global database consisting of a set of
symbols. A rule is applicable if the global database contains a symbol
matching its left-hand side. The effect on the global database of applying
the rule is to remove the occurrence of the left-hand side of the rule and add the right-hand side of the rule.
Production systems using such context-free rewrite rules with singleton left-hand sides are decomposable. An AND/OR search graph that
results from applying the rewrite rules to an initial global database
consisting of the single symbol, T, is shown in Figure 3.4.
There is an interesting manner in which the rewrite rules of our
example can be used in the reverse direction. We say that such a reverse
rule is applicable if the global database contains symbols matching all the
symbols of the right-hand side. The effect of the rule is to add (not replace by) the symbol occurring on the left-hand side. In Figure 3.5 we show an example in which some (reverse direction) rules are applied to an initial global database consisting of the set {D, E, F, G}. (We indicate a reverse
direction application of rule R by R'.) We note that the production system that results from using these rewrite rules in the reverse direction,
in the manner we have indicated, is commutative. Thus, as we discussed in
chapter 1, an irrevocable control regime can be used without the danger of foreclosing any possible rule applications.
If we continue to apply (irrevocably) the reverse rules R1', ..., R6' to a
database that is initially the set {D,E,F,G}, and to its descendants, we
eventually obtain the set {D,E,F,G,A,B,C,T}. We can keep track of
these rule applications and the resulting global databases by an interesting structure called a derivation graph. A derivation graph is a way of structuring the global database at any stage of the production system process so that it indicates something about the history of rule applications.
We show a derivation graph for our example in Figure 3.6. The global
database consists of the derivation graph. The way in which each boxed
expression in the graph is derived is indicated by an incoming set of arcs labeled by the reverse rule.
It is obvious, of course, that the two structures of Figure 3.4 and Figure
3.6 are identical except for arc directions. In many problems in which we are interested, if we reverse the direction of a commutative production
system, we obtain a decomposable production system. Often we think of
RELATIONSHIPS BETWEEN DECOMPOSABLE AND COMMUTATIVE SYSTEMS
Fig. 3.4 A search graph.
{D,E,F,G}
{D,E,F,G,A}   {D,E,F,G,C}   {D,E,F,G,B}
Fig. 3.5 Using rewrite rules in the reverse direction.
Fig. 3.6 A derivation graph.
SEARCH STRATEGIES FOR DECOMPOSABLE PRODUCTION SYSTEMS
the commutative system, using its rules, as the forward-directed system
and the decomposable system, using reverse direction rules, as the backward-directed system.
We can use an evaluation function in connection with derivation
graphs to control this type of commutative production system. Any rule
applied to a derivation graph can be regarded as producing a new
derivation graph. The rule application adds one new node to the
structure. Thus, rule R1' adds the node labeled T in Figure 3.6. We can
define the cost of the derivation through this rule as the cost of the
rule itself plus the costs of the least costly derivation (sub)graphs
associated with the nodes that are "inputs" to the rule. Such a cost
definition is exactly analogous to the recursive definition of the cost of an AND/OR solution graph.
The cost of
a derivation graph can be regarded as a way of computing a
g function for a commutative production system. There are several
alternative rules that can be applied to any derivation graph. Each has
associated with it a g value computed as we have just described. We can
also define a heuristic function, h, over derivation graphs. Such a function
estimates the additional cost of all subsequent rule applications to that derivation graph and to its descendants along an optimal path to
termination. When used to evaluate alternative
rules, we let the h value of
the rule application be the value obtained from this heuristic function for the derivation graph after the rule is applied. We can now add the g and h
values of a rule application to obtain an f value for evaluating rules. That
applicable rule with the smallest f value is selected for irrevocable application.
In this manner, a commutative production system with an irrevocable
control strategy can be guided by a process very much like that used by
algorithm A in graph searching. Given the assumption that h is a lower
bound on h*, we could show that such a strategy yields minimal cost
derivations and that a more informed h uses fewer rule applications.
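A minimal sketch of such an irrevocably controlled, f = g + h guided loop. The rule names, unit costs, and heuristic below are illustrative assumptions, and g here simply accumulates the costs of the rules applied so far, a simplification of the recursive derivation-graph cost defined above.

```python
def guided_derivation(db, rules, cost, h, goal):
    """Irrevocable control for a commutative system, guided by f = g + h.

    `rules` maps a rule name to (rhs_set, lhs_symbol); `cost` maps rule
    names to rule-application costs.  g accumulates the costs of rules
    applied so far; h estimates the cost of the applications still needed.
    """
    db, g, trace = set(db), 0, []
    while goal not in db:
        applicable = [(g + cost[r] + h(db | {lhs}), r, lhs)
                      for r, (rhs, lhs) in rules.items()
                      if rhs <= db and lhs not in db]
        if not applicable:
            return None          # dead end: goal underivable
        _, r, lhs = min(applicable)
        db.add(lhs)
        g += cost[r]
        trace.append(r)
    return trace

# Illustrative rules, unit costs, and a trivial heuristic:
RULES = {"R1'": ({"A", "B"}, "T"), "R4'": ({"D", "E"}, "A"),
         "R5'": ({"E", "F"}, "B"), "R6'": ({"F", "G"}, "C")}
COST = {r: 1 for r in RULES}
h = lambda db: 0 if "T" in db else 1
print(guided_derivation({"D", "E", "F", "G"}, RULES, COST, h, "T"))
```

Note that rule R6' is never applied: the evaluation function steers the irrevocable regime around derivations that do not help reach T.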
3.4. SEARCHING GAME TREES
Search techniques similar to those already discussed can be used to
find playing strategies for certain kinds of games. The games that we
consider are those called two-person, perfect-information games. These
are played by two players who move in turn. They each know completely
what both players have done and can do. Specifically, we are interested in
those games where either one of the two players wins (and the other loses)
or where the result is a draw. Example games from this class are checkers,
tic-tac-toe, chess, go, and nim. We are not going to consider here any
games whose results are determined even partially by chance; thus, dice
games and most card games are ruled out. (Our treatment could be
generalized to include certain chance games, however.)
We can use systems that are very much like production systems to
analyze games. For example, in chess, the global database would contain a representation of the positions of all of the pieces on the board. The
production rules model the legal moves of the game. The application of
these rules to the initial database and to its successors, and so on, generates what is called a
game graph or tree.
We can illustrate these ideas using a simple game called "Grundy's
game." The rules of the game are as follows: Two players have in front of
them a single pile of objects, say a stack of pennies. The first player
divides the original stack into two stacks that must be unequal. Each
player alternately thereafter does the same to some single stack when it is
his turn to play. The game proceeds until every stack has either just one penny or two—at which point continuation becomes impossible. The
player who first cannot play is the loser. Suppose we call our two players
MAX and MIN and let MIN play first.
Let us start with seven pennies in the stack. A database for this game is
an unordered sequence of numbers representing the number of pennies
in the various stacks plus an indication of who is to move next. Thus
(7, MIN) is the starting configuration. From (7, MIN), MIN has three
alternative moves creating the configurations (6,1, MAX), (5,2, MAX), or
(4,3, MAX). The complete game graph for this game (produced by
applying all applicable rules to all databases) is shown in Figure 3.7. All
of the leaf nodes represent losing situations for the player next to move.
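The move generator and the win/loss analysis for Grundy's game can be sketched directly from the rules above; the function names are ours, not the book's.

```python
from functools import lru_cache

def grundy_moves(stacks):
    """Successor configurations in Grundy's game: split one stack into
    two unequal, nonempty stacks.  A configuration is a sorted tuple of
    stack sizes; whose move it is is tracked by the caller."""
    succs = set()
    for i, n in enumerate(stacks):
        rest = stacks[:i] + stacks[i + 1:]
        for a in range(1, (n - 1) // 2 + 1):   # a < n - a keeps the split unequal
            succs.add(tuple(sorted(rest + (a, n - a))))
    return succs

@lru_cache(maxsize=None)
def player_to_move_wins(stacks):
    """True iff the player about to move can force a win; the player who
    first cannot move is the loser."""
    return any(not player_to_move_wins(s) for s in grundy_moves(stacks))

print(sorted(grundy_moves((7,))))   # the three opening moves from (7, MIN)
print(player_to_move_wins((7,)))    # MIN, moving first, cannot force a win
```

The recursion is exactly the AND/OR condition of the text: the player to move wins if some successor is a loss for the opponent.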
We can use the game graph to show that, no matter what MIN does,
MAX can always win. A winning strategy for MAX is shown in Figure 3.7
by heavy lines. For every node representing a game situation in which it
is MIN's move next, we must show that MAX can win from every position to which MIN might move. For every node representing a situation for which it is MAX's move next, we need only show that MAX
can win from just one of the positions to which he might move.
Note the similarity between the winning strategy for MAX shown in
Figure 3.7 and a solution graph of an AND/OR graph. Nodes corresponding to MIN's next move have successors that are like AND nodes.
From MAX's point of view, a solution (that is, a win) must be obtainable
from all of these successors. Nodes corresponding to MAX's next move
have successors that are like OR nodes. Again, from MAX's point of view,
a win must be obtainable from at least one of these successors. Terminal
nodes are nodes corresponding to winning situations for MAX.
In our discussion of games, we adopt the convention that we are trying
to find a winning strategy for MAX. Also, we assume that MAX moves
first and that thereafter the moves alternate between the two players.
With these conventions we can suppress any explicit mention of whose
move is next in further illustrations of game graphs and trees. Nodes at
even-numbered depths correspond to positions in which it is MAX's move next; these will be called MAX nodes. Nodes at odd-numbered depths correspond to positions in which it is MIN's move next; these are the MIN
nodes. A terminal node is any node corresponding to a winning
position for MAX. (The top node of a game graph is of depth zero, an
even number.)
(5,1,1, MIN)   (4,2,1, MIN)   (3,2,2, MIN)   (3,3,1, MIN)
Fig. 3.7 A game graph for Grundy's game.
3.4.1. THE MINIMAX PROCEDURE
Many simple games (as well as some "ending" sequences of more
complex games) can be handled by search techniques that are analogous
to those used for finding AND/OR solution graphs. The solution graph,
then, represents a complete playing strategy. Grundy's game, tic-tac-toe
(naughts and crosses), various versions of nim, and some chess and
checker end-games are examples of simple games in which AND/OR
search to termination is feasible. A gross estimate of the size of the
tic-tac-toe game tree, for example, can be obtained by noting that the start node has nine successors, these in turn have eight, etc., yielding 9!
(or 362,880) nodes at the bottom of the tree. Many of the paths end in terminal nodes at shallower levels, however, and further reductions in the
size of the tree result if symmetries are acknowledged.
For more complex games, such as complete chess and checker games,
AND/OR search to termination is wholly out of the question. It has been
estimated that the complete game tree for checkers has approximately
10^40 nodes and the chess tree has approximately 10^120 nodes. (It would
take about 10^21 centuries to generate the complete checker tree, even
assuming that a successor could be generated in 1/3 of a nanosecond.)
Furthermore, heuristic search techniques do not reduce the effective
branching factor sufficiently to be of much help. Therefore, for complex
games, we must accept the fact that search to termination is impossible;
that is, we must abandon the idea of using this method to prove that a win
or draw can be obtained (except perhaps during the end-game).
Our goal in searching a game tree might be, instead, merely to find a
good first move. We could then make the indicated move, await the
opponent's reply, and search again to find a good first move from this new
position. We can use either breadth-first, depth-first, or heuristic methods, except that the termination conditions must now be modified.
Several artificial termination conditions can be specified based on such
factors as a time limit, a storage-space limit, and the depth of the deepest node in the search tree. It is also usual in chess, for example, not to terminate if any of the tip nodes represent "live" positions, that is, positions in which there is an immediate advantageous swap.
After search terminates, we must extract from the search graph an
estimate of the "best" first move. This estimate can be made by applying
a static evaluation function to the leaf nodes of the search graph. The
evaluation function measures the "worth" of a leaf node position. The
measurement is based on various features thought to influence this
worth; for example, in checkers some useful features measure the relative
piece advantage, control of the center, control of the center by kings, and
so forth. It is customary in analyzing game trees to adopt the convention
that game positions favorable to MAX cause the evaluation function to
have a positive value, while positions favorable to MIN cause the evaluation function to have
a negative value; values near zero correspond
to game positions not particularly favorable to either MAX or MIN.
A good first move can be extracted by a procedure called the minimax
procedure. (For simplicity we explain this procedure and others depending on it as if the game graph were really just a game tree.) We assume
that were MAX to choose among tip nodes, he would choose that node having the largest evaluation. Therefore, the (MAX node) parent of MIN
tip nodes is assigned a backed-up value equal to the maximum of the
evaluations of the tip nodes. On the other hand, if MIN were to choose among tip nodes, he would presumably choose that node having the
smallest evaluation (that is, the most negative). Therefore, the (MIN node) parent of MAX tip nodes
is assigned a backed-up value equal to the
minimum of the evaluations of the tip nodes. After the parents of all tip
nodes have been assigned backed-up values, we back up values another
level, assuming that MAX would choose that node with the largest backed-up value while MIN would choose that node with the smallest
backed-up value.
We continue to back up values, level by level, until, finally, the
successors of the start node are assigned backed-up values. We are
assuming it is MAX'S turn to move at the start, so MAX should choose as
his first move the one corresponding to the successor having the largest
backed-up value.
The utility of this whole procedure rests on the assumption that the
backed-up values of the start node's successors are more reliable
measures of the ultimate relative worth of these positions than are the
values that would be obtained by directly applying the static evaluation
function to these positions. The backed-up values are, after all, based on
"looking ahead" in the game tree and therefore depend on features
occurring nearer the end of the game.
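The backing-up procedure just described can be sketched as a short recursion (the names and the toy tree representation are illustrative, not from the text):

```python
def minimax_value(node, depth, is_max, successors, evaluate):
    """Back up static values through a game tree: MAX levels take the
    maximum of their successors' values, MIN levels the minimum; the
    static evaluation function is applied at the search frontier."""
    kids = [] if depth == 0 else successors(node)
    if not kids:
        return evaluate(node)
    values = [minimax_value(k, depth - 1, not is_max, successors, evaluate)
              for k in kids]
    return max(values) if is_max else min(values)

def best_move(node, depth, successors, evaluate):
    """MAX's first move: the successor with the largest backed-up value."""
    return max(successors(node),
               key=lambda s: minimax_value(s, depth - 1, False,
                                           successors, evaluate))

# A toy two-level tree: MAX to move at "s", MIN to move at "a" and "b".
TREE = {"s": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
VALS = {"a1": 3, "a2": 5, "b1": 6, "b2": 1}
succ, ev = lambda n: TREE.get(n, []), VALS.get
print(best_move("s", 2, succ, ev))   # "a": the backed-up values are 3 and 1
```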
A simple example using the game of tic-tac-toe illustrates the minimaxing method. Let us suppose that MAX marks crosses (X) and MIN
marks circles (O) and that it is MAX'S turn to play first. We conduct a
breadth-first search, until all of the nodes at level 2 are generated, and
then we apply a static evaluation function to the positions at these nodes. Let our evaluation function e(p) of a position p be given simply by:
If p is not a winning position for either player,
e(p) = (number of complete rows, columns, or diagonals
that are still open for MAX) — (number of
complete rows, columns, or diagonals that are
still open for MIN).
If p is a win for MAX,
e(p) = ∞ (∞ denotes a very large positive number).
If p is a win for MIN,
e(p) = −∞.
Thus, if p is the position

  .  .  .
  O  X  .
  .  .  .

we have e(p) = 6 − 4 = 2.
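As a check, this evaluation function is easy to code; the board representation below (a dict from (row, col) to a mark) is an illustrative choice:

```python
# The eight complete lines of the board: rows, columns, diagonals.
LINES = ([[(r, c) for c in range(3)] for r in range(3)]
         + [[(r, c) for r in range(3)] for c in range(3)]
         + [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]])

def e(board):
    """(lines still open for MAX) - (lines still open for MIN), for a
    position that is not a win for either player.  `board` maps
    (row, col) to 'X', 'O', or None."""
    open_max = sum(all(board[s] != 'O' for s in line) for line in LINES)
    open_min = sum(all(board[s] != 'X' for s in line) for line in LINES)
    return open_max - open_min

board = {(r, c): None for r in range(3) for c in range(3)}
board[(1, 1)] = 'X'   # X in the center
board[(1, 0)] = 'O'   # O on the adjacent edge square
print(e(board))       # 6 - 4 = 2
```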
We make use of symmetries in generating successor positions; thus game
states that differ only by a rotation or reflection of the board
are all considered identical. (Early in the game, the branching factor of
the tic-tac-toe tree is kept small by symmetries; late in the game, it is kept
small by the number of open spaces available.)
In Figure 3.8 we show the tree generated by a search to depth 2. Static
evaluations are shown below the tip nodes, and backed-up values are
circled.
Fig. 3.8 Minimax applied to tic-tac-toe (stage 1).
Fig. 3.9 Minimax applied to tic-tac-toe (stage 2).
SEARCHING GAME TREES
Since the position with an X in the center has the largest backed-up
value, it is chosen as the first move. (Coincidentally, this is MAX's best first move.)
Now let us suppose that MAX makes this move and MIN replies by
putting a circle in the square directly above the X (a bad move for MIN,
who must not be using a good search strategy). Next MAX searches to
depth 2 below the resulting configuration, yielding the search tree shown
in Figure 3.9. There are now two possible "best" moves; suppose MAX
makes the one indicated. Now MIN makes the move that avoids his
immediate defeat, yielding the resulting board position.
MAX searches again, yielding the tree shown in Figure 3.10. Some of
the tip nodes in this tree (for example, the one marked A ) represent wins
for MIN and thus have evaluations equal to −∞. When these evaluations are backed up, we see that MAX's best move is also the only one that
avoids his immediate defeat. Now MIN can see that MAX must win on
his next move, so MIN gracefully resigns.
3.4.2. THE ALPHA-BETA PROCEDURE
The search procedure that we have just described separates completely
the processes of search-tree generation and position evaluation. Only
after tree generation is completed does position evaluation begin. It
happens that this separation results in a grossly inefficient strategy.
Remarkable reductions (amounting sometimes to many orders of magnitude) in the amount of search needed (to discover an equally good
move) are possible if one performs tip-node evaluations and calculates
backed-up values simultaneously with tree generation.
Consider the search tree of Figure 3.10 (the last stage of our tic-tac-toe
search). Suppose that a tip node is evaluated as soon as it is generated.
Then after the node marked A is generated and evaluated, there is no
point in generating (and evaluating) nodes B, C, and D ; that is, since
MIN has A available and MIN could prefer nothing to A, we know
immediately that MIN will choose A. We can then assign A's parent the
backed-up value of −∞ and proceed with the search, having saved the
search effort of generating and evaluating nodes B, C, and D. (Note that
the savings in search effort would have been even greater if we were
searching to greater depths; for then none of the descendants of nodes B,
C, and D would have to be generated either.) It is important to observe
that failing to generate nodes B, C, and D can in no way affect what will
turn out to be MAX'S best first move.
In this example, the search savings depended on the fact that node A
represented a win for MIN. The same kind of savings can be achieved,
however, even when none of the positions in the search tree represents a
win for either MAX or MIN.
Consider the first stage of the tic-tac-toe tree shown in Figure 3.8. We
repeat part of this tree in Figure 3.11. Suppose that search had progressed
in a depth-first manner and that whenever a tip node is generated, its
static evaluation is computed. Also suppose that whenever a position can
be given a backed-up value, this value is computed. Now consider the
situation occurring at that stage of the depth-first search immediately
after node A and all of its successors have been generated, but before
node B is generated. Node A is now given the backed-up value of −1. At
this point we know that the backed-up value of the start node is bounded
from below by −1. Depending on the backed-up values of the other
successors of the start node, the final backed-up value of the start node
may be greater than −1, but it cannot be less. We call this lower bound
an alpha value for the start node.
Now let depth-first search proceed until node B and its first successor
node, C, are generated. Node C is then given the static value of −1. Now
we know that the backed-up value of node B is bounded from above by
−1. Depending on the static values of the rest of node B's successors, the
final backed-up value of node B can be less than −1 but it cannot be
greater. We call this upper bound on node B a beta value. We note at this
point, therefore, that the final backed-up value of node B can never
exceed the alpha value of the start node, and therefore we can
discontinue search below node B. We are guaranteed that node B will not
turn out to be preferable to node A.
This reduction in search effort was achieved by keeping track of
bounds on backed-up values. In general, as successors of a node are given
backed-up values, the bounds on backed-up values can be revised. But
we note that:
(a) The alpha values of MAX nodes (including the
start node) can never decrease, and
(b) the beta values of MIN nodes can never increase.
Because of these constraints we can state the following rules for
discontinuing the search:
(1) Search can be discontinued below any MIN node
having a beta value less than or equal to the
alpha value of any of its MAX node ancestors.
The final backed-up value of this MIN node can
then be set to its beta value. This value may
not be the same as that obtained by full minimax
search, but its use results in selecting the same
best move.
(2) Search can be discontinued below any MAX node
having an alpha value greater than or equal to
the beta value of any of its MIN node ancestors.
The final backed-up value of this MAX node can
then be set to its alpha value.
(In the figure, node B has beta value = −1.)
Fig. 3.11 Part of the first stage tic-tac-toe tree.
Fig. 3.12 legend: the start node has backed-up value +1; squares denote MAX nodes and circles denote MIN nodes. Tip-node static values, left to right:
+5 −3 +3 +3 −3 0 +2 −2 +3 +5 +2 +5 −5 0 +1 +5 +1 −3 0 −5 +5 −3 +3 +2 +3 −3 0 −2 0 +1 +4 +5 +1 −1 +3 −3 +2
Fig. 3.12 An example illustrating the alpha-beta search procedure.
During search, alpha and beta values are computed as follows:
(a) The alpha value of a MAX node is set equal to the
current largest final backed-up value of its
successors.
(b) The beta value of a MIN node is set equal to the
current smallest final backed-up value
of its successors.
When search is discontinued under rule (1) above, we say that an alpha
cutoff has occurred; when search is discontinued under rule (2), we say
that a beta cutoff has occurred. The whole process of keeping track of
alpha and beta values and making cutoffs when possible is usually called the alpha-beta procedure. The procedure terminates when all of the
successors of
the start node have been given final backed-up values, and
the best first move is then the one creating that successor having the
highest backed-up value. Employing this procedure always results in
finding a move that is equally as good as the move that would have been
found by the simple minimax method searching to the same depth. The
only difference is that the alpha-beta procedure finds a best first move
usually after much less search.
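A compact sketch of the procedure in Python, using the now-standard formulation in which alpha and beta are passed down the recursion; the variable names and toy tree are ours:

```python
def alphabeta(node, depth, alpha, beta, is_max, successors, evaluate):
    """Depth-first minimax with alpha-beta cutoffs.  alpha is the best
    value guaranteed to MAX so far along this path, beta the best value
    guaranteed to MIN; search below a node is discontinued as soon as
    alpha >= beta, since the node can no longer influence the choice
    made above it."""
    kids = [] if depth == 0 else successors(node)
    if not kids:
        return evaluate(node)
    if is_max:
        v = float('-inf')
        for k in kids:
            v = max(v, alphabeta(k, depth - 1, alpha, beta, False,
                                 successors, evaluate))
            alpha = max(alpha, v)
            if alpha >= beta:        # beta cutoff
                break
        return v
    v = float('inf')
    for k in kids:
        v = min(v, alphabeta(k, depth - 1, alpha, beta, True,
                             successors, evaluate))
        beta = min(beta, v)
        if alpha >= beta:            # alpha cutoff
            break
    return v

TREE = {"s": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
VALS = {"a1": 3, "a2": 5, "b1": 1, "b2": 6}
succ, ev = lambda n: TREE.get(n, []), VALS.get
print(alphabeta("s", 2, float('-inf'), float('inf'), True, succ, ev))
```

On this toy tree the value 3 is returned, and tip node b2 is never evaluated: once b1 gives node b a beta value of 1, below the start node's alpha value of 3, search below b is discontinued.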
An application of the alpha-beta procedure is illustrated in Figure
3.12. We show a search tree generated to a depth of 6. (Our convention is
to generate the left-most nodes first. MAX nodes are depicted by a square, and MIN nodes are depicted by a circle.) The tip nodes have the static values indicated. Now suppose we conduct a depth-first search employing the alpha-beta procedure. The subtree generated by the
alpha-beta procedure is indicated by darkened branches. Those nodes
cut off have X's drawn through them. Note that only 18 of the original 41
tip nodes had to be evaluated. (The reader can test his understanding of
the procedure by attempting to duplicate the alpha-beta search on this
example.)
3.4.3. THE SEARCH EFFICIENCY OF THE ALPHA-BETA PROCEDURE
In order to perform alpha-beta cutoffs, at least some part of the search
tree must be generated to maximum depth, because alpha and beta
values must be based on the static values of tip nodes. Therefore some
type of a depth-first search is usually employed in using the alpha-beta
procedure. Furthermore, the number of cutoffs that can be made during
a search depends on the degree to which the early alpha and beta values
approximate the final backed-up values.
The final backed-up value of the start node is identical to the static
value of one of the tip nodes. If this tip node could be reached first in a
depth-first search, the number of cutoffs would be maximal. When the
number of cutoffs is maximal, a minimal number of tip nodes need to be generated and evaluated.
Suppose a tree has depth D, and every node (except a tip node) has
exactly B successors. Such a tree will have precisely B^D tip nodes.
Suppose an alpha-beta procedure generated successors in the order of
their true backed-up values—the lowest valued successors first for MIN
nodes and the highest valued successors first for MAX nodes. (Of course,
these backed-up values are not typically known at the time of successor
generation, so this order could never really be achieved, except perhaps
accidentally.)
It happens that this order maximizes the number of cutoffs that will
occur and minimizes the number of tip nodes generated. Let us denote
this minimal number of tip nodes by N_D. It can be shown that

N_D = 2B^(D/2) − 1   (for even D)

and

N_D = B^((D+1)/2) + B^((D−1)/2) − 1   (for odd D).
That is, the number of tip nodes of depth D that would be generated by
optimal alpha-beta search is about the same as the number of tip nodes
that would have been generated at depth D/2 without alpha-beta.
Therefore, for the same storage requirements, the alpha-beta procedure
with perfect successor ordering allows search depth to double. Even
though perfect ordering cannot be achieved in search problems (if it
could, we wouldn't need the search process at all!), the large potential
payoff suggests the importance of using the best ordering function available.
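The two formulas are easy to check numerically; this is a small sketch with an illustrative branching factor and depth:

```python
def n_min(b, d):
    """Minimal number of tip nodes examined by alpha-beta with perfect
    successor ordering, from the formulas above."""
    if d % 2 == 0:
        return 2 * b ** (d // 2) - 1
    return b ** ((d + 1) // 2) + b ** ((d - 1) // 2) - 1

# A depth-6 tree with branching factor 3 has 3**6 = 729 tip nodes,
# but perfect ordering examines only 2 * 3**3 - 1 of them:
print(n_min(3, 6))   # 53
```

This is roughly the B^(D/2) tip count of a half-depth tree, which is the basis of the "search depth doubles" observation above.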
3.5. BIBLIOGRAPHICAL AND HISTORICAL
REMARKS
3.5.1. AND/OR GRAPHS
Decomposition and AND/OR graphs have been used in a variety of
applications. Hinxman (1976) discusses applications to the "stock-cutting
problem"; Martelli and Montanari (1975,1978) show how dynamic
programming problems can be formulated as problems of AND/OR
search and how such a formulation is used to optimize decision trees;
Slagle (1963) uses AND/OR trees in symbolic integration; Stockman
(1977) describes applications to the analysis of waveforms, and, as we
shall see in chapter 6, AND/OR graphs can be used in theorem-proving systems.
Our algorithm AO* is essentially the same as the algorithm for
searching AND/OR graphs of Martelli and Montanari
(1973, 1978). We
have taken some of our illustrative examples from Martelli and Montanari (1979). These AND/OR graph-searching algorithms are based on
earlier work of Nilsson (1969,1971). [See also Amarel (1967).] Hall (1973)
has shown the equivalence between AND/OR graphs and context-free
grammars. Levi and Sirovich (1976) generalize AND/OR graphs to
represent interdependent subproblems and show that the generalized
graphs are equivalent to type-0 grammars. Chang and Slagle (1971) also
discuss AND/OR graphs, although their treatment seems to lose some of
the advantages inherent in decomposition. Berliner (1979) presents a
related search algorithm involving upper and lower bound values at each node.
Kowalski (1972) and vanderBrug and Minker (1975) discuss the
relationships between what we term backward decomposable systems
(using AND/OR graphs) and forward commutative ones (using deriva
tion graphs). Michie and Sibert (1974) also describe heuristic search
algorithms based on derivation graphs.
3.5.2. GAME TREES
Shannon (1950) proposed a minimax search procedure to be used with
a static evaluation function in a proposal for a program to play chess.
Newell, Shaw, and Simon (1958) used these ideas in constructing an early
chess-playing program. Samuel (1959, 1967) developed a checker
(draughts) program that used polynomial evaluation functions, alpha-
beta search methods, and learning strategies for improving play. Slagle
(1970) has discussed the similarities between AND/OR trees and game
trees.
The alpha-beta procedure was discovered independently by many of
the early AI researchers. A version of it is first described by Newell, Shaw,
and Simon (1958). Knuth and Moore (1975) present a thorough analysis of its properties and discuss its history. Newborn (1977) and Baudet
(1978) present additional results. The results on search efficiency of alpha-beta were first stated by Edwards and Hart (1963) based on a
theorem that they attribute to Michael Levin. Later, Slagle and Dixon
(1969) give what they consider to be the first published proof of this theorem. Knuth and Moore (1975) contains the most complete account
of these properties. Lindstrom (1979) reformulates the alpha-beta
procedure for coroutine (rather than recursive) control. Harris (1974)
proposes an alternative to minimax search for game trees.
Chess-playing programs are steadily improving in ability, and many
AI experts continue to believe that a computer world chess champion is
not far off. Good accounts of computer chess are given in an article by
Berliner (1978) and in books by Newborn (1975) and by Levy (1976). A
recent program by Wilkins (1979) incorporates knowledge about chess
tactics, which greatly diminishes the amount of search needed. [See also
Pitrat (1977).]
EXERCISES
3.1 The following rewrite rules can be used to replace the numeral on
the left-hand side with the string of numerals on the right.
6 → 3,3        4 → 3,1
6 → 4,2        3 → 2,1
4 → 2,2        2 → 1,1
Consider the problem of using these rules to transform the numeral 6
into a string of Is. Illustrate how algorithm AO* works by using it to solve
this problem. Assume that the cost of a k-connector is k units, and that
the value of the h function at nodes labeled by the numeral 1 is zero and
at nodes labeled by n (n ≠ 1) is n.
3.2 The game nim is played as follows: Two players alternate in
removing one, two, or three pennies from a stack initially containing five
pennies. The player who picks up the last penny loses. Show, by drawing
the game graph, that the player who has the second move can always win.
Can you think of a simple characterization of the winning strategy?
3.3 Conduct an alpha-beta search of the game tree shown in Figure 3.12
by generating nodes in the order right-most node first. Indicate where
cutoffs occur and compare with Figure 3.12, in which nodes were generated left-most node first.
3.4 Chapters 2 and 3 concentrated on search techniques for tentative
control regimes (backtracking and graph-search). Discuss the search
problem for an irrevocable control regime guiding a commutative
production system. (You might base your discussion on Section 3.3, for
example.) Specify (in detail) a search algorithm that uses an evaluation function with a heuristic component.
3.5 Represent the configuration of a tic-tac-toe board by a nine-dimensional vector, c, having components equal to +1, 0, or −1 according to
whether the corresponding cells are marked with an X, are empty, or are
marked with an O, respectively. Specify a nine-dimensional vector w, such
that the dot product w·c is a useful evaluation function for use by MAX
(playing Xs) to evaluate nonterminal positions. Use this evaluation
function to perform a few minimax searches making any adjustments to
w that seem appropriate to improve the evaluation function. Can you
find a vector w that appraises positions so accurately that search below
these positions is not needed?
CHAPTER 4
THE PREDICATE CALCULUS IN AI
In many applications, the information to be encoded into the global
database of a production system originates from descriptive statements
that are difficult or unnatural to represent by simple structures like arrays
or sets of numbers. Intelligent information retrieval, robot problem
solving, and mathematical theorem proving, for example, require the
capability for representing, retrieving and manipulating sets of state
ments.
The first order predicate calculus is a formal language in which a wide
variety of statements can be expressed. Throughout the rest of the book,
we use expressions in the predicate calculus language as components of
the global databases of production systems. Before describing exactly
how this language is used in AI systems, however, we must define the
language, show how it is used to represent statements, explain how
inferences can be made from sets of expressions in the language, and
discuss how to deduce statements in the language from other statements
in the language. These are fundamental concepts of formal logic and are
also of great importance in AI. In this chapter we introduce the language
and methods of logic and then show how they can be exploited in AI
production systems.
4.1. INFORMAL INTRODUCTION TO THE
PREDICATE CALCULUS
A language, such as the predicate calculus, is defined by its syntax. To
specify a syntax we must specify the alphabet of symbols to be used in the
language and how these symbols are to be put together to form legitimate
expressions in the language. The legitimate expressions of the predicate
calculus are called the well-formed formulas (wffs). In the discussion that
follows we give a brief, informal description of the syntax of the predicate
calculus.
4.1.1. THE SYNTAX AND SEMANTICS OF ATOMIC
FORMULAS
The elementary components of the predicate calculus language are
predicate symbols, variable symbols, function symbols, and constant
symbols set off by parentheses, brackets, and commas, in a manner to be
illustrated by examples. A predicate symbol is used to represent a
relation in a domain of discourse. Suppose, for example, that we wanted to represent the fact that someone wrote something. We might use the predicate symbol WRITE to denote a relationship between a person
doing the writing and a thing written. We can compose a simple atomic
formula using WRITE and two terms, denoting the writer and what is
written. For example, to represent the sentence "Voltaire wrote Candide," we might use the simple atomic formula:

WRITE(VOLTAIRE, CANDIDE).

In this atomic formula, VOLTAIRE and CANDIDE are constant
symbols. In general, atomic formulas are composed of predicate symbols
and terms. A constant symbol is the simplest kind of term and is used to
represent objects or entities in a domain of discourse. These objects or entities may be physical objects, people, concepts, or anything that we
want to name.
Variable symbols, like x or y, are terms also, and they permit us to be
indefinite about which entity is being referred to. Formulas using
variable symbols, like WRITE(x,y), are discussed later in the context of
quantification.
quantification.
We can also compose terms of function symbols. Function symbols
denote functions in the domain of discourse. For example, the function
symbol father can be used to denote the mapping between an individual
and his male parent. To express the sentence "John's mother is married to
John's father," we might use the following atomic formula:
MARRIED[father(JOHN),mother(JOHN)].
Usually a mnemonic string of capital letters is used as a predicate
symbol. (Examples: WRITE, MARRIED.) In some abstract examples, short strings of upper-case letters and numerals (P1, Q2) are used as predicate symbols. Mnemonic strings of capital letters or short strings of upper-case letters and numerals are also used as constant symbols; for example, CANDIDE, A1, or B2. Context prevents confusion as to whether a string is a predicate symbol or a constant symbol.
Mnemonic strings of lower-case letters are used as function symbols.
(Examples: father, mother.) Lower-case letters near the middle of the alphabet, like f, g, h, etc., are used in abstract examples.
To represent an English sentence by an atomic formula, we focus on
the relations and entities that the sentence describes and represent them
by predicates and terms. Often, the predicate is identified with the verb of
the sentence, and the terms are identified with the subject or object of the
verb. Usually we have several choices about how to represent a sentence.
For example, we can represent the sentence "The house is yellow" either by a one-term predicate, as in YELLOW(HOUSE-1), by a two-term predicate, as in COLOR(HOUSE-1, YELLOW), or by a three-term predicate, as in VALUE(COLOR, HOUSE-1, YELLOW), etc. The designer of
a representation selects the alphabet of predicates and terms
that he will use and defines what the elements of this alphabet will mean.
In the predicate calculus, a wff can be given an interpretation by
assigning a correspondence between the elements of the language and the relations, entities, and functions in the domain of discourse. To each
predicate symbol, we must assign a corresponding relation in the
domain; to each constant symbol, an entity in the domain; and to each
function symbol, a function in the domain. These assignments define the
semantics of the predicate calculus language. In our applications, we are
using the predicate calculus specifically to represent certain statements
about a domain of discourse; thus we usually have a specific interpretation in mind for the wffs that we use. Once an interpretation for an atomic
formula has been defined, we say that the formula has value T (true) just
when the corresponding statement about the domain is true and that it
has value F (false) just when the corresponding statement is false. Thus,
using the obvious interpretation, the formula
WRITE(VOLTAIRE, CANDIDE)
has value T, and
WRITE(VOLTAIRE, COMPUTER-CHESS)
has value F. When an atomic formula contains variables, there may be
some assignments to the variables (of entities in the domain) for which an
atomic formula has value T and other assignments for which it has value
F.
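The assignment of truth values to ground atomic formulas can be sketched in a few lines of Python. This is purely an illustration, not anything from the text: an interpretation is modeled simply as the set of ground atoms that are true in the domain.

```python
# An atomic formula is modeled as a tuple (predicate, term1, term2, ...),
# and an interpretation as the set of ground atoms that are true.
# The names WRITE, VOLTAIRE, CANDIDE follow the chapter's examples.

interpretation = {
    ("WRITE", "VOLTAIRE", "CANDIDE"),   # "Voltaire wrote Candide" is true
}

def value(atom, interpretation):
    """Return 'T' if the ground atomic formula is true under the
    interpretation, and 'F' otherwise."""
    return "T" if atom in interpretation else "F"

print(value(("WRITE", "VOLTAIRE", "CANDIDE"), interpretation))         # T
print(value(("WRITE", "VOLTAIRE", "COMPUTER-CHESS"), interpretation))  # F
```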
4.1.2. CONNECTIVES

Atomic formulas, like WRITE(x, y), are merely the elementary building blocks of the predicate calculus language. We can combine atomic formulas to form more complex wffs by using connectives such as "∧" (and), "∨" (or), and "⇒" (implies).
The connective "∧" has obvious use in representing compound sentences like "John likes Mary, and John likes Sue." Also, some simpler sentences can be written in a compound form. For example, "John lives in a yellow house" might be represented by the formula
LIVES(JOHN, HOUSE-1) ∧ COLOR(HOUSE-1, YELLOW),
where the predicate LIVES represents a relation between a person and an object and where the predicate COLOR represents a relation between an object and a color. Formulas built by connecting other formulas by ∧s are called conjunctions, and each of the component formulas is called a conjunct. Any conjunction composed of wffs is also a wff.
The symbol "∨" is used to represent inclusive "or." For example, the sentence "John plays centerfield or shortstop" might be represented by
[PLAYS(JOHN, CENTERFIELD) ∨ PLAYS(JOHN, SHORTSTOP)].
Formulas built by connecting other formulas by ∨s are called disjunctions, and each of the component formulas is called a disjunct. Any disjunction composed of wffs is also a wff.
The truth values of conjunctions and disjunctions are determined from the truth values of the components. A conjunction has value T if each of its conjuncts has value T; otherwise it has value F. A disjunction has value T if at least one of its disjuncts has value T; otherwise it has value F.
The other connective, "⇒," is used for representing "if-then" statements. For example, the sentence "If the car belongs to John, then it is green" might be represented by
OWNS(JOHN, CAR-1) ⇒ COLOR(CAR-1, GREEN).
A formula built by connecting two formulas with a ⇒ is called an implication. The left-hand side of an implication is called the antecedent, and the right-hand side is called the consequent. If both the antecedent and the consequent are wffs, then the implication is a wff also. An implication has value T if either the consequent has value T (regardless of the value of the antecedent) or the antecedent has value F (regardless of the value of the consequent); otherwise the implication has value F. This definition of implicational truth value is sometimes at odds with our intuitive notion of the meaning of "implies." For example, the predicate calculus representation of the sentence "If the moon is made of green cheese, then horses can fly" has value T.
The symbol "~" (not) is sometimes called a connective, although it is not really used to connect two formulas. It is used to negate the truth value of a formula; that is, it changes the value of a wff from T to F, and vice versa. For example, the (true) sentence "Voltaire did not write Computer Chess" might be represented as
~WRITE(VOLTAIRE, COMPUTER-CHESS).
A formula with a ~ in front of it is called a negation. The negation of a wff is also a wff. An atomic formula and the negation of an atomic formula are both called literals.
It is easy to see that ~F1 ∨ F2 always has the same truth value as F1 ⇒ F2, so we would never really need to use ⇒. But our object here is not to propose a minimal representation but a useful one. There are occasions in which F1 ⇒ F2 is heuristically preferable to its equivalent ~F1 ∨ F2, and vice versa.
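The connectives can be rendered directly as truth functions. The sketch below (an illustration, not the book's material) checks the equivalence just mentioned by enumerating all four truth assignments:

```python
# The connectives as truth functions over True/False (standing for T and F).

def conj(p, q): return p and q            # ∧
def disj(p, q): return p or q             # ∨ (inclusive or)
def neg(p):     return not p              # ~
def implies(p, q): return (not p) or q    # ⇒, per the truth-value rule above

# ~F1 ∨ F2 agrees with F1 ⇒ F2 for every assignment of truth values:
for f1 in (True, False):
    for f2 in (True, False):
        assert disj(neg(f1), f2) == implies(f1, f2)
print("equivalence holds for all four truth assignments")
```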
If we limited our sentences to those that could be represented by the
constructs that we have introduced so far, and if we never used variables
in terms, we would be using a subset of the predicate calculus called the
propositional calculus. Indeed, the propositional calculus can be a useful
representation for many simplified domains, but it lacks the ability to
represent many statements (such as "All elephants are gray") in a useful
manner. To extend its power, we need the capability to make statements
with variables in the formulas.
4.1.3. QUANTIFICATION
Sometimes an atomic formula, like P(x), has value T (with a given interpretation for P) no matter what assignment is given to the variable x. Or such an atomic formula may have value T for at least one value of x. In the predicate calculus these properties are used in establishing the truth values of formulas containing constructs called quantifiers. The formula consisting of the universal quantifier (∀x) in front of a formula P(x) has value T for an interpretation just when the value of P(x) under this interpretation is T for all assignments of x to entities in the domain. The formula consisting of the existential quantifier (∃x) in front of a formula P(x) has value T for an interpretation just when the value of P(x) under the interpretation is T for at least one assignment of x to an entity in the domain.
For example, the sentence "All elephants are gray" might be represented by
(∀x)[ELEPHANT(x) ⇒ COLOR(x, GRAY)].
Here, the formula being quantified is an implication, and x is the
quantified variable. We say that x is quantified over. The scope of a
quantifier is just that part of the following string of formulas to which the quantifier applies. As another example, the sentence "There is a person
who wrote Computer Chess" might be represented by
(∃x)WRITE(x, COMPUTER-CHESS).
Any expression obtained by quantifying a wff over a variable is also a wff. If a variable in a wff is quantified over, it is said to be a bound variable; otherwise it is said to be a free variable. We are mainly interested in wffs having all of their variables bound. Such wffs are called sentences.
We note that if quantifiers occur in a wff, it is not always possible to use the rules for the semantics of quantifiers to compute the truth value of that wff. For example, consider the wff (∀x)P(x). Given an interpretation for P and an infinite domain of entities, we would have to check to see whether the relation corresponding to P held for every possible assignment of the value of x to a domain entity in order to establish that the wff had value T. Such a process would never terminate.
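Over a finite domain, by contrast, the quantifier semantics can be computed directly: (∀x)P(x) holds just when P holds for every entity, and (∃x)P(x) just when it holds for at least one. A sketch (the domain and the gray relation are invented for illustration):

```python
# A finite domain of three entities and a relation over it.
domain = ["CLYDE", "FRED", "BERNIE"]
gray = {"CLYDE", "FRED", "BERNIE"}     # the entities for which GRAY holds

def P(x):
    return x in gray

forall = all(P(x) for x in domain)     # (∀x)P(x): check every assignment
exists = any(P(x) for x in domain)     # (∃x)P(x): check for at least one
print(forall, exists)                  # True True
```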
The version of the predicate calculus used in this book is called first-order because it does not allow quantification over predicate symbols or function symbols. Thus, formulas like (∀P)P(A) are not wffs in first-order predicate calculus.
4.1.4. EXAMPLES AND PROPERTIES OF WFFS
Using the syntactic rules that we have just informally discussed, we can
build arbitrarily complex wffs, and we can compute whether or not an
arbitrary expression is a wff. For example, the following expressions are
wffs:
(∃x){(∀y)[(P(x,y) ∧ Q(y,x)) ⇒ R(x)]}
~(∀q){(∃x)[P(x) ∨ R(q)]}
~P[A, g(A,B,A)]
{~[P(A) ⇒ P(B)]} ⇒ P(B)
In the above expressions, we have used parentheses, brackets, and braces
as delimiters to group the component wffs. We use these delimiters to
improve readability and to eliminate any ambiguity about how a wff is
put together.
Some examples of expressions that are not wffs are:
~f(A)
f[P(A)]
Q{f(A), [P(B) ⇒ Q(C)]}
A ∨ ~ ⇒ (∀~)
Given an interpretation, the truth values of wffs (except for some containing quantifiers) can be computed by the rules we have informally described above. When truth values are computed in this manner, we are using what is called a truth table method. This method takes its name from a truth table that summarizes the rules we have already discussed. If X1 and X2 are any wffs, then the truth values of composite expressions made up of these wffs are given by the following truth table.
Table 4.1
Truth Table

X1   X2   X1 ∨ X2   X1 ∧ X2   X1 ⇒ X2   ~X1
T    T    T         T         T         F
T    F    T         F         F         F
F    T    T         F         T         T
F    F    F         F         T         T
If the truth values of two wffs are the same regardless of their interpretation, then we say that these wffs are equivalent. Using the truth table, we can easily establish the following equivalences:

~(~X1) is equivalent to X1
X1 ∨ X2 is equivalent to ~X1 ⇒ X2

de Morgan's Laws:
~(X1 ∧ X2) is equivalent to ~X1 ∨ ~X2
~(X1 ∨ X2) is equivalent to ~X1 ∧ ~X2

Distributive Laws:
X1 ∧ (X2 ∨ X3) is equivalent to (X1 ∧ X2) ∨ (X1 ∧ X3)
X1 ∨ (X2 ∧ X3) is equivalent to (X1 ∨ X2) ∧ (X1 ∨ X3)

Commutative Laws:
X1 ∧ X2 is equivalent to X2 ∧ X1
X1 ∨ X2 is equivalent to X2 ∨ X1

Associative Laws:
(X1 ∧ X2) ∧ X3 is equivalent to X1 ∧ (X2 ∧ X3)
(X1 ∨ X2) ∨ X3 is equivalent to X1 ∨ (X2 ∨ X3)

Contrapositive Law:
X1 ⇒ X2 is equivalent to ~X2 ⇒ ~X1
These laws justify the form in which we have written various of our example wffs in the discussion above. For example, the associative law allows us to write the conjunction X1 ∧ X2 ∧ ... ∧ XN without any parentheses.
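These equivalences can be checked mechanically by the truth-table method: enumerate every assignment of T and F to X1, X2, and X3, and compare the two sides. A small sketch (illustration only):

```python
from itertools import product

def implies(p, q): return (not p) or q    # the ⇒ truth function

for x1, x2, x3 in product((True, False), repeat=3):
    # de Morgan's laws
    assert (not (x1 and x2)) == ((not x1) or (not x2))
    assert (not (x1 or x2)) == ((not x1) and (not x2))
    # distributive laws
    assert (x1 and (x2 or x3)) == ((x1 and x2) or (x1 and x3))
    assert (x1 or (x2 and x3)) == ((x1 or x2) and (x1 or x3))
    # contrapositive law
    assert implies(x1, x2) == implies(not x2, not x1)
print("all listed equivalences verified")
```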
From the meanings of the quantifiers, we can also establish the following equivalences:

~(∃x)P(x) is equivalent to (∀x)[~P(x)]
~(∀x)P(x) is equivalent to (∃x)[~P(x)]
(∀x)[P(x) ∧ Q(x)] is equivalent to (∀x)P(x) ∧ (∀y)Q(y)
(∃x)[P(x) ∨ Q(x)] is equivalent to (∃x)P(x) ∨ (∃y)Q(y)
(∀x)P(x) is equivalent to (∀y)P(y)
(∃x)P(x) is equivalent to (∃y)P(y)
In the last two equivalences, we see that the bound variable in a
quantified expression is a kind of "dummy" variable. It can be arbitrarily
replaced by any other variable symbol not already occurring in the
expression.
To show the versatility of the predicate calculus as a language for expressing various assertions, we show below some example predicate calculus representations of some English sentences:

Every city has a dogcatcher who has been bitten by every dog in town.
(∀x){CITY(x) ⇒ (∃y){DOGCATCHER(x,y) ∧ (∀z){[DOG(z) ∧ LIVES-IN(x,z)] ⇒ BIT(y,z)}}}

For every set x, there is a set y, such that the cardinality of y is greater than the cardinality of x.
(∀x){SET(x) ⇒ (∃y)(∃u)(∃v)[SET(y) ∧ CARD(x,u) ∧ CARD(y,v) ∧ G(v,u)]}
All blocks on top of blocks that have been moved or that are attached to blocks that have been moved have also been moved.
(∀x)(∀y){{BLOCK(x) ∧ BLOCK(y) ∧ [ONTOP(x,y) ∨ ATTACHED(x,y)] ∧ MOVED(y)} ⇒ MOVED(x)}
4.1.5. RULES OF INFERENCE, THEOREMS, AND PROOFS
In the predicate calculus, there are rules of inference that can be applied to certain wffs and sets of wffs to produce new wffs. One important inference rule is modus ponens. Modus ponens is the operation that produces the wff W2 from wffs of the form W1 and W1 ⇒ W2. Another rule of inference, universal specialization, produces the wff W(A) from the wff (∀x)W(x), where A is any constant symbol. Using modus ponens and universal specialization together, for example, produces the wff W2(A) from the wffs (∀x)[W1(x) ⇒ W2(x)] and W1(A).
Inference rules, then, produce derived wffs from given ones. In the predicate calculus, such derived wffs are called theorems, and the sequence of inference rule applications used in the derivation constitutes a proof of the theorem. As we mentioned earlier, some problem-solving tasks can be regarded as the task of finding a proof for a theorem.
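The two rules just described can be sketched on wffs represented as nested tuples. The representation here is invented for illustration: ('forall', 'x', body) stands for (∀x)W(x), ('implies', w1, w2) for W1 ⇒ W2, and ('W1', 'x') for an atomic formula.

```python
def substitute(wff, var, const):
    """Replace every occurrence of variable var by constant const."""
    if wff == var:
        return const
    if isinstance(wff, tuple):
        return tuple(substitute(part, var, const) for part in wff)
    return wff

def universal_specialization(wff, const):
    """From (forall x)W(x), produce W(A)."""
    assert wff[0] == 'forall'
    _, var, body = wff
    return substitute(body, var, const)

def modus_ponens(w1, implication):
    """From W1 and W1 => W2, produce W2."""
    assert implication[0] == 'implies' and implication[1] == w1
    return implication[2]

# From (forall x)[W1(x) => W2(x)] and W1(A), derive W2(A):
axiom = ('forall', 'x', ('implies', ('W1', 'x'), ('W2', 'x')))
fact = ('W1', 'A')
specialized = universal_specialization(axiom, 'A')   # W1(A) => W2(A)
theorem = modus_ponens(fact, specialized)
print(theorem)   # ('W2', 'A')
```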
4.1.6. UNIFICATION
In proving theorems involving quantified formulas, it is often necessary to "match" certain subexpressions. For example, to apply the combination of modus ponens and universal specialization to produce W2(A) from the wffs (∀x)[W1(x) ⇒ W2(x)] and W1(A), it is necessary to find the substitution "A for x" that makes W1(A) and W1(x) identical. Finding substitutions of terms for variables to make expressions identical is an extremely important process in AI and is called unification. In order to describe this process, we must first discuss the topic of substitutions.
The terms of an expression can be variable symbols, constant symbols,
or functional expressions, the latter consisting of function symbols and
terms. A substitution instance of an expression is obtained by substituting
terms for variables in that expression. Thus, four instances of P[x,f(y),B] are:

P[z,f(w),B]
P[x,f(A),B]
P[g(z),f(A),B]
P[C,f(A),B]

The first instance is called an alphabetic variant of the original literal because we have merely substituted different variables for the variables appearing in P[x,f(y),B]. The last of the four instances shown above is called a ground instance, since none of the terms in the literal contains variables.
We can represent any substitution by a set of ordered pairs s = {t1/v1, t2/v2, ..., tn/vn}. The pair ti/vi means that term ti is substituted for variable vi throughout. We insist that a substitution be such that each occurrence of a variable have the same term substituted for it. Also, no variable can be replaced by a term containing that same variable. The substitutions used above in obtaining the four instances of P[x,f(y),B] are:

s1 = {z/x, w/y}
s2 = {A/y}
s3 = {g(z)/x, A/y}
s4 = {C/x, A/y}
To denote a substitution instance of an expression, E, using a substitution, s, we write Es. Thus,
P[z,f(w),B] = P[x,f(y),B]s1.
The composition of two substitutions s1 and s2 is denoted by s1s2, which is that substitution obtained by applying s2 to the terms of s1 and then adding any pairs of s2 having variables not occurring among the variables of s1. Thus,
{g(x,y)/z}{A/x, B/y, C/w, D/z} = {g(A,B)/z, A/x, B/y, C/w}.
It can be shown that applying s1 and s2 successively to an expression L is the same as applying s1s2 to L; that is, (Ls1)s2 = L(s1s2). It can also be shown that the composition of substitutions is associative:
(s1s2)s3 = s1(s2s3).
Substitutions are not, in general, commutative; that is, it is not generally the case that s1s2 = s2s1.
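Substitutions and their composition can be sketched in Python (an illustration, not the book's material): a substitution is a dict mapping variables to terms, and a term is a string or a nested tuple such as ('g', 'x', 'y') for g(x,y). The sketch reproduces the composition example above and checks that (Ls1)s2 = L(s1s2).

```python
def apply_subst(term, s):
    if isinstance(term, tuple):    # a functional expression: keep the symbol
        return (term[0],) + tuple(apply_subst(t, s) for t in term[1:])
    return s.get(term, term)       # a variable (if bound by s) or a constant

def compose(s1, s2):
    """Apply s2 to the terms of s1, then add the pairs of s2 whose
    variables do not occur among the variables of s1."""
    result = {v: apply_subst(t, s2) for v, t in s1.items()}
    result.update({v: t for v, t in s2.items() if v not in s1})
    return result

s1 = {'z': ('g', 'x', 'y')}
s2 = {'x': 'A', 'y': 'B', 'w': 'C', 'z': 'D'}
print(compose(s1, s2))   # {'z': ('g', 'A', 'B'), 'x': 'A', 'y': 'B', 'w': 'C'}

L = ('P', 'z', ('f', 'w'))
assert apply_subst(apply_subst(L, s1), s2) == apply_subst(L, compose(s1, s2))
```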
If a substitution s is applied to every member of a set {Ei} of expressions, we denote the set of substitution instances by {Ei}s. We say that a set {Ei} of expressions is unifiable if there exists a substitution s such that E1s = E2s = E3s = ... . In such a case, s is said to be a unifier of {Ei}, since its use collapses the set to a singleton. For example, s = {A/x, B/y} unifies {P[x,f(y),B], P[x,f(B),B]} to yield {P[A,f(B),B]}.
Although s = {A/x, B/y} is a unifier of the set {P[x,f(y),B], P[x,f(B),B]}, in some sense it is not the simplest unifier. We note that we really did not have to substitute A for x to achieve unification. The most general (or simplest) unifier, mgu, g of {Ei}, has the property that if s is any unifier of {Ei} yielding {Ei}s, then there exists a substitution s' such that {Ei}s = {Ei}gs'. Furthermore, the common instance produced by a most general unifier is unique except for alphabetic variants.
There are many algorithms that can be used to unify a finite set of unifiable expressions and that report failure when the set cannot be unified. The recursive procedure UNIFY, given informally below, is useful for establishing a general idea of how to unify a set of two list-structured expressions. [The literal P(x,f(A,y)) is written as (P x (f A y)) in list-structured form.]
Recursive Procedure UNIFY(E1, E2)
1  if either E1 or E2 is an atom (that is, a predicate symbol, a function symbol, a constant symbol, a negation symbol, or a variable), interchange the arguments E1 and E2 (if necessary) so that E1 is an atom, and do:
2  begin
3    if E1 and E2 are identical, return NIL
4    if E1 is a variable, do:
5    begin
6      if E1 occurs in E2, return FAIL
7      return {E2/E1}
8    end
9    if E2 is a variable, return {E1/E2}
10   return FAIL
11 end
12 F1 ← the first element of E1, T1 ← the rest of E1
13 F2 ← the first element of E2, T2 ← the rest of E2
14 Z1 ← UNIFY(F1, F2)
15 if Z1 = FAIL, return FAIL
16 G1 ← result of applying Z1 to T1
17 G2 ← result of applying Z1 to T2
18 Z2 ← UNIFY(G1, G2)
19 if Z2 = FAIL, return FAIL
20 return the composition of Z1 and Z2
It can be proven that UNIFY finds a most general unifier of a set of
unifiable expressions or reports failure when the expressions are not
unifiable.
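The procedure can be sketched as runnable Python (an illustration, not the book's code). Expressions are list-structured, so P(x, f(A, y)) is written ['P', 'x', ['f', 'A', 'y']], and, following the chapter's naming conventions, we assume a string is a variable just when it begins with one of the letters u through z.

```python
def is_variable(e):
    return isinstance(e, str) and e != '' and e[0] in 'uvwxyz'

def apply_subst(e, s):
    if isinstance(e, list):
        return [apply_subst(part, s) for part in e]
    return s.get(e, e)

def compose(s1, s2):
    """Apply s2 to the terms of s1, then add the pairs of s2 whose
    variables do not occur among the variables of s1."""
    out = {v: apply_subst(t, s2) for v, t in s1.items()}
    out.update({v: t for v, t in s2.items() if v not in s1})
    return out

def occurs_in(var, e):
    return var == e or (isinstance(e, list) and any(occurs_in(var, p) for p in e))

def unify(e1, e2):
    """Return a most general unifier as a dict, or 'FAIL'."""
    if isinstance(e1, list) and isinstance(e2, list):
        if not e1 or not e2:                 # a list is exhausted
            return {} if e1 == e2 else 'FAIL'
        z1 = unify(e1[0], e2[0])             # unify the first elements
        if z1 == 'FAIL':
            return 'FAIL'
        z2 = unify(apply_subst(e1[1:], z1), apply_subst(e2[1:], z1))
        if z2 == 'FAIL':
            return 'FAIL'
        return compose(z1, z2)
    if isinstance(e1, list):                 # make e1 the atom
        e1, e2 = e2, e1
    if e1 == e2:
        return {}                            # NIL: the empty substitution
    if is_variable(e1):
        return 'FAIL' if occurs_in(e1, e2) else {e1: e2}   # occurs check
    if is_variable(e2):
        return {e2: e1}
    return 'FAIL'

print(unify(['P', 'x', ['f', 'A', 'y']], ['P', 'z', ['f', 'A', 'B']]))
# {'x': 'z', 'y': 'B'}
```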
As examples, we list the most general common substitution instances
(those obtained by applying the mgu) for a few sets of literals.
Table 4.2
Unifiable Sets

Sets of Literals                                Most General Common Substitution Instances
{P(x), P(A)}                                    P(A)
{P[f(x),y,g(y)], P[f(x),z,g(x)]}                P[f(x),x,g(x)]
{P[f(x,g(A,y)),g(A,y)], P[f(x,z),z]}            P[f(x,g(A,y)),g(A,y)]
Typically, we use unification to discover whether one literal can match another one. There may be variables in both literals, and these variables may have terms substituted for them that would make the literals identical. The process of matching one expression to another template expression is sometimes called pattern matching. It plays a key role in AI systems. The unification process is more general than what is usually meant by pattern matching, however, because pattern-matching processes typically do not allow variables to occur in both expressions.
4.1.7. VALIDITY AND SATISFIABILITY

If a wff has the value T for all possible interpretations, it is called valid. (Valid ground wffs are usually called tautologies.) Thus, by the truth table, the wff P(A) ⇒ [P(A) ∨ P(B)] has the value T regardless of the interpretation; therefore, it is valid. The truth table method can always be used to determine the validity of any wff that does not contain variables. One merely checks whether the wff has the value T for all possible valuations of the atomic formulas contained in the wff.
When quantifiers occur, one cannot always compute whether or not a wff is valid. It has been shown to be impossible to find a general method to decide the validity of quantified expressions, and, for this reason, the predicate calculus is said to be undecidable. However, the validity of certain kinds of formulas containing quantifiers can be decided; thus, one may speak of decidable subclasses of the predicate calculus. Furthermore, it has been shown that if a wff is, in fact, valid, then a procedure exists for
verifying the validity of the wff. (This procedure applied to wffs that are
not valid may never terminate.) Thus, the predicate calculus is said to be
semidecidable.
If the same interpretation makes each wff in a set of wffs have the value T, then we say that this interpretation satisfies the set of wffs. A wff X logically follows from a set of wffs S if every interpretation satisfying S also satisfies X. Thus, it is easy to see that the wff (∀x)(∀y)[P(x) ∨ Q(y)] logically follows from the set
{(∀x)(∀y)[P(x) ∨ Q(y)], (∀z)[R(z) ∨ Q(A)]}.
Also, the wff P(A) logically follows from (∀x)P(x). It also happens that (∀x)Q(x) logically follows from the set {(∀x)[~P(x) ∨ Q(x)], (∀x)P(x)}.
There is an important connection between the concept of a wff
logically following from a set of wffs and the concept of a wff being a
theorem derived from a set of wffs by applying inference rules. Suppose
we are given a system of inference rules. We say that these rules are sound
if any theorem derivable from any set of wffs also logically follows from
that set of wffs. It can be shown, for example, that modus ponens is sound.
We say that a system of inference rules is complete if all wffs that logically
follow from any set are also theorems derivable from that set. We are always interested in sound inference rules, although sometimes we do not
insist that the set of rules be complete.
4.2. RESOLUTION
4.2.1. CLAUSES
Resolution is an important rule of inference that can be applied to a
certain class of wffs called clauses. A clause is defined as a wff consisting of
a disjunction of literals. The resolution process, when it is applicable, is
applied to a pair of parent clauses to produce a derived clause. Before
explaining the resolution process itself, we first show that any predicate
calculus wff can be converted to a set of clauses. We illustrate this conversion process by applying it to the following example wff:
(∀x){P(x) ⇒ {(∀y)[P(y) ⇒ P(f(x,y))] ∧ ~(∀y)[Q(x,y) ⇒ P(y)]}}.
The conversion process consists of the following steps:
(1) Eliminate implication symbols. All occurrences of the ⇒ symbol in a wff are eliminated by making the substitution ~X1 ∨ X2 for X1 ⇒ X2 throughout the wff. In our example wff, this substitution yields:
(∀x){~P(x) ∨ {(∀y)[~P(y) ∨ P(f(x,y))] ∧ ~(∀y)[~Q(x,y) ∨ P(y)]}}.
(2) Reduce scopes of negation symbols. We want each negation symbol, ~, to apply to at most one atomic formula. By making repeated use of de Morgan's laws and the other equivalences mentioned with them above, we change our example wff to:
(∀x){~P(x) ∨ {(∀y)[~P(y) ∨ P(f(x,y))] ∧ (∃y)[Q(x,y) ∧ ~P(y)]}}.
(3) Standardize variables. Within the scope of any quantifier, a variable bound by that quantifier is a dummy variable. It can be uniformly replaced by any other (non-occurring) variable throughout the scope of the quantifier without changing the truth value of the wff. Standardizing variables within a wff means renaming the dummy variables to ensure that each quantifier has its own unique dummy variable. Thus, instead of writing (∀x)[P(x) ⇒ (∃x)Q(x)], we write (∀x)[P(x) ⇒ (∃y)Q(y)]. Standardizing our example wff yields:
(∀x){~P(x) ∨ {(∀y)[~P(y) ∨ P(f(x,y))] ∧ (∃w)[Q(x,w) ∧ ~P(w)]}}.
(4) Eliminate existential quantifiers. Consider the wff
(∀y)[(∃x)P(x,y)],
which might be read as "For all y, there exists an x (possibly depending on y) such that P(x,y)." Note that because the existential quantifier is within the scope of a universal quantifier, we allow the possibility that the x that exists might depend on the value of y. Let this dependence be explicitly defined by some function g(y), which maps each value of y into the x that "exists." Such a function is called a Skolem function. If we use
the Skolem function in place of the x that exists, we can eliminate the existential quantifier altogether and write (∀y)P[g(y),y].
The general rule for eliminating an existential quantifier from a wff is
to replace each occurrence of its existentially quantified variable by a
Skolem function whose arguments are those universally quantified
variables that are bound by universal quantifiers whose scopes include
the scope of the existential quantifier being eliminated. Function
symbols used in Skolem functions must be new in the sense that they
cannot be ones that already occur in the wff. Thus, we can eliminate the (∃z) from
[(∀w)Q(w)] ⇒ (∀x){(∀y){(∃z)[P(x,y,z) ⇒ (∀u)R(x,y,u,z)]}},
to yield
[(∀w)Q(w)] ⇒ (∀x){(∀y)[P(x,y,g(x,y)) ⇒ (∀u)R(x,y,u,g(x,y))]}.
If the existential quantifier being eliminated is not within the scope of
any universal quantifiers, we use a Skolem function of no arguments,
which is just a constant. Thus, (∃x)P(x) becomes P(A), where the
constant symbol A is used to refer to the entity that we know exists. It is
important that A be a new constant symbol and not one used in other formulas to refer to known entities.
To eliminate all of the existentially quantified variables from a wff, we use the above procedure on each formula in turn. Eliminating the existential quantifiers (there is just one) in our example wff yields:
(∀x){~P(x) ∨ {(∀y)[~P(y) ∨ P(f(x,y))] ∧ [Q(x,g(x)) ∧ ~P(g(x))]}},
where g(x) is a Skolem function.
(5) Convert to prenex form. At this stage, there are no remaining
existential quantifiers and each universal quantifier has its own variable.
We may now move all of the universal quantifiers to the front of the wff
and let the scope of each quantifier include the entirety of the wff
following it. The resulting wff is said to be in prenex form. A wff in prenex
form consists of a string of quantifiers called a prefix followed by a
quantifier-free formula called a matrix. The prenex form of our wff is:
(∀x)(∀y){~P(x) ∨ {[~P(y) ∨ P(f(x,y))] ∧ [Q(x,g(x)) ∧ ~P(g(x))]}}.
(6) Put matrix in conjunctive normal form. Any matrix may be written as the conjunction of a finite set of disjunctions of literals. Such a matrix is said to be in conjunctive normal form. Examples of matrices in conjunctive normal form are:
[P(x) ∨ Q(x,y)] ∧ [P(w) ∨ ~R(y)] ∧ Q(x,y)
P(x) ∨ Q(x,y)
P(x) ∧ Q(x,y)
~R(y)
We may put any matrix into conjunctive normal form by repeatedly using one of the distributive rules, namely, by replacing expressions of the form X1 ∨ (X2 ∧ X3) by (X1 ∨ X2) ∧ (X1 ∨ X3). When the matrix of our example wff is put in conjunctive normal form, our wff becomes:
(∀x)(∀y){[~P(x) ∨ ~P(y) ∨ P(f(x,y))] ∧ [~P(x) ∨ Q(x,g(x))] ∧ [~P(x) ∨ ~P(g(x))]}.
(7) Eliminate universal quantifiers. Since all of the variables in the wffs we use must be bound, we are assured that all the variables remaining at this step are universally quantified. Furthermore, the order of universal quantification is unimportant, so we may eliminate the explicit occurrence of universal quantifiers and assume, by convention, that all variables in the matrix are universally quantified. We are left now with just a matrix in conjunctive normal form.
(8) Eliminate ∧ symbols. We may now eliminate the explicit occurrence of ∧ symbols by replacing expressions of the form (X1 ∧ X2) with the set of wffs {X1, X2}. The result of repeated replacements is to obtain a finite set of wffs, each of which is a disjunction of literals. Any wff consisting solely of a disjunction of literals is called a clause. Our example wff is transformed into the following set of clauses:
~P(x) ∨ ~P(y) ∨ P[f(x,y)]
~P(x) ∨ Q[x,g(x)]
~P(x) ∨ ~P[g(x)]
(9) Rename variables. Variable symbols may be renamed so that no variable symbol appears in more than one clause. Recall that (∀x)[P(x) ∧ Q(x)] is equivalent to [(∀x)P(x) ∧ (∀y)Q(y)]. This process is sometimes called standardizing the variables apart. Our clauses are now:
~P(x1) ∨ ~P(y) ∨ P[f(x1,y)]
~P(x2) ∨ Q[x2,g(x2)]
~P(x3) ∨ ~P[g(x3)]
We note that the literals of a clause may contain variables, but these variables are always understood to be universally quantified. If terms not containing variables are substituted for the variables in an expression, we obtain what is called a ground instance of the literal. Thus, Q(A,f(g(B))) is a ground instance of Q(x,y).
When resolution is used as a rule of inference in a theorem-proving system, the set of wffs from which we wish to prove a theorem is first converted into clauses. It can be shown that if the wff X logically follows from a set of wffs, S, then it also logically follows from the set of clauses obtained by converting the wffs in S to clause form. Therefore, for our purposes, clauses are a completely general form in which to express wffs.
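The first two conversion steps can be sketched for quantifier-free formulas (an illustration only; the representation, with formulas as nested tuples such as ('implies', a, b), is invented here):

```python
def eliminate_implications(w):
    """Step (1): replace X1 => X2 by ~X1 v X2 throughout."""
    if isinstance(w, str):
        return w
    if w[0] == 'implies':
        a, b = map(eliminate_implications, w[1:])
        return ('or', ('not', a), b)
    return (w[0],) + tuple(eliminate_implications(p) for p in w[1:])

def push_negations(w):
    """Step (2): drive each ~ inward until it applies to one atom."""
    if isinstance(w, str):
        return w
    if w[0] == 'not':
        a = w[1]
        if isinstance(a, str):
            return w                          # already a literal
        if a[0] == 'not':                     # ~(~X) is X
            return push_negations(a[1])
        if a[0] == 'and':                     # de Morgan's laws
            return ('or',) + tuple(push_negations(('not', p)) for p in a[1:])
        if a[0] == 'or':
            return ('and',) + tuple(push_negations(('not', p)) for p in a[1:])
    return (w[0],) + tuple(push_negations(p) for p in w[1:])

# ~(P => (Q and R)) becomes P and (~Q or ~R):
w = push_negations(eliminate_implications(('not', ('implies', 'P', ('and', 'Q', 'R')))))
print(w)   # ('and', 'P', ('or', ('not', 'Q'), ('not', 'R')))
```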
4.2.2. RESOLUTION FOR GROUND CLAUSES
The best way to obtain a general idea of the resolution inference rule is to understand how it applies to ground clauses. Suppose we have two ground clauses, P1 ∨ P2 ∨ ... ∨ PN and ~P1 ∨ Q2 ∨ ... ∨ QM. We assume that all of the Pi and Qj are distinct. Note that one of these clauses contains a literal that is the exact negation of one of the literals in the other clause. From these two parent clauses we can infer a new clause, called the resolvent of the two. The resolvent is computed by taking the disjunction of the two clauses and then eliminating the complementary pair, P1, ~P1. Some interesting special cases of resolution are shown in Table 4.3.
Table 4.3
Clauses and Resolvents

Parent Clauses                                   Resolvent(s)           Comments
P and ~P ∨ Q (i.e., P ⇒ Q)                       Q                      Modus ponens
P ∨ Q and ~P ∨ Q                                 Q                      The clause Q ∨ Q "collapses" to Q. This resolvent is called a merge.
P ∨ Q and ~P ∨ ~Q                                Q ∨ ~Q and P ∨ ~P      Here, there are two possible resolvents; in this case, both are tautologies.
~P and P                                         NIL                    The empty clause is a sign of a contradiction.
~P ∨ Q and ~Q ∨ R (i.e., P ⇒ Q and Q ⇒ R)        ~P ∨ R (i.e., P ⇒ R)   Chaining
From the table above, we see that resolution allows the incorporation
of several operations into one simple inference rule. We next consider
how this simple rule can be extended to deal with clauses containing
variables.
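The ground-resolution rule can be sketched in a few lines of Python (an illustration only), with a clause represented as a frozenset of literals and a literal as a string in which "~" marks negation:

```python
def negate(literal):
    return literal[1:] if literal.startswith('~') else '~' + literal

def resolvents(c1, c2):
    """All clauses obtainable by resolving the two ground parent clauses:
    for each complementary pair, take the disjunction of the remainders."""
    results = []
    for lit in c1:
        if negate(lit) in c2:
            results.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return results

# Modus ponens as a special case: P and ~P v Q resolve to Q.
print(resolvents(frozenset({'P'}), frozenset({'~P', 'Q'})))   # [frozenset({'Q'})]

# ~P and P resolve to the empty clause, the sign of a contradiction.
print(resolvents(frozenset({'~P'}), frozenset({'P'})))        # [frozenset()]
```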
4.2.3. GENERAL RESOLUTION
In order to apply resolution to clauses containing variables, we need to be able to find a substitution that can be applied to the parent clauses so that they contain complementary literals. In discussing this case, it is helpful to represent a clause by a set of literals (with the disjunction between the literals in the set understood). Let the prospective parent clauses be given by {Li} and {Mi}, and let us assume that the variables occurring in these two clauses have been standardized apart. Suppose that {li} is a subset of {Li} and that {mi} is a subset of {Mi} such that a most general unifier s exists for the union of the sets {li} and {~mi}. We say that the two clauses {Li} and {Mi} resolve and that the new clause,
{{Li} - {li}}s ∪ {{Mi} - {mi}}s,
is a resolvent of the two clauses.
If two clauses resolve, they may have more than one resolvent because
there may be more than one way in which to choose {l x} and {rrii}. In
any case, they can have at most a finite number of resolvents. As an
example, consider the two clauses
P[x,f(A)] V P[x,f(y)] V Q(y)
and
~P[z,f(A)] V ~Q(z).
With {li} = {P[x,f(A)]} and {mi} = {~P[z,f(A)]}, we obtain the
resolvent
P[z,f(y)] V ~Q(z) V Q(y).
With {li} = {P[x,f(A)], P[x,f(y)]} and {mi} = {~P[z,f(A)]}, we
obtain the resolvent
Q(A)V~Q(z).
Note that, in the latter case, two literals in the first clause were collapsed
by the substitution into a single literal, complementary to an instance of
one of the literals in the second clause.
There are, altogether, four different resolvents of these two clauses.
Three of these are obtained by resolving on P and one by resolving on Q.
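Finding the complementary literals above rests on a most general unifier. The sketch below is a minimal unifier (the occurs check is omitted) under an assumed term encoding: lowercase strings are variables, uppercase strings are constants, and a tuple such as ("f", "A") is a function application; none of these conventions come from the text.

```python
def is_var(t):
    return isinstance(t, str) and t.islower()

def walk(t, s):
    """Follow variable bindings in the substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(t1, t2, s=None):
    """Return an mgu of two terms as a dict, or None if they do not unify.
    (Occurs check omitted for brevity.)"""
    s = {} if s is None else s
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        return {**s, t1: t2}
    if is_var(t2):
        return {**s, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

# P[x,f(A)] unifies with P[z,f(A)] by binding x to z:
print(unify(("P", "x", ("f", "A")), ("P", "z", ("f", "A"))))  # {'x': 'z'}
# Collapsing P[x,f(A)] and P[x,f(y)], as in the merge example, forces y to A:
print(unify(("P", "x", ("f", "A")), ("P", "x", ("f", "y"))))  # {'y': 'A'}
```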
It is not difficult to show that resolution is a sound rule of inference;
that is, that the resolvent of a pair of clauses also logically follows from
that pair of clauses. When resolution is used in a special kind of
theorem-proving system, described in the next chapter and called a
refutation system, it is also complete. Every wff that logically follows from a set of wffs can be derived from that set of wffs using resolution
refutation. For this reason and because of its simplicity, resolution systems are an important class of theorem-proving systems. Their very
simplicity results, though, in certain inefficiencies that restrict their use in AI systems. Nevertheless, an understanding of resolution systems provides a basic foundation for understanding several other more efficient
types of theorem-proving systems.
In the next two chapters, we examine a variety of these systems,
beginning with ones using resolution.
4.3. THE USE OF THE PREDICATE
CALCULUS IN AI
The situations, or states, and the goals of several types of problems can
be described by predicate calculus wffs. In Figure 4.1, for example, we
show a situation in which there are three blocks, A, B, and C, on a table.
We can represent this situation by the conjunction of the following
formulas:
ON(C,A)
ONTABLE(A)
ONTABLE(B)
CLEAR(C)
CLEAR(B)
(Vx)[CLEAR(x) => ~(3y)ON(y,x)]

Fig. 4.1 A situation with three blocks on a table.
The formula CLEAR(B) is intended to mean that block B has a clear
top; that is, no other block is on it. The ON predicate is used to describe
which blocks are (directly) on other blocks. (For this example, ON is not
transitive; it is intended to mean immediately on top.) The formula
ONTABLE(B) is intended to mean that B is somewhere on the table.
The last formula in the list gives information about how CLEAR and ON
are related.
A conjunction of several such formulas can serve as a description of a
particular situation or "world state." We call it a state description.
Actually, any finite conjunction of formulas really describes a family of different world states, each member of which might be regarded as an
interpretation satisfying the formulas. Even assuming that we give the
obvious "blocks-world" interpretation to constituents of the formulas,
there is still an infinite family of states (perhaps involving additional blocks as well) whose members satisfy these formulas. We can always
eliminate some of these interpretations by adding additional formulas to
the state description; for example, the set listed above says nothing about
the color of the blocks and, thus, describes the family of states in which
the blocks can have various colors. If we added the formula
COLOR(B,YELLOW), some interpretations would obviously be eliminated. Even though a finite conjunction of formulas describes a family of
states, we often loosely speak of the state described by the state
description. We really mean, of course, the set of such states.
We intend to use formulas, like those of our blocks-world example, as a
global database in a production system. The way in which these formulas
are used depends upon the problem and its representation.
Suppose the problem is to show that a certain property is true in a
given state. For example, we might want to establish that there is nothing
on block C in the state depicted in Figure 4.1. We can prove this fact by
showing that the formula ~(3y)ON(y,C) logically follows from the
state description for Figure 4.1. Equivalently, we could show that
~(3y ) ON(y, C) is a theorem derived from the state description by the
application of sound rules of inference.
We can use production systems to attempt to show that a given
formula, called the goal wff, is a theorem derivable from a set of formulas
(the state description). We call production systems of this sort theorem-
proving systems or deduction systems. (In the next two chapters, we
present various commutative production systems for theorem proving.)
In a forward production system, the global database is set to the initial
state description, and (sound) production rules are applied until a state
description is produced that either includes the goal formula or unifies
with it in some appropriate fashion. In a backward production system,
the global database is set to the goal formula and production rules are
applied until a subgoal is produced that unifies with formulas in the state
description. Combined forward/backward systems are also possible.
One obvious and direct use of theorem-proving systems is for proving
theorems in mathematics and logic. A less obvious, but important, use of
them is in intelligent information retrieval systems where deductions must be performed on a database of facts in order to derive an answer to a query. For example, from expressions like
MANAGER(PURCHASING-DEPT, JOHN-JONES),
WORKS-IN(PURCHASING-DEPT, JOE-SMITH),
and
{[WORKS-IN(x,y) ∧ MANAGER(x,z)] => BOSS-OF(y,z)},
an intelligent retrieval system might be expected to answer a query like "Who is Joe Smith's
boss?" Such a query might be stated as the following
theorem to be proved:
(3x) BOSS-OF(JOE-SMITH, x).
A constructive proof (that is, one that exhibited the "x" that exists) would
provide an answer to the query.
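The retrieval step can be sketched concretely. The code below is a hypothetical, hard-wired version of the deduction: it applies the WORKS-IN/MANAGER rule constructively to the two facts, and the binding it finds for z is the answer term. The tuple encoding of facts is an assumption for illustration, not the book's notation.

```python
# Hypothetical fact base for the example (tuple encoding is an assumption):
facts = {
    ("MANAGER", "PURCHASING-DEPT", "JOHN-JONES"),
    ("WORKS-IN", "PURCHASING-DEPT", "JOE-SMITH"),
}

def boss_of(person):
    """Constructive use of [WORKS-IN(x,y) & MANAGER(x,z)] => BOSS-OF(y,z):
    the value bound to z is the answer to the query."""
    for (pred1, dept, who) in facts:
        if pred1 == "WORKS-IN" and who == person:
            for (pred2, dept2, mgr) in facts:
                if pred2 == "MANAGER" and dept2 == dept:
                    return mgr
    return None

print(boss_of("JOE-SMITH"))  # JOHN-JONES
```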
Even many commonsense reasoning tasks that one would not ordinarily formalize can, in fact, be handled by predicate calculus theorem-proving systems. The general strategy is to represent specialized knowledge about the domain as predicate calculus expressions and to
represent the problem or query as a theorem to be proved. The system then attempts to prove the theorem from the given expressions.
Other kinds of problems involve changing the state description to one
that describes an entirely different state. Suppose, for example, that we
have a "robot-type" problem in which the system must find a sequence of
robot actions that change a configuration of blocks. We can specify the
goal by a wff that describes the set of states acceptable as goal states.
Referring to Figure 4.1, we might want to have block A on block B, and
block B, in turn, on block C. Such a goal state (or rather set of states)
could be expressed by the goal formula [ON(A,B) ∧ ON(B,C)]. Note
that this goal formula certainly cannot be proved as a theorem from the
state description for Figure 4.1. The robot must change the state to one
that can be described by a set of formulas from which the goal wff can be
proved.
Problems of this sort can be solved by production systems also. For a
forward system, the global database is the state description. Each possible
robot action is modeled by a production rule (an F-rule in forward
systems). For example, if the robot can pick up a block, our production
system would have a corresponding F-rule. The action of picking up a block changes the state of the world; application of the F-rule that
models the action of picking up a block should make a corresponding
change to the state description. A sequence of actions for achieving a goal
can be computed by a forward production system that applies these
F-rules to state descriptions until a terminal state description is produced,
from which the goal wff can be proved. The solution sequence of F-rules
constitutes a specification of a plan of actions for achieving the goal state.
Backward production systems for state-changing problems are also
possible. They would use B-rules that are "inverse" models of the robot's
actions. The formula describing the goal state would be used as the global
database. B-rules would be applied until a subgoal formula was produced
that could be proved from the initial state description.
Production systems that use F-rules and B-rules in this way, to model
state-changing actions, are typically not commutative. An F-rule for
picking up a block, for example, might have as a precondition that the block have a clear top. In Figure 4.1, this precondition is satisfied for block B, but it would not be true for block B after block C is placed on it.
Thus, applying one F-rule to a certain state description might render
other F-rules suddenly inapplicable. Production systems for solving
state-changing problems are explored in detail in chapters 7 and 8. They find application especially in robot problem solving and in automatic
programming.
4.4. BIBLIOGRAPHICAL AND HISTORICAL
REMARKS
A book by Pospesel (1976) is a good elementary introduction to
predicate calculus with many examples of English sentences represented
as wffs. Two excellent textbooks on logic are those of Mendelson (1964)
and Robbin (1969). Books by Chang and Lee (1973), Loveland (1978), and Robinson (1979) describe resolution methods.
A unification algorithm and a proof of correctness is presented in
Robinson (1965). Several variations have appeared since. Raulefs et al.
(1978) survey unification and matching. Paterson and Wegman (1976)
present a linear-time (and space) unification algorithm.
The resolution rule was introduced by Robinson (1965) based on
earlier work by Prawitz (1960) and others. The soundness and completeness of resolution were originally proved by Robinson (1965); proofs of
these properties due to Kowalski and Hayes (1969) are presented in
Nilsson (1971). The steps that we have outlined for converting any wff into clause form are based on the procedure of Davis and Putnam (1960).
Clause form is also called quantifier-free, conjunctive-normal form.
Manna and Waldinger (1979) have proposed a generalization of resolution that is applicable to wffs in nonclausal form. Maslov (1971 and other
earlier papers in Russian) proposed a dual form of resolution, working
with "goal clauses" that are disjunctions of conjunctions of literals. [See
also Kuehner (1971).]
EXERCISES
4.1 Suppose that we represent "Sam is Bill's father" by FATHER(BILL,SAM) and "Harry is one of Bill's ancestors" by ANCESTOR(BILL,HARRY). Write a wff to represent "Every ancestor of Bill is
either his father, his mother, or one of their ancestors."
4.2 The connective ⊕ (exclusive or) is defined by the following truth
table:
X1      X2      X1 ⊕ X2
T       T       F
F       T       T
T       F       T
F       F       F
What wff containing only ~, V, and ∧ connectives is equivalent to
(X1 ⊕ X2)?
4.3 Represent the following sentences by predicate calculus wffs. (Lean
toward extravagance rather than economy in the number of different
predicates and terms used. Do not, for example, use a single predicate
letter to represent each sentence.)
(a) A computer system is intelligent if it can
perform a task which, if performed by a human, requires intelligence.
(b) A formula whose main connective is a => is equivalent to some formula whose main connective is a V.
(c) If the input to the unification algorithm is
a set of unifiable expressions, the output is
the mgu; if the input is a set of non-unifiable
expressions, the output is FAIL.
(d) If a program cannot be told a fact, then it
cannot learn that fact.
(e) If a production system is commutative, then, for any database, D, each member of the set of rules applicable to D is also applicable to any database produced by applying an applicable rule to D.
4.4 Show that modus ponens in the propositional calculus is sound.
4.5 Show that (3z)(Vx)[P(x) => Q(z)] and (3z)[(3x)P(x) => Q(z)]
are equivalent.
4.6 Convert the following wffs to clause form:
(a) (Vx)[P(x) => P(x)]
(b) ~[(Vx)P(x)] => (3x)[~P(x)]
(c) ~(Vx){P(x) => (Vy)[P(y) => P(f(x,y))]}
(d) (Vx)(3y){[P(x,y) => Q(y,x)] ∧ [Q(y,x) => S(x,y)]}
=> (3x)(Vy)[P(x,y) => S(x,y)]
4.7 Show by an example that the composition of substitutions is not
commutative.
4.8 Show that resolution is sound; that is, show that the resolvent of two
clauses logically follows from the two clauses.
4.9 Find the mgu of the set {P(x,z,y), P(w,u,w), P(A,u,u)}.
4.10 Explain why the following sets of literals do not unify:
(a) {P(f(x,x),A),P(f(y,f(y,A)),A)}
(b) {~P(A),P(x)}
(c) {P(f(A),x),P(x,A)}
4.11 The following wffs were given a "blocks-world" interpretation in
this chapter:
ON(C,A)
ONTABLE(A)
ONTABLE(B)
CLEAR(C)
CLEAR(B)
(Vx)[CLEAR(x) => ~(3y)ON(y,x)]
Invent two different (non-blocks-world) interpretations that satisfy the
conjunction of these wffs.
4.12 In our examples representing English sentences by wffs, we have
not been concerned about tense. Can you express the following sentences
as wffs:
Shakespeare writes "Hamlet."
Shakespeare wrote "Hamlet."
Shakespeare will write "Hamlet."
Shakespeare will have written "Hamlet."
Shakespeare had written "Hamlet."
CHAPTER 5
RESOLUTION REFUTATION
SYSTEMS
In this chapter and chapter 6, we are primarily concerned with systems
that prove theorems in the predicate calculus. Our interest in theorem
proving is not limited to applications in mathematics; we also investigate
applications in information retrieval, commonsense reasoning, and
automatic programming. Two main types of theorem-proving systems
will be discussed: here, systems based on resolution, and in chapter 6,
systems that use various forms of implications as production rules.
In the prototypical theorem-proving problem, we have a set, S, of wffs
from which we wish to prove some goal wff, W. Resolution-based systems
are designed to produce proofs by contradiction or refutations. In a
resolution refutation, we first negate the goal wff and then add the
negation to the set, S. This expanded set is then converted to a set of
clauses, and we use resolution in an attempt to derive a contradiction,
represented by the empty clause, NIL.
A simple argument can be given to justify the process of proof by
refutation. Suppose a wff, W, logically follows from a set, S, of wffs; then,
by definition, every interpretation satisfying S also satisfies W. None of
the interpretations satisfying S can satisfy ~W, and, therefore, no
interpretation can satisfy the union of S and {~W}. A set of wffs that cannot be satisfied by any interpretation is called unsatisfiable; thus, if W
logically follows from S, the set S U {~W} is unsatisfiable.
It can be shown that if resolution is applied repeatedly to a set of
unsatisfiable clauses, eventually the empty clause, NIL, will be produced.
Thus, if W logically follows from S, then resolution will eventually
produce the empty clause from the clause representation of S U {~W}.
Conversely, it can be shown that if the empty clause is produced from the
clause representation of S U {~W}, then W logically follows from S.
Let us consider a simple example of this process. Suppose the
following statements are asserted:
(1) Whoever can read is literate.
(Vx)[R(x)^L(x)]
(2) Dolphins are not literate.
(Vx)[D(x) => ~L(x)]
(3) Some dolphins are intelligent.
(3x)[D(x)AI(x)]
From these, we want to prove the statement:
(4) Some who are intelligent cannot read.
(3x)[I(x) ∧ ~R(x)]
The set of clauses corresponding to statements 1 through 3 is:
(1) ~R(x) V L(x)
(2) ~D(y)V ~L(y)
(3a) D(A)
(3b) 1(A)
where the variables have been standardized apart and where A is a
Skolem constant. The negation of the theorem to be proved, converted to
clause form, is
(4') ~I(z)VR(z) .
To prove our theorem by resolution refutation involves generating
resolvents from the set of clauses 1-3 and 4', adding these resolvents to
the set, and continuing until the empty clause is produced. One possible proof (there are more than one) produces the following sequence of
resolvents:
(5) R(A)    resolvent of 3b and 4'
(6) L(A)    resolvent of 5 and 1
(7) ~D(A) resolvent of 6 and 2
(8) NIL resolvent of 7 and 3a
5.1. PRODUCTION SYSTEMS FOR RESOLUTION
REFUTATIONS
We can think of a system for producing resolution refutations as a
production system. The global database is a set of clauses, and the rule
schema is resolution. Instances of this schema are applied to pairs of
clauses in the database to produce a derived clause. The new database is
then the old set of clauses augmented by the derived clause. The
termination condition for this production system is a test to see if the database contains the empty clause.
It is straightforward to show that such a production system is
commutative. Because it is commutative, we can use an irrevocable
control regime. That is, after performing a resolution, we never need to
provide for backtracking or for consideration of alternative resolutions
instead. We must emphasize that using an irrevocable control regime
does not necessarily mean that every resolution performed is "on the
path" to producing the empty clause; usually there will be several irrelevant resolutions applied. But, because the system is commutative, we are never prevented from applying an appropriate resolution later, even after having applied some irrelevant ones.
Suppose we start with a set, S, of clauses called the base set. The basic
algorithm for a resolution refutation production system can then be
written as follows:
Procedure RESOLUTION
1 CLAUSES ← S
2 until NIL is a member of CLAUSES, do:
3 begin
4 select two distinct, resolvable clauses ci and cj in CLAUSES
5 compute a resolvent, rij, of ci and cj
6 CLAUSES ← the set produced by adding rij to CLAUSES
7 end
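For ground clauses, Procedure RESOLUTION can be rendered in Python. The sketch below simplifies statements 4 through 6 by adding every available resolvent on each pass (saturation) rather than selecting a single pair; the frozenset-of-strings encoding of clauses is an assumption for illustration.

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def refute(base_set):
    """Ground resolution refutation: succeed when NIL (the empty
    frozenset) is a member of CLAUSES; fail when no new resolvent
    can be added (the clause set is satisfiable)."""
    clauses = set(base_set)
    while frozenset() not in clauses:
        new = set()
        for c1 in clauses:                   # statements 4-5: select clauses
            for c2 in clauses:               # and compute their resolvents
                for lit in c1:
                    if negate(lit) in c2:
                        new.add(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
        if new <= clauses:
            return False                     # saturated without deriving NIL
        clauses |= new                       # statement 6: add the resolvents
    return True

# Ground rendering of the dolphin example (Skolem constant A suppressed):
S = [frozenset(c) for c in ({"~R", "L"}, {"~D", "~L"}, {"D"}, {"I"}, {"~I", "R"})]
print(refute(S))  # True: the empty clause NIL is derivable
```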
5.2. CONTROL STRATEGIES FOR RESOLUTION
METHODS
The decisions about which two clauses in CLAUSES to resolve
(statement 4) and which resolution of these clauses to perform (statement
5) are made irrevocably by the control strategy. Several strategies for
selecting clauses have been developed for resolution; we give some
examples shortly.
In order to keep track of which resolutions have been selected and to
avoid duplicated effort, it is helpful for the control strategy to use a
structure called a derivation graph. The nodes in such a graph are labeled
by clauses; initially, there is a node for every clause in the base set. When
two clauses, ci and cj, produce a resolvent, rij, we create a new node,
labeled rij, with edges linking it to both the ci and cj nodes. Here we
deviate from the usual tree terminology and say that ci and cj are the
parents of rij and that rij is a descendant of ci and cj. (Recall that we
introduced the concept of a derivation graph in chapter 3.)
A resolution refutation can be represented as a refutation tree (within
the derivation graph) having a root node labeled by NIL. In Figure 5.1
we show a refutation tree for the example discussed in the last section.
The control strategy searches for a refutation by growing a derivation
graph until a tree is produced with a root node labeled by the empty
clause, NIL. A control strategy for a refutation system is said to be
complete if its use results in a procedure that will find a contradiction
(eventually) whenever one exists. (The completeness of a strategy should
not be confused with the logical completeness of an inference rule
discussed in chapter 4.) In AI applications, complete strategies are not so
important as ones that find refutations efficiently.
5.2.1. THE BREADTH-FIRST STRATEGY
In the breadth-first strategy, all of the first-level resolvents are
computed first, then the second-level resolvents, and so on. (A first-level
resolvent is one between two clauses in the base set; an i-th level resolvent
is one whose deepest parent is an (i-1)-th level resolvent.) The
breadth-first strategy is complete, but it is grossly inefficient.
In Figure 5.2 we show the refutation graph produced by a breadth-first
strategy for the example problem of the last section. All of the first- and
second-level resolvents are shown, and we indicate that NIL is among the
third-level resolvents. (Note that our refutation shown in Figure 5.1 did
not produce the empty clause until the fourth level.)
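The level bookkeeping can be sketched for ground clauses as follows. Applied to a propositional rendering of the example clauses (arguments dropped), it reports the empty clause at the third level, in agreement with Figure 5.2; the clause encoding is an assumption for illustration.

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def pair_resolvents(c1, c2):
    return {frozenset((c1 - {l}) | (c2 - {negate(l)}))
            for l in c1 if negate(l) in c2}

def nil_level(base, max_level=10):
    """Breadth-first search for NIL: an i-th level resolvent has its
    deepest parent at level i-1. Return the first level containing NIL."""
    frontier = set(base)      # clauses at the deepest level so far
    seen = set(base)          # all clauses generated at any level
    for i in range(1, max_level + 1):
        level = set()
        for c1 in frontier:   # deepest parent: previous level
            for c2 in seen:   # other parent: any earlier clause
                level |= pair_resolvents(c1, c2)
        if frozenset() in level:
            return i
        seen |= level
        frontier = level
    return None

# Propositional rendering of the example clauses:
S = [frozenset(c) for c in ({"~R", "L"}, {"~D", "~L"}, {"D"}, {"I"}, {"~I", "R"})]
print(nil_level(S))  # 3: NIL is among the third-level resolvents
```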
[Figure: a refutation tree in which ~I(z) V R(z) and I(A) resolve to R(A); R(A) and ~R(x) V L(x) resolve to L(A); L(A) and ~D(y) V ~L(y) resolve to ~D(A); and ~D(A) and D(A) resolve to NIL.]

Fig. 5.1 A resolution refutation tree.
[Figure: derivation graph grown breadth-first from the base-set clauses; all first- and second-level resolvents are shown, and NIL appears among the third-level resolvents.]

Fig. 5.2 Illustration of a breadth-first strategy.
5.2.2. THE SET-OF-SUPPORT STRATEGY
A set-of-support refutation is one in which at least one parent of each
resolvent is selected from among the clauses resulting from the negation
of the goal wff or from their descendants (the set of support). It can be
shown that a set-of-support refutation exists whenever any refutation
exists and, therefore, that the set of support can be made the basis of a
complete strategy. The strategy need only guarantee to search for all
possible set-of-support refutations (in breadth-first manner, say). Set-of-support strategies are usually more efficient than unconstrained breadth-first ones.
In a set-of-support refutation, each resolution has the flavor of a
backward reasoning step because it uses a clause originating from the
goal wff, or one of its descendants. Each of the resolvents in a set-of-support refutation might then correspond to a subgoal in a backward production system. One advantage of a refutation system is that it permits what are essentially backward and forward reasoning steps
to occur in a simple fashion in the same production system. (Forward reasoning steps correspond to resolutions between clauses that do not
descend from the theorem to be proved.)
In Figure 5.3 we show a refutation graph produced by the set-of-sup
port strategy for our example problem. Notice that, in this case, set of support does not permit finding the empty clause at the third level. A third-level refutation for this problem necessarily involves resolving two clauses outside the set of support. Comparing Figure 5.2 with Figure 5.3,
we see that set of support produces fewer clauses at each level than does
unconstrained breadth-first resolution. Typically, the set-of-support strategy results in slower growth of the clause set and thus helps to
moderate the usual combinatorial explosion. Usually this containment of
clause-set growth more than compensates for the fact that a restrictive
strategy, like set of support, often increases the depth at which the empty
clause is first produced.
The refutation tree in Figure 5.1 is one that could have been produced
by a set-of-support strategy. We show the top part of this tree by
darkening some of the branches in Figure 5.3.
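A ground sketch of the restriction: every resolution must take at least one parent from the growing set of support. The encoding of clauses as frozensets of literal strings is an assumption for illustration.

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def pair_resolvents(c1, c2):
    return {frozenset((c1 - {l}) | (c2 - {negate(l)}))
            for l in c1 if negate(l) in c2}

def refute_sos(base, support):
    """Set-of-support resolution: every resolution takes at least one
    parent from the clauses derived from the negated goal."""
    clauses = set(base) | set(support)
    sos = set(support)                  # negated goal and its descendants
    while frozenset() not in sos:
        new = set()
        for c1 in sos:                  # supported parent
            for c2 in clauses:
                new |= pair_resolvents(c1, c2)
        if new <= clauses:
            return False                # saturated without deriving NIL
        sos |= new
        clauses |= new
    return True

# Dolphin example, propositionalized: base clauses 1-3, support = clause 4'.
base = [frozenset(c) for c in ({"~R", "L"}, {"~D", "~L"}, {"D"}, {"I"})]
print(refute_sos(base, [frozenset({"~I", "R"})]))  # True
```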
5.2.3. THE UNIT-PREFERENCE STRATEGY
The unit-preference strategy is a modification of the set-of-support
strategy in which, instead of filling out each level in breadth-first fashion,
[Figure: derivation graph in which every resolvent has a parent drawn from ~I(z) V R(z) (the negated goal) or its descendants; the third-level resolvents include ~D(A) and ~I(A) but not NIL.]

Fig. 5.3 Illustration of a set-of-support strategy.
CONTROL STRATEGIES FOR RESOLUTION METHODS
we try to select a single-literal clause (called a unit ) to be a parent in a
resolution. Every time units are used in resolution, the resolvents have
fewer literals than do their other parents. This process helps to focus the
search toward producing the empty clause and, thus, typically increases
efficiency.
The refutation tree of Figure 5.1 is one that might have been produced
by a unit-preference strategy.
5.2.4. THE LINEAR-INPUT FORM STRATEGY
A linear-input form refutation is one in which each resolvent has at
least one parent belonging to the base set. In Figure 5.4 we show how a
refutation graph would be generated using this strategy on our example
problem. Note that the first level of Figure 5.4 is the same as the first level
of Figure 5.2. At subsequent levels, the linear-input form strategy does
reduce the number of clauses produced. Again, the use of this strategy on
our example problem does not permit us to find a third-level empty
clause. Note that the refutation tree of Figure 5.1 qualifies as a
linear-input form refutation. We indicate part of this tree by darkening
some of the branches in Figure 5.4.
There are cases in which a refutation exists but a linear-input form
refutation does not; therefore, linear-input form strategies are not
complete. To see that linear-input form refutations do not always exist for
unsatisfiable sets, consider the following example set of clauses:
Q(u) V P(A)
~Q(w) V P(w)
~Q(x) V ~P(x)
Q(y) V ~P(y)
The set is clearly unsatisfiable, as evidenced by the refutation tree of
Figure 5.5. A linear-input form refutation must (in particular) have one
of the parents of NIL be a member of the base set. But to produce the
empty clause in this case, one must either resolve two single-literal clauses or two clauses that collapse in resolution to single-literal clauses.
None of the members of the base set meets either of these criteria, so
there cannot be a linear-input form refutation for this set.
Notwithstanding their lack of completeness, linear-input form strategies are often used because of their simplicity and efficiency.
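The incompleteness claim can be checked mechanically on a propositional rendering of the four clauses above (arguments dropped). Unrestricted saturation derives the empty clause, while the linear-input restriction never does; the clause encoding is an assumption for illustration.

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def pair_resolvents(c1, c2):
    return {frozenset((c1 - {l}) | (c2 - {negate(l)}))
            for l in c1 if negate(l) in c2}

def refute(clause_set):
    """Unrestricted ground resolution by saturation."""
    clauses = set(clause_set)
    while frozenset() not in clauses:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                new |= pair_resolvents(c1, c2)
        if new <= clauses:
            return False
        clauses |= new
    return True

def refute_linear_input(base):
    """Linear-input restriction: one parent of every resolvent is a base clause."""
    clauses = set(base)
    while frozenset() not in clauses:
        new = set()
        for c1 in base:                  # one parent always from the base set
            for c2 in clauses:
                new |= pair_resolvents(c1, c2)
        if new <= clauses:
            return False
        clauses |= new
    return True

# Propositional version of the unsatisfiable example set:
S = [frozenset(c) for c in ({"Q", "P"}, {"~Q", "P"}, {"~Q", "~P"}, {"Q", "~P"})]
print(refute(S), refute_linear_input(S))  # True False
```

Every base clause has two literals, so a resolvent with a base-set parent always retains at least one literal; NIL is unreachable under the restriction.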
[Figure: derivation graph in which every resolvent has at least one parent in the base set; first-, second-, and third-level resolvents are shown, and NIL does not appear by the third level.]

Fig. 5.4 Illustration of a linear-input form strategy.
5.2.5. THE ANCESTRY-FILTERED FORM STRATEGY
An ancestry-filtered form refutation is one in which each resolvent has
a parent that is either in the base set or that is an ancestor of the other
parent. Thus, ancestry-filtered form is very much like linear form. It can
be shown that a control strategy guaranteed to produce all ancestry-filtered form proofs is complete.
As an example, the refutation tree of Figure 5.5 is one that could have
been produced by an ancestry-filtered form strategy. The clause marked
with an asterisk is used as an "ancestor" in this
case. It can also be shown
that completeness of the strategy is preserved if the ancestors that are
used are limited to merges. (Recall from chapter 4 that a merge is a
resolvent that inherits a literal from each parent such that this literal is
collapsed to a singleton by the mgu.) We note in Figure 5.5 that the clause
marked by an asterisk is a merge.
[Figure: a refutation tree deriving NIL from the four clauses above; the clause marked with an asterisk is a merge.]

Fig. 5.5 A refutation tree.
5.2.6. COMBINATIONS OF STRATEGIES
It is also possible to combine control strategies. A combination of set of
support with either linear-input form or ancestry-filtered form is common. Let us consider the set-of-support/linear-input form strategy, as an
example. This strategy can be viewed as a simple type of reasoning
backward from a goal to subgoal to sub-subgoal and so on. It happens
that the first three levels in Figure 5.3 contain only clauses that are
permitted by this combination strategy, so that the combination for those
levels does not further restrict the set-of-support strategy used in that
figure. Occasionally, however, the combination strategy leads to a slower
growth of the clause set than would either strategy alone.
The set-of-support, linear-input form, and ancestry-filtered form
strategies restrict resolutions. Of all the resolutions that these strategies
allow, the strategies say nothing about the order in which these
resolutions are performed. We have already mentioned that an inappropriate order does not prevent us from finding a refutation. This fact does
not mean, however, that resolution order has no effect on the efficiency of
the process. On the contrary, an appropriate order of performing
resolutions can prevent the generation of large numbers of unneeded clauses. The unit-preference strategy is one example of an ordering
strategy. Other ordering strategies based on the number of literals in a
clause and the complexity of the terms in a clause can also be devised. The order in which resolutions are performed is crucial to the efficiency of resolution systems. Since we do not concentrate on applications of
resolution refutation systems in this book, the interested reader is
referred to the citations at the end of this chapter for references to papers
and books dealing with ordering strategies for resolution systems.
5.3. SIMPLIFICATION STRATEGIES
Sometimes a set of clauses can be simplified by elimination of certain
clauses or by elimination of certain literals in the clauses. These
simplifications are such that the simplified set of clauses is unsatisfiable if
and only if the original set is unsatisfiable. Thus, employing these
simplification strategies helps to reduce the rate of growth of new clauses.
5.3.1. ELIMINATION OF TAUTOLOGIES
Any clause containing a literal and its negation (we call such a clause a
tautology) may be eliminated, since any unsatisfiable set containing a
tautology is still unsatisfiable after removing it, and conversely. Thus,
clauses like P(x) V B(y) V ~B(y) and P(f(A)) V ~P(f(A)) may
be eliminated.
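For ground clauses encoded as frozensets of literal strings (an assumed representation, not the book's notation), the tautology test is a single line:

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def is_tautology(clause):
    """True when the clause contains a literal and its complement."""
    return any(negate(l) in clause for l in clause)

# P(x) V B(y) V ~B(y) contains the complementary pair B(y), ~B(y):
print(is_tautology(frozenset({"P(x)", "B(y)", "~B(y)"})))  # True
print(is_tautology(frozenset({"P(x)", "B(y)"})))           # False
```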
5.3.2. PROCEDURAL ATTACHMENT
Sometimes it is possible and more convenient to evaluate the truth
values of literals than it would be to include these literals, or their
negations, in the base set. Typically, evaluations are performed for
ground instances. For example, if the predicate symbol "E" stands for the
equality relation between numbers, it is a simple matter to evaluate
ground instances such as E(7,3) when they occur; whereas we would probably not want to include in the base set a table containing a large
number of ground instances of E(x,y) and ~E(x,y).
It is instructive to look more closely at what is meant by "evaluating"
an expression like E(7,3). Predicate calculus expressions are linguistic
constructs that denote truth values, elements, functions, or relations in a
domain. Such expressions can be interpreted with reference to a model
which associates linguistic entities with appropriate domain entities. The
end result is that the values T or F become associated with sentences in the language.
Given a model, we could use any finite processes for interpretation
with respect to it as a way of deciding truth values of sentences.
Unfortunately, models and interpretation processes are not, in general,
finite. Often, we can use partial models, however. In our equality
example, we can associate with the predicate symbol, E, a computer
program that tests the equality of two numbers within the finite domain
of the program. Let us call this program EQUALS. We say that the program EQUALS is attached to the predicate symbol E. We can
associate the linguistic symbols 7 and 3 (i.e., numerals) with the computer
data items 7 and 3 (i.e., numbers), respectively. We say that 7 is attached
to 7, and that 3 is attached to 3, and that the computer program and
arguments represented by EQUALS(7,3) are attached to the linguistic
expression E(7,3). Now we can run the program to obtain the value F
(false) which in turn induces the value F for E(7,3).
We can also attach procedures to function symbols. For example, an
addition program can be attached to the function symbol plus. In this
manner, we can establish a connection or procedural attachment between
executable computer code and some of the linguistic expressions in our predicate calculus language. Evaluation of attached procedures can be
thought of as a process of interpretation with respect to a partial model.
When it can be used, procedural attachment reduces the search effort that would otherwise be required to prove theorems.
A literal is evaluated when it is interpreted by running attached
procedures. Typically, not all of the literals in a set of clauses can be
evaluated, but the clause set can nevertheless be simplified by such evaluations. If a literal in a clause evaluates to T, the entire clause can be
eliminated without affecting the unsatisfiability of the rest of the set. If a
literal evaluates to F, then the occurrence of just that literal in the clause
can be eliminated. Thus the clause P(x) V Q(A)V E(7,3) can be
replaced by P(x)V Q(A), since E(7,3) evaluates to F.
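As a concrete sketch of procedural attachment (the representation and helper names here are illustrative, not from the text), we can attach the program EQUALS to the predicate symbol E and simplify a clause set by evaluating the literals that have attachments:

```python
# Attach executable procedures to predicate symbols.
ATTACHED = {
    "E": lambda a, b: a == b,   # the program EQUALS, attached to E
}

def evaluate(literal):
    """Return True/False for a ground literal with an attached procedure,
    or None if the literal cannot be evaluated."""
    negated, pred, args = literal
    proc = ATTACHED.get(pred)
    if proc is None or not all(isinstance(a, int) for a in args):
        return None                      # no attachment, or not ground
    value = proc(*args)
    return (not value) if negated else value

def simplify(clauses):
    """Simplify a clause set: a literal evaluating to T removes its whole
    clause; a literal evaluating to F is removed from its clause."""
    result = []
    for clause in clauses:
        kept, satisfied = [], False
        for lit in clause:
            v = evaluate(lit)
            if v is True:
                satisfied = True         # clause is satisfied; drop it
                break
            if v is None:
                kept.append(lit)         # unevaluable literal stays
            # v is False: the literal itself is dropped
        if not satisfied:
            result.append(kept)
    return result

# P(x) V Q(A) V E(7,3): E(7,3) evaluates to F, so that literal is removed.
clause = [(False, "P", ["x"]), (False, "Q", ["A"]), (False, "E", [7, 3])]
print(simplify([clause]))   # [[(False, 'P', ['x']), (False, 'Q', ['A'])]]
```

A literal is encoded as (negated, predicate, args); lowercase strings stand for variables, so only all-integer argument lists are treated as ground and evaluable.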
5.3.3. ELIMINATION BY SUBSUMPTION
By definition, a clause {Li} subsumes a clause {Mi} if there exists a
substitution s such that {Li}s is a subset of {Mi}. As examples:
P(x) subsumes P(y) V Q(z)
P(x) subsumes P(A)
P(x) subsumes P(A) V Q(z)
P(x) V Q(A) subsumes P(f(A)) V Q(A) V R(y)
A clause in an unsatisfiable set that is subsumed by another clause in
the set can be eliminated without affecting the unsatisfiability of the rest
of the set. Eliminating clauses subsumed by others frequently leads to
substantial reductions in the number of resolutions that need to be made
in finding a refutation.
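The subsumption test itself can be sketched as a backtracking one-way match (an illustrative sketch, not an algorithm from the text; variables are lowercase strings, constants are uppercase strings, and a term like f(A) is written as the tuple ("f", "A")):

```python
def match(pattern, target, subst):
    """One-way matching: extend subst so that pattern instantiates to
    target, or return None if no such extension exists."""
    if isinstance(pattern, str) and pattern.islower():   # a variable
        if pattern in subst:
            return subst if subst[pattern] == target else None
        s = dict(subst)
        s[pattern] = target
        return s
    if isinstance(pattern, tuple) and isinstance(target, tuple):
        if pattern[0] != target[0] or len(pattern) != len(target):
            return None
        for p, t in zip(pattern[1:], target[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == target else None

def subsumes(c, d, subst=None):
    """True if some substitution s makes every literal of clause c a
    member of clause d (clauses are lists of literal terms)."""
    if subst is None:
        subst = {}
    if not c:
        return True
    first, rest = c[0], c[1:]
    for lit in d:
        s = match(first, lit, subst)
        if s is not None and subsumes(rest, d, s):
            return True
    return False

# P(x) V Q(A) subsumes P(f(A)) V Q(A) V R(y):
print(subsumes([("P", "x"), ("Q", "A")],
               [("P", ("f", "A")), ("Q", "A"), ("R", "y")]))   # True
```

Because the substitution is applied only to the subsuming clause, the match is one-way: P(x) subsumes P(A), but P(A) does not subsume P(x).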
5.4. EXTRACTING ANSWERS FROM RESOLUTION REFUTATIONS
Many applications of predicate calculus theorem-proving systems
involve proving formulas containing existentially quantified variables,
and finding values or instances for these variables. That is, we might want
to know if a wff such as (3x)W(x) logically follows from S, and if it
does, we want an instance of the "x" that exists. The problem of finding a
proof for (3x)W(x) from S is an ordinary predicate calculus theorem-
proving problem, but producing the satisfying instance for x requires
that the proof method be "constructive."
We note that the prospect of producing satisfying instances for
existentially quantified variables allows the possibility for posing quite
general questions. For example, we could ask "Does there exist a solution
sequence to a certain 8-puzzle?" If a constructive proof can be found that
a solution does exist, then we could produce the desired solution also. We
could also ask whether there exist programs that perform desired
computations. From a constructive proof of a program's existence, we
could produce the desired program. (We must remember, though, that complex questions will generally have complex proofs, possibly so complex that our automatic proof-finding procedures will not find them.)
In this section we describe a process by which a satisfying instance of an existentially quantified variable in a wff can be extracted from a
resolution refutation for that wff.
5.4.1. AN EXAMPLE
Consider the following trivially simple problem: "If Fido goes
wherever John goes and if John is at school, where is Fido?" Quite clearly
the problem specifies two facts and then asks a question whose answer
presumably can be deduced from these facts. The facts might be
translated into the set S of wffs
(Vx)[AT(JOHN,x) => AT(FIDO,x)]
and
AT(JOHN,SCHOOL).
The question "where is Fido?" can be answered if we first prove that
the wff
(3x)AT(FIDO,x)
logically follows from S and then find an instance of the x "that exists."
The key idea is to convert the question into a goal wff containing an existential quantifier such that the existentially quantified variable
represents an answer to the question. If the question can be answered from the facts given, the goal wff created in this manner will logically follow from S. After obtaining a
proof, we then try to extract an instance
of the existentially quantified variable to serve as an answer. In our
example we can easily prove that (3x)AT(FIDO,x) follows from S. We can also show that a relatively simple process extracts the appropriate
answer.
The resolution refutation is obtained in the usual manner, by first
negating the wff to be proved, adding this negation to the set S,
converting all of the members of this enlarged set to clause form, and
then, by resolution, showing that this set of clauses is unsatisfiable. A
refutation tree for our example is shown in Figure 5.6. The clauses resulting from the wffs in S are called axioms. Note that the negation of the goal wff (3x)AT(FIDO,x) produces
(Vx)[~AT(FIDO,x)],
whose clause form is simply ~AT(FIDO,x).
Next we must extract an answer to the question "Where is Fido?" from
this refutation tree. The process for doing so in this case is as follows:
(1) Append to each clause arising from the negation
of the goal wff its own negation. Thus
~AT(FIDO,x) becomes the tautology
~AT(FIDO,x) V AT(FIDO,x).
(2) Following the structure of the refutation tree,
perform the same resolutions as before until some
clause is obtained at the root. (We make the phrase
the same resolutions more precise later.)
(3) Use the clause at the root as an answer statement.
~AT(FIDO,x)                     (negation of goal)
~AT(JOHN,y) V AT(FIDO,y)        (Axiom 1)
~AT(JOHN,x)                     (resolvent)
AT(JOHN,SCHOOL)                 (Axiom 2)
NIL
Fig. 5.6 Refutation tree for example problem.
~AT(FIDO,x) V AT(FIDO,x)        (tautology from the negated goal)
~AT(JOHN,y) V AT(FIDO,y)        (Axiom 1)
~AT(JOHN,x) V AT(FIDO,x)        (resolvent)
AT(JOHN,SCHOOL)                 (Axiom 2)
AT(FIDO,SCHOOL)                 (answer statement)
Fig. 5.7 The modified proof tree for example problem.
In our example, these steps produce the proof tree shown in Figure 5.7
with the clause AT(FIDO,SCHOOL) at the root. This clause, then, is
the appropriate answer to the problem.
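The three steps can be traced for this example with a small sketch (the encoding of literals and the flat unifier are this sketch's own conventions, and the two resolutions of Figure 5.7 are hard-coded rather than searched for):

```python
# A literal is (sign, predicate, args): sign True for positive. Lowercase
# strings are variables, uppercase strings are constants (flat terms only).

def unify(a, b, s):
    """Unify two flat terms under substitution s."""
    a, b = s.get(a, a), s.get(b, b)
    if a == b:
        return s
    if isinstance(a, str) and a.islower():
        return {**s, a: b}
    if isinstance(b, str) and b.islower():
        return {**s, b: a}
    return None

def resolve(c1, c2, i, j):
    """Resolve clause c1 against c2 on the complementary literals
    c1[i], c2[j], returning the resolvent."""
    (s1, p1, a1), (s2, p2, a2) = c1[i], c2[j]
    assert s1 != s2 and p1 == p2
    subst = {}
    for x, y in zip(a1, a2):
        subst = unify(x, y, subst)
        assert subst is not None, "literals do not unify"
    rest = [l for k, l in enumerate(c1) if k != i] + \
           [l for k, l in enumerate(c2) if k != j]
    return [(s, p, [subst.get(t, t) for t in args]) for s, p, args in rest]

# Tautology from the negated goal, Axiom 1, and Axiom 2:
taut   = [(False, "AT", ["FIDO", "x"]), (True, "AT", ["FIDO", "x"])]
axiom1 = [(False, "AT", ["JOHN", "y"]), (True, "AT", ["FIDO", "y"])]
axiom2 = [(True, "AT", ["JOHN", "SCHOOL"])]

step1  = resolve(taut, axiom1, 0, 1)    # AT(FIDO,y) V ~AT(JOHN,y)
answer = resolve(step1, axiom2, 1, 0)   # resolve away ~AT(JOHN,y)
print(answer)                           # [(True, 'AT', ['FIDO', 'SCHOOL'])]
```

Running the same resolutions as the refutation, but starting from the tautology, leaves AT(FIDO,SCHOOL) at the root instead of NIL.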
We note that the answer statement has a form similar to that of the goal
wff. In this case, the only difference is that we have a constant (the
answer) in the answer statement in the place of the existentially
quantified variable in the goal wff.
In the next sections, we deal more thoroughly with the answer
extraction process, justify its validity, and discuss how it should be
employed if the goal wff contains universal as well as existential quantifiers.
5.4.2. THE ANSWER EXTRACTION PROCESS
Answer extraction involves converting a refutation tree (with NIL at
the root) to a proof tree with some statement at the root that can be used
as an answer. Since the conversion involves converting every clause
arising from the negation of the goal wff into a tautology, the converted
proof tree is a resolution proof that the statement at the root logically
follows from the axioms plus tautologies. Hence it also follows from the
axioms alone. Thus, the converted proof tree itself justifies the extraction
process!
Although the method is simple, there are some fine points that can be
clarified by considering some additional examples.
EXAMPLE 1. Consider the following set of wffs:
1. (Vx)(Vy)(Vz){[P(x,y) Λ P(y,z)] => G(x,z)}
and
2. (Vy)(3x)P(x,y).
We might interpret these as follows:
For all x and y, if x is the parent of y and y is the parent of z, then x is
the grandparent of z.
and
Everyone has a parent.
Given these wffs as hypotheses, suppose we asked the question "Do there
exist individuals x and y such that x is the grandparent of y?" The goal
wff corresponding to this question is:
(3x)(3y)G(x,y).
The goal wff is easily proved by a resolution refutation. The refutation
tree is shown in Figure 5.8. The literals that are unified in each resolution
are underlined. We call the subset of literals in a clause that is unified
during a resolution the unification set.
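Since each resolution is driven by unifying its unification set, it may help to see unification itself sketched for the term language used in these examples (an illustrative sketch, not the book's algorithm: variables are lowercase strings, constants are uppercase, and an application like f(w) is the tuple ("f", "w"); the occurs check is omitted):

```python
def substitute(t, s):
    """Apply substitution s to term t."""
    if isinstance(t, tuple):
        return (t[0],) + tuple(substitute(a, s) for a in t[1:])
    return s.get(t, t)

def unify(a, b, s=None):
    """Return a unifying substitution for terms a and b, or None."""
    if s is None:
        s = {}
    a, b = substitute(a, s), substitute(b, s)
    if a == b:
        return s
    if isinstance(a, str) and a.islower():
        return {**s, a: b}               # bind variable a
    if isinstance(b, str) and b.islower():
        return {**s, b: a}               # bind variable b
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):   # unify arguments in turn
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None                          # clash of constants or functions

# Unifying P(u,y) with P(f(w),w), the Skolem-function case of Example 1:
print(unify(("P", "u", "y"), ("P", ("f", "w"), "w")))
# {'u': ('f', 'w'), 'y': 'w'}
```

Repeated unifications of this kind are what stack Skolem functions up into terms such as f(f(v)) in the answer statement below.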
~G(u,v)                         (negation of goal)
~P(x,y) V ~P(y,z) V G(x,z)      (Axiom 1)
~P(u,y) V ~P(y,v)               (resolvent)
P(f(w),w)                       (Axiom 2)
~P(u,f(v))                      (resolvent)
P(f(w),w)                       (Axiom 2)
NIL
Fig. 5.8 A refutation tree for Example 1.
Note that the clause P(f(w),w) contains a Skolem function, f,
introduced to eliminate the existential quantifier in Axiom 2. (The
function f can be interpreted as a function that is defined to name the
parent of any individual.) The modified proof tree is shown in Figure 5.9.
The negation of the goal wff is transformed into a tautology, and the
resolutions follow those performed in the tree of Figure 5.8. Each
resolution in the modified tree uses unification sets that correspond
precisely to the unification sets of the refutation tree. Again, the unification
sets are underlined.
The proof tree of Figure 5.9 has G(f(f(v)),v) at the root. This clause
represents the wff (Vv)[G(f(f(v)),v)], which is the answer statement.
The answer statement provides an answer to the question "Are there x
and y such that x is the grandparent of y?" The answer in this case
involves the definitional function f. Any v and the parent of the parent of
v are examples of individuals satisfying the conditions of the question.
Again, the answer statement has a form similar to that of the goal wff.
EXAMPLE 2. Here we illustrate the way in which more complex clauses
arising from the negation of the goal wff are transformed into tautologies.
~G(u,v) V G(u,v)                (tautology from the negated goal)
~P(x,y) V ~P(y,z) V G(x,z)      (Axiom 1)
~P(u,y) V ~P(y,v) V G(u,v)      (resolvent)
P(f(w),w)                       (Axiom 2)
~P(u,f(v)) V G(u,f(v))          (resolvent)
P(f(w),w)                       (Axiom 2)
G(f(f(v)),v)                    (answer statement)
Fig. 5.9 The modified proof tree for Example 1.
Consider the following set of clauses or axioms:
~A(x) V F(x) V G(f(x))
~F(x) V B(x)
~F(x) V C(x)
~G(x) V B(x)
~G(x) V D(x)
A(g(x)) V F(h(x))
(In this example, we assume that the variables in these clauses are
standardized apart before performing resolutions. For simplicity, we do not indicate this process explicitly.) We desire to prove, from these
axioms, the goal wff
(3x)(3y){[B(x) Λ C(x)] V [D(y) Λ B(y)]} .
The negation of this wff produces two clauses, each with two literals:
~B(x) V ~C(x)
~B(x) V ~D(x) .
A refutation tree for this combined set of clauses is shown in Figure 5.10.
Now, to transform this tree we must convert the clauses resulting from
the negation of the goal wff (shown in double boxes in Figure 5.10) into
tautologies, by appending their own negations. In this case, the negated
clauses involve Λ symbols. For example, the clause ~B(x) V ~C(x) is
converted to the formula ~B(x) V ~C(x) V [B(x) Λ C(x)]. This
formula is not a clause because of the occurrence of the conjunction
[B(x) Λ C(x)]; nevertheless, we treat this conjunction as a single literal
and proceed formally as if the formula were a clause (none of the
elements of this conjunction are ever in any unification sets). Similarly,
we transform the clause ~D(x) V ~B(x) into the tautology
~D(x) V ~B(x) V [D(x) Λ B(x)].
Performing the resolutions dictated by corresponding unification sets,
we then produce the proof graph shown in Figure 5.11. Here the root
clause is the wff
(Vx){[B(g(x)) Λ C(g(x))] V [D(f(g(x))) Λ B(f(g(x)))]
V [B(h(x)) Λ C(h(x))]} .
~B(x) V ~C(x)  and  ~D(x) V ~B(x)          (from the negated goal)
Resolving ~B(x) V ~C(x) with ~F(x) V B(x) gives ~F(x) V ~C(x),
and then with ~F(x) V C(x) gives ~F(x).
Resolving ~D(x) V ~B(x) with ~G(x) V D(x) gives ~B(x) V ~G(x),
and then with ~G(x) V B(x) gives ~G(x).
Resolving ~F(x) with ~A(x) V F(x) V G(f(x)) gives ~A(x) V G(f(x)),
and then with ~G(x) gives ~A(x).
Resolving ~A(x) with A(g(x)) V F(h(x)) gives F(h(x)),
and then with ~F(x) gives NIL.
Fig. 5.10 A refutation tree for Example 2.
We note that, in this example, the answer statement has a form somewhat
different from the form of the goal wff. The underlined part of the answer
statement is obviously similar to the entire goal wff—with g(x) taking the place of the existentially quantified variable x in the goal wff, and
f(g(x)) taking the place of the existentially quantified variable y in the
goal wff—but, in this example, there is the extra disjunct
[B(h(x)) Λ C(h(x))] in the answer statement. This disjunct, however,
is similar to one of the disjuncts of the goal wff, with h(x) taking the
place of the existentially quantified variable x of the goal wff.
~F(x) V (B(x) Λ C(x))        ~D(x) V ~B(x) V (D(x) Λ B(x))
~G(x) V D(x)
~B(x) V ~G(x) V (D(x) Λ B(x))
~G(x) V B(x)
~G(x) V (D(x) Λ B(x))
~A(x) V F(x) V G(f(x))
~A(x) V G(f(x)) V (B(x) Λ C(x))
~A(x) V (B(x) Λ C(x)) V (D(f(x)) Λ B(f(x)))
A(g(x)) V F(h(x))
F(h(x)) V (B(g(x)) Λ C(g(x))) V (D(f(g(x))) Λ B(f(g(x))))
[B(h(x)) Λ C(h(x))] V [D(f(g(x))) Λ B(f(g(x)))] V [B(g(x)) Λ C(g(x))]
Fig. 5.11 The modified proof tree for Example 2.
In general, if the goal wff itself is in disjunctive normal form, then our
answer-extraction process will produce a statement that is a disjunction of
expressions, each of which is similar in form either to the entire goal wff
or to one or more disjuncts of the entire goal wff. For this reason we claim
that the root clause here can be used as an "answer" to the "question"
represented by the goal wff.
5.4.3. GOAL WFFS CONTAINING UNIVERSALLY
QUANTIFIED VARIABLES
A problem arises when the goal wff contains universally quantified
variables. These universally quantified variables become existentially
quantified in the negation of the goal wff, causing Skolem functions to be
introduced. What is to be the interpretation of these Skolem functions if
they should eventually appear as terms in the answer statement?
We illustrate this problem with another example. Let the clause form
of the axioms be:
C(x,p(x)), meaning "For all x, x is the child of p(x)" (that
is, p is a function mapping a child of an individual into the
individual);
and
~C(x,y) V P(y,x), meaning "For all x and y, if x is the child
of y, then y is the parent of x."
Now suppose we wish to ask the question "For any x, who is the parent
of x?" The goal wff corresponding to this question is:
(Vx)(3y)P(y,x).
Converting the negation of this goal wff to clause form, we obtain, first:
(3x)(Vy)[~P(y,x)],
and then:
~P(y,A),
where A is a Skolem function of no arguments (i.e., a constant)
introduced to eliminate the existential quantifier occurring in the
negation of the goal wff. (The negation of the goal wff alleges that there is
some individual, whom we call "A," that has no parent.) A modified
proof tree with answer statement at the root is shown in Figure 5.12.
Here we obtain the somewhat obtuse answer statement P(p(A),A),
containing the Skolem function A. The interpretation should be that,
regardless of the Skolem function A (hypothesized to spoil the validity of
the goal wff), we are able to prove P(p(A),A). That is, any individual A,
thought to spoil the goal wff, actually satisfies the goal wff. The constant A
could have been a variable without invalidating the proof shown in
Figure 5.12. It can be shown [Luckham and Nilsson (1971)] that in the
answer-extracting process it is correct to replace any Skolem functions in the clauses coming from the negation of the goal wff by new variables.
These new variables will never be substituted out of the modified proof
but will merely trickle down to occur in the final answer statement.
Resolutions in the modified proof will still be limited to those defined by
those unification sets corresponding to the unification sets occurring in the original refutation. Variables might be renamed during some
resolutions so that, possibly,
a variable used in place of a Skolem function
may get renamed and thus might be the "ancestor" of several new
variables in the final answer statement. We illustrate some of the things
that might happen in the latter case by two simple examples.
EXAMPLE 3. Suppose S consists of the single axiom (in clause form):
P(B,w,w) V P(A,u,u),
and suppose we wish to prove the goal wff:
(3x)(Vz)(3y)P(x,z,y).
~P(y,A) V P(y,A)             (tautology from the negated goal)
~C(x,y) V P(y,x)             (axiom)
~C(A,y) V P(y,A)             (resolvent)
C(x,p(x))                    (axiom)
P(p(A),A)                    (answer statement)
Fig. 5.12 A modified proof tree for an answer statement.
A refutation tree is shown in Figure 5.13. Here, the clause resulting from
the negation of the goal wff contains the Skolem function g(x). In Figure
5.13 we also show the modified proof tree in which the variable t is used
in place of the Skolem function g(x). Here we obtain a proof of the
answer statement P(A,t,t) V P(B,z,z) that is identical (except for
variable names) to the single axiom. This example illustrates how
variables introduced by renaming variables in one clause during a
resolution can finally appear in the answer statement.
Refutation tree:
~P(x,g(x),y)                    (negation of goal)
P(B,w,w) V P(A,u,u)             (axiom)
P(B,w,w)                        (resolvent)
~P(x,g(x),y)                    (negation of goal)
NIL
Modified proof tree:
~P(x,t,y) V P(x,t,y)            (tautology, t replacing g(x))
P(B,w,w) V P(A,u,u)             (axiom)
P(B,w,w) V P(A,t,t)             (resolvent)
~P(x,t,y) V P(x,t,y)            (tautology)
P(A,t,t) V P(B,z,z)             (answer statement)
Fig. 5.13 Trees for Example 3.
EXAMPLE 4. As another example, suppose we wish to prove the same
goal wff as before, but now from the single axiom P(z,u,z) V P(A,u,u).
The refutation tree is shown in Figure 5.14. Here the clause coming from
the negation of the goal wff contains the Skolem function g(x).
In Figure 5.14 we also show the modified proof tree in which the
variable w is used in place of the Skolem function g(x). Here we obtain a
proof of the answer statement:
P(z,w,z) V P(A,w,w),
Refutation tree:
~P(x,g(x),y)                    (negation of goal)
P(z,u,z) V P(A,u,u)             (axiom)
P(A,g(z),g(z))                  (resolvent)
~P(x,g(x),y)                    (negation of goal)
NIL
Modified proof tree:
~P(x,w,y) V P(x,w,y)            (tautology, w replacing g(x))
P(z,u,z) V P(A,u,u)             (axiom)
P(z,w,z) V P(A,w,w)             (resolvent)
~P(x,w,y) V P(x,w,y)            (tautology)
P(z,w,z) V P(A,w,w)             (answer statement)
Fig. 5.14 Trees for Example 4.
which is identical (except for variable names) to the single axiom. Careful
analysis of the unifying substitutions in this example will show that
although the resolutions in the modified tree are constrained by
corresponding unification sets, the substitutions used in the modified tree
can be more general than those in the original refutation tree.
In conclusion, the steps of the answer extraction process can be
summarized as follows:
1. A resolution-refutation tree is found by some search process. The
unification subsets of the clauses in this tree are marked.
2. New variables are substituted for any Skolem functions occurring in
the clauses that result from the negation of the goal wff.
3. The clauses resulting from the negation of the goal wff are converted
into tautologies by appending to them their own negations.
4. A modified proof tree is produced modeling the structure of the
original refutation tree. Each resolution in the modified tree uses a
unification set determined by the unification set used by the
corresponding resolution in the refutation tree.
5. The clause at the root of the modified tree is the answer statement
extracted by this process.
Obviously, the answer statement depends upon the refutation from
which it is extracted. Several different refutations might exist for the
same problem; from each refutation we could extract an answer, and,
although some of these answers might be identical, it is possible that
some answer statements would be more general than others. Usually we
have no way of knowing whether or not the answer statement extracted
from a given proof is the most general answer possible. We could, of
course, continue to search for proofs until we found one producing a
sufficiently general answer. Because of the undecidability of the predicate
calculus, though, we would not always know whether we had found all of the possible proofs for a wff, W, from a set, S.
5.5. BIBLIOGRAPHICAL AND HISTORICAL
REMARKS
Various control strategies for resolution refutations are discussed in
Loveland (1978) and Chang and Lee (1973). Ordering strategies have
been proposed by Boyer (1971), Kowalski (1970), Reiter (1971),
Kowalski and Kuehner (1971), Minker, Fishman, and McSkimin (1973),
and Minker and Zanon (1979).
Some examples of large-scale resolution refutation systems are those of
Guard et al. (1969), McCharen et al. (1976), Minker et al. (1974), and
Luckham et al. (1978). [The latter is also described in Allen and Luckham
(1970).] Unlike some of the very earliest resolution systems, many of
these possess control knowledge adequate to prove some rather difficult
theorems.
Our discussion of procedural attachment is based on the work of
Weyhrauch (1980) on FOL. The process for extracting answers from
resolution refutations was originally proposed by Green (1969b). Our treatment of answer extraction
is based on work by Luckham and Nilsson
(1971), who extended the method.
EXERCISES
5.1 Find a linear input form refutation for the following unsatisfiable set
of clauses:
~R V P        S
~R            ~S V U
~U V Q
5.2 Indicate which of the following clauses are subsumed by P (f( x ),y ) :
(a) P(f(A),f(x))VP(z,f(y))
(b) P(z,A)V ~P(A,z)
(c) P(f(f(x)),z)
(d) P(f(z),z)VQ(x)
(e) P(A,A)V P(f(x),y)
5.3 Show by a resolution refutation that each of the following formulas
is a tautology:
(a) (P => Q) => [(R V P) => (R V Q)]
(b) [(P => Q) => P] => P
(c) (~P => P) => P
(d) (P => Q) => (~Q => ~P)
5.4 Prove the validity of the following wffs using the method of
resolution refutation:
(a) (3x){[P(x) => P(A)] Λ [P(x) => P(B)]}
(b) (Vz)[Q(z) => P(z)]
=> (3x){[Q(x) => P(A)] Λ [Q(x) => P(B)]}
(c) (3x)(3y){[P(f(x)) Λ Q(f(B))]
=> [P(f(A)) Λ P(y) Λ Q(y)]}
(d) (3x)(Vy)P(x,y)
=> (Vy)(3x)P(x,y)
(e) (Vx){P(x) Λ [Q(A) V Q(x)]}
=> (3x)[P(x) Λ Q(x)]
5.5 Show by a resolution refutation that the wff (3x)P(x) logically
follows from the wff [P(A1) V P(A2)]. However, the Skolemized form of
(3x)P(x), namely, P(A), does not logically follow from
[P(A1) V P(A2)]. Explain.
5.6 Show that a production system using the resolution rule schema
operating on a global database of clauses is commutative in the sense
defined in chapter 1.
5.7 Find an ancestry-filtered form refutation for the clauses of
EXAMPLE 2 in Section 5.4.2. Compare with the refutation graph of Figure 5.10.
5.8 Referring to the discussion in Section 3.3 on derivation graphs (and
to Exercise 3.4) propose a heuristic search strategy for a resolution
refutation system. On what factors would you base an h function?
5.9 In this exercise we preview a relationship between computation and
deduction that will be more fully explored in chapter 6.
The expression cons(x,y) denotes the list formed by inserting the
element x at the head of the list y. We denote the empty list by NIL; the
list (2) by cons(2,NIL); the list (1,2) by cons(1,cons(2,NIL)); etc. The
expression LAST(x,y) is intended to mean that y is the last element of
the list x. We have the following axioms:
(Vu)LAST(cons(u,NIL),u)
(Vx)(Vy)(Vz)[LAST(y,z) => LAST(cons(x,y),z)]
Prove the following theorem from these axioms by the method of
resolution refutation:
(3v)LAST(cons(2,cons(1,NIL)),v)
Use answer extraction to find v, the last element of the list (2,1). Describe
briefly how this method might be used to compute the last element of
longer lists.
CHAPTER 6
RULE-BASED DEDUCTION SYSTEMS
The way in which a piece of knowledge about a certain field is
expressed by an expert in that field often carries important information
about how that knowledge can best be used. Suppose, for example, that a
mathematician says:
If x and y are both greater than zero, so is the product of x and y.
A straightforward rendering of this statement into predicate calculus is:
(Vx)(Vy){[G(x,0) Λ G(y,0)] => G(times(x,y),0)} .
However, we could instead have used the following completely equivalent
formulation:
(Vx)(Vy){[G(x,0) Λ ~G(times(x,y),0)] => ~G(y,0)} .
The logical content of the mathematician's statement is, of course,
independent of the many equivalent predicate calculus forms that could
represent it. But the way in which English statements are worded often
carries extra-logical, or heuristic, control information. In our example,
the statement seems to indicate that we are to use the fact that x and y are
individually greater than zero to prove that x multiplied by y is greater
than zero.
Much of the knowledge used by AI systems is directly representable by
general implicational expressions. The following statements and expressions are additional examples:
(1) All vertebrates are animals.
(Vx)[VERTEBRATE(x) => ANIMAL(x)]
(2) Everyone in the Purchasing Dept. over 30 is married.
(Vx)(Vy){[WORKS-IN(PURCHASING-DEPT,x)
Λ AGE(x,y) Λ G(y,30)] => MARRIED(x)}
(3) There is a cube on top of every red cylinder.
(Vx){[CYLINDER(x) Λ RED(x)]
=> (3y)[CUBE(y) Λ ON(y,x)]}
If we were to convert expressions such as these into clauses, we would
lose the possibly valuable control information contained in their given
implicational forms. The clausal expression (A V B V C), for example,
is logically equivalent to any of the implications (~A Λ ~B) => C,
(~A Λ ~C) => B, (~B Λ ~C) => A, ~A => (B V C),
~B => (A V C), or ~C => (A V B); but each of these implications
carries its own, rather different, extra-logical control information not
carried at all by the clause form. In this chapter we argue that
implications should be used in the form originally given, as F-rules or
B-rules of a production system.
The use of implicational wffs as rules in a production system prevents
the system from making inferences directly from these rule wffs alone.
All inferences made by a production system result from the application of
production rules to the global database. Therefore each inference can
involve only one rule wff at a time. This restriction has beneficial effects on the efficiency of the system. Additionally, we can show, in general, that converting wffs to clauses can lead to inefficiencies.
Consider the problem of attempting to prove the wff P Λ (Q V R). If
we used a resolution refutation system, we would negate this wff and
convert it to clause form through the following steps:
~[P Λ (Q V R)]
~P V ~(Q V R)
~P V (~Q Λ ~R)
(1) ~P V ~Q
(2) ~P V ~R
Suppose the base set also contains the following clauses:
(3) ~S V P
(4) ~U V S
(5) U
(6) ~W V R
(7) W
One reasonable strategy for obtaining a refutation might involve
selecting clause 1, say, and using it and its descendants in resolutions. We
can resolve clauses 1 and 3 to produce ~S V ~Q, and then use clauses 4
and 5 in sequence to produce ~Q. At this stage, we have "resolved away"
the literal ~P from clause 1. Unfortunately, we now discover that we
have no way to resolve away ~Q, so our search must consider working with clause 2. The previous work in resolving away ~P is wasted because
we must search for a way to resolve it away again, to produce the clause
~R, which is on the way to a final solution. The fact that we had to
resolve away ~P twice is an inefficiency caused by "multiplying out" a
subexpression in the conversion to clause form. If we look at our original
goal, namely, to prove P Λ (Q V R), it is obvious that the component P
needs to be proved only once. Conversion to clauses makes this sort of duplication difficult to avoid.
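The duplication of ~P can be seen directly in a small sketch of the multiplying-out step (a hypothetical tuple representation of formulas; negations are assumed already moved inward, as in the step ~P V (~Q Λ ~R) above):

```python
def clauses(f):
    """Multiply out a negation-normal-form formula into clauses."""
    if isinstance(f, tuple) and f[0] == "and":
        return [c for x in f[1:] for c in clauses(x)]
    if isinstance(f, tuple) and f[0] == "or":
        result = [[]]
        for x in f[1:]:                  # cross product: distribute V over Λ
            result = [c + d for c in result for d in clauses(x)]
        return result
    return [[f]]                         # a literal is a unit clause

# ~P V (~Q Λ ~R), the negated goal with negations moved in:
negated_goal = ("or", ("not", "P"), ("and", ("not", "Q"), ("not", "R")))
print(clauses(negated_goal))
# [[('not', 'P'), ('not', 'Q')], [('not', 'P'), ('not', 'R')]]
```

The distribution step copies ~P into both output clauses, which is exactly why the refutation has to resolve it away twice.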
The systems described in this chapter do not convert wffs to clauses;
they use them in a form close to their original given form. Wffs representing assertional knowledge about the problem are separated into two categories: rules and facts. The rules consist of those assertions given in implicational form. Typically, they express general knowledge about a particular subject area and are used as production rules. The facts are the
assertions that are not expressed as implications. Typically, they represent
specific knowledge relevant to a particular case. The task of the production systems discussed in this chapter is to prove a goal wff from these facts and rules.
In forward systems, implications used as F-rules operate on a global
database of facts until a termination condition involving the goal wff is
achieved. In backward systems, the implications used as B-rules operate
on a global database of goals until a termination condition involving the
facts is achieved. Combined forward and backward operation is also
possible. The details about rule operation and termination are explained
in the next few pages.
This sort of theorem-proving system is a direct system rather than a
refutation system. A direct system is not necessarily more efficient than a
refutation system, but its operation does seem intuitively easier for
people to understand.
Systems of this kind are often called rule-based deduction systems, to
emphasize the importance of using rules to make deductions. AI research
has produced many applications of rule-based systems.
6.1. A FORWARD DEDUCTION SYSTEM
6.1.1. THE AND/OR FORM FOR FACT EXPRESSIONS
We begin by describing a simple type of forward production system
that processes fact expressions of arbitrary form. Then we consider a dual
form of this system, namely, a backward system that is able to prove goal expressions of arbitrary form. Finally, we combine the two in a single system.
Our forward system has as its initial global database a representation
for the given set of facts. In particular, we do not intend to convert these
facts into clause form. The facts are represented as a predicate calculus
wff that has been transformed into an implication-free form that we call
AND/OR form. To convert a wff into AND/OR form, the => symbols (if
there are any) are eliminated, using the equivalence of (W1 => W2) and
(~W1 V W2). (Typically, there will be few => symbols among the facts
because implications are preferably represented as rules.) Next, negation
symbols are moved in (using de Morgan's laws) until their scopes include
at most a single predicate. The resulting expression is then Skolemized and prenexed; variables within the scopes of universal quantifiers are standardized by renaming, existentially quantified variables are replaced
by Skolem functions, and the universal quantifiers are dropped. Any
variables remaining are assumed to have universal quantification.
For example, the fact expression:
(3u)(Vv){Q(v,u) Λ ~[[R(v) V P(v)] Λ S(u,v)]}
is converted to
Q(v,A) Λ {[~R(v) Λ ~P(v)] V ~S(A,v)} .
Variables can be renamed so that the same variable does not occur in
different (main) conjuncts of the fact expression. Renaming variables in
our example yields the expression:
Q(w,A) Λ {[~R(v) Λ ~P(v)] V ~S(A,v)} .
Note that the variable v, in Q(v,A), can be replaced by a new variable, w,
but that neither occurrence of the variable v in the conjuncts of the
embedded conjunction, [~R(v) Λ ~P(v)], can be renamed because
this variable also occurs in the disjunct ~S(A,v). An expression in
AND/OR form consists of subexpressions of literals connected by Λ and
V symbols. Note that an expression in AND/OR form is not in clause
form. It is much closer to the form of the original expression. In
particular, subexpressions are not multiplied out.
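The conversion steps just described (move negations in, Skolemize, drop the universal quantifiers) can be sketched as follows, reproducing the example conversion above; the tuple representation and helper names are this sketch's own, implication elimination is omitted since the example contains no => symbols, and the final renaming of variables across conjuncts is a further step not shown:

```python
SKOLEM = iter(["A", "B", "C"])   # fresh Skolem names for this sketch

def nnf(f):
    """Move negations inward, flipping connectives and quantifiers."""
    if f[0] == "not":
        g = f[1]
        if g[0] == "not":    return nnf(g[1])
        if g[0] == "and":    return ("or",) + tuple(nnf(("not", x)) for x in g[1:])
        if g[0] == "or":     return ("and",) + tuple(nnf(("not", x)) for x in g[1:])
        if g[0] == "forall": return ("exists", g[1], nnf(("not", g[2])))
        if g[0] == "exists": return ("forall", g[1], nnf(("not", g[2])))
        return f                                  # negated literal
    if f[0] in ("and", "or"):
        return (f[0],) + tuple(nnf(x) for x in f[1:])
    if f[0] in ("forall", "exists"):
        return (f[0], f[1], nnf(f[2]))
    return f                                      # literal

def skolemize(f, univ=(), s=None):
    """Replace existential variables by Skolem terms (a constant when no
    universal quantifier encloses them) and drop universal quantifiers."""
    if s is None:
        s = {}
    if f[0] == "forall":
        return skolemize(f[2], univ + (f[1],), s)
    if f[0] == "exists":
        name = next(SKOLEM)
        term = (name,) + univ if univ else name
        return skolemize(f[2], univ, {**s, f[1]: term})
    if f[0] in ("and", "or"):
        return (f[0],) + tuple(skolemize(x, univ, s) for x in f[1:])
    if f[0] == "not":
        return ("not", skolemize(f[1], univ, s))
    return (f[0],) + tuple(s.get(a, a) for a in f[1:])   # flat literal

# (3u)(Vv){Q(v,u) Λ ~[[R(v) V P(v)] Λ S(u,v)]}
fact = ("exists", "u", ("forall", "v",
        ("and", ("Q", "v", "u"),
         ("not", ("and", ("or", ("R", "v"), ("P", "v")), ("S", "u", "v"))))))

andor = skolemize(nnf(fact))
print(andor)   # i.e., Q(v,A) Λ {[~R(v) Λ ~P(v)] V ~S(A,v)}
```

Because u is not inside any universal quantifier, it is replaced by the Skolem constant A; an existential variable nested inside universal quantifiers would instead get a Skolem term built from those universal variables.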
6.1.2. USING AND/OR GRAPHS TO REPRESENT FACT
EXPRESSIONS
An AND/OR graph can be used to represent a fact expression in
AND/OR form. For example, the AND/OR tree of Figure 6.1 repre
sents the fact expression that we just put into AND/OR form above.
Each subexpression of the fact expression is represented by a node in the
graph. Disjunctively related subexpressions, E1, ..., Ek, of a fact,
(E1 V ... V Ek), are represented by descendant nodes connected to
their parent node by a k-connector. Each conjunctive subexpression,
E1, ..., En, of an expression, (E1 Λ ... Λ En), is represented by a single
descendant node connected to the parent node by a 1-connector. It may
seem surprising that we use k-connectors (a conjunctive notion) to
separate disjunctions in fact expressions. We see later why we have
adopted this convention.
The leaf nodes of the AND/OR graph representation of a fact
expression are labeled by the literals occurring in the expression. We call
that node in the graph labeling the entire fact expression, the root node. It
has no ancestors in the graph.
An interesting property of the AND/OR graph representation of a wff
is that the set of clauses into which that wff could have been converted
can be read out as the set of solution graphs (terminating in leaf nodes) of
the AND/OR graph. Thus, the clauses that result from the expression
Q(w,A) Λ {[~R(v) Λ ~P(v)] V ~S(A,v)} are:
Q(w,A)
~S(A,v) V ~P(v)
~S(A,v) V ~R(v)
Each clause is obtained as the disjunction of the literals at the leaf nodes
of one of the solution graphs of Figure 6.1. We might therefore think of the AND/OR graph as a compact representation for a set of clauses. [The
AND/OR graph representation for an expression is actually slightly less
general than the clause representation, however, because not multiplying
out common subexpressions can prevent certain variable renamings that
are possible in clause form. In the last of the clauses above, for example,
the variable v can be renamed u throughout the clause. This renaming
cannot be expressed in the AND/OR graph, which results in loss of generality that can sometimes cause difficulties (discussed later in the chapter).]
Fig. 6.1 An AND/OR tree representation of a fact expression.
A FORWARD DEDUCTION SYSTEM
Usually, we draw our AND/OR graph representations of fact expressions
"upside down." Later we also use AND/OR graph representations
of goal wffs; these are displayed in the usual manner, "rightside up."
When we represent wffs by AND/OR graphs, we are using AND/OR
graphs for a quite different purpose than that described in chapters 1 and
3. There, AND/OR graphs were representations used by the control
strategy to monitor the progress of decomposable production systems.
Here we are using them as representational forms for the global database
of a production system. Various of the processes to be described in this
chapter involve transformations and tests on the AND/OR graph as a
whole, and thus it is appropriate to use the entire AND/OR graph as the
global database.
6.1.3. USING RULES TO TRANSFORM AND/OR GRAPHS
The production rules used by our forward production system are
applied to AND/OR graph structures to produce transformed graph structures. These rules are based on the implicational wffs that represent general assertional knowledge about
a problem domain. For simplicity of
explanation, we limit the types of wffs that we allow as rules to those of the form:

L ⇒ W,

where L is a single literal, W is an arbitrary wff (assumed to be in
AND/OR form), and any variables occurring in the implication are
assumed to have universal quantification over the entire implication.
Variables in the facts and rules are standardized apart so that no variable
occurs in more than one rule and so that the rule variables are different
than the fact variables.
The restriction to single-literal antecedents considerably simplifies the
matching process in applying rules to AND/OR graphs. This restriction
is a bit less severe than it appears because implications having
antecedents consisting of a disjunction of literals can be written as multiple
rules; for example, the implication (L1 ∨ L2) ⇒ W is equivalent to the
pair of rules L1 ⇒ W and L2 ⇒ W. In any case, the restrictions on rule
forms that we impose in this chapter do not seem to cause practical
limitations on the utility of the resulting deduction systems.
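The antecedent-splitting rewrite just described can be sketched in a few lines of Python. The tuple encoding of a disjunction and the function name are my own illustration, not the book's notation:

```python
def split_rule(antecedent, consequent):
    """Rewrite (L1 v ... v Lk) => W as the k rules Li => W.
    A disjunctive antecedent is encoded as ("or", L1, ..., Lk);
    a single-literal antecedent is just the literal itself (a string)."""
    if isinstance(antecedent, tuple) and antecedent[0] == "or":
        literals = antecedent[1:]
    else:
        literals = [antecedent]
    return [(lit, consequent) for lit in literals]
```

For example, split_rule(("or", "L1", "L2"), "W") yields the pair of rules [("L1", "W"), ("L2", "W")].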
Any implication with a single-literal antecedent, regardless of its
quantification, can be put in a form in which the scope of quantification is
the entire implication by a process that first "reverses" the quantification
of those variables local to the antecedent and then Skolemizes all
existential variables. For example, the wff

(∀x){[(∃y)(∀z)P(x,y,z)] ⇒ (∀u)Q(x,u)}
can be transformed through the following steps:
(1) Eliminate (temporarily) implication symbol.

(∀x){~[(∃y)(∀z)P(x,y,z)] ∨ (∀u)Q(x,u)}

(2) Reverse quantification of variables in first disjunct
by moving negation symbol in.

(∀x){(∀y)(∃z)[~P(x,y,z)] ∨ (∀u)Q(x,u)}

(3) Skolemize.

(∀x){(∀y)[~P(x,y,f(x,y))] ∨ (∀u)Q(x,u)}

(4) Move all universal quantifiers to the front and drop.

~P(x,y,f(x,y)) ∨ Q(x,u)

(5) Restore implication.

P(x,y,f(x,y)) ⇒ Q(x,u)
To explain how rules of this sort are applied to AND/OR graphs, we
first consider the variable-free propositional calculus case. A rule of the
form L ⇒ W (where L is a literal and W is a wff in AND/OR form) can
be applied to any AND/OR graph having a leaf node, n, labeled by literal
L. The result is a new AND/OR graph in which node n now has an
outgoing 1-connector to a descendant node (also labeled by L) which is
the root node of that AND/OR graph structure representing W.
As an example, consider the rule
S ⇒ (X ∧ Y) ∨ Z.
We can apply this rule to the AND/OR graph of Figure 6.2 at the leaf
node labeled by S. The result is the graph structure shown in Figure 6.3.
The two nodes labeled by S are connected by an arc that we call a match
arc.
Before applying a rule, an AND/OR graph, such as that of Figure 6.2,
represented a particular fact expression. (Its set of solution graphs
terminating in leaf nodes represented the clause form of the fact
expression.) We intend that the graph resulting after rule application
represent both the original fact and a fact expression that is inferable
from the original one and the rule.
Suppose we have a rule L ⇒ W, where L is a literal and W is a wff.
From this rule and from the fact expression F(L), we can infer the
expression F(W) derived from F(L) by replacing all of the occurrences
of L in F by W. When using a rule L ⇒ W to transform the AND/OR
graph representation of F(L) in the manner described, we produce a
new graph that can be considered to contain a representation of F(W);
that is, its set of solution graphs terminating in leaf nodes represents the
set of clauses in the clause form of F(W). This set of clauses includes the
entire set that would be produced by performing all possible resolutions
on L between the clause form of F(L) and the clause form of L ⇒ W.
Fig. 6.2 An AND/OR graph with no variables.
Consider the example of Figure 6.3. The clause form of the rule
S ⇒ [(X ∧ Y) ∨ Z] is:

~S ∨ X ∨ Z

and

~S ∨ Y ∨ Z.

Those clauses in the clause form of

[(P ∨ Q) ∧ R] ∨ [S ∧ (T ∨ U)]

that would resolve (on S) with either of the two rule clauses are:

P ∨ Q ∨ S

and

R ∨ S.
Fig. 6.3 An AND/OR graph resulting from applying a rule.
The complete set of resolvents that can be obtained from these four
clauses by resolving on S is:

X ∨ Z ∨ P ∨ Q
Y ∨ Z ∨ P ∨ Q
R ∨ Y ∨ Z
R ∨ X ∨ Z
All of these are included in the clauses represented by the solution graphs
of Figure 6.3.
From this example, and from the foregoing discussion, we see that the
process of applying a rule to an AND/OR graph accomplishes in an
extremely economical fashion what might otherwise have taken several
resolutions.
We want the AND/OR graph resulting from a rule application to
continue to represent the original fact expression as well as the inferred
one. This effect is obtained by having identically labeled nodes on either
side of the match arc. After a rule is applied at a node, this node is no
longer a leaf node of the graph, but it is still labeled by a single literal and
may continue to have rules applied to it. We call any node in the graph
labeled by a single literal a literal node. The set of clauses represented by
an AND/OR graph is the set that corresponds to the set of solution
graphs terminating in literal nodes of the graph.
All of our discussion so far about rule applications has been for the
propositional calculus case in which the expressions do not contain
variables. Soon we will describe how expressions with variables are dealt with, but first we discuss the termination condition for the variable-free
case.
6.1.4. USING THE GOAL WFF FOR TERMINATION
The object of the forward production system that we have described is
to prove some goal wff from a fact wff and a set of rules. This forward
system is limited in the type of goal expressions that it can prove;
specifically, it can prove only those goal wffs whose form is a disjunction
of literals. We represent this goal wff by a set of literals and assume that
the members of this set are disjunctively related. (Later, we describe a
backward system and a bidirectional system that are not limited to such
Rules: A ⇒ C ∧ D; B ⇒ E ∧ G.

Fig. 6.4 An AND/OR graph satisfying termination.
simple goal expressions.) Goal literals (as well as rules) can be used to add
descendants to the AND/OR graph. When one of the goal literals
matches a literal labeling a literal node, n, of the graph, we add a new
descendant of node n, labeled by the matching goal literal, to the graph.
This descendant is called a goal node. Goal nodes are connected to their
parents by match arcs. The production system successfully terminates when it produces an AND/OR graph containing a solution graph that
terminates in goal nodes. (At termination, the system has essentially inferred a clause identical to some subpart of
the goal clause.)
In our illustrations of AND/OR graphs, we represent matches
between literal nodes and goal nodes in the same way that we represent matches between literal nodes and nodes representing rule antecedents.
We show, in Figure 6.4, an AND/OR graph that satisfies a termination
condition based on the goal wff (C ∨ G). Note the match arcs to the goal
nodes.
The AND/OR solution graph of Figure 6.4 can also be interpreted as a
proof of the goal expression (C ∨ G) using a "reasoning-by-cases"
strategy. Initially, we have the fact expression, (A ∨ B). Since we don't
know whether A or B is true, we might attempt first to prove the goal by
assuming that A is true and then attempt to prove the goal assuming B is
true. If both proofs succeed, we have a proof based simply on the
disjunction (A ∨ B), and it wouldn't matter which of A or B was true. In
Figure 6.4, the descendants of the node labeled by (A ∨ B) are
connected to it by a 2-connector; thus both of these descendants must
occur (as they indeed do) in the final solution graph. Now we can see the
intuitive reason for using k-connectors to separate disjunctively related
subexpressions in facts. If a solution graph for node n includes any
descendant of n through a certain k-connector, it must include all of the
descendants through this k-connector.
The production system that we have described, based on applying
rules to AND/OR graphs, is commutative; therefore an irrevocable
control regime suffices. The system continues to apply applicable rules until an AND/OR graph containing a solution graph is produced.
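For this variable-free case, the whole machinery (reading clauses off solution graphs, applying a rule at a literal node, and testing termination against a goal clause) can be sketched compactly. The encoding is my own, not the book's: a literal is a string, and ("and", ...) / ("or", ...) tuples stand for subexpressions joined by 1-connectors and k-connectors, respectively.

```python
from itertools import product

def clauses(expr):
    """Clauses read off the solution graphs terminating in literal nodes."""
    if isinstance(expr, str):                        # literal node
        return {frozenset([expr])}
    op, *kids = expr
    kid_sets = [clauses(k) for k in kids]
    if op == "and":                                  # 1-connectors: each conjunct alone
        return set().union(*kid_sets)
    # "or" is a k-connector: a solution graph takes one clause from EVERY disjunct
    return {frozenset().union(*combo) for combo in product(*kid_sets)}

def apply_rule(expr, lit, w):
    """Apply rule lit => w at every literal node labeled lit.  The node keeps
    its label (so the old clauses survive) and also contributes w's clauses."""
    if isinstance(expr, str):
        return ("and", expr, w) if expr == lit else expr
    op, *kids = expr
    return (op, *(apply_rule(k, lit, w) for k in kids))

def proves(fact, rules, goal_literals):
    """Irrevocable control: apply every rule once, then test termination.
    Success: some clause is a subdisjunction of the goal clause."""
    for lit, w in rules:
        fact = apply_rule(fact, lit, w)
    return any(c <= set(goal_literals) for c in clauses(fact))
```

Run on the data of Figure 6.4, with fact (A ∨ B), rules A ⇒ C ∧ D and B ⇒ E ∧ G, and goal (C ∨ G), `proves` succeeds.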
6.1.5. EXPRESSIONS CONTAINING VARIABLES
We now describe forward production systems that deal with expressions
containing variables. We have already mentioned that variables in facts and rules have implicit universal quantification. We assume that any existential variables in facts and rules have been Skolemized.
For goal wffs containing existentially or universally quantified
variables, we use a Skolemization process that is dual to that used for facts
and rules. Universal variables in goals are replaced by Skolem functions
of the existential variables in whose scopes these universal variables
reside. Recall that in resolution refutation systems, goal wffs are negated,
converting universal quantifiers into existential ones, and vice versa.
Existential variables in these expressions are then replaced by Skolem
functions. We achieve the same effect in direct proof systems if we
replace universally quantified goal variables by Skolem functions. The
existential quantifiers in the Skolemized goal wff can then be dropped,
and variables remaining in goal expressions have assumed existential
quantification.
We are still restricting our goal wffs to those that are a disjunction of
literals. After Skolemizing a goal wff, we can rename its variables so that
the same variable does not occur in more than one disjunct of the goal
wff. (Recall the equivalence between the wff (∃x)[W1(x) ∨ W2(x)]
and the wff [(∃x)W1(x) ∨ (∃y)W2(y)].)
Now we consider the process of applying a rule of the form (L ⇒ W)
to an AND/OR graph, where L is a literal, W is a wff in AND/OR form,
and all expressions might contain variables. The rule is applicable if the
AND/OR graph contains a literal node labeled by a literal L′ that unifies with L. Suppose
the mgu is u. Then, application of this rule extends the graph (just as in the propositional calculus case) by creating a match arc directed from the node labeled by L′ in the AND/OR graph to a new descendant node labeled by L. This descendant node is the root node of the AND/OR
graph representation of Wu. We also label the match arc by the mgu, u.
As an example, consider the fact expression
{P(x,y) ∨ [Q(x,A) ∧ R(B,y)]}.
The AND/OR graph representation for this fact is shown in Figure 6.5.
Now, if we apply the rule:
P(A,B) ⇒ [S(A) ∨ X(B)]
to this AND/OR graph, we obtain the AND/OR graph shown in Figure
6.6.
The AND/OR graph shown in Figure 6.6 has two solution graphs that
terminate in leaf nodes and that include the newly added match arc. The
clauses corresponding to these solution graphs are:
S(A) ∨ X(B) ∨ Q(A,A)

and

S(A) ∨ X(B) ∨ R(B,B).
In constructing these clauses, we have applied the mgu, u, to the literals
occurring at the leaf nodes of the solution graphs. These clauses are just
those that could be obtained from the clause form of the fact and the rule
wffs by performing resolutions on P.
The AND/OR graph of Figure 6.6 continues to represent the original
fact expression, because we take it generally to represent all of those
clauses corresponding to solution graphs terminating in literal nodes.
After more than one rule has been applied to an AND/OR graph, it
contains more than one match arc. In particular, any solution graph
(terminating in literal nodes) can have more than one match arc. In
computing the sets of clauses represented by an AND/OR graph
containing several match arcs, we count only those solution graphs
terminating in literal nodes having consistent match arc substitutions.
The clause represented by a consistent solution graph is obtained by
applying a special substitution, called the unifying composition, to the
disjunction of the literals labeling its terminal (literal) nodes.
Fig. 6.5 An AND/OR graph representation of a fact expression containing variables.

Fig. 6.6 An AND/OR graph resulting after applying a rule containing variables.
The notions of a consistent set of substitutions and a unifying
composition of substitutions are defined as follows. Suppose we have a
set of substitutions, {u1, ..., un}. Each ui is, in turn, a set of pairs:

ui = {ti1/vi1, ..., tim(i)/vim(i)}

where the ts are terms and the vs are variables. From the (u1, ..., un), we
define two expressions:

U1 = (v11, ..., v1m(1), ..., vn1, ..., vnm(n))

and

U2 = (t11, ..., t1m(1), ..., tn1, ..., tnm(n)).

The substitutions (u1, ..., un) are called consistent if and only if U1 and
U2 are unifiable. The unifying composition, u, of (u1, ..., un) is the most
general unifier of U1 and U2.
Some examples of unifying compositions [Sickel (1976) and Chang
and Slagle (1979)] are given in Table 6.1.
Table 6.1
Examples of Unifying Compositions of Substitutions

u1                          u2                    u
{A/x}                       {B/x}                 inconsistent
{x/y}                       {y/z}                 {x/y, x/z}
{f(z)/x}                    {f(A)/x}              {f(A)/x, A/z}
{x/y, x/z}                  {A/z}                 {A/x, A/y, A/z}
{}                          {}                    {}
{g(y)/x}                    {f(x)/y}              inconsistent
{f(g(x1))/x3, f(x2)/x4}     {x4/x3, g(x1)/x2}     {f(g(x1))/x3, f(g(x1))/x4, g(x1)/x2}
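The definition can be checked mechanically: unify the tuple of variables U1 against the tuple of terms U2 pair by pair, failing on a clash or an occurs-check violation. The sketch below is my own (variables as strings, compound terms and constants as tuples, hypothetical helper names); note that `unify` keeps bindings in triangular form, so `resolve` is needed to read off fully instantiated terms.

```python
def is_var(t):
    return isinstance(t, str)

def walk(t, s):
    """Chase a variable through the bindings accumulated so far."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    return t == v or (not is_var(t) and any(occurs(v, a, s) for a in t[1:]))

def unify(t1, t2, s):
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        return None if occurs(t1, t2, s) else {**s, t1: t2}
    if is_var(t2):
        return unify(t2, t1, s)
    if t1[0] != t2[0] or len(t1) != len(t2):      # mismatched function symbols
        return None
    for a, b in zip(t1[1:], t2[1:]):
        s = unify(a, b, s)
        if s is None:
            return None
    return s

def resolve(t, s):
    """Apply the bindings fully, yielding the instantiated term."""
    t = walk(t, s)
    return t if is_var(t) else (t[0], *(resolve(a, s) for a in t[1:]))

def unifying_composition(subs):
    """Unify v against t for every pair t/v; None means inconsistent."""
    s = {}
    for u in subs:
        for v, t in u.items():
            s = unify(v, t, s)
            if s is None:
                return None
    return s
```

Row 1 of Table 6.1, `unifying_composition([{"x": ("A",)}, {"x": ("B",)}])`, returns None (inconsistent), and for row 3 the resulting bindings resolve x to f(A) and z to A.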
It is not difficult to show that the unifying composition operation is
associative and commutative. Thus, the unifying composition associated
with a solution graph does not depend on the order in which match arcs
were generated while constructing the graph. (Recall that the composition
of substitutions is associative but not commutative.)
It is reasonable to expect that a solution graph must have a set of
consistent match arc substitutions in order for its corresponding clauses
to be ones that can be inferred from the original fact expression and the
rules. Suppose, for example, that we have the fact

P(x) ∨ Q(x)

and the two rules

P(A) ⇒ R(A)

and

Q(B) ⇒ R(B).

Application of both of these rules would produce the AND/OR graph
shown in Figure 6.7. Even though this graph contains a solution graph
with literal nodes labeled by R(A) and R(B), this graph has inconsistent
substitutions. Therefore, the clause [R(A) ∨ R(B)] is not one of those
represented by the AND/OR graph shown in Figure 6.7. Of course,
neither could this clause be derived by resolution from the clause form of
the fact and rule wffs.
Fig. 6.7 An AND/OR graph with inconsistent substitutions.
The graph of Figure 6.7 does, however, contain a representation for the
clause [R(A) ∨ Q(A)]. It is the clause obtained by applying the
substitution {A/x} (which is the trivial unifying composition of the set
containing the single element {A/x}) to the expression
[R(A) ∨ Q(x)]. This expression, in turn, corresponds to the solution
graph terminating in the literal nodes labeled by R(A) and Q(x).
If the same rule is applied more than once, it is important that each
application use renamed variables. Otherwise, we may needlessly
overconstrain the substitutions.
The AND/OR graph can also be extended by using the goal literals.
When a goal literal, L, unifies with a literal L′ labeling a literal node, n, of
the graph, we can add a match arc (labeled by the mgu) directed from
node n to a new descendant goal node labeled by L. The same goal literal
can be used a number of times, creating multiple goal nodes, but each use
must employ renamed variables.
The process of extending the AND/OR graph by applying rules or by
using goal literals successfully terminates when a consistent solution graph is produced having goal nodes for all of its terminal nodes. The
production system has then proved that goal (sub)disjunction obtained by applying the unifying composition of the final solution graph to the
disjunction of the literals labeling the goal nodes in the solution graph.
We illustrate how this forward production system operates by a simple
example. Suppose we have the following fact and rules:
Fido barks and bites, or Fido is not a dog:

~DOG(FIDO) ∨ [BARKS(FIDO) ∧ BITES(FIDO)]

All terriers are dogs:

R1: ~DOG(x) ⇒ ~TERRIER(x)

(We use the contrapositive form of the implication here.)

Anyone who barks is noisy:

R2: BARKS(y) ⇒ NOISY(y)
Fig. 6.8 An AND/OR graph for the "Terrier" problem.
Now suppose we want to prove that there exists someone who is not a
terrier or who is noisy. The goal wff representing the statement to be
proved is:
~TERRIER(z) ∨ NOISY(z).
Recall that z is an existentially quantified variable.
The AND/OR graph for this problem is shown in Figure 6.8. The goal
nodes are shown by double-boxed expressions, and rule applications are
labeled by the rule numbers. A consistent solution graph within this
AND/OR graph has the substitutions {FIDO/x}, {FIDO/y},
{FIDO/z}. The unifying composition of these substitutions is simply
{ FIDO/x, FIDO/y, FIDO/z}. Applying this unifying composition to
the goal literals used in the solution yields
~TERRIER(FIDO) ∨ NOISY(FIDO),
which is the instance of the goal wff that our system has proved. This
instantiated expression can thus be taken as the answer statement.
There are several extensions that we could make to this simple forward
production system. We have not yet explained how we might achieve
resolutions between components of the fact expressions—sometimes allowing certain intrafact resolutions is useful (and necessary); nor have
we described how we might proceed in those cases in which a fact
(sub)expression might be needed more than once in the same
proof, with
differently named variables in each usage. Of course, there is also the very
important problem of controlling this production system so that it finds consistent solution graphs efficiently. We postpone further consideration of these matters until they arise again in the backward system, described
next.
6.2. A BACKWARD DEDUCTION SYSTEM
An important property of logic is the duality between assertions and
goals in theorem-proving systems. We have already seen an instance of
this principle of duality in resolution refutation systems. There the goal
wff was negated, converted to clause form, and added to the clause form
of the assertions. Duality between assertions and goals allows the negated goal to be treated as if it were an assertion. Resolution refutation systems
apply resolution to the combined set of clauses until the empty clause
(denoting F) is produced.
We could also have described a dual resolution system that operates on
goal expressions. To prepare wffs for such a system, we would first negate
the wff representing the assertions, convert this negated wff to the dual of
clause form (namely, a disjunction of conjunctions of literals), and add
these clauses to the dual clause form of the goal wff. Such a system would
then apply a dual version of resolution until the empty clause (now
denoting T) was produced.
We can also imagine mixed systems in which three different forms of
resolution are used, namely, resolution between assertions, resolution
between goal expressions, and resolution between an assertion and a goal. The forward system described in the last section might be regarded
as one of
these mixed systems because it involved matching a fact literal
in the AND/OR graph with a goal literal. The backward production system, described next, is also a mixed system that, in some respects, is
dual to the forward system just described. Its operation involves the same
sort of representations and mechanisms that were used in the forward
system.
6.2.1. GOAL EXPRESSIONS IN AND/OR FORM
Our backward system is able to deal with goal expressions of arbitrary
form. We first convert the goal wff to AND/OR form by the same sort of
process used to convert a fact expression. We eliminate ⇒ symbols, move
negation symbols in, Skolemize universal variables, and drop existential
quantifiers. Variables remaining in the AND/OR form of a goal
expression have assumed existential quantification.
For example, the goal expression:

(∃y)(∀x){P(x) ⇒ [Q(x,y) ∧ ~[R(x) ∧ S(y)]]}

is converted to

~P(f(y)) ∨ {Q(f(y),y) ∧ [~R(f(y)) ∨ ~S(y)]},

where f(y) is a Skolem function.

Standardizing variables apart in the (main) disjuncts of the goal yields:

~P(f(z)) ∨ {Q(f(y),y) ∧ [~R(f(y)) ∨ ~S(y)]}.

(Note that the variable y cannot be renamed within the disjunctive
subexpression to give each disjunct there a different variable.)
Goal wffs in AND/OR form can be represented as AND/OR graphs.
But with goal expressions, k-connectors in these graphs are used to
separate conjunctively related subexpressions. The AND/OR graph
representation for the example goal wff used above is shown in Figure 6.9.

Fig. 6.9 An AND/OR graph representation of a goal wff.

The leaf nodes of this graph are labeled by the literals of the goal
expression. In AND/OR goal graphs, we call any descendant of the root
node a subgoal node. The expressions labeling such descendant nodes
are called subgoals.
The set of clauses in the clause form representation of this goal wff can
be read from the set of solution graphs terminating in leaf nodes:

~P(f(z))
Q(f(y),y) ∧ ~R(f(y))
Q(f(y),y) ∧ ~S(y)
Goal clauses are conjunctions of literals and the disjunction of these
clauses is the clause form of the goal wff.
6.2.2. APPLYING RULES IN THE BACKWARD SYSTEM
The B-rules for this system are based on assertional implications. They
are assertions just as were the F-rules of the forward system. Now,
however, we restrict these B-rules to expressions of the form
W ⇒ L,
where W is any wff (assumed to be in AND/OR form), L is a literal, and
the scope of quantification of any variables in the implication is the entire
implication. [Again, restricting B-rules to implications of this form
simplifies matching and does not cause important practical difficulties.
Also, an implication such as W ⇒ (L1 ∧ L2) can be converted to the
two rules W ⇒ L1 and W ⇒ L2.]
Such a B-rule is applicable to an AND/OR graph representing a goal
wff if that graph contains a literal node labeled by a literal L′ that unifies with L.
The result of applying the rule is to add a match arc from the node
labeled by L′ to a new descendant node labeled by L. This new node is
the root node of the AND/OR graph representation of Wu, where u is the
mgu of L and L′. This mgu labels the match arc in the transformed graph.
Our explanation of the appropriateness of this operation is dual to the
explanation for applying an F-rule to a fact AND/OR graph. The
assertional rule W ⇒ L can be negated and added (disjunctively) to the
goal wff. The negated form is (W ∧ ~L). Performing all (goal)
resolutions on L between the clauses deriving from (W ∧ ~L) and the goal wff
clauses produces a set of resolvents that are identical to clauses
included among those associated with the consistent solution graphs of
the transformed AND/OR graph.
6.2.3. THE TERMINATION CONDITION
The fact expressions used by our backward system are limited to those
in the form of a conjunction of literals. Such expressions can be
represented as a set of literals. Analogous to the forward system, when a
fact literal matches a literal labeling a literal node of the graph, a
corresponding descendant fact node can be added to the graph. This fact
node is linked to the matching subgoal literal node by a match arc labeled
by the mgu. The same fact literal can be used multiple times
(with different variables in each use) to create multiple fact nodes.
The condition for successful termination for our backward system is
that the AND/OR graph contain a consistent solution graph terminating
in fact nodes. Again, a consistent solution graph is one in which the match
arc substitutions have a unifying composition.
Let us consider a simple example of how the backward system works.
Fig. 6.10 A consistent solution graph for a backward system.
Let the facts be:

F1: DOG(FIDO)
F2: ~BARKS(FIDO)
F3: WAGS-TAIL(FIDO)
F4: MEOWS(MYRTLE)

and let us use the following rules:

R1: [WAGS-TAIL(x1) ∧ DOG(x1)] ⇒ FRIENDLY(x1)
R2: [FRIENDLY(x2) ∧ ~BARKS(x2)] ⇒ ~AFRAID(y2,x2)
R3: DOG(x3) ⇒ ANIMAL(x3)
R4: CAT(x4) ⇒ ANIMAL(x4)
R5: MEOWS(x5) ⇒ CAT(x5)
Suppose we want to ask if there are a cat and a dog such that the cat is
unafraid of the dog. The goal expression is:
(∃x)(∃y)[CAT(x) ∧ DOG(y) ∧ ~AFRAID(x,y)].
We show a consistent solution graph for this problem in Figure 6.10.
The fact nodes are shown double-boxed, and rule applications are
labeled by the rule number. To verify the consistency of this solution
graph, we compute the unifying composition of all of the substitutions
labeling the match arcs in the solution graph. For Figure 6.10, we must
compute the unifying composition of ({x/x5}, {MYRTLE/x} 9
{FIDO/y}, {x/y2, y/x2], {FIDO/y}, {y/xl} 9 {FIDO/y} 9
{FIDO/y}). The result is {MYRTLE:/x5 9 MYRTLE/x, FIDO/y,
MYRTLEi>y2, FIDO/x2, FIDO/xl}. This unifying composition ap
plied to the goal expression yields the answer statement
[CAT(MYRTLE) A DOG(FIDO)
A -AFRAID(MYRTLE,FIDO)] .
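The search that produced this answer can be sketched as a miniature backward chainer over the cat-and-dog facts and rules. The flat-tuple encoding of literals, the renaming counter, and the function names are my own illustration, not notation from the text (and this simple unifier omits the occurs check); variables are lowercase strings and constants are uppercase.

```python
import itertools

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def subst(term, s):
    if is_var(term):
        return subst(s[term], s) if term in s else term
    if isinstance(term, tuple):
        return tuple(subst(a, s) for a in term)
    return term

def unify(a, b, s):
    a, b = subst(a, s), subst(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

counter = itertools.count()

def rename(rule):
    """Each use of a rule gets fresh variable names."""
    head, body = rule
    vs = {a for lit in [head, *body] for a in lit if is_var(a)}
    r = {v: f"{v}_{next(counter)}" for v in vs}
    ren = lambda lit: tuple(r.get(a, a) for a in lit)
    return ren(head), [ren(l) for l in body]

def prove(goals, facts, rules, s):
    """Yield every consistent solution substitution, depth-first."""
    if not goals:
        yield s
        return
    first, rest = goals[0], goals[1:]
    for fact in facts:                       # match against a fact node
        s2 = unify(first, fact, s)
        if s2 is not None:
            yield from prove(rest, facts, rules, s2)
    for rule in rules:                       # match against a B-rule consequent
        head, body = rename(rule)
        s2 = unify(first, head, s)
        if s2 is not None:
            yield from prove(body + rest, facts, rules, s2)

facts = [("DOG", "FIDO"), ("~BARKS", "FIDO"),
         ("WAGS-TAIL", "FIDO"), ("MEOWS", "MYRTLE")]
rules = [(("FRIENDLY", "x1"), [("WAGS-TAIL", "x1"), ("DOG", "x1")]),
         (("~AFRAID", "y2", "x2"), [("FRIENDLY", "x2"), ("~BARKS", "x2")]),
         (("ANIMAL", "x3"), [("DOG", "x3")]),
         (("ANIMAL", "x4"), [("CAT", "x4")]),
         (("CAT", "x5"), [("MEOWS", "x5")])]
goal = [("CAT", "x"), ("DOG", "y"), ("~AFRAID", "x", "y")]
answer = next(prove(goal, facts, rules, {}))
```

Applying the resulting substitution to the last goal literal, subst(("~AFRAID", "x", "y"), answer), yields ("~AFRAID", "MYRTLE", "FIDO"), matching the answer statement above.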
6.2.4. CONTROL STRATEGIES FOR DEDUCTION SYSTEMS
Various techniques can be used to control the search for a consistent
solution graph. We describe some of these as they might apply to a
backward system; the same ideas can also be used with forward systems.
The control strategy for our backward deduction system might attempt to
find a consistent solution graph by first finding any solution graph and
then checking it for consistency. If this candidate graph is not consistent,
the search must continue until a consistent one is found.
A more sophisticated strategy would involve checking for consistency
as the partial, candidate solution graphs are being developed (that is,
before a complete candidate solution is found). Sometimes
inconsistencies are revealed early in the process of developing a partial solution
ruled out, thus reducing the amount of search effort.
Consider the following example. Suppose that we want to prove the
goal P(x) ∧ Q(x) and that the facts include R(A) and Q(A). Suppose
that the rules include

R1: R(y) ⇒ P(y)
R2: S(z) ⇒ P(B)

Now, at a certain stage, the backward system might have produced the
AND/OR graph shown in Figure 6.11. There are two partial candidate
solution graphs in Figure 6.11. One has the substitutions ({x/y},
{A/x}), and the other has the substitutions ({B/x}, {A/x}). The latter
is inconsistent. Furthermore, if Q(A) is the only match for the subgoal
Q(x), we can see that rule R2 could not possibly be a part of any solution.
Thus, detecting inconsistencies early in the search process can lead to
opportunities for pruning the AND/OR graph. In our example, we do
not need to generate subgoals of S(z).
Fig. 6.11 An AND/OR graph with inconsistent substitutions.
Pruning operations that result from consistency checks among
different levels of the graph are also possible. Consider the following
example. Suppose the rules include:

R1: [Q(u) ∧ R(v)] ⇒ P(u,v)
R2: W(y) ⇒ R(y)
R3: S(w) ⇒ R(w)
R4: U(z) ⇒ S(C)
R5: V(A) ⇒ Q(A)
Now, in attempting to deduce the goal P(x,x), we might produce the
AND/OR graph shown in Figure 6.12. Note that rules R4 and R5 are in
the same partial candidate solution graph and that their associated
substitutions, namely, {A/x} and {C/x}, are inconsistent. If rule R5 is
the only possible match for subgoal Q(x), this inconsistency would allow
us to prune the subgoal U(z) from the graph. Solving U(z) cannot
contribute to a consistent solution graph. Notice, however, that subgoal
S(x) can be left in the graph; it might still permit the substitution
{A/x}. The general rule is that a match need not be attempted if it is
inconsistent with the match substitutions in all other partial solution
graphs containing it.
Another control strategy for backward, rule-based deduction systems
involves building a structure called a rule connection graph. In this method, we precompute all possible matches among the rules and store
the resulting substitutions. This precomputation is performed before solving any specific problems with the rules; the results are potentially useful in all problems so long as the set of rules is not changed. Such a process is, of
course, only practical for rule sets that are not too large.
We show, in Figure 6.13, an example rule connection graph for the
rules of our earlier "cat and dog" example. The graph is constructed by
writing down each rule in AND/OR graph form and then connecting
(with match arcs) literals in rule antecedents to all matching rule
consequents. The match arcs are then labeled by the mgus.
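The precomputation step can be sketched as follows (my own encoding; here matching is keyed only on the predicate symbol, a crude stand-in for the full unification that would also compute and store the mgu labeling each match arc):

```python
def rule_connection_graph(rules):
    """rules: list of (consequent, [antecedent literals]) with literals as
    (predicate, arg, ...) tuples.  Returns, for each antecedent literal of
    each rule, the indices of the rules whose consequent could match it."""
    heads_by_pred = {}
    for i, (head, _body) in enumerate(rules):
        heads_by_pred.setdefault(head[0], []).append(i)
    arcs = {}
    for i, (_head, body) in enumerate(rules):
        for lit in body:
            arcs[(i, lit)] = heads_by_pred.get(lit[0], [])
    return arcs
```

With the cat-and-dog rules R1 through R5 listed earlier, the antecedent FRIENDLY(x2) of R2 gets a match arc to R1, and the antecedent CAT(x4) of R4 gets one to R5.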
When an actual problem is to be solved, we can connect the AND/OR
goal graph and fact nodes to the rule connection graph by connecting the
goal literal nodes to all matching rule consequents, and by connecting
fact nodes to all matching literals in the rule antecedents. This enlarged
connection graph can next be scanned to find candidate solution graphs
within it. Once a candidate is found, we attempt to compute the unifying
Fig. 6.12 Another AND/OR graph with inconsistent substitutions.
composition of the substitutions involved in this graph. If such a unifying
composition exists, we have a consistent AND/OR solution graph and,
thus, a solution. Otherwise, we must look for another candidate solution
graph within the connection graph.
Using connection graphs of this sort, we are really producing
AND/OR graphs largely from precomputed structure. There is one
important complication, however, that we have not yet mentioned: We
might need to use the same rule in the rule connection graph more than once in a candidate solution graph. Each time it is used, it must have
differently named
variables. These differently named variables must then
also occur in the substitutions copied over to the candidate solution
graph.
Let us consider a specific example. Suppose we have the rule
P(x) ⇒ P(f(x)) and the fact P(A). Suppose we want to prove the goal
P(f(f(A))). The rule connection graph for this problem is shown in
Figure 6.14. Here we use an (unlabeled) match arc between the rule's
consequent and antecedent to remind us that a new instance of the rule
A BACKWARD DEDUCTION SYSTEM
[Figure 6.13 (graph omitted): A rule connection graph, shown for the "cat and dog" rules R3, R4, and R5, with a match arc labeled [x4/x5] connecting the CAT literals.]
can have its consequent match the original antecedent, and so on. When
the goal and fact nodes are connected, we have the graph shown in Figure
6.15. Scanning this connection graph for candidate solution graphs can
produce the one shown in Figure 6.16. This graph uses the same rule
twice (going around a loop in the rule connection graph), and, thus, the
variables occurring in the rule and in the associated substitutions must be
renamed. The substitutions in the solution graph have the unifying
composition {f(A)/x, A/y}.
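Renaming can be sketched as a small routine that walks a rule and allocates a fresh name for each variable, reusing the same fresh name for repeated occurrences within one use of the rule. The representation ('?'-prefixed strings as variables) and the counter scheme are ours:

```python
import itertools

_fresh = itertools.count(1)

def rename(term, mapping=None):
    """Copy a rule, giving every variable ('?'-prefixed string) a fresh
    name; repeated occurrences within one use of the rule share the
    same fresh name, so shared variables stay shared."""
    if mapping is None:
        mapping = {}
    if isinstance(term, str) and term.startswith('?'):
        if term not in mapping:
            mapping[term] = '%s_%d' % (term, next(_fresh))
        return mapping[term]
    if isinstance(term, tuple):
        return tuple(rename(t, mapping) for t in term)
    return term

# Two uses of the rule P(x) => P(f(x)) get disjoint variables:
rule = (('P', '?x'), ('P', ('f', '?x')))
use1 = rename(rule)
use2 = rename(rule)
```

Each call produces a variable-disjoint copy, which is exactly what is needed when the same rule is traversed twice in a candidate solution graph.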
[Figure 6.14 (graph omitted): Another rule connection graph.]
[Figure 6.15 (graph omitted): A connection graph.]
[Figure 6.16 (graph omitted): A candidate solution graph.]
6.2.5. EXAMPLES OF BACKWARD, RULE-BASED
DEDUCTION SYSTEMS
To give a more concrete idea of the use of rule-based deduction
systems in AI, we next describe some example systems. Each is
illustrative only; practical versions of these systems would of course be
much larger and need many additional features. It is interesting to note,
however, that there are many important applications that can be attacked
even with the restrictions we have imposed so far on the allowed forms
for rules and facts in backward systems.
6.2.5.1. An Information Retrieval System. Let us imagine that our set
of facts contains personnel data for a business organization and that we
want an automatic system to answer various questions about personnel
matters. A highly simplified example system might have facts such as the
following :
MANAGER(P-D, JOHN-JONES)
John Jones is the manager of the Purchasing Department.
WORKS-IN(P-D, JOE-SMITH)
Joe Smith works in the Purchasing Department.
WORKS-IN(P-D, SALLY-JONES)
WORKS-IN(P-D, PETE-SWANSON)
MANAGER(S-D, HARRY-TURNER)
Harry Turner is the manager of the Sales Department.
WORKS-IN(S-D, MARY-JONES)
WORKS-IN(S-D, BILL-WHITE)
MARRIED(JOHN-JONES, MARY-JONES)
In order to provide certain commonsense information about personnel
concepts and to allow the set of facts to be kept concise, we might have
the following rules:
R1: MANAGER(x,y) ⇒ WORKS-IN(x,y)
R2: [WORKS-IN(x,y) ∧ MANAGER(x,z)] ⇒ BOSS-OF(y,z)
(A more precise formulation might also state that a
person cannot be his own boss.)
R3: [WORKS-IN(x,y) ∧ WORKS-IN(x,z)] ⇒ ~MARRIED(y,z)
(Company policy does not allow married couples
to work in the same department.)
R4: MARRIED(y,z) ⇒ MARRIED(z,y)
(Marriage is symmetrical. A more precise formulation
might also state that persons cannot be married to
themselves.)
R5: [MARRIED(x,y) ∧ WORKS-IN(P-D,x)] ⇒ INSURED-BY(x, EAGLE-CORP)
(All married employees of the Purchasing
Department are insured by the Eagle Corporation.)
With these facts and rules, a simple backward production system can
answer a variety of questions. For these examples, we assume that the
control strategy guides the generation of the AND/OR graph by
pursuing a depth-first search for a consistent solution graph. In selecting
a literal node within a partial solution graph to match against a B-rule
consequent or fact, we assume that a look-ahead process selects that
subgoal literal which has the fewest consistent matches.
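The behavior of such a system can be sketched with a bare-bones backward chainer over a few of the facts and the rules R1 and R2. This is only an illustrative reconstruction: it searches depth-first, omits the fewest-matches look-ahead, and uses our own tuple encoding (lowercase strings as variables):

```python
import itertools

def is_var(t):
    # Convention (ours): lowercase strings are variables.
    return isinstance(t, str) and t.islower()

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    """Extend substitution s to make a and b equal, or return None."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

FACTS = [
    ('MANAGER', 'P-D', 'JOHN-JONES'),
    ('WORKS-IN', 'P-D', 'JOE-SMITH'),
    ('MANAGER', 'S-D', 'HARRY-TURNER'),
    ('WORKS-IN', 'S-D', 'MARY-JONES'),
]

# (consequent, antecedents) -- rules R1 and R2 of the text.
RULES = [
    (('WORKS-IN', 'x', 'y'), [('MANAGER', 'x', 'y')]),
    (('BOSS-OF', 'y', 'z'), [('WORKS-IN', 'x', 'y'), ('MANAGER', 'x', 'z')]),
]

_fresh = itertools.count()

def rename(t, m):
    # Standardize a rule apart by giving its variables fresh names.
    if is_var(t):
        return m.setdefault(t, t + str(next(_fresh)))
    if isinstance(t, tuple):
        return tuple(rename(u, m) for u in t)
    return t

def prove(goals, s):
    """Depth-first backward chaining; yields one substitution per proof."""
    if not goals:
        yield s
        return
    goal, rest = goals[0], goals[1:]
    for fact in FACTS:
        s2 = unify(goal, fact, s)
        if s2 is not None:
            yield from prove(rest, s2)
    for head, body in RULES:
        m = {}
        h, b = rename(head, m), [rename(lit, m) for lit in body]
        s2 = unify(goal, h, s)
        if s2 is not None:
            yield from prove(b + rest, s2)

# "Who is Joe Smith's boss?"
boss = walk('who', next(prove([('BOSS-OF', 'JOE-SMITH', 'who')], {})))
```

With the facts above, the query binds `boss` to JOHN-JONES, mirroring the solution the text develops for this question.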
Those queries that can be answered without using rules are handled
most simply. We show some example solution graphs in Figure 6.17. The
solution graph is shown in such a way that a depth-first, left-to-right ordering of the literal nodes in the graph corresponds to the actual order
in which the control regime found matches for these literals. The
double-boxed nodes are fact nodes. In the second example,
MARRIED(y,x) has the fewest potential matches, so it is matched first. If we
apply the unifying composition of the substitutions occurring in the
solution graph to the query, we obtain the answer
WORKS-IN(S-D, MARY-JONES) ∧ MARRIED(JOHN-JONES, MARY-JONES).
[Figure 6.17 (graphs omitted): Some simple queries that can be matched directly by facts. "Name someone who works in the Purchasing Department": WORKS-IN(P-D,x) matches the fact WORKS-IN(P-D, JOE-SMITH) with [JOE-SMITH/x]. "Name someone who is married and works in the Sales Department": solved with [JOHN-JONES/y, MARY-JONES/x].]
Now let us try some more complex queries, ones that require using
rules to answer. We show, in Figure 6.18, the solution graph for the query
"Who is Joe Smith's Boss?"
The only rule that can be applied at the beginning is rule R2. Of the
resulting new literal nodes, MANAGER(x1,z1) has the fewest possible
matches, so it is matched first. Matching this subgoal against
MANAGER(S-D, HARRY-TURNER) cannot lead to a consistent solution
graph, so ultimately the control process would have returned to try the
match shown in Figure 6.18. (Notice that we have renamed the variables
in rule R2 so that they are standardized apart from the goal wff.) After a
solution is obtained, we can apply the unifying composition of the
substitutions to the query to obtain the answer
BOSS-OF(JOE-SMITH, JOHN-JONES).
As a more complex example, consider the request "Name someone
insured by the Eagle Corporation." We show the solution graph for this
query in Figure 6.19. The MARRIED(x,y1) subgoal component is
solved first, and then the rule R1 is applied to WORKS-IN(P-D,x) to set
up the solution of the other subgoal component. Applying the unifying
composition to the query produces the answer
INSURED-BY(JOHN-JONES, EAGLE-CORP).
[Figure 6.18 (graph omitted): The solution graph for "Who is Joe Smith's boss?"]
Suppose we wanted to ask "Is John Jones married to Sally Jones?" The
system might first try to prove MARRIED(JOHN-JONES, SALLY-JONES).
No matches with facts are possible, and the subgoal obtained
by using rule R4 doesn't help either. When no proof can be found, it is
reasonable to attempt to prove the negation of the query. The solution
graph for the negated goal is shown in Figure 6.20.
We can also use this example to illustrate how additional knowledge
and capabilities can be added without extensive changes to the system.
Suppose, for example, that we want to refine rule R5 by introducing the
notion of a temporary employee. The new rule, R5', is:

R5': [MARRIED(x,y) ∧ WORKS-IN(P-D,x) ∧ ~TEMPORARY(x)]
⇒ INSURED-BY(x, EAGLE-CORP)
[Figure 6.19 (graph omitted): The solution graph for "Name someone insured by the Eagle Corporation."]
Now we must add to our set of facts the information about whether the
employees are temporary or not. We might also have an additional
definitional rule:
R6: PERMANENT(x) ⇒ ~TEMPORARY(x).
Additional facts might now include:
PERMANENT(JOHN-JONES)
TEMPORARY(SALLY-JONES)
The new rules and facts have little influence on the way in which
previous queries are answered. As new rules are added to a deduction
system, it is important, however, to check to see that they do not conflict
with older rules. For example, suppose we were to add the rule:
[Figure 6.20 (graph omitted): The solution graph for "John Jones is not married to Sally Jones."]
R7': PREV-EMP(x, G-TEK) ⇒ INSURED-BY(x, METRO-CORP)
(Anyone previously employed by G-TEK is
insured by Metro Corporation.)
We would also add facts about the previous employment of employees.
With these additions it now might be possible to derive conflicting
INSURED-BYs. Resolution of such conflicts can usually be obtained by
making the antecedents of the rules more precise.
One desirable feature involves meta-rules like "If the database does not
say explicitly that an employee is temporary, then that employee is
permanent." This rule makes a statement that refers to databases in
addition to employees! To use rules like this, our system would need a
linguistic expression that denoted its own database. Additionally, it would be desirable to have the appropriate attachments between these
expressions and the computer code comprising the database. Such
considerations, however, would involve us in interesting complexities
slightly beyond the scope of this book. [But see Weyhrauch (1980).]
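One common reading of such a meta-rule is negation by failure over an explicitly enumerated database. The sketch below uses the employees of this example; the closed-world interpretation is our gloss, not the mechanism the book goes on to develop:

```python
# Ground facts of the example, as a set of tuples (encoding ours).
FACTS = {
    ('PERMANENT', 'JOHN-JONES'),
    ('TEMPORARY', 'SALLY-JONES'),
}

def permanent(employee):
    """The meta-rule read as negation by failure: an employee is
    permanent unless the database explicitly lists him or her as
    temporary."""
    return ('TEMPORARY', employee) not in FACTS
```

Note that the rule quantifies over the database itself: any employee absent from the TEMPORARY facts, such as PETE-SWANSON, is classified as permanent.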
6.2.5.2. A System For Reasoning About Inequalities. Now let us turn
our attention to some simple mathematics. We can use a system that
reasons about inequalities to illustrate some additional points. This
system will be able to show, for example, that if C > E > 0 and if
B > A > 0, then [B(A + C)/E] > B. To simplify our present discussion
we allow only one predicate, G. The intended meaning of G(x,y) is
that x is greater than y. (Sometimes we use the more familiar infix
notation x > y.) In this system we do not deal with equal or "less-than"
relations, so we specifically exclude the negation of G.
The present system is not able to perform arithmetic operations, but it
is able to represent their results by functional expressions. For addition
and multiplication we use the expressions plus and times. Each of these takes as its single argument a bag, that is, an unordered group of elements.
Thus, plus (3,4,3) is the same as plus (4,3,3), for example. (Most
importantly, the two expressions are unifiable because they are regarded
as the same expression.) We let the functions "divides" and "subtracts"
have two arguments because their order is important. We represent x/y
by divides(x,y), and x − y by subtracts(x,y).
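For ground terms, the bag convention can be captured by canonicalizing the arguments of plus and times before comparison. The sketch below is ours; bag unification in the presence of variables is considerably harder and is not attempted here:

```python
BAG_FUNCTIONS = {'plus', 'times'}   # arguments form a bag (unordered)
# divides and subtracts keep their argument order and are left alone.

def canonical(term):
    """Canonical form of a ground term: sort the arguments of bag
    functions so that terms denoting the same bag compare equal."""
    if not isinstance(term, tuple):
        return term
    functor, args = term[0], [canonical(a) for a in term[1:]]
    if functor in BAG_FUNCTIONS:
        args.sort(key=repr)
    return (functor, *args)
```

Under this canonicalization, ('plus', 3, 4, 3) and ('plus', 4, 3, 3) compare equal, while the ordered functions remain sensitive to argument order.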
Using this notation, a typical expression might be
G[divides(times(B, plus(A,C)), E), B], which is more familiarly represented
as [B(A + C)/E] > B. The reason that we are using the more cumbersome
prefix notation is to avoid possible sources of confusion when
unifying terms. After one example of a deduction using prefix notation
we revert to the more familiar infix convention.
Our system uses rules that express certain properties of inequalities.
We begin with the following set of rules:
R1: [G(x,0) ∧ G(y,0)] ⇒ G(times(x,y), 0)
    that is, [(x > 0) ∧ (y > 0)] ⇒ (xy > 0)
R2: [G(x,0) ∧ G(y,z)] ⇒ G(plus(x,y), z)
    that is, [(x > 0) ∧ (y > z)] ⇒ [(x + y) > z]
R3: [G(x,w) ∧ G(y,z)] ⇒ G(plus(x,y), plus(w,z))
    that is, [(x > w) ∧ (y > z)] ⇒ [(x + y) > (w + z)]
R4: [G(x,0) ∧ G(y,z)] ⇒ G(times(x,y), times(x,z))
    that is, [(x > 0) ∧ (y > z)] ⇒ (xy > xz)
R5: [G(1,w) ∧ G(x,0)] ⇒ G(x, times(x,w))
    that is, [(1 > w) ∧ (x > 0)] ⇒ (x > xw)
R6: G(x, plus(times(w,z), times(y,z))) ⇒ G(x, times(plus(w,y), z))
    that is, [x > (wz + yz)] ⇒ [x > (w + y)z]
R7: [G(x, times(w,y)) ∧ G(y,0)] ⇒ G(divides(x,y), w)
    that is, [(x > wy) ∧ (y > 0)] ⇒ [(x/y) > w]
These, of course, are not the only rules that would be useful; in fact, we
shall introduce more later. Our system uses these rules as B-rules only.
Various control strategies might be used, but since the AND/OR graphs
resulting from applying these rules are all relatively small, we present the
entire graphs in our examples.
Our first problem is to prove [B(A + C)/E] > B from the following
facts: E > 0, B > 0, A > 0, C > E, and C > 0. The AND/OR graph for
this problem is shown in Figure 6.21. The solution graph is indicated by
heavy branches, and facts that match (sub)goals are drawn in double
boxes. We note that rule R2 is used twice with different substitutions, but
one of these applications leads to an unsolvable subgoal.
Examining the facts supporting this proof, we note some redundancy
that could have been avoided by use of the transitive property of G. That
is, from C > E and E > 0, we ought to be able to derive C > 0 when
needed, instead of having to represent it explicitly as a fact. Such a
derivation could be made from a transitivity rule:
R8: [(x > y) ∧ (y > z)] ⇒ (x > z).
[Figure 6.21 (graph omitted): The AND/OR graph for an inequality problem.]
Comparing R8 with the other rules, we note that its use is relatively
unconstrained; it contains too many variables unencumbered by functions.
Thus, it can participate in too many matches and will tend to get
applied too often. Used as a B-rule, the consequent of R8, namely,
G(x,z), matches any subgoal produced by our system. Clearly, we don't
want to use transitivity at every step.
Fortunately, there are ways to structure data so that special relations
like transitivity can be implicitly encoded in the structure. For example, if
the facts expressing an ordering relation are stored as nodes in a
lattice-like structure, the desired consequences of transitivity (of the
ordering) result automatically from simple computations on the lattice.
These computations can be viewed as procedural attachments to the
predicate denoting the ordering relation.
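One simple procedural attachment of this kind treats the stored inequalities as edges of a directed graph and answers x > y by reachability, so that transitivity never has to be invoked as a rule. A sketch (the fact encoding is ours):

```python
def greater_than(facts, x, y):
    """Decide x > y by following chains of stored '>' facts, i.e., by
    reachability in the directed graph whose edges are the facts."""
    seen, frontier = set(), {x}
    while frontier:
        a = frontier.pop()
        if a in seen:
            continue
        seen.add(a)
        for p, q in facts:
            if p == a:
                if q == y:
                    return True
                frontier.add(q)
    return False

# The facts of the first inequality problem, minus C > 0,
# which now follows by transitivity from C > E and E > 0:
ORDER_FACTS = [('C', 'E'), ('E', '0'), ('B', '0'), ('A', '0')]
```

With this attachment, C > 0 is derived on demand from C > E and E > 0 rather than stored explicitly.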
Let us consider a more difficult proof. From B > 1, 1 > A, A > 0,
C > D, and D > 0, prove:

(∃u)[(Au + Bu) > D].
Also, from among the constants named in the facts, we would like an
example of the u that satisfies the theorem.
Let us assume that the facts are stored in a lattice-like structure that
makes the following derived facts readily available: B > A, B > 0,
1 > 0, and C > 0. In the following example, we assume that any of these
facts can be used as needed.
The system first attempts to apply B-rules to the main goal. Only rule
R2 is applicable, but there are two alternative substitutions that can be
used. For brevity, let's follow the derivation along just one of them. (The
other one leads very quickly to some unsolvable subgoals, as the reader
might want to verify for himself.)
Using just the rules Rl through R7, our system would generate the
AND/OR graph shown in Figure 6.22. Note the subgoal (Bu > D)
marked by an asterisk (*). No rules are applicable to this goal, so our
present system would fail on this problem. What can be done to extend the power of the system?
Here again we see an example in which the power of a production
system can be extended in an evolutionary manner without extensive redesign. We can add the following rule to our system:
232
A BACKWARD DEDUCTION SYSTEM
R9: [(y > 1) ∧ (x > z)] ⇒ (xy > z).
This rule is applicable to the goal (Bu > D), and its presence does not
otherwise greatly diminish the efficiency of the system. [The reader may
want to investigate the effect of R9 on the AND/OR graph of Figure
6.21. Its presence allows some additional (but ultimately futile)
matches to the subgoal G(times(B, plus(A,C)), times(B,E)).]
In Figure 6.23, we show the AND/OR graph produced by rule
applications below Bu > D. Note that there are two 2-connectors below
the top node. The left-hand one is futile, but the right-hand one is
successful, with C substituted for u. We note that in Figure 6.22 the
substitution {C/u} is one of the ones permitted under the goal u > 0.
Thus our proof is complete, and a value of u that satisfies the theorem is
u = C.
[Figure 6.22 (graph omitted): A partial solution graph.]
[Figure 6.23 (graph omitted): Subgoals produced by the new rule.]
Some additional extensions to our inequality reasoning system would
increase its power further. One of the facts provided in our last example
was (1 > 0). We should not have to represent all possible inequalities
between numbers as facts. What is needed is an attachment to a
"greater-than" computation that would allow evaluation of ground
instances of G literals. There should also be attachments to arithmetic
programs so that G(10,A) could be substituted for G(plus(3,7),A), for
example. A means should be provided to simplify algebraic expressions
and to handle equality predicates. Some of the mechanisms for efficiently
implementing improvements such as these depend on techniques to be
discussed at the end of this chapter.
6.3. "RESOLVING" WITHIN AND/OR GRAPHS
The backward system we have described is not able to prove valid or
tautological goal expressions such as (~P ∨ P) unless it can prove ~P
or P separately. Neither can the forward system recognize contradictory
fact expressions such as (~P ∧ P). In order for these systems to
overcome these deficiencies, they must be able to perform intragoal or
intrafact inferences.
Let us describe how certain intragoal inferences might be performed.
Consider, for example, the following expressions used by a backward
system:
Goal

[P(x,y) ∨ Q(x,y)] ∧ V(x,y)

Rules

R1: [R(v) ∧ S(u,B)] ⇒ P(u,v)
R2: [~S(A,s) ∧ W(r)] ⇒ Q(r,s)

Facts

R(B) ∧ W(B) ∧ V(A,B) ∧ V(B,B)
After rules Rl and R2 have been applied, we have the AND/OR graph
shown in Figure 6.24. This graph has two complementary literals whose
predicates unify with mgu {A/x, B/y}. We indicate this match in Figure
6.24 by an edge between the nodes representing the complementary
literals. The edge is labeled by the mgu. The (goal) clause form of the
expressions represented by this AND/OR graph include the clauses:
V(x,y) ∧ R(y) ∧ S(x,B)

and

V(x,y) ∧ W(x) ∧ ~S(A,y).

If we were to perform a goal resolution (on S) between these two clauses
(after standardizing variables apart), we would obtain the (goal) resolvent:

V(A,y) ∧ R(y) ∧ V(t,B) ∧ W(t).
We mentioned at the beginning of this chapter that the AND/OR
graph representation for an expression is slightly less general than clause
form because variables in the AND/OR graph cannot be fully standardized
apart. This constraint makes it difficult to represent, with full generality,
the expressions that can be obtained by resolving goal subexpressions.
Fig. 6.24 An AND/OR graph with complementary literal nodes.
One way to represent a resolution operation performed between two
goal clauses is to connect a literal in one partial solution graph with a
complementary literal in another (as we have done in Figure 6.24). We
take this connected structure to represent the clauses composed of the
literal nodes in the pairs of all solution graphs (terminating in literal
nodes) thus joined. We associate with a paired solution graph a
substitution that is the unifying composition of the substitutions in each
member of the pair plus the substitution associated with the match
between the complementary literals. The substitution associated with a
paired solution graph (terminating in literal nodes) is applied to its
terminating literal nodes to obtain the clause that it represents.
Thus, the structure of Figure 6.24 includes a representation for the
clause:
R(B) ∧ W(A) ∧ V(A,B).
This clause is not as general as the one we obtained earlier by goal
resolution between goal clauses whose variables had been standardized
apart, and this restricted generality can prevent us from finding certain
proofs. (The expression [R(B) ∧ W(A) ∧ V(A,B)] cannot be proved
from the facts that we have given, whereas the expression
[V(A,y) ∧ R(y) ∧ V(t,B) ∧ W(t)] can.) We might say that this
operation, of matching complementary pairs of literals in AND/OR goal
graphs, is a restricted goal resolution (RGR).
To use RGR in a backward production system, we must modify the
termination criterion. We can assume, for the purposes of finding
candidate solution graphs, that literals joined by an RGR match edge are
terminal nodes. A pair of partial solution graphs thus joined constitutes a
candidate solution if all of its other leaf nodes are terminal (that is, if they
are either goal nodes or if they participate in other RGR matches). Such a
candidate solution graph is a final solution graph if its associated
substitution is consistent.
In our example, matching the remaining nonterminal leaf nodes of
Figure 6.24 with facts fails to produce a consistent solution graph because
the solution of this problem requires more generality than can be
obtained by applying RGR to the AND/OR graph representation of the
goal expression. The required generality can be obtained in this case by
multiplying out the goal expression into clauses and standardizing the
variables apart between the two clauses, producing the expression:
[P(x1,y1) ∧ V(x1,y1)] ∨ [Q(x2,y2) ∧ V(x2,y2)].
Now this expression can be represented as an AND/OR graph, and rules
and RGR can be applied to produce the consistent solution graph shown
in Figure 6.25. The unifying composition associated with this solution
includes the substitution {B/y1, A/x1, B/x2, B/y2}. Applying this
substitution to the root node of the graph yields the answer statement:

[P(A,B) ∧ V(A,B)] ∨ [Q(B,B) ∧ V(B,B)].
To avoid conflicting substitutions when using RGR, it is sometimes
necessary to multiply out part or all of the goal expression into clause
form. A reasonable strategy for deduction systems of this type might be
to attempt first to find a proof using the original goal expression. If this
attempt fails, the system can convert (part of) the goal expression to
clause form, standardize variables, and try again. In the example above,
we had to multiply out the entire goal expression into clause form in order
to find a proof. In general, it suffices to multiply out just that
subexpression of the goal that contains all of the occurrences of the
variables that need renaming. These variables are those for which
substitution inconsistencies were detected in the first proof attempt.
Comparing Figure 6.24 and Figure 6.25 reveals that the second proof
attempt can be guided by the structure of the first.

[Figure 6.25 (graph omitted): A solution graph using RGR.]
We can sometimes avoid multiplying out into clause form by using
conditional substitutions. The idea of conditional substitutions is
important in program synthesis applications. A conditional substitution is
one that contains a conditional expression. The conditions that we use in
conditional substitutions are ones based on a complementary pair of
unifiable literals in alternative partial solution graphs. For example, in
Figure 6.24, the literals S(x,B) and ~S(A,y) are in two different partial
solution graphs and their predicates unify with mgu {A/x, B/y}.
Applying this mgu to S(x,B) yields S(A,B); applying it to ~S(A,y)
yields ~S(A,B). We could match the node labeled by S(x,B) with a
fact node if S(A,B) had value T. In a sense, the conditional substitution
{(if S(A,B), then A/x)} unifies S(x,B) with T. Also, the conditional
substitution {(if ~S(A,B), then B/y)} unifies ~S(A,y) with T.
Using these two substitutions permits us to find the two consistent
solution graphs shown in Figure 6.26. The unifying composition of the
substitutions in the graph on the left includes the substitution
{(if S(A,B), A/x, B/y)}. The unifying composition of the substitutions in
the graph on the right includes the substitution {(if ~S(A,B), B/y, B/x)}.
Since either S(A,B) or ~S(A,B) must be true, we can combine these two
solutions into one, with the unifying composition
{B/y, (if S(A,B), A/x; else B/x)}. Such a substitution
might well provide a useful answer statement to associate with the goal
wff if S(A,B) is a literal that can be evaluated by the user at the time the
answer is needed.
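A conditional substitution can be represented directly as data whose conditional bindings are resolved only when the user supplies truth values for the condition literals. A sketch, with an encoding of our own devising:

```python
COND = 'if'   # marker (ours) for a conditional binding

def apply_subst(term, subst, env):
    """Apply substitution subst to term.  A binding of the form
    (COND, literal, then_term, else_term) is resolved by looking up
    the literal's truth value in env, supplied only when the answer
    is finally needed."""
    if isinstance(term, tuple):
        return tuple(apply_subst(t, subst, env) for t in term)
    if term in subst:
        binding = subst[term]
        if isinstance(binding, tuple) and binding[0] == COND:
            _, literal, then_term, else_term = binding
            binding = then_term if env[literal] else else_term
        return binding
    return term

# The combined substitution {B/y, (if S(A,B), A/x; else B/x)}:
subst = {'y': 'B', 'x': (COND, 'S(A,B)', 'A', 'B')}
```

Applying subst to a term containing x thus yields A or B depending on the truth of S(A,B) at evaluation time.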
Dual processes could be described for restricted resolutions within
AND/OR graphs representing facts, but we omit an explicit description
because we do not usually expect to encounter contradictions among the
facts of an AI system. (Tautologies among goals or subgoals are more
common.)
In the next section, we show how we can make use of the version of
RGR using conditional expressions in systems that synthesize computer
programs. First, though, we describe an alternative method for dealing with implicational goal wffs. Ordinarily we convert a goal wff of the form
P1 ⇒ P2 to its AND/OR form (~P1 ∨ P2). Suppose, for simplicity,
[Figure 6.26 (graphs omitted): Two solution graphs with conditional substitutions.]
that P1 is a literal. If the system then generates some subgoal of P2 that
contains the literal P1, it can use RGR between ~P1 and P1.
An alternative treatment of a goal of the form P1 ⇒ P2 involves
converting this goal to the subgoal P2 while adding P1 to the set of facts
that can be used in proving P2 or its subgoals. Then, if the system
generates P1 as a subgoal of P2, this subgoal can be matched against the
assumed fact P1.
The process of converting goal antecedents to assumed facts can be
applied repeatedly so long as the subgoals contain implications, but the
system must maintain a separate set of assumed facts for each subgoal
that is created in this manner. Also, the goal antecedents must be in the
form of a conjunction of literals, because we are still restricted to fact
expressions of that form.
The logical justification for treating an implicational goal in this
manner rests on the deduction theorem of logic, which states that if W2
logically follows from Wl, then Wl => W2 is valid. We have occasion to
use this method in one of the examples in the next section.
6.4. COMPUTATION DEDUCTIONS AND
PROGRAM SYNTHESIS
We next show how backward, rule-based deduction systems can be
used for performing computations and for synthesizing certain kinds of
computer programs. For such applications, we use a predicate calculus
expression to denote the relationship between the input and output of the
computation or of the program to be synthesized. For example, suppose
the input to a program is denoted by the variable "x," and the output is
denoted by the variable "j." Now suppose that we want to synthesize a
program such that the relationship P holds between input and output.
We can state the synthesis problem as the problem of finding a
constructive proof for the expression Çix)(3y)P{x,y). If we prove that
such a y exists by one of our theorem-proving methods, then we can
exhibit y as some composition of functions of x. This composition of
functions is then the program that we wished to synthesize. The
elementary functions comprising the composition are the primitives of
the particular programming language being used. "Pure" LISP is a
convenient language for this sort of approach because its operations can
all be defined in terms of functional expressions.
Let us illustrate this approach by some examples. First, we show how
we might compute an expression that bears a given relation to a given
input expression. Then we illustrate how a recursive program can be
synthesized for arbitrary inputs.
Suppose we simply want to reverse the list (1,2). That is, we want a
computation that takes the list (1,2) as input and produces the list (2,1) as
output. We show how a rule-based deduction system can perform this
241
RULE-BASED DEDUCTION SYSTEMS
computation. First, we specify the relationship between input and output
by a two-place predicate "REVERSED" whose arguments are terms
denoting lists. REVERSED is defined, in turn, in terms of other
predicates and primitive LISP expressions.
We adopt the convention used in LISP for representing lists as nested
dotted pairs. In LISP notation, the list (A,B,C,D), for example, is
represented as A.(B.(C.(D.NIL))). The dots can be regarded as a special
infix function symbol whose prefix form we call cons. Thus, the prefix form of
A.B is cons(A,B). We prefer the prefix form because that is the
form we have been using for functional terms in our predicate calculus
language. Using this convention for representing lists, we show how the
desired computation can be performed by a system that attempts to prove
the goal expression:

(∃y)REVERSED(cons(1, cons(2, NIL)), y).
In specifying rules and facts to use in our proof, we use the three-place
predicate "APPENDED." APPENDED(x,y,z) has the value T just
when z is the list formed by appending the list x onto the front of the list
y. [For example, appending the list (1,2) onto the list (3,4) produces the
list (1,2,3,4).]
The facts that we need in proving the goal expression are:
F1: REVERSED(NIL, NIL)
F2: APPENDED(NIL, x1, x1)
We express certain relationships involving REVERSED and APPENDED
by the following rules:

R1: APPENDED(x2, y2, z2)
⇒ APPENDED(cons(u1, x2), y2, cons(u1, z2))

R2: [REVERSED(x3, y3) ∧ APPENDED(y3, cons(u2, NIL), v1)]
⇒ REVERSED(cons(u2, x3), v1)
Rule R1 states that the list created by appending a list, whose first
element is u1 and whose tail is x2, to a list y2 is the same as the list created
by adding the element u1 to the front of the list formed by appending x2
to y2. Rule R2 states that the reverse of a list formed by adding an
element u2 to the front of a list x3 is the same as appending the reverse of
x3 onto the list consisting of the single element u2.
Let us show how a backward production system might go about
reversing the list (1,2) given these facts and B-rules. We do not attempt to
explain here how a control strategy for this system might efficiently
decide which applicable rule ought to be applied. Much of the control
knowledge needed to make these sorts of choices intelligently is special to the domain of automatic programming and outside the scope of our
present discussion of general mechanisms.
We first look for facts and rules that match the goal
REVERSED(cons(1, cons(2, NIL)), y). We can apply B-rule R2 with mgu
{1/u2, cons(2,NIL)/x3, y/v1}. Applying this mgu to the antecedent of
R2 yields new literal nodes labeled by

REVERSED(cons(2, NIL), y3)

and

APPENDED(y3, cons(1, NIL), y).

We can apply B-rule R2 to the subgoal REVERSED(cons(2, NIL), y3),
creating two new literal nodes. (We rename the variables in R2 before
application to avoid confusion with the variables used in the previous
application.)
A consistent solution graph for this problem is shown in Figure 6.27.
The output expression that results from this proof is obtained by
combining substitutions to find the term substituted for y, namely,
cons(2, cons(1, NIL)). This expression represents the list that is the
reverse of the input list (1,2).
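Read as a functional program, F1 and R2 define the reversal while F2 and R1 define the appending. The following transliteration into Python, using 2-tuples for dotted cons cells (the function names are ours), computes the same result the proof extracts:

```python
NIL = None   # NIL as Python's None; a dotted pair as a 2-tuple

def cons(head, tail):
    return (head, tail)

def appended(x, y):
    """APPENDED as a function: F2 is the base case, R1 the recursion."""
    return y if x is NIL else cons(x[0], appended(x[1], y))

def reversed_list(x):
    """REVERSED as a function: F1 is the base case, R2 the recursion."""
    if x is NIL:
        return NIL
    return appended(reversed_list(x[1]), cons(x[0], NIL))
```

On the input cons(1, cons(2, NIL)) this yields cons(2, cons(1, NIL)), the term the proof substitutes for y.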
It is interesting to compare the computations involved in the search for
the proof shown in Figure 6.27 with the computations involved in
executing the following LISP program for reversing an arbitrary list:
reverse(x):
    if null(x), NIL
    else, append(reverse(cdr(x)), cons(car(x), NIL))
Fig. 6.27 The solution graph for reversing a list. [Figure not reproduced.]
append(x,y):
    if null(x), y
    else, cons(car(x), append(cdr(x),y))
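These two definitions transliterate directly into a modern language. The following Python sketch (our rendering, using Python lists in place of LISP lists) behaves the same way:

```python
def append(x, y):
    # if null(x), y; else cons(car(x), append(cdr(x), y))
    if not x:
        return y
    return [x[0]] + append(x[1:], y)

def reverse(x):
    # if null(x), NIL; else append(reverse(cdr(x)), cons(car(x), NIL))
    if not x:
        return []
    return append(reverse(x[1:]), [x[0]])

print(reverse([1, 2]))  # → [2, 1]
```

Running reverse([1, 2]) performs the same sequence of recursive calls that the well-controlled proof search mirrors: the recursive reverse of cdr(x) first, then the append.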
If the search process of our backward production system is sufficiently
well-guided by an appropriate control strategy, then the steps in the
search process correspond quite closely to the steps involved in executing
the LISP program on the input list (1,2).
We can control the production system search process by specifying
which applicable fact or rule should be used at any stage, and in which
order, to solve the component subgoals. A "language" for specifying this
control information can be based on conventions about the order in
which rules and facts are tested for possible matches and the order in
which literals appear in rule antecedents. When a rule or fact must be
selected for use, we select the first one in this ordering that can be
matched. When a subgoal component must be selected for solution, we select according to the ordering in which literals are written in rule
antecedents. It turns out that the order (F1, F2, R1, R2) for rule and fact matching and the order in which we have written the antecedents of rules R1 and R2 provide a very efficient control strategy for our example
problem. With this control strategy, the steps performed in the search
process for a proof mirror almost exactly the computational steps of
executing the LISP program.
To see the parallel, let us trace out just a few steps of the search process.
Beginning with the goal REVERSED(cons(1,cons(2,NIL)),y), we first check (in the order F1, F2, R1, R2) for a match. There might be a match against F1, so we check to see if cons(1,cons(2,NIL)) unifies with NIL. [Compare with if null(x) in the program.] Failing this test, we check for a
match against the consequent of R2. This test involves matching
cons(u2,x3) against cons(1,cons(2,NIL)). This match succeeds with the substitution {1/u2, cons(2,NIL)/x3}. [Compare with computing car(x) and cdr(x) in the second line of the reverse program.] The first subgoal component [namely, REVERSED(cons(2,NIL),y3)] of the antecedent of R2 is worked on first. [Compare with the recursive call to reverse(cdr(x)) in the program.] Again, we check for a match against F1 by checking to see if cons(2,NIL) equals NIL. Failing in this test again,
we pass to another level of subgoal generation in the proof search (and of
recursion in the program). At this level, we succeed in our match against
F1 (with mgu {NIL/y4}), so we work on the next subgoal APPENDED(y4,cons(2,NIL),y3). [In the program, we call the subroutine append(NIL,cons(2,NIL)).] This same parallelism holds between the
rest of the proof search and the program.
In many cases, it is possible to control the search process sufficiently so
that it mimics efficient computation, and, for this reason, it has been said
that computation is controlled deduction [Hayes (1973b)]. In fact, a
programming language, called PROLOG, is based on this very idea.
PROLOG "programs" consist of a sequence of "facts" and "rules." The
rules are implications just like our rules except that, in PROLOG, the rule
antecedents are restricted to conjunctions of literals. A program is
"called" by a goal expression. The fact and rule statements in the
program are scanned to find the first match for the first component in the
goal expression. The substitutions found in the match correspond to
variable binding, and control is transferred to the first subgoal component of the rule. Thus, the "interpreter" for a PROLOG program
corresponds to a backward, rule-based production system with very
specific control information about what to do next. (The PROLOG
interpreter is a bit less flexible than our backward system, because in
PROLOG the substitutions used in matching one literal of a conjunctive
subgoal are straightaway applied to the other conjuncts. The subgoal
instances thus created might not have solutions, so PROLOG incorporates a backtracking mechanism that can try other matches.)
The example that we have been considering has involved a fixed input
list, namely, (1,2). If this fixed list were different, the theorem-proving
system would have produced a different proof and a different answer.
(Presumably, though, our PROLOG program would continue to function analogously to the general LISP program.) Rather than perform the
search process each time we "run the program" (even though, apparently, this search can be made quite efficient), we are led to ask if
we
could automatically synthesize one general program (like the LISP one,
for example) that would accept any input list. To do so we must find a
proof for the goal:
(∀x)(∃y) REVERSED(x,y).
(Of course, we don't literally mean "for all x" because the program
doesn't have to be defined for all possible inputs. We only require that it
be defined for lists. We could have expressed this input restriction in the
formula to be proved, but our illustrative example is simpler if we merely
assume that the domain of x is limited to lists.)
Since we already know that the final program for any given input list
has a repetitive character, we might guess that the program we are
seeking for arbitrary input lists is recursive. The introduction of recursive
functions in program synthesis comes about by using mathematical
induction in the proof. It turns out that in reversing a list by using an
append function, we have double recursion, once in reverse and once in
append. As a simpler example, let's consider just the problem of
producing a program to append one list to (the front of) another. That is, our goal is to prove:
(∀x)(∀y)(∃z) APPENDED(x,y,z).
In this case, we have two input lists, x and
y, and one output list, z.
Skolemizing the goal wff yields
APPENDED(A,B,z) ,
where A and B are Skolem constants. To prove this goal, we'll need fact F2 and rule R1 from our previous example. (The presence of the other unneeded fact and rule does no harm, however.) Our explanation of this example is simplified if we re-represent F2 and R1 as the following rules:
R3: NULL(u) ⇒ APPENDED(u,x1,x1)
R4: [~NULL(v) ∧ APPENDED(cdr(v),y0,z1)]
        ⇒ APPENDED(v,y0,cons(car(v),z1))
In these expressions, we introduce the primitive LISP functions, namely,
cons, car, and cdr, out of which our program will be constructed. These
LISP expressions could have been introduced instead by the rule
~NULL(x) ⇒ EQUAL(x, cons(car(x),cdr(x))) .
This alternative, however, would have involved us in some additional
complexities regarding special techniques for using equality axioms. We
avoid these difficulties, and simplify our example, by using rules R3 and
R4 instead. The needed equality substitutions are already contained in
these rules.
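As a quick sanity check (ours, not the book's), rules R3 and R4 can be verified against the ordinary list primitives they mention, reading APPENDED(u,x,z) as "z is u followed by x":

```python
def cons(a, d): return [a] + d
def car(v): return v[0]
def cdr(v): return v[1:]
def null(v): return v == []

def appended(u, x, z):
    # the intended APPENDED relation: z is u followed by x
    return z == u + x

# R3: NULL(u) => APPENDED(u, x1, x1)
for x1 in ([], [1], [1, 2]):
    assert appended([], x1, x1)

# R4: [~NULL(v) and APPENDED(cdr(v), y0, z1)] => APPENDED(v, y0, cons(car(v), z1))
v, y0 = [1, 2], [3]
z1 = cdr(v) + y0            # a witness satisfying APPENDED(cdr(v), y0, z1)
assert not null(v) and appended(cdr(v), y0, z1)
assert appended(v, y0, cons(car(v), z1))
```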
As already mentioned, to synthesize a recursive program using
theorem-proving methods requires the use of induction. We use the
method of structural induction for lists. To do so, we need the concept of a
list as a sublist of a given list. This relation is denoted by the predicate
SUBLIST(u,x). The principal property of SUBLIST on which our inductive argument depends can be expressed as the rule:
R5: ~NULL(x) ⇒ SUBLIST(cdr(x),x) ,
that is, the tail of any nonempty list x is a sublist of x.
To prove
(∀y1)(∀y2)(∃z1) APPENDED(y1,y2,z1),
using structural induction for lists, we would proceed as follows:
1. Assume the induction hypothesis
(∀u1)(∀u2)[SUBLIST(u1,x1)
        ⇒ (∃z2) APPENDED(u1,u2,z2)] .
That is, we assume our goal expression true for all input lists u1 and u2 such that u1 is a sublist of some arbitrary list x1.
2. Next, given the induction hypothesis, we attempt to prove our goal
expression true for all input lists x1 and x2, where x1 is the arbitrary list of
the induction hypothesis.
If step 2 is successful, then our goal expression is true for all input lists, y1 and y2.
We can capture this argument in a single formula, which we call the
induction rule.
{(∀x1)(∀x2)
    {(∀u1)(∀u2)[SUBLIST(u1,x1)
        ⇒ (∃z2) APPENDED(u1,u2,z2)]}
    ⇒ (∃z3) APPENDED(x1,x2,z3)}
⇒ (∀y1)(∀y2)(∃z1) APPENDED(y1,y2,z1)
Although this rule looks rather complicated, we use it in a straightforward manner. Ignoring quantifiers, the rule is of the form:

[(A ⇒ C1) ⇒ C2] ⇒ C3 .
We will be using this rule as a B-rule to prove C3. Such a use creates the
subgoal of proving
[(A ⇒ C1) ⇒ C2] .
We elect to prove this subgoal by proving C2 while having available (only for use on C2 and its descendant subgoals) the B-rule (A ⇒ C1). (This manner of treating an implicational goal was discussed earlier. Now, however, rather than assume the goal's antecedent as a fact, we assume it as a rule.) A diagram that illustrates this strategy is shown in Figure 6.28.
Alternatively, we could transform the antecedent of the induction rule
into AND/OR form and use the rule to create the subgoal
[(A ∧ ~C1) ∨ C2]. This use of the induction rule is entirely equivalent, but it is a bit less intuitive and more difficult to explain, because an RGR step between ~C1 and C2 would ultimately be required to prove
the subgoal.
The induction rule can be Skolemized as follows:
{[SUBLIST(u1,A1) ⇒ APPENDED(u1,u2,sk1(u1,u2))]
    ⇒ APPENDED(A1,A2,z3)}
⇒ APPENDED(y1,y2,sk2(y1,y2)) .
Note the Skolem constants and functions A1, A2, sk1, and sk2. The program that we seek will, in fact, turn out to be either of the Skolem functions sk1 or sk2. Thus, it is reasonable now to represent both of them by the single function symbol append. With this renaming, our induction
rule, in the form in which we use it, is:
RI: {[SUBLIST(u1,A1)
        ⇒ APPENDED(u1,u2,append(u1,u2))]
    ⇒ APPENDED(A1,A2,z3)}
⇒ APPENDED(y1,y2,append(y1,y2)) .
Fig. 6.28 Using the induction rule. [Figure not reproduced; it shows goal C3 with the note that the B-rule A ⇒ C1 can be used on this goal or on any of its descendants.]
Fig. 6.29 A search graph for the APPENDED problem. [Figure not reproduced.]
An AND/OR search graph for the problem of proving APPENDED(A,B,z) is shown in Figure 6.29. In our example, search
begins by applying rules R3 and R4 to the main goal. One of the subgoals
produced by R4 is recognized as similar to the main goal. Producing a
subgoal having this sort of similarity suggests, to the control strategy, the
appropriateness of applying the induction rule, RI, to the main goal. (Of
course, it is logically correct to apply the induction rule to the main goal
at any time. Since proof by induction is relatively complicated, the induction rule should not be used unless it is judged heuristically
appropriate. When a straightforward proof attempt produces this sort of "instance" of the main goal as a subgoal, induction is usually appro
priate.)
Applying RI to the main goal produces the subgoal APPENDED(A1,A2,z3) and the rule:

RI′: SUBLIST(u1,A1) ⇒ APPENDED(u1,u2,append(u1,u2)).
This rule can be used only in the proof of APPENDED(A1,A2,z3) or its subgoals.
Next, the control strategy applies the same rules as were applied earlier
to the main goal (namely, R3 and R4) to the subgoal produced by the induction rule. Ultimately, two different solution graphs are produced
that are complete except for the occurrence of NULL(A1) in one and ~NULL(A1) in the other. An RGR step completes the solution and
yields the conditional substitution:
{(if null(A1), A2; else cons(car(A1), append(cdr(A1),A2)))/z3} .
This substitution produces a term for variable z3, which occurred in a subgoal of the main goal. This subgoal, which we have now proved, is

APPENDED(A1,A2,(if null(A1), A2;
    else cons(car(A1), append(cdr(A1),A2)))) .
Since A1 and A2 are Skolem constants originating from universal variables in a goal expression, they can be replaced by universally quantified variables when constructing an answer. Thus, we have proved:

(∀x1)(∀x2) APPENDED(x1,x2,(if null(x1), x2;
    else cons(car(x1), append(cdr(x1),x2)))) .
Now we recognize that the third argument of APPENDED in the above
expression is a recursive program satisfying our input/output condition.
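The synthesized term can be checked directly. In the Python sketch below (our notation, not the book's), the recursive program computed for z3 does satisfy the APPENDED relation:

```python
def appended(x, y, z):
    # the input/output condition: z is x followed by y
    return z == x + y

def append(x1, x2):
    # the synthesized program: if null(x1), x2; else cons(car(x1), append(cdr(x1), x2))
    return x2 if not x1 else [x1[0]] + append(x1[1:], x2)

for x1, x2 in ([[], [1]], [[1], [2, 3]], [[1, 2], [3, 4]]):
    assert appended(x1, x2, append(x1, x2))
```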
There are many subtleties involved in using induction in program
synthesis. A full account of the process is beyond the scope of this book
and would involve an explanation of methods for constructing auxiliary
functions, recursion within recursive programs, and the use of induction
hypotheses that are more general or "stronger" than the theorem to be proved. The special induction rule for APPENDED that we used in our example could be replaced by more general structural induction rule schemas. These would use well-founded ordering conditions more general than SUBLIST [see Manna and Waldinger (1979)].
6.5. A COMBINATION FORWARD AND
BACKWARD SYSTEM
Both the forward and the backward rule-based deduction systems had
limitations. The backward system could handle goal expressions of
arbitrary form but was restricted to fact expressions consisting of
conjunctions of literals. The forward system could handle fact expressions of arbitrary form but was restricted to goal expressions consisting of
disjunctions of literals. Can we combine these two systems into one that
would have the advantages of each without the limitations of either?
We next describe a production system that is based on a combination
of the two we have just described. The global database of this combined system consists of two AND/OR graph structures, one representing goals
and one representing facts. These AND/OR structures are initially set to represent the given goal and fact expressions whose forms are now
unrestricted.
These structures are modified by the B-rules and F-rules, respectively,
of our two previous systems. The designer must decide which rules are to
work on the fact graph and which are to work on the goal graph. We
continue to call these rules B-rules and F-rules even though our new
production system is really only proceeding in one direction as it modifies
its bipartite global database. We continue to restrict the B-rules to
single-literal consequents, and the F-rules to single-literal antecedents.
The major complication introduced by this combined production
system is its termination condition. Termination must involve the proper
kind of abutment between the two graph structures. These structures can
be joined by match edges at nodes labeled by literals that unify. We label
the match edges themselves by the corresponding mgus. In the initial
graphs, match edges between the fact and goal graphs must be between leaf
nodes. After the graphs are extended by B-rule and F-rule applications, the matches might occur at any literal node.
After all possible matches between the two graphs are made, we still
have the problem of deciding whether or not the expression at the root node of the goal graph has been proved from the rules and the expression at the root node of the fact graph. Our proof procedure should terminate
only when such a proof is found (or when we can conclude that one
cannot be found within given resource limits).
One simple termination condition is a straightforward generalization
of the procedure for deciding whether the root node of an AND/OR graph is "solved." This termination condition is based on a symmetric relationship, called CANCEL, between a fact node and a goal node.
CANCEL is defined recursively as follows: Two nodes n and m CANCEL each other if one of (n, m) is a fact node and the other a goal node, and if

    n and m are labeled by unifiable literals, or

    n has an outgoing k-connector to a set of successors {sᵢ} such that CANCEL(sᵢ,m) holds for each member of the set.
When the root node of the goal graph and the root node of the fact
graph CANCEL each other, we have a candidate solution. The graph
structure, within the goal and fact graphs, that demonstrates that the goal
and fact root nodes CANCEL each other is called a candidate CANCEL
graph. The candidate solution is an actual solution if all of the match
mgus in the candidate CANCEL graph are consistent.
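The recursive CANCEL test is easy to sketch. In the toy Python below (our model: each node carries either a literal or a list of k-connectors, and literal matching is simplified to equality of ground literals rather than full unification):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    literal: object = None                          # e.g., ("P", "A"); None for connector-only nodes
    connectors: list = field(default_factory=list)  # each connector is a list of successor Nodes

def cancel(n, m):
    """CANCEL(n, m) for a fact node and a goal node (in either order)."""
    if n.literal is not None and m.literal is not None and n.literal == m.literal:
        return True
    # some k-connector of n whose successors all CANCEL m ...
    if any(all(cancel(s, m) for s in conn) for conn in n.connectors):
        return True
    # ... or, symmetrically, a k-connector of m whose successors all CANCEL n
    return any(all(cancel(n, s) for s in conn) for conn in m.connectors)

# Goal P(A) and Q(B) (one 2-connector); facts P(A), Q(B) (two 1-connectors).
goal = Node(connectors=[[Node(literal=("P", "A")), Node(literal=("Q", "B"))]])
fact = Node(connectors=[[Node(literal=("P", "A"))], [Node(literal=("Q", "B"))]])
assert cancel(goal, fact)

# A disjunctive fact B or C (one 2-connector) does not CANCEL the lone goal B:
disj = Node(connectors=[[Node(literal=("B",)), Node(literal=("C",))]])
assert not cancel(Node(literal=("B",)), disj)
```

With full unification, each literal match would also contribute an mgu, and the candidate CANCEL graph is an actual solution only if those mgus are consistent, as stated above.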
As an example, we show the matches between an initial fact graph and an initial goal graph in Figure 6.30.

Fig. 6.30 An example CANCEL graph. [Figure not reproduced; it shows an initial goal graph and an initial fact graph joined by match edges.]

A consistent candidate CANCEL graph is indicated by the darkened arcs. The mgus of each of the fact-goal
node matches are shown next to the match edges, and the unifying
composition of all of these mgus is {f(A)/v, A/y}.
Note that our CANCEL graph method treats conjunctively related
goal nodes correctly. Each conjunct must be proved before the parent is
proved. Disjunctively related fact nodes are treated in a similar manner.
In order to use one member of a disjunction in a proof, we must be able to
prove the same goal using each of the disjuncts separately. This process
implements the "reasoning-by-cases" strategy.
As the AND/OR search graphs are developed by application of
B-rules and F-rules, substitutions are associated with each rule application. All substitutions in a solution graph, including the mgus obtained in
rule matches and the mgus obtained between matching fact and goal
literals, must be consistent.
Fig. 6.31 The termination check fails to detect a proof. [Figure not reproduced; it shows a goal graph and a fact graph.]
We note that pruning the AND/OR graphs by detecting inconsistent
substitutions may be impossible in systems that use both B-rules and
F-rules because, for these, both the fact and goal graphs change
dynamically, making it impossible to tell at any stage whether all possible
matches have already been made for a given literal node. Also, when using F-rules and B-rules simultaneously, it may be important to treat the appropriate instances of solved goals as facts, so that F-rules can be applied to them. (A solved goal is one that is CANCELed by the root node of the fact graph.)
The termination condition we have just described is adequate for many
problems but would fail to detect that the goal graph follows from the fact
graph in Figure 6.31. A more general sort of "fact-goal" resolution
operation would be needed for this problem than that embodied in our simple CANCEL-based termination check.
An alternative way of dealing with both arbitrary fact and goal
expressions is to use a (unidirectional) refutation system that processes facts only. The goal expression is first negated and then converted to
AND/OR form and conjoined with the fact expression. F-rules, the
contrapositive forms of B-rules, and restricted resolution operations are
then applied to this augmented fact graph until a contradiction is
produced.
6.6. CONTROL KNOWLEDGE FOR RULE-BASED
DEDUCTION SYSTEMS
Earlier we divided the knowledge needed by AI systems into three
categories: declarative knowledge, procedural knowledge, and control
knowledge. The production systems discussed in this chapter make it
relatively easy to express declarative and procedural knowledge. Experts
in various fields such as medicine and mathematics, who might not be
familiar with computers, have found it quite convenient and natural to
express their expertise in the form of predicates and implicational rules.
Nevertheless, there is still the need to supply control knowledge for
deduction systems. Efficient control strategies for the production systems
we describe might need to be rather complex. Embedding these
strategies into control programs requires a large amount of programming
skill. Thus, there is the temptation to leave the control strategy design
entirely to the AI expert. But much important control knowledge is specific to the domain in which the AI program is to operate. It is often
just as important for the physicians, chemists, and other domain experts
to supply control knowledge as it is for them to supply declarative and
procedural knowledge.
There are several examples of control knowledge that might be specific
to a particular application. Separating the rules into B-rules and F-rules
relieves the control strategy of the burden of deciding on the direction of
rule application. The best direction in which to apply a rule sometimes
depends on the domain. As an example of the importance of the direction
in which a rule is applied, consider rules that express taxonomic
information such as "all cats are animals," and "all dogs are animals":
CAT(x) ⇒ ANIMAL(x)
DOG(x) ⇒ ANIMAL(x)
If we had several such rules, one for each different type of animal, it
would be extremely inefficient to use any of them in the backward
direction. That is, one should not go about attempting to prove that Sam,
say, is an animal by first setting up the subgoal of proving that he is a cat
and, failing in that, trying the other subgoals. The taxonomic hierarchy branches out too extensively in the direction of search.
Whenever possible, the direction of reasoning ought to be in the direction of a decreasing number of alternatives. The rules above can safely be used in the forward direction. When we learn that Sam is a cat,
say, we can efficiently assert that he is also an animal. Following the
hierarchy in this direction does not lead to a combinatorial explosion because search is pinched off by the ever-narrowing number of categories.
The contrapositive form of CAT(x) ⇒ ANIMAL(x) is ~ANIMAL(x) ⇒ ~CAT(x). This rule should be used in the backward direction only. That is, to prove that Sam is not a cat, it is efficient to
attempt to prove that he is not an animal. Again, search is pinched off by
the narrow end of the taxonomic hierarchy.
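The forward use can be sketched in a few lines of Python (our encoding: the rule table holds just the two taxonomy rules above, and asserting a new fact propagates it up the hierarchy):

```python
# F-rule direction for taxonomic rules: each predicate implies one above it.
FORWARD_RULES = {"CAT": "ANIMAL", "DOG": "ANIMAL"}

def assert_fact(facts, pred, individual):
    """Add (pred, individual), then apply F-rules to closure."""
    while (pred, individual) not in facts:
        facts.add((pred, individual))
        if pred not in FORWARD_RULES:
            break
        pred = FORWARD_RULES[pred]
    return facts

facts = assert_fact(set(), "CAT", "SAM")
print(facts)   # contains both ("CAT", "SAM") and ("ANIMAL", "SAM")
```

Trying the same rules backward would mean enumerating every animal category as a subgoal, which is exactly the combinatorial explosion warned against above.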
There is other important control information that might depend on the domain. In a rule of the form [P1 ∧ P2 ∧ ... ∧ PN] ⇒ Q, used as a
B-rule, the domain expert may want to specify the order in which the
subgoals should be attacked. For each of these subgoals, he may further
want to specify explicitly a set of B-rules to be used on them and the order
in which these B-rules should be applied. Similarly, whenever a rule of the form P ⇒ [Q1 ∧ ... ∧ QN] is used as an F-rule, he may want to
specify an additional set of F-rules that can now be applied and the order
in which these F-rules ought to be applied.
It may be appropriate for the control strategy to make other tests
before deciding whether to apply a B-rule or an F-rule. In an earlier
example, the transitivity of the "greater-than" predicate played an
important role. It would typically be inefficient to apply a transitivity rule
in the backward direction; but there may be specific cases in which it is
efficient to do so. Recall that the transitivity rule was of the form:
[(x > y) ∧ (y > z)] ⇒ (x > z) .
We might want to apply this rule as a B-rule if one of the subgoal
conjuncts could match an existing fact, for example. This conditional
application would greatly restrict the use of the rule. Application
conditions comprise important control knowledge.
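Such an application condition can be sketched as follows (our encoding: facts are hypothetical ground pairs (a, b) meaning a > b, and the transitivity B-rule is applied only when its first conjunct matches an existing fact):

```python
GT_FACTS = {(5, 3), (3, 2), (2, 1)}   # hypothetical facts: a > b

def greater(x, z, depth=10):
    if (x, z) in GT_FACTS:            # the subgoal matches a fact directly
        return True
    if depth == 0:
        return False
    # transitivity [(x > y) and (y > z)] => (x > z), but only for y's
    # already known from a fact x > y -- the application condition
    return any(greater(y, z, depth - 1) for (a, y) in GT_FACTS if a == x)

print(greater(5, 1))   # → True
```

Without the condition, the backward rule could chain through arbitrary intermediate terms; with it, each step is anchored to a known fact, greatly restricting the search.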
In order to use this sort of control knowledge, we need suitable
formalisms in which to represent it. There seem to be several approaches
to the problem. First, we could consider the control strategy problem
itself as a problem to be solved by another AI production system. The
object-level AI system would have declarative and procedural knowledge
about the applications domain; the meta-level AI system would have
declarative and procedural knowledge relevant to the control of the
object-level system. Such a scheme might conveniently allow the
formulation of object-level control knowledge as meta-level rules.
A second approach involves embedding some of the control knowledge into evaluation functions used by the control strategy. When a domain expert specifies that some conjunctive subgoal A, say, is to be solved before B, then we must arrange that the function used to order the AND nodes of a partial AND/OR solution graph places A before B in
the ordering. This approach has not been thoroughly explored.
A third method involves embedding the relevant control knowledge
right into the rules. This approach has been embodied in several
high-level AI programming languages. We attempt to describe the
essence of this approach in the following section.
6.6.1. F-RULE AND B-RULE PROGRAMS
Control knowledge specifies the order in which operations should be
performed: Do this before that, do this first, do this if that is true, and so
on. It is natural to attempt to represent this sort of knowledge in
programs. F-rules and B-rules can be considered programs that operate on facts and goals. The most straightforward solution to the control
problem is to embed control responsibility directly into the F-rules and
B-rules.
Just how much control should be given to the F-rules and B-rules? So
far, we have been considering one extreme (production systems) in which
a separate global control system retained total control and none was given to the rules. Let us now briefly investigate another extreme in which all
control is given over to the rules (with a consequent atrophying of the
global control system).
We want to retain the basic character of the F-rules and B-rules. That
is, F-rules should be called only when they can be applied to facts, and
B-rules should be called only when they can be applied to goals. The
calling mechanism should invoke rules only when new goals or facts are derived. This type of mechanism might be called goal· (fact-) directed
function invocation. An extremely simple scheme for performing this
invocation involves the following: When a new goal (fact) is created, all
of the rules that are applicable to this new goal (fact) are collected. One of these is then selected and given complete control. This program is then executed; it may set new goals (invoking other B-rules) or it may assert new facts (invoking other F-rules). In either case, the control structure is
otherwise much like that of conventional programs. A rule program runs
until it encounters a RETURN statement. It then returns control to the program from which it was invoked. While it is running, a rule program has complete control. If an executing rule program fails (for one of several reasons to be discussed later), control automatically backtracks to
the next highest choice point where another selection is made. Thus, the
scheme we are describing corresponds to a simple backtrack control regime in which all of the control information is embedded in the rules.
We elaborate later on the mechanism by which one of the many
possible applicable rules is selected for invocation. We must also describe
how consequents and antecedents of rules are represented in programs
and how matching is to be handled.
We next present a simplified syntax for our F- and B-rule programs.
(This syntax is related to, but not identical to, syntaxes of the high-level
AI languages PLANNER, QLISP, and CONNIVER.)
A goal or subgoal is introduced by a GOAL statement; for example,
GOAL (ANIMAL ?x). This statement is equivalent to the predicate calculus goal expression (∃x) ANIMAL(x). The variable x with a ? prefix is existentially quantified when it occurs in GOAL statements.
A new or inferred fact is added to the set of facts by an ASSERT
statement; for example,
ASSERT (CAT SAM)
or
ASSERT (DOG ?x).
The latter is equivalent to the predicate calculus expression (∀x) DOG(x). The variable x with a ? prefix is universally quantified
when it occurs in facts or in ASSERT statements.
F-rule and B-rule programs each have triggering expressions that are
called their patterns. For F-rule programs, the pattern is the antecedent of
the corresponding rule; for B-rule programs, the pattern is the con
sequent. For simplicity, we assume that a pattern consists of a single
literal only. Patterns can contain ?-variables, and these variables can be
matched against anything when invoking a program. Since F-rule
patterns are used only to match facts and B-rule patterns are used only to
match goals, the use of ?-variables in both patterns is consistent with our
assumptions about variable quantifications in facts and goals.
The body of rule programs contains, besides control information, that
part of the corresponding rule not in the pattern. Thus, F-rule programs
contain ASSERT statements corresponding to consequents, and B-rule
programs contain GOAL statements corresponding to antecedents. Any
variables in these statements that are the same as pattern variables are
preceded by a $ and are called $-variables. When a pattern is matched to a
fact or goal, the ?-variables are bound to the terms that they match. The
corresponding $-variables in the body of the program receive the same
bindings. These bindings also apply locally to subsequent statements in
the calling program that contained the GOAL or the ASSERT statement
that caused the match. Pattern matching then takes the place of
unification, and variable binding takes the place of substitution.
Using this syntax, we could represent the rule CAT(x) ⇒ ANIMAL(x) by the following simple F-rule program:

FR1 (CAT ?x)
    ASSERT (ANIMAL $x)
    RETURN

The pattern, (CAT ?x), occurs immediately after the name of the program FR1. In this case, the body of the program consists only of an ASSERT statement. The variable $x is bound to that entity to which ?x was matched when the pattern (CAT ?x) was matched against a fact.
Consider the rule ELEPHANT(x) ⇒ GRAY(x). This rule can be written as a B-rule program as follows:

BR1 (GRAY ?x)
    GOAL (ELEPHANT $x)
    ASSERT (GRAY $x)
    RETURN

The variable $x is bound to whatever individual matched ?x during the pattern match.
Mechanisms for applying rules to facts and goals can be simply
captured in programs, but we must also be able to match goals directly
against facts. This objective is accomplished simply by checking the facts
(in addition to the B-rule patterns) whenever a GOAL statement is
encountered. Ordinarily we would check the facts first.
Let's look at a simple example to see how these programs work and to gain familiarity with the syntax.
Suppose we have the following programs:
BR1 (BOSS-OF ?y ?z)
    GOAL (WORKS-IN ?x $y)
    GOAL (MANAGER $x $z)
    ASSERT (BOSS-OF $y $z)
    RETURN
(If y works in x and z is the manager of x, then z is the boss of y.) (Note that the B-rule program allows us naturally to specify the order in which conjunctive goals are to be solved. The variable $x in the second subgoal is bound to whatever is matched against ?x in the first subgoal.)
BR2 (HAPPY ?x)
    GOAL (MARRIED $x ?y)
    GOAL (WORKS-IN ?z $y)
    ASSERT (HAPPY $x)
    RETURN
(Happy is the person with a working spouse.)
BR3 (HAPPY ?x)
    GOAL (WORKS-IN P-D $x)
    ASSERT (HAPPY $x)
    RETURN
(If x works in the Purchasing Department, x is happy.)
BR4 (WORKS-IN ?x ?y)
    GOAL (MANAGER $x $y)
    ASSERT (WORKS-IN $x $y)
    RETURN
(If y is the manager of x, y works in x.)
Suppose the facts are as follows:
F1: MANAGER(P-D, JOHN-JONES)
F2: WORKS-IN(P-D, JOE-SMITH)
F3: WORKS-IN(S-D, SALLY-JONES)
F4: MARRIED(JOHN-JONES, MARY-JONES)
Consider the problem of finding the name of an employee who has a
happy boss. The query can be expressed by the following program:
BEGIN
    GOAL (BOSS-OF ?u ?v)
    GOAL (HAPPY $v)
    PRINT $u "has happy boss" $v
END
Let us trace a typical execution. We first encounter GOAL (BOSS-OF
?u ?v). Since no facts match this goal, we look for B-rules and find
BR1. The pattern match merely passes along the existential variables.
The computational environment is now as shown in Figure 6.32. The
asterisk marks the next statement to be executed, and the bindings that apply for a sequence of statements are shown at the top of the sequence.
The next statement encountered (after binding variables) is:
GOAL (WORKS-IN ?x ?u).
Here we have a match against F2 with ?x bound to P-D and ?u bound to
JOE-SMITH. Following the sequence of Figure 6.32, we next meet:
GOAL (MANAGER P-D ?v).
This statement matches F1, binding ?v to JOHN-JONES. We can now
assert BOSS-OF(JOE-SMITH, JOHN-JONES) and return to the
query program to encounter GOAL (HAPPY JOHN-JONES). Now
there are two different sequences of programs that might be used.
GOAL (HAPPY JOHN-JONES) might invoke either BR2 or BR3. We
leave it to the reader to trace through either or both of these paths.
A GOAL statement can FAIL if there are no facts or B-rules that match
its pattern. Suppose, for example, that we matched GOAL (WORKS-IN
?x ?u) against F3 instead of against F2. This match would have led to
an attempt to execute GOAL (MANAGER S-D ?v). The set of facts does
not include any information about the manager of the Sales Department.
BEGIN
    (bindings: ?u/?y, ?v/?z)
  * GOAL (WORKS-IN ?x $y)
    GOAL (MANAGER $x $z)
    ASSERT (BOSS-OF $y $z)
    RETURN
    GOAL (HAPPY $v)
    PRINT $u "has happy boss" $v
END

Fig. 6.32 A state in the execution of a query.
No B-rule applies either, so the GOAL statement FAILS. In such a case,
control backtracks to the previous choice point, namely, the pattern
match for GOAL (WORKS-IN ?x ?u). In addition to transferring
control, all bindings made since this choice point are undone. Now we
can use the ultimately successful match against F2.
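The whole trace above can be reproduced by a small backward-chaining interpreter. The Python sketch below is illustrative only; the generator-based backtracking and the variable-renaming scheme are our own rendering, not the book's mechanism. Facts are checked before B-rules, conjunctive subgoals are solved left to right, and abandoning a generator plays the role of undoing bindings when a GOAL FAILS.

```python
FACTS = [
    ("MANAGER", "P-D", "JOHN-JONES"),
    ("WORKS-IN", "P-D", "JOE-SMITH"),
    ("WORKS-IN", "S-D", "SALLY-JONES"),
    ("MARRIED", "JOHN-JONES", "MARY-JONES"),
]

RULES = [  # (consequent pattern, ordered subgoals), mirroring BR1-BR4
    (("BOSS-OF", "?y", "?z"),
     [("WORKS-IN", "?x", "?y"), ("MANAGER", "?x", "?z")]),
    (("HAPPY", "?x"),
     [("MARRIED", "?x", "?y"), ("WORKS-IN", "?z", "?y")]),
    (("HAPPY", "?x"),
     [("WORKS-IN", "P-D", "?x")]),
    (("WORKS-IN", "?x", "?y"),
     [("MANAGER", "?x", "?y")]),
]

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, env):               # chase a chain of variable bindings
    while is_var(t) and t in env:
        t = env[t]
    return t

def unify(a, b, env):
    if len(a) != len(b):
        return None
    env = dict(env)             # copy, so failure leaves no bindings behind
    for x, y in zip(a, b):
        x, y = walk(x, env), walk(y, env)
        if x == y:
            continue
        if is_var(x):
            env[x] = y
        elif is_var(y):
            env[y] = x
        else:
            return None
    return env

fresh = [0]                     # counter for renaming rule variables apart

def solve(goal, env):
    """Yield binding environments that satisfy goal (facts first)."""
    for fact in FACTS:
        e = unify(goal, fact, env)
        if e is not None:
            yield e
    for head, body in RULES:
        fresh[0] += 1
        rn = lambda lit: tuple(t + str(fresh[0]) if is_var(t) else t
                               for t in lit)
        e = unify(goal, rn(head), env)
        if e is not None:
            yield from conj([rn(g) for g in body], e)

def conj(goals, env):           # solve a conjunction, backtracking on failure
    if not goals:
        yield env
    else:
        for e in solve(goals[0], env):
            yield from conj(goals[1:], e)

# The query: GOAL (BOSS-OF ?u ?v), GOAL (HAPPY $v), PRINT ...
answer = None
for e in solve(("BOSS-OF", "?u", "?v"), {}):
    u, v = walk("?u", e), walk("?v", e)
    if next(solve(("HAPPY", v), e), None) is not None:
        answer = (u, v)
        break
print(answer[0], "has happy boss", answer[1])  # JOE-SMITH has happy boss JOHN-JONES
```

Note how the failed branch of the trace (matching WORKS-IN against F3) simply yields no environment, so the `for` loop moves on to the successful match against F2, which is the interpreter's version of backtracking to the previous choice point.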
Because rules are now programs, we can augment them with other
useful control statements. For example, we can include tests to decide
whether an F-rule or B-rule program ought to be applied. If the test
indicates inappropriateness of the program, we can execute a special
FAIL statement that causes backtracking. The general form of such a
condition statement is:
IF <condition> FAIL.
The <condition> can be an arbitrary program that evaluates to true
or false. Such statements are usually put at the beginning of the program
to trap cases where the program ought not to continue.
An important category of conditions involves testing to see if there is a
fact that matches a particular pattern. This testing is done by an IS
statement. The general form is:
IS <pattern>.
If <pattern> matches a fact, bindings are made (that apply locally to any following statements) and the program continues. Otherwise, the
statement FAILS and backtracking occurs.
Recall that earlier we mentioned that the transitivity rule for the
"greater-than" predicate might be used as a B-rule if one of the
antecedents was already a fact. We could implement such a B-rule as
follows:
BTRANS (G ?x ?z)
    IS (G $x ?y)
    GOAL (G $y $z)
    RETURN
Now if G(A,B) and G(B,C) were facts, we could use BTRANS to
prove G(A,C) as follows: First, we match BTRANS against
GOAL (G A C) and thus attempt to execute IS (G A ?y). This test is
successful, ?y is bound to B, and we next encounter GOAL (G B C). This
goal matches one of the facts directly, and we are finished. If the IS test
failed, we would not have used this transitivity B-rule and, thus, would
have avoided generating the subgoal. We'll see additional examples later
of the usefulness of applicability conditions.
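Under the same illustrative encoding as before, the BTRANS applicability test can be mimicked in a few lines of Python; `prove_G` and `g_facts` are hypothetical names, and the IS test appears as a direct membership scan of the facts before any subgoal is generated.

```python
g_facts = {("G", "A", "B"), ("G", "B", "C")}

def prove_G(x, z):
    """Prove G(x,z) directly, or via the transitivity rule guarded by IS."""
    if ("G", x, z) in g_facts:
        return True
    # IS (G $x ?y): the rule is applicable only if some fact G(x, y)
    # already exists ...
    for (pred, a, y) in g_facts:
        if pred == "G" and a == x:
            # ... and only then do we pose the subgoal GOAL (G $y $z)
            if prove_G(y, z):
                return True
    return False

print(prove_G("A", "C"))   # True: IS finds G(A,B), and G(B,C) is a fact
print(prove_G("C", "A"))   # False: the IS test finds no fact G(C, ?y)
```

When the IS test fails, no transitivity subgoal is ever generated, which is precisely the pruning effect described in the text.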
Another important type of control information might be called
"advice." At the time a GOAL statement is made, we may want to give
advice about the B-rules that might be used in attempting to solve it. This
advice can be in the form of a list of B-rules to be tried in order. Similarly,
ASSERT statements can be accompanied by a list of F-rules to be tried in
order. These lists can be dynamically modified by other programs, thus
enabling quite flexible operation.
There are other advantages of rule programs beyond those related to
control strategies. We can write very general procedures to transform
certain goals into subgoals, to evaluate goals, and to assert new facts. To
achieve these same effects by ordinary production rules could sometimes
be cumbersome.
Suppose, for example, that in doing inequality reasoning we encounter
the subgoal G(8,5). Now, as mentioned earlier, we certainly do not want
to include G predicates for all pairs of numbers. The effect of procedural
attachment to a "greater-than" computation can be achieved by the
following B-rule:
BG (G ?x ?y)
    IF (NOTNUM $x) FAIL
    IF (NOTNUM $y) FAIL
    IF (NOTG $x $y) FAIL
    ASSERT (G $x $y)
    RETURN
In this program, NOTNUM tests to see if its argument is not a number.
If NOTNUM returns T (i.e., if its argument is not a number), we FAIL
out of this B-rule. If both NOTNUMs return F, we stay in the B-rule and
use the program NOTG to see if the first numerical argument is greater than the second. If it is, we successfully bypass another FAIL and return.
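A Python rendering of this guarded B-rule might look as follows; this is a sketch under the assumption that the attached predicates are ordinary functions, and `solve_G` and `is_num` are illustrative names of our own.

```python
def is_num(t):
    """Mirror of NOTNUM's test: is t a number?"""
    return isinstance(t, (int, float))

def solve_G(x, y):
    """Succeed only if x and y are both numbers and x > y."""
    if not is_num(x):          # IF (NOTNUM $x) FAIL
        return False
    if not is_num(y):          # IF (NOTNUM $y) FAIL
        return False
    return x > y               # IF (NOTG $x $y) FAIL, else ASSERT (G $x $y)

print(solve_G(8, 5))    # True
print(solve_G(5, 8))    # False
print(solve_G("A", 5))  # False: FAIL out, leaving ordinary rule search
```

A FAIL here does not mean G is false, only that this procedurally attached rule declines to apply, so control would backtrack to other facts or rules.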
Similar examples could be given of procedural attachment in the
forward direction. Suppose that in a circuit analysis problem, it has been
computed that a 1/2 ampere current flows through a certain 1000 ohm
resistor named R3. After the current has been computed (but not before),
we may want to ASSERT the value of the voltage across this resistor.
Such an assertion could be appropriately made by the following general
F-rule:
FV (CURRENT ?R ?I)
    IF (NOTNUM (VALUE $R)) FAIL
    IF (NOTNUM $I) FAIL
    SET ?V (TIMES $I (VALUE $R))
    ASSERT (VOLTAGE $R $V)
    RETURN
Now when the statement ASSERT (CURRENT R3 0.5) is made, FV is
invoked. We compute VALUE(R3) to be 1000, so we pass through the
first NOTNUM. Similarly, since $I is bound to 0.5, we pass through the
second NOTNUM and encounter the SET statement. This binds ?V to
500, we assert (VOLTAGE R3 500), and return. In this case we have
attached a multiplication procedure that implements Ohm's law to the
predicate VOLTAGE.
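The forward direction can be sketched the same way. In the Python fragment below, names like `assert_current` and `value` are our own, and the Ohm's-law rule is hard-wired rather than pattern-directed; asserting a CURRENT fact triggers the multiplication and adds the VOLTAGE fact.

```python
facts = {("RESISTANCE", "R3", 1000)}

def value(r):
    """Look up the resistance of r, or None if it is unknown."""
    for (pred, name, v) in facts:
        if pred == "RESISTANCE" and name == r:
            return v
    return None

def assert_current(r, i):
    """Assert a CURRENT fact; the FV-style forward rule fires afterward."""
    facts.add(("CURRENT", r, i))
    v = value(r)
    if v is None:                          # IF (NOTNUM (VALUE $R)) FAIL
        return
    if not isinstance(i, (int, float)):    # IF (NOTNUM $I) FAIL
        return
    facts.add(("VOLTAGE", r, i * v))       # SET ?V (TIMES $I (VALUE $R))

assert_current("R3", 0.5)
print(("VOLTAGE", "R3", 500.0) in facts)   # True
```

As in the text, the voltage is computed only after the current is asserted; asserting a current for a resistor with no known resistance simply leaves no VOLTAGE fact behind.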
6.7. BIBLIOGRAPHICAL AND HISTORICAL
REMARKS
One of the reasons for the inefficiency of early resolution theorem-
proving systems is that they lacked domain-specific control knowledge.
The AI languages PLANNER [Hewitt (1972), Sussman, Winograd, and
Charniak (1971)], QA4 [Rulifson, Derksen, and Waldinger (1972)], and
CONNIVER [McDermott and Sussman (1972)] are examples of attempts
to develop deduction and problem-solving formalisms in which control information could be explicitly represented. Moore (1975a) discusses
some of the logical inadequacies of these languages and proposes some
remedies. Among other points, Moore notes: (a) clause form is an
inefficient representation for many wffs, (b) general implicational wffs should be used
as rules and these rules should be kept separate from facts,
and (c) the direction of rule use (forward or backward) is often an
important factor in efficiency.
Other researchers, too, moved away from resolution after its early
popularity. Bledsoe (1977) presents a thorough discussion of
"nonresolution" theorem proving. Examples of some nonresolution systems
include those of Bledsoe and Tyson (1978), Reiter (1976), Bibel and
Schreiber (1975), Nevins (1974), Wilkins (1974), and Weyhrauch (1980).
Many of the techniques for enhancing efficiency used by these
nonresolution systems can be used in the rule-based systems described in this
chapter, where the relationship with resolution is clear.
Unifying compositions of substitutions and their properties are
discussed by van Vaalen (1975) and by Sickel (1976), both of whom discuss
the importance of the use of these substitutions in theorem proving with
AND/OR graphs. Kowalski (1974b, 1979b) discusses the related process
of finding simultaneous unifiers.
The forward and backward rule-based deduction systems discussed in
this chapter are intended to be models of various rule-based systems used
in AI. The use of AND/OR graph structures (often called AND/OR goal
trees) in theorem proving has a long history; however, many systems that
have used them have important logical deficiencies. Our versions of these
systems have a stronger logical base than most existing systems. The
RGR operation used in our backward system is based on a similar
operation proposed by Moore (1975a). Loveland and Stickel (1976) and
Loveland (1978) also propose systems based on AND/OR graphs and
discuss relationships with resolution.
Human experts in some subject domains seem to be able to deduce
useful conclusions from rules and facts about which they are less than
completely certain. Extensions to rule-based deduction systems that
allow use of only partially certain rules and facts were made by Shortliffe
(1976) in the MYCIN system, for medical diagnosis and therapy selection.
We might describe MYCIN as a backward, rule-based deduction system
(without RGR) for the propositional calculus, augmented by the ability
to handle partially certain rules and facts. A technique based on the use of
Bayes' rule and subjective probabilities for dealing with uncertain facts and rules is described by Duda, Hart, and Nilsson (1976).
Checking the consistency of substitutions as search proceeds derives
from a paper by Sickel (1976). The use of connection graphs was
originally suggested by Kowalski (1975). Other authors who have used
various forms of connection graphs are Cox (1977), Klahr (1978), Chang
and Slagle (1979), and Chang (1979). Cox (1977) proposes an interesting
technique for modifying inconsistent solutions to make them consistent.
Most of these ideas were originally proposed as control strategies for
resolution refutation systems rather than for rule-based deduction
systems.
The use of a metasystem, with its own rules, to control a deduction
system has been suggested by several researchers, including Davis (1977),
de Kleer et al. (1979), and Weyhrauch (1980). Hayes (1973b) proposes a related idea.
Using deduction systems for intelligent information retrieval is dis
cussed in several papers in the volume by Gallaire and Minker (1978).
Wong and Mylopoulos (1977) discuss the relationships between data
models in database management and predicate calculus knowledge
representations in AI.
Bledsoe, Bruell, and Shostak (1978) describe a theorem-proving
system for inequalities. A system developed by Waldinger and Levitt
(1974) is able to prove certain inequalities arising in program verification
problems.
Our use of conditional substitutions is related to an idea proposed by
Tyson and Bledsoe (1979). Manna and Waldinger (1979) employ the idea
of conditional substitutions in their program synthesis system.
Green (1969a) described how theorem-proving systems could be used
both for performing computations and for synthesizing programs.
Program synthesis through deduction was also studied by Waldinger and
Lee (1969) and by Manna and Waldinger (1979). [For approaches to
program synthesis based on techniques other than deduction, see the
survey by Hammer and Ruth (1979). For a discussion of programming
"knowledge" needed by an automatic programming system, see Green
and Barstow (1978).] Our use of induction to introduce recursion is based
on a technique described in Manna and Waldinger (1979).
Using deduction systems to perform computations (and predicate logic
as a programming language) was advocated by Kowalski (1974a). Based
on these ideas, a group at the University of Marseille [see Roussel (1975),
and Warren (1977)] developed the PROLOG language. Warren and
Pereira (1977) describe PROLOG and compare it with LISP. Van Emden
(1977) gives a clear tutorial account of these ideas. One of the appealing
features of PROLOG is that it separates control information from logic
information in programming. This idea, first advocated by Hayes
(1973b), has also been advanced in Kowalski (1979a) and by Pratt (1977).
[For a contrary view, see Hewitt (1975, pp. 195ff.)]
The combined forward/backward deduction system and the CANCEL
relation for establishing termination is based on a paper by Nilsson
(1979).
Our section on F-rule and B-rule programs is based on ideas in the AI
languages PLANNER [Hewitt (1972), Sussman, Winograd, and Charniak
(1971)] and QLISP [Sacerdoti et al. (1976)]. [See also the paper by
Bobrow and Raphael (1974).]
EXERCISES
6.1 Represent the following statements as production rules for a
rule-based geometry theorem-proving system:
(a) Corresponding angles of two congruent triangles are
congruent.
(b) Corresponding sides of two congruent triangles are
congruent.
(c) If the corresponding sides of two triangles are congruent,
the triangles are congruent.
(d) The base angles of an isosceles triangle are congruent.
6.2 Consider the following piece of knowledge: Tony, Mike, and John
belong to the Alpine Club. Every member of the Alpine Club who is not
a skier is a mountain climber. Mountain climbers do not like rain, and
anyone who does not like snow is not a skier. Mike dislikes whatever
Tony likes and likes whatever Tony dislikes. Tony likes rain and snow.
Represent this knowledge as a set of predicate calculus statements
appropriate for a backward rule-based deduction system. Show how such
a system would answer the question: "Is there a member of the Alpine
Club who is a mountain climber but not a skier?"
6.3 A blocks-world situation is described by the following set of wffs:
ONTABLE(A)    CLEAR(E)
ONTABLE(C)    CLEAR(D)
ON(D,C)       HEAVY(D)
ON(B,A)       WOODEN(B)
HEAVY(B)      ON(E,B)
Draw a sketch of the situation that these wffs are intended to describe.
The following statements provide general knowledge about this blocks
world:
Every big, blue block is on a green block.
Each heavy, wooden block is big.
All blocks with clear tops are blue.
All wooden blocks are blue.
Represent these statements by a set of implications having single-literal
consequents. Draw a consistent AND/OR solution tree (using B-rules)
that solves the problem: "Which block is on a green block?"
6.4 Consider the following restricted version of a backward rule-based
deduction system: Only leaf nodes of the AND/OR graph can be
matched against rule consequents or fact literals, and the mgu of the
match is then applied to all leaf nodes in the graph. Explain why the
resulting system is not commutative. Show how such a system would
solve the problem of reversing the list (1,2), using the facts and rules of
Section 6.4. What sort of control regime did you use?
6.5 Discuss how a backward rule-based deduction system should deal
with each of the following possibilities:
(a) A subgoal literal is generated that is an instance of a higher goal
(i.e., one of its ancestor goals in the AND/OR graph).
(b) A subgoal literal is generated such that a higher goal is an
instance of the subgoal.
(c) A subgoal literal is generated that unifies with the negation of
a higher goal.
(d) A subgoal literal is generated that is identical to another subgoal
literal in the same potential solution graph.
6.6 Show how RGR can be used in a backward deduction system to
obtain a proof for the goal wff:

(∃x)(∀y)P(x,y) => (∀y)(∃x)P(x,y)
6.7 Propose a heuristic search method to guide rule selection in
rule-based deduction systems.
6.8 Although we have used AND/OR graphs in this chapter to
represent formulas, we have not advocated the use of decomposable
production systems for theorem proving. What is wrong with the idea of
decomposing a conjunctive goal formula, for example, and processing
each conjunct independently? Under what circumstances might
decomposition be a reasonable strategy?
6.9 Describe how to use a formula like EQUALS(f(x),g(h(x)))
as a "replacement rule" in a rule-based deduction system. What heuristic
strategies might be useful in using replacement rules?
6.10 Critically examine the following proposal:
An implication of the form (L1 ∧ L2) => W, where L1 and L2
are literals, can be used as an F-rule if it is first converted
to the equivalent form L1 => (L2 => W). The rule can be applied
when L1 matches a fact literal, and the effect of the rule is
to add the new F-rule L2 => W.
6.11 Deduction systems based on rule programs cannot (easily) perform
resolutions between facts or between goals. Why not?
6.12 Consider the following electrical circuit diagram:

[Circuit diagram: resistors R1 (2 ohms), R2, R3, and R4 (1/2 ohm).]
We represent the fact that resistors R1 and R4 are in series by the
assertion (SERIES R1 R4). We represent the fact that the current
through R1 is 2 amperes by the assertion (CURRENT R1 2). We
represent the fact that R1 has resistance 2 ohms by the assertion
(RESISTANCE R1 2), etc.
Write a forward rule program that expresses the fact that if a current / flows through a resistor R, then that same current flows through any resistor in series with R.
Write a backward rule program that expresses the fact that the voltage
across a resistor is equal to the current through it multiplied by its
resistance. Assuming that the forward program executes first (triggered
by the assertion about the current in R1), trace the effect of the following
GOAL statement:

GOAL (VOLTAGE R4 ?V).
6.13 Propose facts and rules involving the predicate MEMBER(x,y),
which is intended to mean that atom x is a member of the list of atoms y.
Use these facts and rules in a rule-based deduction system to prove the
goal wff MEMBER(3, cons(4, cons(2, cons(3, NIL)))). What control
information results in an efficient search for a proof? What fact would be
needed in order to prove ~MEMBER(3, cons(4, NIL))?
CHAPTER 7
BASIC PLAN-GENERATING SYSTEMS
In chapters 5 and 6 we saw that a wide class of deduction tasks could
be solved by commutative production systems. For many other problems
of interest in AI, however, the most natural formulations involve
noncommutative systems. Typical problems of this sort are ones where
goals are achieved by a sequence (or program) of actions. Robot problem
solving and automatic programming are two domains in which these
kinds of problems occur.
7.1. ROBOT PROBLEM SOLVING
Research on robot problem solving has led to many of our ideas about
problem-solving systems. Since robot problems are simple and intuitive,
we use examples from this domain to illustrate the major ideas. In the
typical formulation of a "robot problem" we have a robot that has a
repertoire of primitive actions that it can perform in some
easy-to-understand world. In the "blocks world," for example, we imagine a world
of several labeled blocks (like children's blocks) resting on a table or on each other and a robot consisting of
a moveable hand that is able to pick
up and move blocks. Many other types of robot problems have also been
studied. In some problems the robot is a mobile vehicle that performs
tasks such as moving objects from place to place through an environment
containing other objects.
Programming a robot involves integrating many functions, including
perception of the world around it, formulation of plans of action, and
monitoring of the execution of these plans. Here, we are concerned
mainly with the problem of synthesizing a sequence of robot actions that
will (if properly executed) achieve some stated goal, given some initial
situation.
The action synthesis part of the robot problem can be solved by a
production system. The global database is a description of the situation,
or state, of the world in which the robot finds itself, and the rules are
computations representing the robot's actions.
7.1.1. STATE DESCRIPTIONS AND GOAL DESCRIPTIONS
State descriptions and goals for robot problems can be constructed
from predicate calculus wffs, as discussed in chapter 4. As an example,
consider the robot hand and configuration of blocks shown in Figure 7.1.
This situation can be represented by the conjunction of formulas shown
in the figure. The formula CLEAR(B) means that block B has a clear
top; that is, no other block is on it. The ON predicate is used to describe
which blocks are (directly) on other blocks. The "robot" in this situation
is a simple hand that can move blocks about in a manner to be described
momentarily. The predicate HANDEMPTY has value T just when the robot hand is empty, as in the situation depicted. Of course, any finite
conjunction of formulas actually describes a family of different world situations, where each member can be regarded as an
interpretation
satisfying the formulas (as discussed in chapter 4). For brevity, however,
we usually use the phrase "the situation" rather than "the family of
situations."
Goal descriptions also can be expressed as predicate logic formulas.
For example, if we wanted the robot of Figure 7.1 to construct a stack of
blocks in which block B was on block C, and block A was on block B, we
might describe the goal as:
ON(B,C) ∧ ON(A,B).
[Figure: a robot hand above the blocks; C rests on A, and B sits on the table.]

CLEAR(B)        ON(C,A)
CLEAR(C)        ONTABLE(A)
HANDEMPTY       ONTABLE(B)

Fig. 7.1 A configuration of blocks.
Such a formula describes a family of world states, any one of which
suffices as a goal.
For ease of exposition, we place certain restrictions on the kinds of
formulas that we allow for descriptions of world states and goals. (Many
of these restrictions could be lifted by using some of the techniques described in the last chapter for dealing with complex wffs.) For goal (and subgoal) expressions, we allow conjunctions of literals only, and any variables in goal expressions are assumed to have existential quantifica
tion. For initial and intermediate state descriptions, we allow only
conjunctions of ground literals (i.e., literals without variables). The
formulas in Figure 7.1 clearly satisfy these restrictions.
7.1.2. MODELING ROBOT ACTIONS
Robot actions change one state, or configuration, of the world into
another. We can model these actions by F-rules that change one state
description into another. One simple, but extremely useful technique for
representing robot actions was employed by a robot problem-solving
system called STRIPS. This technique can be contrasted with our use of implicational rules as production rules, discussed in chapter 6. There,
when an implicational rule was applied to a global database, the database
was changed by appending additional structure, but nothing was deleted
from the database. In modeling robot actions, however, F-rules must be
able to delete expressions that might no longer be true. Suppose, for
example, that the robot hand of Figure 7.1 were to pick up block B. Then
certainly the expression ONTABLE(B) would no longer be true and
should be deleted by any F-rule modeling this pick-up action. F-rules of the STRIPS type specify the expressions to be deleted by listing them explicitly.
STRIPS-form F-rules consist of three components. The first is the
precondition formula. This component is like the antecedent of an
implicational rule. It is a predicate calculus expression that must logically
follow from the facts in the state description in order for the F-rule to be
applicable to that state description. Consistent with our restrictions on
the form of goal wffs, we assume here that the preconditions of our F-rules consist of a conjunction of
literals. Variables in these precondi
tion formulas are assumed to have existential quantification. To decide
whether or not a conjunction of literals (the precondition formula)
logically follows from another conjunction of literals (the facts) is
straightforward: It follows if there are literals among the facts that unify
with each of the precondition literals and if all of the mgu's are consistent
(that is, if these mgu's have a unifying composition). If such a match can
be found, we say that the precondition of the F-rule matches the facts. We
call the unifying composition the match substitution. For a given F-rule
and state description, there may be many match substitutions. Each leads
to a different instance of the F-rule that can be applied.
The second component of the F-rule is a list of literals (possibly
containing free variables) called the delete list. When an F-rule is applied
to a state description, the match substitution is applied to the literals in the delete list; and the ground instances thus obtained are deleted from the old state description as the first step of constructing the new one. We
assume that all of the free variables in the delete list occur as (existentially
quantified) variables in the precondition formula. This restriction ensures that any match instance of a delete list literal is a ground literal.
The third component is the add formula. It consists of a conjunction of
literals (possibly containing free variables) and is like the consequent of
an implicational F-rule. When an F-rule is applied to a state description, the match substitution is applied to the add formula and the resulting
match instance is added to the old state description (after the literals in the delete list are deleted) as the final step in constructing the new state description. Again we assume that all of the free variables in the add
formula occur in the precondition formula so that any match instance of an add formula will be a conjunction of ground literals. Again, it is
possible to lift some of these restrictions on F-rule components; we use
them solely because they make our presentation much simpler.
As an example of an F-rule, we model the action of picking up a block
from a table. Let us say that the preconditions for executing this action are
that the block be on the table, that the hand be empty, and that the block
have nothing on top of it. The effect of the action is that the hand is holding the block. We might represent such an action as follows:
pickup(x)
    Precondition: ONTABLE(x) ∧ HANDEMPTY ∧ CLEAR(x)
    Delete list:  ONTABLE(x), HANDEMPTY, CLEAR(x)
    Add formula:  HOLDING(x)
Since, with our restrictions, the precondition and add formulas are
conjunctions of literals, we can represent each of them by a set or list of
literals. Sometimes, as in the above example, the precondition formula
and the delete list contain identical literals. In our example, we have
chosen to include only HOLDING(x) in the add formula rather than,
additionally, the negations of literals in the delete list. For our purposes,
it will suffice merely to delete these literals from the state description.
We see that we can apply pickup(x) to the situation of Figure 7.1 only
if B is substituted for x. The new state description, in this case, would be
given by:

CLEAR(C)        ON(C,A)
ONTABLE(A)      HOLDING(B)
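The application of pickup(x) to the Figure 7.1 state can be sketched in Python. The encoding below (literals as tuples, variables as "?"-prefixed strings) is our own; it enumerates every match substitution for the precondition conjunction and then performs the delete-then-add step.

```python
def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def match(lit, fact, env):
    """Match one precondition literal against a ground fact."""
    if len(lit) != len(fact):
        return None
    env = dict(env)
    for p, f in zip(lit, fact):
        p = env.get(p, p)        # use an earlier binding if one exists
        if is_var(p):
            env[p] = f
        elif p != f:
            return None
    return env

def match_all(precond, state, env=None):
    """Yield every consistent match substitution for the precondition."""
    if env is None:
        env = {}
    if not precond:
        yield env
        return
    for fact in state:
        e = match(precond[0], fact, env)
        if e is not None:
            yield from match_all(precond[1:], state, e)

def ground(lit, env):
    return tuple(env.get(t, t) for t in lit)

def apply_rule(rule, state, env):
    """Delete, then add, under the match substitution env."""
    _, dels, adds = rule
    return (state - {ground(l, env) for l in dels}) \
           | {ground(l, env) for l in adds}

pickup = ([("ONTABLE", "?x"), ("HANDEMPTY",), ("CLEAR", "?x")],  # precondition
          [("ONTABLE", "?x"), ("HANDEMPTY",), ("CLEAR", "?x")],  # delete list
          [("HOLDING", "?x")])                                   # add formula

s0 = {("CLEAR", "B"), ("CLEAR", "C"), ("ON", "C", "A"),
      ("HANDEMPTY",), ("ONTABLE", "A"), ("ONTABLE", "B")}

for env in match_all(pickup[0], s0):
    print(env, "->", sorted(apply_rule(pickup, s0, env)))
```

Only the substitution binding ?x to B survives the whole precondition (CLEAR(A) is absent because C is on A), reproducing the new state description given above.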
Production systems using STRIPS-form F-rules are not, in general,
commutative because these rules may delete certain literals from a state
description. Such F-rules change one set of states to another set of states,
in contrast to rules based on implications, whose application merely restricts the original set of states. Special methods must be used with
STRIPS-form rules. These methods are the main focus of this chapter
and chapter 8.
7.1.3. THE FRAME PROBLEM
To use a familiar analogy, the changes between one state description
and another can be compared to changes between frames in an animated
film. In very simple animations, certain characters move in a fixed background from frame to frame. In more realistic (and expensive)
animations, many changes occur in the background also. A STRIPS F-rule (with short delete and add lists) treats most of the wffs in a state
description as fixed background.
The problem of specifying which wffs in a state description should
change and which should not is usually called the frame
problem in AI.
The best approach to dealing with the frame problem depends on the sort
of world states and actions that we are modeling. Speaking loosely, if the
components of a world state are very closely coupled or unstable, then
each action might have profound and global effects on the world state. In
such a world, picking up the top block from a stack of blocks, for
example, might topple the whole stack of blocks, causing other stacks to
topple also, in domino fashion. A simple STRIPS F-rule would not be an
appropriate action model in that kind of world.
Typically, the components of a world state are sufficiently decoupled to
permit us to assume that the effects of actions are relatively local. When
such an assumption is justified, STRIPS F-rules are efficient and
appropriate models of many types of actions.
Applying an F-rule to a state description can be regarded as simulating
the action represented by the F-rule. Simulations vary with respect to the
level of detail and accuracy with which they model actions. The F-rule
pickup(x), for example, is a much more approximate representation of
the pick-up action than a simulation program that took into account such
factors as the weight and size of blocks, friction in robot arm joints,
ambient temperature, etc. In the next chapter we argue that it is useful to
have models of actions at several levels of detail. Gross and approximate
models are useful for computing high-level plans; more accurate models are necessary for computing detailed
plans. Typically, the frame problem
is more critical for the detailed models because they must take into
account couplings among world state components that might be ignored at higher levels.
Another aspect of the frame problem concerns how to deal with
anomalous conditions. We can regard the F-rule pickup(x) as being an
appropriate model for the
normal operation of a picking-up action. But
suppose the robot arm is broken, or that the block being picked up is too
heavy, or that there is a power failure that prevents the motors in the arm
from operating, or that the block being picked up is glued to the table,
etc. Of course, we could include the negation of each of these anomalous
conditions in the precondition of the F-rule to render the rule inapplica
ble as appropriate. But there are too many such conditions (an infinite
number might be imagined), and normally the deviant conditions do not
hold. Yet, if any of them do hold, the simple F-rule model is inaccurate.
Several approaches to the problem of anomalous conditions have been
suggested, but none of these, so far, is compelling. If a hierarchy of action
models is used, it seems that the most detailed and accurate simulations
automatically take into account all of the conditions of which the system
can (by definition) be aware.
Let us leave the frame problem now and make use of the representa
tions that we have been discussing in systems for solving robot problems.
We begin with a forward production system.
7.2. A FORWARD PRODUCTION SYSTEM
The simplest type of robot problem-solving system is a production
system that uses the state description as the global database and the rules
modeling robot actions as F-rules. In such a system, we select applicable
F-rules to apply until we produce a state description that matches the
goal expression. Let us examine how such a system might operate in a
concrete example.
Consider the F-rules given below, in STRIPS-form, corresponding to a
set of actions for the robot of Figure 7.1.

1) pickup(x)
   P&D: ONTABLE(x), CLEAR(x), HANDEMPTY
   A: HOLDING(x)

2) putdown(x)
   P&D: HOLDING(x)
   A: ONTABLE(x), CLEAR(x), HANDEMPTY

3) stack(x,y)
   P&D: HOLDING(x), CLEAR(y)
   A: HANDEMPTY, ON(x,y), CLEAR(x)

4) unstack(x,y)
   P&D: HANDEMPTY, CLEAR(x), ON(x,y)
   A: HOLDING(x), CLEAR(y)
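These rules translate directly into data. The following is a minimal sketch (our own encoding, not a program from the text): a state is a set of ground-literal strings, and, since the precondition formula and the delete list coincide for these four rules, each rule is just a (name, precondition-and-delete set, add set) triple.

```python
# Sketch of the four STRIPS F-rules.  A state is a set of ground-literal
# strings; for these rules the precondition and delete list are identical.

def pickup(x):
    return (f"pickup({x})",
            {f"ONTABLE({x})", f"CLEAR({x})", "HANDEMPTY"},
            {f"HOLDING({x})"})

def putdown(x):
    return (f"putdown({x})",
            {f"HOLDING({x})"},
            {f"ONTABLE({x})", f"CLEAR({x})", "HANDEMPTY"})

def stack(x, y):
    return (f"stack({x},{y})",
            {f"HOLDING({x})", f"CLEAR({y})"},
            {"HANDEMPTY", f"ON({x},{y})", f"CLEAR({x})"})

def unstack(x, y):
    return (f"unstack({x},{y})",
            {"HANDEMPTY", f"CLEAR({x})", f"ON({x},{y})"},
            {f"HOLDING({x})", f"CLEAR({y})"})

def applicable(rule, state):
    _, pre, _ = rule
    return pre <= state            # every precondition literal is present

def apply_rule(rule, state):
    _, pre, add = rule
    return (state - pre) | add     # delete list == precondition here

# Initial state of Figure 7.1: C on A; A and B on the table; hand empty.
S0 = frozenset({"ON(C,A)", "ONTABLE(A)", "ONTABLE(B)",
                "CLEAR(B)", "CLEAR(C)", "HANDEMPTY"})
```

For example, applying unstack(C,A) to the initial state deletes HANDEMPTY, CLEAR(C), and ON(C,A), and adds HOLDING(C) and CLEAR(A).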
Note that in each of these rules, the precondition formula (expressed as a
list of literals) and the delete list happen to be identical. The first rule is
the same as the rule that we used as an example in the last section. The
others are models of actions for putting down, stacking, and unstacking blocks.
Suppose our goal is the state shown in Figure 7.2. Working forward
from the initial state description shown in Figure 7.1, we see that
pickup(B) and unstack(C,A) are the only applicable F-rules. Figure 7.3
shows the complete state-space for this problem, with a solution path
indicated by the dark branches. The initial state description is labeled
BASIC PLAN-GENERATING SYSTEMS
[Figure: block B stacked on block C.]
GOAL: [ON(B,C) ∧ ON(A,B)]
Fig. 7.2 Goal for a robot problem.
S0, and a state matching the goal is labeled G in Figure 7.3. (Contrary to
custom and merely to reveal symmetries in the problem, S0 is not the top
node in Figure 7.3.) Note that in this example, each F-rule has an inverse.
In this very simple example (with only 22 states in the entire
state-space), a forward production system, with an unsophisticated
control strategy, can quickly find a path to a goal state. For more complex
problems, we would expect, however, that a forward search to the goal
would generate a rather large graph and that such a search would be
feasible only if combined with a well-informed evaluation function.
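An unsophisticated control strategy of the kind just mentioned can be sketched as a breadth-first forward search. The code below is our own illustration (not from the text); it grounds the four rule schemas over the three blocks and searches forward from the Figure 7.1 state.

```python
# Sketch: breadth-first forward production system for the 22-state
# block-stacking problem.  A rule is (name, precondition-and-delete, add).

from collections import deque
from itertools import permutations

BLOCKS = ["A", "B", "C"]

def ground_rules():
    rules = []
    for x in BLOCKS:
        rules.append((f"pickup({x})",
                      {f"ONTABLE({x})", f"CLEAR({x})", "HANDEMPTY"},
                      {f"HOLDING({x})"}))
        rules.append((f"putdown({x})",
                      {f"HOLDING({x})"},
                      {f"ONTABLE({x})", f"CLEAR({x})", "HANDEMPTY"}))
    for x, y in permutations(BLOCKS, 2):
        rules.append((f"stack({x},{y})",
                      {f"HOLDING({x})", f"CLEAR({y})"},
                      {"HANDEMPTY", f"ON({x},{y})", f"CLEAR({x})"}))
        rules.append((f"unstack({x},{y})",
                      {"HANDEMPTY", f"CLEAR({x})", f"ON({x},{y})"},
                      {f"HOLDING({x})", f"CLEAR({y})"}))
    return rules

def forward_search(s0, goal):
    rules = ground_rules()
    frontier, seen = deque([(s0, [])]), {s0}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                  # goal expression matched
            return plan
        for name, pre, add in rules:
            if pre <= state:               # F-rule applicable
                succ = frozenset((state - pre) | add)
                if succ not in seen:
                    seen.add(succ)
                    frontier.append((succ, plan + [name]))
    return None

S0 = frozenset({"ON(C,A)", "ONTABLE(A)", "ONTABLE(B)",
                "CLEAR(B)", "CLEAR(C)", "HANDEMPTY"})
GOAL = {"ON(B,C)", "ON(A,B)"}
```

Breadth-first search returns a shortest path, here the six-step plan discussed in the next section.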
7.3. A REPRESENTATION FOR PLANS
We can construct the desired sequence of actions for achieving the goal
in our example by referring to the F-rules labeling the arcs along the
branch to the goal state. The sequence is: {unstack(C,A), putdown(C),
pickup(B), stack(B,C), pickup(A), stack(A,B)}. We call such a sequence
a plan for achieving the goal. (In this case all of the elements of
the plan refer to "primitive" actions. In chapter 8 we consider plans
whose elements might themselves be intermediate-level goals requiring
further and more detailed problem solving before being reduced to
primitive actions.)
For many purposes, it is useful to have additional information
included in a specification of a plan. We might want to know, for
example, what the relationships are between the F-rules and the preconditions that they provide for other F-rules. Such contextual
information can be provided conveniently by a triangular table whose entries correspond to the preconditions and additions of the F-rules in the plan.
[Figure: the complete state-space graph for this problem. Each node is labeled by a state description (a set of literals such as CLEAR(A), ON(B,C), ONTABLE(C), HANDEMPTY, HOLDING(B)), and each arc is labeled by the F-rule (pickup, putdown, stack, or unstack) that transforms one state into the other; the solution path is darkened.]
Fig. 7.3 The state-space for a robot problem.
An example of a triangle table is shown in Figure 7.4. It is a table whose
columns are headed by the F-rules in the plan. Let the leftmost column
be called the zero-th column; then the j-th column is headed by the j-th
F-rule in the sequence. Let the top row be called the first row. If there are
N F-rules in the plan sequence, then the last row is the (N + 1)-th row.
The entries in cell (i,j) of the table, for j > 0 and i < N + 1, are those
literals added to the state description by the j-th F-rule that survive as
preconditions of the i-th F-rule. The entries in cell (i,0), for i < N + 1,
are those literals in the initial state description that survive as preconditions
of the i-th F-rule. The entries in the (N + 1)-th row of the table are
then those literals in the original state description, and those added by the
various F-rules, that are components of the goal (and that survive the
entire sequence of F-rules).
Triangle tables can easily be constructed from the initial state
description, the F-rules in the sequence, and the goal description. These
tables are concise and convenient representations for robot plans. The
entries in the row to the left of the i-th F-rule are precisely the
preconditions of that F-rule. The entries in the column below the i-th
F-rule are precisely the add formula literals of that F-rule that are needed
by subsequent F-rules or that are components of the goal.
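The construction just described can be sketched directly in code (our own encoding, not from the text); a rule is a (name, precondition, add) triple whose precondition doubles as its delete list, and the table is a mapping from (row, column) to a set of literals.

```python
# Sketch of triangle-table construction from an initial state, a rule
# sequence, and a goal.  Rows run 1..N+1, columns 0..N.

def triangle_table(initial, rules, goal):
    n = len(rules)
    # sources[j] = literals contributed by column j: the initial state
    # for j = 0, or the add list of the j-th F-rule
    sources = [set(initial)] + [set(add) for _, _, add in rules]
    table = {}
    for i in range(1, n + 2):
        # row i needs the preconditions of rule i; row N+1 needs the goal
        needs = set(rules[i - 1][1]) if i <= n else set(goal)
        for j in range(i):
            surviving = set(sources[j])
            for k in range(j + 1, i):       # deletions by intervening rules
                surviving -= set(rules[k - 1][1])
            table[(i, j)] = needs & surviving
    return table

PLAN = [
    ("unstack(C,A)", {"HANDEMPTY", "CLEAR(C)", "ON(C,A)"},
     {"HOLDING(C)", "CLEAR(A)"}),
    ("putdown(C)", {"HOLDING(C)"},
     {"ONTABLE(C)", "CLEAR(C)", "HANDEMPTY"}),
    ("pickup(B)", {"ONTABLE(B)", "CLEAR(B)", "HANDEMPTY"},
     {"HOLDING(B)"}),
    ("stack(B,C)", {"HOLDING(B)", "CLEAR(C)"},
     {"HANDEMPTY", "ON(B,C)", "CLEAR(B)"}),
    ("pickup(A)", {"ONTABLE(A)", "CLEAR(A)", "HANDEMPTY"},
     {"HOLDING(A)"}),
    ("stack(A,B)", {"HOLDING(A)", "CLEAR(B)"},
     {"HANDEMPTY", "ON(A,B)", "CLEAR(A)"}),
]
INIT = {"ON(C,A)", "ONTABLE(A)", "ONTABLE(B)",
        "CLEAR(B)", "CLEAR(C)", "HANDEMPTY"}
GOAL = {"ON(B,C)", "ON(A,B)"}
```

Run on the six-step plan above, this reproduces the cells of Figure 7.4; for instance, cell (7,4) holds ON(B,C), the goal literal supplied by stack(B,C).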
Let us define the i-th kernel as the intersection of all rows below, and
including, the i-th row with all columns to the left of the i-th column. The
4th kernel is outlined by double lines in Figure 7.4. The entries in the i-th
kernel are then precisely the conditions that must be matched by a state
description in order that the sequence composed of the i-th and
subsequent F-rules be applicable and achieve the goal. Thus, the first
kernel, that is, the zero-th column, contains those conditions of the initial
state needed by subsequent F-rules and by the goal; the (N + 1)-th
kernel [i.e., the (N + 1)-th row] contains the goal conditions themselves.
These properties of triangle tables are very useful for monitoring the
actual execution of robot plans.
Since robot plans must ultimately be executed in the real world by a
mechanical device, the execution system must acknowledge the possibility that the actions in the plan may not accomplish their intended effects
and that mechanical tolerances may introduce errors as the plan is
executed. As actions are executed, unplanned effects might either place
us unexpectedly close to the goal or throw us off the track. These
problems could be dealt with by generating a new plan (based on an updated state description) after each execution step, but obviously, such
[Figure 7.4, reconstructed as a list of its nonempty cells. Column j is headed by the j-th F-rule of the plan {unstack(C,A), putdown(C), pickup(B), stack(B,C), pickup(A), stack(A,B)}; the 4th kernel is outlined by double lines in the original.]

row 1 (pre of unstack(C,A)): col 0: HANDEMPTY, CLEAR(C), ON(C,A)
row 2 (pre of putdown(C)):   col 1: HOLDING(C)
row 3 (pre of pickup(B)):    col 0: ONTABLE(B), CLEAR(B); col 2: HANDEMPTY
row 4 (pre of stack(B,C)):   col 2: CLEAR(C); col 3: HOLDING(B)
row 5 (pre of pickup(A)):    col 0: ONTABLE(A); col 1: CLEAR(A); col 4: HANDEMPTY
row 6 (pre of stack(A,B)):   col 4: CLEAR(B); col 5: HOLDING(A)
row 7 (goal):                col 4: ON(B,C); col 6: ON(A,B)

Fig. 7.4 A triangle table.
a strategy would be too costly, so we instead seek a scheme that can
intelligently monitor progress as a given plan is being executed.
The kernels of triangle tables contain just the information needed to
realize such a plan execution system. At the beginning of a plan
execution, we know that the entire plan is applicable and appropriate for
achieving the goal because the literals in the first kernel are matched by
the initial state description, which was used when the plan was created.
(Here we assume that the world is static; that is, no changes occur in the
world except those initiated by the robot itself.) Now suppose the system
has just executed the first i − 1 actions of a plan sequence. Then, in order
for the remaining part of the plan (consisting of the i-th and subsequent
actions) to be both applicable and appropriate for achieving the goal, the
literals in the i-th kernel must be matched by the new current state
description. (We presume that a sensory perception system continuously
updates the state description as the plan is executed so that this
description accurately models the current state of the world.) Actually,
we can do better than merely check to see if the expected kernel matches
the state description after an action; we can look for the highest
numbered matching kernel. Then, if an unanticipated effect places us
closer to the goal, we need only execute the appropriate remaining
actions; and if an execution error destroys the results of previous actions,
the appropriate actions can be re-executed.
To find the appropriate matching kernel, we check each one in turn
starting with the highest numbered one (which is the last row of the table)
and work backward. If the goal kernel (the last row of the table) is
matched, execution halts; otherwise, supposing the highest numbered
matching kernel is the /-th one, then we know that the /-th F-rule is
applicable to the current state description. In this case, the system executes the action corresponding to this /-th F-rule and checks the
outcome, as before, by searching again for the highest numbered matching kernel. In an ideal world, this procedure merely executes in order each action in the plan. In a real-world situation, on the other hand,
the procedure has the flexibility to omit execution of unnecessary actions
or to overcome certain kinds of failures by repeating the execution of
appropriate actions. Replanning is initiated when there are no matching
kernels.
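The monitoring cycle just described can be sketched as follows (our own encoding; the triangle table is rebuilt here so that the example stands alone, and a rule's precondition again doubles as its delete list).

```python
# Sketch of kernel-based plan execution monitoring.

def triangle_table(initial, rules, goal):
    n = len(rules)
    sources = [set(initial)] + [set(add) for _, _, add in rules]
    table = {}
    for i in range(1, n + 2):
        needs = set(rules[i - 1][1]) if i <= n else set(goal)
        for j in range(i):
            surviving = set(sources[j])
            for k in range(j + 1, i):
                surviving -= set(rules[k - 1][1])
            table[(i, j)] = needs & surviving
    return table

def kernel(table, i):
    """The i-th kernel: all cells (r, c) with r >= i and c < i."""
    lits = set()
    for (r, c), cell in table.items():
        if r >= i and c < i:
            lits |= cell
    return lits

def next_action(table, n, state):
    """Highest numbered matching kernel: n+1 means the goal is reached;
    i <= n means execute the i-th rule next; None means replan."""
    for i in range(n + 1, 0, -1):
        if kernel(table, i) <= state:
            return i
    return None

PLAN = [
    ("unstack(C,A)", {"HANDEMPTY", "CLEAR(C)", "ON(C,A)"},
     {"HOLDING(C)", "CLEAR(A)"}),
    ("putdown(C)", {"HOLDING(C)"},
     {"ONTABLE(C)", "CLEAR(C)", "HANDEMPTY"}),
    ("pickup(B)", {"ONTABLE(B)", "CLEAR(B)", "HANDEMPTY"},
     {"HOLDING(B)"}),
    ("stack(B,C)", {"HOLDING(B)", "CLEAR(C)"},
     {"HANDEMPTY", "ON(B,C)", "CLEAR(B)"}),
    ("pickup(A)", {"ONTABLE(A)", "CLEAR(A)", "HANDEMPTY"},
     {"HOLDING(A)"}),
    ("stack(A,B)", {"HOLDING(A)", "CLEAR(B)"},
     {"HANDEMPTY", "ON(A,B)", "CLEAR(A)"}),
]
INIT = {"ON(C,A)", "ONTABLE(A)", "ONTABLE(B)",
        "CLEAR(B)", "CLEAR(C)", "HANDEMPTY"}
GOAL = {"ON(B,C)", "ON(A,B)"}
```

After the first four actions execute correctly, the 5th kernel matches and pickup(A) is selected; if an execution error then picks up B instead of A (deleting ON(B,C)), the highest matching kernel drops to 4, so stack(B,C) is re-executed, exactly as in the text's example.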
As an example of how this process might work, let us return to our
block-stacking problem and the plan represented by the triangle table in
Figure 7.4. Suppose the system executes actions corresponding to the first
four F-rules and that the results of these actions are as planned. Now
suppose the system attempts to execute the pick-up-block-A action, but
the execution routine (this time) mistakes block B for block A and picks
up block B instead. [Assume again that the perception system accurately
updates the state description by adding HOLDING(B) and deleting
ON(B,C); in particular, it does not add HOLDING(A).] If there were
no execution error, the 6th kernel would now be matched; the result of
the error is that the highest numbered matching kernel is now kernel 4.
The action corresponding to stack(B,C) is thus re-executed, putting the
system back on the track.
The fact that the kernels of triangle tables overlap can be used to
advantage to scan the table efficiently for the highest numbered matching
kernel. Starting in the bottom row, we scan the table from left to right,
looking for the first cell that contains a literal that does not match the
current state description. If we scan the whole row without finding such a
cell, the goal kernel is matched; otherwise, if we find such a cell in column
i, the number of the highest numbered matching kernel cannot be greater
than i. In this case, we set a boundary at column i and move up to the
next-to-bottom row and begin scanning this row from left to right, but
not past column i. If we find a cell containing an unmatched literal, we
reset the column boundary and move up another row to begin scanning
that row, etc. With the column boundary set to k, the process terminates
by finding that the k-th kernel is the highest numbered matching kernel
when it completes a scan of the k-th row (from the bottom) up to the
column boundary.
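The boundary scan can be sketched as follows (our own encoding; the table construction is repeated so the example stands alone, with a rule's precondition doubling as its delete list).

```python
# Sketch of the bottom-up, left-to-right scan with a moving column
# boundary for finding the highest numbered matching kernel.

def triangle_table(initial, rules, goal):
    n = len(rules)
    sources = [set(initial)] + [set(add) for _, _, add in rules]
    table = {}
    for i in range(1, n + 2):
        needs = set(rules[i - 1][1]) if i <= n else set(goal)
        for j in range(i):
            surviving = set(sources[j])
            for k in range(j + 1, i):
                surviving -= set(rules[k - 1][1])
            table[(i, j)] = needs & surviving
    return table

def highest_matching_kernel(table, n, state):
    """Returns n+1 when the goal kernel matches, the kernel number
    otherwise, or None when even the first kernel fails (replan)."""
    boundary = n + 1                         # exclusive column bound
    for row in range(n + 1, 0, -1):          # bottom row first
        clean = True
        for col in range(min(row, boundary)):
            if not table.get((row, col), set()) <= state:
                boundary, clean = col, False # kernels above col ruled out
                break
        if clean and boundary >= row:        # rows row..n+1 matched up to row
            return row
    return None

PLAN = [
    ("unstack(C,A)", {"HANDEMPTY", "CLEAR(C)", "ON(C,A)"},
     {"HOLDING(C)", "CLEAR(A)"}),
    ("putdown(C)", {"HOLDING(C)"},
     {"ONTABLE(C)", "CLEAR(C)", "HANDEMPTY"}),
    ("pickup(B)", {"ONTABLE(B)", "CLEAR(B)", "HANDEMPTY"},
     {"HOLDING(B)"}),
    ("stack(B,C)", {"HOLDING(B)", "CLEAR(C)"},
     {"HANDEMPTY", "ON(B,C)", "CLEAR(B)"}),
    ("pickup(A)", {"ONTABLE(A)", "CLEAR(A)", "HANDEMPTY"},
     {"HOLDING(A)"}),
    ("stack(A,B)", {"HOLDING(A)", "CLEAR(B)"},
     {"HANDEMPTY", "ON(A,B)", "CLEAR(A)"}),
]
INIT = {"ON(C,A)", "ONTABLE(A)", "ONTABLE(B)",
        "CLEAR(B)", "CLEAR(C)", "HANDEMPTY"}
GOAL = {"ON(B,C)", "ON(A,B)"}
```

Because the kernels overlap, each cell is examined at most once; a mismatch at column c rules out every kernel numbered above c at one stroke.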
7.4. A BACKWARD PRODUCTION SYSTEM
7.4.1. DEVELOPMENT OF THE B-RULES
In order to construct robot plans in an efficient fashion, we often want
to work backward from a goal expression to an initial state description,
rather than vice versa. Such a system starts with a goal description (again
a conjunction of literals) as its global database and applies B-rules to this
database to produce subgoal descriptions. It successfully terminates
when it produces a subgoal description that is matched by the facts in the
initial state description.
Our first step in designing a backward production system is to specify a
set of B-rules that transform goal expressions into subgoal expressions.
One strategy is to use B-rules that are based on the F-rules that we have
just discussed. A B-rule that transforms a goal G into a subgoal G′ is
logically based on the corresponding F-rule that, when applied to a state
description matching G′, produces a state description matching G.
We know that the application of an F-rule to any state description
produces a state description that matches the add list literals. Therefore,
if a goal expression contains a literal, L, that unifies with one of the
literals in the add list of an F-rule, then we know that if we produce a state
description that matches appropriate instances of the preconditions of that F-rule, the F-rule can be applied to produce a state description matching L. Thus, the subgoal expression produced by a backward
application of an F-rule must certainly contain instances of the preconditions
of that F-rule. But if the goal expression contains other literals
(besides L), then the subgoal expression must also contain literals which,
after application of the F-rule, become those other literals (i.e.,
other than L) in the goal expression.
7.4.2. REGRESSION
To formalize what we have just stated, suppose that we have a goal
given by a conjunction of literals [L ∧ G1 ∧ … ∧ GN] and that we
want to use some F-rule (backward) to produce a subgoal expression.
Suppose an F-rule with precondition formula, P, and add formula, A,
contains a literal in A that unifies with L, with most general unifier u.
Application of u to the components of the F-rule creates an instance of
the F-rule. Certainly the literals in Pu are a subset of the literals of the
subgoal that we seek. We must also include the expressions G1′, …, GN′
in the complete subgoal. The expressions G1′, …, GN′ must be such that
the application of the instance of the F-rule to any state description
matching these expressions produces a state description matching
G1, …, GN. Each Gi′ is called the regression of Gi through the instance of
the F-rule. The process of obtaining Gi′ from Gi is called regression.
For F-rules specified in the simple STRIPS-form, the regression
procedure is quite easily described for ground instances of rules. (A
ground instance of an F-rule is an instance in which all of the literals in the
precondition formula, the delete list, and the add formula are ground
literals.) Let R[Q; Fu] be the regression of a literal Q through a ground
instance Fu of an F-rule with precondition, P, delete list, D, and add list,
A. Then,
if Qu is a literal in Au,
    R[Q; Fu] = T (True);
else, if Qu is a literal in Du,
    R[Q; Fu] = F (False);
else, R[Q; Fu] = Qu.

In simpler terms, Q regressed through an F-rule is trivially T if Q is one of
the add literals; it is trivially F if Q is one of the deleted literals; otherwise,
it is Q itself.
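For ground instances this three-way rule is a one-line function. The sketch below uses our own string encoding of literals; a rule instance is a (precondition, delete list, add list) triple of sets.

```python
# Sketch of R[Q; Fu] for ground F-rule instances.

def regress(q, rule):
    """Regress ground literal q through the ground rule instance."""
    pre, delete, add = rule
    if q in add:
        return "T"        # trivially true in the resulting state
    if q in delete:
        return "F"        # can never hold right after the rule
    return q              # unaffected by the rule

# The ground instance unstack(C,A) of the unstack schema:
unstack_CA = ({"HANDEMPTY", "CLEAR(C)", "ON(C,A)"},   # precondition
              {"HANDEMPTY", "CLEAR(C)", "ON(C,A)"},   # delete list
              {"HOLDING(C)", "CLEAR(A)"})             # add list
```

Thus HOLDING(C) regresses to T, HANDEMPTY regresses to F, and ONTABLE(B), being untouched by the rule, regresses to itself.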
Regressing expressions through incompletely instantiated F-rules is
slightly more complicated. We describe how we deal with incompletely
instantiated F-rules by some examples. Suppose the F-rule is unstack,
given earlier and repeated here:

unstack(x,y)
P&D: HANDEMPTY, CLEAR(x), ON(x,y)
A: HOLDING(x), CLEAR(y)

In particular, suppose we are considering the instance unstack(B,y),
perhaps because our goal is to produce HOLDING(B). This instance is
not fully instantiated. If we were to regress HOLDING(B) through this
F-rule instance, we would obtain T, as expected. (The literal
HOLDING(B) is unconditionally true in the state resulting after applying the
F-rule.) If we were to regress HANDEMPTY through this F-rule
instance, we would obtain F. (The literal HANDEMPTY can never be
true immediately after applying unstack.) If we were to regress
ONTABLE(C), we would obtain ONTABLE(C). (The literal
ONTABLE(C) is unaffected by the F-rule.)
Suppose we attempt to regress CLEAR(C) through this incompletely
instantiated instance of the F-rule. Note that if y were equal to C,
CLEAR(C) would regress to T; otherwise, it would simply regress to
CLEAR(C). We could summarize this result by saying that CLEAR(C)
regresses to the disjunction (y = C) ∨ CLEAR(C). (In order for
CLEAR(C) to hold after applying any instance of unstack(B,y), either y
must be equal to C or CLEAR(C) had to have held before applying the
F-rule.) Unfortunately, to accept a disjunctive subgoal expression would
violate our restrictions on the allowed forms of goal expressions. Instead,
when such a case arises, we produce two alternative subgoal expressions.
In the present example, one subgoal expression would contain the
precondition of unstack(B,C), and the other would contain the
uninstantiated precondition of unstack(B,y) conjoined with the literal
~(y = C).
A related complication occurs when we regress an expression matching
an incompletely instantiated literal in the delete list. Suppose, for
example, that we want to regress CLEAR(C) through unstack(x,B). If x
were equal to C, then CLEAR(C) would regress to F; otherwise, it
would regress to CLEAR(C). We could summarize this result by saying
that CLEAR(C) regressed to

[(x = C) ⇒ F] ∧ [~(x = C) ⇒ CLEAR(C)].

As a goal, this expression is equivalent to the conjunction

[~(x = C) ∧ CLEAR(C)].

The reader might ask what would happen if we were to regress
CLEAR(B) through unstack(B,y). In our example, we would obtain T
for the case y = B. But y = B corresponds to the instance unstack(B,B),
which really ought to be impossible because its precondition involves
ON(B,B). Our simple example would be made more realistic by adding
the precondition ~(x = y) to unstack(x,y).
In summary, a STRIPS-form F-rule can be used as a B-rule in the
following manner. The applicability condition of the B-rule is that the
goal expression contain a literal that unifies with one of the literals in the
add list of the F-rule. The subgoal expression is created by regressing the
other (the nonmatched) literals in the goal expression through the match
instance of the F-rule and conjoining these with the match instance of the
precondition formula of the F-rule.
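For the ground case, this B-rule recipe can be sketched directly (our own encoding; a rule instance is a (precondition, delete list, add list) triple of literal sets).

```python
# Sketch of backward application of a ground F-rule instance.

def apply_brule(goal, rule):
    """The rule must add some goal literal; the remaining goal literals
    are regressed and conjoined with the rule's precondition.  Returns
    the subgoal set, or None if the rule adds no goal literal or some
    regressed literal is F (an impossible subgoal)."""
    pre, delete, add = rule
    if not (goal & add):
        return None                  # applicability condition fails
    subgoal = set(pre)
    for g in goal - add:             # regress the nonmatched literals
        if g in delete:
            return None              # regresses to F: impossible
        subgoal.add(g)               # otherwise g regresses to itself
    return subgoal

# Ground instances used in the examples below:
stack_AB = ({"HOLDING(A)", "CLEAR(B)"},
            {"HOLDING(A)", "CLEAR(B)"},
            {"HANDEMPTY", "ON(A,B)", "CLEAR(A)"})
unstack_BA = ({"HANDEMPTY", "CLEAR(B)", "ON(B,A)"},
              {"HANDEMPTY", "CLEAR(B)", "ON(B,A)"},
              {"HOLDING(B)", "CLEAR(A)"})
```

Applied to the goal [ON(A,B) ∧ ON(B,C)] with stack(A,B), this yields the subgoal [ON(B,C) ∧ HOLDING(A) ∧ CLEAR(B)] of the first example below; applied to [CLEAR(A) ∧ HANDEMPTY] with a ground instance of unstack, HANDEMPTY regresses to F and the impossible subgoal is rejected.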
Let's consider a few more examples to illustrate the regression process.
Suppose our goal expression is [ON(A,B) ∧ ON(B,C)]. Referring to
the F-rules given earlier, there are two ways in which stack(x,y) can be
used on this expression as a B-rule. The mgu's for these two cases are
{A/x, B/y} and {B/x, C/y}. Let's consider the first of these. The
subgoal description is constructed as follows:

(1) Regress the (unmatched) expression ON(B,C)
through stack(A,B), yielding ON(B,C).

(2) Add the expressions HOLDING(A), CLEAR(B)
to yield, finally, the subgoal

[ON(B,C) ∧ HOLDING(A) ∧ CLEAR(B)].
Another example illustrates how subgoals having existentially
quantified variables are created. Suppose our goal expression is
CLEAR(A). Two F-rules have CLEAR on their add list. Let's consider
unstack(x,y). As a B-rule, the mgu is {A/y}, and the subgoal expression
created is [HANDEMPTY ∧ CLEAR(x) ∧ ON(x,A)]. In this
expression, the variable x is interpreted as existentially quantified. That is,
if we can produce a state in which there is a block that is on A and whose
top is clear, we can apply the F-rule, unstack, to this state to achieve a
state that matches the goal expression, CLEAR(A).
A final example illustrates how we might generate "impossible"
subgoal descriptions. Suppose we attempt to apply the B-rule version of
unstack to the goal expression [CLEAR(A) ∧ HANDEMPTY]. The
mgu is {A/y}. The regression of HANDEMPTY through unstack(x,y)
is F. Since no conjunction containing F can be achieved, we see that the
application of this B-rule has created an impossible subgoal. [That is,
there is no state from which the application of an instance of
unstack(x,y) produces a state matching CLEAR(A) ∧ HANDEMPTY.]
Impossible goal states might be detected in other ways also. In general,
we could use some sort of theorem prover to attempt to deduce a
contradiction. If a goal expression is contradictory, it cannot be achieved.
Checking for the consistency of goals is important in order to avoid
wasting effort attempting to achieve those that are impossible.
Sometimes the mgu of a match between a literal on the add list of an
F-rule and a goal literal does not further instantiate the F-rule. Suppose,
for example, that we want to use the STRIPS rule unstack(u,C) as a
B-rule applied to the goal [CLEAR(x) ∧ ONTABLE(x)]. The mgu is
{C/x}. Now, even though this substitution does not further instantiate
unstack(u,C), the substitution is used in the regression process. When
ONTABLE(x) is regressed through this instance of unstack(u,C), we
obtain ONTABLE(C).
7.4.3. AN EXAMPLE SOLUTION
Let us show how a backward production system, using the STRIPS
rules given earlier, might achieve the goal:
[ON(A,B) ∧ ON(B,C)].
In this particular example, the subgoal space generated by applying all
applicable B-rules is larger than the state space that we produced using F-rules. Many of the subgoal descriptions, however, are "impossible," that
is, either they contain F explicitly or rather straightforward theorem
proving would reveal their impossibility. Pruning impossible subgoals
greatly reduces the subgoal space.
In Figure 7.5 we show the results of applying some B-rules to our
example goal. (The tail of each B-rule arc is adjacent to that goal literal used to match a literal in the add list of the rule.) Note in Figure 7.5 that
when unstack was matched against CLEAR(B), it was not fully
instantiated. As we discussed earlier, if a possible instantiation allows a
literal in the add list of
the rule to match a literal in the goal expression,
we make this instantiation explicit by creating a separate subgoal node
using it.
All but one of the tip nodes in this figure can be pruned. The tip nodes
marked "*" all represent impossible goals. That is, no state description
can possibly match these goals. In one of them, for example, we must
achieve the conjunct [HOLDING(B) ∧ ON(A,B)], an obvious
impossibility. We assume that our backward reasoning system has some sort
of mechanism for detecting such unachievable goals.
The tip node marked "**" can be viewed as a further specification of
the original goal (that is, it contains all of the literals in the original goal
plus some additional ones.) Heuristically, we might prune (or at least
delay expansion of) this subgoal node, because it is probably harder to
achieve than the original goal. Also, this subgoal is one of those produced
by matching CLEAR(B) against the add list of a rule. Since CLEAR(B)
is already true in the initial state, there are heuristic grounds against
[Figure: part of the backward search graph. The top goal [ON(A,B) ∧ ON(B,C)] is expanded by the B-rules stack(A,B) and stack(B,C); one successor, [HOLDING(A) ∧ CLEAR(B) ∧ ON(B,C)], is expanded in turn by pickup(A), stack(B,C), and unstack(x,B) (the latter with the condition (x ≠ A)).]
Fig. 7.5 Part of the backward (goal) search graph for a robot problem.
attempting to achieve it when it occurs in subgoal descriptions.
(Sometimes, of course, goal literals that already match literals in the initial state
might get deleted by early F-rules in the plan and need to be reachieved
by later F-rules. Thus, this heuristic is not always reliable.)
The pruning operations leave just one subgoal node. The immediate
successors of this subgoal are shown numbered in Figure 7.6. In this
figure, nodes 1 and 6 contain conditions on the value of the variable x.
(Conditions like these are inserted by the regression process when the
delete list of the rule contains literals that might match regressed literals.)
Both nodes 1 and 6 can be pruned in any case, because they contain the
literal F, which makes them impossible to achieve. Note also that node 2
is impossible to achieve because of the conjunction
HOLDING(B) ∧ ON(B,C). Node 4 is identical to one of its ancestors (in
Figure 7.5), so it can be pruned also. (If a subgoal description is merely
implied by one of its ancestors instead of being identical to one of them,
[Figure: the six numbered successors of the surviving subgoal node, generated by B-rules including unstack(x,A), unstack(x,B), pickup(A), and putdown(A). Nodes 1 and 6 contain the literal F (from regressing ~HANDEMPTY) together with conditions such as (x ≠ A); node 3, from pickup(A), is [ONTABLE(A) ∧ CLEAR(A) ∧ HANDEMPTY ∧ ON(B,C) ∧ CLEAR(B)].]
Fig. 7.6 Continuation of the backward search graph.
[Figure: the goal space below node 3, searched further via B-rules such as unstack(B,y); the darkened branch ends at a subgoal node that matches the initial state description.]
Fig. 7.7 Conclusion of the backward search graph.
we cannot, in general, prune it. Some of the successors generated by the
ancestor might have been impossible because literals in the ancestor, but
not in the subgoal node, might have regressed to F.)
These pruning operations leave us only nodes 5 and 3. Let's examine
node 5 for a moment. Here we have an existential variable in the goal
description. Since the only possible instances that can be substituted for x (namely, B and C in this case) lead to impossible goals, we are justified in
pruning node 5 also.
In Figure 7.7 we show part of the goal space below node 3, the sole
surviving tip node from Figure 7.6. This part of the space is a bit more
branched than before, but we soon find a solution. (That is, we produce a
subgoal description that matches the initial state description.) If we
follow the B-rule arcs back to the top goal (along the darkened branches),
we see that the following sequence of F-rules solves our problem:
{unstack(C,A), putdown(C), pickup(B), stack(B,C), pickup(A), stack(A,B)}.
7.4.4. INTERACTING GOALS
When literals in a goal description survive into descendant descriptions,
some of the same B-rules are applicable to the descendants as were
applicable to the original goal. This situation can involve us in a search
through all possible orderings of a sequence of rules before one that is
acceptable is found. In problems for which several possible orderings of
the different rules are acceptable, such a search is wastefully redundant.
This efficiency problem is the same one that led us to the concept of
decomposable systems.
One way to avoid the redundancy of multiple solutions to the same
goal component in different subgoals is to isolate a goal component and
work on it alone until it is solved. After solving one of the components, by
finding an appropriate sequence of F-rules, we can return to the
compound goal and select another component, and so on. This process is
related to splitting or decomposing compound (i.e., conjunctive) goals
into single-literal components and suggests the use of decomposable
systems.
If we attempted to use a decomposable system to solve our example
block-stacking problem, the compound goal would be split as shown in
Figure 7.8. Suppose the initial state of the world is as shown in Figure 7.1.
If we work on the component goal ON(B,C) first, we easily find the
solution sequence {pickup(B), stack(B,C)}. But if we apply this
sequence, the state of the world would change, so that a solution to the
other component goal, ON(A,B), would become more difficult.
Furthermore, any solution to ON(A,B) from this state must "undo" the
achieved goal, ON(B,C). On the other hand, if we work on the goal
ON(A,B) first, we find we can achieve it by the sequence
{unstack(C,A), putdown(C), pickup(A), stack(A,B)}. Again, the state of the world
would change to one from which the other component goal, ON(B,C),
would be harder to solve. There seems no way to solve this problem by
selecting one component, solving it, and then solving the other component
without undoing the solution to the first.
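The interaction can be exhibited concretely with the rule encoding used earlier (a sketch of our own; the precondition of each rule doubles as its delete list). Achieving ON(B,C) first leaves a state in which A is still buried under C, so reaching HOLDING(A) requires first unstacking B from C, which deletes the achieved literal ON(B,C).

```python
# Sketch: solving the component goal ON(B,C) first leaves ON(A,B)
# unachievable without undoing ON(B,C).

def apply_seq(state, steps):
    for pre, add in steps:               # pre doubles as the delete list
        assert pre <= state, "rule not applicable"
        state = (state - pre) | add
    return state

S0 = {"ON(C,A)", "ONTABLE(A)", "ONTABLE(B)",
      "CLEAR(B)", "CLEAR(C)", "HANDEMPTY"}

# Solve ON(B,C) alone with {pickup(B), stack(B,C)}:
s = apply_seq(S0, [
    ({"ONTABLE(B)", "CLEAR(B)", "HANDEMPTY"}, {"HOLDING(B)"}),
    ({"HOLDING(B)", "CLEAR(C)"}, {"HANDEMPTY", "ON(B,C)", "CLEAR(B)"}),
])
# Now ON(B,C) holds, but C is still on A, so CLEAR(A) is false: to pick
# up A we must first unstack C, and to unstack C we must first unstack
# B from C, destroying ON(B,C).
```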
We say that the component goals of this problem interact: solving one
goal undoes an independently derived solution to the other. In general,
when a forward production system is noncommutative, the corresponding
backward system is not decomposable and cannot work on component
goals independently. Interactions caused by the noncommutative
effects of F-rule applications prevent us from successfully using
the strategy of combining independent solutions for each component.
In our example problem, the component goals are highly interactive.
But in more typical problems, we might expect that component goals
would occasionally interact but often would not. For such problems, it
might be more efficient to assume initially that the components of
compound goals can be solved separately, handling interactions, when
they arise, by special mechanisms, rather than assuming that all
compound goals are likely to interact. In the next section we describe a
problem-solving system named STRIPS that is based on this general strategy.
Fig. 7.8 Splitting a compound goal.
7.5. STRIPS
The STRIPS system was one of the early robot problem-solving
systems. STRIPS maintains a "stack" of goals and focuses its
problem-solving effort on the top goal of the stack. Initially, the goal stack contains
just the main goal. Whenever the top goal in the goal stack matches the
current state description, it is eliminated from the stack, and the match
substitution is applied to the expressions beneath it in the stack.
Otherwise, if the top goal in the goal stack is a compound goal, STRIPS
adds each of the component goal literals, in some order, above the
compound goal in the goal stack. The idea is that STRIPS works on each
of these component goals in the order in which they appear on the stack.
When all of the component goals are solved, it reconsiders the compound
goal again, re-listing the components on the top of the stack if the
compound goal does not match the current state description. This
reconsideration of the compound goal is the (rather primitive) safety
feature that STRIPS uses to deal with the interacting goal problem. If solving one component goal undoes an already solved component, the
undone goal is reconsidered and solved again if needed.
When the top (unsolved) goal on the stack is a single-literal goal,
STRIPS looks for an F-rule whose add list contains a literal that can be
matched to it. The match instance of this F-rule then replaces the single-literal goal at the top of the stack. On top of the F-rule is then
added the match instance of its precondition formula, P. If P is
compound and does not match the current state description, its
components are added above it, in some order, on the stack.
When the top item on the stack is an F-rule, it is because the
precondition formula of this F-rule was matched by the current state
description and removed from the stack. Thus, the F-rule is applicable,
and it is applied to the current state description and removed from the
top of the stack. The new state description is now used in place of the original one, and the system keeps track of the F-rule that has been
applied for later use in composing a solution sequence.
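The goal-stack discipline just described can be sketched as follows. This is a much-simplified illustration of our own, not the original program: it handles ground rules only, performs no search over alternative rule choices (it simply takes the first rule whose add list achieves the goal), and, for brevity, is supplied only with the ground rule instances that the example below needs.

```python
# Simplified sketch of STRIPS's goal-stack control loop.
# A rule is (name, precondition, add); the precondition doubles as the
# delete list, as for all the F-rules of this chapter.

RULES = [
    ("unstack(C,A)", frozenset({"HANDEMPTY", "CLEAR(C)", "ON(C,A)"}),
     frozenset({"HOLDING(C)", "CLEAR(A)"})),
    ("stack(C,B)", frozenset({"HOLDING(C)", "CLEAR(B)"}),
     frozenset({"HANDEMPTY", "ON(C,B)", "CLEAR(C)"})),
    ("pickup(A)", frozenset({"ONTABLE(A)", "CLEAR(A)", "HANDEMPTY"}),
     frozenset({"HOLDING(A)"})),
    ("stack(A,C)", frozenset({"HOLDING(A)", "CLEAR(C)"}),
     frozenset({"HANDEMPTY", "ON(A,C)", "CLEAR(A)"})),
]

def strips(state, main_goal, rules, limit=100):
    state = set(state)
    stack = [("goal", frozenset(main_goal))]
    plan = []
    while stack and limit > 0:
        limit -= 1
        kind, item = stack.pop()
        if kind == "rule":                   # precondition just matched:
            name, pre, add = item            # apply the F-rule
            state = (state - pre) | add
            plan.append(name)
        elif item <= state:                  # goal matches current state
            continue
        elif len(item) > 1:                  # compound goal: re-list it,
            stack.append(("goal", item))     # then its components on top
            for lit in sorted(item):
                stack.append(("goal", frozenset([lit])))
        else:                                # single-literal goal
            (lit,) = item
            rule = next(r for r in rules if lit in r[2])
            stack.append(("rule", rule))
            stack.append(("goal", rule[1]))  # its precondition formula

    return plan if not stack else None       # None: loop safeguard tripped

S0 = {"ON(C,A)", "ONTABLE(A)", "ONTABLE(B)",
      "CLEAR(B)", "CLEAR(C)", "HANDEMPTY"}
```

On the goal [ON(C,B) ∧ ON(A,C)] from the Figure 7.1 state, this sketch reproduces the four-step solution discussed below.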
We can view STRIPS as a production system in which the global
database is the combination of
the current state description and the goal
stack. Operations on this database produce changes to either the state
description or to the goal stack, and the process continues until the goal
stack is empty. The "rules" of this production system are then the rules
that transform one global database into another. They should not be
confused with the STRIPS rules that correspond to the models of robot
actions. These top-level rules change the global database, consisting of
both state description and goal stack. STRIPS rules are named in the goal
stack and are used to change the state description.
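The global-database view just described can be sketched as a single step function. The following is a minimal, runnable sketch under our own simplifying assumptions (ground string literals, a Python list as the goal stack, a fixed two-rule blocks-world repertoire, and no backtracking); it is not the full STRIPS matcher:

```python
# Global database = (state description, goal stack, plan so far).
# An F-rule is a tuple (name, precondition P, delete list D, add list A);
# single goals are ground string literals, compound goals are frozensets.
RULES = [
    ("unstack(C,A)", ["HANDEMPTY", "CLEAR(C)", "ON(C,A)"],
                     ["HANDEMPTY", "CLEAR(C)", "ON(C,A)"],
                     ["HOLDING(C)", "CLEAR(A)"]),
    ("stack(C,B)",   ["HOLDING(C)", "CLEAR(B)"],
                     ["HOLDING(C)", "CLEAR(B)"],
                     ["HANDEMPTY", "ON(C,B)", "CLEAR(C)"]),
]

def step(state, stack, plan):
    """Apply one top-level production to the global database."""
    top = stack[-1]
    if isinstance(top, tuple):                    # an applicable F-rule
        name, pre, dele, add = top
        return (state - set(dele)) | set(add), stack[:-1], plan + [name]
    if isinstance(top, frozenset):                # a compound goal
        if top <= state:
            return state, stack[:-1], plan        # matched: pop it
        return state, stack + sorted(top - state), plan  # re-list components
    if top in state:                              # a matched single literal
        return state, stack[:-1], plan
    for rule in RULES:                            # replace goal by an F-rule
        if top in rule[3]:                        # rule's add list contains it
            return state, stack[:-1] + [rule, frozenset(rule[1])], plan
    raise ValueError("no relevant F-rule for " + top)

state = {"CLEAR(B)", "CLEAR(C)", "ON(C,A)",
         "ONTABLE(A)", "ONTABLE(B)", "HANDEMPTY"}
stack = [frozenset({"ON(C,B)"})]
plan = []
while stack:                                      # run to a termination node
    state, stack, plan = step(state, stack, plan)
print(plan)   # ['unstack(C,A)', 'stack(C,B)']
```

Each call to step either pops a matched goal, re-lists the unmatched components of a compound goal, replaces a single-literal goal by a relevant F-rule and its precondition, or applies the F-rule on top of the stack; the loop ends at a termination node with an empty goal stack.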
The operation of the STRIPS system with a graph-search control
regime produces a graph of global databases, and a solution corresponds to a path in this graph leading from the start to a termination node. (A
termination node is one labeled by a database having an empty goal
stack.)
Let us see how STRIPS might solve a rather simple block-stacking
problem. Suppose the goal is [ON(C,B) ∧ ON(A,C)], and the initial
state is as shown in Figure 7.1. We note that this goal can be simply
accomplished by putting C on B and then putting A on C. We use the
same STRIPS rules as before.
In Figure 7.9 we show part of a graph that might be generated by
STRIPS during the solution of this example problem. (For clarity, we show a picture of the state of the blocks along with each state description.)
Since this problem was very simple, STRIPS quite easily obtains the
solution sequence {unstack(C,A), stack(C,B), pickup(A), stack(A,C)}.
STRIPS has somewhat more difficulty with the problem whose goal is
[ON(B,C) ∧ ON(A,B)]. Starting from the same initial configuration
of blocks, it is possible for STRIPS to produce a solution sequence longer
than needed, namely, {unstack(C,A), putdown(C), pickup(A),
stack(A,B), unstack(A,B), putdown(A), pickup(B), stack(B,C),
pickup(A), stack(A,C)}. The third through sixth rules represent an
unnecessary detour. This detour results in this case because STRIPS
decided to achieve ON(A,B) before achieving ON(B,C). The interaction
between these goals then forced STRIPS to undo ON(A,B) before
it could achieve ON(B,C).
[Fig. 7.9 A search graph produced by STRIPS. Each node pairs a state
description with a goal stack, starting from the initial state of Figure 7.1
with goal stack ON(C,B) ∧ ON(A,C). Branches that expand ON(A,C) first
are marked "Not a promising solution path." On the successful path, with
[A/y] the top subgoal matches the current state description, so
unstack(C,A) can be applied; the next two goals then match also, so
stack(C,B) can be applied; then pickup(A) applies, the next goal is
matched, stack(A,C) is applied, and the last remaining goal on the stack
is matched, leaving the goal stack NIL.]
7.5.1. CONTROL STRATEGIES FOR STRIPS
Several decisions must be made by the control component of the
STRIPS system. We'll mention some of these briefly. First, it must decide
how to order the components of a compound goal above the compound
goal in the goal stack. A reasonable approach is first to find all of those components that match the current state description. (Conceptually, they are put on the top of the stack and then immediately stripped off.) This step leaves only the unmatched goals to be ordered. We could create a
new successor node for each possible ordering (as we did in our examples) or we could select just one of them arbitrarily (perhaps that
goal literal heuristically judged to be the hardest) and create a successor
node in which only that component goal is put on the stack. The latter
approach is probably adequate because after this single goal is solved,
we'll confront the compound goal again and have the opportunity to select another one of its unachieved components.
When (existentially quantified) variables occur in the goal stack, the
control component may need to make a choice from among several possible instantiations. We can assume that a different successor can be created for each possible instantiation.
When more than one STRIPS F-rule would achieve the top goal on the
goal stack, we are again faced with a choice. Each relevant rule can
produce a different successor node.
A graph-search control strategy must be able to make a selection of
which leaf node to work on in the problem-solving graph. Any of the
methods of chapter 2 might be used here; in particular, we might develop
a heuristic evaluation function over these nodes taking into account, for example, such factors as length of the goal stack, difficulty of the problems on the goal stack, cost of the STRIPS F-rules, etc.
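As an illustration, one such evaluation function might be written as follows; the node representation and the weights are our own illustrative assumptions, not something prescribed by STRIPS:

```python
# A plausible leaf-node evaluation function combining the factors mentioned
# above: goal-stack length, rough goal difficulty, and plan cost so far.
# Lower values indicate more promising nodes.
def evaluate(node):
    """Score a search node; smaller is better."""
    unmatched = sum(1 for g in node["goal_stack"] if g not in node["state"])
    return (len(node["goal_stack"])          # length of the goal stack
            + 2 * unmatched                  # crude 'difficulty' estimate
            + node["plan_cost"])             # cost of F-rules applied so far

node = {"state": {"CLEAR(B)", "ONTABLE(A)"},
        "goal_stack": ["CLEAR(B)", "ON(A,B)"],
        "plan_cost": 1}
print(evaluate(node))   # 2 + 2*1 + 1 = 5
```

A graph-search control regime would expand, at each cycle, the leaf node with the lowest score.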
An interesting special case of STRIPS can be developed if we decide to
use a backtracking control regime instead of a graph-search control
regime. Here we can imagine a recursive function called STRIPS that
calls itself to solve the top goal on the stack. In this case, the explicit use of
a goal stack can be supplanted by the built-in stack mechanism of the
language (such as LISP) in which recursive STRIPS is implemented.
The program for recursive STRIPS would look something like the
following:
First, we set S, a global variable, to the initial state description. (We call
the program initially with the argument, G, the goal that STRIPS is
trying to achieve.)
Recursive Procedure STRIPS(G)
1 until S matches G, do: ; the main loop of STRIPS is iterative
2 begin
3   g ← a component of G that does not match S ; a nondeterministic
      selection and therefore a backtracking point
4   f ← an F-rule whose add list contains a literal that matches g
      ; another backtracking point
5   p ← precondition formula of appropriate instance of f
6   STRIPS(p) ; a recursive call to solve the subproblem
7   S ← result of applying appropriate instance of f to S
8 end
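A runnable rendering of this recursive procedure, under our own simplifying assumptions (ground literals only, no variable unification, a small fixed rule set, and an explicit depth bound in place of a fuller backtracking regime), might look like this:

```python
# A sketch of recursive STRIPS for ground (variable-free) F-rules.
# Each F-rule is (name, precondition list P, delete list D, add list A);
# the rule and literal names follow the chapter's blocks world.
RULES = [
    ("unstack(C,A)", ["HANDEMPTY", "CLEAR(C)", "ON(C,A)"],
                     ["HANDEMPTY", "CLEAR(C)", "ON(C,A)"],
                     ["HOLDING(C)", "CLEAR(A)"]),
    ("stack(C,B)",   ["HOLDING(C)", "CLEAR(B)"],
                     ["HOLDING(C)", "CLEAR(B)"],
                     ["HANDEMPTY", "ON(C,B)", "CLEAR(C)"]),
    ("pickup(A)",    ["ONTABLE(A)", "CLEAR(A)", "HANDEMPTY"],
                     ["ONTABLE(A)", "CLEAR(A)", "HANDEMPTY"],
                     ["HOLDING(A)"]),
    ("stack(A,C)",   ["HOLDING(A)", "CLEAR(C)"],
                     ["HOLDING(A)", "CLEAR(C)"],
                     ["HANDEMPTY", "ON(A,C)", "CLEAR(A)"]),
]

def strips(state, goal, plan, depth=12):
    """Solve an unmatched component of goal (step 3), backtracking over
    relevant F-rules (step 4), then reconsider the whole compound goal."""
    if all(g in state for g in goal):
        return state, plan
    if depth == 0:
        return None
    g = next(l for l in goal if l not in state)
    for name, pre, dele, add in RULES:
        if g not in add:
            continue
        sub = strips(state, pre, plan, depth - 1)    # step 6: solve P
        if sub is None:
            continue
        s1, p1 = sub
        s2 = (s1 - set(dele)) | set(add)             # step 7: apply f to S
        done = strips(s2, goal, p1 + [name], depth - 1)
        if done is not None:
            return done
    return None

initial = {"CLEAR(B)", "CLEAR(C)", "ON(C,A)",
           "ONTABLE(A)", "ONTABLE(B)", "HANDEMPTY"}
final, plan = strips(initial, ["ON(C,B)", "ON(A,C)"], [])
print(plan)   # ['unstack(C,A)', 'stack(C,B)', 'pickup(A)', 'stack(A,C)']
```

Run on the compound goal of Figure 7.9, this sketch reproduces the four-step solution found there.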
7.5.2. MEANS-ENDS ANALYSIS AND GPS
An early problem-solving system called GPS (standing for General
Problem Solver) used methods similar to those later used by STRIPS.
GPS used a technique for identifying some key F-rules, given a state
description, S, and a goal, G. The identification process first attempted to
calculate a difference between S and G. This difference-calculating
process was performed by a function that needed to be written especially
for each domain of application.
Differences were used to select “relevant” F-rules by accessing a
“difference table” in which F-rules were associated with differences. The
F-rules associated with a given difference are those F-rules that are
“relevant to reducing that difference.” The F-rules associated with each
difference were ordered according to relevance. A difference table had to
be provided for each domain of application. Once an F-rule was selected
as relevant to removing a difference, GPS worked recursively on the
preconditions for that F-rule. When these had been satisfied, the F-rule
was applied to the current state description, and the process continued.
Thus, we see that recursive GPS is very similar to (if slightly more
general than) recursive STRIPS. (Historically, the design of STRIPS was
motivated by GPS.) The program for recursive GPS might look
something like the following:
First, we set S, a global variable, to the initial state description. (We call
the program initially with the argument, G, the goal that GPS is trying to
achieve.)
Recursive Procedure GPS(G)
1 until S matches G, do: ; the main loop of GPS is iterative
2 begin
3   d ← a difference between S and G ; a backtracking point
4   f ← an F-rule relevant to reducing d ; another backtracking point
5   p ← precondition formula of appropriate instance of f
6   GPS(p) ; a recursive call to solve the subproblem
7   S ← result of applying appropriate instance of f to S
8 end
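A difference table can be sketched as a simple lookup structure; the table entries below are invented for a fragment of the blocks world and are only illustrative:

```python
# A GPS-style difference table: each difference maps to the F-rules deemed
# relevant to reducing it, ordered by relevance.  GPS's tables were
# hand-built for each domain of application; these entries are our own.
DIFFERENCE_TABLE = {
    "HOLDING(A)": ["pickup(A)", "unstack(A,C)"],
    "CLEAR(C)":   ["unstack(A,C)"],
    "ON(A,C)":    ["stack(A,C)"],
}

def relevant_rules(state, goal):
    """Compute the differences between S and G, then look up the F-rules
    relevant to reducing each one."""
    differences = [g for g in goal if g not in state]
    return [(d, DIFFERENCE_TABLE.get(d, [])) for d in differences]

state = {"ONTABLE(A)", "CLEAR(A)", "CLEAR(C)", "HANDEMPTY"}
print(relevant_rules(state, ["CLEAR(C)", "ON(A,C)"]))
# → [('ON(A,C)', ['stack(A,C)'])]
```

A difference with an empty rule list would send GPS backtracking to an earlier choice point.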
The process of identifying differences and selecting F-rules to reduce
them is called means-ends analysis. Recursive STRIPS can be regarded as
a special case of GPS, where differences between S and G are those
components of G unmatched by S and where all F-rules whose add list
contains a literal L are considered relevant to reducing the difference, L.
Although, originally, GPS worked recursively, as we have described,
we could also easily imagine a GPS system having a graph-search control
regime similar to that discussed for STRIPS.
7.5.3. A PROBLEM THAT STRIPS CANNOT SOLVE
STRIPS produces straightforward solutions to many problems, but, as
we have seen, there are some problems for which STRIPS may produce
solutions longer than necessary. Also, there are some very simple
problems for which it is impossible for STRIPS (as described) to produce
any solution at all. An example of a problem that STRIPS cannot solve is the problem of generating a program to switch the contents of two
memory registers in a computer.
Suppose we have two memory registers X and Y whose initial contents
are A and B respectively. We might represent this situation by the state
description [CONT(X,A) ∧ CONT(Y,B)], where CONT(X,A), for
example, means that register X has content A (i.e., program variable X
has value A ). In this example we must try not to be confused by the fact
that a program "variable," like X, is really a constant symbol of our predicate calculus language that refers to a definite object (a particular
memory register). Predicate calculus variables, like x and y, are used to
denote arbitrary program variables (like X) and their "values" (like A ).
To help avoid confusion, we purposely use the terms "register" and
"content" instead of "program variables" and "values."
Our goal for STRIPS is the expression
[CONT(X,B) ∧ CONT(Y,A)]. The only operation that we allow is the
assignment statement in which one register is "assigned" to another, that
is, its content is replaced by the content of the other. We can represent
such an assignment statement by an F-rule:
assign(u,r,t,s)
P: CONT(r,s) ∧ CONT(u,t)
D: CONT(u,t)
A: CONT(u,s)
[Fig. 7.10 A problem STRIPS cannot solve. Node 1 holds the initial state
description CONT(X,A), CONT(Y,B), CONT(Z,0) with goal stack
CONT(X,B) ∧ CONT(Y,A); node 2 splits the goal into its components.
Node 3 replaces the top goal CONT(X,B) by the rule instance
assign(X,r,t,B) with precondition CONT(r,B) ∧ CONT(X,t); this
precondition is matched with [Y/r, A/t], and applying assign produces
node 4, whose state description is CONT(X,B), CONT(Y,B), CONT(Z,0)
and whose goal stack still contains CONT(Y,A).]
This assignment statement might be read: Assign the register r (with
current content s) to the register u (with current content t). The result is
that the content of register u will be s, and the content of r will
remain s. The original content of u, namely t, is lost in this process.
A production system using this F-rule is noncommutative, because a
CONT relation is deleted by assign. As is well known to beginning
programming students, the destructive property of the assignment
statement requires that one store the content of either X or Y in a third
register before attempting an exchange. To make the problem more than fair for
STRIPS, we explicitly name this needed third register at the beginning of
the problem. This naming can be done by adding the fact CONT(Z,0) to the initial state description. (In the next chapter we discuss a way in
which additional registers could be created if the system decides it needs
them.)
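The effect of assign, and the need for the third register, can be checked with a few lines of code; the dictionary representation of register contents is our own:

```python
# The assign F-rule as code: assigning register r to register u makes u's
# content equal to r's and destroys u's old content (the deleted CONT(u,t)).
def assign(registers, u, r):
    """Replace the content of register u by the content of register r."""
    registers = dict(registers)       # work on a copy
    registers[u] = registers[r]       # CONT(u,t) deleted, CONT(u,s) added
    return registers

start = {"X": "A", "Y": "B", "Z": 0}

# Naive two-step swap: the first assignment loses A, so the swap fails.
bad = assign(assign(start, "X", "Y"), "Y", "X")
print(bad)        # {'X': 'B', 'Y': 'B', 'Z': 0}

# Saving X's content in the spare register Z first makes the swap succeed.
good = assign(assign(assign(start, "Z", "X"), "X", "Y"), "Y", "Z")
print(good)       # {'X': 'B', 'Y': 'A', 'Z': 'A'}
```

The failed two-step attempt is precisely the dead end STRIPS runs into in Figure 7.10.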
In Figure 7.10 we show an attempt by STRIPS at the solution to this
problem. Since the initial problem is completely symmetrical, it makes
no difference how we order the components of the initial compound goal
in node 1. At node 2, STRIPS quite reasonably decides to apply the
instance assign(X,r,t,B). This operation creates node 3. Now we see
STRIPS' fatal flaw: it is too anxious! It immediately decides that the top
goal of node 3 can be matched by the current state description with mgu
{Y/r, A/t}. This instance of assign unfortunately loses t, making the top
goal in node 4 unsolvable. Furthermore, there is no other match for the
top goal in node 3 with node 3's state description.
The only way that this problem could be solved would be to defer
temporarily matching the top goal of node 3, and to create a successor node with top goal CONT(r,B). Then perhaps in some ultimate
descendant, Z would be substituted for r. But to add this mechanism, of deferring goal matching, would greatly complicate STRIPS. Instead we describe in the next chapter some problem-solving systems that are
inherently more powerful than STRIPS.
7.6. USING DEDUCTION SYSTEMS TO GENERATE
ROBOT PLANS
From the examples given in this chapter, we see that the problem of
composing a sequence of actions has a straightforward formulation
involving STRIPS-form rules. A forward production system using these
rules is typically noncommutative because certain expressions may be
deleted when a rule is applied. We stress again that there is nothing
inherently commutative or noncommutative about robot problems
themselves: Commutativity (or its lack) depends entirely on the details of
the production system used to solve a problem. It is perfectly possible, for
example, to formulate robot problems so that they can be solved by
commutative production systems. One way to achieve such a commuta
tive formulation is to pose robot problems as theorems to be proved and
then use one of our commutative deduction systems. Formulating a robot
problem as a problem of deduction is, perhaps, a bit more complex and
awkward than using STRIPS-form rules, but theorem-proving
formulations have considerable theoretical interest and preceded STRIPS
historically. We describe two alternative approaches for posing robot
problems as theorem-proving problems.
7.6.1. GREEN'S FORMULATION
One of the first attempts to solve robot problems was by Green
(1969a), who formulated them in such a way that a resolution theorem-
proving system (a commutative system) could solve them. This
formulation involved one set of assertions that described the initial state
and another set that described the effects of the various robot actions on
states. To keep track of which facts were true in which state, Green
included a "state" or "situation" variable in each predicate. The goal
condition was then described by a formula with an existentially
quantified state variable. That is, the system would attempt to prove that
there existed a state in which a certain condition was true. A constructive proof method, then, could be used to produce the set of actions that
would create the desired state. In Green's system, all assertions (and the
negation of the goal condition) were converted to clause form for a resolution theorem prover, although other deduction systems could have
been used as well.
An example problem will help to illustrate exactly how this method
works. Unfortunately, the notation needed in these theorem-proving
formulations is a bit cumbersome, and the block-stacking examples that
we have been using need to be simplified somewhat to keep the examples
manageable.
Suppose we have the initial situation depicted in Figure 7.11. There are
just four discrete positions on a table, namely, D, E, F, and G; and there
are three blocks, namely, A, B and C, resting on three of the positions as
shown. Suppose we name this initial state SO. Then we denote the fact
that block A is on position D in SO by the literal ON(A,D,SO). The
state name is made an explicit argument of the predicate. The complete
configuration of blocks in the initial state is then given by the following
set of formulas:
ON(A,D,SO)
ON(B,E,SO)
ON(C,F,SO)
CLEAR(A,SO)
CLEAR(B,SO)
CLEAR(C,SO)
CLEAR(G,SO)
Now we need a way to express the effects that various robot actions
might have on the states. In theorem-proving formulations, we express
these effects by logical implications rather than by STRIPS-form rules.
For example, suppose the robot has an action that can "transfer" a block
x from position y to position z, where y and z might be either the names of
other blocks that block x might be resting on or the names of positions on
the table that block x might be resting on. Let us assume that both block x
and position z (the target position) must be clear in order to execute this
action. We model this action by the expression
"trans (x,y,z)"
When an action is executed in one state, the result is a new state. We
use the special functional expression do(action, state) to denote the
function that maps a state into the one resulting from an action. Thus, if
trans(x,y,z) is executed in state, s, the result is a state given by
do[trans(x,y,z),s].
The major effect of the action modeled by trans can then be formulated
as the following implication:
[CLEAR(x,s) ∧ CLEAR(z,s) ∧ ON(x,y,s) ∧ DIFF(x,z)]
  ⇒ [CLEAR(x,do[trans(x,y,z),s])
      ∧ CLEAR(y,do[trans(x,y,z),s])
      ∧ ON(x,z,do[trans(x,y,z),s])] .
(All variables in assertions have implicit universal quantification.)
[Blocks A, B, and C rest on table positions D, E, and F; position G is empty.]
Fig. 7.11 An initial configuration of blocks.
This formula states that if x and z are clear and if x is on y in state s, and
if x and z are different, then x and y will be clear and x will be on z in the
state resulting from performing the action trans(x,y,z) in state s. (The
predicate DIFF does not need a state variable because its truth value is
independent of state.)
But this formula alone does not completely specify the effects of the
action. We must also state that certain relations are unaffected by the
action. In systems like STRIPS, the F-rules use the convention that relations not explicitly named in the rule are unaffected. But here the
effects and "non-effects" alike need to be stated explicitly.
Unfortunately, in Green's formulation, we must have assertions for
each relation not affected by an action. For example, we need the
following assertion to express that the blocks that are not moved stay in the same position:
[ON(u,v,s) ∧ DIFF(u,x)]
  ⇒ ON(u,v,do[trans(x,y,z),s]) .
And we would need another formula to state that block u remains clear if
block u is clear when a block v (not equal to u ) is put on a block w (not
equal to u ).
These assertions, describing what stays the same during an action, are
sometimes called the frame assertions. In large systems, there may be
many predicates used to describe a situation. Green's formulation would
require (for each action) a separate frame assertion for each predicate.
This representation could be condensed if we used a higher order logic, in
which we could write a formula something like:
(∀P)[P(s) ⇒ P[do(action,s)]] .
But higher order logics have their own complications. (Later, we
examine another first-order logic formulation that does allow us to avoid
multiple frame assertions.)
After all of the assertions for actions are expressed by implications, we
are ready to attempt to solve an actual robot problem.
Suppose we wanted to achieve the simple goal of having block A on
block B. This goal would be expressed as follows:
(∃s)ON(A,B,s).
The problem can now be solved by finding a constructive proof of the
goal formula from the assertions. Any reasonable theorem-proving
method might be used.
As already mentioned, Green used a resolution system in which the
goal was negated and all formulas were then put into clause form. The
system then would attempt to find a contradiction, and an answer
extraction process would find the goal state that exists. This state would,
in general, be expressed as a composition of do functions, naming the
actions involved in producing the goal state. We show a resolution
refutation graph for our example problem in Figure 7.12 (the DIFF
predicate is evaluated, instead of resolved against). Applying answer
extraction to the graph of Figure 7.12 yields:
s1 = do[trans(A,D,B),SO],
which names the single action needed to accomplish the goal in this case.
Instead of resolution, we could have used one of the rule-based
deduction systems discussed in chapter 6. The assertions describing the initial state might be used as facts, and the action and frame assertions
might be used as production rules.
The example just cited is trivially simple, of course—we didn't even
need to use any of the frame assertions in this case. (We certainly would
have had to use them if, for example, our goal had been the compound
goal [ON(A,B,s) A ON(B,C,s)]. In that case, we would have had to
prove that B stayed on C while putting A on B.) However, in even slightly
more complex examples, the amount of theorem-proving search required
to solve a robot problem using this formulation can grow so explosively
that the method becomes quite impractical. These search problems
together with the difficulties caused by the frame assertions were the major impetus behind the development of the STRIPS problem-solving system.
7.6.2. KOWALSKI'S FORMULATION
Kowalski has suggested a different formulation. It simplifies the
statement of the frame assertions. What would ordinarily be predicates in
Green's formulation are made terms.
[Fig. 7.12 A refutation graph for a block-stacking problem. The trans
clause, ~CLEAR(x,s) ∨ ~CLEAR(z,s) ∨ ~ON(x,y,s) ∨ ~DIFF(x,z)
∨ ON(x,z,do[trans(x,y,z),s]), is resolved against the negated goal and
then against the initial-state clauses, step by step, until the empty clause
is produced.]
For example, instead of using the literal ON(A,D,SO) to denote the
fact that A is on D in state SO, we use the literal HOLDS[on(A,D),SO].
The term on(A,D) denotes the "concept" of A being on D; such
concepts are treated as individuals in our new calculus. Representing
what would normally be relations as individuals is a way of gaining some
of the benefits of a higher order logic in a first-order formulation.
The initial state shown in Figure 7.11 is then given by the following set
of expressions:
1  POSS(SO)
2  HOLDS[on(A,D),SO]
3  HOLDS[on(B,E),SO]
4  HOLDS[on(C,F),SO]
5  HOLDS[clear(A),SO]
6  HOLDS[clear(B),SO]
7  HOLDS[clear(C),SO]
8  HOLDS[clear(G),SO]
The literal POSS(SO) means that the state SO is a possible state, that
is, one that can be reached. (The reason for having the POSS predicate
will become apparent later.)
Now we express part of the effects of actions (the "add-list" literals) by
using a separate HOLDS literal for each relation made true by the action.
In the case of our action trans(x,y,z), we have the following expressions:
9   HOLDS[clear(x), do[trans(x,y,z),s]]
10  HOLDS[clear(y), do[trans(x,y,z),s]]
11  HOLDS[on(x,z), do[trans(x,y,z),s]]
(Again, all variables in the assertions are universally quantified.)
Another predicate, PACT, is used to say that it is possible to perform a
given action in a given state, that is, the preconditions of the action match
that state description. PACT(a,s) states that it is possible to perform
action a in state s. For our action trans, we thus have:
12  [HOLDS[clear(x),s] ∧ HOLDS[clear(z),s]
     ∧ HOLDS[on(x,y),s] ∧ DIFF(x,z)]
      ⇒ PACT[trans(x,y,z),s]
Next we state that if a given state is possible and if the preconditions of
an action are satisfied in that state, then the state produced by performing
that action is also possible:
13  [POSS(s) ∧ PACT(u,s)] ⇒ POSS[do(u,s)]
The major advantage of Kowalski's formulation is that we need only
one frame assertion for each action. In our example, the single frame
assertion is:
14  [HOLDS(v,s) ∧ DIFF[v,clear(z)] ∧ DIFF[v,on(x,y)]]
      ⇒ HOLDS[v,do[trans(x,y,z),s]]
This expression quite simply states that all terms different from clear(z)
and on(x,y) still HOLD in all states produced by performing the action
trans(x,y,z).
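The economy of the single frame assertion can be seen in a small code sketch; the Python representation, and the treatment of deleted relations as a simple False clause, are our own simplifications of Kowalski's notation:

```python
# Concept terms like ("on", x, y) stand for on(x,y); situations are nested
# do[trans(x,y,z), s] terms represented as ((x, y, z), prev).  The first
# clause of holds mirrors assertions 9-11 (the adds), the second reflects
# what trans removes, and the final recursive clause does the work of the
# one frame assertion, number 14: everything else persists.
S0 = {("on", "A", "D"), ("on", "B", "E"), ("on", "C", "F"),
      ("clear", "A"), ("clear", "B"), ("clear", "C"), ("clear", "G")}

def holds(v, sit):
    """Does concept term v HOLD in situation sit?"""
    if sit == "SO":
        return v in S0
    (x, y, z), prev = sit                  # sit = do[trans(x,y,z), prev]
    if v in {("clear", x), ("clear", y), ("on", x, z)}:
        return True                        # assertions 9-11
    if v in {("clear", z), ("on", x, y)}:
        return False                       # relations trans removes
    return holds(v, prev)                  # assertion 14: the frame assertion

s1 = (("A", "D", "G"), "SO")               # do[trans(A,D,G), SO]
print(holds(("on", "A", "G"), s1))         # True: an effect of trans
print(holds(("on", "B", "E"), s1))         # True: persists via the frame rule
print(holds(("clear", "G"), s1))           # False: G is now occupied
```

One recursive clause handles every unaffected relation, where Green's formulation would need a separate frame assertion for each predicate.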
A goal for the system is given, as usual, by an expression with an
existentially quantified state variable. If we wanted to achieve B on C and
A on B, our goal would be:
(∃s)[POSS(s) ∧ HOLDS[on(B,C),s] ∧ HOLDS[on(A,B),s]]
The added conjunct, POSS(s), is needed to require that state s be
reachable.
Assertions 1-14, then, express the basic knowledge needed by a
problem solver for this example. If we were to use one of the rule-based
deduction systems of chapter 6 to solve problems using this knowledge,
we might use assertions 1-11 as facts and use assertions 12-14 as rules.
The details of operation of such a system would depend on whether the
rules were used in a forward or backward manner and on the specific
control strategy used by the system. For example, to make the rule-based
system "simulate" the steps that would be performed by a backward
production system using STRIPS-form rules, we would force the control
strategy of the deduction system, first, to match one of assertions 9-11
(the "adds") against the goal. (This step would establish the action
through which we were attempting to work backward.) Next, assertions
13 and 12 would be used to set up the preconditions of that action.
Subsequently, the frame assertion, number 14, would be used to regress
the other goal conditions through this action. All DIFF predicates should
be evaluated whenever possible. This whole sequence would then be
repeated on one of the subgoal predicates until a set of subgoals was produced that would unify with fact assertions 1-8.
Other control strategies could, no doubt, be specified that would allow
a rule-based deduction system to "simulate" the steps of STRIPS and
other more complex robot problem-solving systems, to be discussed in
the next chapter. One way to specify the appropriate control strategies
would be to use the ordering conventions on facts and rules that are used
by the PROLOG language discussed in chapter 6.
Comparing deduction systems with a STRIPS-like system, we must
not be tempted to claim that one type can solve problems that the other
cannot. In fact, by suitable control mechanisms, the problem-solving
traces of different types of systems can be made essentially identical. The
point is that to solve robot problems efficiently with deduction systems
requires specialized and explicit control strategies that are implicitly
"built-in
to" the conventions used by systems like STRIPS. STRI PS-like
robot problem-solving systems would appear, therefore, to be related to the deduction-based systems in the same way that a higher level programming language is related to lower level ones.
7.7. BIBLIOGRAPHICAL AND HISTORICAL
REMARKS
Modeling robot actions by STRIPS-form rules was proposed, as a
partial solution to the frame problem, in a paper by Fikes and Nilsson
(1971). A similar approach is followed in the PLANNER-like AI
languages [Bobrow and Raphael (1974); Derksen, Rulifson, and
Waldinger (1972)]. The frame problem is discussed in McCarthy and Hayes
(1969), Hayes (1973a), and Raphael (1971). The problem of dealing with
anomalous conditions is discussed in McCarthy and Hayes (1969) and in
McCarthy (1977). McCarthy calls this problem the qualification problem
and suggests that it may subsume the frame problem. Fahlman (1974) and Fikes (1975) avoid some frame problems by distinguishing between
primary and secondary relationships. Models of actions are defined in
terms of their effects on primary relationships; secondary relationships
are deduced (as needed) from the primary ones. Waldinger (1977, part 2)
contains a clear discussion of frame problems not overcome by
STRIPS-form rules. Hendrix (1973) proposes a technique for modeling
continuous actions.
The robot actions used in the examples of this chapter are based on
those of Dawson and Siklóssy (1977). The use of triangle tables to
represent the structure of plans was proposed in a paper by Fikes, Hart,
and Nilsson (1972b). Execution strategies using triangle tables were also
discussed in that paper.
The use of regression for computing the effects of B-rules is based on a
similar use by Waldinger (1977). The STRIPS problem-solving system is described in Fikes and Nilsson (1971). The version of STRIPS discussed
in this chapter is somewhat simpler than the original system. Fikes, Hart,
and Nilsson (1972b) describe how solutions to specific robot problems
can be generalized and used as components of plans for solving more
difficult problems. Triangle tables play a key role in this process.
The GPS system was developed by Newell, Shaw, and Simon (1960)
[see also Newell and Simon (1963)]. Ernst and Newell (1969) describe how later versions of GPS solve a variety of problems. Ernst (1969) presents a formal analysis of the properties of GPS.
For an interesting example of applying "robot" problem-solving ideas
to a domain other than robotics, see Cohen (1978), who describes a
system for planning speech acts.
The use of formal methods for solving robot problems was proposed in
the "advice taker" memoranda of McCarthy (1958, 1963). Work toward
implementing such a system was undertaken by Black (1964). Green (1969a) was the first to develop a full-scale formal system. McCarthy and
Hayes (1969) contains proposals for formal problem-solving methods.
Kowalski (1974b, 1979b) presents an alternative formulation that escapes
some of the frame problems of first-order systems. Simon (1972a)
discusses the general problem of reasoning about actions.
EXERCISES
7.1 In LISP, rplaca(x,y) alters the list structure x by replacing the car
part of x by y. Similarly, rplacd(x,y) replaces the cdr part of x by y.
Represent the effects on list structure of these two operations by STRIPS
rules.
7.2 Let right(x) denote the cell to the right of cell x (when there is such
a cell) in the 8-puzzle. Define similarly left(x), up(x), and down(x).
Write STRIPS rules to model the actions move B (blank) up, move B
down, move B left, move B right.
7.3 Write simple English sentences that express the intended meanings
of each of the literals in Figure 7.1. Devise a set of context-free rewrite
rules to describe the syntax of these sentences.
7.4 Describe how the two STRIPS rules pickup(x) and stack(x,y)
could be combined into a macro-rule put(x,y). What are the
preconditions, delete list, and add list of the new rule? Can you specify
a general procedure for creating macro-rules from components?
7.5 Referring to the blocks-world situation of Figure 7.1, let us define
the predicate ABOVE in terms of ON as follows:
ON(x,y) ⇒ ABOVE(x,y)
ABOVE(x,y) ∧ ABOVE(y,z) ⇒ ABOVE(x,z).
The frame problems caused by the explicit occurrence of such derived
predicates in state descriptions make it difficult to specify STRIPS
F-rules. Discuss the problem and suggest some remedies.
7.6 Consider the following pictures:
[Pictures A, B, and C, and pictures 1 through 5, are not reproduced here.]
Describe each by predicate calculus wffs and devise a STRIPS rule that is
applicable to both the descriptions of A and C; and when applied to a
description of A, produces a description of B; and when applied to a
description of C, produces a description of just one of pictures 1 through
5. Discuss the problem of building a system that could produce such
descriptions and rules automatically.
7.7 Two flasks, F1 and F2, have volume capacities of C1 and C2,
respectively. The wff CONT(x,y) denotes that flask x contains y volume units of a liquid. Write STRIPS rules to model the following actions:
(a) Pour the entire contents of Fl into F2.
(b) Fill F2 with (part of) the contents of Fl.
Can you see any difficulties that might arise in attempting to use these
rules in a backward direction? Discuss.
7.8 The "monkey-and-bananas" problem is often used to illustrate AI
ideas about plan generation. The problem can be stated as follows:
A monkey is in a room containing a box and
a bunch of bananas. The bananas are hanging from the ceiling out of reach of the monkey.
How can the monkey obtain the bananas?
Show how this problem can be represented so that STRIPS would
generate a plan consisting of the following actions: go to the box, push the box under the bananas, climb the box, grab the bananas.
7.9 Referring to the block-stacking problem solved by STRIPS in
Figure 7.9, suggest an evaluation function that could be used to guide
search.
7.10 Write a STRIPS rule that models the action of interchanging the
contents of two registers. (Assume that this action can be performed
directly without explicit use of a third register.) Show how STRIPS
would produce a program (using this action) for changing the contents of
registers X, Y, and Z from A, B, and C, respectively, to C, B, and A,
respectively.
7.11 Suppose the initial state description of Figure 7.1 contained the
expression HANDEMPTY ∨ HOLDING(D) instead of HANDEMPTY.
Discuss how STRIPS might be modified to generate a plan
containing a "runtime conditional" that branches on HANDEMPTY.
(Conditional plans are useful when the truth values of conditions not
known at planning time can be evaluated at execution time.)
7.12 Discuss how rule programs (similar to those described at the end of
chapter 6) can be used to solve block-stacking problems. (A DELETE
statement will be needed.) Illustrate with an example.
7.13 Find a proof for the goal wff :
(∃s){POSS(s) ∧ HOLDS[on(B,C),s] ∧ HOLDS[on(A,B),s]}
given the assertions 1-14 of Kowalski's formulation described in Section
7.6.2. Use any of the deduction systems described in chapters 5 and 6.
7.14 A robot pet, Rover, is currently outside and wants to get inside.
Rover cannot open the door to let itself in; but Rover can bark, and
barking usually causes the door to open. Another robot, Max, is inside.
Max can open doors and likes peace and quiet. Max can usually still
Rover's barking by opening the door. Suppose Max and Rover each have
STRIPS plan-generating systems and triangle-table based plan-execution systems. Specify STRIPS rules and actions for Rover and Max and describe the sequence of planning and execution steps that bring about equilibrium.
CHAPTER 8
ADVANCED PLAN-GENERATING SYSTEMS
In this chapter we continue our discussion of systems for generating
robot plans. First, we discuss two systems that can deal with interacting
goals in a more sophisticated manner than STRIPS. Then, we discuss
various hierarchical methods for plan generation.
8.1. RSTRIPS
RSTRIPS is a modification of STRIPS that uses a goal regression
mechanism for circumventing goal interaction problems. A typical use of
this mechanism prevents RSTRIPS from applying an F-rule, F1, that
would interfere with an achieved precondition, P, needed by another
F-rule, F2, occurring later in the plan. Because F2 occurs later than F1, it
must be that F2 has some additional unachieved precondition, P′, that
led to the need to apply F1 first. Instead of applying F1, RSTRIPS rearranges the plan by regressing P′ through the F-rule that achieves P. Now, the achievement of
the regressed P′ will no longer interfere with P.
Some of the techniques and conventions used by RSTRIPS can best be
introduced while discussing an example problem in which the goals do
not happen to interact. After these have been explained, we shall describe
in detail how RSTRIPS handles interacting goals.
EXAMPLE 1. Let us use one of the simpler blocks-world examples
from the last chapter. Suppose the goal is [ON(C,B) ∧ ON(A,C)] and
that the initial state is as shown in Figure 7.1. Until the first F-rule is
applied, RSTRIPS operates in the same manner as STRIPS. It does use
some special conventions in the goal stack, however. Specifically, when it
orders the components above a compound goal in the stack, it groups
these components along with their compound goal within a vertical
parenthesis in the stack. We shall see the use of this grouping shortly.
The goal stack portion of the global database produced by RSTRIPS at
the time that the first F-rule, namely, unstack( C,A ), can be applied is as
follows:
⎡ HANDEMPTY ∧ CLEAR(C) ∧ ON(C,y)
  unstack(C,y)
⎡ HOLDING(C)
  CLEAR(B)
⎣ HOLDING(C) ∧ CLEAR(B)
  stack(C,B)
⎡ ON(C,B)
  ON(A,C)
⎣ ON(C,B) ∧ ON(A,C)
This goal stack is the same as the one produced by STRIPS at this stage of
the problem's solution. (See Figure 7.9 of chapter 7.) For added clarity in
the examples of this section, we retain the condition achieved by applying
an F-rule just under the F-rule that achieved it in the goal stack. Note the
vertical parentheses grouping goal components with compound goals.
With the substitution {A/y}, RSTRIPS can apply unstack(C,A)
because its precondition (at the top of the stack) is matched by the initial
state description. Rather than removing the satisfied precondition and
the F-rule from the goal stack (as STRIPS did), RSTRIPS leaves these
items on the stack and places a marker just below HOLDING(C) to
indicate that HOLDING(C) has just been achieved by the application of
the F-rule. As the system tests conditions on the stack, it adjusts the
position of the marker so that the marker is just above the next condition
in the stack that still needs to be satisfied. After applying unstack(C,A), the goal stack is as follows:
⎡ HANDEMPTY ∧ CLEAR(C) ∧ ON(C,A)
  unstack(C,A)
⎡ *HOLDING(C)
──────────────────────────── (marker)   [picture of current state omitted]
  CLEAR(B)
⎣ HOLDING(C) ∧ CLEAR(B)
  stack(C,B)
⎡ ON(C,B)
  ON(A,C)
⎣ ON(C,B) ∧ ON(A,C)
The horizontal line running through the stack is the marker. All of the
F-rules above the marker have been applied, and the condition just
under the marker, namely, CLEAR(B), must now be tested. (For clarity,
we include next to our goal stacks a picture of the state produced by
applying the F-rules above the marker.)
When the marker passes through a vertical parenthesis (as it does in
the goal stack shown above), there are goals above the marker that have
already been achieved that are components of a compound goal below
the marker at the end of the parenthesis. RSTRIPS notes these
components and "protects" them. Such protection means that RSTRIPS
will ensure that no F-rule can be applied within this vertical parenthesis
that deletes or falsifies the protected goal components. Protected goals
are indicated by asterisks (*) in our goal stacks.
In the last chapter, whenever STRIPS satisfied the preconditions of an
F-rule in the goal stack, it applied that F-rule to the then current state
description to produce a new state description. RSTRIPS does not need
to perform this process explicitly. Rather, that part of the goal stack
above the marker indicates the sequence of F-rules applied so far. From this sequence of F-rules, RSTRIPS can always compute what the state
description would be if this sequence were applied to the initial state.
Actually, RSTRIPS never needs to compute such a state description. At
most it needs to be able to compute whether or not certain subgoals match the then current state description. This computation can be made
by regressing the subgoal to be tested backward through the sequence of
F-rules applied so far. For example, in the goal stack above, RSTRIPS
must next decide whether or not CLEAR(B) matches the state description
achieved after applying unstack(C,A). Regressing CLEAR(B)
through this F-rule produces CLEAR(B), which matches the initial state
description; therefore, it must also match the subsequent description.
(If CLEAR(B) did not match, RSTRIPS would next have had to insert
into the goal stack the F-rules for achieving it.)
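The regression rule used here is simple enough to state as code. The following Python sketch (our own notation, not the book's) regresses a single ground literal through an F-rule given by its add and delete lists:

```python
def regress(literal, rule):
    """Regress `literal` backward through `rule`.

    Returns "T" if the rule adds the literal (it holds afterward no
    matter what held before), "F" if the rule deletes it (it cannot
    hold afterward), and the literal itself otherwise.
    """
    if literal in rule["add"]:
        return "T"
    if literal in rule["delete"]:
        return "F"
    return literal

# unstack(C,A): P & D: HANDEMPTY, CLEAR(C), ON(C,A);
#               A: HOLDING(C), CLEAR(A)
unstack_CA = {
    "pre": ["HANDEMPTY", "CLEAR(C)", "ON(C,A)"],
    "delete": ["HANDEMPTY", "CLEAR(C)", "ON(C,A)"],
    "add": ["HOLDING(C)", "CLEAR(A)"],
}

# CLEAR(B) is neither added nor deleted by unstack(C,A), so it
# regresses to itself; since it matches the initial state, it also
# holds after the rule is applied.
print(regress("CLEAR(B)", unstack_CA))   # CLEAR(B)
print(regress("CLEAR(A)", unstack_CA))   # T
print(regress("HANDEMPTY", unstack_CA))  # F
```

A regressed result of "F" is exactly the protection-violation test used later in this section.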
At this stage, RSTRIPS notes that both of the preconditions for
stack(C,B) are satisfied, so this F-rule is applied (by moving the
marker), and ON(C,B) is protected. [Since the parenthesis of the
compound goal HOLDING(C) ∧ CLEAR(B) is now entirely above the marker, the system removes its protection of HOLDING(C).] Next,
RSTRIPS attempts to achieve ON(A,C). Finally, it produces the goal
stack shown below:
⎡ HANDEMPTY ∧ CLEAR(C) ∧ ON(C,A)
  unstack(C,A)
⎡ HOLDING(C)
  CLEAR(B)
⎣ HOLDING(C) ∧ CLEAR(B)
  stack(C,B)
⎡ *ON(C,B)
──────────────────────────── (marker)   [picture of current state omitted]
⎡ HANDEMPTY ∧ CLEAR(A) ∧ ONTABLE(A)
  pickup(A)
⎡ HOLDING(A)
  CLEAR(C)
⎣ HOLDING(A) ∧ CLEAR(C)
  stack(A,C)
  ON(A,C)
⎣ ON(C,B) ∧ ON(A,C)
The preconditions of pickup(A) match the current state description, as
can be verified by regressing them through the sequence of F-rules
applied so far, namely, {unstack(C,A), stack(C,B)}. (The condition
CLEAR(A) did not match the initial state, but it becomes true in the
current one by virtue of applying unstack(C,A). The condition HANDEMPTY
matched the initial state, was deleted after applying
unstack(C,A), and becomes true again after applying stack(C,B). The
regression process reveals that these conditions are true currently.)
Before the F-rule pickup(A) can be applied, RSTRIPS must make
sure that it does not violate any protected subgoals. At this stage
ON(C,B) is protected. A violation check is made by regressing
ON(C,B) through pickup(A). A violation of the protected status of
ON(C,B) would occur only if it regressed through to F [that is, only if
ON(C,B) were deleted by application of the F-rule pickup(A)]. Since
no protections are violated, the F-rule pickup(A) can be applied. The
marker is moved to just below HOLDING(A), and HOLDING(A) is
protected. [ON(C,B) retains its protected status.]
Regression through the sequence of F-rules of the other precondition
of stack(A,C), namely, CLEAR(C), reveals that it matches the now
current state description. Thus, the compound precondition of
stack(A,C) is satisfied. Regression of the previously solved main goal
component, ON(C,B), through stack(A,C) reveals that its protected
status would not be violated, so RSTRIPS applies stack(A,C) and moves
the marker below the last condition in the stack. RSTRIPS can now
terminate because all items in the stack are above the marker. The F-rules
in the goal stack at this time yield the solution sequence {unstack(C,A),
stack(C,B), pickup(A), stack(A,C)}.
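As a check, the solution sequence above can be replayed with a small forward simulator of the chapter-7 F-rules. The Python below is our own sketch, assuming the usual Figure 7.1 configuration (C on A, with A and B on the table and the hand empty):

```python
# Forward simulator for the one-handed blocks-world F-rules.
# For these rules every precondition is also deleted (P & D).
def make_rules():
    blocks = ["A", "B", "C"]
    rules = {}
    for x in blocks:
        rules[f"pickup({x})"] = (
            {f"ONTABLE({x})", f"CLEAR({x})", "HANDEMPTY"},  # P & D
            {f"HOLDING({x})"},                              # add list
        )
        rules[f"putdown({x})"] = (
            {f"HOLDING({x})"},
            {f"ONTABLE({x})", f"CLEAR({x})", "HANDEMPTY"},
        )
        for y in blocks:
            if x == y:
                continue
            rules[f"stack({x},{y})"] = (
                {f"HOLDING({x})", f"CLEAR({y})"},
                {"HANDEMPTY", f"ON({x},{y})", f"CLEAR({x})"},
            )
            rules[f"unstack({x},{y})"] = (
                {"HANDEMPTY", f"CLEAR({x})", f"ON({x},{y})"},
                {f"HOLDING({x})", f"CLEAR({y})"},
            )
    return rules

def run(state, plan, rules):
    state = set(state)
    for step in plan:
        pre_and_del, add = rules[step]
        assert pre_and_del <= state, f"precondition of {step} unmet"
        state = (state - pre_and_del) | add
    return state

initial = {"ONTABLE(A)", "ONTABLE(B)", "ON(C,A)",
           "CLEAR(C)", "CLEAR(B)", "HANDEMPTY"}
plan = ["unstack(C,A)", "stack(C,B)", "pickup(A)", "stack(A,C)"]
final = run(initial, plan, make_rules())
print({"ON(C,B)", "ON(A,C)"} <= final)  # True
```

Every precondition is checked before each step, so the assertion would flag any ordering error in the plan.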
This example was straightforward because there were no protection
violations. When goals interact, however, we will have protection
violations; next we describe how RSTRIPS deals with these.
EXAMPLE 2. Suppose the same initial configuration as before,
namely, that of Figure 7.1. Here, however, we attempt to solve the more
complicated goal [ON(A,B) ∧ ON(B,C)]. All goes well until the point
at which RSTRIPS has produced the following goal stack.
⎡ ONTABLE(A)
⎡ HANDEMPTY ∧ CLEAR(C) ∧ ON(C,A)
  unstack(C,A)
  CLEAR(A)
⎡ HOLDING(C)
  putdown(C)
  HANDEMPTY
⎣ ONTABLE(A) ∧ CLEAR(A) ∧ HANDEMPTY
  pickup(A)
⎡ HOLDING(A)
  CLEAR(B)
⎣ HOLDING(A) ∧ CLEAR(B)
  stack(A,B)
  *ON(A,B)
⎡ *ONTABLE(B)
──────────────────────────── (marker)   [picture of current state omitted]
⎡ HANDEMPTY ∧ CLEAR(z) ∧ ON(z,B)
  unstack(z,B)
  CLEAR(B)
  HANDEMPTY
⎣ ONTABLE(B) ∧ CLEAR(B) ∧ HANDEMPTY
  pickup(B)
⎡ HOLDING(B)
  CLEAR(C)
⎣ HOLDING(B) ∧ CLEAR(C)
  stack(B,C)
  ON(B,C)
⎣ ON(A,B) ∧ ON(B,C)
The F-rule sequence that has been applied to the initial state
description can be seen from the goal stack above the marker: {unstack(C,A),
putdown(C), pickup(A), stack(A,B)}. The subgoals
ON(A,B) and ONTABLE(B) are currently solved by this sequence
and are protected. We note that the preconditions of F-rule
unstack(A,B) are currently satisfied, but its application would violate the
protection of the goal ON(A,B). What should be done?
RSTRIPS first checks to see whether or not ON(A,B) might be
reachieved by the sequence of F-rules below the marker and above the
end of its parenthesis. It is only at the end of its parenthesis that
ON(A,B) needs to be true. Perhaps one of the F-rules within its
parenthesis might happen to reachieve it; if so, such "temporary"
violations can be tolerated. In this case none of these F-rules reachieves
ON(A,B), so RSTRIPS must take steps to avoid the protection
violation.
RSTRIPS notes that the compound goal at the end of the parenthesis
of the violated goal is ON(A,B) ∧ ON(B,C). An F-rule needed to
solve one of these components, namely, ON(B,C), would violate the
other's protection. We call ON(B,C) the protection-violating component.
RSTRIPS attempts to avoid the violation by regressing the
protection-violating component, ON(B,C), back through the sequence
of F-rules (above the marker) that have already been applied until it has
regressed it through the F-rule that achieved the protected subgoal. Since the last F-rule to be applied, stack(A,B), was also the rule that achieved
ON(A,B), RSTRIPS regresses ON(B,C) through stack(A,B) to yield
ON(B,C). In this case, the subgoal was not changed by regression, and
RSTRIPS now attempts to achieve this regressed goal at the point in the
plan just prior to the application of stack(A,B). This regression process
leaves RSTRIPS with the following goal stack:
  ONTABLE(A)
⎡ HANDEMPTY ∧ CLEAR(C) ∧ ON(C,A)
  unstack(C,A)
  CLEAR(A)
⎡ HOLDING(C)
  putdown(C)
  HANDEMPTY
⎣ ONTABLE(A) ∧ CLEAR(A) ∧ HANDEMPTY
  pickup(A)
⎡ *HOLDING(A)
  *CLEAR(B)
──────────────────────────── (marker)   [picture of current state omitted]
  ON(B,C)
⎣ HOLDING(A) ∧ CLEAR(B) ∧ ON(B,C)
  stack(A,B)
  ON(A,B)
⎣ ON(A,B) ∧ ON(B,C)
The compound goal ON(A,B) ∧ ON(B,C), at the end of the
parenthesis in which the potential violation was detected, is retained in
the stack. The other items below ON(A,B) in the earlier stack were
part of the now discredited plan to achieve ON(B,C). These items are
eliminated from the stack. The plan to achieve ON(A,B) by applying
stack(A,B) is still valid and is left in the stack. Note that we have
combined the regressed goal ON(B,C) with the compound precondition
just above the F-rule stack(A,B). Since the marker crosses a parenthesis,
the subgoals HOLDING(A) and CLEAR(B) are protected.
RSTRIPS begins again with this goal stack and does not discover any
additional potential protection violations until the following goal stack is
produced:
  ONTABLE(A)
⎡ HANDEMPTY ∧ CLEAR(C) ∧ ON(C,A)
  unstack(C,A)
  CLEAR(A)
⎡ HOLDING(C)
  putdown(C)
  HANDEMPTY
⎣ ONTABLE(A) ∧ CLEAR(A) ∧ HANDEMPTY
  pickup(A)
⎡ *HOLDING(A)
  *CLEAR(B)
⎡ *ONTABLE(B)
  *CLEAR(B)
──────────────────────────── (marker)   [picture of current state omitted]
⎡ HOLDING(x)
  putdown(x)
  HANDEMPTY
⎣ ONTABLE(B) ∧ CLEAR(B) ∧ HANDEMPTY
  pickup(B)
⎡ HOLDING(B)
  CLEAR(C)
⎣ HOLDING(B) ∧ CLEAR(C)
  stack(B,C)
  ON(B,C)
  HOLDING(A) ∧ CLEAR(B) ∧ ON(B,C)
  stack(A,B)
  ON(A,B)
⎣ ON(A,B) ∧ ON(B,C)
RSTRIPS notes, by regression, that the precondition of putdown(A)
matches the current state description but that the application of
putdown(A) would violate the protection of HOLDING(A). The violation
is not temporary. To avoid this violation, RSTRIPS regresses the
protection-violating component, ON(B,C), further backward, this time
through the F-rule pickup(A).
After regression, the goal stack is as follows:
  *ONTABLE(A)
⎡ HANDEMPTY ∧ CLEAR(C) ∧ ON(C,A)
  unstack(C,A)
  *CLEAR(A)
⎡ HOLDING(C)
  putdown(C)
  *HANDEMPTY
──────────────────────────── (marker)   [picture of current state omitted]
  ON(B,C)
  ONTABLE(A) ∧ CLEAR(A) ∧ HANDEMPTY ∧ ON(B,C)
  pickup(A)
⎡ HOLDING(A)
  CLEAR(B)
⎣ HOLDING(A) ∧ CLEAR(B)
  stack(A,B)
  ON(A,B)
⎣ ON(A,B) ∧ ON(B,C)
The plan for achieving ON(A,B) is retained, but the protection-violating
plan for achieving ON(B,C) is eliminated.
Beginning again with the resulting goal stack, RSTRIPS finds another
potential protection violation when the following goal stack is produced:
  *ONTABLE(A)
⎡ HANDEMPTY ∧ CLEAR(C) ∧ ON(C,A)
  unstack(C,A)
  *CLEAR(A)
⎡ HOLDING(C)
  putdown(C)
  *HANDEMPTY
──────────────────────────── (marker)   [picture of current state omitted]
⎡ ONTABLE(B) ∧ CLEAR(B) ∧ HANDEMPTY
  pickup(B)
⎡ HOLDING(B)
  CLEAR(C)
⎣ HOLDING(B) ∧ CLEAR(C)
  stack(B,C)
  ON(B,C)
  ONTABLE(A) ∧ CLEAR(A) ∧ HANDEMPTY ∧ ON(B,C)
  pickup(A)
⎡ HOLDING(A)
  CLEAR(B)
⎣ HOLDING(A) ∧ CLEAR(B)
  stack(A,B)
  ON(A,B)
⎣ ON(A,B) ∧ ON(B,C)
If pickup(B) were to be applied, the protection of HANDEMPTY would
be violated. But this time the violation is only temporary. A subsequent
F-rule, namely, stack(B,C) (within the relevant stack parenthesis),
reachieves HANDEMPTY, so we can tolerate the violation and proceed
directly to a solution.
In this case, RSTRIPS finds a shorter solution sequence than STRIPS
could have found on this problem. The F-rules in the solution found by
RSTRIPS are those above the marker in its terminal goal stack, namely,
{unstack(C,A), putdown(C), pickup(B), stack(B,C), pickup(A),
stack(A,B)}.
EXAMPLE 3. As another example, let us apply RSTRIPS to the
problem of interchanging the contents of two registers. The F-rule is:

assign(u,r,t,s)
   P: CONT(r,s) ∧ CONT(u,t)
   D: CONT(u,t)
   A: CONT(u,s)

Our goal is to achieve [CONT(X,B) ∧ CONT(Y,A)] from the initial
state [CONT(X,A) ∧ CONT(Y,B) ∧ CONT(Z,0)].
A difficulty is encountered at the point at which RSTRIPS has
produced the following goal stack:
⎡ CONT(Y,B) ∧ CONT(X,A)
  assign(X,Y,A,B)
⎡ *CONT(X,B)
──────────────────────────── (marker)   Z:0  X:B  Y:B
  CONT(r1,A)
  CONT(Y,t1)
⎣ CONT(r1,A) ∧ CONT(Y,t1)
  assign(Y,r1,t1,A)
  CONT(Y,A)
⎣ CONT(X,B) ∧ CONT(Y,A)
(We indicate the effect of applying assign(X,Y,A,B) by the notation
next to the goal stack.) The condition CONT(r1,A) cannot be satisfied because after applying
assign(X,Y,A,B) there is no register having A as
its contents. Here RSTRIPS has confronted an impossible goal rather
than a potential protection violation. Goal regression is a useful tactic in
this situation as well. The impossible goal is regressed through the last
F-rule; perhaps there its achievement will be possible.
Regressing CONT(r1,A) through assign(X,Y,A,B) yields the expression:

[CONT(r1,A) ∧ ~EQUAL(r1,X)] .
The resulting goal stack is:
                                        Z:0  X:A  Y:B
  CONT(r1,A)
  ~EQUAL(r1,X)
  CONT(X,A)
  CONT(Y,B)
  CONT(X,A) ∧ CONT(Y,B) ∧ CONT(r1,A) ∧ ~EQUAL(r1,X)
  assign(X,Y,A,B)
  CONT(X,B)
  CONT(Y,t1)
  CONT(r1,A) ∧ CONT(Y,t1)
  assign(Y,r1,t1,A)
  CONT(Y,A)
  CONT(X,B) ∧ CONT(Y,A)
Next, RSTRIPS attempts to solve CONT(r1,A). It cannot simply
match this subgoal against the fact CONT(X,A) because the substitution
{X/r1} would make the next goal, ~EQUAL(X,X), impossible. The
only alternative is to apply the F-rule assign again. This operation
produces the following goal stack:
                                        Z:0  X:A  Y:B
  CONT(r,A)
  CONT(r1,t)
  CONT(r,A) ∧ CONT(r1,t)
  assign(r1,r,t,A)
  CONT(r1,A)
  ~EQUAL(r1,X)
  CONT(X,A)
  CONT(Y,B)
  CONT(X,A) ∧ CONT(Y,B) ∧ CONT(r1,A) ∧ ~EQUAL(r1,X)
  assign(X,Y,A,B)
  CONT(X,B)
  CONT(Y,t1)
  CONT(r1,A) ∧ CONT(Y,t1)
  assign(Y,r1,t1,A)
  CONT(Y,A)
  CONT(X,B) ∧ CONT(Y,A)
Now RSTRIPS can match CONT(r,A) against the fact CONT(X,A).
Next, it can match CONT(r1,t) against the fact CONT(Z,0). These
matches allow application of assign(Z,X,0,A). The next subgoal in the
stack, namely, ~EQUAL(Z,X), is evaluated to T, and all of the other
subgoals above assign(X,Y,A,B) match facts. Next, RSTRIPS matches
CONT(Y,t1) against CONT(Y,B) and applies assign(Y,Z,B,A). The
marker is then moved to the bottom of the stack, and the process
terminates with the sequence {assign(Z,X,0,A), assign(X,Y,A,B),
assign(Y,Z,B,A)}.
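The three-step sequence can be checked by direct simulation of the assign F-rule; the Python sketch below (with an illustrative dictionary encoding of register contents) confirms that the goal is reached:

```python
def assign(state, u, r, t, s):
    """assign(u,r,t,s): precondition CONT(r,s) and CONT(u,t);
    deletes CONT(u,t); adds CONT(u,s)."""
    assert state[r] == s and state[u] == t, "precondition unmet"
    state = dict(state)  # leave the old state untouched
    state[u] = s
    return state

state = {"X": "A", "Y": "B", "Z": "0"}
state = assign(state, "Z", "X", "0", "A")  # copy A into the spare register
state = assign(state, "X", "Y", "A", "B")  # X gets B
state = assign(state, "Y", "Z", "B", "A")  # Y gets A back from Z
print(state)  # {'X': 'B', 'Y': 'A', 'Z': 'A'}
```

The final state satisfies CONT(X,B) ∧ CONT(Y,A); Z is left holding a stale copy of A, which the goal does not forbid.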
The reader might object that we begged the question in this example
by explicitly providing a third register. It is perfectly straightforward to provide another F-rule, perhaps called genreg, that can generate new
registers when needed. Then, instead of matching CONT(r1,t) against
CONT(Z,0) as we have done in this example, RSTRIPS could apply
genreg to CONT(r1,t) to produce a new register. The effect of applying
genreg would be to substitute the name of the new register for r1, and 0
(say) for t.
8.2. DCOMP
We call our next system for dealing with interacting goals DCOMP. It
operates in two main phases. In phase 1, DCOMP produces a tentative
"solution," assuming that there are no goal interactions. Goal expressions
are represented as AND/OR graphs, and B-rules are applied to literal
nodes that do not match the initial state description. This phase
terminates when a consistent solution graph is produced with leaf nodes that match the initial state description. This solution graph serves as a tentative solution to the problem; typically, it must be processed by a
second phase to remove interactions.
A solution graph of an AND/OR graph imposes only a partial ordering
on the solution steps. If there were no interactions, then rules in the
solution graph that are not ancestrally related could be applied in
parallel, rather than in some sequential order. Sometimes the robot
hardware permits certain actions to be executed simultaneously. For example, a robot may be able to move its arm while it is locomoting. To
the extent that parallel actions are possible, it is desirable to express robot action sequences as partial orderings of actions. From the standpoint of
achieving some particular goal, the least commitment possible about the
order of actions is best. A solution graph of an AND/OR graph thus
appears to be a good format with which to represent the actions for
achieving a goal.
In phase 2, DCOMP examines the tentative solution graph for goal
interactions. Certain rules, for example, destroy the preconditions
needed by rules in other branches of the graph. These interactions force
additional constraints on the order of rule application. Often, we can find
a more constrained partial ordering (perhaps a strict linear sequence) that satisfies all of these additional constraints. In this case, the result of this second phase is a solution to the problem. When the additional ordering
constraints conflict, there is no immediate solution, and DCOMP must
make more drastic alterations to the plan found in phase 1.
These ideas can best be illustrated by some examples. Suppose we use
the simpler example from chapter 7 again. The initial state description is
as shown in Figure 7.1, and the goal is [ON(C,B) ∧ ON(A,C)]. In
phase 1, DCOMP applies B-rules until all subgoals are matched by the
initial state description. There is no need to regress conditions through
F-rules, because DCOMP assumes no interactions.
A consistent solution graph that might be achieved by phase 1 is shown
in Figure 8.1. (In Figure 8.1, we have suppressed match arcs; consistency
of substitutions is not an issue in these examples. A substitution written near a leaf node unifies the literal labeling that node with a fact literal.)
The B-rules in the graph are labeled by the F-rules from which they stem,
because we will be referring to various properties of these F-rules later. All rule applications in the graph have been numbered (in no particular order) for reference in our discussion. Note also that we have numbered,
by 0, the "operation" in which the goal [ON(A,C) ∧ ON(C,B)] is split
into the two components ON(A,C) and ON(C,B). We might imagine
that this backward splitting rule is based on an imaginary "join" F-rule that, in the final plan, assembles the two components into the final goal.
We see that the solution consists of two sequences of F-rules to be
executed in parallel, namely, {unstack(C,A), stack(C,B)} and
{unstack(C,A), pickup(A), stack(A,C)}. Because of interactions, we obviously cannot execute these sequences in parallel. For example, F-rule 5
deletes a precondition, namely, HANDEMPTY, needed by F-rule 2.
Thus, we cannot apply F-rule 5 immediately prior to F-rule 2. Worse,
F-rule 5 deletes a precondition, namely, HANDEMPTY, needed by the
[Graph of Figure 8.1 not reproduced; its leaf nodes carry the substitution {C/y}.]
Fig. 8.1 A first-phase solution.
immediately subsequent F-rule 4. The graph of Figure 8.1 has several
such interaction defects.
The process for recognizing a noninteractive partial order involves
examination of every F-rule mentioned in the solution graph (including
the fictitious join rule) to see if its preconditions are matched by the state
description at the time that it is to be applied. Suppose we denote the
i-th precondition literal of the j-th F-rule in the graph as Cij. For each such
Cij in the graph, we compute two (possibly empty) sets. The first set, Dij,
is the set of F-rules specified in the graph that delete Cij and that are not
ancestors of rule j in the graph nor rule j itself. This set is called the
deleters of Cij. Any deleter of Cij might (as an F-rule) destroy this
precondition for F-rule j; thus the order in which deleters occur relative
to F-rule j is important. If the deleter is a descendant of rule j in the
graph, we have special problems. (We are not concerned about rule j
itself or any of its ancestors that might delete Cij, since the "purpose" of
Cij has by then already been served.)
The second set, Aij, computed for the condition Cij, is the set of
F-rules specified by the graph that add Cij and are not ancestors of rule j
in the graph nor j itself. This set is called the adders of Cij. Any adder of
Cij is important because it might be ordered such that it occurs after a
deleter and before F-rule j, thus vitiating the effect of the deleter. Also, if
some rule, say rule k, was used in the original solution graph to achieve
condition Cij, we might be able to apply one of the other adders before
F-rule j instead of F-rule k and thus eliminate rule k (and all of its
descendants!). Obviously F-rule j and any of its ancestors that might add
condition Cij are not of interest to us because they are applied after
condition Cij is needed.
In Figure 8.2 we show all of the adders and deleters for all of the
conditions in the graph.
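The computation of the deleters Dij and adders Aij can be sketched in a few lines of Python. The rule numbering and ancestor sets below are illustrative, not those of Figure 8.1:

```python
def deleters_and_adders(rules, ancestors):
    """For every precondition c of every rule j, compute the deleter
    set Dij and adder set Aij, excluding rule j and its ancestors."""
    table = {}
    for j, (pre, _, _) in rules.items():
        excluded = ancestors[j] | {j}
        for c in pre:
            D = {k for k, (_, dels, _) in rules.items()
                 if c in dels and k not in excluded}
            A = {k for k, (_, _, adds) in rules.items()
                 if c in adds and k not in excluded}
            table[(j, c)] = (D, A)
    return table

# Illustrative fragment: rule 1 = stack(C,B), rule 2 = unstack(C,A),
# rule 4 = pickup(A); each rule is (preconditions, delete list, add list).
rules = {
    1: ({"HOLDING(C)", "CLEAR(B)"},
        {"HOLDING(C)", "CLEAR(B)"},
        {"HANDEMPTY", "ON(C,B)", "CLEAR(C)"}),
    2: ({"HANDEMPTY", "CLEAR(C)", "ON(C,A)"},
        {"HANDEMPTY", "CLEAR(C)", "ON(C,A)"},
        {"HOLDING(C)", "CLEAR(A)"}),
    4: ({"ONTABLE(A)", "CLEAR(A)", "HANDEMPTY"},
        {"ONTABLE(A)", "CLEAR(A)", "HANDEMPTY"},
        {"HOLDING(A)"}),
}
ancestors = {1: set(), 2: {1}, 4: set()}
table = deleters_and_adders(rules, ancestors)
print(table[(4, "HANDEMPTY")])  # (deleters, adders) = ({2}, {1})
```

Here unstack(C,A) is a deleter and stack(C,B) an adder of pickup(A)'s HANDEMPTY precondition, mirroring the interaction discussed in the text.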
A partial order is noninteractive if, for each Cij in the graph, either of
the following two conditions holds:

1) F-rule j occurs before all members of Dij
(in this case the condition Cij is not deleted
until after F-rule j is applied); or

2) There exists a rule in Aij, say rule k, such
that F-rule k occurs before F-rule j and no
member of Dij occurs between F-rule k and
F-rule j.
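For a strict linear ordering (the simplest partial order), the two conditions can be checked mechanically. The following Python sketch assumes precomputed deleter and adder tables; the small example is illustrative, not the exact rule set of Figure 8.2:

```python
def noninteractive(order, preconds, D, A):
    """Check conditions (1) and (2) for a strict linear ordering
    `order` of rule numbers (earliest first)."""
    pos = {j: i for i, j in enumerate(order)}
    for j in order:
        for c in preconds[j]:
            dels = [k for k in D[(j, c)] if k in pos]
            adds = [k for k in A[(j, c)] if k in pos]
            # Condition 1: rule j precedes every deleter of c.
            if all(pos[j] < pos[k] for k in dels):
                continue
            # Condition 2: some adder k precedes j with no deleter
            # falling between k and j.
            if any(pos[k] < pos[j]
                   and not any(pos[k] < pos[d] < pos[j] for d in dels)
                   for k in adds):
                continue
            return False
    return True

# Tiny illustration: rule 2 deletes HANDEMPTY, which rule 4 needs
# and rule 1 re-adds.
preconds = {2: {"HANDEMPTY"}, 1: {"HOLDING(C)"}, 4: {"HANDEMPTY"}}
D = {(2, "HANDEMPTY"): set(), (1, "HOLDING(C)"): set(),
     (4, "HANDEMPTY"): {2}}
A = {(2, "HANDEMPTY"): set(), (1, "HOLDING(C)"): {2},
     (4, "HANDEMPTY"): {1}}
print(noninteractive([2, 1, 4], preconds, D, A))  # True
print(noninteractive([1, 2, 4], preconds, D, A))  # False
```

The second ordering fails because the deleter (rule 2) falls between the adder (rule 1) and the rule that needs the condition (rule 4).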
According to the above criteria, the solution graph of Figure 8.2 is not
noninteractive because, for example, F-rule 2 does not precede F-rule 5
in the ordering (and F-rule 5 deletes the preconditions of F-rule 2).
In its second phase, DCOMP attempts to transform the partial ordering
to one which is noninteractive. Often, such a transformation can be made.
There are two principal techniques for transforming the ordering. We
can further constrain the ordering so as to satisfy one of the two
conditions for noninteraction stated above, or we can eliminate an F-rule
(and its descendants) from the graph if its effect can be achieved by
constraining the order of one of the other adders.
For example, in Figure 8.2, F-rule 3 is a deleter of condition
CLEAR ( C) of F-rule 2. If we order F-rule 2 before F-rule 3, then F-rule
3 would no longer be a deleter of this condition. Also F-rule 5 is a deleter
of condition HANDEMPTY of F-rule 4. Obviously, we cannot make
F-rule 4 occur before F-rule 5; it is already an ancestor of F-rule 5 in the
partial ordering.
[Graph not reproduced.]
Fig. 8.2 First-phase solution with adders and deleters listed.
But we might be able to insert an adder, F-rule 1, between F-rule 5 and
F-rule 4. Or if F-rule 2 occurs before F-rule 4 and after any deleters of
this CLEAR(A) condition, we eliminate F-rule 5 entirely since
CLEAR(A ) is added by F-rule 2.
DCOMP attempts to render the phase 1 ordering noninteractive by
further constraining it or by eliminating F-rules. The general problem of
finding an acceptable set of manipulations seems rather difficult, and we
discuss it here only informally. The additional ordering constraints
imposed on the original solution graph must themselves be consistent. In
some cases, DCOMP is not able to find appropriate orderings. In our
example, however, DCOMP constructs an ordering by the following
steps:
1) Place F-rule 2 before F-rule 4 and
eliminate F-rule 5. Note that F-rule 4 cannot
now delete any preconditions of F-rule 2.
Also because F-rule 2 now occurs before
F-rule 3, F-rule 3 cannot delete any
preconditions of F-rule 2 either.
2) Place F-rule 1 before F-rule 4. Since F-rule
1 occurs after F-rule 2 and before F-rules 4
and 3 it reestablishes conditions needed by
F-rules 4 and 3 deleted by F-rule 2.
These additional constraints give us the ordering (2,1,4,3), corresponding
to the sequence of F-rules {unstack(C,A), stack(C,B), pickup(A),
stack(A,C)}.
In this case, the ordering of the F-rules in the plan produced a strict
sequence. In fact, the F-rules that we have been using for these
blocks-world examples are such that they can only be applied in sequence; the robot has only one hand, and this hand is involved in each of the actions. Suppose we had a robot with two hands and that each was
capable of performing all four of the actions modeled by our F-rules.
These rules could be adapted to model the two-handed robot by providing each of them with an extra "hand" argument taking the values
"1" or "2." Also the predicates HANDEMPTY and HOLDING would
need to have this hand argument added. (We won't allow interactions between the hands, such as one of them holding the other.) The F-rules
for the two-handed robot are then as follows:
1) pickup(x,h)
   P&D: ONTABLE(x), CLEAR(x), HANDEMPTY(h)
   A: HOLDING(x,h)

2) putdown(x,h)
   P&D: HOLDING(x,h)
   A: ONTABLE(x), CLEAR(x), HANDEMPTY(h)

3) stack(x,y,h)
   P&D: HOLDING(x,h), CLEAR(y)
   A: HANDEMPTY(h), ON(x,y), CLEAR(x)

4) unstack(x,y,h)
   P&D: HANDEMPTY(h), CLEAR(x), ON(x,y)
   A: HOLDING(x,h), CLEAR(y)
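These two-handed rules can be exercised directly. The Python sketch below (our own set-based encoding) replays one linearization consistent with the partial ordering of Figure 8.4, with hand assignments chosen for illustration, and checks that the goal is reached:

```python
def apply_rule(state, pre_and_del, add):
    # Every precondition of these rules is also deleted (P & D).
    assert pre_and_del <= state, "precondition unmet"
    return (state - pre_and_del) | add

def pickup(s, x, h):
    return apply_rule(s, {f"ONTABLE({x})", f"CLEAR({x})",
                          f"HANDEMPTY({h})"},
                      {f"HOLDING({x},{h})"})

def stack(s, x, y, h):
    return apply_rule(s, {f"HOLDING({x},{h})", f"CLEAR({y})"},
                      {f"HANDEMPTY({h})", f"ON({x},{y})", f"CLEAR({x})"})

def unstack(s, x, y, h):
    return apply_rule(s, {f"HANDEMPTY({h})", f"CLEAR({x})",
                          f"ON({x},{y})"},
                      {f"HOLDING({x},{h})", f"CLEAR({y})"})

s = {"ONTABLE(A)", "ONTABLE(B)", "ON(C,A)", "CLEAR(C)", "CLEAR(B)",
     "HANDEMPTY(1)", "HANDEMPTY(2)"}
s = unstack(s, "C", "A", 1)   # hand 1 lifts C off A
s = pickup(s, "A", 2)         # hand 2 may act as soon as A is clear
s = stack(s, "C", "B", 1)
s = stack(s, "A", "C", 2)
print({"ON(C,B)", "ON(A,C)"} <= s)  # True
```

The middle two steps are order-independent, which is exactly the parallelism the partially ordered plan is meant to expose.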
With the rules just cited, we ought to be able to generate partially
ordered plans in which hands "1" and "2" could be performing actions
simultaneously. Let's attempt to solve the very same block-stacking
problem just solved (that is, the goal is [ON(A,C) ∧ ON(C,B)]) from
the initial state shown in Figure 7.1. [The HANDEMPTY predicate in
that state description is now, of course, replaced by
HANDEMPTY(1) ∧ HANDEMPTY(2).] In Figure 8.3, we show a possible
DCOMP first-phase solution with the adders and deleters listed for each
condition. Note that, compared with Figure 8.2, there are fewer deleters
of the HANDEMPTY predicates because we have two hands.
During the second phase of this problem, DCOMP might specify that
F-rule 2 occur before F-rule 4 so that we can delete rule 5. Further, F-rule
2 should occur before F-rule 3 to avoid deleting the CLEAR(C)
condition of F-rule 2. Now if F-rule 1 occurs between F-rules 2 and 3, the
CLEAR(C) condition of F-rule 3 would be re-established. These
additional constraints give us the partially ordered plan shown in Figure
8.4.
It is convenient to be able to represent any partially ordered plan in a
form similar to solution graphs of AND/OR graphs. If there were no
interactions at all among the subgoals of a solution graph produced by
the first phase, then that graph itself would be a perfectly acceptable
representation for the partially ordered plan. If the interactions were such
that there could be no parallel application of F-rules, then a solution path
like that shown in Figures 7.5 through 7.7 would be required. What about
cases between these extremes, such as that of our present two-handed
robot? We show in Figure 8.5 one way of representing the plan of Figure
8.4. Starting from the goal condition, we work backward along the plan
producing the appropriate subgoal states. When the plan splits, it is
because the subgoal condition at that point can be split into components.
Such a split occurs at the point marked "*" in Figure 8.5. These
components can be solved separately until they join again at the point
marked "**". Notice that CLEAR(C) in node 1 regresses to T, as does CLEAR(A) in node 2. Structures similar to those of Figure 8.5 have
been called procedural nets by Sacerdoti (1977).
Fig. 8.3 A first-phase solution to a problem using two hands.
unstack(C,A,1)    stack(C,B,1)    pickup(A,2)    stack(A,C,2)
Fig. 8.4 A partially ordered plan for a two-handed block-stacking problem.
ON(C,B) ∧ ON(A,C)
    stack(A,C,2)
HOLDING(A,2) ∧ CLEAR(C) ∧ ON(C,B)
    (split *)   ON(C,B) ∧ CLEAR(C)         HOLDING(A,2)
                stack(C,B,1)               pickup(A,2)
    HOLDING(C,1) ∧ CLEAR(B)        HANDEMPTY(2) ∧ CLEAR(A) ∧ ONTABLE(A)
    (join **)   HOLDING(C,1) ∧ CLEAR(B) ∧ HANDEMPTY(2)
                ∧ CLEAR(A) ∧ ONTABLE(A)
    unstack(C,A,1)
HANDEMPTY(1) ∧ CLEAR(C) ∧ ON(C,A) ∧ CLEAR(B)
∧ HANDEMPTY(2) ∧ ONTABLE(A)
Fig. 8.5 Goal graph form for partially ordered plan.
8.3. AMENDING PLANS
Sometimes it is impossible to transform the phase-1 solution into a
noninteractive ordering merely by adding additional ordering constraints. The general situation, in this case, is that the phase-2 process can
do no better than leave us with a partially ordered plan in which some of
the preconditions are unavoidably deleted. We assume that phase 2
produces a plan having as few such deletions as possible and that the
deletions that are left are those that are estimated to be easy to reachieve.
After producing some such "approximate plan," DCOMP calls upon a
phase-3 process to develop plans to reachieve the deleted conditions and
then to "patch" those plans into the phase-2 (approximate) plan in such a
way that the end result is noninteractive.
The main task of phase 3, then, is to amend an existing (and faulty)
plan. The process of amending plans requires some special explanation
so we consider this general subject next.
We begin our discussion by considering another example. Suppose we
are trying to achieve the goal [ CLEAR (A) A HANDEMPTY] from the
initial state shown in Figure 7.1 (with just one hand now). In Figure 8.6,
we show the result of phase 1, with the adders and deleters listed. Here,
we obviously have a solution that cannot be put into noninteractive form
by adding additional constraints; there is only one F-rule, and it deletes a
"precondition" of the join rule, number 0. The only remedy to this situation is to permit the deletion and to plan to reachieve HANDEMPTY in such a way that CLEAR(A) remains true.
Our strategy is to insert a plan, say P, between F-rule 1 and the join.
The requirements on P are that its preconditions must regress through
F-rule 1 to conditions that match the initial state description and that
CLEAR (A) regress through P unchanged (so that it can be achieved by
F-rule 1). The structure of the solution that we are seeking is shown in
Figure 8.7.
If we apply the B-rule version of putdown(x) to HANDEMPTY, we obtain the subgoal HOLDING(x). This subgoal regresses through unstack(C,A) to T, with the substitution {C/x}. Furthermore, CLEAR(A) regresses through putdown(C) unchanged, so putdown(C) is the appropriate patch. The final solution is shown in Figure 8.8.
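The regression operation used here can be sketched as follows (a Python illustration under our own encoding, not from the text: conditions are tuples, and an F-rule deletes exactly its preconditions):

```python
# Regression of goal conditions through a STRIPS-style F-rule.

def putdown(x):
    return {"pre_del": {("HOLDING", x)},
            "add": {("ONTABLE", x), ("CLEAR", x), ("HANDEMPTY",)}}

def regress(condition, rule):
    if condition in rule["add"]:
        return "T"          # achieved by the rule itself
    if condition in rule["pre_del"]:
        return "F"          # deleted by the rule: regression fails
    return condition        # passes through unchanged

def regress_goal(goal, rule):
    regressed = {regress(c, rule) for c in goal}
    if "F" in regressed:
        return None
    regressed.discard("T")
    # The rule's own preconditions must hold just before it is applied.
    return regressed | rule["pre_del"]
```

Regressing [CLEAR(A) ∧ HANDEMPTY] through putdown(C) then yields [CLEAR(A) ∧ HOLDING(C)], the intermediate subgoal of Figure 8.8.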
Fig. 8.6 First-phase solution requiring a patch.
CLEAR(A) ∧ HANDEMPTY
    P, a plan for achieving HANDEMPTY, whose preconditions regressed
    through unstack(C,A) match the initial state description.
    CLEAR(A) must regress through P unchanged.
CLEAR(A) ∧ <preconditions of P>
    unstack(C,A)
<conditions that match initial state description>
Fig. 8.7 The form of the patched solution.
When interactions occur that cannot be removed by additional
ordering constraints, the general situation is often very much like this last
example. In these cases, DCOMP attempts to insert patches as needed
starting with the patch that is to be inserted earliest in the plan (closest to the initial state). This patching process is applied iteratively until the
entire plan is free of interactions.
We illustrate the patching process by another example. Now we consider the familiar, and highly interactive, block-stacking problem that begins with the initial configuration of Figure 7.1 and whose goal is [ON(A,B) ∧ ON(B,C)]. The first-phase solution, shown in Figure 8.9, has interactions that cannot be removed by adding additional ordering constraints. The ordering 3 → 5 → 4 → 2 → 1 is a good approximate solution even though F-rule 3 deletes a precondition of F-rule 4, namely, CLEAR(C), and it also deletes a precondition of F-rule 5, namely, HANDEMPTY. Our patching process attempts to reachieve these deleted conditions and works on the earliest one, HANDEMPTY, first.
The path of the approximate solution is shown in Figure 8.10; we do
not split the initial compound goal because neither of the components
can be achieved in an order-independent fashion. Note that regression must be used to create successor nodes and that some of the goal components regress to T and thus disappear. Here, we use the convention that the tail of the B-rule arc adjoins the condition used to match a literal in the add list of the rule. The conditions marked with asterisks (*) are conditions that our approximate plan does not yet achieve.
CLEAR(A) ∧ HANDEMPTY
    putdown(C)
CLEAR(A) ∧ HOLDING(C)
    unstack(C,A)
ON(C,A) ∧ CLEAR(C) ∧ HANDEMPTY
Fig. 8.8 The patched solution.
Fig. 8.9 First-phase solution for an interactive block-stacking problem.
ON(A,B) ∧ ON(B,C)
    pickup(B)
    (node 2)
    unstack(C,A)
ON(C,A), CLEAR(C), HANDEMPTY, ONTABLE(B), CLEAR(B), ONTABLE(A)
Fig. 8.10 An approximate solution.
Fig. 8.11 First-phase solution to the two-register problem.
We first attempt to insert a patch between F-rule 3 and F-rule 5 to achieve HANDEMPTY. (Note the similarity of this situation with that depicted in Figure 8.7.) The rule putdown(x) with the substitution {C/x} is an appropriate patch. Its subgoal, HOLDING(C), regresses through unstack(C,A) to T. Furthermore, all of the conditions of node 2 [except HANDEMPTY, which is achieved by putdown(C)] regress unchanged through putdown(C).
Now we can consider the problem of finding a patch for the other deleted precondition, namely, CLEAR(C). Note that, in this case, CLEAR(C) regresses unchanged through F-rule 5, pickup(B), and then it regresses through our newly inserted rule, putdown(C), to T. Therefore, no further modifications of the plan are necessary, and we have the usual solution
{unstack(C,A), putdown(C), pickup(B), stack(B,C), pickup(A), stack(A,B)}.
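One can check this solution by simulating it forward from the initial state of Figure 7.1 (a Python sketch; the set-of-literals state representation and helper names are our own):

```python
# Executing the patched plan against the initial state of Figure 7.1
# to check that it achieves ON(A,B) and ON(B,C).  Each F-rule deletes
# its preconditions and asserts its add list.

def rule(pre_del, add):
    return {"pre_del": set(pre_del), "add": set(add)}

def pickup(x):   return rule([("ONTABLE", x), ("CLEAR", x), ("HANDEMPTY",)],
                             [("HOLDING", x)])
def putdown(x):  return rule([("HOLDING", x)],
                             [("ONTABLE", x), ("CLEAR", x), ("HANDEMPTY",)])
def stack(x, y): return rule([("HOLDING", x), ("CLEAR", y)],
                             [("HANDEMPTY",), ("ON", x, y), ("CLEAR", x)])
def unstack(x, y): return rule([("HANDEMPTY",), ("CLEAR", x), ("ON", x, y)],
                               [("HOLDING", x), ("CLEAR", y)])

def execute(state, plan):
    for step in plan:
        assert step["pre_del"] <= state, "plan step not applicable"
        state = (state - step["pre_del"]) | step["add"]
    return state

initial = {("ON", "C", "A"), ("ONTABLE", "A"), ("ONTABLE", "B"),
           ("CLEAR", "B"), ("CLEAR", "C"), ("HANDEMPTY",)}
plan = [unstack("C", "A"), putdown("C"), pickup("B"),
        stack("B", "C"), pickup("A"), stack("A", "B")]
final = execute(initial, plan)
```

Every step's preconditions are satisfied when it is reached, and the final state contains both goal literals.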
The process of patching can be more complicated than our examples
have illustrated. If the preconditions of the patched plan have only to
regress through a strict sequence (as in this last example), the process is
straightforward, but how are conditions to be regressed through a partial
ordering? Some conditions may regress through to conditions that match
Fig. 8.12 Solution to the two-register problem.
the initial state description for all strict orderings consistent with the
partial ordering; others may do so for none of these strict orderings. Or
we may be able to impose additional constraints on the partial ordering
such that the preconditions of a patched plan may regress through it to
conditions that are satisfied by the initial state description. The general
problem of patching plans into partial orderings appears rather complex
and has not yet received adequate attention.
As a final example of DCOMP, we consider again the problem of
interchanging the contents of two registers. From the initial state
[CONT(X,A) ∧ CONT(Y,B) ∧ CONT(Z,0)], we want to achieve the goal [CONT(Y,A) ∧ CONT(X,B)]. The first phase produces the solution shown in Figure 8.11. The adders and deleters are indicated as usual. This first-phase solution has unavoidable deletions. F-rule 1 deletes a precondition of F-rule 2, and vice versa. They cannot both be first! [Sacerdoti (1977) called this type of conflict a "double cross."]
The blame for the unavoidable deletion conflict might be assigned to the substitutions used in one of the rules, say, rule 2. If Y were not substituted for rl in rule 2, then F-rule 1 would not have deleted CONT(rl,B). Then F-rule 1 could be ordered before F-rule 2 to avoid the deletion of the precondition, CONT(X,A), of F-rule 1 by F-rule 2. In this manner, DCOMP is led to continue the search for a solution by establishing the precondition, CONT(rl,B), of F-rule 2 but now prohibiting the substitution {Y/rl}.
Continued search results in the tentative solution shown in Figure 8.12. From this tentative solution, DCOMP can compute that the ordering 3 → 1 → 2 produces a noninteractive solution. The final solution produced is {assign(Z,Y,0,B), assign(Y,X,B,A), assign(X,Z,A,B)}.
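The effect of this ordering is easy to confirm with a short simulation (a Python sketch; the last two arguments of assign in the text merely record the old and new contents for the delete and add lists, so the simulation omits them):

```python
# Simulating the three-step register-interchange plan.
# assign(registers, dest, source) copies the contents of register
# `source` into register `dest`.

def assign(registers, dest, source):
    registers[dest] = registers[source]

registers = {"X": "A", "Y": "B", "Z": "0"}
assign(registers, "Z", "Y")   # assign(Z,Y,0,B)
assign(registers, "Y", "X")   # assign(Y,X,B,A)
assign(registers, "X", "Z")   # assign(X,Z,A,B)
```

After the three steps, Y holds A and X holds B, as the goal requires, with Z serving as the scratch register.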
8.4. HIERARCHICAL PLANNING
The methods that we have considered so far for generating plans to
achieve goals have all operated on "one level." When working backward,
for example, we investigated ways to achieve the goal condition and then to achieve all of the subgoals, and so on. In many practical situations, we might regard some goal and subgoal conditions as mere details and postpone attempts to solve them until the major steps of the plan are in
place. In fact, the goal conditions that we encounter and the rules to
achieve them might be organized in a hierarchy with the most detailed
conditions and fine-grained actions at the lowest level and the major
conditions and their rules at the highest level.
Planning the construction of a building, for example, involves the high
level tasks of site preparation, foundation work, framing, heating and
electrical work, and so on. Lower level activities would detail more
precise steps for accomplishing the higher level tasks. At the very lowest
level, the activities might involve nail-driving, wire-stripping, and so on.
If the entire plan had to be synthesized at the level of the most detailed
actions, it would be impossibly long. Developing the plan level by level,
in hierarchical fashion, allows the plans at each level to be of reasonable
length and thus increases the likelihood of their being found. Such a
strategy is called hierarchical planning.
8.4.1. POSTPONING PRECONDITIONS
One simple method of planning hierarchically is to identify a hierarchy
of conditions. Those at the lower levels of the hierarchy are relatively
unimportant details compared to those at the higher levels, and achievement of the former can be postponed until most of the plan is developed.
The general idea is that plan synthesis should occur in stages, dealing
with the highest level conditions first. Once a plan has been developed to
achieve the high-level conditions (and their high-level preconditions, and
so on), other steps can be added in place to the plan to achieve lesser conditions, and so on. This method does not require that the rules
themselves be graded according to a hierarchy. We can still have one set
of rules.
Hierarchical planning is achieved by constructing a plan in levels,
using any of the single-level methods previously described. During each
level, certain conditions are regarded as details and are thus postponed
until a subsequent level. A condition regarded as a detail at a certain level is effectively invisible at that level. When details suddenly become visible at a lower level, we must have a means of patching the higher level plans to achieve them.
8.4.2. ABSTRIPS
The patching process is relatively straightforward with a STRIPS-type problem solver, so we illustrate the process of hierarchical planning first
by using STRIPS as the basic problem solver. When STRIPS is modified
in this way, it is called ABSTRIPS.
For an example problem, let us again use the goal [ON(C,B) ∧ ON(A,C)] and the initial state depicted in Figure 7.1. This goal is one that the single-level STRIPS can readily solve, but we use it here merely to illustrate how ABSTRIPS works.
The F-rules that we use are those that we have been using, but for
purposes of postponing preconditions we must specify a hierarchy of
conditions (including goal conditions). To be realistic, this hierarchy
ought to reflect the intrinsic difficulty of achieving the various conditions.
Clearly, the major goal predicate, ON, should be on the highest level of the hierarchy; and perhaps HANDEMPTY should be at the lowest level, since it is easy to achieve. In this simple example, we use only three hierarchical levels and place the remaining predicates, namely, ONTABLE, CLEAR, and HOLDING, in the middle level.
The hierarchical level of each condition can be simply indicated by a
criticality value associated with the condition. Small numbers indicate a
low hierarchical level or small criticality, and large numbers indicate a high hierarchical level or large criticality. The F-rules for ABSTRIPS,
with criticality values indicated above the preconditions, are shown
below:
1) pickup(x)
2 2 1
P & D: ONTABLE(x), CLEAR(x), HANDEMPTY
A: HOLDING(x)
2) putdown(x)
2
P & D: HOLDING(x)
A: ONTABLE(x), CLEAR(x), HANDEMPTY
3) stack(x,y)
2 2
P & D: HOLDING(x), CLEAR(y)
A: HANDEMPTY, ON(x,y), CLEAR(x)
4) unstack(x,y)
1 2 3
P & D: HANDEMPTY, CLEAR(x), ON(x,y)
A: HOLDING(x), CLEAR(y)
Note that criticality values appear on both the preconditions and on
the delete-list literals. They do not appear on the add-list literals. When
an F-rule is applied, all of the literals in the add list are added to the state
description.
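The visibility rule can be sketched directly (a Python illustration; the CRITICALITY table below simply restates the values assigned above, and the encoding of conditions as tuples is our own):

```python
# ABSTRIPS visibility: at a given criticality threshold, only
# preconditions whose predicate has criticality at or above the
# threshold are visible; the rest are ignored (postponed).

CRITICALITY = {"ON": 3, "ONTABLE": 2, "CLEAR": 2,
               "HOLDING": 2, "HANDEMPTY": 1}

def visible(conditions, threshold):
    return {c for c in conditions if CRITICALITY[c[0]] >= threshold}

# Preconditions of unstack(C,A):
unstack_pre = {("HANDEMPTY",), ("CLEAR", "C"), ("ON", "C", "A")}
```

At threshold 3 only ON(C,A) is visible; at threshold 2, CLEAR(C) becomes visible as well; at threshold 1, all three preconditions must be considered.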
ABSTRIPS begins by considering only conditions of highest criticality, namely, those with criticality value 3 in this example. All conditions having criticality values below this threshold value are invisible; that is, they are ignored. Since our main goal contains two conditions of value 3, ABSTRIPS considers one of them, say, ON(C,B), and adds stack(C,B) to the goal stack. (If ABSTRIPS had selected the other component to
work on first, it would later have had to back up; the reader might want to
explore this path on his own.) No preconditions (of stack) are added to
the goal stack, because they have a criticality value of only 2 (below
threshold) and are thus invisible at this level.
ABSTRIPS can therefore apply the F-rule stack(C,B), resulting in a new state description. Next, it considers the other goal component ON(A,C) and adds stack(A,C) to the goal stack. (Again, the preconditions of this rule are invisible.) Then ABSTRIPS applies stack(A,C) to the current state, resulting in a state description that matches the entire goal. We show the solution path for this level of the operation of ABSTRIPS in Figure 8.13. Note that when delete literals of rules are invisible, certain items that ought to be deleted from a state description are not deleted. A contradictory state description may result, but this causes no problems.
The first-level solution, obtained by ignoring certain details, is the sequence {stack(C,B), stack(A,C)}. (An equally valid solution at the first level, obtained by a different ordering of goal components, is {stack(A,C), stack(C,B)}. This solution will run into difficulties at a lower level, causing the need to return to this first level to produce the appropriately ordered sequence.) Our first-level solution can be regarded as a high-level plan for achieving the goal. From this view, the block-stacking operations are considered most important, and a lower level of planning can be counted on to fill in details.
We now pass down our first-level solution, namely, {stack(C,B), stack(A,C)}, to the second level. In this level we consider conditions of criticality value 2 or higher, so that we begin to consider some of the details. We can effectively pass down the higher level solution by beginning the process at the next level with a goal stack that includes the sequence of F-rules in the higher level solution together with any of their visible preconditions. The last item in the beginning goal stack is the main goal. In this case the beginning goal stack for the second level is:
Fig. 8.13 The solution path for the first level of ABSTRIPS.
HOLDING(C) ∧ CLEAR(B)
stack(C,B)
HOLDING(A) ∧ CLEAR(C)
stack(A,C)
ON(C,B) ∧ ON(A,C)
Because STRIPS works with a goal stack, it is easy for a subsequent
level to patch in rules for achieving details. The plan passed down from
higher levels effectively constrains the search at lower levels, enhancing efficiency and diminishing the combinatorial explosion.
The reader can verify for himself that one possible solution produced by this second level is the sequence {unstack(C,A), stack(C,B), pickup(A), stack(A,C)}. If no solution can be found during one of the levels, the process can return to a higher level to find another solution. In this case our second-level solution is a good one and is complete, except that in its construction we have ignored the condition HANDEMPTY.
During the next, or third, level, we lower the threshold on criticality values to 1. We start with a goal stack containing the sequence of F-rules from the second-level solution together with (now all of) their preconditions.
The work at this level, for our present example, merely verifies that the
second-level solution is a correct solution even to the most detailed level
of the problem.
ABSTRIPS is thus a completely straightforward process for accomplishing hierarchical planning. All that is required is a grading of the importance of predicates, accomplished by assigning them criticality values. In problems more complex than this example, ABSTRIPS is a much more efficient problem solver than the single-level STRIPS.
8.4.3. VARIATIONS
There are several variations on this particular theme of hierarchical
problem solving. First, the basic problem solver used at each level does
not have to be STRIPS. Any problem-solving method can be used so
long as it is possible for the method at one level to be guided by the
solution produced at a higher level. For example, we could use RSTRIPS
or DCOMP at each level augmented by an appropriate patching process.
A minor variation on this hierarchical planning scheme involves only
two levels of precondition criticality and a slightly different way of using
the criticality levels. Since this variant is important, we illustrate how it
works with an example using the set of F-rules given below:
1) pickup(x)
P & D: ONTABLE(x), CLEAR(x), P-HANDEMPTY
A: HOLDING(x)
2) putdown(x)
P & D: HOLDING(x)
A: ONTABLE(x), CLEAR(x), HANDEMPTY
3) stack(x,y)
P & D: P-HOLDING(x), CLEAR(y)
A: HANDEMPTY, ON(x,y), CLEAR(x)
4) unstack(x,y)
P & D: P-HANDEMPTY, CLEAR(x), ON(x,y)
A: HOLDING(x), CLEAR(y)
The special P- prefix before a predicate indicates that achievement of
the corresponding precondition is always postponed until the next lower
level. We call these preconditions P-conditions. This scheme allows us to
specify, for each F-rule, which preconditions are the most important (to
be achieved during the current planning level) and which are details (to
be achieved in the immediately lower level).
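One possible encoding of this two-level scheme (a Python sketch of our own devising, not from the text) tags each precondition with a flag corresponding to the P- prefix and splits the preconditions accordingly:

```python
# Two-level P-condition scheme: each precondition carries a flag
# saying whether it is postponed to the next lower level.

# Preconditions of pickup(x); the final True mirrors P-HANDEMPTY.
pickup_pre = [(("ONTABLE", "x"), False),
              (("CLEAR", "x"), False),
              (("HANDEMPTY",), True)]

def split_preconditions(preconditions):
    """Return (achieve-now, achieve-at-next-level) condition lists."""
    now = [lit for lit, postponed in preconditions if not postponed]
    later = [lit for lit, postponed in preconditions if postponed]
    return now, later
```

At the current level only the first list is placed on the goal stack; the second list is handed down, together with the chosen F-rules, to the next level.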
In this example, we use STRIPS as the basic problem solver at each
level. Let us consider the same problem solved earlier, namely, to achieve the goal [ON(C,B) ∧ ON(A,C)] from the initial state shown in Figure 7.1. In Figure 8.14, we show a STRIPS solution path for the first level. Note again that the state description may contain inconsistencies because details are not deleted. The first-level solution is the sequence {stack(C,B), stack(A,C)}.
We begin the second-level solution attempt with a goal stack containing the sequence of F-rules just obtained and their preconditions. Now,
however, the P-conditions previously postponed must be included as
conditions and be achieved at this level. Also, when these F-rules are
applied, we delete these preconditions from the current state description.
Any new F-rules inserted at this level are treated as before.
Fig. 8.14 A first-level STRIPS solution using P-conditions.
The beginning goal stack for the next level of problem solving is given
below. To distinguish the F-rules inherited from a previous level from
those that might be inserted at the present level, we precede the inherited
ones by an asterisk (*).
HOLDING(C) ∧ CLEAR(B)
*stack(C,B)
HOLDING(A) ∧ CLEAR(C)
*stack(A,C)
[ON(C,B) ∧ ON(A,C)]
The STRIPS solution at this level is the sequence {unstack(C,A), stack(C,B), pickup(A), stack(A,C)}. Even though there were postponed conditions at this level, namely, HANDEMPTY, this sequence is a valid solution. The goal stack set up for the next lower level causes no additional F-rules to be inserted in the plan. The problem-solving process for this level merely verifies the correctness of the second-level plan when all details are included.
8.5. BIBLIOGRAPHICAL AND HISTORICAL REMARKS
RSTRIPS is based on systems for dealing with interacting goals
developed by Warren (1974) and by Waldinger (1977). [Warren's system,
WARPLAN, is clearly and economically implemented in PROLOG.] A
similar scheme was proposed by Rieger and London (1977).
DCOMP is based on Sacerdoti's (1975, 1977) and Tate's (1976, 1977)
ideas for developing "nonlinear" plans. Sussman (1975) discusses several
of the problems of simultaneously achieving interacting goals and
recommends the strategy of creating a plan that tolerates a few bugs and
then debugging this plan in preference to the strategy of synthesizing a
perfect plan.
The ABSTRIPS system for hierarchical planning was developed by
Sacerdoti (1974). The LAWALY system of Siklóssy and Dreussi (1973)
also used hierarchies of subtasks. Our variation of ABSTRIPS using
"P-conditions" is based on Sacerdoti's (1977) NOAH system. NOAH
combines hierarchical and nonlinear planning; thus it might be thought
of as an AB-DCOMP using P-conditions. Tate's (1977) system for
generating project networks can be viewed as an elaboration of NOAH.
See also a hierarchical planning and execution system proposed by
Nilsson (1973).
Extensions to the capabilities of robot problem-solving systems have
been proposed by Fikes, Hart, and Nilsson (1972a). Feldman and Sproull
(1977) discuss problems caused by uncertainty in robot planning and
recommend the use of decision-theoretic methods.
EXERCISES
8.1 Starting with the initial state description shown in Figure 7.1, show
how RSTRIPS would achieve the goal [ON(B,A) ∧ ON(C,B)].
8.2 Use any of the plan generating systems described in chapters 7 and 8
to solve the following block-stacking problem:
(Diagrams of the initial and goal block configurations.)
8.3 Show how DCOMP would solve the following blocks-world problem:
(Diagrams of the initial and goal block configurations.)
Use the predicates and STRIPS rules of chapter 7 to represent states and
actions.
8.4 An initial blocks-world situation is described as follows:
CLEAR(A)   ONTABLE(A)
CLEAR(B)   ONTABLE(B)
CLEAR(C)   ONTABLE(C)
There is just one F-rule, namely:
puton(x,y)
P: CLEAR(x), CLEAR(y), ONTABLE(x)
D: CLEAR(y), ONTABLE(x)
A: ON(x,y)

Show how DCOMP would achieve the goal [ON(A,B) ∧ ON(B,C)].
8.5 Sketch out the design of a hierarchical version of DCOMP that bears the same relationship to DCOMP that ABSTRIPS bears to STRIPS. (We
might call the system AB-DCOMP.) Show how the system might work on
an example problem.
WARNING: There are some conceptual difficulties in designing AB-DCOMP. Describe any that you encounter even if you do not solve them.
8.6 If certain nodes in the graph of Figure 7.3 were combined, it would
have the following structure:
Specify a hierarchical planning system based on the form of this structure
and illustrate its operation by an example.
8.7 Suppose a hierarchical planning system fails to find a solution at one
of its levels. What sort of information about the reason for the failure
might be useful in searching for an alternative higher level plan? Illustrate with an example.
8.8 Can you think of any ways in which the ideas about hierarchical
problem solving described in this chapter might be used in rule-based
deduction systems? Test your suggestions by applying them to a
deduction-system solution of a robot problem using Kowalski's formulation.
8.9 Can you find a counter-example to the following statement?
Any plan that can be generated by STRIPS
can also be generated by ABSTRIPS.
8.10 Discuss the "completeness" properties of RSTRIPS and DCOMP.
That is, can these planning systems find plans whenever plans exist?
CHAPTER 9
STRUCTURED OBJECT
REPRESENTATIONS
As we discussed in chapter 4, there are many ways to represent a body
of knowledge in the predicate calculus. The appropriateness of a
representation depends on the application. After deciding on a particular
form of representation, the system designer must also decide on how
predicate calculus expressions are to be encoded in computer memory.
Efficient storage, retrieval, and modification are key concerns in selecting an implementation design. Up to now in this book, we have not been
concerned with these matters of efficiency. We have treated each
predicate calculus statement, whether fact, rule, or goal, as an individual entity that could be accessed as needed without concern for the actual
mechanisms or costs involved in this access. Yet, ease of access is such an
important consideration that it has had a major effect on the style of
predicate calculus representation used in large AI systems. In this chapter, we describe some of
the specialized representations that address
some of these concerns. We also confront certain representational questions that might also have been faced earlier, say in chapter 6, but
seem more appropriate in this chapter.
The representations discussed here aggregate several related predicate
calculus expressions into larger structures (sometimes called
units ) that
are identified with important objects in the subject domain of the system.
When information about one of these objects is needed by the system, the
appropriate unit is accessed and all of the relevant facts about the object
are retrieved at once. We use the phrase structured objects to describe
these representational schemes, because of the heavy emphasis on the
structure of the representation. Indeed, the structure carries some of the
representational and computational burden. Certain operations that
might otherwise have been performed by explicit rule applications (in
other representations) can be performed in a more automatic way by
mechanisms that depend on the structure of the representation. These
representational schemes are the subject of this chapter.
9.1. FROM PREDICATE CALCULUS TO UNITS
Suppose we want to represent the following sentences as predicate
calculus facts:
John gave Mary the book.
John is a programmer.
Mary is a lawyer.
John's address is 37 Maple St.
The following wffs appear to be a reasonable representation:
GIVE(JOHN,MARY,BOOK)
OCCUPATION(JOHN,PROGRAMMER)
OCCUPATION(MARY,LAWYER)
ADDRESS(JOHN,37-MAPLE-ST)
In this small database, we have used individual constant symbols to refer to six entities, namely, JOHN, MARY, BOOK, PROGRAMMER, LAWYER, and 37-MAPLE-ST. If the database were enlarged, we would
presumably mention more entities, but we would also probably add other
information about these same entities. For retrieval purposes, it would be
helpful if we gathered together all of the facts about a given entity into a
single group, which we call a unit. In our simple example, the unit JOHN
has associated with it the following facts:
JOHN
GIVE(JOHN,MARY,BOOK)
OCCUPATION(JOHN,PROGRAMMER)
ADDRESS(JOHN,37-MAPLE-ST)
Similarly, we associate the following facts with the unit MARY:
MARY
GIVE(JOHN,MARY,BOOK)
OCCUPATION(MARY,LAWYER)
(It is possible to have the same fact associated with terms denoting
different entities in our domain.)
A representational scheme in which the facts are indexed by terms
denoting entities or objects of the domain is called an object-centered
representation.
Most notations for structured objects involve the use of binary (two-argument) predicates for expressing facts about the objects. A simple conversion scheme can be used to rewrite arbitrary wffs using only binary predicates. To convert the three-argument formula GIVE(JOHN,MARY,BOOK), for example, to one involving binary predicates, we postulate the existence of a particular "giving event" and a set of such giving events. Let us call this set GIVING-EVENTS. For each argument of the original predicate, we invent a new binary predicate that relates the value of the argument to the postulated event. Using this scheme, the formula GIVE(JOHN,MARY,BOOK) would be converted to:

(∃x)[EL(x,GIVING-EVENTS) ∧ GIVER(x,JOHN) ∧ RECIP(x,MARY) ∧ OBJ(x,BOOK)]
The predicate EL is used to express set membership. Skolemizing the existential variable in the above formula gives a name, say G1, to our postulated giving event:

EL(G1,GIVING-EVENTS) ∧ GIVER(G1,JOHN) ∧ RECIP(G1,MARY) ∧ OBJ(G1,BOOK)

Thus, we have converted a three-argument predicate to the conjunction of four binary ones.
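This conversion is mechanical, and a small helper can perform it (a Python sketch; the function name reify and the tuple encoding of facts are our own choices, not from the text):

```python
# Reifying an n-ary fact such as GIVE(JOHN, MARY, BOOK) as a named
# event with binary relations.  Each binary fact is a tuple of
# (predicate, first-argument, second-argument).

def reify(event_name, event_set, roles):
    """Convert an n-ary fact into set membership plus binary role facts."""
    facts = [("EL", event_name, event_set)]
    facts += [(role, event_name, value) for role, value in roles]
    return facts

g1 = reify("G1", "GIVING-EVENTS",
           [("GIVER", "JOHN"), ("RECIP", "MARY"), ("OBJ", "BOOK")])
```

The three-argument GIVE fact becomes one membership fact and three binary role facts, exactly mirroring the Skolemized conjunction above.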
The relations between G1 and the original arguments of GIVE could just as well be expressed as functions over the set GIVING-EVENTS instead of as predicates. With this additional notational change, the
STRUCTURED OBJECT REPRESENTATIONS
sentence "John gave Mary the book" can be represented by the following
formula:
EL(G1,GIVING-EVENTS)
    ∧ EQ[giver(G1),JOHN]
    ∧ EQ[recip(G1),MARY]
    ∧ EQ[obj(G1),BOOK]
The predicate EQ is meant to denote the equality relation. The
expression above uses certain functions, defined over the set
GIVING-EVENTS, whose values name other objects that participate in G1.
There are some advantages in converting to a representation that uses
events and binary relations. For our purposes, the primary advantage is
modularity. Suppose, for example, that we want to add some information
about when a giving event takes place. Before converting to our binary
form, we would need to add a fourth (time) argument to the predicate
GIVE. Such a change might require extensive changes to the production
rules that referenced GIVE and to the control system. If, instead, giving is
represented as a domain entity, then additional information about it can easily be incorporated by adding new binary relations, functions, and
associated rules.
In this part of the book we represent all but a small number of
propositions as terms denoting "events" or "situations" that are considered
entities of our domain. The only predicates that we need are EQ, to
say that two entities are the same; SS, to say that one set is a subset of
another; and EL, to say that an entity is an element of a set. For our
example sentences above, we had events in which persons had occupations
and an event in which a person had an address. These sentences are
represented as follows:
G1
EL(G1,GIVING-EVENTS)
EQ[giver(G1),JOHN]
EQ[recip(G1),MARY]
EQ[obj(G1),BOOK]
OC1
EL(OC1,OCCUPATION-EVENTS)
EQ[worker(OC1),JOHN]
EQ[profession(OC1),PROGRAMMER]

OC2
EL(OC2,OCCUPATION-EVENTS)
EQ[worker(OC2),MARY]
EQ[profession(OC2),LAWYER]

ADR1
EL(ADR1,ADDRESS-EVENTS)
EQ[person(ADR1),JOHN]
EQ[location(ADR1),31-MAPLE-ST]
In these units, we have freely invented functions to relate events with
other entities.
We notice that the units above share a common structure. First, an EL
predicate is used to state that the object described by the unit is a member
of some set. (If the object described by the unit had been a set itself, then
an SS predicate would have been used to state that it was a subset of some
other set.) Second, the values of the various functions of the object
described by the unit are related to other objects. We next introduce a special unit notation based on this general structure.
As an abbreviation for a formula like EQ[giver(G1),JOHN], we use the
expression, or pair, "giver: JOHN". All of the EQ predicates that relate
functions of the object described by the unit to other objects are
expressed by such pairs grouped below the unit name. Thus, drawing
from our example, we have:
G1
giver: JOHN
recip: MARY
obj: BOOK
In AI systems using unit notation, constructs like "giver : JOHN" are
often called slots. The first expression, giver, is called the slotname, and
the second expression, JOHN, is called the slotvalue.
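One direct way to realize this slot notation in code is a mapping from slotnames to slotvalues, with the "self" slot holding the (element-of ...) form as a tagged pair. This is a minimal sketch for illustration, not the data structure of any particular historical system.

```python
# A unit is a name bound to a dict: slotname -> slotvalue. The special
# "self" slot holds the (element-of SET) form as a tagged tuple.

G1 = {
    "self": ("element-of", "GIVING-EVENTS"),
    "giver": "JOHN",
    "recip": "MARY",
    "obj": "BOOK",
}

def slotvalue(unit, slotname):
    """Return the slotvalue for a slotname, or None if the slot is absent."""
    return unit.get(slotname)

print(slotvalue(G1, "giver"))   # JOHN
```

Representing units this way makes the later matching and retrieval operations simple dictionary lookups.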
Sometimes the slotvalue is not a constant symbol (such as JOHN) but
a functional expression. In particular, the function may correspond to the
slotname of another unit. Consider, for example, the sentences "John
gave the book to Mary," and "Bill gave the pen to the person to whom
John gave the book." We express this pair of sentences by the following
units:
G1
EL(G1,GIVING-EVENTS)
giver: JOHN
recip: MARY
obj: BOOK
G2
EL(G2,GIVING-EVENTS)
giver: BILL
recip: recip(G1)
obj: PEN
In these examples, recip(G1) and MARY are two different ways of
describing the same person. Later, we discuss a process for "evaluating" a
functional expression like recip(G1) by finding the slotvalue of recip in
the unit G1.
Slotvalues can also be existential variables. For example, a predicate
calculus version of the sentence "Someone gave Mary the book" might
include the formula (∃x)EQ[giver(G3),x]. We might Skolemize the
existential variable to get an expression like EQ[giver(G3),S]. Usually,
we have some information about the existential variable. In our current
example, we would know that "someone" referred to a person. A better
rendering of "Someone gave Mary the book" would involve the formula:

(∃x){EQ[giver(G3),x] ∧ EL(x,PERSONS)}

or simply,

EL[giver(G3),PERSONS] .
In order to handle this sort of formula in our unit notation, we invent
the special form "(element-of PERSONS)" as a kind of pseudo-slotvalue.
This form serves as an abbreviation for the formula that used the
EL predicate. An expression using the abbreviated form can be thought
of as an indefinite description of the slotvalue.
To complete our set of abbreviating conventions, we use the
"(element-of )" form in a slotname called "self" to state that the object
described by the unit is an element of a set. With these conventions, our
set of units that were originally written as groups of predicate calculus
formulas can be rewritten as follows:
G1
self: (element-of GIVING-EVENTS)
giver: JOHN
recip: MARY
obj: BOOK

OC1
self: (element-of OCCUPATION-EVENTS)
worker: JOHN
profession: PROGRAMMER

OC2
self: (element-of OCCUPATION-EVENTS)
worker: MARY
profession: LAWYER

ADR1
self: (element-of ADDRESS-EVENTS)
person: JOHN
location: 31-MAPLE-ST
Other entities in our domain might similarly be described by the
following units:
JOHN
self: (element-of PERSONS)

MARY
self: (element-of PERSONS)

BOOK
self: (element-of PHYS-OBJS)

PROGRAMMER
self: (element-of JOBS)

LAWYER
self: (element-of JOBS)

31-MAPLE-ST
self: (element-of ADDRESSES)

PERSONS
self: (subset-of ANIMALS)
This set of units represents explicitly certain information (about set
membership) that was merely implicit in our original sentences. Note
that in the last unit, PERSONS, we use the form "(subset-of ANIMALS)".
This form is analogous to the "(element-of )" form; within
the PERSONS unit it stands for SS(PERSONS,ANIMALS).
It should be clear how to translate any of the above units back into
conventional predicate calculus notation.
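The translation back into predicate calculus is mechanical: the "self" slot becomes an EL (or SS) atom, and every other slot becomes an EQ atom relating a function of the unit name to the slotvalue. A minimal sketch, assuming units are kept as dicts with a tagged "self" slot as in the examples above:

```python
# Translate a unit (name + slot dict) into its conjunction of predicate
# calculus atoms, rendered here as strings.

def unit_to_wffs(name, unit):
    wffs = []
    tag, target = unit["self"]
    pred = "EL" if tag == "element-of" else "SS"
    wffs.append(f"{pred}({name},{target})")
    for slot, value in unit.items():
        if slot != "self":
            wffs.append(f"EQ[{slot}({name}),{value}]")
    return wffs

OC1 = {"self": ("element-of", "OCCUPATION-EVENTS"),
       "worker": "JOHN", "profession": "PROGRAMMER"}
print(unit_to_wffs("OC1", OC1))
```

Running this on OC1 reproduces the three formulas listed earlier for that unit.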
We can also accommodate universally quantified variables in units.
Consider, for example, the sentence "John gave something to everyone."
In predicate calculus, this sentence might be represented as follows:
(∀x)(∃y)(∃z){EL(y,GIVING-EVENTS)
    ∧ EQ[giver(y),JOHN] ∧ EQ[obj(y),z]
    ∧ EQ[recip(y),x]} .
Skolemization replaces the variables y and z by functions of x. In
particular, the giving event, y, is now a Skolem function of x and not a
constant. The family of giving events represented by this function can be
described by the functional unit:
g(x)
self: (element-of GIVING-EVENTS)
giver: JOHN
obj: sk(x)
recip: x
In this unit, the slotvalue of obj is the Skolem function, sk(x). The
scope of universal variables in units is the entire unit. (We assume that all
predicate calculus formulas represented in unit notation are in prenex
Skolem form. That is, all negation signs are moved in, variables are
standardized apart, existential variables are Skolemized, and all universal
quantifiers apply to the entire expression. Thus, when translating unit notation back into predicate calculus, the universal variables all have maximum scopes.)
Since ideas about sets and set membership play such a prominent role
in the representations being discussed in this chapter, it will be helpful to
have some special functions for describing sets. To describe a set
composed of certain individuals, we use the function the-set-of; for
example, the-set-of(JOHN,MARY,BILL). We also use the functions
intersection, union, and complement to describe sets composed of the
intersection, union, or complement of sets, respectively.
These set-describing functions can be usefully employed as a way to
represent certain sentences expressing disjunctions and negations. For
example, consider the sentences: "John bought a car," "It was either a Ford or a Chevy," and "It was not a convertible." These sentences could
be described by the following unit:
B1
self: (element-of BUYING-EVENTS)
buyer: JOHN
bought: (element-of intersection(union(FORDS,CHEVYS),
                                 complement(CONVERTIBLES)))
As another example, the sentence "John gave the book to either Bill or
Mary" might be represented by:
G4
self: (element-of GIVING-EVENTS)
giver: JOHN
recip: (element-of the-set-of(BILL,MARY))
obj: BOOK
We postpone the discussion of how to represent implications in unit
notation. It is not our intention here to develop the unit notation into a
completely adequate alternative syntax for predicate calculus. A complete
syntax might be quite cumbersome; indeed, various useful AI
systems have employed quite restricted versions of unit languages.
9.2. A GRAPHICAL REPRESENTATION: SEMANTIC NETWORKS
The binary-predicate version of predicate calculus introduced in the
last section lends itself to a graphical representation. The terms of the
formalism (namely, the constant and variable symbols and the functional
expressions) can be represented by nodes of a graph. Thus, in our
examples above, we would have nodes for JOHN, G1, MARY, LAWYER,
ADR1, etc. The predicates EQ, EL, and SS can be represented by
arcs; the tail of the arc leaves the node representing the first argument,
and the head of the arc enters the node representing the second
argument. Thus, the expression EL(G1,GIVING-EVENTS) is represented
by an EL arc from a node labeled G1 to a node labeled GIVING-EVENTS.
The nodes and arcs of such graphs are labeled by the terms and
predicates that they denote.
When an EQ predicate relates a term and a unary function of another
term, we represent the unary function expression by an arc connecting
the two terms. For example, to represent the formula EQ[giver(G1),JOHN],
we use a giver arc from the node G1 to the node JOHN.
A collection of predicate calculus expressions of the type we have been
discussing can be represented by a graph structure that is often called a
semantic network. A network representation of our example collection of
sentences is shown in Figure 9.1. Semantic networks of this sort are useful
for descriptive purposes because they give a simple, structural picture of
a body of facts. They also depict some of the indexing structure used in
many implementations of predicate calculus representations. Of course,
whether we choose to describe the computer representation of a certain
body of facts by a semantic network, by a set of units, or by a collection of
linear formulas is mainly a matter of taste. The underlying computer data
structures may well be the same! We use all three types of descriptions
more or less interchangeably in this chapter.
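The "matter of taste" point can be made concrete with a sketch. Storing the facts as a set of labeled arcs (tail, label, head) gives all three views at once: the arc set is the semantic network, grouping arcs by tail node recovers the unit view, and each arc prints as a linear formula. The helper names below are assumptions made for illustration.

```python
# The same body of facts as a set of labeled arcs (tail, label, head).

arcs = {
    ("G1", "EL", "GIVING-EVENTS"),
    ("G1", "giver", "JOHN"),
    ("G1", "recip", "MARY"),
    ("G1", "obj", "BOOK"),
}

def as_unit(node):
    """Recover the unit view: all arcs leaving a given node."""
    return {label: head for tail, label, head in arcs if tail == node}

print(as_unit("G1"))
```

The underlying data structure is identical in every view, which is exactly why the choice among network, unit, and linear descriptions is stylistic.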
We show another semantic net example in Figure 9.2. It represents the
same set of facts that were represented as predicate calculus expressions
in an information retrieval example in chapter 6.
Fig. 9.1 A simple semantic network.
Fig. 9.2 A semantic network representing personnel information.
The nodes in the networks of Figures 9.1 and 9.2 are all labeled by
constant symbols. We can also accommodate variable nodes; these are
labeled by lower case letters near the end of the alphabet (e.g., ..., x, y, z).
Again, the variables are standardized apart and are assumed to be
universally quantified. The scope of these quantifications is the entire fact
network.
We follow the same conventions converting predicate calculus formulas
to network form as we did converting them to unit notation.
Existentially quantified variables are Skolemized, and the resulting
Skolem functions are represented by nodes labeled by functional
expressions. Thus the sentence "John gave something to everyone" can
be represented by the network in Figure 9.3. In this figure, "x" is
universally quantified. The nodes labeled by "g(x)" and "sk(x)" are
Skolem-function nodes. (Computer implementations of nodes labeled by
functional expressions would probably have some sort of pointer
structure between the dependent nodes and the independent ones. For
simplicity, we suppress explicit display of these pointers in our semantic
networks, although some net formalisms include them.)
We next discuss how to represent the propositional connectives
graphically. Representing conjunctions is easy: The multiple nodes and
EL and SS arcs in a semantic network represent the conjunction of the
associated atomic formulas. To represent a disjunction, we need some
way of setting off those nodes and arcs that are the disjuncts. In a linear
notation, we use parentheses or brackets to delimit the disjunction. For semantic networks, we employ
a graphical version of the parentheses, an
enclosure, represented by a closed, dashed line in our illustrations. For a
disjunction, each disjunctive predicate is drawn within the enclosure, and
the enclosure is labeled DIS. Thus, the expression [EL(A,B) ∨ SS(B,C)]
is represented as in Figure 9.4.
To set off a conjunction nested within a disjunction, we can use an
enclosure labeled CONJ. (By convention, we omit the implied conjunctive
enclosure that surrounds the entire semantic network.) Arbitrary
nesting of enclosures within enclosures can be handled in this manner. As
an example, Figure 9.5 shows the semantic network version of the
sentence "John is a programmer or Mary is a lawyer."
In converting predicate calculus expressions to semantic network form,
negation symbols are typically moved in, so that their scopes are limited
to a single predicate. In this case, expressions with negation symbols can
be represented in semantic network form simply by allowing ~EL,
~SS, and ~EQ arcs. More generally, we can use enclosures to delimit
the scopes of negations also. In this case, we label the enclosure by NEG.
We show, in Figure 9.6, a graphical representation of
~[EL(A,B) ∧ SS(B,C)]. To simplify the notation we assume, by
convention, that the predicates within a negative enclosure are
conjunctive.
Fig. 9.3 A net with Skolem-function nodes.
Fig. 9.4 Representing a disjunction.
Fig. 9.5 A disjunction with nested conjunctions.
Fig. 9.6 Representing a negation.
In Figure 9.7 we show an example of a semantic network with both a
disjunctive and a negative enclosure. This semantic network is equivalent
to the following logical formula:
{EL(B1,BUYING-EVENTS) ∧ EQ[buyer(B1),JOHN]
    ∧ EQ[bought(B1),X] ∧ ~EL(X,CONVERTIBLES)
    ∧ [EL(X,FORDS) ∨ EL(X,CHEVYS)]
    ∧ SS(FORDS,CARS) ∧ SS(CHEVYS,CARS) ∧ SS(CONVERTIBLES,CARS)}
Fig. 9.7 A semantic network with logical connectives.
If we negate an expression with a leading existentially quantified
variable and then move the negation symbol in past the quantifier, the
quantification is changed to universal. Thus, the statement "Mary is not a
programmer" might be represented as
~{(∃x)EL(x,OCCUPATION-EVENTS)
    ∧ EQ[profession(x),PROGRAMMER]
    ∧ EQ[worker(x),MARY]} ,

which is equivalent to

(∀x)~{EL(x,OCCUPATION-EVENTS)
    ∧ EQ[profession(x),PROGRAMMER]
    ∧ EQ[worker(x),MARY]} .
The network representation for the latter formula is shown in Figure 9.8.
Enclosures can also be used to represent semantic network implications.
For this purpose, we have a linked pair of enclosures, one labeled
ANTE and one labeled CONSE. For example, the sentence "Everyone
who lives at 37 Maple St. is a programmer" might be represented by the
net in Figure 9.9. In this figure, o(x,y) is a Skolem function naming an
occupation event dependent on x and y. A dashed line links the ANTE
and CONSE enclosures to show that they belong to the same implication.
We discuss network implications in more detail later when we introduce
rules for modifying databases.
Fig. 9.8 One representation of a negated existential statement.
Fig. 9.9 A network with an implication.
In all of these examples, enclosures are used to set off a group of EL,
SS, and function arcs and thus are drawn so as to enclose only arcs.
(Whether or not they enclose nodes has no consequence in our semantic
net notation.)
9.3. MATCHING
A matching operation, analogous to unification, is fundamental to the
use of structured objects as the global database of a production system.
We turn to this subject next.
To help us define what we mean by saying that two structured objects
"match," we must remember the fact that structured objects are merely
an alternative kind of predicate calculus formalism. The appropriate
definition must be something like: Two objects match if and only if the
predicate calculus formula associated with one of them unifies with the
predicate calculus formula associated with the other. We are interested in
a somewhat weaker definition of match, because our match operations
are not usually symmetrical. That is, we usually have a goal object that we
want to match against a fact object. We say that a goal object matches a
fact object if the formula involving the goal object unifies with some
sub-conjunction of the formulas of the fact object. (Matching occurs only
if the goal object formulas are provable from the fact-object formulas.)
Let us look at some example matches between units using this
definition. Suppose we have the fact unit:
M1
self: (element-of MARRIAGE-EVENTS)
male: JOHN-JONES
female: MARY-JONES
The predicate calculus formula associated with this unit is:
EL(M1,MARRIAGE-EVENTS)
    ∧ EQ[male(M1),JOHN-JONES]
    ∧ EQ[female(M1),MARY-JONES] .
This fact unit would match the goal unit:
M1
self: (element-of MARRIAGE-EVENTS)
male: JOHN-JONES
It would not match the goal unit:
M1
self: (element-of MARRIAGE-EVENTS)
male: JOHN-JONES
female: MARY-JONES
duration : 10
For semantic networks, the situation is quite similar. In Figures 9.10
and 9.11 we show the fact and goal networks that correspond to the units
examples above. In these figures, we separate the fact and goal arcs by a
dashed line. (Again, only the location of the arcs, with respect to the
Fig. 9.10 A goal net that matches a fact net.
Fig. 9.11 A goal net that does not match a fact net.
dashed line, is important; the location of nodes is irrelevant in our
formulation.) In order for a goal network structure to match a fact
network structure, the formula associated with the goal structure must
unify with some sub-conjunction of the formulas associated with the fact structure. In these examples, we merely have to find fact arcs that match
each of the goal arcs. The match is successful in Figure 9.10, but it is unsuccessful in Figure 9.11.
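With units kept as dicts, the asymmetric match just defined reduces to checking that every goal slot appears in the fact unit with the same value (for variable-free units, unification is just equality). A minimal sketch, using the marriage example; the helper name matches is an assumption:

```python
# A goal unit matches a fact unit when every goal slot is present in the
# fact unit with an equal value (the "sub-conjunction" condition, for the
# variable-free case).

def matches(goal, fact):
    return all(slot in fact and fact[slot] == value
               for slot, value in goal.items())

M1 = {"self": ("element-of", "MARRIAGE-EVENTS"),
      "male": "JOHN-JONES", "female": "MARY-JONES"}

goal_ok  = {"self": ("element-of", "MARRIAGE-EVENTS"), "male": "JOHN-JONES"}
goal_bad = dict(M1, duration=10)   # asks for a slot the fact unit lacks

print(matches(goal_ok, M1))    # True
print(matches(goal_bad, M1))   # False
```

Note the asymmetry: the fact unit may carry slots the goal never mentions, but not the other way around, mirroring the requirement that the goal formulas be provable from the fact formulas.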
In any representational scheme there are often several alternative
representations for basically the same information. Since our definition
of structure matching depends on the exact form of the structure, such
alternatives do not strictly match. Consider the network examples of Figure
9.12. There we show two alternatives for representing "John Jones
is married to Mary Jones." One of these uses a "marriage-event," and the other uses the special wife-of function. (Ordinarily, our preference is not
to use functions like wife-of unless their values are truly independent of
other parameters, such as time.) Syntactically, the two structures of
Figure 9.12 do not match even though they semantically "say" the same
thing. Such a circumstance corresponds to the fact that two predicate
calculus forms for representing the same idea do not unify when they
contain different predicate or function symbols. We show a somewhat
more complex example of equivalent forms in Figure 9.13.
Some AI systems that use structured objects have elaborate matchers
that use special knowledge about the domain of application to enable
direct matches between structures like those shown in Figure 9.12 and
Figure 9.13. These systems have what are often described as "semantic
matchers," that is, matchers that decide that two structures are the same
if they "mean" the same thing.
It is perhaps a matter of taste as to where one wants to draw the line
between matching and object manipulation computations and deductions.
Our preference is to prohibit operations in the matcher that require
specialized domain knowledge or that might involve combinatorial
computations. In these cases, we would prefer to use rule-based deductive
machinery to establish the semantic equivalence between different
syntactic forms. Such a strategy retains, for the control system, the
responsibility of managing all potentially combinatorial searches. It
permits the matcher to be a general-purpose routine that does not have to
be specially designed for each application. We postpone a discussion of
deductive machinery until later, when we talk about operations on
structured objects.
A common cause of syntactic differences between network structures
is the different ways of setting up chains of EL and SS arcs. Consider
the example of Figure 9.14. The goal structure can be derived from the
fact structure using a fundamental theorem from set theory. Because this
derivation occurs so often with structured objects, it is usually built into the matcher. In fact, one of the advantages of structured objects is that their pointer structures allow easy computation of element/subset/set
relationships. Thus, we say that the two structures in Figure 9.14 do match.
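The set-theory step built into the matcher can be sketched as a reachability test: a goal arc EL(a, S) matches if the facts contain EL(a, S0) together with a chain of SS arcs from S0 up to S. This is an illustrative sketch; the function and variable names are assumptions.

```python
# Match a goal EL arc against fact EL and SS arcs by following subset
# chains upward, as in the Figure 9.14 example.

def el_holds(entity, target_set, el_facts, ss_facts):
    """True if entity is an element of target_set via EL plus SS chains."""
    frontier = {s for e, s in el_facts if e == entity}
    seen = set()
    while frontier:
        s = frontier.pop()
        if s == target_set:
            return True
        if s not in seen:
            seen.add(s)
            frontier |= {sup for sub, sup in ss_facts if sub == s}
    return False

el = [("JOHN-JONES", "MEN")]
ss = [("MEN", "PERSONS")]
print(el_holds("JOHN-JONES", "PERSONS", el, ss))   # True
```

The seen set guards against cycles, so the test stays linear in the number of SS arcs and involves no combinatorial search, which is why it is acceptable inside the matcher.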
So far, we have only discussed matching between two constant
structures. Usually, one or both of the structures contain variables that
can have terms substituted for them during the matching process.
Variables that occur in fact structures have implicit universal quantification in all formulas in which they appear, and variables that occur in goal
structures have implicit existential quantification in all formulas in which
they appear. Our structured-object systems are first-order, so variables
can only occur as labels for nodes, units, or slotvalues.
Fig. 9.12 Two non-matching, equivalent structures.
John or Bill gave Mary the pen.
Fig. 9.13 Another example of equivalent networks.
[Figure 9.14 shows a fact net with JOHN-JONES linked by an EL arc to
MEN, and MEN linked by an SS arc to PERSONS; the goal net links
JOHN-JONES by an EL arc directly to PERSONS.]
Fig. 9.14 Nets with EL and SS arcs.
Fig. 9.15 Matching nets.
A typical use of structures with variables is as goal structures. Suppose,
for example, that we wanted to ask the question "To whom did John give
the book?" This question could be represented by the following goal
unit:
x
self: (element-of GIVING-EVENTS)
giver: JOHN
recip: y
obj: BOOK
Matching this goal unit against the fact unit, G1, yields the substitution
{G1/x, MARY/y}, which can be used to generate an answer to the
question. In network notation, we show the corresponding matching fact
and goal structures in Figure 9.15. In order for a goal net to be matched,
each of its elements (arcs and nodes) must unify with corresponding
fact-net elements.
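Allowing variables in the goal turns the match into a one-sided unification that returns a substitution. A minimal sketch, adopting the convention (an assumption of this sketch) that lower-case strings are variables and upper-case strings are constants:

```python
# Match a goal unit containing variables against a ground fact unit,
# returning a substitution such as {"x": "G1", "y": "MARY"}, or None.

def is_var(term):
    return isinstance(term, str) and term.islower()

def match_unit(goal, fact, fact_name, goal_name):
    subst = {}
    if is_var(goal_name):
        subst[goal_name] = fact_name     # the unit name itself may be a variable
    for slot, value in goal.items():
        if slot not in fact:
            return None                  # goal asks for a slot the fact lacks
        if is_var(value):
            subst[value] = fact[slot]
        elif fact[slot] != value:
            return None                  # constant slotvalues must agree
    return subst

G1 = {"self": ("element-of", "GIVING-EVENTS"),
      "giver": "JOHN", "recip": "MARY", "obj": "BOOK"}
goal = {"self": ("element-of", "GIVING-EVENTS"),
        "giver": "JOHN", "recip": "y", "obj": "BOOK"}

print(match_unit(goal, G1, "G1", "x"))   # {'x': 'G1', 'y': 'MARY'}
```

A fuller matcher would also check that a variable occurring twice receives a consistent binding; that refinement is omitted here for brevity.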
In matching objects that contain functional expressions for slotvalues,
we assume that these functional expressions are evaluated whenever
possible. Evaluation is performed by reference to the object named by
the argument of the function. Suppose, for example, that we want to ask
the question: "Did Bill give Mary the pen?" This query can be expressed
as the goal unit:
x
self: (element-of GIVING-EVENTS)
giver: BILL
recip: MARY
obj: PEN
Suppose our fact units include:
G1
self: (element-of GIVING-EVENTS)
giver: JOHN
recip: MARY
obj: BOOK

G2
self: (element-of GIVING-EVENTS)
giver: BILL
recip: recip(G1)
obj: PEN
Because recip(G1) can be evaluated to MARY, by reference to G1, our
goal unit matches G2; and we can answer "yes" to the original query. We
permit the matcher to perform these kinds of evaluations because they
can be handled without domain-specific strategies and do not cause
combinatorial computations.
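This evaluation step can be folded into the matcher as a lookup loop: before comparing, a value of the form fn(name) is replaced by the corresponding slotvalue of the named unit. In this sketch the functional form is kept as a (fn, name) tuple, the units omit their self slots for brevity, and the names are assumptions:

```python
# Evaluate functional slotvalues like recip(G1) by dereferencing the named
# unit, then match as before.

UNITS = {
    "G1": {"giver": "JOHN", "recip": "MARY", "obj": "BOOK"},
    "G2": {"giver": "BILL", "recip": ("recip", "G1"), "obj": "PEN"},
}

def evaluate(value):
    # A value of the form (fn, name) denotes fn(name); look it up
    # repeatedly until a constant is reached.
    while isinstance(value, tuple):
        fn, name = value
        value = UNITS[name][fn]
    return value

def matches(goal, fact):
    return all(slot in fact and evaluate(fact[slot]) == evaluate(value)
               for slot, value in goal.items())

goal = {"giver": "BILL", "recip": "MARY", "obj": "PEN"}
print(matches(goal, UNITS["G2"]))   # True
```

Each dereference is a single indexed lookup, which is why this evaluation is domain-independent and non-combinatorial, and hence safe to leave inside the matcher.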
It might also be desirable to allow the matcher to use certain common
equivalences between units. One such equivalence involves the special
descriptive form (element-of ). For example, the sentence "Joe bought a
car" might be represented either by the unit:
B2
self: (element-of BUYING-EVENTS)
buyer: JOE
bought: (element-of CARS)
or by the pair of units:
B2
self: (element-of BUYING-EVENTS)
buyer: JOE
bought: X

and

X
self: (element-of CARS)
(The first unit could be considered an abbreviated form for the pair of
units.) We could build information about this abbreviation into the
matcher so that, for example, the pair of units would match the goal unit:
y
self: (element-of BUYING-EVENTS)
buyer: JOE
bought: (element-of CARS)
9.4. DEDUCTIVE OPERATIONS ON STRUCTURED OBJECTS
9.4.1. DELINEATIONS
Structured object representations can be used in production systems
for performing deductions. As in our earlier discussions of predicate
calculus deduction systems, the production rules are based on implications.
Before talking about how implications are used in general, we consider a
frequently occurring special use: when an implication asserts properties
about every member of a given set.
Consider, for example, the sentence "All computer science students
have graduate standing." From this assertion and the sentence, "John is a computer science student," we should be able to deduce that "John has
graduate standing." We could represent these statements in the predicate calculus as follows:
Fact: EL(JOHN,CS-STUDENTS)
Rule: EL(x,CS-STUDENTS) ⇒ EQ[class(x),GRAD]
Goal: EQ[class(JOHN),GRAD]
An ordinary predicate calculus production system might use the rule (in
either direction) to prove the goal.
In unit language, our fact might be represented as:
JOHN
self: (element-of CS-STUDENTS)
and our goal might be represented as:
JOHN
class: GRAD
Our problem now is how to represent and use the implicational rule in
a system based on unit notation.
In the unit formalism, we represent implications that assert properties
about every member of a set by a special kind of unit called a delineation
unit. Such a unit describes (delineates) each of the individuals in a set
denoted by another unit. For example, suppose we have a unit denoting
the set of computer science students:
CS-STUDENTS
self: (subset-of STUDENTS)
A delineation unit for this set is used to describe each of the individuals
in the set. We let this delineation unit be a sorted universal variable whose
domain of universal quantification is the set. The sort of the variable, that
is, the name of its domain set, follows the variable after a vertical bar, "|".
Thus, to describe each computer science student, we have the delineation
unit:
x | CS-STUDENTS
major : CS
class: GRAD
We must be careful not to confuse delineation units describing each
individual in a set with the unit describing the set itself, or with any
particular individuals in the set! Some AI systems using a unit formalism
have entities called prototype units that seem to play the same role as our
delineation units. In these systems, prototype units seem to be treated as
if they were a special kind of constant, representing a mythical "typical"
member of a set. The prototype units are then related to other members
of the set by an "instance" relation. But such prototype units might cause
confusion—because substituting a constant for a variable (instantiation)
should properly be thought of as a metaprocess rather than as a relation
in the formalism itself. It seems more reasonable to think of a delineation
unit as a special form of implicational rule.
Delineation units can be used in the forward direction to create new
fact units or to add properties to existing fact units. For example, suppose
we had the fact unit:
JOHN
self: (element-of CS-STUDENTS)
To use the delineation unit in the forward direction, we note that
x | CS-STUDENTS matches the fact unit JOHN. The sorted variable, x,
matches any term that is an element of CS-STUDENTS. Applying the
delineation unit to the fact unit involves adding, to the fact unit, the slots
"major: CS" and "class: GRAD." Thus extended, the fact unit JOHN
matches our goal unit JOHN.
Used in the backward direction on the goal unit, the delineation unit
sets up the subgoal unit:
JOHN
self: (element-of CS-STUDENTS)
Since this subgoal unit matches the original fact unit, we again have a
proof.
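The forward use of a delineation can be sketched directly: hold each delineation as a mapping from its sort set to the delineated slots, and extend any fact unit whose self slot places it in that set. The table and function names here are assumptions of the sketch:

```python
# Apply delineation units in the forward direction: a fact unit that is an
# element of a delineated set acquires the delineated slots.

DELINEATIONS = {
    # x | CS-STUDENTS: major: CS, class: GRAD
    "CS-STUDENTS": {"major": "CS", "class": "GRAD"},
}

def apply_delineations(unit):
    _tag, sort_set = unit["self"]
    extended = dict(unit)
    for slot, value in DELINEATIONS.get(sort_set, {}).items():
        extended.setdefault(slot, value)   # never overwrite explicit slots
    return extended

JOHN = {"self": ("element-of", "CS-STUDENTS")}
print(apply_delineations(JOHN))
```

After extension, the JOHN unit matches the goal unit with "class: GRAD", completing the forward proof described above; the backward use would instead strip the delineated slots from the goal and pose the membership subgoal.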
In the CS-student example, the goal unit did not contain any variables.
Allowing (existential) variables in goals is perfectly straightforward.
Suppose we want to find out which individual has graduate standing. A
goal unit for this query might be:
y
class: GRAD
Reasoning in the backward direction, this goal unit can be matched
against the delineation unit x | CS-STUDENTS to create the subgoal
unit:
y
self: (element-of CS-STUDENTS)
This subgoal unit, in turn, matches the fact unit JOHN, so the answer
to our original query can be constructed from the substitution
{JOHN/y}.
Delineations can be represented in the network formalism by sorted
variable nodes. The variable is assumed to have universal quantification
over the individuals in the sort set. The network representation for the
delineation of CS-STUDENTS, analogous to the unit representation just
discussed, is shown in Figure 9.16.
In addition to representations for a set of objects and characterizations
of the properties of every member of a set, we often use the idea of an
abstract individual in relation to members of the set. For example,
Fig. 9.16 A network delineation for CS-STUDENTS.
consider the net shown in Figure 9.17. This net refers to the set of all
autos, describes some properties of each member of the set, and also
mentions a particular member, "car 54." Suppose we wanted a represen
tation of the sentence "The auto was invented in 1892." We could easily
construct a node representing an "invention situation" with function arcs
pointing to the inventor, the thing invented, etc. But to which node would
the thing-invented arc point? It wasn't car 54 or even the set of all autos
that was invented in 1892. Just what was invented?
We can answer this question satisfactorily for many purposes by using
the idea of an abstract auto, denoted by the node AB-AUTO. This
abstract individual is then related to the rest of the network as shown in
Figure 9.18. In that figure, the properties of each member of the set of
autos (as expressed by the delineation) are augmented to include the fact that the abstract auto is the abstraction of every member of the set of
autos.
Note that the abstraction-of function does not have an inverse; the
function is many-to-one. In systems that treat a delineation as if it were
an individual constant representing a typical set member, it would be
possible to have an inverse function of abstraction-of, say,
reification-prototype-of, whose value would be the prototype individual. Since the
prototype confers all of its properties on every member of the set, each
would have the absurd property that it was the reification prototype of the
abstract individual. Treating prototypes as universally quantified
implications instead of as constants avoids this difficulty.
Some constant objects, such as LAWYER and PROGRAMMER, that
were used in our earlier examples are probably best interpreted as
abstract individuals. We'll see more examples of abstract individuals in
the examples to follow.
Fig. 9.17 Some information about autos.
Fig. 9.18 A net with a node denoting an abstract individual.
9.4.2. PROPERTY INHERITANCE
In many applications, the structured objects denoting individuals and
sets form a taxonomic hierarchy. A common example is the tree-like
taxonomic division of the animals into species, families, orders, etc. The
taxonomies used in ordinary reasoning might be somewhat more "convoluted" than those used in formal biology—an individual may be an element of more than one set, for example. Usually, though, useful
hierarchies narrow toward a small number of sets at the top and, in any
case, the various sets form a partial order under the subset relation.
Consider the hierarchy shown in Figure 9.19. Learning that Clyde is an
elephant, we could use the delineations (together with some set theory) to
make several forward inferences. Specifically, we could derive that Clyde is gray and wrinkled, that he likes peanuts, that he is warm-blooded, etc.
The results of these operations could be used to augment the structured
object denoting Clyde. In any given reasoning problem, efficiency considerations demand that we do not derive all of these facts about
Clyde explicitly.
Similar efficiency problems arise when delineations in a taxonomic
hierarchy are used to reason backward. Suppose that we want to prove
that Clyde was gray (when we didn't know this fact explicitly). Using the delineations of Figure 9.19, we might set up several subgoals including showing that Clyde was a shark, a sperm whale, or an elephant. If the
facts had included the assertion that Clyde was an elephant, we ought to be able to reason more efficiently, since, then, we should be able at least to
avoid subgoals like Clyde being a shark. There is evidence that humans
are able to perform these sorts of reasoning tasks rapidly without being overwhelmed by combinatorial considerations.
Some of the forward uses of delineations in taxonomic hierarchies can
be efficiently built into the matcher without risking severe combinatorial
problems. We describe how this might be done for some simple examples
using the network formalism.
In taxonomic hierarchies that narrow toward a small number of
sets at
the top, there is little harm in building into the matcher itself the ability to
apply certain delineations in the forward direction. Consider the problem
of trying to find a match for a goal arc a between two fact nodes N1 and
N2. We show this situation in Figure 9.20. If there is a fact arc a between
N1 and N2 (as shown by one of the dashed arcs in Figure 9.20), then we
have an immediate match. We could restrict the matcher by permitting it
Fig. 9.19 A taxonomic hierarchy of sets and their delineations.
to look only for such immediate matches. If none were found, we could
apply production rules, like the delineation shown in Figure 9.20, to solve
the problem.
For the example of Figure 9.20, if the matcher could not find an
explicit a arc in the fact network between N1 and N2, then it would
ascend the taxonomic hierarchy from N1, checking for the presence of a
arcs to N2 from delineations of the sets (and supersets) to which N1
belongs. In Figure 9.20 we show, by dashed arcs, some of the possible a
arcs that the matcher is permitted to seek. If it can find such an arc, the
match is successful. Unless all of the goal arcs can be matched, the
matcher terminates with failure.
A system with an extended matcher of this type operates as if an object
automatically inherited all of the (needed) properties of its sets and
supersets. The ease with which properties can be inherited is one of the
advantages of using a structured object formalism. As an illustration of
this process, let's consider the following examples based on Figure 9.19.
First, suppose we want to prove that Clyde is gray when we know that
Clyde is an elephant (but we don't know explicitly that Clyde is gray).
This problem is represented in Figure 9.21, where we have included part
of the net shown in Figure 9.19. Since there is no color arc within the fact
net pointing from CLYDE to GRAY, we cannot obtain an immediate
match. So we move up to the ELEPHANTS delineation where we do have a
color arc to GRAY. The matcher notes that CLYDE inherits this
color arc and finishes with a successful match.
Fig. 9.20 Matching a goal arc.
Fig. 9.21 A net for proving that Clyde is gray.
Next, suppose we want to prove that Clyde is warm-blooded when we
know only that Clyde is an elephant. Again, we move up the taxonomic
hierarchy to the delineation unit for MAMMALS where a match is
readily determined.
Finally, suppose we want to prove that Clyde breathes oxygen and is
gray and warm-blooded, given only that Clyde is a mammal. Ascending
the delineation hierarchy picks up a blood-temp arc to WARM and an
inhalant arc to OXYGEN, but not a complete match. These two
properties are added explicitly to CLYDE before attempting to prove the
goal by rule-based means.
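The inheritance-by-ascent behavior in these examples can be sketched in a few lines of Python. The EL/SS tables and property lists below are illustrative stand-ins for the network of Figure 9.19, not part of the original formalism:

```python
# A minimal sketch of property inheritance in a taxonomic hierarchy.
# SS links: subset relations between sets in the hierarchy.
SS = {
    "ELEPHANTS": "MAMMALS",
    "SPERM-WHALES": "MAMMALS",
    "MAMMALS": "ANIMALS",
}

# EL links: set membership for individuals.
EL = {"CLYDE": "ELEPHANTS"}

# Delineations: properties asserted of every member of a set.
DELINEATIONS = {
    "ELEPHANTS": {"color": "GRAY", "texture": "WRINKLED", "likes": "PEANUTS"},
    "MAMMALS": {"blood-temp": "WARM"},
    "ANIMALS": {"inhalant": "OXYGEN"},
}

# Explicit facts about individuals.
FACTS = {"CLYDE": {}}

def inherit(individual, prop):
    """Match a goal arc by ascending the EL/SS chain, most specific first."""
    # First try an immediate match on the individual itself.
    value = FACTS.get(individual, {}).get(prop)
    if value is not None:
        return value
    # Otherwise climb the hierarchy, checking each delineation in turn.
    s = EL.get(individual)
    while s is not None:
        value = DELINEATIONS.get(s, {}).get(prop)
        if value is not None:
            return value
        s = SS.get(s)
    return None  # no match; fall back to rule-based deduction

print(inherit("CLYDE", "color"))       # GRAY   (from ELEPHANTS)
print(inherit("CLYDE", "blood-temp"))  # WARM   (from MAMMALS)
print(inherit("CLYDE", "inhalant"))    # OXYGEN (from ANIMALS)
```

A `None` result corresponds to the matcher failing, at which point delineations would be applied as backward rules in the usual way.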
One might also want to build one other important operation into the
matcher, namely, an operation in which an inherited Skolem function
node must be proved equal to a constant node. Consider the example of
Figure 9.22. Our goal there is to show that Henry is a member of the
computer science faculty. Using the delineation x | CS-STUDENTS in
Fig. 9.22 A network with an inheritable Skolem-function node.
the forward direction on JOHN creates the structure shown in dashes in
Figure 9.22. Now, since the adviser arc represents a function, HENRY
must be equal to a(JOHN), and our match is complete.
One could use the following scheme for building this sort of reasoning
process into the matcher. Using the example of Figure 9.22 as an illustration, we first attempt an immediate match by looking for a fact EL
arc between HENRY and CS-FACULTY. Failing to find one, we then
look in the taxonomic hierarchy above HENRY to see if there is an EL
arc to be inherited. In our example, we fail again. Next, we look for
function arcs pointing to HENRY from constant nodes. Suppose we find
an arc, ai, pointing to HENRY from a node, Ni. (That is,
Fig. 9.23 Matching a variable goal node.
EQ[ai(Ni), HENRY].) Then, we look in the taxonomic hierarchy above
each such node Ni to see if Ni inherits an ai arc to some Skolem function
node that has an EL arc directly to CS-FACULTY. If we find such an
inheritance, our extended matcher succeeds.
Strategies for matching a variable goal node against facts in the
database also depend on the structure of the net. In the simplest case, the
variable goal node, say, x, is tied to constant fact nodes, N1, N2, ..., Nk,
by arcs labeled a1, a2, ..., ak, respectively. The situation is depicted in
Figure 9.23. The constant nodes N1, ..., Nk also have other arcs incident
on them. Our attempt to find a match must look back through a1 arcs
incident on N1, a2 arcs incident on N2, etc. (We assume that our
implementation of the network makes it easy to trace through arcs in the
"reverse" direction.) Some of these arcs originate from constant nodes
and some from delineations.
A good strategy is to look first for a constant node, because the set of
possible nodes in the fact net that might match x can be quite large if the
delineations are considered. Suppose node Ni has the smallest set of
constant nodes sending ai arcs to Ni. We attempt to match x against the
nodes in this set and allow the matcher to use delineations in matching
the other arcs. In Figure 9.24, we show a simple example. In this case,
there is only one constant node, namely, CLYDE, having the desired
properties. In attempting a match against CLYDE, we must next find an EL arc between CLYDE and MAMMALS, and a blood-temp arc
Fig. 9.24 An example with a variable goal node.
between CLYDE and WARM. The first of these arcs is inferred by a
subset chain, and the second is established by inheritance; so the match
succeeds.
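A rough Python rendering of this candidate-selection strategy, assuming facts are stored as (source, label, target) triples; the data and helper names are invented for illustration, and a real matcher would also let delineations and inheritance match the remaining arcs:

```python
# A sketch of candidate selection for a variable goal node x.
# Fact arcs stored as (source, label, target) triples.
ARCS = [
    ("CLYDE", "EL", "ELEPHANTS"),
    ("ELEPHANTS", "SS", "MAMMALS"),
    ("CLYDE", "color", "GRAY"),
    ("TWEETY", "color", "GRAY"),  # another gray individual
]

def sources(label, target):
    """Constant nodes sending a `label` arc to `target` (reverse traversal)."""
    return {s for (s, l, t) in ARCS if l == label and t == target}

def candidates(goal_arcs):
    """goal_arcs: list of (label, Ni) constraints on the variable node x."""
    # Gather, for each (ai, Ni) pair, the constant nodes sending an ai
    # arc to Ni, and start with the smallest such set ...
    sets = [sources(label, n) for (label, n) in goal_arcs]
    result = min(sets, key=len)
    # ... then keep only candidates satisfying the other arcs too.
    # (A full matcher would match the remaining arcs by inheritance
    # through SS/EL chains; here we simply intersect.)
    for s in sets:
        result = result & s
    return result

# Goal: find x with a color arc to GRAY and an EL arc to ELEPHANTS.
print(candidates([("color", "GRAY"), ("EL", "ELEPHANTS")]))  # {'CLYDE'}
```

Starting from the smallest candidate set keeps the search cheap when many nodes could match one of the goal arcs.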
We can always find at least one constant node to use as a candidate if
we allow the matcher to look backward down through SS and EL chains.
Consider, for example, the problem shown in Figure 9.25. In this net, there is no "immediate" constant node to serve as a candidate match, but
working down from MAMMALS through an SS and an EL chain puts us
at the constant node, CLYDE. The rest of the match is easily handled by
property inheritance. We can assume that a variable goal node always has
Fig. 9.25 Another example with a variable goal node.
an EL (or SS) arc pointing to something in the fact net (every entity is at
least a member of the universal set).
This matching strategy can be elaborated to deal with cases in which
the goal net structure is more complex, where it contains more than one
variable node. Each variable node must be properly matched in order for the whole goal structure to be matched. In any case, if no match can be
obtained, either delineation rules must be used in the backward direction
or other rules must be used to change the fact or the goal structures. We discuss rule use in a later section.
9.4.3. PROCEDURAL ATTACHMENT
In some applications, we can associate computer programs with the
slots of delineations. Executing these programs, for properly instantiated
arguments, produces slot values for instances of the delineation. Suppose,
just as a simple example, that we wanted to use a unit-based system to
multiply two numbers. One method is to provide such a system with a
large set of facts such as:
M1
  self: (element-of MULTIPLICATIONS)
  multiplicand1: 1
  multiplicand2: 1
  product: 1

M2
  self: (element-of MULTIPLICATIONS)
  multiplicand1: 1
  multiplicand2: 2
  product: 2

etc.
These units are a way of encoding a multiplication table. When we
want to know the product of two numbers, 3 and 6, we query the system
with the goal unit:
z
  multiplicand1: 3
  multiplicand2: 6
  product: x

This goal would match some stored fact unit having a slot "product: 18."
Rather than store all the required facts explicitly, we could provide a
computer program, say, TIMES, and "attach" it to the delineation of
MULTIPLICATIONS, thus:
x | MULTIPLICATIONS
  multiplicand1: (element-of NUMERALS)
  multiplicand2: (element-of NUMERALS)
  product: TIMES[multiplicand1(x), multiplicand2(x)]
Delineation units with attached procedures are used just as ordinary
delineation units. Procedures occurring in substitutions are executed as
soon as their instantiations permit. To illustrate how all of this might
work, suppose again that we want to find the product of 3 and 6. First, we
introduce as a fact unit the existence of the multiplication situation for
which we want an answer:
M
  self: (element-of MULTIPLICATIONS)
  multiplicand1: 3
  multiplicand2: 6
Next, we pose the goal unit:
M
  product: y
When we attempt a match between goal M and fact M, the matcher
uses the delineation for multiplications to allow fact M to inherit the "product" slot. This process produces the substitution {TIMES(3,6)/y}.
The correct answer is then obtained by executing the TIMES program.
A completely analogous example could have been given using the
network formalism.
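The multiplication example might be sketched as follows, with the attached TIMES procedure stored in the delineation and executed only when an explicit slot is missing; the dictionary encoding of units is an illustrative convention, not part of the formalism:

```python
# A sketch of procedural attachment: instead of storing a multiplication
# table as fact units, the delineation of MULTIPLICATIONS carries an
# attached procedure that computes the product slot on demand.

def TIMES(a, b):
    return a * b

# Delineation: maps a slot to its attached procedure.
MULTIPLICATIONS_DELINEATION = {
    "product": lambda unit: TIMES(unit["multiplicand1"], unit["multiplicand2"]),
}

def fill_slot(unit, slot):
    """Return the slot value, running the attached procedure if needed."""
    if slot in unit:
        return unit[slot]  # immediate match against an explicit slot
    proc = MULTIPLICATIONS_DELINEATION.get(slot)
    if proc is not None:
        return proc(unit)  # execute TIMES once the arguments are instantiated
    return None

# Fact unit M: the multiplication situation we want an answer for.
M = {"self": "MULTIPLICATIONS", "multiplicand1": 3, "multiplicand2": 6}

# Goal unit: M's product slot. The matcher inherits it via the delineation.
print(fill_slot(M, "product"))  # 18
```

Note that the procedure runs only after both multiplicand slots are instantiated, mirroring the rule that attached procedures execute "as soon as their instantiations permit."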
9.4.4. UNIT RULES
Some implicational statements are not easily interpreted as expressing
information solely about members of a set. For these, we introduce the
concept of a unit rule having an antecedent and a consequent. The
antecedent (ANTE) and consequent (CONSE) are lists of units
(possibly containing variables). When a unit rule is used in the forward
direction, if all of the units in the ANTE (regarded as goal units) are matched by fact units, then the units in the CONSE (properly instantiated) can be added to the set of fact units. (When ANTE units are regarded as goals, their variables are, of course, existential.) If some of the
added fact units already exist, the addition operation need only involve
adding those properties mentioned in the CONSE units. This usage is
consistent with how implications were used in the rule-based deduction
systems of chapter 6.
When a unit rule is used in the backward direction against a single goal
unit, one of the CONSE units (regarded as a fact unit) must match the goal unit. (When CONSE units are regarded as facts, their variables are
universal.) If the match succeeds, the units in the ANTE (properly
instantiated) are set up as subgoal units. A backward unit rule applied to a
(conjunctive) set of goal units is a slightly more complex operation; the
process is analogous to the methods discussed in chapter 6 involving
AND/OR graphs and substitution consistency tests. For simplicity of explanation in this chapter, we confine ourselves to examples that do not require these added mechanisms.
We'll next show some simple examples of the use of unit rules. The
reader might like to refer to our information retrieval example using
personnel data in chapter 6. There we had the rule:
R1: MANAGER(x,y) => WORKS-IN(x,y)
Expressed in the predicate calculus system being used in this part of the
book, this rule becomes:
{EL(x,DEPARTMENTS) ∧ EQ[manager(x),y]}
  => EQ[works-in(y),x]
Using our syntax for unit rules, we would express this rule as follows:
R1
ANTE: x
  self: (element-of DEPARTMENTS)
  manager: y
CONSE: y
  works-in: x
Another rule used in our personnel problem example was:
R2: [WORKS-IN(x,y) ∧ MANAGER(x,z)] => BOSS-OF(y,z)
Restated, this piece of information might be represented as:
{EQ[works-in(y),x] ∧ EQ[manager(x),z]}
  => EQ[boss-of(y),z]
As a unit rule, we might represent it as follows:
R2
ANTE: y
  works-in: x
x
  manager: z
CONSE: y
  boss-of: z
A variety of implications can be represented by unit rules of this kind.
These rules, in turn, can be used as production rules for manipulating fact
and goal units in deduction systems.
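As a concrete sketch, rule R1 can be applied in the forward direction over triples representing the personnel facts. The triple encoding and the `?`-variable convention are illustrative simplifications of unit matching:

```python
# A minimal sketch of a unit rule used forward. Facts are
# (subject, slot, value) triples; variables are strings starting with "?".

PERSONNEL = {("P-D", "self", "DEPARTMENTS"), ("P-D", "manager", "JOHN-JONES")}

# R1: if x is a department with manager y, then y works-in x.
R1 = {
    "ANTE": [("?x", "self", "DEPARTMENTS"), ("?x", "manager", "?y")],
    "CONSE": [("?y", "works-in", "?x")],
}

def match(pattern, triple, binding):
    """Extend binding so pattern equals triple, or return None."""
    b = dict(binding)
    for p, t in zip(pattern, triple):
        if p.startswith("?"):
            if b.setdefault(p, t) != t:
                return None        # variable already bound to something else
        elif p != t:
            return None            # constant mismatch
    return b

def forward(rule, facts):
    """Apply a unit rule forward, returning the derivable CONSE triples."""
    bindings = [{}]
    for ante in rule["ANTE"]:      # each ANTE unit must match some fact
        bindings = [b2 for b in bindings for f in facts
                    for b2 in [match(ante, f, b)] if b2 is not None]
    derived = set()
    for b in bindings:
        for s, slot, v in rule["CONSE"]:
            derived.add((b.get(s, s), slot, b.get(v, v)))
    return derived

print(forward(R1, PERSONNEL))  # {('JOHN-JONES', 'works-in', 'P-D')}
```

Backward use would run the same matching machinery against a goal triple and the CONSE, instantiating the ANTE as subgoals.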
Earlier, we spoke of the fact that there are often many different ways of
representing the same knowledge. Complex systems might not limit
themselves to one alternative; thus there is a need to be able to translate
freely among them. Consider the example in Figure 9.12. There we showed two alternatives for representing "John Jones is married to Sally
Jones." The equivalence between these forms might be represented as
follows:
EQ[y,wife-of(x)] ≡ (∃z){EL(z,MARRIAGE-EVENTS)
  ∧ EQ[x,male(z)] ∧ EQ[y,female(z)]}
(Here, we use a wff of the form W1 ≡ W2 as an abbreviation for
[W1 => W2] ∧ [W2 => W1].) Using the "left-to-right" implication, we
have an existential variable within the scope of two universals. Skolemizing yields:
EQ[y,wife-of(x)] => {EL[m(x,y),MARRIAGE-EVENTS]
  ∧ EQ[x,male(m(x,y))]
  ∧ EQ[y,female(m(x,y))]}
We represent this implication as the following unit rule:
R-M
ANTE: x
  wife-of: y
CONSE: m(x,y)
  self: (element-of MARRIAGE-EVENTS)
  male: x
  female: y
To use this rule in the forward direction, we match the ANTE to a fact
unit and then create a new constant unit corresponding to the instantiated unit in the CONSE.
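Forward use of R-M might be sketched as follows: each wife-of fact produces a freshly named Skolem unit m(x,y). The string encoding of Skolem terms is an illustrative convention:

```python
# A sketch of using rule R-M forward: matching the ANTE creates a new
# marriage-event unit named by the Skolem term m(x,y).

def apply_RM(facts):
    """For each wife-of fact, create the Skolemized marriage-event unit."""
    new_units = {}
    for (x, slot, y) in facts:
        if slot == "wife-of":
            unit = f"m({x},{y})"   # Skolem function node, one per (x, y)
            new_units[unit] = {
                "self": "MARRIAGE-EVENTS",
                "male": x,
                "female": y,
            }
    return new_units

facts = {("JOHN", "wife-of", "SALLY")}
print(apply_RM(facts))
# {'m(JOHN,SALLY)': {'self': 'MARRIAGE-EVENTS', 'male': 'JOHN', 'female': 'SALLY'}}
```

Because the new unit is named by its arguments, applying the rule twice to the same fact re-creates the same unit rather than an endless stream of new ones.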
The simplicity of the unit syntax makes it awkward to represent implications much more complex than those used in our examples. Even with this limitation, the formalism developed so far is quite useful for a wide variety of problems.
9.4.5. NET RULES
Earlier we mentioned the use of enclosures to represent network
implications. These implications can be used as forward or backward
rules in semantic network-based production systems. For example, the
implication:
{EL(x,DEPARTMENTS) ∧ EQ[manager(x),y]}
  => EQ[works-in(y),x]
might be represented by the network structure shown in Figure 9.26.
To use a network implication as a forward rule, the ANTE structure
(regarded as a goal) must match existing network fact structures. The
Fig. 9.26 Representing an implication.
Fig. 9.27 A network implication with a Skolem function.
CONSE structure (appropriately instantiated) can then be added to the
fact network. To use a network implication as a backward rule, the CONSE structure (regarded as a fact) must match the goal structure.
Then, the ANTE structure (appropriately instantiated) is the subgoal
produced by the rule application. Again, the situation is more complex
(involving AND/OR graphs and substitution consistency testing) when
the goal structure is first broken into component structures, and when these are matched individually by rule CONSE structures.
As a more complex example we show, in Figure 9.27, the network
version of an implication used earlier:
EQ[y,wife-of(x)] =>
  {EL[m(x,y),MARRIAGE-EVENTS]
  ∧ EQ[x,male(m(x,y))]
  ∧ EQ[y,female(m(x,y))]}
The node labeled m(x,y) is a Skolem function node. Every forward
application of the rule in Figure 9.27 creates a newly instantiated m(x,y)
node.
9.4.6. APPENDING ADVICE TO DELINEATIONS
In order to minimize combinatorial difficulties, rule applications must
be guided by an intelligent control strategy. One way to specify useful
control information is to add advice about rule applications to delineations. We mention two forms for such advice: the "to-fill" form and the
"when-filled" form. The former gives advice about which rules should be
used in the backward direction when attempting to match existential
variables in goals. The latter gives advice about which rules should be
used in the forward direction to create new fact units.
As an illustration of the use of such advice, consider the rules R1 and
R2 used above in our personnel data example. We repeat these rules here
for convenience:
R1
ANTE: x
  self: (element-of DEPARTMENTS)
  manager: y
CONSE: y
  works-in: x
R2
ANTE: y
  works-in: x
x
  manager: z
CONSE: y
  boss-of: z
The following delineations contain advice about when to use these
rules:
u | EMPLOYEES
  boss-of: (element-of EMPLOYEES)
    <to-fill: R2>
  works-in: (element-of DEPARTMENTS)

r | DEPARTMENTS
  manager: (element-of EMPLOYEES)
    <when-filled: R1>
The notation <to-fill: R2> in u | EMPLOYEES states that whenever
a goal has a boss-of slot value that is a variable, rule R2 should be used in
the backward direction (when there is no direct match against a fact unit).
The notation <when-filled: R1> in r | DEPARTMENTS states that whenever a fact unit whose self slot contains "(element-of DEPARTMENTS)" has its manager slot filled, rule R1 should be used.
Suppose we have the fact units:
JOE-SMITH
  self: (element-of EMPLOYEES)
  works-in: P-D
P-D
  self: (element-of DEPARTMENTS)
  manager: JOHN-JONES
When the second of these is asserted, a check of the delineation
r | DEPARTMENTS indicates that rule R1 should be applied in the
forward direction. This application produces the fact unit:
JOHN-JONES
  works-in: P-D
Suppose we want to ask "Who is Joe Smith's boss?" This query is
represented by the goal unit:
JOE-SMITH
  boss-of: u
An attempt at a direct match against fact unit JOE-SMITH fails; but
one of the delineations, containing the boss-of slot, advises the system to
use rule R2 in the backward direction; and doing so produces the subgoal
units:
JOE-SMITH
  works-in: x

x
  manager: u
The first of these can be matched against fact JOE-SMITH, to produce
the substitution {P-D/x}. The instantiated second subgoal unit can then
be matched against fact P-D, to produce the substitution {JOHN-JONES/u}, which contains the answer to our original query.
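The whole advice-driven episode can be sketched in Python: asserting the P-D unit triggers R1 forward via its <when-filled> advice, and the boss-of query falls back to R2 backward via <to-fill>. The control machinery shown is an illustrative simplification of the general mechanism:

```python
# A sketch of delineation advice driving rule use in the personnel example.

UNITS = {
    "JOE-SMITH": {"self": "EMPLOYEES", "works-in": "P-D"},
}

def assert_unit(name, slots):
    """Add a fact unit; <when-filled: R1> fires when manager is filled."""
    UNITS[name] = dict(slots)
    if slots.get("self") == "DEPARTMENTS" and "manager" in slots:
        # R1 forward: the manager works in the department.
        UNITS.setdefault(slots["manager"], {})["works-in"] = name

def boss_of(employee):
    """Answer a boss-of goal; <to-fill: R2> runs R2 backward on failure."""
    unit = UNITS.get(employee, {})
    if "boss-of" in unit:
        return unit["boss-of"]          # direct match against a fact unit
    # R2 backward: subgoals works-in(employee) = x, manager(x) = u.
    dept = unit.get("works-in")
    if dept is not None:
        return UNITS.get(dept, {}).get("manager")
    return None

assert_unit("P-D", {"self": "DEPARTMENTS", "manager": "JOHN-JONES"})
print(UNITS["JOHN-JONES"])   # {'works-in': 'P-D'}  (added by R1 forward)
print(boss_of("JOE-SMITH"))  # JOHN-JONES           (derived by R2 backward)
```

The advice thus focuses rule application: R1 runs only when a manager slot is asserted, and R2 runs only when a boss-of goal cannot be matched directly.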
9.5. DEFAULTS AND CONTRADICTORY INFORMATION
Many descriptive statements of the form "All xs have property P"
must be regarded as only approximately true. Perhaps most xs do have
property P, but typically we will come across exceptions. Examples of
these kinds of exceptions abound: All birds can fly (except ostriches); all
insects have six legs (except juveniles like caterpillars); all lemons are
yellow (except unripe green ones or mutant orange ones); etc. It appears
that many general synthetic (as opposed to analytic or definitional)
statements that we might make about the world are incorrect unless
qualified. Furthermore, these qualifications probably are so numerous
that the formalism would become unmanageable if we attempted to
include them all explicitly. Is there a way around this difficulty that would
still preserve the simplicity of a predicate-calculus language?
One approach to preserving simplicity is to allow implicit exceptions to
the domain of universal quantification in certain implicational state
ments. Thus, the statement "All elephants are gray" might initially be
given without listing any exceptions. Such a statement would allow us to
deduce that Clyde is gray when we learn that Clyde is an elephant. Later,
if we learn that Clyde is actually white, we must retract our deduction about his grayness and change the universal statement about elephants so
that it excludes Clyde. After making this change, it is no longer possible to deduce erroneous conclusions about Clyde's color.
The way in which the matcher uses property inheritance provides an
automatic mechanism for dealing with exceptions like Clyde's being
white. The matcher uses inheritance to deduce a property of an object
from a delineation of its class only if specific information about the
property of that object is lacking. Suppose, for example, that we want to
know the color of Clyde. Such a query might be stated as the following
goal unit:
CLYDE
  color: x
To answer this query, we first attempt a direct match with a fact unit.
Suppose we have a fact unit describing Clyde:
CLYDE
  self: (element-of ELEPHANTS)
  color: WHITE
In this case, the match substitution is {WHITE/x}, and WHITE is
our answer.
If our fact unit states only that Clyde is an elephant, the matcher
automatically uses the delineation of ELEPHANTS to answer our query.
Such a delineation might be as follows:
y | ELEPHANTS
  color: GRAY
This scheme, of countermanding general information by conflicting
specific information, can be extended to several hierarchical levels. For
example, we might have the following delineation for MAMMALS:
u | MAMMALS
  texture: FUZZY
Now, in order to avoid deducing that elephants are fuzzy, we need only
include with the ELEPHANTS delineation a property such as "texture: WRINKLED." Clyde, however, may be a fuzzy elephant, and this
property can be added to the unit CLYDE to override the ELEPHANTS
delineation. (The hierarchy may contain several such property reversals.)
For such a scheme to work, the use of delineations to deduce properties
needs always to proceed from the most specific to the more general. With
this built-in ordering on matching and retrieval processes, information at
the more specific levels protects the system from making possibly
contradictory deductions based on higher level delineations. It is as if the
universal quantifiers of delineations specifically exclude, from their
domains, all of the more specific objects that would contradict the
delineation.
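The most-specific-first ordering can be sketched directly; the hierarchy and property tables below are illustrative:

```python
# A sketch of countermanding general information by specific information:
# the lookup proceeds from the most specific level upward, so a property
# stored on CLYDE itself shadows the ELEPHANTS delineation, which in
# turn shadows MAMMALS.

HIERARCHY = ["CLYDE", "ELEPHANTS", "MAMMALS"]  # most specific first

PROPERTIES = {
    "CLYDE": {"color": "WHITE"},        # specific exception to the default
    "ELEPHANTS": {"color": "GRAY", "texture": "WRINKLED"},
    "MAMMALS": {"texture": "FUZZY"},
}

def lookup(prop):
    for level in HIERARCHY:             # the most specific level wins
        if prop in PROPERTIES[level]:
            return PROPERTIES[level][prop]
    return None

print(lookup("color"))    # WHITE    (CLYDE overrides ELEPHANTS' GRAY)
print(lookup("texture"))  # WRINKLED (ELEPHANTS overrides MAMMALS' FUZZY)
```

With this ordering built into retrieval, the general delineations act as defaults that specific information automatically countermands.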
Schemes of this sort do have certain problems, however. Suppose, for
example, that an object in the taxonomic hierarchy belongs to two
different sets and that the delineations of these sets are contradictory. We
show a network example in Figure 9.28. In this figure, we do not show an
explicit color arc for CLYDE, but CLYDE inherits contradictory color
values [assuming that ~EQ(GRAY, WHITE)]. A possible way to deal with this problem is to indicate something about the quality of each arc or
slot in a delineation. In our example, if the color arc in the ALBINOS delineation were to dominate the color arc in the ELEPHANTS delineation, then we would always attempt to inherit the color value from
the ALBINOS delineation first.
We can indicate that the arc or slot of a delineation is of low priority by
marking it as a default. Default delineations can be used only if there is no
other way to derive the needed information. In general, though, we need
an ordering on the default markers. If both of the delineations in Figure 9.28 were marked simply as defaults, for example, we would be at an impasse: We could prove that Clyde was gray only if
we could not prove
that he was any other color. However, we could prove that he was another
color, namely, white, if we could not prove that he was any other color.
And so on.
We must also be careful when we use default delineations as forward
rule applications, because then we risk adding objects to the fact database
that contradict existing or subsequent specific facts. The forward use of delineations must be coupled with "truth maintenance" techniques to ensure that contradictory facts (and facts that might be derived from
them) are either purged or otherwise inactivated.
Fig. 9.28 A net with contradictory delineations.
9.6. BIBLIOGRAPHICAL AND HISTORICAL REMARKS
Structured object representations are related to frames (no relation to
the frame problem) proposed by Minsky (1975); scripts proposed by
Schank and Abelson (1977); and beta-structures proposed by Moore and
Newell (1973). Bobrow et al. (1977) implemented a system called GUS
which used a frame-like representation. Roberts and Goldstein (1977)
implemented a simple frame language called FRL, and Goldstein and
Roberts (1979) describe a system for automatic scheduling written in
FRL. Stefik (1979) and Friedland (1979) describe a frame-based repre
sentation used by a computer system for planning experiments in
molecular genetics.
KRL-0 and KRL-1 are frame-based knowledge representation languages developed by Bobrow and Winograd (1977a). [See also Bobrow
and Winograd (1977b), Lehnert and Wilks (1979), and Bobrow and
Winograd (1979) for discussion and criticisms of KRL.] Winograd (1975)
presents a readable discussion of some of the advantages of frame-based
representations.
Hayes (1977, 1979) discusses the relationships between predicate logic
and frame-based representations. Our treatment of structured objects in
this chapter, stressing relationships with the predicate calculus, leans
toward Hayes' point of view. Converting to binary predicates is discussed
by Deliyanni and Kowalski (1979c).
Work on semantic networks stems from many sources. In cognitive
psychology, Quillian (1968), Anderson and Bower (1973), and Rumelhart and Norman (1975) have all proposed memory models based on networks. In computer science, Raphael's (1968) SIR system is based on networks of property lists; Winston (1975) used networks for representing and learning information about configurations of blocks; and
Simmons (1973) discusses the uses of networks in natural language
processing. Woods (1975) discusses some of the logical inadequacies of
early semantic networks. It is interesting that Frege's (1879) original symbolism for the predicate calculus involved two-dimensional diagrams.
Several semantic network "languages" have now been proposed that
have the full expressive power of predicate calculus. Shapiro's (1979a)
SNePS system, Hendrix's (1975b, 1979) partitioned semantic network formalism, and Schubert's (1976) [see also Schubert, Goebel and Cercone
(1979)] network formalism are examples. Papers in the volume edited by
Findler (1979) describe several different types of semantic networks. The
semantic network formalism described in this chapter seems to capture the main ideas of those that use binary predicates.
Example applications of semantic networks include natural language
processing [Walker (1978, Section 3)], database management [Mylopoulos et al. (1975)], and computer representation of geological (ore-prospecting) knowledge [Duda et al. (1978a)].
We base much of our discussion about matching network goal
structures against network fact structures on a matcher developed by
Fikes and Hendrix (1977) and, partially, on ideas of Moore (1975a).
Various mechanisms for inheritance of properties in unit systems or in
net formalisms have been suggested as approaches to what some have called the symbol-mapping problem. This problem is discussed at length
in two issues of the SIGART newsletter. [See McDermott (1975a,b),
Bundy and Stone (1975), Fahlman (1975), and Moore (1975b).] Fahlman
(1979) recommends using special-purpose hardware to solve the set intersection problems required to perform property inheritance efficiently.
Representing and using default information is discussed by Bobrow
and Winograd (1977a) and by Reiter (1978). Attempts to formalize
inferences of the form "assume X unless ~X can be proved" have led to
non-monotonic logics. McDermott and Doyle (1980) discuss the history
of these attempts, propose a specific formalism of their own, and prove its
soundness and completeness. "Maintaining" databases by purging or modifying derived expressions, as appropriate, in response to changes in
the truth values of primitive expressions, is discussed by Doyle (1979).
Stallman and Sussman's (1977) system for reasoning about circuits uses a
"truth-maintenance" scheme to make backtracking more efficient.
Other complex representational schemes, related to those discussed in
this chapter, have been proposed by Martin (1978), Schank and Abelson
(1977), Srinivasan (1977), and Sridharan (1978).
EXERCISES
9.1 Represent the situation of Figure 7.1 as a semantic network and
represent the STRIPS rule pickup(x) as a production rule for changing
networks. Explain how the rule pickup(B) is tested for applicability and
how it changes the network representation of Figure 7.1.
9.2 The predicate D(x,y) is intended to mean that sets x and y have an
empty intersection. Explain how this predicate might be used to label
arcs in a semantic network. Illustrate by an example. Can you think of any other useful arc predicates?
9.3 Represent the following sentences as semantic network delineations:
(a) All men are mortal.
(b) Every cloud has a silver lining.
(c) All roads lead to Rome.
(d) All branch managers of G-TEK participate in a profit-sharing plan.
(e) All blocks that are on top of blocks that have been moved have also been moved.
9.4 Use EL and SS predicates to rewrite each of the following wffs as a
binary-predicate wff. Rewrite them also as sets of units and as semantic
networks.
(a) [ON(C,A) ∧ ONTABLE(A) ∧ ONTABLE(B)
  ∧ HANDEMPTY ∧ CLEAR(B) ∧ CLEAR(C)]
(b) [DOG(FIDO) ∧ ~BARKS(FIDO)
  ∧ WAGS-TAIL(FIDO) ∧ MEOWS(MYRTLE)]
(c) (∀x)HOLDS[clear(x),do[trans(x,y,z),s]]
9.5 Represent the major ideas about search techniques in a semantic
network taxonomic hierarchy. (Search techniques might first be divided
into uninformed ones and heuristic ones, for example.) Include a
delineation for each set represented in your network.
PROSPECTUS
We have seen in this book that generalized production systems
(especially those that process expressions in the first-order predicate
calculus) play a fundamental role in Artificial Intelligence. The organization and control of AI production systems and the ways in which these
constitute an already mature engineering discipline routinely supporting
extensive applications, we attempt here a perspective on the entire AI
enterprise and point out several areas where further research is needed.
In fact, we might say that our present knowledge of the mechanisms of
intelligence consists of small islands in a large ocean of speculation, hope,
and ignorance.
The viewpoint presented in this book
is just one window on the core
ideas of AI. The specialist will also want to be familiar with AI
programming languages such as LISP and AI programming techniques.
We have not attempted to discuss these topics in this book, but there are
other books that concentrate on just these subjects [see Winston (1977);
Shapiro (1979); and Charniak, Riesbeck, and McDermott (1979)].
Serious students of AI will also want to be familiar with a variety of
large-scale AI applications. We have cited many of these in the
bibliographical remarks sections of this book.
In this prospectus, we give brief descriptions of problem areas that
seem to be very important for future progress in AI. Some work has
already been done on most of these problems, but results are typically
tentative, controversial, or limited. We organize these problems into three main categories. The first category concerns novel AI system
architectures and the challenges of parallel and distributed processing.
The second category deals with the problems of knowledge acquisition
and learning. Last, there are the problems concerned with the adequacy
of AI processes and representational formalisms for dealing with topics such as knowledge, goals, beliefs, plans, and self-reference.
10.1. AI SYSTEM ARCHITECTURES
10.1.1. MEMORY ORGANIZATION
One of the most important design questions facing the implementer of
AI systems concerns how to structure the knowledge base of facts and
rules so that appropriate items can be efficiently accessed. Several
techniques have been suggested. The QA3 resolution theorem-proving
system [Green (1969b)] partitioned its list of clauses into an active list and
a secondary storage list. Clauses were brought from the secondary list
into the active list only if no resolutions were possible within the active
list. The PLANNER-like AI languages generally had special methods for
storing and accessing expressions. McDermott (1975c) describes the
special indexing features used by many of these languages. The
discrimination net used by QA4 [Rulifson, Derksen, and Waldinger
(1972)] is an example of such a feature.
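The two-list bookkeeping of QA3 can be sketched as follows; the class and method names are illustrative inventions, and the resolvability test is left abstract, so this conveys only the flavor of the scheme, not Green's actual implementation:

```python
# Sketch of a QA3-style two-level clause store (hypothetical names).
class ClauseStore:
    def __init__(self, clauses, active_size=2):
        # Keep a small active working set; the rest goes to secondary storage.
        self.active = list(clauses[:active_size])
        self.secondary = list(clauses[active_size:])

    def next_resolvable_pair(self, resolvable):
        # Try all pairs within the active list first.
        for i, c1 in enumerate(self.active):
            for c2 in self.active[i + 1:]:
                if resolvable(c1, c2):
                    return c1, c2
        # Only when no resolution is possible within the active list
        # is a clause promoted from secondary storage.
        if self.secondary:
            self.active.append(self.secondary.pop(0))
            return self.next_resolvable_pair(resolvable)
        return None
```

The point of the arrangement is that the quadratic search for resolvable pairs runs over the small active list, touching secondary storage only on demand.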
Probably the most important aspect of the frame-like representations
(unit systems and semantic networks) is their built-in mechanisms for indexing. Indeed, the authors of KRL [Bobrow and Winograd (1977a)] speak specifically of permitting system designers to organize memory into those chunks that are most appropriate for the specific task at hand.
We can expect that work will continue along these lines as systems are
developed that must use the equivalent of hundreds of thousands of facts and rules.
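The indexing built into frame-like chunks can be suggested with a small sketch; the `Frame` class and slot names here are illustrative inventions, not KRL's actual notation:

```python
# Minimal sketch of a frame (unit) system with slot inheritance
# along "is-a" links; names are illustrative.
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        # Look for the slot locally, then inherit up the is-a chain.
        if slot in self.slots:
            return self.slots[slot]
        return self.parent.get(slot) if self.parent else None

animal = Frame("animal", locomotion="walks")
bird = Frame("bird", parent=animal, locomotion="flies")
canary = Frame("canary", parent=bird, color="yellow")
```

Because each fact lives in the chunk it is about, retrieving a canary's color or locomotion touches only the short is-a chain rather than the whole knowledge base.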
10.1.2. PARALLEL AND DISTRIBUTED SYSTEMS
Our discussion of AI production systems was based on the tacit
assumption of a single serial processor that applied one rule at a time to a
database. Yet, there are several ways in which our production systems
could be extended to utilize parallel processing. First, some of the
primitive operations of the system could be performed by parallel
hardware. For example, Fahlman (1979) has suggested a parallel system
for performing the set intersections needed for efficient property
inheritance computations.
Second, in tentative control regimes, a system capable of parallel
processing could apply several rules simultaneously rather than
backtracking or developing a search tree one node at a time. If the number of
successors to be generated exceeds the number of parallel
rule-application modules, the control system must attempt to apportion
the available rule-application modules as efficiently as possible.
Third, in decomposable production systems, parallel processors could
be assigned to each component database, and these processors (and their
descendants) could work independently until all databases were
processed to termination. These three methods of using parallelism do not
alter the basic production-system paradigm for AI systems presented in
this book; they merely involve implementing this paradigm with parallel
processing.
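The third scheme might be sketched as follows; the (condition, action) rule format and the thread-pool implementation are assumptions made for illustration, not a prescription for how such systems were actually built:

```python
# Sketch: each component database of a decomposable production system
# is processed to termination by its own independent worker.
from concurrent.futures import ThreadPoolExecutor

def process_to_termination(db, rules):
    # Repeatedly apply the first applicable rule until none applies.
    changed = True
    while changed:
        changed = False
        for applicable, apply_rule in rules:
            if applicable(db):
                db = apply_rule(db)
                changed = True
    return db

def solve_components(databases, rules):
    # One worker per component database; workers never interact.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda db: process_to_termination(db, rules),
                             databases))
```

The essential property is that the workers share nothing: because the components of a decomposable database can be processed independently, no synchronization beyond the final collection of results is required.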
A fourth use of parallelism involves an expansion of the ideas presented
here. One could imagine a large community of more-or-less independent
systems. (Each of these systems could be a production system or a system
of some different style, with internal processes either serial or parallel.) The systems communicate among themselves in order to solve problems cooperatively. If each of the component systems is relatively simple, the communication protocols and the procedures for control and cooperation
must be specified in rather precise detail by the designer of the
community. The augmented Petri nets of Zisman (1978) and the actor formalism of Hewitt (1977) seem to be examples of this type. [See also
Hewitt and Baker (1977) and Kornfeld (1979).] On the other hand, if each
of the systems is itself a complex AI system, then the situation is
analogous to a society of humans or other higher animals who must plan
their own communication and cooperation strategies. We have little
experience with complexes of interacting AI systems, but the work of
Lesser and Erman (1979), Lesser and Corkill (1979), and of Corkill (1979)
represents steps in that direction. Related work by Smith (1978, 1979) also
involves networks of cooperating problem-solving components. Crane
(1978) treats analogies between parallel computer systems and human
societies in a provocative manner.
10.2. KNOWLEDGE ACQUISITION
Formalizing knowledge and implementing knowledge bases are major
tasks in the construction of large AI systems. The hundreds of rules and
thousands of facts required by many of these systems are generally
obtained by interviewing experts in the domain of application. Representing
expert knowledge as facts or rules (or as expressions in any other
formalism) is typically a tedious and time-consuming process.
Techniques for automating this knowledge acquisition process would
constitute a major advance in AI technology.
We shall briefly discuss three ways in which knowledge acquisition
might be automated. First, special editing systems might be built that
allow persons who possess expert knowledge about the domain of application (but who are not themselves computer programmers) to
interact directly with the knowledge bases of AI systems. Second,
advances in natural language processing techniques will allow humans to
instruct and teach computer systems through ordinary conversations
(augmented, perhaps, with diagrams and other nontextual material).
Third, AI systems might learn important knowledge directly from their
experiences in their problem domains.
Virtually all large AI systems must have a knowledge base editing
system of some sort to facilitate the processes of adding, deleting, and
changing facts and rules as the systems evolve. Davis (1976) designed a
system called TEIRESIAS that allowed physicians to interact directly
with the knowledge base of the MYCIN medical diagnosis system.
Friedland (1979) reports on a representation system that contains expert
knowledge about molecular genetics; a key feature of this system is its
family of editors for interacting with the knowledge base. Duda et al.
(1979) describe a knowledge-base editing system for the PROSPECTOR
system. As systems of these kinds become capable of conversing
with their designers in natural language, knowledge entry and
modification processes will become much more efficient. One must remember,
however, that computer systems will be incapable of truly flexible
dialogues about representations and the concepts to be used in these
representations until designers are able to give these systems useful
meta-knowledge about representations themselves. Unfortunately, we
do not even have
a very clear outline yet of a general theory of knowledge
representation.
It has often been hoped that the knowledge acquisition task could be
eased somewhat by automatic learning mechanisms built into AI
systems. Humans and other animals seem to have impressive capacities for learning from experience. Indeed, some early work in AI was based
on the strategy of constructing intelligent machines that could learn how
to perform tasks.
There
are, of course, several varieties of learning. Almost any change to
an AI system, such as the entry of a single new fact, the addition of a new
component to a control strategy, or a profound reorganization of system
architecture, might be called an instance of learning. Furthermore, these
changes might be caused either directly by a programmer (design
changes) or indirectly through conversation with a human or other
system (teaching) or through response to experience in an environment
(adaptive learning). Evolutionary design changes already play an
important role in the development of AI systems. Some work has also been
done on developing techniques for teaching AI systems. Strategies for
adaptive learning, however, have so far met with only limited success. It
can be expected that all of these varieties of learning will be important in
future AI systems. The subject is an important area for AI research.
Early work in adaptive learning concentrated on systems for pattern
classification [Nilsson (1965)] and for game playing [Samuel (1959,
1967)]. This work involved automatic adjustment of the parameters of
simple classification and evaluation functions. Winston (1975) developed
a system that could learn reasonably complex predicates for category membership; as with many learning systems, efficiency depended
strongly on appropriately sequenced experiences. Mitchell (1979) and
Dietterich and Michalski (1979) give good discussions of their own and other approaches to the problem of
concept learning and induction.
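The automatic-adjustment idea can be illustrated with a small sketch of an LMS-style update for a linear evaluation function; the update rule and learning rate here are illustrative, not Samuel's exact procedure:

```python
# Sketch of parameter adjustment for a linear evaluation function:
# nudge each weight so as to reduce the error between the function's
# score on a position and a training value for that position.
def adjust(weights, features, target, rate=0.1):
    score = sum(w * f for w, f in zip(weights, features))
    error = target - score
    # Each weight moves in proportion to its feature's contribution.
    return [w + rate * error * f for w, f in zip(weights, features)]

weights = [0.0, 0.0]
for _ in range(100):
    weights = adjust(weights, [1.0, 2.0], 5.0)
```

After repeated presentations of the same position, the evaluation converges toward the training value; as the text notes for Winston's system, the quality of such learning depends strongly on how the training experiences are sequenced.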
Some efforts have also been made to save the results of AI
computations (such as proofs of theorems and robot plans) in a form that
permits their use in later problems. For example, Fikes, Hart, and
Nilsson (1972b) proposed a method for generalizing and saving triangle
tables so that they could be used as macro-operators in the construction
of more complex plans.
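The idea of packaging a plan as a macro-operator can be sketched by naively composing STRIPS-style (precondition, add, delete) triples; this is a simplified stand-in for generalized triangle tables, not the method of Fikes, Hart, and Nilsson:

```python
# Sketch: compose a sequence of STRIPS-style operators into one
# macro-operator with combined preconditions and effects.
def compose(operators):
    """Each operator is (preconditions, add_list, delete_list), as sets."""
    pre, added, deleted = set(), set(), set()
    for p, a, d in operators:
        # Preconditions not supplied by an earlier add list become
        # preconditions of the macro-operator as a whole.
        pre |= (set(p) - added)
        added = (added - set(d)) | set(a)
        deleted = (deleted - set(a)) | set(d)
    return pre, added, deleted
```

Composing a pickup followed by a stack, for instance, yields a macro whose preconditions include CLEAR(B), since no earlier step achieves it, while the intermediate HOLDING(A) condition disappears from the net effects.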
One of the most powerful ways of using learned or remembered
material involves the ability to recognize analogies between current
problems and those previously encountered. An early program by Evans
(1968) was able to solve geometric analogy problems of the sort found in
standard intelligence tests. Kling (1971) used an analogy-based method
to improve the efficiency of a theorem-proving system. Ulrich and Moll
(1977) describe a system that uses analogies in program synthesis.
Winston (1979) describes
a theory (accompanied by a program) about the
use of analogy in learning, and McDermott (1979) discusses how a
program might learn analogies.
A system described by Vere (1978) is able to learn STRIPS-like rules
by observing state descriptions before and after actions that modify them.
Buchanan and Mitchell (1978) describe a process for learning the
production rules used by the DENDRAL chemical-structure computing
system. A report by Soloway (1978) describes a system that learns some
of the rules of baseball by observing the (simulated) actions of players.
Last, we might mention the AM system of Lenat (1976) that uses a
stock of simple, primitive concepts in mathematics and discovers
concepts (such as prime numbers).
10.3. REPRESENTATIONAL FORMALISMS
The example problems that we have considered in this book
demonstrate that the first-order predicate calculus can be used to represent
much of the knowledge needed by AI systems. There are varieties of
knowledge, however, that humans routinely use in solving problems and
in interacting with other humans that present certain difficulties for
first-order logic in particular and for AI systems in general. Examples
include knowledge that is uncertain or indefinite in various ways, commonsense knowledge about cause and effect, knowledge about plans
and processes, knowledge about the beliefs, knowledge, and goals of
ourselves and others, and knowledge about knowledge. McCarthy (1977)
discusses these and other epistemological problems of AI.
Some workers have concluded that logical formalisms are
fundamentally inadequate to deal with these sorts of concepts and that some
radically different representational schemes will have to be invented
[see, for example, Winograd (1980b)]. Citing previous successes of
formal methods, others maintain that certain augmentations of
first-order logic,
or suitably complex theories represented in first-order logic, or perhaps
more complex logical formalisms will ultimately prove adequate to
capture the knowledge and processes used in human-like reasoning.
10.3.1. COMMONSENSE REASONING
Many of the existing ideas about AI techniques have been refined on
"toy" problems, such as problems in the "blocks world," in which the
necessary knowledge is reasonably easy to formalize. AI applications in
more difficult domains such as medicine, geology, and chemistry require
extensive effort devoted to formalizing the appropriate knowledge.
Hayes (1978a) and others have argued that AI researchers should now
begin an attempt to formalize fundamental "commonsense knowledge
about the everyday physical world: about objects; shape; space;
movement; substances (solids and liquids); time, etc." Hayes (1978b) has
begun this task with an essay about the formalization of the properties of
liquids. Kuipers (1978, 1979) describes a system for modeling
commonsense knowledge of space.
Formalizing commonsense physics must be distinguished from the
rather precise mathematical models of the physics of
solids, liquids and
gases. The latter are probably too cumbersome to support commonsense
reasoning about physical events. (McCarthy argues, for example, that
people most likely do not—even unconsciously—perform complex hydrodynamic simulation computations in order to decide whether or
not to move in order to avoid getting burned by a spilled cup of hot
coffee.)
Formalizing commonsense physics is important because many
applications require reasoning about space, materials, time, etc. Also, much of
the content of natural language expressions is about the physical world;
certainly many metaphors have
a physical basis. Indeed, in order to make
full use of analogical reasoning, AI systems will need a thorough, even if somewhat inexact, understanding of simple physics.
Much commonsense reasoning (and even technical reasoning) is
inexact in the sense that the conclusions and the facts and rules on which
it is based are only approximately true. Yet, people are able to use uncertain facts and rules to arrive at useful conclusions about everyday
subjects or about specialized subjects such as medicine. A basic
characteristic of such approximate reasoning seems to be that a conclusion
carries more conviction if it is independently supported by two or more
separate arguments.
We have previously cited the work of Shortliffe (1976) on MYCIN and
of Duda, Hart, and Nilsson (1976) on PROSPECTOR and referred to
their related methods for dealing with uncertain rules and facts. Their
techniques have various shortcomings, however, especially when the
facts and rules are not independent; furthermore, it is not clear that the
MYCIN/PROSPECTOR methods can easily be extended to rules and
facts containing quantified variables.
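The flavor of these methods can be conveyed by the rule for combining the certainty factors of two independent supporting arguments; this is a simplified sketch of the MYCIN-style calculus, restricted to positive factors for brevity:

```python
# Sketch of MYCIN-style combination of certainty factors from two
# independent supporting arguments (positive factors only).
def combine_cf(cf1, cf2):
    # A second independent confirmation raises the combined certainty,
    # but can never push it past 1.0.
    return cf1 + cf2 * (1.0 - cf1)
```

Two arguments of certainty 0.6 and 0.5, for example, combine to 0.8, stronger than either alone, which captures the observation above that independently supported conclusions carry more conviction. The shortcomings noted in the text arise precisely when the supporting arguments are not in fact independent.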
Collins (1978) stresses the importance of meta-knowledge in plausible
reasoning. (We discuss the subject of meta-knowledge below.) Zadeh
(1979) invokes the ideas of fuzzy sets to deal with problems of
approximate reasoning. The work on default reasoning and non-monotonic
logic, cited at the end of chapter 9, offers additional approaches to
plausible reasoning.
Another important component of commonsense reasoning is the
ability to reason about actions, processes and plans. To do so, we first
need ways of representing these concepts. In the bibliographic remarks
sections of chapters 7 and 8, we cited several sources relevant to the problem of modeling actions and plans. In addition to these, we might
mention the work of Moore (1979) who combines a technique for reasoning about actions with one for reasoning about knowledge (see below). The interaction between action and knowledge has not been discussed in this book (and, indeed, has not yet been adequately explored
in AI). Yet, this interaction is quite fundamental because actions typically change the state of knowledge of the actor, and because knowledge about
the world is necessary in order to perform actions.
Hendrix (1975a; 1979, pp. 76ff) discusses the use of semantic networks
for representing processes. Grosz (1977) and Robinson (1978) use structures similar to procedural nets [Sacerdoti (1977)] to help interpret natural language statements occurring in a dialogue with a user who is
participating in a process. Schank and Abelson (1977) propose structures for representing processes and plans for use in natural language understanding applications. Schmidt, Sridharan, and Goodson (1978)
propose techniques for recognizing plans and goals of actors from their
actions. All of these efforts are contributing to our ability to formalize—and thus ultimately to build systems that can reason about—plans,
actions, and processes.
10.3.2. REPRESENTING PROPOSITIONAL ATTITUDES
Certain verbs, such as know, believe, want, and fear, can be used to
express a relation between an agent and a proposition, as illustrated by
the following examples:
Sam knows that Pete is a lawyer.
Sam doesn't believe that John is a doctor.
Pete wants it to rain. (Or, Pete wants that it be raining.)
John fears that Sam believes that the morning star is not Venus.
The italicized portions of these sentences are propositions, and the
relations know, believe, etc., refer to attitudes of agents toward these
propositions. Thus, know, believe, etc., are called propositional attitudes.
A logical formalism for expressing propositional attitudes must have a
way of expressing the appropriate relations between agents and attitudes.
It is well known that there are several difficulties in developing such a
logical formalism. One difficulty is the problem of referential
transparency. From the statements John believes Santa Claus brought him presents
at Christmas and John's father is Santa Claus, we would not want to be
able to deduce the statement John believes John's father brought him
presents at Christmas. These problems have been discussed by logicians
for several years, and various solutions have been proposed [see, for example, the essays in Linsky (1971)].
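A deliberately naive encoding makes the difficulty concrete: if beliefs are stored by syntactic form, the unwanted substitution is blocked, but only because no substitution at all is permitted; the data below simply encode the Santa Claus example from the text:

```python
# Sketch: belief contexts resist substitution of co-referring terms.
# Beliefs here are keyed by syntactic form, not by the entities denoted.
beliefs = {("John", "Santa Claus brought John presents")}

# In the model, "Santa Claus" and "John's father" denote the same person;
# in a referentially transparent logic this equality would license the
# substitution that produces the unwanted conclusion.
equalities = {("Santa Claus", "John's father")}

def believes(agent, proposition):
    return (agent, proposition) in beliefs
```

A purely syntactic treatment avoids the bad deduction but also blocks the legitimate inferences one wants from equality; the formalisms of Moore and McCarthy discussed below aim to recover the latter without the former.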
Moore (1977, 1979) discusses the problems of formalizing
propositional attitudes for AI applications. He points out several difficulties
with straightforward approaches and shows how a modal logic with a possible
modal logic with a possible
worlds semantics can be used to overcome these difficulties for the
attitude know. He then proceeds to show how this approach can be
embedded in first-order logic so that the usual sorts of AI
theorem-proving systems can be used to reason about knowledge. (As we mentioned
earlier, Moore also links his logic of knowledge with a logic of actions.)
Several other approaches have also been suggested. McCarthy (1979)
proposes that
concepts of domain entities be added to the domain of
discourse and shows how a first-order formulation involving these
concepts avoids some of the standard difficulties. Creary (1979) extends
this notion. Elschlager (1979) considers the problem of consistency of
knowledge statements in formulations that treat concepts as domain
entities.
Although formalizations for propositional attitudes have largely been
the concern of logicians, the problem is fundamental to future advances
in AI. Natural language communication between humans seems to
depend on the ability of the participants to make inferences about each
other's beliefs, and we should expect that natural language
understanding systems will require similar abilities. Also, when two or more AI
systems cooperate to solve problems, they will need to be able to reason
about each other's goals, knowledge, and beliefs. Cohen (1978) discusses
how a system can plan to affect the state of knowledge of another system
by speech acts. Much more work along these lines needs to be done.
10.3.3. METAKNOWLEDGE
A good solution to the problem of reasoning about the knowledge of
others ought also to confer the ability to reason about one's own
knowledge. We would like to be able to build systems that know or can
deduce whether or not they know facts and rules about certain subjects
without having to scan their large knowledge bases searching for these
items. We would also like systems to have knowledge about when and
how to use other knowledge. As mentioned in the bibliographic remarks
section of chapter 6, various researchers have suggested that systems
containing meta-rules be used to control production systems.
Collins (1978) has suggested that meta-knowledge would be useful in
deducing object knowledge. For example: Since I would know it if Henry
Kissinger were three meters tall, and since I don't know that he is, he isn't.
Meta-level reasoning is also an easy way to solve many problems. Bundy
et al. (1979) and Weyhrauch (1980) illustrate this principle applied to solving equations.
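Collins's pattern of inference can be sketched as follows; the salience test standing in for "I would know it" is a deliberately toy assumption:

```python
# Sketch of Collins-style meta-level inference: if a fact is salient
# enough that we would surely know it, and it is absent from the
# knowledge base, conclude that it is false.
def plausibly_false(fact, knowledge_base, would_know):
    # No scan for a proof of the negation is needed: mere absence
    # of a fact we would know suffices.
    return would_know(fact) and fact not in knowledge_base

kb = {"Kissinger is a diplomat"}
salient = lambda fact: "three meters tall" in fact  # toy salience test
```

The appeal of the scheme is exactly the one stated above: the system answers the question without searching its large knowledge base for the fact itself, at the price of relying on meta-knowledge about what it would know.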
Two elegant arrangements of systems and metasystems are LCF [Cohn
(1979)] and FOL [Weyhrauch (1979)]. Weyhrauch stresses the ability of
FOL to refer to itself while avoiding problems of circularity.
Self-reference has been a haunting but elusive theme in Artificial Intelligence
research. For an interesting book about problems of self-reference in
logic, music, and art, see Hofstadter (1979).
The matters that we have briefly discussed in this prospectus are now
the subjects of intense AI research activity. Empirical explorations and
new research results can be expected to challenge and expand the AI
paradigms and formalisms that have proved useful for organizing past results. In this book, we have used certain organizing ideas—such as generalized production systems, the language of the predicate calculus,
and heuristic search—to make our story just a bit simpler and more
memorable. We cannot now tell whether new results will fold in easily to the existing story or whether they will require the invention of new themes or a completely new plot. That is how science and technology
progress. Whatever the new results, we do know, however, that their
description will be as important as their invention in order that we (and machines) will be able to understand them.
BIBLIOGRAPHY
MNEMONICS FOR SYMPOSIA, PROCEEDINGS,
AND SPECIAL COLLECTIONS
COLLECTED WORKS
AHT
Elithorn, A., and Jones, D. (Eds.) 1973. Artificial And Human
Thinking. San Francisco: Jossey-Bass.
AIHP
Findler, N. V., and Meltzer, B. (Eds.) 1971. Artificial Intelligence and
Heuristic Programming. New York: American Elsevier.
AI-MIT
Winston, P. H., and Brown, R. H. (Eds.) 1979. Artificial Intelligence:
An MIT Perspective (2 vols.). Cambridge, MA: MIT Press.
AN
Findler, N. V. (Ed.) 1979. Associative Networks—The Representation
and Use of Knowledge in Computers. New York: Academic Press.
CT
Feigenbaum, E., and Feldman, J. (Eds.) 1963. Computers and
Thought. New York: McGraw-Hill.
CVS
Hanson, A. R., and Riseman, E. M. (Eds.) 1978. Computer Vision
Systems. New York: Academic Press.
KBS
Davis, R., and Lenat, D. 1980. Knowledge-Based Systems in Artificial
Intelligence. New York: McGraw-Hill. In press.
MI1
Collins, N. L., and Michie, D. (Eds.) 1967. Machine Intelligence 1.
Edinburgh: Edinburgh University Press.
MI2
Dale, E., and Michie, D. (Eds.) 1968. Machine Intelligence 2.
Edinburgh: Edinburgh University Press.
MI3
Michie, D. (Ed.) 1968. Machine Intelligence 3. Edinburgh: Edinburgh
University Press.
MI4
Meltzer, B., and Michie, D. (Eds.) 1969. Machine Intelligence 4.
Edinburgh: Edinburgh University Press.
MI5
Meltzer, B., and Michie, D. (Eds.) 1970. Machine Intelligence 5.
Edinburgh: Edinburgh University Press.
MI6
Meltzer, B., and Michie, D. (Eds.) 1971. Machine Intelligence 6.
Edinburgh: Edinburgh University Press.
MI7
Meltzer, B., and Michie, D. (Eds.) 1972. Machine Intelligence 7.
Edinburgh: Edinburgh University Press.
MI8
Elcock E., and Michie, D. (Eds.) 1977. Machine Intelligence 8:
Machine Representations of Knowledge. Chichester: Ellis Horwood.
MI9
Hayes, J. E., Michie, D., and Mikulich, L. I. (Eds.) 1979. Machine
Intelligence 9: Machine Expertise and the Human Interface. Chichester:
Ellis Horwood.
PCV
Winston, P. H. (Ed.) 1975. The Psychology of Computer Vision. New
York: McGraw-Hill.
PDIS
Waterman, D., and Hayes-Roth, F. (Eds.) 1978. Pattern-Directed
Inference Systems. New York: Academic Press.
RDST
Wegner, P. (Ed.) 1979. Research Directions in Software Technology.
Cambridge, MA: MIT Press.
RM
Simon, H. A., and Siklóssy, L. (Eds.) 1972. Representation and
Meaning: Experiments with Information Processing Systems. Englewood
Cliffs, NJ: Prentice-Hall.
RU
Bobrow, D. G., and Collins, A. (Eds.) 1975. Representation and
Understanding. New York: Academic Press.
SIP
Minsky, M. (Ed.) 1968. Semantic Information Processing. Cambridge,
MA: MIT Press.
TANPS
Banerji, R., and Mesarovic, M. D. (Eds.) 1970. Theoretical Approaches
to Non-Numerical Problem Solving. Berlin: Springer-Verlag.
PROCEEDINGS
IJCAI-1
Walker, D. E., and Norton, L. M. (Eds.) 1969. International Joint
Conference on Artificial Intelligence. Washington, D.C.; May.
IJCAI-2
1971. Advance Papers, Second International Joint Conference on
Artificial Intelligence. London: The British Computer Society;
September. (Xerographic or microfilm copies available from Xerox
University Microfilms, 300 North Zeeb Rd., Ann Arbor, MI, 48106;
or from University Microfilms Ltd., St. John's Rd., Tylers Green,
Penn., Buckinghamshire HP 10 8HR, England.)
IJCAI-3
1973. Advance Papers, Third International Joint Conference on
Artificial Intelligence. Stanford, CA; August. (Copies available from
Artificial Intelligence Center, SRI International, Inc., Menlo Park,
CA, 94025.)
IJCAI-4
1975. Advance Papers of the Fourth International Joint Conference
on Artificial Intelligence (2 vols.). Tbilisi, Georgia, USSR; September.
(Copies available from IJCAI-4, MIT AI Laboratory, 545 Technology
Sq., Cambridge, MA, 02139.)
IJCAI-5
1977. Proceedings of the 5th International Joint Conference on
Artificial Intelligence (2 vols.). Massachusetts Institute of Technology,
Cambridge, MA; August. (Copies available from IJCAI-77, Dept. of
Computer Science, Carnegie-Mellon University, Pittsburgh, PA, 15213.)
IJCAI-6
1979. Proceedings of the Sixth International Joint Conference on
Artificial Intelligence (2 vols.). Tokyo; August. (Copies available
from IJCAI-79, Computer Science Dept., Stanford University,
Stanford, CA 94305.)
PASC
1974. Proceedings of the AISB Summer Conference. (Copies available
from Dept. of Artificial Intelligence, University of Edinburgh,
Hope Park Sq., Edinburgh, EH8 9NW, Scotland.)
PCAI
1978. Proceedings of the AISB/GI Conference on Artificial
Intelligence. Hamburg; July. (Copies available from Dept. of Artificial
Intelligence, University of Edinburgh, Hope Park Sq., Edinburgh,
EH8 9NW, Scotland.)
SCAISB-76
1976. Conference Proceedings, Summer Conference on Artificial
Intelligence and Simulation of Behavior. Department of Artificial
Intelligence, University of Edinburgh; July. (Copies available from
Dept. of Artificial Intelligence, University of Edinburgh, Hope Park
Sq., Edinburgh, EH8 9NW, Scotland.)
TINLAP-1
Nash-Webber, B., and Schank, R. (Eds.) 1975. Proceedings of
Theoretical Issues in Natural Language Processing. Cambridge,
MA; June.
TINLAP-2
Waltz, D. (Ed.) 1978. Proceedings of TINLAP-2: Theoretical Issues
in Natural Language Processing—2. University of Illinois; July.
(Copies available from the Association for Computing Machinery,
P.O. Box 12105, Church Street Station, New York, NY, 10249.)
WAD
Joyner, W. H., Jr. (Ed.) 1979. Proceedings of the Fourth Workshop on
Automated Deduction. Austin, Texas; February.
REFERENCES
Abraham, R. G. 1977. Programmable automation of batch assembly
operations. The Industrial Robot, 4(3), 119-131. (International Fluidics
Services, Ltd.)
Agin, G. J. 1977. Vision systems for inspection and for manipulator
control. Proc. 1977 Joint Automatic Control Conf., vol. 1, pp. 132-138.
San Francisco, CA; June. New York: IEEE.
Aho, A. V., Hopcroft, J. E., and Ullman, J. D. 1974. The Design and
Analysis of Computer Algorithms. Reading, MA: Addison-Wesley.
Allen, J. 1978. Anatomy of LISP. New York: McGraw-Hill.
Allen, J., and Luckham, D. 1970. An interactive theorem proving
program. In MI5, pp. 321-336.
Amarel, S. 1967. An approach to heuristic problem-solving and
theorem proving in the propositional calculus. In J. Hart and S. Takasu
(Eds.), Systems and Computer Science. Toronto: University of Toronto
Press.
Amarel, S. 1968. On representations of problems of reasoning about
actions. In MI3, pp. 131-171.
Ambler, A. P., et al. 1975. A versatile system for computer-controlled
assembly. Artificial Intelligence, 6(2), 129-156.
Anderson, J., and Bower, G. 1973. Human Associative Memory.
Washington, D.C.: Winston.
Athans, M., et al. 1974. Systems, Networks and Computation:
Multivariable Methods. New York: McGraw-Hill.
Ball, W. 1931. Mathematical Recreations and Essays (10th ed.).
London: Macmillan & Co.
Ballantyne, A. M., and Bledsoe, W. W. 1977. Automatic proofs of
theorems in analysis using non-standard techniques. JACM, 24(3),
353-374.
Banerji, R., and Mesarovic, M. D. (Eds.) 1970. Theoretical Approaches
to Non-Numerical Problem Solving. Berlin: Springer-Verlag.
Barr, A., and Feigenbaum, E. A. 1980. Handbook of Artificial
Intelligence. Stanford, CA: Stanford University Computer Science Dept.
Barrow, H., and Tenenbaum, J. M. 1976. MSYS: A System for
Reasoning about Scenes, Tech. Note 121, Artificial Intelligence Center,
Stanford Research Institute, Menlo Park, CA; March.
Barstow, D. 1979. Knowledge-Based Program Construction. New York:
North-Holland.
Baudet, G. M. 1978. On the branching factor of the alpha-beta pruning
algorithm. Artificial Intelligence, 10(2), 173-199.
Bellman, R., and Dreyfus, S. 1962. Applied Dynamic Programming.
Princeton, NJ: Princeton University Press.
Berliner, H. J. 1978. A chronology of computer chess and its literature.
Artificial Intelligence, 10(2), 201-214.
Berliner, H. J. 1979. The B* tree search algorithm: A best-first proof
procedure. Artificial Intelligence, 12(1), 23-40.
Bernstein, M. I. 1976. Interactive Systems Research: Final Report to the
Director, Advanced Research Projects Agency. Rep. No. TM-
5243/006/00, System Development Corporation, Santa Monica, CA.
Bibel, W., and Schreiber, J. 1975. Proof search in a Gentzen-like system
of first-order logic. In E. Gelenbe and D. Potier (Eds.), International
Computing Symposium 1975. Amsterdam: North-Holland.
Biermann, A. W. 1976. Approaches to automatic programming.
Advances in Computers (vol. 15). New York: Academic Press.
Binford, T. O., et al. 1978. Exploratory Study of Computer Integrated
Assembly Systems, Memo AIM-285.4, Fifth Report, Stanford Artificial
Intelligence Laboratory, Stanford University, September.
Black, F. 1964. A Deductive Question-Answering System. Doctoral
dissertation, Harvard, June. (Reprinted in SIP, pp. 354-402.)
BIBLIOGRAPHY
Bledsoe, W. W. 1971. Splitting and reduction heuristics in automatic
theorem proving. Artificial Intelligence, 2(1), 55-77.
Bledsoe, W. W. 1977. Non-resolution theorem proving. Artificial Intelligence, 9(1), 1-35.
Bledsoe, W. W., and Bruell, P. 1974. A man-machine theorem-proving system. Artificial Intelligence, 5(1), 51-72.
Bledsoe, W. W., Bruell, P., and Shostak, R. 1978. A Prover for General
Inequalities. Rep. No. ATP-40, The University of Texas at Austin,
Departments of Mathematics and Computer Sciences.
Bledsoe, W. W., and Tyson, M. 1978. The UT Interactive Theorem
Prover. Memo ATP-17a, The University of Texas at Austin, Math. Dept.,
June.
Bobrow, D., and Raphael, B. 1974. New programming languages for
Artificial Intelligence research. ACM Computing Surveys, vol. 6, pp.
153-174.
Bobrow, D. G., and Collins, A. (Eds.) 1975. Representation and Understanding. New York: Academic Press.
Bobrow, D. G., et al. 1977. GUS, A frame-driven dialog system.
Artificial Intelligence, 8(2), 155-173.
Bobrow, D. G., and Winograd, T. 1977a. An overview of KRL, a
knowledge representation language. Cognitive Science, 1(1), 3-46.
Bobrow, D. G., and Winograd, T. 1977b. Experience with KRL-0: one
cycle of a knowledge representation language. In IJCAI-5, pp. 213-222.
Bobrow, D. G., and Winograd, T. 1979. KRL: another perspective.
Cognitive Science, 3(1), 29-42.
Boden, M. A. 1977. Artificial Intelligence and Natural Man. New York:
Basic Books.
Boyer, R. S. 1971. Locking: A Restriction of Resolution. Doctoral
dissertation, University of Texas at Austin, August.
Boyer, R. S., and Moore, J S. 1979. A Computational Logic. New York:
Academic Press.
Brown, J. S. 1977. Uses of Artificial Intelligence and advanced computer technology in education. In R. J. Seidel and M. Rubin (Eds.),
Computers and Communications: Implications for Education. New York:
Academic Press.
Buchanan, B. G., and Feigenbaum, E. A. 1978. Dendral and Meta-Dendral: their applications dimension. Artificial Intelligence, 11(1,2), 5-24.
Buchanan, B. G., and Mitchell, T. M. 1978. Model-directed learning of production rules. In PDIS, pp. 297-312.
Bundy, A. (Ed.) 1978. Artificial Intelligence: An Introductory Course.
New York: North Holland.
Bundy, A., and Stone, M. 1975. A note on McDermott's symbol-mapping problem. SIGART Newsletter, no. 53, pp. 9-10.
Bundy, A., et al. 1979. Solving mechanics problems using meta-level
inference. In IJCAI-6, pp. 1017-1027.
Cassinis, R. 1979. Sensing system in supersigma robot. 9th International
Symposium on Industrial Robots, Washington, D.C., September. Dearborn, MI: Society of Manufacturing Engineers. Pp. 437-448.
Chang, C. L. 1979. Resolution plans in theorem proving. In IJCAI-6,
pp. 143-148.
Chang, C. L., and Lee, R. C. T. 1973. Symbolic Logic and Mechanical
Theorem Proving. New York: Academic Press.
Chang, C. L., and Slagle, J. R. 1971. An admissible and optimal
algorithm for searching AND/OR graphs. Artificial Intelligence, 2(2),
117-128.
Chang, C. L., and Slagle, J. R. 1979. Using rewriting rules for connection graphs to prove theorems. Artificial Intelligence, 12(2).
Charniak, E., Riesbeck, C., and McDermott, D. 1979. Artificial Intelligence Programming. Hillsdale, NJ: Lawrence Erlbaum Associates.
Codd, E. F. 1970. A relational model of data for large shared data banks.
CACM, 13(6), June.
Cohen, P. R. 1978. On Knowing What to Say: Planning Speech Acts.
Tech. Rep. No. 118, University of Toronto, Dept. of Computer Science.
(Doctoral dissertation.)
Cohen, H. 1979. What is an image? In IJCAI-6, pp. 1028-1057.
Cohn, A. 1979. High level proof in LCF. In WAD, pp. 73-80.
Collins, A. 1978. Fragments of a theory of human plausible reasoning.
In TINLAP-2, pp. 194-201.
Collins, N. L., and Michie, D. (Eds.) 1967. Machine Intelligence 1.
Edinburgh: Edinburgh University Press.
Constable, R. 1979. A discussion of program verification. In RDST, pp.
393-403.
Corkill, D. D. 1979. Hierarchical planning in a distributed environment.
In IJCAI-6, pp. 168-175.
Cox, P. T. 1977. Deduction Plans: A Graphical Proof Procedure for the
First-Order Predicate Calculus. Rep. CS-77-28, University of Waterloo,
Faculty of Mathematics Research, Waterloo, Ontario, Canada.
Crane, H. D. 1978. Beyond the Seventh Synapse: The Neural Marketplace of the Mind. Research Memorandum, SRI International, Menlo
Park, CA; December.
Creary, L. G. 1979. Propositional attitudes: Fregean representation and
simulative reasoning. In IJCAI-6, pp. 176-181.
Dale, E., and Michie, D. (Eds.) 1968. Machine Intelligence 2. Edinburgh: Edinburgh University Press.
Date, C. J. 1977. An Introduction to Database Systems, (2nd ed.).
Reading, MA: Addison-Wesley.
Davis, R. 1976. Applications of Meta Level Knowledge to the Construction, Maintenance and Use of Large Knowledge Bases. Doctoral dissertation, Stanford University, Stanford Artificial Intelligence Laboratory, Memo 283. (Reprinted in KBS.)
Davis, R. 1977. Meta-level knowledge: overview and applications. In
IJCAI-5, pp. 920-927.
Davis, R., and King, J. 1977. An overview of production systems. In MI8,
pp. 300-332.
Davis, R., and Lenat, D. 1980. Knowledge-Based Systems in Artificial
Intelligence. New York: McGraw-Hill. In press.
Davis, M., and Putnam, H. 1960. A computing procedure for
quantification theory. JACM, 7(3), 201-215.
Dawson, C., and Siklóssy, L. 1977. The role of preprocessing in problem
solving systems. In IJCAI-5, pp. 465-471.
de Kleer, J., et al. 1979. Explicit control of reasoning. In AI-MIT, vol. 1,
pp. 93-116.
Deliyanni, A., and Kowalski, R. 1979. Logic and semantic networks.
CACM, 22(3), 184-192.
Derksen, J. A., Rulifson, J. F., and Waldinger, R. J. 1972. The QA4
language applied to robot planning. Proc. Fall Joint Computer Conf., vol.
41, Part 2, pp. 1181-1192.
Dietterich, T. G., and Michalski, R. S. 1979. Learning and generalization
of characteristic descriptions: evaluation criteria and comparative review
of selected methods. In IJCAI-6, pp. 223-231.
Dijkstra, E. W. 1959. A note on two problems in connection with
graphs. Numerische Mathematik, vol. 1, pp. 269-271.
Doran, J. 1967. An approach to automatic problem-solving. In MI1, pp.
105-123.
Doran, J., and Michie, D. 1966. Experiments with the graph traverser
program. Proceedings of the Royal Society of London, vol. 294 (series A),
pp. 235-259.
Doyle, J. 1979. A truth maintenance system. Artificial Intelligence,
12(3).
Duda, R. O., and Hart, P. E. 1973. Pattern Recognition and Scene
Analysis. New York: John Wiley and Sons.
Duda, R. O., Hart, P. E., and Nilsson, N. J. 1976. Subjective Bayesian
methods for rule-based inference systems. Proc. 1976 Nat. Computer Conf. (AFIPS Conf. Proc.), vol. 45, pp. 1075-1082.
Duda, R. O., et al. 1978a. Semantic network representations in
rule-based inference systems. In PDIS, pp. 203-221.
Duda, R. O., et al. 1978b. Development of the Prospector Consultation
System for Mineral Exploration. Final Report to the Office of Resource
Analysis, U.S. Geological Survey, Reston, VA (Contract No. 14-08-
0001-15985) and to the Mineral Resource Alternatives Program, The National Science Foundation, Washington, D.C. (Grant No. AER77-
04499). Artificial Intelligence Center, SRI International, Menlo Park,
CA; October.
Duda, R. O., et al. 1979. A Computer-Based Consultant for Mineral
Exploration. Final Report, Grant AER 77-04499, SRI International,
Menlo Park, CA; September.
Dudeney, H. 1958. The Canterbury Puzzles. New York: Dover Publications. (Originally published in 1907.)
Dudeney, H. 1967. 536 Puzzles and Curious Problems, edited by M.
Gardner. New York: Charles Scribner's Sons. (A collection from two of
Dudeney's books: Modern Puzzles, 1926, and Puzzles and Curious
Problems, 1931.)
Edwards, D., and Hart, T. 1963. The Alpha-Beta Heuristic (rev.). MIT
AI Memo no. 30, Oct. 28. (Originally published as the Tree Prune (TP)
Algorithm, Dec. 4, 1961.)
Ehrig, H., and Rosen, B. K. 1977. Commutativity of Independent
Transformations of Complex Objects. IBM Research Division Report RC
6251 (No. 26882), October.
Ehrig, H., and Rosen, B. K. 1980. The mathematics of record handling.
SIAM Journal of Computing. To appear.
Elcock, E., and Michie, D. (Eds.) 1977. Machine Intelligence 8: Machine
Representations of Knowledge. Chichester: Ellis Horwood.
Elithorn, A., and Jones, D. (Eds.) 1973. Artificial And Human Thinking.
San Francisco: Jossey-Bass.
Elschlager, R. 1979. Consistency of theories of ideas. In IJCAI-6, pp.
241-243.
Ernst, G. W. 1969. Sufficient conditions for the success of GPS. JACM,
16(4), 517-533.
Ernst, G. W., and Newell, A. 1969. GPS: A Case Study in Generality and
Problem Solving. New York: Academic Press.
Evans, T. G. 1968. A program for the solution of a class of geometric-
analogy intelligence-test questions. In SIP, pp. 271-353.
Fahlman, S. E. 1974. A planning system for robot construction tasks.
Artificial Intelligence, 5(1), 1-49.
Fahlman, S. 1975. Symbol-mapping and frames. SIGART Newsletter,
no. 53, pp. 7-8.
Fahlman, S. E. 1979. Representing and using real-world knowledge. In
AI-MIT, vol. 1, pp. 453-470.
Feigenbaum, E. A. 1977. The art of Artificial Intelligence: I. Themes and
case studies of knowledge engineering. In IJCAI-5, pp. 1014-1029.
Feigenbaum, E., Buchanan, B., and Lederberg, J. 1971. Generality and
problem solving: a case study using the DENDRAL program. In MI6.
Feigenbaum, E., and Feldman, J. (Eds.) 1963. Computers and Thought.
New York: McGraw-Hill.
Feldman, J. A., and Sproull, R. F. 1977. Decision theory and Artificial
Intelligence II: the hungry monkey. Cognitive Science, 1(2), 158-192.
Fikes, R. E. 1975. Deductive retrieval mechanisms for state description
models. In IJCAI-4, pp. 99-106.
Fikes, R. E., and Nilsson, N. J. 1971. STRIPS: a new approach to the
application of theorem proving to problem solving. Artificial Intelligence,
2(3/4), 189-208.
Fikes, R. E., Hart, P. E., and Nilsson, N. J. 1972a. New directions in
robot problem solving. In MI7, pp. 405-430.
Fikes, R. E., Hart, P. E., and Nilsson, N. J. 1972b. Learning and
executing generalized robot plans. Artificial Intelligence, 3(4), 251-288.
Fikes, R. E., and Hendrix, G. G. 1977. A network-based knowledge
representation and its natural deduction system. In IJCAI-5, pp. 235-246.
Findler, N. V. (Ed.) 1979. Associative Networks—The Representation
and Use of Knowledge in Computers. New York: Academic Press.
Findler, N. V., and Meltzer, B. (Eds.) 1971. Artificial Intelligence and
Heuristic Programming. New York: American Elsevier.
Floyd, R. W. 1967. Assigning meanings to programs. Proc. of a
Symposium in Applied Mathematics, vol. 19, pp. 19-32. (American
Mathematical Society, Providence, RI.)
Frege, G. 1879. Begriffsschrift, a formula language modelled upon that
of arithmetic, for pure thought. In J. van Heijenoort (Ed.), From Frege to
Gödel: A Source Book in Mathematical Logic, 1879-1931. Cambridge,
MA: Harvard Univ. Press, 1967. Pp. 1-82.
Friedland, P. 1979. Knowledge-based Experiment Design in Molecular
Genetics. Doctoral dissertation, Stanford University, Computer Science
Dept. Report CS-79-760.
Friedman, D. P. 1974. The Little LISPer. Science Research Associates,
Inc.
Gallaire, H., and Minker, J. (Eds.) 1978. Logic and Databases. New
York: Plenum Press.
Galler, B., and Perlis, A. 1970. A View of Programming Languages.
Reading, MA: Addison-Wesley.
Gardner, M. 1959. The Scientific American Book of Mathematical
Puzzles and Diversions. New York: Simon and Schuster.
Gardner, M. 1961. The Second Scientific American Book of Mathematical Puzzles and Diversions. New York: Simon and Schuster.
Gardner, M. 1964, 1965a,b,c. Mathematical games. Scientific American,
210(2), 122-130, February 1964; 212(3), 112-117, March 1965; 212(6),
120-124, June 1965; 213(3), 222-236, September 1965.
Gaschnig, J. 1979. Performance Measurement and Analysis of Certain
Search Algorithms. Report CMU-CS-79-124, Carnegie-Mellon University, Dept. of Computer Science, May.
Gelernter, H. 1959. Realization of a geometry theorem-proving machine. Proc. Intern. Conf. Inform. Proc., UNESCO House, Paris, pp. 273-282. (Reprinted in CT, pp. 134-152.)
Gelernter, H. L., et al. 1977. Empirical explorations of SYNCHEM.
Science, 197(4308), 1041-1049.
Gelperin, D. 1977. On the optimality of A*. Artificial Intelligence, 8(1),
69-76.
Genesereth, M. R. 1978. Automated Consultation for Complex Computer Systems. Doctoral dissertation, Harvard University, September.
Genesereth, M. R. 1979. The role of plans in automated consultation. In
IJCAI-6, pp. 311-319.
Goldstein, I. P., and Roberts, R. B. 1979. Using frames in scheduling. In
AI-MIT, vol. 1, pp. 251-284.
Goldstine, H. H., and von Neumann, J. 1947. Planning and coding of
problems for an electronic computing instrument, Part 2 (vols. 1-3).
Reprinted in A. H. Taub (Ed.), John von Neumann, Collected Works (vol.
5). London: Pergamon, 1963. Pp. 80-235.
Golomb, S., and Baumert, L. 1965. Backtrack programming. JACM,
12(4), 516-524.
Green, C. 1969a. Application of theorem proving to problem solving. In
IJCAI-1, pp. 219-239.
Green, C. 1969b. Theorem-proving by resolution as a basis for question-answering systems. In MI4, pp. 183-205.
Green, C. 1976. The design of the PSI program synthesis system.
Proceedings of Second International Conference on Software Engineering,
San Francisco, CA, pp. 4-18.
Green, C. C., and Barstow, D. 1978. On program synthesis knowledge.
Artificial Intelligence, 10(3), 241-279.
Grosz, B. J. 1977. The Representation and Use of Focus in Dialogue
Understanding. Tech. Note 151, SRI International Artificial Intelligence
Center, SRI International, Menlo Park, CA; July.
Grosz, B. J. 1979. Utterance and objective: issues in natural language
processing. In IJCAI-6, pp. 1067-1076.
Guard, J., et al. 1969. Semi-automated mathematics. JACM, 16(1),
49-62.
Hall, P. A. V. 1973. Equivalence between AND/OR graphs and
context-free grammars. CACM, vol. 16, pp. 444-445.
Hammer, M., and Ruth, G. 1979. Automating the software development
process. In RDST, pp. 767-790.
Hanson, A. R., and Riseman, E. M. (Eds.) 1978. Computer Vision
Systems. New York: Academic Press.
Harris, L. R. 1974. The heuristic search under conditions of error.
Artificial Intelligence, 5(3), 217-234.
Hart, P. E., Nilsson, N. J., and Raphael, B. 1968. A formal basis for the
heuristic determination of minimum cost paths. IEEE Trans. Syst.
Science and Cybernetics, SSC-4(2), 100-107.
Hart, P. E., Nilsson, N. J., and Raphael, B. 1972. Correction to "A
formal basis for the heuristic determination of minimum cost paths."
SIGART Newsletter, no. 37, December, pp. 28-29.
Hayes, J. E., Michie, D., and Mikulich, L. I. (Eds.) 1979. Machine
Intelligence 9: Machine Expertise and the Human Interface. Chichester:
Ellis Horwood.
Hayes, P. J. 1973a. The frame problem and related problems in
Artificial Intelligence. In AHT, pp. 45-49.
Hayes, P. J. 1973b. Computation and deduction. Proc. 2nd. Symposium
on Mathematical Foundations of Computer Science, Czechoslovakian
Academy of Sciences, pp. 105-118.
Hayes, P. J. 1977. In defence of logic. In IJCAI-5, pp. 559-565.
Hayes, P. J. 1978a. The Naive Physics Manifesto (working papers),
Institute of Semantic and Cognitive Studies, Geneva; May.
Hayes, P. J. 1978b. Naive Physics 1: Ontology for Liquids (working
papers), Institute of Semantic and Cognitive Studies, Geneva; August.
Hayes, P. J. 1979. The logic of frames. In The Frame Reader. Berlin: De
Gruyter. In press.
Hayes-Roth, F., and Waterman, D. 1977. Proceedings of the workshop
on pattern-directed inference systems. ACM SIGART Newsletter, no. 63,
June, pp. 1-83. (Some of the papers of the workshop that do not appear in
PDIS are printed here.)
Held, M., and Karp, R. M. 1970. The traveling-salesman problem and
minimum spanning trees. Operations Research, vol. 18, pp. 1138-1162.
Held, M., and Karp, R. 1971. The traveling salesman problem and
minimum spanning trees—Part II. Mathematical Prog., vol. 1, pp. 6-25.
Hendrix, G. G. 1973. Modeling simultaneous actions and continuous
processes. Artificial Intelligence, 4(3,4), 145-180.
Hendrix, G. G. 1975a. Partitioned Networks for the Mathematical
Modeling of Natural Language Semantics. Tech. Rep. NL-28, Dept. of
Computer Science, University of Texas at Austin.
Hendrix, G. G. 1975b. Expanding the utility of semantic networks
through partitioning. In IJCAI-4, pp. 115-121.
Hendrix, G. G. 1979. Encoding knowledge in partitioned networks. In
AN, pp. 51-92.
Hewitt, C. 1972. Description and Theoretical Analysis (Using Schemata)
of PLANNER: A Language for Proving Theorems and Manipulating
Models in a Robot. Doctoral dissertation (June, 1971), MIT, AI Lab Rep. AI-TR-258.
Hewitt, C. 1975. How to use what you know. In IJCAI-4, pp. 189-198.
Hewitt, C. 1977. Viewing control structures as patterns of passing
messages. Artificial Intelligence, 8(3), 323-364.
Hewitt, C., and Baker, H. 1977. Laws for communicating parallel processes. In B. Gilchrist (Ed.), Information Processing 77, IFIP. Amsterdam: North-Holland. Pp. 987-992.
Hillier, F. S., and Lieberman, G. J. 1974. Introduction to Operations
Research (2nd ed.). San Francisco: Holden Day.
Hinxman, A. I. 1976. Problem reduction and the two-dimensional
trim-loss problem. In SCAISB-76, pp. 158-165.
Hofstadter, D. R. 1979. Gödel, Escher, Bach: An Eternal Golden Braid. New York: Basic Books.
Hopcroft, J. E., and Ullman, J. D. 1969. Formal Languages and Their
Relation to Automata. Reading, MA: Addison-Wesley.
Horowitz, E., and Sahni, S. 1978. Fundamentals of Computer Algorithms. Potomac, MD: Computer Science Press.
Hunt, E. B. 1975. Artificial Intelligence. New York: Academic Press.
Jackson, P. C., Jr. 1974. Introduction to Artificial Intelligence. New
York: Petrocelli Books.
Joyner, W. H., Jr. (Ed.) 1979. Proceedings of the Fourth Workshop on
Automated Deduction, Austin, Texas; February.
Kanade, T. 1977. Model representations and control structures in image
understanding. In IJCAI-5, pp. 1074-1082.
Kanal, L. N. 1979. Problem-solving models and search strategies for
pattern recognition. IEEE Trans. on Pattern Analysis and Machine Intelligence, PAMI-1(2), 193-201.
Klahr, P. 1978. Planning techniques for rule selection in deductive
question-answering. In PDIS, pp. 223-239.
Klatt, D. H. 1977. Review of the ARPA speech understanding project.
Journal Acoust. Soc. Amer., 62(6), 1345-1366.
Kling, R. E. 1971. A paradigm for reasoning by analogy. Artificial
Intelligence, vol. 2, pp. 147-178.
Knuth, D. E., and Moore, R. W. 1975. An analysis of alpha-beta
pruning. Artificial Intelligence, 6(4), 293-326.
Kornfeld, W. A. 1979. ETHER—a parallel problem solving system. In
IJCAI-6, pp. 490-492.
Kowalski, R. 1970. Search strategies for theorem-proving. In MI5, pp.
181-201.
Kowalski, R. 1972. AND/OR Graphs, theorem-proving graphs, and
bidirectional search. In MI7, pp. 167-194.
Kowalski, R. 1974a. Predicate logic as a programming language. Information Processing 74. Amsterdam: North-Holland. Pp. 569-574.
Kowalski, R. 1974b. Logic for Problem Solving. Memo no. 75, Dept. of
Computational Logic, University of Edinburgh, Edinburgh.
Kowalski, R. 1975. A proof procedure using connection graphs. JACM,
vol. 22, pp. 572-595.
Kowalski, R. 1979a. Algorithm = logic + control. CACM, 22(7),
424-436.
Kowalski, R. 1979b. Logic for Problem Solving. New York: North-Holland.
Kowalski, R., and Hayes, P. 1969. Semantic trees in automatic theorem proving. In MI4, pp. 87-101.
Kowalski, R., and Kuehner, D. 1971. Linear resolution with selection
function. Artificial Intelligence, 2(3/4), 227-260.
Kuehner, D. G. 1971. A note on the relation between resolution and
Maslov's inverse method. In MI6, pp. 73-90.
Kuipers, B. 1978. Modeling spatial knowledge. Cognitive Science, 2(2),
129-153.
Kuipers, B. 1979. On representing commonsense knowledge. In AN, pp.
393-408.
Latombe, J. C. 1977. Artificial intelligence in computer aided design. In
J. J. Allen (Ed.), CAD Systems. Amsterdam: North-Holland.
Lauriere, J. L. 1978. A language and a program for stating and solving
combinatorial problems. Artificial Intelligence, 10(1), 29-127.
Lehnert, W., and Wilks, Y. 1979. A critical perspective on KRL.
Cognitive Science, 3(1), 1-28.
Lenat, D. B. 1976. AM: An Artificial Intelligence Approach to Discovery
in Mathematics as Heuristic Search. Rep. STAN-CS-76-570, Stanford
University, Computer Science Dept.; July. (Reprinted in KBS.)
Lesser, V. R., and Corkill, D. D. 1979. The application of Artificial Intelligence techniques to cooperative distributed processing. In IJCAI-6, pp. 537-540.
Lesser, V. R., and Erman, L. D. 1979. An Experiment in Distributed
Interpretation. University of Southern California Information Sciences
Institute Report No. ISI/RR-79-76, May. (Also, Carnegie-Mellon University Computer Science Dept. Technical Report CMU-CS-79-120,
May.)
Levy, D. 1976. Chess and Computers. Woodland Hills, CA: Computer
Science Press.
Levi, G., and Sirovich, F. 1976. Generalized AND/OR graphs. Artificial Intelligence, 7(3), 243-259.
Lin, S. 1965. Computer solutions of the traveling salesman problem.
Bell System Tech. Journal, vol. XLIV, no. 10, December 1965.
Lindsay, P. H., and Norman, D. A. 1972. Human Information Processing: An Introduction to Psychology. New York: Academic Press.
Lindstrom, G. 1979. Alpha-Beta Pruning on Evolving Game Trees. Tech.
Rep. UUCS 79-101, University of Utah, Dept. of Computer Science.
Linsky, L. (Ed.) 1971. Reference and Modality. London: Oxford University Press.
London, R. L. 1979. Program verification. In RDST, pp. 302-315.
Loveland, D. W. 1978. Automated Theorem Proving: A Logical Basis.
New York: North Holland.
Loveland, D. W., and Stickel, M. E. 1976. A hole in goal trees: some
guidance from resolution theory. IEEE Trans. on Computers, C-25(4),
335-341.
Lowerre, B. T. 1976. The HARPY Speech Recognition System. Doctoral
dissertation, Carnegie-Mellon University; Tech. Rep., Computer Science
Dept., Carnegie-Mellon University.
Luckham, D. C. 1978. A study in the application of theorem proving. In
PCAI, pp. 176-188.
Luckham, D. C., and Nilsson, N. J. 1971. Extracting information from
resolution proof trees. Artificial Intelligence, 2(1), 27-54.
McCarthy, J. 1958. Programs with common sense. Mechanisation of
Thought Processes, Proc. Symp. Nat. Phys. Lab., vol. I, pp. 77-84.
London: Her Majesty's Stationery Office. (Reprinted in SIP, pp.
403-410.)
McCarthy, J. 1962. Towards a mathematical science of computation.
Information Processing, Proceedings of IFIP Congress 1962, pp. 21-28.
Amsterdam: North-Holland.
McCarthy, J. 1963. Situations, Actions and Causal Laws. Stanford
University Artificial Intelligence Project Memo no. 2. (Reprinted in SIP,
pp. 410-418.)
McCarthy, J. 1977. Epistemological problems of Artificial Intelligence.
In IJCAI-5, pp. 1038-1044.
McCarthy, J. 1979. First order theories of individual concepts and
propositions. In MI9.
McCarthy, J., et al. 1969. A computer with hands, eyes, and ears. Proc.
of the American Federation of Information Processing Societies, vol. 33,
pp. 329-338. Washington, D.C.: Thompson Book Co.
McCarthy, J., and Hayes, P. J. 1969. Some philosophical problems
from the standpoint of Artificial Intelligence. In MI4, pp. 463-502.
McCharen, J. D., Overbeek, R. A., and Wos, L. A. 1976. Problems and
experiments for and with automated theorem-proving programs. IEEE
Trans. on Computers, C-25(8), 773-782.
McCorduck, P. 1979. Machines Who Think. San Francisco: W. H.
Freeman.
McDermott, D. V. 1975a. Symbol-mapping: a technical problem in
PLANNER-like systems. SIGART Newsletter, no. 51, April, pp. 4-5.
McDermott, D. V. 1975b. A packet-based approach to the symbol-
mapping problem. SIGART Newsletter, no. 53, August, pp. 6-7.
McDermott, D. V. 1975c. Very Large PLANNER-Type Data Bases.
MIT Artificial Intelligence Laboratory Memo. 339, MIT; September.
McDermott, D. V., and Doyle, J. 1980. Non-monotonic logic I. Artificial Intelligence, forthcoming.
McDermott, D. V., and Sussman, G. J. 1972. The CONNIVER Reference Manual, MIT AI Lab. Memo 259, May. (Rev., July 1973.)
McDermott, J. 1979. Learning to use analogies. In IJCAI-6, pp.
568-576.
Mackworth, A. K. 1977. Consistency in networks of relations. Artificial
Intelligence, 8(1), 99-118.
Manna, Z., and Waldinger, R. (Eds.) 1977. Studies in Automatic
Programming Logic. New York: North-Holland.
Manna, Z., and Waldinger, R. 1979. A deductive approach to program
synthesis. In IJCAI-6, pp. 542-551.
Markov, A. 1954. A Theory of Algorithms. National Academy of
Sciences, USSR.
Marr, D. 1976. Early processing of visual information. Phil. Trans.
Royal Society (Series B), vol. 275, pp. 483-524.
Marr, D. 1977. Artificial intelligence—a personal view. Artificial Intelligence, 9(1), 37-48.
Martelli, A. 1977. On the complexity of admissible search algorithms.
Artificial Intelligence, 8(1), 1-13.
Martelli, A., and Montanari, U. 1973. Additive AND/OR graphs. In
IJCAI-3, pp. 1-11.
Martelli, A., and Montanari, U. 1975. From dynamic programming to
search algorithms with functional costs. In IJCAI-4, pp. 345-350.
Martelli, A., and Montanari, U. 1978. Optimizing decision trees through
heuristically guided search. CACM, 21(12), 1025-1039.
Martin, W. A. 1978. Descriptions and the Specialization of Concepts.
Rep. MIT/LCS/TM-101, MIT Lab. for Computer Science, MIT.
Martin, W. A., and Fateman, R. J. 1971. The MACSYMA system. Proc.
ACM 2d Symposium on Symbolic and Algebraic Manipulation, Los
Angeles, CA, pp. 23-25.
Maslov, S. J. 1971. Proof-search strategies for methods of the resolution
type. In MI6, pp. 77-90.
Medress, M. F., et al. 1977. Speech understanding systems: Report of a
steering committee. Artificial Intelligence, 9(3), 307-316.
Meltzer, B., and Michie, D. (Eds.) 1969. Machine Intelligence 4.
Edinburgh: Edinburgh University Press.
Meltzer, B., and Michie, D. (Eds.) 1970. Machine Intelligence 5.
Edinburgh: Edinburgh University Press.
Meltzer, B., and Michie, D. (Eds.) 1971. Machine Intelligence 6.
Edinburgh: Edinburgh University Press.
Meltzer, B., and Michie, D. (Eds.) 1972. Machine Intelligence 7.
Edinburgh: Edinburgh University Press.
Mendelson, E. 1964. Introduction to Mathematical Logic. Princeton,
NJ: D. Van Nostrand.
Michie, D. (Ed.) 1968. Machine Intelligence 3. Edinburgh: Edinburgh
University Press.
Michie, D. 1974. On Machine Intelligence. New York: John Wiley and
Sons.
Michie, D., and Ross, R. 1970. Experiments with the adaptive graph
traverser. In MI5, pp. 301-318.
Michie, D., and Sibert, E. E. 1974. Some binary derivation systems.
JACM, 21(2), 175-190.
Minker, J., Fishman, D. H., and McSkimin, J. R. 1973. The Q*
algorithm—a search strategy for a deductive question-answering system.
Artificial Intelligence, 4(3,4), 225-244.
Minker, J., and Zanon, G. 1979. LUST Resolution: Resolution with
Arbitrary Selection Function, Res. Rep. TR-736, Univ. of Maryland,
Computer Science Center, College Park, MD.
Minker, J., et al. 1974. MRPPS: an interactive refutation proof procedure system for question answering. J. Computers and Information Sciences, vol. 3, June, pp. 105-122.
Minsky, M. (Ed.) 1968. Semantic Information Processing. Cambridge,
MA: The MIT Press.
Minsky, M. 1975. A Framework for Representing Knowledge. In PCV,
pp. 211-277.
Mitchell, T. M. 1979. An analysis of generalization as a search problem.
In IJCAI-6, pp. 577-582.
Montanari, U. 1970. Heuristically guided search and chromosome
matching. Artificial Intelligence, 1(4), 227-245.
Montanari, U. 1974. Networks of constraints: fundamental properties
and applications to picture processing. Information Science, vol. 7, pp.
95-132.
Moore, E. F. 1959. The shortest path through a maze. Proceedings of an
International Symposium on the Theory of Switching, Part II. Cambridge: Harvard University Press. Pp. 285-292.
Moore, J., and Newell, A. 1973. How can MERLIN understand? In L.
Gregg (Ed.), Knowledge and Cognition. Hillsdale, NJ: Lawrence Erlbaum Assoc.
Moore, R. C. 1975a. Reasoning from Incomplete Knowledge in a
Procedural Deduction System. Tech. Rep. AI-TR-347, MIT Artificial
Intelligence Lab, Massachusetts Institute of Technology, Cambridge,
MA.
Moore, R. C. 1975b. A serial scheme for the inheritance of properties.
SIGART Newsletter, no. 53, pp. 8-9.
Moore, R. C. 1977. Reasoning about knowledge and action. In IJCAI-5,
pp. 223-227.
Moore, R. C. 1979. Reasoning About Knowledge and Action. Tech. Note
191, SRI International, Artificial Intelligence Center, Menlo Park, CA.
Moses, J. 1967. Symbolic Integration. MAC-TR-47, Project MAC,
Massachusetts Institute of Technology, Cambridge, MA.
Moses, J. 1971. Symbolic integration: the stormy decade. CACM,
14(8), 548-560.
Mylopoulos, J., et al. 1975. TORUS—a natural language understanding
system for data management. In IJCAI-4, pp. 414-421.
Nash-Webber, B., and Schank, R. (Eds.) 1975. Proceedings of Theoretical Issues in Natural Language Processing. Cambridge, MA; June.
Naur, P. 1966. Proofs of algorithms by general snapshots. BIT, 6(4),
310-316.
Nevins, A. J. 1974. A human-oriented logic for automatic theorem
proving. JACM, vol. 21, pp. 606-621.
Nevins, J. L., and Whitney, D. E. 1977. Research on advanced assembly
automation. Computer (IEEE Computer Society), 10(12), 24-38.
Newborn, M. 1975. Computer Chess. New York: Academic Press.
Newborn, M. 1977. The efficiency of the alpha-beta search on trees with
branch-dependent terminal node scores. Artificial Intelligence, 8(2),
137-153.
Newell, A. 1973. Production systems: models of control structures. In
W.G. Chase, (Ed.), Visual Information Processing. New York: Academic
Press. Chapter 10, pp. 463-526.
Newell, A., Shaw, J., and Simon, H. 1957. Empirical explorations of the logic theory machine. Proc. West. Joint Computer Conf., vol. 15, pp. 218-239. (Reprinted in CT, pp. 109-133.)
Newell, A., Shaw, J. C., and Simon, H. A. 1958. Chess-playing programs and the problem of complexity. IBM Jour. R&D, vol. 2, pp. 320-355. (Reprinted in CT, pp. 39-70.)
Newell, A., Shaw, J. C., and Simon, H. A. 1960. Report on a general problem-solving program for a computer. Information Processing: Proc. of the Int. Conf. on Information Processing, UNESCO, Paris, pp. 256-264.
Newell, A., and Simon, H. A. 1963. GPS, a program that simulates
human thought. In CT, pp. 279-293.
Newell, A., and Simon, H. A. 1972. Human Problem Solving. Englewood
Cliffs, NJ: Prentice-Hall.
Newell, A., et al. 1973. Speech Understanding Systems: Final Report of a
Study Group. New York: American Elsevier.
Nilsson, N. J. 1965. Learning Machines: Foundations of Trainable
Pattern-Classifying Systems. New York: McGraw-Hill.
Nilsson, N. J. 1969. Searching problem-solving and game-playing trees
for minimal cost solutions. In A. J. H. Morrell (Ed.), Information
Processing 68 (vol. 2). Amsterdam: North-Holland. Pp. 1556-1562.
Nilsson, N. J. 1971. Problem-solving Methods in Artificial Intelligence.
New York: McGraw-Hill.
Nilsson, N. J. 1973. Hierarchical Robot Planning and Execution System.
SRI AI Center Technical Note 76, SRI International, Inc., Menlo Park,
CA, April.
Nilsson, N. J. 1974. Artificial Intelligence. In J. L. Rosenfeld (Ed.),
Technological and Scientific Applications; Applications in the Social
Sciences and the Humanities, Information Processing, 74: Proc. of IFIP
Congress 74, vol. 4, pp. 778-801. New York: American Elsevier.
Nilsson, N. J. 1979. A production system for automatic deduction. In
MI9.
Nitzan, D. 1979. Flexible automation program at SRI. Proc. 1979 Joint
Automatic Control Conference. New York: IEEE.
Norman, D. A., and Rumelhart, D. E. (Eds.) 1975. Explorations in
Cognition. San Francisco: W. H. Freeman.
Okhotsimski, D. E., et al. 1979. Integrated walking robot development.
In MI9.
Paterson, M. S., and Wegman, M. N. 1976. Linear Unification. IBM
Research Report 5304, IBM.
Pitrat, J. 1977. A chess combination program which uses plans. Artificial Intelligence, 8(3), 275-321.
Pohl, I. 1970. First results on the effect of error in heuristic search. In MI5, pp. 219-236.
BIBLIOGRAPHY
Pohl, I. 1971. Bi-directional search. In MI6, pp. 127-140.
Pohl, I. 1973. The avoidance of (relative) catastrophe, heuristic competence, genuine dynamic weighting and computational issues in heuristic problem solving. In IJCAI-3, pp. 12-17.
Pohl, I. 1977. Practical and theoretical considerations in heuristic search algorithms. In MI8, pp. 55-72.
Pople, H. E., Jr. 1977. The formation of composite hypotheses in
diagnostic problem solving: an exercise in synthetic reasoning. In
IJCAI-5, pp. 1030-1037.
Pospesel, H. 1976. Introduction to Logic: Predicate Logic. Englewood
Cliffs, NJ: Prentice-Hall.
Post, E. 1943. Formal reductions of the general combinatorial problem.
American Jour. Math., vol. 65, pp. 197-268.
Pratt, V. R. 1977. The Competence/Performance Dichotomy in Programming. Memo 400, January, MIT Artificial Intelligence Laboratory, MIT.
Prawitz, D. 1960. An improved proof procedure. Theoria, vol. 26, pp.
102-139.
Quillian, M. R. 1968. Semantic memory. In SIP, pp. 216-270.
Raphael, B. 1968. SIR: semantic information retrieval. In SIP, pp.
33-134.
Raphael, B. 1971. The frame problem in problem-solving systems. In
AIHP, pp. 159-169.
Raphael, B. 1976. The Thinking Computer: Mind Inside Matter. San
Francisco: W. H. Freeman.
Raphael, B., et al. 1971. Research and Applications—Artificial Intelligence, Stanford Research Institute Final Report on Project 8973. Advanced Research Projects Agency, Contract NASW-2164; December.
Raulefs, P., et al. 1978. A short survey on the state of the art in matching and unification problems. AISB Quarterly, no. 32, December, pp. 17-21.
Reddy, D. R., et al. 1977. Speech Understanding Systems: A Summary of
Results of the Five-Year Research Effort. Dept. of Computer Science,
Carnegie-Mellon University, Pittsburgh, PA.
Reiter, R. 1971. Two results on ordering for resolution with merging
and linear format. JACM, vol. 18, October, pp. 630-646.
Reiter, R. 1976. A semantically guided deductive system for automatic
theorem proving. IEEE Trans. on Computers, C-25(4), 328-334.
Reiter, R. 1978. On reasoning by default. In TINLAP-2, pp. 210-218.
Rich, C., and Shrobe, H. E. 1979. Design of a programmer's apprentice.
In AI-MIT, vol. 1, pp. 137-173.
Rieger, C., and London, P. 1977. Subgoal protection and unravelling
during plan synthesis. In IJCAI-5, pp. 487-493.
Robbin, J. 1969. Mathematical Logic: A First Course. New York: W. A.
Benjamin.
Roberts, R. B., and Goldstein, I. P. 1977. The FRL Primer. Memo 408,
MIT Artificial Intelligence Laboratory, MIT.
Robinson, A. E. 1978. Investigating the Process of Natural Language Communication: A Status Report. SRI International Artificial Intelligence Center Tech. Note 165. SRI International, Menlo Park, CA; July.
Robinson, J. A. 1965. A machine-oriented logic based on the resolution
principle. JACM, 12(1), 23-41.
Robinson, J. A. 1979. Logic: Form and Function. New York: North-
Holland.
Rosen, C. A., and Nitzan, D. 1977. Use of sensors in programmable
automation. Computer (IEEE Computer Society Magazine), December,
pp. 12-23.
Rosen, B. K. 1973. Tree-manipulating systems and Church-Rosser
theorems. JACM, vol. 20, pp. 160-187.
Roussel, P. 1975. Prolog: Manuel de référence et d'utilisation. Groupe
d'Intelligence Artificielle, Marseille-Luminy; September.
Rubin, S. 1978. The ARGOS Image Understanding System. Doctoral
dissertation, Dept. of Computer Science, Carnegie-Mellon University,
November. (Also in Proc. ARPA Image Understanding Workshop,
Carnegie-Mellon, Nov. 1978, pp. 159-162.)
Rulifson, J. F., Derksen, J. A., and Waldinger, R. J. 1972. QA4: A
Procedural Calculus for Intuitive Reasoning. Stanford Research Institute
Artificial Intelligence Center Tech. Note 73, Stanford Research Institute,
Inc., November.
Rumelhart, D. E., and Norman, D. A. 1975. The active structural
network. In D. A. Norman and D. E. Rumelhart (Eds.), Explorations in
Cognition. San Francisco: W. H. Freeman.
Rustin, R. (Ed.) 1973. Natural Language Processing. New York:
Algorithmics Press.
Rychener, M. D. 1976. Production Systems as a Programming Language for Artificial Intelligence Applications. Doctoral dissertation, Dept. of Computer Science, Carnegie-Mellon University.
Sacerdoti, E. D. 1974. Planning in a hierarchy of abstraction spaces.
Artificial Intelligence, 5(2), 115-135.
Sacerdoti, E. D. 1975. The non-linear nature of plans. In IJCAI-4, pp.
206-214.
Sacerdoti, E. D. 1977. A Structure for Plans and Behavior. New York:
Elsevier.
Sacerdoti, E. D., et al. 1976. QLISP—A language for the interactive
development of complex systems. Proceedings of AFIPS National
Computer Conference, pp. 349-356.
Samuel, A. L. 1959. Some studies in machine learning using the game of
checkers. IBM Jour. R&D, vol. 3, pp. 211-229. (Reprinted in CT, pp.
71-105.)
Samuel, A. L. 1967. Some studies in machine learning using the game
of checkers II—recent progress. IBM Jour. R&D, 11(6), 601-617.
Schank, R. C., and Abelson, R. P. 1977. Scripts, Plans, Goals and
Understanding. Hillsdale, NJ: Lawrence Erlbaum Assoc.
Schmidt, C. F., Sridharan, N. S., and Goodson, J. L. 1978. The plan
recognition problem: an intersection of psychology and Artificial Intelligence. Artificial Intelligence, 11(1,2), pp.
45-83.
Schubert, L. K. 1976. Extending the expressive power of semantic
networks. Artificial Intelligence, 7(2), pp. 163-198.
Schubert, L. K., Goebel, R. G., and Cercone, N. J. 1979. The structure
and organization of a semantic net for comprehension and inference. In
AN, pp. 121-175.
Shannon, C. E. 1950. Programming a computer for playing chess. Philosophical Magazine (Series 7), vol. 41, pp. 256-275.
Shapiro, S. 1979a. The SNePS Semantic Network Processing System.
In AN, pp. 179-203.
Shapiro, S. 1979b. Techniques of Artificial Intelligence. New York: D.
Van Nostrand.
Shirai, Y. 1978. Recognition of real-world objects using edge cue. In
CVS, pp. 353-362.
Shortliffe, E. H. 1976. Computer-Based Medical Consultations:
MYCIN. New York: American Elsevier.
Siklóssy, L., and Dreussi, J. 1973. An efficient robot planner which
generates its own procedures. In IJCAI-3, pp. 423-430.
Sickel, S. 1976. A search technique for clause interconnectivity graphs. IEEE Trans. on Computers, C-25(8), 823-835.
Simmons, R. F. 1973. Semantic networks: their computation and use
for understanding English sentences. In R. Schank and K. Colby (Eds.),
Computer Models of Thought and Language. San Francisco: W. H.
Freeman. Pp. 63-113.
Simon, H. A. 1963. Experiments with a heuristic compiler. JACM,
10(4), 493-506.
Simon, H. A. 1969. The Sciences of the Artificial. Cambridge, MA: The
MIT Press.
Simon, H. A. 1972a. On reasoning about actions. In RM, pp. 414-430.
Simon, H. A. 1972b. The heuristic compiler. In RM, pp. 9-43.
Simon, H. A. 1977. Artificial Intelligence systems that understand. In IJCAI-5, pp. 1059-1073.
Simon, H. A., and Kadane, J. B. 1975. Optimal problem-solving search:
all-or-none solutions. Artificial Intelligence, vol. 6, 235-247.
Slagle, J. R. 1963. A heuristic program that solves symbolic integration
problems in freshman calculus. In CT, pp. 191-203. (Also in JACM, 1963,
vol. 10, 507-520.)
Slagle, J. R. 1970. Heuristic search programs. In TANPS, pp. 246-273.
Slagle, J. R. 1971. Artificial Intelligence: The Heuristic Programming
Approach. New York: McGraw-Hill.
Slagle, J. R., and Dixon, J. K. 1969. Experiments with some programs
that search game trees. JACM, 16(2), 189-207.
Smith, R. G. 1978. A Framework for Problem Solving in a Distributed Environment. Doctoral dissertation, Stanford University, Computer
Science Dept., Report STAN-CS-78-700; December.
Smith, R. G. 1979. A framework for distributed problem solving. In
IJCAI-6, pp. 836-841.
Smullyan, R. M. 1978. What Is The Name of This Book: The Riddle of
Dracula and Other Logical Puzzles. Englewood Cliffs, NJ: Prentice-Hall.
Soloway, E. M. 1978. "Learning = Interpretation + Generalization": A
Case Study in Knowledge-Directed Learning. Doctoral dissertation,
University of Massachusetts at Amherst, Computer and Information
Science Dept., Technical Report 78-13; July.
Sridharan, N. S. 1978. AIMDS User Manual—Version 2. Rutgers
University Computer Science Tech. Report CBM-TR-89, Rutgers, June.
Srinivasan, C. V. 1977. The Meta Description System: A System to
Generate Intelligent Information Systems. Part I: The Model Space.
Rutgers University Computer Science Tech. Report SOSAP-TR-20A,
Rutgers.
Stallman, R. M., and Sussman, G. J. 1977. Forward reasoning and
dependency-directed backtracking in a system for computer-aided circuit analysis. Artificial Intelligence, 9(2), 135-196. (Reprinted in AI-MIT,
vol. 1, pp. 31-91.)
Stefik, M. 1979. An examination of a frame-structured representation
system. In IJCAI-6, pp. 845-852.
Stockman, G. 1977. A Problem-Reduction Approach to the Linguistic
Analysis of Waveforms. Doctoral dissertation, University of Maryland,
College Park, MD.; Computer Science Technical Report TR-538.
Sussman, G. J. 1975. A Computer Model of Skill Acquisition. New
York: American Elsevier.
Sussman, G. J. 1977. Electrical design: a problem for artificial intelligence research. In IJCAI-5, pp. 894-900.
Sussman, G. J., and Stallman, R. M. 1975. Heuristic techniques in
computer aided circuit analysis. IEEE Trans. on Circuits and Systems,
CAS-22(11), November.
Sussman, G., Winograd, T., and Charniak, E. 1971. Micro-Planner
Reference Manual, MIT AI Memo 203a, MIT, 1970.
Takeyasu, K. et al. 1977. An approach to the integrated intelligent robot
with multiple sensory feedback: construction and control functions.
Proceedings 7th Intern. Symp. on Industrial Robots, Tokyo, Japan Industrial Robot Assoc., pp. 523-530.
Tate, A. 1976. Project Planning Using a Hierarchic Non-Linear Planner.
Research Report no. 25, Department of Artificial Intelligence, University
of Edinburgh.
Tate, A. 1977. Generating project networks. In IJCAI-5, pp. 888-893.
Turing, A. M. 1950. Checking a large routine. Report of a Conference on
high speed automatic calculating machines, University of Toronto,
Canada, June 1949, Cambridge University Mathematical Laboratory,
pp. 66-69.
Tyson, M., and Bledsoe, W. W. 1979. Conflicting bindings and generalized substitutions. In WAD, pp. 14-18.
Ulrich, J. W. and Moll, R. 1977. Program synthesis by analogy. In
Proceedings of the Symposium on Artificial Intelligence and Programming
Languages (ACM); SIGPLAN Notices, 12(8); and SIGART Newsletter,
no. 64, pp. 22-28.
vanderBrug, G. J. 1976. Problem representations and formal properties of heuristic search. Information Sciences, vol. 11, pp. 279-307.
vanderBrug, G., and Minker, J. 1975. State-space, problem-reduction,
and theorem proving—some relationships. Comm. ACM, 18(2), 107-115.
van Emden, M. H. 1977. Programming with resolution logic. In MI8,
pp. 266-299.
van Vaalen, J. 1975. An extension of unification to substitutions with an
application to automatic theorem proving. In IJCAI-4, pp. 77-82.
Vere, S. A. 1978. Inductive learning of relational productions. In PDIS,
pp. 281-295.
Wagner, H. 1975. Principles of Operations Research (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall.
Waldinger, R. J. 1977. Achieving several goals simultaneously. In MI8,
pp. 94-136.
Waldinger, R. J., and Lee, R. C. T. 1969. PROW: A step toward
automatic program writing. In IJCAI-1, pp. 241-252.
Waldinger, R. J., and Levitt, K. N. 1974. Reasoning about programs.
Artificial Intelligence, 5(3), 235-316. (Reprinted in Z. Manna and R. J.
Waldinger (Eds.), Studies in Automatic Programming Logic. New York:
North-Holland, 1977.)
Walker, D. E., and Norton, L. M. (Eds.) 1969. International Joint
Conference on Artificial Intelligence. Washington, D.C.; May.
Walker, D. E. (Ed.). 1978. Understanding Spoken Language. New York:
North Holland.
Waltz, D. 1975. Understanding line drawings of scenes with shadows. In
PCV, pp. 19-91.
Waltz, D. (Ed.) 1977. Natural language interfaces. SIGART Newsletter
no. 61, February, pp. 16-64.
Waltz, D. (Ed.) 1978. TINLAP-2, University of Illinois, July.
Warren, D. H. D. 1974. WARPLAN: A System for Generating Plans.
Memo 76, Dept. of Computational Logic, University of Edinburgh
School of Artificial Intelligence, June.
Warren, D. H. D. 1977. Logic Programming and Compiler Writing. Res.
Rep. No. 44, Dept. of Artificial Intelligence, University of Edinburgh.
Warren, D. H. D., and Pereira, L. M. 1977. PROLOG—The language
and its implementation compared with LISP. Proceedings of the Symposium on Artificial Intelligence and Programming Languages (ACM); SIGPLAN Notices, 12(8); and SIGART Newsletter, no. 64, pp. 109-115.
Waterman, D., and Hayes-Roth, F. (Eds.) 1978. Pattern-Directed Inference Systems. New York: Academic Press.
Wegner, P. (Ed.) 1979. Research Directions in Software Technology. Cambridge, MA: The MIT Press.
Weiss, S. M., Kulikowski, C. A., Amarel, S., and Safir, A. 1978. A
model-based method for computer-aided medical decision-making.
Artificial Intelligence, 11(1,2), 145-172.
Weissman, C. 1967. LISP 1.5 Primer. Belmont, CA: Dickenson Publishing Co.
Weyhrauch, R. 1980. Prolegomena to a theory of mechanized formal
reasoning. Artificial Intelligence, forthcoming.
Wickelgren, W. A. 1974. How to Solve Problems. San Francisco: W. H.
Freeman.
Wiederhold, G. 1977. Database Design. New York: McGraw-Hill.
Wilkins, D. 1974. A non-clausal theorem proving system. In PASC.
Wilkins, D. 1979. Using plans in chess. In IJCAI-6, pp. 960-967.
Will, P., and Grossman, D. 1975. An experimental system for computer
controlled mechanical assembly. IEEE Trans. on Computers, C-24(9),
879-888.
Winker, S. 1979. Generation and verification of finite models and
counterexamples using an automated theorem prover answering two
open questions. In WAD, pp. 7-13.
Winker, S. and Wos, L. 1978. Automated generation of models and
counterexamples and its application to open questions in ternary boolean
algebra. Proc. Eighth Int. Symposium on Multiple-Valued Logic (IEEE),
Rosemont, Illinois.
Winograd, T. 1972. Understanding Natural Language. New York:
Academic Press.
Winograd, T. 1975. Frame representations and the declarative/procedural controversy. In RU, pp. 185-210.
Winograd, T. 1980a. Language as a Cognitive Process. Reading, MA:
Addison-Wesley, forthcoming.
Winograd, T. 1980b. What does it mean to understand language?
Cognitive Science, 4. To appear.
Winston, P. H. 1972. The MIT robot. In MI7, pp. 431-463.
Winston, P. H. (Ed.) 1975. The Psychology of Computer Vision. New
York: McGraw-Hill.
Winston, P. H. 1975. Learning structural descriptions from examples.
In PCF, pp. 157-209.
Winston, P. H. 1977. Artificial Intelligence. Reading, MA: Addison-
Wesley.
Winston, P. H. 1979. Learning by Understanding Analogies. Memo 520,
MIT Artificial Intelligence Laboratory, April. (Rev., June.)
Winston, P. H., and Brown, R. H. (Eds.) 1979. Artificial Intelligence: an
MIT Perspective (2 vols.). Cambridge, MA: MIT Press.
Wipke, W. T., Ouchi, G. I., and Krishnan, S. 1978. Simulation and
evaluation of chemical synthesis—SECS: an application of artificial
intelligence techniques. Artificial Intelligence, 11(1,2), 173-193.
Wong, H. K. T., and Mylopoulos, J. 1977. Two views of data semantics:
a survey of data models in artificial intelligence and database management. Information, 15(3), 344-383.
Woods, W. 1975. What's in a link: foundations for semantic networks.
In RU, pp. 35-82.
Woods, W., et al. 1976. Speech Understanding Systems: Final Technical
Progress Report. (5 vols.), BBN No. 3438. Cambridge, MA: Bolt, Beranek
and Newman.
Zadeh, L. 1979. A theory of approximate reasoning. In MI9.
Zisman, M. D. 1978. Use of production systems for modeling
asynchronous, concurrent processes. In PDIS, pp. 53-68.
AUTHOR INDEX
Abelson, R. P., 412, 413, 424
Abraham, R. G., 13
Agin, G. J., 15
Aho, A. V., 14
Allen, J., 16, 189
Amarel, S., 49, 127
Ambler, A. P., 13
Anderson, J., 412
Athans, M., 49
Baker, H., 419
Ball, W., 50
Ballantyne, A. M., 13
Banerji, R., 431
Barr, A., 11
Barrow, H., 15
Barstow, D., 14, 269
Baudet, G. M., 128
Baumert, L., 50
Bellman, R., 95
Berliner, H. J., 127, 128
Bernstein, M. I., 11
Bibel, W., 268
Biermann, A. W., 14
Binford, T. O., 13
Black, F., 316
Bledsoe, W. W., 13, 267, 268, 269
Bobrow, D. G., 50, 270, 315, 412, 413,
418, 431
Boden, M. A., 11
Bower, G., 412
Boyer, R. S., 13, 189
Brown, J. S., 15
Brown, R. H., 429
Bruell, P., 13, 269
Buchanan, B. G., 12, 422
Bundy, A., 11, 413, 426
Cassinis, R., 14
Cercone, N. J., 413
Chandra, A. K., 96
Chang, C. L., 13, 127, 156, 189, 208, 268
Charniak, E., 267, 270, 417
Codd, E. F., 12
Cohen, H., 11
Cohen, P. R., 316, 425
Cohn, A., 426
Coleman, R., 95
Collins, A., 424, 426, 431
Collins, N. L., 430
Constable, R., 14
Corkill, D. D., 419
Cox, P. T., 268
Crane, H. D., 419
Creary, L. G., 425
Dale, E., 430
Date, C. J., 12
Davis, M., 156
Davis, R., 12, 49, 269, 420, 429
Dawson, C., 316
de Kleer, J., 269
Deliyani, A., 412
Derksen, J. A., 267, 315, 418
Dietterich, T. G., 421
Dijkstra, E. W., 95
Dixon, J. K., 96, 128
Doran, J., 90, 95, 96
Doyle, J., 413
Dreussi, J., 357
Dreyfus, S., 95
Duda, R. O., 12, 15, 268, 413, 420, 423
Dudeney, H., 50
Edwards, D., 128
Ehrig, H., 49
Elcock, E., 430
Elithorn, A., 429
Elschlager, R., 425
Erman, L. D., 419
Ernst, G. W., 316
Evans, T. G., 421
Fahlman, S. E., 315, 413, 418
Fateman, R. J., 12
Feigenbaum, E. A., 11, 12, 15, 50, 429
Feldman, J. A., 358
Feldman, Julian, 15, 429
Fikes, R. E., 315, 316, 358, 413, 421
Findler, N. V., 413, 429
Fishman, D. H., 189
Floyd, R. W., 14
Frege, G., 412
Friedland, P., 412, 420
Friedman, D. P., 16
Gallaire, H., 12, 269
Galler, B., 48
Gardner, M., 50
Gaschnig, J., 94
Gelernter, H. L., 13, 15
Gelperin, D., 95
Genesereth, M. R., 12
Goebel, R. G., 413
Goldstein, I. P., 412
Goldstine, H. H., 14
Golomb, S., 50
Goodson, J. L., 424
Green, C. C., 14, 189, 269, 308, 316, 418
Grossman, D., 14
Grosz, B. J., 11, 424
Guard, J., 189
Hall, P. A. V., 127
Hammer, M., 14, 269
Hanson, A. R., 15, 429
Harris, L. R., 95, 128
Hart, P. E., 15, 95, 268, 316, 358, 421,
423
Hart, T., 128
Hayes, J. E., 430
Hayes, P. J., 156, 246, 269, 270, 315, 316, 412, 423
Hayes-Roth, F., 49, 431
Held, M., 50
Hendrix, G. G., 316, 412, 413, 424
Hewitt, C., 267, 270, 419
Hillier, F. S., 14, 50
Hinxman, A. I., 127
Hofstadter, D. R., 426
Hopcroft, J. E., 14, 50
Horowitz, E., 94
Hunt, E. B., 10
Jackson, P. C., Jr., 10, 96
Jones, D., 429
Joyner, W. H., Jr., 433
Kadane, J. B., 95
Kanade, T., 15
Kanal, L. N., 96
Karp, R. M., 50
King, J., 49
Klahr, P., 268
Klatt, D. H., 11
Kling, R. E., 421
Knuth, D. E., 128
Kornfeld, W. A., 419
Kowalski, R., 127, 156, 189, 268, 269, 270, 311, 316, 412
Krishnan, S., 15
Kuehner, D. G., 156, 189
Kuipers, B., 423
Latombe, J. C., 15
Lauriere, J. L., 14
Lee, R. C. T., 13, 14, 156, 189, 269
Lehnert, W., 412
Lenat, D. B., 422, 429
Lesser, V. R., 419
Levi, G., 127
Levin, M., 128
Levitt, K. N., 268
Levy, D., 128
Lieberman, G. J., 14, 50
Lin, S., 50
Lindsay, P. H., 11
Lindstrom, G., 128
Linsky, L., 425
London, P., 357
London, R. L., 14
Loveland, D. W., 13, 156, 189, 268
Lowerre, B. T., 96
Luckham, D. C., 13, 156, 189, 268
McCarthy, J., 13, 14, 315, 316, 422,
423, 425
McCharen, J. D., 13, 189
McCorduck, P., 11
McDermott, D. V., 267, 413, 417, 418
McDermott, J., 421
Mackworth, A. K., 94
McSkimin, J. R., 189
Manna, Z., 14, 156, 253, 269
Markov, A., 48
Marr, D., 11, 15
Martelli, A., 95, 106, 127
Martin, W. A., 12, 413
Maslov, S. J., 156
Medress, M. F., 11
Meltzer, B., 429, 430
Mendelson, E., 156
Mesarovic, M. D., 431
Michalski, R. S., 421
Michie, D., 11, 67, 90, 95, 96, 127, 430
Mikulich, L. I., 430
Minker, J., 12, 127, 189, 269
Minsky, M., 412, 431
Mitchell, T. M., 421, 422
Moll, R., 421
Montanari, U., 94, 96, 106, 127
Moore, E. F., 95
Moore, J., 412
Moore, J. S., 13
Moore, R. C., 267, 268, 413, 424, 425
Moore, R. W., 128
Moses, J., 50
Mylopoulos, J., 269, 413
Nash-Webber, B., 433
Naur, P., 14
Nevins, A. J., 268
Nevins, J. L., 13
Newborn, M., 128
Newell, A., 11, 13, 48, 95, 127, 128,
316, 412
Nilsson, N. J., 10, 11, 49, 95, 127, 156, 185, 268, 270, 315, 316, 358, 421, 423
Nitzan, D., 13
Norman, D. A., 11, 412
Norton, L. M., 432
Okhotsimski, D. E., 14
Ouchi, G. I., 15
Paterson, M. S., 156
Pereira, L. M., 269
Perlis, A., 48
Pitrat, J., 128
Pohl, I., 95
Pople, H. E., Jr., 12
Pospesel, H., 156
Post, E., 48
Pratt, V. R., 270
Prawitz, D., 156
Putnam, H., 156
Quillian, M. R., 412
Raphael, B., 11, 13, 50, 95, 270, 315,
412
Raulefs, P., 156
Reddy, D. R., 11
Reiter, R., 189, 268, 413
Rich, C., 14
Rieger, C., 357
Riesbeck, C., 413
Riseman, E. M., 15, 429
Robbin, J., 156
Roberts, R. B., 412
Robinson, A. E., 424
Robinson, J. A., 13, 156
Rosen, B. K., 49
Rosen, C. A., 13
Ross, R., 67, 95
Roussel, P., 269
Rubin, S., 96
Rulifson, J. F., 267, 315, 418
Rumelhart, D. E., 412
Rustin, R., 11
Ruth, G., 14, 269
Rychener, M. D., 48
Sacerdoti, E. D., 270, 340, 349, 357,
424
Sahni, S., 94
Samuel, A. L., 128, 421
Schank, R. C., 412, 413, 424, 433
Schmidt, C. F., 424
Schreiber, J., 268
Schubert, L. K., 413
Shannon, C. E., 127
Shapiro, S., 412, 417
Shaw, J., 13, 95, 127, 128, 316
Shirai, Y., 15
Shortliffe, E. H., 12, 268, 423
Shostak, R., 269
Shrobe, H. E., 14
Sibert, E. E., 127
Sickel, S., 208, 268
Siklóssy, L., 316, 357, 431
Simmons, R. F., 412
Simon, H. A., 11, 13, 14, 48, 49, 95,
127, 128, 316, 431
Sirovich, F., 127
Slagle, J. R., 10, 45, 49, 50, 96, 127,
128, 208, 268
Smith, R. G., 419
Smullyan, R. M., 50
Soloway, E. M., 422
Sproull, R. F., 358
Sridharan, N. S., 413, 424
Srinivasan, C. V., 413
Stallman, R. M., 12, 413
Stefik, M., 412
Stickel, M. E., 268
Stockman, G., 127
Stone, M., 413
Sussman, G. J., 12, 15, 267, 270, 357,
413
Takeyasu, K., 14
Tate, A., 357, 358
Tenenbaum, J. M., 15
Turing, A. M., 14
Tyson, M., 268, 269
Ullman, J. D., 14, 50
Ulrich, J. W., 421
van Emden, M. H., 269
van Vaalen, J., 268
vanderBrug, G. J., 95, 127
Vere, S. A., 421
von Neumann, J., 14
Wagner, H., 14, 50
Waldinger, R. J., 14, 156, 253, 267,
269, 315, 316, 357, 418
Walker, D. E., 11, 413, 432
Waltz, D., 12, 94, 433
Warren, D. H. D., 269, 357
Waterman, D., 49, 431
Wegman, M. N., 156
Wegner, P., 431
Weiss, S. M., 12
Weissman, C., 16
Weyhrauch, R., 189, 229, 268, 269, 426
Whitney, D. E., 14
Wickelgren, W. A., 50
Wiederhold, G., 12
Wilkins, D., 128, 268
Wilks, Y., 412
Will, P., 14
Winker, S., 13
Winograd, T., 11, 267, 270, 412, 413,
418, 422
Winston, P. H., 11, 13, 15, 412, 417, 421, 429, 430
Wipke, W. T., 15
Wong, H. K. T., 269
Woods, W., 11, 412
Wos, L. A., 13
Zadeh, L., 424
Zanon, G., 189
Zisman, M. D., 419
SUBJECT INDEX
A*:
admissibility of, 76-79
definition of, 76
optimality of, 79-81
properties of, 76-84
references for, 95
Abstract individuals, 389-391
ABSTRIPS, 350-354, 357
Actions, reasoning about, 307-315, 424
Actor formalism, 419
Add list, of STRIPS rules, 278
Adders, in DCOMP, 336
Admissibility, of search algorithms, 76
Advice, added to delineations, 406-408
AI languages, 261
references for, 267, 270, 417, 418
Alpha-beta procedure, for games, 121-
126
efficiency of, 125-126
references for, 127
Alphabetic variant, 141
AM, 422
Amending plans, 342-349
Analogies, 317-318, 421
Ancestor node, in graphs, (see Graph notation)
Ancestry-filtered form strategy, in
resolution, 171
AND/OR form:
for fact expressions, 196-199
for goal expressions, 213-215
AND/OR graphs and trees:
definition of, 40-41, 99-100
references for, 49, 127
for representing fact expressions, 197-
199
for representing goal expressions,
213-215
for robot problem solving, 333
AND nodes, in AND/OR graphs, 40, 99-100
Answer extraction, in resolution, 175
Skolem functions in, 184
references for, 189
Answer statements:
in resolution, 176
in rule-based systems, 212
Antecedent, of an implication, 135
AO*:
definition of, 104-105
references for, 127
Applications of AI, 2-9, 11-15
Atomic formulas, in predicate calculus,
132
Attachment, procedural, 173-174, 232,
234, 400-401
Automatic programming, 5-6
by DCOMP, 348-349
references for, 14, 269
by resolution, 191
by RSTRIPS, 331-333
by rule-based systems, 241-253
by STRIPS, 305-307
Automation, industrial, 13-14
B-rules:
definition of, 34
for robot problems, 287-292
for rule-based deduction systems, 214-
215
Backed-up values, in game trees, 116
Backtracking control strategies:
algorithms for, 55-57, 59
definition of, 24-25
examples of, 25-26, 57-58, 60-61
references for, 50, 94
Backward production systems, 32-34
for robot problem solving, 287-296
for theorem proving, 212
Bag, 229
Base set, of clauses, 163
Beliefs, reasoning about, 424-425
Beta-structures, 412
Bidirectional production systems, 32-34
Bidirectional search, 88-90
Blackboard systems (see Production systems)
Blocks world, 152-155, 275
Branching factor, of search processes,
92-94
Breadth-first search, 69-71
Breadth-first strategy, in resolution,
165-166
CANCEL relation, in theorem proving,
254-257, 270
Candidate solution graph, 217-218, 254
Checker-playing programs, references
for, 128
Chess-playing programs, references for,
128
Church-Rosser theorems, 49
Clauses, 145
conversion to, 146-149
for goals, 214
CLOSED node, 64
Combinatorial explosion, 6-7
Combinatorial problems, 6-7, 14
Commonsense physics, 423
Commonsense reasoning, 154, 422-424
Commutative production systems:
definition of, 35
relation with decomposable systems,
109-112, 127
Completeness:
of inference rules, 144
of resolution refutation strategies, 165
Complexity of heuristic search, 95
Computation by deduction, 241-246, 269-270
Computer-based consulting systems, 4,
12
Conditional plans, 318-319
Conditional rule application, 259, 265-267
Conditional substitutions, 239, 252, 269
Conjunctions, 134
Conjunctive goals:
in deductions, 213
in robot problem solving, 297
(Also see Interacting goals)
Conjunctive normal form, 148
Connection graphs, 219-222, 268
Connectives, in predicate calculus, 134-135
Connectors, in AND/OR graphs, 100
CONNIVER, 261, 267
Consequent, of an implication, 135
Consistency restriction, in heuristic
search, 95
Consistency, of substitutions, 207-208,
218-219, 268
Constraint satisfaction, references for, 94
Contradiction, proof by (see Refutations)
Contradictory information, 408-411
Contrapositive rules, 258
Control knowledge, definition, 48
Control strategy:
backtracking, 24-26, 55-57
for decomposable systems, 39-41, 103-109
for game-playing systems, 112-126
graph-search, 22, 25, 27, 64-68
irrevocable, 21-24
of a production system, 17-18, 21-27
for resolution refutations, 164-172
for rule-based deduction systems, 217-
222, 257-260
for STRIPS, 302-303
tentative, 21-22, 24-27
Costs, of arcs and paths in graphs (see Graph notation)
Criticality values of preconditions, 351
DCOMP, 333
Debugging, as a planning strategy, 357
Declarative knowledge, definition, 48
Decomposable production systems:
algorithm for, 39
control of, 39-41
definition of, 37-38
examples of, 41-47
relation with commutative systems,
109-112, 127
Deduction (see Theorem proving)
Deductive operations on structured
objects, 387
Defaults, 408-411
Delete list, of STRIPS rules, 278
Deleters, in DCOMP, 335-336
Delineations, of structured objects, 387-
391
DeMorgan's laws, 138
DENDRAL, 12, 41-44, 50, 422
Depth, in graphs, (see Graph notation)
Depth bound, definition, 56-57
Depth-first search, 68-70
Derivation graphs, 110, 164
Descendant node (see Graph notation)
Differences, in GPS, 303-305
Disjunctions, 134
Distributed AI systems, 419
Double cross, in robot planning, 349
Dynamic programming, 95
8-puzzle:
breadth- and depth-first searches of,
68-71
description of, 18-20
heuristic search of, 73-74, 85-87
references for, 50
representation of, 18-20
8-queens problem, 6, 57-58, 60-61
Enclosures, in networks, 373-378
Epistemological problems, 422-426
Equivalence, of wffs, 138-139
Errors, effects of in heuristic search, 95
Evaluation, of predicates, 173-174
Evaluation functions:
for commutative systems, 112
definition of, 72-73
for derivation graphs, 112
examples of, 73, 85
for games, 115-117
Execution, of robot plans, 284-287
Expanding nodes (see Graph notation)
Expert systems, 4, 12
F-rules:
definition of, 34
for robot problem solving, 277-279
for rule-based deduction systems, 199-
203, 206
Fact node, 215
Fact object, 379
Facts, in rule-based deduction systems,
195
FOL, 426
Forward robot problem-solving system,
281-282
Forward rule-based deduction system,
196
Frame axioms or assertions, 310
Frame problem, 279-280
Frames, 8-9, 412
(Also see Semantic networks and
Units)
FRL, 412
Game-tree search, 112-126
references for, 127-128
Global database of a production system,
17-18
Goal clauses, 214
Goal descriptions, 276-277
Goal-directed invocation, 260
Goal node, in rule-based systems, 204, 210
(Also see Graph notation)
Goal object, 379
Goal stack, in STRIPS, 298
Goal wff, 153, 195
Goals, in rule-based deduction systems,
203-204
interacting, 296-297, 325
GPS, 303-305
Graph notation:
for AND/OR graphs, 99-103
for ordinary graphs, 62-64
Graph-search control strategies:
A*, 76
admissibility of, 76
algorithm for, 64-68
for AND/OR graphs, 103-109
A0*, 104-105
breadth-first, 69-71
definition of, 22, 61-62
depth-first, 68-70
examples of, 25, 27, 28, 66-68, 85-87,
107-109
for game trees, 112-126
heuristic, 72
optimality of, 79-81
references for, 95-96, 127-128
uninformed, 68-71
Grammar, example of, 31-32
Ground instance, 141, 149
Grundy's game, 113-114
GUS, 412
Heuristic function, 76
for AND/OR graphs, 103
Heuristic power, 72, 85-88
Heuristic search, 72
Hierarchical planning, 349-357
Hierarchies, taxonomic, 392
Hill-climbing, 22-23, 49
Hyperarcs, 100
Hypergraphs, 100
Hypothesize-and-test, 8
Implications, 134-135
Induction (mathematical), in automatic programming, 247-253
as related to learning, 421
Inequalities, solution of, 229-234, 269
Inference rules, 140
soundness and completeness of, 145
Information retrieval, 3-4, 12, 154, 223-229, 269
Informedness of search algorithms, 79
Inheritance, of properties, 392-397
Integration, symbolic, 43-47
Interacting goals, 296-297, 325, 333
Interactive partial orders, 336
Interpretations, of predicate calculus
wffs, 133-134
Irrevocable control strategy:
definition of, 21
examples of, 22-24, 163-164
Kernels, of triangle tables, 284
Knowledge acquisition, 419-422
Knowledge, reasoning about, 424-425
KRL, 412
LAWALY, 357
LCF, 426
Leaf nodes, in AND/OR graphs, 101
Learning, 420-422
Linear-input form strategy in resolution,
169-170
LISP, references for, 16, 417
Literals, 135
Literal nodes, 203
MACSYMA, 12
Match arc, 201, 206
Matching structured objects, 378-386,
397-399
Means-ends analysis, 303-305
Memory organization, of AI systems,
418
Merge, in resolution, 150, 171
Meta-knowledge, 424, 426
(Also see Meta-rules)
Meta-rules, 229, 259, 269, 426
Mgu, 142
Minimax search in game trees, 115-121
references for, 127-128
Missionaries-and-cannibals problem, 50-
51
Modal logic, 425
Models, of predicate calculus wffs, 133-
134
Modus ponens, 140
Monkey-and-bananas problem, 318
Monotone restriction, on heuristic
functions, 81-84
for AND/OR graphs, 103
Most general unifier, 142
Multiplying out:
inefficiency of, 194-195
need for, 237-239
MYCIN, 268, 420, 423
Natural language processing, 2-3, 11-12
Naughts and crosses, 116-121
Negations, 135
Network rules, 404-406
Network, semantic, 370-378
Nim, 129
NOAH, 357, 358
Nonlinear plans, 333, 357
Non-monotonic logics, 413
NP-complete problems, 7, 14
Object-centered representations, 363
OPEN node, 64
Operations research, 14
Optimal paths, in graphs, (see Graph
notation)
Optimality of search algorithms, 79-81
OR nodes in AND/OR graphs, 41, 99-
100
Ordering strategies, in resolution, 172
references for, 189
Parallel execution of plans, 338-341
Parallel processing, 418-419
Partial models, in logic, 173-174
Partially ordered plans, 333-341
Partitions, in networks, 373-378
Patching plans, 342-349
Paths, in graphs (see Graph notation)
Pattern-directed invocation, 260
Pattern matching, 144, 261-262
P-conditions, 355
Penetrance, 91-94
Perception, 7-9, 15, 96 Performance measures of search
algorithms, 91-94
Petri nets, augmented, 419
Plan generation, 275, 321 PLANNER, 261, 267, 270 PLANNER-like languages, 260
references for, 267, 270
Plans, 282
representation of, 282-287
execution of, 284-287
Possible worlds semantics, 425
Precondition:
criticality of, 351
postponing, 350, 355
of production rules, 18
of STRIPS rules, 277-278
references for, 156
Prenex form, 147-148
Problem reduction (see Decomposable production systems)
Problem states, 19-20
Procedural attachment, 173-174, 232, 234, 400-401
Procedural knowledge, definition, 48
Procedural net, 340
Production rules:
based on implications, 195
definition of, 17-18
for semantic networks, 404-406
STRIPS-form, 277-279
for units, 401-404
Production systems:
algorithm for, 21
backward and bidirectional, 32-34
commutative, 35-37
control strategies for, 17-18, 21-27, 39-41
decomposable, 37-47
definition of, 17-18, 48-49
for resolution refutations, 163-164
for robot problems, 154-155, 281-282
for theorem proving, 152-154, 193
Program synthesis (see Automatic
programming)
Program verification (see Automatic
programming)
PROLOG, 246, 269-270, 315, 357
Proof, definition of, 140
Property inheritance, 392-397
Propositional attitudes, 424-425
Propositional calculus, 135
PROSPECTOR, 420, 423
Protection, of goals, 323
violation of, 326
Prototype units, 388, 390
PSI automatic programming system, 14
Puzzles, references for, 50
QA3, 418
QA4, 267, 418
QLISP, 261, 270
Quantification, 136-137
in units, 368
in nets, 373
Reasoning:
about actions, 307-315, 424
by cases, 204-205, 256
commonsense, 154, 422-424
about knowledge and belief, 424-425
Referential transparency, 425
Refutation tree, 164
Refutations, 161
Regression, 288-292, 321
Representation:
examples of, 29-32
of plans, 424
problems of, 27-29, 49
Resolution, 145
within AND/OR graphs, 234-241
for general clauses, 150-152
for ground clauses, 149-150
references for, 156
Resolution refutations, 161
references for, 189
Resolvents, 149, 151
RGR, 237, 268
Robot problems, 152-153, 275, 307-315,
321
Robots, 5, 13-14
Root node (see Graph notation)
RSTRIPS, 321
Rule-based systems, 193, 196
(Also see Production systems)
Rules (see Production rules)
SAINT, 45, 50
Satisfiability, of sets of wffs, 145
Scheduling problems, 6-7, 14
Schemas (see Semantic networks and Units)
Scripts, 412
Search graph:
definition of, 64-65, 104
Search strategies (see Control strategies)
Search tree:
definition of, 64-65
example of, 28
Self-reference, 426
Semantics, of predicate calculus wffs,
133-134
Semantic matching, 381
Semantic networks, 370-378
references for, 412-413
Set-of-support strategy, in resolution,
167
Simplification strategies, in resolution,
172-174
Simultaneous unifiers, 268
SIN, 50
SIR, 412
Situation variables, in robot problems,
308
Skolem functions, 146-147
Slots, 364
Slotnames, 364
Slotvalues, 364
Solution graph, in AND/OR graphs,
101-102
candidate, 217-218
SOLVED nodes in AND/OR search
graphs, 104-106
Soundness, of inference rules, 145
Speech acts, 316, 425
Speech recognition and understanding,
11, 96
Staged search, 90-91
Standardization of variables, 146, 149
Start node (see Graph notation)
State descriptions, 153, 276
State variables, in robot problems, 308
States, of a problem, 19-20
STRIPS, 277, 298
STRIPS-form rules, 277-279
Structured objects, 361
Subgoal, 214
Subgoal node, 214
Subsumption, of clauses, 174
Substitution instances, 141, 144
Substitutions, 140-142
associativity of, 141
composition of, 141
consistency of, 207-208, 218-219, 268
non-commutativity of, 142
unifying composition of, 207-208, 268
Successor node, in graphs (see Graph
notation)
Symbol mapping, 413
(Also see Property inheritance)
Tautologies, 144
elimination of, 173
Taxonomic hierarchies, 392-397
TEIRESIAS, 420
Tentative control strategy, definition of,
21-22
(Also see Backtracking and Graph-
search control strategies)
Terminal nodes, of AND/OR graphs, 41
Termination condition:
of backward, rule-based systems, 215
of forward, rule-based systems, 203, 210
of production systems, 18
of resolution refutation systems, 163
Theorem, definition of, 140
Theorem-proving, 4-5, 13, 153
for robot problem solving, 307-315
by resolution, 151-152
by resolution refutations, 161
by rule-based systems, 193
Tic-tac-toe, 116-121
Time and tense, formalization of, 159
Tip nodes, in trees (see Graph notation)
Transitivity, 231-232
Traveling salesman problem, 6-7, 29-31, 50
Triangle tables, 282-287, 421
Triggers (see Advice)
Truth maintenance, 411, 413
Truth table, 138
Truth values, of predicate calculus wffs,
(see Interpretations)
Two-handed robot, 338-341
Uncertain knowledge:
in deductions, 268, 423-424
in robot planning, 358
UNDERSTAND, 49
Unification, 140-144
algorithm for, 142-143
references for, 156
of structured objects (see Matching)
Unification set, in answer extraction,
179
Unifying composition, of substitutions,
207-208, 268
Unit-preference strategy, in resolution,
167-169
Unit rules, 401-404
Units, 361-369
references for, 412
Universal specialization, 140
Validity, of wffs, 144
Vision (see Perception)
WARPLAN, 357
Wffs, of the predicate calculus, 131-132
AI_RMF_Playbook.pdf
AI RMF PLAYBOOK
Table of Contents
GOVERN ................ 4
GOVERN 1.1 ................ 4
GOVERN 1.2 ................ 5
GOVERN 1.3 ................ 7
GOVERN 1.4 ................ 9
GOVERN 1.5 ................ 11
GOVERN 1.6 ................ 12
GOVERN 1.7 ................ 13
GOVERN 2.1 ................ 15
GOVERN 2.2 ................ 17
GOVERN 2.3 ................ 18
GOVERN 3.1 ................ 19
GOVERN 3.2 ................ 21
GOVERN 4.1 ................ 23
GOVERN 4.2 ................ 24
GOVERN 4.3 ................ 27
GOVERN 5.1 ................ 28
GOVERN 5.2 ................ 30
GOVERN 6.1 ................ 32
GOVERN 6.2 ................ 33
MANAGE ................ 35
MANAGE 1.1 ................ 35
MANAGE 1.2 ................ 36
MANAGE 1.3 ................ 37
MANAGE 1.4 ................ 39
MANAGE 2.1 ................ 40
MANAGE 2.2 ................ 42
MANAGE 2.3 ................ 48
MANAGE 2.4 ................ 49
MANAGE 3.1 ................ 51
MANAGE 3.2 ................ 52
MANAGE 4.1 ................ 53
MANAGE 4.2 ................ 54
MANAGE 4.3 ................ 56
MAP ................ 58
MAP 1.1 ................ 58
MAP 1.2 ................ 62
MAP 1.3 ................ 63
MAP 1.4 ................ 65
MAP 1.5 ................ 66
MAP 1.6 ................ 68
MAP 2.1 ................ 70
MAP 2.2 ................ 71
MAP 2.3 ................ 74
MAP 3.1 ................ 77
MAP 3.2 ................ 79
MAP 3.3 ................ 80
MAP 3.4 ................ 82
MAP 3.5 ................ 84
MAP 4.1 ................ 86
MAP 4.2 ................ 88
MAP 5.1 ................ 89
MAP 5.2 ................ 90
MEASURE ................ 93
MEASURE 1.1 ................ 93
MEASURE 1.2 ................ 95
MEASURE 1.3 ................ 96
MEASURE 2.1 ................ 98
MEASURE 2.2 ................ 99
MEASURE 2.3 ................ 102
MEASURE 2.4 ................ 104
MEASURE 2.5 ................ 106
MEASURE 2.6 ................ 108
MEASURE 2.7 ................ 110
MEASURE 2.8 ................ 112
MEASURE 2.9 ................ 115
MEASURE 2.10 ................ 118
MEASURE 2.11 ................ 121
MEASURE 2.12 ................ 126
MEASURE 2.13 ................ 128
MEASURE 3.1 ................ 129
MEASURE 3.2 ................ 131
MEASURE 3.3 ................ 132
MEASURE 4.1 ................ 134
MEASURE 4.2 ................ 137
MEASURE 4.3 ................ 140
About the Playbook

The Playbook provides suggested actions for achieving the outcomes laid out in
the AI Risk Management Framework (AI RMF) Core (Tables 1 – 4 in AI RMF
1.0). Suggestions are aligned to each sub-category within the four AI RMF
functions (Govern, Map, Measure, Manage).
The Playbook is neither a checklist nor a set of steps to be followed in its entirety.
Playbook suggestions are voluntary. Organizations may utilize this information
by borrowing as many – or as few – suggestions as apply to their industry use
case or interests.
GOVERN
4 of 142

Govern
Policies, processes, procedures and practices across the organization related to the
mapping, measuring and managing of AI risks are in place, transparent, and implemented
effectively.
GOVERN 1.1
Legal and regulatory requirements involving AI are understood, managed, and documented.
About
AI systems may be subject to specific applicable legal and regulatory requirements. Some
legal requirements (e.g., nondiscrimination, data privacy and security controls) can mandate
documentation, disclosure, and increased AI system transparency. These requirements are
complex, and may not apply, or may differ, across applications and contexts.
For example, AI system testing processes for bias measurement, such as disparate impact,
are not applied uniformly within the legal context. Disparate impact is broadly defined as a
facially neutral policy or practice that disproportionately harms a group based on a
protected trait. Notably, some modeling algorithms or debiasing techniques that rely on
demographic information could also come into tension with legal prohibitions on disparate
treatment (i.e., intentional discrimination).
Additionally, some intended users of AI systems may not have consistent or reliable access
to fundamental internet technologies (a phenomenon widely described as the “digital
divide”) or may experience difficulties interacting with AI systems due to disabilities or
impairments. Such factors may mean different communities experience bias or other
negative impacts when trying to access AI systems. Failure to address such design issues
may pose legal risks, for example in employment related activities affecting persons with
disabilities.
Suggested Actions
• Maintain awareness of the applicable legal and regulatory considerations and
requirements specific to industry, sector, and business purpose, as well as the
application context of the deployed AI system.
• Align risk management efforts with applicable legal standards.
• Maintain policies for training (and re-training) organizational staff about necessary
legal or regulatory considerations that may impact AI-related design, development and
deployment activities.
Transparency & Documentation
Organizations can document the following
• To what extent has the entity defined and documented the regulatory environment,
including applicable requirements in laws and regulations?
• Has the system been reviewed for its compliance to applicable laws, regulations,
standards, and guidance?
AI Transparency Resources
GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
References
Andrew Smith, "Using Artificial Intelligence and Algorithms," FTC Business Blog (2020).
Rebecca Kelly Slaughter, "Algorithms and Economic Justice," ISP Digital Future Whitepaper
& YJoLT Special Publication (2021).
Patrick Hall, Benjamin Cox, Steven Dickerson, Arjun Ravi Kannan, Raghu Kulkarni, and
Nicholas Schmidt, "A United States fair lending perspective on machine learning," Frontiers
in Artificial Intelligence 4 (2021).
AI Hiring Tools and the Law, Partnership on Employment & Accessible Technology (PEAT,
peatworks.org).
GOVERN 1.2
The characteristics of trustworthy AI are integrated into organizational policies, processes,
and procedures.
About
Policies, processes, and procedures are central components of effective AI risk management
and fundamental to individual and organizational accountability. All stakeholders benefit
from policies, processes, and procedures which require preventing harm by design and
default.
Organizational policies and procedures will vary based on available resources and risk
profiles, but can help systematize AI actor roles and responsibilities throughout the AI
lifecycle. Without such policies, risk management can be subjective across the organization,
and exacerbate rather than minimize risks over time. Policies, or summaries thereof, are
understandable to relevant AI actors. Policies reflect an understanding of the underlying
metrics, measurements, and tests that are necessary to support policy and AI system design,
development, deployment and use.
Lack of clear information about responsibilities and chains of command will limit the
effectiveness of risk management.
Suggested Actions
Organizational AI risk management policies should be designed to:
• Define key terms and concepts related to AI systems and the scope of their purposes
and intended uses.
• Connect AI governance to existing organizational governance and risk controls.
• Align to broader data governance policies and practices, particularly the use of sensitive
or otherwise risky data.
• Detail standards for experimental design, data quality, and model training.
• Outline and document risk mapping and measurement processes and standards.
• Detail model testing and validation processes.
• Detail review processes for legal and risk functions.
• Establish the frequency of and detail for monitoring, auditing and review processes.
• Outline change management requirements.
• Outline processes for internal and external stakeholder engagement.
• Establish whistleblower policies to facilitate reporting of serious AI system concerns.
• Detail and test incident response plans.
• Verify that formal AI risk management policies align to existing legal standards, and
industry best practices and norms.
• Establish AI risk management policies that broadly align to AI system trustworthy
characteristics.
• Verify that formal AI risk management policies include currently deployed and
third-party AI systems.
Transparency & Documentation
Organizations can document the following
• To what extent do these policies foster public trust and confidence in the use of the AI
system?
• What policies has the entity developed to ensure the use of the AI system is consistent
with its stated values and principles?
• What policies and documentation has the entity developed to encourage the use of its AI
system as intended?
• To what extent are the model outputs consistent with the entity’s values and principles
to foster public trust and equity?
AI Transparency Resources
GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
References
Office of the Comptroller of the Currency, Comptroller’s Handbook: Model Risk Management (Aug. 2021).
GAO, “Artificial Intelligence: An Accountability Framework for Federal Agencies and Other
Entities,” GAO@100 (GAO-21-519SP), June 2021.
NIST, "U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical
Standards and Related Tools".
Zachary Lipton, Julian McAuley, and Alexandra Chouldechova, “Does mitigating ML’s
impact disparity require treatment disparity?” Advances in Neural Information Processing
Systems, 2018.
Jessica Newman (2023) “A Taxonomy of Trustworthiness for Artificial Intelligence:
Connecting Properties of Trustworthiness with Risk Management and the AI Lifecycle,” UC
Berkeley Center for Long-Term Cybersecurity.
Emily Hadley (2022). Prioritizing Policies for Furthering Responsible Artificial Intelligence
in the United States. 2022 IEEE International Conference on Big Data (Big Data), 5029-5038.
SAS Institute, “The SAS® Data Governance Framework: A Blueprint for Success”.
ISO, “Information technology — Reference Model of Data Management,” ISO/IEC TR
10032:2003.
“Play 5: Create a formal policy,” Partnership on Employment & Accessible Technology
(PEAT, peatworks.org).
National Institute of Standards and Technology (2018). Framework for Improving Critical
Infrastructure Cybersecurity.
Kaitlin R. Boeckl and Naomi B. Lefkovitz. "NIST Privacy Framework: A Tool for Improving
Privacy Through Enterprise Risk Management, Version 1.0." National Institute of Standards
and Technology (NIST), January 16, 2020.
“plainlanguage.gov – Home,” The U.S. Government.
GOVERN 1.3
Processes and procedures are in place to determine the needed level of risk management
activities based on the organization's risk tolerance.
About
Risk management resources are finite in any organization. Adequate AI governance policies
delineate the mapping, measurement, and prioritization of risks so that resources are
allocated toward the most material issues for an AI system, ensuring effective risk management.
Policies may specify systematic processes for assigning mapped and measured risks to
standardized risk scales.
AI risk tolerances range from negligible to critical – from, respectively, almost no risk to
risks that can result in irredeemable human, reputational, financial, or environmental
losses. Risk tolerance rating policies consider different sources of risk, (e.g., financial,
operational, safety and wellbeing, business, reputational, or model risks). A typical risk
measurement approach entails the multiplication, or qualitative combination, of measured
or estimated impact and likelihood of impacts into a risk score (risk ≈ impact x likelihood).
This score is then placed on a risk scale. Scales for risk may be qualitative, such as
red-amber-green (RAG), or may entail simulations or econometric approaches. Impact
assessments are a common tool for understanding the severity of mapped risks. In the most
fulsome AI risk management approaches, all models are assigned to a risk level.
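The scoring approach described above can be sketched in a few lines of Python. The 1-5 ordinal ratings and the RAG thresholds below are illustrative assumptions for the sketch, not values the Playbook prescribes; an organization would calibrate scales and thresholds to its own risk tolerance.

```python
# Sketch of the risk-scoring approach described above: measured impact and
# likelihood are combined (risk ~ impact x likelihood), and the resulting
# score is placed on a red-amber-green (RAG) scale. The 1-5 ratings and the
# thresholds are illustrative assumptions only.

def risk_score(impact: int, likelihood: int) -> int:
    """Combine impact and likelihood (each rated 1-5) into a risk score."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must be rated on a 1-5 scale")
    return impact * likelihood

def rag_rating(score: int) -> str:
    """Place a risk score (1-25) on an assumed red-amber-green scale."""
    if score >= 15:
        return "red"    # critical: escalate; may exceed risk tolerance
    if score >= 8:
        return "amber"  # material: mitigation plan required
    return "green"      # negligible to low: routine monitoring

# A system with high impact (4) but moderate likelihood (2) scores 8 -> "amber".
```

In practice the combination step is often qualitative (a lookup in an impact-by-likelihood matrix) rather than a literal multiplication, but the structure is the same: two mapped measurements feed one standardized scale valid across the AI portfolio.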
Suggested Actions
• Establish policies to define mechanisms for measuring or understanding an AI system’s
potential impacts, e.g., via regular impact assessments at key stages in the AI lifecycle,
connected to system impacts and frequency of system updates.
• Establish policies to define mechanisms for measuring or understanding the likelihood
of an AI system’s impacts and their magnitude at key stages in the AI lifecycle.
• Establish policies that define assessment scales for measuring potential AI system
impact. Scales may be qualitative, such as red-amber-green (RAG), or may entail
simulations or econometric approaches.
• Establish policies for assigning an overall risk measurement approach for an AI system,
or its important components, e.g., via multiplication or combination of a mapped risk’s
impact and likelihood (risk ≈ impact x likelihood).
• Establish policies to assign systems to uniform risk scales that are valid across the
organization’s AI portfolio (e.g. documentation templates), and acknowledge risk
tolerance and risk levels may change over the lifecycle of an AI system.
Transparency & Documentation
Organizations can document the following
• How do system performance metrics inform risk tolerance decisions?
• What policies has the entity developed to ensure the use of the AI system is consistent
with organizational risk tolerance?
• How do the entity’s data security and privacy assessments inform risk tolerance
decisions?
AI Transparency Resources
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
References
Board of Governors of the Federal Reserve System. SR 11-7: Guidance on Model Risk
Management. (April 4, 2011).
The Office of the Comptroller of the Currency. Enterprise Risk Appetite Statement. (Nov. 20,
2019).
Brenda Boultwood, How to Develop an Enterprise Risk-Rating Approach (Aug. 26, 2021).
Global Association of Risk Professionals (garp.org). Accessed Jan. 4, 2023.
GAO-17-63: Enterprise Risk Management: Selected Agencies’ Experiences Illustrate Good
Practices in Managing Risk.
GOVERN 1.4
The risk management process and its outcomes are established through transparent
policies, procedures, and other controls based on organizational risk priorities.
About
Clear policies and procedures relating to documentation and transparency facilitate and
enhance efforts to communicate roles and responsibilities for the Map, Measure and
Manage functions across the AI lifecycle. Standardized documentation can help
organizations systematically integrate AI risk management processes and enhance
accountability efforts. For example, by adding their contact information to a work product
document, AI actors can improve communication, increase ownership of work products,
and potentially enhance consideration of product quality. Documentation may generate
downstream benefits related to improved system replicability and robustness. Proper
documentation storage and access procedures allow for quick retrieval of critical
information during a negative incident. Explainable machine learning efforts (models and
explanatory methods) may bolster technical documentation practices by introducing
additional information for review and interpretation by AI Actors.
Suggested Actions
• Establish and regularly review documentation policies that, among others, address
information related to:
• AI actor contact information
• Business justification
• Scope and usage
• Expected and potential risks and impacts
• Assumptions and limitations
• Description and characterization of training data
• Algorithmic methodology
• Evaluated alternative approaches
• Description of output data
• Testing and validation results (including explanatory visualizations and
information)
• Down- and up-stream dependencies
• Plans for deployment, monitoring, and change management
• Stakeholder engagement plans
• Verify documentation policies for AI systems are standardized across the organization
and remain current.
• Establish policies for a model documentation inventory system and regularly review its
completeness, usability, and efficacy.
• Establish mechanisms to regularly review the efficacy of risk management processes.
• Identify AI actors responsible for evaluating efficacy of risk management processes and
approaches, and for course-correction based on results.
• Establish policies and processes regarding public disclosure of the use of AI and risk
management material such as impact assessments, audits, model documentation and
validation and testing results.
• Document and review the use and efficacy of different types of transparency tools and
follow industry standards at the time a model is in use.
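The documentation fields listed above can be collected in a standardized template so they stay uniform across an organization. The sketch below is one hypothetical way to structure such a template; the field names and example values are assumptions, not an official schema.

```python
# Illustrative sketch of a standardized model documentation template
# covering the fields listed above. Field and value choices are
# assumptions, not an official schema.
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    contact: str                     # AI actor contact information
    business_justification: str
    scope_and_usage: str
    risks_and_impacts: list = field(default_factory=list)
    assumptions_and_limitations: list = field(default_factory=list)
    training_data_description: str = ""
    algorithmic_methodology: str = ""
    alternatives_evaluated: list = field(default_factory=list)
    output_description: str = ""
    validation_results: dict = field(default_factory=dict)
    dependencies: list = field(default_factory=list)   # down- and up-stream
    deployment_and_monitoring_plan: str = ""
    stakeholder_engagement_plan: str = ""

# Hypothetical example entry.
doc = ModelDocumentation(
    contact="jane.doe@example.org",   # hypothetical AI actor
    business_justification="Automates loan pre-screening triage.",
    scope_and_usage="Advisory only; humans make final decisions.",
)
print(doc.contact)
```

Keeping such records in a common structure is what makes an organization-wide documentation inventory reviewable for completeness, usability, and efficacy.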
Transparency & Documentation
Organizations can document the following
• To what extent has the entity clarified the roles, responsibilities, and delegated
authorities to relevant stakeholders?
• What are the roles, responsibilities, and delegation of authorities of personnel involved
in the design, development, deployment, assessment and monitoring of the AI system?
• How will the appropriate performance metrics, such as accuracy, of the AI be monitored
after the AI is deployed? How much distributional shift or model drift from baseline
performance is acceptable?
AI Transparency Resources
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• Intel.gov: AI Ethics Framework for Intelligence Community - 2020.
References
Bd. Governors Fed. Rsrv. Sys., Supervisory Guidance on Model Risk Management, SR Letter
11-7 (Apr. 4, 2011).
Off. Comptroller Currency, Comptroller’s Handbook: Model Risk Management (Aug. 2021).
Margaret Mitchell et al., “Model Cards for Model Reporting.” Proceedings of 2019 FATML
Conference.
Timnit Gebru et al., “Datasheets for Datasets,” Communications of the ACM 64, No. 12, 2021.
Emily M. Bender, Batya Friedman, Angelina McMillan-Major (2022). A Guide for Writing
Data Statements for Natural Language Processing. University of Washington. Accessed July
14, 2022.
M. Arnold, R. K. E. Bellamy, M. Hind, et al. FactSheets: Increasing trust in AI services through
supplier’s declarations of conformity. IBM Journal of Research and Development 63, 4/5
(July-September 2019), 6:1-6:13.
Navdeep Gill, Abhishek Mathur, Marcos V. Conde (2022). A Brief Overview of AI Governance
for Responsible Machine Learning Systems. ArXiv, abs/2211.13130.
John Richards, David Piorkowski, Michael Hind, et al. A Human-Centered Methodology for
Creating AI FactSheets. Bulletin of the IEEE Computer Society Technical Committee on Data
Engineering.
Christoph Molnar, Interpretable Machine Learning, lulu.com.
David A. Broniatowski. 2021. Psychological Foundations of Explainability and
Interpretability in Artificial Intelligence. National Institute of Standards and Technology
(NIST) IR 8367. National Institute of Standards and Technology, Gaithersburg, MD.
OECD (2022), “OECD Framework for the Classification of AI systems”, OECD Digital
Economy Papers, No. 323, OECD Publishing, Paris.
GOVERN 1.5
Ongoing monitoring and periodic review of the risk management process and its outcomes
are planned, organizational roles and responsibilities are clearly defined, including
determining the frequency of periodic review.
About
AI systems are dynamic and may perform in unexpected ways once deployed or after
deployment. Continuous monitoring is a risk management process for tracking unexpected
issues and performance changes, in real-time or at a specific frequency, across the AI
system lifecycle.
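At its simplest, continuous monitoring of this kind compares live performance against a deployment baseline at a chosen frequency. The sketch below illustrates the idea; the accuracy metric, tolerance value, and function names are assumptions for demonstration.

```python
# Minimal sketch of continuous monitoring: compare a live performance
# metric against its deployment baseline and flag observations whose
# degradation exceeds a tolerance. The metric and the 0.05 tolerance
# are illustrative assumptions.

def check_performance(baseline: float, current: float,
                      tolerance: float = 0.05) -> bool:
    """Return True if degradation from baseline exceeds tolerance."""
    return (baseline - current) > tolerance

def monitor(baseline: float, observations: list[float]) -> list[int]:
    """Return indices of observations to flag for human adjudication,
    e.g., via an incident response process."""
    return [i for i, obs in enumerate(observations)
            if check_performance(baseline, obs)]

# Example: accuracy drifts below the tolerated band at index 2.
flags = monitor(baseline=0.90, observations=[0.89, 0.88, 0.82])
print(flags)  # [2]
```

Flagged observations feed the incident response and "appeal and override" processes described next, where a human adjudicates the system's outcome.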
Incident response and “appeal and override” are commonly used processes in information
technology management. These processes enable real-time flagging of potential incidents,
and human adjudication of system outcomes.
Establishing and maintaining incident response plans can reduce the likelihood of additive
impacts during an AI incident. Smaller organizations, which may not have fulsome
governance programs, can utilize incident response plans for addressing system failures,
abuse or misuse.
Suggested Actions
• Establish policies to allocate appropriate resources and capacity for assessing impacts
of AI systems on individuals, communities and society.
• Establish policies and procedures for monitoring and addressing AI system
performance and trustworthiness, including bias and security problems, across the
lifecycle of the system.
• Establish policies for AI system incident response, or confirm that existing incident
response policies apply to AI systems.
• Establish policies to define organizational functions and personnel responsible for AI
system monitoring and incident response activities.
• Establish mechanisms to enable the sharing of feedback from impacted individuals or
communities about negative impacts from AI systems.
• Establish mechanisms to provide recourse for impacted individuals or communities to
contest problematic AI system outcomes.
• Establish opt-out mechanisms.
Transparency & Documentation
Organizations can document the following
• To what extent does the system/entity consistently measure progress towards stated
goals and objectives?
• Did your organization implement a risk management system to address risks involved
in deploying the identified AI solution (e.g. personnel risk or changes to commercial
objectives)?
• Did your organization address usability problems and test whether user interfaces
served their intended purposes?
AI Transparency Resources
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• WEF Model AI Governance Framework Assessment 2020.
References
National Institute of Standards and Technology. (2018). Framework for improving critical
infrastructure cybersecurity.
National Institute of Standards and Technology. (2012). Computer Security Incident
Handling Guide. NIST Special Publication 800-61 Revision 2.
GOVERN 1.6
Mechanisms are in place to inventory AI systems and are resourced according to
organizational risk priorities.
About
An AI system inventory is an organized database of artifacts relating to an AI system or
model. It may include system documentation, incident response plans, data dictionaries,
links to implementation software or source code, names and contact information for
relevant AI actors, or other information that may be helpful for model or system
maintenance and incident response purposes. AI system inventories also enable a holistic
view of organizational AI assets. A serviceable AI system inventory may allow for the quick
resolution of:
• specific queries for single models, such as “when was this model last refreshed?”
• high-level queries across all models, such as, “how many models are currently deployed
within our organization?” or “how many users are impacted by our models?”
AI system inventories are a common element of traditional model risk management
approaches and can provide technical, business and risk management benefits. Typically,
inventories capture all organizational models or systems, since partial inventories may not
provide the value of a full inventory.
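The single-model and portfolio-wide queries described above could be served by even a lightweight inventory structure. The sketch below is illustrative; the attribute names and example entries are assumptions, not a prescribed schema.

```python
# Illustrative sketch of an AI system inventory supporting both
# single-model queries ("when was this model last refreshed?") and
# high-level queries across all models ("how many models are deployed?").
# Attribute names and entries are assumptions, not a prescribed schema.
from datetime import date

inventory = {
    "credit-risk-v2": {"deployed": True, "last_refreshed": date(2023, 1, 2),
                       "users_impacted": 120_000, "owner": "risk-team"},
    "churn-model-v1": {"deployed": False, "last_refreshed": date(2022, 6, 15),
                       "users_impacted": 0, "owner": "marketing-ds"},
}

# Single-model query.
print(inventory["credit-risk-v2"]["last_refreshed"])   # 2023-01-02

# High-level queries across all models.
deployed = sum(1 for m in inventory.values() if m["deployed"])
impacted = sum(m["users_impacted"] for m in inventory.values())
print(deployed, impacted)
```

In practice each entry would also link to documentation, incident response plans, data dictionaries, and source code, so the inventory doubles as the retrieval point during a negative incident.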
Suggested Actions
• Establish policies that define the creation and maintenance of AI system inventories.
• Establish policies that define a specific individual or team that is responsible for
maintaining the inventory.
• Establish policies that define which models or systems are inventoried, with preference
to inventorying all models or systems, or minimally, to high-risk models or systems, or
systems deployed in high-stakes settings.
• Establish policies that define model or system attributes to be inventoried, e.g.,
documentation, links to source code, incident response plans, data dictionaries, AI actor
contact information.
Transparency & Documentation
Organizations can document the following
• Who is responsible for documenting and maintaining the AI system inventory details?
• What processes exist for data generation, acquisition/collection, ingestion,
staging/storage, transformations, security, maintenance, and dissemination?
• Given the purpose of this AI, what is an appropriate interval for checking whether it is
still accurate, unbiased, explainable, etc.? What are the checks for this model?
AI Transparency Resources
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• Intel.gov: AI Ethics Framework for Intelligence Community - 2020.
References
“A risk-based integrity level schema”, in IEEE 1012, IEEE Standard for System, Software,
and Hardware Verification and Validation. See Annex B.
Off. Comptroller Currency, Comptroller’s Handbook: Model Risk Management (Aug. 2021).
See “Model Inventory,” pg. 26.
VertaAI, “ModelDB: An open-source system for Machine Learning model versioning,
metadata, and experiment management.” Accessed Jan. 5, 2023.
GOVERN 1.7
Processes and procedures are in place for decommissioning and phasing out of AI systems
safely and in a manner that does not increase risks or decrease the organization’s
trustworthiness.
About
Irregular or indiscriminate termination or deletion of models or AI systems may be
inappropriate and increase organizational risk. For example, AI systems may be subject to
regulatory requirements or implicated in future security or legal investigations. To maintain
trust, organizations may consider establishing policies and processes for the systematic and
deliberate decommissioning of AI systems. Typically, such policies consider user and
community concerns, risks in dependent and linked systems, and security, legal or
regulatory concerns. Decommissioned models or systems may be stored in a model
inventory along with active models, for an established length of time.
Suggested Actions
• Establish policies for decommissioning AI systems. Such policies typically address:
• User and community concerns, and reputational risks.
• Business continuity and financial risks.
• Up- and downstream system dependencies.
• Regulatory requirements (e.g., data retention).
• Potential future legal, regulatory, security or forensic investigations.
• Migration to the replacement system, if appropriate.
• Establish policies that delineate where and for how long decommissioned systems,
models and related artifacts are stored.
• Establish practices to track accountability and consider how decommission and other
adaptations or changes in system deployment contribute to downstream impacts for
individuals, groups and communities.
• Establish policies that address ancillary data or artifacts that must be preserved for
fulsome understanding or execution of the decommissioned AI system, e.g., predictions,
explanations, intermediate input feature representations, usernames and passwords,
etc.
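Storage-and-retention policies like those above can also be captured in machine-readable form so decommissioning is systematic rather than ad hoc. The sketch below is a hypothetical example; the retention period, storage location, and artifact list are assumptions for illustration.

```python
# Illustrative sketch of a decommissioning retention policy as
# machine-readable configuration. All values are assumptions chosen
# for demonstration, not recommended settings.
from datetime import timedelta

RETENTION_POLICY = {
    "storage_location": "model-inventory/decommissioned",
    "retention_days": 365 * 7,   # e.g., to cover a regulatory look-back window
    "preserve_artifacts": [
        "model_binary", "documentation", "predictions",
        "explanations", "intermediate_feature_representations",
    ],
    "security_review_required": True,
}

def retention_expiry(decommission_date, policy=RETENTION_POLICY):
    """Return the date after which stored artifacts may be deleted."""
    return decommission_date + timedelta(days=policy["retention_days"])
```

A policy expressed this way can be attached to each decommissioned entry in the model inventory, so the organization can verify that artifacts implicated in a future legal or forensic investigation are still retrievable.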
Transparency & Documentation
Organizations can document the following
• What processes exist for data generation, acquisition/collection, ingestion,
staging/storage, transformations, security, maintenance, and dissemination?
• To what extent do these policies foster public trust and confidence in the use of the AI
system?
• If anyone believes that the AI no longer meets this ethical framework, who will be
responsible for receiving the concern and as appropriate investigating and remediating
the issue? Do they have authority to modify, limit, or stop the use of the AI?
• If it relates to people, were there any ethical review applications/reviews/approvals?
(e.g. Institutional Review Board applications)
AI Transparency Resources
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• Intel.gov: AI Ethics Framework for Intelligence Community - 2020.
• Datasheets for Datasets.
References
Michelle De Mooy, Joseph Jerome and Vijay Kasschau, “Should It Stay or Should It Go? The
Legal, Policy and Technical Landscape Around Data Deletion,” Center for Democracy and
Technology, 2017.
Burcu Baykurt, "Algorithmic accountability in US cities: Transparency, impact, and political
economy." Big Data & Society 9, no. 2 (2022): 20539517221115426.
Upol Ehsan, Ranjit Singh, Jacob Metcalf and Mark O. Riedl. “The Algorithmic Imprint.”
Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
(2022).
“Information System Decommissioning Guide,” Bureau of Land Management, 2011.
GOVERN 2.1
Roles and responsibilities and lines of communication related to mapping, measuring, and
managing AI risks are documented and are clear to individuals and teams throughout the
organization.
About
The development of a risk -aware organizational culture starts with defining
responsibilities. For example, under some risk management structures, professionals
carrying out test and evaluation tasks are independent from AI system developers and
report through risk management functions or directly to executives. This kind of structure
may help counter implicit biases such as groupthink or sunk cost fallacy and bolster risk
management functions, so efforts are not easily bypassed or ignored.
Instilling a culture where AI system design and implementation decisions can be questioned
and course-corrected by empowered AI actors can enhance organizations’ abilities to
anticipate and effectively manage risks before they become ingrained.
Suggested Actions
• Establish policies that define the AI risk management roles and responsibilities for
positions directly and indirectly related to AI systems, including, but not limited to:
• Boards of directors or advisory committees
• Senior management
• AI audit functions
• Product management
• Project management
• AI design
• AI development
• Human-AI interaction
• AI testing and evaluation
• AI acquisition and procurement
• Impact assessment functions
• Oversight functions
• Establish policies that promote regular communication among AI actors participating in
AI risk management efforts.
• Establish policies that separate management of AI system development functions from
AI system testing functions, to enable independent course-correction of AI systems.
• Establish policies to identify, increase the transparency of, and prevent conflicts of
interest in AI risk management efforts.
• Establish policies to counteract confirmation bias and market incentives that may
hinder AI risk management efforts.
• Establish policies that incentivize AI actors to collaborate with existing legal, oversight,
compliance, or enterprise risk functions in their AI risk management activities.
Transparency & Documentation
Organizations can document the following
• To what extent has the entity clarified the roles, responsibilities, and delegated
authorities to relevant stakeholders?
• Who is ultimately responsible for the decisions of the AI and is this person aware of the
intended uses and limitations of the analytic?
• Are the responsibilities of the personnel involved in the various AI governance
processes clearly defined?
• What are the roles, responsibilities, and delegation of authorities of personnel involved
in the design, development, deployment, assessment and monitoring of the AI system?
• Did your organization implement accountability-based practices in data management
and protection (e.g. the PDPA and OECD Privacy Principles)?
AI Transparency Resources
• WEF Model AI Governance Framework Assessment 2020.
• WEF Companion to the Model AI Governance Framework - 2020.
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
References
Andrew Smith, “Using Artificial Intelligence and Algorithms,” FTC Business Blog (Apr. 8,
2020).
Off. Superintendent Fin. Inst. Canada, Enterprise-Wide Model Risk Management for Deposit-
Taking Institutions, E-23 (Sept. 2017).
Bd. Governors Fed. Rsrv. Sys., Supervisory Guidance on Model Risk Management, SR Letter
11-7 (Apr. 4, 2011).
Off. Comptroller Currency, Comptroller’s Handbook: Model Risk Management (Aug. 2021).
ISO, “Information Technology — Artificial Intelligence — Guidelines for AI applications,”
ISO/IEC CD 5339. See Section 6, “Stakeholders’ perspectives and AI application framework.”
GOVERN 2.2
The organization’s personnel and partners receive AI risk management training to enable
them to perform their duties and responsibilities consistent with related policies,
procedures, and agreements.
About
To enhance AI risk management adoption and effectiveness, organizations are encouraged
to identify and integrate appropriate training curricula into enterprise learning
requirements. Through regular training, AI actors can maintain awareness of:
• AI risk management goals and their role in achieving them.
• Organizational policies, applicable laws and regulations, and industry best practices and
norms.
See MAP 3.4 and MAP 3.5 for additional relevant information.
Suggested Actions
• Establish policies for personnel addressing ongoing education about:
• Applicable laws and regulations for AI systems.
• Potential negative impacts that may arise from AI systems.
• Organizational AI policies.
• Trustworthy AI characteristics.
• Ensure that trainings are suitable across AI actor sub-groups: for AI actors carrying out
technical tasks (e.g., developers, operators, etc.) as compared to AI actors in oversight
roles (e.g., legal, compliance, audit, etc.).
• Ensure that trainings comprehensively address technical and socio-technical aspects of
AI risk management.
• Verify that organizational AI policies include mechanisms for internal AI personnel to
acknowledge and commit to their roles and responsibilities.
• Verify that organizational policies address change management and include
mechanisms to communicate and acknowledge substantial AI system changes.
• Define paths along internal and external chains of accountability to escalate risk
concerns.
Transparency & Documentation
Organizations can document the following
• Are the relevant staff dealing with AI systems properly trained to interpret AI model
output and decisions as well as to detect and manage bias in data?
• How does the entity determine the necessary skills and experience needed to design,
develop, deploy, assess, and monitor the AI system?
• How does the entity assess whether personnel have the necessary skills, training,
resources, and domain knowledge to fulfill their assigned responsibilities?
• What efforts has the entity undertaken to recruit, develop, and retain a workforce with
backgrounds, experience, and perspectives that reflect the community impacted by the
AI system?
AI Transparency Resources
• WEF Model AI Governance Framework Assessment 2020.
• WEF Companion to the Model AI Governance Framework - 2020.
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
References
Off. Comptroller Currency, Comptroller’s Handbook: Model Risk Management (Aug. 2021).
“Developing Staff Trainings for Equitable AI,” Partnership on Employment & Accessible
Technology (PEAT, peatworks.org).
GOVERN 2.3
Executive leadership of the organization takes responsibility for decisions about risks
associated with AI system development and deployment.
About
Senior leadership and members of the C-Suite in organizations that maintain an AI portfolio
should maintain awareness of AI risks, affirm the organizational appetite for such risks, and
be responsible for managing those risks.
Accountability ensures that a specific team and individual is responsible for AI risk
management efforts. Some organizations grant authority and resources (human and
budgetary) to a designated officer who ensures adequate performance of the institution’s AI
portfolio (e.g. predictive modeling, machine learning).
Suggested Actions
• Organizational management can:
• Declare risk tolerances for developing or using AI systems.
• Support AI risk management efforts, and play an active role in such efforts.
• Integrate a risk and harm prevention mindset throughout the AI lifecycle as part of
organizational culture.
• Support competent risk management executives.
• Delegate the power, resources, and authorization to perform risk management to
each appropriate level throughout the management chain.
• Organizations can establish board committees for AI risk management and oversight
functions and integrate those functions within the organization’s broader enterprise
risk management approaches.
Transparency & Documentation
Organizations can document the following
• Did your organization’s board and/or senior management sponsor, support and
participate in your organization’s AI governance?
• What are the roles, responsibilities, and delegation of authorities of personnel involved
in the design, development, deployment, assessment and monitoring of the AI system?
• Do AI solutions provide sufficient information to assist the personnel to make an
informed decision and take actions accordingly?
• To what extent has the entity clarified the roles, responsibilities, and delegated
authorities to relevant stakeholders?
AI Transparency Resources
• WEF Companion to the Model AI Governance Framework - 2020.
• GAO -21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
References
Bd. Governors Fed. Rsrv. Sys., Supervisory Guidance on Model Risk Management, SR Letter
11-7 (Apr. 4, 2011).
Off. Superintendent Fin. Inst. Canada, Enterprise-Wide Model Risk Management for Deposit-
Taking Institutions, E-23 (Sept. 2017).
GOVERN 3.1
Decision-making related to mapping, measuring, and managing AI risks throughout the
lifecycle is informed by a diverse team (e.g., diversity of demographics, disciplines,
experience, expertise, and backgrounds).
About
A diverse team that includes AI actors with a range of experience, disciplines, and
backgrounds enhances organizational capacity and capability for anticipating risks and is
better equipped to carry out risk management. Consultation with external personnel may
be necessary when internal teams lack a diverse range of lived experiences or disciplinary
expertise.
To extend the benefits of diversity, equity, and inclusion to both the users and AI actors, it is
recommended that teams are composed of a diverse group of individuals who reflect a
range of backgrounds, perspectives and expertise.
Without commitment from senior leadership, beneficial aspects of team diversity and
inclusion can be overridden by unstated organizational incentives that inadvertently
conflict with the broader values of a diverse workforce.
Suggested Actions
Organizational management can:
• Define policies and hiring practices at the outset that promote interdisciplinary roles,
competencies, skills, and capacity for AI efforts.
• Define policies and hiring practices that lead to demographic and domain expertise
diversity; empower staff with necessary resources and support, and facilitate the
contribution of staff feedback and concerns without fear of reprisal.
• Establish policies that facilitate inclusivity and the integration of new insights into
existing practice.
• Seek external expertise to supplement organizational diversity, equity, inclusion, and
accessibility where internal expertise is lacking.
• Establish policies that incentivize AI actors to collaborate with existing
nondiscrimination, accessibility and accommodation, and human resource functions,
employee resource groups (ERGs), and diversity, equity, inclusion, and accessibility
(DEIA) initiatives.
Transparency & Documentation
Organizations can document the following
• Are the relevant staff dealing with AI systems properly trained to interpret AI model
output and decisions as well as to detect and manage bias in data?
• Entities include diverse perspectives from technical and non -technical communities
throughout the AI life cycle to anticipate and mitigate unintended consequences
including potential bias and discrimination.
• Stakeholder involvement: Include diverse perspectives from a community of
stakeholders throughout the AI life cycle to mitigate risks.
• Strategies to incorporate diverse perspectives include establishing collaborative
processes and multidisciplinary teams that involve subject matter experts in data
science, software development, civil liberties, privacy and security, legal counsel, and
risk management.
• To what extent are the established procedures effective in mitigating bias, inequity, and
other concerns resulting from the system?
AI Transparency Resources
• WEF Model AI Governance Framework Assessment 2020.
• Datasheets for Datasets.
References
Dylan Walsh, “How can human-centered AI fight bias in machines and people?” MIT Sloan
Mgmt. Rev., 2021.
Michael Li, “To Build Less-Biased AI, Hire a More Diverse Team,” Harvard Bus. Rev., 2020.
Bo Cowgill et al., “Biased Programmers? Or Biased Data? A Field Experiment in
Operationalizing AI Ethics,” 2020.
Naomi Ellemers, Floortje Rink, “Diversity in work groups,” Current Opinion in Psychology,
vol. 11, pp. 49–53, 2016.
Katrin Talke, Søren Salomo, Alexander Kock, “Top management team diversity and strategic
innovation orientation: The relationship and consequences for innovativeness and
performance,” Journal of Product Innovation Management, vol. 28, pp. 819–832, 2011.
Sarah Myers West, Meredith Whittaker, and Kate Crawford, “Discriminating Systems:
Gender, Race, and Power in AI,” AI Now Institute, Tech. Rep., 2019.
Sina Fazelpour, Maria De-Arteaga, Diversity in sociotechnical machine learning systems. Big
Data & Society. January 2022. doi:10.1177/20539517221082027
Mary L. Cummings and Songpo Li, 2021a. Sources of subjectivity in machine learning
models. ACM Journal of Data and Information Quality, 13(2), 1–9.
“Staffing for Equitable AI: Roles & Responsibilities,” Partnership on Employment &
Accessible Technology (PEAT, peatworks.org). Accessed Jan. 6, 2023.
GOVERN 3.2
Policies and procedures are in place to define and differentiate roles and responsibilities for
human-AI configurations and oversight of AI systems.
About
Identifying and managing AI risks and impacts are enhanced when a broad set of
perspectives and actors across the AI lifecycle, including technical, legal, compliance, social
science, and human factors expertise is engaged. AI actors include those who operate, use,
or interact with AI systems for downstream tasks, or monitor AI system performance.
Effective risk management efforts include:
• clear definitions and differentiation of the various human roles and responsibilities for
AI system oversight and governance
• recognizing and clarifying differences between AI system overseers and those using or
interacting with AI systems.
Suggested Actions
• Establish policies and procedures that define and differentiate the various human roles
and responsibilities when using, interacting with, or monitoring AI systems.
• Establish procedures for capturing and tracking risk information related to human-AI
configurations and associated outcomes.
• Establish policies for the development of proficiency standards for AI actors carrying
out system operation tasks and system oversight tasks.
• Establish specified risk management training protocols for AI actors carrying out
system operation tasks and system oversight tasks.
• Establish policies and procedures regarding AI actor roles and responsibilities for
human oversight of deployed systems.
• Establish policies and procedures defining human-AI configurations (configurations
where AI systems are explicitly designated and treated as team members in primarily
human teams) in relation to organizational risk tolerances, and associated
documentation.
• Establish policies to enhance the explanation, interpretation, and overall transparency
of AI systems.
• Establish policies for managing risks regarding known difficulties in human-AI
configurations, human-AI teaming, and AI system user experience and user interactions
(UI/UX).
Transparency & Documentation
Organizations can document the following
• What type of information is accessible on the design, operations, and limitations of the
AI system to external stakeholders, including end users, consumers, regulators, and
individuals impacted by use of the AI system?
• To what extent has the entity documented the appropriate level of human involvement
in AI-augmented decision-making?
• How will the accountable human(s) address changes in accuracy and precision due to
either an adversary’s attempts to disrupt the AI or unrelated changes in
operational/business environment, which may impact the accuracy of the AI?
• To what extent has the entity clarified the roles, responsibilities, and delegated
authorities to relevant stakeholders?
• How does the entity assess whether personnel have the necessary skills, training,
resources, and domain knowledge to fulfill their assigned responsibilities?
AI Transparency Resources
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• Intel.gov: AI Ethics Framework for Intelligence Community - 2020.
• WEF Companion to the Model AI Governance Framework - 2020.
References
Madeleine Clare Elish, "Moral Crumple Zones: Cautionary tales in human-robot interaction,"
Engaging Science, Technology, and Society, Vol. 5, 2019.
“Human-AI Teaming: State-of-the-Art and Research Needs,” National Academies of
Sciences, Engineering, and Medicine, 2022.
Ben Green, "The Flaws Of Policies Requiring Human Oversight Of Government Algorithms,"
Computer Law & Security Review 45 (2022).
David A. Broniatowski. 2021. Psychological Foundations of Explainability and
Interpretability in Artificial Intelligence. National Institute of Standards and Technology
(NIST) IR 8367. National Institute of Standards and Technology, Gaithersburg, MD.
Off. Comptroller Currency, Comptroller’s Handbook: Model Risk Management (Aug. 2021).
GOVERN 4.1
Organizational policies and practices are in place to foster a critical thinking and safety-first
mindset in the design, development, deployment, and uses of AI systems to minimize
negative impacts.
About
A risk culture and accompanying practices can help organizations effectively triage the most
critical risks. Organizations in some industries implement three (or more) “lines of
defense,” where separate teams are held accountable for different aspects of the system
lifecycle, such as development, risk management, and auditing. While a traditional
three-lines approach may be impractical for smaller organizations, leadership can commit to
cultivating a strong risk culture through other means. For example, “effective challenge” is a
culture-based practice that encourages critical thinking and questioning of important
design and implementation decisions by experts with the authority and stature to make
such changes.
Red-teaming is another risk measurement and management approach. This practice
consists of adversarial testing of AI systems under stress conditions to seek out failure
modes or vulnerabilities in the system. Red teams are composed of external experts or
personnel who are independent from internal AI actors.
Suggested Actions
• Establish policies that require inclusion of oversight functions (legal, compliance, risk
management) from the outset of the system design process.
• Establish policies that promote effective challenge of AI system design, implementation,
and deployment decisions, via mechanisms such as the three lines of defense, model
audits, or red-teaming, to minimize workplace risks such as groupthink.
• Establish policies that incentivize a safety-first mindset and general critical thinking and
review at an organizational and procedural level.
• Establish whistleblower protections for insiders who report on perceived serious
problems with AI systems.
• Establish policies to integrate a harm and risk prevention mindset throughout the AI
lifecycle.
Transparency & Documentation
Organizations can document the following
• To what extent has the entity documented the AI system’s development, testing
methodology, metrics, and performance outcomes?
• Are organizational information sharing practices widely followed and transparent, such
that related past failed designs can be avoided?
• Are training manuals and other resources for carrying out incident response
documented and available?
• Are processes for operator reporting of incidents and near-misses documented and
available?
• How might revealing mismatches between claimed and actual system performance help
users understand limitations and anticipate risks and impacts?
AI Transparency Resources
• Datasheets for Datasets.
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• WEF Model AI Governance Framework Assessment 2020.
References
Bd. Governors Fed. Rsrv. Sys., Supervisory Guidance on Model Risk Management, SR Letter
11-7 (Apr. 4, 2011).
Patrick Hall, Navdeep Gill, and Benjamin Cox, “Responsible Machine Learning,” O’Reilly
Media, 2020.
Off. Superintendent Fin. Inst. Canada, Enterprise-Wide Model Risk Management for
Deposit-Taking Institutions, E-23 (Sept. 2017).
GAO, “Artificial Intelligence: An Accountability Framework for Federal Agencies and Other
Entities,” GAO@100 (GAO-21-519SP), June 2021.
Donald Sull, Stefano Turconi, and Charles Sull, “When It Comes to Culture, Does Your
Company Walk the Talk?” MIT Sloan Mgmt. Rev., 2020.
Kathy Baxter, AI Ethics Maturity Model, Salesforce.
Upol Ehsan, Q. Vera Liao, Samir Passi, Mark O. Riedl, and Hal Daumé. 2024. Seamful XAI:
Operationalizing Seamful Design in Explainable AI. Proc. ACM Hum.-Comput. Interact. 8,
CSCW1, Article 119. https://doi.org/10.1145/3637396
GOVERN 4.2
Organizational teams document the risks and potential impacts of the AI technology they
design, develop, deploy, evaluate and use, and communicate about the impacts more
broadly.
About
Impact assessments are one approach for driving responsible technology development
practices. Within a specific use case, these assessments can provide a high-level
structure for organizations to frame the risks of a given algorithm or deployment. Impact
assessments can also serve as a mechanism for organizations to articulate risks and
generate documentation for managing and oversight activities when harms do arise.
Impact assessments may:
• be applied at the beginning of a process but also iteratively and regularly, since goals
and outcomes can evolve over time.
• include perspectives from AI actors, including operators, users, and potentially
impacted communities (including historically marginalized communities, those with
disabilities, and individuals impacted by the digital divide).
• assist in “go/no-go” decisions for an AI system.
• consider conflicts of interest, or undue influence, related to the organizational team
being assessed.
See the MAP function playbook guidance for more information relating to impact
assessments.
Suggested Actions
• Establish impact assessment policies and processes for AI systems used by the
organization.
• Align organizational impact assessment activities with relevant regulatory or legal
requirements.
• Verify that impact assessment activities are appropriate to evaluate the potential
negative impact of a system and how quickly a system changes, and that assessments
are applied on a regular basis.
• Utilize impact assessments to inform broader evaluations of AI system risk.
Transparency & Documentation
Organizations can document the following
• How has the entity identified and mitigated potential impacts of bias in the data,
including inequitable or discriminatory outcomes?
• How has the entity documented the AI system’s data provenance, including sources,
origins, transformations, augmentations, labels, dependencies, constraints, and
metadata?
• To what extent has the entity clearly defined technical specifications and requirements
for the AI system?
• To what extent has the entity documented and communicated the AI system’s
development, testing methodology, metrics, and performance outcomes?
• Have you documented and explained that machine errors may differ from human
errors?
AI Transparency Resources
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• Datasheets for Datasets.
References
Dillon Reisman, Jason Schultz, Kate Crawford, Meredith Whittaker, “Algorithmic Impact
Assessments: A Practical Framework For Public Agency Accountability,” AI Now Institute,
2018.
H.R. 2231, 116th Cong. (2019).
BSA The Software Alliance (2021) Confronting Bias: BSA’s Framework to Build Trust in AI.
Anthony M. Barrett, Dan Hendrycks, Jessica Newman and Brandie Nonnecke. Actionable
Guidance for High -Consequence AI Risk Management: Towards Standards Addressing AI
Catastrophic Risks. ArXiv abs/2206.08966 (2022) https://arxiv.org/abs/2206.08966
David Wright, “Making Privacy Impact Assessments More Effective." The Information
Society 29, 2013.
Konstantinia Charitoudi and Andrew Blyth. A Socio-Technical Approach to Cyber Risk
Management and Impact Assessment. Journal of Information Security 4, 1 (2013), 33-41.
Emanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, Madeleine Clare Elish, & Jacob Metcalf.
2021. “Assembling Accountability: Algorithmic Impact Assessment for the Public Interest”.
Microsoft. Responsible AI Impact Assessment Template. 2022.
Microsoft. Responsible AI Impact Assessment Guide. 2022.
Microsoft. Foundations of assessing harm. 2022.
Mauritz Kop, “AI Impact Assessment & Code of Conduct,” Futurium, May 2019.
Andrew D. Selbst, “An Institutional View Of Algorithmic Impact Assessments,” Harvard
Journal of Law & Technology, vol. 35, no. 1, 2021.
Ada Lovelace Institute. 2022. Algorithmic Impact Assessment: A Case Study in Healthcare.
Accessed July 14, 2022.
Kathy Baxter, AI Ethics Maturity Model, Salesforce.
Ravit Dotan, Borhane Blili-Hamelin, Ravi Madhavan, Jeanna Matthews, Joshua Scarpino, &
Carol Anderson. (2024). A Flexible Maturity Model for AI Governance Based on the NIST AI
Risk Management Framework [Technical Report]. IEEE.
https://ieeeusa.org/product/a-flexible-maturity-model-for-ai-governance
GOVERN 4.3
Organizational practices are in place to enable AI testing, identification of incidents, and
information sharing.
About
Identifying AI system limitations, detecting and tracking negative impacts and incidents,
and sharing information about these issues with appropriate AI actors will improve risk
management. Issues such as concept drift, AI bias and discrimination, shortcut learning or
underspecification are difficult to identify using current standard AI testing processes.
Organizations can institute in-house use and testing policies and procedures to identify and
manage such issues. Efforts can take the form of pre-alpha or pre-beta testing, or deploying
internally developed systems or products within the organization. Testing may entail
limited and controlled in-house, or publicly available, AI system testbeds, and accessibility
of AI system interfaces and outputs.
Without policies and procedures that enable consistent testing practices, risk management
efforts may be bypassed or ignored, exacerbating risks or leading to inconsistent risk
management activities.
Information sharing about impacts or incidents detected during testing or deployment can:
• draw attention to AI system risks, failures, abuses or misuses,
• allow organizations to benefit from insights based on a wide range of AI applications
and implementations, and
• allow organizations to be more proactive in avoiding known failure modes.
Organizations may consider sharing incident information with the AI Incident Database, the
AIAAIC, users, impacted communities, or with traditional cyber vulnerability databases,
such as the MITRE CVE list.
Suggested Actions
• Establish policies and procedures to facilitate and equip AI system testing.
• Establish organizational commitment to identifying AI system limitations and sharing of
insights about limitations within appropriate AI actor groups.
• Establish policies for reporting and documenting incident response.
• Establish policies and processes regarding public disclosure of incidents and
information sharing.
• Establish guidelines for incident handling related to AI system risks and performance.
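As a loose illustration of the kind of incident record that supports the information sharing described above, the sketch below uses hypothetical field names; public trackers such as the AI Incident Database define their own submission formats, and this is not a prescribed schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import date

# Illustrative sketch only: all field names are assumptions, not a standard.
@dataclass
class AIIncidentRecord:
    system_name: str
    incident_date: date
    description: str                 # what happened, observed harms or near-misses
    failure_mode: str                # e.g. "concept drift", "shortcut learning"
    affected_parties: list[str] = field(default_factory=list)
    response_actions: list[str] = field(default_factory=list)
    shared_with: list[str] = field(default_factory=list)  # e.g. internal teams, AI Incident Database

    def to_report(self) -> dict:
        """Serialize the record for internal logging or external disclosure."""
        d = asdict(self)
        d["incident_date"] = self.incident_date.isoformat()
        return d

record = AIIncidentRecord(
    system_name="loan-screening-model",
    incident_date=date(2023, 3, 1),
    description="Approval rates shifted after an upstream data change",
    failure_mode="concept drift",
)
assert record.to_report()["failure_mode"] == "concept drift"
```

Keeping records in a structured, serializable form makes it easier to share consistent information with internal teams and, where appropriate, external databases.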
Transparency & Documentation
Organizations can document the following
• Did your organization address usability problems and test whether user interfaces
served their intended purposes? Did you consult the community or end users at the earliest
stages of development to ensure there is transparency on the technology used and how
it is deployed?
• Did your organization implement a risk management system to address risks involved
in deploying the identified AI solution (e.g. personnel risk or changes to commercial
objectives)?
• To what extent can users or parties affected by the outputs of the AI system test the AI
system and provide feedback?
AI Transparency Resources
• WEF Model AI Governance Framework Assessment 2020.
• WEF Companion to the Model AI Governance Framework - 2020.
References
Sean McGregor, “Preventing Repeated Real World AI Failures by Cataloging Incidents: The
AI Incident Database,” arXiv:2011.08512 [cs], Nov. 2020.
Christopher Johnson, Mark Badger, David Waltermire, Julie Snyder, and Clem Skorupka,
“Guide to cyber threat information sharing,” National Institute of Standards and Technology,
NIST Special Publication 800-150, Nov. 2016.
Mengyi Wei, Zhixuan Zhou (2022). AI Ethics Issues in Real World: Evidence from AI Incident
Database. ArXiv, abs/2206.07635.
BSA The Software Alliance (2021) Confronting Bias: BSA’s Framework to Build Trust in AI.
“Using Combined Expertise to Evaluate Web Accessibility,” W3C Web Accessibility Initiative.
GOVERN 5.1
Organizational policies and practices are in place to collect, consider, prioritize, and
integrate feedback from those external to the team that developed or deployed the AI
system regarding the potential individual and societal impacts related to AI risks.
About
Beyond internal and laboratory-based system testing, organizational policies and practices
may consider AI system fitness-for-purpose related to the intended context of use.
Participatory stakeholder engagement is one type of qualitative activity to help AI actors
answer questions such as whether to pursue a project or how to design with impact in
mind. This type of feedback, with domain expert input, can also assist AI actors to identify
emergent scenarios and risks in certain AI applications. The consideration of when and how
to convene a group and the kinds of individuals, groups, or community organizations to
include is an iterative process connected to the system's purpose and its level of risk. Other
factors relate to how to collaboratively and respectfully capture stakeholder feedback and
insight that is useful, without being a solely perfunctory exercise.
These activities are best carried out by personnel with expertise in participatory practices,
qualitative methods, and translation of contextual feedback for technical audiences.
Participatory engagement is not a one-time exercise and is best carried out from the very
beginning of AI system commissioning through the end of the lifecycle. Organizations can
consider how to incorporate engagement when beginning a project and as part of their
monitoring of systems. Engagement is often utilized as a consultative practice, but this
perspective may inadvertently lead to “participation washing.” Organizational transparency
about the purpose and goal of the engagement can help mitigate that possibility.
Organizations may also consider targeted consultation with subject matter experts as a
complement to participatory findings. Experts may assist internal staff in identifying and
conceptualizing potential negative impacts that were previously not considered.
Suggested Actions
• Establish AI risk management policies that explicitly address mechanisms for collecting,
evaluating, and incorporating stakeholder and user feedback that could include:
• Recourse mechanisms for faulty AI system outputs.
• Bug bounties.
• Human-centered design.
• User-interaction and experience research.
• Participatory stakeholder engagement with individuals and communities that may
experience negative impacts.
• Verify that stakeholder feedback is considered and addressed, including environmental
concerns, and across the entire population of intended users, including historically
excluded populations, people with disabilities, older people, and those with limited
access to the internet and other basic technologies.
• Clarify the organization’s principles as they apply to AI systems, considering those
which have been proposed publicly, to inform external stakeholders of the
organization’s values. Consider publishing or adopting AI principles.
Transparency & Documentation
Organizations can document the following
• What type of information is accessible on the design, operations, and limitations of the
AI system to external stakeholders, including end users, consumers, regulators, and
individuals impacted by use of the AI system?
• To what extent has the entity clarified the roles, responsibilities, and delegated
authorities to relevant stakeholders?
• How easily accessible and current is the information available to external stakeholders?
• What was done to mitigate or reduce the potential for harm?
• Stakeholder involvement: Include diverse perspectives from a community of
stakeholders throughout the AI life cycle to mitigate risks.
AI Transparency Resources
• Datasheets for Datasets.
• GAO -21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• AI policies and initiatives, in Artificial Intelligence in Society, OECD, 2019.
• Stakeholders in Explainable AI, Sep. 2018.
References
ISO, “Ergonomics of human-system interaction — Part 210: Human-centered design for
interactive systems,” ISO 9241-210:2019 (2nd ed.), July 2019.
Rumman Chowdhury and Jutta Williams, "Introducing Twitter’s first algorithmic bias
bounty challenge."
Leonard Haas and Sebastian Gießler, “In the realm of paper tigers – exploring the failings of
AI ethics guidelines,” AlgorithmWatch, 2020.
Josh Kenway, Camille Francois, Dr. Sasha Costanza-Chock, Inioluwa Deborah Raji, & Dr. Joy
Buolamwini. 2022. Bug Bounties for Algorithmic Harms? Algorithmic Justice League.
Accessed July 14, 2022.
Microsoft Community Jury, Azure Application Architecture Guide.
“Definition of independent verification and validation (IV&V),” in IEEE 1012, IEEE Standard
for System, Software, and Hardware Verification and Validation, Annex C.
GOVERN 5.2
Mechanisms are established to enable AI actors to regularly incorporate adjudicated
feedback from relevant AI actors into system design and implementation.
About
Organizational policies and procedures that equip AI actors with the processes, knowledge,
and expertise needed to inform collaborative decisions about system deployment improve
risk management. These decisions are closely tied to AI systems and organizational risk
tolerance.
Risk tolerance, established by organizational leadership, reflects the level and type of risk
the organization will accept while conducting its mission and carrying out its strategy.
When risks arise, resources are allocated based on the assessed risk of a given AI system.
Organizations typically apply a risk tolerance approach in which higher-risk systems receive
larger allocations of risk management resources and lower-risk systems receive fewer
resources.
Suggested Actions
• Explicitly acknowledge that AI systems, and the use of AI, present inherent costs and
risks along with potential benefits.
• Define reasonable risk tolerances for AI systems informed by laws, regulation, best
practices, or industry standards.
• Establish policies that ensure all relevant AI actors are provided with meaningful
opportunities to provide feedback on system design and implementation.
• Establish policies that define how to assign AI systems to established risk tolerance
levels by combining system impact assessments with the likelihood that an impact
occurs. Such assessment often entails some combination of:
• Econometric evaluations of impacts and impact likelihoods to assess AI system risk.
• Red-amber-green (RAG) scales for impact severity and likelihood to assess AI
system risk.
• Establishment of policies for allocating risk management resources along
established risk tolerance levels, with higher-risk systems receiving more risk
management resources and oversight.
• Establishment of policies for approval, conditional approval, and disapproval of the
design, implementation, and deployment of AI systems.
• Establish policies facilitating the early decommissioning of AI systems that surpass an
organization’s ability to reasonably mitigate risks.
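The red-amber-green scale named above can be sketched as a simple lookup that combines impact severity and likelihood; the 3×3 scale, its labels, and the thresholds here are illustrative assumptions, not values prescribed by the framework, since tolerance levels are set by organizational leadership.

```python
# Illustrative RAG (red-amber-green) risk matrix; scale values are assumptions.
SEVERITY = ["low", "medium", "high"]
LIKELIHOOD = ["unlikely", "possible", "likely"]

def rag_rating(severity: str, likelihood: str) -> str:
    """Combine impact severity and likelihood into a RAG rating."""
    score = SEVERITY.index(severity) + LIKELIHOOD.index(likelihood)
    if score >= 3:
        return "red"    # highest oversight and resources; candidate for disapproval
    if score >= 2:
        return "amber"  # conditional approval with added mitigations
    return "green"      # approved under routine monitoring

assert rag_rating("high", "likely") == "red"
assert rag_rating("low", "unlikely") == "green"
```

A rating like this can then drive resource allocation and approval decisions, with red systems receiving the most risk management attention.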
Transparency & Documentation
Organizations can document the following
• Who is ultimately responsible for the decisions of the AI and is this person aware of the
intended uses and limitations of the analytic?
• Who will be responsible for maintaining, re -verifying, monitoring, and updating this AI
once deployed?
• Who is accountable for the ethical considerations during all stages of the AI lifecycle?
• To what extent are the established procedures effective in mitigating bias, inequity, and
other concerns resulting from the system?
• Does the AI solution provide sufficient information to assist the personnel to make an
informed decision and take actions accordingly?
AI Transparency Resources
• WEF Model AI Governance Framework Assessment 2020.
• WEF Companion to the Model AI Governance Framework - 2020.
• Stakeholders in Explainable AI, Sep. 2018.
• AI policies and initiatives, in Artificial Intelligence in Society, OECD, 2019.
References
Bd. Governors Fed. Rsrv. Sys., Supervisory Guidance on Model Risk Management, SR Letter
11-7 (Apr. 4, 2011)
Off. Comptroller Currency, Comptroller’s Handbook: Model Risk Management (Aug. 2021).
The Office of the Comptroller of the Currency. Enterprise Risk Appetite Statement. (Nov. 20,
2019). Retrieved on July 12, 2022.
GOVERN 6.1
Policies and procedures are in place that address AI risks associated with third-party
entities, including risks of infringement of a third party’s intellectual property or other
rights.
About
Risk measurement and management can be complicated by how customers use or integrate
third-party data or systems into AI products or services, particularly without sufficient
internal governance structures and technical safeguards.
Organizations usually engage multiple third parties for external expertise, data, software
packages (both open source and commercial), and software and hardware platforms across
the AI lifecycle. This engagement has beneficial uses but can also increase the complexity of
risk management efforts.
Organizational approaches to managing third-party (positive and negative) risk may be
tailored to the resources, risk profile, and use case for each system. Organizations can apply
governance approaches to third-party AI systems and data as they would for internal
resources, including open source software, publicly available data, and commercially
available models.
Suggested Actions
• Collaboratively establish policies that address third-party AI systems and data.
• Establish policies related to:
• Transparency into third-party system functions, including knowledge about training
data, training and inference algorithms, and assumptions and limitations.
• Thorough testing of third-party AI systems. (See MEASURE for more detail.)
• Requirements for clear and complete instructions for third-party system usage.
• Evaluate policies for third-party technology.
• Establish policies that address supply chain, full product lifecycle and associated
processes, including legal, ethical, and other issues concerning procurement and use of
third-party software or hardware systems and data.
Transparency & Documentation
Organizations can document the following
• Did you establish mechanisms that facilitate the AI system’s auditability (e.g.
traceability of the development process, the sourcing of training data and the logging of
the AI system’s processes, outcomes, positive and negative impact)?
• If a third party created the AI, how will you ensure a level of explainability or
interpretability?
• Did you ensure that the AI system can be audited by independent third parties?
• Did you establish a process for third parties (e.g. suppliers, end users, subjects,
distributors/vendors or workers) to report potential vulnerabilities, risks or biases in
the AI system?
• To what extent does the plan specifically address risks associated with acquisition,
procurement of packaged software from vendors, cybersecurity controls, computational
infrastructure, data, data science, deployment mechanics, and system failure?
AI Transparency Resources
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• Intel.gov: AI Ethics Framework for Intelligence Community - 2020.
• WEF Model AI Governance Framework Assessment 2020.
• WEF Companion to the Model AI Governance Framework - 2020.
• AI policies and initiatives, in Artificial Intelligence in Society, OECD, 2019.
• Assessment List for Trustworthy AI (ALTAI) - The High-Level Expert Group on AI -
2019.
References
Bd. Governors Fed. Rsrv. Sys., Supervisory Guidance on Model Risk Management, SR Letter
11-7 (Apr. 4, 2011)
“Proposed Interagency Guidance on Third-Party Relationships: Risk Management,” 2021.
Off. Comptroller Currency, Comptroller’s Handbook: Model Risk Management (Aug. 2021).
GOVERN 6.2
Contingency processes are in place to handle failures or incidents in third-party data or AI
systems deemed to be high-risk.
About
To mitigate the potential harms of third-party system failures, organizations may
implement policies and procedures that include redundancies for covering third-party
functions.
Suggested Actions
• Establish policies for handling third-party system failures to include consideration of
redundancy mechanisms for vital third-party AI systems.
• Verify that incident response plans address third-party AI systems.
Transparency & Documentation
Organizations can document the following
• To what extent does the plan specifically address risks associated with acquisition,
procurement of packaged software from vendors, cybersecurity controls, computational
infrastructure, data, data science, deployment mechanics, and system failure?
• Did you establish a process for third parties (e.g. suppliers, end users, subjects,
distributors/vendors or workers) to report potential vulnerabilities, risks or biases in
the AI system?
• If your organization obtained datasets from a third party, did your organization assess
and manage the risks of using such datasets?
AI Transparency Resources
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• WEF Model AI Governance Framework Assessment 2020.
• WEF Companion to the Model AI Governance Framework - 2020.
• AI policies and initiatives, in Artificial Intelligence in Society, OECD, 2019.
References
Bd. Governors Fed. Rsrv. Sys., Supervisory Guidance on Model Risk Management, SR Letter
11-7 (Apr. 4, 2011)
“Proposed Interagency Guidance on Third-Party Relationships: Risk Management,” 2021.
Off. Comptroller Currency, Comptroller’s Handbook: Model Risk Management (Aug. 2021).
MANAGE
AI risks based on assessments and other analytical output from the Map and Measure
functions are prioritized, responded to, and managed.
MANAGE 1.1
A determination is made as to whether the AI system achieves its intended purpose and
stated objectives and whether its development or deployment should proceed.
About
AI systems may not necessarily be the right solution for a given business task or problem. A
standard risk management practice is to formally weigh an AI system’s negative risks
against its benefits, and to determine if the AI system is an appropriate solution. Tradeoffs
among trustworthiness characteristics, such as deciding to deploy a system based on
system performance vs. system transparency, may require regular assessment throughout
the AI lifecycle.
Suggested Actions
• Consider trustworthiness characteristics when evaluating AI systems’ negative risks
and benefits.
• Utilize TEVV outputs from the Map and Measure functions when considering risk treatment.
• Regularly track and monitor negative risks and benefits throughout the AI system
lifecycle including in post-deployment monitoring.
• Regularly assess and document system performance relative to trustworthiness
characteristics and tradeoffs between negative risks and opportunities.
• Evaluate tradeoffs in connection with real -world use cases and impacts and as
enumerated in Map function outcomes.
Transparency & Documentation
Organizations can document the following
• How do the technical specifications and requirements align with the AI system’s goals
and objectives?
• To what extent are the metrics consistent with system goals, objectives, and constraints,
including ethical and compliance considerations?
• What goals and objectives does the entity expect to achieve by designing, developing,
and/or deploying the AI system?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
• WEF Companion to the Model AI Governance Framework – Implementation and
Self-Assessment Guide for Organizations.
References
Arvind Narayanan. How to recognize AI snake oil. Retrieved October 15, 2022.
Board of Governors of the Federal Reserve System. SR 11-7: Guidance on Model Risk
Management. (April 4, 2011).
Emanuel Moss, Elizabeth Watkins, Ranjit Singh, Madeleine Clare Elish, Jacob Metcalf. 2021.
Assembling Accountability: Algorithmic Impact Assessment for the Public Interest. (June 29,
2021).
Fraser, Henry L and Bello y Villarino, Jose-Miguel, Where Residual Risks Reside: A
Comparative Approach to Art 9(4) of the European Union's Proposed AI Regulation
(September 30, 2021). https://ssrn.com/abstract=3960461
Microsoft. 2022. Microsoft Responsible AI Impact Assessment Template. (June 2022).
Office of the Comptroller of the Currency. 2021. Comptroller's Handbook: Model Risk
Management, Version 1.0, August 2021.
Solon Barocas, Asia J. Biega, Benjamin Fish, et al. 2020. When not to design, build, or deploy.
In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT*
'20). Association for Computing Machinery, New York, NY, USA, 695.
MANAGE 1.2
Treatment of documented AI risks is prioritized based on impact, likelihood, or available
resources or methods.
About
Risk refers to the composite measure of an event’s probability of occurring and the
magnitude (or degree) of the consequences of the corresponding event. The impacts, or
consequences, of AI systems can be positive, negative, or both and can result in
opportunities or risks.
Organizational risk tolerances are often informed by several internal and external factors,
including existing industry practices, organizational values, and legal or regulatory
requirements. Since risk management resources are often limited, organizations usually
assign them based on risk tolerance. AI risks that are deemed more serious receive more
oversight attention and risk management resources.
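The composite measure described above, probability of occurrence times magnitude of consequences, can be illustrated with a minimal prioritization sketch; the numeric scales and the example risks are assumptions for illustration only, not values from the framework.

```python
# Minimal sketch of risk prioritization; scales and entries are illustrative.
risks = [
    {"name": "biased outcomes", "likelihood": 0.6, "impact": 0.9},
    {"name": "service outage",  "likelihood": 0.3, "impact": 0.5},
    {"name": "privacy breach",  "likelihood": 0.2, "impact": 1.0},
]

# Composite score: probability of occurrence x magnitude of consequences.
for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# The most serious risks sort first, to receive more oversight and resources.
prioritized = sorted(risks, key=lambda r: r["score"], reverse=True)
assert prioritized[0]["name"] == "biased outcomes"
```

In practice the scores would feed the organization's risk tolerance levels, so that limited risk management resources flow to the highest-scoring systems.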
Suggested Actions
• Assign risk management resources relative to established risk tolerance. AI systems
with lower risk tolerances receive greater oversight, mitigation, and management
resources.
• Document AI risk tolerance determination practices and resource decisions.
• Regularly review risk tolerances and re-calibrate, as needed, in accordance with
information from AI system monitoring and assessment.
Transparency & Documentation
Organizations can document the following
• Did your organization implement a risk management system to address risks involved
in deploying the identified AI solution (e.g. personnel risk or changes to commercial
objectives)?
• What assessments has the entity conducted on data security and privacy impacts
associated with the AI system?
• Does your organization have an existing governance structure that can be leveraged to
oversee the organization’s use of AI?
AI Transparency Resources
• WEF Companion to the Model AI Governance Framework – Implementation and
Self-Assessment Guide for Organizations.
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
References
Arvind Narayanan. How to recognize AI snake oil. Retrieved October 15, 2022.
Board of Governors of the Federal Reserve System. SR 11-7: Guidance on Model Risk
Management. (April 4, 2011).
Emanuel Moss, Elizabeth Watkins, Ranjit Singh, Madeleine Clare Elish, Jacob Metcalf. 2021.
Assembling Accountability: Algorithmic Impact Assessment for the Public Interest. (June 29,
2021).
Fraser, Henry L and Bello y Villarino, Jose-Miguel, Where Residual Risks Reside: A
Comparative Approach to Art 9(4) of the European Union's Proposed AI Regulation
(September 30, 2021). https://ssrn.com/abstract=3960461
Microsoft. 2022. Microsoft Responsible AI Impact Assessment Template. (June 2022).
Office of the Comptroller of the Currency. 2021. Comptroller's Handbook: Model Risk
Management, Version 1.0, August 2021.
Solon Barocas, Asia J. Biega, Benjamin Fish, et al. 2020. When not to design, build, or deploy.
In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT*
'20). Association for Computing Machinery, New York, NY, USA, 695.
MANAGE 1.3
Responses to the AI risks deemed high priority, as identified by the Map function, are developed, planned, and documented. Risk response options can include mitigating, transferring, avoiding, or accepting.
About
Outcomes from GOVERN-1, MAP-5, and MEASURE-2 can be used to address and document
identified risks based on established risk tolerances. Organizations can follow existing
regulations and guidelines for risk criteria, tolerances and responses established by
organizational, domain, discipline, sector, or professional requirements. In lieu of such
guidance, organizations can develop risk response plans based on strategies such as
accepted model risk management, enterprise risk management, and information sharing
and disclosure practices.
Suggested Actions
• Observe regulatory and established organizational, sector, discipline, or professional
standards and requirements for applying risk tolerances within the organization.
• Document procedures for acting on AI system risks related to trustworthiness
characteristics.
• Prioritize risks involving physical safety, legal liabilities, regulatory compliance, and
negative impacts on individuals, groups, or society.
• Identify risk response plans and resources and organizational teams for carrying out
response functions.
• Store risk management and system documentation in an organized, secure repository
that is accessible by relevant AI Actors and appropriate personnel.
Transparency & Documentation
Organizations can document the following
• Has the system been reviewed to ensure the AI system complies with relevant laws,
regulations, standards, and guidance?
• To what extent has the entity defined and documented the regulatory environment —
including minimum requirements in laws and regulations?
• Did your organization implement a risk management system to address risks involved
in deploying the identified AI solution (e.g. personnel risk or changes to commercial
objectives)?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities.
• Datasheets for Datasets.
References
Arvind Narayanan. How to recognize AI snake oil. Retrieved October 15, 2022.
Board of Governors of the Federal Reserve System. SR 11-7: Guidance on Model Risk Management. (April 4, 2011).
Emanuel Moss, Elizabeth Watkins, Ranjit Singh, Madeleine Clare Elish, Jacob Metcalf. 2021. Assembling Accountability: Algorithmic Impact Assessment for the Public Interest. (June 29, 2021).
Fraser, Henry L and Bello y Villarino, Jose-Miguel. Where Residual Risks Reside: A Comparative Approach to Art 9(4) of the European Union's Proposed AI Regulation (September 30, 2021). [LINK](https://ssrn.com/abstract=3960461)
Microsoft. 2022. Microsoft Responsible AI Impact Assessment Template. (June 2022).
Office of the Comptroller of the Currency. 2021. Comptroller's Handbook: Model Risk Management, Version 1.0, August 2021.
Solon Barocas, Asia J. Biega, Benjamin Fish, et al. 2020. When not to design, build, or deploy. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). Association for Computing Machinery, New York, NY, USA, 695.
MANAGE 1.4
Negative residual risks (defined as the sum of all unmitigated risks) to both downstream
acquirers of AI systems and end users are documented.
About
Organizations may choose to accept or transfer some of the documented risks from MAP
and MANAGE 1.3 and 2.1. Such risks, known as residual risk, may affect downstream AI
actors such as those engaged in system procurement or use. Transparently monitoring and managing residual risks enables cost-benefit analysis and the examination of an AI system's potential value versus its potential negative impacts.
Suggested Actions
• Document residual risks within risk response plans, denoting risks that have been
accepted, transferred, or subject to minimal mitigation.
• Establish procedures for disclosing residual risks to relevant downstream AI actors.
• Inform relevant downstream AI actors of requirements for safe operation, known
limitations, and suggested warning labels as identified in MAP 3.4.
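The documentation and disclosure steps above can be sketched as a minimal residual-risk register. The `ResidualRisk` record, its field names, and the example entry are illustrative assumptions, not structures prescribed here:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResidualRisk:
    """One documented residual risk from the risk response plan."""
    risk_id: str
    description: str
    response: str                 # "accepted", "transferred", or "minimal mitigation"
    affected_actors: List[str] = field(default_factory=list)  # downstream acquirers, end users
    disclosure_sent: bool = False  # whether downstream AI actors were informed

# Hypothetical register entry: an accepted risk not yet disclosed downstream.
register: List[ResidualRisk] = [
    ResidualRisk("RR-001", "Reduced accuracy on low-light images",
                 response="accepted",
                 affected_actors=["procurement team", "end users"]),
]

# Disclosure step: flag any residual risk not yet communicated downstream.
pending = [r for r in register if not r.disclosure_sent]
for r in pending:
    print(f"Disclose {r.risk_id} ({r.response}) to: {', '.join(r.affected_actors)}")
```

A register like this makes the accepted/transferred status of each risk explicit and auditable, which supports the cost-benefit examination described in the About text.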
Transparency & Documentation
Organizations can document the following
• What are the roles, responsibilities, and delegation of authorities of personnel involved
in the design, development, deployment, assessment and monitoring of the AI system?
• Who will be responsible for maintaining, re -verifying, monitoring, and updating this AI
once deployed?
• How will updates/revisions be documented and communicated? How often and by
whom?
• How easily accessible and current is the information available to external stakeholders?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
• Datasheets for Datasets.
References
Arvind Narayanan. How to recognize AI snake oil. Retrieved October 15, 2022.
Board of Governors of the Federal Reserve System. SR 11-7: Guidance on Model Risk Management. (April 4, 2011).
Emanuel Moss, Elizabeth Watkins, Ranjit Singh, Madeleine Clare Elish, Jacob Metcalf. 2021. Assembling Accountability: Algorithmic Impact Assessment for the Public Interest. (June 29, 2021).
Fraser, Henry L and Bello y Villarino, Jose-Miguel. Where Residual Risks Reside: A Comparative Approach to Art 9(4) of the European Union's Proposed AI Regulation (September 30, 2021). [LINK](https://ssrn.com/abstract=3960461)
Microsoft. 2022. Microsoft Responsible AI Impact Assessment Template. (June 2022).
Office of the Comptroller of the Currency. 2021. Comptroller's Handbook: Model Risk Management, Version 1.0, August 2021.
Solon Barocas, Asia J. Biega, Benjamin Fish, et al. 2020. When not to design, build, or deploy. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). Association for Computing Machinery, New York, NY, USA, 695.
MANAGE 2.1
Resources required to manage AI risks are taken into account, along with viable non-AI alternative systems, approaches, or methods, to reduce the magnitude or likelihood of potential impacts.
About
Organizational risk response may entail identifying and analyzing alternative approaches,
methods, processes or systems, and balancing tradeoffs between trustworthiness
characteristics and how they relate to organizational principles and societal values. Analysis
of these tradeoffs is informed by consulting with interdisciplinary organizational teams,
independent domain experts, and engaging with individuals or community groups. These
processes require sufficient resource allocation.
Suggested Actions
• Plan and implement risk management practices in accordance with established
organizational risk tolerances.
• Verify risk management teams are resourced to carry out functions, including
• Establishing processes for considering methods that are not automated, semi-automated, or other procedural alternatives for AI functions.
• Enhance AI system transparency mechanisms for AI teams.
• Enable exploration of AI system limitations by AI teams.
• Identify, assess, and catalog past failed designs and negative impacts or outcomes to
avoid known failure modes.
• Identify resource allocation approaches for managing risks in systems:
• deemed high-risk,
• that self-update (adaptive, online, reinforcement, self-supervised learning or similar),
• trained without access to ground truth (unsupervised, semi-supervised learning or similar),
• with high uncertainty or where risk management is insufficient.
• Regularly seek and integrate external expertise and perspectives to supplement
organizational diversity (e.g. demographic, disciplinary), equity, inclusion, and
accessibility where internal capacity is lacking.
• Enable and encourage regular, open communication and feedback among AI actors and
internal or external stakeholders related to system design or deployment decisions.
• Prepare and document plans for continuous monitoring and feedback mechanisms.
Transparency & Documentation
Organizations can document the following
• Are mechanisms in place to evaluate whether internal teams are empowered and
resourced to effectively carry out risk management functions?
• How will user and other forms of stakeholder engagement be integrated into risk
management processes?
AI Transparency Resources
• Artificial Intelligence Ethics Framework For The Intelligence Community.
• Datasheets for Datasets.
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities.
References
Board of Governors of the Federal Reserve System. SR 11-7: Guidance on Model Risk Management. (April 4, 2011).
David Wright. 2013. Making Privacy Impact Assessments More Effective. The Information Society, 29 (Oct 2013), 307-315.
Margaret Mitchell, Simone Wu, Andrew Zaldivar, et al. 2019. Model Cards for Model Reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). Association for Computing Machinery, New York, NY, USA, 220-229.
Office of the Comptroller of the Currency. 2021. Comptroller's Handbook: Model Risk Management, Version 1.0, August 2021.
Timnit Gebru, Jamie Morgenstern, Briana Vecchione, et al. 2021. Datasheets for Datasets. arXiv:1803.09010.
MANAGE 2.2
Mechanisms are in place and applied to sustain the value of deployed AI systems.
About
System performance and trustworthiness may evolve and shift over time, once an AI system
is deployed and put into operation. This phenomenon, generally known as drift, can degrade
the value of the AI system to the organization and increase the likelihood of negative
impacts. Regular monitoring of AI systems’ performance and trustworthiness enhances
organizations’ ability to detect and respond to drift, and thus sustain an AI system’s value
once deployed. Processes and mechanisms for regular monitoring address system functionality and behavior, as well as impacts and alignment with the values and norms within the specific context of use. For example, considerations regarding impacts on
personal or public safety or privacy may include limiting high speeds when operating
autonomous vehicles or restricting illicit content recommendations for minors.
Regular monitoring activities can enable organizations to systematically and proactively
identify emergent risks and respond according to established protocols and metrics.
Options for organizational responses include (1) avoiding the risk, (2) accepting the risk, (3) mitigating the risk, or (4) transferring the risk. Each of these actions requires planning and resources. Organizations are encouraged to establish risk management protocols with consideration of the trustworthiness characteristics, the deployment context, and real-world impacts.
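Drift monitoring of the kind described above is often implemented with a distribution-shift statistic computed between training-time and production data. The sketch below uses the population stability index (PSI) as one illustrative choice; the 0.25 alert threshold is a common rule of thumb, not a value prescribed here:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline and a live sample."""
    # Bin edges come from the baseline (expected) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero and log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)  # feature distribution at deployment time
live = rng.normal(1.0, 1.0, 5000)      # shifted production distribution

score = psi(baseline, live)
if score > 0.25:  # rule-of-thumb threshold for significant drift
    print(f"PSI={score:.3f}: drift detected, trigger established risk response")
```

In practice a monitor like this would run per feature and per model output on a schedule, with breaches feeding the organization's avoid/accept/mitigate/transfer decision protocol.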
Suggested Actions
• Establish risk controls considering trustworthiness characteristics, including:
• Data management, quality, and privacy (e.g. minimization, rectification or
deletion requests) controls as part of organizational data governance policies.
• Machine learning and endpoint security countermeasures (e.g., robust models, differential privacy, authentication, throttling).
• Business rules that augment, limit or restrict AI system outputs within certain contexts.
• Utilizing domain expertise related to deployment context for continuous
improvement and TEVV across the AI lifecycle.
• Development and regular tracking of human-AI teaming configurations.
• Model assessment and test, evaluation, validation and verification (TEVV) protocols.
• Use of standardized documentation and transparency mechanisms.
• Software quality assurance practices across AI lifecycle.
• Mechanisms to explore system limitations and avoid past failed designs or
deployments.
• Establish mechanisms to capture feedback from system end users and potentially
impacted groups while system is in deployment.
• Establish mechanisms to capture feedback from system end users and potentially
impacted groups about how changes in system deployment (e.g., introducing new
technology, decommissioning algorithms and models, adapting system, model or
algorithm) may create negative impacts that are not visible along the AI lifecycle.
• Review insurance policies, warranties, or contracts for legal or oversight requirements
for risk transfer procedures.
• Document risk tolerance decisions and risk acceptance procedures.
Transparency & Documentation
Organizations can document the following
• To what extent can users or parties affected by the outputs of the AI system test the AI
system and provide feedback?
• Could the AI system expose people to harm or negative impacts? What was done to
mitigate or reduce the potential for harm?
• How will the accountable human(s) address changes in accuracy and precision due to
either an adversary’s attempts to disrupt the AI or unrelated changes in the operational
or business environment?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
References
Safety, Validity and Reliability Risk Management Approaches and Resources
AI Incident Database. 2022. AI Incident Database.
AIAAIC Repository. 2022. AI, algorithmic and automation incidents collected, dissected,
examined, and divulged.
Alexander D'Amour, Katherine Heller, Dan Moldovan, et al. 2020. Underspecification
Presents Challenges for Credibility in Modern Machine Learning. arXiv:2011.03395.
Andrew L. Beam, Arjun K. Manrai, Marzyeh Ghassemi. 2020. Challenges to the Reproducibility of Machine Learning Models in Health Care. JAMA 323, 4 (January 6, 2020), 305-306.
Anthony M. Barrett, Dan Hendrycks, Jessica Newman et al. 2022. Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks. arXiv:2206.08966.
Debugging Machine Learning Models, In Proceedings of ICLR 2019 Workshop, May 6, 2019,
New Orleans, Louisiana.
Jessie J. Smith, Saleema Amershi, Solon Barocas, et al. 2022. REAL ML: Recognizing,
Exploring, and Articulating Limitations of Machine Learning Research. arXiv:2205.08363.
Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, et al. 2020. Improving Reproducibility in Machine Learning Research (A Report from the NeurIPS 2019 Reproducibility Program). arXiv:2003.12206.
Kirstie Whitaker. 2017. Showing your working: a how to guide to reproducible research.
(August 2017).
[LINK](https://github.com/WhitakerLab/ReproducibleResearch/blob/master/PRESENTATIONS/Whitaker_ICON_August2017.pdf)
Netflix. Chaos Monkey.
Peter Henderson, Riashat Islam, Philip Bachman, et al. 2018. Deep reinforcement learning
that matters. Proceedings of the AAAI Conference on Artificial Intelligence. 32, 1 (Apr.
2018).
Suchi Saria, Adarsh Subbaswamy. 2019. Tutorial: Safe and Reliable Machine Learning.
arXiv:1904.07204.
Kang, Daniel, Deepti Raghavan, Peter Bailis, and Matei Zaharia. "Model assertions for
monitoring and improving ML models." Proceedings of Machine Learning and Systems 2
(2020): 481-496.
Managing Risk Bias
National Institute of Standards and Technology (NIST), Reva Schwartz, Apostol Vassilev, et
al. 2022. NIST Special Publication 1270 Towards a Standard for Identifying and Managing
Bias in Artificial Intelligence.
Bias Testing and Remediation Approaches
Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, et al. 2018. A Reductions Approach to
Fair Classification. arXiv:1803.02453.
Brian Hu Zhang, Blake Lemoine, Margaret Mitchell. 2018. Mitigating Unwanted Biases with
Adversarial Learning. arXiv:1801.07593.
Drago Plečko, Nicolas Bennett, Nicolai Meinshausen. 2021. Fairadapt: Causal Reasoning for Fair Data Pre-processing. arXiv:2110.10200.
Faisal Kamiran, Toon Calders. 2012. Data Preprocessing Techniques for Classification without Discrimination. Knowledge and Information Systems 33 (2012), 1-33.
Faisal Kamiran, Asim Karim, Xiangliang Zhang. 2012. Decision Theory for Discrimination-Aware Classification. In Proceedings of the 2012 IEEE 12th International Conference on Data Mining, December 10-13, 2012, Brussels, Belgium. IEEE, 924-929.
Flavio P. Calmon, Dennis Wei, Karthikeyan Natesan Ramamurthy, et al. 2017. Optimized Data Pre-Processing for Discrimination Prevention. arXiv:1704.03354.
Geoff Pleiss, Manish Raghavan, Felix Wu, et al. 2017. On Fairness and Calibration.
arXiv:1709.02012.
L. Elisa Celis, Lingxiao Huang, Vijay Keswani, et al. 2020. Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees. arXiv:1806.06055.
Michael Feldman, Sorelle Friedler, John Moeller, et al. 2014. Certifying and Removing
Disparate Impact. arXiv:1412.3756.
Michael Kearns, Seth Neel, Aaron Roth, et al. 2017. Preventing Fairness Gerrymandering:
Auditing and Learning for Subgroup Fairness. arXiv:1711.05144.
Michael Kearns, Seth Neel, Aaron Roth, et al. 2018. An Empirical Study of Rich Subgroup
Fairness for Machine Learning. arXiv:1808.08166.
Moritz Hardt, Eric Price, and Nathan Srebro. 2016. Equality of Opportunity in Supervised
Learning. In Proceedings of the 30th Conference on Neural Information Processing Systems
(NIPS 2016), 2016, Barcelona, Spain.
Rich Zemel, Yu Wu, Kevin Swersky, et al. 2013. Learning Fair Representations. In
Proceedings of the 30th International Conference on Machine Learning 2013, PMLR 28, 3,
325-333.
Toshihiro Kamishima, Shotaro Akaho, Hideki Asoh, Jun Sakuma. 2012. Fairness-Aware Classifier with Prejudice Remover Regularizer. In Peter A. Flach, Tijl De Bie, Nello Cristianini (eds) Machine Learning and Knowledge Discovery in Databases. European Conference ECML PKDD 2012, Proceedings Part II, September 24-28, 2012, Bristol, UK. Lecture Notes in Computer Science 7524. Springer, Berlin, Heidelberg.
Security and Resilience Resources
FTC Start With Security Guidelines. 2015.
Gary McGraw et al. 2022. BIML Interactive Machine Learning Risk Framework. Berryville
Institute for Machine Learning.
Ilia Shumailov, Yiren Zhao, Daniel Bates, et al. 2021. Sponge Examples: Energy-Latency Attacks on Neural Networks. arXiv:2006.03463.
Marco Barreno, Blaine Nelson, Anthony D. Joseph, et al. 2010. The Security of Machine Learning. Machine Learning 81 (2010), 121-148.
Matt Fredrikson, Somesh Jha, Thomas Ristenpart. 2015. Model Inversion Attacks that
Exploit Confidence Information and Basic Countermeasures. In Proceedings of the 22nd
ACM SIGSAC Conference on Computer and Communications Security (CCS '15), October
2015. Association for Computing Machinery, New York, NY, USA, 1322 –1333.
National Institute of Standards and Technology (NIST). 2022. Cybersecurity Framework.
Nicolas Papernot. 2018. A Marauder's Map of Security and Privacy in Machine Learning.
arXiv:1811.01134.
Reza Shokri, Marco Stronati, Congzheng Song, et al. 2017. Membership Inference Attacks
against Machine Learning Models. arXiv:1610.05820.
Adversarial Threat Matrix (MITRE). 2021.
Interpretability and Explainability Approaches
Chaofan Chen, Oscar Li, Chaofan Tao, et al. 2019. This Looks Like That: Deep Learning for
Interpretable Image Recognition. arXiv:1806.10574.
Cynthia Rudin. 2019. Stop explaining black box machine learning models for high stakes
decisions and use interpretable models instead. arXiv:1811.10154.
Daniel W. Apley, Jingyu Zhu. 2019. Visualizing the Effects of Predictor Variables in Black Box
Supervised Learning Models. arXiv:1612.08468.
David A. Broniatowski. 2021. Psychological Foundations of Explainability and
Interpretability in Artificial Intelligence. National Institute of Standards and Technology
(NIST) IR 8367. National Institute of Standards and Technology, Gaithersburg, MD.
Forough Poursabzi-Sangdeh, Daniel G. Goldstein, Jake M. Hofman, et al. 2021. Manipulating and Measuring Model Interpretability. arXiv:1802.07810.
Hongyu Yang, Cynthia Rudin, Margo Seltzer. 2017. Scalable Bayesian Rule Lists.
arXiv:1602.08610.
P. Jonathon Phillips, Carina A. Hahn, Peter C. Fontana, et al. 2021. Four Principles of
Explainable Artificial Intelligence. National Institute of Standards and Technology (NIST) IR
8312. National Institute of Standards and Technology, Gaithersburg, MD.
Scott Lundberg, Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions.
arXiv:1705.07874.
Susanne Gaube, Harini Suresh, Martina Raue, et al. 2021. Do as AI say: susceptibility in
deployment of clinical decision -aids. npj Digital Medicine 4, Article 31 (2021).
Yin Lou, Rich Caruana, Johannes Gehrke, et al. 2013. Accurate intelligible models with
pairwise interactions. In Proceedings of the 19th ACM SIGKDD international conference on
Knowledge discovery and data mining (KDD '13), August 2013. Association for Computing
Machinery, New York, NY, USA, 623 –631.
Post-Decommission
Upol Ehsan, Ranjit Singh, Jacob Metcalf and Mark O. Riedl. “The Algorithmic Imprint.”
Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
(2022).
Privacy Resources
National Institute of Standards and Technology (NIST). 2022. Privacy Framework.
Data Governance
Marijn Janssen, Paul Brous, Elsa Estevez, Luis S. Barbosa, Tomasz Janowski, Data
governance: Organizing data for trustworthy Artificial Intelligence, Government
Information Quarterly, Volume 37, Issue 3, 2020, 101493, ISSN 0740-624X.
Software Resources
• PiML (explainable models, performance assessment)
• Interpret (explainable models)
• Iml (explainable models)
• Drifter library (performance assessment)
• Manifold library (performance assessment)
• SALib library (performance assessment)
• What-If Tool (performance assessment)
• MLextend (performance assessment)
• AI Fairness 360:
• Python (bias testing and mitigation)
• R (bias testing and mitigation)
• Adversarial-robustness-toolbox (ML security)
• Robustness (ML security)
• tensorflow/privacy (ML security)
• NIST De-identification Tools (Privacy and ML security)
• Dvc (MLops, deployment)
• Gigantum (MLops, deployment)
• Mlflow (MLops, deployment)
• Mlmd (MLops, deployment)
• Modeldb (MLops, deployment)
MANAGE 2.3
Procedures are followed to respond to and recover from a previously unknown risk when it
is identified.
About
AI systems, like any technology, can demonstrate non-functionality, failure, or unexpected and unusual behavior. They can also be subject to attacks, incidents, or other misuse or abuse, the sources of which are not always known a priori. Organizations can establish, document, communicate and maintain treatment procedures to recognize and counter, mitigate and manage risks that were not previously identified.
Suggested Actions
• Protocols, resources, and metrics are in place for continual monitoring of AI systems' performance, trustworthiness, and alignment with contextual norms and values.
• Establish and regularly review treatment and response plans for incidents, negative
impacts, or outcomes.
• Establish and maintain procedures to regularly monitor system components for drift, decontextualization, or other AI system behavior factors.
• Establish and maintain procedures for capturing feedback about negative impacts.
• Verify contingency processes to handle any negative impacts associated with mission-critical AI systems, and to deactivate systems.
• Enable preventive and post -hoc exploration of AI system limitations by relevant AI actor
groups.
• Decommission systems that exceed risk tolerances.
Transparency & Documentation
Organizations can document the following
• Who will be responsible for maintaining, re -verifying, monitoring, and updating this AI
once deployed?
• Are the responsibilities of the personnel involved in the various AI governance
processes clearly defined? (Including responsibilities to decommission the AI system.)
• What processes exist for data generation, acquisition/collection, ingestion,
staging/storage, transformations, security, maintenance, and dissemination?
• How will the appropriate performance metrics, such as accuracy, of the AI be monitored
after the AI is deployed?
AI Transparency Resources
• Artificial Intelligence Ethics Framework For The Intelligence Community.
• WEF - Companion to the Model AI Governance Framework – Implementation and Self-Assessment Guide for Organizations.
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities.
References
AI Incident Database. 2022. AI Incident Database.
AIAAIC Repository. 2022. AI, algorithmic and automation incidents collected, dissected,
examined, and divulged.
Andrew Burt and Patrick Hall. 2020. What to Do When AI Fails. O'Reilly Media, Inc. (May 18, 2020). Retrieved October 17, 2022.
National Institute of Standards and Technology (NIST). 2022. Cybersecurity Framework.
SANS Institute. 2022. Security Consensus Operational Readiness Evaluation (SCORE)
Security Checklist [or Advanced Persistent Threat (APT) Handling Checklist].
Suchi Saria, Adarsh Subbaswamy. 2019. Tutorial: Safe and Reliable Machine Learning.
arXiv:1904.07204.
MANAGE 2.4
Mechanisms are in place and applied, and responsibilities are assigned and understood, to supersede, disengage, or deactivate AI systems that demonstrate performance or outcomes inconsistent with intended use.
About
Performance inconsistent with intended use does not always increase risk or lead to
negative impacts. Rigorous TEVV practices are useful for protecting against negative
impacts regardless of intended use. When negative impacts do arise, superseding
(bypassing), disengaging, or deactivating/decommissioning a model, AI system
component(s), or the entire AI system may be necessary, such as when:
• a system reaches the end of its lifetime
• detected or identified risks exceed tolerance thresholds
• adequate system mitigation actions are beyond the organization’s capacity
• feasible system mitigation actions do not meet regulatory or legal requirements, norms, or standards
• impending risk is detected during continual monitoring, for which feasible mitigation
cannot be identified or implemented in a timely fashion.
Safely removing AI systems from operation, either temporarily or permanently, under these scenarios requires standard protocols that minimize operational disruption and downstream negative impacts. Protocols can involve redundant or backup systems that are developed in alignment with established system governance policies (see GOVERN 1.7), regulatory compliance, legal frameworks, business requirements, and norms and standards within the application context of use. Decision thresholds and metrics for actions to bypass or deactivate system components are part of continual monitoring procedures. Incidents that result in a bypass/deactivate decision require documentation and review to understand root causes, impacts, and potential opportunities for mitigation and redeployment. Organizations are encouraged to develop risk and change management protocols that consider and anticipate upstream and downstream consequences of both temporary and permanent decommissioning, and provide contingency options.
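Decision thresholds for bypass or deactivation, as described above, can be expressed as explicit monitoring rules. In the sketch below, the metric names, threshold values, and mapped actions are illustrative assumptions; an organization would substitute its own tolerances:

```python
# Illustrative continual-monitoring rules for supersede/bypass/deactivate
# decisions; metric names, thresholds, and actions are assumed examples.
THRESHOLDS = {
    "accuracy": {"min": 0.90, "action": "bypass"},         # route to backup system
    "incident_rate": {"max": 0.01, "action": "deactivate"},
}

def evaluate(metrics: dict) -> list:
    """Return (metric, action) pairs for every breached threshold."""
    actions = []
    for name, rule in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            continue  # metric not reported in this monitoring snapshot
        if "min" in rule and value < rule["min"]:
            actions.append((name, rule["action"]))
        if "max" in rule and value > rule["max"]:
            actions.append((name, rule["action"]))
    return actions

# A monitoring snapshot that breaches only the incident-rate threshold.
triggered = evaluate({"accuracy": 0.93, "incident_rate": 0.02})
for metric, action in triggered:
    print(f"{metric} out of tolerance -> {action} (document and review)")
```

Encoding thresholds explicitly makes each bypass/deactivate event traceable to a documented rule, which supports the root-cause review the text calls for.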
Suggested Actions
• Regularly review established procedures for AI system bypass actions, including plans
for redundant or backup systems to ensure continuity of operational and/or business
functionality.
• Regularly review system incident thresholds for activating bypass or deactivation responses.
• Apply change management processes to understand the upstream and downstream
consequences of bypassing or deactivating an AI system or AI system components.
• Apply protocols, resources and metrics for decisions to supersede, bypass or deactivate
AI systems or AI system components.
• Preserve materials for forensic, regulatory, and legal review.
• Conduct internal root cause analysis and process reviews of bypass or deactivation
events.
• Decommission and preserve system components that cannot be updated to meet
criteria for redeployment.
• Establish criteria for redeploying updated system components, in consideration of trustworthiness characteristics.
Transparency & Documentation
Organizations can document the following
• What are the roles, responsibilities, and delegation of authorities of personnel involved
in the design, development, deployment, assessment and monitoring of the AI system?
• Did your organization implement a risk management system to address risks involved
in deploying the identified AI solution (e.g. personnel risk or changes to commercial
objectives)?
• What testing, if any, has the entity conducted on the AI system to identify errors and
limitations (i.e. adversarial or stress testing)?
• To what extent does the entity have established procedures for retiring the AI system, if
it is no longer needed?
• How did the entity use assessments and/or evaluations to determine if the system can
be scaled up, continue, or be decommissioned?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities.
References
Decommissioning Template. Application Lifecycle And Supporting Docs. Cloud and
Infrastructure Community of Practice.
Develop a Decommission Plan. M3 Playbook. Office of Shared Services and Solutions and
Performance Improvement. General Services Administration.
MANAGE 3.1
AI risks and benefits from third -party resources are regularly monitored, and risk controls
are applied and documented.
About
AI systems may depend on external resources and associated processes, including third-party data, software or hardware systems. Third parties supplying organizations with components and services, including tools, software, and expertise for AI system design, development, deployment or use, can improve efficiency and scalability. They can also increase complexity and opacity, and, in turn, risk. Documenting third-party technologies, personnel, and resources that were employed can help manage risks. Focusing first and foremost on risks involving physical safety, legal liabilities, regulatory compliance, and negative impacts on individuals, groups, or society is recommended.
Suggested Actions
• Verify that legal requirements have been addressed.
• Apply organizational risk tolerance to third -party AI systems.
• Apply and document organizational risk management plans and practices to third -party
AI technology, personnel, or other resources.
• Identify and maintain documentation for third -party AI systems and components.
• Establish testing, evaluation, validation and verification processes for third-party AI systems which address the needs for transparency without exposing proprietary algorithms.
• Establish processes to identify beneficial use and risk indicators in third -party systems
or components, such as inconsistent software release schedule, sparse documentation,
and incomplete software change management (e.g., lack of forward or backward
compatibility).
• Establish processes for third parties to report known and potential vulnerabilities, risks or biases in supplied resources.
• Verify contingency processes for handling negative impacts associated with mission-critical third-party AI systems.
• Monitor third -party AI systems for potential negative impacts and risks associated with
trustworthiness characteristics.
• Decommission third -party systems that exceed risk tolerances.
Transparency & Documentation
Organizations can document the following
• If a third party created the AI system or some of its components, how will you ensure a
level of explainability or interpretability? Is there documentation?
• If your organization obtained datasets from a third party, did your organization assess
and manage the risks of using such datasets?
• Did you establish a process for third parties (e.g. suppliers, end users, subjects,
distributors/vendors or workers) to report potential vulnerabilities, risks or biases in
the AI system?
• Have legal requirements been addressed?
AI Transparency Resources
• Artificial Intelligence Ethics Framework For The Intelligence Community.
• WEF - Companion to the Model AI Governance Framework – Implementation and
Self-Assessment Guide for Organizations.
• Datasheets for Datasets.
References
Office of the Comptroller of the Currency. 2021. Proposed Interagency Guidance on
Third-Party Relationships: Risk Management. July 12, 2021.
MANAGE 3.2
Pre-trained models which are used for development are monitored as part of regular AI
system monitoring and maintenance.
About
A common approach in AI development is transfer learning, whereby an existing
pre-trained model is adapted for use in a different, but related application. AI actors in
development tasks often use pre-trained models from third-party entities for tasks such as
image classification, language prediction, and entity recognition, because the resources to
build such models may not be readily available to most organizations. Pre-trained models
are typically trained to address various classification or prediction problems, using
exceedingly large datasets and computationally intensive resources. The use of pre-trained
models can make it difficult to anticipate negative system outcomes or impacts. Lack of
documentation or transparency tools increases the difficulty and general complexity when
deploying pre-trained models and hinders root cause analyses.
Suggested Actions
• Identify pre-trained models within AI system inventory for risk tracking.
• Establish processes to independently and continually monitor performance and
trustworthiness of pre-trained models, and as part of third-party risk tracking.
• Monitor performance and trustworthiness of AI system components connected to
pre-trained models, and as part of third-party risk tracking.
• Identify, document and remediate risks arising from AI system components and
pre-trained models per organizational risk management procedures, and as part of
third-party risk tracking.
• Decommission AI system components and pre-trained models which exceed risk
tolerances, and as part of third-party risk tracking.
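The continual-monitoring action above can be sketched as a simple performance check against a recorded baseline. The metric (accuracy), the baseline value, and the allowed-drop threshold are all illustrative assumptions; a real deployment would track several trustworthiness metrics per organizational procedures.

```python
# Sketch: flag monitoring periods where a pre-trained model's observed
# accuracy degrades more than an allowed amount below its baseline.
def monitor_performance(baseline_accuracy, observed_accuracies, max_drop=0.05):
    """Return alert records for periods whose accuracy falls more than
    max_drop below the baseline, for third-party risk tracking."""
    alerts = []
    for period, acc in enumerate(observed_accuracies):
        if baseline_accuracy - acc > max_drop:
            alerts.append({
                "period": period,
                "accuracy": acc,
                "drop": round(baseline_accuracy - acc, 4),
            })
    return alerts


# Example: baseline 0.92, gradual degradation over four monitoring periods.
alerts = monitor_performance(0.92, [0.91, 0.90, 0.85, 0.80])
# Periods 2 and 3 exceed the 0.05 allowed drop and would feed into
# remediation or decommissioning decisions.
```

Alerts produced this way would then be documented and remediated per the organizational risk management procedures named in the actions above.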
Transparency & Documentation
Organizations can document the following
• How has the entity documented the AI system’s data provenance, including sources,
origins, transformations, augmentations, labels, dependencies, constraints, and
metadata?
• Does this dataset collection/processing procedure achieve the motivation for creating
the dataset stated in the first section of this datasheet?
• How does the entity ensure that the data collected are adequate, relevant, and not
excessive in relation to the intended purpose?
• If the dataset becomes obsolete, how will this be communicated?
AI Transparency Resources
• Artificial Intelligence Ethics Framework For The Intelligence Community.
• WEF - Companion to the Model AI Governance Framework – Implementation and
Self-Assessment Guide for Organizations.
• Datasheets for Datasets.
References
Larysa Visengeriyeva et al. "Awesome MLOps," GitHub. Accessed January 9, 2023.
MANAGE 4.1
Post-deployment AI system monitoring plans are implemented, including mechanisms for
capturing and evaluating input from users and other relevant AI actors, appeal and
override, decommissioning, incident response, recovery, and change management.
About
AI system performance and trustworthiness can change due to a variety of factors. Regular
AI system monitoring can help deployers identify performance degradations, adversarial
attacks, unexpected and unusual behavior, near-misses, and impacts. Including pre- and
post-deployment external feedback about AI system performance can enhance
organizational awareness about positive and negative impacts, and reduce the time to
respond to risks and harms.
Suggested Actions
• Establish and maintain procedures to monitor AI system performance for risks and
negative and positive impacts associated with trustworthiness characteristics.
• Perform post-deployment TEVV tasks to evaluate AI system validity and reliability, bias
and fairness, privacy, and security and resilience.
• Evaluate AI system trustworthiness in conditions similar to deployment context of use,
and prior to deployment.
• Establish and implement red-teaming exercises at a prescribed cadence, and evaluate
their efficacy.
• Establish procedures for tracking dataset modifications such as data deletion or
rectification requests.
• Establish mechanisms for regular communication and feedback between relevant AI
actors and internal or external stakeholders to capture information about system
performance, trustworthiness and impact.
• Share information about errors, near-misses, and attack patterns with incident
databases, other organizations with similar systems, and system users and
stakeholders.
• Respond to and document detected or reported negative impacts or issues in AI system
performance and trustworthiness.
• Decommission systems that exceed established risk tolerances.
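A post-deployment monitoring check like the one described above can be sketched as a comparison of current metrics against documented tolerances. The metric names, threshold values, and the two-breach escalation rule are invented for illustration, not prescribed by the Playbook.

```python
# Sketch: evaluate monitored metrics against tolerances and emit the
# follow-up actions suggested above (incident response; possible
# decommissioning when multiple tolerances are breached at once).
def monitoring_check(metrics: dict, tolerances: dict) -> list[str]:
    """Return a list of action tags for every breached tolerance."""
    actions = []
    for name, value in metrics.items():
        limit = tolerances.get(name)
        if limit is not None and value > limit:
            actions.append(f"incident_response:{name}")
    if len(actions) >= 2:  # illustrative escalation rule, not a standard
        actions.append("consider_decommission")
    return actions


actions = monitoring_check(
    metrics={"error_rate": 0.12, "bias_disparity": 0.30},
    tolerances={"error_rate": 0.05, "bias_disparity": 0.20},
)
print(actions)
# ['incident_response:error_rate', 'incident_response:bias_disparity',
#  'consider_decommission']
```

Each emitted action would then be responded to and documented, and shared with incident databases where appropriate, per the actions above.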
Transparency & Documentation
Organizations can document the following
• To what extent has the entity documented the post-deployment AI system’s testing
methodology, metrics, and performance outcomes?
• How easily accessible and current is the information available to external stakeholders?
AI Transparency Resources
• GAO-21-519SP: Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• Datasheets for Datasets.
References
Navdeep Gill, Patrick Hall, Kim Montgomery, and Nicholas Schmidt. "A Responsible Machine
Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and
Discrimination Testing." Information 11, no. 3 (2020): 137.
MANAGE 4.2
Measurable activities for continual improvements are integrated into AI system updates
and include regular engagement with interested parties, including relevant AI actors.
About
Regular monitoring processes enable system updates to enhance performance and
functionality in accordance with regulatory and legal frameworks, and organizational and
contextual values and norms. These processes also facilitate analyses of root causes, system
degradation, drift, near-misses, and failures, and incident response and documentation.
AI actors across the lifecycle have many opportunities to capture and incorporate external
feedback about system performance, limitations, and impacts, and implement continuous
improvements. Improvements may not always be to model pipeline or system processes,
and may instead be based on metrics beyond accuracy or other quality performance
measures. In these cases, improvements may entail adaptations to business or
organizational procedures or practices. Organizations are encouraged to develop
improvements that will maintain traceability and transparency for developers, end users,
auditors, and relevant AI actors.
Suggested Actions
• Integrate trustworthiness characteristics into protocols and metrics used for continual
improvement.
• Establish processes for evaluating and integrating feedback into AI system
improvements.
• Assess and evaluate alignment of proposed improvements with relevant regulatory and
legal frameworks.
• Assess and evaluate alignment of proposed improvements with the values and norms
within the context of use.
• Document the basis for decisions made relative to tradeoffs between trustworthy
characteristics, system risks, and system opportunities.
Transparency & Documentation
Organizations can document the following
• How will user and other forms of stakeholder engagement be integrated into the model
development process and regular performance review once deployed?
• To what extent can users or parties affected by the outputs of the AI system test the AI
system and provide feedback?
• To what extent has the entity defined and documented the regulatory environment —
including minimum requirements in laws and regulations?
AI Transparency Resources
• GAO-21-519SP: Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
References
Yen, Po-Yin, et al. "Development and Evaluation of Socio-Technical Metrics to Inform HIT
Adaptation."
Carayon, Pascale, and Megan E. Salwei. "Moving toward a sociotechnical systems approach
to continuous health information technology design: the path forward for improving
electronic health record usability and reducing clinician burnout." Journal of the American
Medical Informatics Association 28.5 (2021): 1026-1028.
Mishra, Deepa, et al. "Organizational capabilities that enable big data and predictive
analytics diffusion and organizational performance: A resource-based perspective."
Management Decision (2018).
MANAGE 4.3
Incidents and errors are communicated to relevant AI actors including affected
communities. Processes for tracking, responding to, and recovering from incidents and
errors are followed and documented.
About
Regularly documenting an accurate and transparent account of identified and reported
errors can enhance AI risk management activities. Examples include:
• how errors were identified,
• incidents related to the error,
• whether the error has been repaired, and
• how repairs can be distributed to all impacted stakeholders and users.
Suggested Actions
• Establish procedures to regularly share information about errors, incidents and
negative impacts with relevant stakeholders, operators, practitioners and users, and
impacted parties.
• Maintain a database of reported errors, near-misses, incidents and negative impacts
including date reported, number of reports, assessment of impact and severity, and
responses.
• Maintain a database of system changes, reason for change, and details of how the change
was made, tested and deployed.
• Maintain version history information and metadata to enable continuous improvement
processes.
• Verify that relevant AI actors responsible for identifying complex or emergent risks are
properly resourced and empowered.
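The "maintain a database of reported errors" action above can be sketched as a minimal incident log. All field names (description, severity, reported date, response, report count) come from the fields the action suggests; the class itself and the severity labels are illustrative assumptions.

```python
# Sketch: a minimal log of reported errors, near-misses, and negative
# impacts, recording date reported, severity, and response taken.
from datetime import date


class IncidentLog:
    def __init__(self):
        self.records = []

    def report(self, description, severity, reported_on=None, response=""):
        """Append one incident record; severity is e.g. 'low'|'medium'|'high'."""
        self.records.append({
            "description": description,
            "severity": severity,
            "reported_on": reported_on or date.today().isoformat(),
            "response": response,
            "report_count": 1,
        })

    def by_severity(self, severity):
        """Filter records for impact/severity assessment and reporting."""
        return [r for r in self.records if r["severity"] == severity]


log = IncidentLog()
log.report("Mislabeled outputs for minority dialect", "high",
           reported_on="2023-01-09", response="model rolled back")
log.report("UI near-miss: override button hidden", "low",
           reported_on="2023-01-12")
```

A production system would also version changes and link each record to the system-change database named in the neighboring action, so continuous-improvement processes can trace responses back to incidents.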
Transparency & Documentation
Organizations can document the following
• What corrective actions has the entity taken to enhance the quality, accuracy, reliability,
and representativeness of the data?
• To what extent does the entity communicate its AI strategic goals and objectives to the
community of stakeholders? How easily accessible and current is the information
available to external stakeholders?
• What type of information is accessible on the design, operations, and limitations of the
AI system to external stakeholders, including end users, consumers, regulators, and
individuals impacted by use of the AI system?
AI Transparency Resources
• GAO-21-519SP: Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
References
Wei, M., & Zhou, Z. (2022). AI Ethics Issues in Real World: Evidence from AI Incident
Database. ArXiv, abs/2206.07635.
McGregor, Sean. "Preventing repeated real world AI failures by cataloging incidents: The AI
incident database." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35.
No. 17. 2021.
Macrae, Carl. "Learning from the failure of autonomous and intelligent systems: Accidents,
safety, and sociotechnical sources of risk." Risk Analysis 42.9 (2022): 1999-2025.
MAP
Map
Context is established and understood.
MAP 1.1
Intended purpose, potentially beneficial uses, context-specific laws, norms and
expectations, and prospective settings in which the AI system will be deployed are
understood and documented. Considerations include: specific set or types of users along
with their expectations; potential positive and negative impacts of system uses to
individuals, communities, organizations, society, and the planet; assumptions and related
limitations about AI system purposes; uses and risks across the development or product AI
lifecycle; TEVV and system metrics.
About
Highly accurate and optimized systems can cause harm. Relatedly, organizations should
expect broadly deployed AI tools to be reused, repurposed, and potentially misused
regardless of intentions.
AI actors can work collaboratively, and with external parties such as community groups, to
help delineate the bounds of acceptable deployment, consider preferable alternatives, and
identify principles and strategies to manage likely risks. Context mapping is the first step in
this effort, and may include examination of the following:
• intended purpose and impact of system use.
• concept of operations.
• intended, prospective, and actual deployment setting.
• requirements for system deployment and operation.
• end user and operator expectations.
• specific set or types of end users.
• potential negative impacts to individuals, groups, communities, organizations, and
society – or context-specific impacts such as legal requirements or impacts to the
environment.
• unanticipated, downstream, or other unknown contextual factors.
• how AI system changes connect to impacts.
These types of processes can assist AI actors in understanding how limitations, constraints,
and other realities associated with the deployment and use of AI technology can create
impacts once they are deployed or operate in the real world. When coupled with the
enhanced organizational culture resulting from the established policies and procedures in
the Govern function, the Map function can provide opportunities to foster and instill new
perspectives, activities, and skills for approaching risks and impacts.
Context mapping also includes discussion and consideration of non-AI or non-technology
alternatives, especially as related to whether the given context is narrow enough to manage
AI and its potential negative impacts. Non-AI alternatives may include capturing and
evaluating information using semi-autonomous or mostly manual methods.
Suggested Actions
• Maintain awareness of industry, technical, and applicable legal standards.
• Examine trustworthiness of AI system design and consider non-AI solutions.
• Consider intended AI system design tasks along with unanticipated purposes in
collaboration with human factors and socio-technical domain experts.
• Define and document the task, purpose, minimum functionality, and benefits of the AI
system to inform considerations about whether the project has sufficient utility to
proceed.
• Identify whether there are non-AI or non-technology alternatives that will lead to more
trustworthy outcomes.
• Examine how changes in system performance affect downstream events such as
decision-making (e.g., what impacts do changes in an AI model’s objective function
have on how many candidates do or do not get a job interview?).
• Determine actions to map and track post-decommissioning stages of AI deployment and
potential negative or positive impacts to individuals, groups and communities.
• Determine the end user and organizational requirements, including business and
technical requirements.
• Determine and delineate the expected and acceptable AI system context of use,
including:
• social norms
• impacted individuals, groups, and communities
• potential positive and negative impacts to individuals, groups, communities,
organizations, and society
• operational environment
• Perform context analysis related to time frame, safety concerns, geographic area,
physical environment, ecosystems, social environment, and cultural norms within the
intended setting (or conditions that closely approximate the intended setting).
• Gain and maintain awareness about evaluating scientific claims related to AI system
performance and benefits before launching into system design.
• Identify human-AI interaction and/or roles, such as whether the application will
support or replace human decision making.
• Plan for risks related to human-AI configurations, and document requirements, roles,
and responsibilities for human oversight of deployed systems.
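The context-mapping dimensions listed above can also be captured in a machine-readable record, which makes undocumented dimensions easy to flag. This is an illustrative sketch only; the field names and the choice of which dimensions to treat as required are assumptions, not Playbook requirements.

```python
# Sketch: a minimal context map covering a subset of the dimensions above,
# with a helper that flags dimensions still left undocumented.
from dataclasses import dataclass, field


@dataclass
class ContextMap:
    intended_purpose: str
    deployment_setting: str
    end_users: list
    potential_negative_impacts: list = field(default_factory=list)
    non_ai_alternatives: list = field(default_factory=list)

    def open_questions(self) -> list:
        """Return names of mapping dimensions not yet documented."""
        missing = []
        if not self.potential_negative_impacts:
            missing.append("potential_negative_impacts")
        if not self.non_ai_alternatives:
            missing.append("non_ai_alternatives")
        return missing


cm = ContextMap(
    intended_purpose="triage incoming support tickets",
    deployment_setting="internal helpdesk",
    end_users=["support agents"],
)
print(cm.open_questions())
# ['potential_negative_impacts', 'non_ai_alternatives']
```

Flagged gaps would prompt the collaborative mapping work with external parties and domain experts that this subcategory describes.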
Transparency & Documentation
Organizations can document the following
• To what extent is the output of each component appropriate for the operational
context?
• Which AI actors are responsible for the decisions of the AI and is this person aware of
the intended uses and limitations of the analytic?
• Which AI actors are responsible for maintaining, re -verifying, monitoring, and updating
this AI once deployed?
• Who is the person(s) accountable for the ethical considerations across the AI lifecycle?
AI Transparency Resources
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• “Stakeholders in Explainable AI,” Sep. 2018.
• "Microsoft Responsible AI Standard, v2".
References
Socio-technical systems
Andrew D. Selbst, danah boyd, Sorelle A. Friedler, et al. 2019. Fairness and Abstraction in
Sociotechnical Systems. In Proceedings of the Conference on Fairness, Accountability, and
Transparency (FAccT'19). Association for Computing Machinery, New York, NY, USA, 59 –68.
Problem formulation
Roel Dobbe, Thomas Krendl Gilbert, and Yonatan Mintz. 2021. Hard choices in artificial
intelligence. Artificial Intelligence 300 (14 July 2021), 103555, ISSN 0004 -3702.
Samir Passi and Solon Barocas. 2019. Problem Formulation and Fairness. In Proceedings of
the Conference on Fairness, Accountability, and Transparency (FAccT'19). Association for
Computing Machinery, New York, NY, USA, 39 –48.
Context mapping
Emilio Gómez-González and Emilia Gómez. 2020. Artificial intelligence in medicine and
healthcare. Joint Research Centre (European Commission).
Sarah Spiekermann and Till Winkler. 2020. Value-based Engineering for Ethics by Design.
arXiv:2004.13676.
Social Impact Lab. 2017. Framework for Context Analysis of Technologies in Social Change
Projects (Draft v2.0).
Solon Barocas, Asia J. Biega, Margarita Boyarskaya, et al. 2021. Responsible computing
during COVID-19 and beyond. Commun. ACM 64, 7 (July 2021), 30–32.
Identification of harms
Harini Suresh and John V. Guttag. 2020. A Framework for Understanding Sources of Harm
throughout the Machine Learning Life Cycle. arXiv:1901.10002.
Margarita Boyarskaya, Alexandra Olteanu, and Kate Crawford. 2020. Overcoming Failures of
Imagination in AI Infused System Development and Deployment. arXiv:2011.13416.
Microsoft. Foundations of assessing harm. 2022.
Understanding and documenting limitations in ML
Alexander D'Amour, Katherine Heller, Dan Moldovan, et al. 2020. Underspecification
Presents Challenges for Credibility in Modern Machine Learning. arXiv:2011.03395.
Arvind Narayanan. "How to Recognize AI Snake Oil." Arthur Miller Lecture on Science and
Ethics (2019).
Jessie J. Smith, Saleema Amershi, Solon Barocas, et al. 2022. REAL ML: Recognizing,
Exploring, and Articulating Limitations of Machine Learning Research. arXiv:2205.08363.
Margaret Mitchell, Simone Wu, Andrew Zaldivar, et al. 2019. Model Cards for Model
Reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency
(FAT* '19). Association for Computing Machinery, New York, NY, USA, 220 –229.
Matthew Arnold, Rachel K. E. Bellamy, Michael Hind, et al. 2019. FactSheets: Increasing
Trust in AI Services through Supplier's Declarations of Conformity. arXiv:1808.07261.
Matthew J. Salganik, Ian Lundberg, Alexander T. Kindel, Caitlin E. Ahearn, Khaled
Al-Ghoneim, Abdullah Almaatouq, Drew M. Altschul et al. "Measuring the Predictability of Life
Outcomes with a Scientific Mass Collaboration." Proceedings of the National Academy of
Sciences 117, No. 15 (2020): 8398 -8403.
Michael A. Madaio, Luke Stark, Jennifer Wortman Vaughan, and Hanna Wallach. 2020.
Co-Designing Checklists to Understand Organizational Challenges and Opportunities around
Fairness in AI. In Proceedings of the 2020 CHI Conference on Human Factors in Computing
Systems (CHI ‘20). Association for Computing Machinery, New York, NY, USA, 1 –14.
Timnit Gebru, Jamie Morgenstern, Briana Vecchione, et al. 2021. Datasheets for Datasets.
arXiv:1803.09010.
Bender, E. M., Friedman, B. & McMillan-Major, A., (2022). A Guide for Writing Data
Statements for Natural Language Processing. University of Washington. Accessed July 14,
2022.
Meta AI. System Cards, a new resource for understanding how AI systems work, 2021.
When not to deploy
Solon Barocas, Asia J. Biega, Benjamin Fish, et al. 2020. When not to design, build, or deploy.
In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT*
'20). Association for Computing Machinery, New York, NY, USA, 695.
Post -decommission
Upol Ehsan, Ranjit Singh, Jacob Metcalf and Mark O. Riedl. “The Algorithmic Imprint.”
Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
(2022).
Statistical balance
Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. Dissecting
racial bias in an algorithm used to manage the health of populations. Science 366, 6464 (25
Oct. 2019), 447-453.
Assessment of science in AI
Arvind Narayanan. How to recognize AI snake oil.
Emily M. Bender. 2022. On NYT Magazine on AI: Resist the Urge to be Impressed. (April 17,
2022).
MAP 1.2
Interdisciplinary AI actors, competencies, skills and capacities for establishing context
reflect demographic diversity and broad domain and user experience expertise, and their
participation is documented. Opportunities for interdisciplinary collaboration are
prioritized.
About
Successfully mapping context requires a team of AI actors with a diversity of experience,
expertise, abilities and backgrounds, and with the resources and independence to engage in
critical inquiry.
Having a diverse team contributes to broader and more open sharing of ideas and
assumptions about the purpose and function of the technology being designed and
developed – making these implicit aspects more explicit. The benefit of a diverse staff in
managing AI risks is not the beliefs or presumed beliefs of individual workers, but the
behavior that results from a collective perspective. An environment which fosters critical
inquiry creates opportunities to surface problems and identify existing and emergent risks.
Suggested Actions
• Establish interdisciplinary teams to reflect a wide range of skills, competencies, and
capabilities for AI efforts. Verify that team membership includes demographic diversity,
broad domain expertise, and lived experiences. Document team composition.
• Create and empower interdisciplinary expert teams to capture, learn, and engage the
interdependencies of deployed AI systems and related terminologies and concepts from
disciplines outside of AI practice such as law, sociology, psychology, anthropology,
public policy, systems design, and engineering.
Transparency & Documentation
Organizations can document the following
• To what extent do the teams responsible for developing and maintaining the AI system
reflect diverse opinions, backgrounds, experiences, and perspectives?
• Did the entity document the demographics of those involved in the design and
development of the AI system to capture and communicate potential biases inherent to
the development process, according to forum participants?
• What specific perspectives did stakeholders share, and how were they integrated across
the design, development, deployment, assessment, and monitoring of the AI system?
• To what extent has the entity addressed stakeholder perspectives on the potential
negative impacts of the AI system on end users and impacted populations?
• What type of information is accessible on the design, operations, and limitations of the
AI system to external stakeholders, including end users, consumers, regulators, and
individuals impacted by use of the AI system?
• Did your organization address usability problems and test whether user interfaces
served their intended purposes? Consult the community or end users at the earliest
stages of development to ensure there is transparency on the technology used and how
it is deployed.
AI Transparency Resources
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• WEF Model AI Governance Framework Assessment 2020.
• WEF Companion to the Model AI Governance Framework - 2020.
• AI policies and initiatives, in Artificial Intelligence in Society, OECD, 2019.
References
Sina Fazelpour and Maria De-Arteaga. 2022. Diversity in sociotechnical machine learning
systems. Big Data & Society 9, 1 (Jan. 2022).
Microsoft Community Jury, Azure Application Architecture Guide.
Fernando Delgado, Stephen Yang, Michael Madaio, Qian Yang. (2021). Stakeholder
Participation in AI: Beyond "Add Diverse Stakeholders and Stir".
Kush Varshney, Tina Park, Inioluwa Deborah Raji, Gaurush Hiranandani, Narasimhan
Harikrishna, Oluwasanmi Koyejo, Brianna Richardson, and Min Kyung Lee. Participatory
specification of trustworthy machine learning, 2021.
Donald Martin, Vinodkumar Prabhakaran, Jill A. Kuhlberg, Andrew Smart and William S.
Isaac. “Participatory Problem Formulation for Fairer Machine Learning Through
Community Based System Dynamics”, ArXiv abs/2005.07572 (2020).
MAP 1.3
The organization’s mission and relevant goals for the AI technology are understood and
documented.
About
Defining and documenting the specific business purpose of an AI system in a broader
context of societal values helps teams to evaluate risks and increases the clarity of
“go/no-go” decisions about whether to deploy.
Trustworthy AI technologies may present a demonstrable business benefit beyond implicit
or explicit costs, provide added value, and avoid wasted resources. Organizations can
confidently practice risk avoidance, declining to implement an AI solution, when its implicit
or explicit risks outweigh its potential benefits.
For example, making AI systems more equitable can result in better managed risk, and can
help enhance consideration of the business value of making inclusively designed, accessible
and more equitable AI systems.
Suggested Actions
• Build transparent practices into AI system development processes.
• Review the documented system purpose from a socio -technical perspective and in
consideration of societal values.
• Determine possible misalignment between societal values and stated organizational
principles and code of ethics.
• Flag latent incentives that may contribute to negative impacts.
• Evaluate AI system purpose in consideration of potential risks, societal values, and
stated organizational principles.
Transparency & Documentation
Organizations can document the following
• How does the AI system help the entity meet its goals and objectives?
• How do the technical specifications and requirements align with the AI system’s goals
and objectives?
• To what extent is the output appropriate for the operational context?
AI Transparency Resources
• Assessment List for Trustworthy AI (ALTAI), The High-Level Expert Group on AI, 2019,
https://altai.insight-centre.org/.
• An Accountability Framework for Federal Agencies and Other Entities, Including
Insights from the Comptroller General’s Forum on the Oversight of Artificial
Intelligence, 2021.
References
M.S. Ackerman (2000). The Intellectual Challenge of CSCW: The Gap Between Social
Requirements and Technical Feasibility. Human–Computer Interaction, 15, 179-203.
McKane Andrus, Sarah Dean, Thomas Gilbert, Nathan Lambert, Tom Zick (2021). AI
Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks.
Abeba Birhane, Pratyusha Kalluri, Dallas Card, et al. 2022. The Values Encoded in Machine
Learning Research. arXiv:2106.15590.
Board of Governors of the Federal Reserve System. SR 11-7: Guidance on Model Risk
Management. (April 4, 2011).
Iason Gabriel, Artificial Intelligence, Values, and Alignment. Minds & Machines 30, 411–437
(2020).
PEAT “Business Case for Equitable AI”.
MAP 1.4
The business value or context of business use has been clearly defined or – in the case of
assessing existing AI systems – re-evaluated.
About
Socio -technical AI risks emerge from the interplay between technical development
decisions and how a system is used, who operates it, and the social context into which it is
deployed. Addressing these risks is complex and requires a commitment to understanding
how contextual factors may interact with AI lifecycle actions. One such contextual factor is
how organizational mission and identified system purpose create incentives within AI
system design, development, and deployment tasks that may result in positive and negative
impacts. By establishing comprehensive and explicit enumeration of AI systems’ context of
business use and expectations, organizations can identify and manage these types of
risks.
Suggested Actions
• Document business value or context of business use.
• Reconcile documented concerns about the system’s purpose within the business context
of use compared to the organization’s stated values, mission statements, social
responsibility commitments, and AI principles.
• Reconsider the design, implementation strategy, or deployment of AI systems with
potential impacts that do not reflect institutional values.
Transparency & Documentation
Organizations can document the following
• What goals and objectives does the entity expect to achieve by designing, developing,
and/or deploying the AI system?
• To what extent are the system outputs consistent with the entity’s values and principles
to foster public trust and equity?
• To what extent are the metrics consistent with system goals, objectives, and constraints,
including ethical and compliance considerations?
AI Transparency Resources
• GAO -21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• Intel.gov: AI Ethics Framework for Intelligence Community - 2020.
• WEF Model AI Governance Framework Assessment 2020.
References
Algorithm Watch. AI Ethics Guidelines Global Inventory.
Ethical OS toolkit.
Emanuel Moss and Jacob Metcalf. 2020. Ethics Owners: A New Model of Organizational
Responsibility in Data-Driven Technology Companies. Data & Society Research Institute.
Future of Life Institute. Asilomar AI Principles.
Leonard Haas, Sebastian Gießler, and Veronika Thiel. 2020. In the realm of paper tigers –
exploring the failings of AI ethics guidelines. (April 28, 2020).
MAP 1.5
Organizational risk tolerances are determined and documented.
About
Risk tolerance reflects the level and type of risk the organization is willing to accept while
conducting its mission and carrying out its strategy.
Organizations can follow existing regulations and guidelines for risk criteria, tolerance and
response established by organizational, domain, discipline, sector, or professional
requirements. Some sectors or industries may have established definitions of harm or may
have established documentation, reporting, and disclosure requirements.
Within sectors, risk management may depend on existing guidelines for specific
applications and use case settings. Where established guidelines do not exist, organizations
will want to define reasonable risk tolerance in consideration of different sources of risk
(e.g., financial, operational, safety and wellbeing, business, reputational, and model risks)
and different levels of risk (e.g., from negligible to critical).
Risk tolerances inform and support decisions about whether to continue with development
or deployment - termed “go/no-go”. Go/no-go decisions related to AI system risks can take
stakeholder feedback into account, but remain independent from stakeholders’ vested
financial or reputational interests.
If mapping risk is prohibitively difficult, a "no-go" decision may be considered for the
specific system.
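The tolerance-and-threshold logic described above can be sketched as a simple decision rule. The risk sources, levels, and tolerance values below are illustrative assumptions only; real tolerances come from organizational, domain, sector, or regulatory requirements.

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    """Ordered risk levels, from negligible to critical."""
    NEGLIGIBLE = 0
    LOW = 1
    MODERATE = 2
    HIGH = 3
    CRITICAL = 4

# Illustrative maximum tolerated level per risk source; real values come
# from organizational, domain, sector, or regulatory requirements.
TOLERANCE = {
    "financial": RiskLevel.MODERATE,
    "operational": RiskLevel.MODERATE,
    "safety": RiskLevel.LOW,
    "reputational": RiskLevel.HIGH,
}

def go_no_go(assessed: dict) -> bool:
    """Return True ("go") only if every assessed risk is within tolerance.
    Risks that cannot be mapped to a documented tolerance are treated as
    exceeding it, mirroring the guidance that a "no-go" may be considered
    when mapping risk is prohibitively difficult."""
    for source, level in assessed.items():
        limit = TOLERANCE.get(source)
        if limit is None or level > limit:
            return False
    return True

print(go_no_go({"financial": RiskLevel.LOW, "safety": RiskLevel.LOW}))  # True
print(go_no_go({"safety": RiskLevel.HIGH}))                             # False
```

The point of the sketch is that go/no-go outcomes follow mechanically from documented tolerances, keeping the decision independent of stakeholders' vested interests.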
Suggested Actions
• Utilize existing regulations and guidelines for risk criteria, tolerance and response established by organizational, domain, discipline, sector, or professional requirements.
• Establish risk tolerance levels for AI systems and allocate the appropriate oversight resources to each level.
• Establish risk criteria in consideration of different sources of risk (e.g., financial, operational, safety and wellbeing, business, reputational, and model risks) and different levels of risk (e.g., from negligible to critical).
• Identify the maximum allowable risk tolerance above which the system will not be deployed, or will need to be prematurely decommissioned, within the contextual or application setting.
• Articulate and analyze tradeoffs across trustworthiness characteristics as relevant to the proposed context of use. When tradeoffs arise, document them and plan for traceable actions (e.g., impact mitigation, removal of system from development or use) to inform management decisions.
• Review uses of AI systems for "off-label" purposes, especially in settings that organizations have deemed high-risk. Document decisions, risk-related trade-offs, and system limitations.
Transparency & Documentation
Organizations can document the following
• Which existing regulations and guidelines apply, and has the entity followed them, in the development of system risk tolerances?
• What criteria and assumptions has the entity utilized when developing system risk tolerances?
• How has the entity identified the maximum allowable risk tolerance?
• What conditions and purposes are considered "off-label" for system use?
AI Transparency Resources
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• WEF Model AI Governance Framework Assessment 2020.
• WEF Companion to the Model AI Governance Framework - 2020.
References
Board of Governors of the Federal Reserve System. SR 11-7: Guidance on Model Risk Management. (April 4, 2011).
The Office of the Comptroller of the Currency. Enterprise Risk Appetite Statement. (Nov. 20, 2019).
Brenda Boultwood. How to Develop an Enterprise Risk-Rating Approach. (Aug. 26, 2021). Global Association of Risk Professionals (garp.org). Accessed Jan. 4, 2023.
Virginia Eubanks. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, NY: St. Martin's Press, 2018.
GAO-17-63: Enterprise Risk Management: Selected Agencies' Experiences Illustrate Good Practices in Managing Risk.
NIST Risk Management Framework.
MAP 1.6
System requirements (e.g., "the system shall respect the privacy of its users") are elicited from and understood by relevant AI actors. Design decisions take socio-technical implications into account to address AI risks.
About
AI system development requirements may outpace documentation processes for traditional software. When written requirements are unavailable or incomplete, AI actors may inadvertently overlook business and stakeholder needs, over-rely on implicit human biases such as confirmation bias and groupthink, and maintain exclusive focus on computational requirements.
Eliciting system requirements, designing for end users, and considering societal impacts early in the design phase is a priority that can enhance AI systems' trustworthiness.
Suggested Actions
• Proactively incorporate trustworthy characteristics into system requirements.
• Establish mechanisms for regular communication and feedback between relevant AI
actors and internal or external stakeholders related to system design or deployment
decisions.
• Develop and standardize practices to assess potential impacts at all stages of the AI lifecycle, in collaboration with interdisciplinary experts, actors external to the team that developed or deployed the AI system, and potentially impacted communities.
• Include potentially impacted groups, communities and external entities (e.g., civil society organizations, research institutes, local community groups, and trade associations) in the formulation of priorities, definitions and outcomes during impact assessment activities.
• Conduct qualitative interviews with end user(s) to regularly evaluate expectations and design plans related to Human-AI configurations and tasks.
• Analyze dependencies between contextual factors and system requirements. List potential impacts that may arise from not fully considering the importance of trustworthiness characteristics in any decision making.
• Follow responsible design techniques in tasks such as software engineering, product management, and participatory engagement. Some examples for eliciting and documenting stakeholder requirements include product requirement documents (PRDs), user stories, user interaction/user experience (UI/UX) research, systems engineering, ethnography and related field methods.
• Conduct user research to understand individuals, groups and communities that will be impacted by the AI, their values and context, and the role of systemic and historical biases. Integrate learnings into decisions about data selection and representation.
Transparency & Documentation
Organizations can document the following
• What type of information is accessible on the design, operations, and limitations of the
AI system to external stakeholders, including end users, consumers, regulators, and
individuals impacted by use of the AI system?
• To what extent is this information sufficient and appropriate to promote transparency?
Promote transparency by enabling external stakeholders to access information on the
design, operation, and limitations of the AI system.
• To what extent has relevant information been disclosed regarding the use of AI systems,
such as (a) what the system is for, (b) what it is not for, (c) how it was designed, and (d)
what its limitations are? (Documentation and external communication can offer a way
for entities to provide transparency.)
• How will the relevant AI actor(s) address changes in accuracy and precision due to either an adversary's attempts to disrupt the AI system or unrelated changes in the operational/business environment?
• What metrics has the entity developed to measure performance of the AI system?
• What justifications, if any, has the entity provided for the assumptions, boundaries, and
limitations of the AI system?
AI Transparency Resources
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• Stakeholders in Explainable AI, Sep. 2018.
• High-Level Expert Group on Artificial Intelligence set up by the European Commission, Ethics Guidelines for Trustworthy AI.
References
National Academies of Sciences, Engineering, and Medicine. 2022. Fostering Responsible Computing Research: Foundations and Practices. Washington, DC: The National Academies Press.
Abeba Birhane, William S. Isaac, Vinodkumar Prabhakaran, Mark Diaz, Madeleine Clare Elish, Iason Gabriel and Shakir Mohamed. "Power to the People? Opportunities and Challenges for Participatory AI." Equity and Access in Algorithms, Mechanisms, and Optimization (2022).
Amit K. Chopra, Fabiano Dalpiaz, F. Başak Aydemir, et al. 2014. Protos: Foundations for engineering innovative sociotechnical systems. In 2014 IEEE 22nd International Requirements Engineering Conference (RE) (2014), 53-62.
Andrew D. Selbst, danah boyd, Sorelle A. Friedler, et al. 2019. Fairness and Abstraction in Sociotechnical Systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). Association for Computing Machinery, New York, NY, USA, 59-68.
Gordon Baxter and Ian Sommerville. 2011. Socio-technical systems: From design methods to systems engineering. Interacting with Computers, 23, 1 (Jan. 2011), 4-17.
Roel Dobbe, Thomas Krendl Gilbert, and Yonatan Mintz. 2021. Hard choices in artificial intelligence. Artificial Intelligence 300 (14 July 2021), 103555, ISSN 0004-3702.
Yilin Huang, Giacomo Poderi, Sanja Šćepanović, et al. 2019. Embedding Internet-of-Things in Large-Scale Socio-technical Systems: A Community-Oriented Design in Future Smart Grids. In The Internet of Things for Smart Urban Ecosystems (2019), 125-150. Springer, Cham.
Victor Udoewa. 2022. An introduction to radical participatory design: decolonising participatory design processes. Design Science 8. 10.1017/dsj.2022.24.
MAP 2.1
The specific task, and methods used to implement the task, that the AI system will support
is defined (e.g., classifiers, generative models, recommenders).
About
AI actors define the technical learning or decision-making task(s) an AI system is designed to accomplish, or the benefits that the system will provide. The clearer and narrower the task definition, the easier it is to map the system's benefits and risks, leading to more thorough risk management.
Suggested Actions
• Define and document AI system’s existing and potential learning task(s) along with
known assumptions and limitations.
Transparency & Documentation
Organizations can document the following
• To what extent has the entity clearly defined technical specifications and requirements
for the AI system?
• To what extent has the entity documented the AI system’s development, testing
methodology, metrics, and performance outcomes?
• How do the technical specifications and requirements align with the AI system’s goals
and objectives?
• Did your organization implement accountability-based practices in data management and protection (e.g., the PDPA and OECD Privacy Principles)?
• How are outputs marked to clearly show that they came from an AI?
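As one illustration of the last question above, provenance can be attached to system outputs with a simple labeling convention. The function name and label format here are hypothetical; production systems may instead use structured metadata fields or content credentials.

```python
def mark_ai_output(text: str, model_id: str) -> str:
    """Prepend a provenance label so downstream readers can see the
    content is AI-generated (the label format is an illustrative
    convention, not a standard)."""
    return f"[AI-generated by {model_id}] {text}"

print(mark_ai_output("Your application was routed for review.", "triage-model-v2"))
```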
AI Transparency Resources
• Datasheets for Datasets.
• WEF Model AI Governance Framework Assessment 2020.
• WEF Companion to the Model AI Governance Framework - 2020.
• ATARC Model Transparency Assessment (WD) – 2020.
• Transparency in Artificial Intelligence - S. Larsson and F. Heintz – 2020.
References
Brenda Leong. 2020. The Spectrum of Artificial Intelligence - An Infographic Tool. Future of Privacy Forum.
Jason Brownlee. 2020. A Tour of Machine Learning Algorithms. Machine Learning Mastery.
MAP 2.2
Information about the AI system’s knowledge limits and how system output may be utilized
and overseen by humans is documented. Documentation provides sufficient information to
assist relevant AI actors when making informed decisions and taking subsequent actions.
About
An AI lifecycle consists of many interdependent activities involving a diverse set of actors
that often do not have full visibility or control over other parts of the lifecycle and its
associated contexts or risks. The interdependencies between these activities, and among the
relevant AI actors and organizations, can make it difficult to reliably anticipate potential
impacts of AI systems. For example, early decisions in identifying the purpose and objective
of an AI system can alter its behavior and capabilities, and the dynamics of deployment
setting (such as end users or impacted individuals) can shape the positive or negative
impacts of AI system decisions. As a result, the best intentions within one dimension of the
AI lifecycle can be undermined via interactions with decisions and conditions in other, later
activities. This complexity and varying levels of visibility can introduce uncertainty. And, once deployed and in use, AI systems may sometimes perform poorly, manifest unanticipated negative impacts, or violate legal or ethical norms. These risks and incidents can result from a variety of factors. For example, downstream decisions can be influenced by end user over-trust or under-trust, and by other complexities related to AI-supported decision-making.
Anticipating, articulating, assessing and documenting AI systems’ knowledge limits and how
system output may be utilized and overseen by humans can help mitigate the uncertainty
associated with the realities of AI system deployments. Rigorous design processes include
defining system knowledge limits, which are confirmed and refined based on TEVV
processes.
Suggested Actions
• Document settings, environments and conditions that are outside the AI system’s
intended use.
• Design for end user workflows and toolsets, concept of operations, and explainability and interpretability criteria in conjunction with end user(s) and associated qualitative feedback.
• Plan and test human-AI configurations under close to real-world conditions and document results.
• Follow stakeholder feedback processes to determine whether a system achieved its documented purpose within a given use context, and whether end users can correctly comprehend system outputs or results.
• Document dependencies on upstream data and other AI systems, including if the specified system is an upstream dependency for another AI system or other data.
• Document connections the AI system or data will have to external networks (including the internet), financial markets, and critical infrastructure that have potential for negative externalities. Identify and document negative impacts as part of considering the broader risk thresholds and subsequent go/no-go deployment as well as post-deployment decommissioning decisions.
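The documentation items above can be captured in a lightweight structured record per system. The fields and example values here are hypothetical, sketched with Python dataclasses; organizations would extend the schema to match their own documentation policies.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class SystemKnowledgeLimits:
    """Minimal record of an AI system's knowledge limits, dependencies,
    and human oversight plan (illustrative schema only)."""
    system_name: str
    intended_use: str
    out_of_scope_conditions: list = field(default_factory=list)
    upstream_dependencies: list = field(default_factory=list)
    external_connections: list = field(default_factory=list)
    human_oversight: str = "unspecified"

# Hypothetical example system and values.
record = SystemKnowledgeLimits(
    system_name="loan-triage-assistant",
    intended_use="rank applications for manual review",
    out_of_scope_conditions=["fully automated denial decisions"],
    upstream_dependencies=["credit-bureau feed"],
    external_connections=["internal reporting network"],
    human_oversight="analyst reviews every ranked case",
)

# Serialize for inclusion in system documentation or review packets.
print(json.dumps(asdict(record), indent=2))
```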
Transparency & Documentation
Organizations can document the following
• Does the AI system provide sufficient information to assist the personnel to make an
informed decision and take actions accordingly?
• What type of information is accessible on the design, operations, and limitations of the
AI system to external stakeholders, including end users, consumers, regulators, and
individuals impacted by use of the AI system?
• Based on the assessment, did your organization implement the appropriate level of human involvement in AI-augmented decision-making?
AI Transparency Resources
• Datasheets for Datasets.
• WEF Model AI Governance Framework Assessment 2020.
• WEF Companion to the Model AI Governance Framework - 2020.
• ATARC Model Transparency Assessment (WD) – 2020.
• Transparency in Artificial Intelligence - S. Larsson and F. Heintz – 2020.
References
Context of use
International Standards Organization (ISO). 2019. ISO 9241-210:2019 Ergonomics of human-system interaction — Part 210: Human-centred design for interactive systems.
National Institute of Standards and Technology (NIST), Mary Theofanos, Yee-Yin Choong, et al. 2017. NIST Handbook 161 Usability Handbook for Public Safety Communications: Ensuring Successful Systems for First Responders.
Human-AI interaction
Committee on Human-System Integration Research Topics for the 711th Human Performance Wing of the Air Force Research Laboratory and the National Academies of Sciences, Engineering, and Medicine. 2022. Human-AI Teaming: State-of-the-Art and Research Needs. Washington, DC: National Academies Press.
Human Readiness Level Scale in the System Development Process, American National Standards Institute and Human Factors and Ergonomics Society, ANSI/HFES 400-2021.
Microsoft Responsible AI Standard, v2.
Saar Alon-Barkat and Madalina Busuioc. Human-AI Interactions in Public Sector Decision Making: "Automation Bias" and "Selective Adherence" to Algorithmic Advice. Journal of Public Administration Research and Theory, 2022, muac007.
Zana Buçinca, Maja Barbara Malaya, and Krzysztof Z. Gajos. 2021. To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making. Proc. ACM Hum.-Comput. Interact. 5, CSCW1, Article 188 (April 2021), 21 pages.
Mary L. Cummings. 2006. Automation and accountability in decision support system interface design. The Journal of Technology Studies 32(1): 23-31.
Engstrom, D. F., Ho, D. E., Sharkey, C. M., & Cuéllar, M. F. (2020). Government by algorithm: Artificial intelligence in federal administrative agencies. NYU School of Law, Public Law Research Paper, (20-54).
Susanne Gaube, Harini Suresh, Martina Raue, et al. 2021. Do as AI say: susceptibility in deployment of clinical decision-aids. npj Digital Medicine 4, Article 31 (2021).
Ben Green. 2021. The Flaws of Policies Requiring Human Oversight of Government Algorithms. Computer Law & Security Review 45 (26 Apr. 2021).
Ben Green and Amba Kak. 2021. The False Comfort of Human Oversight as an Antidote to A.I. Harm. (June 15, 2021).
Grgić-Hlača, N., Engel, C., & Gummadi, K. P. (2019). Human decision making with machine assistance: An experiment on bailing and jailing. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1-25.
Forough Poursabzi-Sangdeh, Daniel G Goldstein, Jake M Hofman, et al. 2021. Manipulating and Measuring Model Interpretability. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). Association for Computing Machinery, New York, NY, USA, Article 237, 1-52.
C. J. Smith (2019). Designing trustworthy AI: A human-machine teaming framework to guide development. arXiv preprint arXiv:1910.03515.
T. Warden, P. Carayon, et al. The National Academies Board on Human System Integration (BOHSI) Panel: Explainable AI, System Transparency, and Human Machine Teaming. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2019;63(1):631-635. doi:10.1177/1071181319631100.
MAP 2.3
Scientific integrity and TEVV considerations are identified and documented, including those
related to experimental design, data collection and selection (e.g., availability,
representativeness, suitability), system trustworthiness, and construct validation.
About
Standard testing and evaluation protocols provide a basis for confirming that a system is operating as designed and claimed. AI systems' complexities create challenges for traditional testing and evaluation methodologies, which tend to be designed for static or isolated system performance. Opportunities for risk continue well beyond design and deployment, into system operation and application of system-enabled decisions. Testing and evaluation methodologies and metrics therefore address a continuum of activities. TEVV is enhanced when key metrics for performance, safety, and reliability are interpreted in a socio-technical context and not confined to the boundaries of the AI system pipeline.
Other challenges for managing AI risks relate to dependence on large-scale datasets, which can raise data quality and validity concerns. The difficulty of finding the "right" data may lead AI actors to select datasets based more on accessibility and availability than on suitability for operationalizing the phenomenon that the AI system intends to support or inform. Such decisions could contribute to an environment where the data used in processes is not fully representative of the populations or phenomena being modeled, introducing downstream risks. Practices such as dataset reuse may also lead to a disconnect from the social contexts and time periods of the datasets' creation. This contributes to issues with the validity of the underlying dataset for providing proxies, measures, or predictors within the model.
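The representativeness concern above can be made concrete with a minimal check that compares category shares in a dataset against reference population shares. The categories, shares, and 5% threshold below are illustrative assumptions; a real validity analysis would use proper statistical tests and domain expertise.

```python
from collections import Counter

def representation_gaps(sample, reference_shares, tol=0.05):
    """Flag categories whose share in `sample` deviates from the
    reference population share by more than `tol`. An illustrative
    screening check, not a substitute for full validity analysis."""
    n = len(sample)
    counts = Counter(sample)
    gaps = {}
    for category, ref_share in reference_shares.items():
        share = counts.get(category, 0) / n
        if abs(share - ref_share) > tol:
            gaps[category] = round(share - ref_share, 3)
    return gaps

# Hypothetical: region labels in a training set vs. census shares.
sample = ["urban"] * 80 + ["rural"] * 20
reference = {"urban": 0.6, "rural": 0.4}
print(representation_gaps(sample, reference))  # {'urban': 0.2, 'rural': -0.2}
```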
Suggested Actions
• Identify and document experiment design and statistical techniques that are valid for testing complex socio-technical systems like AI, which involve human factors, emergent properties, and dynamic context(s) of use.
• Develop and apply TEVV protocols for models, system and its subcomponents,
deployment, and operation.
• Demonstrate and document that AI system performance and validation metrics are interpretable and unambiguous for downstream decision making tasks, and take socio-technical factors such as context of use into consideration.
• Identify and document assumptions, techniques, and metrics used for testing and
evaluation throughout the AI lifecycle including experimental design techniques for data
collection, selection, and management practices in accordance with data governance
policies established in GOVERN.
• Identify testing modules that can be incorporated throughout the AI lifecycle, and verify that processes enable corroboration by independent evaluators.
• Establish mechanisms for regular communication and feedback among relevant AI
actors and internal or external stakeholders related to the validity of design and
deployment assumptions.
• Establish mechanisms for regular communication and feedback between relevant AI actors and internal or external stakeholders related to the development of TEVV approaches throughout the lifecycle to detect and assess potentially harmful impacts.
• Document assumptions made and techniques used in data selection, curation,
preparation and analysis, including:
• identification of constructs and proxy targets,
• development of indices – especially those operationalizing concepts that are inherently unobservable (e.g., "hireability," "criminality," "lendability").
• Map adherence to policies that address data and construct validity, bias, privacy and
security for AI systems and verify documentation, oversight, and processes.
• Identify and document transparent methods (e.g. causal discovery methods) for
inferring causal relationships between constructs being modeled and dataset attributes
or proxies.
• Identify and document processes to understand and trace test and training data lineage
and its metadata resources for mapping risks.
• Document known limitations, risk mitigation efforts associated with, and methods used
for, training data collection, selection, labeling, cleaning, and analysis (e.g. treatment of
missing, spurious, or outlier data; biased estimators).
• Establish and document practices to check for capabilities that are in excess of those
that are planned for, such as emergent properties, and to revisit prior risk management
steps in light of any new capabilities.
• Establish processes to test and verify that design assumptions about the set of
deployment contexts continue to be accurate and sufficiently complete.
• Work with domain experts and other external AI actors to:
• Gain and maintain contextual awareness and knowledge about how human
behavior, organizational factors and dynamics, and society influence, and are
represented in, datasets, processes, models, and system output.
• Identify participatory approaches for responsible Human-AI configurations and oversight tasks, taking into account sources of cognitive bias.
• Identify techniques to manage and mitigate sources of bias (systemic, computational, human-cognitive) in computational models and systems, and the assumptions and decisions made in their development.
• Investigate and document potential negative impacts related to the full product lifecycle and associated processes that may conflict with organizational values and principles.
Transparency & Documentation
Organizations can document the following
• Are there any known errors, sources of noise, or redundancies in the data?
• Over what time-frame was the data collected? Does the collection time-frame match the creation time-frame?
• What is the variable selection and evaluation process?
• How was the data collected? Who was involved in the data collection process? If the
dataset relates to people (e.g., their attributes) or was generated by people, were they
informed about the data collection? (e.g., datasets that collect writing, photos,
interactions, transactions, etc.)
• As time passes and conditions change, is the training data still representative of the
operational environment?
• Why was the dataset created? (e.g., were there specific tasks in mind, or a specific gap
that needed to be filled?)
• How does the entity ensure that the data collected are adequate, relevant, and not
excessive in relation to the intended purpose?
AI Transparency Resources
• Datasheets for Datasets.
• WEF Model AI Governance Framework Assessment 2020.
• WEF Companion to the Model AI Governance Framework - 2020.
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• ATARC Model Transparency Assessment (WD) – 2020.
• Transparency in Artificial Intelligence - S. Larsson and F. Heintz – 2020.
References
Challenges with dataset selection
Alexandra Olteanu, Carlos Castillo, Fernando Diaz, and Emre Kiciman. 2019. Social Data:
Biases, Methodological Pitfalls, and Ethical Boundaries. Front. Big Data 2, 13 (11 July 2019).
Amandalynne Paullada, Inioluwa Deborah Raji, Emily M. Bender, et al. 2020. Data and its
(dis)contents: A survey of dataset development and use in machine learning research.
arXiv:2012.05345.
Catherine D'Ignazio and Lauren F. Klein. 2020. Data Feminism. The MIT Press, Cambridge,
MA.
Miceli, M., & Posada, J. (2022). The Data-Production Dispositif. ArXiv, abs/2205.11963.
Barbara Plank. 2016. What to do about non-standard (or non-canonical) language in NLP. arXiv:1608.07836.
Dataset and test, evaluation, validation and verification (TEVV) processes in AI system development
National Institute of Standards and Technology (NIST), Reva Schwartz, Apostol Vassilev, et
al. 2022. NIST Special Publication 1270 Towards a Standard for Identifying and Managing
Bias in Artificial Intelligence.
Inioluwa Deborah Raji, Emily M. Bender, Amandalynne Paullada, et al. 2021. AI and the
Everything in the Whole Wide World Benchmark. arXiv:2111.15366.
Statistical balance
Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 6464 (25 Oct. 2019), 447-453.
Amandalynne Paullada, Inioluwa Deborah Raji, Emily M. Bender, et al. 2020. Data and its (dis)contents: A survey of dataset development and use in machine learning research. arXiv:2012.05345.
Solon Barocas, Anhong Guo, Ece Kamar, et al. 2021. Designing Disaggregated Evaluations of AI Systems: Choices, Considerations, and Tradeoffs. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery, New York, NY, USA, 368-378.
Measurement and evaluation
Abigail Z. Jacobs and Hanna Wallach. 2021. Measurement and Fairness. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21). Association for Computing Machinery, New York, NY, USA, 375-385.
Ben Hutchinson, Negar Rostamzadeh, Christina Greer, et al. 2022. Evaluation Gaps in Machine Learning Practice. arXiv:2205.05256.
Laura Freeman. "Test and evaluation for artificial intelligence." Insight 23.1 (2020): 27-30.
Existing frameworks
National Institute of Standards and Technology. (2018). Framework for improving critical
infrastructure cybersecurity.
Kaitlin R. Boeckl and Naomi B. Lefkovitz. "NIST Privacy Framework: A Tool for Improving
Privacy Through Enterprise Risk Management, Version 1.0." National Institute of Standards
and Technology (NIST), January 16, 2020.
MAP 3.1
Potential benefits of intended AI system functionality and performance are examined and
documented.
About
AI systems have enormous potential to improve quality of life and enhance economic prosperity and security. Organizations are encouraged to define and document system purpose and utility, and its potential positive impacts and benefits beyond current known performance benchmarks.
It is encouraged that risk management and assessment of benefits and impacts include
processes for regular and meaningful communication with potentially affected groups and
communities. These stakeholders can provide valuable input related to systems’ benefits
and possible limitations. Organizations may differ in the types and number of stakeholders
with which they engage.
Other approaches such as human-centered design (HCD) and value-sensitive design (VSD) can help AI teams to engage broadly with individuals and communities. This type of engagement can enable AI teams to learn how a given technology may cause positive or negative impacts that were not originally considered or intended.
Suggested Actions
• Utilize participatory approaches and engage with system end users to understand and
document AI systems’ potential benefits, efficacy and interpretability of AI task output.
• Maintain awareness and documentation of the individuals, groups, or communities who
make up the system’s internal and external stakeholders.
• Verify that appropriate skills and practices are available in-house for carrying out participatory activities such as eliciting, capturing, and synthesizing user, operator and external feedback, and translating it for AI design and development functions.
• Establish mechanisms for regular communication and feedback between relevant AI
actors and internal or external stakeholders related to system design or deployment
decisions.
• Consider performance relative to human baseline metrics or other standard benchmarks.
• Incorporate feedback from end users, and potentially impacted individuals and communities, about perceived system benefits.
Transparency & Documentation
Organizations can document the following
• Have the benefits of the AI system been communicated to end users?
• Have the appropriate training material and disclaimers about how to adequately use the
AI system been provided to end users?
• Has your organization implemented a risk management system to address risks
involved in deploying the identified AI system (e.g. personnel risk or changes to
commercial objectives)?
AI Transparency Resources
• Intel.gov: AI Ethics Framework for Intelligence Community - 2020.
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• Assessment List for Trustworthy AI (ALTAI) - The High-Level Expert Group on AI – 2019. [LINK](https://altai.insight-centre.org/).
References
Roel Dobbe, Thomas Krendl Gilbert, and Yonatan Mintz. 2021. Hard choices in artificial intelligence. Artificial Intelligence 300 (14 July 2021), 103555, ISSN 0004-3702.
Samir Passi and Solon Barocas. 2019. Problem Formulation and Fairness. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). Association for Computing Machinery, New York, NY, USA, 39-48.
Vincent T. Covello. 2021. Stakeholder Engagement and Empowerment. In Communicating in
Risk, Crisis, and High Stress Situations (Vincent T. Covello, ed.), 87 -109.
Yilin Huang, Giacomo Poderi, Sanja Šćepanović, et al. 2019. Embedding Internet-of-Things in Large-Scale Socio-technical Systems: A Community-Oriented Design in Future Smart Grids. In The Internet of Things for Smart Urban Ecosystems (2019), 125-150. Springer, Cham.
Eloise Taysom and Nathan Crilly. 2017. Resilience in Sociotechnical Systems: The Perspectives of Multiple Stakeholders. She Ji: The Journal of Design, Economics, and Innovation, 3, 3 (2017), 165-182, ISSN 2405-8726.
MAP 3.2
Potential costs, including non-monetary costs, which result from expected or realized AI errors or system functionality and trustworthiness - as connected to organizational risk tolerance - are examined and documented.
About
Anticipating negative impacts of AI systems is a difficult task. Negative impacts can be due to many factors, such as system non-functionality or use outside of its operational limits, and may range from minor annoyance to serious injury, financial losses, or regulatory enforcement actions. AI actors can work with a broad set of stakeholders to improve their capacity for understanding systems' potential impacts – and subsequently – systems' risks.
Suggested Actions
• Perform context analysis to map potential negative impacts arising from not integrating
trustworthiness characteristics. When negative impacts are not direct or obvious, AI
actors can engage with stakeholders external to the team that developed or deployed
the AI system, and potentially impacted communities, to examine and document:
• Who could be harmed?
• What could be harmed?
• When could harm arise?
• How could harm arise?
• Identify and implement procedures for regularly evaluating the qualitative and quantitative costs of internal and external AI system failures. Develop actions to prevent, detect, and/or correct potential risks and related impacts. Regularly evaluate failure costs to inform go/no-go deployment decisions throughout the AI system lifecycle.
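The failure-cost evaluation described above can be sketched as a simple probability-weighted calculation feeding a go/no-go decision. This is a minimal illustrative sketch, not part of the guidance itself: the function names, failure modes, probabilities, costs, and tolerance below are all hypothetical assumptions for demonstration.

```python
# Hypothetical sketch: estimating expected failure costs to inform a
# go/no-go deployment decision. All names, rates, and thresholds are
# illustrative assumptions, not prescribed values.

def expected_failure_cost(failure_modes):
    """Sum probability-weighted costs over documented failure modes."""
    return sum(f["probability"] * f["cost"] for f in failure_modes)

def go_no_go(failure_modes, cost_tolerance):
    """Return 'go' only if the expected cost stays within the
    organization's documented risk tolerance; otherwise 'no-go'."""
    if expected_failure_cost(failure_modes) <= cost_tolerance:
        return "go"
    return "no-go"

# Hypothetical failure modes with assumed probabilities and costs (USD).
modes = [
    {"name": "mispredicted eligibility", "probability": 0.25, "cost": 40_000},
    {"name": "service outage", "probability": 0.125, "cost": 8_000},
]

print(expected_failure_cost(modes))            # 11000.0
print(go_no_go(modes, cost_tolerance=20_000))  # go
```

In practice the probabilities and cost figures would come from the documented evaluation procedures, and the tolerance from the organizational risk tolerance referenced in MAP 3.2.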
Transparency & Documentation
Organizations can document the following
• To what extent does the system/entity consistently measure progress towards stated
goals and objectives?
• To what extent can users or parties affected by the outputs of the AI system test the AI
system and provide feedback?
• Have you documented and explained that machine errors may differ from human
errors?
AI Transparency Resources
• Intel.gov: AI Ethics Framework for Intelligence Community - 2020.
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• Assessment List for Trustworthy AI (ALTAI) - The High-Level Expert Group on AI – 2019. [LINK](https://altai.insight-centre.org/)
References
Abagayle Lee Blank. 2019. Computer vision machine learning and future-oriented ethics. Honors Project. Seattle Pacific University (SPU), Seattle, WA.
Margarita Boyarskaya, Alexandra Olteanu, and Kate Crawford. 2020. Overcoming Failures of Imagination in AI Infused System Development and Deployment. arXiv:2011.13416.
Jeff Patton. 2014. User Story Mapping. O'Reilly, Sebastopol, CA.
Margarita Boenig-Liptsin, Anissa Tanweer, and Ari Edmundson. 2022. Data Science Ethos Lifecycle: Interplay of ethical thinking and data science practice. Journal of Statistics and Data Science Education. DOI: 10.1080/26939169.2022.2089411.
J. Cohen, D. S. Katz, M. Barker, N. Chue Hong, R. Haines and C. Jay. "The Four Pillars of Research Software Engineering," in IEEE Software, vol. 38, no. 1, pp. 97–105, Jan.–Feb. 2021, doi: 10.1109/MS.2020.2973362.
National Academies of Sciences, Engineering, and Medicine. 2022. Fostering Responsible Computing Research: Foundations and Practices. Washington, DC: The National Academies Press.
MAP 3.3
Targeted application scope is specified and documented based on the system’s capability,
established context, and AI system categorization.
About
Systems that function in a narrow scope tend to enable better mapping, measurement, and management of risks in the learning or decision-making tasks and the system context. A narrow application scope also helps ease TEVV functions and related resources within an organization.
For example, large language models or open-ended chatbot systems that interact with the public on the internet have a large number of risks that may be difficult to map, measure, and manage due to the variability from both the decision-making task and the operational context. Instead, a task-specific chatbot utilizing templated responses that follow a defined “user journey” is a scope that can be more easily mapped, measured, and managed.
Suggested Actions
• Consider narrowing contexts for system deployment, including factors related to:
• How outcomes may directly or indirectly affect users, groups, communities, and the environment.
• Length of time the system is deployed in between re-trainings.
• Geographical regions in which the system operates.
• Dynamics related to community standards or likelihood of system misuse or abuse (either purposeful or unanticipated).
• How AI system features and capabilities can be utilized within other applications, or in place of other existing processes.
• Engage AI actors from legal and procurement functions when specifying target
application scope.
Transparency & Documentation
Organizations can document the following
• To what extent has the entity clearly defined technical specifications and requirements
for the AI system?
• How do the technical specifications and requirements align with the AI system’s goals
and objectives?
AI Transparency Resources
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• Assessment List for Trustworthy AI (ALTAI) - The High-Level Expert Group on AI – 2019. [LINK](https://altai.insight-centre.org/)
References
Mark J. Van der Laan and Sherri Rose. 2018. Targeted Learning in Data Science. Cham: Springer International Publishing.
Alice Zheng. 2015. Evaluating Machine Learning Models. O'Reilly.
Brenda Leong and Patrick Hall. 2021. 5 things lawyers should know about artificial intelligence. ABA Journal.
UK Centre for Data Ethics and Innovation, “The roadmap to an effective AI assurance ecosystem”.
MAP 3.4
Processes for operator and practitioner proficiency with AI system performance and
trustworthiness – and relevant technical standards and certifications – are defined,
assessed and documented.
About
Human-AI configurations can span from fully autonomous to fully manual. AI systems can autonomously make decisions, defer decision-making to a human expert, or be used by a human decision-maker as an additional opinion. In some scenarios, professionals with expertise in a specific domain work in conjunction with an AI system towards a specific end goal—for example, a decision about another individual(s). Depending on the purpose of the system, the expert may interact with the AI system but is rarely part of the design or development of the system itself. These experts are not necessarily familiar with machine learning, data science, computer science, or other fields traditionally associated with AI design or development and – depending on the application – will likely not require such familiarity. For example, for AI systems deployed in health care delivery, the experts are the physicians, who bring their expertise about medicine—not data science, data modeling and engineering, or other computational factors. The challenge in these settings is not educating the end user about AI system capabilities, but rather leveraging, and not replacing, practitioner domain expertise.
Questions remain about how to configure humans and automation for managing AI risks. Risk management is enhanced when organizations that design, develop, or deploy AI systems for use by professional operators and practitioners:
• are aware of these knowledge limitations and strive to identify risks in human-AI interactions and configurations across all contexts, and the potential resulting impacts,
• define and differentiate the various human roles and responsibilities when using or interacting with AI systems, and
• determine proficiency standards for AI system operation in the proposed context of use, as enumerated in MAP-1 and established in GOVERN-3.2.
Suggested Actions
• Identify and declare AI system features and capabilities that may affect downstream AI actors’ decision-making in deployment and operational settings, for example how system features and capabilities may activate known risks in various human-AI configurations, such as selective adherence.
• Identify skills and proficiency requirements for operators, practitioners and other domain experts that interact with AI systems.
• Develop AI system operational documentation for AI actors in deployed and operational environments, including information about known risks, mitigation criteria, and trustworthy characteristics enumerated in MAP-1.
• Define and develop training materials for proposed end users, practitioners and
operators about AI system use and known limitations.
• Define and develop certification procedures for operating AI systems within defined
contexts of use, and information about what exceeds operational boundaries.
• Include operators, practitioners and end users in AI system prototyping and testing
activities to help inform operational boundaries and acceptable performance. Conduct
testing activities under scenarios similar to deployment conditions.
• Verify model output provided to AI system operators, practitioners and end users is interactive, and specified to context and user requirements defined in MAP-1.
• Verify AI system output is interpretable and unambiguous for downstream decision-making tasks.
• Design AI system explanation complexity to match the level of problem and context complexity.
• Verify that design principles are in place for safe operation by AI actors in decision-making environments.
• Develop approaches to track human-AI configurations, operator, and practitioner outcomes for integration into continual improvement.
Transparency & Documentation
Organizations can document the following
• What policies has the entity developed to ensure the use of the AI system is consistent
with its stated values and principles?
• How will the accountable human(s) address changes in accuracy and precision due to
either an adversary’s attempts to disrupt the AI or unrelated changes in
operational/business environment, which may impact the accuracy of the AI?
• How does the entity assess whether personnel have the necessary skills, training,
resources, and domain knowledge to fulfill their assigned responsibilities?
• Are the relevant staff dealing with AI systems properly trained to interpret AI model
output and decisions as well as to detect and manage bias in data?
• What metrics has the entity developed to measure performance of various components?
AI Transparency Resources
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• WEF Companion to the Model AI Governance Framework - 2020.
References
National Academies of Sciences, Engineering, and Medicine. 2022. Human-AI Teaming: State-of-the-Art and Research Needs. Washington, DC: The National Academies Press.
Human Readiness Level Scale in the System Development Process, American National Standards Institute and Human Factors and Ergonomics Society, ANSI/HFES 400-2021.
Human-Machine Teaming Systems Engineering Guide. P McDermott, C Dominguez, N Kasdaglis, M Ryan, I Trahan, A Nelson. MITRE Corporation, 2018.
Saar Alon-Barkat and Madalina Busuioc. Human–AI Interactions in Public Sector Decision Making: “Automation Bias” and “Selective Adherence” to Algorithmic Advice. Journal of Public Administration Research and Theory, 2022, muac007.
Breana M. Carter-Browne, Susannah B. F. Paletz, Susan G. Campbell, Melissa J. Carraway, Sarah H. Vahlkamp, Jana Schwartz, Polly O’Rourke. “There is No ‘AI’ in Teams: A Multidisciplinary Framework for AIs to Work in Human Teams.” Applied Research Laboratory for Intelligence and Security (ARLIS) Report, June 2021.
R Crootof, ME Kaminski, and WN Price II. Humans in the Loop (March 25, 2022). Vanderbilt Law Review, Forthcoming 2023, U of Colorado Law Legal Studies Research Paper No. 22-10, U of Michigan Public Law Research Paper No. 22-011.
S Mo Jones-Jang and Yong Jin Park. How do people react to AI failure? Automation bias, algorithmic aversion, and perceived controllability. Journal of Computer-Mediated Communication, Volume 28, Issue 1, January 2023, zmac029.
A Knack, R Carter and A Babuta. "Human-Machine Teaming in Intelligence Analysis: Requirements for developing trust in machine learning systems." CETaS Research Reports (December 2022).
SD Ramchurn, S Stein, NR Jennings. Trustworthy human-AI partnerships. iScience. 2021;24(8):102891. Published 2021 Jul 24. doi:10.1016/j.isci.2021.102891.
M. Veale, M. Van Kleek, and R. Binns. “Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems – CHI ’18. Montreal, QC, Canada: ACM Press, 2018, pp. 1–14.
MAP 3.5
Processes for human oversight are defined, assessed, and documented in accordance with organizational policies from the GOVERN function.
About
As AI systems have evolved in accuracy and precision, computational systems have moved from being used purely for decision support—or for explicit use by and under the control of a human operator—to automated decision making with limited input from humans. Computational decision support systems augment another, typically human, system in making decisions. These types of configurations increase the likelihood of outputs being produced with little human involvement.
Defining and differentiating various human roles and responsibilities for AI systems’ governance, and differentiating AI system overseers and those using or interacting with AI systems, can enhance AI risk management activities.
In critical systems, high-stakes settings, and systems deemed high-risk, it is of vital importance to evaluate risks and the effectiveness of oversight procedures before an AI system is deployed.
Ultimately, AI system oversight is a shared responsibility, and attempts to properly authorize or govern oversight practices will not be effective without organizational buy-in and accountability mechanisms, for example those suggested in the GOVERN function.
Suggested Actions
• Identify and document AI systems’ features and capabilities that require human oversight, in relation to operational and societal contexts, trustworthy characteristics, and risks identified in MAP-1.
• Establish practices for AI systems’ oversight in accordance with policies developed in GOVERN-1.
• Define and develop training materials for relevant AI Actors about AI system performance, context of use, known limitations and negative impacts, and suggested warning labels.
• Include relevant AI Actors in AI system prototyping and testing activities. Conduct testing activities under scenarios similar to deployment conditions.
• Evaluate AI system oversight practices for validity and reliability. When oversight practices undergo extensive updates or adaptations, retest, evaluate results, and course correct as necessary.
• Verify that model documents contain interpretable descriptions of system mechanisms, enabling oversight personnel to make informed, risk-based decisions about system risks.
Transparency & Documentation
Organizations can document the following
• What are the roles, responsibilities, and delegation of authorities of personnel involved
in the design, development, deployment, assessment and monitoring of the AI system?
• How does the entity assess whether personnel have the necessary skills, training,
resources, and domain knowledge to fulfill their assigned responsibilities?
• Are the relevant staff dealing with AI systems properly trained to interpret AI model
output and decisions as well as to detect and manage bias in data?
• To what extent has the entity documented the AI system’s development, testing
methodology, metrics, and performance outcomes?
AI Transparency Resources
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
References
Ben Green, “The Flaws of Policies Requiring Human Oversight of Government Algorithms,”
SSRN Journal, 2021.
Luciano Cavalcante Siebert, Maria Luce Lupetti, Evgeni Aizenberg, Niek Beckers, Arkady Zgonnikov, Herman Veluwenkamp, David Abbink, Elisa Giaccardi, Geert-Jan Houben, Catholijn Jonker, Jeroen van den Hoven, Deborah Forster, and Reginald Lagendijk (2021). Meaningful human control: actionable properties for AI system development. AI and Ethics.
Mary Cummings (2014). Automation and Accountability in Decision Support System Interface Design. The Journal of Technology Studies. 32. 10.21061/jots.v32i1.a.4.
Madeleine Clare Elish (2016). Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction (WeRobot 2016). SSRN Electronic Journal. 10.2139/ssrn.2757236.
R Crootof, ME Kaminski, and WN Price II. Humans in the Loop (March 25, 2022). Vanderbilt Law Review, Forthcoming 2023, U of Colorado Law Legal Studies Research Paper No. 22-10, U of Michigan Public Law Research Paper No. 22-011. [LINK](https://ssrn.com/abstract=4066781)
Bogdana Rakova, Jingying Yang, Henriette Cramer, and Rumman Chowdhury (2020). Where Responsible AI meets Reality. Proceedings of the ACM on Human-Computer Interaction, 5, 1–23.
MAP 4.1
Approaches for mapping AI technology and legal risks of its components – including the use of third-party data or software – are in place, followed, and documented, as are risks of infringement of a third party’s intellectual property or other rights.
About
Technologies and personnel from third parties are another potential source of risk to consider during AI risk management activities. Such risks may be difficult to map since risk priorities or tolerances may not be the same as those of the deployer organization.
For example, the use of pre-trained models, which tend to rely on large uncurated datasets or often have undisclosed origins, has raised concerns about privacy, bias, and unanticipated effects, along with the possible introduction of increased levels of statistical uncertainty, difficulty with reproducibility, and issues with scientific validity.
Suggested Actions
• Review audit reports, testing results, product roadmaps, warranties, terms of service, end user license agreements, contracts, and other documentation related to third-party entities to assist in value assessment and risk management activities.
• Review third-party software release schedules and software change management plans (hotfixes, patches, updates, forward- and backward-compatibility guarantees) for irregularities that may contribute to AI system risks.
• Inventory third-party material (hardware, open-source software, foundation models, open-source data, proprietary software, proprietary data, etc.) required for system implementation and maintenance.
• Review redundancies related to third-party technology and personnel to assess potential risks due to lack of adequate support.
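The inventory and redundancy-review actions above could be supported by a lightweight component registry. The sketch below is a hypothetical illustration only: the schema, field names, and example entries are assumptions, not a prescribed format.

```python
# Hypothetical sketch of a third-party component inventory supporting the
# actions above. Fields and example entries are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ThirdPartyComponent:
    name: str
    kind: str                  # e.g. "foundation model", "open-source software"
    license: str
    support_contact: str = ""  # an empty value flags a potential support gap
    known_risks: list = field(default_factory=list)

def support_gaps(inventory):
    """Return components with no recorded support contact,
    a possible risk due to lack of adequate support."""
    return [c.name for c in inventory if not c.support_contact]

# Hypothetical inventory entries.
inventory = [
    ThirdPartyComponent("example-llm", "foundation model", "custom",
                        known_risks=["undisclosed training data"]),
    ThirdPartyComponent("libfoo", "open-source software", "MIT",
                        support_contact="maintainers@example.org"),
]

print(support_gaps(inventory))  # ['example-llm']
```

In a real organization, entries would also capture the audit reports, release schedules, and contractual documentation referenced in the actions above.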
Transparency & Documentation
Organizations can document the following
• Did you establish a process for third parties (e.g. suppliers, end users, subjects,
distributors/vendors or workers) to report potential vulnerabilities, risks or biases in
the AI system?
• If your organization obtained datasets from a third party, did your organization assess
and manage the risks of using such datasets?
• How will the results be independently verified?
AI Transparency Resources
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• Intel.gov: AI Ethics Framework for Intelligence Community - 2020.
• WEF Model AI Governance Framework Assessment 2020.
References
Language models
Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21). Association for Computing Machinery, New York, NY, USA, 610–623.
Julia Kreutzer, Isaac Caswell, Lisa Wang, et al. 2022. Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets. Transactions of the Association for Computational Linguistics 10 (2022), 50–72.
Laura Weidinger, Jonathan Uesato, Maribeth Rauh, et al. 2022. Taxonomy of Risks posed by Language Models. In 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22). Association for Computing Machinery, New York, NY, USA, 214–229.
Office of the Comptroller of the Currency. 2021. Comptroller's Handbook: Model Risk Management, Version 1.0, August 2021.
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, et al. 2021. On the Opportunities and Risks of Foundation Models. arXiv:2108.07258.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. “Emergent Abilities of Large Language Models.” arXiv:2206.07682 (2022).
MAP 4.2
Internal risk controls for components of the AI system, including third-party AI technologies, are identified and documented.
About
In the course of their work, AI actors often utilize open-source, or otherwise freely available, third-party technologies – some of which may have privacy, bias, and security risks. Organizations may consider internal risk controls for these technology sources and build up practices for evaluating third-party material prior to deployment.
Suggested Actions
• Track third parties that prevent or hamper risk-mapping efforts, treating such behavior as an indication of increased risk.
• Supply resources such as model documentation templates and software safelists to assist in third-party technology inventory and approval activities.
• Review third-party material (including data and models) for risks related to bias, data privacy, and security vulnerabilities.
• Apply traditional technology risk controls – such as procurement, security, and data privacy controls – to all acquired third-party technologies.
Transparency & Documentation
Organizations can document the following
• Can the AI system be audited by independent third parties?
• To what extent do these policies foster public trust and confidence in the use of the AI
system?
• Are mechanisms established to facilitate the AI system’s auditability (e.g. traceability of
the development process, the sourcing of training data and the logging of the AI
system’s processes, outcomes, positive and negative impact)?
AI Transparency Resources
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• Intel.gov: AI Ethics Framework for Intelligence Community - 2020.
• WEF Model AI Governance Framework Assessment 2020.
• Assessment List for Trustworthy AI (ALTAI) - The High-Level Expert Group on AI - 2019. [LINK](https://altai.insight-centre.org/)
References
Office of the Comptroller of the Currency. 2021. Comptroller's Handbook: Model Risk
Management, Version 1.0, August 2021. Retrieved on July 7, 2022.
Proposed Interagency Guidance on Third-Party Relationships: Risk Management, 2021.
Kang, D., Raghavan, D., Bailis, P.D., and Zaharia, M.A. (2020). Model Assertions for Monitoring and Improving ML Models. arXiv:2003.01668.
MAP 5.1
Likelihood and magnitude of each identified impact (both potentially beneficial and
harmful) based on expected use, past uses of AI systems in similar contexts, public incident
reports, feedback from those external to the team that developed or deployed the AI system,
or other data are identified and documented.
About
AI actors can evaluate, document and triage the likelihood of AI system impacts identified in MAP 5.1. Likelihood estimates may then be assessed and judged for go/no-go decisions about deploying an AI system. If an organization decides to proceed with deploying the system, the likelihood and magnitude estimates can be used to assign TEVV resources appropriate for the risk level.
Suggested Actions
• Establish assessment scales for measuring AI systems’ impact. Scales may be qualitative, such as red-amber-green (RAG), or may entail simulations or econometric approaches. Document and apply scales uniformly across the organization’s AI portfolio.
• Apply TEVV regularly at key stages in the AI lifecycle, connected to system impacts and frequency of system updates.
• Identify and document likelihood and magnitude of system benefits and negative impacts in relation to trustworthiness characteristics.
• Establish processes for red teaming to identify and connect system limitations to AI lifecycle stage(s) and potential downstream impacts.
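A qualitative red-amber-green (RAG) assessment scale of the kind mentioned above can be sketched as a likelihood-by-magnitude scoring rule. This is a minimal hypothetical illustration: the 1-5 scoring scale, the cut-offs, and the interpretations in the comments are assumptions an organization would calibrate and document for itself.

```python
# Hypothetical sketch of a qualitative red-amber-green (RAG) impact
# assessment scale. The scoring ranges and cut-offs are assumptions.

def rag_rating(likelihood, magnitude):
    """Map 1-5 likelihood and magnitude scores to a RAG rating."""
    score = likelihood * magnitude  # combined score in the range 1..25
    if score >= 15:
        return "red"    # escalate; likely no-go without mitigation
    if score >= 6:
        return "amber"  # proceed only with documented mitigations
    return "green"      # within tolerance; continue monitoring

print(rag_rating(likelihood=5, magnitude=4))  # red
print(rag_rating(likelihood=2, magnitude=4))  # amber
print(rag_rating(likelihood=1, magnitude=3))  # green
```

Applying one documented rule like this uniformly across an AI portfolio, as the action suggests, makes ratings comparable between systems.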
Transparency & Documentation
Organizations can document the following
• Which population(s) does the AI system impact?
• What assessments has the entity conducted on trustworthiness characteristics, for example data security and privacy impacts associated with the AI system?
• Can the AI system be tested by independent third parties?
AI Transparency Resources
• Datasheets for Datasets.
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• AI policies and initiatives, in Artificial Intelligence in Society, OECD, 2019.
• Intel.gov: AI Ethics Framework for Intelligence Community - 2020.
• Assessment List for Trustworthy AI (ALTAI) - The High-Level Expert Group on AI - 2019. [LINK](https://altai.insight-centre.org/)
References
Emilio Gómez-González and Emilia Gómez. 2020. Artificial intelligence in medicine and healthcare. Joint Research Centre (European Commission).
Artificial Intelligence Incident Database. 2022.
Anthony M. Barrett, Dan Hendrycks, Jessica Newman and Brandie Nonnecke. “Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks.” arXiv:2206.08966 (2022).
Ganguli, D., et al. (2022). Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned. arXiv. https://arxiv.org/abs/2209.07858
Upol Ehsan, Q. Vera Liao, Samir Passi, Mark O. Riedl, and Hal Daumé. 2024. Seamful XAI: Operationalizing Seamful Design in Explainable AI. Proc. ACM Hum.-Comput. Interact. 8, CSCW1, Article 119. https://doi.org/10.1145/3637396
MAP 5.2
Practices and personnel for supporting regular engagement with relevant AI actors and
integrating feedback about positive, negative, and unanticipated impacts are in place and
documented.
About
AI systems are socio-technical in nature and can have positive, neutral, or negative implications that extend beyond their stated purpose. Negative impacts can be wide-ranging and affect individuals, groups, communities, organizations, and society, as well as the environment and national security.
Organizations can create a baseline for system monitoring to increase opportunities for detecting emergent risks. After an AI system is deployed, engaging different stakeholder groups – who may be aware of, or experience, benefits or negative impacts that are unknown to AI actors involved in the design, development and deployment activities – allows organizations to understand and monitor system benefits and potential negative impacts more readily.
Suggested Actions
• Establish and document stakeholder engagement processes at the earliest stages of
system formulation to identify potential impacts from the AI system on individuals,
groups, communities, organizations, and society.
• Employ methods such as value sensitive design (VSD) to identify misalignments
between organizational and societal values, and system implementation and impact.
• Identify approaches to engage, capture, and incorporate input from system end users
and other key stakeholders to assist with continuous monitoring for potential impacts
and emergent risks.
• Incorporate quantitative, qualitative, and mixed methods in the assessment and documentation of potential impacts to individuals, groups, communities, organizations, and society.
• Identify a team (internal or external) that is independent of AI design and development
functions to assess AI system benefits, positive and negative impacts and their
likelihood and magnitude.
• Evaluate and document stakeholder feedback to assess potential impacts for actionable
insights regarding trustworthiness characteristics and changes in design approaches
and principles.
• Develop TEVV procedures that incorporate socio-technical elements and methods and plan to normalize across organizational culture. Regularly review and refine TEVV processes.
Transparency & Documentation
Organizations can document the following
• If the AI system relates to people, does it unfairly advantage or disadvantage a
particular social group? In what ways? How was this managed?
• If the AI system relates to other ethically protected groups, have appropriate obligations
been met? (e.g., medical data might include information collected from animals)
• If the AI system relates to people, could this dataset expose people to harm or legal action? (e.g., financial, social, or otherwise) What was done to mitigate or reduce the potential for harm?
AI Transparency Resources
• Datasheets for Datasets.
• GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
• AI policies and initiatives, in Artificial Intelligence in Society, OECD, 2019.
• Intel.gov: AI Ethics Framework for Intelligence Community - 2020.
• Assessment List for Trustworthy AI (ALTAI) - The High-Level Expert Group on AI - 2019. [LINK](https://altai.insight-centre.org/)
References
Susanne Vernim, Harald Bauer, Erwin Rauch, et al. 2022. A value sensitive design approach for designing AI-based worker assistance systems in manufacturing. Procedia Comput. Sci. 200, C (2022), 505–516.
Harini Suresh and John V. Guttag. 2020. A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. arXiv:1901.10002.
Margarita Boyarskaya, Alexandra Olteanu, and Kate Crawford. 2020. Overcoming Failures of Imagination in AI Infused System Development and Deployment. arXiv:2011.13416.
Konstantinia Charitoudi and Andrew Blyth. A Socio-Technical Approach to Cyber Risk Management and Impact Assessment. Journal of Information Security 4, 1 (2013), 33–41.
Raji, I.D., Smart, A., White, R.N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., and Barnes, P. (2020). Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency.
Emanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, Madeleine Clare Elish, and Jacob Metcalf. 2021. Assembling Accountability: Algorithmic Impact Assessment for the Public Interest. Data & Society. Accessed July 14, 2022.
Shari Trewin (2018). AI Fairness for People with Disabilities: Point of View. arXiv:1811.10670.
Ada Lovelace Institute. 2022. Algorithmic Impact Assessment: A Case Study in Healthcare. Accessed July 14, 2022.
Microsoft Responsible AI Impact Assessment Template. 2022. Accessed July 14, 2022.
Microsoft Responsible AI Impact Assessment Guide. 2022. Accessed July 14, 2022.
Microsoft Responsible AI Standard, v2.
Microsoft Research AI Fairness Checklist.
PEAT AI & Disability Inclusion Toolkit – Risks of Bias and Discrimination in AI Hiring Tools.
MEASURE
Measure
Appropriate methods and metrics are identified and applied.
MEASURE 1.1
Approaches and metrics for measurement of AI risks enumerated during the Map function
are selected for implementation starting with the most significant AI risks. The risks or
trustworthiness characteristics that will not – or cannot – be measured are properly
documented.
About
The development and utility of trustworthy AI systems depends on reliable measurements and evaluations of underlying technologies and their use. Compared with traditional software systems, AI technologies bring new failure modes and an inherent dependence on training data and methods, which directly tie to data quality and representativeness. Additionally, AI systems are inherently socio-technical in nature, meaning they are influenced by societal dynamics and human behavior. AI risks – and benefits – can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed. In other words, what should be measured depends on the purpose, audience, and needs of the evaluations.
These considerations influence the selection of approaches and metrics for measurement of the AI risks enumerated during the Map function. The AI landscape is evolving, and so are the methods and metrics for AI measurement. The evolution of metrics is key to maintaining the efficacy of the measures.
Suggested Actions
• Establish approaches for detecting, tracking and measuring known risks, errors,
incidents or negative impacts.
• Identify testing procedures and metrics to demonstrate whether or not the system is fit
for purpose and functioning as claimed.
• Identify testing procedures and metrics to demonstrate AI system trustworthiness
• Define acceptable limits for system performance (e.g. distribution of errors), and
include course correction suggestions if/when the system performs beyond acceptable
limits.
• Define metrics for, and regularly assess, AI actor competency for effective system
operation,
• Identify transparency metrics to assess whether stakeholders have access to necessary
information about system design, development, deployment, use, and evaluation.
• Utilize accountability metrics to determine whether AI designers, developers, and
deployers maintain clear and transparent lines of responsibility and are open to
inquiries.
• Document metric selection criteria and include considered but unused metrics.
• Monitor AI system external inputs including training data, models developed for other
contexts, system components reused from other contexts, and third-party tools and
resources.
• Report metrics to inform assessments of system generalizability and reliability.
• Assess and document pre- vs post-deployment system performance. Include existing
and emergent risks.
• Document risks or trustworthiness characteristics identified in the Map function that
will not be measured, including justification for non-measurement.
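The "acceptable limits" action above can be sketched in code. This is an illustrative example only, not part of the framework: the metric names and thresholds are invented, and a real deployment would source them from documented organizational risk tolerances.

```python
# Hypothetical sketch: compare observed performance metrics against
# pre-defined acceptable limits and surface violations for course
# correction. Metric names and thresholds are illustrative assumptions.

ACCEPTABLE_LIMITS = {
    "false_positive_rate": 0.05,  # assumed threshold
    "false_negative_rate": 0.10,  # assumed threshold
}

def check_performance(observed: dict) -> list:
    """Return (metric, observed_value, limit) tuples that exceed limits."""
    violations = []
    for metric, limit in ACCEPTABLE_LIMITS.items():
        value = observed.get(metric)
        if value is not None and value > limit:
            violations.append((metric, value, limit))
    return violations

# Usage: feed in metrics observed during testing or in production.
violations = check_performance({"false_positive_rate": 0.08,
                                "false_negative_rate": 0.04})
```

When a violation is returned, the course-correction suggestions documented alongside the limits would determine the response (e.g., retraining, rollback, or human review).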
Transparency & Documentation
Organizations can document the following
• How will the appropriate performance metrics, such as accuracy, of the AI be monitored
after the AI is deployed?
• What corrective actions has the entity taken to enhance the quality, accuracy, reliability,
and representativeness of the data?
• Are there recommended data splits or evaluation measures? (e.g., training,
development, testing; accuracy/AUC)
• Did your organization address usability problems and test whether user interfaces
served their intended purposes?
• What testing, if any, has the entity conducted on the AI system to identify errors and
limitations (e.g., manual vs. automated, adversarial, and stress testing)?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
• Datasheets for Datasets.
References
Sara R. Jordan. “Designing Artificial Intelligence Review Boards: Creating Risk Metrics for
Review of AI.” 2019 IEEE International Symposium on Technology and Society (ISTAS),
2019.
IEEE. “IEEE-1012-2016: IEEE Standard for System, Software, and Hardware Verification and
Validation.” IEEE Standards Association.
ACM Technology Policy Council. “Statement on Principles for Responsible Algorithmic
Systems.” Association for Computing Machinery (ACM), October 26, 2022.
Perez, E., et al. (2022). Discovering Language Model Behaviors with Model-Written
Evaluations. arXiv. https://arxiv.org/abs/2212.09251
Ganguli, D., et al. (2022). Red Teaming Language Models to Reduce Harms: Methods, Scaling
Behaviors, and Lessons Learned. arXiv. https://arxiv.org/abs/2209.07858
David Piorkowski, Michael Hind, and John Richards. "Quantitative AI Risk Assessments:
Opportunities and Challenges." arXiv preprint, submitted January 11, 2023.
Daniel Schiff, Aladdin Ayesh, Laura Musikanski, and John C. Havens. “IEEE 7010: A New
Standard for Assessing the Well-Being Implications of Artificial Intelligence.” 2020 IEEE
International Conference on Systems, Man, and Cybernetics (SMC), 2020.
MEASURE 1.2
Appropriateness of AI metrics and effectiveness of existing controls is regularly assessed
and updated including reports of errors and impacts on affected communities.
About
Different AI techniques and tasks, such as neural networks or natural language processing,
benefit from different evaluation techniques. The use case and the particular settings in
which the AI system is used also affect the appropriateness of evaluation techniques.
Changes in the operational setting, data drift, and model drift are among the factors that
make regularly assessing and updating the appropriateness and effectiveness of AI metrics
important for reliable AI system measurement.
Suggested Actions
• Assess external validity of all measurements (e.g., the degree to which measurements
taken in one context can generalize to other contexts).
• Assess effectiveness of existing metrics and controls on a regular basis throughout the
AI system lifecycle.
• Document reports of errors, incidents and negative impacts and assess the sufficiency
and efficacy of existing metrics for repairs and upgrades.
• Develop new metrics when existing metrics are insufficient or ineffective for
implementing repairs and upgrades.
• Develop and utilize metrics to monitor, characterize and track external inputs, including
any third-party tools.
• Determine frequency and scope for sharing metrics and related information with
stakeholders and impacted communities.
• Utilize stakeholder feedback processes established in the Map function to capture, act
upon and share feedback from end users and potentially impacted communities.
• Collect and report software quality metrics such as rates of bug occurrence and severity,
time to response, and time to repair (See Manage 4.3).
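The software quality metrics named in the last action above can be computed from an incident log. The sketch below is illustrative only; the record fields and data are invented, and a real system would pull these from its issue tracker.

```python
# Hypothetical sketch: bug occurrence by severity and mean time to
# repair, computed from a made-up incident log. Field names are
# assumptions, not a standard schema.
from collections import Counter
from datetime import datetime

incidents = [  # illustrative records
    {"severity": "high", "opened": "2024-01-02", "closed": "2024-01-05"},
    {"severity": "low",  "opened": "2024-01-03", "closed": "2024-01-04"},
    {"severity": "high", "opened": "2024-02-01", "closed": "2024-02-03"},
]

def parse(date_str):
    return datetime.strptime(date_str, "%Y-%m-%d")

# Rate of bug occurrence broken down by severity.
counts_by_severity = Counter(i["severity"] for i in incidents)

# Mean time to repair, in days, across all incidents.
mean_days_to_repair = sum(
    (parse(i["closed"]) - parse(i["opened"])).days for i in incidents
) / len(incidents)
```

Reporting these figures regularly (per Manage 4.3) gives a baseline against which degradation in repair responsiveness becomes visible.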
Transparency & Documentation
Organizations can document the following
• What metrics has the entity developed to measure performance of the AI system?
• To what extent do the metrics provide an accurate and useful measure of performance?
• What corrective actions has the entity taken to enhance the quality, accuracy, reliability,
and representativeness of the data?
• How will the accuracy or appropriate performance metrics be assessed?
• What is the justification for the metrics selected?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
References
ACM Technology Policy Council. “Statement on Principles for Responsible Algorithmic
Systems.” Association for Computing Machinery (ACM), October 26, 2022.
Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical
Learning: Data Mining, Inference, and Prediction. 2nd ed. Springer-Verlag, 2009.
Harini Suresh and John Guttag. “A Framework for Understanding Sources of Harm
Throughout the Machine Learning Life Cycle.” Equity and Access in Algorithms,
Mechanisms, and Optimization, October 2021.
Christopher M. Bishop. Pattern Recognition and Machine Learning. New York: Springer,
2006.
Solon Barocas, Anhong Guo, Ece Kamar, Jacquelyn Krones, Meredith Ringel Morris, Jennifer
Wortman Vaughan, W. Duncan Wadsworth, and Hanna Wallach. “Designing Disaggregated
Evaluations of AI Systems: Choices, Considerations, and Tradeoffs.” Proceedings of the 2021
AAAI/ACM Conference on AI, Ethics, and Society, July 2021, 368–78.
MEASURE 1.3
Internal experts who did not serve as front-line developers for the system and/or
independent assessors are involved in regular assessments and updates. Domain experts,
users, AI actors external to the team that developed or deployed the AI system, and affected
communities are consulted in support of assessments as necessary per organizational risk
tolerance.
About
Current AI systems are brittle: their failure modes are not well described, and they are
dependent on the context in which they were developed and do not transfer well outside
the training environment. Reliance on local evaluations, along with continuous monitoring
of these systems, will be necessary. Measurements that extend beyond classical measures
(which average across test cases), or that focus on pockets of failures with potentially
significant costs, can improve the reliability of risk management activities. Feedback
from affected communities about how AI systems are being used can make AI evaluation
purposeful. Involving internal experts who did not serve as front-line developers for the
system, and/or independent assessors, in regular assessments of AI systems supports a
comprehensive characterization of AI systems’ performance and trustworthiness.
Suggested Actions
• Evaluate TEVV processes regarding incentives to identify risks and impacts.
• Utilize separate testing teams established in the Govern function (2.1 and 4.1) to enable
independent decisions and course -correction for AI systems. Track processes and
measure and document change in performance.
• Plan and evaluate AI system prototypes with end user populations early and
continuously in the AI lifecycle. Document test outcomes and course correct.
• Assess the independence and stature of TEVV and oversight AI actors, to ensure they
have the required levels of independence and resources to perform assurance,
compliance, and feedback tasks effectively.
• Evaluate the interdisciplinary and demographically diverse internal team established in
Map 1.2.
• Evaluate effectiveness of external stakeholder feedback mechanisms, specifically related
to processes for eliciting, evaluating and integrating input from diverse groups.
• Evaluate effectiveness of external stakeholder feedback mechanisms for enhancing AI
actor visibility and decision making regarding AI system risks and trustworthy
characteristics.
• Identify and utilize participatory approaches for assessing impacts that may arise from
changes in system deployment (e.g., introducing new technology, decommissioning
algorithms and models, or adapting a system, model, or algorithm).
Transparency & Documentation
Organizations can document the following
• What are the roles, responsibilities, and delegation of authorities of personnel involved
in the design, development, deployment, assessment and monitoring of the AI system?
• How easily accessible and current is the information available to external stakeholders?
• To what extent does the entity communicate its AI strategic goals and objectives to the
community of stakeholders?
• To what extent can users or parties affected by the outputs of the AI system test the AI
system and provide feedback?
• To what extent is this information sufficient and appropriate to promote transparency?
Do external stakeholders have access to information on the design, operation, and
limitations of the AI system?
• What type of information is accessible on the design, operations, and limitations of the
AI system to external stakeholders, including end users, consumers, regulators, and
individuals impacted by use of the AI system?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
References
Board of Governors of the Federal Reserve System. “SR 11-7: Guidance on Model Risk
Management.” April 4, 2011.
“Definition of independent verification and validation (IV&V),” in IEEE 1012, IEEE Standard
for System, Software, and Hardware Verification and Validation, Annex C.
Mona Sloane, Emanuel Moss, Olaitan Awomolo, and Laura Forlano. “Participation Is Not a
Design Fix for Machine Learning.” Equity and Access in Algorithms, Mechanisms, and
Optimization, October 2022.
Rediet Abebe and Kira Goldner. “Mechanism Design for Social Good.” AI Matters 4, no. 3
(October 2018): 27 –34.
Upol Ehsan, Ranjit Singh, Jacob Metcalf and Mark O. Riedl. “The Algorithmic Imprint.”
Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
(2022).
MEASURE 2.1
Test sets, metrics, and details about the tools used during test, evaluation, validation, and
verification (TEVV) are documented.
About
Documenting measurement approaches, test sets, metrics, processes and materials used,
and associated details builds the foundation for a valid, reliable measurement process.
Documentation enables repeatability and consistency, and can enhance AI risk
management decisions.
Suggested Actions
• Leverage existing industry best practices for transparency and documentation of all
possible aspects of measurements. Examples include datasheets for datasets and model
cards.
• Regularly assess the effectiveness of tools used to document measurement approaches,
test sets, metrics, processes and materials used.
• Update the tools as needed.
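One of the documentation practices cited above, model cards (Mitchell et al., 2019), can be kept as a structured record alongside the model artifact. The sketch below shows a minimal, hypothetical subset of fields; it is not a complete or standardized schema.

```python
# Minimal sketch of a model-card-style record. The fields shown are an
# illustrative subset chosen for this example, not a prescribed format.
import json

model_card = {
    "model_details": {"name": "example-classifier", "version": "1.0"},
    "intended_use": "Illustrative demo only",
    "metrics": ["accuracy", "false_positive_rate"],
    "evaluation_data": {"dataset": "held-out test split"},
    "limitations": "Not validated outside the training distribution",
}

# Serialize so the card can be versioned and stored next to the model.
card_json = json.dumps(model_card, indent=2)
```

Keeping such a record under version control with the model itself supports the repeatability and consistency goals described above.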
Transparency & Documentation
Organizations can document the following
• Given the purpose of this AI, what is an appropriate interval for checking whether it is
still accurate, unbiased, explainable, etc.? What are the checks for this model?
• To what extent has the entity documented the AI system’s development, testing
methodology, metrics, and performance outcomes?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
• WEF Companion to the Model AI Governance Framework, 2020.
References
Emily M. Bender and Batya Friedman. “Data Statements for Natural Language Processing:
Toward Mitigating System Bias and Enabling Better Science.” Transactions of the
Association for Computational Linguistics 6 (2018): 587–604.
Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben
Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. “Model Cards for
Model Reporting.” FAT* ’19: Proceedings of the Conference on Fairness, Accountability, and
Transparency, January 2019, 220–29.
IEEE Computer Society. “Software Engineering Body of Knowledge Version 3: IEEE
Computer Society.” IEEE Computer Society.
IEEE. “IEEE-1012-2016: IEEE Standard for System, Software, and Hardware Verification and
Validation.” IEEE Standards Association.
Board of Governors of the Federal Reserve System. “SR 11-7: Guidance on Model Risk
Management.” April 4, 2011.
Abigail Z. Jacobs and Hanna Wallach. “Measurement and Fairness.” FAccT '21: Proceedings
of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021,
375–85.
Jeanna Matthews, Bruce Hedin, Marc Canellas. Trustworthy Evidence for Trustworthy
Technology: An Overview of Evidence for Assessing the Trustworthiness of Autonomous
and Intelligent Systems. IEEE-USA, September 29, 2022.
Roel Dobbe, Thomas Krendl Gilbert, and Yonatan Mintz. “Hard Choices in Artificial
Intelligence.” Artificial Intelligence 300 (November 2021).
MEASURE 2.2
Evaluations involving human subjects meet applicable requirements (including human
subject protection) and are representative of the relevant population.
About
Measurement and evaluation of AI systems often involves testing with human subjects or
using data captured from human subjects. Protection of human subjects is required by law
when carrying out federally funded research, and is a domain specific requirement for some
disciplines. Standard human subjects protection procedures include protecting the welfare
and interests of human subjects, designing evaluations to minimize risks to subjects, and
completion of mandatory training regarding legal requirements and expectations.
Evaluations of AI system performance that utilize human subjects or human subject data
should reflect the population within the context of use. AI system activities utilizing
non-representative data may lead to inaccurate assessments or negative and harmful outcomes.
It is often difficult – and sometimes impossible – to collect data or perform evaluation tasks
that reflect the full operational purview of an AI system. Methods for collecting, annotating,
or using these data can also contribute to the challenge. To counteract these challenges,
organizations can connect human subjects data collection and dataset practices to AI
system contexts and purposes, and do so in close collaboration with AI Actors from the
relevant domains.
Suggested Actions
• Follow human subjects research requirements as established by organizational and
disciplinary requirements, including informed consent and compensation, during
dataset collection activities.
• Analyze differences between intended and actual population of users or data subjects,
including likelihood for errors, incidents or negative impacts.
• Utilize disaggregated evaluation methods (e.g. by race, age, gender, ethnicity, ability,
region) to improve AI system performance when deployed in real world settings.
• Establish thresholds and alert procedures for dataset representativeness within the
context of use.
• Construct datasets in close collaboration with experts with knowledge of the context of
use.
• Follow intellectual property and privacy rights related to datasets and their use,
including for the subjects represented in the data.
• Evaluate data representativeness through:
• investigating known failure modes,
• assessing data quality and diverse sourcing,
• applying public benchmarks,
• traditional bias testing,
• chaos engineering, and
• stakeholder feedback.
• Use informed consent for individuals providing data used in system testing and
evaluation.
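The disaggregated evaluation suggested above means computing metrics separately per subgroup rather than only in aggregate. The sketch below illustrates this with invented data and group labels; real disaggregation dimensions (e.g., race, age, region) and records would come from the deployment context.

```python
# Hedged sketch of disaggregated evaluation: per-group accuracy instead
# of a single aggregate number. Groups and records are invented.
from collections import defaultdict

def disaggregated_accuracy(records):
    """records: iterable of (group, y_true, y_pred). Returns accuracy per group."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        correct[group] += int(y_true == y_pred)
    return {g: correct[g] / total[g] for g in total}

results = disaggregated_accuracy([
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
    ("B", 1, 1), ("B", 0, 1),
])
# The aggregate accuracy (3/5) hides that group B performs worse than group A.
```

Comparing the per-group numbers against representativeness thresholds established for the context of use is what turns this measurement into the alerting procedure described above.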
Transparency & Documentation
Organizations can document the following
• Given the purpose of this AI, what is an appropriate interval for checking whether it is
still accurate, unbiased, explainable, etc.? What are the checks for this model?
• How has the entity identified and mitigated potential impacts of bias in the data,
including inequitable or discriminatory outcomes?
• To what extent are the established procedures effective in mitigating bias, inequity, and
other concerns resulting from the system?
• To what extent has the entity identified and mitigated potential bias – statistical,
contextual, and historical – in the data?
• If it relates to people, were they told what the dataset would be used for and did they
consent? What community norms exist for data collected from human communications?
If consent was obtained, how? Were the people provided with any mechanism to revoke
their consent in the future or for certain uses?
• If human subjects were used in the development or testing of the AI system, what
protections were put in place to promote their safety and wellbeing?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
• WEF Companion to the Model AI Governance Framework, 2020.
• Datasheets for Datasets.
References
United States Department of Health, Education, and Welfare's National Commission for the
Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report:
Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Volume
II. United States Department of Health and Human Services Office for Human Research
Protections. April 18, 1979.
Office for Human Research Protections (OHRP). “45 CFR 46.” United States Department of
Health and Human Services Office for Human Research Protections, March 10, 2021.
Office for Human Research Protections (OHRP). “Human Subject Regulations Decision
Chart.” United States Department of Health and Human Services Office for Human Research
Protections, June 30, 2020.
Jacob Metcalf and Kate Crawford. “Where Are Human Subjects in Big Data Research? The
Emerging Ethics Divide.” Big Data and Society 3, no. 1 (2016).
Boaz Shmueli, Jan Fell, Soumya Ray, and Lun-Wei Ku. "Beyond Fair Pay: Ethical Implications
of NLP Crowdsourcing." arXiv preprint, submitted April 20, 2021.
Divyansh Kaushik, Zachary C. Lipton, and Alex John London. "Resolving the Human Subjects
Status of Machine Learning's Crowdworkers." arXiv preprint, submitted June 8, 2022.
Office for Human Research Protections (OHRP). “International Compilation of Human
Research Standards.” United States Department of Health and Human Services Office for
Human Research Protections, February 7, 2022.
National Institutes of Health. “Definition of Human Subjects Research.” NIH Central
Resource for Grants and Funding Information, January 13, 2020.
Joy Buolamwini and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in
Commercial Gender Classification.” Proceedings of the 1st Conference on Fairness,
Accountability and Transparency in PMLR 81 (2018): 77–91.
Eun Seo Jo and Timnit Gebru. “Lessons from Archives: Strategies for Collecting Sociocultural
Data in Machine Learning.” FAT* '20: Proceedings of the 2020 Conference on Fairness,
Accountability, and Transparency, January 2020, 306–16.
Marco Gerardi, Katarzyna Barud, Marie-Catherine Wagner, Nikolaus Forgo, Francesca
Fallucchi, Noemi Scarpato, Fiorella Guadagni, and Fabio Massimo Zanzotto. "Active
Informed Consent to Boost the Application of Machine Learning in Medicine." arXiv
preprint, submitted September 27, 2022.
Shari Trewin. "AI Fairness for People with Disabilities: Point of View." arXiv preprint,
submitted November 26, 2018.
Andrea Brennen, Ryan Ashley, Ricardo Calix, JJ Ben-Joseph, George Sieniawski, Mona Gogia,
and BNH.AI. AI Assurance Audit of RoBERTa, an Open source, Pretrained Large Language
Model. IQT Labs, December 2022.
MEASURE 2.3
AI system performance or assurance criteria are measured qualitatively or quantitatively
and demonstrated for conditions similar to deployment setting(s). Measures are
documented.
About
The current risk and impact environment suggests that AI system performance estimates
alone are insufficient, and that a deeper understanding of the deployment context of use is
required. Computationally focused performance testing and evaluation schemes are
restricted to test data sets and in silico techniques. These approaches do not directly
evaluate risks and impacts in real-world environments and can only predict what might
create impact based on an approximation of expected AI use. To properly manage risks,
more direct information is necessary to understand how and under what conditions
deployed AI creates impacts, who is most likely to be impacted, and what that experience
is like.
Suggested Actions
• Conduct regular and sustained engagement with potentially impacted communities.
• Maintain a demographically diverse, multidisciplinary, and collaborative internal
team.
• Regularly test and evaluate systems in non-optimized conditions, and in collaboration
with AI actors in user interaction and user experience (UI/UX) roles.
• Evaluate feedback from stakeholder engagement activities, in collaboration with human
factors and socio-technical experts.
• Collaborate with socio-technical, human factors, and UI/UX experts to identify notable
characteristics in the context of use that can be translated into system testing scenarios.
• Measure AI systems prior to deployment in conditions similar to expected scenarios.
• Measure and document performance criteria such as validity (false positive rate, false
negative rate, etc.) and efficiency (training times, prediction latency, etc.) related to
ground truth within the deployment context of use.
• Measure assurance criteria such as AI actor competency and experience.
• Document differences between measurement setting and the deployment
environment(s).
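The validity and efficiency criteria listed above can be made concrete. The sketch below computes false positive/negative rates against ground truth and times a prediction call; the "model" is a stand-in function, and the data are invented for illustration.

```python
# Illustrative sketch of validity (false positive/negative rates) and
# efficiency (prediction latency) measurements. The model and labels
# here are hypothetical.
import time

def false_rates(y_true, y_pred):
    """Return (false_positive_rate, false_negative_rate) for binary labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

def timed_predict(model, x):
    """Run one prediction and report its latency in seconds."""
    start = time.perf_counter()
    out = model(x)
    return out, time.perf_counter() - start

fpr, fnr = false_rates([0, 0, 1, 1], [1, 0, 1, 0])
out, latency = timed_predict(lambda x: x * 2, 3)  # stand-in "model"
```

Recording these numbers both in the measurement setting and in the deployment environment makes the documented differences between the two settings quantitative.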
Transparency & Documentation
Organizations can document the following
• What experiments were initially run on this dataset? To what extent have experiments
on the AI system been documented?
• To what extent does the system/entity consistently measure progress towards stated
goals and objectives?
• How will the appropriate performance metrics, such as accuracy, of the AI be monitored
after the AI is deployed? How much distributional shift or model drift from baseline
performance is acceptable?
• As time passes and conditions change, is the training data still representative of the
operational environment?
• What testing, if any, has the entity conducted on the AI system to identify errors and
limitations (e.g., adversarial or stress testing)?
AI Transparency Resources
• Artificial Intelligence Ethics Framework For The Intelligence Community.
• WEF Companion to the Model AI Governance Framework, 2020.
• Datasheets for Datasets.
References
Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical
Learning: Data Mining, Inference, and Prediction. 2nd ed. Springer-Verlag, 2009.
Jessica Zosa Forde, A. Feder Cooper, Kweku Kwegyir-Aggrey, Chris De Sa, and Michael
Littman. "Model Selection's Disparate Impact in Real-World Deep Learning Applications."
arXiv preprint, submitted September 7, 2021.
Inioluwa Deborah Raji, I. Elizabeth Kumar, Aaron Horowitz, and Andrew Selbst. “The Fallacy
of AI Functionality.” FAccT '22: 2022 ACM Conference on Fairness, Accountability, and
Transparency, June 2022, 959–72.
Amandalynne Paullada, Inioluwa Deborah Raji, Emily M. Bender, Emily Denton, and Alex
Hanna. “Data and Its (Dis)Contents: A Survey of Dataset Development and Use in Machine
Learning Research.” Patterns 2, no. 11 (2021): 100336.
Christopher M. Bishop. Pattern Recognition and Machine Learning. New York: Springer,
2006.
Md Johirul Islam, Giang Nguyen, Rangeet Pan, and Hridesh Rajan. "A Comprehensive Study
on Deep Learning Bug Characteristics." arXiv preprint, submitted June 3, 2019.
Swaroop Mishra, Anjana Arunkumar, Bhavdeep Sachdeva, Chris Bryan, and Chitta Baral.
"DQI: Measuring Data Quality in NLP." arXiv preprint, submitted May 2, 2020.
Doug Wielenga. "Paper 073 -2007: Identifying and Overcoming Common Data Mining
Mistakes." SAS Global Forum 2007: Data Mining and Predictive Modeling, SAS Institute,
2007.
Software Resources
• Drifter library (performance assessment)
• Manifold library (performance assessment)
• MLxtend library (performance assessment)
• PiML library (explainable models, performance assessment)
• SALib library (performance assessment)
• What-If Tool (performance assessment)
MEASURE 2.4
The functionality and behavior of the AI system and its components – as identified in the
MAP function – are monitored when in production.
About
AI systems may encounter new issues and risks while in production as the environment
evolves over time. This effect, often referred to as “drift”, means AI systems no longer
meet the assumptions and limitations of the original design. Regular monitoring allows AI
Actors to track the functionality and behavior of the AI system and its components – as
identified in the MAP function – and enhances the speed and efficacy of necessary system
interventions.
Suggested Actions
• Monitor and document how metrics and performance indicators observed in production
differ from the same metrics collected during pre -deployment testing. When differences
are observed, consider error propagation and feedback loop risks.
• Utilize hypothesis testing or human domain expertise to measure monitored
distribution differences in new input or output data relative to test environments.
• Monitor for anomalies using approaches such as control limits, confidence intervals,
integrity constraints and ML algorithms. When anomalies are observed, consider error
propagation and feedback loop risks.
• Verify alerts are in place for when distributions in new input data or generated
predictions observed in production differ from pre -deployment test outcomes, or when
anomalies are detected.
• Assess the accuracy and quality of generated outputs against newly collected
ground-truth information as it becomes available.
• Utilize human review to track processing of unexpected data and reliability of generated
outputs; warn system users when outputs may be unreliable. Verify that human
overseers responsible for these processes have clearly defined responsibilities and
training for specified tasks.
• Collect use cases from the operational environment for system testing and monitoring
activities in accordance with organizational policies and regulatory or disciplinary
requirements (e.g. informed consent, institutional review board approval, human
research protections).
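One of the anomaly-monitoring approaches named above, control limits, can be sketched briefly. The example flags production values falling outside mean ± 3 standard deviations of pre-deployment reference data; the data, the metric being monitored, and the choice of 3σ are illustrative assumptions.

```python
# Sketch of control-limit anomaly monitoring: flag production values
# outside mean +/- k standard deviations of pre-deployment reference
# data. Reference data and k = 3 are illustrative assumptions.
import statistics

def control_limits(reference, k=3.0):
    """Return (lower, upper) control limits from reference observations."""
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference)
    return mu - k * sigma, mu + k * sigma

def flag_anomalies(values, limits):
    """Return the production values that fall outside the limits."""
    lo, hi = limits
    return [v for v in values if v < lo or v > hi]

# Usage: limits from pre-deployment data, checks on production data.
limits = control_limits([10.0, 10.2, 9.8, 10.1, 9.9])
anomalies = flag_anomalies([10.0, 10.1, 14.5], limits)
```

Flagged values would then feed the alerting and error-propagation review steps described above; richer alternatives (hypothesis tests, ML-based detectors) follow the same pattern of comparing production data against a pre-deployment baseline.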
Transparency & Documentation
Organizations can document the following
• To what extent is the output of each component appropriate for the operational
context?
• What justifications, if any, has the entity provided for the assumptions, boundaries, and
limitations of the AI system?
• How will the appropriate performance metrics, such as accuracy, of the AI be monitored
after the AI is deployed?
• As time passes and conditions change, is the training data still representative of the
operational environment?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
References
Luca Piano, Fabio Garcea, Valentina Gatteschi, Fabrizio Lamberti, and Lia Morra. “Detecting
Drift in Deep Learning: A Methodology Primer.” IT Professional 24, no. 5 (2022): 53–60.
Larysa Visengeriyeva, et al. “Awesome MLOps.” GitHub.
MEASURE 2.5
The AI system to be deployed is demonstrated to be valid and reliable. Limitations of the
generalizability beyond the conditions under which the technology was developed are
documented.
About
An AI system that is not validated or that fails validation may be inaccurate or unreliable or
may generalize poorly to data and settings beyond its training, creating and increasing AI
risks and reducing trustworthiness. AI Actors can improve system validity by creating
processes for exploring and documenting system limitations. This includes broad
consideration of purposes and uses for which the system was not designed.
Validation risks include the use of proxies or other indicators that are often constructed by
AI development teams to operationalize phenomena that are either not directly observable
or measurable (e.g., fairness, hireability, honesty, propensity to commit a crime). Teams can
mitigate these risks by demonstrating that the indicator is measuring the concept it claims
to measure (also known as construct validity). Without this and other types of validation,
various negative properties or impacts may go undetected, including the presence of
confounding variables, potential spurious correlations, or error propagation and its
potential impact on other interconnected systems.
Suggested Actions
• Define the operating conditions and socio-technical context under which the AI system
will be validated.
• Define and document processes to establish the system’s operational conditions and
limits.
• Establish or identify, and document approaches to measure forms of validity, including:
• construct validity (the test is measuring the concept it claims to measure)
• internal validity (relationship being tested is not influenced by other factors or
variables)
• external validity (results are generalizable beyond the training condition)
• the use of experimental design principles and statistical analyses and modeling.
• Assess and document system variance. Standard approaches include confidence
intervals, standard deviation, standard error, bootstrapping, or cross -validation.
• Establish or identify, and document robustness measures.
• Establish or identify, and document reliability measures.
• Establish practices to specify and document the assumptions underlying measurement
models to ensure proxies accurately reflect the concept being measured.
• Utilize standard software testing approaches (e.g. unit, integration, functional and chaos
testing, computer -generated test cases, etc.)
• Utilize standard statistical methods to test bias, inferential associations, correlation, and
covariance in adopted measurement models.
• Utilize standard statistical methods to test variance and reliability of system outcomes.
• Monitor operating conditions for system performance outside of defined limits.
• Identify TEVV approaches for exploring AI system limitations, including testing
scenarios that differ from the operational environment. Consult experts with knowledge
of specific context of use.
• Define post-alert actions. Possible actions may include:
• alerting other relevant AI actors before action,
• requesting subsequent human review of action,
• alerting downstream users and stakeholders that the system is operating outside its
defined validity limits,
• tracking and mitigating possible error propagation, and
• action logging.
• Log input data and relevant system configuration information whenever there is an
attempt to use the system beyond its well-defined range of system validity.
• Modify the system over time to extend its range of system validity to new operating
conditions.
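The variance-assessment action above (confidence intervals, bootstrapping, cross-validation) can be sketched with a percentile bootstrap over per-fold metrics. This is a minimal illustration, not part of the Playbook itself; the fold-accuracy values and the function name are hypothetical.

```python
import random
import statistics

def bootstrap_ci(scores, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of a metric."""
    rng = random.Random(seed)
    n = len(scores)
    means = sorted(
        statistics.mean(rng.choices(scores, k=n)) for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Per-fold accuracy from a hypothetical cross-validation run.
fold_accuracy = [0.91, 0.88, 0.93, 0.90, 0.87, 0.92, 0.89, 0.90]
low, high = bootstrap_ci(fold_accuracy)
print(f"mean={statistics.mean(fold_accuracy):.3f}, 95% CI=({low:.3f}, {high:.3f})")
```

Documenting an interval rather than a single point estimate makes the system's variance explicit to reviewers and downstream AI actors.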
Transparency & Documentation
Organizations can document the following
• What testing, if any, has the entity conducted on the AI system to identify errors and
limitations (i.e., adversarial or stress testing)?
• Given the purpose of this AI, what is an appropriate interval for checking whether it is
still accurate, unbiased, explainable, etc.? What are the checks for this model?
• How has the entity identified and mitigated potential impacts of bias in the data,
including inequitable or discriminatory outcomes?
• To what extent are the established procedures effective in mitigating bias, inequity, and
other concerns resulting from the system?
• What goals and objectives does the entity expect to achieve by designing, developing,
and/or deploying the AI system?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
References
Abigail Z. Jacobs and Hanna Wallach. “Measurement and Fairness.” FAccT '21: Proceedings
of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021,
375–85.
Debugging Machine Learning Models. Proceedings of ICLR 2019 Workshop, May 6, 2019,
New Orleans, Louisiana.
Patrick Hall. “Strategies for Model Debugging.” Towards Data Science, November 8, 2019.
Suchi Saria and Adarsh Subbaswamy. "Tutorial: Safe and Reliable Machine Learning." arXiv
preprint, submitted April 15, 2019.
Google Developers. “Overview of Debugging ML Models.” Google Developers Machine
Learning Foundational Courses, n.d.
R. Mohanani, I. Salman, B. Turhan, P. Rodríguez and P. Ralph, "Cognitive Biases in Software
Engineering: A Systematic Mapping Study," in IEEE Transactions on Software Engineering,
vol. 46, no. 12, pp. 1318–1339, Dec. 2020.
Software Resources
• Drifter library (performance assessment)
• Manifold library (performance assessment)
• MLextend library (performance assessment)
• PiML library (explainable models, performance assessment)
• SALib library (performance assessment)
• What-If Tool (performance assessment)
MEASURE 2.6
AI system is evaluated regularly for safety risks – as identified in the MAP function. The AI
system to be deployed is demonstrated to be safe, its residual negative risk does not exceed
the risk tolerance, and it can fail safely, particularly if made to operate beyond its knowledge
limits. Safety metrics implicate system reliability and robustness, real-time monitoring, and
response times for AI system failures.
About
Many AI systems are being introduced into settings such as transportation, manufacturing
or security, where failures may give rise to various physical or environmental harms. AI
systems that may endanger human life, health, property or the environment are tested
thoroughly prior to deployment, and are regularly evaluated to confirm the system is safe
during normal operations, and in settings beyond its proposed use and knowledge limits.
Measuring activities for safety often relate to exhaustive testing in development and
deployment contexts, understanding the limits of a system’s reliable, robust, and safe
behavior, and real-time monitoring of various aspects of system performance. These
activities are typically conducted along with other risk mapping, management, and
governance tasks such as avoiding past failed designs, establishing and rehearsing incident
response plans that enable quick responses to system problems, the instantiation of
redundant functionality to cover failures, and transparent and accountable governance.
System safety incidents or failures are frequently reported to be related to organizational
dynamics and culture. Independent auditors may bring important independent perspectives
for reviewing evidence of AI system safety.
Suggested Actions
• Thoroughly measure system performance in development and deployment contexts,
and under stress conditions.
• Employ test data assessments and simulations before proceeding to production
testing. Track multiple performance quality and error metrics.
• Stress-test system performance under likely scenarios (e.g., concept drift, high load)
and beyond known limitations, in consultation with domain experts.
• Test the system under conditions similar to those related to past known incidents or
near-misses and measure system performance and safety characteristics.
• Apply chaos engineering approaches to test systems in extreme conditions and
gauge unexpected responses.
• Document the range of conditions under which the system has been tested and
demonstrated to fail safely.
• Measure and monitor system performance in real-time to enable rapid response when
AI system incidents are detected.
• Collect pertinent safety statistics (e.g., out-of-range performance, incident response
times, system downtime, injuries, etc.) in anticipation of potential information sharing
with impacted communities or as required by AI system oversight personnel.
• Align measurement to the goal of continuous improvement. Seek to increase the range
of conditions under which the system is able to fail safely through system modifications
in response to in-production testing and events.
• Document, practice and measure incident response plans for AI system incidents,
including measuring response and down times.
• Compare documented safety testing and monitoring information with established risk
tolerances on an ongoing basis.
• Consult MANAGE for detailed information related to managing safety risks.
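The real-time monitoring and fail-safe actions above can be sketched as a simple runtime check against documented operating limits. All thresholds, names, and alert strings below are hypothetical placeholders; real limits would come from the organization's documented TEVV process.

```python
from dataclasses import dataclass

@dataclass
class OperatingLimits:
    # Hypothetical limits established during testing and documentation.
    min_confidence: float = 0.70    # below this, outputs are unreliable
    max_input_value: float = 100.0  # inputs beyond the tested range

def check_safe_operation(prediction_confidence, input_value, limits):
    """Return a list of safety alerts; an empty list means the system is
    operating within its tested limits."""
    alerts = []
    if prediction_confidence < limits.min_confidence:
        alerts.append("low-confidence output: route to human review")
    if abs(input_value) > limits.max_input_value:
        alerts.append("input outside tested operating range: fail safe")
    return alerts

limits = OperatingLimits()
print(check_safe_operation(0.95, 42.0, limits))   # within limits: []
print(check_safe_operation(0.40, 250.0, limits))  # raises two alerts
```

In practice each alert would feed the post-alert actions defined earlier (human review, downstream notification, logging), not just a printed list.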
Transparency & Documentation
Organizations can document the following
• What testing, if any, has the entity conducted on the AI system to identify errors and
limitations (i.e., adversarial or stress testing)?
• To what extent has the entity documented the AI system’s development, testing
methodology, metrics, and performance outcomes?
• Did you establish mechanisms that facilitate the AI system’s auditability (e.g.
traceability of the development process, the sourcing of training data and the logging of
the AI system’s processes, outcomes, positive and negative impact)?
• Did you ensure that the AI system can be audited by independent third parties?
• Did you establish a process for third parties (e.g. suppliers, end users, subjects,
distributors/vendors or workers) to report potential vulnerabilities, risks or biases in
the AI system?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
References
AI Incident Database. 2022.
AIAAIC Repository. 2022.
Netflix. Chaos Monkey.
IBM. “IBM's Principles of Chaos Engineering.” IBM, n.d.
Suchi Saria and Adarsh Subbaswamy. "Tutorial: Safe and Reliable Machine Learning." arXiv
preprint, submitted April 15, 2019.
Daniel Kang, Deepti Raghavan, Peter Bailis, and Matei Zaharia. "Model assertions for
monitoring and improving ML models." Proceedings of Machine Learning and Systems 2
(2020): 481 -496.
Larysa Visengeriyeva, et al. “Awesome MLOps.“ GitHub.
McGregor, S., Paeth, K., & Lam, K.T. (2022). Indexing AI Risks with Incidents, Issues, and
Variants. ArXiv, abs/2211.10384.
MEASURE 2.7
AI system security and resilience – as identified in the MAP function – are evaluated and
documented.
About
AI systems, as well as the ecosystems in which they are deployed, may be said to be resilient
if they can withstand unexpected adverse events or unexpected changes in their
environment or use – or if they can maintain their functions and structure in the face of
internal and external change and degrade safely and gracefully when this is necessary.
Common
security concerns relate to adversarial examples, data poisoning, and the exfiltration of
models, training data, or other intellectual property through AI system endpoints. AI
systems that can maintain confidentiality, integrity, and availability through protection
mechanisms that prevent unauthorized access and use may be said to be secure.
Security and resilience are related but distinct characteristics. While resilience is the ability
to return to normal function after an unexpected adverse event, security includes resilience
but also encompasses protocols to avoid, protect against, respond to, or recover
from attacks. Resilience relates to robustness and encompasses unexpected or adversarial
use (or abuse or misuse) of the model or data.
Suggested Actions
• Establish and track AI system security tests and metrics (e.g., red-teaming activities,
frequency and rate of anomalous events, system downtime, incident response times,
time-to-bypass, etc.).
• Use red-team exercises to actively test the system under adversarial or stress
conditions, measure system response, and assess failure modes or determine whether the
system can return to normal function after an unexpected adverse event.
• Document red-team exercise results as part of continuous improvement efforts,
including the range of security test conditions and results.
• Use red-teaming exercises to evaluate potential mismatches between claimed and actual
system performance.
• Use countermeasures (e.g., authentication, throttling, differential privacy, robust ML
approaches) to increase the range of security conditions under which the system is able
to return to normal function.
• Modify system security procedures and countermeasures to increase robustness and
resilience to attacks in response to testing and events experienced in production.
• Verify that information about errors and attack patterns is shared with incident
databases, other organizations with similar systems, and system users and stakeholders
(MANAGE -4.1).
• Develop and maintain information sharing practices with AI actors from other
organizations to learn from common attacks.
• Verify that third party AI resources and personnel undergo security audits and
screenings. Risk indicators may include failure of third parties to provide relevant
security information.
• Utilize watermarking technologies as a deterrent to data and model extraction attacks.
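One of the security concerns noted in this subcategory, adversarial examples, can be illustrated on a toy logistic model with the fast gradient sign method (FGSM): each feature is nudged in the direction that increases the loss. The weights and inputs below are invented for illustration only; production red-teaming would rely on tooling such as the libraries listed under Software Resources.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical trained logistic model: p(y=1|x) = sigmoid(w.x + b)
w, b = [2.0, -1.5], 0.1

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, eps):
    """Fast Gradient Sign Method against a logistic model. For logistic
    loss with label y=1, the loss gradient w.r.t. x_i has sign -sign(w_i),
    so the attack steps opposite each weight's sign to push p toward 0."""
    return [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

x = [1.0, 0.5]             # confidently classified as positive
x_adv = fgsm(x, eps=0.8)   # small structured perturbation
print(predict(x), predict(x_adv))  # prediction drops sharply under attack
```

Tracking how large eps must be before predictions flip is one concrete "time-to-bypass"-style robustness metric an organization could document.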
Transparency & Documentation
Organizations can document the following
• To what extent does the plan specifically address risks associated with acquisition,
procurement of packaged software from vendors, cybersecurity controls, computational
infrastructure, data, data science, deployment mechanics, and system failure?
• What assessments has the entity conducted on data security and privacy impacts
associated with the AI system?
• What processes exist for data generation, acquisition/collection, security, maintenance,
and dissemination?
• What testing, if any, has the entity conducted on the AI system to identify errors and
limitations (i.e., adversarial or stress testing)?
• If a third party created the AI, how will you ensure a level of explainability or
interpretability?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
References
Matthew P. Barrett. “Framework for Improving Critical Infrastructure Cybersecurity
Version 1.1.” National Institute of Standards and Technology (NIST), April 16, 2018.
Nicolas Papernot. "A Marauder's Map of Security and Privacy in Machine Learning." arXiv
preprint, submitted on November 3, 2018.
Gary McGraw, Harold Figueroa, Victor Shepardson, and Richie Bonett. “BIML Interactive
Machine Learning Risk Framework.” Berryville Institute of Machine Learning (BIML), 2022.
Mitre Corporation. “Mitre/Advmlthreatmatrix: Adversarial Threat Landscape for AI
Systems.” GitHub, 2023.
National Institute of Standards and Technology (NIST). “Cybersecurity Framework.” NIST,
2023.
Upol Ehsan, Q. Vera Liao, Samir Passi, Mark O. Riedl, and Hal Daumé. 2024. Seamful XAI:
Operationalizing Seamful Design in Explainable AI. Proc. ACM Hum.-Comput. Interact. 8,
CSCW1, Article 119. https://doi.org/10.1145/3637396
Software Resources
• adversarial-robustness-toolbox
• counterfit
• foolbox
• ml_privacy_meter
• robustness
• tensorflow/privacy
• projectGuardRail
MEASURE 2.8
Risks associated with transparency and accountability – as identified in the MAP function –
are examined and documented.
About
Transparency enables meaningful visibility into entire AI pipelines, workflows, processes or
organizations and decreases information asymmetry between AI developers and operators
and other AI Actors and impacted communities. Transparency is a central element of
effective AI risk management that enables insight into how an AI system is working, and the
ability to address risks if and when they emerge. The ability for system users, individuals, or
impacted communities to seek redress for incorrect or problematic AI system outcomes is
one control for transparency and accountability. Higher level recourse processes are
typically enabled by lower level implementation efforts directed at explainability and
interpretability functionality. See Measure 2.9.
Transparency and accountability across organizations and processes is crucial to reducing
AI risks. Accountable leadership – whether individuals or groups – and transparent roles,
responsibilities, and lines of communication foster and incentivize quality assurance and
risk management activities within organizations.
Lack of transparency complicates measurement of trustworthiness and whether AI systems
or organizations are subject to effects of various individual and group biases and design
blindspots and could lead to diminished user, organizational and community trust, and
decreased overall system value. Instituting accountable and transparent organizational
structures along with documenting system risks can enable system improvement and risk
management efforts, allowing AI actors along the lifecycle to identify errors, suggest
improvements, and figure out new ways to contextualize and generalize AI system features
and outcomes.
Suggested Actions
• Instrument the system for measurement and tracking, e.g., by maintaining histories,
audit logs and other information that can be used by AI actors to review and evaluate
possible sources of error, bias, or vulnerability.
• Calibrate controls for users in close collaboration with experts in user interaction and
user experience (UI/UX), human computer interaction (HCI), and/or human-AI teaming.
• Test provided explanations for calibration with different audiences including operators,
end users, decision makers and decision subjects (individuals for whom decisions are
being made), and to enable recourse for consequential system decisions that affect end
users or subjects.
• Measure and document human oversight of AI systems:
• Document the degree of oversight that is provided by specified AI actors regarding
AI system output.
• Maintain statistics about downstream actions by end users and operators such as
system overrides.
• Maintain statistics about and document reported errors or complaints, time to
respond, and response types.
• Maintain and report statistics about adjudication activities.
• Track, document, and measure organizational accountability regarding AI systems via
policy exceptions and escalations, and document “go” and “no-go” decisions made by
accountable parties.
• Track and audit the effectiveness of organizational mechanisms related to AI risk
management, including:
• Lines of communication between AI actors, executive leadership, users and
impacted communities.
• Roles and responsibilities for AI actors and executive leadership.
• Organizational accountability roles, e.g., chief model risk officers, AI oversight
committees, responsible or ethical AI directors, etc.
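The instrumentation and oversight-statistics actions above can be sketched as an append-only audit log that also derives an override rate, one of the downstream-action statistics suggested. This is an illustrative sketch; the field names and event types are hypothetical.

```python
import datetime

class AuditLog:
    """Append-only log of AI system decisions and downstream human
    actions such as operator overrides (an illustrative sketch only)."""

    def __init__(self):
        self.records = []

    def log(self, event_type, detail):
        self.records.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "event": event_type,
            "detail": detail,
        })

    def override_rate(self):
        """Fraction of logged decisions later overridden by an operator."""
        decisions = sum(1 for r in self.records if r["event"] == "decision")
        overrides = sum(1 for r in self.records if r["event"] == "override")
        return overrides / decisions if decisions else 0.0

log = AuditLog()
log.log("decision", {"output": "approve", "model_version": "1.2"})
log.log("override", {"operator": "reviewer-17", "new_output": "deny"})
log.log("decision", {"output": "deny", "model_version": "1.2"})
print(log.override_rate())  # 0.5
```

A rising override rate is a concrete, reportable signal that system outputs and human judgment are diverging, which accountable parties can escalate.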
Transparency & Documentation
Organizations can document the following
• To what extent has the entity clarified the roles, responsibilities, and delegated
authorities to relevant stakeholders?
• What are the roles, responsibilities, and delegation of authorities of personnel involved
in the design, development, deployment, assessment and monitoring of the AI system?
• Who is accountable for the ethical considerations during all stages of the AI lifecycle?
• Who will be responsible for maintaining, re-verifying, monitoring, and updating this AI
once deployed?
• Are the responsibilities of the personnel involved in the various AI governance
processes clearly defined?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
References
National Academies of Sciences, Engineering, and Medicine. Human-AI Teaming:
State-of-the-Art and Research Needs. 2022.
Inioluwa Deborah Raji and Jingying Yang. "ABOUT ML: Annotation and Benchmarking on
Understanding and Transparency of Machine Learning Lifecycles." arXiv preprint,
submitted January 8, 2020.
Andrew Smith. "Using Artificial Intelligence and Algorithms." Federal Trade Commission
Business Blog, April 8, 2020.
Board of Governors of the Federal Reserve System. “SR 11-7: Guidance on Model Risk
Management.” April 4, 2011.
Joshua A. Kroll. “Outlining Traceability: A Principle for Operationalizing Accountability in
Computing Systems.” FAccT '21: Proceedings of the 2021 ACM Conference on Fairness,
Accountability, and Transparency, March 1, 2021, 758–71.
Jennifer Cobbe, Michelle Seng Lee, and Jatinder Singh. “Reviewable Automated
Decision-Making: A Framework for Accountable Algorithmic Systems.” FAccT '21:
Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency,
March 1, 2021, 598–609.
MEASURE 2.9
The AI model is explained, validated, and documented, and AI system output is interpreted
within its context – as identified in the MAP function – and to inform responsible use and
governance.
About
Explainability and interpretability assist those operating or overseeing an AI system, as well
as users of an AI system, to gain deeper insights into the functionality and trustworthiness
of the system, including its outputs.
Explainable and interpretable AI systems offer information that helps end users understand
the purposes and potential impact of an AI system. Risk from lack of explainability may be
managed by describing how AI systems function, with descriptions tailored to individual
differences such as the user’s role, knowledge, and skill level. Explainable systems can be
debugged and monitored more easily, and they lend themselves to more thorough
documentation, audit, and governance.
Risks to interpretability often can be addressed by communicating a description of why
an AI system made a particular prediction or recommendation.
Transparency, explainability, and interpretability are distinct characteristics that support
each other. Transparency can answer the question of “what happened”. Explainability can
answer the question of “how” a decision was made in the system. Interpretability can
answer the question of “why” a decision was made by the system and its
meaning or context to the user.
Suggested Actions
• Verify systems are developed to produce explainable models, post-hoc explanations and
audit logs.
• When possible or available, utilize approaches that are inherently explainable, such as
traditional and penalized generalized linear models, decision trees, nearest-neighbor
and prototype-based approaches, rule-based models, generalized additive models,
explainable boosting machines and neural additive models.
• Test explanation methods and resulting explanations prior to deployment to gain
feedback from relevant AI actors, end users, and potentially impacted individuals or
groups about whether explanations are accurate, clear, and understandable.
• Document AI model details including model type (e.g., convolutional neural network,
reinforcement learning, decision tree, random forest, etc.) data features, training
algorithms, proposed uses, decision thresholds, training data, evaluation data, and
ethical considerations.
• Establish, document, and report performance and error metrics across demographic
groups and other segments relevant to the deployment context.
• Explain systems using a variety of methods, e.g., visualizations, model extraction,
feature importance, and others. Since explanations may not accurately summarize
complex systems, test explanations according to properties such as fidelity, consistency,
robustness, and interpretability.
• Assess the characteristics of system explanations according to properties such as
fidelity (local and global), ambiguity, interpretability, interactivity, consistency, and
resilience to attack/manipulation.
• Test the quality of system explanations with end users and other groups.
• Secure model development processes to avoid vulnerability to external manipulation
such as gaming explanation processes.
• Test for changes in models over time, including for models that adjust in response to
production data.
• Use transparency tools such as data statements and model cards to document
explanatory and validation information.
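A simple model-agnostic method of the "feature importance" kind referenced above is permutation importance: shuffle one feature column and measure the drop in a performance metric. The toy model and data below are invented for illustration; libraries listed under Software Resources (e.g., SHAP, LIME) provide richer explanations.

```python
import random

def permutation_importance(predict, X, y, metric, n_repeats=10, seed=0):
    """Average drop in the metric when one feature column is shuffled;
    a simple model-agnostic importance estimate."""
    rng = random.Random(seed)
    baseline = metric(y, [predict(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(baseline - metric(y, [predict(r) for r in Xp]))
        importances.append(sum(drops) / n_repeats)
    return importances

def accuracy(y_true, y_pred):
    return sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)

# Hypothetical model that depends only on the first feature.
model = lambda row: int(row[0] > 0.5)
X = [[0.1, 0.9], [0.8, 0.2], [0.6, 0.7], [0.3, 0.4], [0.9, 0.1], [0.2, 0.8]]
y = [model(r) for r in X]
print(permutation_importance(model, X, y, accuracy))
```

Because the toy model ignores the second feature, its importance comes out as exactly zero, which is the kind of fidelity check the actions above call for before trusting an explanation.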
Transparency & Documentation
Organizations can document the following
• Given the purpose of the AI, what level of explainability or interpretability is required
for how the AI made its determination?
• Given the purpose of this AI, what is an appropriate interval for checking whether it is
still accurate, unbiased, explainable, etc.? What are the checks for this model?
• How has the entity documented the AI system’s data provenance, including sources,
origins, transformations, augmentations, labels, dependencies, constraints, and
metadata?
• What type of information is accessible on the design, operations, and limitations of the
AI system to external stakeholders, including end users, consumers, regulators, and
individuals impacted by use of the AI system?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
• WEF Companion to the Model AI Governance Framework, 2020.
References
Chaofan Chen, Oscar Li, Chaofan Tao, Alina Jade Barnett, Jonathan Su, and Cynthia Rudin.
"This Looks Like That: Deep Learning for Interpretable Image Recognition." arXiv preprint,
submitted December 28, 2019.
Cynthia Rudin. "Stop Explaining Black Box Machine Learning Models for High Stakes
Decisions and Use Interpretable Models Instead." arXiv preprint, submitted September 22,
2019.
David A. Broniatowski. "NISTIR 8367 Psychological Foundations of Explainability and
Interpretability in Artificial Intelligence." National Institute of Standards and Technology
(NIST), 2021.
Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham
Tabik, Alberto Barbado, Salvador Garcia, et al. “Explainable Artificial Intelligence (XAI):
Concepts, Taxonomies, Opportunities, and Challenges Toward Responsible AI.” Information
Fusion 58 (June 2020): 82–115.
Zana Buçinca, Phoebe Lin, Krzysztof Z. Gajos, and Elena L. Glassman. “Proxy Tasks and
Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems.” IUI '20:
Proceedings of the 25th International Conference on Intelligent User Interfaces, March 17,
2020, 454–64.
P. Jonathon Phillips, Carina A. Hahn, Peter C. Fontana, Amy N. Yates, Kristen Greene, David
A. Broniatowski, and Mark A. Przybocki. "NISTIR 8312 Four Principles of Explainable
Artificial Intelligence." National Institute of Standards and Technology (NIST), September
2021.
Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben
Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. “Model Cards for
Model Reporting.” FAT* '19: Proceedings of the Conference on Fairness, Accountability, and
Transparency, January 2019, 220–29.
Ke Yang, Julia Stoyanovich, Abolfazl Asudeh, Bill Howe, HV Jagadish, and Gerome Miklau. “A
Nutritional Label for Rankings.” SIGMOD '18: Proceedings of the 2018 International
Conference on Management of Data, May 27, 2018, 1773–76.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "'Why Should I Trust You?':
Explaining the Predictions of Any Classifier." arXiv preprint, submitted August 9, 2016.
Scott M. Lundberg and Su-In Lee. "A unified approach to interpreting model predictions."
NIPS'17: Proceedings of the 31st International Conference on Neural Information
Processing Systems, December 4, 2017, 4768–4777.
Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, and Himabindu Lakkaraju. “Fooling
LIME and SHAP: Adversarial Attacks on Post Hoc Explanation Methods.” AIES '20:
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, February 7, 2020,
180–86.
David Alvarez-Melis and Tommi S. Jaakkola. "Towards robust interpretability with
self-explaining neural networks." NIPS'18: Proceedings of the 32nd International
Conference on Neural Information Processing Systems, December 3, 2018, 7786–7795.
FinRegLab, Laura Blattner, and Jann Spiess. "Machine Learning Explainability & Fairness:
Insights from Consumer Lending." FinRegLab, April 2022.
Miguel Ferreira, Muhammad Bilal Zafar, and Krishna P. Gummadi. "The Case for Temporal
Transparency: Detecting Policy Change Events in Black-Box Decision Making Systems."
arXiv preprint, submitted October 31, 2016.
Himabindu Lakkaraju, Ece Kamar, Rich Caruana, and Jure Leskovec. "Interpretable &
Explorable Approximations of Black Box Models." arXiv preprint, July 4, 2017.
Software Resources
• SHAP
• LIME
• Interpret
• PiML
• Iml
• Dalex
MEASURE 2.10
Privacy risk of the AI system – as identified in the MAP function – is examined and
documented.
About
Privacy refers generally to the norms and practices that help to safeguard human autonomy,
identity, and dignity. These norms and practices typically address freedom from intrusion,
limiting observation, or individuals’ agency to consent to disclosure or control of facets of
their identities (e.g., body, data, reputation).
Privacy values such as anonymity, confidentiality, and control generally should guide
choices for AI system design, development, and deployment. Privacy -related risks may
influence security, bias, and transparency and come with tradeoffs with these other
characteristics. Like safety and security, specific technical features of an AI system may
promote or reduce privacy. AI systems can also present new risks to privacy by allowing
inference to identify individuals or previously private information about individuals.
Privacy-enhancing technologies (“PETs”) for AI, as well as data minimizing methods such as
de-identification and aggregation for certain model outputs, can support design for
privacy-enhanced AI systems. Under certain conditions such as data sparsity, privacy-enhancing
techniques can result in a loss in accuracy, impacting decisions about fairness and other
values in certain domains.
Suggested Actions
• Specify privacy -related values, frameworks, and attributes that are applicable in the
context of use through direct engagement with end users and potentially impacted
groups and communities.
• Document collection, use, management, and disclosure of personally sensitive
information (PSI) in datasets, in accordance with privacy and data governance policies.
• Quantify privacy-level data aspects such as the ability to identify individuals or groups
(e.g., k-anonymity metrics, l-diversity, t-closeness).
• Establish and document protocols (authorization, duration, type) and access controls
for training sets or production data containing personally sensitive information, in
accordance with privacy and data governance policies.
• Monitor internal queries to production data for detecting patterns that isolate personal
records.
• Monitor PSI disclosures and inference of sensitive or legally protected attributes.
• Assess the risk of manipulation from overly customized content. Evaluate
information presented to representative users at various points along axes of
difference between individuals (e.g. individuals of different ages, genders, races,
political affiliation, etc.).
• Use privacy-enhancing techniques such as differential privacy, when publicly sharing
dataset information.
• Collaborate with privacy experts, AI end users and operators, and other domain experts
to determine optimal differential privacy metrics within contexts of use.
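The k-anonymity quantification suggested above can be sketched directly: group records by their quasi-identifier values and take the smallest group size. The records and column names below are invented for illustration; real assessments would run against the actual dataset under the organization's data governance policies.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest equivalence-class size over the quasi-identifier columns.
    A dataset is k-anonymous if every class contains at least k rows."""
    groups = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return min(groups.values())

# Hypothetical generalized records (zip codes and ages already coarsened).
records = [
    {"zip": "021*", "age": "30-39", "diagnosis": "flu"},
    {"zip": "021*", "age": "30-39", "diagnosis": "asthma"},
    {"zip": "021*", "age": "30-39", "diagnosis": "flu"},
    {"zip": "148*", "age": "20-29", "diagnosis": "flu"},
    {"zip": "148*", "age": "20-29", "diagnosis": "cancer"},
]
print(k_anonymity(records, ["zip", "age"]))  # 2
```

Related metrics such as l-diversity and t-closeness extend this by also examining the distribution of sensitive values (here, "diagnosis") within each equivalence class.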
Transparency & Documentation
Organizations can document the following
• Did your organization implement accountability -based practices in data management
and protection (e.g. the PDPA and OECD Privacy Principles)?
• What assessments has the entity conducted on data security and privacy impacts
associated with the AI system?
• Did your organization implement a risk management system to address risks involved
in deploying the identified AI solution (e.g. personnel risk or changes to commercial
objectives)?
• Does the dataset contain information that might be considered sensitive or confidential?
(e.g., personally identifying information)
• If it relates to people, could this dataset expose people to harm or legal action? (e.g.,
financial, social or otherwise) What was done to mitigate or reduce the potential for
harm?
AI Transparency Resources
• WEF Companion to the Model AI Governance Framework, 2020.
• Datasheets for Datasets.
References
Kaitlin R. Boeckl and Naomi B. Lefkovitz. "NIST Privacy Framework: A Tool for Improving
Privacy Through Enterprise Risk Management, Version 1.0." National Institute of Standards
and Technology (NIST), January 16, 2020.
Latanya Sweeney. “K-Anonymity: A Model for Protecting Privacy.” International Journal of
Uncertainty, Fuzziness and Knowledge-Based Systems 10, no. 5 (2002): 557–70.
Ashwin Machanavajjhala, Johannes Gehrke, Daniel Kifer, and Muthuramakrishnan
Venkitasubramaniam. “L-Diversity: Privacy beyond K-Anonymity.” 22nd International
Conference on Data Engineering (ICDE'06), 2006.
Ninghui Li, Tiancheng Li, and Suresh Venkatasubramanian. "CERIAS Tech Report 2007-78
t-Closeness: Privacy Beyond k-Anonymity and l-Diversity." Center for Education and
Research, Information Assurance and Security, Purdue University, 2007.
J. Domingo-Ferrer and J. Soria-Comas. "From t-closeness to differential privacy and vice
versa in data anonymization." arXiv preprint, submitted December 21, 2015.
Joseph Near, David Darais, and Kaitlin Boeckl. "Differential Privacy for Privacy-Preserving
Data Analysis: An Introduction to our Blog Series." National Institute of Standards and
Technology (NIST), July 27, 2020.
Cynthia Dwork. “Differential Privacy.” Automata, Languages and Programming, 2006, 1–12.
Zhanglong Ji, Zachary C. Lipton, and Charles Elkan. "Differential Privacy and Machine
Learning: a Survey and Review." arXiv preprint, submitted December 24, 2014.
Michael B. Hawes. "Implementing Differential Privacy: Seven Lessons From the 2020 United
States Census." Harvard Data Science Review 2, no. 2 (2020).
Harvard University Privacy Tools Project. “Differential Privacy.” Harvard University, n.d.
John M. Abowd, Robert Ashmead, Ryan Cumings-Menon, Simson Garfinkel, Micah Heineck,
Christine Heiss, Robert Johns, Daniel Kifer, Philip Leclerc, Ashwin Machanavajjhala, Brett
Moran, William Matthew Spence Sexton and Pavel Zhuravlev. "The 2020 Census Disclosure
Avoidance System TopDown Algorithm." United States Census Bureau, April 7, 2022.
Nicolas Papernot and Abhradeep Guha Thakurta. "How to deploy machine learning with
differential privacy." National Institute of Standards and Technology (NIST), December 21,
2021.
Claire McKay Bowen. "Utility Metrics for Differential Privacy: No One-Size-Fits-All." National Institute of Standards and Technology (NIST), November 29, 2021.
Helen Nissenbaum. "Contextual Integrity Up and Down the Data Food Chain." Theoretical Inquiries in Law 20, no. 1 (2019): 221–56.
Sebastian Benthall, Seda Gürses, and Helen Nissenbaum. “Contextual Integrity through the Lens of Computer Science.” Foundations and Trends in Privacy and Security 2, no. 1 (December 22, 2017): 1–69.
Jenifer Sunrise Winter and Elizabeth Davidson. “Big Data Governance of Personal Health Information and Challenges to Contextual Integrity.” The Information Society: An International Journal 35, no. 1 (2019): 36–51.
MEASURE 2.11
Fairness and bias – as identified in the MAP function – are evaluated and results are documented.
About
Fairness in AI includes concerns for equality and equity by addressing issues such as
harmful bias and discrimination. Standards of fairness can be complex and difficult to define
because perceptions of fairness differ among cultures and may shift depending on
application. Organizations’ risk management efforts will be enhanced by recognizing and
considering these differences. Systems in which harmful biases are mitigated are not
necessarily fair. For example, systems in which predictions are somewhat balanced across demographic groups may still be inaccessible to individuals with disabilities or affected by the digital divide, or may exacerbate existing disparities or systemic biases.
Bias is broader than demographic balance and data representativeness. NIST has identified three major categories of AI bias to be considered and managed: systemic, computational and statistical, and human-cognitive. Each of these can occur in the absence of prejudice, partiality, or discriminatory intent.
• Systemic bias can be present in AI datasets, the organizational norms, practices, and
processes across the AI lifecycle, and the broader society that uses AI systems.
• Computational and statistical biases can be present in AI datasets and algorithmic processes, and often stem from systematic errors due to non-representative samples.
• Human-cognitive biases relate to how an individual or group perceives AI system information to make a decision or fill in missing information, or how humans think about purposes and functions of an AI system. Human-cognitive biases are omnipresent in decision-making processes across the AI lifecycle and system use, including the design, implementation, operation, and maintenance of AI.
Bias exists in many forms and can become ingrained in the automated systems that help
make decisions about our lives. While bias is not always a negative phenomenon, AI systems
can potentially increase the speed and scale of biases and perpetuate and amplify harms to
individuals, groups, communities, organizations, and society.
Suggested Actions
• Conduct fairness assessments to manage computational and statistical forms of bias, which include the following steps:
• Identify types of harms, including allocational, representational, quality of service,
stereotyping, or erasure
• Identify across-group, within-group, and intersecting groups that might be harmed
• Quantify harms using both a general fairness metric, if appropriate (e.g. demographic parity, equalized odds, equal opportunity, statistical hypothesis tests), and custom, context-specific metrics developed in collaboration with affected communities
• Analyze quantified harms for contextually significant differences across groups,
within groups, and among intersecting groups
• Refine identification of within-group and intersectional group disparities.
• Evaluate underlying data distributions and employ sensitivity analysis during
the analysis of quantified harms.
• Evaluate quality metrics including false positive rates and false negative rates.
• Consider biases affecting small groups, within-group or intersectional communities, or single individuals.
• Understand and consider sources of bias in training and TEVV data:
• Differences in distributions of outcomes across and within groups, including
intersecting groups.
• Completeness, representativeness and balance of data sources.
• Identify input data features that may serve as proxies for demographic group membership (e.g., credit score, ZIP code) or otherwise give rise to emergent bias within AI systems.
• Forms of systemic bias in images, text (or word embeddings), audio or other
complex or unstructured data.
• Leverage impact assessments to identify and classify system impacts and harms to end
users, other individuals, and groups with input from potentially impacted communities.
• Identify the classes of individuals, groups, or environmental ecosystems which might be
impacted through direct engagement with potentially impacted communities.
• Evaluate systems with regard to disability inclusion, including consideration of disability status in bias testing, and discriminatory screen-out processes that may arise from non-inclusive design or deployment decisions.
• Develop objective functions in consideration of systemic biases and in-group/out-group dynamics.
• Use context-specific fairness metrics to examine how system performance varies across groups, within groups, and/or for intersecting groups. Metrics may include statistical parity, error-rate equality, statistical parity difference, equal opportunity difference, average absolute odds difference, standardized mean difference, and percentage point differences.
• Customize fairness metrics to specific context of use to examine how system
performance and potential harms vary within contextual norms.
• Define acceptable levels of difference in performance in accordance with established organizational governance policies, business requirements, regulatory compliance, legal frameworks, and ethical standards within the context of use.
• Define the actions to be taken if disparity levels rise above acceptable levels.
• Identify groups within the expected population that may require disaggregated analysis,
in collaboration with impacted communities.
• Leverage experts with knowledge in the specific context of use to investigate substantial
measurement differences and identify root causes for those differences.
• Monitor system outputs for performance or bias issues that exceed established
tolerance levels.
• Ensure periodic model updates; test and recalibrate with updated and more
representative data to stay within acceptable levels of difference.
• Apply pre-processing data transformations to address factors related to demographic balance and data representativeness.
• Apply in-processing techniques to balance model performance quality with bias considerations.
• Apply post-processing mathematical/computational techniques to model results in close collaboration with impact assessors, socio-technical experts, and other AI actors with expertise in the context of use.
• Apply model selection approaches with transparent and deliberate consideration of bias
management and other trustworthy characteristics.
• Collect and share information about differences in outcomes for the identified groups.
• Consider remediations to mitigate differences, especially those that can be traced to past patterns of unfair or biased human decision making.
• Utilize human-centered design practices to generate deeper focus on societal impacts and counter human-cognitive biases within the AI lifecycle.
• Evaluate practices along the lifecycle to identify potential sources of human-cognitive bias such as availability, observational, and confirmation bias, and to make implicit decision-making processes more explicit and open to investigation.
• Work with human factors experts to evaluate biases in the presentation of system
output to end users, operators and practitioners.
• Utilize processes to enhance contextual awareness, such as diverse internal staff and
stakeholder engagement.
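As one concrete illustration of the quantification steps above, the sketch below computes two of the general fairness metrics named in the Suggested Actions (demographic parity difference and equalized odds difference) over fabricated toy predictions and checks them against an acceptable level of difference. The data, the tolerance value, and the function names are illustrative assumptions, not part of any cited standard; in practice organizations would rely on vetted tooling such as the packages listed under Software Resources.

```python
from collections import defaultdict

def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    by_group = defaultdict(list)
    for pred, group in zip(y_pred, groups):
        by_group[group].append(pred)
    rates = [sum(preds) / len(preds) for preds in by_group.values()]
    return max(rates) - min(rates)

def equalized_odds_difference(y_true, y_pred, groups):
    """Largest per-group gap in true-positive or false-positive rate.

    Assumes every group contains both positive and negative labels
    (true of the toy data below; real data needs guards).
    """
    by_group = defaultdict(list)
    for true, pred, group in zip(y_true, y_pred, groups):
        by_group[group].append((true, pred))
    tpr, fpr = {}, {}
    for group, pairs in by_group.items():
        pos = [pred for true, pred in pairs if true == 1]
        neg = [pred for true, pred in pairs if true == 0]
        tpr[group] = sum(pos) / len(pos)
        fpr[group] = sum(neg) / len(neg)
    def gap(rates):
        return max(rates.values()) - min(rates.values())
    return max(gap(tpr), gap(fpr))

# Toy labels, predictions, and group membership (fabricated).
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

dpd = demographic_parity_difference(y_pred, groups)
eod = equalized_odds_difference(y_true, y_pred, groups)

TOLERANCE = 0.2  # acceptable level of difference, set by governance policy
action = "within tolerance" if max(dpd, eod) <= TOLERANCE else "escalate per policy"
```

A gate like the final line operationalizes "define the actions to be taken if disparity levels rise above acceptable levels"; the specific threshold belongs to organizational policy, not to the metric.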
Transparency & Documentation
Organizations can document the following
• To what extent are the established procedures effective in mitigating bias, inequity, and
other concerns resulting from the system?
• If it relates to people, does it unfairly advantage or disadvantage a particular social
group? In what ways? How was this mitigated?
• Given the purpose of this AI, what is an appropriate interval for checking whether it is
still accurate, unbiased, explainable, etc.? What are the checks for this model?
• How has the entity identified and mitigated potential impacts of bias in the data,
including inequitable or discriminatory outcomes?
• To what extent has the entity identified and mitigated potential bias—statistical, contextual, and historical—in the data?
• Were adversarial machine learning approaches considered or used for measuring bias (e.g., prompt engineering, adversarial models)?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
• WEF Companion to the Model AI Governance Framework, 2020.
• Datasheets for Datasets.
References
Ali Hasan, Shea Brown, Jovana Davidovic, Benjamin Lange, and Mitt Regan. “Algorithmic
Bias and Risk Assessments: Lessons from Practice.” Digital Society 1 (2022).
Richard N. Landers and Tara S. Behrend. “Auditing the AI Auditors: A Framework for Evaluating Fairness and Bias in High Stakes AI Predictive Models.” American Psychologist 78, no. 1 (2023): 36–49.
Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. “A Survey on Bias and Fairness in Machine Learning.” ACM Computing Surveys 54, no. 6 (July 2021): 1–35.
Michele Loi and Christoph Heitz. “Is Calibration a Fairness Requirement?” FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency, June 2022, 2026–34.
Shea Brown, Ryan Carrier, Merve Hickok, and Adam Leon Smith. “Bias Mitigation in Data
Sets.” SocArXiv, July 8, 2021.
Reva Schwartz, Apostol Vassilev, Kristen Greene, Lori Perine, Andrew Burt, and Patrick Hall.
"NIST Special Publication 1270 Towards a Standard for Identifying and Managing Bias in
Artificial Intelligence." National Institute of Standards and Technology (NIST), 2022.
Microsoft Research. “AI Fairness Checklist.” Microsoft, February 7, 2022.
Samir Passi and Solon Barocas. “Problem Formulation and Fairness.” FAT* '19: Proceedings of the Conference on Fairness, Accountability, and Transparency, January 2019, 39–48.
Jade S. Franklin, Karan Bhanot, Mohamed Ghalwash, Kristin P. Bennett, Jamie McCusker, and Deborah L. McGuinness. “An Ontology for Fairness Metrics.” AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, July 2022, 265–75.
Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. “Mitigating Unwanted Biases with Adversarial Learning.” Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 2018. https://arxiv.org/pdf/1801.07593.pdf
Deep Ganguli et al. “The Capacity for Moral Self-Correction in Large Language Models.” arXiv preprint, 2023. https://arxiv.org/abs/2302.07459
Arvind Narayanan. “Tl;DS - 21 Fairness Definitions and Their Politics by Arvind Narayanan.” Dora's world, July 19, 2019.
Ben Green. “Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic
Fairness.” Philosophy and Technology 35, no. 90 (October 8, 2022).
Alexandra Chouldechova. “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments.” Big Data 5, no. 2 (June 1, 2017): 153–63.
Sina Fazelpour and Zachary C. Lipton. “Algorithmic Fairness from a Non-Ideal Perspective.” AIES '20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, February 7, 2020, 57–63.
Hemank Lamba, Kit T. Rodolfa, and Rayid Ghani. “An Empirical Comparison of Bias Reduction Methods on Real-World Problems in High-Stakes Policy Settings.” ACM SIGKDD Explorations Newsletter 23, no. 1 (May 29, 2021): 69–85.
ISO. “ISO/IEC TR 24027:2021 Information technology — Artificial intelligence (AI) — Bias
in AI systems and AI aided decision making.” ISO Standards, November 2021.
Shari Trewin. "AI Fairness for People with Disabilities: Point of View." arXiv preprint,
submitted November 26, 2018.
MathWorks. “Explore Fairness Metrics for Credit Scoring Model.” MATLAB & Simulink,
2023.
Abigail Z. Jacobs and Hanna Wallach. “Measurement and Fairness.” FAccT '21: Proceedings
of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021,
375–85.
Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. "Quantifying and Reducing Stereotypes in Word Embeddings." arXiv preprint, submitted June 20, 2016.
Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. “Semantics Derived Automatically from Language Corpora Contain Human-Like Biases.” Science 356, no. 6334 (April 14, 2017): 183–86.
Sina Fazelpour and Maria De-Arteaga. “Diversity in Sociotechnical Machine Learning Systems.” Big Data and Society 9, no. 1 (2022).
Fairlearn. “Fairness in Machine Learning.” Fairlearn 0.8.0 Documentation, n.d.
Safiya Umoja Noble. Algorithms of Oppression: How Search Engines Reinforce Racism. New
York, NY: New York University Press, 2018.
Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science 366, no. 6464 (October 25, 2019): 447–53.
Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach. "A
Reductions Approach to Fair Classification." arXiv preprint, submitted July 16, 2018.
Moritz Hardt, Eric Price, and Nathan Srebro. "Equality of Opportunity in Supervised
Learning." arXiv preprint, submitted October 7, 2016.
Alekh Agarwal, Miroslav Dudik, and Zhiwei Steven Wu. "Fair Regression: Quantitative Definitions and Reduction-Based Algorithms." Proceedings of the 36th International Conference on Machine Learning, PMLR 97:120–129, 2019.
Andrew D. Selbst, Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi. “Fairness and Abstraction in Sociotechnical Systems.” FAT* '19: Proceedings of the Conference on Fairness, Accountability, and Transparency, January 29, 2019, 59–68.
Matthew Kay, Cynthia Matuszek, and Sean A. Munson. “Unequal Representation and Gender Stereotypes in Image Search Results for Occupations.” CHI '15: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, April 18, 2015, 3819–28.
Software Resources
• aequitas
• AI Fairness 360:
• Python
• R
• algofairness
• fairlearn
• fairml
• fairmodels
• fairness
• solas-ai-disparity
• tensorflow/fairness-indicators
• Themis
MEASURE 2.12
Environmental impact and sustainability of AI model training and management activities –
as identified in the MAP function – are assessed and documented.
About
Large-scale, high-performance computational resources used by AI systems for training and operation can contribute to environmental impacts. Direct negative impacts to the environment from these processes are related to energy consumption, water consumption, and greenhouse gas (GHG) emissions. The OECD has identified metrics for each type of negative direct impact.
Indirect negative impacts to the environment reflect the complexity of interactions between human behavior, socio-economic systems, and the environment and can include induced consumption and “rebound effects”, where efficiency gains are offset by accelerated resource consumption.
Other AI related environmental impacts can arise from the production of computational
equipment and networks (e.g. mining and extraction of raw materials), transporting
hardware, and electronic waste recycling or disposal.
Suggested Actions
• Include environmental impact indicators in AI system design and development plans,
including reducing consumption and improving efficiencies.
• Identify and implement key indicators of AI system energy and water consumption and
efficiency, and/or GHG emissions.
• Establish measurable baselines for sustainable AI system operation in accordance with
organizational policies, regulatory compliance, legal frameworks, and environmental
protection and sustainability norms.
• Assess tradeoffs between AI system performance and sustainable operations in
accordance with organizational principles and policies, regulatory compliance, legal
frameworks, and environmental protection and sustainability norms.
• Identify and establish acceptable resource consumption and efficiency, and GHG
emissions levels, along with actions to be taken if indicators rise above acceptable
levels.
• Estimate AI system emissions levels throughout the AI lifecycle via carbon calculators or
similar process.
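The last action above can be operationalized with a back-of-the-envelope calculation in the spirit of the carbon calculators cited in the References: energy drawn by the hardware, scaled by facility overhead (PUE) and the grid's carbon intensity. The sketch below is a minimal illustration; the default PUE, grid intensity, and the run parameters are fabricated placeholders, not values endorsed by any cited tool.

```python
def estimate_training_emissions_kg(accelerators, watts_each, hours,
                                   pue=1.5, kgco2e_per_kwh=0.4):
    """Rough GHG estimate for a training run.

    energy (kWh) = accelerators x average draw (kW) x hours x facility PUE
    emissions    = energy x grid carbon intensity (kg CO2e per kWh)
    The default PUE and grid intensity are illustrative assumptions only.
    """
    energy_kwh = accelerators * (watts_each / 1000.0) * hours * pue
    return energy_kwh * kgco2e_per_kwh

# Hypothetical run: 8 accelerators drawing 300 W each for 72 hours.
emissions_kg = estimate_training_emissions_kg(8, 300.0, 72.0)
```

Even a coarse estimate like this supports establishing measurable baselines and tracking whether emissions indicators stay within acceptable levels.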
Transparency & Documentation
Organizations can document the following
• Are greenhouse gas emissions, and energy and water consumption and efficiency
tracked within the organization?
• Are deployed AI systems evaluated for potential upstream and downstream
environmental impacts (e.g., increased consumption, increased emissions, etc.)?
• Could deployed AI systems cause environmental incidents, e.g., air or water pollution
incidents, toxic spills, fires or explosions?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
• Datasheets for Datasets.
References
Organisation for Economic Co-operation and Development (OECD). "Measuring the environmental impacts of artificial intelligence compute and applications: The AI footprint." OECD Digital Economy Papers, No. 341, OECD Publishing, Paris.
Victor Schmidt, Alexandra Luccioni, Alexandre Lacoste, and Thomas Dandres. “Machine
Learning CO2 Impact Calculator.” ML CO2 Impact, n.d.
Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. "Quantifying
the Carbon Emissions of Machine Learning." arXiv preprint, submitted November 4, 2019.
Matthew Hutson. “Measuring AI’s Carbon Footprint: New Tools Track and Reduce
Emissions from Machine Learning.” IEEE Spectrum, November 22, 2022.
Association for Computing Machinery (ACM). "TechBriefs: Computing and Climate Change."
ACM Technology Policy Council, November 2021.
Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. “Green AI.” Communications of the ACM 63, no. 12 (December 2020): 54–63.
MEASURE 2.13
Effectiveness of the employed TEVV metrics and processes in the MEASURE function are
evaluated and documented.
About
The development of metrics is a process often considered to be objective but, as a human- and organization-driven endeavor, it can reflect implicit and systemic biases and may inadvertently reflect factors unrelated to the target function. Measurement approaches can be oversimplified, gamed, lack critical nuance, become used and relied upon in unexpected ways, or fail to account for differences in affected groups and contexts.
Revisiting the metrics chosen in Measure 2.1 through 2.12 in a process of continual
improvement can help AI actors to evaluate and document metric effectiveness and make
necessary course corrections.
Suggested Actions
• Review selected system metrics and associated TEVV processes to determine if they are
able to sustain system improvements, including the identification and removal of errors.
• Regularly evaluate system metrics for utility, and consider descriptive approaches in
place of overly complex methods.
• Review selected system metrics for acceptability within the end user and impacted
community of interest.
• Assess effectiveness of metrics for identifying and measuring risks.
Transparency & Documentation
Organizations can document the following
• To what extent does the system/entity consistently measure progress towards stated
goals and objectives?
• Given the purpose of this AI, what is an appropriate interval for checking whether it is
still accurate, unbiased, explainable, etc.? What are the checks for this model?
• What corrective actions has the entity taken to enhance the quality, accuracy, reliability,
and representativeness of the data?
• To what extent are the model outputs consistent with the entity’s values and principles
to foster public trust and equity?
• How will the accuracy or appropriate performance metrics be assessed?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
References
Arvind Narayanan. "The limits of the quantitative approach to discrimination." 2022 James
Baldwin lecture, Princeton University, October 11, 2022.
Devansh Saxena, Karla Badillo-Urquiola, Pamela J. Wisniewski, and Shion Guha. “A Human-Centered Review of Algorithms Used within the U.S. Child Welfare System.” CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, April 23, 2020, 1–15.
Rachel Thomas and David Uminsky. “Reliance on Metrics Is a Fundamental Challenge for
AI.” Patterns 3, no. 5 (May 13, 2022): 100476.
Momin M. Malik. "A Hierarchy of Limitations in Machine Learning." arXiv preprint,
submitted February 29, 2020.
MEASURE 3.1
Approaches, personnel, and documentation are in place to regularly identify and track
existing, unanticipated, and emergent AI risks based on factors such as intended and actual
performance in deployed contexts.
About
For trustworthy AI systems, regular system monitoring is carried out in accordance with
organizational governance policies, AI actor roles and responsibilities, and within a culture
of continual improvement. If and when emergent or complex risks arise, it may be
necessary to adapt internal risk management procedures, such as regular monitoring, to
stay on course. Documentation, resources, and training are part of an overall strategy to
support AI actors as they investigate and respond to AI system errors, incidents or negative
impacts.
Suggested Actions
• Compare AI system risks with:
• simpler or traditional models
• human baseline performance
• other manual performance benchmarks
• Compare end user and community feedback about deployed AI systems to internal
measures of system performance.
• Assess effectiveness of metrics for identifying and measuring emergent risks.
• Measure error response times and track response quality.
• Elicit and track feedback from AI actors in user support roles about the type of metrics, explanations and other system information required for thorough resolution of system issues. Consider:
• Instances where explanations are insufficient for investigating possible error
sources or identifying responses.
• System metrics, including system logs and explanations, for identifying and
diagnosing sources of system error.
• Elicit and track feedback from AI actors in incident response and support roles about
the adequacy of staffing and resources to perform their duties in an effective and timely
manner.
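The first suggested action, comparing AI system risks against simpler models and human baselines, can start as a plain accuracy comparison. The sketch below uses fabricated labels and predictions purely for illustration; real comparisons would use the organization's own benchmarks and metrics.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions matching the reference labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def compare_to_baselines(y_true, model_pred, baselines):
    """Report deployed-model accuracy next to each named baseline."""
    report = {"model": accuracy(y_true, model_pred)}
    for name, pred in baselines.items():
        report[name] = accuracy(y_true, pred)
    return report

# Fabricated outcomes for a binary screening task.
y_true     = [1, 0, 1, 1, 0, 1]
model_pred = [1, 0, 1, 0, 0, 1]   # deployed AI system
rule_pred  = [1, 1, 1, 1, 1, 1]   # simpler traditional rule: always flag
human_pred = [1, 0, 0, 1, 0, 1]   # human reviewer decisions

report = compare_to_baselines(y_true, model_pred,
                              {"simple_rule": rule_pred, "human": human_pred})
```

A report like this makes it easy to see whether the deployed system actually outperforms the simpler model and the human baseline, or merely matches them at greater risk.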
Transparency & Documentation
Organizations can document the following
• Did your organization implement a risk management system to address risks involved
in deploying the identified AI solution (e.g. personnel risk or changes to commercial
objectives)?
• To what extent can users or parties affected by the outputs of the AI system test the AI
system and provide feedback?
• What metrics has the entity developed to measure performance of the AI system,
including error logging?
• To what extent do the metrics provide an accurate and useful measure of performance?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
• WEF Companion to the Model AI Governance Framework – Implementation and Self-Assessment Guide for Organizations
References
ISO. "ISO 9241-210:2019 Ergonomics of human-system interaction — Part 210: Human-centred design for interactive systems." 2nd ed. ISO Standards, July 2019.
Larysa Visengeriyeva, et al. “Awesome MLOps.” GitHub.
MEASURE 3.2
Risk tracking approaches are considered for settings where AI risks are difficult to assess
using currently available measurement techniques or where metrics are not yet available.
About
Risks identified in the Map function may be complex, emerge over time, or be difficult to measure. Systematic methods for risk tracking, including novel measurement approaches, can be established as part of regular monitoring and improvement processes.
Suggested Actions
• Establish processes for tracking emergent risks that may not be measurable with
current approaches. Some processes may include:
• Recourse mechanisms for faulty AI system outputs.
• Bug bounties.
• Human-centered design approaches.
• User-interaction and experience research.
• Participatory stakeholder engagement with affected or potentially impacted
individuals and communities.
• Identify AI actors responsible for tracking emergent risks and inventory methods.
• Determine and document the rate of occurrence and severity level for complex or difficult-to-measure risks when:
• Prioritizing new measurement approaches for deployment tasks.
• Allocating AI system risk management resources.
• Evaluating AI system improvements.
• Making go/no-go decisions for subsequent system iterations.
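The occurrence-rate and severity bookkeeping above can be as simple as a shared risk register. The sketch below is a minimal illustration; the field names, the 30-day window, and the severity-times-rate priority rule are all hypothetical conventions, not prescribed by the framework.

```python
from dataclasses import dataclass, field
from datetime import date

SEVERITY = {"low": 1, "medium": 2, "high": 3}

@dataclass
class EmergentRisk:
    description: str
    severity: str                                 # "low" / "medium" / "high"
    observed: list = field(default_factory=list)  # dates each occurrence was seen

    def recent_occurrences(self, today, window_days=30):
        """Count occurrences inside the trailing window."""
        return sum(1 for d in self.observed if (today - d).days <= window_days)

    def priority(self, today):
        """Illustrative ordering rule: severity weight x recent occurrence count."""
        return SEVERITY[self.severity] * self.recent_occurrences(today)

# Hypothetical register entry.
risk = EmergentRisk(
    "faulty output requiring recourse", "high",
    [date(2023, 5, 1), date(2023, 5, 20), date(2023, 3, 1)],
)
score = risk.priority(date(2023, 5, 25))
```

Sorting register entries by a score like this gives a defensible, documented basis for allocating risk management resources and prioritizing new measurement approaches.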
Transparency & Documentation
Organizations can document the following
• Who is ultimately responsible for the decisions of the AI and is this person aware of the
intended uses and limitations of the analytic?
• Who will be responsible for maintaining, re-verifying, monitoring, and updating this AI
once deployed?
• To what extent does the entity communicate its AI strategic goals and objectives to the
community of stakeholders?
• Given the purpose of this AI, what is an appropriate interval for checking whether it is
still accurate, unbiased, explainable, etc.? What are the checks for this model?
• If anyone believes that the AI no longer meets this ethical framework, who will be
responsible for receiving the concern and as appropriate investigating and remediating
the issue? Do they have authority to modify, limit, or stop the use of the AI?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
References
ISO. "ISO 9241-210:2019 Ergonomics of human-system interaction — Part 210: Human-centred design for interactive systems." 2nd ed. ISO Standards, July 2019.
Mark C. Paulk, Bill Curtis, Mary Beth Chrissis, and Charles V. Weber. “Capability Maturity Model, Version 1.1.” IEEE Software 10, no. 4 (1993): 18–27.
Jeff Patton, Peter Economy, Martin Fowler, Alan Cooper, and Marty Cagan. User Story
Mapping: Discover the Whole Story, Build the Right Product. O'Reilly, 2014.
Rumman Chowdhury and Jutta Williams. "Introducing Twitter’s first algorithmic bias
bounty challenge." Twitter Engineering Blog, July 30, 2021.
HackerOne. "Twitter Algorithmic Bias." HackerOne, August 8, 2021.
Josh Kenway, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji, and Joy Buolamwini. "Bug Bounties for Algorithmic Harms?" Algorithmic Justice League, January 2022.
Microsoft. “Community Jury.” Microsoft Learn's Azure Application Architecture Guide, 2023.
Margarita Boyarskaya, Alexandra Olteanu, and Kate Crawford. "Overcoming Failures of
Imagination in AI Infused System Development and Deployment." arXiv preprint, submitted
December 10, 2020.
MEASURE 3.3
Feedback processes for end users and impacted communities to report problems and
appeal system outcomes are established and integrated into AI system evaluation metrics.
About
Assessing impact is a two-way effort. Many AI system outcomes and impacts may not be
visible or recognizable to AI actors across the development and deployment dimensions of
the AI lifecycle, and may require direct feedback about system outcomes from the
perspective of end users and impacted groups.
Feedback can be collected indirectly, via systems that are mechanized to collect errors and other feedback from end users and operators.
Metrics and insights developed in this sub-category feed into Manage 4.1 and 4.2.
Suggested Actions
• Measure efficacy of end user and operator error reporting processes.
• Categorize and analyze type and rate of end user appeal requests and results.
• Measure feedback activity participation rates and awareness of feedback activity
availability.
• Utilize feedback to analyze measurement approaches and determine subsequent
courses of action.
• Evaluate measurement approaches to determine efficacy for enhancing organizational
understanding of real world impacts.
• Analyze end user and community feedback in close collaboration with domain experts.
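The measurement actions above reduce to straightforward bookkeeping over a feedback log. The sketch below assumes a hypothetical event schema (the `kind`, `outcome`, and `user` fields are invented for illustration): categorize feedback by type, track appeal outcomes, and compute participation against the active user base.

```python
from collections import Counter

def feedback_metrics(events, active_users):
    """Summarize feedback volume by type, appeal outcomes, and participation."""
    by_kind = Counter(e["kind"] for e in events)
    appeals = [e for e in events if e["kind"] == "appeal"]
    upheld = sum(1 for e in appeals if e.get("outcome") == "upheld")
    participants = {e["user"] for e in events}
    return {
        "by_kind": dict(by_kind),
        "appeal_upheld_rate": upheld / len(appeals) if appeals else 0.0,
        "participation_rate": len(participants) / active_users,
    }

# Fabricated feedback log.
events = [
    {"user": "u1", "kind": "error_report"},
    {"user": "u2", "kind": "appeal", "outcome": "upheld"},
    {"user": "u2", "kind": "appeal", "outcome": "denied"},
    {"user": "u3", "kind": "error_report"},
]
metrics = feedback_metrics(events, active_users=100)
```

A low participation rate alongside a high appeal-upheld rate, for example, would suggest that many impacted users are not reaching the feedback channel at all, which is itself a finding to feed into Manage 4.1 and 4.2.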
Transparency & Documentation
Organizations can document the following
• To what extent can users or parties affected by the outputs of the AI system test the AI
system and provide feedback?
• Did your organization address usability problems and test whether user interfaces
served their intended purposes?
• How easily accessible and current is the information available to external stakeholders?
• What type of information is accessible on the design, operations, and limitations of the
AI system to external stakeholders, including end users, consumers, regulators, and
individuals impacted by use of the AI system?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal
Agencies & Other Entities.
• WEF Companion to the Model AI Governance Framework – Implementation and Self-Assessment Guide for Organizations
References
Sasha Costanza-Chock. Design Justice: Community-Led Practices to Build the Worlds We Need. Cambridge: The MIT Press, 2020.
David G. Robinson. Voices in the Code: A Story About People, Their Values, and the
Algorithm They Made. New York: Russell Sage Foundation, 2022.
Fernando Delgado, Stephen Yang, Michael Madaio, and Qian Yang. "Stakeholder
Participation in AI: Beyond 'Add Diverse Stakeholders and Stir.'" arXiv preprint, submitted
November 1, 2021.
George Margetis, Stavroula Ntoa, Margherita Antona, and Constantine Stephanidis. “Human-Centered Design of Artificial Intelligence.” In Handbook of Human Factors and Ergonomics, edited by Gavriel Salvendy and Waldemar Karwowski, 5th ed., 1085–1106. John Wiley & Sons, 2021.
Ben Shneiderman. Human-Centered AI. Oxford: Oxford University Press, 2022.
Batya Friedman, David G. Hendry, and Alan Borning. “A Survey of Value Sensitive Design Methods.” Foundations and Trends in Human-Computer Interaction 11, no. 2 (November 22, 2017): 63–125.
Batya Friedman, Peter H. Kahn, Jr., and Alan Borning. "Value Sensitive Design: Theory and Methods." University of Washington Department of Computer Science & Engineering Technical Report 02-12-01, December 2002.
Emanuel Moss, Elizabeth Watkins, Ranjit Singh, Madeleine Clare Elish, and Jacob Metcalf.
“Assembling Accountability: Algorithmic Impact Assessment for the Public Interest.” SSRN,
July 8, 2021.
Alexandra Reeve Givens and Meredith Ringel Morris. “Centering Disability Perspectives in Algorithmic Fairness, Accountability, & Transparency.” FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 27, 2020, 684.
MEASURE 4.1
Measurement approaches for identifying AI risks are connected to deployment context(s)
and informed through consultation with domain experts and other end users. Approaches
are documented.
About
AI Actors carrying out TEVV tasks may have difficulty evaluating impacts within the system
context of use. AI system risks and impacts are often best described by end users and others
who may be affected by output and subsequent decisions. AI Actors can elicit feedback from
impacted individuals and communities via participatory engagement processes established
in Govern 5.1 and 5.2, and carried out in Map 1.6, 5.1, and 5.2.
Activities described in the Measure function enable AI actors to evaluate feedback from impacted individuals and communities. To increase awareness of insights, feedback can be evaluated in close collaboration with AI actors responsible for impact assessment, human-factors, and governance and oversight tasks, as well as with other socio-technical domain experts and researchers. To gain broader expertise for interpreting evaluation outcomes, organizations may consider collaborating with advocacy groups and civil society organizations.
Insights based on this type of analysis can inform TEVV -based decisions about metrics and
related courses of action.
Suggested Actions
• Support mechanisms for capturing feedback from system end users (including domain experts, operators, and practitioners). Successful approaches are:
• conducted in settings where end users are able to openly share their doubts and insights about AI system output, and in connection to their specific context of use (including setting and task-specific lines of inquiry)
• developed and implemented by human-factors and socio-technical domain experts and researchers
• designed to control for interviewer and end-user subjectivity and bias
• Identify and document approaches for evaluating and integrating elicited feedback from system end users, in collaboration with human-factors and socio-technical domain experts, to actively inform a process of continual improvement.
• Evaluate feedback from end users alongside evaluated feedback from impacted
communities (MEASURE 3.3).
• Utilize end user feedback to investigate how selected metrics and measurement
approaches interact with organizational and operational contexts.
• Analyze and document system-internal measurement processes in comparison to collected end user feedback.
• Identify and implement approaches to measure effectiveness and satisfaction with end
user elicitation techniques, and document results.
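As one way to support and document the feedback mechanisms described in the actions above, the sketch below structures each elicited comment together with its context of use, so that feedback can later be evaluated per deployment context alongside community feedback. This is a minimal illustrative sketch, not part of the Playbook; the class, field, and function names (FeedbackRecord, group_by_context) and the sample records are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FeedbackRecord:
    """One piece of elicited end-user feedback, tied to its context of use."""
    source_role: str          # e.g. "domain expert", "operator", "practitioner"
    deployment_context: str   # setting in which the system output was observed
    task: str                 # task-specific line of inquiry
    comment: str              # the doubt or insight the end user shared
    elicitation_method: str   # e.g. "interview", "survey", "workshop"
    recorded_on: date = field(default_factory=date.today)

def group_by_context(records):
    """Aggregate feedback per deployment context so it can later be
    evaluated alongside feedback from impacted communities."""
    grouped = {}
    for r in records:
        grouped.setdefault(r.deployment_context, []).append(r)
    return grouped

records = [
    FeedbackRecord("operator", "triage desk", "prioritisation",
                   "scores drift during night shifts", "interview"),
    FeedbackRecord("domain expert", "triage desk", "prioritisation",
                   "output labels are unclear", "survey"),
    FeedbackRecord("practitioner", "field unit", "routing",
                   "output ignored under time pressure", "workshop"),
]
by_context = group_by_context(records)
print({ctx: len(rs) for ctx, rs in by_context.items()})
# → {'triage desk': 2, 'field unit': 1}
```

Keeping the deployment context and elicitation method on every record is what allows the documented feedback to be compared against evaluated community feedback (MEASURE 3.3) rather than pooled indiscriminately.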
Transparency & Documentation
Organizations can document the following:
• Did your organization address usability problems and test whether user interfaces
served their intended purposes?
• How will user and peer engagement be integrated into the model development process
and periodic performance review once deployed?
• To what extent can users or parties affected by the outputs of the AI system test the AI
system and provide feedback?
• To what extent are the established procedures effective in mitigating bias, inequity, and
other concerns resulting from the system?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
• WEF Companion to the Model AI Governance Framework – Implementation and Self-Assessment Guide for Organizations.
References
Batya Friedman and David G. Hendry. Value Sensitive Design: Shaping Technology with Moral Imagination. Cambridge, MA: The MIT Press, 2019.
Batya Friedman, David G. Hendry, and Alan Borning. “A Survey of Value Sensitive Design Methods.” Foundations and Trends in Human-Computer Interaction 11, no. 2 (November 22, 2017): 63–125.
Steven Umbrello and Ibo van de Poel. “Mapping Value Sensitive Design onto AI for Social Good Principles.” AI and Ethics 1, no. 3 (February 1, 2021): 283–96.
Karen Boyd. “Designing Up with Value-Sensitive Design: Building a Field Guide for Ethical ML Development.” FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency, June 20, 2022, 2069–82.
Janet Davis and Lisa P. Nathan. “Value Sensitive Design: Applications, Adaptations, and Critiques.” In Handbook of Ethics, Values, and Technological Design, edited by Jeroen van den Hoven, Pieter E. Vermaas, and Ibo van de Poel, January 1, 2015, 11–40.
Ben Shneiderman. Human-Centered AI. Oxford: Oxford University Press, 2022.
Ben Shneiderman. “Human-Centered AI.” Issues in Science and Technology 37, no. 2 (2021): 56–61.
Ben Shneiderman. “Tutorial: Human-Centered AI: Reliable, Safe and Trustworthy.” IUI '21 Companion: 26th International Conference on Intelligent User Interfaces - Companion, April 14, 2021, 7–8.
George Margetis, Stavroula Ntoa, Margherita Antona, and Constantine Stephanidis. “Human-Centered Design of Artificial Intelligence.” In Handbook of Human Factors and Ergonomics, edited by Gavriel Salvendy and Waldemar Karwowski, 5th ed., 1085–1106. John Wiley & Sons, 2021.
Caitlin Thompson. “Who's Homeless Enough for Housing? In San Francisco, an Algorithm Decides.” Coda, September 21, 2021.
John Zerilli, Alistair Knott, James Maclaurin, and Colin Gavaghan. “Algorithmic Decision-Making and the Control Problem.” Minds and Machines 29, no. 4 (December 11, 2019): 555–78.
Hannah Fry. Hello World: Being Human in the Age of Algorithms. New York: W.W. Norton & Company, 2018.
Sasha Costanza-Chock. Design Justice: Community-Led Practices to Build the Worlds We Need. Cambridge: The MIT Press, 2020.
David G. Robinson. Voices in the Code: A Story About People, Their Values, and the Algorithm They Made. New York: Russell Sage Foundation, 2022.
Diane Hart, Gabi Diercks-O'Brien, and Adrian Powell. “Exploring Stakeholder Engagement in Impact Evaluation Planning in Educational Development Work.” Evaluation 15, no. 3 (2009): 285–306.
Asit Bhattacharyya and Lorne Cummings. “Measuring Corporate Environmental Performance – Stakeholder Engagement Evaluation.” Business Strategy and the Environment 24, no. 5 (2013): 309–25.
Sharief Hendricks, Nailah Conrad, Tania S. Douglas, and Tinashe Mutsvangwa. “A Modified Stakeholder Participation Assessment Framework for Design Thinking in Health Innovation.” Healthcare 6, no. 3 (September 2018): 191–96.
Fernando Delgado, Stephen Yang, Michael Madaio, and Qian Yang. "Stakeholder Participation in AI: Beyond 'Add Diverse Stakeholders and Stir.'" arXiv preprint, submitted November 1, 2021.
Emanuel Moss, Elizabeth Watkins, Ranjit Singh, Madeleine Clare Elish, and Jacob Metcalf. “Assembling Accountability: Algorithmic Impact Assessment for the Public Interest.” SSRN, July 8, 2021.
Alexandra Reeve Givens and Meredith Ringel Morris. “Centering Disability Perspectives in Algorithmic Fairness, Accountability, & Transparency.” FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 27, 2020, 684–84.
MEASURE 4.2
Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are informed by input from domain experts and other relevant AI actors to validate whether the system is performing consistently as intended. Results are documented.
About
Feedback captured from relevant AI Actors can be evaluated in combination with output from Measure 2.5 to 2.11 to determine if the AI system is performing within pre-defined operational limits for validity and reliability, safety, security and resilience, privacy, bias and fairness, explainability and interpretability, and transparency and accountability. This
feedback provides an additional layer of insight about AI system performance, including
potential misuse or reuse outside of intended settings.
Insights based on this type of analysis can inform TEVV -based decisions about metrics and
related courses of action.
Suggested Actions
• Integrate feedback from end users, operators, and affected individuals and communities from the Map function as inputs to assess AI system trustworthiness characteristics. Ensure that both positive and negative feedback is assessed.
• Evaluate feedback in connection with AI system trustworthiness characteristics from Measure 2.5 to 2.11.
• Evaluate feedback regarding end user satisfaction with, and confidence in, AI system
performance including whether output is considered valid and reliable, and explainable
and interpretable.
• Identify mechanisms to confirm/support AI system output (e.g. recommendations), and
end user perspectives about that output.
• Measure frequency of AI systems’ override decisions, evaluate and document results,
and feed insights back into continual improvement processes.
• Consult AI actors in impact assessment, human factors and socio -technical tasks to
assist with analysis and interpretation of results.
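One of the actions above — measuring the frequency of override decisions and feeding results into continual improvement — can be sketched in a few lines. This is a minimal illustration, not part of the Playbook; the function names (override_rate, within_limits) and the 15% operational limit are hypothetical choices.

```python
def override_rate(decisions):
    """Fraction of AI recommendations that a human operator overrode.
    `decisions` is a list of dicts with a boolean 'overridden' flag."""
    if not decisions:
        return 0.0
    return sum(1 for d in decisions if d["overridden"]) / len(decisions)

def within_limits(rate, max_rate):
    """Flag when overrides exceed a pre-defined operational limit,
    prompting review against the trustworthiness characteristics."""
    return rate <= max_rate

# Hypothetical decision log: 18 accepted recommendations, 2 overrides.
log = [{"overridden": False}] * 18 + [{"overridden": True}] * 2
rate = override_rate(log)
print(f"override rate: {rate:.0%}, within limit: {within_limits(rate, 0.15)}")
# → override rate: 10%, within limit: True
```

A rising override rate on its own does not say why operators distrust the output; as the actions note, interpreting the trend requires consulting AI actors in impact assessment, human-factors, and socio-technical tasks.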
Transparency & Documentation
Organizations can document the following:
• To what extent does the system/entity consistently measure progress towards stated
goals and objectives?
• What policies has the entity developed to ensure the use of the AI system is consistent
with its stated values and principles?
• To what extent are the model outputs consistent with the entity’s values and principles
to foster public trust and equity?
• Given the purpose of the AI, what level of explainability or interpretability is required
for how the AI made its determination?
• To what extent can users or parties affected by the outputs of the AI system test the AI
system and provide feedback?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
References
Batya Friedman and David G. Hendry. Value Sensitive Design: Shaping Technology with Moral Imagination. Cambridge, MA: The MIT Press, 2019.
Batya Friedman, David G. Hendry, and Alan Borning. “A Survey of Value Sensitive Design Methods.” Foundations and Trends in Human-Computer Interaction 11, no. 2 (November 22, 2017): 63–125.
Steven Umbrello and Ibo van de Poel. “Mapping Value Sensitive Design onto AI for Social Good Principles.” AI and Ethics 1, no. 3 (February 1, 2021): 283–96.
Karen Boyd. “Designing Up with Value-Sensitive Design: Building a Field Guide for Ethical ML Development.” FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency, June 20, 2022, 2069–82.
Janet Davis and Lisa P. Nathan. “Value Sensitive Design: Applications, Adaptations, and Critiques.” In Handbook of Ethics, Values, and Technological Design, edited by Jeroen van den Hoven, Pieter E. Vermaas, and Ibo van de Poel, January 1, 2015, 11–40.
Ben Shneiderman. Human-Centered AI. Oxford: Oxford University Press, 2022.
Ben Shneiderman. “Human-Centered AI.” Issues in Science and Technology 37, no. 2 (2021): 56–61.
Ben Shneiderman. “Tutorial: Human-Centered AI: Reliable, Safe and Trustworthy.” IUI '21 Companion: 26th International Conference on Intelligent User Interfaces - Companion, April 14, 2021, 7–8.
George Margetis, Stavroula Ntoa, Margherita Antona, and Constantine Stephanidis. “Human-Centered Design of Artificial Intelligence.” In Handbook of Human Factors and Ergonomics, edited by Gavriel Salvendy and Waldemar Karwowski, 5th ed., 1085–1106. John Wiley & Sons, 2021.
Caitlin Thompson. “Who's Homeless Enough for Housing? In San Francisco, an Algorithm Decides.” Coda, September 21, 2021.
John Zerilli, Alistair Knott, James Maclaurin, and Colin Gavaghan. “Algorithmic Decision-Making and the Control Problem.” Minds and Machines 29, no. 4 (December 11, 2019): 555–78.
Hannah Fry. Hello World: Being Human in the Age of Algorithms. New York: W.W. Norton & Company, 2018.
Sasha Costanza-Chock. Design Justice: Community-Led Practices to Build the Worlds We Need. Cambridge: The MIT Press, 2020.
David G. Robinson. Voices in the Code: A Story About People, Their Values, and the Algorithm They Made. New York: Russell Sage Foundation, 2022.
Diane Hart, Gabi Diercks-O'Brien, and Adrian Powell. “Exploring Stakeholder Engagement in Impact Evaluation Planning in Educational Development Work.” Evaluation 15, no. 3 (2009): 285–306.
Asit Bhattacharyya and Lorne Cummings. “Measuring Corporate Environmental Performance – Stakeholder Engagement Evaluation.” Business Strategy and the Environment 24, no. 5 (2013): 309–25.
Sharief Hendricks, Nailah Conrad, Tania S. Douglas, and Tinashe Mutsvangwa. “A Modified Stakeholder Participation Assessment Framework for Design Thinking in Health Innovation.” Healthcare 6, no. 3 (September 2018): 191–96.
Fernando Delgado, Stephen Yang, Michael Madaio, and Qian Yang. "Stakeholder Participation in AI: Beyond 'Add Diverse Stakeholders and Stir.'" arXiv preprint, submitted November 1, 2021.
Emanuel Moss, Elizabeth Watkins, Ranjit Singh, Madeleine Clare Elish, and Jacob Metcalf. “Assembling Accountability: Algorithmic Impact Assessment for the Public Interest.” SSRN, July 8, 2021.
Alexandra Reeve Givens and Meredith Ringel Morris. “Centering Disability Perspectives in Algorithmic Fairness, Accountability, & Transparency.” FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 27, 2020, 684–84.
MEASURE 4.3
Measurable performance improvements or declines based on consultations with relevant AI
actors including affected communities, and field data about context -relevant risks and
trustworthiness characteristics, are identified and documented.
About
TEVV activities conducted throughout the AI system lifecycle can provide baseline
quantitative measures for trustworthy characteristics. When combined with results from
Measure 2.5 to 2.11 and Measure 4.1 and 4.2, TEVV actors can maintain a comprehensive
view of system performance. These measures can be augmented through participatory
engagement with potentially impacted communities or other forms of stakeholder
elicitation about AI systems’ impacts. These sources of information can allow AI actors to
explore potential adjustments to system components, adapt operating conditions, or
institute performance improvements.
Suggested Actions
• Develop baseline quantitative measures for trustworthy characteristics.
• Delimit and characterize baseline operation values and states.
• Utilize qualitative approaches to augment and complement quantitative baseline measures, in close coordination with impact assessment, human-factors, and socio-technical AI actors.
• Monitor and assess measurements as part of continual improvement to identify potential system adjustments or modifications.
• Perform and document sensitivity analysis to characterize actual and expected variance
in performance after applying system or procedural updates.
• Document decisions related to the sensitivity analysis and record expected influence on
system performance and identified risks.
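The baseline and sensitivity-analysis actions above can be sketched as follows: establish a baseline operating value and variance for a metric, then flag post-update shifts that exceed an expected range. This is a minimal illustration under assumed inputs, not a prescribed method; the function names (baseline, sensitivity), the sample accuracy values, and the two-standard-deviation tolerance are hypothetical choices.

```python
from statistics import mean, stdev

def baseline(values):
    """Baseline operating value and observed variance for one metric."""
    return {"mean": mean(values), "stdev": stdev(values)}

def sensitivity(before, after, tolerance_sds=2.0):
    """Compare post-update performance to the pre-update baseline and
    flag a shift larger than `tolerance_sds` standard deviations."""
    b = baseline(before)
    shift = abs(mean(after) - b["mean"])
    return {"shift": shift, "flagged": shift > tolerance_sds * b["stdev"]}

# Hypothetical accuracy measurements before and after a system update.
pre = [0.91, 0.90, 0.92, 0.89, 0.91]
post = [0.84, 0.85, 0.83]
result = sensitivity(pre, post)
print(result["flagged"])  # → True
```

A flagged shift is a prompt for documentation and review, not a verdict: as the actions note, the expected influence on system performance and identified risks should be recorded alongside the analysis.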
Transparency & Documentation
Organizations can document the following:
• To what extent are the model outputs consistent with the entity’s values and principles
to foster public trust and equity?
• How were sensitive variables (e.g., demographic and socioeconomic categories) that may be subject to regulatory compliance specifically selected or not selected for modeling purposes?
• Did your organization implement a risk management system to address risks involved
in deploying the identified AI solution (e.g. personnel risk or changes to commercial
objectives)?
• How will the accountable human(s) address changes in accuracy and precision due to
either an adversary’s attempts to disrupt the AI or unrelated changes in the
operational/business environment?
• How will user and peer engagement be integrated into the model development process
and periodic performance review once deployed?
AI Transparency Resources
• GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities.
• Artificial Intelligence Ethics Framework For The Intelligence Community.
References
Batya Friedman and David G. Hendry. Value Sensitive Design: Shaping Technology with Moral Imagination. Cambridge, MA: The MIT Press, 2019.
Batya Friedman, David G. Hendry, and Alan Borning. “A Survey of Value Sensitive Design Methods.” Foundations and Trends in Human-Computer Interaction 11, no. 2 (November 22, 2017): 63–125.
Steven Umbrello and Ibo van de Poel. “Mapping Value Sensitive Design onto AI for Social Good Principles.” AI and Ethics 1, no. 3 (February 1, 2021): 283–96.
Karen Boyd. “Designing Up with Value-Sensitive Design: Building a Field Guide for Ethical ML Development.” FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency, June 20, 2022, 2069–82.
Janet Davis and Lisa P. Nathan. “Value Sensitive Design: Applications, Adaptations, and Critiques.” In Handbook of Ethics, Values, and Technological Design, edited by Jeroen van den Hoven, Pieter E. Vermaas, and Ibo van de Poel, January 1, 2015, 11–40.
Ben Shneiderman. Human-Centered AI. Oxford: Oxford University Press, 2022.
Ben Shneiderman. “Human-Centered AI.” Issues in Science and Technology 37, no. 2 (2021): 56–61.
Ben Shneiderman. “Tutorial: Human-Centered AI: Reliable, Safe and Trustworthy.” IUI '21 Companion: 26th International Conference on Intelligent User Interfaces - Companion, April 14, 2021, 7–8.
George Margetis, Stavroula Ntoa, Margherita Antona, and Constantine Stephanidis. “Human-Centered Design of Artificial Intelligence.” In Handbook of Human Factors and Ergonomics, edited by Gavriel Salvendy and Waldemar Karwowski, 5th ed., 1085–1106. John Wiley & Sons, 2021.
Caitlin Thompson. “Who's Homeless Enough for Housing? In San Francisco, an Algorithm Decides.” Coda, September 21, 2021.
John Zerilli, Alistair Knott, James Maclaurin, and Colin Gavaghan. “Algorithmic Decision-Making and the Control Problem.” Minds and Machines 29, no. 4 (December 11, 2019): 555–78.
Hannah Fry. Hello World: Being Human in the Age of Algorithms. New York: W.W. Norton & Company, 2018.
Sasha Costanza-Chock. Design Justice: Community-Led Practices to Build the Worlds We Need. Cambridge: The MIT Press, 2020.
David G. Robinson. Voices in the Code: A Story About People, Their Values, and the Algorithm They Made. New York: Russell Sage Foundation, 2022.
Diane Hart, Gabi Diercks-O'Brien, and Adrian Powell. “Exploring Stakeholder Engagement in Impact Evaluation Planning in Educational Development Work.” Evaluation 15, no. 3 (2009): 285–306.
Asit Bhattacharyya and Lorne Cummings. “Measuring Corporate Environmental Performance – Stakeholder Engagement Evaluation.” Business Strategy and the Environment 24, no. 5 (2013): 309–25.
Sharief Hendricks, Nailah Conrad, Tania S. Douglas, and Tinashe Mutsvangwa. “A Modified Stakeholder Participation Assessment Framework for Design Thinking in Health Innovation.” Healthcare 6, no. 3 (September 2018): 191–96.
Fernando Delgado, Stephen Yang, Michael Madaio, and Qian Yang. "Stakeholder Participation in AI: Beyond 'Add Diverse Stakeholders and Stir.'" arXiv preprint, submitted November 1, 2021.
Emanuel Moss, Elizabeth Watkins, Ranjit Singh, Madeleine Clare Elish, and Jacob Metcalf. “Assembling Accountability: Algorithmic Impact Assessment for the Public Interest.” SSRN, July 8, 2021.
Alexandra Reeve Givens and Meredith Ringel Morris. “Centering Disability Perspectives in Algorithmic Fairness, Accountability, & Transparency.” FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 27, 2020, 684–84.
uk.pdf
National AI Strategy

Version 1.2

Presented to Parliament
by the Secretary of State for Digital, Culture, Media and Sport
by Command of Her Majesty
September 2021

Command Paper 525
Contents

Our ten-year plan to make Britain a global AI superpower 4
Executive summary 7
Summary of key actions 8
Introduction 10
10 Year Vision 11
The UK’s National AI Strategy 14
AI presents unique opportunities and challenges 16
Reflecting and protecting society 16
The longer term 17
From Sector Deal to AI Strategy 18
Pillar 1: Investing in the long-term needs of the AI ecosystem 22
Skills and Talent 22
A new approach to research, development and innovation in AI 28
International collaboration on Research & Innovation 30
Access to data 30
Data Foundations and Use in AI Systems 31
Public sector data 32
Compute 33
Finance and VC 35
Trade 36
Commercialisation 40
Pillar 2: Ensuring AI benefits all sectors and regions 40
AI deployment – understanding new dynamics 41
Creating and protecting Intellectual Property 42
Using AI for the public benefit 42
Missions 44
Net Zero 45
Health 46
The public sector as a buyer 46
Pillar 3: Governing AI effectively 50
Supporting innovation and adoption while protecting the public and building trust 51
Alternative options 54
Regulators’ coordination and capacity 54
International governance and collaboration 55
AI and global digital technical standards 56
AI Assurance 58
Public sector as an exemplar 59
AI risk, safety, and long-term development 60
Next steps 62
Our ten-year plan to make Britain a global AI superpower
Overthenexttenyears,theimpactof
AIonbusinessesacrosstheUKand
thewiderworldwillbeprofound-
andUKuniversitiesandstartupsare
alreadyleadingtheworldinbuilding
thetoolsfortheneweconomy.
New discoveries and methods for
harnessing the capacity of machines to
learn, aid and assist us in new ways
emerge every day from our universities
and businesses.
AI gives us new opportunities to grow
and transform businesses of all sizes,
and capture the benefits of innovation
right across the UK.
As we build back better from the
challenges of the global pandemic, and
prepare for new challenges ahead, we
are presented with the opportunity to
supercharge our already admirable
starting position on AI and to make these
technologies central to our development
as a global science and innovation
superpower.
With the help of our thriving AI
ecosystem and world leading R&D
system, this National AI Strategy will
translate the tremendous potential of AI
into better growth, prosperity and social
benefits for the UK, and to lead the
charge in applying AI to the greatest
challenges of the 21st Century.

This is the age of artificial intelligence. Whether we know it or not, we all interact with AI every day - whether it’s in our social media feeds and smart speakers, or on our online banking. AI, and the data that fuels our algorithms, help protect us from fraud and diagnose serious illness. And this technology is evolving every day.
We’ve got to make sure we keep up with
the pace of change. The UK is already a
world leader in AI, as the home of
trailblazing pioneers like Alan Turing and
Ada Lovelace and with our strong history
of research excellence. This Strategy
outlines our vision for how the UK can
maintain and build on its position as
other countries also race to deliver their
own economic and technological
transformations.
The challenge now for the UK is to fully
unlock the power of AI and data-driven
technologies, to build on our early
leadership and legacy, and to look
forward to the opportunities of this
coming decade.
This National AI Strategy will signal to the
world our intention to build the most
pro-innovation regulatory environment
in the world; to drive prosperity across
the UK and ensure everyone can benefit
from AI; and to apply AI to help solve
global challenges like climate change.
AI will be central to how we drive growth
and enrich lives, and the vision set out in
our strategy will help us achieve both of
those vital goals.
Kwasi Kwarteng
Secretary of State for Business, Energy and Industrial Strategy

Nadine Dorries
Secretary of State for Digital, Culture, Media and Sport
Executive summary
Artificial Intelligence (AI) is the fastest growing deep technology1 in the world, with huge potential to rewrite the rules of entire industries, drive substantial economic growth and transform all areas of life. The UK is a global superpower in AI and is well placed to lead the world over the next decade as a genuine research and innovation powerhouse, a hive of global talent and a progressive regulatory and business environment.
Many of the UK’s successes in AI were supported by the 2017 Industrial Strategy, which set out the
government’s vision to make the UK a global centre for AI innovation. In April 2018, the government
and the UK’s AI ecosystem agreed a near £1 billion AI Sector Deal to boost the UK’s global position as
a leader in developing AI technologies.
This new National AI Strategy builds on the UK’s strengths but also represents the start of a step-
change for AI in the UK, recognising the power of AI to increase resilience, productivity, growth and
innovation across the private and public sectors.
This is how we will prepare the UK for the next ten years, and is built on three assumptions about the
coming decade:
• The key drivers of progress, discovery and strategic advantage in AI are access to people, data,
compute and finance – all of which face huge global competition;
• AI will become mainstream in much of the economy and action will be required to ensure every
sector and region of the UK benefit from this transition;
• Our governance and regulatory regimes will need to keep pace with the fast-changing demands
of AI, maximising growth and competition, driving UK excellence in innovation, and protecting the
safety, security, choices and rights of our citizens.
The UK’s National AI Strategy therefore aims to:
• Invest and plan for the long-term needs of the AI ecosystem to continue our leadership as a science and AI superpower;
• Support the transition to an AI-enabled economy, capturing the benefits of innovation in the UK, and ensuring AI benefits all sectors and regions;
• Ensure the UK gets the national and international governance of AI technologies right to encourage innovation, investment, and protect the public and our fundamental values.
This will be best achieved through broad public trust and support, and by the involvement of the
diverse talents and views of society.
Summary of key actions

Short term (next 3 months):

Investing in the Long Term Needs of the AI Ecosystem
■ Publish a framework for Government's role in enabling better data availability in the wider economy
■ Consult on the role and options for a National Cyber-Physical Infrastructure Framework
■ Support the development of AI, data science and digital skills through the Department for Education’s Skills Bootcamps

Ensuring AI Benefits All Sectors and Regions
■ Begin engagement on the Draft National Strategy for AI-driven technologies in Health and Social Care, through the NHS AI Lab
■ Publish the Defence AI Strategy, through the Ministry of Defence
■ Launch a consultation on copyright and patents for AI through the IPO

Governing AI Effectively
■ Publish the CDEI AI assurance roadmap
■ Determine the role of data protection in wider AI governance following the Data: A new direction consultation
■ Publish details of the approaches the Ministry of Defence will use when adopting and using AI
■ Develop an all-of-government approach to international AI activity

Medium term (next 6 months):

Investing in the Long Term Needs of the AI Ecosystem
■ Publish research into what skills are needed to enable employees to use AI in a business setting and identify how national skills provision can meet those needs
■ Evaluate the private funding needs and challenges of AI scaleups
■ Support the National Centre for Computing Education to ensure AI programmes for schools are accessible
■ Support a broader range of people to enter AI-related jobs by ensuring career pathways highlight opportunities to work with or develop AI
■ Implement the US UK Declaration on Cooperation in AI R&D
■ Publish a review into the UK’s compute capacity needs to support AI innovation, commercialisation and deployment
■ Roll out new visa regimes to attract the world's best AI talent to the UK

Ensuring AI Benefits All Sectors and Regions
■ Publish research into opportunities to encourage diffusion of AI across the economy
■ Consider how Innovation Missions include AI capabilities and promote ambitious mission-based cooperation through bilateral and multilateral efforts
■ Extend UK aid to support local innovation in developing countries
■ Build an open repository of AI challenges with real-world applications

Governing AI Effectively
■ Publish White Paper on a pro-innovation national position on governing and regulating AI
■ Complete an in-depth analysis on algorithmic transparency, with a view to develop a cross-government standard
■ Pilot an AI Standards Hub to coordinate UK engagement in AI standardisation globally
■ Establish medium and long term horizon scanning functions to increase government’s awareness of AI safety

Long term (next 12 months and beyond):

Investing in the Long Term Needs of the AI Ecosystem
■ Undertake a review of our international and domestic approach to semiconductor supply chains
■ Consider what open and machine-readable government datasets can be published for AI models
■ Launch a new National AI Research and Innovation Programme that will align funding programmes across UKRI and support the wider ecosystem
■ Work with global partners on shared R&D challenges, leveraging Overseas Development Assistance to put AI at the heart of partnerships worldwide
■ Back diversity in AI by continuing existing interventions across top talent, PhDs, AI and Data Science Conversion Courses and Industrial Funded Masters
■ Monitor and use National Security and Investment Act to protect national security while keeping the UK open for business
■ Include trade deal provisions in emerging technologies, including AI

Ensuring AI Benefits All Sectors and Regions
■ Launch joint Office for AI / UKRI programme to stimulate the development and adoption of AI technologies in high potential, lower-AI-maturity sectors
■ Continue supporting the development of capabilities around trustworthiness, adoptability, and transparency of AI technologies through the National AI Research and Innovation Programme
■ Join up across government to identify where using AI can provide a catalytic contribution to strategic challenges

Governing AI Effectively
■ Explore with stakeholders the development of an AI technical standards engagement toolkit to support the AI ecosystem to engage in the global AI standardisation landscape
■ Work with partners in multilateral and multi-stakeholder fora, and invest in GPAI to shape and support AI governance in line with UK values and priorities
■ Work with The Alan Turing Institute to update guidance on AI ethics and safety in the public sector
■ Work with national security, defence, and leading researchers to understand what public sector actions can safely advance AI and mitigate catastrophic risks
Introduction
Artificial Intelligence technologies (AI) offer the potential to transform the UK’s economic landscape and improve people’s lives across the country, transforming industries and delivering first-class public services.
AI may be one of the most important
innovations in human history, and the
government believes it is critical to both our
economic and national security that the UK
prepares for the opportunities AI brings, and
that the country is at the forefront of solving
the complex challenges posed by an
increased use of AI.
This country has a long and exceptional
history in AI – from Alan Turing’s early work
through to DeepMind’s recent pioneering
discoveries. In terms of AI startups and
scaleups, private capital invested and
conference papers submitted, the UK sits in
the top tier of AI nations globally. The UK
ranked third in the world for private
investment into AI companies in 2020, behind
only the USA and China.
The National AI Strategy builds on the UK’s
current strengths and represents the start of
a step-change for AI in the UK, recognising
that maximising the potential of AI will
increase resilience, productivity, growth and
innovation across the private and public
sectors. Building on our strengths in AI will
take a whole-of-society effort that will span
the next decade. This is a top-level economic,
security, health and wellbeing priority. The UK
government sees being competitive in AI as
vital to our national ambitions on regional prosperity and for shared global challenges
such as net zero, health resilience and
environmental sustainability. AI capability is
therefore vital for the UK's international
influence as a global science superpower.
The National AI Strategy for the United
Kingdom will prepare the UK for the next ten
years, and is built on three assumptions
about the coming decade:
• The key drivers of progress, discovery and
strategic advantage in AI are access to
people, data, compute and finance – all of
which face huge global competition;
• AI will become mainstream in much of the
economy, and action will be required to
ensure every sector and region of the UK
benefits from this transition;
• Our governance and regulatory regimes
will need to keep pace with the fast-
changing demands of AI, maximising
growth and competition, driving UK
excellence in innovation, and protecting
the safety, security, choices and rights of
our citizens.
This document sets out the UK’s strategic
intent at a level intended to guide action over
the next ten years, recognising that AI is a
fast-moving and dynamic area. Detailed and
measurable plans for the execution of the
first stage of this strategy will be published
later this year.
Over the next decade, as transformative
technologies continue to reshape our
economy and society, the world is likely to see
a shift in the nature and distribution of global
power. We are seeing how, in the case of AI,
rapid technological change seeks to rebalance
the science and technology dominance of
existing superpowers like the US and China,
and wider transnational challenges demand
greater collective action in the face of
continued global security and prosperity.
With this in mind, the UK has an opportunity
over the next ten years to position itself as
the best place to live and work with AI; with
clear rules, applied ethical principles and a
pro-innovation regulatory environment. With
the right ingredients in place, we will be both
a genuine innovation powerhouse and the
most supportive business environment in the
world, where we cooperate on using AI for
good, advocate for international standards
that reflect our values, and defend against the
malign use of AI.
Whether it is making the decision to study AI,
work at the cutting edge of research or spin
up an AI business, our investments in skills,
data and infrastructure will make it easier
than ever to succeed. Our world-leading R&D
system will step up its support of innovators
at every step of their journey, from deep research to building and shipping products. If
you are a talented AI researcher from abroad,
coming to the UK will be easier than ever
through the array of visa routes which are
available.
If you run a business – whether it is a startup,
SME or a large corporate – the government
wants you to have access to the people,
knowledge and infrastructure you need to get
your business ahead of the transformational
change AI will bring, making the UK a globally-
competitive, AI-first economy which benefits
every region and sector.
By leading with our democratic values, the UK will
work with partners around the world to make
sure international agreements embed our ethical
values, making clear that progress in AI must be
achieved responsibly, according to democratic
norms and the rule of law.
And by increasing the number and diversity of
people working with and developing AI, by
putting clear rules of the road in place and by
investing across the entire country, we will
ensure the real-world benefits of AI are felt by
every member of society. Whether that is more
accurate AI-enabled diagnostics in the NHS, the
promise of driverless cars to make our roads
safer and smarter, or the hundreds of
unforeseen benefits that AI could bring to
improve everyday life.

The UK's National Artificial Intelligence Strategy aims to:
• Invest and plan for the long term needs of the AI ecosystem to continue our leadership as a
science and AI Superpower;
• Support the transition to an AI-enabled economy, capturing the benefits of innovation in the UK,
and ensuring AI benefits all sectors and regions;
• Ensure the UK gets the governance of AI technologies right to encourage innovation, investment,
and protect the public and our fundamental values.
This will be best achieved through broad public trust and support, and by the involvement of the diverse talents and views of society.

Introduction
The National AI Strategy does not stand alone.
It purposefully supports and amplifies the
other, interconnected work of government
including:
• The Plan for Growth and recent
Innovation Strategy, which recognise
the need to develop a diverse and
inclusive pipeline of AI professionals with
the capacity to supercharge innovation;
• The Integrated Review, to find new
paths for UK excellence in AI to deliver
prosperity and security at home and
abroad, and shape the open international
order of the future;
• The National Data Strategy, published
in September 2020, which sets out our
vision to harness the power of responsible
data use to boost productivity, create new
businesses and jobs, improve public
services, support a fairer society, and
drive scientific discovery, positioning the
UK as the forerunner of the next wave of
innovation;
• The Plan for Digital Regulation, which
sets out our pro-innovation approach to
regulating digital technologies in a way
that drives prosperity and builds trust in
their use;
• The upcoming National Cyber Strategy,
to continue the drive for securing
emerging technologies, including building
security into the development of AI;
• The forthcoming Digital Strategy, which
will build on DCMS's Ten Tech Priorities to
further set out the government's
ambitions in the digital sector;
• A new Defence AI centre as a keystone
piece of the modernisation of Defence;
• The National Security Technology
Innovation Exchange (NSTIx), a data
science & AI co-creation space that brings
together National Security stakeholders,
industry and academic partners to build
better national security capabilities; and
• The upcoming National Resilience
Strategy, which will in part focus on how
the UK will stay on top of technological
threats.
The government’s AI Council has played a
central role in gathering evidence to inform
the development of this strategy, including
through its roadmap published at the
beginning of the year, which represents a
valuable set of recommendations reflecting
much of the wider AI community in the UK.
The wider ecosystem also fed in through a
survey run by the AI Council in collaboration
with The Alan Turing Institute. The
government remains grateful to the AI Council
for its continued leadership of the AI
ecosystem, and would like to thank those
from across the United Kingdom who shared
their views during the course of developing
this strategy.

The goals of this Strategy are that the UK:
1. Experiences significant growth in both
the number and type of discoveries that
happen in the UK, and are
commercialised and exploited here;
2. Benefits from the highest economic and
productivity growth due to AI; and
3. Establishes the most trusted and pro-
innovation system for AI governance in
the world.
This vision can be achieved if we build on
three pillars fundamental to the development
of AI:
1. Investing in the needs of the ecosystem
to see more people working with AI, more
access to data and compute resources to
train and deliver AI systems, and access to
finance and customers to grow sectors;
2. Supporting the diffusion of AI across the
whole economy to ensure all regions,
nations, businesses and sectors can
benefit from AI; and
3. Developing a pro-innovation regulatory
and governance framework that protects
the public.

The AI Council
The AI Council was established in 2019 to
provide expert advice to the government and
high-level leadership of the AI ecosystem. The
AI Council demonstrates a key commitment
made in the AI Sector Deal, bringing together
respected leaders in their fields from across
industry, academia and the public sector.
Members meet quarterly to advise the Office
for AI and broader government on its current
priorities, opportunities and challenges for AI
policy.
In January 2021, the AI Council published its ‘AI
Roadmap’ providing 16 recommendations to
the government on the strategic direction for
AI. Its central call was for the government to
develop a National AI Strategy, building on the
success of investments made through the AI
Sector Deal whilst remaining adaptable to
future technological disruption. Since then, the
Council has led a programme of engagement
with the wider AI community to inform the
development of the National AI Strategy.
To guide the delivery and implementation of
this strategy the government will renew and
strengthen the role of the AI Council, ensuring
it continues to provide expert advice to
government and high-level leadership of the AI
ecosystem.
[Figure 2: The National AI Strategy. A diagram linking government activity in this strategy and over the next 10 years, under the three pillars (Pillar 1: Investing in the long term needs of the AI ecosystem; Pillar 2: Ensuring AI benefits all sectors and regions; Pillar 3: Governing AI effectively), to outcomes such as new AI scientific breakthroughs, greater workforce diversity, increased investment in UK AI companies, wider AI adoption in industries and regions, and improved public trust in AI; to impacts including growth in the UK's AI sector contributing to UK GDP growth, strong domestic AI capabilities to address national security issues, and benefits of AI adoption shared across every region and sector; and to the vision: to remain an AI and science superpower fit for the next decade.]

AI presents unique opportunities and challenges
‘Artificial Intelligence’ as a term can mean
a lot of things, and the government
recognises that no single definition is
going to be suitable for every scenario. In
general, the following definition is
sufficient for our purposes: “Machines that
perform tasks normally requiring human
intelligence, especially when the machines
learn from data how to do those tasks.”
The UK government has also set out a
legal definition of AI in the National
Security and Investment Act.2
Much like James Watt’s 1776 steam engine, AI
is a ‘general purpose technology’ (or, more
accurately, a family of technologies) with many
possible applications, and we expect these
technologies to have a transformational impact
on the whole economy. Already, AI is used in everyday
contexts like email spam filtering, media
recommendation systems, navigation apps,
payment transaction validation and
verification, and many more. AI technologies
will impact the whole economy, all of society
and us as individuals.
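The quoted working definition, machines that "learn from data how to do those tasks", can be made concrete with a toy version of one of the everyday examples above, email spam filtering. The sketch below is purely illustrative and not drawn from the Strategy: the messages, word-scoring rule and threshold are all invented for the example.

```python
# Illustrative sketch only (not from the Strategy): a toy spam filter that
# "learns from data how to do the task", per the working definition above.
# The training messages and the scoring rule are invented for illustration.
from collections import Counter

def train(examples):
    """Learn per-word spam scores from (text, is_spam) training pairs."""
    spam_words, ham_words = Counter(), Counter()
    for text, is_spam in examples:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    vocabulary = set(spam_words) | set(ham_words)
    # A word scores high if it appears more often in spam than in ham
    # (add-one smoothing keeps unseen counts from zeroing the ratio).
    return {w: (spam_words[w] + 1) / (ham_words[w] + 1) for w in vocabulary}

def looks_like_spam(model, text, threshold=1.0):
    """Average the learned scores of the message's known words."""
    scores = [model[w] for w in text.lower().split() if w in model]
    return bool(scores) and sum(scores) / len(scores) > threshold

training_data = [
    ("win a free prize now", True),
    ("claim your free money", True),
    ("meeting agenda for monday", False),
    ("lunch plans this week", False),
]
model = train(training_data)
```

Nothing in the program spells out what makes a message spam; the behaviour comes entirely from the labelled data, which is exactly the property the definition singles out.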
Many of the themes in AI policy are similar to
tech and digital policy more widely: the
commercialisation journeys; the reliance on
internationally mobile talent; the importance
of data; and consolidation of economic
functions onto platforms. However, there are
some key differences, derived from the above
definition, which differentiate AI and require
a unique policy response from the government.

• In regulatory matters, a system's
autonomy raises unique questions
around liability and fairness as well as
risk and safety – and even ownership of
creative content3 – in a way which is
distinct to AI, and these questions
increase with the relative complexity of
the algorithm. There are also questions of
transparency and bias which arise from
decisions made by AI systems.
• There are often greater infrastructural
requirements for AI services than in
cloud/Software as a Service systems. In
building and deploying some models,
access to expensive high performance
computing and/or large data sets is
needed.
• Multiple skills are required to develop,
validate and deploy AI systems, and the
commercialisation and product
journey can be longer and more
expensive because so much starts with
fundamental R&D.
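The transparency and bias questions raised above can be made concrete: one simple check that developers and regulators can run on a system's decisions is to compare the rate of favourable outcomes across groups, often called demographic parity. The sketch below is a hypothetical illustration with invented decision data, not a method prescribed by this Strategy.

```python
# Hypothetical illustration of a basic bias check on an AI system's
# decisions: compare the rate of favourable outcomes across groups
# ("demographic parity"). Groups and decisions are invented.

def selection_rates(decisions):
    """Map each group to its share of favourable outcomes.

    decisions: iterable of (group, favourable) pairs, where favourable
    is True when the system's decision benefited the person.
    """
    totals, favourable = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        favourable[group] = favourable.get(group, 0) + int(ok)
    return {g: favourable[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in favourable-outcome rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Invented example: group A is approved twice as often as group B.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
```

A large gap does not prove unfairness on its own, but it is the kind of measurable signal that transparency obligations on AI decision-making can surface.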
Reflecting and protecting society
AI makes predictions and decisions, and fulfils
tasks normally undertaken by humans. While
diverse opinions, skills, backgrounds and
experience are hugely important in designing
any service – digital or otherwise – it is
particularly important in AI because of the
executive function of the systems. As AI
increasingly becomes an enabler for
transforming the economy and our personal lives, there are at least three reasons we
should care about diversity in our AI
ecosystem:
• MORAL: As AI becomes an organising
principle which creates new opportunities
and changes the shape of industries and
the competitive dynamics across the
economy, there is a moral imperative to
ensure people from all backgrounds and
parts of the UK are able to participate and
thrive in this new AI economy.
• SOCIAL: AI systems make decisions
based on the data they have been trained
on. If that data - or the system it is
embedded in - is not representative, it
risks perpetuating or even cementing new
forms of bias in society. It is therefore
important that people from diverse
backgrounds are included in the
development and deployment of AI
systems.
• ECONOMIC: There are big economic
benefits to a diverse AI ecosystem. These
include increasing the UK’s human capital
from a diverse labour supply, creating a
wider range of AI services that stimulate
demand, and ensuring the best talent is
discovered from the most diverse talent
pool.

The longer term
Making specific predictions about the future
impact of a technology – as opposed to the
needs of those developing and using it today –
has a long history in AI. Since the 1950s various
hype cycles have given way to so-called ‘AI
winters’ as the promises made have perpetually
remained ‘about 20 years away’.
While the emergence of Artificial General
Intelligence (AGI) may seem like a science fiction
concept, concern about AI safety and non-
human-aligned systems4 is by no means
restricted to the fringes of the field.5 The
government’s first focus is on the economic and
social outcomes of autonomous and adaptive
systems that exist today. However, we take the
firm stance that it is critical to watch the
evolution of the technology, to take seriously the
possibility of AGI and ‘more general AI’, and to
actively direct the technology in a peaceful,
human-aligned direction.6
The emergence of full AGI would have a
transformational impact on almost every aspect
of life, but there are many challenges which
could be presented by AI which could emerge
much sooner than this. As a general purpose
technology AI will have economic and social
impacts comparable to the combustion engine,
the car, the computer and the internet. As each
of these has disrupted and changed the shape of
the world we live in - so too could AI, long before
any system ‘wakes up’.
The choices that are made in the here and
now to develop AI will shape the future of
humanity and the course of international
affairs. For example, whether AI is used to
enhance peace, or a cause for war; whether
AI is used to strengthen our democracies, or
embolden authoritarian regimes. As such we
have a responsibility to not only look at the
extreme risks that could be made real with
AGI, but also to consider the dual-use threats
we are already faced with today.
From Sector Deal to AI Strategy
The UK is an AI superpower, with particular
strengths in research, investment and
innovation. The UK’s academic and
commercial institutions are well known for
conducting world-leading AI research, and the
UK ranks 3rd in the world for AI publication
citations per capita.7 This research strength
was most recently demonstrated in
November 2020 when DeepMind, a UK-based
AI company, used AlphaFold to find a solution
to a 50-year-old grand challenge in biology.8
The UK has the 3rd highest number of AI
companies in the world after the US and
China. Alongside DeepMind, the UK is home
to Graphcore, a Bristol-based machine
learning semiconductor company; Darktrace,
a world-leading AI company for cybersecurity;
and BenevolentAI, a company changing the
way we treat disease. The UK also attracts
some of the best AI talent from around the
world9 – the UK was the second most likely global destination for mobile AI researchers
after the USA.
The government has invested more than £2.3
billion into Artificial Intelligence across a range
of initiatives since 2014.10 This portfolio of
investment includes, but is not limited to:
• £250 million to develop the NHS AI Lab at
NHSX to accelerate the safe adoption of
Artificial Intelligence in health and care;
• £250 million into Connected and
Autonomous Mobility (CAM) technology
through the Centre for Connected and
Autonomous Vehicles (CCAV) to develop
the future of mobility in the UK;
• 16 new AI Centres for Doctoral Training at
universities across the country, backed by
up to £100 million and delivering 1,000
new PhDs over five years;
• A new industry-funded AI Masters
programme and up to 2,500 places for AI
and data science conversion courses. This
includes up to 1,000 government-funded
scholarships;
• Investment into The Alan Turing Institute
and over £46 million to support the
Turing AI Fellowships to develop the next
generation of top AI talent;
• Over £372 million of investment into UK
AI companies through the British
Business Bank for the growing AI sector;

AlphaFold & AlphaFold2
In November 2020, London-based DeepMind announced that they had solved one of the longest
running modern challenges in biology: predicting how proteins - the building blocks of life which
underpin every biological process in every living thing - take shape, or ‘fold’.
AlphaFold, DeepMind’s deep learning AI system, broke all previous accuracy levels dating back over
50 years, and in July 2021 the organisation open sourced the code for AlphaFold together with over
350,000 protein structure predictions, including the entire human proteome, via the AlphaFold
database in partnership with EMBL-EBI.
DeepMind’s decision to share this knowledge openly with the world demonstrates both the
opportunity that AI presents, as well as what this strategy seeks to support: bleeding-edge research
happening in the UK and with partners around the world, solving big global challenges.
AlphaFold opens up a multitude of new avenues in research – helping to further our understanding
of biology and the nature of the world around us. It also has a multitude of potential real-world
applications, such as deepening our understanding of how bacteria and viruses attack the body in
order to develop more effective prevention and treatment, or support the identification of proteins
and enzymes that can break down industrial or plastic waste.
• £172 million of investment through the
UKRI into the Hartree National Centre for
Digital Innovation, leveraging an additional
£38 million of private investment into
High Performance Computing.

Further investments have been made into the
Tech Nation Applied AI programme – now in
its third iteration; establishing the Office for
National Statistics Data Science Campus; the
Crown Commercial Service's public sector AI
procurement portal; and support for the
Department for International Trade attracting
AI related Foreign Direct Investment into the
UK.

As part of the AI Sector Deal, the government
established the AI Council to bring together
respected leaders to strengthen the
conversation between academia, industry,
and the public sector. The Office for Artificial
Intelligence was created as a new team within
government to take responsibility for
overarching AI policy across government and
to be a focal point for the AI ecosystem
through its secretariat of the AI Council. The
Centre for Data Ethics and Innovation (CDEI)
was established as a government expert body
focused on the trustworthy use of data and AI
in the public and private sector.

This strategy builds on the recent history of
government support for AI and considers the
next key steps to harness its potential in the
UK for the coming decade. In doing so, the
National AI Strategy leads on from the
ambitions outlined in the government's
Innovation Strategy to enable UK businesses
and innovators to respond to economic
opportunities and real-world problems
through our national innovation prowess. AI
was identified in the Innovation Strategy as
one of the seven technology families where
the UK has a globally competitive R&D and
industrial strength11 and has been widely
cited as a set of technologies in which the UK
must maintain a leading edge to guarantee
our continued security and prosperity in an
intensifying geopolitical landscape.

Increasing diversity and closing the skills gap through postgraduate conversion courses in data science and artificial intelligence
As a result of the growing skills gap in AI and data science, 2,500 new places on Masters conversion
courses in AI and data science are now being delivered across universities in England. The conversion course
programme included up to 1,000 scholarships to increase the number of people from
underrepresented groups and to encourage graduates from diverse backgrounds to consider a
future in AI and Data Science.
In the first year over 1,200 students enrolled, with 22% awarded scholarships. Over 40% of the total
students are women, one quarter are black students and 15% of students are disabled. 70% of the
total students are studying on courses based outside of London and the South East.
These conversion courses are providing the opportunity to develop new digital skills or retrain to
help find new employment in the UK’s cutting-edge AI and data science sectors, ensuring that
industry and the public sector can access the greatest supply of talent across the whole country.

Government's aim is to greatly increase
the type, frequency and scale of AI
discoveries which are developed and
exploited in the UK.
This will be achieved by:
• Making sure the UK’s research,
development and innovation system
continues to be world leading, providing
the support to allow researchers and
entrepreneurs to forge new frontiers in
AI;
• Guaranteeing that the UK has access to a
diverse range of people with the skills
needed to develop the AI of the future
and to deploy it to meet the demands of
the new economy;
• Ensuring innovators have access to the
data and computing resources necessary
to develop and deliver the systems that
will drive the UK economy for the next
decade;
• Supporting growth for AI through a pro-
innovation business environment and
capital market, and attracting the best
people and firms to set up shop in the
UK;
• Ensuring UK AI developers can access
markets around the world.

Pillar 1: Investing in and planning for the long term needs of the AI ecosystem to remain a science and AI superpower
To maintain the UK’s position amongst the
global AI superpowers and ensure the UK
continues to lead in the research,
development, commercialisation and
deployment of AI, we need to invest in, plan
for, secure and unlock the critical inputs that
underpin AI innovation.
Skills and Talent
Continuing to develop, attract and train
the best people to build and use AI is at
the core of maintaining the UK's world-leading
position. By inspiring all with the
possibilities AI presents, the UK will
continue to develop the brightest, most
diverse workforce.
Building a tech-savvy nation by supporting
skills for the future is one of the government’s
ten tech priorities. The gap between demand
and supply of AI skills remains significant and
growing,12,13 despite a number of new AI skills
initiatives since the 2018 AI Sector Deal. In
order to meet demand, the UK needs a larger
workforce with AI expertise. Last year there
was a 16% increase in online AI and Data
Science job vacancies, and research found
that 69% of vacancies were hard to fill.14 Data
from an ecosystem survey conducted by the
AI Council and The Alan Turing Institute
showed that 81% of respondents agreed
there were significant barriers in recruiting
and retaining top AI talent in their domain
within the UK.
Research into the AI Labour Market showed
that technical AI skill gaps are a concern for
many firms, with 35% of firms revealing that a
lack of technical AI skills from existing
employees had prevented them from meeting
their business goals, and 49% saying that a
lack of required AI skills from job applicants
also affected their business outcomes.15 To
support the adoption of AI we need to ensure
that non-technical employees understand the
opportunities, limitations and ethics of using
AI in a business setting, rather than these
being the exclusive domain of technical
practitioners.

We need to inspire a diverse set of people
across the UK to ensure the AI that is built
and used in the UK reflects the needs and
make-up of society. To close the skills gap, the
government will focus on three areas to
attract and train the best people: those who
build AI, those who use AI, and those we
want to be inspired by AI.

Build: Train and attract the brightest and best
people at developing AI

To meet the demand seen in industry and
academia, the government will continue
supporting existing interventions across top
talent, PhDs and Masters levels. This includes
Turing Fellowships, Centres for Doctoral
Training and Postgraduate Industrial-Funded
Masters and AI Conversion Courses.

Government will seek to build upon the £46
million Turing AI Fellowships investment to
attract, recruit, and retain a substantial cohort
of leading researchers and innovators at all
career stages. Our approach will enable
Fellows to work flexibly between academia
and other sectors, creating an environment
for them to discover and develop cutting edge
AI technologies and drive the use of AI to
address societal, economic and
environmental challenges in the UK. We note
that, recently, research breakthroughs in the
field of AI have been disproportionately driven
by a small number of luminary talents and
their trainees. In line with the Innovation
Strategy, the government affirms our
commitment to empowering distinguished
academics.

Research16 and industry engagement has
demonstrated the need for graduates with
business experience, indicating a need to
continue supporting industry/academic
partnerships to ensure graduates leave
education with business-ready experience.
Our particular focus will be on software
engineers, data scientists, data engineers,
machine learning engineers and scientists,
product managers, and related roles.

We recognise that global AI talent is scarce,
and the topic of fierce competition
internationally. As announced in the
Innovation Strategy, the government is
revitalising and introducing new visa routes
that encourage innovators and entrepreneurs
to the UK. Support for diverse and inclusive
researchers and innovators across sectors,
and new environments for collaboratively
developing AI, will be key to ensuring the UK's
success in developing AI and investing in the
long term health of our AI ecosystem.

Use: Empower employers and employees to
upskill and understand the opportunities for
using AI in a business setting

The AI Council ecosystem survey found that
only 18% agreed there was sufficient
provision of training and development in AI
skills available to the current UK workforce. As
the possibilities to develop and use AI grow,
so will people's need to understand and apply
AI in their jobs. This will range from people
working adjacent to the technical aspects
such as product managers and compliance,
through to those who are applying AI within
their business, such as in advertising and HR.

Below degree level, there is a need to clearly
articulate the skills employers and employees
need to use AI effectively in the workplace.
For example, industries have expressed their
willingness to fund employees to undertake
training but have not found training that suits
their needs: including training that is
business-focused, modular and flexible.

'Understanding the UK AI Labour Market' research
In 2021, the Office for AI published research
to investigate Artificial Intelligence and Data
science skills in the UK labour market in 2020.
Some key findings from the research:
• Half of surveyed firms’ business plans had
been impacted by a lack of suitable
candidates with the appropriate AI
knowledge and skills.
• Two thirds of firms (67%) expected that
the demand for AI skills in their
organisation was likely to increase in the
next 12 months.
• Diversity in the AI sector was generally
low. Over half of firms (53%) said none of
their AI employees were female, and 40%
said none were from ethnic minority
backgrounds.
• There were over 110,000 UK job
vacancies in 2020 for AI and Data Science
roles.
The findings from this research will help the
Office for AI address the AI skills challenge
and ensure UK businesses can take
advantage of the potential of AI and Data
Science.
SkillsforJobsWhitePaper
The Skills for Jobs: Lifelong Learning for
Opportunity and Growth White Paper was
published in January 2021 and is focused on
giving people the skills they need, in a way
that suits them, so they can get great jobs in
sectors the economy needs and boost the
country’s productivity.
These reforms aim to ensure that people can
access training and learning flexibly
throughout their lives and that they are well-
informed about what is on offer, including
opportunities in valuable growth sectors. This
will also involve reconfiguring the skills system
to give employers a leading role in delivering
the reforms and influencing the system to
generate the skills they need to grow.
To more effectively use AI in a business
setting, employees, including those who
would not have traditionally engaged with AI,
will require a clear articulation of the different
skills required, so they can identify what
training already exists and understand if there
is still a gap.
Using the Skills Value Chain approach piloted
by the Department for Education,17 the
government will help industry and providers
to identify what skills are needed. Lessons
learned from this pilot will support this work
to help businesses adopt the skills needed to
get the best from AI. The Office for AI will then
work with the Department for Education to explore how these needs can be met and
mainstreamed through national skills
provision.
The government will also support people to
develop skills in AI, machine learning, data
science and digital through the Department
for Education’s Skills Bootcamps. The
Bootcamps are free, flexible courses of up to
16 weeks, giving adults aged 19 and over the
opportunity to build up in-demand, sector-
specific skills and fast-track to an interview
with a local employer; improving their job
prospects and supporting the economy.
Inspire: Support all to be excited by the
possibilities of AI
The AI Council’s Roadmap makes clear that
inspiring those who are not currently using AI,
and allowing children to explore and be
amazed by the potential of AI, will be integral
to ensuring we continue to have a growing
and diverse AI-literate workforce.
Through supporting the National Centre for
Computing Education (NCCE) the government
will continue to ensure programmes that
engage children with AI concepts are
accessible and reach the widest demographic.
The Office for AI will also work with the
Department for Education to ensure career
pathways for those working with or
developing AI are clearly articulated on career
guidance platforms, including the National
Careers Service, demonstrating role models
and opportunities to those exploring AI. This
will support a broader range of people to

Attracting the best AI talent from around the world
The UK is already the top global destination for AI graduates outside the United States, and we punch above our
weight globally in attracting talent. The UK nearly leads the world in its proportion of top-skilled AI researchers.
Government wants to take this to the next level and make the UK the global home for AI researchers,
entrepreneurs, businesses and investors.
As well as ensuring the UK produces the next generation of AI talent we need, the government is broadening the
routes that talented AI researchers and individuals can work in the UK, through the recently announced
Innovation Strategy.
• The Global Talent visa route is open to those who are leaders or potential leaders in AI - and those who
have won prestigious global prizes automatically qualify. Government is currently looking at how to broaden
this list of prizes.
• A new High Potential Individual route will make it as simple as possible for internationally mobile
individuals who demonstrate high potential to come to the UK. Eligibility will be open to applicants who have
graduated from a top global university, with no job offer requirement. This gives individuals the flexibility to
work, switch jobs or employers – keeping pace with the UK’s fast-moving AI sector.
• A new scale-up route will support UK scale-ups by allowing talented individuals with a high-skilled job offer
from a qualifying scale-up at the required salary level to come to the UK. Scale-ups will be able to apply
through a fast-track verification process to use the route, so long as they can demonstrate an annual
average revenue or employment growth rate over a three-year period greater than 20%, and a minimum of
10 employees at the start of the three-year period.
• A revitalised Innovator route will allow talented innovators and entrepreneurs from overseas to start and
operate a business in the UK that is venture-backed or harnesses innovative technologies, creating jobs for
UK workers and boosting growth. We have reviewed the Innovator route to make it even more open to:
• Simplifying and streamlining the business eligibility criteria. Applicants will need to demonstrate that their
business venture has a high potential to grow and add value to the UK and is innovative.
• Fast-tracking applications. The UK government is exploring a fast-track, lighter touch endorsement
process for applicants whose business ideas are particularly advanced to match the best-in-class
international offers. Applicants that have been accepted on to the government’s Global Entrepreneur
Programme will be automatically eligible.
• Building flexibility. Applicants will no longer be required to have at least £50,000 in investment funds to
apply for an Innovator visa, provided that the endorsing body is satisfied the applicant has sufficient
funds to grow their business. We will also remove the restriction on doing work outside of the
applicant’s primary business.
• The new Global Business Mobility visa will also allow overseas AI businesses greater flexibility in
transferring workers to the UK, in order to establish and expand their business here.
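As a worked illustration of the scale-up route's growth criterion above, the sketch below uses hypothetical figures and assumes that "annual average growth rate over a three-year period" means the mean of the year-on-year growth rates, which is one plausible reading of the test:

```python
def meets_scaleup_criteria(yearly_revenue, employees_at_start):
    """Illustrative check of the scale-up eligibility test: mean
    year-on-year revenue growth over a three-year period above 20%,
    and at least 10 employees at the start of the period.

    yearly_revenue: four annual figures [year0, year1, year2, year3].
    """
    growth_rates = [
        (later - earlier) / earlier
        for earlier, later in zip(yearly_revenue, yearly_revenue[1:])
    ]
    average_growth = sum(growth_rates) / len(growth_rates)
    return average_growth > 0.20 and employees_at_start >= 10

# Hypothetical figures: revenue grows 1.0 -> 1.3 -> 1.6 -> 2.1 (in £m).
print(meets_scaleup_criteria([1.0, 1.3, 1.6, 2.1], employees_at_start=12))  # True
```

A firm growing only a few percent a year, or starting the period with fewer than 10 employees, would fail the same check.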
These reforms will sit alongside the UK government's Global Entrepreneur Programme (GEP) which has a
track record of success in attracting high skilled migrant tech founders with IP-rich businesses to the UK. The
programme will focus on attracting more international talent to support the growth of technology clusters
including through working with academic institutions from overseas to access innovative spinouts and overseas
talent.
Through the Graduate Route we are also granting international students with UK degrees 2 years (3 years for
those with PhDs) to work in the UK post-graduation. This will help ensure that we can attract the best and
brightest from across the world while also giving students time to work on the most challenging AI problems.
These are all in addition to our existing skills visa schemes for those with UK job offers.
National AI Strategy
consider careers in AI. The government will
ensure that leaders within the National AI
Research and Innovation Programme will play
a key role in engaging with the public and
inspiring the leaders of the future.
A new approach to research, development and innovation in AI
Our vision is that the UK builds on our
excellence in research and innovation in the
next generation of AI technologies.
The UK has been a leader in AI research since
it developed as a field, thanks to our
strengths in computational and mathematical
sciences.18 The UK's AI base has been built
upon this foundation,19 and the recently
announced Advanced Research and Invention
Agency (ARIA) will complement our efforts to
cement our status as a global science
superpower. The UK also has globally
recognised institutes such as The Alan Turing
Institute and the high-performing universities
which are core to research in AI.20
Currently, AI research undertaken in the UK is
world class, and investments in AI R&D
contribute to the Government’s target of
increasing overall public and private sector
R&D expenditure to 2.4% of GDP by 2027. But
generating economic and societal impact
through adoption and diffusion of AI
technologies is behind where it could be.21
There is a real opportunity to build on our
existing strengths in fundamental AI research
to ensure they translate into productive
processes throughout the economy.

At the same time, the field of AI is advancing
rapidly, with breakthrough innovations being
generated by a diverse set of institutions and
countries. The past decade has seen the rise
of deep learning, compute-intensive models,
routine deployment of vision, speech, and
language modelling in the real world, the
emergence of responsible AI and AI safety,
among other advances. These are being
developed by new types of research labs in
private companies and public institutions
around the world. We expect that the next
decade will bring equally transformative
breakthroughs. Our goal is to make the UK
the starting point for a large proportion of
them, and to be the fastest at turning them
into benefits for all.
To do this, UKRI will support the transformation of the UK's capability in AI by launching a National AI Research and Innovation (R&I) Programme. The
programme will shift us from a rich but siloed
and discipline-focused national AI landscape
to an inclusive, interconnected, collaborative,
and interdisciplinary research and innovation
ecosystem. It will work across all the Councils
of UKRI and will be fully-joined up with
business of all sizes and government
departments. It will translate fundamental
scientific discoveries into real-world AI
applications, address some limitations in the
ability of current AI to be effectively used in
numerous real world contexts, such as
tackling complex and undefined problems,
and explore using legacy data such as non-digital public records.

The National AI Research and Innovation (R&I) Programme has five main aims:
• Discovering and developing transformative new AI technologies, leading the world in the development
of frontier AI and the key technical capabilities to develop responsible and trustworthy AI. The programme
will support:
• foundational research to develop novel next generation AI technologies and approaches which could
address current limitations of AI, focusing on low power and sustainable AI, and AI which can work
differently with a diverse range of challenging data sets, human-AI interaction, reasoning, and the maths
underpinning the theoretical foundations of AI.
• technical and socio-technical capability development to overcome current limitations around the
responsible trustworthy nature of AI.
• Maximising the creativity and adventure of researchers and innovators, building on UK strengths and
developing strategic advantage through a diverse range of AI technologies. The programme will support:
• specific routes to enable the exploration of high-risk ideas in the development and application of AI;
• follow-on funding to maximise the impact of the ideas with the most potential.
• Building new research and innovation capacity to deliver the ideas, technologies, and workforce of
the future, recruiting and retaining AI leaders, supporting the development of new collaborative AI
ecosystems, and developing collaborative, multidisciplinary, multi-partner teams. The programme will
support:
• the recruitment, retention, training and development of current and future leaders in AI, and flexible
working across sectoral and organisational interfaces using tools such as fellowships, and building on
the success of the Turing AI Fellowships scheme;
• enhanced UK capacity in key AI professional skills for research and innovation, such as data scientists
and software engineers.
• Connecting across the UK AI Research and Innovation ecosystem, building on the success of The Alan
Turing Institute as the National Centre for AI and Data Science, and building collaborative partnerships
nationally and regionally between and across sectors, diverse AI research and innovation stakeholders. The
programme will support:
• the development of a number of nationally distributed AI ecosystems which enable researchers and
innovators to collaborate in new environments and integrate basic research through application and
innovation. These ecosystems will be networked into a national AI effort with the Alan Turing Institute as
its hub, convening and coordinating the national research and innovation programme and enabling
business and government departments to access the UK’s AI expertise and skills capability e.g. the
catapult network and compute capability.
• Supporting the UK's AI Sector and the adoption of AI, connecting research and innovation and
supporting AI adoption and innovation in the private sector. The programme will support:
• challenge-driven AI research and innovation programmes in key UK priorities, such as health and the
transition to net zero;
• collaborative work with the public sector and government organisations to facilitate leading researchers
and innovators engaging with the AI transformation of the public sector;
• innovation activities in the private sector, both in terms of supporting the development of the UK’s
burgeoning AI sector and the adoption of AI across sectors.
International collaboration on research & innovation
As well as better coordination at home, the
UK will work with friends and partners around
the world on shared challenges in research
and development and lead the global
conversation on AI.
The UK will participate in Horizon Europe,
enabling collaboration with other European
researchers, and will build a strong and varied
network of international science and
technology partnerships to support R&I
collaboration. By shaping the responsible use
of technology, we will put science and
technology, including AI, at the heart of our
alliances and partnerships worldwide. We will continue to use Official Development Assistance to support R&D partnerships with developing countries.
We are also deepening our collaboration with
the United States, implementing the US UK Declaration on Cooperation in AI Research and Development. This declaration outlines
a shared vision for driving technological
breakthroughs in AI between the US and the
UK. As we build materially on this partnership,
we will seek to enable UK partnership with
other key global actors in AI, to grow
influential R&I collaborations.

Access to data
The National Data Strategy sets out the
government's approach to unlocking the
power of data. Access to good quality,
representative data from which AI can learn is
critical to the development and application of
robust and effective AI systems.
The AI Sector Deal recognised this and since
then the government has established
evidence on which to make policies to
harness the positive economic and social
benefit of increased availability of data. This
includes the Open Data Institute’s original
research into data trusts as a model of data
stewardship to realise the value of data for AI.
The research established a repeatable model
for data trusts which others have begun to
apply.
Mission 1 of the National Data Strategy seeks
to unlock the value of data across the
economy, and is a vital enabler for AI. This
mission explores how the government can
apply six evidenced levers to tackle barriers to
data availability. The government will publish a policy framework in Autumn 2021, informed by the outcomes of Mission 1, setting out its role in enabling better data availability in the wider economy. The policy framework includes supporting the activities of intermediaries, including data trusts, and providing stewardship services between those sharing and accessing data.

The AI Council and the Ada Lovelace Institute
recently explored three legal mechanisms
that could help facilitate responsible data
stewardship – data trusts, data cooperatives
and corporate and contractual mechanisms.
The ongoing Data: A new direction
consultation asks what role the government
should have in enabling and engendering
confidence in responsible data intermediary
activity. The government is also exploring how privacy enhancing technologies can remove barriers to data sharing by more effectively managing the risks associated with sharing commercially sensitive and personal data.
Data foundations and use in AI systems
Data foundations refer to various
characteristics of data that contribute to its
overall condition, whether it is fit for purpose,
recorded in standardised formats on modern,
future-proof systems and held in a condition
that means it is findable, accessible,
interoperable and reusable (FAIR). A recent EY
study delivered on behalf of DCMS has found
that organisations that report higher AI
adoption levels also have a higher level of
data foundations.
The government is considering how to
improve data foundations in the private and
third sectors. Through the National AI R&I
Programme and ambitions to lead best
practices in FAIR data, we will grow our
capacity in professional AI, software and data
skills, and support the development of key
new data infrastructure capabilities. Technical professionals such as data engineers have a
key role to play in opening up access to the
most critical data and compute
infrastructures on FAIR data principles, and in
accelerating the pathway to using AI
technologies to make best use of the UK’s
healthy data ecosystem.
Data foundations are crucial to the effective
use of AI and it is estimated that, on average,
80% of the time spent on an AI project is
cleaning, standardising and making the data
fit for purpose. Furthermore, when the source
data needed to power AI or machine learning
is not fit for purpose, it leads to poor or
inaccurate results, and to delays in realising
the benefits of innovation.22 Poor quality
datasets can also be un-representative,
especially when it comes to minority groups,
and this can propagate existing biases and
exclusions when they are used for AI.
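The cleaning and standardising that consumes most of an AI project's time can be sketched concretely. The snippet below is a minimal illustration only (the field names and normalisation rules are hypothetical) of the kind of routine work involved: trimming whitespace, normalising date formats and flagging missing values before any model sees the data:

```python
from datetime import datetime

def clean_record(record):
    """Standardise one raw record: strip whitespace, replace empty
    strings with None, and normalise the date field to ISO 8601."""
    cleaned = {}
    for key, value in record.items():
        value = value.strip() if isinstance(value, str) else value
        cleaned[key] = value if value != "" else None
    if cleaned.get("date"):
        # Accept either UK-style or ISO dates; emit ISO 8601 (assumed rule).
        for fmt in ("%d/%m/%Y", "%Y-%m-%d"):
            try:
                cleaned["date"] = datetime.strptime(cleaned["date"], fmt).date().isoformat()
                break
            except ValueError:
                continue
    return cleaned

raw = {"name": "  Ada Lovelace ", "date": "10/12/1815", "postcode": ""}
print(clean_record(raw))
# → {'name': 'Ada Lovelace', 'date': '1815-12-10', 'postcode': None}
```

Multiplied across millions of records and dozens of source systems, steps like these are where the estimated 80% of project effort goes.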
The government is looking to support action
to mitigate the effects of quality issues and
underrepresentation in AI systems. Subject to the outcomes of the Data: A new direction consultation, the government will more explicitly permit the collection and processing of sensitive and protected characteristics data to monitor and mitigate bias in AI systems.
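Monitoring of this kind can start simply. The sketch below is illustrative only (the data is hypothetical, and comparing selection rates is just one of many possible bias measures): it compares a model's positive-outcome rate across groups recorded in a protected-characteristic field.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, decision) pairs, where decision is
    1 (positive outcome) or 0. Returns the positive rate per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical monitoring data: (protected-characteristic group, model decision).
data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
rates = selection_rates(data)
print(rates)                                    # {'A': 0.75, 'B': 0.25}
print(max(rates.values()) - min(rates.values()))  # 0.5 — a gap worth investigating
```

Without collecting the group labels in the first place, a disparity like this is invisible, which is the rationale for the proposal above.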
An important outcome for increasing access
to data and improving data foundations is in
how technology will be better able to use that
data. Technological convergence – the
tendency for technologies that were originally
unrelated to become more closely integrated
(or even unified) as they advance – means
that AI will increasingly be deployed together
with many other technologies of the future,
unlocking new technological, economic and
social opportunities. For example, AI is a
necessary driver of the development of
robotics and smart machines, and will be a
crucial enabling technology for digital twins.
These digital replicas of real-world assets,
processes or systems, with a two-way link to
sensors in the physical world, will help make
sense of and create insights and value from
vast quantities of data in increasingly
sophisticated ways. And in the future, some
types of AI will rely on the step-change in
processing power that quantum computing is
expected to unlock.
Government will consult later this year on the potential value of and options for a UK capability in digital twinning and wider 'cyber-physical infrastructure'.23 This
consultation will help identify how common,
interoperable digital tools and platforms, as
well as physical testing and innovation spaces
can be brought together to form a digital and
physical shared infrastructure for innovators
(e.g. digital twins, test beds and living labs).
Supporting and enabling this shared
infrastructure will help remove time, cost and
risk from the process of bringing innovation
to market, enabling accelerated AI
development and applications.
Public sector data
Work is underway within the government to
fix its own data foundations as part of Mission
3 of the National Data Strategy, which focuses
on transforming the government's use of data to drive efficiency and improve public
services. The Central Digital and Data Office
(CDDO) has been created within the Cabinet
Office to consolidate the core policy and
strategy responsibilities for data foundations,
and will work with expert cross-sector
partners to improve government’s use and
reuse of data to support data-driven
innovation across the public sector.
The CDDO also leads on the Open
Government policy area, a wide-ranging and
open engagement programme that entails
ongoing work with Civil Society groups and
government departments to target new kinds
of data highlighted as having 'high potential
impact' for release as open data. The UK’s
ongoing investment in open data will serve to
further bolster the use of AI and machine
learning within government, the private
sector, and the third sector. The application of
standards and improvements to the quality of
data collected, processed, and ultimately
released publicly under the Open
Government License will create further value
when used by organisations looking to train
and optimise AI systems utilising large
amounts of information.
The Office for National Statistics (ONS) is
leading the Integrated Data Programme in
collaboration with partners across
government, providing real-time evidence,
underpinning policy decisions and delivering
better outcomes for citizens while maintaining
privacy. The 2021 Declaration on Government
Reform sets out a focus on strengthening
data skills across government including senior
leaders.

We need to strengthen the way that public
authorities can engage with private sector
data providers to make better use of data
through FAIR data and open standards,
including making government data more
easily available through application
programming interfaces (APIs), and
encouraging businesses to offer their data
through APIs. Government will continue to publish authoritative open and machine-readable data on which AI models for both public and commercial benefit can depend. The Office for AI will also work with teams across government to consider what valuable datasets government should purposefully incentivise or curate that will accelerate the development of valuable AI applications.
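Publishing data in a machine-readable format matters because downstream AI pipelines can then consume it without manual rework. As a minimal sketch (the dataset below is invented for illustration), a CSV released under an open licence can be loaded directly into typed records:

```python
import csv
import io

# A stand-in for an open, machine-readable government dataset (invented values).
OPEN_DATA_CSV = """region,year,ai_firms
North East,2020,120
North East,2021,145
London,2020,2310
London,2021,2590
"""

def load_dataset(text):
    """Parse a machine-readable CSV into a list of typed dicts."""
    reader = csv.DictReader(io.StringIO(text))
    return [
        {"region": row["region"], "year": int(row["year"]), "ai_firms": int(row["ai_firms"])}
        for row in reader
    ]

records = load_dataset(OPEN_DATA_CSV)
latest = {r["region"]: r["ai_firms"] for r in records if r["year"] == 2021}
print(latest)  # {'North East': 145, 'London': 2590}
```

The same records served as a scanned PDF table would require exactly the manual cleaning effort described in the data foundations discussion above.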
Compute
Access to computing power is essential to the
development and use of AI, and has been a
dominant trend in AI breakthroughs of the
past decade. The computing power
underpinning AI in the UK comes from a
range of sources. The government’s recent
report on large-scale computing24 recognises
its importance in AI innovation, but suggests
that the UK’s infrastructure is lagging behind
other major economies around the world
such as the US, China, Japan and Germany.
We also recognise the growing compute gap
between large-scale enterprises and
researchers. Access to compute is both a
competitiveness and a security issue. It is also
not a one-size-fits-all approach – different AI
technologies need different capabilities.

Digital Catapult's Machine Intelligence Garage
For more than three years, Digital Catapult’s
Machine Intelligence Garage (MI Garage) has
helped startups accelerate the development
of their industry-leading AI solutions by
addressing their need for computational
power.
Some AI solutions being developed require
greater computing capacity in the form of
High Performance Computers (HPC) for
unusually large workloads (such as weather
simulation, protein folding and simulation of
molecular interactions) or access to AI
focussed hardware like Graphcore’s
Intelligence Processing Unit (IPU), a new
processor specifically designed for developing
AI. MI Garage provides a channel through
which startups can connect with HPC centres
and access specialised hardware. HPC
partners include the Hartree National Centre
for Digital Innovation , the Edinburgh Parallel
Computing Centre , and the Earlham Institute .
MI Garage has also worked with NVIDIA,
Graphcore and LightOn to facilitate access to
special trials to lower the barrier to entry to AI
specialised hardware.
Sustained public and private investment in a
range of facilities from cloud, laboratory and
academic department scale, through to
supercomputing, will be necessary to ensure
that accessing computing power is not a
barrier to future AI research and innovation,
commercialisation and deployment of AI. In
June 2021, the government announced joint
funding with IBM for the Hartree National
Centre for Digital Innovation to stimulate high
performance computing enabled innovation
in industry and make cutting-edge
technologies like AI more accessible to
businesses and public sector organisations.

Understanding our domestic AI computing
capacity needs and their relationship to
energy use is increasingly important25 if we are to achieve our ambitions. To better understand the UK's future AI computing requirements, the Office for AI and UKRI will evaluate the UK's computing capacity needs to support AI innovation, commercialisation and deployment. This
study will look at the hardware and broader
needs of researchers and organisations, large
and small, developing AI technologies,
alongside organisations adopting AI products
and services. The study will also consider the
possible wider impact of future computing
requirements for AI as it relates to areas of
proportional concern, such as the environment. The report will feed into UKRI’s
wider work on Digital Research
Infrastructure.26
Alongside access to necessary compute
capacity, the competitiveness of the AI
hardware will be critical to the UK's overall
research and commercial competitiveness in
the sector. The UK is a world leader in chip
and systems design, underpinned by
processor innovation hubs in Cambridge and
Bristol. We have world-leading companies
supporting both general purpose AI –
Graphcore has built the world's most complex
AI chip,27 and for specific applications – XMOS is a leader in AI for IoT. The government is currently undertaking a wider review of its international and domestic approach to the semiconductor sector. Given
commercial and innovation priorities in AI,
further support for the chip design
community will be considered.
Finance and VC
AI innovation is thriving in the UK, backed by
our world-leading financial services industry.
In 2020, UK firms that were adopting or
creating AI-based technologies received
£1.78bn in funding, compared to £525m
raised by French companies and £386m
raised in Germany.28 More broadly,
investment in UK deep tech companies has
increased by 291% over the past five years,
though deal sizes remain considerably
smaller compared to the US.29

Tech Nation
Tech Nation is a predominantly government-
funded programme, built to deliver its own
initiatives that grow and support the UK’s
burgeoning digital tech sector. This includes
growth initiatives aiming to help businesses
successfully navigate the transition from start-
up to scale-up and beyond, network initiatives
to connect the UK digital ecosystem, and the
Tech Nation Visa scheme, which offers a route
into the UK for exceptionally talented
individuals from overseas.
Recent growth programmes include Applied
AI, their first to help the UK’s most promising
founders who are applying AI in practical
areas and creating real-world impact; Net
Zero, a six-month free growth programme for
tech companies that are creating a more
sustainable future; and Libra, which is focused
on supporting Black founders and addressing
racial inequality in UK tech.
The government will continue to evaluate the state of funding specifically for innovative firms developing AI technologies across every region of the UK. This work will explore if there are any significant investment gaps or barriers to accessing funding that AI-innovative companies are facing that are not being addressed. Government commits to reporting on this work in Autumn 2022.
Accessing the right finance at the right time is
critical for AI innovators to be able to develop
their idea into a commercially viable product
and grow their business, but this is
complicated by the long timelines often
needed for AI research and development
work.30,31The AI Council’s Roadmap suggests a
funding gap at series B+, meaning that AI
companies are struggling to scale and stay
under UK ownership.
While the UK’s funding ecosystem is robust,
the government is committed to ensuring the
system is easy for businesses and innovators
to navigate, and that any existing gaps are
addressed. The recent Innovation Strategy
signalled the Government’s efforts to support
innovators by bringing together effective
private markets with well-targeted public
investment. In it, the government set out
plans to upskill lenders to assess risk when
lending to innovative businesses and outlined
work across Innovate UK and the British
Business Bank to investigate how businesses
interact with the public support landscape, to
maximise accessibility for qualifying
businesses. A good example of this is the Future Fund: Breakthrough, a new £375 million UK-wide programme launched in July 2021, which will encourage private investors to co-invest with the government in high-growth innovative businesses to accelerate the deployment of breakthrough technologies.
Our economy’s success and our citizens’
safety rely on the government’s ability to
protect national security while keeping the UK
open for business with the rest of the world.
Within this context, we will ensure we protect
the growth of welcome investment into the
UK's AI ecosystem. The government has introduced the National Security and Investment Act that will provide new powers
to screen investments effectively and
efficiently now and into the future. It will give
businesses and investors the reassurance
that the UK continues to welcome the right
talent, investment and collaboration that
underpins our wider economic security.
Trade
AI is a key part of the UK’s digital goods and
services exports, which totalled £69.3bn in
2019.32Trade can support the UK’s objectives
to sustain the mature, competitive and
innovative AI developer base the UK needs to
access customers around the world.
As part of its free trade agenda, the
government is committed to pursuing
ambitious digital trade chapters to help place
the UK as a global leader. As the UK secures new trade deals, the government will include provisions on emerging digital technologies, including AI, and champion
international data flows, preventing
unjustified barriers to data crossing borders
while maintaining the UK’s high standards for
personal data protection.
In doing so, the UK aims to deliver digital
trade chapters in agreements that: 1) provide
legal certainty; 2) support data flows; 3)
protect consumers; 4) minimise non-tariff
barriers to digital trade; 5) prevent
discrimination against trade by electronic
means; and 6) promote international
cooperation and global AI governance. All of
these aims support a pro-innovation agenda.
Pillar 1 - Investing in the Long Term Needs of the AI Ecosystem
Actions:
1. Launch a new National AI Research and Innovation Programme, that will align funding
programmes across UKRI Research Councils and Innovate UK, stimulating new investment in
fundamental AI research while making critical mass investments in particular applications of AI.
2. Lead the global conversation on AI R&D and put AI at the heart of our science and technology
alliances and partnerships worldwide through:
1. Work with partners around the world on shared AI challenges, including participation in Horizon Europe to enable collaboration with other European researchers.
2. Use of Overseas Development Assistance to support partnerships with developing AI nations.
3. Deliver new initiatives through the US UK Declaration on Cooperation in AI R&D.
3. Develop a diverse and talented workforce which is at the core of maintaining the UK’s world
leading position through:
1. Scoping what is required to upskill employees to use AI in a business setting. Then, working with the Department for Education, explore how skills provision can meet these needs through the Skills Value Chain and build out AI and data science skills through Skills Bootcamps.
2. Supporting existing interventions across top talent, PhDs and Masters levels and developing world leading teams and collaborations, the government will continue to attract and develop the brightest and best people to build AI.
3. Inspiring all to be excited by the possibilities of AI, by supporting the National Centre for Computing Education (NCCE) to ensure AI programmes for children are accessible and reach the widest demographic and that career pathways for those working with or developing AI are clearly articulated on career guidance platforms.
4. Promoting the revitalised and new visa routes that encourage innovators and entrepreneurs to the UK, making attractive propositions for prospective and leading AI talent.
4. Publish a policy framework setting the government's role in enabling better data availability in the
wider economy. The government is already consulting on the opportunity for data intermediaries
to support responsible data sharing and data stewardship in the economy and the interplay of AI
technologies with the UK’s data rights regime.
5. Consult on the potential role and options for a future national ‘cyber-physical infrastructure’
framework, to help identify how common interoperable digital tools and platforms and cyber-
physical or living labs could come together to form a digital and physical ‘commons’ for
innovators, enabling accelerated AI development and applications.
6. Publish a report on the UK's compute capacity needs to support AI innovation, commercialisation
and deployment. The report will feed into UKRI’s wider work on infrastructure.
7. Continue to publish open and machine-readable data on which AI models for both public and
commercial benefit can depend.
8. Consider what valuable datasets the government should purposefully incentivise or curate that
will accelerate the development of valuable AI applications.
9. Undertake a wider review of our international and domestic approach to the semiconductor
sector. Given commercial and innovation priorities in AI, further support for the chip design
community will be considered.
10. Evaluate the state of funding specifically for innovative firms developing AI technologies in the UK,
and report on this work in Autumn 2022.
11. Protect national security through the National Security & Investment Act while keeping the UK
open for business with the rest of the world, as our economy’s success and our citizens’ safety
rely on the government’s ability to take swift and decisive action against potentially hostile foreign
investment.
12. Include provisions on emerging digital technologies, including AI, in future trade deals alongside
championing international data flows, preventing unjustified barriers to data crossing borders
and maintaining the UK’s high standards for personal data protection.
To ensure that all sectors and regions of the
UK economy can benefit from the positive
transformation that AI will bring, the
government will back the domestic design
and development of the next generation of AI
systems, and support British business to
adopt them, grow and become more
productive. The UK has historically been
excellent at developing new technologies but
less so at commercialising them into products
and services.
As well as smart action to support suppliers, developers and adopters,
government also has a role to play when it
comes to the use of AI, both as a significant
market pull in terms of public procurement,
such as the NHS and the defence sector, with
a dedicated Defence AI Strategy and AI
Centre, but also in terms of using the
technology to solve big public policy
challenges, such as in health and achieving
net zero. Finally, it requires being bold and
experimental, and supporting the use of AI in
the service of mission-led policymaking.
Commercialisation
Developing a commercial AI product or
service is more than just bringing an idea to market or accessing the right funding. Recent analysis from Innovate UK suggests that obtaining private funding is only one of many obstacles to successful commercial outcomes in AI-related projects.
Government's aim is to diffuse AI across the whole economy to drive the highest amount of economic and productivity growth due to AI.

This will be achieved by:

• Supporting AI businesses on their commercial journey, understanding the unique challenges they face, helping them get to market, and supporting innovation in high-potential sectors and locations where the market currently doesn't reach;

• Understanding better the factors that influence organisations' decisions to adopt AI – which includes an understanding of when not to;

• Ensuring AI is harnessed to support outcomes across the Government's Innovation Strategy, including by purposefully leveraging our leading AI capabilities to tackle real-world problems facing the UK and the world through our Innovation Missions,33 while driving forward discovery;

• Leveraging the whole public sector's capacity to create demand for AI and markets for new services.

As well as the well-known barriers such as access to data, labour market supply and access to relevant skills discussed above, another challenge reported by businesses is a lack of engagement with end users, which limits adoption and commercialisation.
Commercialisation outcomes are also often constrained by business models rather than technical issues, and by a lack of understanding of AI-related projects' return on investment.
AI deployment – understanding new dynamics
To grow the market and spread AI to more
areas of our economy, the government aims
to support the demand side as well as the
means for commercialising AI - understanding
what, why, when and how companies choose
to incorporate AI into their business planning
is a prerequisite to any attempt to encourage
wider adoption and diffusion across the UK.
EY research delivered on behalf of DCMS
shows that AI remains an emerging
technology for private sector and third sector
organisations in the UK. 27% of UK
organisations have implemented AI
technologies in business processes; 38% of
organisations are planning and piloting AI
technology; and 33% of organisations have
not adopted AI and are not planning to.
Consistent with studies of AI adoption,34 the
size of an organisation was found to be a
large contributing factor to the decision to
adopt AI, with large organisations far more
likely to have already done so. Recognising
that for many sectors this is the cutting edge
of industrial transformation, and the need for more evidence, the Office for AI will publish research later this year into the drivers of AI adoption and diffusion.
To stimulate the development and adoption of AI technologies in high-potential, low-AI-maturity sectors, the Office for AI and UKRI will launch a programme that will:
• Support the identification and creation of
opportunities for businesses, whether
SMEs or larger firms, to use AI and for AI
developers to build new products and
services that address these needs;
• Create a pathway for AI developers to
start companies around new products
and services or to extend and diversify
their product offering if they are looking
to grow and scale;
• Facilitate close engagement between businesses and AI developers to ensure that products and services address business needs, are responsibly developed and implemented, and are designed and deployed so that businesses and developers alike are prepped and primed for AI implementation; and
• Incentivise investors to learn about these
new market opportunities, products, and
services, so that, where equity finance is
needed, the right financing is made
available to AI developers.

Pillar 2: Ensuring AI benefits all sectors and regions
Supporting the transition to an AI-enabled economy, capturing the benefits of AI innovation in the UK, and ensuring AI technologies benefit all sectors and regions
AI and Intellectual Property (IP): Call for Views and Government Response
An effective Intellectual Property (IP) system is
fundamental to the Government’s ambition
for the UK to be a ‘science superpower’ and
the best place in the world for scientists,
researchers and entrepreneurs to innovate.
To ensure that IP incentivises innovation, our
aspiration is that the UK’s domestic IP
framework gives the UK a competitive edge.
In support of this ambition, the IPO published
its AI and IP call for views to put the UK at the
forefront of emerging technological
opportunities, by considering how AI impacts
on the existing UK intellectual property
framework and what impacts it might have for
AI in the near to medium term.
In March this year, the government published
its response to the call for views, which
committed to the following next steps:
• To consult on the extent to which
copyright and patents should protect AI
generated inventions and creative works;
• To consult on measures to make it easier
to use copyright protected material in AI
development;
• An economic study to enhance
understanding of the role the IP
framework plays in incentivising
investment in AI.
The consultation, covering the copyright areas of computer-generated works and text and data mining, and patents for AI-devised inventions, will be launched before the end of
the year so that the UK can harness the
opportunities of AI to further support
innovation and creativity.

Creating and protecting Intellectual Property
Intellectual Property (IP) plays a significant
part in building a successful business by
rewarding people for inventiveness and
creativity and enabling innovation. IP supports
business growth by incentivising investment,
safeguarding assets and enabling the sharing
of know-how. The Intellectual Property Office
(IPO) recognises that AI researchers and
developers need the right support to
commercialise their IP, and helps them to
understand and identify their intellectual
assets, providing them with the skills to
protect, exploit and enforce their rights to
improve their chances of survival and growth.
Using AI for the public benefit
AI can contribute to solving the greatest
challenges we face. AI has contributed to
tackling COVID-19, demonstrating how these
technologies can be brought to bear
alongside other approaches to create
effective, efficient and context-specific
solutions.
There are many areas of AI development that
have matured to the point that industry and
third sector organisations are investing
significantly in AI tools, techniques and
processes. These investments are helping to
move AI from the lab and into commercial
products and services. But there remain more
complex, cross-sector challenges that
industry is unlikely to solve on its own. These challenges will require public sector
leadership, identifying strategic priorities that
can maximise the potential of AI for the
betterment of the UK.
The government has a clear role to play. In
stimulating and applying AI innovation to
priority applications and wider strategic goals,
the government can help incentivise a group
of different actors to harness innovation for
improving lives, simultaneously reinforcing
the innovation cycle that can drive wider
economic benefits – from creating and
invigorating markets, to the role of open
source in the public, private and third sectors,
to raising productivity. Overthenextsixto
twelvemonths,theOfficeforAIwillwork
closelywiththeOfficeforScienceand
TechnologyStrategyandgovernment
departmentstounderstandtheAIandCOVID-19
When the pandemic began it created a unique environment where AI technologies were developed
to identify the virus more quickly, to help with starting treatments earlier and to reduce the likelihood
that people will need intensive care.
Working with Faculty, NHS England and NHS Improvement developed the COVID-19 Early Warning
System(EWS). A first-of-its-kind toolkit that forecasts vital metrics such as COVID-19 hospital
admissions and required bed capacity up to three weeks in advance, based on a wide range of data
from the NHS COVID-19 Data Store. This gave national, regional and local NHS teams the confidence
to plan services for patients amid any potential upticks in COVID-related hospital activity.
At the same time over the past year, the NHS AI Lab has collected more than 40,000 X-ray, CT and
MRI chest images of over 13,000 patients from 21 NHS trusts through the National COVID-19 Chest
Imaging Database (NCCID), one of the largest centralised collections of medical images in the UK. The
NCCID is being used to study and understand the COVID-19 illness and to improve the care for
patients hospitalised with severe infection. The database has enabled 13 projects to research new AI
technologies to help speed up the identification, severity assessment and monitoring of COVID-19.
UK AI companies have also shown how AI can help accelerate the search for potential drug
candidates, streamline triage and contribute to global research efforts. BenevolentAI , a world-leading
AI company focused on drug discovery and medicine development, used their biomedical knowledge
graph to identify a potential coronavirus treatment from already approved drugs that could be
repurposed to defeat the virus. This was later validated through experimental testing from
AstraZeneca. UK AI company DeepMind have adapted their AI-enabled protein folding breakthrough
to better understand the virus’ structure , contributing to a wider understanding of how the virus can
function.
government'sstrategicgoalsandwhereAI
canprovideacatalyticcontribution,35
including through Innovation Missions and
the Integrated Review’s ‘Own-Collaborate-
Access’ framework.36
The COVID-19 pandemic has shown that
global challenges need global solutions. The
UK’s international science and technology
partnerships, global network of science and
innovation officers, and research and
innovation hubs, are working alongside UK
universities, research institutes and investors
to foster new collaborations to tackle the
global challenges we all share, including through innovations in global health and in achieving net zero emissions around the globe.
Missions
The Innovation Strategy set out the
government's plans to stimulate innovation to
tackle major challenges facing the UK and the
world, and drive capability in key technologies.
This will be achieved through Innovation Missions,37 which will draw on multiple technologies and research disciplines to deliver clear and measurable outcomes. They will be supported by Innovation Technologies,38 including AI, strengthening their capability to tackle pressing global and national challenges while supporting their adoption in novel areas, boosting growth and helping to consolidate our position as a science and AI superpower.
Some of these challenges have been
articulated and revolve around the future
health, wellbeing, prosperity and security of
people, the economy, and our environment –
in the UK and globally. These challenges are worthwhile and therefore difficult: addressing them will require harnessing the combined intellect and diversity of the AI ecosystem and the whole nation, and considering a full range of possible impacts of a given solution. The pace
of AI development is often fast, parallel and
non-linear, and finding the right answer to
these challenges will require a collection of
actors beyond just government departments,
agencies and bodies to consider the technical
and social implications of certain solutions
and increase the creativity of problem solving.
In doing so, the UK will be able to find new
paths for AI to deliver on our security and
prosperity objectives at home and abroad. At the same time, well-specified challenges
have also led to some of the most impactful
moments of progress in AI. Whether through
ImageNet, CIFAR-10, MNIST, GLUE, SQuAD,
Kaggle, or more, challenge-related datasets
and benchmarks have generated
breakthroughs in vision, language,
recommender systems, and other subfields.39
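The role such shared benchmarks play can be seen in miniature in a standard train-and-evaluate loop. The sketch below is illustrative only: it assumes scikit-learn is available and uses its small bundled digits dataset as a stand-in for the much larger benchmarks named above.

```python
# Illustrative only: a benchmark-style train/evaluate loop, using
# scikit-learn's bundled "digits" dataset as a small stand-in for
# the benchmarks named above (MNIST, CIFAR-10, etc.).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1,797 labelled 8x8 greyscale digit images

# A fixed held-out test split is what makes scores comparable between entrants
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {accuracy:.3f}")
```

The fixed held-out split is the essential ingredient: because every entrant is scored against the same unseen examples, results are directly comparable, which is what lets a benchmark drive progress in a subfield.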
The government believes that challenges
could be created that simultaneously
incentivise significant progress in Innovation
Missions while rapidly progressing the
development in the technology along
desirable lines.
To this end, the government will develop a repository of short, medium and long-term AI challenges to motivate industry and society to identify and implement real-world solutions to the strategic priorities.
These priorities will be identified through the
Missions Programme, and guided by the
National AI R&I Programme.
Climate change and global health threats are
examples of shared international challenges,
and science progresses through open
international collaboration. This is particularly
the case when AI development is able to take
advantage of publicly available coding
platforms to produce new algorithms. The UK will extend its science partnerships and its work investing UK aid to support local innovation ecosystems in developing countries. Through our leadership in international development and diplomacy, we will work to ensure international collaboration can unlock the enormous potential of AI to accelerate progress on global challenges, from climate change to poverty.

Net Zero
The Prime Minister’s Ten Point Plan for a
Green Industrial Revolution highlights the
development of disruptive technologies such
as AI for energy as a key priority, and in
concert with the government’s Ten Tech
Priorities to use digital innovations to reach
net zero, the UK has the opportunity to lead
the world in climate technologies, supporting
us to deliver our ambitious net zero targets.
This will be key to meeting our stated ambition in the Sixth Carbon Budget, which brings with it a need to consider how to achieve the maximum possible level of emissions reductions.
Over the last ten years there have been a
series of advances in AI. These advances offer
opportunities to rapidly increase the
efficiency of energy systems and help reduce
emissions across a wide array of climate
change challenges. The AI Council’s AI
Roadmap advocates for AI technologies to
play a role in innovating towards solutions to
climate change, and literature is emerging
that shows how ‘exponential technologies’
such as AI can increase the pace of
decarbonisation across the most impactful
sectors. AI is increasingly seen as a critical
technology to scale and enable these
significant emissions cuts by 2030.40,41,42

AI and net zero
AI works best when presented with specific
problem areas with clear system boundaries
and where there are large datasets being
produced. In these scenarios, AI has the
capability to identify complex patterns, unlock
new insights, and advise on how best to
optimise system inputs in order to best
achieve defined objectives.
There are a range of climate change
mitigation and adaptation challenges that fit
this description. These include:
• using machine vision to monitor the
environment;
• using machine learning to forecast
electricity generation and demand and
control its distribution around the
network;
• using data analysis to find inefficiencies in
emission-heavy industries; and
• using AI to model complex systems, like
Earth’s own climate, so we can better
prepare for future changes.
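One of the bullets above — using machine learning to forecast electricity demand — can be sketched in a few lines. The example below is illustrative only: it assumes NumPy and scikit-learn, and uses synthetic data with a simple daily cycle in place of real grid measurements.

```python
# Illustrative only: forecasting the next hour's electricity demand from
# the previous 24 hours, on synthetic data (daily cycle plus noise).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)  # 60 days of hourly readings
demand = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

window = 24  # features: the previous 24 hourly readings
X = np.array([demand[i:i + window] for i in range(len(demand) - window)])
y = demand[window:]  # target: the following hour's demand

split = len(X) - 24  # hold out the final day for evaluation
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X[:split], y[:split])

mae = float(np.mean(np.abs(model.predict(X[split:]) - y[split:])))
print(f"mean absolute error on the held-out day: {mae:.2f}")
```

This illustrates the pattern described above: a specific problem with clear system boundaries and a large stream of data, where the model learns the recurring pattern and advises on what to expect next.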
AI applications for energy and climate
challenges are already being developed, but they are predominantly outliers; many applications across sectors have not yet been attempted. A study by Microsoft and PwC
estimated that AI can help deliver a global
reduction in emissions of up to 4% by 2030
compared to business as usual, with a
concurrent uplift of 4.4% to global GDP. Such
estimates are likely to become more accurate
over time as the potential of AI becomes
more apparent.
Missions will also be taken forward through the Innovation Strategy's Missions Programme, which will form the heart of the government's approach to responding to these priorities, and we will develop these missions in a way that considers the promise of AI technologies, particularly in areas of specific advantage such as energy.
Government will ensure that, in key areas of
international collaboration such as the US-UK
Declaration on Cooperation in AI Research
and Development and the Global Partnership
on AI, we will pursue technological
developments in world-leading areas of
expertise in the energy sector to maximise
our strategic advantage.
Health
In August 2019, the Health Secretary
announced a £250 million investment43 to
create the NHS AI Lab in NHSX to accelerate
the safe, ethical and effective development
and use of AI-driven technologies to help
tackle some of the toughest challenges in
health and social care, including earlier cancer
detection, addressing priorities in the NHS
Long Term Plan , and relieving pressure on the
workforce.
AI-driven technologies have the potential to
improve health outcomes for patients and
service users, and to free up staff time for
care.44 The NHS AI Lab, along with partners such as the Accelerated Access Collaborative, the National Institute for Health and Care Excellence and the Medicines and Healthcare products Regulatory Agency, is working to provide a facilitative environment to enable
the health and social care system to
confidently adopt safe, effective and ethical
AI-driven technologies at pace and scale.
The NHS AI Lab is creating a National Strategy for AI in Health and Social Care in line with the National AI Strategy. The strategy, which will begin engagement on a draft this year and is expected to launch in early 2022, will consolidate the system transformation achieved by the Lab to date and will set the direction for AI in health and social care up to 2030.
The public sector as a buyer
To build a world-leading strategic advantage
in AI and build an ecosystem that harnesses
innovation for the public good, the UK will
need to take a number of approaches. As the
government, we can also work with industry
leaders to develop a shared understanding
and vision for the emerging AI ecosystem,
creating longer-term certainty that enables
new supply chains and markets to form.
This requires aligning public procurement and pre-commercial procurement more closely with the development of deep and transformative technologies such as AI. The recent AI Council ecosystem survey revealed that 72% of respondents agreed the government should take steps to increase buyer confidence and AI capability. The Innovation Strategy and
forthcoming National Procurement Policy
Statement have recently articulated how we
can further refine public procurement processes around public sector culture,
expertise and incentive structures. This
complements previous work across
government to inform and empower buyers
in the public sector, helping them to evaluate
suppliers, then confidently and responsibly
procure AI technologies for the benefit of
citizens.45
The government has outlined how it plans to
rapidly modernise our Armed Forces,46,47 and
how investments will be guided.48,49 The
Ministry of Defence will soon be publishing its
AI strategy which will contribute to how we will
achieve and sustain technological advantage,
and be a great science power in defence. This
will include the establishment of the new
Defence AI Centre which will champion AI
development and use, and enable rapid
development of AI projects. Defence should
be a natural partner for the UK AI sector and
the defence strategy will outline how to
galvanise a stronger relationship between
industry and defence.

Ministry of Defence using AI to reduce costs and meet climate goals
The MOD is trialling a US startup's Software
Defined Electricity (SDE) system, which uses AI
to optimise electricity in real time, to help
meet its climate goals and reduce costs. Initial
tests suggest it could reduce energy draw by
at least 25% which, given the annual electricity
bill for MOD’s non-PFI sites in FY 2018/19 was
£203.6M, would equate to savings of £50.9M
every year and significant reductions in CO2
emissions.
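The savings figure quoted above follows directly from the numbers given; a quick check, assuming a flat 25% reduction applied to the whole bill:

```python
# Check of the quoted saving: 25% of the £203.6M annual electricity bill
# for the MOD's non-PFI sites in FY 2018/19.
annual_bill_m = 203.6   # £ million (FY 2018/19, non-PFI sites)
reduction = 0.25        # "at least 25%" lower energy draw in initial tests
savings_m = annual_bill_m * reduction
print(f"estimated annual saving: £{savings_m:.1f}M")  # → £50.9M
```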
AI Dynamic Purchasing System
The Crown Commercial Service worked
closely with colleagues in the Office for AI and
across government during drafting of
guidelines for AI procurement. This was used
to design their AI Dynamic Purchasing System
(DPS) agreement to align with these
guidelines, and included a baseline ethics assessment so that suppliers commit to bidding only where they are capable and willing to
deliver both the ethical and technical
dimensions of a tender.
The Crown Commercial Service is piloting a
training workshop to help improve the public
sector’s capability to buy AI products and
services, and will continue to work closely with
the Office for AI and others across
government to ensure we are addressing the
key drivers set out in the National AI Strategy.
Pillar 2 - Ensuring AI Benefits all Sectors and Regions
Actions:
1. Launch a programme as part of UKRI’s National AI R&I Programme, designed to stimulate the
development and adoption of AI technologies in high-potential, lower-AI maturity sectors. The
programme will be primed to exploit commercialisation interventions, enabling early innovators
to access potential market opportunities where their products and services are relevant.
2. Launch a draft National Strategy for AI in Health and Social Care in line with the National AI
Strategy. This will set the direction for AI in health and social care up to 2030, and is expected to
launch in early 2022.
3. Ensure that AI policy supports the government’s ambition to secure strategic advantage through
science and technology.
4. Consider how the development of Innovation Missions also incorporates the potential of AI solutions to tackle big, real-world problems such as net zero. This will be complemented by pursuing ambitious bilateral and multilateral agreements that advance our strategic advantages in net zero sectors such as energy, and by extending UK aid to support local innovation ecosystems in developing nations.
5. Build an open repository of AI challenges with real-world applications, to empower wider civil
society to identify and implement real-world solutions to the strategic priorities identified
through the Missions Programme and guided by the National AI Research and Innovation
Programme.
6. Publish research into the determinants impacting the diffusion of AI across the economy.
7. Publish the Ministry of Defence AI Strategy, which will explain how we can achieve and sustain
technological advantage and be a science superpower in defence, including detail on the
establishment of a new Defence AI Centre.
An effective governance regime that supports
scientists, researchers and entrepreneurs to
innovate while ensuring consumer and citizen
confidence in AI technologies is fundamental
to the government’s vision over the next
decade.
In a world where systematic international
competition will have significant impacts on
security and prosperity around the world, the
government wants the UK to be the most
trustworthy jurisdiction for the development
and use of AI, one that protects the public
and the consumer while increasing
confidence and investment in AI technologies
in the UK.
Effective, pro-innovation governance of AI
means that (i) the UK has a clear,
proportionate and effective framework for
regulating AI that supports innovation while
addressing actual risks and harms, (ii) UK
regulators have the flexibility and capabilities
to respond effectively to the challenges of AI,
and (iii) organisations can confidently innovate
and adopt AI technologies with the right tools
and infrastructure to address AI risks and
harms. The UK public sector will lead the way
by setting an example for the safe and ethical
deployment of AI through how it governs its
own use of the technology.
We will collaborate with key actors and
partners on the global stage to promote the
responsible development and deployment of
AI. The UK will act to protect against efforts to
adopt and apply these technologies in the
service of authoritarianism and repression.
Through our science partnerships and wider
development and diplomacy work, we will
seek to engage early with countries on AI
governance, to promote open society values
and defend human rights.

Government's aim is to build the most trusted and pro-innovation system for AI governance in the world.
This will be achieved by:
• Establishing an AI governance framework
that addresses the unique challenges and
opportunities of AI, while being flexible,
proportionate and without creating
unnecessary burdens;
• Enabling AI products and services to be
trustworthy, by supporting the development
of an ecosystem of AI assurance tools and
services to provide meaningful information
about AI systems to users and regulators;
• Growing the UK’s contribution to the
development of global AI technical
standards, to translate UK R&D for
trustworthy AI into robust, technical
specifications and processes that can
support our AI governance model, ensure
global interoperability and minimise the
costs of regulatory compliance;
• Building UK regulators’ capacities to use
and assess AI, ensuring that they can deliver
on their responsibilities as new AI-based
products and services come to market;
• Setting an example in the safe and ethical
deployment of AI, with the government
leading from the front;
• Working with our partners around the
world to promote international agreements
and standards that deliver for our
prosperity and security, and promote
innovation that harnesses the benefits of AI
as we embed our values such as fairness,
openness, liberty, security, democracy, rule
of law and respect for human rights.

Supporting innovation and adoption while protecting the public and building trust
The UK has a strong international reputation
for the rule of law and technological
breakthroughs. To build on this the
government set out its pro-innovation
approach through its Plan for Digital
Regulation . The Plan recognises that well-
designed regulation can have a powerful
effect on driving growth and shaping a
thriving digital economy and society, whereas
poorly-designed or restrictive regulation can
dampen innovation. The Plan also
acknowledges that digital businesses, which
include those developing and using AI
technologies, are currently operating in some
instances without appropriate guardrails. The
existing rules and norms, which have so far
guided business activity, were in many cases
not designed for these modern technologies
and business models. In addition, these
technologies are themselves disrupting these
established rules and norms.
This is especially the case for AI which, with its
powerful data processing and analytical
capabilities, is disrupting traditional business
models and processes.50 There is growing
awareness in industry and by citizens of the
potential risks and harms associated with AI
technologies. These include concerns around
fairness, bias and accountability of AI systems.
For example, the report from the Commission
on Race and Ethnic Disparities raised
concerns around the potential for novel ways
for bias to be introduced through AI. Other
concerns include the ability of AI to undermine privacy and human agency, and physical, economic and financial harms being enabled or exacerbated by AI technologies. For example, cyber security should be considered early in the development and deployment of AI systems, adopting a 'secure by design' approach rather than treating cyber security as an afterthought, to prevent such harms from arising.
This is not to say that AI is currently
unregulated. The UK already regulates many
aspects of the development and use of AI
through ‘cross-sector’ legislation and different
regulators. For example, there is coverage in areas like data protection (Information Commissioner's Office), competition (Competition & Markets Authority), and human rights and equality (Equality and Human Rights Commission), as well as through 'sector-specific' legislation and regulators, for example in financial services (Financial Conduct Authority) and medical products (Medicines and Healthcare products Regulatory Agency).
As the use of AI increases, the UK has
responded by reviewing and adapting the
regulatory environment. For example, the
Data: A new direction consultation, published
earlier this month, invites views on the role of
the data protection framework within the
broader context of AI governance. Specifically,
the consultation examines the role of
sensitive personal data in bias detection and
mitigation in AI systems, and the use of the
term 'fairness' in a data protection context.

Pillar 3: Governing AI effectively
Ensuring that national governance of AI technologies encourages innovation and investment, protects the public and safeguards our fundamental values, while working with global partners to promote the responsible development of AI internationally
Data: A new direction consultation
The UK data protection framework (UK General Data Protection Regulation and Data Protection Act 2018) is technology neutral and was not intended to comprehensively govern AI systems, or any
other specific technologies. Many AI systems do not use personal data at all.
Navigating and applying relevant data protection provisions can be perceived as a complex or
confusing exercise for an organisation looking to develop or deploy AI systems, possibly impeding
uptake of AI technologies.
DCMS is currently running a consultation on potential reforms to the data protection framework,
closing on the 19th November 2021. The consultation calls for views on specific data protection
provisions that are currently triggered in the process of developing and deploying AI. In particular,
the consultation covers:
• Clarifying the use and reuse of personal data for research (including AI development) (Ch 1);
• Clarifying the use and reuse of personal data under the legitimate interests test, including for bias detection and mitigation, and anonymisation (Ch 1);
• Explicitly authorising the use of sensitive personal data (special category data) for bias detection
and mitigation in AI systems (Ch 1);
• Clarifying the use of the term ‘fairness’ in a data protection context (Ch 1);
• Assessing the challenges with the current data protection framework in developing and
deploying AI responsibly (Ch 1);
• Assessing the general suitability and operation of UK GDPR Article 22 (rights relating to
automated decision-making and profiling) (Ch 1);
• Mandatory transparency requirements for the use of algorithmic decision-making in the public
sector (Ch 5).
2018, the government agreed with the House
of Lords' view that "blanket AI-specific regulation, at this stage, would be inappropriate... [and] that existing sector-specific regulators are best placed to consider the impact on their sector of any subsequent regulation which may be needed."
There are some strong reasons why our
sector-led approach makes sense:
1. The boundaries of AI risks and harms are grey, because the harms raised by
these technologies are often non-AI, or
extensions of non-AI, issues, and also
because AI is rapidly developing and
therefore what counts as the AI part of a
system is constantly changing.
2. Use cases for AI, and their wider impacts, can be highly complex in their own right. There is a big limitation in
what can be covered in cross-cutting
legislation on AI, and regardless of the
overall regulatory approach, the detail will
always need to be dealt with at the level
of individual harms and use cases.
3. Individual regulators and industries are already starting to respond to the risks of AI, and to work with innovators in
their sectors to guide on interpretation of
existing regulations, and on what further
regulatory responses are appropriate.
Enabling and empowering individual
bodies to respond is a much quicker
response to individual harms than
agreeing to an AI regulatory regime that
makes sense across all sectors.
4. AI is not the only ongoing technology
change, and its impacts are often
interlinked with other innovations and
behaviour changes, including increased connectivity, the move to mobile working,
the dominant role of major platforms etc.
It is often hard to unpick the specific
impact of AI; focusing regulation on the
particular use cases where there is risk
allows risks to be addressed holistically,
and simplifies things for innovators.
Having embraced a strong sector-based
approach to date, now is the time to decide
whether our existing approach remains the
right one.
As the UK’s regulators have begun to respond
to the emergence of AI, challenges have
emerged. These include:
• Inconsistent or contradictory
approaches across sectors. While a
sector-led approach allows
responsiveness to sector specific
challenges, it could create barriers to
adoption across sectors by creating
confusing or contradictory compliance
requirements;
• Overlap between regulatory
mandates, creating uncertainty about
responsibility, the potential for issues to
fall between the gaps, and increased
need for coordination;
• AI regulation could become framed
narrowly around prominent, existing
cross-cutting frameworks, e.g. the data
protection framework, while the range of
AI risks and harms is much broader;
• The growing activity in multilateral and
multistakeholder fora internationally,
and in global standards development
organisations, that addresses AI across
sectors could overtake a national effort to
build a consistent approach.

These challenges raise the question of
whether the UK’s current approach is
adequate, and whether there is a case for
greater cross-cutting AI regulation or greater
consistency across regulated sectors.
At the same time, alternative methods and
approaches to governing AI have emerged
from multilateral and multistakeholder fora,
at international and regional levels, including
global standards development organisations,
academia, thought leaders, and businesses.
This has raised awareness about the
importance of AI governance, but also
potentially confusion for the consumer about
what good AI governance looks like and
where responsibility lies.

Working with the AI ecosystem, the Office for
AI will develop our national position on
governing and regulating AI, which will be
set out in a White Paper in early 2022. The
White Paper will set out the government’s
position on the potential risks and harms
posed by AI technologies and our proposal to
address them.
Alternative options
The UK’s 2018 policy position that “existing
sector-specific regulators are best placed to
consider the impact on their sector of any
subsequent regulation which may be needed”
will be tested in our work towards the
development of a White Paper, along with
potential alternatives. The main alternative
options are:
1. Removing some existing regulatory
burdens where there is evidence they are
creating unnecessary barriers to
innovation.
2. Retaining the existing sector-based
approach, ensuring that individual
regulators are empowered to work
flexibly within their own remits to ensure
AI delivers the right outcomes.
3. Introducing additional cross-sector
principles or rules, specific to AI, to
supplement the role of individual
regulators to enable more consistency
across existing regimes.
For any of these options, it will be necessary
to ensure that regulators and other relevant
bodies are equipped to tackle the challenges
raised by AI. This may require additional
capabilities, capacity, and better coordination
among existing regulators; new guidance; or
standards to better enable consistency across
existing regulatory regimes.
In developing our White Paper position, the
Office for AI will consider all of these, and potentially other, options for governing AI
technologies. Having exited the EU, we have
the opportunity to build on our world-leading
regulatory regime by setting out a pro-
innovation approach, one that drives
prosperity and builds trust in the use of AI.
We will consider what outcomes we want to
achieve and how best to realise them, across
existing regulators’ remits and consider the
role that standards, assurance, and
international engagement plays.
Regulators, coordination and capacity
While some regulators are leading the way in
understanding the implications of AI for their
sector or activity, we need all regulators to be
able to do this. The cross-sector and
disruptive nature of AI also raises new
challenges in terms of regulatory overlap. For
example, concerns around fairness relate to
algorithmic bias and discrimination issues
under the Equality Act, the use of personal
data (including sensitive personal data) and
sector-specific notions of fairness such as the
Financial Conduct Authority’s Fair Treatment
of Customers guidance.
The government is working with The Alan
Turing Institute and regulators to examine
regulators’ existing AI capacities. In particular,
this work is exploring monitoring and
assessing products and services using AI and
dealing with complexities arising from cross-
sectoral AI systems.51

Greater cooperation is also being enabled
through initiatives such as the Digital
Regulation Cooperation Forum, a recently
formed voluntary forum comprising the
Competition & Markets Authority (CMA),
Financial Conduct Authority (FCA), Information
Commissioner's Office (ICO) and Office of
Communications (Ofcom), to deliver a joined-up
approach to digital regulation.
International governance and collaboration
The UK will work with partners to support the
international development of AI governance
in line with our values. We will do this by
working with partners around the world to
shape approaches to AI governance under
development, such as the proposed EU AI Act
and potential Council of Europe legal
framework. We will work to reflect the UK’s
views on international AI governance and
prevent divergence and friction between
partners, and guard against abuse of this
critical technology.
The UK is already working with like-minded
partners to ensure that shared values on
human rights, democratic principles and the
rule of law shape AI regulation and
governance frameworks, whether binding or
non-binding, and that an inclusive multi-
stakeholder approach is taken throughout
these processes. As the international debate
on these frameworks has gained momentum,
the UK has proactively engaged on AI at the
OECD,52 Council of Europe and UNESCO, and helped found the Global Partnership on AI
(GPAI), providing significant support for
evidence underpinning these initiatives, such
as the recently announced £1m investment in
GPAI’s data trust research by BEIS.
The UK will act to protect against efforts to
adopt and apply these technologies in the
service of authoritarianism and repression
and through our science partnerships and
wider development and diplomacy work seek
to engage early with countries on AI
governance, including when existing
technology governance is less developed, to
promote open society values and defend
human rights.
UK Defence has a strong record of
collaboration with international partners and
allies. Key collaborations include engagement
with NATO allies to lead AI integration and
interoperability across the Alliance, and
supporting the AI Partnership for Defence , a
14-nation coalition providing values-based
global leadership for defence AI.
The government will continue to work
with our partners around the world to
shape international norms and standards
relating to AI, including those developed
by multilateral and multistakeholder
bodies at global and regional level. This will
support our vision for a global ecosystem that
promotes innovation and responsible
development and use of technology,
underpinned by our shared values of
freedom, fairness, and democracy.
The UK is leading the way on AI technical
standards internationally
The UK’s global approach to AI
standardisation is exemplified by our
leadership in the International Organisation
for Standardisation and International
Electrotechnical Commission (ISO/IEC) on four
active AI projects, as well as the UK’s initiation
of and strong engagement in the Industry
Specification Group on Securing AI at the
European Telecommunications Standards
Institute (ETSI).
At ISO/IEC, the UK, through BSI, is leading the
development of AI international standards in
concepts and terminology; data; bias;
governance implications; and data life cycles.
At ETSI we have published, among other
documents, ETSI GR SAI 002 on Data Supply
Chain Security, which was led by the UK’s
National Cyber Security Centre.
The ISO/IEC work programme includes the
development of an AI Management System
Standard (MSS), which intends to help solve
some of the implementation challenges of AI.
This standard will be known as ISO/IEC 42001
and will help an organisation develop or use
artificial intelligence responsibly in pursuing
its objectives, and deliver on its expected
obligations to interested parties.

What are technical standards and how do they benefit the UK?
Global technical standards set out good practice that can be consistently applied to ensure that
products, processes and services perform as intended – safely and efficiently. They are generally
voluntary and developed through an industry-led process in global standards developing
organisations, based on the principles of consensus, openness, and transparency, and benefiting
from global technical expertise and best practice.53
We want global technical standards for AI to benefit UK citizens, businesses, and the economy by:
• Supporting R&D and Innovation. Technical standards should provide clear definitions and
processes for innovators and businesses, lowering costs and project complexity and improving
product consistency and interoperability, supporting market uptake.
• Supporting trade. Technical standards should facilitate digital trade by minimising regulatory
requirements and technical barriers to trade.
• Giving UK businesses more opportunities. Standardisation is a co-creation process that spans
different roles and sectors, providing businesses with access to market knowledge, new
customers, and commercial and research partnerships.
• Delivering on safety, security and trust. The Integrated Review set out the role of technical
standards in embedding transparency and accountability in the design and deployment of
technologies. AI technical standards (e.g. for accuracy, explainability and reliability) should ensure
that safety, trust and security are at the heart of AI products and services.
• Supporting conformity assessments and regulatory compliance. Technical standards should
support testing and certification to ensure the quality, performance and reliability of products before
they enter the market. This includes providing a means of compliance with requirements set out
in legislation.
AI and global digital technical standards
The UK’s Plan for Digital Regulation sets out
our ambition to use digital technical
standards to provide an agile and pro-
innovation way to regulate AI technologies
and build consistency in technical
approaches, as part of a wider suite of
governance tools complementing ‘traditional’
regulation.
The integration of standards in our model for
AI governance and regulation is crucial for
unlocking the benefits of AI for the economy
and society, and will play a key role in
ensuring that the principles of trustworthy AI
are translated into robust technical
specifications and processes that are globally
recognised and interoperable.

The government is also exploring with
stakeholders to:
• Pilot an AI Standards Hub to expand the
UK’s international engagement and
thought leadership; and
• Develop an AI standards engagement
toolkit to guide multidisciplinary UK
stakeholders to engage in the global AI
standardisation landscape.
Internationally, the government is:
• Increasing bilateral engagement with
partners, including strengthening
coordination and information sharing.
• Bringing together conversations at
standards developing organisations and
multilateral fora. BSI and the government
are members of the Open Community for
Ethics in Autonomous and Intelligent
Systems (OCEANIS) , which unites global
SDOs, businesses, and research
institutes.
• Engaging in the OECD’s Network of
Experts Group on Implementing
Trustworthy AI , collaborating with
governments, academics, and experts to
build guidance.
• Promoting the 2021 Carbis Bay G7
Leaders’ Communiqué , on supporting
inclusive, multi-stakeholder approaches
to standards development, by ensuring
our UK approach to AI standards is
multidisciplinary, and encourages a wide
set of stakeholders in standards
developing organisations.

The UK is taking a global approach to shaping
technical standards for AI trustworthiness,
seeking to embed accuracy, reliability,
security, and other facets of trust in AI
technologies from the outset. The
government’s work to date on AI technical
standards with international partners,
industry, and other stakeholders provides a
potential foundation to complement our
governance and regulatory approach.
Domestically, the government has established
a strategic coordination initiative with the
British Standards Institution (BSI) and the
National Physical Laboratory to explore ways
to step up the UK’s engagement in global
standards developing organisations.54
AI assurance
Understanding whether AI systems are safe,
fair or are otherwise trustworthy requires
measuring, evaluating and communicating a
variety of information, including how these
systems perform, how they are governed and
managed, whether they are compliant with
standards and regulations, and whether they
will reliably operate as intended. AI assurance
will play an important enabling role, unlocking
economic and social benefits of AI systems.
What is Assurance?
Assurance covers a number of governance
mechanisms for third parties to develop trust
in the compliance and risk of a system or
organisation. Assurance as a service draws
originally from the accounting profession, but
has since been adapted to cover many areas
such as cyber security, product safety, quality
and risk management.
In these areas, mature ecosystems of
assurance products and services enable
people to understand whether systems are
trustworthy and direct their trust or distrust
appropriately. These products and services
include: process and technical standards;
repeatable audits; impact assessments;
certification schemes; advisory and training
services.

An AI assurance ecosystem is emerging within
both the public and private sectors, with a
range of companies including established
accountancy firms and specialised start-ups,
beginning to offer assurance services. A
number of possible assurance techniques55
have been proposed and regulators are
beginning to set out how AI might be assured
(for example, the ICO’s Auditing Framework
for AI).
However, the assurance ecosystem is
currently fragmented and there have been
several calls for better coordination, including
from the Committee on Standards in Public
Life and the Office for Statistics Regulation.
The CDEI’s recently published review into bias
in algorithmic decision-making also points to
the need for an ecosystem of industry
standards and professional services to help
organisations address algorithmic bias in the
UK and beyond.
Playing this crucial role in the development
and deployment of AI, assurance is likely to
become a significant economic activity in its
own right and is an area in which the UK, with
particular strengths in legal and professional
services, has the potential to excel.
To support the development of a mature
AI assurance ecosystem, the CDEI is
publishing an AI assurance roadmap. This
roadmap clarifies the set of activities needed
to build a mature assurance ecosystem and
identifies the roles and responsibilities of
different stakeholders across these activities.

Public sector as an exemplar
The government must lead from the front and
set an example in the safe and ethical
deployment of AI. The Office for AI and the
Government Digital Service worked with The
Alan Turing Institute to produce guidance on
AI ethics and safety in the public sector in
2019. This guidance identifies the potential
harms caused by AI systems and proposes
measures to counteract them. The
government is working with The Alan
Turing Institute to update this guidance in
order to provide public servants with the
most current information about the state
of the art in responsible AI innovation. This
update incorporates the delivery of interactive
workbooks aimed to equip public sector
stakeholders with the practical tools and skills
needed to bring the content of the original
guidance to life.56
The Ministry of Defence is moving quickly
against a fast-evolving threat picture to secure
the benefits of these transformative
technologies. The Ministry of Defence has
rigorous codes of conduct and regulation
which uphold responsible AI use, and is
working closely with the wider
government on approaches to ensure
clear alignment with the values and norms
of the society we represent.
As the CDEI conducts its ongoing work to
address bias in algorithmic decision-making,
the Commission on Race and Ethnic
Disparities recommended that a mandatory
transparency obligation be placed on all
public sector organisations applying
algorithms that have an impact on significant decisions affecting individuals, highlighting the
importance of stewarding AI systems in a
responsible manner to increase overall trust
in their use.
To ensure that citizens have confidence and
trust in how data is being processed and
analysed to derive insights, the Central
Digital and Data Office (CDDO) is
conducting research with a view to
developing a cross-government standard
for algorithmic transparency in line with the
commitment in the National Data Strategy.
The CDDO work is being conducted
collaboratively with leading organisations in AI
and data ethics and it has been informed by a
range of public engagement processes. To
date, no other country has developed a
standard for algorithmic transparency at a
national level. Proactive transparency in this
field will be an extension of the UK’s long
standing open data and data ethics
leadership.
AI risk, safety and long term development
The government takes the long term risk of
non-aligned Artificial General Intelligence, and
the unforeseeable changes that it would
mean for the UK and the world, seriously.
There are also risks, safety and national
security concerns that must be considered
here and now - from deepfakes and targeted
misinformation from authoritarian regimes, to
sophisticated attacks on consumers or critical
infrastructure. As AI becomes increasingly
ubiquitous, it has the potential to bring risks
into everyday life, into businesses and into
national security and defence. So as AI
becomes more general and is simply used in
more domains, we must maintain a broad
perspective on implications and threats, with
the tools to understand its most subtle
impacts, and ensure the UK is protected from
bad actors using AI, as well as risks inherent in
unsafe future versions of the technology itself.
The Office for AI will coordinate cross-
government processes to accurately
assess long term AI safety and risks, which
will include activities such as evaluating
technical expertise in government and the
value of research infrastructure. Given the
speed at which AI developments are
impacting our world, it is also critical that the
government takes a more precise and timely
approach to monitoring progress on AI, and
the government will work to do so.
The government will support the safe and
ethical development of these technologies as well as using powers through the National
Security & Investment Act to mitigate risks
arising from a small number of potentially
concerning actors. At a strategic level, the
National Resilience Strategy will review our
approach to emerging technologies; the
Ministry of Defence will set out the details of
the approaches by which Defence AI is
developed and used; the National AI R&I
Programme’s emphasis on AI theory will
support safety; and central government will
work with the national security apparatus to
consider narrow and more general AI as a top-
level security issue.

Pillar 3 - Governing AI Effectively
Actions:
1. Develop a pro-innovation national position on governing and regulating AI, which will be set out
in a White Paper, to be published in early 2022.
2. Publish the CDEI assurance roadmap and use this to continue work to develop a mature AI
assurance ecosystem in the UK.
3. Pilot an AI Standards Hub to coordinate UK engagement in AI standardisation globally, and
explore with stakeholders the development of an AI standards engagement toolkit to support the
AI ecosystem to engage in the global AI standardisation landscape.
4. Continue our engagement to help shape international frameworks, and international norms and
standards for governing AI, to reflect human rights, democratic principles, and the rule of law on
the international stage.
5. Support the continuing development of new capabilities around trustworthiness, acceptability,
adoptability, and transparency of AI technologies via the national AI Research and Innovation
Programme.
6. Publish details of the approaches which the Ministry of Defence will use when adopting and using
AI.
7. Develop a cross-government standard for algorithmic transparency.
8. Work with The Alan Turing Institute to update the guidance on AI ethics and safety in the public
sector.
9. Coordinate cross-government processes to accurately assess long term AI safety and risks, which
will include activities such as evaluating technical expertise in government and the value of
research infrastructure.
10. Work with national security, defence, and leading researchers to understand how to anticipate
and prevent catastrophic risks.
Next steps
The National AI Strategy proposes three core
pillars which, taken together, are the areas
where the UK can make the biggest impact to
set the country on its way to being an AI and
science superpower fit for the coming decade.
By their nature, strategies are a response to
the moment in which they exist - further
actions will also be required to elaborate on
the paths set out in this document in a way
that responds to the fast-changing landscape
in the years to come. A plan to execute
against the vision set out in this strategy will
be published in the near future. Alongside
this, we will put mechanisms in place to
monitor and assess progress.
We will publish a set of quantitative indicators,
given the far-ranging and hard-to-define
impacts AI will have on the economy and
society. We will publish these indicators
separately to this document and at regular
intervals to provide transparency on our
progress and to hold ourselves to account.
Given the cross-cutting nature of AI,
collaboration across a wide range of sectors
and stakeholders will be paramount. The
Office for AI will be responsible for overall
delivery of the strategy, monitoring progress
and enabling its implementation across
government, industry, academia and civil
society.
We will also continue talking with the wider
community to get their feedback on AI in the
UK. Taken together, this quantitative analysis
and qualitative intelligence will enable us to
track progress and course-correct if we are at
risk of falling short in any particular area.

The government’s AI Council, an independent
expert group formed to represent high-level
leadership of the UK’s AI ecosystem, has
played a key role in reaching a National AI
Strategy and informing its direction. As we
move into an implementation phase, the AI
Council will continue to help galvanise action
from across the ecosystem in fulfilling our
objectives and holding the government to
account on the actions contained in the
strategy. The recently established Office for
Science and Technology Strategy, National
Science and Technology Council and National
Technology Adviser will work with the rest of
government to drive forward Whitehall’s
science and technology priorities from the
centre. As a part of this, we will collectively
identify the technological capabilities required
in the UK and in the government to deliver
the Prime Minister’s global science
superpower ambitions through AI.
Published in September 2021
by the Office for Artificial Intelligence
© Crown copyright 2021
This publication is licensed under the terms of the Open Government Licence
v3.0 except where otherwise stated. To view this licence,
visit: nationalarchives.gov.uk/doc/open-government-licence/version/3
Where we have identified any third party copyright information you will need to
obtain permission from the copyright holders concerned.
ISBN 978-1-5286-2894-5
E02674508 09/21
This publication is available at www.gov.uk/official-documents
Any enquiries regarding this publication should be sent to:
enquiries@dcms.gov.uk