Public Authority Algorithmic and Automated Decision-Making Systems Bill [HL]
[As Introduced]

A BILL TO

Regulate the use of automated and algorithmic tools in decision-making processes in the public sector; to require public authorities to complete an impact assessment of automated and algorithmic decision-making systems; to ensure the adoption of transparency standards for such systems; and for connected purposes.

Be it enacted by the King’s most Excellent Majesty, by and with the advice and consent of the Lords Spiritual and Temporal, and Commons, in this present Parliament assembled, and by the authority of the same, as follows:—

1 Purpose of this Act


The purpose of this Act is to ensure that algorithmic and automated decision-making systems are deployed in a manner that accounts for and mitigates risks to individuals, public authorities, groups and society as a whole, and leads to efficient, fair, accurate, consistent, and interpretable decisions; and to make provision for an independent dispute resolution service.

2 Systems to which this Act applies

(1) Subject to subsections (3) and (4), this Act applies to any algorithmic or automated decision-making system developed or procured by a public authority from six months after the date on which this Act is passed.

(2) This includes—
(a) any system, tool or statistical model used to inform, recommend or make an administrative decision about a service user or a group of service users, and
(b) systems in development, excluding automated decision-making systems operating in test environments.

(3) This Act does not apply to any automated decision-making system used for the purpose of national security.

(4) This Act does not apply to automated systems which merely calculate and implement formulas, including taxation and budgetary allocation, insofar as they automate a process of calculation which would otherwise be carried out manually and fully understood.

3 Algorithmic Impact Assessments

(1) Prior to deployment of an algorithmic or automated decision-making system, public authorities are responsible for completing an Algorithmic Impact Assessment prescribed in regulations made under this Act.

(2) Subsection (1) does not apply when the algorithmic or automated decision-making system is—
(a) used solely for the formulation of policy in relation to that public authority, and
(b) not expected, in practice, to fully or predominantly determine the content of the policy.

(3) The Algorithmic Impact Assessment must be updated when the functionality, or the scope, of the algorithmic or automated decision-making system changes.

(4) The final Algorithmic Impact Assessment must be published in accessible format within 30 days of the results being known.

(5) The Secretary of State must by regulations prescribe the form of an Algorithmic Impact Assessment framework with the aims of ensuring public authorities—
(a) procure, develop, and implement algorithmic and automated decision-making systems such that the decisions made in and by a public authority are responsible and comply with procedural fairness and due process requirements, and with the authority’s duties under the Equality Act and the Human Rights Act 1998,
(b) assess the impacts of algorithms on administrative decisions, minimise negative outcomes, and evaluate the potential to maximise positive outcomes,
(c) make data and information on the use of algorithmic and automated decision-making systems in public authorities available to the public,
(d) better understand and reduce the risks associated with algorithmic and automated decision-making systems,
(e) introduce the appropriate governance, oversight, and reporting and auditing requirements that best match the risks associated with the application envisaged, and
(f) pursue responsible innovation of algorithmic and automated decision-making systems.

(6) The framework as prescribed by regulations made under subsection (5) must include the requirement for—
(a) a detailed description of the algorithmic or automated decision-making system,
(b) an assessment of the relative benefits and risks of the system, including the risks to the privacy and security of personal information, risks to the safety of a service user or group of service users, and risks and likely impacts on employees of public authorities,
(c) an explanation of the steps taken to minimise those risks,
(d) independent external scrutiny of the efficacy and accuracy of the system, and
(e) a mandatory bias assessment of any algorithmic or automated decision-making system to ensure it complies with the Equality Act and the Human Rights Act 1998.

(7) The Secretary of State must publish regulations made under subsection (5) in draft and consult such persons as they consider appropriate on the draft regulations before laying the regulations before both Houses of Parliament.
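
The elements required by subsection (6) amount to a structured record. By way of illustration only, the following Python sketch models those elements as a simple data structure; the field names and types are editorial assumptions rather than terms of the Bill, since the binding form is whatever the regulations under subsection (5) prescribe.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AlgorithmicImpactAssessment:
    """Illustrative record of the elements listed in section 3(6)."""
    system_description: str       # 3(6)(a): detailed description of the system
    benefits_and_risks: str       # 3(6)(b): relative benefits and risks
    risk_mitigations: str         # 3(6)(c): steps taken to minimise those risks
    external_reviewer: str        # 3(6)(d): independent external scrutiny
    bias_assessment_passed: bool  # 3(6)(e): Equality Act / HRA 1998 bias assessment
    results_known_on: date        # starts the 30-day publication clock in 3(4)

    def publication_deadline(self) -> date:
        # Section 3(4): the final assessment must be published in accessible
        # format within 30 days of the results being known.
        return self.results_known_on + timedelta(days=30)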

4 Algorithmic Transparency Records

(1) Prior to use or procurement of an algorithmic or automated decision-making system, public authorities must complete an Algorithmic Transparency Record prescribed in regulations made under this Act.

(2) Subsection (1) does not apply when the algorithmic or automated decision-making system is—
(a) used solely for the formulation of policy in relation to that public authority, and
(b) not expected, in practice, to fully or predominantly determine the content of the policy.

(3) The Algorithmic Transparency Record must be published in accessible format within 30 days of the completion of the record.

(4) The Algorithmic Transparency Record must be updated when the functionality, or the scope, of the algorithmic or automated decision-making system changes.

(5) The Secretary of State must by regulations prescribe the form of transparency records with the aim of ensuring public authorities increase the transparency of algorithm-assisted decisions.

(6) The Algorithmic Transparency Record as prescribed by regulations made under subsection (5) must include the requirement for—
(a) a detailed description of the algorithmic or automated decision-making system,
(b) an explanation of the rationale for using the system,
(c) information on the technical specifications of the system,
(d) an explanation of how the system is used to inform administrative decisions concerning a service user or group of service users, and
(e) information on human oversight of the system.
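
By way of illustration only, a minimal Python sketch of such a record and its publication follows; the field names and the choice of JSON are editorial assumptions, since the actual form is prescribed by regulations under subsection (5).

import json
from dataclasses import dataclass, asdict

@dataclass
class AlgorithmicTransparencyRecord:
    """Illustrative record of the elements listed in section 4(6)."""
    system_description: str        # 4(6)(a): detailed description of the system
    rationale: str                 # 4(6)(b): why the system is used
    technical_specifications: str  # 4(6)(c): technical specifications
    use_in_decisions: str          # 4(6)(d): how outputs inform decisions
    human_oversight: str           # 4(6)(e): human oversight arrangements

def render_for_publication(record: AlgorithmicTransparencyRecord) -> str:
    # Section 4(3): publish in accessible format within 30 days of completion.
    # Machine-readable JSON is one plausible rendering, not a requirement.
    return json.dumps(asdict(record), indent=2)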

5 Requirements of public sector organisations on use of algorithmic or automated decision-making systems

(1) No later than the commencement of use of a relevant algorithmic or automated decision-making system, a public authority must—
(a) give notice on a public register that the decision rendered will be made, in whole or in part, by an algorithmic or automated decision-making system,
(b) make arrangements for the provision of a meaningful and personalised explanation to affected individuals of how and why a decision affecting them was made, including meaningful information about the decision-making processes, and an assessment of the potential consequences of such processing for the data subject, as prescribed in regulations to be made by the Secretary of State,
(c) develop processes to—
(i) monitor the outcomes of the algorithmic or automated decision-making system to safeguard against unintentional outcomes and to verify compliance with this Act and other relevant legislation, and
(ii) validate that the data collected for, and used by, the system is relevant, accurate, up to date, and processed in accordance with the Data Protection Act 2018, and
(d) make arrangements to conduct regular audits and evaluations of algorithmic and automated decision-making systems, including the potential risks of those systems and steps to mitigate such risks, as prescribed in regulations to be made by the Secretary of State.
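
As one hypothetical instance of the validation duty in subsection (1)(c)(ii), a public authority might run a freshness check over the system's input data before use. The 12-month threshold and the record shape below are assumptions for illustration, not requirements of the Bill.

from datetime import date, timedelta

# Editorial assumption: input records carry a 'last_verified' date.
MAX_DATA_AGE = timedelta(days=365)

def stale_records(records: list[dict], today: date) -> list[dict]:
    """Return the records whose 'last_verified' date exceeds MAX_DATA_AGE."""
    return [r for r in records if today - r["last_verified"] > MAX_DATA_AGE]

A monitoring process under subsection (1)(c)(i) could flag any records this returns for re-verification before the system relies on them.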

6 Training of public sector employees

(1) Public authorities using an algorithmic or automated decision-making system to inform or recommend an administrative decision concerning a service user or group of service users must implement organisational practices and measures to ensure that those applying the final decision have the authority and competence to challenge the system’s output.

(2) Each public authority that uses an algorithmic or automated decision-making system must provide adequate employee training in the design, function, and risks of the system, so that employees can review, explain and oversee its operations in accordance with the principles set out in Schedule 1.

7 Logging

(1) All algorithmic and automated decision-making systems must be designed with logging capabilities enabling the automated recording of events during operation, in line with recognised standards or common specifications, which enable the monitoring of the operation of the system in relation to risks and legal obligations.

(2) Logs referred to in subsection (1) must be held by, or regularly transmitted to, the public authority with responsibility for the functions being exercised in connection with the algorithmic or automated decision-making system.

(3) Public authorities must hold logs for a minimum period of five years, unless a shorter period is strictly necessary for purposes of privacy or security, such period to be determined in advance of the use of the algorithmic or automated decision-making system.

(4) In the case of decision support systems, logs referred to in subsection (1) must record whether or not the final decision taken followed the recommendation of the algorithmic or automated decision-making system.
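
A minimal Python sketch of the event logging this section describes is set out below; the event fields and logger configuration are editorial assumptions, with subsections (3) and (4) supplying only the retention floor and the followed-recommendation requirement.

import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("adm_system")

def log_decision_event(case_id: str, recommendation: str,
                       final_decision: str) -> None:
    """Record one decision event as a structured, machine-readable log line."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "recommendation": recommendation,
        "final_decision": final_decision,
        # Section 7(4): for decision support systems, record whether the
        # final decision followed the system's recommendation.
        "followed_recommendation": final_decision == recommendation,
        "retention_years": 5,  # section 7(3): five-year minimum retention
    }
    logger.info(json.dumps(event))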

8 Prohibition on procuring systems incapable of scrutiny

(1) No public authority shall deploy or use an algorithmic or automated decision-making system where there are practical barriers, including contractual or technical measures and intellectual property interests, limiting its effective assessment or monitoring of the algorithmic or automated decision-making system in relation to individual outputs or aggregate performance.

(2) In assessing their obligations under subsection (1), public authorities should require vendors of algorithmic or automated decision-making systems to—
(a) disclose the results of evaluations carried out on those systems, including evaluations of foundation models used as components within the system, and
(b) on request, submit systems and relevant documentation to the AI Safety Institute for evaluation.

9 Independent dispute resolution service


The Secretary of State must ensure that the ability to—
(a) challenge a decision or class of decisions made by an algorithmic or automated decision-making system, or
(b) obtain redress for a decision or class of decisions made by an algorithmic or automated decision-making system,
is available through an independent dispute resolution service appropriate to the nature of the public authority and the decision or class of decisions in question.

10 Regulations


Regulations under sections 3, 4 and 5 are to be made by statutory instrument and may not be made unless a draft has been laid before, and approved by a resolution of, both Houses of Parliament.

11 Definitions


Schedule 2 contains definitions of terms used in this Act.

12 Extent, commencement and short title

(1) This Act extends to England and Wales.

(2) This Act comes into force six months after the day on which it is passed.

(3) This Act may be cited as the Public Authority Algorithmic and Automated Decision-Making Systems Act 2024.

Schedules

Schedule 1

Section 6

Principles for reviewing, explaining and overseeing operation of algorithmic and automated decision-making systems

1 Employees in public authorities which use algorithmic or automated decision-making systems must review, explain and oversee operations in accordance with the principles and their interpretation as set out in paragraphs 3 to 14.

2 The principles are complementary and should be considered as a whole.

Inclusive growth, sustainable development and well-being

3 Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as—
(a) augmenting human capabilities and enhancing creativity,
(b) advancing inclusion of underrepresented populations,
(c) reducing economic, social, gender and other inequalities, and
(d) protecting natural environments,
in order to support inclusive growth, well-being, sustainable development and environmental sustainability.

Respect for the rule of law, human rights and democratic values, including fairness and privacy

4 AI actors should respect the rule of law, human rights, democratic and human-centred values throughout the AI system lifecycle, including—
(a) non-discrimination and equality,
(b) freedom,
(c) dignity,
(d) autonomy of individuals,
(e) privacy and data protection,
(f) diversity,
(g) fairness,
(h) social justice, and
(i) internationally recognised labour rights.

5 Misinformation and disinformation amplified by AI should be addressed while respecting freedom of expression and other rights and freedoms protected by applicable international law.

6 AI actors should implement mechanisms and safeguards, such as capacity for human agency and oversight, including to address risks arising from uses outside of intended purpose, intentional misuse, or unintentional misuse, in a manner appropriate to the context and consistent with the state of the art.

Transparency and explainability

7 AI actors should commit to transparency and responsible disclosure regarding AI systems, meaning they should provide meaningful information, appropriate to the context and consistent with the state of the art—
(a) to foster a general understanding of AI systems, including their capabilities and limitations,
(b) to make stakeholders aware of their interactions with AI systems, including in the workplace,
(c) where feasible and useful, to provide plain and easy-to-understand information on the sources of data/input, factors, processes and/or logic that led to the prediction, content, recommendation or decision, to enable those affected by an AI system to understand the output, and
(d) to provide information that enables those adversely affected by an AI system to challenge its output.

Robustness, security and safety

8 AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety and/or security risks.

9 Mechanisms should be in place, as appropriate, to ensure that if AI systems risk causing undue harm or exhibit undesired behaviour, they can be overridden, repaired, and/or decommissioned safely as needed.

10 Mechanisms should, where technically feasible, be in place to bolster information integrity while ensuring respect for freedom of expression.

Accountability

11 AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of the art.

12 To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outputs and responses to inquiry, appropriate to the context and consistent with the state of the art.

13 AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on an ongoing basis and adopt responsible business conduct to address risks related to AI systems, including, as appropriate, via co-operation between different AI actors, suppliers of AI knowledge and AI resources, AI system users, and other stakeholders.

14 Risks in paragraph 13 include those related to harmful bias, human rights including safety, security, and privacy, as well as labour and intellectual property rights.

Schedule 2

Section 11

Definitions used in this Act

1 “The Equality Act” means the Equality Act 2010.

2 “Public authority” has the same meaning as in Part 1 of Schedule 19 to the Equality Act.

3 “Algorithmic Impact Assessment” means a framework in the form laid down in regulations made by the Secretary of State under section 3.

4 “Algorithmic decision-making system” or “automated decision-making system” means any technology that either assists or replaces the judgement of human decision-makers.

5 “Procure” means the acquisition by means of a public contract of works, supplies or services by one or more contracting authorities from economic operators chosen by those contracting authorities, whether or not the works, supplies or services are intended for a public purpose.

6 “Test environment” means an environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct an assessment.

7 “Decision support system” means an algorithmic or automated decision-making system used by a public authority that does not intend to base decisions solely upon its outputs.


Lord Clement-Jones

Ordered to be Printed.

© Parliamentary copyright House of Lords 2024

This publication may be reproduced under the terms of the Open Parliament Licence, which is published at www.parliament.uk/site-information/copyright

Published by the authority of the House of Lords