Risks for RLB

We spent time gathering insight from colleagues and customers to help inform and guide RLB's AI Principles, and to ensure all potential risks and dependencies are taken into consideration.

RLB AI opportunities have been prioritised based on their (actual or perceived) associated risk. We explored risks through the lenses of the company, colleagues and customers, with the following risk areas emerging:

  • Customer trust & confidence
  • Unintentional data and intellectual property leakage
  • Large market disruption

Our AI Principles have been prioritised in accordance with the level of perceived risk, and have been used to inform the subsequent recommended AI Guardrails.

A cross-business AI Council should be formed to own the AI Principles, as well as maintaining and upholding the relevant AI Guardrails.

Risk Lens and Risk Groupings

Company

  • Reputational: Confidential data leakage; Confidentiality + NDAs
  • Operational: Data integrity; IP leakage; Lack of internal traceability; Unsafe instruction; Liability & premiums
  • Legal (negligence): Customer data (PII) leakage; Corporate governance breach; Inadvertent negligence; Plagiarism

Colleagues

  • Employee Trust: Lack of AI awareness; Manual checking overhead; Inconsistent results; Mistrust in the value of roles
  • Job Security: Automation anxiety; Reliance on software
  • Talent Retention: Perceived de-skilling; Reduced employee motivation; Employee disempowerment

Customer

  • Erosion of Trust: Intrinsic bias in historical data; Accuracy of output; Explainability of output
  • Brand perception: Data integrity; Carbon copy output; Poor value outcomes
  • Lack of Innovation: Product stagnation; Market disruption; Service commoditisation

Safe

Safe

Our AI should be built with safety, security and privacy by design.

To ensure AI services do not compromise the confidentiality, integrity or availability of sensitive data or systems, our AI tooling aligns with industry best practice.

Accountable

Our AI should be used as a tool to support and augment intelligence, with humans ultimately accountable.

To ensure users are responsible and answerable for all resulting AI output.

Ethical

Our usage will be assessed to ensure that we publish accurate, unbiased AI content that treats everyone fairly.

To ensure AI services align to fair and safe values, principles and policies for the benefit of our stakeholders and society.

Transparent

Our AI usage will maximise clarity and explainability around how we use AI, its inputs and its outputs.

To ensure AI services are trustworthy, they must be open and understandable to all stakeholders, with as much disclosure as possible.

Responsible

All individuals and stakeholders have a right to manage their data used by well-governed AI services.

To ensure our AI systems adhere to compliance guidelines, organisational operating models and a defined code of conduct.

Innovative

We will explore and leverage AI services to innovate, disrupt the market and deliver a better customer experience.

To ensure the use of AI services provides a credible business benefit to internal and external stakeholders.

The value of an AI council


An AI Council provides a practical means of preventing circumvention of the AI Guardrails, and of managing escalations where colleagues have chosen to circumvent them.

To avoid

  • Block unauthorised tools on corporate devices
  • Training & Awareness
  • Sign-post AI policy in current relevant documentation, processes and policies
  • Mechanism to hold people to account

Escalations

  • Revisit and tweak existing escalation processes
  • Revisit the current escalation mechanism to ensure AI scenarios are considered, e.g. the quarterly Crisis & Issues group and the Incident Response team

RLB © 2024 Rider Levett Bucknall
