AI Guardrails
Implementing AI carries a number of key risks that require strong policy and governance to mitigate. Together with the AI Principles, these AI Guardrails provide the backbone of the AI Governance Framework.
Safe
Our AI should be built with safety, security and privacy by design.
To ensure AI services do not compromise the confidentiality, integrity or availability of sensitive data or systems, our AI tooling aligns with industry best practice.
Ethical
Our usage will be assessed to ensure that we publish accurate, unbiased AI content that treats everyone fairly.
To ensure RLB’s AI services align with fair and safe values, principles and policies, for the benefit of our stakeholders and society.
Responsible
All individuals and stakeholders have the right to manage their data as it is used by well-governed AI services.
To ensure RLB’s AI systems adhere to compliance guidelines, organisational operating models and a defined code of conduct.
Accountable
Our AI should be used as a tool to support and augment human intelligence, with humans ultimately accountable.
To ensure RLB’s users are responsible and answerable for all resulting AI output.
Transparent
Our AI usage will maximise clarity and explainability around how we use it, its inputs and its outputs.
To ensure RLB’s AI services are trustworthy, they must be open and understandable to all stakeholders, with as much disclosure as possible.
Innovation
We will explore and leverage AI services to innovate, disrupt the market and deliver a better customer experience.
To ensure the use of RLB’s AI services provides a credible business benefit to internal and external stakeholders.