AI Guardrails

There are a number of key risks associated with implementing AI, which require strong policy and governance to mitigate. Along with the AI Principles, these AI Guardrails will provide a backbone for the AI Governance Framework.

Safe

Our AI should be built with safety, security and privacy by design.

To ensure AI services do not compromise the confidentiality, integrity or availability of sensitive data or systems, our AI tooling aligns with industry best practice.

Do
  • Complete RLB's Data Risk Assessment and ask for explicit managerial permission if you are using personal data in AI.
  • Ensure that you are aware of policies, processes and guidelines around the use of AI at RLB.
  • Ensure that you are aware of any legal restrictions on client data when using AI.
  • Ensure that you understand the terms of use and privacy policy of the system you are using, and that it has been approved by RLB AI governance.
  • Remember that all other relevant RLB policies apply to any content that is generated via AI.
Don't
  • Assume public AI services are safe from malware, hacking or data breaches.
  • Input non-public, commercially sensitive, proprietary or confidential client or RLB information into public AI services.
  • Submit queries to public AI services that could cause you, clients or RLB embarrassment or reputational harm were they to be made public.

Ethical

Our usage will be assessed to ensure that we publish accurate, unbiased AI content that treats everyone fairly.

To ensure RLB’s AI services align with fair and safe values, principles and policies for the benefit of our stakeholders and society.

Do
  • Look for evidence of potential bias in AI service output.
  • Regularly evaluate the output of AI against RLB ethical standards.
  • Monitor the level of bias within our AI services and take action based on this monitoring.
  • Ensure AI, including Gen AI, is used ethically.
Don't
  • Use AI services that amplify or perpetuate personal biases, such as gender, race or socioeconomic biases.
  • Deploy AI services that result in discriminatory outcomes for certain individuals or groups.

Responsible

All individuals and stakeholders have a right to manage their data used by well governed AI services.

To ensure RLB’s AI systems adhere to compliance guidelines, organizational operating models and a defined code of conduct.

Do
  • Encourage responsible use of AI to create original and transformative content and other material that respects intellectual property rights.
  • Ensure attribution when using other content creators' content or other material within RLB's AI.
  • Ensure that AI systems do not infringe upon the rights of external third-party content creators.
  • Ensure you're using RLB-approved AI tools.
Don't
  • Use AI services to generate or distribute content without appropriate permissions.
  • Present AI-generated content as original work when it is derived from or influenced by others' intellectual property.
  • Use AI services to create and distribute content contrary to RLB's and clients' values and policies.

Accountable

Our AI should be used as a tool to support and augment intelligence, with humans ultimately accountable.

To ensure RLB’s users are responsible and answerable for all resulting AI output.

Do
  • Experiment with AI as a tool to enhance your productivity and assure the quality of your work.
  • Validate business content created or modified by AI services with a human read-through.
  • Independently verify any factual information offered by the AI service.
  • Take ownership of the quality of business content created or modified by AI services, and ensure it complies with RLB and client policies.
  • Ensure that you understand the data lineage used for AI services (what sources the AI service provider used to train its model).
Don't
  • Rely solely on AI for decision-making or to replace existing standards.
  • Create or purchase an AI service without agreeing a responsible owner and an appropriate support model.
  • Experiment with AI without accountability for inputs and outputs.

Transparent

Our AI usage will maximise clarity and explainability around how we use it, its inputs and its outputs.

To ensure RLB’s AI services are trustworthy, they must be open and understandable to all stakeholders, with as much disclosure as possible.

Do
  • Be transparent with customers when AI has generated original content.
  • Feel comfortable sharing concerns, questions or suggestions around the use of AI.
  • Document AI purpose, process and value, including limitations, uncertainties and any potential bias.
  • Enable user control over data collection and processing.
  • Regularly assess AI processes to ensure that you are being transparent and compliant in their use.
Don't
  • Obscure or hide how AI is influencing content.
  • Use AI services where we don't have a reasonable understanding of how the AI processes information.

Innovation

Explore and leverage AI services to innovate, disrupt the market and deliver a better customer experience.

To ensure the use of RLB’s AI services provides a credible business benefit to internal and external stakeholders.

Do
  • Explore and experiment with AI, but within business-safe parameters.
  • Keep up to date with AI, as it is a fast-moving technology; GenAI especially is new to the consumer market and is evolving rapidly.
  • Pursue AI initiatives that will deliver business value and align with business objectives.
Don't
  • Prioritise innovation at the expense of the other five principles.
  • Waste time building AI systems without clearly defined use cases, business value and an approved budget.
  • Introduce new AI services without first exploring what RLB already has access to.

RLB © 2024 Rider Levett Bucknall
