AI policy

Using Saidot, you will be able to

  • Build your organisation’s AI policy tailored to your business needs, anchored in your culture and values, and aligned with other relevant internal policies (e.g. Responsible Business Conduct, Corporate Sustainability).

What is an AI policy?

AI policy is a broad “umbrella term” that encompasses various types of policies - both private and public, internal and external - across multiple levels, including organisational, sectoral, national, and international.

Within Saidot's AI governance methodology, designing and implementing the organisation's high-level AI policy belongs to this phase of the framework. Compliance with all AI policies, both internal and external, is then ensured through compliance management.

Importance of AI policy

AI policy forms the basis for the development and operation of responsible AI within an organisation. It can be described as the organisation’s formally expressed commitment and direction to act responsibly in its roles as a developer and deployer of AI. An AI policy is always unique to a given organisation, as it guides the use and development of AI in that specific organisation.

As a high-level document, the AI policy sets a standard of responsibility through objectives, principles, and measures that apply when developing or deploying AI. Put simply, the AI policy guides, supports, and directs the development and use of AI in the organisation while recognising business considerations and objectives, as well as high-level legal obligations. This is why an AI policy is generally recommended as the first step when building an AI governance framework.

Elements of AI policy

It is generally recommended that the AI policy be created as a written document. Documenting the AI policy makes it easier to communicate throughout the organisation and keeps it readily available to all relevant parties.

A solid AI policy is tailored to the needs and context of the organisation, is targeted and actionable, and provides high-level guidance. When drafting an AI policy, we recommend including at least the following elements (an illustrative sketch follows the list):

  • Objectives: What does your organisation want to achieve through the responsible use and development of AI?
  • Scope: What types of processes and types of AI are covered by the policy?
  • Principles: Which values and guiding principles direct your organisation's AI activities?
  • Measures: Which measures are used to ensure that AI usage and development align with the identified objectives and principles?
  • Alignment: How does this policy relate to other organisational policies, and how is consistency and alignment maintained among them?
  • Ownership and improvement: Who is responsible for the policy, and how will it be monitored, updated, and improved over time?
  • Operationalisation: How will this policy be implemented in everyday practice?
  • Handling deviations: How will deviations from or non-compliance with the AI policy be addressed?
  • Controls: What controls are in place to measure and monitor the implementation of the AI policy?
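
For teams that track governance artefacts in a structured form, the elements above can also be captured as a simple outline. The following Python sketch is purely illustrative: the class, field names, and helper are hypothetical and simply mirror the recommended elements; they are not a Saidot data model or API.

```python
from dataclasses import dataclass, field

# Illustrative outline only: field names mirror the recommended AI policy
# elements above; they are not a Saidot data model or API.
@dataclass
class AIPolicyOutline:
    objectives: list[str] = field(default_factory=list)   # what responsible AI use should achieve
    scope: str = ""                                        # processes and types of AI covered
    principles: list[str] = field(default_factory=list)   # values guiding AI activities
    measures: list[str] = field(default_factory=list)     # how objectives and principles are upheld
    alignment: str = ""                                    # relation to other organisational policies
    ownership_and_improvement: str = ""                    # owner plus monitoring and update practice
    operationalisation: str = ""                           # how the policy is applied in everyday practice
    handling_deviations: str = ""                          # how non-compliance is addressed
    controls: list[str] = field(default_factory=list)     # how implementation is measured and monitored

    def missing_elements(self) -> list[str]:
        """Names of recommended elements the draft has not yet filled in."""
        return [name for name, value in vars(self).items() if not value]


# Example: a partial draft immediately shows which elements still need work.
draft = AIPolicyOutline(
    objectives=["Use AI in line with our values and legal obligations"],
    scope="All AI systems developed or deployed by the organisation",
)
print(draft.missing_elements())
```

Whether or not such a structured representation is used, the point is that every recommended element should be explicitly addressed rather than left implicit.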

Implementation of AI policy

Implementing the AI policy requires well-established and actionable measures, including clear processes, tasks, and designated roles and responsibilities. Furthermore, the policy should undergo regular reviews at planned intervals, and additional assessments as needed, to ensure its ongoing suitability, adequacy, and effectiveness.
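
As one concrete illustration of reviews at planned intervals, the sketch below checks whether a policy's next review is overdue. It is only an example under assumed parameters: the 12-month interval and the function name are hypothetical, not a requirement from Saidot or ISO/IEC 42001.

```python
from datetime import date, timedelta

# Hypothetical example: a yearly review interval is assumed here; the actual
# interval should be set by the organisation's own governance framework.
REVIEW_INTERVAL = timedelta(days=365)

def review_due(last_reviewed: date, today: date | None = None) -> bool:
    """Return True if the planned review interval has elapsed since the last review."""
    today = today or date.today()
    return today - last_reviewed >= REVIEW_INTERVAL

# Example: a policy last reviewed in January 2024 is due again by early 2025.
print(review_due(date(2024, 1, 15), today=date(2025, 2, 1)))  # True
```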

The policy is best implemented through an established AI governance framework.

Source: Adapted from ISO/IEC 42001:2023, Information technology — Artificial intelligence — Management system.
