
Compliance management

Using Saidot, you will be able to:

  • Identify applicable AI policies according to your objectives of procuring, developing, using, and/or selling AI systems.

  • Directly apply ready-to-use AI policy templates to seamlessly address and monitor all the relevant obligations, requirements, and controls for legal compliance and responsible governance of your AI system.

What we mean by Compliance Management

AI governance aims to ensure that AI developed and deployed is safe, responsible, and lawful. Consequently, ensuring AI compliance is a fundamental pillar of effective AI governance.

Compliance management in the AI context refers to a structured framework, methodology, and process for ensuring that business operations follow internal policies and external laws and regulations. Its primary goal is to guarantee that AI is deployed and developed in a legally compliant manner.

Compliance management is an important part of the AI lifecycle management process, and it should proceed alongside AI development and deployment through the various lifecycle stages. AI compliance should always be evaluated and adapted to the context and use of the AI: the system, the model, the product, or other technology.

By incorporating compliance management into AI lifecycle management, organisations can implement governance that scales with the level of risk the AI system poses, enabling right-sized governance.

Compliance Management on Saidot

Compliance management on the Saidot platform focuses on AI policy management and can be divided into three main parts: initial assessment, policy analysis, and application of policies.

What do we mean by AI Policy?

AI policy at Saidot is defined as an umbrella term for the different types of normative documents that shape responsible AI governance. Examples of documents meeting these criteria include AI-related public policies, national AI strategies, technical standards, hard-law instruments (at the national, regional, and international levels), and soft-law instruments such as principles, guidelines, recommendations, declarations, and best practices.

These instruments are introduced, established, or enacted by a wide range of actors and stakeholders in the AI ecosystem, such as governments, international and intergovernmental organisations, industry representatives, standardisation organisations, non-profits, and research and civil society organisations.

Initial assessment

The first part of policy management lays the groundwork for further compliance-based governance tasks. The initial assessment gives a preliminary view of whether the system is potentially subject to requirements from laws or internal policies, and whether it should undergo more detailed compliance-related tasks.

The initial assessment focuses on evaluating the legal and regulatory risks of AI based on its context and use. In the Saidot platform, the system overview guides you when assessing the system's initial risk classification. When making this assessment, consider the regions or jurisdictions where the AI is planned to be taken into use or put into service. Consider also the wider context of the AI, its intended purpose, the context in which it functions, and what potential requirements follow from these. Weigh these considerations when assessing the system's initial risk level.

It is important to note that the initial assessment is only the first step in the process and that the system's risk level should be updated whenever more information surfaces that impacts its risk classification. A more detailed policy analysis will be necessary to identify and confirm the applicable policies or the lack thereof.
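The kind of preliminary screen described above can be sketched as a simple function. This is a hypothetical illustration only: the factor names, thresholds, and risk levels are assumptions for the example, not Saidot's actual classification logic.

```python
from dataclasses import dataclass


@dataclass
class SystemContext:
    jurisdictions: list[str]   # where the AI is planned to be taken into use
    intended_purpose: str
    uses_personal_data: bool
    safety_critical: bool      # e.g. medical, transport, critical infrastructure


def initial_risk_level(ctx: SystemContext) -> str:
    """Preliminary screen only; revisit whenever new information surfaces."""
    # Illustrative heuristics, not a legal determination.
    if ctx.safety_critical:
        return "high"
    if ctx.uses_personal_data or "EU" in ctx.jurisdictions:
        return "medium"
    return "low"


ctx = SystemContext(
    jurisdictions=["EU"],
    intended_purpose="customer support chatbot",
    uses_personal_data=True,
    safety_critical=False,
)
print(initial_risk_level(ctx))  # medium
```

The output of such a screen is only a starting point; the subsequent policy analysis confirms or revises it.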

Policy analysis

Policy analysis helps to identify and analyse the laws and policies that affect your AI system, a task best performed by legal experts.

The first step in the policy analysis process is to identify applicable AI policies and build on the initial assessment, refining the focus to pinpoint the exact laws and policies that apply. When scoping for relevant policies, consider factors such as:

  • Relevant jurisdictions

  • Context of use

  • Intended purpose

  • Function in which the system is used

  • Use of personal data

  • Industry-specific AI laws

  • Organisational policies

The second step in the policy analysis process is to analyse the applicable policies to confirm their application to the system. Assessing the policies is important in order to understand the obligations and rights stemming from them. When assessing the applicability of the policies, consider factors such as:

  • Scope of application

  • Potential exceptions

  • Reinforcement among different regimes and policies

When attempting to understand the obligations applicable to an AI system under a given policy, a good starting point is to understand the scope of the policy and how the organisation developing or taking the AI system into use, as well as the AI itself, falls under the policy’s scope. Both legal experts and the technical experts responsible for the technical implementation of the AI system should come together to understand how the AI policy applies to the system.

It is helpful to consider, for example, what is your organisation's role relative to the AI system? Do you perhaps develop the AI system as a provider or merely take it into use as a deployer?

It is also helpful to understand the type of AI system your organisation is either developing or taking into use, and in what context the AI system will function. The obligations under the AI policy can vary depending on the type of AI system.

Identifying the applicable policies and understanding their application to the AI system is integral to ensuring compliance. This understanding also helps when moving into the third step of policy management, the application of policies.
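The first step of policy analysis, scoping a catalogue of policies down to those that may apply, can be pictured as a filter over jurisdiction and role. The `Policy` class, its fields, and the catalogue entries below are illustrative assumptions, not Saidot's data model or a legal conclusion about any listed policy.

```python
from dataclasses import dataclass


@dataclass
class Policy:
    name: str
    jurisdictions: set[str]    # territories the policy covers
    regulated_roles: set[str]  # e.g. {"provider", "deployer"}


def applicable_policies(catalogue: list[Policy],
                        jurisdictions: set[str],
                        role: str) -> list[Policy]:
    """Keep policies covering at least one target jurisdiction and the
    organisation's role relative to the AI system."""
    return [
        p for p in catalogue
        if p.jurisdictions & jurisdictions and role in p.regulated_roles
    ]


catalogue = [
    Policy("EU AI Act", {"EU"}, {"provider", "deployer"}),
    Policy("Colorado AI Act", {"US-CO"}, {"developer", "deployer"}),
]
print([p.name for p in applicable_policies(catalogue, {"EU"}, "deployer")])
# ['EU AI Act']
```

A real scoping exercise would also weigh context of use, personal data, and industry-specific laws from the list above; this sketch shows only the shape of the filtering step.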

Application of policies

Once the applicable AI policies for your system have been identified and their application to the AI system in question has been clarified, you can start working on ensuring compliance with them.

Saidot enables this through the application of policy templates. A template is a structured framework derived from a policy and includes the requirements outlined in that policy. Templates provide a standardised format for individuals to document and manage their compliance with the relevant policy. Templates are specific to the roles and categories of AI systems or types of actions that are regulated under a policy.

Selecting templates

The policy analysis has helped you to understand the policy's application to your organisation and system. This understanding of your organisation's role relative to the AI system and the AI policy, as well as whether and how the AI system falls under any categories of AI systems or types of actions regulated under the AI policy (known as ‘policy coverage’ on the Saidot platform), helps when selecting the relevant template(s) in the AI policy for your system.

Ensuring compliance through templates

Once the relevant templates have been selected, the AI system’s compliance with the relevant requirements can be evidenced. Each template contains the requirements relevant to a given role and policy coverage category, and this information guides the template's structure, as each template is tailored specifically for that role and coverage category. It is important to note that in some cases, an AI system may have multiple applicable templates from one policy; compliance with each template should be evidenced separately.

The templates consist of requirement sections, requirements, and controls that help structure the relevant obligations under the template. Organisations document and evidence their compliance with the requirements and related controls displayed in the template.


Template

A template is a structured framework derived from a policy. It includes the requirements outlined in the policy and provides a standardised format for individuals to document their compliance with the given policy. Templates allow for an easy and efficient way to document and manage the AI system’s compliance with a given policy. One policy can have one or more templates. Templates are specific to the roles and categories of AI systems or types of actions that are regulated under a policy, and some policies may have multiple regulated roles and policy coverage categories.

Templates consist of requirement sections, requirements, contextual requirements, and controls.

Requirement section

Requirement sections structure a group of related requirements found in a policy and a template. They help structure the policy by grouping together requirements sharing a common topic, e.g. “Risk management” or “Data governance and management”.

Requirement

A requirement is a specific condition, rule, or standard that must be met or adhered to. Policies include requirements that outline the necessary actions, behaviour, or criteria that individuals and/or organisations must follow to comply with the policy. Requirements are displayed in requirement sections.

Contextual requirement

A contextual requirement is a type of requirement that is only applicable in a certain context. Depending on the AI system's intended purpose and use context, contextual requirements can either apply to the AI system or be non-applicable.

Control

A control is an organisation’s action to comply with the respective requirements. Controls are derived directly from requirements.
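The concepts defined above form a simple hierarchy: a template groups requirement sections, which group requirements (some contextual), which carry controls. The sketch below models that hierarchy; the class and field names are assumptions for illustration, not Saidot's schema.

```python
from dataclasses import dataclass, field


@dataclass
class Control:
    action: str           # the organisation's compliance action
    evidenced: bool = False


@dataclass
class Requirement:
    text: str
    contextual: bool = False   # contextual requirements may be non-applicable
    applicable: bool = True    # set False when a contextual requirement does not apply
    controls: list[Control] = field(default_factory=list)


@dataclass
class RequirementSection:
    topic: str            # e.g. "Risk management"
    requirements: list[Requirement] = field(default_factory=list)


@dataclass
class Template:
    policy: str
    role: str             # e.g. "provider" or "deployer"
    coverage: str         # policy coverage category
    sections: list[RequirementSection] = field(default_factory=list)

    def open_controls(self) -> list[Control]:
        """Controls still lacking evidence, skipping non-applicable requirements."""
        return [
            c
            for s in self.sections
            for r in s.requirements
            if r.applicable
            for c in r.controls
            if not c.evidenced
        ]


t = Template("EU AI Act", "provider", "high-risk AI system", [
    RequirementSection("Risk management", [
        Requirement("Establish a risk management system",
                    controls=[Control("Document the risk process")]),
    ]),
])
print(len(t.open_controls()))  # 1
```

Walking a structure like this makes it easy to track which controls still need evidence for a given role and coverage category.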
