AI system risk management
Using Saidot, you will be able to
Adopt risk management practices that are aligned with key AI risk management standards.
Implement a systematic risk management process to identify, document and evaluate risks, create a risk treatment strategy, identify mitigations, and monitor risks.
Make use of an expert-curated risk register to support the identification and monitoring of AI risks efficiently.
Prioritise risks based on systematic evaluation, identify the most relevant risks for your organisation, and avoid over-mitigation by selecting an appropriate treatment strategy for each identified risk.
Our platform’s risk management capability follows established risk management standards and the evolving AI risk management standards landscape. We ease the identification and treatment of the risks associated with AI, including risks related to business operations, data privacy, fundamental rights, cybersecurity, and broader concerns like trust, and societal and systemic impacts.
Risk Management Principles
Risk management principles provide guidance on the key characteristics of effective and efficient risk management, highlight its value, and clarify its purpose and objectives. Our platform implements the following risk management principles, as presented below: integrated; structured and comprehensive; customised; inclusive; dynamic; best available information; human and cultural factors; and continual improvement.
Principle | How does Saidot implement this principle? |
---|---|
Integrated | Risk management is part of standard AI system governance on Saidot, meaning that all systems are supported with assisted risk management regardless of regulatory and policy scope. Furthermore, our platform enables seamless integration of risk management into your organisation's activities by identifying the various roles involved in the AI system and its governance and by encouraging collaboration between them. Our platform is made to be used by a wide group of stakeholders in an organisation, such as AI product teams, legal and compliance departments, and business and process owners. |
Structured and comprehensive | Our platform’s risk management capability follows established risk management standards and the evolving AI risk management standards landscape. Our platform allows you to evaluate risks using standardised risk scales, ensuring consistent and comparable results. |
Customised | We encourage you to adapt our risk management process to fit your organisation’s needs. Risk management should support and align with your organisation’s goals. Tailor our process to best serve your specific context and requirements. For instance, users can create custom risks or make use of and edit Saidot’s expert curated risks and mitigations. |
Inclusive | We believe good governance is a collaborative effort involving contributors from diverse backgrounds and roles. We support the identification of risks across multiple categories and the allocation of risk ownership at the level of individual risks. |
Dynamic | Our platform connects relevant AI system components to provide relevant risks and effective mitigation strategies. It also supports identifying relevant aspects of the risks to enable their better monitoring. Furthermore, our risk management process suggests appropriate evaluations and connects risks to specific evaluations. |
Best available information | Our platform is connected to the Saidot Library, a curated and constantly updated library of AI policies, models, risks and mitigations, and evaluations. Your organisation can benefit from up-to-date recommendations and the best practices in AI risk management. |
Human and cultural factors | Our risk library contextualises risks by categorising them by type and acknowledging that different industries and AI systems can pose unique risks. Additionally, the library covers various risks that impact society, culture, the environment, and individuals. |
Continual improvement | Our platform supports the monitoring of risks, mitigations, and the overall risk management process. This ensures that any changes in the risks or new emerging risks are identified promptly and that any necessary changes to the risk management process can be made efficiently. |
Source: ISO/IEC 31000:2018(en), Risk management — Guidelines, item 4.
Risk Management Process
AI risk management on our platform involves identifying, evaluating, and mitigating the specific risks associated with a particular AI system throughout its lifecycle, rather than assessing the overall risk level of the system.
AI risk management is essential to prevent and reduce possible negative impacts of AI on individuals, organisations, and society as a whole. It ensures that AI technologies are used responsibly and safely, in compliance with relevant regulatory instruments. Effective AI risk management enables organisations to foster trust among users and stakeholders by showing a commitment to addressing potential issues related to AI. Saidot risk management consists of the following risk management process:
Identify risks
Document risks
Evaluate risks
Risk treatment
Assess residual risk
Monitor risks
1. Identify risks
Your organisation can identify relevant risks for the registered AI system on our platform. Our platform recommends risks from the comprehensive Saidot Library based on the system's context and components. These include:
Risks you follow on Saidot Library
Risks associated with the same model your AI system uses
Risks relevant to the industry your AI system operates in
Risks related to the function of your AI system
Risks associated with the type of AI system you employ
Your organisation can also document custom risks on the platform. These may include more contextual or specific risks you have identified as relevant within your organisation. We emphasise the importance of accountability in identifying context-specific risks that the platform might not have captured.
2. Document risks
Your organisation can document the identified risks on the platform. This process involves recording the risk's name, description, owner, risk type, and source. Documenting risks facilitates an organised and accessible way to track and evaluate each risk associated with the AI system. Furthermore, it becomes possible to identify the tasks that are relevant for managing each risk.
Risk name and description
The risk name and its description should be recorded on the platform. Your organisation may use the provided description or modify it to better suit the context of the AI system.
Risk owner
Risk owner refers to the person managing the risk. The risk owner is often the individual or entity within the organisation who is accountable for overseeing the risk and its treatment.
Risk type
Risk Type refers to the category of potential negative outcomes or threats associated with the use of the AI system. Classifying risks by type helps in understanding the nature of the potential harm and facilitates targeted mitigation measures. Your organisation can choose from the following risk types:
Risk type | Description |
---|---|
Legal | Risks related to violations of relevant laws or regulations |
Cybersecurity | Risks of unauthorised access to or attacks on your systems and data |
Environment | Risks that affect environmental sustainability or cause harm to natural ecosystems |
Technical | Risks stemming from technical limitations or failures |
Trust | Risks that can impact the trust between your organisation and its stakeholders, including users and the public |
Fundamental rights | Risks that can impact the fundamental rights of individuals |
Privacy and data protection | Risks involving privacy infringements and improper handling or protection of personal data |
Societal | Risks that can impact society at large |
Third-party | Risks that are introduced by external partners, suppliers, or other third parties |
Business | Risks that impact the business or its operational effectiveness, profitability, or continuity |
Health and safety | Risks that impact or endanger the health and safety of individuals |
Risk source
Your organisation can select the relevant risk source for each risk. Identifying the AI risk source allows your organisation to better understand the factors contributing to and creating the risk. The risk sources on Saidot include:
Risk source | Description |
---|---|
Data | The data sets used to train or operate the AI system. |
Model | The design and function of the model, including its potential limitations. |
Product | The AI-enabled product and its features or possible limitations. |
Use | The uses of AI, potentially leading to misuse or unintended consequences. |
Context | The external environment in which the AI operates that can create or contribute to risks. |
Regulation | The regulatory instruments governing AI, where non-compliance could introduce risks. |
Other | Any other sources of risk. |
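The documentation fields described in this section (name, description, owner, type, and source) can be sketched as a simple record type. This is an illustrative model only, not Saidot's actual data schema; all class and field names are assumptions for the sake of the example.

```python
from dataclasses import dataclass
from enum import Enum


class RiskType(Enum):
    """Risk types from the table above."""
    LEGAL = "Legal"
    CYBERSECURITY = "Cybersecurity"
    ENVIRONMENT = "Environment"
    TECHNICAL = "Technical"
    TRUST = "Trust"
    FUNDAMENTAL_RIGHTS = "Fundamental rights"
    PRIVACY_AND_DATA_PROTECTION = "Privacy and data protection"
    SOCIETAL = "Societal"
    THIRD_PARTY = "Third-party"
    BUSINESS = "Business"
    HEALTH_AND_SAFETY = "Health and safety"


class RiskSource(Enum):
    """Risk sources from the table above."""
    DATA = "Data"
    MODEL = "Model"
    PRODUCT = "Product"
    USE = "Use"
    CONTEXT = "Context"
    REGULATION = "Regulation"
    OTHER = "Other"


@dataclass
class Risk:
    """One documented risk for a registered AI system."""
    name: str
    description: str
    owner: str          # person accountable for the risk and its treatment
    risk_type: RiskType
    source: RiskSource


# Hypothetical example entry:
risk = Risk(
    name="Training data bias",
    description="Skewed training data produces unfair outputs.",
    owner="Head of Data Science",
    risk_type=RiskType.FUNDAMENTAL_RIGHTS,
    source=RiskSource.DATA,
)
```

Recording each risk in a uniform structure like this is what makes the later steps (evaluation, treatment, monitoring) comparable across the AI inventory.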
3. Evaluate risks
Risk evaluation includes assessing the inherent likelihood and impact of each risk, providing a baseline understanding of the severity and probability of possible adverse events. Additionally, evaluating the marginal risk, i.e. how the introduction of AI technology changes the risk, enables your organisation to differentiate AI-specific risks from those already existing in the organisation. This comprehensive assessment helps your organisation effectively prioritise specific risks as well as its risk management resources, ensuring that mitigation efforts are targeted where they are most needed.
Risk criteria
Risk criteria define the benchmarks your organisation uses to assess and measure the significance of risks. Standardising your interpretation of risk is important to maintain consistency in risk evaluation across your AI inventory. Use our explainers as a starting point while establishing your criteria for the use of risk scales.
AI risk criteria should support your organisation in:
distinguishing acceptable from non-acceptable risks;
performing AI risk assessments;
conducting AI risk treatment;
assessing AI risk impacts.
Source: ISO/IEC 42001:2023(en), Information technology — Artificial intelligence — Management system, item 6.1.1.
When setting the risk criteria, your organisation can consider the following:
the nature and type of uncertainties that can affect outcomes and objectives (both tangible and intangible);
how consequences (both positive and negative) and likelihood will be defined and measured;
time-related factors;
consistency in the use of measurements;
how the level of risk is to be determined;
how combinations and sequences of multiple risks will be taken into account;
the organisation’s capacity.
Source: ISO/IEC 31000:2018(en), Risk management — Guidelines, item 6.3.4.3.
Risk impact
Risk impact refers to assessing the outcome of the possible risk scenario. Organisations should analyse the impact of the risk scenario on the organisation, individuals and broader society. We propose a qualitative scale to assess risk impact, where a rating of 5 indicates a severe impact with serious consequences for organisations, individuals or broader society, and a rating of 1 indicates a negligible impact with minimal consequences for organisations, individuals or broader society.
Business impact analysis should assess the extent to which the organisation is affected by considering elements such as:
criticality of the impact;
tangible and intangible impacts;
criteria used to establish the overall impact as determined by the risk criteria.
Impact analysis for individuals should assess the extent to which individuals are affected by the development or use of AI by the organisation, or both, by considering elements such as:
types of data used from the individuals;
intended impact of the development or use of AI;
potential bias impact on an individual;
potential impact on fundamental rights that can result in material and non-material damage to an individual;
potential fairness impact to an individual;
safety of an individual;
protections and mitigating controls around unwanted bias and unfairness;
jurisdictional and cultural environment of the individual (which can affect how relative impact is determined).
Impact analysis for society should assess the extent to which society is affected by the development or use of AI by the organisation, or both, by considering elements such as:
scope of societal impact (how broad is the reach of the AI system into different populations), including who the system is being used by or designed for (for instance, governmental use can potentially impact societies more than private use);
how an AI system affects social and cultural values held by various affected groups (including specific ways that the AI system amplifies or reduces pre-existing patterns of harm to different social groups).
Source: ISO/IEC 23894:2023(en), Information technology — Artificial intelligence — Guidance on risk management, item 6.4.3.2
Impact (consequence) | Description |
---|---|
5 – Severe | Consequences extend beyond the organisation. Serious implications for organisations, individuals or broader society. |
4 – Major | Adversely affects the organisation's capacity to operate. Likely irreversible damage to organisation, affected individuals or broader society. |
3 – Moderate | Noticeably reduces the organisation's operational efficiency. Possible significant consequences for affected individuals or broader society. |
2 – Minor | Degradation in the performance of the organisation with no consequences for affected individuals or broader society. |
1 – Negligible | Minimal impact on the organisation's operations, affected individuals or society. |
Sources: Saidot proprietary
Risk likelihood
Risk likelihood refers to assessing the probability of a risk scenario occurring. We propose a qualitative scale to assess risk likelihood, where a rating of 5 indicates that the risk scenario is almost certain to occur and a rating of 1 indicates that it is rare. The likelihood considerations should take into account several factors influencing the possible risk scenario, such as:
types, significance, and number of risk sources;
frequency, severity, and pervasiveness of threats;
internal factors such as the operational success of policies and procedures and motivation of internal actors;
external factors such as social, economic and environmental concerns;
success or failure of mitigation measures.
Source: ISO/IEC 23894:2023(en), Information technology — Artificial intelligence — Guidance on risk management, item 6.4.3.3.
Likelihood | Description |
---|---|
5 – Almost certain | The likelihood of the risk scenario is very high. |
4 – Very likely | The likelihood of the risk scenario is high. |
3 – Likely | The likelihood of the risk scenario is significant. |
2 – Unlikely | The likelihood of the risk scenario is low. |
1 – Rare | The likelihood of the risk scenario is very low. |
Sources: Adapted from ISO/IEC 27005:2022(en), Information security, cybersecurity and privacy protection — Guidance on managing information security risks
Marginal risk
Marginal risk refers to the change in risk that occurs as a result of the introduction of AI technology. Evaluating the marginal risk of AI systems enables organisations to differentiate the risks associated with introducing AI technology from those existing in the organisation. This can help organisations to develop and choose more precise and effective risk treatment strategies. We propose a qualitative scale to evaluate marginal risk, ranging from significantly higher to significantly lower risk levels due to the introduction of AI technology.
Marginal risk | Description |
---|---|
Significantly higher | Significantly higher risk level due to the introduction of AI technology. |
Somewhat higher | Somewhat higher risk level due to the introduction of AI technology. |
Same | Similar risk level due to the introduction of AI technology. |
Somewhat smaller | Somewhat reduced risk level due to the introduction of AI technology. |
Significantly smaller | Significantly reduced risk level due to the introduction of AI technology. |
Sources: Saidot proprietary
4. Risk treatment
Risk treatment refers to selecting and implementing options for addressing specific risks. The input to risk treatment is the outcome of the risk evaluation: a prioritised set of risks to be treated, ranked by inherent risk score (likelihood x impact). The output of this process is a set of necessary actions to be deployed or enhanced in accordance with the chosen risk treatment strategy. When deployed, these actions modify the risk's impact and likelihood so that the residual risk level is reduced as far as possible and meets the organisation's criteria for acceptance.
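The prioritisation described above reduces to simple arithmetic: each risk's inherent score is its likelihood rating multiplied by its impact rating, using the 1–5 scales from the evaluation section. A minimal sketch (the function name and validation are illustrative, not platform API):

```python
def inherent_risk_score(likelihood: int, impact: int) -> int:
    """Inherent risk score = likelihood x impact, each rated on a 1-5 scale.

    With both factors in [1, 5], the score ranges from 1 to 25.
    """
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be rated on a 1-5 scale")
    return likelihood * impact


# A "likely" (3) risk with "major" (4) impact scores 12 out of a possible 25.
print(inherent_risk_score(3, 4))  # -> 12
```

Sorting the documented risks by this score yields the prioritised treatment queue that this step takes as input.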
Treatment strategies
Your organisation can select the relevant risk treatment option on the platform: avoid, transfer, mitigate or accept.
Treatment strategy | Description |
---|---|
Avoid | Eliminate the risk by either abstaining from the activities that introduce it or by removing its source. |
Transfer* | Allocate the risk to a third party, for example, through contractual agreements or insurance policies. |
Mitigate | Reduce the risk effects by altering its probability or impact. “This means you reduce the risk effects by guided risk mitigation.” |
Accept | Decide to accept the risk by informed decision. “This means you accept the risk without mitigations. Document your risk acceptance justification.” |
*Note: on Saidot, this strategy is implemented through the Mitigate strategy.
Sources: Adapted from ISO/IEC 27005:2022(en), Information security, cybersecurity and privacy protection — Guidance on managing information security risks
Default treatment strategies
Not every treatment option is necessarily suitable for all situations. Based on the evaluated inherent risk level, we propose default treatment strategy options.
Evaluation scale | Evaluation | Treatment strategy options (default underlined) | Description |
---|---|---|---|
Low (green) | Acceptable as is | Accept | The risk can be accepted without further action. |
Medium (yellow) | Tolerable under control | Mitigate / Transfer / Avoid | Risk management should be conducted. |
High (red) | Unacceptable | Mitigate / Transfer / Avoid | Measures for reducing the risk or eliminating it should be taken with high priority. |
Sources: Adapted from ISO/IEC 27005:2022(en), Information security, cybersecurity and privacy protection — Guidance on managing information security risks
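The default-strategy table above can be expressed as a small lookup. Two caveats: the document does not state where the low/medium/high boundaries fall on the 1–25 inherent score, so the cut-off points below are purely illustrative assumptions, and the choice of Mitigate as the default for the medium and high bands is likewise an assumption for the sake of the example.

```python
def evaluation_band(score: int) -> str:
    """Map an inherent risk score (1-25) to an evaluation band.

    The cut-off points (<=6 low, <=12 medium, else high) are illustrative
    assumptions, not Saidot's actual thresholds.
    """
    if score <= 6:
        return "Low"
    if score <= 12:
        return "Medium"
    return "High"


def treatment_options(band: str) -> tuple[list[str], str]:
    """Return (allowed strategies, assumed default) for a band, per the table.

    Low risks are acceptable as is; medium and high risks call for active
    treatment, here defaulting to Mitigate (an assumption).
    """
    if band == "Low":
        return (["Accept"], "Accept")
    return (["Mitigate", "Transfer", "Avoid"], "Mitigate")


# A risk rated likelihood 4, impact 4 scores 16 and lands in the high band.
options, default = treatment_options(evaluation_band(4 * 4))
print(default)  # -> Mitigate
```

In practice your organisation sets these boundaries itself when defining its risk criteria (see the risk criteria section above).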
Mitigation status
For each mitigation, a status can be selected to indicate how far along the organisation is in implementing the measure. Your organisation can choose from planned, in progress, implemented or completed.
Mitigation status | Description |
---|---|
Planned | Mitigation is planned but has not yet been implemented. |
In progress | Mitigation is being implemented. |
Implemented | Mitigation is deployed and operational. |
Completed | Mitigation goals have been achieved, and no further action is required. |
Sources: Adapted from ISO/IEC 27005:2022(en), Information security, cybersecurity and privacy protection — Guidance on managing information security risks
5. Assess residual risk
Residual risk refers to the risk level remaining after the chosen treatment strategy has been carried out. Assessing the residual risk level is crucial for robust risk management: it indicates the effectiveness of the treatment strategy and its effect on the inherent risk level, and it informs decision-making on potential next steps for each individual risk. The scales used to evaluate the residual risk level (impact and likelihood) are the same as those used to evaluate the inherent risk level (see above).
6. Monitor risks
Regular monitoring and review of the risks, their treatments and the risk management process should be planned as part of your organisation’s risk management strategy. Risk monitoring on our platform is conducted through task management activities, such as system re-evaluation, and is facilitated by recommendations tailored to the specific type of risk.
The monitoring process should be ongoing and include all aspects of the risk management process:
to ensure that the risk treatments are effective, efficient and economical in both design and operation;
to obtain information to improve future risk assessments;
to analyse and learn lessons from incidents (including near misses), changes, trends, successes and failures;
to detect changes in the internal and external context, including changes to risk criteria and the risks themselves, which can require revision of risk treatments and priorities;
to identify emerging risks.
Source: ISO/IEC 27005:2022(en), Information security, cybersecurity and privacy protection — Guidance on managing information security risks, item 10.5.1.