Introduction to Saidot methodology

Using Saidot, you will be able to:

  • Govern your AI systems effectively at every step of their lifecycle

  • Manage AI-related risks and create actionable treatment plans

  • Identify and document dataset cards related to your AI systems

  • Understand and access relevant knowledge related to your AI systems, including policies, risks, models and evaluations

  • Provide transparency to your internal and external stakeholders

  • Enable effective communication and cross-functional collaboration in AI governance tasks

Scope

AI governance means applying systematic processes and ensuring accountability throughout the AI system lifecycle to deliver high-quality, responsible AI.

AI governance is a framework and set of practices and processes designed to ensure that AI-based systems are developed and deployed responsibly and in compliance with regulatory standards. AI governance covers the full lifecycle of an AI system, from the initiation phase through deployment, operations and monitoring, to retirement. It defines a set of practices focused on ensuring product quality and responsibility, and on elevating the positive impacts of AI.

At its core, AI governance is a means to achieving responsible AI within an enterprise. Responsible AI refers to developing and using artificial intelligence in a way that is ethical, technically robust, and in compliance with legal requirements.

Importance

AI is transforming business processes across industries, from routine work to the most creative and innovative tasks. AI is used to streamline processes, improve decision-making, increase efficiency, and create completely new innovations. Generative AI has been estimated to impact 40% of jobs globally. As AI advances faster than ever, adopting these technologies is becoming essential for businesses to stay competitive.

At the same time, we’re increasingly aware of the limitations and risks of AI. From hallucinations to copyright and contractual infringements, privacy vulnerabilities, and third-party reliance, companies’ risk landscapes contain more AI-related risks than ever.

As AI has quickly moved to the centre of enterprise strategies, companies are turning to AI governance. Effective AI governance ensures that AI systems meet quality standards and deliver value to businesses while building trust between AI providers and users. Moreover, AI governance ensures compliance with the evolving regulatory landscape, such as the EU’s AI Act, protecting organisations from legal and reputational risks.

No two AI systems are alike, but good AI governance must rely on best practices gained through rigorous research, deep expertise and previous experience. We bring these methodologies to you, helping you avoid reinventing the wheel while maintaining adaptability.

Basic principles

Our platform and methodology aim to:

  • Make AI governance work as effective and easy as possible through our extensive Saidot Library knowledge base and Governance platform, with concrete step-by-step guidelines and recommendation features

  • Create a safe space for AI innovation, and thus better business performance, compliance and wellbeing, by increasing the understanding of AI regulation, risks and limitations

  • Allow AI teams to focus on the governance tasks that are relevant for a given phase of the AI system lifecycle

  • Help our customers identify the risks that most warrant attention and, for the same reason, avoid over-mitigating risks

  • Support effective collaboration around AI governance, enabling cross-functional communication, following up on progress and making the best effort with the knowledge available at the time

Right-sized AI governance

The Right-sized AI governance approach is an underlying principle that guides our methodological framework and our AI governance operationalisation and implementation work with our clients, and it is the approach we recommend they apply in their daily AI governance practices. Right-sizing AI governance at Saidot means that all governance efforts should be fit for purpose and context-based. This covers framework design and alignment; establishing robust processes and allocating the necessary tasks (e.g. risk management and compliance management processes); and defining the necessary roles and responsibilities.

The variables that should inform the right-sizing of AI governance are twofold:

  1. Organisational factors, which include an organisation's size and risk appetite, the sector in which it operates, the context of its AI use or development, and its existing governance and compliance practices, structures, entities and culture.

  2. AI-system-specific factors, which include the AI system's risk classification and business impact, its intended purpose and context of use, the technology (models and products), and the data used to train or operate the systems or models.

One size does not fit all in AI governance, and we believe that Right-sized AI governance ensures quality, efficiency, scalability and speed within your AI governance.

Sources and References

Our methodology is based on extensive experience, cutting-edge research, applicable regulations and relevant standards. The sources used and referenced are listed below.

ISO/IEC 22989:2022, Information technology — Artificial intelligence — Artificial intelligence concepts and terminology.

ISO/IEC 42001:2023, Information technology — Artificial intelligence — Management system.

ISO/IEC 27005:2022, Information security, cybersecurity and privacy protection — Guidance on managing information security risks.

ISO/IEC 38507:2022, Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations.

ISO/IEC 23894:2023, Information technology — Artificial intelligence — Guidance on risk management.

ISO 31000:2018, Risk management — Guidelines.

National Institute of Standards and Technology (2023) Artificial Intelligence Risk Management Framework. (Department of Commerce, Washington, D.C.), NIST Technical Series Publications, AI RMF 1.0, 26 January 2023. https://doi.org/10.6028/NIST.AI.100-1

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation). (2016). OJ L, 119. http://data.europa.eu/eli/reg/2016/679/2016-05-04/eng

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). (2024). OJ L, 2024/1689, 12.7.2024. ELI: http://data.europa.eu/eli/reg/2024/1689/oj

Schuett, J. (2023). Three lines of defense against risks from AI. AI & SOCIETY. Advance online publication. https://doi.org/10.1007/s00146-023-01811-0

The Institute of Internal Auditors (IIA). (2013). IIA position paper: The Three Lines of Defense in effective risk management and control.

The Institute of Internal Auditors (IIA). (2024). The IIA’s Three Lines Model: An Update of the Three Lines of Defense. Position Paper, September 2024.
