Roles and responsibilities in governing AI
Using Saidot, you will be able to:

- Invite colleagues from product, business, process, legal, and compliance teams to collaborate on governing your AI system.
- Assign specific system-level responsibilities to team members to ensure well-informed governance decisions.
- Sign up for role-based notifications, tasks, and access to effectively contribute to the AI system.
- Regularly report the status of your AI system and risk portfolio to your organisation’s governing body.
AI governance is a responsibility that touches a wide group of stakeholders in an organisation. It is a collaboration between AI product teams, legal and compliance departments, and business and process owners. The executive level and board also hold responsibility for pursuing AI opportunities responsibly, by prioritising alignment with corporate values and ensuring quality and compliance. Accountability for AI governance should therefore exist both at the system level (system-level accountability) and at the organisational level, through a Governing Body.
System-Level Accountability: The AI System Team
Defining and assigning system-level responsibilities is key to effective AI accountability. The AI System Team is the group of stakeholders who work closest with AI systems. Members of the team are most familiar with the system’s intended purpose, capabilities, limitations, and context of use.
The AI System Team has the capacity, knowledge, and empirical evidence to make the most well-informed AI governance decisions that will contribute to better accountability and, ultimately, responsible AI.
The overall system-level responsibilities can be defined as follows:
- Drives the design, implementation, and use of AI systems in alignment with company policies and procedures.
- Is responsible for implementing the organisation’s agreed governance actions and controls in the context of an AI system.
- Applies appropriate governance relative to the risk level of a system.
- Prepares and presents the system’s governance documentation for potential conformity review processes.
- Establishes mechanisms for monitoring and oversight of the system throughout the system lifecycle.
Well-defined roles at the system level are key elements of effective AI accountability. Examples include:
| Role | Description |
| --- | --- |
| System Owner | The System Owner is responsible for the overall procurement, development, integration, modification, and operation of the AI system. |
| Business Owner | The Business Owner drives the strategic vision and operational objectives, ensuring the AI system aligns with and supports business goals. |
| Data Scientist | The Data Scientist designs and develops AI systems, or modifies third-party systems, to help the organisation achieve the business goals set for the AI system. |
| Technical Specialist | The Technical Specialist provides expertise in the system's technical architecture and ensures the technical infrastructure supports the AI system's objectives. |
| Data Steward | The Data Steward manages and oversees the quality, integrity, and security of the data used by the AI system. |
| Compliance Specialist | The Compliance Specialist ensures that the system adheres to relevant internal and external AI policies, laws, and regulations. |
| System Reviewer | The System Reviewer assesses the AI system's performance, usability, adherence to specifications and standards, and compliance with regulatory requirements. |
| System Overseer | The System Overseer ensures that the AI system operates within its intended scope and boundaries, addressing any deviations and ensuring compliance with standards and policies. |
| User | The User interacts with the system, utilising its features to complete tasks and providing feedback on its functionality and performance. |
| Other | Depending on the context, other roles can contribute to AI system lifecycle management. |
Organisational-Level Accountability: The Governing Body
At the organisational level, members of the Governing Body are accountable for the decisions made in the organisation, including those made through the use of AI, and for governance and compliance wherever AI is developed or deployed. They are accountable for the organisation’s use of AI and, consequently, for potential AI-related harms.
Clarity, certainty, and ownership are necessary to implement accountability in organisations effectively. Therefore, it’s crucial to assign roles and define responsibilities clearly within the Governing Body.
The responsibilities to be assigned to the Governing Body can be defined as follows:
- Provides direction by setting targets for responsible AI use and approves the strategies needed to achieve those targets.
- Owns the AI policy and company-level procedures and processes for the responsible development and use of AI.
- Evaluates the AI portfolio and intended uses of AI in relation to the organisation’s risk appetite.
- Is responsible for the oversight and outcomes of AI, as well as the adequacy of governance.
- Signs off on high-risk use cases for potential regulatory processes.
- Can be held accountable when inadequate governance, compliance, or enforcement within the organisation leads to AI-related harms.
*Source: Adapted from ISO/IEC 38507:2022(E).*