This qualification develops the knowledge and skills needed to strengthen AI governance by supporting safe and compliant AI adoption. It builds the capability to assess the conformance and implementation of an Artificial Intelligence Management System (AIMS) against ISO/IEC 42001 and its family of standards, to understand the concepts in ISO/IEC 22989:2022 as a normative reference in ISO/IEC 42001:2023, and to build a risk management system based on ISO/IEC 23894:2023. As a risk-based standard, ISO/IEC 42001:2023 offers 38 controls that you may determine necessary for managing your AI risks. The qualification provides insight into how to implement these controls where they are needed for the effective implementation of an AIMS, and equips you with the understanding and tools to conduct an AI System Impact Assessment, either in isolation or as part of an ISO/IEC 42001:2023 AI management system.
The AI Governance Professional builds on the AI Governance Practitioner, developing a fuller knowledge base and a more technical understanding.
How will I benefit?
This qualification will help you by giving you the knowledge and skills to:
- Identify and apply the benefits and requirements for an ISO/IEC 42001 management system
- Understand what is needed to develop an AI system
- Understand key terms and definitions associated with AI and AI System Impact Analysis
- Create and develop the framework for your own Artificial Intelligence Management System (AIMS), build a risk management system and build awareness and support across your organization
- Apply the best practice controls in ISO/IEC 42001:2023 with a clear rationale behind the processes and usages associated with the controls
- Understand risk terminology and how it applies in mitigating threats and delays
- Implement the controls more effectively through clear and practical guidance
- Understand what is needed to develop an AI System Impact Analysis process
- Deliver the implementation of AI System Impact Assessments
- Build a toolset of methods that can help identify and assess bias and fairness issues
- Identify potential sources of unwanted bias and the terms used to specify the nature of potential bias
- Address unwanted bias through treatment strategies
- Gain a comprehensive grasp of the fundamental principles and guidelines for building trustworthy AI systems
- Acquire skills to assess and mitigate threats and risks associated with AI decision-making, reducing the potential for unintended negative consequences and bolstering the reliability of AI systems
- Understand the principles of controllability and explainability so you can create AI models and applications that provide clear explanations for their decisions, building trust with end users and stakeholders
- Build a toolset of methods that can help identify robustness issues
- Understand the different types of data perturbations, and their use in the creation of robustness test data sets
- Design workflows to detect and address robustness concerns
- Take steps to ensure that the assessment of robustness is part of the development and deployment of AI systems involving neural networks
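To illustrate the perturbation-based robustness assessment described in the last few outcomes, here is a minimal sketch in Python. It is not part of any ISO/IEC standard; the function names and the toy threshold classifier are illustrative assumptions standing in for a real model, and Gaussian noise is just one of the perturbation types a robustness test data set might use.

```python
import random

def perturb(features, n_copies=20, noise_scale=0.05, seed=0):
    """Build a small robustness test set: noisy copies of one input.

    Gaussian noise is one common data perturbation; others include
    occlusion, rotation, or label-preserving rephrasing for text.
    """
    rng = random.Random(seed)
    return [[x + rng.gauss(0.0, noise_scale) for x in features]
            for _ in range(n_copies)]

def classify(features, threshold=1.0):
    # Stand-in for a real model: predicts 1 if the feature sum
    # exceeds a threshold, else 0.
    return 1 if sum(features) > threshold else 0

def robustness_rate(features):
    """Fraction of perturbed inputs whose prediction matches the
    prediction on the clean input (higher = more robust here)."""
    clean = classify(features)
    perturbed = perturb(features)
    stable = sum(1 for p in perturbed if classify(p) == clean)
    return stable / len(perturbed)

rate = robustness_rate([0.6, 0.7])
print(f"prediction stability under noise: {rate:.2f}")
```

A workflow like the one the outcomes describe would run this kind of check as a gate in development and deployment, flagging inputs whose stability rate falls below an agreed threshold for further review.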