The rapid development and adoption of AI is creating new opportunities for businesses across industries. From predictive analytics and natural language processing to automated decision-making, AI is transforming business operations and the customer experience. However, with this vast potential comes significant risk – especially for compliance leaders who must navigate an ever-changing and complex landscape of emerging regulations, ethical considerations, and governance challenges.
A recent survey by PwC found that while 73% of US executives report using AI in their organizations, only 58% have started assessing AI risks. For risk and compliance leaders, the use of AI often raises more questions than answers: how can organizations ensure transparency in AI decision-making? What safeguards are necessary to protect against AI-driven data breaches? And perhaps most critically, how can they demonstrate responsible AI use to a range of stakeholders?
To help address these challenges, the International Organization for Standardization (ISO) published the ISO 42001 standard in December 2023. This new standard establishes requirements for implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS), offering a structured framework that balances innovation with responsible AI practices.
It’s important to note that ISO 42001 isn’t just for large enterprises – it’s a flexible framework designed for organizations of all sizes that develop, provide, or use AI-based products and services. Whether in the public sector, private industry, or nonprofits, this standard provides a clear and actionable blueprint for effective AI governance.
As the head of GRC for Kandji, I spearheaded our company’s ISO 42001 certification efforts. While the process took several months and consumed significant resources, we’re already seeing dividends from our investment. Certification has not only bolstered our AI governance and AI risk management programs, but also enhanced our credibility with customers and partners by enabling us to demonstrate responsible and ethical use of AI in delivering the Kandji services.
Why ISO 42001 Matters
ISO 42001 emerged from the growing need to manage the risks and ethical challenges associated with AI technology. As organizations become increasingly reliant on AI-driven systems, the lack of standardized governance presents both operational and reputational dangers, be it from data privacy breaches or inadequate safeguards that could lead to unethical decision-making.
At Kandji, we saw the value of the certification early on. Indeed, we didn’t view it merely as a compliance exercise but rather as a critical strategic investment undergirding our future AI roadmap. Here are some of the key reasons we decided it was a worthwhile endeavor:
- AI Risk Management: ISO 42001 helps organizations identify and assess risks associated with AI applications, offering clear guidance on managing potential threats effectively. This type of proactive approach to risk management helps safeguard critical data and operations, reducing the likelihood of legal or reputational damage resulting from AI-related incidents.
- AI Governance: The standard aligns AI practices with relevant legal and regulatory requirements, helping organizations stay ahead of evolving compliance demands while streamlining governance practices to ensure consistent and responsible AI implementation.
- Ethical AI Practices: The standard requires organizations to evaluate the societal impact of AI products and align them with ethical standards and values, helping to build stakeholder trust and address public concerns about the ethical implications of AI.
- Reputational Management: Conforming to ISO 42001 signals a commitment to responsible AI development, which enhances credibility and fosters trust among users, customers, and the general public.
Most important of all, our executive leadership team collectively recognized that ISO 42001 represents a forward-looking approach that will help us avoid legal pitfalls while ensuring that our AI systems operate within the boundaries of established legal guidelines.
5 Lessons Learned from Our ISO 42001 Journey
As one of the first U.S. companies to achieve ISO 42001 certification, we found ourselves navigating uncharted territory without a clear support network or established best practices. Because the standard was so new, there was little precedent to guide us, and we were essentially building the roadmap as we went. While we learned a great deal along the way, five lessons stand out from our certification journey:
1. Understand the Standard and Perform a Gap Assessment Early On:
The first step we took was to deeply understand the requirements and expectations of ISO 42001. This was followed by a comprehensive scoping exercise and gap assessment to identify areas needing improvement or new processes. Early gap assessments helped our team align their strategy with the standard’s requirements and set a clear path forward.
2. Revamp Risk Assessment Processes:
Achieving certification required overhauling our risk assessment framework to align with ISO 23894 (guidance for AI-related risk management) as well as the NIST AI Risk Management Framework. This ensured that AI risk factors were effectively identified, evaluated, and managed, particularly given the fast-evolving nature of AI technologies.
3. Integrate AI-Specific Policies and Procedures:
Certification required us to develop comprehensive AI-specific policies and procedures addressing every aspect of AI governance. This included creating an internal AI use policy, a customer-facing AI acceptable use policy, AI deployment plans, and AI impact assessment procedures, as well as revamping our third-party vendor assessment processes to include provisions for evaluating AI vendors. Integrating these policies into our existing framework ensured that AI development and deployment were conducted responsibly and transparently.
4. Ensure Robust Training and Internal Auditing:
A critical aspect of our journey was updating security awareness training to include AI-specific training for developers, product managers, and users. Additionally, conducting internal audits before the final certification audit helped identify and mitigate potential issues, making the certification process smoother and more efficient.
5. Choose Knowledgeable Auditors and Maintain Cross-Framework Integration:
Our existing ISO 27001 certification presented a unique challenge, as we needed to integrate the Information Security Management System (ISMS) with the new AI Management System (AIMS). Partnering with experienced auditors and certification bodies was essential in navigating these complexities and ensured that integrated audits were conducted efficiently.
Pursuing ISO 42001 certification was not without its obstacles, but the benefits have proven well worth the effort. For compliance leaders considering this journey, the key takeaway is that by getting in front of this now, you will position your organization as a leader in responsible AI innovation while safeguarding your reputation as a trustworthy provider.