
AI - Introduction to the New ISO42001 Standard




In the vast universe of technology, artificial intelligence stands as an enigma, a mystery that captivates and intrigues. It's the subject of all debates, a topic that leaves no one indifferent. Sometimes admired, sometimes feared, it infiltrates every corner of our existence, unfolding in an almost infinite array of forms and applications. Like a gallery of mirrors, it reflects weak AI and strong AI, reactive AI and conscious AI, manifesting in a myriad of applications from computer vision to natural language processing, robotics, and predictive analytics.


Yet the term "artificial intelligence" is far from new: its first use dates back to the 1950s! Pioneers like Alan Turing laid the groundwork for principles that are still in effect today.

It is in this context that the ISO 42001 standard, published in December 2023, seeks to find its place and to provide a framework for a technological revolution that will undoubtedly leave its mark on future innovations across all sectors of activity.


Presentation of ISO42001

In terms of form, there's nothing new; if you're familiar with the structure of ISO standards, you won't feel lost.

You'll find a section on requirements (which could be likened to the foundations of the management system) and several annexes.


Annex A presents the famous security measures to be implemented (depending on their applicability), while Annex B details the implementation guide for these security measures (ideal for those who are not sure where to start).


Annex C (very practical) allows you to identify potential objectives and sources of risk related to artificial intelligence activities. It's a valuable guide for defining the framework of your AIMS (Artificial Intelligence Management System).


Finally, Annex D reminds us of some principles of applicability of this management system and its interoperability with other management systems like ISO27001 or ISO27701.


What requirements?



Like any management system (security, quality, environment), the first requirement is to understand the context in which your organization operates: its industry, its use cases, the specific regulations that apply. In short, you will need to thoroughly examine the ins and outs of your company's use of AI. This step is crucial: it lays the foundation for your future management system, including quantifiable, measurable objectives around your AI. Why? Simply to know whether your management system meets your expectations and those of your stakeholders (customers, regulators, employees). It might seem obvious, but believe me, it's fundamental to understanding the role of a management system.

In the context of AI, you must conduct (and maintain) a rigorous risk analysis (Annex C will be very helpful here) to identify sources of threats, vulnerabilities, assets and, of course, impacts. A small clarification: the standard does not impose a risk analysis method, but it does recommend consulting ISO/IEC 38507 and ISO/IEC 23894, which are specific to the use of artificial intelligence technologies in businesses. A subtle difference from the traditional ISO 27001 approach is that an additional impact analysis is required alongside the risk analysis. What's the difference, you might ask? In a risk analysis, we typically assess the impact of risks on the organization. In an impact analysis, we consider the consequences for individuals, or even society, in case of misuse of the AI. The viewpoint is different, even if the approach can be shared.
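To make the two viewpoints concrete, here is a minimal sketch of what one line of each register might look like. All field names and rating scales are illustrative assumptions, not prescribed by the standard:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One line of the risk analysis: consequences for the organization."""
    threat: str
    vulnerability: str
    asset: str
    org_impact: int      # 1 (negligible) .. 4 (critical), for the organization
    likelihood: int      # 1 (rare) .. 4 (almost certain)

    @property
    def risk_level(self) -> int:
        # Simple impact x likelihood scoring; the standard imposes no method
        return self.org_impact * self.likelihood

@dataclass
class ImpactEntry:
    """One line of the AI impact assessment: consequences for people or society."""
    ai_use_case: str
    affected_group: str   # individuals, a community, society at large
    consequence: str      # e.g. discrimination, loss of autonomy, misinformation
    severity: int         # 1 .. 4, judged from the affected group's viewpoint

# Same misuse scenario, two different viewpoints:
risk = RiskEntry(
    threat="biased training data",
    vulnerability="no dataset review",
    asset="credit-scoring model",
    org_impact=3,        # fines, reputational damage
    likelihood=3,
)
impact = ImpactEntry(
    ai_use_case="credit scoring",
    affected_group="loan applicants",
    consequence="unfair denial of credit",
    severity=4,          # judged for the individuals, not for the company
)
print(risk.risk_level)   # 9
```

Note how the same scenario (biased training data) yields two distinct entries: one scored against the organization, the other against the people affected.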

For the rest, there are no big surprises; you will find requirements similar to our good old 27001.


Main security controls

In the previous step, we have:

- Established the foundations and the rules of life aboard the ship

- Determined a course to follow (while avoiding the reefs)

- Reviewed the troops and the potential forces

- Conducted a pre-boarding inspection


It's now time to open the toolbox contained in Annex A, namely the security measures.

If I were to caricature, I'd say that the security measures in Annex A are to ISO standards what boots are to motorcycling: they're not mandatory, but they're STRONGLY recommended!

In essence, you could very well deem the majority of security measures inapplicable to your context, but you'll need solid arguments to present on audit day and a risk analysis that demonstrates all those measures are unnecessary. Good luck!

Among the most emblematic measures, I've selected four:


Issue Reporting

Your organization must implement a process that allows anyone using your AI system to report alerts or issues of any kind. This process must allow for anonymous reporting, be easily accessible, and systematically documented.
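A minimal sketch of such a reporting process could look like the following. The function name, fields, and log file are hypothetical illustrations of the three properties the measure calls for (anonymity possible, easy access, systematic documentation):

```python
import json
import uuid
from datetime import datetime, timezone
from typing import Optional

def report_issue(description: str, reporter: Optional[str] = None) -> dict:
    """Record an AI-system issue report; reporter=None keeps it anonymous."""
    record = {
        "id": str(uuid.uuid4()),                        # traceable reference for follow-up
        "received_at": datetime.now(timezone.utc).isoformat(),
        "reporter": reporter or "anonymous",            # anonymity must remain possible
        "description": description,
        "status": "open",                               # every report is tracked to closure
    }
    # Systematic documentation: persist every report (file, ticket system, ...)
    with open("ai_issue_log.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return record

ticket = report_issue("Chatbot gave medical advice outside its intended scope")
print(ticket["reporter"])   # anonymous
```

In practice this would sit behind a web form or ticketing tool, but the essentials are the same: no mandatory identity, a unique reference, and a written trace for every report.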


Data Sourcing

The available documentation must specify the sources of the data used to "feed" your AI. Transparency is crucial, especially to combat biases in artificial intelligence (which are quite similar to classical cognitive biases). The more an AI is fed, the more relevant it becomes, but it's out of the question to feed it just anything. Moreover, the content of data sources may be protected by copyright (hello NY Times).
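One lightweight way to capture this is a "dataset card" recording provenance for each data source. The fields below are an illustrative assumption (inspired by common dataset-documentation practice), not a format imposed by the standard:

```python
# Hypothetical provenance record for one training data source
dataset_card = {
    "name": "customer_support_tickets_2023",
    "source": "internal CRM export",
    "collected": "2023-01-01/2023-12-31",
    "license": "internal use only",          # copyright status must be known
    "known_biases": ["over-represents English-speaking customers"],
    "preprocessing": ["PII removed", "deduplicated"],
}

REQUIRED_FIELDS = {"name", "source", "license", "known_biases"}

def validate_card(card: dict) -> list:
    """Return the provenance fields still missing from a dataset card."""
    return sorted(REQUIRED_FIELDS - card.keys())

print(validate_card(dataset_card))   # []
```

A check like `validate_card` makes it easy to refuse any data source whose origin, license, or bias profile is undocumented before it ever "feeds" the model.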


Human Intelligence

One aspect of these security measures is to ensure that behind this artificial intelligence is a team of qualified experts to handle these new types of environments. In a way, this measure is reassuring as it will force organizations to have competent resources to deal with all aspects of the operation and regulation of technologies whose boundaries are sometimes difficult to define. So, if your goal was to entrust your project to an army of interns coupled with a research tax credit, you're out of luck.


The Art of Documenting

The biggest challenge of this approach will lie in your organization's ability to formalize and document the different stages of your AI's lifecycle and the control steps applied at each change. Here, it's clear that ISO aims to build a system that can be easily audited (especially by authorities). Again, these measures can be considered fair, because the level of complexity is such that it can quickly escape its creators and those in charge of monitoring it. The good news is that you can use AI to help create the documents... you've got it!
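As a sketch of what "documenting the lifecycle" might mean in practice, here is a hypothetical change log where every modification to the AI system records its lifecycle stage, the checks performed, and who approved it (all names are illustrative):

```python
from datetime import date

lifecycle_log = []

def record_change(stage: str, change: str, checks: list, approved_by: str) -> dict:
    """Append one auditable entry to the AI system's lifecycle log."""
    entry = {
        "date": date.today().isoformat(),
        "stage": stage,            # e.g. design, data prep, training, deployment
        "change": change,
        "checks_performed": checks,
        "approved_by": approved_by,
    }
    lifecycle_log.append(entry)
    return entry

record_change(
    stage="training",
    change="retrained model on Q3 data",
    checks=["bias metrics re-run", "performance regression test"],
    approved_by="ML lead",
)
print(len(lifecycle_log))   # 1
```

On audit day, such a log lets you answer "what changed, when, what was checked, and who signed off" for every stage of the system's life.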


Security and AI?

Reading this new standard, one might almost regret the lack of technical security measures to implement, which is why it seems hardly relevant to pursue this certification without implementing an ISO 27001 ISMS in parallel.

For organizations already certified under ISO 27001, adding the 42001 management system would be a formality (provided the security measures in Annex A are respected). For the others, here are some arguments for this integrated approach, whose value lies in the complementarity of the two standards' objectives:

  • Consistency in Risk Management: By implementing them together, you ensure a coherent approach to risk management, covering both general aspects of information security and the specific challenges posed by AI technologies.

  • Optimization of Resources: There are often overlaps in the requirements of these standards, particularly in terms of governance, risk management, and review processes. By integrating them, you can streamline processes and use your resources more efficiently, thus avoiding unnecessary duplications.

  • Improvement of Security and Compliance: This integrated approach strengthens the organization's overall security and compliance posture.

  • Data Management and Privacy: AI often involves processing large amounts of data, including personal data. Combining ISO 27001 or ISO27701 with ISO 42001 allows for better management of privacy and data protection aspects in AI projects.

  • Response to Stakeholder Expectations: Customers, partners, and regulators may have increasing expectations regarding information security and responsible AI management. Joint adoption of these standards demonstrates a strong commitment to best practices in both areas, strengthening stakeholder trust.

  • Responsible Innovation: By implementing ISO 42001 alongside ISO 27001, organizations can ensure that their AI innovations are not only secure but also ethical and compliant with international standards.


Conclusion

Beyond the technical aspects, the joint implementation of these standards is part of a philosophical adventure, a call for ethical reflection on our relationship with technology. It's an invitation to view AI not as a threat but as a mirror of our own humanity, a challenge to our ability to innovate responsibly and shape a future where man and machine advance together, in harmony.

We live in a time when artificial intelligence is no longer just a tool but a daily companion, an extension of our own intelligence and abilities. This coexistence raises deep philosophical questions: What role does AI play in our society? How can we coexist with these digital entities while preserving our human values?

ISO 42001, in symbiosis with ISO 27001, is not just a framework for securing data or regulating systems; it's a step towards a deeper understanding of our responsibility as human beings in an increasingly digital world. By integrating these standards, we're not just protecting information; we're forging a future where artificial intelligence operates not only efficiently but also ethically.


