AI Management Systems

ISO 42001 Annex A in Plain English: A Control-by-Control Walkthrough

Standarity Editorial Team · ISO 42001 Lead Implementers & Auditors
10 min read

When organizations first read ISO 42001, the main clauses (4 through 10) feel familiar — anyone who has worked with ISO 27001 or ISO 9001 recognises the management system architecture. The part that actually shapes day-to-day behaviour is Annex A. It contains the controls you select, implement, and have to evidence during an audit. Skip past Annex A and your AIMS is just a policy document.

Annex A in ISO 42001 is structured into nine control objectives, each with several specific controls underneath. Unlike ISO 27001 Annex A, which leans heavily on technical safeguards, ISO 42001 Annex A places most of its weight on organisational and stakeholder-facing controls — because the dominant risks in AI are not just to the system but to the people affected by it.

A.2 — Policies Related to AI

This is where most implementations should start. You need a documented AI policy that is approved at senior level, communicated, and reviewed at planned intervals. The policy is not a one-pager. It needs to address acceptable use, who can deploy AI, how risk is assessed, and how decisions are made when an AI system shows unexpected behaviour. Auditors look for evidence the policy is actually used — that someone references it during AI procurement, project intake, or incident review.

A.3 — Internal Organisation

Roles, responsibilities, and authorities for AI need to be defined and assigned. In practice this means your RACI for AI cannot end at "the AI team is responsible." Procurement, legal, security, data privacy, and the business units running the AI all play distinct roles in an AIMS. The control also requires you to address conflicts of interest — particularly where the same person decides whether an AI system is safe to deploy and is accountable for hitting its delivery date.
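The conflict-of-interest check in A.3 can be automated once role assignments are recorded. A minimal sketch — the function name, the `(system, activity)` key shape, and the activity labels are illustrative choices of mine, not terms from the standard:

```python
def conflicted_people(accountable: dict[tuple[str, str], str]) -> set[str]:
    """Flag anyone who both signs off safety and owns delivery for the
    same AI system — the conflict of interest A.3 asks you to address.

    accountable maps (system_name, activity) -> accountable person.
    """
    out = set()
    systems = {system for system, _ in accountable}
    for system in systems:
        approver = accountable.get((system, "safety_approval"))
        owner = accountable.get((system, "delivery"))
        if approver is not None and approver == owner:
            out.add(approver)
    return out

assignments = {
    ("support-chatbot", "safety_approval"): "Dana",
    ("support-chatbot", "delivery"): "Dana",   # same person: conflict
    ("credit-scorer", "safety_approval"): "Ravi",
    ("credit-scorer", "delivery"): "Mei",
}
conflicted_people(assignments)  # {'Dana'}
```

Even if you never run code like this, the point stands: the RACI has to be granular enough that this question can be answered per system, not per department.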

A.4 — Resources for AI Systems

A frequently misread set of controls. It covers documented data resources (where your data comes from, how it is curated, what its limitations are), tooling resources, system and computing resources, and human resources (the competence of the people building or operating AI). The point is that you must be able to demonstrate, for any operating AI system, what data fed it, what infrastructure runs it, and who is responsible for it.
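In practice this is a resource register per AI system. A minimal sketch, assuming the four resource categories named above — all class and field names are mine, not vocabulary from the standard:

```python
from dataclasses import dataclass, field

@dataclass
class ResourceRecord:
    """One documented resource backing an AI system (A.4)."""
    category: str          # "data" | "tooling" | "compute" | "human"
    name: str
    description: str
    owner: str             # person accountable for this resource
    limitations: str = ""  # known gaps, e.g. data coverage limits

@dataclass
class AISystemResources:
    """Resource register for a single operating AI system."""
    system_name: str
    records: list[ResourceRecord] = field(default_factory=list)

    def missing_categories(self) -> set[str]:
        """Resource categories with no documented record yet."""
        required = {"data", "tooling", "compute", "human"}
        return required - {r.category for r in self.records}

reg = AISystemResources("invoice-classifier")
reg.records.append(ResourceRecord(
    category="data", name="2019-2023 invoice archive",
    description="Curated from ERP exports", owner="Finance Data Lead",
    limitations="No invoices from the EU subsidiary before 2021"))
reg.missing_categories()  # the evidence gaps an auditor would find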

A.5 — Assessing Impacts of AI Systems

The control that most clearly distinguishes ISO 42001 from older management systems. Before deploying an AI system you must conduct an AI impact assessment that considers effects on individuals, groups, and society — not just operational risks to the organisation. Implementations vary in formality, but the assessment must be repeatable and traceable. A spreadsheet that captures intended use, affected stakeholders, potential harms, mitigations, and residual risk satisfies most certification bodies.

The single biggest gap auditors find: organisations conduct an impact assessment for the flagship AI initiative everyone is watching, then deploy six smaller AI features without one. The standard does not give you a "small project" exception. Build a lightweight intake process so the impact assessment scales down — even a 30-minute structured review beats nothing.
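The spreadsheet columns above translate directly into a record type, which is one way to make the "scales down" intake process concrete. A sketch under my own naming — the risk levels and the deployment gate are illustrative, not prescribed by the standard:

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class ImpactAssessment:
    """Minimal A.5 record: the same fields as the spreadsheet."""
    system_name: str
    intended_use: str
    affected_stakeholders: list[str]
    potential_harms: list[str]
    mitigations: list[str]
    residual_risk: Risk

def ready_to_deploy(assessments: dict[str, ImpactAssessment],
                    system_name: str) -> bool:
    """Gate: no deployment without a traceable assessment, and
    HIGH residual risk needs escalation rather than deployment."""
    a = assessments.get(system_name)
    return a is not None and a.residual_risk is not Risk.HIGH

assessments = {
    "support-chatbot": ImpactAssessment(
        system_name="support-chatbot",
        intended_use="Draft first-line replies to customer emails",
        affected_stakeholders=["customers", "support agents"],
        potential_harms=["incorrect advice sent to a customer"],
        mitigations=["human review before sending"],
        residual_risk=Risk.LOW),
}

ready_to_deploy(assessments, "support-chatbot")  # True
ready_to_deploy(assessments, "pricing-model")    # False: never assessed
```

The second call is the "six smaller AI features" failure mode in one line: a system nobody assessed fails the gate by default, instead of slipping through because nobody asked.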

A.6 — AI System Lifecycle

The A.6 controls cover the full lifecycle: requirements, design, verification, deployment, operation, and retirement. The substantive expectations are documented design choices, traceability between requirements and verification, and explicit decision points for deployment. If you already run a structured ML lifecycle (MLOps), you are most of the way here — the gap is usually in documenting decisions, not in the technical practice.
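Requirement-to-verification traceability is mechanically simple once both sides are recorded. A sketch with hypothetical requirement IDs and evidence names of my own invention:

```python
def untraced_requirements(requirements: set[str],
                          verification: dict[str, str]) -> set[str]:
    """Requirements with no linked verification evidence (A.6).

    verification maps requirement ID -> evidence reference
    (a test report, an evaluation run, a signed review).
    """
    return requirements - verification.keys()

reqs = {"REQ-01", "REQ-02", "REQ-03"}
evidence = {"REQ-01": "test-report-14.pdf", "REQ-03": "eval-run-88"}
untraced_requirements(reqs, evidence)  # {'REQ-02'}
```

Whatever tool holds the links — a tracker, a spreadsheet, a CI job — the audit question is this set difference: which requirements have no evidence attached?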

A.7 — Data for AI Systems

Data provenance, quality, preparation, and the assessment of bias and representativeness. This is the area where most internal-only AI deployments fall short, because organisations have spent years optimising for "use whatever data we already have" rather than "interrogate whether this data is fit for the decision the model will make." Annex A.7 forces that interrogation to be documented.

A.8, A.9, A.10 — Information for Stakeholders, Use of AI Systems, Third-Party Relationships

A.8 covers transparency: the documentation, disclosures, and explanations you provide to users and affected parties about how the AI system works, its limitations, and how its outputs should be interpreted. A.9 covers responsible use within the organisation — handling output appropriately, providing human oversight at the right points. A.10 covers suppliers — so if you procure an AI service rather than build it, you have the contractual and operational controls in place to manage it as part of your AIMS.

The broader principle running through all of Annex A is that AI risk extends beyond the model. Data, suppliers, deployment context, and downstream stakeholders all sit inside the management system. Treating Annex A as a checklist misses the point. Treat it as the structure that ensures every part of how your organisation handles AI is governed, evidenced, and improvable.
