AI Governance

Five GenAI Governance Questions Your Board Will Ask Next Quarter

Standarity Editorial Team · AI Governance Practitioners & Board Advisors

Two years ago, most board conversations about AI were exploratory. Directors wanted to understand the technology. Today the conversation has shifted decisively to governance. Boards are not asking what GenAI is — they are asking how the organisation is using it, what risks are accepted, who owns those risks, and what would happen in a reasonable worst case. The shift means executive teams need precise, evidenced answers instead of strategy slogans.

1. What AI Are We Already Using, and Who Approved It?

This is the question most organisations answer poorly. The official inventory covers a handful of flagship initiatives. The reality includes embedded AI in SaaS products that arrived with vendor updates, employees pasting confidential information into public chatbots, and analytics platforms that quietly added LLM-powered features. A defensible answer requires an actual AI inventory covering both built and bought systems, with an approval status and a named owner for each.
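The inventory described above can be sketched as a simple registry. This is a minimal illustration, not a standard schema: the record fields, enum values, and system names are all assumptions chosen to show the shape of the data a board-ready inventory needs.

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    BUILT = "built"      # developed in-house
    BOUGHT = "bought"    # vendor product or embedded SaaS feature

class Approval(Enum):
    APPROVED = "approved"
    PENDING = "pending"
    UNKNOWN = "unknown"  # discovered in the estate, not yet reviewed

@dataclass
class AISystem:
    name: str
    origin: Origin
    owner: str           # a named individual, not a department
    approval: Approval = Approval.UNKNOWN

# A defensible inventory covers built and bought systems alike,
# including features that arrived quietly with vendor updates.
inventory = [
    AISystem("claims-triage-model", Origin.BUILT, "J. Smith", Approval.APPROVED),
    AISystem("crm-llm-assistant", Origin.BOUGHT, "A. Patel"),  # vendor update
]

# The systems a board will ask about first: anything not yet approved.
unapproved = [s.name for s in inventory if s.approval is not Approval.APPROVED]
print(unapproved)  # → ['crm-llm-assistant']
```

The point of the default `UNKNOWN` status is that discovery and approval are separate steps: a system enters the inventory the moment it is found, not the moment it is blessed.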

2. What Is the Worst-Case Scenario?

Boards are not looking for catastrophising — they are looking for evidence that the executive team has thought it through. A worst-case answer covers a confidentiality breach (sensitive data leaked through a model), a wrong-decision scenario (an AI-driven decision that harms a customer or breaches a regulation), and a reputation event (public failure that becomes a story). For each, the board wants to know the existing controls, the residual risk, and what would be done if it occurred.

3. Who Owns AI Risk Across the Organisation?

The wrong answer is "the AI team." The wrong answer is also "everyone." The right answer assigns specific accountability across procurement (vendor due diligence), legal (contractual and regulatory exposure), security (technical safeguards and incident response), data privacy (personal data handling), and the business unit deploying the AI (operational responsibility). Each role with a name. The board is checking that you have actually thought about the seams.

Boards have started bringing AI questions to internal audit and external advisors before the executive presentation. If your CFO walks into the meeting and discovers the audit committee has already heard a different framing of the AI programme from a third party, the conversation will go badly. Brief the audit committee chair before the full board meeting.

4. How Are We Measuring This?

Boards want metrics, not anecdotes. Useful AI governance metrics include:

  • number of in-scope AI systems
  • percentage with completed impact assessments
  • percentage with human-in-the-loop oversight at high-risk decision points
  • AI-related incidents in the past quarter and their resolution status
  • supplier AI risk assessments completed for vendors handling sensitive data

None of these need to be perfect. They need to be real.
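Computing these numbers from an inventory is deliberately unglamorous. A minimal sketch, assuming illustrative record fields (`impact_assessed`, `human_in_loop`, `high_risk`) that are not a standard schema:

```python
# Illustrative inventory records; field names and data are assumptions.
systems = [
    {"name": "claims-triage", "impact_assessed": True,  "human_in_loop": True,  "high_risk": True},
    {"name": "crm-assistant", "impact_assessed": False, "human_in_loop": False, "high_risk": False},
    {"name": "fraud-scoring", "impact_assessed": True,  "human_in_loop": False, "high_risk": True},
]

def pct(flag, pool):
    """Percentage of records in `pool` where `flag` is true."""
    return round(100 * sum(1 for s in pool if s[flag]) / len(pool)) if pool else 0

# Human-in-the-loop oversight only matters at high-risk decision points,
# so that percentage is computed over the high-risk subset.
high_risk = [s for s in systems if s["high_risk"]]

metrics = {
    "in_scope_systems": len(systems),
    "pct_impact_assessed": pct("impact_assessed", systems),
    "pct_high_risk_with_human_oversight": pct("human_in_loop", high_risk),
}
print(metrics)
# → {'in_scope_systems': 3, 'pct_impact_assessed': 67,
#    'pct_high_risk_with_human_oversight': 50}
```

The numbers are small and imperfect by design: three systems, 67% assessed, 50% of high-risk decisions with oversight is a real answer a board can interrogate, which a polished dashboard often is not.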

5. What Framework Are We Aligning To?

Boards have noticed the convergence around ISO 42001, the NIST AI Risk Management Framework, and the EU AI Act. They want to know which one (or which combination) the organisation is aligning to, why that choice was made, and what the timeline looks like. "We are working on it" is not an acceptable answer in 2025. "We are aligning to ISO 42001 with NIST AI RMF as our risk-thinking layer; first impact assessments completed for our highest-risk systems; targeting certification in 18 months" is.

A board-ready answer, in summary:

  • AI inventory: built and bought, owned and approved
  • Worst-case scenarios: documented, mitigated, monitored
  • Risk ownership: roles with names, not departments
  • Metrics: a small set of meaningful numbers, not a dashboard for its own sake
  • Framework alignment: a chosen path with a credible timeline
