type
Post
status
Published
date
Apr 7, 2026
slug
governing-ai-why-governance-literacy-matters-more-than-technical-expertise
summary
New Zealand boards don't need to understand machine learning — they need to govern it, and three regulatory deadlines in 2026 make the gap between those two things expensive.
tags
AI-governance-NZ
Board-governance
AI-strategy-SME
Privacy-Act-NZ
IPP-3A
AI-readiness
Responsible-AI
NZ-business
Directors
Risk-management
category
AI & Governance

Governing AI: Why Governance Literacy Matters More Than Technical Expertise
On 1 January 2026, Manage My Health notified the Privacy Commissioner of a cyber incident affecting patient records. On 1 May 2026, IPP 3A comes into force, requiring organisations that collect personal information indirectly to notify those individuals. On 3 August 2026, the Biometrics Processing Privacy Code grace period expires.
Three regulatory events in eight months. None of them require your board to understand machine learning. All of them require your board to govern properly.
That's the distinction most boards are missing. The gap isn't technical literacy — it's governance literacy.
The stakes are moving faster than the agenda
New Zealand boards aren't being asked to become data scientists. They're being asked to do what they've always done — manage risk, ensure compliance, steward organisational value. The difference now is that the risks move faster, the compliance landscape shifts under your feet, and the consequences scale at algorithmic speed.
The upside scales too. International research suggests that nearly 60% of executives have seen investment in responsible AI practices improve both return on investment and innovation performance. Yet fewer than half of organisations have formalised AI governance frameworks. That gap between opportunity and oversight is where the damage happens, and where the value is lost.
Consider IPP 3A alone. If your organisation collects personal information from a third party — through data-sharing agreements, AI-powered research tools, referral processes, or even a CRM that enriches contact records — you'll need to demonstrate that the individual knows about the collection, its purpose, and who's receiving it. That's not a technical problem. That's a governance problem, and it needs board-level attention before May.
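To make that obligation concrete, here is a minimal sketch of the audit question a board could ask management to answer. The field names and data are invented for illustration; this is not a compliance tool, just the shape of the check: for every record collected indirectly, can we point to a notification covering collection, purpose, and recipient?

```python
# Hypothetical sketch: flag third-party-sourced records that lack
# evidence of an IPP 3A notification. Field names are illustrative only.

RECORDS = [
    {"id": 1, "source": "direct", "notified": False},
    {"id": 2, "source": "data_broker", "notified": True,
     "notice_covers": {"collection", "purpose", "recipient"}},
    {"id": 3, "source": "crm_enrichment", "notified": False},
]

REQUIRED_NOTICE = {"collection", "purpose", "recipient"}

def ipp3a_gaps(records):
    """Return ids of indirectly collected records without a complete notification."""
    gaps = []
    for r in records:
        if r["source"] == "direct":
            continue  # IPP 3A is about indirect collection
        covered = r.get("notice_covers", set())
        if not r.get("notified") or not REQUIRED_NOTICE <= covered:
            gaps.append(r["id"])
    return gaps

print(ipp3a_gaps(RECORDS))  # record 3 has no notification evidence
```

The point of the sketch is that the question is answerable from ordinary records management, long before anyone needs to understand a model.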
What boards consistently get wrong
Most AI governance failures follow predictable patterns.
Delegating it to IT. Many boards treat AI governance as a technical function — hand it to the data team, tick the box. But when an algorithm trained on biased data denies a loan, or a generative AI tool leaks commercially sensitive information, regulators and shareholders don't call the IT manager. They call the Chair. The board owns the risk. Execution can be delegated. Responsibility cannot.
Waiting for regulation to arrive. New Zealand has signalled a light-touch approach to AI regulation and will not introduce AI-specific legislation. The country relies on technology-neutral frameworks — privacy, consumer protection, human rights — updated as needed. Boards waiting for prescriptive rules will be waiting indefinitely. The regulatory pressure comes through Privacy Commissioner guidance, court decisions, and stakeholder expectations, not a single new Act.
Treating it as a procurement decision. Many boards approach AI the way they approach any technology investment — issue a tender, select a vendor, implement. But AI governance sits at the intersection of strategy, ethics, compliance, and culture. It touches multiple committees and functions. Buying a tool is not the same as governing its use.
Ignoring what's already published. New Zealand released its first national AI Strategy in mid-2025, accompanied by Responsible AI Guidance for Businesses, adopting the OECD's AI Principles — inclusive growth, human rights, transparency, robustness, accountability. The Privacy Commissioner published guidance on AI and the Information Privacy Principles back in 2023, including specific consideration of te ao Māori perspectives on privacy. Any board not actively engaging with these frameworks is operating without a map.
What effective governance actually looks like
Three characteristics separate boards that are governing AI from boards that are being governed by it.
Clear ownership. There is no ambiguity about who owns AI governance at the executive level. A designated committee — whether technology, risk, ethics, or a standing subcommittee — reports directly to the board. Accountability is personal. If something goes wrong, there is no confusion about who answers for it.
Continuous oversight. Many boards treat AI like capital expenditure — approve once, assume it's handled. But models drift, data quality degrades, regulations shift, and competitive positions change. Effective boards receive regular, forward-looking updates on AI initiatives: performance, emerging risks, escalation triggers, and changes in the regulatory or competitive landscape. Point-in-time approval is not governance.
Integration, not isolation. AI governance is not a separate track bolted on to the existing agenda. It is woven into how the board thinks about strategy, risk, ethics, compliance, and management performance. Questions about AI belong on the risk register, in strategy discussions, and in how management is evaluated.
The Privacy Commissioner has been explicit about what this looks like in practice: conduct Privacy Impact Assessments before deploying AI tools. Ensure training data is relevant, reliable, and ethically sourced. Test systems for accuracy and bias. Maintain ongoing monitoring and audit processes. This isn't a technical exercise — it's a governance exercise that requires board-level involvement.
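"Test systems for bias" is the most concretely technical of those steps, and it is worth seeing how simple the first-pass version can be. The sketch below uses toy data and a single fairness metric (the gap in approval rates between groups, one of many measures); real bias testing needs several metrics and domain review, but the board-level question it supports is just "what is the gap, and who reviews it?"

```python
# Illustrative only: a demographic-parity check on model decisions,
# using toy data. Real bias testing uses multiple metrics and expert review.
from collections import defaultdict

decisions = [  # (group, approved) pairs — invented for illustration
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5 — a gap this size belongs on the risk register
```

A number like that gap is exactly the kind of regular, forward-looking signal an effective board asks to see.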
A practical starting point
If your board is navigating this shift — or avoiding it because the ground feels unstable — I've built a starting point.
The AI Governance Scenario Checkup asks nine straightforward questions, takes about three minutes, and gives you a tiered read on where your governance posture sits — reactive, managed, or strategic. The output is a one-page summary you can bring straight into your next board agenda.
No jargon. No sales pitch. Just a clear signal of whether you're governing AI or being governed by it.
Take the checkup → (free, no email required)
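The tiered read works the way any simple scoring band does. The sketch below is a hypothetical illustration of the mechanism, not the checkup's actual scoring; the thresholds are invented.

```python
# Hypothetical sketch of a three-tier scoring band, like the checkup's
# reactive / managed / strategic read. Thresholds are invented for illustration.

def governance_tier(yes_answers, total=9):
    """Map a count of 'yes' answers out of `total` to a posture tier."""
    if yes_answers <= total // 3:
        return "reactive"
    if yes_answers <= 2 * total // 3:
        return "managed"
    return "strategic"

print(governance_tier(2))  # reactive
print(governance_tier(5))  # managed
print(governance_tier(8))  # strategic
```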
The question worth sitting with
If your board had to explain its AI governance posture to a regulator tomorrow — could it?
Not whether you've adopted the latest tool. Not whether your IT team is capable. Could you articulate your governance structure, demonstrate what risks have been identified and how they're managed, and show what the board is doing to ensure compliance with privacy, fair trading, and human rights principles?
If those answers don't come easily, you have a governance gap. And in an environment where IPP 3A takes effect on 1 May, the Biometrics Code grace period ends on 3 August, and the Privacy Commissioner is actively investigating major breaches — that gap is getting expensive.
The invitation is to start now. Before the regulator makes it mandatory. Before the breach happens. Before the market punishes you for inadequate oversight.
Governance isn't about slowing innovation. It's about making innovation sustainable, trustworthy, and yours to control.
Further Reading
New Zealand Sources
- NZ Privacy Commissioner — AI and the Information Privacy Principles (2023, updated). Practical guidance on applying the Privacy Act's 13 IPPs to AI tools, including te ao Māori perspectives on data sovereignty and privacy.
- Ministry of Business, Innovation and Employment — New Zealand's Strategy for Artificial Intelligence and Responsible AI Guidance for Businesses (2025). The government's foundational framework for AI adoption, aligned with OECD principles.
- Bell Gully — Preparing for IPP 3A: New Requirements Effective 1 May 2026. Concise legal briefing on compliance obligations for indirect data collection.
- Simpson Grierson — Regulating AI in New Zealand and Abroad: Mind the (Legal) Gap. Useful comparative analysis of NZ's regulatory position against international frameworks including the EU AI Act.
International Governance
- EqualAI / WilmerHale — AI Governance Playbook for Boards (2026). Four-step framework: assess AI use and risks, establish oversight structures, implement risk protocols, empower teams.
- Harvard Law School Forum on Corporate Governance — How Boards Can Lead in a World Remade by AI (2026). Board composition, committee structures, and the questions directors should be asking.
- Institute of Directors UK — AI Governance in the Boardroom (2025). 12 principles for implementing context-specific AI governance frameworks, drawing on survey data from business leaders.
- Deloitte US — AI Governance for Board Members. Five actions for establishing generative AI governance, with emphasis on continuous oversight.
Books
- Agrawal, A., Gans, J. & Goldfarb, A. (2022). Power and Prediction: The Disruptive Economics of Artificial Intelligence. Harvard Business Review Press. Decision-focused framework for leaders — explains how AI changes organisational decision-making at a strategic level.
- Bozdag, E. & Bennati, S. (2026). AI Governance: Secure, Privacy-Preserving, Ethical Systems. Manning Publications. Practical playbook covering bias, data leakage, prompt injection, and regulatory compliance. Written for practitioners, not academics.
- Susskind, D. (2020). A World Without Work: Technology, Automation and How We Should Respond. Allen Lane. Broader context on AI's workforce implications — relevant for boards considering people-impact governance.
Stephen Mann is an independent management consultant and leadership advisor based in Tauranga, New Zealand. He works with business owners, board directors, and educators across Australasia on practical AI strategy and governance.