Where Is the Infrastructure?
Five questions New Zealand's AI strategy cannot answer - and why the answers will decide whether adoption is real
In July 2025, New Zealand became the last OECD country to publish a national AI strategy. The document is honest about our position as adopters, not developers. But adoption at scale requires infrastructure, accountability, and activation mechanisms the strategy does not provide.
The last to arrive
In July 2025, Science, Innovation and Technology Minister Dr Shane Reti launched *Artificial Intelligence: Investing with Confidence*, New Zealand's first national AI strategy. It arrived last among all OECD nations. That delay is not, in itself, damning. Strategies are only as useful as the commitments they contain, and a considered late entry can outperform a premature one. But the timing does concentrate attention on what the document actually delivers.
The strategy's core thesis is pragmatic and, to its credit, honest. New Zealand will not compete with Google, OpenAI, or the large foundation model developers. Our competitive advantage, the strategy argues, lies in becoming sophisticated adopters: identifying, adapting, and deploying AI solutions to the specific challenges of our economy, from precision agriculture to diagnostic healthcare. This framing avoids the performative sovereignty rhetoric that smaller nations sometimes adopt when they announce plans to "lead" in AI development without the capital, talent base, or compute infrastructure to do so.
But adoption is not a passive posture. Becoming a sophisticated adopter of any general-purpose technology requires deliberate infrastructure investment, regulatory clarity, skills pipelines that match labour market demand, and activation mechanisms that translate awareness into deployment. The strategy addresses some of these requirements. It is silent on others. And the silences matter, because they define the space between aspiration and execution.
What follows are five structural questions the strategy, in its current form, cannot answer. They are not criticisms of its intent. They are invitations to fill the gaps before the gaps become the story.
Question one: where is the compute?
The most striking absence in *Investing with Confidence* is any mention of compute infrastructure. The strategy contains no discussion of data centre policy, energy requirements for AI workloads, GPU access, connectivity as an enabler, or the physical infrastructure that underpins every AI deployment at scale. For a document premised on adoption, this is a significant omission. You cannot adopt what you cannot run.
The gap becomes starker in comparative context. In December 2024, Canada launched its Sovereign AI Compute Strategy with a federal investment of C$2 billion, structured across three pillars: up to C$700 million to grow Canadian AI capacity through new or expanded data centres, up to C$1 billion for transformational public computing infrastructure, and up to C$300 million for an AI Compute Access Fund providing affordable compute to small and medium enterprises. Singapore's National AI Strategy 2.0 explicitly addresses GPU clusters, energy, and data centre capacity, with the government committing S$500 million to secure GPU access for local enterprises. The United Kingdom, before its change of government, had earmarked £1.3 billion for an exascale supercomputer and expanded AI Research Resource.
The trans-Tasman comparison is the sharpest. In late 2025, Microsoft announced A$25 billion in Australian AI infrastructure, security, and skills investment by 2029, its largest-ever commitment in the country. The investment is explicitly aligned to Australia's National AI Plan. It includes expanded Azure AI supercomputing capacity, collaboration with the Australian AI Safety Institute, an expanded cyber shield with the Australian Signals Directorate, and three million Australians trained in workforce-ready AI skills by 2028. Australia's national plan is the framework; private capital is answering it at scale.
Now consider New Zealand's position. The government's entire annual investment in the science, innovation, and technology system is $1.2 billion. Canada's compute commitment alone exceeds that. Microsoft's Australian investment is twenty times larger. This is not to suggest New Zealand should match either figure; the economies are not comparable. But it is to observe that New Zealand's strategy does not even acknowledge compute as a policy domain, let alone propose a response proportionate to our scale.
Without a compute strategy, New Zealand's adoption thesis depends entirely on the pricing, availability, and policy decisions of foreign cloud providers, primarily US hyperscalers. That dependency is not inherently problematic; most nations rely on global cloud infrastructure to some degree. But it carries risks: pricing volatility, data residency concerns, jurisdictional uncertainty, and strategic vulnerability in a geopolitical environment where compute access is increasingly a tool of statecraft. The strategy does not discuss this dependency, its risks, or any mitigation. If adoption at scale is the goal, the question of what infrastructure makes that possible, and who controls it, cannot remain unanswered.
Question two: how will we know if it's working?
A strategy without targets is a description. *Investing with Confidence* announces no measurable adoption goals, no KPIs, no timeline milestones, and no accountability framework for tracking progress. It describes the current state of AI adoption in New Zealand with commendable clarity, drawing on the Datacom State of AI survey, NZIER/Spark research, and international benchmarking. But it does not commit to moving the numbers in any specific direction by any specific date.
Other nations treat measurement as a core strategic function. Canada's Pan-Canadian AI Strategy includes measurable pillars with progress reporting requirements. Singapore publishes annual National AI Programme updates with specific metrics across its activity drivers, people and communities, and infrastructure pillars. The UK's AI Safety Institute, prior to the funding changes, operated under a defined mandate with structured reporting.
New Zealand, meanwhile, has excellent diagnostic data. According to the 2024 Datacom survey, 67% of larger New Zealand businesses now utilise some form of AI, up from 48% in 2023. That trajectory is encouraging. But the NZIER/Spark quarterly survey found that 68% of SMEs have no plans to even evaluate AI. Only 34% of New Zealand workers can explain what AI is. And 43% of non-users cite lack of expertise as the primary barrier to adoption.
These are strong baselines: precisely the data needed to set targets and measure progress against them. Yet the strategy describes the adoption gap without committing to closing it by any specific margin or date. What does success look like in 2027? In 2030? In 2035? Who is accountable for measuring it? These questions do not appear in the document, and without answers there is no mechanism for distinguishing a strategy that succeeded from one that was simply published.
Question three: what activates the missing 68%?
The SME adoption gap is the strategy's most important diagnostic finding. The 68% figure from the NZIER/Spark survey is not merely a statistic; it represents the majority of New Zealand's business community standing outside the AI transition entirely. For context, only 38% of Australian SMEs reported having no plans to adopt AI, according to the Australian Department of Industry, Science and Resources. New Zealand's SME disengagement rate is nearly double that of our closest comparator.
The strategy proposes a set of interventions to address this gap: publishing guidance documents, upskilling Business Mentors NZ, and working through the Regional Business Partner Network, which reaches approximately 5,000 businesses per year. These are reasonable, low-cost actions. They are also awareness-tier interventions applied to an activation-tier problem.
The distinction matters. Awareness means knowing AI exists and understanding, in general terms, what it can do. Activation means committing resources to experimentation, integrating AI into a business process, and evaluating the results. These are categorically different behaviours, and the barriers between them are not primarily informational. SMEs do not adopt new technology because they read a guidance document. They adopt it when they see a peer in their sector demonstrate tangible return on investment, when the cost of experimentation is reduced to a level they can absorb, or when competitive pressure makes inaction untenable.
Other nations have recognised this and deployed demand-side instruments accordingly: co-investment mechanisms, innovation voucher schemes, and sandbox programmes that subsidise SME experimentation with AI tools in controlled environments. New Zealand's strategy contains none of these. Budget 2025 allocated $213 million to tuition and training subsidies, $64 million to STEM and priority areas, and $111 million to enrolments and Youth Guarantee places. These are meaningful supply-side investments in human capital. But they do not directly address demand-side activation for the 68% of SMEs that have not yet decided AI is relevant to their business.
The supply side is getting attention from the private sector as well. Since the strategy's release, Microsoft has doubled its New Zealand AI skilling commitment to 200,000 people by the end of 2028, with three-quarters of the original 100,000 goal already reached. That kind of private capability-building is welcome, and it matters. But it is still fundamentally supply-side: making sure people know how to use AI when they encounter it. It does not change the economics of an SME owner's decision to pilot an AI tool in their business tomorrow morning. Supply-side skilling and demand-side activation are complementary; they are not substitutes. The question is not whether SME owners will eventually encounter AI training opportunities. It is what mechanism, not aspiration, moves them from awareness to adoption.
Question four: who bears the adjustment cost?
The strategy's headline economic case rests on a Microsoft projection: generative AI could add $76 billion to New Zealand's GDP by 2038, equivalent to roughly 15% of GDP. This figure appears in the strategy's opening framing, in the Minister's launch statement, and in accompanying media coverage. It is a large and optimistic number, and the strategy treats it as motivational rather than analytical.
But productivity gains of this magnitude, if they materialise, imply significant labour market restructuring. Some roles will be augmented, with AI tools enabling workers to produce more, faster, or at higher quality. Others will be displaced, with tasks currently performed by people absorbed into automated systems. The net effect is uncertain and will vary considerably by sector, occupation, and region. What is not uncertain is that a $76 billion productivity gain does not distribute itself evenly. Someone benefits. Someone adjusts. The distributional question is not a secondary concern; it is inseparable from the economic case.
The strategy briefly mentions "labour market transformation" and references alignment with OECD AI Principles. But it offers no analysis of displacement risk by sector or occupation, no transition support framework, no sector-specific impact modelling, and no discussion of social safety net implications. This is a notable gap. If the strategy's own headline number is $76 billion in productivity gains, the question of who benefits, who adjusts, and what support exists for the transition is not optional. It is the other half of the economic argument.
New Zealand has navigated significant economic transitions before, from the removal of agricultural subsidies in the 1980s to the structural adjustments following trade liberalisation. Those transitions produced real hardship in specific communities, and the lessons from that history are well documented. Applying those lessons to the AI transition would strengthen the strategy considerably. Ignoring them leaves the distributional question to be answered by default rather than by design.
Question five: what does Māori data sovereignty look like in practice?
The strategy's treatment of Māori data sovereignty is, in comparative terms, more substantive than most national AI strategies' approach to indigenous data rights. It explicitly acknowledges mātauranga Māori as taonga, references the work of Te Puni Kōkiri in exploring how AI interacts with Māori interests, and notes the Centre for Data Ethics and Innovation. These are meaningful inclusions, and they reflect a level of engagement with indigenous data rights that few peer nations attempt.
But the operational mechanisms remain formative. Te Puni Kōkiri is described as "exploring" how AI interacts with Māori interests. The Centre for Data Ethics and Innovation provides guidance but lacks regulatory authority. The strategy acknowledges the importance of Māori data sovereignty without specifying the governance structures, funding commitments, or legal instruments that would give it practical effect.
The gap between principled acknowledgement and operational sovereignty is not abstract. Te Hiku Media, a Kaitāia-based charitable media and technology organisation, has built an automatic speech recognition model for te reo Māori that achieves 92% accuracy, earning global recognition including inclusion in TIME magazine's 2024 TIME100 AI list. Te Hiku Media's work demonstrates what community-led AI development looks like when it is grounded in tikanga and data sovereignty principles. The organisation built its own content distribution platform rather than sign over rights to global platforms, and it developed its models using ethical, transparent methods of speech data collection that keep data sovereignty with Māori.
But scaling these efforts requires more than acknowledgement. It requires sustained funding for indigenous AI research and development, governance frameworks that give iwi and hapū genuine decision-making authority over how AI systems use Māori data and knowledge, and legal standing that makes data sovereignty enforceable rather than aspirational. The question the strategy must eventually answer is how New Zealand moves from principled recognition to operational reality, with structures that ensure Māori communities are not merely consulted about AI but are empowered to direct how it interacts with their taonga.
The space that remains
These five questions are not an indictment of the strategy's intent. The adoption-over-development framing is sound. The diagnostic data is strong. The alignment with OECD AI Principles is appropriate for a nation of New Zealand's scale and position. The Responsible AI Guidance companion document is a practical resource that many businesses will find useful. Taking these elements together, the strategy is a credible first articulation of New Zealand's position in the global AI landscape.
But a strategy is not a description of the current state. It is a commitment to a future state, with mechanisms to get there and accountability for progress. The document as published leans more toward the former than the latter. It tells us where we are with considerable clarity. It tells us where we want to go in broad terms. It is less clear on how we get there, how we will know when we have arrived, and who is responsible for the journey.
The spaces the strategy leaves open are not failures; they are invitations. Compute infrastructure, measurable targets, SME activation mechanisms, labour market transition planning, and the operationalisation of Māori data sovereignty are precisely the domains where civil society, industry groups, research organisations, and Māori institutions can contribute most. The AI Forum New Zealand, with its 230-plus member organisations, is well positioned to lead on several of these. Te Puni Kōkiri and iwi-led technology organisations have the standing and expertise to develop the sovereignty frameworks the strategy acknowledges but does not yet specify.
The most productive reading of *Investing with Confidence* is not as a finished document but as an opening argument that demands answers. New Zealand was the last OECD nation to publish an AI strategy. Whether that late start becomes a lasting disadvantage depends entirely on what happens next: whether the strategy remains a description of good intentions or becomes the foundation for the specific, funded, accountable commitments that turn adoption from an aspiration into a reality. The quality of New Zealand's AI future will not be determined by the strategy we published. It will be determined by the questions we choose to answer after publication, and the seriousness with which we answer them.
*Artificial Intelligence: Investing with Confidence*, MBIE, July 2025. Comparative context: Microsoft's expanded New Zealand skilling commitment and its A$25 billion Australian infrastructure investment.