The State of AI in Australian Enterprises
Investment Is Rising, but Execution Lags
Australian organisations are bullish on AI, yet a striking gap exists between ambition and results. Deloitte's 2026 State of AI in the Enterprise report found that only 65% of Australian respondents plan to increase AI investment next year, compared to 84% globally. Just 12% of Australian leaders report that generative AI is already transforming their business, versus 25% globally.
PwC's 2026 Global CEO Survey paints a similar picture: while 49% of Australian CEOs are confident about revenue growth, only 14% report actual revenue gains from AI — half the global average of 30%. Only 28% of Australian CEOs believe their current AI investment is sufficient to deliver their goals. The gap is not in interest — it is in execution, governance, and the infrastructure needed to move from experiments to production.
The Pilot-to-Production Problem
The numbers are stark. According to Deloitte, 72% of Australian organisations are still stuck in pilot mode — running AI experiments that never reach production. Only 28% have moved more than 40% of their AI pilots into production. One-third remain focused on automating existing processes rather than genuine transformation.
Globally, only 11% of enterprises have moved agentic AI into production, and 42% are still developing their strategy roadmap. As Deloitte's David Alonso put it: Australian leaders must “stop treating AI as isolated use cases, and make the enterprise-wide decisions that turn ambition into a repeatable capability.”
Why Pilots Stall
The primary blockers are not technical — they are governance, trust, and risk:
- 42%Risk and compliance concerns are the top barrier to agentic transformation, cited by 42% of IT leaders.
- Talent and skills gaps prevent widespread adoption in over half of Australian companies.
- 96%Legacy infrastructure and data silos block 96% of organisations from fully using data for AI, with 40% identifying outdated architecture as the top blocker.
- 28%Only 28% of Australian CEOs can attract high-quality AI talent, versus 42% globally.
The pattern is clear: Australian enterprises want AI agents in production, but they cannot get security, compliance, and IT leadership to say “yes” — because the governance layer does not exist.
The Rise of AI Agents — and the Governance Vacuum
Agents Are Already Deployed (Without Governance)
The average enterprise now runs 12 AI agents, a number expected to grow 67% to 20 agents per organisation by 2027. 83% of organisations report that most or all teams have adopted AI agents. But adoption has outpaced control:
50%
of agents operate in complete isolation from each other
72%
deploy agentic systems without any formal governance model
81%
lack documented governance for machine-to-machine interactions
76%
have no audit trail for agentic decisions
9%
have implemented proper Agentic Access Management
Meanwhile, the average enterprise runs 957 applications — yet only 27% of those are integrated. Agents are being asked to bridge this fragmented landscape without guardrails.
The Agentic Governance Collapse
The AI Governance Network describes this as the “Agentic Governance Collapse”: autonomous AI agents that initiate actions, call APIs, authenticate themselves, and make operational decisions previously reserved for humans — with no governance structures, no transparency, and no sector-specific safeguards.
The consequences are already materialising:
- 62% of enterprises experienced an agent-induced incident in 2025 — false escalations, misconfigurations, unintended automation.
- 74% of companies cannot explain how an agent reached its conclusion.
- Agentic AI adoption grew 6.7x in 12 months, while governance frameworks lagged 3–5 years behind deployment velocity.
For Australian government agencies bound by the ISM, APS frameworks, and the mandatory AI governance policy effective December 2025, this gap is not theoretical — it is a compliance risk.
What MCP Gets Right — and Where It Falls Short
The Promise of MCP
The Model Context Protocol (MCP) is an open standard that lets AI agents connect to external tools, APIs, and data sources through a consistent interface. Instead of building custom integrations for every model and tool, MCP provides a shared protocol for discovering and invoking tools. This is genuinely valuable — it reduces development overhead and improves interoperability.
MCP is particularly significant for agentic AI, where models plan, reason, and take actions across external systems. In such architectures, an AI agent may autonomously retrieve data, execute commands, and trigger workflows.
Where Raw MCP Fails the Enterprise
But MCP was designed as a protocol, not a governance layer. Out of the box, it provides:
- Tool discovery and invocation
- A standard transport (SSE, stdio)
- A consistent schema for tool definitions
It does not provide:
- Agent identity or authentication
- Per-agent, per-tool access control policies
- Audit logging of who called what, when, and why
- Approval workflows for high-risk operations
- Data sovereignty controls
- Compliance alignment with any regulatory framework
- Observability across agent fleets
“This expanded capability dramatically increases the security stakes, as MCP's design allows it to act on the user's behalf.”
— Red Hat, 2026 MCP Security Analysis
MCP Security: A Growing Threat Landscape
Academic research cataloguing MCP vulnerabilities has identified 31 distinct attack methods across four classifications: direct tool injection, indirect tool injection, malicious user attacks, and LLM inherent attacks. The security evidence is sobering:
| Threat | Evidence | Impact |
|---|---|---|
| Publicly exposed MCP servers | 492 identified as vulnerable, many bound to 0.0.0.0 without authentication | Arbitrary command execution, data exfiltration |
| Tool poisoning | 5.5% of MCP servers show tool poisoning; 7.2% have general security flaws | Manipulated agent behaviour, hidden malicious instructions |
| OAuth vulnerabilities (CVE-2025-6514) | 437,000+ developer environments compromised | Credential theft, remote code execution |
| Supply chain attacks | Malicious "Postmark MCP Server" found injecting BCC copies of all emails to attacker | Silent data exfiltration across all communications |
| Prompt injection via MCP | GitHub MCP vulnerability demonstrated prompt-injection-driven data theft from private repos | Sensitive data leaked to public repositories |
| Sandbox escapes | Anthropic Filesystem MCP server CVEs allowed full filesystem read/write access | Host compromise, arbitrary code execution |
The core issue: MCP servers often run with high-privilege access, and agents trust external content without sufficient context isolation or fine-grained permission checks. Mitigations require enforcing least-privilege access, implementing runtime security guardrails, and continuously monitoring MCP interactions — capabilities that raw MCP does not provide.
The Australian Regulatory Reality
Australian enterprises — particularly those serving government — face a regulatory landscape that makes ungoverned AI deployment untenable. Several overlapping frameworks now mandate the kind of controls that raw MCP lacks.
Australian Government AI Policy v2.0 — Effective December 2025
The updated Policy for the responsible use of AI in government (v2.0), effective 15 December 2025, introduces mandatory requirements for all non-corporate Commonwealth entities:
- Accountable official(s) designated for AI oversight
- Transparency statements on AI use
- Strategic approach to AI adoption
- AI use case accountability with designated owners
- Internal use case registers for all in-scope AI systems
- AI use case impact assessments prior to deployment
- Staff training on AI
New mandatory requirements begin 15 June 2026, with all remaining requirements in effect by December 2026. For any agency deploying AI agents via MCP, every agent action must be traceable to an accountable owner, logged in a use case register, and assessed for impact. Raw MCP provides none of this.
The Information Security Manual (ISM)
The ASD's Information Security Manual is the foundational cyber security framework for Australian government ICT systems. Key principles directly relevant to AI agent governance:
- ISM PRO-12Personnel (and automated systems) must be granted the minimum access required to undertake their duties.
- ISM PRO-13Robust identity, credential, and access management must control access to systems.
- ISM P8Information communicated between systems must be controlled, inspectable, and auditable.
The December 2025 ISM update introduced specific AI security controls (ISM-2084 through ISM-2093) covering model validation, rate limiting, and output filtering. These are now first-class ISM requirements — not guidelines.
IRAP and Cloud Requirements
The Infosec Registered Assessors Program (IRAP) requires cloud services procured by Australian government agencies to demonstrate compliance with ISM controls through independent assessment. For AI infrastructure deployed on Azure, AWS, or other cloud platforms, MCP gateway components must operate within IRAP-assessed environments and maintain the security controls expected at PROTECTED level. This means governance is not a layer you add later — it must be built into the architecture from the start.
Australia's AI Ethics Principles
The Department of Finance has mapped eight AI Ethics Principles that underpin government AI assurance:
- 01Human, societal, and environmental wellbeing
- 02Human-centred values
- 03Fairness
- 04Privacy protection and security
- 05Reliability and safety
- 06Transparency and explainability
- 07Contestability
- 08Accountability
The NAIC's October 2025 guidance operationalises these into six practices: deciding who is accountable, understanding impacts, measuring and managing risks, sharing essential information, testing and monitoring systems, and maintaining human control. Every principle demands infrastructure that can enforce, log, and demonstrate compliance — infrastructure that raw MCP does not offer.
The Construction and Built Environment Angle
A Sector Ripe for AI — and Exposed to Risk
Construction is one of Australia's most hazardous and least efficient industries, and AI adoption is accelerating. 37% of construction firms now use AI or machine learning, up from 26% in 2023. AI-powered construction revenue in Australia is projected to reach AUD 8 billion by 2031. Companies now use an average of 6.2 to 6.9 technologies per firm, and 64% have invested in digitalisation in the past year.
But construction is also one of the sectors where AI governance failures carry the highest consequences. WHS regulations require full auditability of any AI involvement in safety workflows. Regulators and insurers need to know exactly what the AI did — and what controls were in place when it did it. Uncontrolled agent access to safety and incident management systems is not a technical risk. It is a regulatory and liability risk.
The Governance Difference in Practice
Consider a tier-2 construction company deploying an AI agent to surface safety incidents, flag overdue WHS compliance checks, and assist site supervisors with HSE queries — while connecting to government APIs for weather and ABN validation.
Without Governed MCP
The AI/automation lead builds direct API integrations per agent. Each integration has its own auth model, no shared audit trail, and no approval workflow. Manual audit creation slows the pilot and creates compliance risk. Security blocks deployment.
With ARK360 Governed MCP
MCP Connectors wrap each external API with authentication and logging. A Construction_WHS_ApprovalRequired Policy Preset routes all incident creation through human review. The operations dashboard shows all agent activity in one place, and audit exports satisfy WHS compliance reviews.
The Cost of Inaction
The competitive and regulatory consequences of delaying AI governance are compounding:
Organisations that deploy AI agents without governance infrastructure are not just taking a technical risk — they are taking a compliance, reputational, and operational risk that grows with every agent deployed.
Moving from Gap to Action
The goal is not to slow AI adoption — it is to make it safe enough for production. When security can see what agents are doing, enforce policies on what they're allowed to do, and export audit evidence on demand, the answer to “can we deploy this?” changes from “not yet” to “yes, under these conditions.”
Assess your AI access surface
Inventory the APIs and systems your AI agents need to touch. Identify which are read-only versus mutating, which handle sensitive data, and which fall under regulatory requirements (ISM, WHS, Privacy Act).
Generate governed MCP Connectors
Wrap candidate APIs as MCP Connectors with authentication, logging, and classification built in. Start with one low-risk, read-only API to prove the pattern works end-to-end.
Apply Policy Presets and monitor
Deploy horizontal security presets (block mutating by default, audit all calls) and vertical presets relevant to your industry. Use the operations dashboard to demonstrate compliance to security teams, auditors, and regulators.
Conclusion
The enterprise AI governance gap is real, measurable, and widening. Australian organisations are investing in AI but struggling to move from pilot to production because the governance infrastructure does not exist. Raw MCP provides the protocol layer — the equivalent of TCP/IP for AI agents — but it does not provide identity, policy, audit, or compliance.
For Australian enterprises operating under the ISM, APS AI policy, WHS regulations, and evolving NAIC guidance, ungoverned AI deployment is not a viable option.
Governed MCP infrastructure — MCP Connectors that wrap existing APIs with enterprise controls, Policy Presets that encode best-practice security and compliance patterns, and operations visibility that makes AI agent behaviour auditable and explainable — is the missing layer between demo-ready AI and audit-ready AI.
The organisations that build this layer now will be the ones that move from the 72% stuck in pilot mode to the small minority delivering AI at production scale.