Playbooks

The patterns that get AI approved and shipped.

Step-by-step guides for the problems that actually block AI adoption in mission-critical environments. Not theory — named patterns that map directly to real controls and real systems.

01

Your first governed MCP Connector — live in weeks, not months.

A step-by-step process for going from a list of internal APIs to a live, verified MCP Connector in production. Start small, validate the governance chain, then expand with confidence.

1

Inventory your APIs

List every API agents could benefit from accessing. Categorise by data sensitivity: public, internal, sensitive. Note which have OpenAPI specs.
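An inventory like this can be sketched as a small script. Field names here are illustrative, not an ARK360 schema, and the APIs are made-up examples:

```python
from dataclasses import dataclass

# Illustrative inventory entry; the fields mirror the categories above.
@dataclass
class ApiEntry:
    name: str
    sensitivity: str       # "public", "internal", or "sensitive"
    has_openapi_spec: bool

inventory = [
    ApiEntry("weather-feed", "public", True),
    ApiEntry("staff-directory", "internal", True),
    ApiEntry("payroll", "sensitive", False),
]

# Group by sensitivity so the lowest-risk candidates surface first.
by_sensitivity: dict[str, list[str]] = {}
for api in inventory:
    by_sensitivity.setdefault(api.sensitivity, []).append(api.name)

print(by_sensitivity)
```

Sorting the output by sensitivity gives you a shortlist for step 2: public APIs with existing OpenAPI specs are the natural pilots.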

2

Choose your first candidate

Pick the API with the clearest agent use case and the lowest data sensitivity. This is your pilot — validate the governance chain before expanding.

3

Generate the MCP Connector

Point ARK360 at the OpenAPI spec. It generates a verified MCP tool with authentication, logging and tags. You review and approve the connector definition.
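ARK360 performs this generation for you; the sketch below only illustrates the underlying mapping from OpenAPI operations to callable tool definitions. The spec is a toy example:

```python
# A minimal OpenAPI-style document (toy example, not a real service).
spec = {
    "paths": {
        "/orders": {
            "get": {"operationId": "listOrders", "summary": "List orders"},
        },
        "/orders/{id}": {
            "get": {"operationId": "getOrder", "summary": "Fetch one order"},
        },
    }
}

# Each operation in the spec becomes one tool definition the agent can call.
tools = []
for path, methods in spec["paths"].items():
    for method, op in methods.items():
        tools.append({
            "name": op["operationId"],
            "description": op.get("summary", ""),
            "method": method.upper(),
            "path": path,
        })

for t in tools:
    print(t["name"], t["method"], t["path"])
```

This is also why step 1 notes which APIs have OpenAPI specs: the spec is the input that makes generation mechanical rather than manual.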

4

Apply baseline Policy Presets

Apply a baseline preset: rate limits, audit logging, IP allowlisting. For sensitive data, add data residency and access scoping rules.
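A baseline preset of this kind can be pictured as a small config plus an enforcement check. The key names are illustrative, not ARK360's actual schema:

```python
from ipaddress import ip_address, ip_network

# Hypothetical baseline preset; key names are assumptions for illustration.
baseline_preset = {
    "rate_limit_per_minute": 60,
    "audit_logging": True,
    "ip_allowlist": ["10.0.0.0/8", "192.168.1.0/24"],
}

def ip_allowed(client_ip: str, preset: dict) -> bool:
    """Return True if the caller's IP falls inside any allowlisted range."""
    addr = ip_address(client_ip)
    return any(addr in ip_network(cidr) for cidr in preset["ip_allowlist"])

print(ip_allowed("10.1.2.3", baseline_preset))     # inside 10.0.0.0/8 -> True
print(ip_allowed("203.0.113.9", baseline_preset))  # outside both ranges -> False
```

The point of a preset is that this check is declared once and applied uniformly, rather than re-implemented per connector.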

5

Connect one agent host

Wire one agent (Claude, Copilot, or your own) to the new MCP Connector. Monitor the first calls in the operations dashboard.

6

Validate and expand

Review the audit trail. Confirm the governance chain works end-to-end. Then expand: more APIs, more agents, more Policy Presets.
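"Works end-to-end" means every call is attributable and blocked calls are visible in the trail. A validation pass over exported audit records might look like this; the record fields are assumptions, since the real trail lives in the operations dashboard:

```python
# Illustrative audit records; field names are assumptions for this sketch.
audit_trail = [
    {"tool": "listOrders", "agent": "pilot-agent", "allowed": True},
    {"tool": "getOrder", "agent": "pilot-agent", "allowed": True},
    {"tool": "deleteOrder", "agent": "pilot-agent", "allowed": False},
]

# Two checks: blocked calls are recorded, and every entry is attributable.
blocked = [e for e in audit_trail if not e["allowed"]]
attributable = all("tool" in e and "agent" in e for e in audit_trail)

print(f"{len(audit_trail)} calls logged, {len(blocked)} blocked")
print("every entry names its tool and agent:", attributable)
```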

02

Getting security and compliance to 'yes' on AI.

A practical approach to working with security, compliance and legal teams who are blocking AI adoption — turning their requirements into Policy Presets they can review, verify and own.

1

Map the blockers

List every concern your security and compliance team has raised. These typically fall into four categories: data access, data residency, auditability, and blast radius.

2

Translate concerns into controls

For each blocker, identify the corresponding control. 'Data leaving Australia' → AU-only residency preset. 'Agents reading everything' → least-privilege connector scoping.
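The translation step is essentially a lookup table you build with the security team. A sketch, with control names that are illustrative rather than a fixed ARK360 catalogue:

```python
# Blocker -> control map; extend it as new concerns are raised.
controls = {
    "data leaving Australia": "AU-only residency preset",
    "agents reading everything": "least-privilege connector scoping",
    "no audit trail": "audit logging preset",
    "runaway agent activity": "rate-limit preset",
}

def control_for(blocker: str) -> str:
    """Look up the control for a raised concern; flag gaps explicitly."""
    return controls.get(blocker, "no control mapped yet - needs review")

print(control_for("data leaving Australia"))
print(control_for("agents writing to production"))
```

The default branch matters: a concern with no mapped control is a genuine gap, and surfacing it honestly is what earns trust in step 3.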

3

Show them the dashboard

Demonstrate the operations dashboard. Security needs to see that every agent call is logged, every policy is applied, and every blocked call is visible. Evidence beats assurances.

4

Map to your compliance framework

Whether it's APS/ISM, ISO 27001, WHS, or your own risk framework — map each Policy Preset to the specific control it satisfies. Document this for the risk register.

5

Run a controlled pilot

Propose a time-boxed pilot: one API, one agent, one Policy Preset, 30 days. Give security observer access to the dashboard. Let the evidence speak.

03

AI access to mission-critical systems — without touching the core.

Your core systems don't need to change. This playbook shows how to connect AI agents to legacy and mission-critical systems by wrapping at the API layer, limiting blast radius, and keeping the underlying system completely untouched.

1

Identify the AI use case

What does the agent need to do? Query, summarise, surface, alert? The use case determines what API operations are needed — and that determines the connector scope.

2

Wrap the system at the API layer

Don't touch the core system. Generate an MCP Connector from the existing API. The core system doesn't know it's talking to an AI — it just sees authenticated API calls.
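The wrapper pattern can be sketched in a few lines. The function and credential names here are illustrative stand-ins, not a real integration:

```python
def call_core_api(method: str, path: str, token: str) -> dict:
    """Stand-in for the existing core-system API; it is never modified."""
    return {"status": 200, "method": method, "path": path}

def connector_call(tool_name: str, path: str, service_token: str) -> dict:
    """The MCP Connector layer: agents call tools, the core sees API calls.

    The connector authenticates with its own service credential; nothing
    in the forwarded request reveals that an agent originated it.
    """
    return call_core_api("GET", path, token=service_token)

print(connector_call("getOrder", "/orders/42", "svc-token"))
```

All governance (auth, logging, scoping) lives in `connector_call`; the core system's contract is unchanged.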

3

Limit the blast radius

Scope the MCP Connector to read-only operations where possible. For write operations, add approval policies: the agent proposes, a human confirms.
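Both halves of this step, read-only scoping and human approval for writes, can be sketched directly. Operation shapes are illustrative:

```python
READ_METHODS = {"GET", "HEAD"}

def scope_read_only(operations: list[dict]) -> list[dict]:
    """Keep only read operations when building the connector."""
    return [op for op in operations if op["method"] in READ_METHODS]

def execute(op: dict, human_approved: bool = False) -> str:
    """Reads run immediately; writes wait until a human confirms."""
    if op["method"] in READ_METHODS or human_approved:
        return "executed"
    return "pending approval"

ops = [
    {"name": "getOrder", "method": "GET"},
    {"name": "cancelOrder", "method": "POST"},
]
print([o["name"] for o in scope_read_only(ops)])  # only the read op survives
print(execute({"name": "cancelOrder", "method": "POST"}))  # -> pending approval
```

The agent-proposes, human-confirms pattern is the `human_approved` flag: the write is staged, not executed, until someone flips it.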

4

Apply mission-critical guardrails

Rate limits prevent runaway agent calls. Audit logging creates the paper trail. Policy Presets enforce least-privilege access. The core system stays stable.
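A rate limit that prevents runaway agent calls is, at its simplest, a sliding window over recent call timestamps. A minimal sketch, not the production implementation:

```python
import time

class RateLimiter:
    """Sliding-window limiter: caps agent calls per window (illustrative)."""

    def __init__(self, max_calls: int, window_seconds: float):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls: list[float] = []

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        self.calls = [t for t in self.calls if now - t < self.window]
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False

limiter = RateLimiter(max_calls=3, window_seconds=60)
results = [limiter.allow(now=0.0) for _ in range(5)]
print(results)  # first three calls allowed, the rest blocked
```

A looping agent that retries in a tight cycle hits the cap immediately, so the failure stays inside the connector instead of hammering the core system.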

5

Monitor and iterate

Use the operations dashboard to understand how the agent is using the system. Tighten or relax controls based on real usage patterns, not guesswork.