The Puzzle: Sorting the Pieces of Agentic Governance

Here’s the situation most organizations deploying agentic AI find themselves in.

They’ve got a coding assistant that writes and commits code. A customer support agent that reads account data and responds to tickets. An ops automation pipeline that provisions infrastructure. Maybe a multi-agent workflow where several of these coordinate on a task. The capabilities are real, the productivity gains are measurable, and the systems are multiplying.

Then someone asks: how do we govern this?

The team looks around. NIST has a risk management framework — but it’s about organizational risk posture, not runtime architecture. OWASP published threat taxonomies — but those tell you what can go wrong, not how to build the defenses. The Cloud Security Alliance has a trust framework — useful for identity and access, but it doesn’t address verification or decision provenance. ISO has management system standards. The EU AI Act has regulatory requirements. OpenTelemetry has observability conventions. Singapore published a government-level agentic governance framework.

Every one of these is doing critical, necessary work. The pieces are on the table.

But nobody has assembled them. Nobody has shown how NIST’s risk functions map to a runtime architecture, how OWASP’s threats assign to specific defense layers, how the EU AI Act’s articles translate to engineering requirements, or how all of this composes into something a practitioner can actually follow.

That’s the puzzle. The pieces exist. The picture is still coming into focus.


Why This Is Hard

The fragmentation isn’t anyone’s fault. It’s structural.

Standards bodies stay in their lane — that’s what makes them rigorous. NIST does risk management. OWASP does threat taxonomy. CSA does trust. ISO does organizational governance. Each produces excellent work within its scope. But the practitioner who needs to govern an actual agentic system doesn’t have a single-scope problem. They have a composition problem.

Vendors, meanwhile, solve governance within their ecosystem. ServiceNow built AI Control Tower. Salesforce built Agentforce. Microsoft built Agent 365. Each provides governance for agents running on their platform. But most organizations don’t run agents on a single platform — and none of these solutions provide the cross-ecosystem architectural patterns that practitioners need.

The result: every organization becomes its own integrator. They read the standards, evaluate the vendor solutions, survey the research, and try to stitch together a governance approach that works for their specific context. Most don’t have the bandwidth to do this well. Many don’t do it at all.


What a Governed Agentic System Actually Needs

Before sorting the pieces, it helps to name the problems they need to solve. Any organization deploying consequential agentic systems needs answers to five questions:

  1. What do we have? — Agent inventory, capability mapping, risk profiling, dependency mapping. You can’t govern what you can’t see.

  2. What risks exist and what governance is needed? — Risk classification, regulatory mapping, threat modeling, control selection. Different systems need different levels of governance.

  3. How do we ensure agents operate within boundaries? — Runtime governance architecture. Verification layers, policy enforcement, authorization gates, trust models. This is the engineering problem.

  4. Are agents behaving as expected? — Monitoring, detection, and response. Not just “is it running” but “is it governed” — quality, security, and compliance from one event stream.

  5. How do we improve over time? — Policy review, trust evolution, incident learning, regulatory tracking. Governance that doesn’t adapt becomes governance that doesn’t work.
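
Question 4's "one event stream" idea can be made concrete with a small sketch. Everything here — the event shape, the `emit` and `view` helpers, the check names — is a hypothetical illustration, not an interface any standard defines: quality, security, and compliance each become a filter over the same append-only stream rather than three separate monitoring stacks.

```python
import json
import time

# Hypothetical sketch: one append-only event stream carrying quality,
# security, and compliance signals for every agent action.
EVENTS = []

def emit(agent_id: str, kind: str, detail: dict) -> dict:
    """Append a structured governance event to the shared stream."""
    event = {
        "ts": time.time(),
        "agent": agent_id,
        "kind": kind,       # "quality", "security", or "compliance"
        "detail": detail,
    }
    EVENTS.append(event)
    return event

def view(kind: str) -> list:
    """Each governance function reads its own slice of the same stream."""
    return [e for e in EVENTS if e["kind"] == kind]

emit("support-agent-1", "quality", {"check": "answer_grounded", "passed": True})
emit("support-agent-1", "security", {"check": "pii_leak", "passed": True})
emit("support-agent-1", "compliance", {"check": "retention_policy", "passed": False})

print(json.dumps(view("compliance"), indent=2))
```

The design point is the single stream: a compliance reviewer and a security analyst are querying the same records, which is what makes "is it governed" answerable alongside "is it running."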

These five questions aren’t novel — they map to governance functions that other domains have addressed. But for agentic systems, the answers require patterns that traditional frameworks weren’t designed to provide.


Sorting the Pieces

The Agentic Governance Framework — AGF — is our attempt to sort these pieces into a coherent picture.

AGF is not a new standard. It doesn’t compete with NIST, OWASP, CSA, or ISO. It integrates them. It takes the risk management functions from NIST, the threat taxonomy from OWASP, the trust model from CSA, the management system requirements from ISO, the observability conventions from OpenTelemetry, the regulatory requirements from the EU AI Act, and the empirical research from labs like DeepMind and Anthropic — and shows how they compose into an implementable reference architecture.

The framework rests on four foundational ideas.

The Rings Model

At the center of AGF is a concentric architecture that organizes governance into four rings:

Ring 0 — Execution. The agent does its work. Generates text, writes code, makes a recommendation, takes an action. Without governance, this is all there is — and it’s where most deployed agentic systems are today.

Ring 1 — Verification. A separate process evaluates Ring 0’s output. The fundamental principle: the agent that creates output must not be the sole agent that validates it. This is separation of duties — one of the oldest patterns in security engineering — applied to agentic systems.

Ring 2 — Governance. Policy evaluation and authorization. Should this verified output actually be released? Does it comply with organizational policy? Does it require human approval? Ring 2 is where governance rules — expressed as code, not just documentation — make the determination.
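
"Governance rules expressed as code" can be sketched minimally. The rule names, the three verdicts, and the most-restrictive-wins gate below are assumptions for illustration, not part of any AGF-defined API: each rule is a plain function over a verified output, and the gate returns allow, escalate (to a human), or deny.

```python
# Hypothetical Ring 2 sketch: policy as code. Each rule inspects a
# verified output and returns a verdict; the gate applies the most
# restrictive one. All names here are illustrative assumptions.
SEVERITY = {"allow": 0, "escalate": 1, "deny": 2}

def no_secrets(output: str) -> str:
    """Deny any output that appears to contain a credential."""
    return "deny" if "API_KEY" in output else "allow"

def human_review_for_prod(output: str, target: str) -> str:
    """Require human approval for production-affecting releases."""
    return "escalate" if target == "prod" else "allow"

def governance_gate(output: str, target: str) -> str:
    verdicts = [no_secrets(output), human_review_for_prod(output, target)]
    return max(verdicts, key=lambda v: SEVERITY[v])

print(governance_gate("deploy config", "prod"))      # routed to human approval
print(governance_gate("API_KEY=abc123", "staging"))  # blocked by policy
```

Because the rules are code, they can be versioned, tested, and audited like any other code — which is the point of the pattern.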

Ring 3 — Learning. The system improves over time. Ring 3 observes patterns, calibrates trust levels, and proposes configuration changes. Critically, Ring 3 proposes — Ring 2 decides. The system can suggest governance changes; it cannot make them.

Underneath all four rings: a cross-cutting fabric (identity, observability, structured output, error recovery) and an environment substrate that governs the context, instructions, tools, and memory every agent depends on.

The rings are a logical architecture, not a deployment prescription. They manifest differently depending on the system: sequential wrapping for batch pipelines, interrupt-driven governance at tool-call boundaries for coding agents, concurrent verification for real-time conversational systems.
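
The sequential-wrapping manifestation can be sketched in a few lines. Every function name and check below is a stand-in assumption, not a prescribed interface: Ring 0 produces, a separate Ring 1 function verifies, Ring 2 decides on release, and the audit trail feeds Ring 3's learning.

```python
# Hypothetical sketch of sequential wrapping for a batch pipeline:
# each ring is a function wrapping the one inside it. All names and
# checks are illustrative assumptions.

def ring0_execute(task: str) -> str:
    """Ring 0: the agent does its work (stubbed here)."""
    return f"draft output for: {task}"

def ring1_verify(output: str) -> bool:
    """Ring 1: a separate process evaluates the output it did not produce."""
    return output.startswith("draft output")   # stand-in for a real verifier

def ring2_decide(output: str) -> str:
    """Ring 2: policy decides whether verified output is released."""
    return "released" if "forbidden" not in output else "blocked"

def run_governed(task: str, audit: list) -> str:
    output = ring0_execute(task)
    if not ring1_verify(output):
        audit.append(("verification_failed", task))
        return "rejected"
    decision = ring2_decide(output)
    audit.append((decision, task))   # Ring 3 observes this trail over time
    return decision

audit_trail: list = []
print(run_governed("summarize ticket", audit_trail))
```

Note that Ring 3 never appears in the decision path: it only reads the audit trail, consistent with the proposes-but-does-not-decide constraint.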

Nineteen Named Patterns

AGF identifies nineteen governance patterns — named, documented, and mapped to where they sit in the ring architecture. Separation of producer and verifier. Validation loops with convergence gates. Trust ladders. Policy as code. Provenance chains. Bounded agency. Adversarial critique. Event-driven observability.

None of these are new inventions. Every one traces to established practice in distributed systems, security engineering, compliance, or control theory. The contribution is showing how they compose for the agentic context — and being honest about where they conflict.

Seven explicit tensions exist between the primitives — among them: self-improvement versus reproducibility, trust versus governance gates, validation quality versus latency cost. AGF names each tension and provides architectural invariants to resolve it. The tensions are where the real design decisions live.

Progressive Composition

The nineteen primitives are the complete picture, not a starting checklist. AGF provides composition patterns that show how to start simple and grow:

  • Minimum Viable Control — four or five primitives that give you bounded agents, attributable actions, and an audit trail. The floor for any consequential system.
  • Validation Pipeline — add verification. Outputs are checked before release.
  • Governed Decision Flow — add policy evaluation. Decisions go through governance gates.
  • Full Governed System — every ring active, every primitive engaged, zero trust at every boundary.

Organizations start where they are and grow as their stakes demand. This isn’t maturity theater — it’s architectural progression. Each level adds specific capabilities with specific cost implications.

Five Ways In

Different practitioners need different entry points. A security architect has different questions than a compliance officer, who has different questions than a platform engineer.

AGF provides five domain profiles:

  • Security — threat analysis, MITRE ATLAS mapping, red team scenarios, the three-level security model
  • Platform Engineering — deployment modes, composition patterns, implementation phases, infrastructure requirements
  • GRC — regulatory crosswalks (EU AI Act, NIST AI RMF, ISO 42001), control mapping, maturity self-assessment, evidence generation
  • AI Engineering — five-phase implementation roadmap, primitive selection by system type, integration patterns
  • Observability — event architecture, correlation rules, detection engineering, SIEM-for-agents maturity model

Each profile provides the depth its audience needs without requiring anyone to read the full primitive catalog.


What AGF Doesn’t Do

Intellectual honesty requires naming what the framework doesn’t solve.

AGF does not provide a specific technology stack. The rings are vendor-neutral — they can be implemented with any toolchain. The framework tells you what should be true about your system, not which products to buy.

AGF does not claim that oversight alone is sufficient. Research demonstrates that oversight efficacy degrades as the capability gap between overseer and system increases. This is why AGF invests in structural guarantees — verification layers, automated policy enforcement, containment mechanisms — that function whether or not the overseer catches every issue.

AGF does not pretend to be complete. There are open questions throughout the framework, clearly marked. Multi-agent governance at scale. Quantitative risk models connecting agent autonomy to probabilistic loss estimates. Portable agent identity across ecosystems. These are areas where the picture is still blurry, and we say so.


The Picture on the Box

The analogy that guides AGF is a puzzle, not a blueprint.

A blueprint implies someone designed the answer from scratch. That’s not what happened here. The answer was already being built — by NIST and OWASP and CSA and ISO and OpenTelemetry and dozens of academic labs and thousands of practitioners working through the same challenges in their own contexts.

The pieces were on the table. What was needed was someone willing to spend the time sorting them — grouping the security patterns with the security patterns, the governance mechanisms with the governance mechanisms, the observability standards with the observability standards — and then showing where the groups connect.

That’s what AGF does. It’s a proposed picture, not the final one. The framework is open — CC BY 4.0 — because the best version of this work will come from the community pushing back on it, extending it, and adapting it to contexts we haven’t anticipated.

The picture on the box is still coming into focus. But enough of the pieces are sorted now that we can see the shape of what governed agentic systems should look like.

If you’re building agentic systems and thinking about governance, I hope this is useful. And if you see a piece that’s in the wrong place — or one that’s missing entirely — I want to hear about it.


The full framework is available at agf.jessepike.dev — open source, CC BY 4.0.

Previous: From Architecture to Governance — How AFAS Became AGF

Next: The Rings Model — A Concentric Architecture for Governed Agentic Systems