From Architecture to Governance: What Building Agentic Systems Taught Me About What's Actually Missing

Three months ago, I started building an architecture framework for agentic systems. I called it AFAS — an attempt to do for agentic AI what TOGAF does for enterprise architecture and what SABSA does for security architecture. The premise was that the design methodology for governed agentic systems hadn’t been written yet.

I was half right.

The gap was real. Organizations are deploying autonomous agents at an extraordinary pace, and the governance patterns haven’t caught up. Many teams are moving to production before the guardrails are fully in place — not because they’re careless, but because the space is evolving so quickly that best practices are still being established. Verification layers, governance gates, audit trails, containment mechanisms — the patterns that other domains of critical infrastructure rely on are still finding their way into agentic deployments.

But the deeper I went, the more I realized something: the patterns weren’t missing from the world. They were scattered across it.


What AFAS Found

AFAS started from an enterprise architect’s instinct: define the layers, name the primitives, specify the control plane. The framework identified six architectural layers (from business context through runtime operations), a set of agent primitives (reasoning, instructions, memory, tools, identity, state), and a governance control plane modeled on the SIEM pattern — detect, evaluate, enforce.

I wrote about these ideas publicly: how business intent propagates across delegation boundaries, why runtime governance needs to be a design-time decision, how agents authenticate, and how delegation chains maintain accountability.

Each of these posts touched something practitioners recognized from their own work. But as I pressure-tested the framework through adversarial external reviews, a pattern emerged that shifted my thinking.

What Changed

Every pattern I’d been working through in AFAS already existed somewhere else.

The separation of producer and verifier? That’s separation of duties — decades of security engineering. The trust ladder? That’s graduated autonomy — CSA published an earned autonomy maturity model in February 2026. The control plane? That’s the PDP/PEP architecture from NIST SP 800-207 Zero Trust. The SIEM analogy? SOAR playbooks have been doing pre-authorized automated response for years.

Even the layered model — the thing I thought was most distinctively “AFAS” — traced directly to SABSA. The intellectual lineage was always there.

So if the patterns existed, what was actually missing?

Sorting the pieces.

NIST has the risk management functions. OWASP has the threat taxonomies. CSA has the trust frameworks. ISO has the management systems. OpenTelemetry has the observability standards. The EU AI Act has the regulatory requirements. Singapore published the world’s first government agentic governance framework. DeepMind and Anthropic have produced empirical research on delegation and autonomy. Academic labs have formalized the theory.

Every one of these institutions is doing critical work. But the pieces hadn’t been assembled. Nobody had shown how NIST’s four functions map to a runtime architecture, how OWASP’s ten threats assign to specific defense layers, how the EU AI Act’s articles translate to engineering requirements, or how all of this composes into something a practitioner can actually follow.

The puzzle pieces were on the table. The picture was still coming into focus.

What AGF Is

The Agentic Governance Framework — AGF — is what came out the other side.

It’s not a new architecture framework. It’s a synthesis: nineteen named patterns — drawn from distributed systems, security engineering, and compliance rather than invented — organized into a concentric ring architecture (Execution → Verification → Governance → Learning), with a three-level security model, three deployment modes, and five domain-specific profiles for different professional audiences.

Every pattern in AGF traces to existing work. Separation of duties. Least privilege. Audit trails. Zero trust. Policy as code. Defense in depth. Proportional controls. These are established engineering disciplines. The contribution is showing how they compose for the agentic context — and being honest about where they conflict.

That last part matters. Many frameworks present their recommendations as harmonious. AGF names seven explicit tensions between its own primitives — self-improvement versus reproducibility, trust versus governance gates, validation quality versus latency cost — and provides architectural invariants to resolve each one. The tensions are where the real design decisions live.

The Bridge

If you’ve been following the AFAS work, here’s how the concepts evolved:

The Intent Thread — how business intent propagates across delegation boundaries — became two AGF primitives: Provenance Chains (every output carries its full decision history) and Agent Environment Governance (the operating substrate is composed by policy, not by accident). Intent propagation is still the right instinct. AGF makes it structural.
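To make the provenance idea concrete, here is a minimal sketch of a hash-linked decision history. Every name here (`ProvenanceRecord`, `ProvenanceChain`, the field layout) is my own illustration, not AGF's actual interface — the point is only that each step references a digest of its predecessor, so tampering anywhere breaks verification downstream:

```python
from dataclasses import dataclass, field
import hashlib
import json

@dataclass
class ProvenanceRecord:
    agent_id: str
    action: str
    parent_hash: str  # digest of the previous record; "" for the root intent

    def digest(self) -> str:
        payload = json.dumps([self.agent_id, self.action, self.parent_hash])
        return hashlib.sha256(payload.encode()).hexdigest()

@dataclass
class ProvenanceChain:
    records: list = field(default_factory=list)

    def append(self, agent_id: str, action: str) -> None:
        # Link each new record to the digest of the one before it.
        parent = self.records[-1].digest() if self.records else ""
        self.records.append(ProvenanceRecord(agent_id, action, parent))

    def verify(self) -> bool:
        # Walk the chain: every record must cite its predecessor's digest.
        parent = ""
        for rec in self.records:
            if rec.parent_hash != parent:
                return False
            parent = rec.digest()
        return True
```

A real implementation would carry much richer context (policy decisions, tool calls, signatures), but the structural property — outputs that carry their full decision history — is the same.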

Trust Ladders — the idea that agents should earn autonomy through demonstrated performance — grew from a concept seeded in one post to a full primitive backed by empirical data. Anthropic’s research shows trust roughly doubling over the first several hundred sessions. DeepMind published an adaptive delegation framework with six components that map directly. CSA published an earned autonomy maturity model. The pattern I’d been sketching out turned out to be well-supported by the research.
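The earned-autonomy mechanic can be sketched in a few lines. The thresholds, increments, and level names below are illustrative assumptions of mine, not values from AGF, Anthropic, or CSA — the structural point is that trust accrues slowly from successful sessions, drops sharply on failure, and gates the autonomy level an agent is granted:

```python
class TrustLadder:
    """Illustrative earned-autonomy ladder: a running trust score
    mapped to autonomy levels (thresholds are assumptions)."""

    LEVELS = [(0.8, "autonomous"), (0.4, "supervised"), (0.0, "observe-only")]

    def __init__(self) -> None:
        self.trust = 0.0  # every agent starts with no earned trust

    def record_session(self, success: bool) -> None:
        if success:
            self.trust = min(1.0, self.trust + 0.02)  # trust is earned slowly
        else:
            self.trust = max(0.0, self.trust - 0.20)  # and lost quickly

    def level(self) -> str:
        for threshold, name in self.LEVELS:
            if self.trust >= threshold:
                return name
        return "observe-only"
```

The asymmetry between the gain and the penalty is the design decision that matters: demotion after an incident should be immediate, while promotion requires a sustained track record.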

The SIEM analogy — applying security event management patterns to agent governance — became Agentic Observability: a full detection and response architecture with correlation rules mapped to OWASP threat categories, an event architecture aligned with OpenTelemetry, and a maturity model calibrated against real SIEM deployment trajectories.
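A correlation rule of the kind this describes can be reduced to a sliding-window counter. The event shape and rule parameters below are hypothetical — a stand-in for the OWASP-mapped rules the framework describes — but the mechanism is the standard SIEM one: fire when enough matching events land inside a time window:

```python
from collections import deque

class CorrelationRule:
    """Fires when `threshold` events matching `predicate`
    arrive within `window` seconds (a minimal SIEM-style rule)."""

    def __init__(self, predicate, threshold: int, window: float):
        self.predicate = predicate
        self.threshold = threshold
        self.window = window
        self.hits = deque()  # timestamps of matching events

    def observe(self, event: dict) -> bool:
        if not self.predicate(event):
            return False
        ts = event["ts"]
        self.hits.append(ts)
        # Evict hits that have aged out of the window.
        while self.hits and ts - self.hits[0] > self.window:
            self.hits.popleft()
        return len(self.hits) >= self.threshold
```

For example, a rule over repeated tool-permission denials — three denials in sixty seconds — would stay quiet on the first two events and fire on the third.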

The control plane — the detect-evaluate-enforce loop — became the three-level security model: Security Fabric (enforcement, wire-speed), Security Governance (policy evaluation), and Security Intelligence (detection and response). Plus a Security Response Bus for pre-authorized fast-path containment when attacks cascade faster than governance deliberation can respond.
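The fast-path idea can be sketched as a routing decision: signals with a pre-authorized playbook are contained immediately, everything else waits for governance deliberation. The class and method names are my own illustration, not AGF's API:

```python
class SecurityResponseBus:
    """Sketch of a pre-authorized fast path: known signals trigger
    immediate containment; unknown ones queue for deliberation."""

    def __init__(self):
        self.playbooks = {}  # signal name -> pre-authorized containment action
        self.pending = []    # signals awaiting governance review

    def pre_authorize(self, signal: str, action) -> None:
        # Authorization happens at design time, not during the incident.
        self.playbooks[signal] = action

    def raise_signal(self, signal: str, context: dict) -> str:
        if signal in self.playbooks:
            self.playbooks[signal](context)  # fast path: no deliberation
            return "contained"
        self.pending.append((signal, context))
        return "queued"
```

This mirrors the SOAR lineage noted earlier: the deliberation that makes a containment action safe is done in advance, so execution at incident time is just a lookup.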

The ideas were headed in the right direction. They just needed to be connected to what the rest of the industry had already built.

What’s Next

AGF is published as an open framework — CC BY 4.0 — because the goal was never to own the intellectual property. The goal is to help organizations build agentic systems that are safe, durable, auditable, and observable. We’re all working through this together, and the framework is a contribution to that shared effort.

The framework includes five domain profiles: one each for security architects, platform engineers, compliance officers, AI engineers, and SREs. Each provides the depth that audience needs — threat mappings, deployment modes, regulatory crosswalks, implementation roadmaps, correlation rules — without requiring anyone to read the entire 19-primitive catalog.

Over the coming weeks, I’ll be writing about AGF’s key concepts in depth — the Rings Model, Trust Ladders, the Belief Layer, composition patterns, and how the framework maps to regulatory requirements like the EU AI Act. Each piece will stand on its own, but they assemble into a larger picture.

If you’ve been following the AFAS work, AGF is where it landed. Not because AFAS was wrong — because going deeper revealed that the real contribution was the synthesis, not the architecture alone. The patterns were already out there, built by practitioners across dozens of domains over decades. What was needed was someone willing to sort the pieces and show where they fit together.

That’s what AGF does. And it’s still a living framework — we’re still sorting, still learning, and genuinely open to challenge on every claim. The best version of this work will come from the community pushing back on it.


The full framework is available at agf.jessepike.dev — open source on GitHub.

Next up: The Rings Model — A Concentric Architecture for Governed Agentic Systems