Layer 01 of 06
Guardrails
What should the AI never do?
Hard rules that define what an AI agent is not allowed to do, regardless of how it is asked.
Why this layer matters
Every AI deployment inherits risk the moment it gets access to sensitive data or external systems. Guardrails are the written, enforceable answer to the question no one asks at the start: what is off-limits, and who says so.
Without them, the answer is "whatever the model happens to feel like today." That is not a policy, and it will not survive a regulatory inquiry.
What good looks like
- Explicit block rules for sharing client data with external services
- Approval gates for actions with irreversible business impact
- Clear escalation paths when the AI encounters a request it cannot handle safely
- Rules that survive model upgrades without being rewritten
- Coverage for industry-specific obligations (HIPAA, FINRA, privilege)
What a gap looks like
- No written block rules: client data can reach external services if a prompt asks for it
- Irreversible actions execute without human approval
- No escalation path: unsafe requests fail silently or get improvised answers
- Restrictions embedded in prompts that break with every model upgrade
How it connects
Guardrails set the outer boundary of everything the AI is allowed to do. Observability is how you know those boundaries held. Identity is how they get enforced differently for different agent contexts.