Why Cedar Policies Matter for Your Amazon Bedrock AgentCore Gateway

You are building an agentic solution. Your agent has access to MCP tools, and those tools perform real actions: approving expenses, issuing refunds, updating records. You write careful prompt instructions so the large language model picks the right tool for the right situation. Everything works on the happy path. But what happens when it doesn't?
Only thinking about the happy path gets results fast, but it leaves a lot of room for surprises. Prompt engineering can steer a model, but steering is not the same as enforcing. A well-crafted prompt injection could convince the model to approve a $2,000 expense through a tool that should only handle amounts up to $50. The model followed an instruction; it just wasn't yours.
In this post, we look at how Cedar policies in Amazon Bedrock AgentCore Gateway give you deterministic, auditable guardrails around your MCP tools, so you can stop relying on probability alone.
Why Prompt Engineering Is Not Enough
When you describe your tools to the model, you include constraints in the tool descriptions. "This tool handles expenses up to $50." The model reads that description and, most of the time, routes requests accordingly. But large language models are probabilistic. They do not enforce rules; they predict the most likely next token. That distinction matters when the stakes are real.
There is no contract between the prompt and the model's behavior. The model might follow your instructions 99% of the time, but that remaining 1% is where things go wrong. And in agentic applications, "going wrong" means a tool performs an action it should not have. Unlike a traditional API where input validation is deterministic, the decision to call a specific tool is made by the model based on pattern matching and token prediction.
Consider a prompt injection attack. An attacker crafts input that overrides or confuses the model's instructions. Suddenly the model calls the low-limit expense tool with a $2,000 amount. The tool executes, the money is approved, and your prompt-based "guardrail" did nothing to prevent it. The model did not malfunction. It simply followed the most convincing instruction it received, and that instruction came from the attacker, not from you.
This is not a hypothetical risk. As agentic applications move into production and handle financial transactions, customer data, and operational workflows, the gap between "the model usually does the right thing" and "the model is guaranteed to do the right thing" becomes a real business risk. Relying on prompt engineering alone means accepting that gap as an inherent part of your system. You need a layer that evaluates requests deterministically, before the tool ever executes. That is where Amazon Bedrock AgentCore Policy comes in.
How Cedar Policies Work in Amazon Bedrock AgentCore Gateway
Amazon Bedrock AgentCore Gateway acts as the entry point for all tool calls your agent makes. When you attach a policy engine to the Gateway, every tool invocation is intercepted and evaluated before it reaches the tool itself. Amazon Bedrock AgentCore Policy uses Cedar, an open-source policy language developed by AWS, to express those evaluation rules.
A Cedar policy is a declarative statement that either permits or forbids access to a specific tool under specific conditions. Each policy defines a scope (who is making the request, what action they want to perform, and which Gateway resource it targets) and optional conditions that inspect the request's context, such as input parameters or user attributes from an OAuth token.
Cedar follows a default-deny model. If no policy explicitly permits a request, it is denied. And if any forbid policy matches, the request is denied regardless of any matching permit policies. This forbid-overrides-permit approach means your safety constraints always win. You do not need to anticipate every possible attack vector; you only need to define what is allowed, and everything else is blocked by default.
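To see default-deny in action, consider a minimal sketch: a single permit policy for one hypothetical read-only tool. The action name and Gateway ARN below are placeholders, not from a real deployment. With only this policy attached, a call to any other tool matches nothing and is denied automatically.

```cedar
// Hypothetical example: permit OAuth users to call one read-only tool.
// Any tool call that no permit policy matches is denied by default.
permit(
  principal is AgentCore::OAuthUser,
  action == AgentCore::Action::"ReportTools__get_report",
  resource == AgentCore::Gateway::"arn:aws:bedrock-agentcore:eu-west-1:123456789012:gateway/example-gateway"
);
```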
An Expense Approval Example
Let's make this concrete. Imagine you have an expense approval agent with three tools:
- approve_small handles expenses under $50
- approve_medium handles expenses from $50 up to $200
- approve_large handles expenses of $200 and above
Your tool descriptions tell the model about these ranges, and most of the time the model respects them. But to enforce them deterministically, you write Cedar policies:
// Only allow the small expense tool for amounts under 50
permit(
  principal is AgentCore::OAuthUser,
  action == AgentCore::Action::"ExpenseTools__approve_small",
  resource == AgentCore::Gateway::"arn:aws:bedrock-agentcore:eu-west-1:123456789012:gateway/expense-gateway"
)
when {
  context.input.amount < 50
};

// Block the small expense tool for amounts of 50 or more
forbid(
  principal is AgentCore::OAuthUser,
  action == AgentCore::Action::"ExpenseTools__approve_small",
  resource == AgentCore::Gateway::"arn:aws:bedrock-agentcore:eu-west-1:123456789012:gateway/expense-gateway"
)
when {
  context.input.amount >= 50
};
The first policy permits the approve_small tool only when the amount is below 50. The second policy explicitly forbids it when the amount is 50 or more. Because Cedar uses forbid-overrides-permit semantics, even if another policy were to accidentally permit the request, the forbid policy would still block it.
Now, even if a prompt injection tricks the model into calling approve_small with an amount of $2,000, the Gateway evaluates the Cedar policy, sees that the amount exceeds the threshold, and denies the request. The tool never executes. No compute is consumed, no side effects occur, and the agent receives a denial it can handle gracefully.
You would write similar policies for approve_medium and approve_large, each scoped to their respective amount ranges. The result is a complete set of boundaries that mirror your business rules but are enforced independently of the model's behavior.
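As a sketch, the approve_medium pair would mirror the approve_small policies, with both a lower and an upper bound on the amount. This assumes the same Gateway ARN and context shape as the example above, and treats $50 as inclusive and $200 as exclusive for the medium tool:

```cedar
// Only allow the medium expense tool for amounts from 50 up to 200
permit(
  principal is AgentCore::OAuthUser,
  action == AgentCore::Action::"ExpenseTools__approve_medium",
  resource == AgentCore::Gateway::"arn:aws:bedrock-agentcore:eu-west-1:123456789012:gateway/expense-gateway"
)
when {
  context.input.amount >= 50 && context.input.amount < 200
};

// Block the medium expense tool outside that range
forbid(
  principal is AgentCore::OAuthUser,
  action == AgentCore::Action::"ExpenseTools__approve_medium",
  resource == AgentCore::Gateway::"arn:aws:bedrock-agentcore:eu-west-1:123456789012:gateway/expense-gateway"
)
when {
  context.input.amount < 50 || context.input.amount >= 200
};
```

The explicit forbid is redundant under default-deny, but as with the approve_small pair it guarantees the boundary holds even if another permit policy is added later.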
Why This Matters Beyond Guard Clauses
You might be thinking: "I can just add a guard clause in my tool code that raises an exception when the amount is out of range." And you absolutely should. Defense in depth is always a good idea. But relying solely on guard clauses has drawbacks that Cedar policies address.
First, the tool still gets invoked. That invocation consumes compute resources and incurs cost, even if the tool ultimately rejects the request. When your agent processes thousands of requests per day, those wasted invocations add up. Cedar policies stop the request at the Gateway, before any tool code runs. No Lambda execution, no container spin-up, no database connection opened for nothing.
Second, guard clauses are scattered across your codebase. They live in different functions, written in different programming languages, maintained by different teams. If someone accidentally removes or weakens a guard clause during a refactor, you are exposed, and you might not notice until something goes wrong in production. Cedar policies are centralized and declarative. They live in one place, are easy to audit, and are independent of your tool implementations. A change to your tool code cannot weaken your Cedar policies.
Third, from a governance perspective, it is far easier to review and validate a set of Cedar policies than to inspect guard clauses across dozens of functions and repositories. Auditors can read Cedar policies directly; they don't need to understand Python, TypeScript, or whatever language your tools are written in. This separation of concerns means your security team can own the policies while your development team owns the tool implementations, each working independently without stepping on each other's toes.
Finally, Cedar policies give you a consistent enforcement mechanism regardless of how your tools are built. Whether a tool is an AWS Lambda function, a container behind an API, or a third-party service, the same Cedar policy applies at the Gateway layer. You define your rules once, and they protect every tool uniformly. That consistency is hard to achieve with guard clauses alone, especially as your agent's toolset grows and evolves over time.
Conclusion
Prompt engineering is a powerful tool for guiding model behavior, but it is not a security boundary. When your agent's tools perform real-world actions, you need deterministic enforcement that no prompt injection can bypass. Cedar policies in Amazon Bedrock AgentCore Gateway provide exactly that: a centralized, auditable, default-deny policy layer that evaluates every tool call before it executes.
Combine Cedar policies with guard clauses in your tool code, and you get defense in depth that is both cost-effective and easy to govern. If you are building agentic applications on AWS, start by defining the boundaries your tools should never cross, then express them as Cedar policies.
To get started, explore the Amazon Bedrock AgentCore Policy documentation, review the common Cedar policy patterns, and read the AWS blog post on securing AI agents with Policy in Amazon Bedrock AgentCore.
Written by

Joris Conijn
Joris is the AWS Practice CTO of the Xebia Cloud service line. He has been working with the AWS cloud since 2009, focusing on building event-driven architectures. Having worked with the cloud from (almost) the start, he has seen most of the services being launched. Joris strongly believes in automation and infrastructure as code, and he is always open to learning new things and experimenting with them, because that is the way to learn and grow.