# 5D integration

## 5D for PydanticAI approval-required toolsets
Use 5D as the decision function behind `ApprovalRequiredToolset` so the agent only defers tool calls that actually cross your risk boundary.
## Pain it solves
PydanticAI gives you deferred tools and approvals, but you still need a policy that decides which tool calls should require approval and which ones should just run and log.
It works best when you already use deferred tools and want a runtime policy instead of hard-coded lists of actions that always require approval.
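To make that contrast concrete, here is a minimal sketch. All names below (`ALWAYS_APPROVE`, both functions, the argument keys) are illustrative assumptions, not the fivedrisk or PydanticAI API:

```python
# Hypothetical sketch: a hard-coded approval list vs. a runtime policy check.
# None of these names come from fivedrisk or PydanticAI.

# Hard-coded approach: every call to these tools always requires approval,
# no matter what the arguments actually do.
ALWAYS_APPROVE = {"run_shell", "write_file"}

def needs_approval_static(tool_name: str, tool_args: dict) -> bool:
    return tool_name in ALWAYS_APPROVE

# Policy approach: the decision depends on the call itself, so a read-only
# shell command can run and log while a destructive one is deferred.
def needs_approval_policy(tool_name: str, tool_args: dict) -> bool:
    if tool_name == "run_shell":
        command = tool_args.get("command", "")
        return any(tok in command for tok in ("rm ", "sudo ", "curl "))
    if tool_name == "write_file":
        return tool_args.get("path", "").startswith("/etc/")
    return False
```

With the static list, `run_shell` with a harmless `ls` is deferred just like a destructive command; the policy version defers only the call that crosses the risk boundary.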
## When to use 5D
Use 5D when your agent can write files, run shell, call external APIs, or touch sensitive tools.
In this setup, 5D returns a normalized runtime decision: `allow`, `review`, or `deny`, plus a `tripwire_triggered` flag for runtimes that want a simpler guardrail signal.
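As a rough mental model, the normalized decision could be represented like this. The field names and the `requires_review` rule are assumptions for illustration, not the actual fivedrisk types:

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical shape of a normalized runtime decision; the real fivedrisk
# result type may differ. `requires_review` mirrors the attribute used in
# the minimal example in this document.
@dataclass
class Decision:
    decision: Literal["allow", "review", "deny"]
    tripwire_triggered: bool  # simpler guardrail signal for runtimes

    @property
    def requires_review(self) -> bool:
        # Assumption: anything that is not a plain allow gets deferred.
        return self.decision in ("review", "deny")
```

The point of the normalization is that the runtime only needs one boolean (`requires_review`) or one flag (`tripwire_triggered`), while the full decision stays in the log.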
## Install

```shell
git clone https://github.com/theDoc001/fivedrisk.git
cd fivedrisk
python3 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
```

## Minimal example
```python
from fivedrisk import DecisionLog, evaluate_action, load_policy
from pydantic_ai.toolsets import FunctionToolset

policy = load_policy("policy.yaml")
log = DecisionLog("fivedrisk.db")

toolset = FunctionToolset()

# Defer a tool call for approval only when 5D decides it requires review;
# every evaluation is recorded in the decision log either way.
guarded = toolset.approval_required(
    lambda ctx, tool_def, tool_args: evaluate_action(
        tool_name=tool_def.name,
        tool_input=tool_args,
        policy=policy,
        log=log,
        source="pydanticai",
    ).requires_review
)
```

## Next step
Try the integration, then keep the policy layer yours.
5D gives you a portable policy layer you can run locally, keep provider-neutral, and hand off to a user or external review agent when needed.
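One way such a handoff can be wired into a runtime, sketched with hypothetical names (only the allow/review/deny vocabulary comes from 5D; `run_tool` and `ask_reviewer` are placeholders you would supply):

```python
# Hypothetical dispatcher over the three normalized outcomes.
def dispatch(decision: str, run_tool, ask_reviewer):
    if decision == "deny":
        # Hard stop: the call never runs.
        raise PermissionError("blocked by policy")
    if decision == "review":
        # Hand off to a user or an external review agent before running.
        return run_tool() if ask_reviewer() else None
    # "allow": execute immediately (the evaluation is already logged).
    return run_tool()
```

Because the decision vocabulary is provider-neutral, the same dispatcher works whether the reviewer is a human in a CLI prompt or another agent behind an API.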
Open source under Apache-2.0 and provided as-is. You are responsible for review, testing, configuration, sandboxing, and deployment in your own environment.