The chatter around coding agents is getting repetitive for a reason.

People want agents to run longer. They do not want to babysit every read, grep, or harmless ls. But they also do not want to wake up to an agent that restarted a service, rewrote a file tree, or sprayed data across the network because the tool had permission and the prompt sounded confident.

That tension is real.

Approval fatigue is what happens when the system treats low-risk and high-risk actions like they deserve the same human interruption. Once that happens, the user does one of three things:

  1. starts clicking through without thinking
  2. turns the approval system off
  3. stops using the longer loop entirely

None of those are good outcomes.

The mistake is usually architectural. Teams treat approval as a UI problem instead of a runtime policy problem.

The question is not “should the agent ask the user for approval?” The question is:

Which actions deserve interruption, which actions deserve logging only, and which actions should be blocked outright?

That decision belongs at the tool boundary.

If an agent is about to run a shell command, write files, call an external API, touch credentials, or trigger an irreversible side effect, you need a layer that evaluates that action before it lands. Not a stronger system prompt. Not another warning sentence in a markdown file. A real decision point.
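As a sketch, that decision point can be as small as a function that maps a proposed action to one of three outcomes: allow, review, or block. The patterns and tiers below are illustrative assumptions for the sake of the example, not any engine's actual rules:

```python
import re
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"    # proceed, log only
    REVIEW = "review"  # pause and ask the human
    BLOCK = "block"    # refuse outright

# Illustrative patterns only -- a real deployment would load these from config.
BLOCKED = [r"rm\s+-rf\s+/", r"curl\b.*\|\s*(ba)?sh"]
RISKY = [r"^git\s+push\s+--force", r"^systemctl\s+restart", r"\.env\b", r"secrets?"]
SAFE = [r"^ls\b", r"^cat\b", r"^grep\b", r"^find\b"]

def evaluate(command: str) -> Verdict:
    """Score a shell action before it lands: block the foot-guns,
    route risky actions to review, let harmless reads through."""
    for pat in BLOCKED:
        if re.search(pat, command):
            return Verdict.BLOCK
    for pat in RISKY:
        if re.search(pat, command):
            return Verdict.REVIEW
    for pat in SAFE:
        if re.match(pat, command):
            return Verdict.ALLOW
    # Anything unrecognized defaults to review, not allow.
    return Verdict.REVIEW

print(evaluate("ls -la src/"))             # Verdict.ALLOW
print(evaluate("git push --force origin")) # Verdict.REVIEW
print(evaluate("rm -rf /"))                # Verdict.BLOCK
```

Note the default at the bottom: an action the policy has never seen gets routed to review, because "unknown" is not the same thing as "safe."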

That is the narrow reason I built 5D Runtime Policy Engine.

It does not try to be the entire governance stack. It does one job: score the action, let safe ones through, route risky ones to review, block the obvious foot-guns, and keep the log.

That is the missing layer in a lot of agent setups right now.

The approval problem is not binary. Agents do not need “always ask” or “never ask.” They need gradients.

A harmless read should not burn human attention.

A service restart, force push, credential touch, or external write should not be treated like a harmless read.

And if the same recurring action keeps getting approved, the system should be able to learn that pattern if the user explicitly enables it. Otherwise, the loop never gets better. It just gets more annoying.
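That opt-in learning can be sketched as a counter over action signatures: after enough explicit approvals of the same signature, the action is promoted to auto-approve, but only when the user has turned learning on. The class name, signature format, and threshold here are all hypothetical:

```python
from collections import Counter

class ApprovalMemory:
    """Promote a recurring, repeatedly-approved action to auto-approve,
    but only when the user has explicitly opted in. The threshold of 3
    is an illustrative assumption."""

    def __init__(self, learning_enabled: bool = False, threshold: int = 3):
        self.learning_enabled = learning_enabled
        self.threshold = threshold
        self.approvals = Counter()

    def record_approval(self, signature: str) -> None:
        """Called each time the human approves this exact action."""
        self.approvals[signature] += 1

    def should_auto_approve(self, signature: str) -> bool:
        if not self.learning_enabled:
            return False  # off by default: no silent policy drift
        return self.approvals[signature] >= self.threshold

mem = ApprovalMemory(learning_enabled=True)
for _ in range(3):
    mem.record_approval("shell:npm test")

print(mem.should_auto_approve("shell:npm test"))          # True
print(mem.should_auto_approve("shell:git push --force"))  # False
```

The important design choice is the default: with `learning_enabled=False`, repeated approvals change nothing, so the loop never quietly widens its own permissions.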

This is why I think the next wave of serious agent tooling will look less like “more intelligent assistants” and more like better runtime control around execution.

The model is not the only risk surface.

The tool call is.

If you are already feeling this pain, start there:

  • put policy in front of shell and write actions
  • log the outcome
  • make review a first-class path
  • stop interrupting the user for low-risk noise

That is what gets agents further than a toy loop without turning them into a liability.