When people talk about hallucinations, they often jump straight to prompting. Better instructions. Stronger guardrails. More elaborate system messages.
That is usually the wrong layer.
If a data path requires fidelity, the safest design is not to ask the model to behave but to keep the model from controlling that path at all.
This is an architectural choice, not a rhetorical one. Code retrieves the record. Code formats the record. The model touches only the parts of the interaction that actually benefit from language generation.
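A minimal sketch of that split, with all names hypothetical: code owns the canonical record and interpolates its fields verbatim, while the model contributes only the conversational wrapper. The model never sees or emits the structured data, so it has nothing to fabricate.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CalendarItem:
    # Canonical record, owned and retrieved by code — never generated.
    title: str
    start: str  # ISO 8601 string, formatted upstream by code


def render_reminder(item: CalendarItem, llm_flourish: str) -> str:
    """Code controls the data path: the record's fields are inserted
    verbatim by string interpolation. The model supplies only the
    free-text flourish, so it cannot invent a title or a time."""
    return f"{llm_flourish}\nEvent: {item.title}\nWhen: {item.start}"


# The model generates only the wrapper text (stubbed here for the sketch).
item = CalendarItem(title="Design review", start="2024-05-01T10:00")
print(render_reminder(item, "Heads up, you have something coming up:"))
```

The design choice is that `render_reminder` takes the record as a typed argument rather than asking the model to restate it: a hallucinated flourish is a tone problem, while a hallucinated record would be a correctness problem.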
Once you work this way, a lot of AI safety advice starts to look backwards. The central question is not “how do I convince the model not to invent things?” It is “why did I let the model near this data path in the first place?”
Trust should come from the system shape. If the model cannot fabricate a calendar item, project name, or watchlist entry because it never owns those objects, then honesty stops being a fragile behavioural hope and becomes part of the architecture itself.
That is the bar I care about. Especially for autonomous systems, prompts are not governance. System design is.