2026-03-06

OpenClaw prompt injection defences on Hetzner: practical guardrails for browser and tool workflows

A practical SetupClaw defence model for prompt injection: trust-based route segmentation, constrained tool permissions, approval checkpoints, and safe execute-to-assist fallback behaviour.

Want this set up for you?
Basic Setup is £249 (24–48h). Email alex@clawsetup.co.uk.

Abstract: Prompt injection is one of the most practical risks in OpenClaw operations, especially when browser and tool actions can change real systems. The fix is not one magic prompt; it is layered control: route trust boundaries, constrained tool permissions, approval checkpoints, and reliable fallback behaviour when content looks unsafe. This guide gives a SetupClaw-ready defensive model for teams running OpenClaw on Hetzner under Basic Setup.

If you run OpenClaw against live websites and production tools, the problem is not only "can the assistant do this task?" The real problem is "what happens when a webpage or message tries to steer the assistant into unsafe actions?"

That is prompt injection in plain terms. Untrusted content contains instructions that try to override your intended workflow.

And this is why prompt injection is not a model-quality debate. It is an operations and governance problem.

Start with one mindset shift

Most teams treat prompts as instructions. In production, you should also treat prompts and page content as input from mixed trust levels.

Some input is trusted, like your own runbook commands from a private operator route. Some is low-trust, like arbitrary text scraped from the web or pasted from group chats. If both are treated the same, your blast radius is too large.

Trust classification is the primary operational defence, complemented by model/provider safety controls and explicit permission boundaries.

Route by trust before execution

A practical SetupClaw pattern is route segmentation.

Low-trust routes (group channels, broad support contexts, web-sourced tasks) should map to constrained agents with limited tool scope. High-trust private routes can access stronger capabilities, but still with approvals for risky actions.

This design blocks a common failure: low-trust content triggering high-impact tool chains.
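A minimal sketch of that routing, assuming illustrative route names and capability sets (none of these are OpenClaw APIs). The key property is that unknown routes default to the lowest trust tier:

```python
# Map inbound routes to trust tiers, and trust tiers to tool capabilities.
# Route names and capability names are illustrative placeholders.

TRUST_TIERS = {
    "private_ops": "high",    # operator runbook commands
    "team_group": "medium",   # internal group channels
    "web_task": "low",        # web-sourced or pasted content
}

AGENT_CAPS = {
    "high": {"browser", "shell", "repo_write"},  # still approval-gated
    "medium": {"browser", "repo_read"},
    "low": {"browser_readonly"},
}

def capabilities_for_route(route: str) -> set[str]:
    """Resolve the tool scope for an inbound route.
    Unknown routes fall through to the lowest trust tier."""
    tier = TRUST_TIERS.get(route, "low")
    return AGENT_CAPS[tier]
```

Defaulting unmapped routes to "low" means a forgotten route classification fails closed rather than open.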

Keep browser automation in bounded modes

Browser workflows are useful and risky because they interact with dynamic, untrusted content.

Use bounded operating modes. In execute mode, actions are limited to pre-approved workflows. In assist mode, triggered when content is ambiguous or potentially malicious, the assistant gathers context, proposes next steps, and requests human confirmation.

Graceful fallback is safer than pretending every run should stay fully autonomous.
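The execute/assist switch can be expressed as a small decision function. This is a sketch under assumptions: the approved-action list and the suspicion heuristic are placeholders, and a real deployment would use richer content checks than substring matching:

```python
# Bounded-mode switch: stay in execute mode only for pre-approved actions
# on clean content; otherwise fall back to assist mode for human review.

APPROVED_ACTIONS = {"open_dashboard", "export_report"}

# Crude illustrative markers of injected instructions in page content.
SUSPICIOUS_MARKERS = ("ignore previous instructions", "disregard your rules")

def choose_mode(action: str, page_text: str) -> str:
    text = page_text.lower()
    if any(marker in text for marker in SUSPICIOUS_MARKERS):
        return "assist"   # untrusted content tried to steer the run
    if action not in APPROVED_ACTIONS:
        return "assist"   # not a pre-approved workflow
    return "execute"
```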

Apply tool permissions like firewall rules

Tool access should be explicit and minimal.

If an agent does not need shell execution, do not grant it. If it does not need repository write access, keep it read-only. If it can propose changes, route those changes through PR-only review workflows.

Prompt injection becomes far more expensive for an attacker when permission scope is narrow.
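The firewall analogy suggests a default-deny check: a tool call passes only if it is explicitly allowlisted for the role. A minimal sketch, with illustrative role and tool names:

```python
# Firewall-style permission check: default deny, explicit allow per role.
# Role and tool names are illustrative placeholders.

ROLE_ALLOWLIST = {
    "support_agent": {"browser_readonly"},
    "dev_agent": {"repo_read", "pr_propose"},  # propose-only, no direct write
}

def is_allowed(role: str, tool: str) -> bool:
    """Default-deny: the call passes only if explicitly allowlisted."""
    return tool in ROLE_ALLOWLIST.get(role, set())
```

An unknown role gets an empty allowlist, so misconfiguration denies rather than grants.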

Require human checkpoints for high-impact actions

High-impact actions should never be one ambiguous prompt away.

Infrastructure changes, token rotations, privileged browser actions, and repository-modifying operations should require explicit approval checkpoints.

Concrete examples:

  • credential/token rotation
  • infra/network configuration changes
  • repository/config writes
  • destructive browser actions (delete, submit, payment)

This is where many incidents are prevented, not in token-level prompt filtering. Approvals add a small amount of friction and a large amount of containment value.

Validate state before and after actions

Prompt injection often causes partial, misleading success.

Add precondition and postcondition checks around important actions. Confirm you are on the expected page, expected session, expected target. Confirm action outcomes match intended constraints.

Post-action pass criteria should be explicit:

  • expected target/session confirmed
  • intended action result verified
  • no policy boundary violations in logs
  • Telegram policy checks remain unchanged

State verification catches unsafe drift early, before side effects propagate.
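The pass criteria above can be wired up as explicit check functions around each action. A minimal sketch, assuming plain dicts describe observed state; the field names and the admin URL are placeholders:

```python
# Pre/postcondition sketch around a browser action.
# State dict field names and the admin URL are illustrative.

def preconditions_ok(state: dict) -> bool:
    """Confirm expected page and session before acting."""
    return (
        state.get("url", "").startswith("https://admin.example.com")
        and state.get("session_user") == state.get("expected_user")
    )

def postconditions_ok(result: dict) -> bool:
    """Confirm the outcome matches the intended target and no
    policy boundaries were crossed."""
    return (
        result.get("status") == "success"
        and result.get("target") == result.get("expected_target")
        and not result.get("policy_violations")
    )
```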

Treat deterministic gates as escalation triggers

Some failure signals are not retry problems. They are policy signals.

CAPTCHA challenges, MFA interruptions, policy prompts, and unexpected permission dialogs should trigger escalation, not infinite retries. Bounded retries are for transient failures only.

Practical stop conditions:

  • maximum retry count for transient failures
  • immediate escalation on CAPTCHA/MFA/policy dialogs
  • no autonomous continuation without confirmation

Reliability improves when the system knows when to stop.
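The stop conditions above reduce to a small decision function: gate signals escalate immediately, transient errors retry up to a bound, and everything else escalates. Signal names and the retry limit are illustrative:

```python
# Stop-condition sketch: transient errors get bounded retries; gate
# signals (CAPTCHA, MFA, policy dialogs) escalate immediately.

MAX_RETRIES = 3
GATE_SIGNALS = {"captcha", "mfa_prompt", "policy_dialog", "permission_dialog"}

def next_step(signal: str, attempt: int) -> str:
    if signal in GATE_SIGNALS:
        return "escalate"   # policy signal: never retried autonomously
    if signal == "transient_error" and attempt < MAX_RETRIES:
        return "retry"
    return "escalate"       # retries exhausted or unknown signal
```

Treating the unknown case as "escalate" matches the no-autonomous-continuation rule: anything unclassified stops for confirmation.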

Keep Telegram governance strict during incidents

When pressure rises, teams often loosen channel policy to "fix quickly." That can create a second incident.

Maintain allowlists, mention-gating, and route boundaries during response. Use Telegram for escalation and confirmations, but do not widen who can trigger privileged paths just because an issue is active.

Incident mode should preserve boundaries, not erase them.
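Allowlisting plus mention-gating can be checked in one place before any privileged path fires. A sketch with placeholder IDs and a placeholder bot handle; a real bot would read these from configuration:

```python
# Strict channel gating that stays on during incidents: only allowlisted
# operators, in allowlisted chats, explicitly mentioning the bot, can
# trigger privileged paths. IDs and handle are placeholders.

OPERATOR_IDS = {1001, 1002}
ALLOWED_CHATS = {-500100}
BOT_HANDLE = "@ops_bot"

def may_trigger_privileged(user_id: int, chat_id: int, text: str) -> bool:
    return (
        user_id in OPERATOR_IDS
        and chat_id in ALLOWED_CHATS
        and BOT_HANDLE in text
    )
```

Because the sets are static here, "loosening policy during an incident" would require a reviewed config change, not an in-chat decision.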

Protect memory from low-trust contamination

Durable memory should not absorb every untrusted instruction it sees.

Restrict long-term memory writes to trusted roles and reviewed workflows. Lower-trust agents can read scoped context but should not freely persist sensitive policy or operational decisions.

This keeps future retrieval quality high and reduces persistence of injected noise.
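A memory gate can mirror the tool-permission pattern: writes are restricted to trusted roles, and low-trust reads are limited to non-sensitive scope. Role and scope names here are illustrative:

```python
# Memory-write gate sketch: only trusted roles persist to durable memory;
# low-trust agents get read-only, scoped context.

MEMORY_WRITERS = {"operator", "reviewed_workflow"}

def can_write_memory(role: str) -> bool:
    return role in MEMORY_WRITERS

def can_read_memory(role: str, scope: str) -> bool:
    """Trusted roles read everything; others read public context only."""
    if role in MEMORY_WRITERS:
        return True
    return scope == "public_context"
```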

Log policy exceptions and high-risk invocations

If governance events are not visible, repeat incidents are likely.

Record route-to-agent mapping decisions, escalation events, high-risk tool calls, and policy exceptions. Keep these logs and summaries in runbooks so teams can review patterns and tighten controls over time.

Defensive posture improves through auditability, not through one-off tuning.
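Structured, append-only log lines make those reviews practical. A minimal sketch that emits one JSON line per governance event; the field names are illustrative:

```python
# Governance-log sketch: one JSON line per event (policy exception,
# high-risk tool call, escalation), so patterns are easy to review.

import json
import time

def log_event(kind: str, detail: dict) -> str:
    """Serialise a governance event as a single JSON log line."""
    record = {
        "ts": int(time.time()),  # epoch seconds
        "kind": kind,            # e.g. "policy_exception", "high_risk_call"
        **detail,
    }
    return json.dumps(record, sort_keys=True)
```

Sorted keys keep lines diff-friendly, which helps when runbook summaries are reviewed in PRs.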

Practical implementation steps

Step one: create a trust map

Classify input sources (private ops, team groups, web content) by trust level and map each to an agent tier.

Step two: enforce permission segmentation

Define explicit tool scopes per role: read-only, propose-only, execute-with-approval, restricted-manual.

Step three: wire approval checkpoints

Require human confirmation for infra changes, secret actions, browser high-risk flows, and repo-modifying operations.

Step four: add execute/assist fallback behaviour

Switch to assist mode automatically when deterministic gates or suspicious instructions appear.

Step five: implement state assertions

Validate page/session/target preconditions before actions and verify outcomes after actions.

Step six: add review loop

Log exceptions, run quarterly policy reviews, and ship governance changes through PR-reviewed updates.

No defence stack can guarantee zero incidents, especially with compromised trusted accounts or human approval mistakes. What this model does is reduce blast radius and improve containment speed, which is exactly what a practical SetupClaw Basic Setup should deliver in day-to-day operations.

Want this set up for you?
Basic Setup is £249 (24–48h). Email alex@clawsetup.co.uk.