Why DeepClaw uses a push model
Most monitoring dashboards start with a pull model: the dashboard connects to your server, asks for metrics, and repeats that forever. That works when you already have a private network, monitoring agents, firewall rules, and a security team that enjoys debugging tunnels.
DeepClaw is built for a different reality: small teams running OpenClaw on VPS machines, shipping fast, and trying not to create a new attack surface just to understand what their AI gateway is doing.
Pull monitoring adds operational risk
A pull-based dashboard needs some path into the machine it observes. That usually means one of three things:
- exposing a public metrics endpoint,
- maintaining a VPN or tunnel,
- or giving a third-party service credentials that can reach internal infrastructure.
Each option can be made safe, but none of them are free. They add setup work, failure modes, and another thing to rotate when access changes.
For AI gateway monitoring, that tradeoff is often unnecessary. The data DeepClaw needs is already known locally: sessions, models, token counts, cron status, cost estimates, and basic health signals. The gateway can summarize that state and send it out.
Push keeps the VPS private
DeepClaw uses a push model. A small sync agent runs on the VPS and periodically sends a snapshot to the DeepClaw API over HTTPS.
export DEEPCLAW_API_URL="https://app.deep-claw.com"
export DEEPCLAW_SYNC_TOKEN="dc_..."
python3 sync-to-deepclaw.py

The dashboard never opens a connection back into the VPS. There are no inbound firewall changes. If the agent stops, DeepClaw marks the instance stale instead of trying to reach into your machine.
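The agent itself can be very small. Here is a minimal sketch of that push loop in Python; the `/api/sync` path and the `collect_snapshot()` contents are illustrative assumptions, not DeepClaw's actual schema:

```python
import json
import os
import time
import urllib.request

# Read the same environment variables the install step exports.
API_URL = os.environ.get("DEEPCLAW_API_URL", "https://app.deep-claw.com")
SYNC_TOKEN = os.environ.get("DEEPCLAW_SYNC_TOKEN", "dc_...")


def collect_snapshot():
    # Hypothetical: gather state the gateway already knows locally.
    return {
        "timestamp": int(time.time()),
        "sessions": {"active": 3, "failed": 0},
        "models": ["claude-sonnet"],
        "token_counts": {"input": 120000, "output": 34000},
    }


def push_snapshot(snapshot):
    # Outbound HTTPS only: the VPS initiates every connection.
    req = urllib.request.Request(
        f"{API_URL}/api/sync",  # assumed endpoint path
        data=json.dumps(snapshot).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {SYNC_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status
```

Run under cron or a systemd timer, this is the entire footprint on the VPS: read local state, POST it out, exit.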
That makes the security boundary easier to reason about:
- the VPS initiates outbound HTTPS only,
- the token is scoped to one instance,
- revoking the token cuts off future syncs,
- and DeepClaw cannot execute commands on the gateway.
What gets synced
The sync payload is intentionally operational, not omniscient. DeepClaw cares about the questions an operator asks every day:
- Is the gateway alive?
- Which models are being used?
- Are sessions growing or failing?
- Are costs moving in the wrong direction?
- Did scheduled jobs run recently?
- Is there anything suspicious in the surface-level health signals?
That gives you the dashboard view without turning monitoring into remote administration.
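Concretely, a snapshot answering those questions might look like the following. Every field name here is illustrative, not DeepClaw's actual payload format:

```python
# Illustrative snapshot shape -- field names are assumptions, not DeepClaw's schema.
snapshot = {
    "gateway": {"alive": True, "uptime_seconds": 86400},
    "models": {"claude-sonnet": {"requests": 412}},
    "sessions": {"active": 7, "failed_last_hour": 1},
    "costs": {"estimated_usd_today": 4.20},
    "cron": {"last_run": "2024-01-01T12:00:00Z", "status": "ok"},
    "health": {"disk_free_pct": 63, "load_1m": 0.4},
}
```

Note what is absent: no shell access, no file contents, no credentials. The payload describes the gateway; it cannot control it.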
The tradeoff
Push monitoring is not magic. If the VPS has no outbound network, DeepClaw cannot see new data. If the sync token is wrong, the dashboard goes stale. If you need second-by-second metrics, a five-minute cron interval is the wrong tool.
But for most OpenClaw deployments, the tradeoff is right: simple install, low privilege, no inbound ports, and enough observability to catch problems before users notice.
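That five-minute interval maps onto a single crontab entry. The paths below are assumptions about where an operator might place the script and its environment file:

```shell
# Push a snapshot every five minutes; paths are illustrative.
*/5 * * * * . /etc/deepclaw/env && python3 /opt/deepclaw/sync-to-deepclaw.py >> /var/log/deepclaw-sync.log 2>&1
```

If you need finer granularity than cron offers, a systemd timer or a long-running loop with a sleep is the usual next step, but at that point it is worth asking whether push monitoring is still the right tool.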
The principle
DeepClaw should make AI operations clearer without making infrastructure more fragile. Push-based monitoring is one of the ways we keep that promise.