Techo.ai
Engine

OpenClaw cost model: what you actually pay for.

The engine is free. The bill is not. Five real lines — model, hosting, tools, storage, people — sized for 2026.

Techo.ai team
Published May 1, 2026
Reading time 9 min

OpenClaw is open source, so the engine itself costs nothing. What does cost money is everything an agent leans on once it is running: model tokens, hosting, third-party tools, storage and traces, and operator time. This piece walks through the five real lines on an OpenClaw bill in 2026, with three sample budgets at the end so the numbers stop being abstract.

§The one-line answer

Short version
OpenClaw pricing is not a sticker price; it is a stack of five line items.
Engine is free. Model usage is the dominant line. Hosting is a real choice between a platform sprint and a managed perimeter. Tools, storage and operator time round out the bill. Most teams underestimate operator time and overestimate engine cost.

The instinct with an agent runtime is to ask "what does it cost per month?" The more useful question is "what does the bill look like once it is running?" The answer has five parts, each behaving differently as the agent grows.

01. Five lines on the bill

An OpenClaw deployment in production almost always sums to the same five-item invoice, regardless of whether the agent is a personal concierge, an internal ops bot, or a customer-facing assistant.

1. Model usage

Tokens consumed by the planner, the tool-call reasoning, and any in-loop summarisation. Variable, dominated by traffic, and the largest line on the bill in most setups.

2. Hosting

Servers and managed services that run the engine, memory store, sandboxes, MCP registry, and observability. Either fixed (self-host) or subscription (hosted OpenClaw).

3. Tools & integrations

Third-party APIs the agent calls — search, browsing, calendars, payments, vendor SaaS. Per-call or per-seat, often invisible until volume picks up.

4. Storage & traces

Persistent memory, vector indexes, and the trace log of every tool call and model exchange. Cheap when you sample, expensive when you keep everything forever.

5. People time

The platform engineer who keeps the runtime healthy: tuning prompts, rotating keys, reviewing failures, watching drift after a model upgrade. The line teams forget to price.

Three of the five (model, tools, storage) are usage-driven and grow with traffic. Two (hosting, people) are fixed-ish and grow with complexity rather than volume. A useful budget separates the two on day one. For the engine itself, the primer on what is OpenClaw is the cleanest read.
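The split matters because the two halves forecast differently. A toy model makes it concrete; every per-task rate and fixed figure below is an illustrative placeholder, not OpenClaw or vendor pricing:

```python
# Toy budget model: usage-driven lines (model, tools, storage) scale
# with traffic; fixed-ish lines (hosting, people) do not.
# All rates below are hypothetical placeholders.

def monthly_bill(tasks_per_day: int,
                 model_cost_per_task: float,
                 tool_cost_per_task: float,
                 storage_cost_per_task: float,
                 hosting_fixed: float,
                 people_fixed: float) -> dict:
    """Split a month's agent budget into usage-driven and fixed-ish totals."""
    days = 30
    usage = tasks_per_day * days * (
        model_cost_per_task + tool_cost_per_task + storage_cost_per_task
    )
    fixed = hosting_fixed + people_fixed
    return {"usage_driven": round(usage, 2),
            "fixed_ish": round(fixed, 2),
            "total": round(usage + fixed, 2)}

# Example: 600 tasks/day at hypothetical per-task rates, on a hosted
# setup so the people line is folded into the fixed figure.
bill = monthly_bill(600, model_cost_per_task=0.03, tool_cost_per_task=0.01,
                    storage_cost_per_task=0.002, hosting_fixed=120.0,
                    people_fixed=0.0)
print(bill)  # → {'usage_driven': 756.0, 'fixed_ish': 120.0, 'total': 876.0}
```

Doubling traffic doubles the first term and leaves the second untouched, which is why the two halves belong on separate budget lines from day one.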

02. Model usage — the dominant line

Almost every OpenClaw bill in 2026 is dominated by frontier model usage. The reason is structural. A multi-step OpenClaw task is not a single prompt and a single completion; the planner thinks, picks a tool, observes the result, and decides what to do next. A typical task makes between two and five model round-trips, and a busy consumer agent runs thousands of those a day.

The lever that moves the bill the most is model routing. The planner step deserves a frontier model; many intermediate steps — classifying a tool result, drafting a confirmation, summarising a document — do not. OpenClaw treats the model as a configuration, so routing easy steps to a cheaper model is a configuration change, not a rewrite. Teams that get this right typically drop model spend by thirty to sixty per cent with no change in agent behaviour.
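A sketch of why routing moves the bill so much: send only the planner step to the frontier model and everything else to a cheaper one. The per-token prices and step mix below are made up for illustration; they are not any vendor's rates.

```python
# Per-task model routing: the planner step stays on a frontier model,
# easy intermediate steps go to a cheaper one. Prices per 1k tokens
# are hypothetical.
FRONTIER_PER_1K = 0.010
CHEAP_PER_1K = 0.001

def task_cost(steps, route=True):
    """steps: list of (kind, tokens). Only the 'plan' step needs the frontier."""
    total = 0.0
    for kind, tokens in steps:
        use_frontier = (kind == "plan") or not route
        rate = FRONTIER_PER_1K if use_frontier else CHEAP_PER_1K
        total += tokens / 1000 * rate
    return total

# A typical four-round-trip task: one plan, three easy steps.
steps = [("plan", 4000), ("classify", 1500), ("summarise", 2500), ("draft", 2000)]
unrouted = task_cost(steps, route=False)  # everything on the frontier model
routed = task_cost(steps, route=True)     # easy steps on the cheap model
print(f"saving: {1 - routed / unrouted:.0%}")  # → saving: 54%
```

With this made-up step mix the saving lands inside the thirty-to-sixty-per-cent band described above; the exact number depends entirely on how many steps are genuinely easy.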

The second lever is context discipline. Long context windows are tempting and expensive; an agent that drags every prior turn into every step pays for it on every call. Memory done well means the agent retrieves what it needs, not the entire history.
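The cost of poor context discipline compounds because each new turn re-pays for all the previous ones. A rough comparison, with illustrative token counts:

```python
# Input tokens paid across a conversation: dragging full history into
# every step grows quadratically with turns; retrieving a bounded
# slice keeps it linear. All token counts are illustrative.

def input_tokens(turns, tokens_per_turn=800, retrieved_slice=2400,
                 full_history=False):
    """Total input tokens billed across `turns` agent steps."""
    total = 0
    for t in range(1, turns + 1):
        history = t * tokens_per_turn
        if not full_history:
            history = min(history, retrieved_slice)  # bounded retrieval
        total += history
    return total

print(input_tokens(20, full_history=True))   # → 168000
print(input_tokens(20, full_history=False))  # → 45600
```

At twenty turns the full-history agent pays for more than three times the input tokens of the retrieval-disciplined one, and the gap widens with every further turn.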

The honest rule of thumb
If model usage is more than three quarters of your bill, you have not optimised routing. If it is less than half, either you are not using it enough or your tool-call costs are running away.

03. Hosting — engine vs managed perimeter

OpenClaw is a runtime that has to live somewhere. The choice is between standing it up yourself and using a hosted product that wraps the engine in a managed perimeter.

Self-hosting. Provision a small cluster, run the engine, attach a memory store, set up sandboxes, register MCP tools, wire observability. The cloud bill is modest — early-production OpenClaw fits on a few hundred pounds a month of compute and storage — but the platform sprint to get there is real.

Hosted OpenClaw. The engine is the same; the operations layer is rented. Memory, sandboxes, MCP registry, routing, observability, scheduled tasks and quotas arrive as a managed product. The piece on Techo as OpenClaw hosting walks through what that perimeter typically includes.

Neither option is universally cheaper. Self-hosting wins with a strong platform team or sensitive data that cannot leave the perimeter. Hosted wins when the team would rather spend its engineering capital on product behaviour. The honest comparison shows up only after the people line below is priced.

04. Tools and integrations — the line that creeps

Every tool the agent calls sends an invoice somewhere. Search APIs, browsing services, payment gateways, vendor SaaS, custom internal endpoints — each with its own pricing model. On day one the line is small enough to ignore. By the time the agent has six integrations and a thousand active users a day, it can become the second-largest item on the bill.

The pattern is the same as with model usage: routing matters more than picking the cheapest provider. Most tasks need cheap, high-volume tools for the bulk of their steps and reach for the expensive one only when a step actually requires it. OpenClaw exposes tools as typed contracts, so the choice between a premium browsing service and a cheap fetch is configuration, not code.

The two practical safeguards are quotas and audit. Quotas cap how many calls a single tool can absorb in a day; audit makes runaway usage visible early. The piece on OpenClaw vs Claude Code covers the operational difference between agents that have these guardrails and ones that do not.
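A daily quota cap is a few lines of code wherever tool calls are dispatched. A minimal sketch, assuming nothing about OpenClaw's actual API; tool names and limits are hypothetical:

```python
from collections import defaultdict

class ToolQuota:
    """Per-tool daily call caps; reset `used` on a daily schedule."""

    def __init__(self, daily_limits):
        self.limits = daily_limits      # e.g. {"premium_browse": 200}
        self.used = defaultdict(int)    # calls made so far today

    def allow(self, tool: str) -> bool:
        """Count the call and return True if the tool is under quota."""
        limit = self.limits.get(tool)   # tools without a limit are uncapped
        if limit is not None and self.used[tool] >= limit:
            return False                # over quota: fall back to a cheap tool
        self.used[tool] += 1
        return True

quota = ToolQuota({"premium_browse": 2})
print([quota.allow("premium_browse") for _ in range(3)])  # → [True, True, False]
```

The refusal is also the audit hook: log every `False` and runaway usage shows up the same day, not on the invoice.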

05. Storage and traces — cheap if you sample

The storage line covers two things: persistent memory (preferences, prior runs) and traces (the log of every tool call, every model exchange, every plan revision).

Memory is rarely a meaningful cost in itself. A consumer agent's memory store, even at a hundred thousand active users, fits inside a small managed database; the vector index attached to it sits comfortably on a single mid-sized instance.

Traces are different. A trace-heavy agent that retains every event at full resolution for ninety days will burn more on log storage than on memory itself. The lever is sampling and retention. Keep full fidelity for a short window — seventy-two hours is usually enough to debug anything urgent — downsample to summaries for the next month, and archive coldly beyond that. Storage should be a small minority of the bill; if it is not, retention policy is doing the cost-cutting work that should have been done up front.
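The tiering turns into simple arithmetic. Event sizes and volumes below are illustrative placeholders, not measured figures:

```python
# Steady-state trace storage under a tiered retention policy:
# full-fidelity events for a short window, summaries for a month,
# cold archive beyond (archive excluded here). Sizes are illustrative.

def trace_storage_gb(events_per_day, kb_full=4.0, kb_summary=0.2,
                     full_days=3, summary_days=30):
    """GB held at steady state for the hot and summary tiers."""
    full_tier = events_per_day * full_days * kb_full / 1_000_000
    summary_tier = events_per_day * summary_days * kb_summary / 1_000_000
    return full_tier + summary_tier

EVENTS = 500_000  # tool calls + model exchanges per day, hypothetical
flat_90 = trace_storage_gb(EVENTS, full_days=90, summary_days=0)  # keep everything
tiered = trace_storage_gb(EVENTS)                                 # 72h + summaries
print(f"{flat_90:.0f} GB flat vs {tiered:.1f} GB tiered")  # → 180 GB flat vs 9.0 GB tiered
```

A twenty-fold reduction from policy alone is why retention, not provider choice, is the storage lever worth arguing about.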

06. People time — the line teams forget to price

The cloud bill is only half the cost of running an agent. The other half is the operator who keeps it honest. Someone has to review failed runs, tune prompts when behaviour drifts after a model upgrade, manage the MCP registry, rotate keys, watch quotas, and explain to colleagues why the agent did what it did on Tuesday afternoon.

On a self-hosted OpenClaw that is roughly half a platform engineer's week, every week, for as long as the agent is in production. At a London market rate that is comfortably north of three thousand pounds a month, before any cloud line items. At enterprise scale with multiple agents and stricter compliance, the line can be a full team.

Hosted OpenClaw absorbs most of this work into the subscription. That is what the price tag pays for: not the engine (free), not the cloud (rentable), but the on-call human who would otherwise have to live in your runtime. Pricing this line at zero is the most common mistake in self-host versus managed comparisons.
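The comparison is only honest when the operator line is priced in pounds. A sketch of the arithmetic; every figure is a hypothetical placeholder, not Techo pricing or a verified market rate:

```python
# Build-vs-buy with the people line priced in.
# All figures are hypothetical placeholders.

def self_host_monthly(cloud=300.0, operator_fte=0.5, fte_monthly_cost=7000.0):
    """Self-host cost per month once operator time is counted."""
    return cloud + operator_fte * fte_monthly_cost

def hosted_monthly(subscription=800.0):
    """Hosted cost: the subscription absorbs most of the operator work."""
    return subscription

print(self_host_monthly())  # → 3800.0  (cloud plus half an engineer)
print(hosted_monthly())     # → 800.0
```

Priced at zero FTE, self-hosting looks like a few hundred pounds; priced honestly it is an order of magnitude more, which is the whole point of this section.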

07. Three sample budgets that match reality

Numbers are easier to reason about with an actual shape. Three rough envelopes for OpenClaw bills in 2026, in pounds per month, on a hosted setup so the people line is folded in:

Profile | Volume | Bill
Solo founder personal agent | ~50 tasks/day, 3 scheduled jobs | £40–120
Startup ops agent (5 operators) | ~600 tasks/day, 12 integrations | £400–900
Customer-facing assistant | ~10k tasks/day, multi-tenant | £3,500–9,000

The bands are wide on purpose. The lower end of each row is what a well-routed, well-sampled agent pays; the upper end is what a non-optimised one pays for the same volume. The same agent at the same load can sit anywhere in that band depending on routing, retention, and tool quotas.

The honest signal
If the lower end of the band feels too cheap, you have not seen a well-tuned agent. If the upper end feels too expensive, you have not yet seen one running at production volume without the basics in place.

Cheatsheet: the OpenClaw bill in one table

A scannable grid for keeping the lines straight:

Line | Shape | Lever
Engine | Free — open source | Pin a stable version
Model usage | Variable, dominant | Route easy steps cheaper
Hosting | Fixed-ish | Self-host vs managed perimeter
Tools | Per-call, creeps | Quotas + cheap-default routing
Storage | Cheap if sampled | Tiered retention policy
People | Half FTE+ | Hosted absorbs most of it
Surprise risk | Tools and traces | Audit early, cap hard
Best-fit profile | Multi-step, multi-tool work | Worth the bill at any size

FAQ

Is OpenClaw free?

The engine itself is open source and free to use. What costs money is everything around it: the model tokens the agent burns, the servers and storage it runs on, the third-party APIs it calls, and the operator time someone spends keeping it healthy. Most teams underestimate the operator line, not the model line.

What is the biggest line on an OpenClaw bill?

For almost every agent in production, frontier model usage dominates. A typical multi-step OpenClaw task makes between two and five model round-trips, and a busy consumer agent can run thousands of those per day. Routing the easy steps to a cheaper model and reserving the frontier for the planner is the single change that moves the bill the most.

Should I self-host OpenClaw or use hosted OpenClaw?

Self-hosting wins when you have a strong platform team, sensitive data that cannot leave your perimeter, or a need to pin specific versions. Hosted OpenClaw wins when you would rather skip the platform sprint, ship product, and pay a predictable subscription instead of carrying the operations work in-house. Both are legitimate; the answer depends on where the team's leverage actually is.

How much does a small OpenClaw agent cost per month?

A solo-founder personal agent running a handful of scheduled jobs and a few dozen ad-hoc tasks a day typically lands between forty and one hundred and twenty pounds a month, almost all of it model usage. A startup ops agent with five operators behind it sits in the four-hundred to nine-hundred pound band. Enterprise customer-facing agents move into thousands once volume is real.

Are storage and traces a meaningful cost?

Storage is rarely the headline; traces, when retained at high resolution for long windows, can be. A trace-heavy agent that keeps every tool call and every model exchange for ninety days will burn more on log storage than on memory itself. The lever is sampling and retention policy: keep full fidelity for a short window, downsample beyond that, and archive coldly.

What is the hidden line most teams miss?

Operator time. Someone has to keep the agent honest: review failed runs, tune prompts, manage the MCP registry, rotate keys, watch for drift after a model upgrade. On a self-hosted setup that is half a platform engineer's week, every week. Hosted OpenClaw absorbs most of it; self-hosting does not. Pricing this line in pounds per month is what makes the build-versus-buy decision honest.

§Where Techo fits

Techo is built on OpenClaw. It is a productised, ready-to-use OpenClaw with the operations layer already in place: managed memory, sandboxes, MCP registry, model routing, observability, scheduled tasks, quotas. The engine underneath is the same open-source OpenClaw; what Techo adds is the perimeter, the routing defaults, and the on-call human you would otherwise have to hire.

For a team weighing the bill above, the practical question is whether the platform sprint and the half-FTE operator line are work you actually want to own. If they are, self-hosting OpenClaw is a clean, well-trodden path. If they are not, hosted OpenClaw via Techo compresses the operations layer into a subscription. Either way, the engine is the same; the question is only where the operations work lives.

The OpenClaw bill is not five mystery line items; it is five legible ones. Once each is named and priced, the build-versus-buy conversation gets short.

Teams that get the OpenClaw cost model wrong on the first pass do so because they price the engine and forget everything around it. What you actually pay for is the loop the engine runs, the perimeter that keeps it safe, and the human who keeps both honest. Once those are sized, the agent is a budget item like any other.

Tags
#OpenClawPricing #OpenClawCost #OpenClaw #AIAgent #AgentRuntime #ChatGPTAgents #ClaudeCode #AgentInfrastructure #AIBudget2026 #Techo
Techo.ai team · Techo.ai

The team behind Techo — building hosted OpenClaw, running growth experiments, and writing about the mechanics of AI agents that have to behave well in the real world.


