The Three Laws of AI Assistants: Scripts Before Agents, Local Before Cloud, Skills Before New Agents

Kim

The Most Expensive AI Is the One You Didn't Need

We've spent months building OpenClaw — a 13-agent AI system running on a Mac Mini M4 Pro. Along the way, we learned something counterintuitive: the most important skill in AI operations isn't knowing how to use AI. It's knowing when not to.

Every unnecessary cloud call, every over-engineered agent, every problem solved with intelligence when a simple script would do — that's waste. Not just financial waste, though the costs add up. Complexity waste. And complexity is the silent killer of personal AI systems.

Out of all our governance rules, architectural decisions, and optimization strategies, three principles tower above the rest. We call them the Three Laws. Simple to state, hard to follow, and responsible for cutting our costs by roughly sixty percent while making the system more reliable.

Law 1: Scripts Before Agents

If a task doesn't require reasoning, don't give it to a reasoner.

This is the most violated principle in the AI assistant space. People reach for AI to solve problems that a simple automated check handles perfectly. Not because AI does it better — because AI feels more impressive.

The Monitoring Story

Early in OpenClaw's development, we had an agent monitoring system health. Every few minutes, it would check processor usage, memory, disk space, and running processes. It would analyze the data, determine if anything was abnormal, and report its findings.

This worked. It also burned through meaningful cloud budget for what was essentially: is this number above a threshold? Yes or no.

We replaced it with a simple automated check. Same result. Runs in milliseconds. Costs nothing. And it never hallucinates that the system is fine when it isn't.
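
For the curious, here's the shape of the check that replaced the agent. A minimal sketch, not our production script; the thresholds and the alerting hook are illustrative:

    # health_check.py - deterministic system health check
    # (a sketch; thresholds and alerting are illustrative, not our real values)
    import psutil  # pip install psutil

    CPU_LIMIT = 90.0    # percent
    MEM_LIMIT = 85.0    # percent
    DISK_LIMIT = 90.0   # percent

    def check_health() -> list[str]:
        """Return human-readable alerts; an empty list means healthy."""
        alerts = []
        cpu = psutil.cpu_percent(interval=1)      # sample CPU over one second
        mem = psutil.virtual_memory().percent
        disk = psutil.disk_usage("/").percent
        if cpu > CPU_LIMIT:
            alerts.append(f"CPU at {cpu:.0f}% (limit {CPU_LIMIT:.0f}%)")
        if mem > MEM_LIMIT:
            alerts.append(f"Memory at {mem:.0f}% (limit {MEM_LIMIT:.0f}%)")
        if disk > DISK_LIMIT:
            alerts.append(f"Disk at {disk:.0f}% (limit {DISK_LIMIT:.0f}%)")
        return alerts

    if __name__ == "__main__":
        for alert in check_health():
            print(alert)  # swap in your alert channel of choice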

Budget freed up for tasks that actually need intelligence.

Where to Draw the Line

Use a script when:

  - the logic is deterministic (if this, then that)
  - the input format is consistent and structured
  - the output format is known in advance
  - no interpretation or judgment is required
  - the task runs frequently

Use an agent when:

  - the input is ambiguous or unstructured
  - the task requires contextual judgment
  - different situations demand different approaches
  - natural language understanding is genuinely needed
  - the task requires synthesis across multiple information sources

The key word is "genuinely." Most people overestimate how many of their tasks genuinely require AI reasoning. When we audited our own system, we found that a surprising number of agent tasks were essentially sophisticated if/then checks dressed up in natural language.
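
The checklist above is mechanical enough to write down as code. A sketch with hypothetical field names, mostly to make the point that this decision itself doesn't require reasoning:

    # The script-vs-agent checklist as code. TaskProfile fields are
    # hypothetical; the point is that the decision is mechanical.
    from dataclasses import dataclass

    @dataclass
    class TaskProfile:
        deterministic_logic: bool    # if-this-then-that, no judgment
        structured_input: bool       # consistent, parseable input
        known_output_format: bool    # output shape fixed in advance
        needs_interpretation: bool   # requires contextual judgment
        runs_frequently: bool        # high volume strengthens the case

    def should_be_a_script(task: TaskProfile) -> bool:
        # runs_frequently isn't required, it just raises the stakes
        return (
            task.deterministic_logic
            and task.structured_input
            and task.known_output_format
            and not task.needs_interpretation
        )

    # The old health-monitoring task fails the "needs an agent" test:
    monitoring = TaskProfile(True, True, True, False, True)
    assert should_be_a_script(monitoring)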

The Monthly Audit

Once a month, we audit agent tasks and ask: "Could this be a script?" If the answer is yes, we automate it. This simple practice has consistently identified two or three tasks per month that were consuming cloud resources unnecessarily.

The cheapest AI call is the one you don't make. Every task you can handle with a deterministic script is a task that never touches your AI budget.

Law 2: Local Before Cloud

If a task doesn't require frontier-model intelligence, run it for free.

We covered the technical architecture of our hybrid approach in an earlier post. Here, the point is about discipline: it's a governance principle, not just a technical decision.

The Temptation of Quality

Cloud AI models produce beautiful output. Articulate, nuanced, thorough. It's tempting to route everything through the best model because the quality difference is real and visible.

But quality beyond what's needed is waste. A health check that returns "system healthy" doesn't need to be articulate. A log parser that extracts timestamps doesn't need nuance. A message sorter that routes incoming requests to the right agent doesn't need frontier reasoning.

The Natural Distribution

In OpenClaw's steady state, task distribution follows a roughly 60/30/10 pattern. Sixty percent of tasks run on local models for free. Thirty percent use a standard cloud model. Ten percent use the premium cloud model.

This isn't a target we set. It's the natural outcome of honestly evaluating each task's requirements. Most tasks are routine. Some tasks need solid reasoning. Few tasks need the absolute best reasoning available.

How to Evaluate

Three questions determine whether a task can run locally:

Is the output structured and predictable? If yes, local is fine.

Could a competent person handle this with a checklist? If yes, local is fine.

Does anyone outside the system see this output? If no, local is probably fine.

These questions sound reductive, but they work. The checklist heuristic is particularly useful — it separates tasks that require genuine expertise from tasks that require following a procedure.
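
If you want to make the routing mechanical, the three questions translate almost directly into code. A minimal sketch; the task fields and tier names are assumptions, not our actual routing API:

    # The three-question routing heuristic as code. Field names and
    # tiers are illustrative, not OpenClaw internals.
    from dataclasses import dataclass

    @dataclass
    class Task:
        structured_output: bool    # Q1: structured and predictable output?
        checklist_friendly: bool   # Q2: could a person do it with a checklist?
        external_facing: bool      # Q3: does anyone outside the system see it?

    def route(task: Task) -> str:
        if task.structured_output or task.checklist_friendly:
            return "local"
        if not task.external_facing:
            return "local"   # internal-only output tolerates rougher edges
        return "cloud"

    print(route(Task(structured_output=True, checklist_friendly=True,
                     external_facing=False)))   # -> local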

Frugality is a feature, not a limitation. Nobody brags about efficient resource allocation. But everybody respects a system that runs well on a sensible budget.

Law 3: Skills Before New Agents

Before creating a new agent, try giving an existing agent the knowledge it needs.

This law has saved us from agent sprawl — and agent sprawl is how personal AI systems die.

The Proliferation Problem

When you have a multi-agent system, the temptation when facing a new domain is to create a new agent. Need legal analysis? Create a legal agent. Need meal planning? Create a nutrition agent. Need travel booking? Create a travel agent.

Each new agent adds configuration overhead (personality, permissions, routing rules, channel setup), memory overhead (another entity maintaining context), coordination overhead (more inter-agent dependencies), and maintenance overhead (another agent to monitor and update).

We've held at thirteen agents for months — not because there aren't more domains to cover, but because adding skills to existing agents has consistently been the better solution.

Skills vs. Agents

A skill is a knowledge file or procedure added to an existing agent's context. It doesn't create a new entity — it makes an existing entity more capable.

An agent is a new autonomous entity with its own identity, permissions, and operational scope.

Add a skill when:

  - the new domain is related to an existing agent's expertise
  - the task volume is low
  - no special access permissions are needed
  - no specialized communication style is required

Create a new agent when:

  - the domain requires completely different expertise and reasoning patterns
  - the task volume justifies dedicated resources
  - special access permissions are needed for security isolation
  - the agent needs a distinct identity
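
In code terms, the difference is roughly this. A hedged sketch; the types, names, and permissions are hypothetical, not OpenClaw's internals:

    # A skill is data attached to an existing agent; an agent is a new
    # entity with its own identity and permissions. Names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        name: str
        permissions: set[str]
        skills: list[str] = field(default_factory=list)  # knowledge files

        def add_skill(self, knowledge_file: str) -> None:
            """Cheap: extends context, creates nothing new to run or monitor."""
            self.skills.append(knowledge_file)

    kim = Agent("kim", permissions={"email", "calendar"})
    kim.add_skill("skills/contract_review.md")   # Law 3: skill first

    # A new agent, by contrast, means a new identity, new permissions,
    # new routing rules, and one more entity to monitor:
    ledger = Agent("ledger", permissions={"bank_api"})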

A Real Example

When we needed basic legal analysis — contract review, terms-of-service comparison — the instinct was to create a legal agent. Instead, we added legal analysis knowledge and contract review procedures to Kim, the business agent. Kim now handles basic legal tasks as part of business operations. It wasn't worth creating agent number fourteen for a handful of legal tasks per month.

When Skills Graduate to Agents

The finance function started as a skill attached to Kim. But financial tracking grew in scope: daily transaction categorization, budget monitoring, spending alerts, bank integration, monthly reports. The volume and specialization justified graduation to a dedicated agent — Ledger.

The progression should be: skill first, then heavy use, then agent. Never skip straight to agent.

The Thirteen-Agent Ceiling

We've informally capped ourselves at thirteen agents. When someone suggests a fourteenth, the burden of proof is high. Can any existing agent handle this with added skills? (Usually yes.) Is the task volume sufficient? (Usually no.) Would coordination become simpler or more complex? (Usually more complex.) Does this need its own security boundary? (Rarely.)

Two of our current thirteen agents exist primarily because they require security isolation — not because their task volume demanded separate entities. Without the security requirement, they'd be skills on other agents.

The Math Behind the Laws

The Three Laws aren't philosophical. They have concrete financial impact.

Violating Law 1 means running scriptable tasks through cloud AI. Fifty deterministic tasks running daily through a cloud model add up to meaningful monthly spend that should be zero.

Violating Law 2 means routing routine tasks through premium cloud models instead of free local ones. Hundreds of routine tasks per day, even at a tiny per-task cost, compound into significant monthly waste.

Violating Law 3 means carrying overhead for every unnecessary agent: startup costs, monitoring tasks, memory management. Every additional agent that could have been a skill is burning resources to maintain an entity that doesn't need to exist.
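
The arithmetic is worth doing explicitly. With an assumed average of two cents per cloud call (illustrative, not our actual rate):

    # Back-of-the-envelope waste from violating Laws 1 and 2.
    # All prices and volumes here are illustrative assumptions.
    COST_PER_CLOUD_CALL = 0.02   # dollars, assumed average

    # Law 1 violation: 50 scriptable tasks running daily through cloud AI.
    law1_waste = 50 * 30 * COST_PER_CLOUD_CALL    # $30/month that should be $0

    # Law 2 violation: 300 routine tasks/day on cloud instead of free local.
    law2_waste = 300 * 30 * COST_PER_CLOUD_CALL   # $180/month

    print(f"Law 1: ${law1_waste:.0f}/mo, Law 2: ${law2_waste:.0f}/mo")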

Combined, following all three laws saves us roughly sixty percent compared to a naive implementation. Put differently: the Three Laws aren't cutting our budget. They're enabling it. Without them, our current budget wouldn't be enough to run the system.

When to Break the Laws

Rules without exceptions become dogma. Here's when we intentionally violate each law.

Breaking Law 1

When the script keeps getting more complex. If you've added your fifth conditional clause to handle edge cases, the task might actually require reasoning. A script that keeps growing is sometimes a signal that you need an agent, not more conditionals.

Breaking Law 2

When quality failures have consequences. If a local model's errors cause downstream failures in other tasks, the "savings" are illusory. Spending a penny for reliable cloud output is cheaper than debugging cascading failures from a local model's mistakes.

Breaking Law 3

When an agent's context is getting bloated. If an agent has accumulated so many skills that its knowledge base is permanently overloaded, it's time to spin off a dedicated agent. Skill accumulation has a limit — and that limit is the point where adding more knowledge starts degrading existing performance.

Implementing the Laws

For Law 1

Log every agent task for a week and flag which ones could be automated without AI. Script the obvious ones. Set a monthly audit to catch tasks that have become scriptable over time. Measure the savings.
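
The audit itself can be a script. A sketch that assumes a JSONL task log with "task" and "model" fields; the format is hypothetical, so adapt it to whatever you actually log:

    # Weekly audit sketch: flag high-frequency cloud tasks as script
    # candidates. The log format here is an assumption.
    import json
    from collections import Counter

    def audit(log_path: str, min_runs: int = 20) -> list[tuple[str, int]]:
        """Return (task, count) pairs worth reviewing for scripting."""
        counts = Counter()
        with open(log_path) as f:
            for line in f:
                entry = json.loads(line)
                if entry["model"] != "local":   # only cloud spend matters here
                    counts[entry["task"]] += 1
        return [(t, n) for t, n in counts.most_common() if n >= min_runs]

    for task, n in audit("tasks.jsonl"):
        print(f"{task}: {n} cloud runs this week. Could this be a script?")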

For Law 2

Set up local model capability on whatever hardware you have. Classify every task by required intelligence level. Route mechanically — structured tasks to local, everything else to cloud. Monitor quality to ensure local outputs meet minimum standards. Reclassify quarterly, because local models keep getting better.
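
"Route mechanically" can be taken literally: a static table, not a per-task judgment call. The task names and tiers below are illustrative:

    # A static routing table. Unclassified tasks default to local;
    # promote a task only on quality evidence, and revisit quarterly.
    ROUTES = {
        "health_check":    "script",          # Law 1: shouldn't touch AI at all
        "log_summarizer":  "local",
        "message_sorter":  "local",
        "weekly_report":   "cloud-standard",
        "strategy_review": "cloud-premium",
    }

    def tier_for(task_name: str) -> str:
        return ROUTES.get(task_name, "local")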

For Law 3

Default to skills for every new capability request. Track skill usage — if a skill is used heavily, evaluate whether it deserves its own agent. Monitor whether agents are getting overloaded with accumulated skills. Require written justification for new agents with volume projections.
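
Skill tracking can be as simple as a counter with a threshold. A sketch; the threshold is arbitrary, and the real signal is sustained volume, as with Ledger above:

    # Skill-usage tracking for Law 3. The threshold is an illustrative
    # trigger for a graduation review, not an automatic promotion.
    from collections import Counter

    skill_usage = Counter()
    GRADUATION_THRESHOLD = 200   # uses per month, arbitrary

    def record_use(skill: str) -> None:
        skill_usage[skill] += 1
        if skill_usage[skill] == GRADUATION_THRESHOLD:
            print(f"Review: does '{skill}' deserve its own agent?")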

Discipline Over Capability

The AI industry sells capability. Bigger models, more parameters, longer context, faster inference. Capability matters — but capability without discipline is just expensive chaos.

The Three Laws are fundamentally about discipline: don't use the most powerful tool when a simpler one works. Don't pay for what you can get for free. Don't add complexity when you can add knowledge.

Sounds like common sense. In practice, it requires constant vigilance. The pull toward "just use the best AI for everything" is strong. Every new agent feels like progress. Every cloud call feels sophisticated.

The Three Laws push back against that pull. They force you to justify every dollar, every agent, every cloud-routed task. The result is a system that's not just cheaper — it's simpler, more reliable, and easier to maintain.

The AI You Don't Use Is the AI That Saves You

The AI assistant space is dominated by conversations about what AI can do. We think the more important conversation is about what AI should do — and what it shouldn't.

OpenClaw's Three Laws aren't about limiting capability. They're about channeling it. By scripting the mechanical, localizing the routine, and skilling the expandable, we've built a system that delivers frontier quality where it matters and spends nothing where it doesn't.

If you're building an AI assistant system — whether it's two agents or twenty — these three principles will serve you better than any model upgrade or infrastructure investment:

  1. Scripts before agents. Not everything needs to think.
  2. Local before cloud. Not everything needs to cost money.
  3. Skills before new agents. Not everything needs its own entity.

Follow these laws, and you won't just save money. You'll build something that lasts.

This is the final post in the OpenClaw Build Log launch series. We've covered the architecture, the cost model, the operations console, the overnight pipeline, and the governance philosophy. Building in public because the AI assistant space needs more honest accounts of what works, what doesn't, and what it actually costs.