ActorAgents
- Created Wednesday 07 January 2026
- Updated Wednesday 07 January 2026: Flow/tone tightened, banking layer example expanded, added price-signal and transaction-cost references
The actor model gives me a clean way to think about agentic systems. I keep the definition small; Carl Hewitt's original sketch is enough.
Primitives:
- receive a message
- send a message
- create an actor
- change how to respond next
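A minimal sketch of those four primitives, in Clojure since that is where Chasma lives; `spawn`, `send-msg`, and the return-a-new-behavior convention are names invented for this note, not Chasma's API:

```clojure
;; Run at a REPL; the mailbox loop never exits on its own.
(import '[java.util.concurrent LinkedBlockingQueue])

(defn spawn
  "Create an actor: a mailbox plus a loop that applies the current
  behavior to each message. A behavior may return a new behavior
  (change how to respond next) or nil to keep the old one. A behavior
  can also call spawn itself, covering the create-an-actor primitive."
  [behavior]
  (let [mailbox (LinkedBlockingQueue.)
        beh     (atom behavior)]
    (future
      (loop []
        (let [msg (.take mailbox)]
          (when-let [next-beh (@beh msg)]
            (reset! beh next-beh))
          (recur))))
    mailbox))

(defn send-msg
  "Send a message: enqueue it on the target actor's mailbox."
  [actor msg]
  (.put actor msg))

;; Greets once, then switches behavior for every later message.
(def greeter
  (spawn (fn first-time [msg]
           (println "hello," msg)
           (fn afterwards [msg]
             (println "hello again," msg)
             nil))))

(send-msg greeter "world")   ; hello, world
(send-msg greeter "world")   ; hello again, world
```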
How work enters:
- one message carrying the command
- a bundle of capabilities: run a shell command, read or write a file, call an API, spend money
- hard limits: max runtime, max spend, max actors, max delegation depth
Limits live outside behavior. When a limit is hit, calls fail or actor creation is denied.
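Here is a sketch of one such limit, assuming a guard that owns the live-actor count outside any behavior (all names invented; the stub stands in for a real spawn):

```clojure
(defn make-spawn-guard
  "Wrap a spawn function so actor creation is denied once max-actors
  are live. The counter belongs to the guard, not to any behavior, so
  no actor can rewrite it. (Sketch only: nothing decrements the count
  when an actor retires.)"
  [max-actors raw-spawn]
  (let [live (atom 0)]
    (fn [behavior]
      (if (<= (swap! live inc) max-actors)
        (raw-spawn behavior)
        (do (swap! live dec)
            {:error :max-actors})))))

(def spawn* (make-spawn-guard 2 (fn [_] {:actor :started}))) ; stub spawn

(spawn* :beh-a)  ; => {:actor :started}
(spawn* :beh-b)  ; => {:actor :started}
(spawn* :beh-c)  ; => {:error :max-actors}
```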
Delegation is just sending another message plus the capabilities you are willing to pass. You can shrink authority (read-only instead of read/write, one API instead of many, a small budget instead of a large one), but you cannot mint new power. Passing work with a narrower bundle is also risk reduction: the helper can do less damage than you could.
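Two shapes of shrinking, sketched with invented capability signatures (a filesystem capability as a function of op and path, a budget as an atom holding cents):

```clojure
(defn read-only
  "Shrink a read/write filesystem capability to read-only."
  [fs-cap]
  (fn [op & args]
    (if (= op :read)
      (apply fs-cap op args)
      {:error :not-authorized :op op})))

(defn sub-budget
  "Carve a child allowance out of a parent ledger. Every child spend
  draws down the parent too, so no new money is minted.
  (Single-threaded sketch; the two-atom check is not race-safe.)"
  [parent amount]
  (let [child (atom amount)]
    (fn [cost]
      (if (and (>= @child cost) (>= @parent cost))
        (do (swap! child - cost)
            (swap! parent - cost)
            :ok)
        :refused))))

(def fs    (fn [op path] [op path]))  ; stand-in for a real fs capability
(def ro-fs (read-only fs))

(ro-fs :read  "/tmp/x")   ; => [:read "/tmp/x"]
(ro-fs :write "/tmp/x")   ; => {:error :not-authorized, :op :write}
```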
Some capabilities are social or contextual:
- push back on a request that looks wasteful based on what the actor already knows
- adjust behavior based on current surroundings without waiting for another message
- read and contribute to institutional memory, when granted, so successes and failures accumulate into lore, not just logs
The same rules apply: you can pass or withhold these knobs, even with respect to the human operator, but you cannot conjure new authority beyond what you received.
A separate capability-and-budget layer sits on top of the actor primitives so individual actors cannot game the rules. Think of it as banking infrastructure: capability tokens and budgets move through escrow, contracts encode default clauses, and debt has consequences. That keeps limits outside behavior while still giving each actor a clear ledger they cannot rewrite.
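A sketch of the escrow piece, assuming an invented three-state record (:open, :settled, :defaulted) that only this layer may advance:

```clojure
(defn open-escrow
  "Hold a grant on behalf of a funder; neither party owns the record."
  [funder amount]
  (atom {:funder funder :held amount :state :open}))

(defn settle!
  "Close out an escrow: refund the unspent remainder, or mark a
  default when the worker claims more than was held. A closed escrow
  never reopens; that is the ledger no actor can rewrite."
  [escrow spent]
  (swap! escrow
         (fn [{:keys [held state] :as e}]
           (cond
             (not= state :open) e
             (<= spent held)    (assoc e :state :settled
                                       :spent spent
                                       :refund (- held spent))
             :else              (assoc e :state :defaulted
                                       :spent held)))))

(def grant (open-escrow :interpreter 20))
(settle! grant 16)
;; => {:funder :interpreter, :held 20, :state :settled,
;;     :spent 16, :refund 4}
```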
Example: an “Interpreter” actor gets $1 of budget, read-only filesystem access, and the ability to spawn. It hires a “Fetcher” with a 20¢ budget and only HTTP GET. The Interpreter asks for three URLs. After each fetch, the Fetcher posts back content plus remaining balance; if the third request would exceed budget, the Fetcher refuses and returns a contract breach. The Interpreter can then decide to trim scope or request a bigger grant from its caller, but it cannot quietly overspend.
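The same story as a runnable sketch; the 8¢-per-GET price and the `fake-get` stub are invented for illustration:

```clojure
(defn fake-get [url] (str "<body of " url ">"))  ; stub; only GET exists here

(defn make-fetcher
  "A Fetcher with a fixed budget: each successful fetch reports the
  remaining balance, and a request that would overspend is refused as
  a contract breach rather than quietly executed."
  [budget-cents cost-per-get]
  (let [balance (atom budget-cents)]
    (fn [url]
      (if (>= @balance cost-per-get)
        (let [remaining (swap! balance - cost-per-get)]
          {:status :ok :content (fake-get url) :remaining remaining})
        {:status :contract-breach :remaining @balance}))))

(let [fetch (make-fetcher 20 8)]           ; 20¢ budget, 8¢ per GET
  (mapv fetch ["http://a" "http://b" "http://c"]))
;; => [{:status :ok, :content "<body of http://a>", :remaining 12}
;;     {:status :ok, :content "<body of http://b>", :remaining 4}
;;     {:status :contract-breach, :remaining 4}]
```

Either way the Interpreter sees the balance with every reply, so the trim-or-renegotiate decision stays on its side of the contract.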
The caller can be a human. A request for more budget might surface from deep inside the tree, carrying a price signal all the way back to the person funding the work. Markets use that signal to coordinate scarce resources (Mises, Hayek), and this model borrows the same trick to keep AI helpers from acting like free compute.
Transaction costs are the obvious objection: negotiation, escrow setup, and coordination overhead can dwarf tiny tasks. The model expects each actor to budget those costs explicitly and decide whether to keep work in-house or delegate with a contract (Coase). That keeps overhead visible instead of hidden in a single scratchpad loop.
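A toy version of that decision, with every cost figure invented:

```clojure
(defn delegate?
  "Delegate only when the contracted price plus coordination overhead
  beats doing the work in-house."
  [{:keys [in-house contract escrow-fee negotiation]}]
  (< (+ contract escrow-fee negotiation) in-house))

(delegate? {:in-house 50 :contract 20 :escrow-fee 5 :negotiation 40})
;; => false: 65 for overhead-laden delegation vs 50 local, keep it in-house
```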
Flow:
- small task: keep it local
- vague task: spawn a helper to interpret
- broad task: spin up specialists and hand each a narrow bundle
- unsafe task: stop
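As a triage sketch, returning data instead of acting (the :kind keywords mirror the list above; the bundle shape is invented):

```clojure
(defn route-task
  "Decide how work should fan out; a real system would act on the
  returned map."
  [task]
  (case (:kind task)
    :small  {:action :keep-local}
    :vague  {:action :spawn-helper :role :interpreter}
    :broad  {:action :spawn-specialists
             :bundles (vec (for [s (:subtasks task)]
                             {:subtask s :caps :narrow}))}
    :unsafe {:action :stop}))

(route-task {:kind :broad :subtasks [:parse :fetch :render]})
;; => {:action :spawn-specialists,
;;     :bundles [{:subtask :parse, :caps :narrow}
;;               {:subtask :fetch, :caps :narrow}
;;               {:subtask :render, :caps :narrow}]}
```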
No planner or supervisor is baked in; those roles appear only when the task demands them.
This contrasts with the typical long-lived prompt loop I see in many AI coding agents: one process with a growing scratchpad and broad OS access, pretending at delegation inside text. Boundaries blur, and safety depends on restraint. It also differs from persona systems like Gastown, where “Mayor” and “Councillor” characters are predefined and coordination is scripted. Here, boundaries are structural because capability handles and limits are explicit. Planners or supervisors appear when needed, not because the architecture assumes them. That aligns with how I want to use an AI coding agent as a teammate, not as a single omniscient loop.
The pattern mirrors how high-skill, low-ego, cross-functional teams self-organize: people negotiate capabilities, push back on fuzzy asks, and adapt locally. Keeping authority explicit and bounded keeps ownership close to where the work happens and makes it easier to invite help without handing off accountability.
This matters even for AI personas. They inherit human flavor (assertive, cautious, deferential) while also wielding superhuman reach and speed. Explicit capabilities force those traits to operate inside hard walls, so a confident persona cannot overrun its remit just because it “feels” empowered.
If you want to try it, start with the smallest useful capability bundle, add helpers only when the work forces you to, and narrow permissions as tasks specialize. Let failure stay local. Treat it as a pattern for assembling small behaviors with crisp authority, not a grand controller design.
Notes and related work:
- Lisp-Actors grand recap collects field notes on real actor-model practice.
- Actor interaction patterns surveys message topologies beyond request/response.
- Chasma is a Clojure take on Hewitt's transactional actors, kept as faithful to the original model as I can make it.