git » homepage.git » commit 41337a6

ActorAgent updates

author Alan Dipert
2026-01-08 01:59:14 UTC
committer Alan Dipert
2026-01-08 01:59:14 UTC
parent 4b3123f25144b0734f9bef8afe6207166341ad53


md/ActorAgents.md +12 -3

diff --git a/md/ActorAgents.md b/md/ActorAgents.md
index 9d3359b..accdefc 100644
--- a/md/ActorAgents.md
+++ b/md/ActorAgents.md
@@ -1,5 +1,6 @@
 # ActorAgents
 - Created Wednesday 07 January 2026
+- Updated Wednesday 07 January 2026: Flow/tone tightened, banking layer example expanded, added price-signal and transaction-cost references
 
 The actor model gives me a clean way to think about agentic systems. I keep the definition small; [Carl Hewitt's original sketch](https://en.wikipedia.org/wiki/Actor_model) is enough.
 
@@ -25,6 +26,14 @@ Some capabilities are social or contextual:
 
 The same rules apply: you can pass or withhold these knobs, even with respect to the human operator, but you cannot conjure new authority beyond what you received.
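The pass-or-withhold rule above can be sketched in a few lines. This is a minimal illustration, not anything from the post itself: the `Capability` class and operation names are hypothetical, and a real system would make handles unforgeable rather than plain Python objects.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    """A handle naming a set of permitted operations."""
    ops: frozenset

    def attenuate(self, allowed):
        """Derive a narrower capability; intersection means a child
        can only ever hold a subset of what its parent granted."""
        return Capability(self.ops & frozenset(allowed))

    def permits(self, op):
        return op in self.ops

root = Capability(frozenset({"fs:read", "fs:write", "net:get"}))
# Requesting "net:post" is silently dropped: it was never granted.
child = root.attenuate({"fs:read", "net:post"})

assert child.permits("fs:read")
assert not child.permits("net:post")  # cannot conjure new authority
```

The intersection is the whole trick: attenuation can shrink a grant but can never widen it, no matter what the child asks for.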
 
+A separate capability-and-budget layer sits on top of the actor primitives so individual actors cannot game the rules. Think of it as banking infrastructure: capability tokens and budgets move through escrow, contracts encode default clauses, and debt has consequences. That keeps limits enforced outside any actor's behavior while still giving each actor a clear ledger it cannot rewrite.
+
+Example: an “Interpreter” actor gets $1 of budget, read-only filesystem access, and the ability to spawn. It hires a “Fetcher” with a 20¢ budget and only HTTP GET, then asks it to fetch three URLs. After each fetch, the Fetcher posts back content plus remaining balance; if the third request would exceed the budget, the Fetcher refuses and returns a contract breach. The Interpreter can then decide to trim scope or request a bigger grant from its caller, but it cannot quietly overspend.
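A toy version of the Fetcher's side of that contract, under stated assumptions: the class names are hypothetical, the 8¢ flat price per request is invented for illustration, and the fetch body is a stand-in string rather than a real HTTP GET.

```python
class ContractBreach(Exception):
    """Raised instead of overspending: the refusal the Fetcher posts back."""

class Fetcher:
    """Helper actor: only HTTP GET, with a hard budget in cents."""
    COST_PER_FETCH = 8  # assumed flat price per request

    def __init__(self, budget_cents):
        self.balance = budget_cents

    def fetch(self, url):
        if self.balance < self.COST_PER_FETCH:
            raise ContractBreach(
                f"{url}: {self.balance}c left, need {self.COST_PER_FETCH}c")
        self.balance -= self.COST_PER_FETCH
        content = f"<fetched {url}>"      # stand-in for a real HTTP GET
        return content, self.balance      # content plus remaining balance

fetcher = Fetcher(budget_cents=20)        # the 20c grant from the example
urls = ["https://a.example", "https://b.example", "https://c.example"]
results, breaches = [], []
for url in urls:
    try:
        results.append(fetcher.fetch(url))
    except ContractBreach as err:
        breaches.append(str(err))         # caller must trim scope or renegotiate
```

With a 20¢ grant at 8¢ per fetch, two requests succeed (leaving 4¢) and the third is refused; the refusal is data the Interpreter can act on, not an error it can suppress.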
+
+The caller can be a human. A request for more budget might surface from deep inside the tree, propagating a price signal all the way back to the person funding the work. Markets use that signal to coordinate scarce resources ([Mises](https://en.wikipedia.org/wiki/Economic_calculation_problem), [Hayek](https://en.wikipedia.org/wiki/The_Use_of_Knowledge_in_Society)), and this model borrows the same trick to keep AI helpers from acting like free compute.
+
+Transaction costs are the obvious objection: negotiation, escrow setup, and coordination overhead can dwarf tiny tasks. The model expects each actor to budget those costs explicitly and decide whether to keep work in-house or delegate with a contract ([Coase](https://en.wikipedia.org/wiki/The_Problem_of_Social_Cost)). That keeps overhead visible instead of hidden in a single scratchpad loop.
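The in-house-versus-delegate decision can be made explicit as a one-line rule. A sketch only, assuming all costs are already expressed in the same cent-denominated units; the function name and example numbers are invented here, not taken from the post.

```python
def should_delegate(in_house_cost, helper_cost, overhead):
    """Coase-style rule of thumb: delegate only when the contract's
    total price (helper fee plus negotiation/escrow overhead) beats
    doing the work locally."""
    return helper_cost + overhead < in_house_cost

# Tiny task: 5c in-house vs 2c helper + 10c contract overhead -> keep local.
assert not should_delegate(in_house_cost=5, helper_cost=2, overhead=10)

# Big task: 90c in-house vs 40c helper + 10c overhead -> delegate.
assert should_delegate(in_house_cost=90, helper_cost=40, overhead=10)
```

Making `overhead` an explicit argument is the point: the coordination cost has to be priced and compared, not absorbed invisibly into one long scratchpad loop.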
+
 Flow:
 - small task: keep it local
 - vague task: spawn a helper to interpret
@@ -35,11 +44,11 @@ No planner or supervisor is baked in; those roles appear only when the task dema
 
 This contrasts with the typical long-lived prompt loop I see in many AI coding agents: one process with a growing scratchpad and broad OS access, pretending at delegation inside text. Boundaries blur, and safety depends on restraint. It also differs from persona systems like [Gastown](https://github.com/steveyegge/gastown), where “Mayor” and “Councillor” characters are predefined and coordination is scripted. Here, boundaries are structural because capability handles and limits are explicit. Planners or supervisors appear when needed, not because the architecture assumes them. That aligns with how I want to use an [AI coding agent](./Coherence.md) as a teammate, not as a single omniscient loop.
 
-The pattern mirrors how high-skill, low-ego, cross-functional teams self-organize: people negotiate capabilities, push back on bad asks, and adapt locally. In a command hierarchy, incentives are lopsided: doing a good job yields subtle, delayed rewards, while a bad job triggers immediate pain, so ducking responsibility becomes rational. Keeping authority explicit and bounded keeps accountability local instead of avoided.
+The pattern mirrors how high-skill, low-ego, cross-functional teams self-organize: people negotiate capabilities, push back on fuzzy asks, and adapt locally. Keeping authority explicit and bounded keeps ownership close to where the work happens and makes it easier to invite help without handing off accountability.
 
-This matters even for AI personas. They inherit human flavor—assertive, cautious, deferential—while also wielding superhuman reach and speed. Explicit capabilities force those traits to operate inside hard walls, so a confident persona cannot overrun its remit just because it “feels” empowered.
+This matters even for AI personas. They inherit human flavor (assertive, cautious, deferential) while also wielding superhuman reach and speed. Explicit capabilities force those traits to operate inside hard walls, so a confident persona cannot overrun its remit just because it “feels” empowered.
 
-To try it: start with the smallest useful capability bundle, add helpers only when the work forces you to, and narrow permissions as tasks specialize. Let failure stay local. The goal is not a grand controller, but small behaviors with crisp authority that assemble only when the problem demands it.
+If you want to try it, start with the smallest useful capability bundle, add helpers only when the work forces you to, and narrow permissions as tasks specialize. Let failure stay local. Treat it as a pattern for assembling small behaviors with crisp authority, not a grand controller design.
 
 Notes and related work:
 - [Lisp-Actors grand recap](https://github.com/dbmcclain/Lisp-Actors?tab=readme-ov-file#---12-apr-2025----grand-recap) collects field notes on real actor-model practice.