Blog
Welcome to the Blog.
The Real Historical Analogy
2026-04-20
The most popular analogies around AI are usually the worst ones, because they jump straight to apocalypse, utopia, or machine rebellion and miss the transformation already happening in front of us. A far better analogy is older, less glamorous, and much more revealing: the history of writing becoming administration.
The strongest historical analogy for LLMs is not Skynet, industrial automation, or a new species. It is the old pattern in which an expressive medium expands access and then hardens into records, templates, procedure, governance, and bureaucracy. Less cinema. More paperwork. Unfortunately, that is usually where real power hides. ... continue
MCPs: "Useful" Was Never the Real Threshold -- "Consequential" Was
2026-04-20
For a while, the industry kept talking as if tool access merely made models more “useful”. That description is too soft by half, because the real shift is harsher: once a model can perceive and act through an environment, its outputs stop being merely interesting and start becoming “consequential”.
Model Context Protocol (MCP) does not just make language models more capable in some vague product sense. It moves them closer to “consequence” by connecting model output to trusted systems, permissions, tools, and environments where words can become actions. ... continue
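To make "words become actions" concrete, here is a minimal Python sketch of an MCP-style tool declaration and dispatch loop. The tool name, schema fields, and handler below are invented for illustration; real MCP servers declare tools with a name, a description, and a JSON Schema for inputs, and the client routes model-issued tool calls to handlers like this one.

```python
# Hypothetical MCP-style tool: the model can only emit text, but once a
# dispatcher maps that text onto a declared tool, the output has consequences.
import json

tool_declaration = {
    "name": "send_invoice",  # hypothetical tool, not from any real server
    "description": "Send an invoice to a customer by email.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string"},
            "amount_cents": {"type": "integer", "minimum": 1},
        },
        "required": ["customer_id", "amount_cents"],
    },
}

def handle_tool_call(name: str, arguments: dict) -> dict:
    """Dispatch a model-issued tool call to a real side effect."""
    if name != tool_declaration["name"]:
        raise ValueError(f"unknown tool: {name}")
    # A real server would touch a billing system here; this stub just echoes.
    return {"status": "sent", "echo": arguments}

result = handle_tool_call("send_invoice",
                          {"customer_id": "c_42", "amount_cents": 500})
print(json.dumps(result))
```

The point of the sketch is the boundary: everything above `handle_tool_call` is still "just text", and everything below it is a side effect, which is exactly where "useful" turns into "consequential".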
From Prompt to Protocol Stack
2026-04-18
The future of AI control was never going to fit inside one clever paragraph typed into a chat box. What looks like prompting today is already breaking apart into layers, and each layer is quietly starting to serve a different audience: humans, agents, tools, infrastructure, and, eventually, other layers pretending not to be there.
Prompting is evolving into a full protocol stack. Natural language remains at the human boundary, while deeper layers increasingly carry schemas, tool definitions, memory layouts, compressed state, and possibly machine-native agent communication. The chat box survives, but it is no longer the whole machine. ... continue
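The layering described above can be sketched as data: a single request in which natural language sits at the human boundary while deeper layers carry a schema, tool definitions, and compressed state. Every layer name and field below is an illustrative assumption, not an actual protocol.

```python
# Illustrative decomposition of one "prompt" into protocol-stack layers.
# Only the top layer is meant for humans; the rest serves tools and agents.
request = {
    "human_layer": "Summarize yesterday's failed deployments.",  # natural language
    "schema_layer": {                  # structured output contract
        "type": "object",
        "properties": {"summary": {"type": "string"}},
        "required": ["summary"],
    },
    "tool_layer": [                    # tool definitions visible to the model
        {"name": "query_deploy_log", "input": {"since": "ISO-8601 timestamp"}}
    ],
    "state_layer": {"conversation_digest": "deploys:3 failed:2"},  # compressed memory
}

def visible_to_human(req: dict) -> str:
    """Only the top layer ever appears in the chat box."""
    return req["human_layer"]

print(visible_to_human(request))
```

The chat box shows one string; the system underneath ships four layers. That gap is the "protocol stack" claim in miniature.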
Is There a Hidden Language Beneath English?
2026-04-16
Most prompt engineering is written in English, and the industry often treats that fact as if it were almost self-evident. But once you ask whether English is truly the best control medium or merely the most overrepresented one, the ground starts moving under the whole discussion.
There is no strong evidence yet for one universal hidden “control language” beneath English. But there is real evidence that useful control can happen through non-natural-language mechanisms such as soft prompts, steering vectors, and latent or activation-based agent communication. So the idea is not crazy. It is just easier to say crazy things around it than careful ones. ... continue
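Of the mechanisms listed, steering vectors are the easiest to show in miniature: add a fixed direction to a hidden activation to bias behavior, no English involved. This toy sketch uses plain Python lists standing in for real model activations; the vector values and the scale factor are invented.

```python
# Toy steering-vector demo: control without natural language.
# "hidden" stands in for one transformer activation; "steer" is a learned
# direction (hand-made here) that pushes the state toward some behavior.

def apply_steering(hidden, steer, alpha):
    """Return hidden + alpha * steer, the core steering-vector operation."""
    return [h + alpha * s for h, s in zip(hidden, steer)]

hidden = [0.2, -1.0, 0.5, 0.0]   # invented activation values
steer  = [1.0,  0.0, -1.0, 0.5]  # invented "be more formal" direction
steered = apply_steering(hidden, steer, alpha=2.0)
print(steered)  # approximately [2.2, -1.0, -1.5, 1.0]
```

Nothing here is a sentence, yet it is unambiguously a control signal, which is the careful version of the "hidden language" claim.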
The Myth of Prompting as Conversation
2026-04-13
The phrase “just talk to the model” is one of the most successful half-truths in the current AI boom. It is good onboarding and bad description: useful for getting people in the door, and deeply misleading the moment anything expensive, fragile, or embarrassingly public depends on the answer.
Prompting is conversational only at the surface. Under real workloads it behaves much more like specification-writing for a probabilistic component inside a larger system, except the specification keeps pretending to be a chat. ... continue
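One way to see the specification framing: treat the prompt as a versioned contract with a validation step, rather than as a message. Everything below, including the template text, the output schema, and the retry logic, is an illustrative assumption, with a stub standing in for a real model call.

```python
# Prompting as specification-writing: the prompt states a contract, and the
# caller validates the probabilistic component's output against it, exactly
# as you would for any unreliable subsystem.
import json

PROMPT_SPEC_V3 = (  # hypothetical versioned prompt
    "Return ONLY a JSON object with keys 'sentiment' "
    "(one of 'pos', 'neg', 'neutral') and 'confidence' (0.0-1.0)."
)

def fake_model(prompt: str, text: str) -> str:
    """Stub standing in for an LLM call; real models sometimes violate specs."""
    return '{"sentiment": "pos", "confidence": 0.9}'

def classify(text: str, retries: int = 2) -> dict:
    for _ in range(retries + 1):
        raw = fake_model(PROMPT_SPEC_V3, text)
        try:
            out = json.loads(raw)
        except json.JSONDecodeError:
            continue  # spec violated: malformed JSON, so retry
        if out.get("sentiment") in {"pos", "neg", "neutral"} \
                and 0.0 <= out.get("confidence", -1.0) <= 1.0:
            return out  # output satisfies the specification
    raise ValueError("model never satisfied the output spec")

print(classify("Great release!"))
```

Note what disappeared: there is no conversation anywhere in this code path, only a contract, a component, and a check, which is what prompting looks like once something expensive depends on it.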