<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Language on TurboVision</title>
    <link>https://turbovision.in6-addr.net/tags/language/</link>
    <description>Recent content in Language on TurboVision</description>
    <generator>Hugo</generator>
    <language>en</language>
    <lastBuildDate>Tue, 21 Apr 2026 14:06:12 +0000</lastBuildDate>
    <atom:link href="https://turbovision.in6-addr.net/tags/language/index.xml" rel="self" type="application/rss&#43;xml" />
    
    
    
    <item>
      <title>The Real Historical Analogy</title>
      <link>https://turbovision.in6-addr.net/musings/ai-language-protocols/the-real-historical-analogy/</link>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
      <lastBuildDate>Mon, 20 Apr 2026 00:00:00 +0000</lastBuildDate>
      <guid>https://turbovision.in6-addr.net/musings/ai-language-protocols/the-real-historical-analogy/</guid>
      <description>&lt;p&gt;The most popular analogies around AI are usually the worst ones, because they jump straight to apocalypse, utopia, or machine rebellion and miss the transformation already happening in front of us. A far better analogy is older, less glamorous, and much more revealing: the history of writing becoming administration.&lt;/p&gt;
&lt;h2 id=&#34;tldr&#34;&gt;TL;DR&lt;/h2&gt;
&lt;p&gt;The strongest historical analogy for LLMs is not Skynet, industrial automation, or a new species. It is the old pattern in which an expressive medium expands access and then hardens into records, templates, procedure, governance, and bureaucracy. Less cinema. More paperwork. Unfortunately that is usually where real power hides.&lt;/p&gt;
&lt;h2 id=&#34;the-question&#34;&gt;The Question&lt;/h2&gt;
&lt;p&gt;You may ask: if natural-language AI feels like a liberation from rigid interfaces, what historical pattern does it actually resemble? Is there an older moment where a flexible medium spread widely and then slowly turned into structure, procedure, and control?&lt;/p&gt;
&lt;h2 id=&#34;the-long-answer&#34;&gt;The Long Answer&lt;/h2&gt;
&lt;p&gt;Yes. Writing.&lt;/p&gt;
&lt;h3 id=&#34;the-better-analogy-is-older-and-less-glamorous&#34;&gt;The Better Analogy Is Older and Less Glamorous&lt;/h3&gt;
&lt;p&gt;Or more precisely: writing after it stopped being rare.&lt;/p&gt;
&lt;p&gt;When we romanticize writing, we think of poetry, letters, memory, literature, philosophy, scripture, and thought made durable. All of that matters. But historically, writing did not remain only an expressive medium. As soon as it became socially central, it also became a machine for legibility.&lt;/p&gt;
&lt;p&gt;It began to support:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;ledgers&lt;/li&gt;
&lt;li&gt;tax records&lt;/li&gt;
&lt;li&gt;property claims&lt;/li&gt;
&lt;li&gt;legal formulas&lt;/li&gt;
&lt;li&gt;decrees&lt;/li&gt;
&lt;li&gt;inventories&lt;/li&gt;
&lt;li&gt;forms&lt;/li&gt;
&lt;li&gt;standard contracts&lt;/li&gt;
&lt;li&gt;administrative routines&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The same medium that enabled reflection also enabled bureaucracy.&lt;/p&gt;
&lt;p&gt;That is not an accidental corruption of writing&amp;rsquo;s pure spirit. It is what happens when an expressive medium starts carrying coordination at scale. The lyric and the ledger share a medium, and the ledger is usually better funded.&lt;/p&gt;
&lt;p&gt;This is the historical rhyme that matters for AI.&lt;/p&gt;
&lt;p&gt;Natural-language interfaces feel, at first, like a return from bureaucracy to speech. No more memorizing commands. No more obeying narrow syntactic rituals. No more learning the machine&amp;rsquo;s rigid grammar before the machine will meet you halfway. You can just speak.&lt;/p&gt;
&lt;p&gt;But the moment that speech starts doing real work, the old dynamic reappears. The free exchange has to become legible, stable, and reusable. Then come templates. Then conventions. Then control layers. Then record-keeping. Then policy.&lt;/p&gt;
&lt;p&gt;In other words, the medium begins to administrate.&lt;/p&gt;
&lt;h3 id=&#34;writing-became-administration&#34;&gt;Writing Became Administration&lt;/h3&gt;
&lt;p&gt;That is why I think the right analogy is not &amp;ldquo;AI replaces humans&amp;rdquo; but &amp;ldquo;language-to-machine interaction is becoming administratively scalable.&amp;rdquo; That phrase has none of the drama of science fiction, which is exactly why I trust it.&lt;/p&gt;
&lt;p&gt;Notice how much current AI practice already fits that pattern.&lt;/p&gt;
&lt;p&gt;At the expressive edge:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;exploratory prompting&lt;/li&gt;
&lt;li&gt;brainstorming&lt;/li&gt;
&lt;li&gt;rewriting&lt;/li&gt;
&lt;li&gt;questioning&lt;/li&gt;
&lt;li&gt;improvisation&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;At the administrative edge:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;system prompts&lt;/li&gt;
&lt;li&gt;reusable role definitions&lt;/li&gt;
&lt;li&gt;skill files&lt;/li&gt;
&lt;li&gt;output schemas&lt;/li&gt;
&lt;li&gt;tool policies&lt;/li&gt;
&lt;li&gt;safety rules&lt;/li&gt;
&lt;li&gt;evaluation harnesses&lt;/li&gt;
&lt;li&gt;memory and trace retention&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That is exactly the same medium bifurcating into two functions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;expression&lt;/li&gt;
&lt;li&gt;governance&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The mistake would be to think governance arrives from outside as an alien force. More often it emerges from the medium&amp;rsquo;s own success. Once too many people, too many workflows, and too many risks pass through the channel, informal use becomes too expensive.&lt;/p&gt;
&lt;p&gt;This is why the writing analogy beats the science-fiction analogy. Science fiction lets us talk about AI while keeping one eye on spectacle. Administration forces us to talk about rules, defaults, records, compliance, and who gets to decide what counts as proper use. Less fun, more dangerous.&lt;/p&gt;
&lt;p&gt;Science fiction keeps us staring at agency in the dramatic sense: rebellion, consciousness, domination, replacement. Those questions may have their place, but they are not what we are living through most directly right now.&lt;/p&gt;
&lt;p&gt;What we are living through is far more mundane and therefore far more transformative:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;who gets to issue instructions&lt;/li&gt;
&lt;li&gt;in what form&lt;/li&gt;
&lt;li&gt;with what defaults&lt;/li&gt;
&lt;li&gt;under whose hidden constraints&lt;/li&gt;
&lt;li&gt;with what record of compliance&lt;/li&gt;
&lt;li&gt;and according to which evolving norms&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That is administration.&lt;/p&gt;
&lt;p&gt;A government clerk, a shipping office, a medieval chancery, and a modern AI platform may look worlds apart, but they share one deep concern: turning messy human intentions into legible operations.&lt;/p&gt;
&lt;p&gt;That is why some of the current discourse feels so unserious to me. People keep asking whether the machine is becoming a person while entire companies are busy making it into procedure.&lt;/p&gt;
&lt;p&gt;Once you look through that lens, many supposedly strange features of the current AI moment become obvious.&lt;/p&gt;
&lt;p&gt;Why are people standardizing prompts?
Because legibility enables coordination.&lt;/p&gt;
&lt;p&gt;Why are teams writing internal style guides for model use?
Because institutions cannot run on charm alone.&lt;/p&gt;
&lt;p&gt;Why do skill files, tool schemas, and structured outputs proliferate?
Because the medium is being prepared for scale.&lt;/p&gt;
&lt;p&gt;Why does the language of &amp;ldquo;best practice&amp;rdquo; appear so quickly?
Because informal success always creates pressure for repeatability.&lt;/p&gt;
&lt;h3 id=&#34;freedom-and-bureaucracy-grow-together&#34;&gt;Freedom and Bureaucracy Grow Together&lt;/h3&gt;
&lt;p&gt;This is also why the present moment feels ideologically confused. We are using the rhetoric of liberation while simultaneously building new bureaucratic layers. People notice the contradiction and either celebrate one side or denounce the other. I think both reactions are too simple.&lt;/p&gt;
&lt;p&gt;The bureaucracy is not a betrayal of the freedom.
It is what the freedom becomes when it has to survive contact with institutions.&lt;/p&gt;
&lt;p&gt;That is an irritating sentence, but I think it is true.&lt;/p&gt;
&lt;p&gt;There is another historical layer worth noticing: standardization often follows democratization, not the other way around.&lt;/p&gt;
&lt;p&gt;Printing expands who can read and write, and then spelling, grammar, and editorial norms harden.
Open networks expand who can communicate, and then protocols stabilize the traffic.
Mass politics expands participation, and then bureaucracy grows to make populations administratively legible.
Natural-language computing expands who can &amp;ldquo;program,&amp;rdquo; and then prompt rules, tool contracts, and agent frameworks appear.&lt;/p&gt;
&lt;p&gt;This pattern is almost embarrassingly regular. We keep acting surprised by it anyway, which may be one of the more stable features of modernity.&lt;/p&gt;
&lt;p&gt;It should also change how we talk about power.&lt;/p&gt;
&lt;p&gt;The frightening question is not only whether AI becomes an autonomous sovereign. The more immediate question is who controls the administrative grammar of human-machine exchange. In older regimes, literacy itself was power. Later, access to legal language was power. Later still, access to code and infrastructure was power.&lt;/p&gt;
&lt;p&gt;Now the emerging power may sit in the ability to shape:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;system defaults&lt;/li&gt;
&lt;li&gt;hidden instructions&lt;/li&gt;
&lt;li&gt;moderation layers&lt;/li&gt;
&lt;li&gt;tool affordances&lt;/li&gt;
&lt;li&gt;evaluation criteria&lt;/li&gt;
&lt;li&gt;acceptable interaction styles&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That is a quieter kind of power than Skynet fantasies imagine, but in practice it may matter more. It is much easier to smuggle power in through defaults than through manifestos.&lt;/p&gt;
&lt;p&gt;Because most people will not meet AI as pure model weights. They will meet it as institutionalized behavior.&lt;/p&gt;
&lt;p&gt;And institutionalized behavior is always partly political.&lt;/p&gt;
&lt;h3 id=&#34;the-real-struggle-is-over-administrative-power&#34;&gt;The Real Struggle Is Over Administrative Power&lt;/h3&gt;
&lt;p&gt;This is where the analogy becomes genuinely useful rather than merely clever. It gives you a way to organize the whole field without falling into either marketing or panic.&lt;/p&gt;
&lt;p&gt;You can ask of any AI feature:&lt;/p&gt;
&lt;p&gt;Is this expressive?
Is this administrative?
Or is it a hybrid trying to hide the transition?&lt;/p&gt;
&lt;p&gt;A freeform chat UI is expressive.
A schema-constrained workflow is administrative.
A friendly assistant with hidden system rules is a hybrid, and hybrids are where most of the real tension lives.&lt;/p&gt;
&lt;p&gt;The writing analogy also helps explain the emotional tone people bring to AI. Some are exhilarated because they feel the expressive release. Others are suspicious because they can already smell the coming bureaucracy. Both are perceiving real parts of the same transformation.&lt;/p&gt;
&lt;p&gt;The optimists are seeing the collapse of unnecessary formal barriers.
The skeptics are seeing the rise of a new governance layer.&lt;/p&gt;
&lt;p&gt;Again, both are right.&lt;/p&gt;
&lt;p&gt;And this returns us to the opening paradox. Why does a medium that promises freedom generate rules so quickly? Because freedom by itself is not enough for archives, institutions, teams, compliance, safety, memory, and distributed execution. A society can play in a medium informally for a while. It cannot run on that informality forever.&lt;/p&gt;
&lt;p&gt;That does not mean we should embrace every new layer of prompt bureaucracy with cheerful obedience. Quite the opposite. Once you recognize the administrative turn, you can ask better questions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;which rules are genuinely useful?&lt;/li&gt;
&lt;li&gt;which are cargo cult?&lt;/li&gt;
&lt;li&gt;which increase transparency?&lt;/li&gt;
&lt;li&gt;which hide power?&lt;/li&gt;
&lt;li&gt;which preserve human agency?&lt;/li&gt;
&lt;li&gt;which quietly narrow it?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That is the adult conversation.&lt;/p&gt;
&lt;p&gt;So if you want the real historical analogy, here is mine:&lt;/p&gt;
&lt;p&gt;LLMs are not best understood as a talking machine waiting to rebel.
They are better understood as the latest medium through which human intention becomes administratively legible at scale.&lt;/p&gt;
&lt;p&gt;That may sound less cinematic than Skynet, but it is more historically grounded and much more relevant to the systems we are actually building.&lt;/p&gt;
&lt;p&gt;The true drama is not that the machine may wake up one day and declare war. The true drama is that we may succeed in building a new universal administrative layer and barely notice how much social power gets embedded in its defaults, templates, and permitted forms of speech.&lt;/p&gt;
&lt;p&gt;An ugly example helps here. Suppose every internal assistant in a large company quietly prefers one style of project plan, one tone of escalation, one definition of risk, one preferred sequence of approvals, one acceptable way of disagreeing. Nobody declares a doctrine. Nobody publishes a manifesto. People just start adapting to what the system rewards. That is how a lot of administrative power actually enters the room.&lt;/p&gt;
&lt;p&gt;That is not a reason for panic. It is a reason for seriousness.&lt;/p&gt;
&lt;p&gt;Every civilization that learns a new medium first celebrates its expressive power.
Soon after, it learns what paperwork can do with it.&lt;/p&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;p&gt;The best historical analogy for LLMs is not cinematic rebellion but administrative expansion. Like writing before them, natural-language interfaces begin as expressive tools and then harden into templates, records, procedures, and governance. That is why AI feels simultaneously liberating and bureaucratic: both experiences are true, because the same medium is serving both expression and institutional control.&lt;/p&gt;
&lt;p&gt;Seen this way, the important question is not whether structure will emerge. It is whether the coming administrative layer will stay legible, contestable, and open to public scrutiny, or whether it will arrive in the usual smiling way: convenient, useful, efficient, and already half invisible.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;When AI becomes part of society’s paperwork rather than its science fiction, who will notice first that the defaults have become law-like?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Related reading:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/musings/ai-language-protocols/freedom-creates-protocol/&#34;&gt;Freedom Creates Protocol&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/musings/ai-language-protocols/the-myth-of-prompting-as-conversation/&#34;&gt;The Myth of Prompting as Conversation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/musings/ai-language-protocols/is-there-a-hidden-language-beneath-english/&#34;&gt;Is There a Hidden Language Beneath English?&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
    <item>
      <title>Is There a Hidden Language Beneath English?</title>
      <link>https://turbovision.in6-addr.net/musings/ai-language-protocols/is-there-a-hidden-language-beneath-english/</link>
      <pubDate>Thu, 16 Apr 2026 00:00:00 +0000</pubDate>
      <lastBuildDate>Thu, 16 Apr 2026 00:00:00 +0000</lastBuildDate>
      <guid>https://turbovision.in6-addr.net/musings/ai-language-protocols/is-there-a-hidden-language-beneath-english/</guid>
      <description>&lt;p&gt;Most prompt engineering is written in English, and the industry often treats that fact as if it were almost self-evident. But once you ask whether English is truly the best control medium or merely the most overrepresented one, the ground starts moving under the whole discussion.&lt;/p&gt;
&lt;h2 id=&#34;tldr&#34;&gt;TL;DR&lt;/h2&gt;
&lt;p&gt;There is no strong evidence yet for one universal hidden &amp;ldquo;control language&amp;rdquo; beneath English. But there is real evidence that useful control can happen through non-natural-language mechanisms such as &lt;a href=&#34;https://aclanthology.org/2021.emnlp-main.243/&#34;&gt;soft prompts&lt;/a&gt;, &lt;a href=&#34;https://arxiv.org/abs/2410.12877&#34;&gt;steering vectors&lt;/a&gt;, and latent or activation-based agent communication. So the idea is not crazy. It is just easier to say crazy things around it than careful ones.&lt;/p&gt;
&lt;h2 id=&#34;the-question&#34;&gt;The Question&lt;/h2&gt;
&lt;p&gt;You may ask: if models live in a high-dimensional latent space, why are we still steering them with ordinary English sentences? Could there be a shorter, more efficient machine-native control language hidden under natural language, especially for agent-to-agent communication?&lt;/p&gt;
&lt;h2 id=&#34;the-long-answer&#34;&gt;The Long Answer&lt;/h2&gt;
&lt;p&gt;This is one of the most interesting questions in the whole field, partly because it contains a real idea and partly because it attracts nonsense like a magnet.&lt;/p&gt;
&lt;h3 id=&#34;why-the-idea-is-plausible&#34;&gt;Why the Idea Is Plausible&lt;/h3&gt;
&lt;p&gt;So let us separate what is plausible, what is established, and what is still an extrapolation, because this is exactly the kind of topic where people start sounding profound five minutes before they start lying to themselves.&lt;/p&gt;
&lt;p&gt;The plausible part comes first: natural language is almost certainly a lossy bottleneck.&lt;/p&gt;
&lt;p&gt;A model does not &amp;ldquo;think&amp;rdquo; in final output tokens alone. Internally it moves through activations, intermediate representations, attention patterns, and hidden states that contain far more structure than the sentence it eventually emits. The emitted sentence is not the whole state. It is the public projection of that state into a human-readable channel.&lt;/p&gt;
&lt;p&gt;Once you see that, your idea becomes immediately legible in technical terms. You are asking whether the human-readable wrapper is an inefficient control surface over a richer internal space, and whether two models might communicate more efficiently by exchanging compressed internal representations instead of serializing everything into English.&lt;/p&gt;
&lt;p&gt;That is not fantasy. It is already brushing against several real research directions.&lt;/p&gt;
&lt;p&gt;There is older work on emergent communication in multi-agent systems where agents invent message protocols that are useful to them but opaque to us. The 2017 paper &lt;a href=&#34;https://aclanthology.org/P17-1022/&#34;&gt;&lt;em&gt;Translating Neuralese&lt;/em&gt;&lt;/a&gt; is one of the early landmarks here. It did not show that agents had discovered some mystical perfect language hidden behind reality like a sacred cipher. It showed something more useful: agents can develop internal communication forms that are meaningful in use even when they are not naturally interpretable by humans.&lt;/p&gt;
&lt;p&gt;More recent work pushes this further toward language models specifically. Papers such as &lt;a href=&#34;https://proceedings.mlr.press/v267/ramesh25a.html&#34;&gt;&lt;em&gt;Communicating Activations Between Language Model Agents&lt;/em&gt;&lt;/a&gt; and &lt;a href=&#34;https://arxiv.org/abs/2511.09149&#34;&gt;&lt;em&gt;Interlat: Enabling Agents to Communicate Entirely in Latent Space&lt;/em&gt;&lt;/a&gt; explore the idea that agents can exchange internal activations or hidden-state-like representations directly, rather than always crushing them down into text first. The reported benefit in that line of work is exactly what you would expect: less information loss and often lower compute cost than long natural-language exchanges.&lt;/p&gt;
&lt;p&gt;So the broad direction of the intuition is already technically alive. That matters.&lt;/p&gt;
&lt;h3 id=&#34;where-the-evidence-actually-exists&#34;&gt;Where the Evidence Actually Exists&lt;/h3&gt;
&lt;p&gt;Now for the annoying but necessary part.&lt;/p&gt;
&lt;p&gt;What we do &lt;strong&gt;not&lt;/strong&gt; have, at least not in any established sense, is proof of one clean latent language sitting beneath English that we can simply reveal by subtracting the &amp;ldquo;English component.&amp;rdquo; I do not know of research that validates that exact decomposition in the neat form described. And this is exactly where people are tempted to jump from &amp;ldquo;the latent space is real&amp;rdquo; to &amp;ldquo;there must be a hidden universal language in there somewhere.&amp;rdquo; Maybe. But maybe is doing a lot of work there.&lt;/p&gt;
&lt;p&gt;Why not? Because the internal geometry is probably not that simple.&lt;/p&gt;
&lt;p&gt;English inside a model is not just &amp;ldquo;semantic content plus a detachable language shell.&amp;rdquo; It is entangled with tokenization, training distribution, stylistic priors, instruction-following habits, benchmark pressure, and all the historical accidents of the corpus. Meaning, format, tone, and control are mixed together.&lt;/p&gt;
&lt;p&gt;So I would challenge one very seductive picture: there is probably no single secret Esperanto of the latent space waiting patiently behind English, ready to reward whoever is clever enough to discover it.&lt;/p&gt;
&lt;p&gt;What is more likely is messier and, in my opinion, more interesting:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;many partially reusable internal control directions&lt;/li&gt;
&lt;li&gt;many task-specific compressed protocols&lt;/li&gt;
&lt;li&gt;many model-specific or architecture-specific latent conventions&lt;/li&gt;
&lt;li&gt;some transferable abstractions, but not one canonical hidden language&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is where &lt;a href=&#34;https://aclanthology.org/2021.emnlp-main.243/&#34;&gt;soft prompts&lt;/a&gt;, &lt;a href=&#34;https://aclanthology.org/2021.acl-long.353/&#34;&gt;prefix tuning&lt;/a&gt;, and &lt;a href=&#34;https://arxiv.org/abs/2410.12877&#34;&gt;steering vectors&lt;/a&gt; become useful to think with.&lt;/p&gt;
&lt;h3 id=&#34;why-a-single-hidden-language-is-unlikely&#34;&gt;Why a Single Hidden Language Is Unlikely&lt;/h3&gt;
&lt;p&gt;Soft prompts are not ordinary words. They are learned continuous vectors injected into the model&amp;rsquo;s input space. Prefix tuning generalizes that idea deeper into the network. Steering vectors act differently but share the same spirit: instead of asking with words alone, you manipulate the model by shifting internal activations in directions associated with some behavior or concept.&lt;/p&gt;
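&lt;p&gt;A toy sketch makes the mechanism concrete. The following Python is purely illustrative: the two-dimensional activation vectors are invented, and a real implementation would read and modify hidden states inside the forward pass of a model rather than operate on hand-written lists.&lt;/p&gt;

```python
# Toy sketch of activation steering. Not a real library API.
# A steering direction is estimated as the difference between mean
# activations on two contrasting sets of inputs, then added, scaled,
# to a hidden state to push behavior without any words being used.

def mean_vector(activations):
    """Average a list of equal-length activation vectors."""
    n = len(activations)
    dim = len(activations[0])
    return [sum(vec[i] for vec in activations) / n for i in range(dim)]

def steering_vector(pos_acts, neg_acts):
    """Direction pointing from the contrast set toward the target behavior."""
    pos = mean_vector(pos_acts)
    neg = mean_vector(neg_acts)
    return [p - q for p, q in zip(pos, neg)]

def apply_steering(hidden_state, direction, scale=1.0):
    """Shift a hidden state along the steering direction."""
    return [h + scale * d for h, d in zip(hidden_state, direction)]

# Invented activations from prompts exhibiting / lacking a behavior.
pos_acts = [[1.0, 0.0], [0.8, 0.2]]
neg_acts = [[0.0, 1.0], [0.2, 0.8]]

direction = steering_vector(pos_acts, neg_acts)
steered = apply_steering([0.5, 0.5], direction)
```

&lt;p&gt;The point of the sketch is only the shape of the operation: a direction is estimated from contrasting examples and then added to a hidden state, with no English anywhere in the control path.&lt;/p&gt;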
&lt;p&gt;That is already a kind of non-natural-language control, and it should make people at least a little suspicious of the lazy assumption that human language is the final or natural control layer forever.&lt;/p&gt;
&lt;p&gt;Notice what that implies. We already have control methods that are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;effective&lt;/li&gt;
&lt;li&gt;compact&lt;/li&gt;
&lt;li&gt;not human-readable&lt;/li&gt;
&lt;li&gt;native to representation space rather than sentence space&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;English is therefore not the only control medium. It is simply the most interoperable one for humans.&lt;/p&gt;
&lt;p&gt;And that point matters, because it reveals the real trade-off.&lt;/p&gt;
&lt;p&gt;Human language is inefficient, but legible.
Latent control is efficient, but opaque.&lt;/p&gt;
&lt;p&gt;That single sentence is the heart of the matter, and also the trade-off a lot of AI discussion would rather not stare at for too long.&lt;/p&gt;
&lt;p&gt;If two agents share architecture, alignment, and task context, there is every reason to suspect they could communicate more efficiently than by exchanging verbose English paragraphs. They could use compressed summaries, vector codes, reused cache structures, activations, or learned latent shorthands. Once the agents no longer need to satisfy human readability at every intermediate step, natural language begins to look less like the native medium and more like a compatibility layer.&lt;/p&gt;
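&lt;p&gt;The bandwidth intuition can be made concrete with a deliberately crude comparison. Everything below is hypothetical: the English status message and the eight-dimensional summary vector are both invented, and real latent messages would be much higher-dimensional, but the asymmetry survives at any realistic scale.&lt;/p&gt;

```python
# Crude size comparison between a natural-language status message and a
# hypothetical fixed-width latent summary. Both payloads are invented;
# the point is only the relative cost of the two encodings.
import struct

english_message = (
    "Task update: the retrieval step finished, three documents were "
    "selected with high confidence, and the draft answer is ready for "
    "review by the verification agent."
)

# A made-up 8-dimensional summary vector, packed as float32 values.
latent_summary = [0.12, -0.40, 0.90, 0.00, 0.33, -0.70, 0.05, 0.61]

text_bytes = len(english_message.encode("utf-8"))
vector_bytes = len(struct.pack("8f", *latent_summary))  # 4 bytes each

print(text_bytes, "bytes of English versus", vector_bytes, "bytes of vector")
```

&lt;p&gt;None of this proves that agents should talk in vectors. It only shows why, once human readability stops being a requirement, the prose channel starts to look expensive.&lt;/p&gt;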
&lt;p&gt;That does not mean English is useless or even secondary. It means English may belong mostly at the boundary:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;human to agent&lt;/li&gt;
&lt;li&gt;agent to human&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;while agent to agent may migrate toward denser internal forms.&lt;/p&gt;
&lt;h3 id=&#34;the-agent-to-agent-case-is-the-real-frontier&#34;&gt;The Agent-to-Agent Case Is the Real Frontier&lt;/h3&gt;
&lt;p&gt;This layered picture fits both engineering and history. Systems tend to expose legible interfaces at the top and efficient, ugly protocols underneath. TCP packets are not prose. Database wire formats are not essays. CPU micro-ops are not source code. So why should advanced agent swarms eternally chatter to each other in polite human language unless a human auditor needs to read every step?&lt;/p&gt;
&lt;p&gt;There is also a small absurdity here that is hard not to enjoy. We may be heading toward systems where two expensive reasoning agents exchange page after page of immaculate English purely so that humans can feel the process remains respectable, while both machines would probably prefer to swap a denser internal shorthand and get on with it.&lt;/p&gt;
&lt;p&gt;There is a further issue buried in the question itself: why English?&lt;/p&gt;
&lt;p&gt;The honest answer is likely mundane rather than metaphysical, which is unfortunate for anyone hoping for a more glamorous answer.&lt;/p&gt;
&lt;p&gt;English is privileged today because:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;much of the training data is English-heavy&lt;/li&gt;
&lt;li&gt;much of the instruction-tuning corpus is English-heavy&lt;/li&gt;
&lt;li&gt;many benchmarks are English-centric&lt;/li&gt;
&lt;li&gt;most prompt-engineering lore is shared in English&lt;/li&gt;
&lt;li&gt;tool docs, code, and interface conventions are often English-first&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So the dominance of English may say less about some deep optimality of English and more about the industrial history of model training. Sometimes the explanation is not &amp;ldquo;English maps best to reason.&amp;rdquo; Sometimes the explanation is simply &amp;ldquo;the pipeline grew up there.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;That said, replacing English with another human language is not yet the same as discovering a latent control protocol. Those are different questions.&lt;/p&gt;
&lt;p&gt;One asks: which human language is better for steering?
The other asks: must steering remain in human language at all?&lt;/p&gt;
&lt;p&gt;The second question is the deeper one.&lt;/p&gt;
&lt;h3 id=&#34;human-legibility-versus-machine-efficiency&#34;&gt;Human Legibility Versus Machine Efficiency&lt;/h3&gt;
&lt;p&gt;And here I think the strongest move is not the image of &amp;ldquo;subtract English and add it back later&amp;rdquo; as a literal algorithm, but as a conceptual provocation. It suggests that language may be acting as both carrier and drag. Carrier, because it gives us a shared interface. Drag, because it forces rich internal state through a narrow symbolic bottleneck.&lt;/p&gt;
&lt;p&gt;That is exactly why agent-to-agent communication is the most credible frontier for this idea.&lt;/p&gt;
&lt;p&gt;A human still needs explanation, auditability, and trust. Two agents collaborating under a shared protocol may care far less about elegance and far more about compression, precision, and bandwidth. They may converge on communication that looks to us like gibberish, or even bypass discrete language entirely.&lt;/p&gt;
&lt;p&gt;If that happens, the implications are substantial.&lt;/p&gt;
&lt;p&gt;First, debugging gets harder. You can inspect English. You can argue about English. You can regulate English. Hidden-state exchange is much less socially governable. It is also much easier to wave away with phrases like &amp;ldquo;trust the model&amp;rdquo; when nobody can really see what is happening.&lt;/p&gt;
&lt;p&gt;Second, interoperability becomes a real problem. A latent protocol learned by one model family may fail catastrophically with another. Natural language is slow, but it is remarkably portable.&lt;/p&gt;
&lt;p&gt;Third, alignment may get stranger. A human can at least sometimes spot trouble in verbose reasoning traces. A compressed latent exchange could be more capable and less inspectable at the same time.&lt;/p&gt;
&lt;p&gt;So I would state the thesis like this:&lt;/p&gt;
&lt;p&gt;There may not be one hidden language beneath English, but there are probably many machine-native control regimes that natural language currently obscures.&lt;/p&gt;
&lt;p&gt;That is the version I trust.&lt;/p&gt;
&lt;p&gt;It leaves room for real progress without pretending the geometry is cleaner than it is. It respects the evidence from soft prompts, steering, and latent-agent communication without claiming that the grand unified control language has already been found. And it points toward the place where the idea matters most: not in helping humans write ever more magical prompts, but in letting agents exchange context faster than prose allows.&lt;/p&gt;
&lt;p&gt;That future, if it comes, will not feel like the discovery of a secret language carved into the bedrock of intelligence. It will feel more like the emergence of protocol families: efficient, narrow, powerful, local, and only partially intelligible from the outside.&lt;/p&gt;
&lt;p&gt;Which is, frankly, how real technical history usually looks. Messier than prophecy, less elegant than theory, and much more interesting.&lt;/p&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;p&gt;There is no solid reason yet to believe in one universal hidden control language beneath English. But there is good reason to suspect that natural language is only one control surface among several, and not necessarily the most efficient one for every setting. &lt;a href=&#34;https://aclanthology.org/2021.emnlp-main.243/&#34;&gt;Soft prompts&lt;/a&gt;, &lt;a href=&#34;https://arxiv.org/abs/2410.12877&#34;&gt;steering vectors&lt;/a&gt;, and latent or activation-based communication all point in the same direction: human language may remain the public interface while more compressed machine-native protocols emerge underneath.&lt;/p&gt;
&lt;p&gt;The most promising use case for that shift is not magical human prompting. It is agent-to-agent coordination, where efficiency may matter more than legibility. The seduction of the idea lies in human prompting. The real engineering value may lie somewhere else entirely.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;If the most capable future agent systems stop explaining themselves to each other in human language, how much opacity are we actually willing to accept in exchange for speed and capability?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Related reading:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/musings/ai-language-protocols/from-prompt-to-protocol-stack/&#34;&gt;From Prompt to Protocol Stack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/musings/ai-language-protocols/the-real-historical-analogy/&#34;&gt;The Real Historical Analogy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/musings/ai-language-protocols/freedom-creates-protocol/&#34;&gt;Freedom Creates Protocol&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
  </channel>
</rss>
