<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Security on TurboVision</title>
    <link>https://turbovision.in6-addr.net/tags/security/</link>
    <description>Recent content in Security on TurboVision</description>
    <generator>Hugo</generator>
    <language>en</language>
    <lastBuildDate>Tue, 21 Apr 2026 14:06:12 +0000</lastBuildDate>
    <atom:link href="https://turbovision.in6-addr.net/tags/security/index.xml" rel="self" type="application/rss&#43;xml" />
    
    
    
    <item>
      <title>MCPs: &#34;Useful&#34; Was Never the Real Threshold -- &#34;Consequential&#34; Was</title>
      <link>https://turbovision.in6-addr.net/musings/ai-language-protocols/mcps-useful-was-never-the-real-threshold-consequential-was/</link>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
      <lastBuildDate>Mon, 20 Apr 2026 00:00:00 +0000</lastBuildDate>
      <guid>https://turbovision.in6-addr.net/musings/ai-language-protocols/mcps-useful-was-never-the-real-threshold-consequential-was/</guid>
      <description>&lt;p&gt;For a while, the industry kept talking as if tool access merely made models more &amp;ldquo;useful&amp;rdquo;. That description is too soft by half, because the real shift is harsher: once a model can perceive and act through an environment, its outputs stop being merely interesting and start becoming &amp;ldquo;consequential&amp;rdquo;.&lt;/p&gt;
&lt;h2 id=&#34;tldr&#34;&gt;TL;DR&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://modelcontextprotocol.io/specification/latest&#34;&gt;Model Context Protocol (MCP)&lt;/a&gt; does not just make language models more capable in some vague product sense. It moves them closer to &amp;ldquo;consequence&amp;rdquo; by connecting model output to trusted systems, permissions, tools, and environments where words can become actions.&lt;/p&gt;
&lt;h2 id=&#34;the-question&#34;&gt;The Question&lt;/h2&gt;
&lt;p&gt;You may ask: if MCP is just a protocol for tools and context, why treat it as such a serious threshold? Why not simply say it makes models more &amp;ldquo;useful&amp;rdquo; and leave it at that?&lt;/p&gt;
&lt;h2 id=&#34;the-long-answer&#34;&gt;The Long Answer&lt;/h2&gt;
&lt;p&gt;Because &lt;code&gt;&amp;quot;useful&amp;quot;&lt;/code&gt; is marketing language. &lt;code&gt;&amp;quot;consequential&amp;quot;&lt;/code&gt; is the serious word.&lt;/p&gt;
&lt;p&gt;An LLM on its own is still mostly trapped inside text. Yes, text matters. Text persuades, misleads, reassures, coordinates, manipulates, flatters, and occasionally clarifies. But absent tool access, the model remains largely confined to symbolic output that a human still has to read, interpret, and turn into action.&lt;/p&gt;
&lt;p&gt;The moment &lt;a href=&#34;https://modelcontextprotocol.io/docs/learn&#34;&gt;MCP&lt;/a&gt; enters the picture, that changes. Not magically. Not philosophically. Operationally.&lt;/p&gt;
&lt;p&gt;Now the model can observe through tools. It can pull in state it was not explicitly handed in the original prompt. It can request actions in systems it does not itself implement. It can inspect, decide, act, observe the effect, and act again. In other words, it stops being merely interpretive and starts becoming infrastructural.&lt;/p&gt;
&lt;p&gt;That is the real shift. Not more eloquence. Not slightly better automation. Consequence.&lt;/p&gt;
&lt;h3 id=&#34;text-was-never-the-final-problem&#34;&gt;Text Was Never the Final Problem&lt;/h3&gt;
&lt;p&gt;People still talk about model output as though the main issue were what the model says. That framing is becoming stale.&lt;/p&gt;
&lt;p&gt;If a model writes a strange paragraph, that may be annoying. If the same model can trigger a shell action, drive a browser session, modify a repository, hit an API with real credentials, or traverse a filesystem through an &lt;a href=&#34;https://modelcontextprotocol.io/specification/latest/basic&#34;&gt;MCP server&lt;/a&gt;, then the relevant question is no longer merely &amp;ldquo;what did it say?&amp;rdquo; The real question becomes: what did the environment allow those words to become?&lt;/p&gt;
&lt;p&gt;That sounds obvious once stated plainly, but a great deal of current AI rhetoric still behaves as though the old text-only framing were enough.&lt;/p&gt;
&lt;p&gt;It is not enough.&lt;/p&gt;
&lt;p&gt;A model that suggests deleting a file and a model that can actually cause that deletion are not the same kind of system. A model that proposes an escalation email and a model that can send it are not the same kind of system. A model that hallucinates a bad shell command and a model whose output gets routed into execution are not separated by a minor implementation detail. They are separated by consequence.&lt;/p&gt;
&lt;p&gt;That is why I do not like the soft phrase &amp;ldquo;tool augmentation&amp;rdquo; as the whole story. It sounds innocent, like giving a worker a slightly better screwdriver. In many cases what is really happening is that we are connecting a probabilistic decision process to a live environment and then acting surprised that the environment starts to matter more than the prose.&lt;/p&gt;
&lt;h3 id=&#34;mcp-connects-the-model-to-situated-power&#34;&gt;MCP Connects the Model to Situated Power&lt;/h3&gt;
&lt;p&gt;The &lt;a href=&#34;https://modelcontextprotocol.io/specification/latest&#34;&gt;Model Context Protocol&lt;/a&gt; is often described in tidy, neutral terms: servers expose tools, resources, prompts, and related capabilities; hosts and clients connect them; the model gets context and action surfaces it would not otherwise have. All of that is true.&lt;/p&gt;
&lt;p&gt;It is also too clean.&lt;/p&gt;
&lt;p&gt;What MCP really does, in practice, is connect model judgment to situated power.&lt;/p&gt;
&lt;p&gt;That power is not abstract. It lives wherever the tool lives:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;in a filesystem the tool can read or write&lt;/li&gt;
&lt;li&gt;in a browser session the tool can drive&lt;/li&gt;
&lt;li&gt;in a shell the tool can execute through&lt;/li&gt;
&lt;li&gt;in an API surface the tool can authenticate to&lt;/li&gt;
&lt;li&gt;in an organization whose workflows are increasingly willing to trust the result&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That is why I think the comforting sentence &amp;ldquo;the model only has access to approved tools&amp;rdquo; often means much less than people want it to mean. If the approved tools are broad enough, then saying &amp;ldquo;only approved tools&amp;rdquo; is like saying a process is safe because it only has access to approved machinery, while the approved machinery includes the loading dock, the admin terminal, and the master keys.&lt;/p&gt;
&lt;p&gt;Formally reassuring. Operationally laughable.&lt;/p&gt;
&lt;p&gt;And that is before we get to the uglier part: once tools can observe and act in loops, the system is no longer a simple one-shot responder. It is in a perception-action cycle:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;inspect environment state&lt;/li&gt;
&lt;li&gt;compress that state into a model-readable form&lt;/li&gt;
&lt;li&gt;decide on an action&lt;/li&gt;
&lt;li&gt;execute via tool&lt;/li&gt;
&lt;li&gt;inspect consequences&lt;/li&gt;
&lt;li&gt;act again&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;That loop is where &amp;ldquo;just a language model&amp;rdquo; stops being an honest description.&lt;/p&gt;
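&lt;p&gt;As a hedged sketch, the six steps above reduce to a small driver loop. Everything here is a toy stand-in for illustration; none of these names come from the MCP specification:&lt;/p&gt;

```python
# Minimal sketch of the perception-action cycle described above.
# The observe/decide/act callables are illustrative assumptions,
# not MCP APIs.

def run_loop(observe, decide, act, max_steps=5):
    """Inspect, decide, act, observe the effect, act again (steps 1-6)."""
    history = []
    for _ in range(max_steps):
        state = observe()                  # steps 1-2: inspect and compress
        action = decide(state, history)    # step 3: decide on an action
        if action is None:                 # the policy decides it is done
            break
        result = act(action)               # step 4: execute via tool
        history.append((action, result))   # step 5: observed consequence
    return history                         # step 6 happens on the next pass

# Toy environment: a counter the "tool" can increment.
env = {"count": 0}

def increment(_action):
    env["count"] = env["count"] + 1
    return env["count"]

log = run_loop(
    observe=lambda: dict(env),
    decide=lambda state, _h: "inc" if state["count"] != 2 else None,
    act=increment,
)
```

&lt;p&gt;The point of the toy is structural: once such a loop exists, the interesting behavior lives in the environment the actions touch, not in any single output.&lt;/p&gt;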
&lt;h3 id=&#34;typed-interfaces-do-not-guarantee-bounded-consequences&#34;&gt;Typed Interfaces Do Not Guarantee Bounded Consequences&lt;/h3&gt;
&lt;p&gt;This is where people start trying to calm themselves down with schemas.&lt;/p&gt;
&lt;p&gt;They say: yes, but the MCP tool has a defined interface. Yes, but the arguments are typed. Yes, but the model can only call the tool in approved ways.&lt;/p&gt;
&lt;p&gt;Fine. Sometimes that matters. But typed invocation is not the same thing as bounded consequence.&lt;/p&gt;
&lt;p&gt;That distinction is one of the big buried truths in this whole discussion.&lt;/p&gt;
&lt;p&gt;A narrow, typed tool that does one highly constrained thing under externally enforced limits can be meaningfully bounded. That is real. I would not deny it.&lt;/p&gt;
&lt;p&gt;But most interesting, high-leverage tool surfaces are not like that. They are rich enough to matter precisely because they leave room for discretion:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;a shell surface that can trigger many valid but open-ended actions&lt;/li&gt;
&lt;li&gt;a browser surface that can navigate changing state, click, submit, search, loop, and adapt&lt;/li&gt;
&lt;li&gt;a repository or filesystem surface where many technically valid edits are still strategically wrong&lt;/li&gt;
&lt;li&gt;a broad API surface with enough credentials to make mistakes expensive&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In those cases, the tool schema may constrain the &lt;em&gt;shape&lt;/em&gt; of the invocation while doing very little to constrain the &lt;em&gt;meaningful space of effects&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;This is the trick people keep playing on themselves. They mistake typed interface for real containment.&lt;/p&gt;
&lt;p&gt;It is not the same thing.&lt;/p&gt;
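&lt;p&gt;A tiny illustration of the gap, assuming a hypothetical shell tool whose typed arguments are a command string and a working directory:&lt;/p&gt;

```python
# A typed tool call can validate perfectly and still carry unbounded effect.
# The schema below is an illustrative assumption, not an MCP spec type.

def validate_shell_call(args):
    """Type-check only the invocation shape: {'command': str, 'cwd': str}."""
    return (
        isinstance(args, dict)
        and isinstance(args.get("command"), str)
        and isinstance(args.get("cwd"), str)
    )

# Both calls have exactly the same valid *shape*.
# Their consequences differ wildly.
benign = {"command": "ls", "cwd": "/tmp"}
destructive = {"command": "rm -rf /", "cwd": "/"}

assert validate_shell_call(benign)
assert validate_shell_call(destructive)  # typed, approved, catastrophic
```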
&lt;p&gt;The residual risk is not merely &amp;ldquo;the model might call the wrong method.&amp;rdquo; The nastier risk is that it makes a sequence of perfectly valid calls under a flawed interpretation of the task, and the environment obediently translates that flawed interpretation into real change.&lt;/p&gt;
&lt;p&gt;That is a much uglier failure mode than a malformed output string.&lt;/p&gt;
&lt;p&gt;And if that still sounds abstract, the failure sketches are not hard to imagine:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;give the model MCP access to your filesystem and one bad interpretation later it removes essential OS files; local machine unusable, oops&lt;/li&gt;
&lt;li&gt;give it MCP access to your PostgreSQL and a &amp;ldquo;cleanup&amp;rdquo; step becomes a table drop; data gone, oops&lt;/li&gt;
&lt;li&gt;give it MCP access to your Jira queue and it does not just read the backlog, it closes tickets and strips descriptions because some rule somewhere made &amp;ldquo;resolve noise&amp;rdquo; sound like a sensible goal; oops&lt;/li&gt;
&lt;li&gt;give it MCP access to your GitHub project and it does not merely inspect pull requests, it force-pushes the wrong branch state and empties the repository; oops&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I am intentionally presenting those as plausible scenarios, not as a sourced catalogue of named incidents. The point does not depend on theatrical storytelling. The point is simpler and uglier: a model wired up over MCP can do whatever the token, permission set, and host environment allow it to do.&lt;/p&gt;
&lt;p&gt;That does not require dramatic machine agency. It does not even require a particularly clever model. A typo in a skill file, a bad rule, a sloppy prompt, a wrong assumption in a workflow, or a brittle bit of context can be enough. Once the path from output to action is short, stupidity scales just as nicely as intelligence does.&lt;/p&gt;
&lt;h3 id=&#34;the-boundary-did-not-disappear-it-moved&#34;&gt;The Boundary Did Not Disappear. It Moved&lt;/h3&gt;
&lt;p&gt;To be fair, MCP does not abolish boundaries. It relocates them.&lt;/p&gt;
&lt;p&gt;The old comforting fantasy was that safety lived mostly at the model boundary: constrain the model, filter the output, police the prompt, maybe wrap the text in a few guardrails, and hope that was enough.&lt;/p&gt;
&lt;p&gt;With MCP, the effective boundary moves outward:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;to the tool surface&lt;/li&gt;
&lt;li&gt;to the permission model&lt;/li&gt;
&lt;li&gt;to the host environment&lt;/li&gt;
&lt;li&gt;to the surrounding runtime constraints&lt;/li&gt;
&lt;li&gt;to whatever external systems can still refuse, log, sandbox, rate-limit, or block consequences&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That is a major architectural shift.&lt;/p&gt;
&lt;p&gt;And this is where I get more suspicious than a lot of current product writing does. People often talk as though external boundaries are automatically comforting. They are not automatically comforting. They are only as good as their actual ability to resist broad, adaptive, probabilistic use by a system that can observe, retry, reframe, and route around friction.&lt;/p&gt;
&lt;p&gt;If the only real safety story is &amp;ldquo;the environment will catch it,&amp;rdquo; then the environment had better be much more trustworthy than most real environments are.&lt;/p&gt;
&lt;p&gt;I do not know of any serious engineer who would be reassured by hand-wavy references to containment.&lt;/p&gt;
&lt;h3 id=&#34;containment-talk-is-often-too-cheerful&#34;&gt;Containment Talk Is Often Too Cheerful&lt;/h3&gt;
&lt;p&gt;This is the point where the tone of the discussion usually goes soft and reassuring, and I think that softness is misplaced.&lt;/p&gt;
&lt;p&gt;If you are dealing with a very narrow tool, tight external constraints, minimal side effects, isolated credentials, explicit confirmation boundaries, and no broad environmental leverage, then yes, boundedness may be meaningful. Good. Keep it.&lt;/p&gt;
&lt;p&gt;But in many practically interesting MCP setups, the residual constraints are too weak, too external, or too porous to count as meaningful containment in the comforting sense that people quietly want.&lt;/p&gt;
&lt;p&gt;That is the line I would draw.&lt;/p&gt;
&lt;p&gt;Not:
&amp;ldquo;all containment is impossible.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;I cannot prove that, and I will not fake certainty where I do not have it.&lt;/p&gt;
&lt;p&gt;But I will say this:&lt;/p&gt;
&lt;p&gt;once a model can observe, adapt, and act through broad tools in a rich environment, confidence in clean containment should fall sharply.&lt;/p&gt;
&lt;p&gt;That is not drama. That is a sober posture.&lt;/p&gt;
&lt;p&gt;An ugly little scene makes the point better than theory does. Imagine a company proudly announcing that its internal assistant is &amp;ldquo;safely integrated&amp;rdquo; with file operations, browser automation, deployment metadata, ticketing tools, and internal knowledge systems. For two weeks everyone calls this productivity. Then one odd interpretation slips through, a valid sequence of tool calls touches the wrong systems in the wrong order, and now there is an incident review full of phrases like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;ldquo;the tool call was technically valid&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;the model appeared to follow the requested workflow&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;the side effect was not anticipated&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;the environment did not block the action as expected&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That is not science fiction. That is the shape of a very ordinary modern failure.&lt;/p&gt;
&lt;h3 id=&#34;the-real-threshold-was-never-utility&#34;&gt;The Real Threshold Was Never Utility&lt;/h3&gt;
&lt;p&gt;This is why I keep returning to the same word.&lt;/p&gt;
&lt;p&gt;&amp;ldquo;Useful&amp;rdquo; was never the real threshold.
&amp;ldquo;Consequential&amp;rdquo; was.&lt;/p&gt;
&lt;p&gt;A model can be &amp;ldquo;useful&amp;rdquo; without mattering very much. A search helper is useful. A summarizer is useful. A draft generator is useful. Those systems may still be annoying, biased, sloppy, or overhyped, but their effects remain relatively buffered by human review and interpretation.&lt;/p&gt;
&lt;p&gt;A model becomes &amp;ldquo;consequential&amp;rdquo; when the path from output to effect shortens.&lt;/p&gt;
&lt;p&gt;That can happen because:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;humans begin trusting the output by default&lt;/li&gt;
&lt;li&gt;tools begin translating output into action&lt;/li&gt;
&lt;li&gt;environments become legible enough for iterative manipulation&lt;/li&gt;
&lt;li&gt;organizational workflows stop treating the model as advisory and start treating it as procedural&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And once that happens, the language around &amp;ldquo;utility&amp;rdquo; becomes too polite. The system is no longer just helping. It is participating in consequence.&lt;/p&gt;
&lt;p&gt;That does not mean every MCP setup is reckless. It does mean the burden of proof should sit with the people claiming safety, not with the people expressing suspicion.&lt;/p&gt;
&lt;p&gt;If the tool semantics are broad, the environment is rich, and the model retains discretionary judgment over how to sequence valid actions, then the default posture should not be comfort. It should be scrutiny.&lt;/p&gt;
&lt;h3 id=&#34;what-this-changes&#34;&gt;What This Changes&lt;/h3&gt;
&lt;p&gt;Once you see MCP through the lens of consequence, several things become clearer.&lt;/p&gt;
&lt;p&gt;First, the real agent is not just the model. It is:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;model + protocol + tool surface + permissions + environment + feedback loop&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Second, &amp;ldquo;alignment&amp;rdquo; at the text level is no longer enough as a meaningful description. A model can appear compliant in language while still steering a valid sequence of actions toward the wrong practical outcome.&lt;/p&gt;
&lt;p&gt;Third, governance has to shift outward. It is no longer enough to ask whether the model says the right things. You have to ask what the surrounding system permits those sayings to become.&lt;/p&gt;
&lt;p&gt;Fourth, a lot of the current product language is too soothing. It keeps using words like assistant, tool use, augmentation, and workflow help, because those words leave consequence safely blurry. The blur is convenient. It is also the problem.&lt;/p&gt;
&lt;h3 id=&#34;this-is-not-a-rant-against-consequence&#34;&gt;This Is Not a Rant Against Consequence&lt;/h3&gt;
&lt;p&gt;At this point, the essay could be misread as a long argument for fear, paralysis, or retreat back into harmless toys. That is not the point.&lt;/p&gt;
&lt;p&gt;This is not an anti-MCP argument. It is an anti-naivety argument.&lt;/p&gt;
&lt;p&gt;The point is not to reject consequence. The point is to become worthy of it.&lt;/p&gt;
&lt;p&gt;If &lt;a href=&#34;https://modelcontextprotocol.io/specification/latest&#34;&gt;MCP&lt;/a&gt; really is one of the thresholds where model output starts turning into environmental effect, then the answer is not denial and it is not marketing. The answer is stewardship. Better boundaries. Narrower permissions. Clearer language. Smaller blast radii. Real auditability. Reversibility where possible. Suspicion toward vague assurances. Less safety theater. More adult engineering.&lt;/p&gt;
&lt;p&gt;That is the constructive spin, if one insists on calling it a spin. The critique exists because these systems matter. If they were merely toys, none of this would deserve such forceful language. The harsher the consequence, the less patience one should have for sloppy metaphors, soft promises, and fake containment stories.&lt;/p&gt;
&lt;p&gt;So no, the argument is not that models must never act. The argument is that systems with consequence should be designed as if consequence were real, because it is.&lt;/p&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://modelcontextprotocol.io/specification/latest&#34;&gt;MCP&lt;/a&gt; does not merely make models more &amp;ldquo;useful&amp;rdquo;. It can make them &amp;ldquo;consequential&amp;rdquo; by connecting model output to trusted environments where words are translated into effects. That is the real threshold worth paying attention to.&lt;/p&gt;
&lt;p&gt;The hard part is not that tools exist. The hard part is that broad tools, rich environments, and probabilistic judgment do not compose into comforting guarantees just because the invocation format looks tidy. The boundary did not disappear. It moved outward, and in many interesting cases it moved to places that do not deserve much casual trust.&lt;/p&gt;
&lt;p&gt;The constructive answer is not to pretend consequence away. It is to build systems, permissions, workflows, and institutions that are actually worthy of it.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;If the real danger is no longer what the model says but what trusted systems allow its sayings to become, where should we admit the true boundary of responsibility now lies?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Related reading:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/musings/ai-language-protocols/from-prompt-to-protocol-stack/&#34;&gt;From Prompt to Protocol Stack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/musings/ai-language-protocols/the-real-historical-analogy/&#34;&gt;The Real Historical Analogy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/musings/ai-language-protocols/freedom-creates-protocol/&#34;&gt;Freedom Creates Protocol&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
    <item>
      <title>Assumption-Led Security Reviews</title>
      <link>https://turbovision.in6-addr.net/hacking/assumption-led-security-reviews/</link>
      <pubDate>Sun, 22 Feb 2026 00:00:00 +0000</pubDate>
      <lastBuildDate>Sun, 22 Feb 2026 22:16:19 +0100</lastBuildDate>
      <guid>https://turbovision.in6-addr.net/hacking/assumption-led-security-reviews/</guid>
      <description>&lt;p&gt;Many security reviews fail before they begin because they are framed as checklist compliance rather than assumption testing. Checklists are useful for coverage. Assumptions are where real risk hides.&lt;/p&gt;
&lt;p&gt;Every system has assumptions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;ldquo;this endpoint is internal only&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;this token cannot be replayed&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;this queue input is trusted&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;this service account has least privilege&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When assumptions are wrong, controls built on top of them become decorative.&lt;/p&gt;
&lt;p&gt;An assumption-led review starts by collecting claims from architecture, docs, and team memory, then converting each claim into a testable statement. Not &amp;ldquo;is auth secure?&amp;rdquo; but &amp;ldquo;can an untrusted caller obtain action X through path Y under condition Z?&amp;rdquo;&lt;/p&gt;
&lt;p&gt;This shift changes review quality immediately.&lt;/p&gt;
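&lt;p&gt;As a sketch, the claim-to-test conversion can be as small as a record constructor. The field names and the example path are hypothetical:&lt;/p&gt;

```python
# Sketch: turn an architecture claim into a testable statement of the form
# "can an untrusted caller obtain action X through path Y under condition Z?"
# All field names are illustrative assumptions.

def to_testable(claim, caller, action, path, condition):
    return {
        "claim": claim,
        "test": f"can {caller} obtain {action} through {path} under {condition}?",
        "status": "uncertain",  # until evidence says otherwise
    }

stmt = to_testable(
    claim="this endpoint is internal only",
    caller="an untrusted caller",
    action="an admin listing",
    path="/internal/users",  # hypothetical path
    condition="a missing auth header",
)
```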
&lt;p&gt;A practical review flow:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;inventory critical assumptions&lt;/li&gt;
&lt;li&gt;rank by blast radius if false&lt;/li&gt;
&lt;li&gt;define validation method per assumption&lt;/li&gt;
&lt;li&gt;execute tests with evidence capture&lt;/li&gt;
&lt;li&gt;classify outcomes: confirmed, disproven, uncertain&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Uncertain is a valid outcome and should trigger follow-up work, not silent closure.&lt;/p&gt;
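&lt;p&gt;The five steps can be sketched with plain data. The assumptions and blast-radius scores below are illustrative, not a real inventory:&lt;/p&gt;

```python
# Sketch of the flow above: rank assumptions by blast radius, then record
# one of the three allowed outcomes per validation. Data is illustrative.

assumptions = [
    {"name": "token cannot be replayed", "blast_radius": 9},
    {"name": "queue input is trusted", "blast_radius": 7},
    {"name": "endpoint is internal only", "blast_radius": 10},
]

# Step 2: highest blast radius first.
ranked = sorted(assumptions, key=lambda a: a["blast_radius"], reverse=True)

OUTCOMES = {"confirmed", "disproven", "uncertain"}

def record(assumption, outcome, evidence):
    """Steps 4-5: capture evidence and classify the result."""
    assert outcome in OUTCOMES, "outcome must be one of the three classes"
    return {**assumption, "outcome": outcome, "evidence": evidence}

results = [record(ranked[0], "disproven", "reached endpoint from outside VPC")]
```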
&lt;p&gt;Assumption inventories should include both technical and operational layers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;network trust boundaries&lt;/li&gt;
&lt;li&gt;identity and role mapping&lt;/li&gt;
&lt;li&gt;secret rotation and revocation behavior&lt;/li&gt;
&lt;li&gt;logging completeness and tamper resistance&lt;/li&gt;
&lt;li&gt;recovery behavior during dependency failure&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Security posture is often lost in the seams between layers.&lt;/p&gt;
&lt;p&gt;A common anti-pattern is reviewing only happy-path authorization. Mature reviews probe degraded and unexpected states:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;stale cache after role change&lt;/li&gt;
&lt;li&gt;timeout fallback behavior&lt;/li&gt;
&lt;li&gt;retry loops after partial failure&lt;/li&gt;
&lt;li&gt;out-of-order event processing&lt;/li&gt;
&lt;li&gt;duplicated message handling&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Attackers do not wait for your ideal system state.&lt;/p&gt;
&lt;p&gt;Evidence discipline matters. For each finding, capture:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;exact request or action performed&lt;/li&gt;
&lt;li&gt;environment and identity context&lt;/li&gt;
&lt;li&gt;observed response/state transition&lt;/li&gt;
&lt;li&gt;why this confirms or disproves assumption&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Without evidence, findings become debate material instead of engineering input.&lt;/p&gt;
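&lt;p&gt;A minimal evidence record covering the four fields above might look like this; the field names and the sample finding are assumptions for illustration:&lt;/p&gt;

```python
# Sketch of an evidence record for one finding. Field names are
# illustrative assumptions, not a standard format.

from datetime import datetime, timezone

def evidence_record(action, identity, environment, observed, verdict):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,            # exact request or action performed
        "identity": identity,        # identity context
        "environment": environment,  # environment context
        "observed": observed,        # observed response / state transition
        "verdict": verdict,          # why this confirms or disproves it
    }

rec = evidence_record(
    action="GET /internal/users without auth header",  # hypothetical request
    identity="unauthenticated external client",
    environment="staging",
    observed="HTTP 200 with full user list",
    verdict="disproves the internal-only assumption",
)
```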
&lt;p&gt;One reason assumption-led reviews outperform static checklists is adaptability. Checklists can lag architecture changes. Assumptions are always current because they come from how teams believe the system behaves today.&lt;/p&gt;
&lt;p&gt;This also improves cross-team communication. When a review says, &amp;ldquo;Assumption A was false under condition B,&amp;rdquo; owners can act. When a review says, &amp;ldquo;security maturity low,&amp;rdquo; people argue semantics.&lt;/p&gt;
&lt;p&gt;Security reviews should also evaluate observability assumptions. Teams often believe incidents will be detectable because logs exist somewhere. Test that belief:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;does action X produce audit event Y?&lt;/li&gt;
&lt;li&gt;is actor identity preserved end-to-end?&lt;/li&gt;
&lt;li&gt;can events be correlated across services in minutes, not days?&lt;/li&gt;
&lt;li&gt;can alerting distinguish abuse from normal traffic?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Detection assumptions are security controls.&lt;/p&gt;
&lt;p&gt;Permission models deserve explicit assumption tests too. &amp;ldquo;Least privilege&amp;rdquo; is often declared, rarely verified. Run effective-permission snapshots for key service accounts and compare against actual required operations. Overprivilege is usually broader than expected.&lt;/p&gt;
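&lt;p&gt;One way to sketch that snapshot comparison, with hypothetical permission names:&lt;/p&gt;

```python
# Sketch: compare an account's effective permissions against the operations
# it actually needs. Permission names are hypothetical examples.

def overprivilege(effective, required):
    """Grants held but never exercised: usually broader than expected."""
    return sorted(set(effective) - set(required))

effective = {"s3:Read", "s3:Write", "s3:DeleteBucket", "iam:PassRole"}
required = {"s3:Read", "s3:Write"}

extra = overprivilege(effective, required)
# extra now lists the grants to revoke or justify explicitly
```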
&lt;p&gt;Another high-value area is trust transitively inherited from third-party integrations. Assumptions like &amp;ldquo;provider validates input&amp;rdquo; or &amp;ldquo;SDK enforces signature checks&amp;rdquo; should be verified by controlled failure injection or negative tests.&lt;/p&gt;
&lt;p&gt;Assumption reviews are especially useful before major migrations:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;identity provider switch&lt;/li&gt;
&lt;li&gt;event bus replacement&lt;/li&gt;
&lt;li&gt;monolith decomposition&lt;/li&gt;
&lt;li&gt;region expansion&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Migrations amplify latent assumptions. Pre-migration validation avoids expensive post-cutover surprises.&lt;/p&gt;
&lt;p&gt;Reporting format should be brief and decision-oriented:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;assumption statement&lt;/li&gt;
&lt;li&gt;status (confirmed/disproven/uncertain)&lt;/li&gt;
&lt;li&gt;impact if false&lt;/li&gt;
&lt;li&gt;evidence pointer&lt;/li&gt;
&lt;li&gt;remediation owner and due date&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This format integrates smoothly into engineering planning.&lt;/p&gt;
&lt;p&gt;A strong remediation strategy focuses on making assumptions explicit in-system:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;encode invariants in tests&lt;/li&gt;
&lt;li&gt;enforce policy in middleware&lt;/li&gt;
&lt;li&gt;add runtime guards for impossible states&lt;/li&gt;
&lt;li&gt;instrument detection for assumption violations&lt;/li&gt;
&lt;li&gt;document contract boundaries near code&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The goal is not one good review. The goal is continuous assumption integrity.&lt;/p&gt;
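&lt;p&gt;One of the remediations above, a runtime guard for impossible states, can be sketched as follows; the invariant and the reporting hook are illustrative examples:&lt;/p&gt;

```python
# Sketch: a runtime guard turns a violated assumption into a loud,
# observable event instead of silent drift. The invariant is illustrative.

def guard_invariant(name, holds, on_violation):
    """Check an encoded assumption; report instead of proceeding if it fails."""
    if not holds:
        on_violation(name)
        return False
    return True

violations = []

# An impossible state under the "balances are non-negative" assumption.
balance = -5

ok = guard_invariant(
    "account balance is non-negative",
    holds=(balance >= 0),
    on_violation=violations.append,  # in practice: emit a security event
)
```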
&lt;p&gt;There is a cultural angle here too. Teams should feel safe admitting uncertainty. If uncertainty is penalized, assumptions go unchallenged and risks accumulate quietly. Assumption-led reviews work best in environments where &amp;ldquo;we do not know yet&amp;rdquo; is treated as an actionable state.&lt;/p&gt;
&lt;p&gt;This approach also improves incident response. During active incidents, responders can quickly reference known assumption status:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;confirmed trust boundaries&lt;/li&gt;
&lt;li&gt;known weak points&lt;/li&gt;
&lt;li&gt;uncertain controls needing immediate verification&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Prepared uncertainty maps reduce chaos under pressure.&lt;/p&gt;
&lt;p&gt;If your team wants to adopt this with low overhead, start with one workflow:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;pick one high-impact service&lt;/li&gt;
&lt;li&gt;list ten assumptions&lt;/li&gt;
&lt;li&gt;validate top five by blast radius&lt;/li&gt;
&lt;li&gt;file concrete follow-ups for anything disproven or uncertain&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;One cycle usually exposes enough hidden risk to justify making the method standard.&lt;/p&gt;
&lt;p&gt;Security is not only control inventory. It is confidence that critical assumptions hold under real conditions. Assumption-led reviews build that confidence with evidence instead of optimism.&lt;/p&gt;
&lt;p&gt;When systems are complex, this is the difference between feeling secure and being secure.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Security Findings as Design Feedback</title>
      <link>https://turbovision.in6-addr.net/hacking/security-findings-as-design-feedback/</link>
      <pubDate>Sun, 22 Feb 2026 00:00:00 +0000</pubDate>
      <lastBuildDate>Sun, 22 Feb 2026 22:43:22 +0100</lastBuildDate>
      <guid>https://turbovision.in6-addr.net/hacking/security-findings-as-design-feedback/</guid>
      <description>&lt;p&gt;Security reports are often treated as defect inventories: patch issue, close ticket, move on. That workflow is necessary, but it is incomplete. Many findings are not isolated mistakes; they are design feedback about how a system creates, hides, or amplifies risk. Teams that only chase individual fixes improve slowly. Teams that read findings as architecture signals improve compoundingly.&lt;/p&gt;
&lt;p&gt;A useful reframing is to ask, for each vulnerability: what design decision made this class of bug easy to introduce and hard to detect? The answer is frequently broader than the code diff. Weak trust boundaries, inconsistent authorization checks, ambiguous ownership of validation, and hidden data flows are structural causes. Fixing one endpoint without changing those structures guarantees recurrence.&lt;/p&gt;
&lt;p&gt;Take broken access control patterns. A typical report may show one API endpoint missing a tenant check. The immediate patch adds the check. The design feedback, however, is that authorization is optional at call sites. The durable response is to move authorization into mandatory middleware or typed service contracts so bypassing it becomes difficult by construction. Good security design reduces optionality.&lt;/p&gt;
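&lt;p&gt;A minimal sketch of that construction, assuming a hypothetical in-process router; the point is only that the tenant check lives in the registration path, so no individual handler can forget it:&lt;/p&gt;

```python
# Sketch: make tenant checks mandatory by construction. Handlers are only
# reachable through this wrapper, so "forgot the check" is not a valid
# program state. The router and request shape are illustrative assumptions.

HANDLERS = {}

def route(path):
    """Register a handler; authorization is applied here, not at call sites."""
    def wrap(handler):
        def guarded(request):
            if request.get("tenant") != request.get("resource_tenant"):
                return {"status": 403, "body": "forbidden"}
            return handler(request)
        HANDLERS[path] = guarded
        return guarded
    return wrap

@route("/invoices")
def list_invoices(request):
    # The handler can assume the tenant check already happened.
    return {"status": 200, "body": f"invoices for {request['tenant']}"}

ok = HANDLERS["/invoices"]({"tenant": "a", "resource_tenant": "a"})
denied = HANDLERS["/invoices"]({"tenant": "a", "resource_tenant": "b"})
```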
&lt;p&gt;Input-validation findings show similar dynamics. If every handler parses raw request bodies independently, validation drift is inevitable. One team sanitizes aggressively, another copies old logic, a third misses edge cases under deadline pressure. The root issue is distributed policy. Consolidated schemas, shared parsers, and fail-closed defaults turn ad-hoc validation into predictable infrastructure.&lt;/p&gt;
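&lt;p&gt;Sketched with a hypothetical shared schema, fail-closed parsing looks roughly like this:&lt;/p&gt;

```python
# Sketch of a shared, fail-closed parser: every handler uses the same
# schema, so validation cannot drift between teams. The schema shape is
# an illustrative assumption.

SCHEMA = {"email": str, "age": int}

def parse(body):
    """Fail closed: unknown keys and wrong types reject the whole request."""
    if set(body) != set(SCHEMA):
        raise ValueError("unexpected or missing fields")
    for key, typ in SCHEMA.items():
        if not isinstance(body[key], typ):
            raise ValueError(f"bad type for {key}")
    return body

good = parse({"email": "a@example.org", "age": 30})
```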
&lt;p&gt;Injection flaws often reveal boundary confusion rather than purely &amp;ldquo;bad escaping&amp;rdquo;. When query construction crosses multiple abstraction layers with mixed assumptions, responsibility blurs and dangerous concatenation appears. The design-level fix is not a lint rule alone. It is to constrain query creation to safe primitives and enforce typed interfaces that make unsafe composition visibly abnormal.&lt;/p&gt;
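&lt;p&gt;A toy version of such a safe primitive, with hypothetical table names and the question-mark placeholder style used by parameterized drivers such as sqlite3:&lt;/p&gt;

```python
# Sketch: constrain query creation to a safe primitive so that string
# concatenation of user input looks abnormal by comparison. Table names
# are illustrative assumptions.

ALLOWED_TABLES = {"users", "orders"}

def select_by_id(table, row_id):
    """Only allow-listed, parameterized queries can be constructed."""
    if table not in ALLOWED_TABLES:
        raise ValueError("table not allow-listed")
    # The value travels separately from the SQL text, never concatenated in.
    return ("SELECT * FROM " + table + " WHERE id = ?", (row_id,))

query, params = select_by_id("users", 42)
```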
&lt;p&gt;Security findings also expose observability gaps. If exploitation attempts succeed silently or are detected only through external reports, the system lacks meaningful security telemetry. A mature response adds event streams for auth decisions, suspicious parameter patterns, and integrity checks, with dashboards tied to operational ownership. Detection is a design feature, not a post-incident add-on.&lt;/p&gt;
&lt;p&gt;Another pattern is privilege creep in internal services. A report might flag one misuse of a high-privilege token. The deeper signal is that privilege scopes are too broad and rotation or delegation models are weak. Architecture should prefer least-privilege tokens per task, short lifetimes, and explicit trust contracts between services. Otherwise the blast radius of ordinary mistakes remains unacceptable.&lt;/p&gt;
&lt;p&gt;Process design matters as much as runtime design. Findings discovered repeatedly in similar areas indicate review pathways that miss systemic risks. Security review should include “class analysis”: when one issue appears, search for siblings by pattern and subsystem. This turns isolated remediation into proactive hardening. Without class analysis, teams play vulnerability whack-a-mole forever.&lt;/p&gt;
&lt;p&gt;Prioritization also benefits from design thinking. Severity alone does not capture strategic value. A medium issue that reveals a widespread anti-pattern may deserve higher priority than a high-severity edge case with narrow reach. Decision frameworks should account for recurrence potential and architectural leverage, not just immediate exploitability metrics.&lt;/p&gt;
&lt;p&gt;Communication style influences whether findings drive design changes. Reports framed as blame trigger defensive behavior and minimal patches. Reports framed as system learning opportunities invite ownership and broader fixes. Precision still matters, but tone can determine whether teams engage deeply or optimize for closure speed.&lt;/p&gt;
&lt;p&gt;One practical method is a “finding-to-principle” review after each security cycle:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Summarize the concrete issue.&lt;/li&gt;
&lt;li&gt;Identify the enabling design condition.&lt;/li&gt;
&lt;li&gt;Define a preventive principle.&lt;/li&gt;
&lt;li&gt;Encode the principle in tooling, APIs, or architecture.&lt;/li&gt;
&lt;li&gt;Track recurrence as an outcome metric.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This process converts incidents into institutional memory.&lt;/p&gt;
&lt;p&gt;Security maturity is not a state where no bugs exist. It is a state where each bug teaches the system to fail less in the future. That requires treating findings as feedback loops into design, not just repair queues for implementation. The difference between those mindsets determines whether risk decays or accumulates.&lt;/p&gt;
&lt;p&gt;In short: fix the bug, yes. But always ask what the bug is trying to teach your architecture. That question is where long-term resilience starts.&lt;/p&gt;
&lt;p&gt;Teams that institutionalize this mindset stop treating security as a parallel bureaucracy and start treating it as part of system design quality. Over time, this reduces not only exploit risk but also operational surprises, because clearer boundaries and explicit trust rules improve reliability for everyone, not just security reviewers.&lt;/p&gt;
&lt;h2 id=&#34;finding-to-principle-template&#34;&gt;Finding-to-principle template&lt;/h2&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;2
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;3
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;4
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;5
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;6
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;7
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-text&#34; data-lang=&#34;text&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;Finding: &amp;lt;concrete vulnerability&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;Class: &amp;lt;auth / validation / injection / secrets / ...&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;Enabling design condition: &amp;lt;what made this class likely&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;Preventive principle: &amp;lt;design rule to encode&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;Enforcement point: &amp;lt;middleware / schema / API contract / CI check&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;Owner + deadline: &amp;lt;who and by when&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;Recurrence metric: &amp;lt;how we detect class-level improvement&amp;gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;This keeps remediation focused on recurrence reduction, not ticket closure optics.&lt;/p&gt;
&lt;p&gt;Related reading:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/hacking/threat-modeling-in-the-small/&#34;&gt;Threat Modeling in the Small&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/hacking/assumption-led-security-reviews/&#34;&gt;Assumption-Led Security Reviews&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/musings/clarity-is-an-operational-advantage/&#34;&gt;Clarity Is an Operational Advantage&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
    <item>
      <title>Threat Modeling in the Small</title>
      <link>https://turbovision.in6-addr.net/hacking/threat-modeling-in-the-small/</link>
      <pubDate>Sun, 22 Feb 2026 00:00:00 +0000</pubDate>
      <lastBuildDate>Sun, 22 Feb 2026 22:03:08 +0100</lastBuildDate>
      <guid>https://turbovision.in6-addr.net/hacking/threat-modeling-in-the-small/</guid>
      <description>&lt;p&gt;When people hear &amp;ldquo;threat modeling,&amp;rdquo; they often imagine a conference room, a wall of sticky notes, and an enterprise architecture diagram no single human fully understands. That can be useful, but it can also become theater. Most practical security wins come from smaller, tighter loops: one feature, one API path, one cron job, one queue consumer, one admin screen.&lt;/p&gt;
&lt;p&gt;I call this &amp;ldquo;threat modeling in the small.&amp;rdquo; The goal is not to produce a perfect model. The goal is to make one change safer this week without slowing delivery into paralysis.&lt;/p&gt;
&lt;p&gt;Start with a concrete unit. &amp;ldquo;User authentication&amp;rdquo; is too broad. &amp;ldquo;Password reset token creation and validation&amp;rdquo; is the right scale. Draw a tiny flow in plain text. List the trust boundaries. Ask where attacker-controlled data enters. Ask where privileged actions happen. Ask where logging exists and where it does not.&lt;/p&gt;
&lt;p&gt;At this size, engineers actually participate. They can reason from code they touched yesterday. They can connect risks to implementation choices. They can estimate effort honestly. Security stops being abstract policy and becomes software design.&lt;/p&gt;
&lt;p&gt;My default prompt set is short:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What are we protecting in this flow?&lt;/li&gt;
&lt;li&gt;Who can reach this entry point?&lt;/li&gt;
&lt;li&gt;What can an attacker control?&lt;/li&gt;
&lt;li&gt;What state change happens if checks fail?&lt;/li&gt;
&lt;li&gt;What evidence do we keep when things go wrong?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That five-question loop catches more real bugs than many heavyweight frameworks, because it forces precision. &amp;ldquo;We validate input&amp;rdquo; becomes &amp;ldquo;we validate length and charset before parsing and reject invalid UTF-8.&amp;rdquo; &amp;ldquo;We have auth&amp;rdquo; becomes &amp;ldquo;we verify ownership before read and before update, not just at login.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;Another useful trick is pairing each threat with one &amp;ldquo;cheap guardrail&amp;rdquo; and one &amp;ldquo;strong guardrail.&amp;rdquo; Cheap guardrails are things you can ship in a day: stricter defaults, safer parser settings, explicit allowlists, better rate limits, better log fields. Strong guardrails need more work: protocol redesign, key rotation pipeline, privilege split, async isolation, dedicated policy engine.&lt;/p&gt;
&lt;p&gt;This gives teams options. They can reduce risk immediately while planning structural fixes. Without this split, discussions get stuck between &amp;ldquo;too expensive&amp;rdquo; and &amp;ldquo;too risky,&amp;rdquo; and nothing moves.&lt;/p&gt;
&lt;p&gt;For small models, scoring should also stay small. Avoid giant risk matrices with fake precision. I use three levels:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;High:&lt;/strong&gt; likely and damaging, must mitigate before release.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Medium:&lt;/strong&gt; plausible, can ship with guardrail and tracked follow-up.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Low:&lt;/strong&gt; edge case, document and revisit during refactor.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The important part is not the label. The important part is explicit ownership and a due date.&lt;/p&gt;
&lt;p&gt;Documentation format can remain lean. One markdown file per feature works well:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;scope of the modeled flow&lt;/li&gt;
&lt;li&gt;data classification involved&lt;/li&gt;
&lt;li&gt;threats and mitigations&lt;/li&gt;
&lt;li&gt;known gaps and follow-up tasks&lt;/li&gt;
&lt;li&gt;links to code, tests, and dashboards&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If your model cannot be read in five minutes, it will not be read during incident response. During incidents, short documents win.&lt;/p&gt;
&lt;p&gt;Threat modeling in the small also improves code review quality. Reviewers can ask threat-aware questions because they know the expected controls. &amp;ldquo;Where is the ownership check?&amp;rdquo; &amp;ldquo;What happens on parser failure?&amp;rdquo; &amp;ldquo;Do we leak this error to the client?&amp;rdquo; &amp;ldquo;Is this action audit-logged?&amp;rdquo; These become normal review language, not special security meetings.&lt;/p&gt;
&lt;p&gt;Testing benefits too. Each high or medium threat should map to at least one concrete test case:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;malformed token structure&lt;/li&gt;
&lt;li&gt;replayed reset token&lt;/li&gt;
&lt;li&gt;expired token with clock skew&lt;/li&gt;
&lt;li&gt;brute-force attempts from distributed IPs&lt;/li&gt;
&lt;li&gt;log event integrity under failure paths&lt;/li&gt;
&lt;/ul&gt;
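&lt;p&gt;One of the threats above, &amp;ldquo;expired token with clock skew,&amp;rdquo; can be expressed as an executable check; &lt;code&gt;validate_reset_token&lt;/code&gt; and its 30-second skew allowance are illustrative assumptions, not a real API:&lt;/p&gt;

```python
# Hypothetical sketch: a reset-token validity check that tolerates a
# bounded amount of clock drift and nothing beyond that. The skew
# window of 30 seconds is an illustrative policy choice.
SKEW_SECONDS = 30

def validate_reset_token(token_issued_at, ttl, now):
    age = now - token_issued_at
    if 0 > age:
        # Issued in the future relative to our clock: reject.
        return False
    # Accept up to SKEW_SECONDS past nominal expiry, no further.
    return not age > ttl + SKEW_SECONDS
```

&lt;p&gt;A test pinned to the exact boundary (expiry plus skew, and one second past it) is what turns the threat note into a regression guard.&lt;/p&gt;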
&lt;p&gt;This turns threat modeling from a document into executable confidence.&lt;/p&gt;
&lt;p&gt;One anti-pattern to avoid: modeling only confidentiality risks. Many teams forget integrity and availability. Attackers do not always want to steal data. Sometimes they want to mutate state silently, poison metrics, or degrade service enough to trigger unsafe operator behavior. Small models should include those outcomes explicitly.&lt;/p&gt;
&lt;p&gt;Another anti-pattern: assuming internal systems are trusted by default. Internal callers can be compromised, misconfigured, or simply outdated. Every boundary deserves explicit checks, not cultural trust.&lt;/p&gt;
&lt;p&gt;You also need to revisit models after feature drift. A safe flow can become unsafe after &amp;ldquo;tiny&amp;rdquo; product changes: one new query parameter, one optional bypass for support, one reused endpoint for batch jobs. Keep threat notes near code ownership, not in a forgotten wiki folder.&lt;/p&gt;
&lt;p&gt;In mature teams, this process becomes routine:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;model in planning&lt;/li&gt;
&lt;li&gt;verify in review&lt;/li&gt;
&lt;li&gt;test in CI&lt;/li&gt;
&lt;li&gt;monitor in production&lt;/li&gt;
&lt;li&gt;update after incidents&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That loop is what you want. Not a quarterly ritual.&lt;/p&gt;
&lt;p&gt;The most practical security posture is not maximal paranoia. It is repeatable discipline. Threat modeling in the small provides exactly that: bounded scope, fast iteration, and security decisions that survive contact with real shipping pressure.&lt;/p&gt;
&lt;p&gt;If you adopt only one rule, adopt this: no feature touching auth, money, permissions, or external input ships without a one-page small threat model and at least one threat-driven test. The cost is low. The regret avoided is high.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Format String Attacks Demystified</title>
      <link>https://turbovision.in6-addr.net/hacking/exploits/format-string-attacks/</link>
      <pubDate>Sun, 14 Dec 2025 00:00:00 +0000</pubDate>
      <lastBuildDate>Sun, 22 Feb 2026 15:49:27 +0100</lastBuildDate>
      <guid>https://turbovision.in6-addr.net/hacking/exploits/format-string-attacks/</guid>
<description>&lt;p&gt;Format string vulnerabilities happen when user-controlled input is passed as the
format argument to &lt;code&gt;printf()&lt;/code&gt; or one of its relatives. Instead of merely
printing text, the attacker can read or even write arbitrary memory.&lt;/p&gt;
&lt;p&gt;We demonstrate reading the stack with &lt;code&gt;%08x&lt;/code&gt; specifiers, then escalate to an
arbitrary write using &lt;code&gt;%n&lt;/code&gt;. The write-what-where primitive turns a seemingly
harmless logging call into full code execution.&lt;/p&gt;
&lt;p&gt;The fix is trivial: always pass a format string literal. &lt;code&gt;printf(&amp;quot;%s&amp;quot;, buf)&lt;/code&gt;
instead of &lt;code&gt;printf(buf)&lt;/code&gt;. Yet this class of bug resurfaces in embedded firmware
to this day.&lt;/p&gt;
&lt;p&gt;Why does this still happen? Because logging code is often treated as harmless,
copied fast, and reviewed late. In small C projects, developers optimize for
speed of implementation and forget that formatting functions are tiny parsers
with side effects.&lt;/p&gt;
&lt;h2 id=&#34;exploitation-ladder&#34;&gt;Exploitation ladder&lt;/h2&gt;
&lt;p&gt;Typical progression in a lab binary:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Leak stack values with &lt;code&gt;%x&lt;/code&gt; and locate attacker-controlled bytes.&lt;/li&gt;
&lt;li&gt;Calibrate offsets until output is deterministic.&lt;/li&gt;
&lt;li&gt;Use width specifiers to control write count.&lt;/li&gt;
&lt;li&gt;Trigger &lt;code&gt;%n&lt;/code&gt; (or &lt;code&gt;%hn&lt;/code&gt;) to write controlled values to target addresses.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;At that point, you can often redirect control flow by corrupting function
pointers, GOT entries (in binaries without full RELRO), or security-relevant flags.&lt;/p&gt;
&lt;h2 id=&#34;defensive-pattern&#34;&gt;Defensive pattern&lt;/h2&gt;
&lt;p&gt;Treat every formatting call as a sink:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;enforce literal format strings in coding guidelines&lt;/li&gt;
&lt;li&gt;compile with warnings that detect non-literal format usage&lt;/li&gt;
&lt;li&gt;isolate logging wrappers so raw &lt;code&gt;printf&lt;/code&gt; calls are rare&lt;/li&gt;
&lt;li&gt;review embedded diagnostics paths as carefully as network parsers&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Related reading:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/hacking/exploits/buffer-overflow-101/&#34;&gt;Buffer Overflow 101&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/hacking/tools/ghidra-first-steps/&#34;&gt;Ghidra: First Steps in Reverse Engineering&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
    <item>
      <title>Buffer Overflow 101</title>
      <link>https://turbovision.in6-addr.net/hacking/exploits/buffer-overflow-101/</link>
      <pubDate>Mon, 03 Nov 2025 00:00:00 +0000</pubDate>
      <lastBuildDate>Sun, 22 Feb 2026 15:49:37 +0100</lastBuildDate>
      <guid>https://turbovision.in6-addr.net/hacking/exploits/buffer-overflow-101/</guid>
      <description>&lt;p&gt;A stack-based buffer overflow is the oldest trick in the book and still one of the
most instructive. We start with a vulnerable C program, compile it without canaries,
and walk through EIP control step by step.&lt;/p&gt;
&lt;p&gt;The target binary accepts user input via &lt;code&gt;gets()&lt;/code&gt;, a function so dangerous
that it was removed from the C11 standard and modern toolchains warn on every use
of it. We feed it a carefully crafted payload: 64 bytes of padding, followed by
the address of our shellcode sitting on the stack.&lt;/p&gt;
&lt;p&gt;Key takeaways: always compile test binaries with &lt;code&gt;-fno-stack-protector -z execstack&lt;/code&gt;
when learning, and never on a production box.&lt;/p&gt;
&lt;p&gt;What makes this topic timeless is not the exact exploit recipe, but the mental
model it gives you: memory layout, calling convention, control-flow integrity,
and why unsafe copy primitives are dangerous by construction.&lt;/p&gt;
&lt;h2 id=&#34;reliable-lab-workflow&#34;&gt;Reliable lab workflow&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Confirm binary protections (&lt;code&gt;checksec&lt;/code&gt; style checks).&lt;/li&gt;
&lt;li&gt;Crash with pattern input to find exact overwrite offset.&lt;/li&gt;
&lt;li&gt;Validate instruction pointer control with marker values.&lt;/li&gt;
&lt;li&gt;Build payload in small increments and verify each stage.&lt;/li&gt;
&lt;li&gt;Only then attempt shellcode or return-oriented payloads.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Expected outcome before each run should be explicit. If behavior differs, do
not &amp;ldquo;try random bytes&amp;rdquo;; explain the difference first. That habit turns exploit
practice into engineering instead of cargo cult.&lt;/p&gt;
&lt;h2 id=&#34;defensive-mirror&#34;&gt;Defensive mirror&lt;/h2&gt;
&lt;p&gt;Learning offensive mechanics should immediately map to mitigation:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;remove dangerous APIs (&lt;code&gt;gets&lt;/code&gt;, unchecked &lt;code&gt;strcpy&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;enable stack canaries, NX, PIE, and RELRO&lt;/li&gt;
&lt;li&gt;reduce attack surface in parser and input-heavy code paths&lt;/li&gt;
&lt;li&gt;test with sanitizers during development&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Related reading:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/hacking/exploits/format-string-attacks/&#34;&gt;Format String Attacks Demystified&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/hacking/tools/ghidra-first-steps/&#34;&gt;Ghidra: First Steps in Reverse Engineering&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
  </channel>
</rss>
