<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Hacking on TurboVision</title>
    <link>https://turbovision.in6-addr.net/hacking/</link>
    <description>Recent content in Hacking on TurboVision</description>
    <generator>Hugo</generator>
    <language>en</language>
    <lastBuildDate>Tue, 21 Apr 2026 14:06:12 +0000</lastBuildDate>
    <atom:link href="https://turbovision.in6-addr.net/hacking/index.xml" rel="self" type="application/rss&#43;xml" />
    
    
    
    <item>
      <title>Assumption-Led Security Reviews</title>
      <link>https://turbovision.in6-addr.net/hacking/assumption-led-security-reviews/</link>
      <pubDate>Sun, 22 Feb 2026 00:00:00 +0000</pubDate>
      <lastBuildDate>Sun, 22 Feb 2026 22:16:19 +0100</lastBuildDate>
      <guid>https://turbovision.in6-addr.net/hacking/assumption-led-security-reviews/</guid>
      <description>&lt;p&gt;Many security reviews fail before they begin because they are framed as checklist compliance rather than assumption testing. Checklists are useful for coverage. Assumptions are where real risk hides.&lt;/p&gt;
&lt;p&gt;Every system has assumptions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;ldquo;this endpoint is internal only&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;this token cannot be replayed&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;this queue input is trusted&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;this service account has least privilege&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When assumptions are wrong, controls built on top of them become decorative.&lt;/p&gt;
&lt;p&gt;An assumption-led review starts by collecting claims from architecture, docs, and team memory, then converting each claim into a testable statement. Not &amp;ldquo;is auth secure?&amp;rdquo; but &amp;ldquo;can an untrusted caller obtain action X through path Y under condition Z?&amp;rdquo;&lt;/p&gt;
&lt;p&gt;This shift changes review quality immediately.&lt;/p&gt;
&lt;p&gt;A practical review flow:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;inventory critical assumptions&lt;/li&gt;
&lt;li&gt;rank by blast radius if false&lt;/li&gt;
&lt;li&gt;define validation method per assumption&lt;/li&gt;
&lt;li&gt;execute tests with evidence capture&lt;/li&gt;
&lt;li&gt;classify outcomes: confirmed, disproven, uncertain&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Uncertain is a valid outcome and should trigger follow-up work, not silent closure.&lt;/p&gt;
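&lt;p&gt;The five-step flow above can be sketched as a small inventory structure that ranks assumptions by blast radius before testing. The field names and example claims below are illustrative, not a prescribed schema.&lt;/p&gt;

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    CONFIRMED = "confirmed"
    DISPROVEN = "disproven"
    UNCERTAIN = "uncertain"

@dataclass
class Assumption:
    claim: str          # testable statement, not a vague label
    blast_radius: int   # 1 (local) .. 5 (system-wide) if the claim is false
    validation: str     # how the claim will be tested
    status: Status = Status.UNCERTAIN
    evidence: list = field(default_factory=list)

def review_order(assumptions):
    """Rank assumptions by blast radius so the riskiest are tested first."""
    return sorted(assumptions, key=lambda a: a.blast_radius, reverse=True)

# hypothetical inventory for one service
inventory = [
    Assumption("endpoint /admin is reachable only from the VPN", 5,
               "attempt a request from an external network"),
    Assumption("reset tokens cannot be replayed", 4,
               "submit the same token twice"),
    Assumption("queue consumers reject unsigned messages", 3,
               "inject an unsigned message in staging"),
]

for a in review_order(inventory):
    print(a.blast_radius, a.claim, a.status.value)
```

Everything starts as uncertain by default, which keeps the third outcome class visible instead of implied.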
&lt;p&gt;Assumption inventories should include both technical and operational layers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;network trust boundaries&lt;/li&gt;
&lt;li&gt;identity and role mapping&lt;/li&gt;
&lt;li&gt;secret rotation and revocation behavior&lt;/li&gt;
&lt;li&gt;logging completeness and tamper resistance&lt;/li&gt;
&lt;li&gt;recovery behavior during dependency failure&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Security posture is often lost in the seams between layers.&lt;/p&gt;
&lt;p&gt;A common anti-pattern is reviewing only happy-path authorization. Mature reviews probe degraded and unexpected states:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;stale cache after role change&lt;/li&gt;
&lt;li&gt;timeout fallback behavior&lt;/li&gt;
&lt;li&gt;retry loops after partial failure&lt;/li&gt;
&lt;li&gt;out-of-order event processing&lt;/li&gt;
&lt;li&gt;duplicated message handling&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Attackers do not wait for your ideal system state.&lt;/p&gt;
&lt;p&gt;Evidence discipline matters. For each finding, capture:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;exact request or action performed&lt;/li&gt;
&lt;li&gt;environment and identity context&lt;/li&gt;
&lt;li&gt;observed response/state transition&lt;/li&gt;
&lt;li&gt;why the result confirms or disproves the assumption&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Without evidence, findings become debate material instead of engineering input.&lt;/p&gt;
&lt;p&gt;One reason assumption-led reviews outperform static checklists is adaptability. Checklists can lag architecture changes. Assumptions are always current because they come from how teams believe the system behaves today.&lt;/p&gt;
&lt;p&gt;This also improves cross-team communication. When a review says, &amp;ldquo;Assumption A was false under condition B,&amp;rdquo; owners can act. When a review says, &amp;ldquo;security maturity low,&amp;rdquo; people argue semantics.&lt;/p&gt;
&lt;p&gt;Security reviews should also evaluate observability assumptions. Teams often believe incidents will be detectable because logs exist somewhere. Test that belief:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;does action X produce audit event Y?&lt;/li&gt;
&lt;li&gt;is actor identity preserved end-to-end?&lt;/li&gt;
&lt;li&gt;can events be correlated across services in minutes, not days?&lt;/li&gt;
&lt;li&gt;can alerting distinguish abuse from normal traffic?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Detection assumptions are security controls.&lt;/p&gt;
&lt;p&gt;Permission models deserve explicit assumption tests too. &amp;ldquo;Least privilege&amp;rdquo; is often declared, rarely verified. Run effective-permission snapshots for key service accounts and compare against actual required operations. Overprivilege is usually broader than expected.&lt;/p&gt;
&lt;p&gt;Another high-value area is transitive trust inherited from third-party integrations. Assumptions like &amp;ldquo;provider validates input&amp;rdquo; or &amp;ldquo;SDK enforces signature checks&amp;rdquo; should be verified by controlled failure injection or negative tests.&lt;/p&gt;
&lt;p&gt;Assumption reviews are especially useful before major migrations:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;identity provider switch&lt;/li&gt;
&lt;li&gt;event bus replacement&lt;/li&gt;
&lt;li&gt;monolith decomposition&lt;/li&gt;
&lt;li&gt;region expansion&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Migrations amplify latent assumptions. Pre-migration validation avoids expensive post-cutover surprises.&lt;/p&gt;
&lt;p&gt;Reporting format should be brief and decision-oriented:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;assumption statement&lt;/li&gt;
&lt;li&gt;status (confirmed/disproven/uncertain)&lt;/li&gt;
&lt;li&gt;impact if false&lt;/li&gt;
&lt;li&gt;evidence pointer&lt;/li&gt;
&lt;li&gt;remediation owner and due date&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This format integrates smoothly into engineering planning.&lt;/p&gt;
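&lt;p&gt;As a sketch, the five report fields map directly onto one record per finding; the values below are hypothetical.&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class FindingRow:
    assumption: str
    status: str           # confirmed / disproven / uncertain
    impact_if_false: str
    evidence: str         # pointer, e.g. ticket ID or log query
    owner: str
    due: str              # ISO date

    def to_line(self) -> str:
        # one pipe-delimited line per finding keeps reports scannable
        return " | ".join([self.assumption, self.status,
                           self.impact_if_false, self.evidence,
                           self.owner, self.due])

row = FindingRow(
    assumption="service account db-writer has least privilege",
    status="disproven",
    impact_if_false="writer can drop tables in all schemas",
    evidence="audit-snapshot-2026-02-22",
    owner="platform-team",
    due="2026-03-15",
)
print(row.to_line())
```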
&lt;p&gt;A strong remediation strategy focuses on making assumptions explicit in-system:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;encode invariants in tests&lt;/li&gt;
&lt;li&gt;enforce policy in middleware&lt;/li&gt;
&lt;li&gt;add runtime guards for impossible states&lt;/li&gt;
&lt;li&gt;instrument detection for assumption violations&lt;/li&gt;
&lt;li&gt;document contract boundaries near code&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The goal is not one good review. The goal is continuous assumption integrity.&lt;/p&gt;
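&lt;p&gt;One remediation from the list above, runtime guards for impossible states, can be as small as a helper that fails closed and emits telemetry when a documented assumption breaks. The guard function and refund example are illustrative.&lt;/p&gt;

```python
import logging

log = logging.getLogger("assumption-guard")

def guard(condition: bool, assumption: str) -> None:
    """Fail closed when a documented assumption is violated at runtime.

    The violation is logged as security telemetry before raising, so a
    broken assumption becomes a detectable event rather than silent state.
    """
    if not condition:
        log.error("assumption violated: %s", assumption)
        raise RuntimeError(f"assumption violated: {assumption}")

def apply_refund(actor_role: str, amount: int) -> str:
    # encode the review's invariants next to the privileged action
    guard(actor_role in {"support", "admin"}, "only support/admin issue refunds")
    guard(amount > 0, "refund amounts are positive")
    return f"refunded {amount}"
```

The point of the log line is the instrumentation bullet above: every guard doubles as a detection signal for assumption violations.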
&lt;p&gt;There is a cultural angle here too. Teams should feel safe admitting uncertainty. If uncertainty is penalized, assumptions go unchallenged and risks accumulate quietly. Assumption-led reviews work best in environments where &amp;ldquo;we do not know yet&amp;rdquo; is treated as an actionable state.&lt;/p&gt;
&lt;p&gt;This approach also improves incident response. During active incidents, responders can quickly reference known assumption status:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;confirmed trust boundaries&lt;/li&gt;
&lt;li&gt;known weak points&lt;/li&gt;
&lt;li&gt;uncertain controls needing immediate verification&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Prepared uncertainty maps reduce chaos under pressure.&lt;/p&gt;
&lt;p&gt;If your team wants to adopt this with low overhead, start with one workflow:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;pick one high-impact service&lt;/li&gt;
&lt;li&gt;list ten assumptions&lt;/li&gt;
&lt;li&gt;validate top five by blast radius&lt;/li&gt;
&lt;li&gt;file concrete follow-ups for anything disproven or uncertain&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;One cycle usually exposes enough hidden risk to justify making the method standard.&lt;/p&gt;
&lt;p&gt;Security is not only control inventory. It is confidence that critical assumptions hold under real conditions. Assumption-led reviews build that confidence with evidence instead of optimism.&lt;/p&gt;
&lt;p&gt;When systems are complex, this is the difference between feeling secure and being secure.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Incident Response with a Notebook</title>
      <link>https://turbovision.in6-addr.net/hacking/incident-response-with-a-notebook/</link>
      <pubDate>Sun, 22 Feb 2026 00:00:00 +0000</pubDate>
      <lastBuildDate>Sun, 22 Feb 2026 22:47:53 +0100</lastBuildDate>
      <guid>https://turbovision.in6-addr.net/hacking/incident-response-with-a-notebook/</guid>
      <description>&lt;p&gt;Modern incident response tooling is powerful, but under pressure, people still fail in very analog ways: they lose sequence, they forget assumptions, they repeat commands without recording output, and they argue from memory instead of evidence. A simple notebook, used with discipline, prevents all four.&lt;/p&gt;
&lt;p&gt;This is not anti-automation advice. It is operator reliability advice. When systems are failing fast and dashboards are lagging, your most valuable artifact is a timeline you can trust.&lt;/p&gt;
&lt;p&gt;I keep a strict notebook format for incidents:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;timestamp&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;observation&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;action&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;expected result&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;actual result&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;next decision&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;That structure sounds verbose until minute twenty, when context fragmentation starts. By minute forty, it is the difference between controlled recovery and expensive chaos.&lt;/p&gt;
&lt;p&gt;The &amp;ldquo;expected result&amp;rdquo; field is especially important. Teams often run commands reactively, then treat any output as signal. That is backwards. State your hypothesis first, then test it. If expected and actual differ, you learn something real. If you skip expectation, every log line becomes confirmation bias.&lt;/p&gt;
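&lt;p&gt;A minimal sketch of the hypothesis-first habit: record the expectation before the command runs, then treat a non-empty delta as the real signal. The field layout and example command are illustrative.&lt;/p&gt;

```python
from datetime import datetime, timezone

def log_step(observation: str, action: str, expected: str, actual: str):
    """Record one notebook line; a non-empty delta is the learning moment."""
    delta = "" if expected == actual else f"expected {expected!r}, got {actual!r}"
    line = (f"{datetime.now(timezone.utc).isoformat()} | obs={observation} | "
            f"act={action} | exp={expected} | got={actual} | delta={delta}")
    return line, delta

line, delta = log_step(
    "5xx spike on /checkout",
    "curl -s -o /dev/null -w '%{http_code}' https://api/health",
    "200", "503",
)
print(line)
```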
&lt;p&gt;A good incident notebook also tracks uncertainty explicitly:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;confirmed facts&lt;/li&gt;
&lt;li&gt;plausible hypotheses&lt;/li&gt;
&lt;li&gt;disproven hypotheses&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Never mix them. During severe incidents, people quote guesses as truth within minutes. Writing confidence levels next to every statement reduces social drift.&lt;/p&gt;
&lt;p&gt;Command logging should be literal. Record the exact command, not a paraphrase. Include target host, namespace, and environment each time. &amp;ldquo;Ran restart&amp;rdquo; is meaningless later. &amp;ldquo;kubectl rollout restart deploy/api -n prod-eu&amp;rdquo; is reconstructable and auditable.&lt;/p&gt;
&lt;p&gt;I also enforce one line called &amp;ldquo;blast radius guard.&amp;rdquo; Before potentially disruptive actions, write:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;what could get worse&lt;/li&gt;
&lt;li&gt;what fallback exists&lt;/li&gt;
&lt;li&gt;who approved this level of risk&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This slows reckless action by about thirty seconds and prevents many secondary outages.&lt;/p&gt;
&lt;p&gt;Communication cadence belongs in the notebook too. Mark when stakeholder updates were sent and what confidence level you reported. This helps postmortems distinguish technical delay from communication delay. Both matter.&lt;/p&gt;
&lt;p&gt;A practical rhythm looks like this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;every 5 minutes: update timeline&lt;/li&gt;
&lt;li&gt;every 10 minutes: summarize current hypothesis set&lt;/li&gt;
&lt;li&gt;every 15 minutes: send stakeholder status&lt;/li&gt;
&lt;li&gt;after major action: log expected vs actual&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The point is not bureaucracy. The point is preserving operator cognition.&lt;/p&gt;
&lt;p&gt;Another high-value section is &amp;ldquo;state snapshots.&amp;rdquo; At key points, record:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;error rates&lt;/li&gt;
&lt;li&gt;latency percentiles&lt;/li&gt;
&lt;li&gt;queue depth&lt;/li&gt;
&lt;li&gt;CPU/memory pressure&lt;/li&gt;
&lt;li&gt;dependency status&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Snapshots create checkpoints. During noisy recovery, teams often feel like nothing is improving because local failures are still visible. Snapshot comparisons show trend and prevent premature rollback or overcorrection.&lt;/p&gt;
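&lt;p&gt;Snapshot comparison is mechanical enough to script. The metric names and values here are hypothetical; the idea is simply to diff two checkpoints instead of arguing from feel.&lt;/p&gt;

```python
def snapshot_delta(before: dict, after: dict) -> dict:
    """Diff two state snapshots; negative deltas mean recovery for
    error-rate and latency style metrics."""
    return {k: after[k] - before[k] for k in before if k in after}

t0 = {"error_rate": 0.21, "p99_ms": 4200, "queue_depth": 18000}
t1 = {"error_rate": 0.08, "p99_ms": 2100, "queue_depth": 9500}

delta = snapshot_delta(t0, t1)
worsened = [k for k, v in delta.items() if v >= 0]  # metrics not yet recovering
print(delta, "improving" if not worsened else f"still degraded: {worsened}")
```

Even when local errors are still visible, an all-negative delta is objective evidence that recovery is trending, which is exactly what prevents premature rollback.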
&lt;p&gt;I recommend assigning one person as &amp;ldquo;scribe operator&amp;rdquo; in larger incidents. They may still execute commands, but their first duty is timeline integrity. This role is not junior work. It is command-and-control work. Senior responders rotate into it regularly.&lt;/p&gt;
&lt;p&gt;During containment, notebooks help avoid tunnel vision. People get fixated on one broken service while hidden impact grows elsewhere. A running list of &amp;ldquo;unverified assumptions&amp;rdquo; keeps exploration wide enough:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;auth provider healthy?&lt;/li&gt;
&lt;li&gt;background jobs draining?&lt;/li&gt;
&lt;li&gt;delayed billing side effects?&lt;/li&gt;
&lt;li&gt;stale cache invalidation?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Write them down, then close them one by one.&lt;/p&gt;
&lt;p&gt;After resolution, the notebook becomes your best postmortem source. Chat logs are noisy and fragmented. Monitoring screenshots lack intent. Memory is unreliable. A clean timeline with hypotheses, actions, and outcomes produces faster, less political postmortems.&lt;/p&gt;
&lt;p&gt;You can also mine notebooks for prevention engineering:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;repeated manual checks become automated health probes&lt;/li&gt;
&lt;li&gt;repeated command bundles become runbooks&lt;/li&gt;
&lt;li&gt;repeated missing metrics become instrumentation tasks&lt;/li&gt;
&lt;li&gt;repeated privilege delays become access-policy fixes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That is how incidents become capability, not just pain.&lt;/p&gt;
&lt;p&gt;One warning: do not let the notebook become performative. If entries are long, delayed, or decorative, it fails. Keep lines short and decision-oriented. You are writing for future operators at 3 AM, not for a management slide deck.&lt;/p&gt;
&lt;p&gt;The best incident response stack is layered:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;good observability&lt;/li&gt;
&lt;li&gt;good automation&lt;/li&gt;
&lt;li&gt;good runbooks&lt;/li&gt;
&lt;li&gt;good human discipline&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The notebook is the discipline layer. It is cheap, fast, and robust when everything else is noisy.&lt;/p&gt;
&lt;p&gt;If your team wants one immediate upgrade, adopt this policy: no critical incident without a timestamped action log with explicit expected outcomes. It will feel unnecessary on easy days. It will save you on hard days.&lt;/p&gt;
&lt;p&gt;One final practical addition is a &amp;ldquo;handover block&amp;rdquo; at the end of every major incident window. If responders rotate, the notebook should include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;current leading hypothesis&lt;/li&gt;
&lt;li&gt;unresolved high-risk unknowns&lt;/li&gt;
&lt;li&gt;last safe action point&lt;/li&gt;
&lt;li&gt;next three recommended actions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This prevents shift changes from resetting context and repeating risky experiments.&lt;/p&gt;
&lt;h2 id=&#34;minimal-line-format&#34;&gt;Minimal line format&lt;/h2&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-text&#34; data-lang=&#34;text&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;2026-02-22T14:15:03Z | host=api-prod-2 | cmd=&amp;#34;...&amp;#34; | expect=&amp;#34;...&amp;#34; | observed=&amp;#34;...&amp;#34; | delta=&amp;#34;...&amp;#34;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;If a note cannot be expressed in this format, it is often too vague to support reliable handoff.&lt;/p&gt;
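&lt;p&gt;A side benefit of such a strict line format is that it is machine-parseable: a few lines of code can turn a notebook into structured postmortem data. The parser below is a sketch for the pipe-delimited layout above; the example command is hypothetical.&lt;/p&gt;

```python
def parse_note(line: str) -> dict:
    """Parse one pipe-delimited notebook line into fields.

    Expects 'timestamp | key="value" | ...'; the first field is the
    timestamp, the rest are key=value pairs (values may be quoted).
    """
    parts = [p.strip() for p in line.split("|")]
    fields = {"timestamp": parts[0]}
    for part in parts[1:]:
        key, _, value = part.partition("=")
        fields[key.strip()] = value.strip().strip('"')
    return fields

note = parse_note('2026-02-22T14:15:03Z | host=api-prod-2 | '
                  'cmd="kubectl rollout status deploy/api -n prod-eu" | '
                  'expect="rolled out" | observed="timed out" | delta="stuck pods"')
print(note["host"], note["delta"])
```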
&lt;p&gt;Related reading:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/hacking/tools/terminal-kits-for-incident-triage/&#34;&gt;Terminal Kits for Incident Triage&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/hacking/tools/trace-first-debugging-with-terminal-notes/&#34;&gt;Trace-First Debugging with Terminal Notes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/musings/clarity-is-an-operational-advantage/&#34;&gt;Clarity Is an Operational Advantage&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
    <item>
      <title>Security Findings as Design Feedback</title>
      <link>https://turbovision.in6-addr.net/hacking/security-findings-as-design-feedback/</link>
      <pubDate>Sun, 22 Feb 2026 00:00:00 +0000</pubDate>
      <lastBuildDate>Sun, 22 Feb 2026 22:43:22 +0100</lastBuildDate>
      <guid>https://turbovision.in6-addr.net/hacking/security-findings-as-design-feedback/</guid>
<description>&lt;p&gt;Security reports are often treated as defect inventories: patch the issue, close the ticket, move on. That workflow is necessary, but it is incomplete. Many findings are not isolated mistakes; they are design feedback about how a system creates, hides, or amplifies risk. Teams that only chase individual fixes improve slowly. Teams that read findings as architecture signals compound their improvements.&lt;/p&gt;
&lt;p&gt;A useful reframing is to ask, for each vulnerability: what design decision made this class of bug easy to introduce and hard to detect? The answer is frequently broader than the code diff. Weak trust boundaries, inconsistent authorization checks, ambiguous ownership of validation, and hidden data flows are structural causes. Fixing one endpoint without changing those structures guarantees recurrence.&lt;/p&gt;
&lt;p&gt;Take broken access control patterns. A typical report may show one API endpoint missing a tenant check. The immediate patch adds the check. The design feedback, however, is that authorization is optional at call sites. The durable response is to move authorization into mandatory middleware or typed service contracts so bypassing it becomes difficult by construction. Good security design reduces optionality.&lt;/p&gt;
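&lt;p&gt;The &amp;ldquo;mandatory by construction&amp;rdquo; idea can be sketched as a decorator that wraps every handler; the request shape and tenant fields here are hypothetical, but the structural effect is the point: a handler without the wrapper stands out in review.&lt;/p&gt;

```python
from functools import wraps

class Forbidden(Exception):
    pass

def require_tenant(handler):
    """Make the tenant check structural so call sites cannot skip it."""
    @wraps(handler)
    def wrapped(request: dict):
        if request.get("tenant_id") != request.get("resource_tenant_id"):
            raise Forbidden("cross-tenant access denied")
        return handler(request)
    return wrapped

@require_tenant
def get_invoice(request: dict) -> dict:
    # by the time this body runs, tenant ownership is already verified
    return {"invoice": request["invoice_id"]}

print(get_invoice({"tenant_id": "t1", "resource_tenant_id": "t1",
                   "invoice_id": 42}))
```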
&lt;p&gt;Input-validation findings show similar dynamics. If every handler parses raw request bodies independently, validation drift is inevitable. One team sanitizes aggressively, another copies old logic, a third misses edge cases under deadline pressure. The root issue is distributed policy. Consolidated schemas, shared parsers, and fail-closed defaults turn ad-hoc validation into predictable infrastructure.&lt;/p&gt;
&lt;p&gt;Injection flaws often reveal boundary confusion rather than purely &amp;ldquo;bad escaping.&amp;rdquo; When query construction crosses multiple abstraction layers with mixed assumptions, responsibility blurs and dangerous concatenation appears. The design-level fix is not a lint rule alone. It is to constrain query creation to safe primitives and enforce typed interfaces that make unsafe composition visibly abnormal.&lt;/p&gt;
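&lt;p&gt;Constraining query creation to safe primitives usually means parameterized statements. A minimal sketch with Python&amp;rsquo;s built-in sqlite3 module:&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user(name: str):
    # the placeholder keeps user input out of the SQL text entirely;
    # no string concatenation occurs, so injection has no entry point
    return conn.execute("SELECT id, name FROM users WHERE name = ?",
                        (name,)).fetchall()

print(find_user("alice"))             # normal lookup
print(find_user("alice' OR '1'='1"))  # injection attempt matches nothing
```

The design move is to make `find_user`-style primitives the only sanctioned path to the database, so raw query assembly looks abnormal in review.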
&lt;p&gt;Security findings also expose observability gaps. If exploitation attempts succeed silently or are detected only through external reports, the system lacks meaningful security telemetry. A mature response adds event streams for auth decisions, suspicious parameter patterns, and integrity checks, with dashboards tied to operational ownership. Detection is a design feature, not a post-incident add-on.&lt;/p&gt;
&lt;p&gt;Another pattern is privilege creep in internal services. A report might flag one misuse of a high-privilege token. The deeper signal is that privilege scopes are too broad and rotation or delegation models are weak. Architecture should prefer least-privilege tokens per task, short lifetimes, and explicit trust contracts between services. Otherwise the blast radius of ordinary mistakes remains unacceptable.&lt;/p&gt;
&lt;p&gt;Process design matters as much as runtime design. Findings discovered repeatedly in similar areas indicate review pathways that miss systemic risks. Security review should include &amp;ldquo;class analysis&amp;rdquo;: when one issue appears, search for siblings by pattern and subsystem. This turns isolated remediation into proactive hardening. Without class analysis, teams play vulnerability whack-a-mole forever.&lt;/p&gt;
&lt;p&gt;Prioritization also benefits from design thinking. Severity alone does not capture strategic value. A medium issue that reveals a widespread anti-pattern may deserve higher priority than a high-severity edge case with narrow reach. Decision frameworks should account for recurrence potential and architectural leverage, not just immediate exploitability metrics.&lt;/p&gt;
&lt;p&gt;Communication style influences whether findings drive design changes. Reports framed as blame trigger defensive behavior and minimal patches. Reports framed as system learning opportunities invite ownership and broader fixes. Precision still matters, but tone can determine whether teams engage deeply or optimize for closure speed.&lt;/p&gt;
&lt;p&gt;One practical method is a &amp;ldquo;finding-to-principle&amp;rdquo; review after each security cycle:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Summarize the concrete issue.&lt;/li&gt;
&lt;li&gt;Identify the enabling design condition.&lt;/li&gt;
&lt;li&gt;Define a preventive principle.&lt;/li&gt;
&lt;li&gt;Encode the principle in tooling, APIs, or architecture.&lt;/li&gt;
&lt;li&gt;Track recurrence as an outcome metric.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This process converts incidents into institutional memory.&lt;/p&gt;
&lt;p&gt;Security maturity is not a state where no bugs exist. It is a state where each bug teaches the system to fail less in the future. That requires treating findings as feedback loops into design, not just repair queues for implementation. The difference between those mindsets determines whether risk decays or accumulates.&lt;/p&gt;
&lt;p&gt;In short: fix the bug, yes. But always ask what the bug is trying to teach your architecture. That question is where long-term resilience starts.&lt;/p&gt;
&lt;p&gt;Teams that institutionalize this mindset stop treating security as a parallel bureaucracy and start treating it as part of system design quality. Over time, this reduces not only exploit risk but also operational surprises, because clearer boundaries and explicit trust rules improve reliability for everyone, not just security reviewers.&lt;/p&gt;
&lt;h2 id=&#34;finding-to-principle-template&#34;&gt;Finding-to-principle template&lt;/h2&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;2
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;3
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;4
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;5
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;6
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;7
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-text&#34; data-lang=&#34;text&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;Finding: &amp;lt;concrete vulnerability&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;Class: &amp;lt;auth / validation / injection / secrets / ...&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;Enabling design condition: &amp;lt;what made this class likely&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;Preventive principle: &amp;lt;design rule to encode&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;Enforcement point: &amp;lt;middleware / schema / API contract / CI check&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;Owner + deadline: &amp;lt;who and by when&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;Recurrence metric: &amp;lt;how we detect class-level improvement&amp;gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;This keeps remediation focused on recurrence reduction, not ticket closure optics.&lt;/p&gt;
&lt;p&gt;Related reading:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/hacking/threat-modeling-in-the-small/&#34;&gt;Threat Modeling in the Small&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/hacking/assumption-led-security-reviews/&#34;&gt;Assumption-Led Security Reviews&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/musings/clarity-is-an-operational-advantage/&#34;&gt;Clarity Is an Operational Advantage&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
    <item>
      <title>Threat Modeling in the Small</title>
      <link>https://turbovision.in6-addr.net/hacking/threat-modeling-in-the-small/</link>
      <pubDate>Sun, 22 Feb 2026 00:00:00 +0000</pubDate>
      <lastBuildDate>Sun, 22 Feb 2026 22:03:08 +0100</lastBuildDate>
      <guid>https://turbovision.in6-addr.net/hacking/threat-modeling-in-the-small/</guid>
      <description>&lt;p&gt;When people hear &amp;ldquo;threat modeling,&amp;rdquo; they often imagine a conference room, a wall of sticky notes, and an enterprise architecture diagram no single human fully understands. That can be useful, but it can also become theater. Most practical security wins come from smaller, tighter loops: one feature, one API path, one cron job, one queue consumer, one admin screen.&lt;/p&gt;
&lt;p&gt;I call this &amp;ldquo;threat modeling in the small.&amp;rdquo; The goal is not to produce a perfect model. The goal is to make one change safer this week without slowing delivery into paralysis.&lt;/p&gt;
&lt;p&gt;Start with a concrete unit. &amp;ldquo;User authentication&amp;rdquo; is too broad. &amp;ldquo;Password reset token creation and validation&amp;rdquo; is the right scale. Draw a tiny flow in plain text. List the trust boundaries. Ask where attacker-controlled data enters. Ask where privileged actions happen. Ask where logging exists and where it does not.&lt;/p&gt;
&lt;p&gt;At this size, engineers actually participate. They can reason from code they touched yesterday. They can connect risks to implementation choices. They can estimate effort honestly. Security stops being abstract policy and becomes software design.&lt;/p&gt;
&lt;p&gt;My default prompt set is short:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What are we protecting in this flow?&lt;/li&gt;
&lt;li&gt;Who can reach this entry point?&lt;/li&gt;
&lt;li&gt;What can an attacker control?&lt;/li&gt;
&lt;li&gt;What state change happens if checks fail?&lt;/li&gt;
&lt;li&gt;What evidence do we keep when things go wrong?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That five-question loop catches more real bugs than many heavyweight frameworks, because it forces precision. &amp;ldquo;We validate input&amp;rdquo; becomes &amp;ldquo;we validate length and charset before parsing and reject invalid UTF-8.&amp;rdquo; &amp;ldquo;We have auth&amp;rdquo; becomes &amp;ldquo;we verify ownership before read and before update, not just at login.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;Another useful trick is pairing each threat with one &amp;ldquo;cheap guardrail&amp;rdquo; and one &amp;ldquo;strong guardrail.&amp;rdquo; Cheap guardrails are things you can ship in a day: stricter defaults, safer parser settings, explicit allowlists, better rate limits, better log fields. Strong guardrails need more work: protocol redesign, key rotation pipeline, privilege split, async isolation, dedicated policy engine.&lt;/p&gt;
&lt;p&gt;This gives teams options. They can reduce risk immediately while planning structural fixes. Without this split, discussions get stuck between &amp;ldquo;too expensive&amp;rdquo; and &amp;ldquo;too risky,&amp;rdquo; and nothing moves.&lt;/p&gt;
&lt;p&gt;For small models, scoring should also stay small. Avoid giant risk matrices with fake precision. I use three levels:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;High:&lt;/strong&gt; likely and damaging, must mitigate before release.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Medium:&lt;/strong&gt; plausible, can ship with guardrail and tracked follow-up.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Low:&lt;/strong&gt; edge case, document and revisit during refactor.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The important part is not the label. The important part is explicit ownership and a due date.&lt;/p&gt;
&lt;p&gt;Documentation format can remain lean. One markdown file per feature works well:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;scope of the modeled flow&lt;/li&gt;
&lt;li&gt;data classification involved&lt;/li&gt;
&lt;li&gt;threats and mitigations&lt;/li&gt;
&lt;li&gt;known gaps and follow-up tasks&lt;/li&gt;
&lt;li&gt;links to code, tests, and dashboards&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If your model cannot be read in five minutes, it will not be read during incident response. During incidents, short documents win.&lt;/p&gt;
&lt;p&gt;Threat modeling in the small also improves code review quality. Reviewers can ask threat-aware questions because they know the expected controls. &amp;ldquo;Where is the ownership check?&amp;rdquo; &amp;ldquo;What happens on parser failure?&amp;rdquo; &amp;ldquo;Do we leak this error to the client?&amp;rdquo; &amp;ldquo;Is this action audit logged?&amp;rdquo; These become normal review language, not special security meetings.&lt;/p&gt;
&lt;p&gt;Testing benefits too. Each high or medium threat should map to at least one concrete test case:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;malformed token structure&lt;/li&gt;
&lt;li&gt;replayed reset token&lt;/li&gt;
&lt;li&gt;expired token with clock skew&lt;/li&gt;
&lt;li&gt;brute-force attempts from distributed IPs&lt;/li&gt;
&lt;li&gt;log event integrity under failure paths&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This turns threat modeling from a document into executable confidence.&lt;/p&gt;
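&lt;p&gt;The replayed-reset-token threat, for example, maps to a test that a token redeems exactly once. The token store below is a deliberately minimal stand-in for a real implementation, but the assertions are the shape a threat-driven test should take.&lt;/p&gt;

```python
import secrets

class ResetTokens:
    """Minimal single-use reset-token store for threat-driven testing."""
    def __init__(self):
        self._live = set()

    def issue(self) -> str:
        token = secrets.token_urlsafe(16)
        self._live.add(token)
        return token

    def redeem(self, token: str) -> bool:
        # single use: a replayed token must fail the second time
        if token in self._live:
            self._live.remove(token)
            return True
        return False

# threat-driven test: replayed and forged reset tokens
store = ResetTokens()
t = store.issue()
assert store.redeem(t) is True        # first use succeeds
assert store.redeem(t) is False       # replay is rejected
assert store.redeem("forged") is False  # unknown token is rejected
print("replay threat tests passed")
```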
&lt;p&gt;One anti-pattern to avoid: modeling only confidentiality risks. Many teams forget integrity and availability. Attackers do not always want to steal data. Sometimes they want to mutate state silently, poison metrics, or degrade service enough to trigger unsafe operator behavior. Small models should include those outcomes explicitly.&lt;/p&gt;
&lt;p&gt;Another anti-pattern: assuming internal systems are trusted by default. Internal callers can be compromised, misconfigured, or simply outdated. Every boundary deserves explicit checks, not cultural trust.&lt;/p&gt;
&lt;p&gt;You also need to revisit models after feature drift. A safe flow can become unsafe after &amp;ldquo;tiny&amp;rdquo; product changes: one new query parameter, one optional bypass for support, one reused endpoint for batch jobs. Keep threat notes near code ownership, not in a forgotten wiki folder.&lt;/p&gt;
&lt;p&gt;In mature teams, this process becomes routine:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;model in planning&lt;/li&gt;
&lt;li&gt;verify in review&lt;/li&gt;
&lt;li&gt;test in CI&lt;/li&gt;
&lt;li&gt;monitor in production&lt;/li&gt;
&lt;li&gt;update after incidents&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That loop is what you want. Not a quarterly ritual.&lt;/p&gt;
&lt;p&gt;The most practical security posture is not maximal paranoia. It is repeatable discipline. Threat modeling in the small provides exactly that: bounded scope, fast iteration, and security decisions that survive contact with real shipping pressure.&lt;/p&gt;
&lt;p&gt;If you adopt only one rule, adopt this: no feature touching auth, money, permissions, or external input ships without a one-page small threat model and at least one threat-driven test. The cost is low. The regret avoided is high.&lt;/p&gt;
</description>
    </item>
    
  </channel>
</rss>
