<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Exploits on TurboVision</title>
    <link>https://turbovision.in6-addr.net/tags/exploits/</link>
    <description>Recent content in Exploits on TurboVision</description>
    <generator>Hugo</generator>
    <language>en</language>
    <lastBuildDate>Tue, 21 Apr 2026 14:06:12 +0000</lastBuildDate>
    <atom:link href="https://turbovision.in6-addr.net/tags/exploits/index.xml" rel="self" type="application/rss&#43;xml" />
    
    
    
    <item>
      <title>Exploit Reliability over Cleverness</title>
      <link>https://turbovision.in6-addr.net/hacking/exploits/exploit-reliability-over-cleverness/</link>
      <pubDate>Sun, 22 Feb 2026 00:00:00 +0000</pubDate>
      <lastBuildDate>Sun, 22 Feb 2026 22:17:18 +0100</lastBuildDate>
      <guid>https://turbovision.in6-addr.net/hacking/exploits/exploit-reliability-over-cleverness/</guid>
      <description>&lt;p&gt;Exploit writeups often reward elegance: shortest payload, sharpest primitive chain, most surprising bypass. In real engagements, the winning attribute is usually reliability. A moderately clever exploit that works repeatedly beats a brilliant exploit that succeeds once and fails under slight environmental variation.&lt;/p&gt;
&lt;p&gt;Reliability is engineering, not luck.&lt;/p&gt;
&lt;p&gt;The first step is to define what &amp;ldquo;reliable&amp;rdquo; means for your context:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;success rate across repeated runs&lt;/li&gt;
&lt;li&gt;tolerance to timing variance&lt;/li&gt;
&lt;li&gt;tolerance to memory layout variance&lt;/li&gt;
&lt;li&gt;deterministic post-exploit behavior&lt;/li&gt;
&lt;li&gt;recoverable failure modes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If reliability is not measured, it is mostly imagined.&lt;/p&gt;
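&lt;p&gt;As a sketch of what measurement can look like (the attempt function, run count, and success probability below are illustrative placeholders, not a real harness):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;import random

def measure_success_rate(attempt, runs=50, seed=0):
    # attempt() returns True when the objective is reached.
    random.seed(seed)
    wins = sum(1 for _ in range(runs) if attempt())
    return wins / runs

# Stand-in for a real exploit attempt: succeeds about 70% of the time.
def flaky_attempt():
    return random.randrange(10) in range(7)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Even this toy version beats intuition: a number over many runs is comparable across revisions; a single successful debugger session is not.&lt;/p&gt;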
&lt;p&gt;A practical reliability-first workflow:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;establish baseline crash and control rates&lt;/li&gt;
&lt;li&gt;isolate one primitive at a time&lt;/li&gt;
&lt;li&gt;add instrumentation around each stage&lt;/li&gt;
&lt;li&gt;run variability tests continuously&lt;/li&gt;
&lt;li&gt;optimize chain complexity only after stability&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Many teams reverse this and pay the price.&lt;/p&gt;
&lt;p&gt;Control proof should be statistical, not anecdotal. If instruction pointer control appears in one debugger run, that is a hint, not a milestone. Confirm over many runs with slightly different environment conditions.&lt;/p&gt;
&lt;p&gt;Primitive isolation is the next guardrail. Validate each piece independently:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;leak primitive correctness&lt;/li&gt;
&lt;li&gt;stack pivot stability&lt;/li&gt;
&lt;li&gt;register setup integrity&lt;/li&gt;
&lt;li&gt;write primitive side effects&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Composing unvalidated pieces multiplies uncertainty and yields brittle chains.&lt;/p&gt;
&lt;p&gt;Instrumentation needs to exist before &amp;ldquo;final payload.&amp;rdquo; Useful markers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;stage IDs embedded in payload path&lt;/li&gt;
&lt;li&gt;register snapshots near transition points&lt;/li&gt;
&lt;li&gt;expected stack layout checkpoints&lt;/li&gt;
&lt;li&gt;structured crash classification&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With instrumentation, failure becomes data. Without it, failure is guesswork.&lt;/p&gt;
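&lt;p&gt;A minimal sketch of stage-ID markers feeding crash classification; the marker values and stage names are hypothetical:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;# Illustrative stage markers; a real payload embeds distinguishable
# magic values near each transition point.
STAGE_MARKERS = {
    'stage-a-leak':  0x41414141,
    'stage-b-pivot': 0x42424242,
    'stage-c-exec':  0x43434343,
}

def classify_crash(fault_value):
    # Map a faulting register value back to the stage whose marker it carries.
    for stage, marker in STAGE_MARKERS.items():
        if fault_value == marker:
            return stage
    return 'unclassified'
&lt;/code&gt;&lt;/pre&gt;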
&lt;p&gt;Environment variability kills overfit exploits. Include these tests in routine:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;multiple process restarts&lt;/li&gt;
&lt;li&gt;altered environment variable lengths&lt;/li&gt;
&lt;li&gt;changed file descriptor ordering&lt;/li&gt;
&lt;li&gt;light timing perturbation&lt;/li&gt;
&lt;li&gt;host load variation&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If exploit behavior changes dramatically under these, reliability work remains.&lt;/p&gt;
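&lt;p&gt;A variability loop can be as small as this sketch; the &lt;code&gt;PAD&lt;/code&gt; variable name is arbitrary and exists only to shift initial stack layout between runs:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;import os
import random
import subprocess

def variability_runs(cmd, runs=5, seed=0):
    # Re-run the target with perturbed environment variable lengths and
    # record exit codes; divergent codes flag layout sensitivity.
    random.seed(seed)
    codes = []
    for _ in range(runs):
        env = dict(os.environ)
        env['PAD'] = 'A' * random.randrange(1, 512)
        proc = subprocess.run(cmd, env=env, capture_output=True)
        codes.append(proc.returncode)
    return codes
&lt;/code&gt;&lt;/pre&gt;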
&lt;p&gt;Another reliability trap is hidden dependencies on tooling state. Payloads that only work with a specific debugger setting, locale, or runtime library variant are not field-ready. Capture and minimize assumptions explicitly.&lt;/p&gt;
&lt;p&gt;Input channel constraints also matter. Exploits validated through direct stdin may fail via web gateway normalization, protocol framing, or character-set transformations. Re-test through the real delivery channel early.&lt;/p&gt;
&lt;p&gt;I prefer degradable exploit architecture:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;stage A leaks safe diagnostic state&lt;/li&gt;
&lt;li&gt;stage B validates critical offsets&lt;/li&gt;
&lt;li&gt;stage C performs objective action&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If stage C fails, stage A/B still provide useful evidence for iteration. All-or-nothing payloads waste cycles.&lt;/p&gt;
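&lt;p&gt;The three-stage idea can be expressed as a small driver; stage names and return values here are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;def run_degradable(stages):
    # stages: ordered (name, callable) pairs; each callable returns
    # (ok, evidence). Later stages are skipped after a failure, but
    # evidence from completed stages is preserved for iteration.
    evidence = {}
    for name, stage in stages:
        ok, data = stage()
        evidence[name] = data
        if not ok:
            evidence['failed_at'] = name
            break
    return evidence
&lt;/code&gt;&lt;/pre&gt;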
&lt;p&gt;Error handling is part of reliability too. Ask:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;what happens when leak parse fails?&lt;/li&gt;
&lt;li&gt;what if offset confidence is low?&lt;/li&gt;
&lt;li&gt;can payload abort cleanly instead of crashing target repeatedly?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A controlled abort path can preserve access and reduce detection noise.&lt;/p&gt;
&lt;p&gt;Mitigation-aware design should be explicit from the beginning:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;ASLR uncertainty strategy&lt;/li&gt;
&lt;li&gt;canary handling strategy&lt;/li&gt;
&lt;li&gt;RELRO impact on write targets&lt;/li&gt;
&lt;li&gt;CFI/DEP constraints&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Pretending mitigations are incidental leads to late-stage redesign.&lt;/p&gt;
&lt;p&gt;Documentation quality strongly correlates with reliability outcomes. Maintain:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;assumptions list&lt;/li&gt;
&lt;li&gt;tested environment matrix&lt;/li&gt;
&lt;li&gt;known fragility points&lt;/li&gt;
&lt;li&gt;stage success criteria&lt;/li&gt;
&lt;li&gt;rollback/cleanup guidance&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Clear docs enable repeatability across operators.&lt;/p&gt;
&lt;p&gt;Team workflows improve when reliability gates are formal:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;no stage promotion below defined success rate&lt;/li&gt;
&lt;li&gt;no merge of payload changes without variability run&lt;/li&gt;
&lt;li&gt;no &amp;ldquo;works on my machine&amp;rdquo; acceptance&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These gates feel strict until they prevent expensive engagement failures.&lt;/p&gt;
&lt;p&gt;Operationally, reliability lowers risk on both sides. For authorized assessments, predictable behavior reduces unintended impact and simplifies stakeholder communication. Unreliable payloads increase collateral risk and incident complexity.&lt;/p&gt;
&lt;p&gt;One useful metric is &amp;ldquo;mean attempts to objective.&amp;rdquo; Track it over exploit revisions. A falling mean usually indicates rising reliability and improved workflow quality.&lt;/p&gt;
&lt;p&gt;Another is &amp;ldquo;unknown-failure ratio&amp;rdquo;: the share of failures without a classified root cause. A high ratio means instrumentation is insufficient, no matter how clever the payload logic appears.&lt;/p&gt;
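&lt;p&gt;Both metrics reduce to a few lines of bookkeeping; the &lt;code&gt;unknown&lt;/code&gt; label below is a placeholder convention:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;def mean_attempts_to_objective(attempt_counts):
    # attempt_counts: attempts needed per run to reach the objective.
    return sum(attempt_counts) / len(attempt_counts)

def unknown_failure_ratio(root_causes):
    # root_causes: one classification label per observed failure.
    if not root_causes:
        return 0.0
    return root_causes.count('unknown') / len(root_causes)
&lt;/code&gt;&lt;/pre&gt;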
&lt;p&gt;There is a strategic insight here: reliability work often reveals simpler exploitation paths. While hardening one complex chain, teams may discover a shorter, more robust primitive route. Reliability iteration is not just polishing; it is exploration with feedback.&lt;/p&gt;
&lt;p&gt;I also recommend periodic &amp;ldquo;fresh-operator replay.&amp;rdquo; Have another engineer reproduce results from docs only. If replay fails, reliability is overstated. This catches hidden tribal assumptions quickly.&lt;/p&gt;
&lt;p&gt;When reporting, communicate reliability clearly:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;tested run count&lt;/li&gt;
&lt;li&gt;success percentage&lt;/li&gt;
&lt;li&gt;environment scope&lt;/li&gt;
&lt;li&gt;known instability triggers&lt;/li&gt;
&lt;li&gt;required preconditions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This transparency improves trust in findings and helps defenders prioritize realistically.&lt;/p&gt;
&lt;p&gt;Cleverness has value. It expands possibility space. But in practice, mature exploitation programs treat cleverness as prototype and reliability as product.&lt;/p&gt;
&lt;p&gt;If you want one rule to improve outcomes immediately, adopt this: no exploit claim without repeatability evidence under controlled variability. This single rule filters out fragile wins and pushes teams toward engineering-grade results.&lt;/p&gt;
&lt;p&gt;In exploitation, the payload that survives reality is the payload that matters.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Fuzzing to Exploitability with Discipline</title>
      <link>https://turbovision.in6-addr.net/hacking/exploits/fuzzing-to-exploitability-with-discipline/</link>
      <pubDate>Sun, 22 Feb 2026 00:00:00 +0000</pubDate>
      <lastBuildDate>Sun, 22 Feb 2026 22:43:01 +0100</lastBuildDate>
      <guid>https://turbovision.in6-addr.net/hacking/exploits/fuzzing-to-exploitability-with-discipline/</guid>
      <description>&lt;p&gt;Fuzzing finds crashes quickly. Turning crashes into reliable security findings is slower, less glamorous work. Many teams stall in the gap between “it crashed” and “this is exploitable under defined conditions.” Bridging that gap requires discipline in triage, reduction, root-cause analysis, and harness quality. Without this discipline, fuzzing campaigns generate noise instead of security value.&lt;/p&gt;
&lt;p&gt;The first mistake is overvaluing raw crash counts. Hundreds of unique stack traces can still map to a handful of root causes. Counting crashes as progress creates perverse incentives: bigger corpus churn, less deduplication, shallow analysis. Useful metrics are different: number of distinct root causes, percentage with minimized reproducers, time to fix confirmation, and recurrence rate after patches.&lt;/p&gt;
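&lt;p&gt;A rough way to keep counts honest is bucketing crashes by a root-cause proxy such as the top stack frames; this is a heuristic sketch, not ground truth:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;import hashlib

def bucket_key(frames, depth=3):
    # Bucket a crash by its top frames; crashes sharing a key are
    # probably, though not certainly, the same root cause.
    top = '|'.join(frames[:depth])
    return hashlib.sha1(top.encode()).hexdigest()[:12]

def distinct_buckets(crashes):
    return len({bucket_key(frames) for frames in crashes})
&lt;/code&gt;&lt;/pre&gt;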
&lt;p&gt;Crash triage begins with deterministic reproduction. If you cannot replay reliably, you cannot reason reliably. Save exact binaries, runtime flags, environment variables, and input artifacts. Capture hashes of test executables. Tiny environmental drift can turn a real vulnerability into a ghost. Reproducibility is not bureaucracy; it is scientific control.&lt;/p&gt;
&lt;p&gt;Input minimization is the next force multiplier. Large fuzz artifacts obscure causality and slow debugger cycles. Use minimizers aggressively to isolate the smallest trigger that preserves behavior. A minimized artifact clarifies parser states, boundary transitions, and corruption points. It also produces cleaner reports and faster regression tests.&lt;/p&gt;
&lt;p&gt;Sanitizers provide critical signal, but they are not the end of analysis. AddressSanitizer might report a heap overflow; you still need to determine reachable control influence, overwrite constraints, and realistic attacker preconditions. UndefinedBehaviorSanitizer may flag dangerous operations that are currently non-exploitable yet indicate brittle code likely to fail differently under compiler or platform changes. Triage should classify both immediate risk and latent risk.&lt;/p&gt;
&lt;p&gt;Harness design determines campaign quality. Weak harnesses exercise parse entry points without modeling realistic state machines, causing false confidence. Strong harnesses preserve key protocol invariants while allowing broad mutation. They balance realism and mutation freedom. This is hard engineering, not copy-paste setup.&lt;/p&gt;
&lt;p&gt;Coverage guidance helps, but raw coverage increase is not always meaningful. Reaching new basic blocks in dead-end validation code is less valuable than exploring transitions around privilege checks, memory ownership changes, and parser mode switches. Analysts should correlate coverage with threat-relevant program regions, not only percentage metrics.&lt;/p&gt;
&lt;p&gt;Once root cause is known, exploitability assessment should be explicit. Ask structured questions:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Can attacker-controlled data influence memory layout?&lt;/li&gt;
&lt;li&gt;Is corruption adjacent to control data or security boundaries?&lt;/li&gt;
&lt;li&gt;What mitigations exist (ASLR, DEP, CFI, hardened allocators)?&lt;/li&gt;
&lt;li&gt;What preconditions are needed in realistic deployments?&lt;/li&gt;
&lt;li&gt;Can impact be chained with known primitives?&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This framework avoids both alarmism and underreporting.&lt;/p&gt;
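&lt;p&gt;The questions can be tracked as structured data so unknowns stay visible instead of silently counting as &amp;ldquo;no&amp;rdquo;; the field names here are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;EXPLOITABILITY_QUESTIONS = [
    'attacker_influences_layout',
    'corruption_adjacent_to_control_data',
    'mitigations_surveyed',
    'realistic_preconditions',
    'chainable_with_known_primitives',
]

def assess(answers):
    # answers maps each question to True, False, or None (unknown).
    yes = sum(1 for q in EXPLOITABILITY_QUESTIONS if answers.get(q) is True)
    unknown = [q for q in EXPLOITABILITY_QUESTIONS if answers.get(q) is None]
    return {'yes': yes, 'total': len(EXPLOITABILITY_QUESTIONS), 'unknown': unknown}
&lt;/code&gt;&lt;/pre&gt;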
&lt;p&gt;Patch validation is often where teams regress. Fixes that gate one parser branch can leave sibling paths vulnerable. Every confirmed root cause should generate regression tests and pattern searches for analogous code. If one arithmetic underflow appeared in size calculations, audit all similar calculations. Class-level remediation beats single-site repair.&lt;/p&gt;
&lt;p&gt;Communication quality affects remediation speed. Reports should provide minimized input, deterministic repro instructions, root cause narrative, exploitability assessment, and concrete patch guidance. Vague “possible overflow” reports waste maintainer cycles and reduce trust in the security process. Precision earns action.&lt;/p&gt;
&lt;p&gt;There is also a product lesson here. Fuzzing exposes interfaces that are too permissive, parser states that are too implicit, and ownership models that are too fragile. If the same categories keep appearing, architecture should change: stronger type boundaries, safer parsers, stricter validation contracts, memory-safe rewrites in high-risk components. Tooling finds symptoms; architecture removes disease reservoirs.&lt;/p&gt;
&lt;p&gt;In mature teams, fuzzing is not a one-off audit but a continuous feedback loop. Inputs evolve with features, harnesses track protocol changes, and triage pipelines remain lean enough to keep up with signal. The target is not “no crashes ever.” The target is rapid conversion of crashes into durable security improvements with measurable recurrence reduction.&lt;/p&gt;
&lt;p&gt;Fuzzers are powerful, but they are amplifiers. They amplify your harness quality, your triage discipline, and your engineering follow-through. Invest there, and fuzzing becomes a strategic advantage rather than a crash screenshot generator.&lt;/p&gt;
&lt;p&gt;For teams starting out, the most effective first milestone is not maximum coverage. It is a repeatable end-to-end path from one crash to one fixed root cause plus one regression test. Once that loop is reliable, scaling campaigns becomes a multiplication problem instead of a confusion problem.&lt;/p&gt;
&lt;h2 id=&#34;minimal-triage-loop-example&#34;&gt;Minimal triage loop example&lt;/h2&gt;
&lt;p&gt;A compact command sequence for one crash can look like this:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;2
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;3
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;4
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;./target --input crash.bin 2&amp;gt;&lt;span class=&#34;p&#34;&gt;&amp;amp;&lt;/span&gt;&lt;span class=&#34;m&#34;&gt;1&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;|&lt;/span&gt; tee repro.log
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;./minimizer --in crash.bin --out min.bin -- ./target --input @@
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nv&#34;&gt;ASAN_OPTIONS&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;halt_on_error&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;m&#34;&gt;1&lt;/span&gt; ./target --input min.bin 2&amp;gt;&lt;span class=&#34;p&#34;&gt;&amp;amp;&lt;/span&gt;&lt;span class=&#34;m&#34;&gt;1&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;|&lt;/span&gt; tee asan.log
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;rg &lt;span class=&#34;s2&#34;&gt;&amp;#34;ERROR|SUMMARY|pc|bp|sp&amp;#34;&lt;/span&gt; asan.log&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;This is not a full pipeline, but it enforces the critical order: reproduce, minimize, re-run under sanitizer, extract stable signal.&lt;/p&gt;
&lt;p&gt;Related reading:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/hacking/exploits/exploit-reliability-over-cleverness/&#34;&gt;Exploit Reliability Over Cleverness&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/hacking/exploits/rop-under-pressure/&#34;&gt;ROP Under Pressure&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/hacking/security-findings-as-design-feedback/&#34;&gt;Security Findings as Design Feedback&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
    <item>
      <title>ROP Under Pressure</title>
      <link>https://turbovision.in6-addr.net/hacking/exploits/rop-under-pressure/</link>
      <pubDate>Sun, 22 Feb 2026 00:00:00 +0000</pubDate>
      <lastBuildDate>Sun, 22 Feb 2026 22:09:11 +0100</lastBuildDate>
      <guid>https://turbovision.in6-addr.net/hacking/exploits/rop-under-pressure/</guid>
      <description>&lt;p&gt;Return-oriented programming feels elegant in writeups and messy in real targets. In controlled examples, gadgets line up, stack state is stable, and side effects are manageable. In live binaries, you are usually balancing fragile constraints: limited write primitives, partial leaks, constrained input channels, and mitigation combinations that punish assumptions.&lt;/p&gt;
&lt;p&gt;Working &amp;ldquo;under pressure&amp;rdquo; means building payloads that survive imperfect conditions, not just proving theoretical code execution.&lt;/p&gt;
&lt;p&gt;My practical approach starts by classifying constraints before touching gadgets:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;architecture and calling convention&lt;/li&gt;
&lt;li&gt;NX/DEP status&lt;/li&gt;
&lt;li&gt;ASLR quality and available leaks&lt;/li&gt;
&lt;li&gt;RELRO mode and GOT mutability&lt;/li&gt;
&lt;li&gt;stack canary behavior&lt;/li&gt;
&lt;li&gt;input sanitizer and bad-byte set&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Without this map, gadget hunting becomes random motion.&lt;/p&gt;
&lt;p&gt;A reliable chain should minimize dependencies. Fancy multi-stage chains look impressive but fail more often when target timing or memory layout shifts. Prefer short chains with explicit stack hygiene and clear post-condition checks.&lt;/p&gt;
&lt;p&gt;I use three build phases:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;control proof&lt;/strong&gt; - confirm RIP/EIP control and offset stability&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;primitive proof&lt;/strong&gt; - validate one critical primitive (e.g., register load, memory write)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;goal chain&lt;/strong&gt; - compose final chain from proven pieces&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Each phase gets its own test harness and logs.&lt;/p&gt;
&lt;p&gt;Side effects are where many chains die. A gadget that sets &lt;code&gt;rdi&lt;/code&gt; but trashes &lt;code&gt;rbx&lt;/code&gt; and &lt;code&gt;rbp&lt;/code&gt; might still be useful, but only if you account for the collateral damage in later steps. Treat every gadget as a state transition, not a one-line shortcut.&lt;/p&gt;
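&lt;p&gt;This state-transition view is easy to encode; the gadget set below is invented for illustration, not taken from a real binary:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;# Each gadget declares the registers it prepares and the ones it trashes.
GADGETS = {
    'pop_rdi_ret':       {'sets': {'rdi'}, 'clobbers': set()},
    'set_rdi_trash_rbx': {'sets': {'rdi'}, 'clobbers': {'rbx', 'rbp'}},
    'pop_rsi_r15_ret':   {'sets': {'rsi'}, 'clobbers': {'r15'}},
}

def surviving_registers(chain, prepared):
    # Track which prepared registers still hold intended values after
    # the whole chain has run.
    for name in chain:
        prepared = prepared - GADGETS[name]['clobbers']
        prepared = prepared | GADGETS[name]['sets']
    return prepared
&lt;/code&gt;&lt;/pre&gt;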
&lt;p&gt;Leaked address handling should be defensive. Parse leaks robustly, validate alignment expectations, and reject implausible values early. Nothing wastes time like debugging a perfect chain built on one malformed leak parse.&lt;/p&gt;
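&lt;p&gt;Defensive parsing is cheap to write. A sketch, assuming a hypothetical &lt;code&gt;LEAK 0x...&lt;/code&gt; line format and an example plausibility window:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;def parse_leak(line):
    # Hypothetical leak line format: 'LEAK 0x7f3a12345678'.
    parts = line.split()
    if len(parts) != 2 or parts[0] != 'LEAK':
        return None
    try:
        addr = int(parts[1], 16)
    except ValueError:
        return None
    # Reject misaligned or implausible values early; this window is an
    # example, not a universal constant.
    if addr % 8 != 0:
        return None
    if addr not in range(0x550000000000, 0x800000000000):
        return None
    return addr
&lt;/code&gt;&lt;/pre&gt;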
&lt;p&gt;Bad bytes and transport constraints deserve first-class design. If input path strips null bytes or mangles whitespace, chain encoding must adapt. Partial overwrite strategies and staged writes often outperform brute-force payload expansion.&lt;/p&gt;
&lt;p&gt;For libc-based chains, resolution strategy matters. Hardcoding offsets is fine for CTFs but risky in real environments. Build version-detection logic where possible and keep fallback paths. If uncertainty is high, consider ret2dlresolve or syscall-oriented alternatives.&lt;/p&gt;
&lt;p&gt;Stack alignment details are easy to ignore until they break calls on hardened libc paths. Enforce alignment deliberately before sensitive calls, especially on x86_64 where ABI expectations can cause subtle crashes.&lt;/p&gt;
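&lt;p&gt;A sketch of deliberate alignment handling, assuming 8-byte chain slots on x86_64; the exact rule depends on how the pivot entered the chain, so verify in a debugger:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;def alignment_pad(chain, ret_gadget, boundary=16):
    # If the slots consumed before a sensitive call leave rsp off the
    # 16-byte boundary, prepend one plain ret gadget to restore it.
    if (len(chain) * 8) % boundary == 0:
        return chain
    return [ret_gadget] + chain
&lt;/code&gt;&lt;/pre&gt;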
&lt;p&gt;Instrumentation is critical under pressure:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;crash reason classification&lt;/li&gt;
&lt;li&gt;register snapshots at key points&lt;/li&gt;
&lt;li&gt;stack dump around pivot region&lt;/li&gt;
&lt;li&gt;chain stage markers in payload&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These reduce &amp;ldquo;it crashed somewhere&amp;rdquo; debugging into actionable iteration.&lt;/p&gt;
&lt;p&gt;Another useful tactic is payload degradability. Build chains so partial success still yields information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;leak stage works even if exec stage fails&lt;/li&gt;
&lt;li&gt;file-read stage works even if shell stage fails&lt;/li&gt;
&lt;li&gt;environment fingerprint stage precedes risky actions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Incremental gain beats all-or-nothing payloads when reliability is uncertain.&lt;/p&gt;
&lt;p&gt;Defender perspective improves attacker quality. Ask what would make this exploit harder:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;stricter CFI&lt;/li&gt;
&lt;li&gt;seccomp profiles&lt;/li&gt;
&lt;li&gt;full RELRO + PIE + canaries + hardened allocator&lt;/li&gt;
&lt;li&gt;reduced gadget surface via compiler settings&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This guides realistic chain design and helps prioritize exploitation paths.&lt;/p&gt;
&lt;p&gt;Time pressure often creates overfitting: chains that work only on one process lifetime. To avoid this, run variability tests:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;repeated launches&lt;/li&gt;
&lt;li&gt;timing perturbation&lt;/li&gt;
&lt;li&gt;environment variable changes&lt;/li&gt;
&lt;li&gt;file descriptor order shifts&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A chain that survives variability is a chain you can trust.&lt;/p&gt;
&lt;p&gt;Documentation should capture more than the final exploit. Keep:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;mitigation map&lt;/li&gt;
&lt;li&gt;failed strategy log&lt;/li&gt;
&lt;li&gt;gadget rationale&lt;/li&gt;
&lt;li&gt;known fragility points&lt;/li&gt;
&lt;li&gt;reproducibility instructions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This turns one exploit into reusable team knowledge.&lt;/p&gt;
&lt;p&gt;Ethically and operationally, exploitation work should stay bounded by authorization and clear engagement scope. &amp;ldquo;Under pressure&amp;rdquo; is not an excuse for sloppy controls. Good operators move quickly and carefully.&lt;/p&gt;
&lt;p&gt;ROP remains a valuable skill because it teaches precise reasoning about program state. But mature exploitation is less about clever gadgets and more about disciplined engineering: hypothesis-driven tests, controlled iteration, and robustness against uncertainty.&lt;/p&gt;
&lt;p&gt;If you remember one rule: never trust a chain that has not survived repeated runs under slightly different conditions. Reliability is the real exploit milestone.&lt;/p&gt;
&lt;p&gt;For teams, shared exploit harnesses help a lot. Keep a minimal runner that captures crashes, leaks, register snapshots, and timing metadata in a consistent format. Individual payloads can vary, but a common harness preserves comparability across attempts and reduces duplicated debugging labor.&lt;/p&gt;
&lt;p&gt;That consistency turns pressure into process.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Format String Attacks Demystified</title>
      <link>https://turbovision.in6-addr.net/hacking/exploits/format-string-attacks/</link>
      <pubDate>Sun, 14 Dec 2025 00:00:00 +0000</pubDate>
      <lastBuildDate>Sun, 22 Feb 2026 15:49:27 +0100</lastBuildDate>
      <guid>https://turbovision.in6-addr.net/hacking/exploits/format-string-attacks/</guid>
      <description>&lt;p&gt;Format string vulnerabilities happen when user-controlled input ends up as the
first argument to &lt;code&gt;printf()&lt;/code&gt;. Instead of printing text, the attacker reads or
writes arbitrary memory.&lt;/p&gt;
&lt;p&gt;We demonstrate reading the stack with &lt;code&gt;%08x&lt;/code&gt; specifiers, then escalate to an
arbitrary write using &lt;code&gt;%n&lt;/code&gt;. The write-what-where primitive turns a seemingly
harmless logging call into full code execution.&lt;/p&gt;
&lt;p&gt;The fix is trivial: always pass a format string literal. &lt;code&gt;printf(&amp;quot;%s&amp;quot;, buf)&lt;/code&gt;
instead of &lt;code&gt;printf(buf)&lt;/code&gt;. Yet this class of bug resurfaces in embedded firmware
to this day.&lt;/p&gt;
&lt;p&gt;Why does this still happen? Because logging code is often treated as harmless,
copied fast, and reviewed late. In small C projects, developers optimize for
speed of implementation and forget that formatting functions are tiny parsers
with side effects.&lt;/p&gt;
&lt;h2 id=&#34;exploitation-ladder&#34;&gt;Exploitation ladder&lt;/h2&gt;
&lt;p&gt;Typical progression in a lab binary:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Leak stack values with &lt;code&gt;%x&lt;/code&gt; and locate attacker-controlled bytes.&lt;/li&gt;
&lt;li&gt;Calibrate offsets until output is deterministic.&lt;/li&gt;
&lt;li&gt;Use width specifiers to control write count.&lt;/li&gt;
&lt;li&gt;Trigger &lt;code&gt;%n&lt;/code&gt; (or &lt;code&gt;%hn&lt;/code&gt;) to write controlled values to target addresses.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;At that point, you can often redirect flow indirectly by corrupting function
pointers, GOT entries (where applicable), or security-relevant flags.&lt;/p&gt;
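&lt;p&gt;The width-specifier arithmetic behind steps 3 and 4 can be sketched. To write a 32-bit value as two 16-bit &lt;code&gt;%hn&lt;/code&gt; writes, compute how many characters must be printed before each write; argument-list offsets and target addresses are omitted here:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;def hn_pad_counts(value, printed):
    # Split the target into 16-bit halves and compute the extra characters
    # to print before each %hn so the running character count matches
    # each half modulo 0x10000.
    lo = value % 0x10000
    hi = (value // 0x10000) % 0x10000
    pad_lo = (lo - printed) % 0x10000
    pad_hi = (hi - printed - pad_lo) % 0x10000
    return pad_lo, pad_hi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Writing the low half first is a convention; either order works as long as the running character count is tracked consistently.&lt;/p&gt;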
&lt;h2 id=&#34;defensive-pattern&#34;&gt;Defensive pattern&lt;/h2&gt;
&lt;p&gt;Treat every formatting call as a sink:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;enforce literal format strings in coding guidelines&lt;/li&gt;
&lt;li&gt;compile with warnings that detect non-literal format usage&lt;/li&gt;
&lt;li&gt;isolate logging wrappers so raw &lt;code&gt;printf&lt;/code&gt; calls are rare&lt;/li&gt;
&lt;li&gt;review embedded diagnostics paths as carefully as network parsers&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Related reading:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/hacking/exploits/buffer-overflow-101/&#34;&gt;Buffer Overflow 101&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/hacking/tools/ghidra-first-steps/&#34;&gt;Ghidra: First Steps in Reverse Engineering&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
    <item>
      <title>Buffer Overflow 101</title>
      <link>https://turbovision.in6-addr.net/hacking/exploits/buffer-overflow-101/</link>
      <pubDate>Mon, 03 Nov 2025 00:00:00 +0000</pubDate>
      <lastBuildDate>Sun, 22 Feb 2026 15:49:37 +0100</lastBuildDate>
      <guid>https://turbovision.in6-addr.net/hacking/exploits/buffer-overflow-101/</guid>
      <description>&lt;p&gt;A stack-based buffer overflow is the oldest trick in the book and still one of the
most instructive. We start with a vulnerable C program, compile it without canaries,
and walk through EIP control step by step.&lt;/p&gt;
&lt;p&gt;The target binary accepts user input via &lt;code&gt;gets()&lt;/code&gt; — a function so dangerous that
modern toolchains warn on any use of it, and C11 removed it from the standard entirely. We feed it a carefully
crafted payload: 64 bytes of padding, followed by the address of our shellcode
sitting on the stack.&lt;/p&gt;
&lt;p&gt;Key takeaways: always compile test binaries with &lt;code&gt;-fno-stack-protector -z execstack&lt;/code&gt;
when learning, and never on a production box.&lt;/p&gt;
&lt;p&gt;What makes this topic timeless is not the exact exploit recipe, but the mental
model it gives you: memory layout, calling convention, control-flow integrity,
and why unsafe copy primitives are dangerous by construction.&lt;/p&gt;
&lt;h2 id=&#34;reliable-lab-workflow&#34;&gt;Reliable lab workflow&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Confirm binary protections (&lt;code&gt;checksec&lt;/code&gt; style checks).&lt;/li&gt;
&lt;li&gt;Crash with pattern input to find exact overwrite offset.&lt;/li&gt;
&lt;li&gt;Validate instruction pointer control with marker values.&lt;/li&gt;
&lt;li&gt;Build payload in small increments and verify each stage.&lt;/li&gt;
&lt;li&gt;Only then attempt shellcode or return-oriented payloads.&lt;/li&gt;
&lt;/ol&gt;
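&lt;p&gt;Step 2 is usually done with a cyclic marker pattern. A small self-contained version in the spirit of the classic pattern-create/pattern-offset tools:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;def cyclic_pattern(length):
    # Three-character chunks, unique by construction, concatenated and
    # trimmed; plenty for small lab offsets.
    chunks = [a + b + c
              for a in 'ABCDEFGH'
              for b in 'abcdefgh'
              for c in '01234567']
    return ''.join(chunks)[:length]

def find_offset(pattern, observed):
    # observed: the bytes seen in the instruction pointer at crash time.
    return pattern.find(observed)
&lt;/code&gt;&lt;/pre&gt;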
&lt;p&gt;Expected outcome before each run should be explicit. If behavior differs, do
not &amp;ldquo;try random bytes&amp;rdquo;; explain the difference first. That habit turns exploit
practice into engineering instead of cargo cult.&lt;/p&gt;
&lt;h2 id=&#34;defensive-mirror&#34;&gt;Defensive mirror&lt;/h2&gt;
&lt;p&gt;Learning offensive mechanics should immediately map to mitigation:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;remove dangerous APIs (&lt;code&gt;gets&lt;/code&gt;, unchecked &lt;code&gt;strcpy&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;enable stack canaries, NX, PIE, and RELRO&lt;/li&gt;
&lt;li&gt;reduce attack surface in parser and input-heavy code paths&lt;/li&gt;
&lt;li&gt;test with sanitizers during development&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Related reading:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/hacking/exploits/format-string-attacks/&#34;&gt;Format String Attacks Demystified&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://turbovision.in6-addr.net/hacking/tools/ghidra-first-steps/&#34;&gt;Ghidra: First Steps in Reverse Engineering&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
  </channel>
</rss>
