Turbo Pascal Toolchain, Part 1: Anatomy and Workflow


Turbo Pascal is remembered for a fast blue IDE, but that is only the surface. The real strength was a full toolchain with tight feedback loops: editor, compiler, linker, debugger, units, and predictable artifacts. Part 1 maps that system in practical terms before we dive into binary formats, overlays, BGI, and ABI-level language details.

Structure map. This article proceeds in twelve sections: (1) version and scope boundaries, (2) toolchain topology and component wiring, (3) artifact pipeline and engineering signal, (4) IDE options as architecture, (5) directory and path policy, (6) practical project layout, (7) IDE–CLI parity and reproducible builds, (8) units as compile boundaries and incremental strategy, (9) debug loop mechanics and map/debug workflow, (10) external objects and integration discipline, (11) operational checklists and failure modes, and (12) how this foundation supports the rest of the series.

Scope and version boundaries

When discussing “latest Turbo Pascal,” engineers usually mean Turbo Pascal 7.0 and, in many setups, Borland Pascal 7 tooling around it. Some executable names and switches vary by package and installation, so this article uses two rules:

  1. describe workflow and architecture in version-stable terms
  2. call out where command names or options may differ

That keeps the discussion accurate without pretending all distributions are identical. TP 5.x used a simpler unit format; TP 6 and 7 extended it with object-oriented support and richer metadata. Projects that must support both TP 5 and TP 7 need to avoid OOP extensions and test on both toolchains.

Technical mechanism. TP 7 and BP 7 share the same core compiler engine but differ in packaging: TURBO.EXE (IDE) vs BP.EXE (Borland Pascal IDE), and command-line variants such as TPC.EXE or BPC.EXE. The compiler emits .TPU (Turbo Pascal Unit) files for units and .EXE for programs, and consumes externally assembled .OBJ modules via {$L}; TP 5.x and TP 6.x used similar conventions with minor format changes. Knowing your actual binary set (dir *.exe in the TP install directory) prevents configuration mistakes.

Workflow impact. Version drift between machines—one developer on TP 6, another on BP 7—manifests as mysterious “unit version mismatch” or link errors that do not reproduce elsewhere. Pitfall: assuming TURBO.EXE and TPC.EXE on the same install are always in lockstep; some bundled distributions ship slightly different compiler builds. Practical check: run tpc -? (or equivalent) and note the version string; document it in project setup. If multiple TP installs exist (e.g. C:\TP and C:\BP), ensure PATH and project scripts point to one canonical location to avoid picking up the wrong compiler.

Toolchain topology (what talks to what)

At minimum, a project involves these moving parts:

  • TURBO.EXE or BP.EXE style IDE workflow
  • command-line compiler (TPC in many setups)
  • link stage (built into the compiler; standalone TLINK belongs to the TASM/C side of the toolchain)
  • optional assembler and object modules (TASM plus .OBJ)
  • optional library manager (TLIB)
  • dump/inspection tooling (TDUMP)

Even if you only press “Compile” in the IDE, these layers still exist. Knowing them separately is the difference between “works today” and “I can debug this under pressure.”

Technical mechanism. The IDE invokes the compiler internally; the compiler produces a .TPU per unit and drives its integrated link phase to produce the final .EXE—for a plain Pascal build there is no separate linker invocation to manage. Understanding the pipeline still helps when the link phase fails: check that all referenced OBJ and TPU files exist and that no path is wrong. When you add {$L FASTBLIT} for an assembly module, the compiler reads FASTBLIT.OBJ while compiling that unit and embeds its code, so the object must exist at compile time. TASM is invoked separately if you maintain .ASM sources; TLIB merges .OBJ into .LIB archives for reuse. TDUMP inspects .EXE and .OBJ headers and symbol tables—critical when a link fails and you need to verify what the assembler actually produced.

Build loop semantics. The IDE distinguishes Compile (the active file only), Make (recompile any unit whose .PAS is newer than its .TPU, then link), and Build (recompile everything). A second Make with nothing changed is effectively a no-op—but “nothing changed” depends on timestamps. Editing a file and reverting without saving leaves the .PAS older than the .TPU, so the compiler skips it. Conversely, touching a unit file (e.g. via a script) forces recompile even when source is unchanged. The command-line tpc compiles the named program against existing TPUs by default; -M gives Make behavior and -B forces a full Build. Knowing which mode you are in avoids confusion when expectations differ (“I changed that!” vs “it didn’t rebuild”).
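The timestamp comparison described above can be reproduced by hand with the Dos unit—useful when diagnosing “it didn’t rebuild.” A minimal sketch (CORE is a placeholder unit name; DOS packed timestamps compare correctly as longints because the date occupies the high word):

```pascal
program ChkStamp;
{ Report whether a unit would be recompiled, the way Make decides: }
{ source .PAS newer than its .TPU means "rebuild".                 }
uses Dos;

function FileTime(const Name: string): longint;
var
  f: file;
  t: longint;
begin
  Assign(f, Name);
  {$I-} Reset(f, 1); {$I+}
  if IOResult <> 0 then
    FileTime := -1            { missing file: forces "must build" }
  else
  begin
    GetFTime(f, t);           { packed DOS date/time of the file  }
    Close(f);
    FileTime := t;
  end;
end;

begin
  if FileTime('CORE.PAS') > FileTime('CORE.TPU') then
    WriteLn('CORE: source newer than TPU -> would recompile')
  else
    WriteLn('CORE: TPU up to date -> would be skipped');
end.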

Workflow impact. Debugging a “Compiler Error” when the real failure is at link time wastes hours. Learn to read compile-time vs link-time messages: compiler errors cite source lines; link-phase errors cite missing symbols or object format issues. When you add {$L file}, the compiler does not run TASM—you must assemble .ASM to .OBJ yourself. A project using assembly typically has a two-step build: first tasm /mx module, then tpc main.pas. Omitting the TASM step produces a compile-time error (“File not found” or “Invalid OBJ file”) at the {$L} line. Pitfall: the IDE may hide or truncate build output; a batch build that echoes full output is essential. Practical check: run a minimal tpc main.pas from the command line and observe the exact sequence of messages and warnings; compare with the IDE compile to spot divergence. When the link phase reports an undefined symbol, use tdump fastblit.obj | findstr SYMBOL (against the assembled module) to inspect what the assembler actually exported; cross-reference with the Pascal external declarations to find mismatches. The .TPU format itself is proprietary—interface mismatches between units surface as compile-time “unit version” errors rather than anything TDUMP can decode.

Artifact pipeline as engineering signal

A typical single-target flow:

.PAS  --compile-->  .TPU/.OBJ  --link-->  .EXE
                              \--optional--> .MAP

Extended flows add .OVR (overlay file), .BGI/.CHR assets (Graph unit path), and linked external .OBJ modules. If output behavior is surprising, artifacts are your first ground truth, not intuition. Runtime paths for BGI and overlays must match deployment layout—developing with assets in-project but shipping an EXE alone causes silent failures at InitGraph or overlay load.

Technical mechanism. A unit’s .PAS compiles to a .TPU; the main program’s .PAS compiles straight into the .EXE, with the link phase folding in the referenced .TPU content (plus any object code embedded via {$L}). Multi-module programs (e.g. a main that uses several units) produce one EXE that embeds all linked code. A .MAP file is produced when map generation is enabled—Options | Linker | Map file in the IDE, or tpc’s map switches on the command line—and lists segment layout, public symbols, and the program start address. Overlays (.OVR) are produced alongside the .EXE when units are compiled for overlay and are loaded at runtime by the overlay manager.

Map file usage. The map lists segments (e.g. CODE, DATA, STACK) with their load addresses and sizes, followed by a public symbol table with segment:offset for each symbol. A crash address like 1234:5678 (hex) maps to a routine by finding the segment, then scanning the symbol list for the highest address ≤ 5678 within that segment—that typically identifies the containing procedure. Segment layout can shift between builds (e.g. when adding units or toggling debug directives), so the map must match the exact binary being debugged. Keep dated copies (MAIN_20260222.MAP) for shipped builds so a user crash report from that date can be correlated.
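The “highest address ≤ offset” rule is mechanical enough to script. A sketch over a hand-entered symbol table—the names and offsets below are invented; a real tool would parse MAIN.MAP instead:

```pascal
program MapLook;
{ Locate the routine containing a crash offset, map-file style:  }
{ pick the public symbol with the highest address <= the fault.  }
const
  NSym = 4;
  SymOfs: array[1..NSym] of word = ($0000, $01A0, $2F00, $5100);
  SymName: array[1..NSym] of string[15] =
    ('@Init', 'LoadAssets', 'RenderFrame', 'FastBlit');
var
  Fault: word;
  i, Best: integer;
begin
  Fault := $5678;             { offset from the crash report (hex) }
  Best := 0;
  for i := 1 to NSym do       { table is sorted ascending          }
    if SymOfs[i] <= Fault then Best := i;
  if Best > 0 then
    WriteLn('Fault offset falls in ', SymName[Best])
  else
    WriteLn('Offset below first symbol - wrong segment?');
end.
```

With the invented table above, offset $5678 lands in FastBlit, the last symbol at or below it.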

Workflow impact. When the program crashes at startup or behaves differently on another machine, the .MAP file tells you where symbols landed in memory—essential for correlating debug output or crash addresses. Pitfall: stale .TPU files: a unit’s interface changed but some dependent unit still compiled against an old .TPU, producing subtle ABI drift. Practical check: before release, delete all .TPU and .OBJ, rebuild from scratch, and verify no “unit version” or “identifier not found” surprises. For overlay builds, confirm that the overlay manager’s search path matches where you place the .OVR at runtime.

IDE settings are architecture settings

Turbo Pascal options are often treated as editor preferences. They are not. They directly alter generated code and runtime behavior:

  • debug info and symbolic visibility
  • code generation and smart-linking strategy
  • stack/heap constraints
  • runtime checking behavior (range, overflow, I/O)
  • code generation assumptions (CPU/FPU target profile)

Disciplined teams freeze these as named build profiles (for example: debug, release, diag) and log intentional changes.

Technical mechanism. Options like {$D+} (debug info), {$O+} (allow a unit to be overlaid), {$R+} (range checking), and {$S+} (stack checking) are compiler directives; the IDE also stores numeric settings (heap size, stack size, target CPU) in its configuration. These feed into code generation and the link phase. A “release” build typically turns off {$D+} and {$R+}, and enables {$O+} only for units that go into an overlay; the smart linker strips unreferenced code either way.

Workflow impact. Switching profiles mid-project without documenting the change leads to “works on my machine” when one developer runs a debug build and another ships a release build—different memory layout and checking can hide or expose bugs. Heap and stack size (configurable in Linker options or via the $M directive) determine how much data and recursion the program can handle; a release build with a reduced heap may expose allocation failures that a development build with generous limits never showed. Pitfall: TP stores options in .TP project files or in the default configuration; a fresh clone may pick up system defaults instead of project-specific values. Check in a .TP file only if the team agrees; otherwise, source-level directives are safer and travel with the code. Practical check: maintain a BUILD.CFG (or equivalent) of canonical options, or inline directives at the top of MAIN.PAS that explicitly set the profile, e.g. {$D+,R+,S+} for debug and {$D-,R-,S-} for release. Note that the command-line compiler automatically reads a TPC.CFG from the current directory—command-line options, one per line—so that is the native home for such a file. Alternatively, keep a single CONFIG.INC that each main program and test includes ({$I CONFIG.INC}) first, so the profile is always in version control; switch directives set inside a used unit affect only that unit, which is why an include file is the reliable mechanism. The $M directive sets stack and heap: {$M stacksize, heapmin, heapmax}. Too small a heap causes “Out of memory” at runtime; too small a stack breaks deep recursion or large local arrays.
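One way to keep the profile with the code is a shared include driven by a conditional define. The file and symbol names below (CONFIG.INC, RELEASE) are conventions for this sketch, not TP requirements; the define would be passed as -DRELEASE on the command line or set in the IDE’s conditional defines:

```pascal
{ CONFIG.INC - included first by every program and test:         }
{   {$I CONFIG.INC}                                              }
{ Build the release profile with: tpc -DRELEASE MAIN.PAS         }
{$IFDEF RELEASE}
  {$D-,L-,R-,S-}            { no debug info, no runtime checks   }
{$ELSE}
  {$D+,L+,R+,S+}            { full debug info and checking       }
{$ENDIF}
{$M 16384, 0, 655360}       { 16K stack, heap may grow to max    }
```

Because the include is merged into each module, every compile unit gets the same profile, and the file itself lives in version control next to the source.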

Directory and path policy (where projects fail first)

Most hard-to-reproduce TP failures are path/config drift:

  • unit search path differs between machines
  • object search path misses external assembly objects
  • include path resolves wrong file version
  • runtime asset path misses .BGI/.CHR/.OVR

A stable project keeps paths explicit in one place and checks them at startup. Do not rely on “whatever current directory happens to be.”

Technical mechanism. TP resolves units and includes in a fixed order: current directory first, then paths from Options | Directories (or -U / -I on the command line). The order matters: if C:\TP\UNITS and C:\PROJECT\UNITS both exist, whichever is searched first wins. Object files ({$L file}) are resolved relative to the source file or the object path. Runtime paths (BGI drivers, CHR fonts) are handled by the Graph unit via InitGraph’s PathToDriver parameter; the program must know where its asset directory lives.

Workflow impact. A developer who runs TP from C:\PROJECT\SRC gets different resolution than one who runs from C:\PROJECT—units in SRC\ may be found first, masking a missing path. Pitfall: PATH and SET in AUTOEXEC.BAT vary by machine; a batch build that does cd \PROJECT\SRC before invoking tpc can behave differently from an IDE launched from a shortcut with a different working directory. Practical check: add a startup check in MAIN.PAS that verifies a known file exists (e.g. ASSETS\BGI\EGAVGA.BGI) and aborts with a clear message if not found; document the required directory layout in README. Use ParamStr(0) to derive the executable location and build asset paths relative to it when possible—that helps when the user runs from a different directory. Example guard at the top of a graphics-heavy main:

var
  f: file;

{$I-}
assign(f, 'ASSETS\BGI\EGAVGA.BGI');
reset(f, 1);
if IOResult <> 0 then begin
  writeln('FATAL: BGI path not found. Run from project root.');
  halt(1);
end;
close(f);
{$I+}

This fails fast instead of letting InitGraph return a cryptic error code.
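The same guard can be made location-independent by deriving the asset path from ParamStr(0), as suggested above. A sketch using the Dos unit’s FSplit (the directory names are this project’s layout; note that under DOS versions before 3.0, ParamStr(0) is empty and the fallback is the current directory):

```pascal
uses Dos;

function ExeDir: PathStr;
var
  D: DirStr;
  N: NameStr;
  E: ExtStr;
begin
  FSplit(ParamStr(0), D, N, E);   { split full exe path            }
  if D = '' then D := '.\';       { pre-DOS 3 fallback             }
  ExeDir := D;                    { includes trailing backslash    }
end;

{ then the guard becomes:                                          }
{   assign(f, ExeDir + 'ASSETS\BGI\EGAVGA.BGI');                   }
```

Passing ExeDir + 'ASSETS\BGI' as InitGraph’s driver path also lets the user launch the EXE from any directory.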

TP5 reference details worth remembering:

  • System unit is used automatically; other standard units are not.
  • non-resident units are resolved by <UnitName>.TPU search (current dir, then configured unit directories).
  • make/build unit source lookup follows the same pattern with <UnitName>.PAS.

On the command line, tpc -Upath1;path2 -Ipath3 sets unit and include paths; semicolon separates multiple entries. Paths are searched in order. Relative paths are interpreted from the current directory at invoke time—another reason to standardize cd before build.

Path resolution behavior. {$I filename} (include) and {$L filename} (link object) resolve differently. Include files are searched along the include path and typically use just the base name ({$I TYPES.INC}); the compiler merges the file contents at that point. Object files for {$L} are usually resolved relative to the source file’s directory first, then the unit/object path. Using a bare name like {$L FASTBLIT} assumes FASTBLIT.OBJ is in the same directory as the .PAS or on the object path. A common pitfall: a unit in SRC\CORE.PAS with {$L ..\ASM\FASTBLIT} works when compiled from project root, but a different working directory can break resolution. Prefer explicit paths in build configuration (-U, -I, object path) over {$L} with relative names when the source tree spans multiple directories. Paths containing spaces (e.g. C:\TP\My Units) can cause parsing issues in some older TP installs; stick to 8.3 names in critical paths when possible.

Practical project shape

PROJECT/
  SRC/
    MAIN.PAS
    CORE.PAS
    RENDER.PAS
  ASM/
    FASTBLIT.ASM
    FASTBLIT.OBJ
  BIN/
  ASSETS/
    BGI/
  BUILD.BAT
  README.TXT
  CHANGELOG.TXT

This looks mundane. That is good. In DOS projects, boring layout is a stability feature.

Technical mechanism. SRC/ holds all .PAS; ASM/ holds assembly source and pre-built .OBJ; BIN/ receives .EXE, .OVR, .MAP; ASSETS/BGI/ holds driver and font files. The compiler’s -E (or equivalent) switch can direct output to BIN\. Keeping .TPU alongside source in SRC\ or in a dedicated UNITS\ subdirectory avoids polluting the root. A UNITS\ folder with only TPUs (no PAS) works if you treat it as build output—the batch compile writes TPUs there and adds -U%CD%\UNITS so dependents find them. This keeps SRC clean of generated files.

Workflow impact. A flat layout with everything in the project root works for tiny projects but becomes unmaintainable when units and assets multiply. Pitfall: storing .TPU in a shared C:\TP\UNITS risks cross-project contamination—two projects with a UTILS unit will overwrite each other’s TPU. Practical check: the batch build should cd to a canonical directory (e.g. project root), set TPC output and unit paths explicitly, and produce deterministic artifacts in BIN\; dir BIN\*.exe after build should show expected output with sensible timestamps. A clean-build target in the batch helps catch stale-artifact bugs:

:clean
del /q SRC\*.TPU 2>nul
del /q SRC\*.OBJ 2>nul
del /q ASM\*.OBJ 2>nul
del /q BIN\*.* 2>nul
echo Cleaned
goto :eof

Invoke with BUILD.BAT clean before a release build. If the batch supports arguments, add if "%1"=="clean" goto clean at the top so build clean and build both work from a single script.

IDE and CLI parity is non-negotiable

If a project only builds via hidden IDE state, you do not have a reproducible build. Keep a batch build path next to the IDE path.

@echo off
setlocal
set MAIN=SRC\MAIN.PAS
rem command/options vary by TP/BP install; -E directs exe to BIN
set TPCDIR=C:\TP
set PATH=%TPCDIR%;%PATH%
cd /d %~dp0
tpc %MAIN% -U%CD%\UNITS -EBIN
if errorlevel 1 goto fail
echo BUILD OK
goto end
:fail
echo BUILD FAILED
:end
endlocal

Technical mechanism. tpc (or bpc) accepts -U for unit search path, -E for exe output directory, -D for conditional defines, and -$ for directives (e.g. -$D+). Exact syntax varies; BP 7 uses -Upath and -Epath (no space between switch and path). The batch file uses cd /d %~dp0 to ensure it runs from the project root regardless of where it is invoked—note that %~dp0, %CD%, setlocal, and 2>nul are features of later command shells (cmd.exe, 4DOS), so a build that must run under plain DOS COMMAND.COM needs hard-coded paths instead. Without -E, the EXE lands next to the main source, which can clutter SRC\.

Workflow impact. When the IDE build succeeds but the batch fails (or vice versa), the difference is usually in paths or options. Pitfall: the IDE may use a different TPC than the one on PATH if the shortcut sets its own environment. Practical check: add tpc %MAIN% 2>&1 | more to capture full compiler/linker output; compare character-for-character with IDE compile log if behavior diverges. Expected outcome: success yields deterministic .EXE in BIN\; failure yields non-zero exit and repeatable error output.

Units are compile boundaries, not just reuse

Units define contracts and incremental rebuild boundaries. This yields two benefits:

  1. interface changes produce immediate compile-time blast radius
  2. implementation-only changes stay local when boundaries are clean

That behavior gives architectural feedback automatically. If tiny edits trigger massive recompilation or link churn, boundaries are weak.

Technical mechanism. A unit’s interface section is compiled first and emitted into the .TPU; dependents read that interface. Changing the interface (adding/removing/altering exported declarations) invalidates all dependent units—they must recompile. Changing only the implementation invalidates only that unit’s TPU. The compiler tracks dependency via timestamps (or explicit make rules) and recompiles only what changed.
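A minimal unit makes the boundary concrete: the interface below is the contract baked into the TPU, and the implementation can change freely without touching dependents. (Unit and routine names are illustrative.)

```pascal
unit Core;

interface                     { this part is the TPU contract     }

function Clamp(V, Lo, Hi: integer): integer;

implementation                { changes here stay local to Core   }

function Clamp(V, Lo, Hi: integer): integer;
begin
  if V < Lo then Clamp := Lo
  else if V > Hi then Clamp := Hi
  else Clamp := V;
end;

end.
```

Editing Clamp’s body recompiles only CORE.TPU; changing its signature in the interface forces every unit that uses Core to recompile—exactly the blast-radius feedback described above.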

Workflow impact. A well-factored project compiles quickly during development: edit one unit’s implementation, only that unit rebuilds. Interface changes are expensive by design—they force you to confront coupling. Pitfall: large “god” units with sprawling interfaces cause rebuild cascades; splitting into smaller units with narrow interfaces reduces blast radius. Practical check: run a clean build, make a one-line implementation change, rebuild—only that unit’s TPU should change. If half the project rebuilds, revisit boundaries. Incremental compile strategy: without make, TP recompiles a unit when its .PAS is newer than its .TPU. Compile in dependency order (leaf units first) or rely on uses order; some teams kept a batch that compiled units explicitly before the main program to avoid timestamp quirks. See also: Turbo Pascal Units as Architecture, Not Just Reuse.

Debug loop mechanics

A strong TP debugging loop is short and explicit:

  1. define expected behavior before run
  2. run the same deterministic input
  3. inspect state at subsystem boundaries
  4. adjust one variable or one assumption
  5. rerun same case

Fast compile-run cycles make this practical dozens of times per hour. That is why teams felt productive: not because bugs were fewer, but because feedback latency stayed low.

Technical mechanism. TP’s integrated debugger uses {$D+} (debug info) and {$L+} (local symbol info—the switch form of $L, distinct from the {$L filename} link directive) to map source lines to addresses. The map file (enabled in Options | Linker, or via tpc’s map switches) lists segment:offset for public symbols. When a crash occurs at a hex address, you look up that address in the map to identify the routine. TD (Turbo Debugger) launches the program under its control with breakpoints set; it requires the same debug info and matching source paths.

Workflow impact. A typical cycle: set breakpoint in TD, run, inspect variables, fix source, recompile, run again. TD can be launched from the command line with td main.exe or from the IDE’s Run menu; ensure the working directory is set so the program finds its assets. Without a map file, a crash dump (e.g. from a user) is useless—you cannot map the fault address back to a function.

Map/debug workflow. When a user reports “it crashed at 1234:5678,” the workflow is: (1) obtain the exact EXE they ran—rebuilding from “same source” may produce different segment layout; (2) ensure you have the matching map from that build; (3) parse the address: segment 1234 hex, offset 5678 hex; (4) open the map, locate the segment (often CODE or C0), find the symbol with the largest address ≤ 5678 in that segment—that is the containing routine; (5) open that routine in the source and reason about what could fault at that offset. TD’s “View | CPU” shows disassembly; correlating the fault address with the map gives you the Pascal routine to inspect. If debug info was stripped (release build), you still have the map for symbol-level localization; line numbers require {$D+} and {$L+} in the binary. Some teams kept a post-build step that copied MAIN.EXE and MAIN.MAP to a RELEASE\ folder with a date suffix, so crash reports could be matched to archived symbol data.

Pitfall: debug builds with {$D+} produce larger executables and slightly different code layout; a bug that appears only in release may be a timing or memory-layout issue. Practical check: keep a debug build profile that always generates .MAP, and ensure your run script or batch uses that profile when investigating crashes. Example map lookup: findstr /C:"RoutineName" MAIN.MAP to locate a symbol’s segment.

Team checklist:

  1. every developer runs tpc -? and records the version in project docs
  2. new machines run a clean build before first commit
  3. before release, one developer performs a memory-stressed boot (load COMMAND.COM, a few TSRs, then run) to catch conventional-memory edge cases
  4. when integrating assembly or C modules, one person owns the calling-convention doc and reviews any new external declarations
  5. archive the exact BUILD.BAT and BUILD.CFG (or equivalent) with each shipped build so you can reproduce it later

External objects from day one

Many real projects mixed Pascal with assembly or C object modules. Keep that integration explicit:

  • source ownership (.ASM/.PAS) is documented
  • object generation step is reproducible
  • calling convention assumptions are written next to declarations

Technical mechanism. {$L FASTBLIT} tells the compiler to read FASTBLIT.OBJ and embed its code in the module being compiled. TP uses the Pascal calling convention (parameters pushed left to right, callee clears the stack) and matches external names case-insensitively; assembly routines must conform. A typical declaration:

{$L FASTBLIT}
procedure FastBlit(Src, Dst: pointer; Count: word); external;

The .OBJ is resolved from the current directory or object path. TASM assembles FASTBLIT.ASM with tasm /mx fastblit (case-sensitive symbols) to produce the object.
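TP 6 and 7 also ship a built-in assembler (BASM), which sidesteps the separate TASM step for small routines at the cost of TASM’s macro features. A sketch of the same kind of routine written inline (the routine name and purpose are invented for illustration):

```pascal
procedure FillWords(Dst: pointer; Value, Count: word); assembler;
{ Inline BASM version - no external .OBJ, no TASM step, and the  }
{ compiler handles parameter addressing by name.                 }
asm
  les di, Dst          { ES:DI -> destination buffer              }
  mov ax, Value
  mov cx, Count
  cld
  rep stosw            { write Count words of Value               }
end;
```

Inline asm keeps the routine inside the normal unit/TPU flow, so the {$L} object-existence and symbol-matching pitfalls below simply do not apply to it.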

Object integration guardrails. When a unit uses {$L MODULE}, the compiler reads MODULE.OBJ while compiling that unit and carries the code inside the unit’s TPU; if MAIN uses CORE and CORE uses {$L FASTBLIT}, the FASTBLIT code travels with CORE.TPU into the final link. A missing or corrupt object produces a compile-time error (“File not found” or “Invalid OBJ file”) at the {$L} line—the object must exist before the unit compiles. Guardrail: run a pre-build step that checks all {$L}-referenced OBJs exist before invoking tpc. If a unit declares a procedure external, the OBJ must export a matching public symbol (names are matched case-insensitively unless the assembler’s case switches say otherwise); tdump fastblit.obj shows the actual exports. Mismatched symbol names surface as unresolved-external errors. When mixing TP units with C object files, the C module must use the correct calling convention (pascal or cdecl as documented) and export names that match the Pascal external declaration; C’s default underscore-prefixed naming does not match TP’s expectations.

Workflow impact. Adding an external module without documenting convention leads to subtle stack corruption or wrong arguments. Pitfall: mixing TP’s default calling convention with C’s cdecl or fastcall from a C-compiled .OBJ causes unpredictable behavior. Practical check: add a BUILD_ASM.BAT that runs tasm on all .ASM files and fails if any object is missing; invoke it from the main build or document it as a prerequisite. Document the expected object-file location (ASM, SRC, or a shared OBJ lib) so new contributors know where to put compiled assembly. Part 2 goes deep on this, including object/module investigation and symbol diagnostics.

Operational checklists that saved teams

Before shipping any build profile:

  1. clean rebuild from source (no stale artifacts)
  2. confirm expected files (.EXE, optional .OVR, BGI assets)
  3. compare binary size/checksum against previous known-good
  4. run one memory-stressed boot profile test
  5. archive build settings with artifact

This is primitive CI and still effective. A minimal pre-ship batch can automate steps 1–3:

call BUILD.BAT clean
call BUILD.BAT
if errorlevel 1 goto :eof
dir BIN\*.EXE
fc BIN\MAIN.EXE C:\RELEASE\MAIN.EXE

fc compares current build to last known-good; manual review of any diff prevents accidental regression.

Reproducibility patterns. To reproduce a build months later: (1) archive the exact BUILD.BAT, BUILD.CFG, and any CONFIG.PAS or directive files with each release; (2) record the compiler version (tpc -? output) in CHANGELOG or a BUILD_INFO.TXT; (3) avoid relying on date/time inside binaries if you need bit-identical output—some linkers embed timestamps. Clean builds from the same source with the same toolchain should produce functionally identical executables; exact byte-for-byte match may require controlling timestamp and path variables. When debugging “works on build machine, fails elsewhere,” compare the full tpc command line, PATH, and current directory between environments. A BUILD_VERBOSE.BAT that echoes %PATH%, cd, and the exact tpc invocation helps document the winning configuration.

Realistic failure modes:

  • Stale TPU: a unit was changed but an old TPU remained; symptoms include “identifier not found” at link or runtime behavior that contradicts the source.
  • Path drift: unit or object path wrong; “Cannot find unit X” or an unresolved external.
  • Config mismatch: release build with debug assertions left on, or wrong overlay flags.
  • Asset missing: BGI or OVR not in the expected path; InitGraph or overlay load fails at runtime.
  • Memory: loading with different TSRs or drivers changes free conventional memory; a marginal program may work in one boot and fail in another.
  • Layout shift: debug vs release directives change code size and data layout; a bug that disappears when {$R+,S+} are re-enabled is usually an uninitialized variable or overflow that the other layout happened to hide.

Troubleshooting patterns. For “unit version mismatch” or odd link errors: delete all .TPU and .OBJ, rebuild from scratch. Record the exact command line and paths that produced the failing build—often the fix is a path typo or a missing -U rather than a source bug. For runtime path failures: add a diagnostic that prints ParamStr(0) and the path it derives for assets. For “works on my machine”: compare mem, path, and set output between machines; document a minimal boot config. For crash-with-no-symbols: ensure the debug build produces a .MAP and that you have the exact source revision that built the crashing binary.

Reproduction kit: when a user reports a crash, ask for (1) the exact EXE they ran, (2) mem and path output, (3) steps to reproduce. Rebuild from tagged source, run under TD with the same input, and use the map to set breakpoints near the fault address.

Why this part matters for the rest of the series

Parts 2 to 5 assume you understand this topology. Without it, TPU forensics, overlay policy, and BGI packaging all look like isolated tricks. They are not. They are consequences of one coherent pipeline. Part 2’s object and unit investigation relies on knowing how TPU and OBJ flow into the linker; overlay tutorials presume you manage paths and artifact placement; BGI packaging assumes asset paths and runtime resolution. A disciplined build loop and checklist habit pays off when those advanced topics introduce new failure modes. New contributors should complete the operational checklist once manually before relying on automation—the exercise builds intuition for what can go wrong and where to look when it does. Parts 3–5 (overlays, BGI, ABI) each add new artifact types and path requirements; the habits established here—clean builds, explicit paths, archived config—scale to those more complex setups.


2026-02-22 | MOD 2026-03-14