Turbo Pascal Toolchain, Part 2: Objects, Units, and Binary Investigation

C:\RETRO\DOS\TP\TOOLCH~1>type turbop~2.htm

Turbo Pascal Toolchain, Part 2: Objects, Units, and Binary Investigation

Part 1 covered workflow. Part 2 goes where practical debugging starts: the actual artifacts on disk. In Turbo Pascal, build failures and runtime bugs are often solved faster by reading files and link maps than by re-reading source. The tools are simple—TDUMP, MAP files, strings, hex diffs—but used systematically they turn “it used to work” into “here is exactly what changed.”

Structure map. This article proceeds in eleven sections: (1) artifact catalog and operational meaning, (2) TP5 unit-resolution behavior, (3) TPU constraints and version coupling, (4) TPU differential forensics and reconstruction when source is missing, (5) OBJ/LIB forensics and OMF orientation, (6) MAP file workflow and TDUMP-style inspection loops, (7) manipulating artifacts safely, (8) unit libraries and TPUMOVER, (9) external OBJ integration and calling-convention cautions, (10) EXE-level checks before deep disassembly, and (11) a repeatable troubleshooting matrix plus team discipline for reproducibility, with cross references.

Artifact catalog with operational meaning

Typical TP/BP project artifacts:

  • .PAS: Pascal source (program or unit)
  • .TPU: compiled unit (compiler-consumable binary module)
  • .OBJ: object module (often OMF format)
  • .LIB: archive of .OBJ modules
  • .EXE/.COM: linked executable
  • .MAP: linker map with symbol/segment addresses
  • .OVR: overlay file (if overlay build path is enabled)
  • .BGI/.CHR: Graph unit driver/font assets

This list is not trivia. It is your debugging map. OVR files are loaded at runtime when overlay code executes; if the OVR path is wrong or the file is missing, the program may hang or crash on overlay entry rather than at startup. BGI and CHR are resolved by path at runtime—Graph unit InitGraph searches the driver path. Capture these paths in your environment documentation; “works here, fails there” often traces to BGI/OVR path differences.

Tool availability. TDUMP ships with Borland toolchains; if missing, omfdump (from the OMFutils project) or objdump with appropriate flags can suffice for OBJ/LIB inspection, though output format differs. On modern systems, strings and hexdump are standard. The workflows described here assume TDUMP is available; adapt commands if using substitutes.

Inspection tool mapping. Each artifact type has a primary inspection path: TPU → strings, hexdump, or compiler re-compile test; OBJ/LIB/EXE → TDUMP; MAP → diff against baseline. When troubleshooting, pick the artifact closest to the failure and work outward. Link failures start at OBJ/LIB; unit mismatch starts at TPU; runtime crashes may need EXE + MAP to correlate addresses with symbols.

Artifact dependency graph. A program’s build products form a directed graph: sources (.PAS, .ASM) produce TPU/OBJ; those plus linker input produce EXE; optional MAP records the link result. When a failure occurs, identify which edge of this graph is broken. “Compile works, link fails” means the TPU→EXE or OBJ→EXE edge; “link works, crash on startup” means the EXE itself or its runtime dependencies (BGI, OVR, paths). Staying aware of the graph prevents conflating compile-time and link-time issues.

Regression triage. When a previously working build starts failing, the fastest diagnostic is a binary diff: compare the new MAP and EXE (or checksums) to the last known-good. If the MAP is identical, the problem is environmental (paths, runtime, machine). If the MAP changed, the regression is in the build; then compare OBJ/TPU timestamps to see which module changed. This two-step filter—build vs environment, then which module—cuts investigation time dramatically.
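The two-step filter can be scripted in the DOS build environment. A sketch (MYAPP.MAP and BASELINE.MAP are example names; fc sets errorlevel 1 when the files differ):

```bat
rem TRIAGE.BAT -- step 1: build regression or environment?
fc MYAPP.MAP BASELINE.MAP > NUL
if errorlevel 1 goto build_changed
echo MAP identical: suspect environment (paths, runtime, machine)
goto done
:build_changed
echo MAP differs: regression is in the build; compare OBJ/TPU timestamps
:done
```

Run it as the first step of any "it used to work" report; the echo line tells you which half of the investigation to start.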

TP5 unit-resolution behavior (manual-grounded)

Turbo Pascal 5.0 describes a concrete unit lookup order:

  1. check resident units loaded from TURBO.TPL
  2. if not resident, search <UnitName>.TPU in current directory
  3. then search configured unit directories (/U or IDE Unit Directories)

For make/build flows that compile unit sources, <UnitName>.PAS follows the same directory search pattern.

Path-order trap. If CORE.TPU exists in both the current directory and a configured unit path, the first match wins. Two developers with different path or unit-dir settings can compile “the same” project and get different TPUs. Fix: use a single canonical unit directory and document it in BUILD.BAT or README. Resident units from TURBO.TPL bypass file search; updating a .TPU on disk has no effect if the resident copy is used. For custom units, use non-resident layout so you control the artifact.
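A minimal BUILD.BAT sketch of the canonical-directory fix (paths are examples; the /U unit-directory and /GD detailed-map switches are the BP7-era TPC forms, so verify them against your compiler version):

```bat
@echo off
rem BUILD.BAT -- one canonical unit directory for the whole team
set UNITDIR=C:\PROJ\UNITS
tpc /U%UNITDIR% /GD MAIN.PAS
```

Because the unit directory is set inside the script, two developers running BUILD.BAT cannot silently resolve CORE.TPU from different places.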

TPU reality: powerful, version-coupled, poorly documented

.TPU is a compiled unit format designed for compiler/linker consumption, not for human readability. Two facts matter in practice:

  1. TPUs are tightly tied to compiler version/family. TP5 TPUs are not guaranteed compatible with TP6 or BP7; even minor compiler bumps can change internal layout.
  2. Mixing stale or cross-version TPUs causes misleading failures: “unit version mismatch,” phantom unresolved externals, or runtime corruption that does not correlate with recent edits.

Version-pinning rule: lock the compiler and RTL version for a project and do not mix TPUs built by different compilers. If migrating, rebuild all units from source under the new toolchain rather than reusing old TPUs.

Important honesty point: I cannot verify a complete, official, stable byte-level specification for late TPU variants in this repo. Practical reverse-engineering material exists, but fields and layout differ by version. So treat any fixed “TPU format diagram” from random sources as version-scoped, not universal.

TPU differential forensics (high signal technique)

When format docs are weak, compare binaries under controlled source changes.

Recommended experiment:

  1. compile baseline unit and save U0.TPU
  2. change implementation only, compile U1.TPU
  3. change interface signature, compile U2.TPU
  4. compare byte-level deltas (fc /b or hex diff tool)

Expected outcomes:

  • implementation-only changes affect localized regions (code blocks, constants)
  • interface changes tend to alter broader metadata/signature regions and may shift offsets used by dependent units

Concrete example: if you add one procedure to an interface, dependent units that use it must be recompiled. The TPU header/symbol tables change; a stale dependent TPU can produce “unit version mismatch” or subtle ABI drift. Always keep the forensics baseline (U0.TPU) immutable; copy, don’t overwrite.

When comparing deltas, focus on regions near the start (header/metadata) versus the tail (code and data blocks). Interface changes often perturb both; pure implementation changes usually leave the header stable and alter only later regions. If a delta spans many disjoint areas, treat the unit as incompatible with prior dependents and schedule a full recompile. This gives practical understanding of compatibility sensitivity without relying on undocumented magic constants.
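This comparison loop can be automated. Below is a modern-side sketch (diff_regions is a name introduced here, not a Borland tool); it merges nearby differing byte ranges so the output reads as regions, matching the header-versus-tail analysis above:

```python
def diff_regions(a: bytes, b: bytes, gap: int = 8):
    """Return (start, end) byte ranges where two binaries differ.

    Ranges separated by fewer than `gap` identical bytes are merged,
    so output reads as regions rather than single-byte noise."""
    n = min(len(a), len(b))
    regions = []
    i = 0
    while i < n:
        if a[i] != b[i]:
            start = i
            while i < n and a[i] != b[i]:
                i += 1
            # merge with the previous region if the gap is small
            if regions and start - regions[-1][1] <= gap:
                regions[-1] = (regions[-1][0], i)
            else:
                regions.append((start, i))
        else:
            i += 1
    if len(a) != len(b):
        # trailing length difference counts as a differing region
        regions.append((n, max(len(a), len(b))))
    return regions

# U0/U1 stand-ins: one localized change near the front, one later delta
u0 = bytes(100)
u1 = bytearray(u0)
u1[10] = 0xFF
u1[60:64] = b"\x01\x02\x03\x04"
print(diff_regions(u0, bytes(u1)))  # → [(10, 11), (60, 64)]
```

Run it on U0.TPU vs U1.TPU and U0.TPU vs U2.TPU and compare how early in the file the first region appears: header-area deltas suggest interface changes, tail-only deltas suggest implementation changes.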

What to do when you only have a TPU (no source)

This is a common retro-maintenance scenario.

Step 1: classify before touching code

  • identify likely compiler generation (project docs, timestamps, known toolchain)
  • keep original TPU immutable (copy to forensics/)
  • confirm build environment matches expected compiler generation

Wrong compiler often produces “unit format error” or similar before any useful diagnostic. If you have multiple TP versions installed, ensure PATH and invocation point at the correct one.

Step 2: inspect for recoverable metadata

Use lightweight inspection first:

strings SOMEUNIT.TPU | less
hexdump -C SOMEUNIT.TPU | less

Expected outcome:

  • discover symbol-like names or error strings
  • estimate whether unit contains useful identifiers or is mostly opaque

If identifiers are absent, you can still treat the unit as a black-box provider.

Step 3: reconstruct interface incrementally

If you know or infer exported symbols, create a probe unit/program and compile against the TPU using conservative declarations. Iterate by compiler feedback:

  1. declare one procedure/function candidate
  2. compile
  3. fix signature assumptions from diagnostics
  4. repeat

This is slow but effective. Think of it as ABI archaeology, not decompilation.
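A first probe might look like this; SomeUnit is the mystery TPU and GuessedFunc is a hypothetical exported name inferred from strings output, to be corrected from compiler diagnostics:

```pascal
program Probe;

{ SomeUnit is the TPU under investigation (hypothetical name).     }
{ Guess one routine at a time and let the compiler push back with  }
{ "Unknown identifier" or "Type mismatch" diagnostics.             }
uses SomeUnit;

var
  X: Integer;
begin
  X := GuessedFunc(42);  { hypothetical exported function }
  WriteLn('GuessedFunc(42) = ', X);
end.
```

Each compile either confirms a symbol exists with a compatible signature or tells you exactly which assumption to revise.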

No-source caveat. Reconstructing an interface from a TPU alone is best-effort. Some identifiers may be mangled or stripped; constant values and exact type layouts are harder to recover. When in doubt, treat the unit as opaque and call only what you can confirm compiles and behaves correctly. Do not assume undocumented TPU layout is stable across compiler versions.

Recovery priority. If you have partial source (e.g. one unit’s .PAS but not its dependencies), compile that first and see what the compiler reports as missing. The error messages often reveal needed unit or symbol names. Work from known-good declarations inward; avoid guessing large interface blocks from scratch when you can narrow the surface with compiler feedback.

Version-scoping of claims. The TPU layout and OMF record details described here are based on commonly observed behavior in TP5/BP7-era toolchains. Tool variants (TASM vs MASM, TLINK vs other linkers) can produce slightly different OBJ/LIB layouts. Where this article makes format-specific claims, treat them as applicable to the Borland toolchain family; other environments may differ.

OBJ and LIB forensics

When external modules are involved, .OBJ and .LIB are usually where truth is found. In many Borland-era environments, object modules follow OMF records; you can inspect structure with TDUMP or compatible tools (e.g. omfdump, objdump with OMF support where available).

Basic inspection workflow:

tdump FASTBLIT.OBJ > FASTBLIT.DMP
tdump RUNTIME.LIB > RUNTIME.DMP
tdump MAIN.EXE > MAIN.DMP

For .LIB files, TDUMP lists contained object modules and their publics. For .OBJ files, you see the single module’s records. For .EXE files, you see the linked image and segment layout.

In dumps, you are looking for:

  • exported/public symbol names (exact spelling and decoration, if any)
  • unresolved externals expected from other modules
  • segment/class patterns that do not match expectations (e.g. CODE vs CSEG, FAR vs NEAR)

If names look right but link still fails, calling convention or far/near model mismatch is often the real issue.

Manual anchor: TP5 external declarations are linked through {$L filename}. This is documented as the assembly-language interop path for external subprogram declarations. The linker searches object directories when path is not explicit; document that search order for your setup.

OMF record-level orientation (why TDUMP output matters)

You will often see record classes such as module header (THEADR), external definitions (EXTDEF), public definitions (PUBDEF), communal definitions (COMDEF), segment definitions (SEGDEF), data records (LEDATA/LIDATA), fixups (FIXUPP), and module end (MODEND). You do not need to memorize every byte code to gain value. What matters is recognizing:

  • what this module exports (look for PUBDEF and similar)
  • what this module imports (look for EXTDEF and unresolved refs)
  • where relocation/fixup pressure appears (segments, frame numbers)

Example: if tdump FASTBLIT.OBJ shows a public FastCopy in segment CODE, and your Pascal declares procedure FastBlit(...) external;, the name mismatch (FastCopy vs FastBlit) will cause “unresolved external.” The dump gives you the ground truth. OMF does not standardize symbol decoration; Borland tools typically emit undecorated public names for Pascal-callable routines, whereas C compilers may prefix with underscore or use name mangling. If an OBJ came from a C build, strings on the OBJ or TDUMP’s public list shows the actual external name—use that exact form in your external declaration.

Sample TDUMP output interpretation. A typical OBJ dump might show:

Module: FASTBLIT
Segment: CODE  Align: Word  Combine: Public
  Publics: FastCopy
Externals: (none)

This tells you: the routine is named FastCopy, lives in CODE, and does not import any external symbols. If your Pascal expects FastBlit or a different segment, the mismatch is clear.

For LIB dumps, you see one such block per contained OBJ; scan for the symbol you need and note which module provides it. If an OBJ lists externals, those must be satisfied by other linked modules or libraries; unresolved externals at link time usually mean a missing OBJ or LIB in the link command, or a symbol name typo in the providing module.

Link order can matter for LIB files: the linker pulls in members to satisfy unresolved externals in sequence. If two OBJs in a LIB have circular references, their relative order in the archive may determine whether resolution succeeds. When adding new OBJs to a LIB, run tdump LIBNAME.LIB afterward to confirm the member list and publics; TDUMP itself is read-only, but library managers may reorder members. That is enough to explain most “why does this link differently now?” questions.

Map files: the fastest way to end speculation

Generate a map file for non-trivial builds. In IDE: Options → Linker → Map file (create detailed map). On CLI: TLINK typically has a /M or similar switch for map output. Once you have a map, you can answer quickly:

  • did the symbol land in the expected segment?
  • did the expected object module get linked at all?
  • which module caused unexpected size growth?

MAP forensics loop:

  1. Build with map enabled. Save GOOD.MAP as baseline.
  2. After a change or failure, build again and compare segment/symbol layout.
  3. If a symbol is missing or moved unexpectedly, trace back to OBJ/TPU ownership.
  4. If total size jumps, scan the map for newly included modules or segments.

Example interpretation:

0001:03A0  MainLoop
0001:07C0  DrawHud
0002:0010  FastCopy   (from FASTBLIT.OBJ)

This gives direct evidence that your assembly object is linked and reachable. The 0002:0010 format is segment:offset; the (from FASTBLIT.OBJ) annotation confirms the symbol’s origin. If FastCopy does not appear, the OBJ was not linked—check {$L} and link order.
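Symbol-level MAP diffing can be scripted as well. A modern-side sketch (the regex captures only simple "seg:off name" lines; real TLINK maps have more sections, so treat this as a starting point):

```python
import re

def parse_map_symbols(text: str) -> dict:
    """Extract {symbol: 'seg:off'} from lines like '0001:03A0  MainLoop'."""
    syms = {}
    for line in text.splitlines():
        m = re.match(r"\s*([0-9A-Fa-f]{4}:[0-9A-Fa-f]{4})\s+(\S+)", line)
        if m:
            syms[m.group(2)] = m.group(1)
    return syms

def diff_maps(good: str, bad: str):
    """Report symbols missing, newly added, or moved between two maps."""
    g, b = parse_map_symbols(good), parse_map_symbols(bad)
    missing = sorted(set(g) - set(b))
    added = sorted(set(b) - set(g))
    moved = sorted(n for n in set(g) & set(b) if g[n] != b[n])
    return missing, added, moved
```

A missing symbol points to a dropped OBJ or unit; a moved one points to link-order or memory-model changes.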

End-to-end artifact workflow example. Suppose a project fails to link with “Unresolved external FastBlit.”

  1. Run tdump ASM\FASTBLIT.OBJ → inspect publics. If the symbol is FastCopy not FastBlit, fix the Pascal external declaration to match.
  2. Verify {$L ASM\FASTBLIT.OBJ} is present and path correct.
  3. Rebuild with map enabled. Check that FastCopy (or corrected name) appears in the MAP with (from FASTBLIT.OBJ).
  4. If MAP shows the symbol but runtime crashes on call, switch to calling-convention checklist (near/far, Pascal vs cdecl, parameter order).
  5. If all above pass, run tdump MYAPP.EXE and confirm segment layout matches expectations; then consider disassembly only as a last step.

This sequence uses TPU/OBJ/LIB/MAP/EXE in order of diagnostic payoff. Skipping to EXE or disassembly before resolving OBJ/MAP questions wastes time.

When MAP generation fails. Some minimal IDE profiles omit map output by default. If you cannot enable it, capture at least: EXE file size, list of {$L} and uses entries, and a TDUMP of the EXE for segment layout. That still beats debugging without any artifact visibility.

Checksum vs size. File size is a fast sanity check; if the EXE grows by 50KB with no new features, something changed. A simple checksum (e.g. Windows certutil or Unix cksum) catches content drift when size alone is unchanged. For release verification, checksum the EXE and key TPUs/OBJs and record them in the build log. Teams that automate this in their build script catch integration drift before it reaches users.
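A modern-side sketch of the record step (record_checksums is a name introduced here; adapt paths and log format to your build script):

```python
import hashlib
import os

def record_checksums(paths, log="BUILD.LOG"):
    """Append size and MD5 of each build artifact to a build log.

    MD5 is adequate here: the goal is detecting drift, not security."""
    with open(log, "a") as f:
        for p in paths:
            with open(p, "rb") as art:
                digest = hashlib.md5(art.read()).hexdigest()
            f.write(f"{p} {os.path.getsize(p)} {digest}\n")
```

Call it with the EXE plus key TPUs/OBJs after each release build; a later diff of the log shows exactly which artifact drifted.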

MAP format nuances. TLINK map files use segment:offset notation; the segment number corresponds to the link order of segments. A “detailed” map includes module origins—which OBJ or unit contributed each segment—so you can trace size bloat to a specific module. Segment class names (CODE, DATA, CSEG, DSEG) reflect compiler/linker output; minor differences across TP versions are common. When diffing MAPs, compare symbol-to-segment assignments and segment sizes rather than raw class names. A symbol that moved from one segment to another between builds can indicate model changes (e.g. near vs far) or link order tweaks.

Manipulating artifacts safely

Three levels of “manipulation” exist; do not mix them casually.

  1. Clean rebuild manipulation: remove stale TPUs/OBJs and rebuild. Safe and repeatable. Script it: run del *.TPU and del *.OBJ before the build (DOS del accepts one filespec at a time).
  2. Link graph manipulation: reorder/add/remove OBJ/LIB participation. Changes code layout; verify with MAP. Can expose far/near or segment ordering issues.
  3. Binary patch manipulation: edit executable bytes post-link. Risky. Use only for experiments; document offsets, hashes, and rationale. Never treat patched binaries as release artifacts without explicit process.

Rule: if a problem appears after link-graph or binary manipulation, revert to last known-good clean build before drawing conclusions.

Clean script pattern. A minimal DOS-era clean step:

if exist *.TPU del *.TPU
if exist *.OBJ del *.OBJ
if exist BIN\*.EXE del BIN\*.EXE
if exist BIN\*.MAP del BIN\*.MAP

Run this before any “full rebuild” or when chasing artifact-related bugs. Keep source (.PAS, .ASM) and build scripts; treat everything else as regenerable.

Unit libraries and TPUMOVER note

Some TP/BP installations include tooling such as TPUMOVER for packaging unit modules into library containers. Availability and exact workflows are installation-dependent. If present, treat library generation as a release artifact with version pinning, not as a casual local convenience. Migrating TPUs between library and loose-file form can alter search order; document which layout the project uses.

Libraries vs loose TPUs. Loose TPUs in a directory are easier to individually inspect, checksum, and replace during development. Library (TPL-style) packaging reduces file count and can speed unit search on slow media. Choose one approach per project and stick with it; mixing both for the same units invites “which version did we actually link?” confusion.

TPUMOVER and library maintenance. When you add or remove units from a library, always rebuild the library from a clean state rather than incrementally patching. Stale or partially updated libraries produce the same mystery failures as stale TPUs. After any library change, run a full clean rebuild of the main program and verify the MAP reflects the expected unit set. Treat the library as an intermediate build product, not a hand-edited asset.

External OBJ integration: robust declaration pattern

Pascal side:

{$L FASTBLIT.OBJ}
procedure FastBlit(var Dst; const Src; Count: Word); external;

Expected outcome before first run:

  • link succeeds with no unresolved external
  • call does not corrupt stack
  • output buffer changes exactly as test vector predicts

If link succeeds but behavior is wrong, suspect ABI mismatch first. Before blaming the algorithm, verify parameter alignment: Turbo Pascal typically aligns parameters to word boundaries; an assembly routine expecting byte-precise layout may read garbage. Return-value handling also varies: functions returning Word or Integer use AX; LongInt uses DX:AX; records and strings use hidden pointer parameters. Document what your external returns and how the caller expects it; mismatches cause wrong values, not link errors.

Calling-convention cautions. Turbo Pascal’s default calling convention (typically near, Pascal-style: parameters pushed left to right, callee cleans the stack) must match the external routine. Common failure modes:

  1. C vs Pascal convention: C pushes right-to-left, the caller cleans the stack, and C tools often prefix public names with an underscore. Turbo Pascal has no cdecl directive; if the OBJ came from C (TCC, BCC), either rebuild the C side to use Pascal conventions or insert an assembly thunk that adapts the call.
  2. Near vs far: {$F+} forces far calls; assembly routines must use RET FAR and matching prolog. Mismatch causes return to wrong address.
  3. Parameter order and types: var passes pointer; const can pass pointer or value depending on size. Word-sized Count must match assembly expectations (byte, word, or dword).
  4. Segment assumptions: If the OBJ assumes a particular DS or ES setup, document it. Pascal does not guarantee segment registers at call boundary.

Document every external in a small header comment: source file, compiler/TASM options used, calling convention, and any non-default assumptions.

Integration test pattern. Before relying on an external in production code, add a minimal harness that calls it with known inputs and verifies output. For example, fill two buffers, call the routine, and assert the result. If that passes, the OBJ is correctly integrated; failures point to convention or parameter mismatches before you bury the call in complex logic. Run it immediately after linking.
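A harness in that spirit, using the FastBlit declaration from above (the buffer size and identity test vector are illustrative):

```pascal
program BlitTest;

{$L FASTBLIT.OBJ}
procedure FastBlit(var Dst; const Src; Count: Word); external;

var
  Src, Dst: array[0..15] of Byte;
  I: Integer;
begin
  { known input: 0..15 in Src, zeros in Dst }
  for I := 0 to 15 do
  begin
    Src[I] := I;
    Dst[I] := 0;
  end;
  FastBlit(Dst, Src, 16);
  { assert the output matches the test vector exactly }
  for I := 0 to 15 do
    if Dst[I] <> I then
    begin
      WriteLn('FAIL at offset ', I);
      Halt(1);
    end;
  WriteLn('OK');
end.
```

If OK prints, integration (symbol name, convention, stack discipline) is sound; any later failure is logic, not ABI.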

TP5 reference also states {$L filename} is a local directive and searches object directories when a path is not explicit, which is a common source of machine-to-machine drift. Prefer explicit paths in build scripts: {$L ASM\FASTBLIT.OBJ}.

TLIB workflow for multi-module assembly. When you have several .ASM files producing .OBJ modules, you can either list each with {$L mod1.OBJ} {$L mod2.OBJ} … or build a .LIB and link that. TLIB creates/updates libraries:

tlib FASTMATH +FASTBLIT +FASTMUL +FASTDIV

Then the library replaces the list of individual OBJ directives. A caveat: Turbo Pascal’s {$L} directive is documented for OBJ files, and many TP versions will not accept a .LIB argument directly; in that case keep one {$L} per OBJ and reserve the LIB for TLINK-driven link steps. TDUMP on the LIB shows which modules and publics it contains. Use a LIB when you have many OBJ files and want a single linkable archive; keep explicit OBJ references when you need control over link order (e.g. for overlays or segment placement).

EXE-level checks before disassembly

Before deep reversing, inspect executable-level metadata. TDUMP on .EXE shows DOS header, relocation table, segment layout, and entry point. The DOS header contains the relocation count (number of fixups applied at load), initial CS:IP (entry point), and initial SS:SP (stack). Relocation entries point to segment references that the loader patches when loading at a non-default base; a change in relocation count often indicates new far pointers or segment-relative refs.
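The fixed MZ header fields can be decoded with a short modern-side script (mz_header_info is a name introduced here; the field layout is the standard DOS MZ header):

```python
import struct

def mz_header_info(data: bytes) -> dict:
    """Decode the MZ (DOS EXE) header fields used in EXE-level checks."""
    if data[:2] != b"MZ":
        raise ValueError("not an MZ executable")
    # 13 little-endian words: magic, bytes-on-last-page, pages, reloc
    # count, header paragraphs, min/max alloc, SS, SP, checksum, IP,
    # CS, relocation-table offset
    (_magic, bytes_last_page, pages, nreloc, _hdr_paras,
     _minalloc, _maxalloc, init_ss, init_sp,
     _checksum, init_ip, init_cs, _reloc_off) = struct.unpack_from("<13H", data)
    return {
        "relocations": nreloc,
        "entry": f"{init_cs:04X}:{init_ip:04X}",
        "stack": f"{init_ss:04X}:{init_sp:04X}",
        # a zero bytes-on-last-page value means the last page is full
        "image_size": (pages - 1) * 512 + (bytes_last_page or 512),
    }
```

Comparing relocation counts and entry points between the current and baseline EXE catches the drift described above without a full TDUMP diff.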

High-signal EXE checks:

  • relocation count changes (indicates new segments or far model shifts)
  • stack/code entry metadata drift
  • total image size deltas
  • segment order and class names (e.g. CODE, DATA, STACK)

tdump MYAPP.EXE | findstr /i "reloc entry segment"

Or capture full dump and diff against known-good:

tdump MYAPP.EXE > MYAPP_EXE.DMP
fc MYAPP_EXE.DMP BASELINE_EXE.DMP

Large unexpected changes usually indicate build-profile or link-graph drift, not random compiler mood. This quick check avoids hours of aimless debugging. If the EXE header and relocation table match a known-good build, but behavior differs, the problem is likely runtime (paths, overlays, memory) rather than link-time.

High-value troubleshooting table

Use this as a repeatable decision matrix. Check in order; do not skip to disassembly before ruling out high-signal causes. The goal is to eliminate most failures with minimal tool use—TDUMP, MAP diff, and clean rebuild cover the majority of cases.

“Unresolved external”

Most likely causes (check first):

  1. symbol spelling/case mismatch (TDUMP the OBJ for exact public name)
  2. missing object or library in link graph (verify {$L} and TLINK command)
  3. module compiled for incompatible object format/profile (OMF vs COFF, etc.)
  4. wrong unit or OBJ pulled from alternate path (path order, current dir)

Quick check: tdump SYMBOL.OBJ | findstr /i "public pubdef" — does the exported name match your Pascal external declaration exactly?

“Runs, then random crash after external call”

Most likely causes (check first):

  1. parameter passing mismatch (order, size, var vs value)
  2. caller/callee stack cleanup mismatch (Pascal vs cdecl)
  3. near/far routine mismatch (return address on wrong stack location)
  4. segment register assumptions violated (DS, ES not as assembly expects)

Quick check: Add a minimal passthrough test: call the routine with known-good inputs and confirm output. If that works, the failure is in integration, not the routine itself.

“Unit version mismatch”

Most likely causes:

  1. TPU built by different compiler version
  2. interface changed but dependent unit not recompiled
  3. stale TPU in a path that shadows the correct one

Quick check: Delete all TPUs, rebuild from scratch. If it works, you had stale artifacts.

“Binary suddenly huge”

Most likely causes:

  1. profile drift (debug info/checks enabled)
  2. broad library dependency pull
  3. accidental static inclusion of assets/modules (BGI linked in, large data)

Quick check: Compare MAP files. New segments or modules explain the growth.

“Works on my machine, fails elsewhere”

Most likely causes:

  1. path differences (unit dir, object dir, BGI dir, overlay dir)
  2. different DOS/TSR footprint (less conventional memory)
  3. different compiler or RTL version installed

Quick check: Document paths and versions on working machine; replicate exactly on failing one, or ship with explicit relative paths.

“Overlay load fails or hangs”

Most likely causes:

  1. OVR file not in working directory or configured overlay path
  2. overlay unit compiled with different memory model than main program
  3. overlay segment size exceeds OVR file (truncated or mismatched build)

Quick check: Confirm OVR file size matches expectations; run tdump on the EXE to see overlay segment declarations. Compare with a known-good overlay build.

Summary: signal order for artifact inspection

When you do not know where to start, use this priority:

  1. MAP — fastest way to see what actually linked. Generate it; diff it.
  2. OBJ/LIB + TDUMP — resolves “unresolved external” and symbol-name issues.
  3. TPU — resolves “unit version mismatch” and interface drift; use differential forensics when format is unknown.
  4. EXE + TDUMP — confirms final layout; use when MAP and OBJ checks pass but runtime behavior is wrong.
  5. Disassembly — last resort when binary layout is correct but logic is suspect.

Most TP toolchain bugs are solved at steps 1–3. Avoid jumping to 4–5 without evidence.

Checkpoint discipline. When you have a working build, immediately: (a) save BASELINE.MAP, (b) note EXE size and optionally CRC, (c) archive BUILD.TXT. If a later change breaks things, you can diff MAP vs baseline, compare sizes, and often pinpoint the regression without touching source. Teams that skip checkpoints repeat the same forensic work repeatedly. A single baseline from a known-good build can save hours of regression hunting.
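The checkpoint can be a four-line batch file (names are examples):

```bat
rem CHECKPOINT.BAT -- snapshot the known-good build
copy MYAPP.MAP BASELINE.MAP
copy MYAPP.EXE BASELINE.EXE
dir MYAPP.EXE > BUILD.TXT
```

Run it once after every verified-good build; the BASELINE files become the diff targets for the next regression hunt.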

Before seeking help. If you are stuck and plan to ask a colleague or post online, gather: exact error message, compiler/linker version, output of tdump on the failing OBJ (for link errors) or EXE (for runtime), and a one-line description of the last change. That context turns “it doesn’t work” into a solvable puzzle. Omitting the MAP or TDUMP output is the most common reason diagnostic threads go nowhere.

A disciplined binary investigation loop

  1. state expected outcome before run
  2. build clean (no stale TPU/OBJ)
  3. capture .EXE size/hash + .MAP
  4. inspect changed symbols/segments first
  5. only then debug/disassemble

This order keeps you from chasing folklore. Teams that skip step 3 often waste hours on “it used to work” bugs that are pure link/artifact drift.

When the loop stalls. If you have done clean rebuild, MAP diff, TDUMP on OBJ and EXE, and the problem persists, the cause may be environmental: TSR conflicts, EMS/XMS driver behavior, or DOS version differences. At that point narrow the environment: boot minimal config, disable TSRs, try a different DOS version or machine. Document the minimal repro configuration; that becomes the bug report. Before concluding “environment only,” re-run the loop with a single-source-change variation: revert the most recent edit, rebuild, and compare. If the revert fixes it, the regression is in that change, not the environment—even when the artifact diff is subtle.

Team and process discipline for artifact reproducibility

Reproducibility fails when one developer has hidden state that others do not. Enforce these practices:

  • Version-lock the toolchain: document exact TP/BP version, TASM version, and any third-party units. Rebuild from source on a clean checkout must produce identical artifacts.
  • Explicit paths in scripts: avoid “current directory” assumptions. Build scripts should set PATH, unit dirs, and object dirs explicitly.
  • Archive build products with releases: keep EXE + MAP + optional OVR and a short BUILD.TXT (compiler version, options, date) in the release package. That gives future maintainers a diff target.
  • One clean rebuild before any “weird bug” investigation: if a bug appears after days of incremental builds, delete TPUs/OBJs and rebuild. Many “impossible” bugs vanish.
  • ABI checkpoint for externals: when integrating a new OBJ, record its public symbols (from TDUMP), calling convention, and any segment or alignment assumptions in a small integration doc. Future maintainers can verify correctness without re-deriving the ABI from scratch.
  • Treat TPU/OBJ as derived, never committed: only source (.PAS, .ASM) goes in version control. Rebuild artifact sets from source on each machine. Committed TPUs from one developer’s machine can silently break another’s build when compiler versions differ. Document this policy in the project README.

These rules are low-cost and eliminate a large class of non-reproducible failures.

Build log discipline. For each release or debugging baseline, record in BUILD.TXT or equivalent: compiler executable and version, key options ({$D+}, {$R+}, memory model), unit and object paths, and checksum or size of the main EXE. When a bug report arrives months later, that log tells you whether you can reproduce the exact binary or must narrow the search.

Handoff protocol. When passing a project to another maintainer, include: source tree, BUILD.BAT or equivalent, BASELINE.MAP from last known-good build, and a one-page “toolchain and paths” document. Without that, the next person spends days rediscovering unit search order, object paths, and which TP version was used. The hour you spend documenting pays off on the first “works on my machine” incident.

Cross references

Next part

Part 3 moves from artifacts to runtime memory strategy: overlays, near/far costs, and link strategy under hard 640K pressure.


Summary for busy maintainers. When a TP project misbehaves: (1) clean rebuild first; (2) generate and diff the MAP; (3) TDUMP any external OBJs to confirm symbol names; (4) verify calling conventions on externals; (5) check path and version consistency. Most failures resolve before you touch a disassembler. Treat TPU/OBJ as version-locked, path-explicit, and never-committed. Document once; benefit forever. The artifact-focused mindset that Part 1 introduced becomes concrete here: files on disk are your primary evidence, source code is secondary when debugging build and link failures.

2026-02-22 | MOD 2026-03-14