Turbo Pascal Toolchain, Part 6: Object Pascal, TPW, and the Windows Transition
Parts 1–5 mapped the DOS-era toolchain: workflow, artifacts, overlays, BGI, and the compiler/linker boundary from TP6 to TP7. This part crosses the platform divide. Object Pascal extensions, Turbo Pascal for Windows (TPW), and the move to message-driven GUIs forced a different kind of toolchain thinking. Same language family, new mental model.
This article traces that transition from a practitioner’s perspective: what stayed familiar, what broke, and what had to be relearned. We cover the historical milestones (TP 5.5 OOP, TPW 1.0, TPW 1.5, BP7), the technical culprits that bit migrating teams, debugging and build/deploy workflow differences, and the mental shift from sequential to event-driven execution.
Version timeline (conservative): TP 5.5 (1989) introduced Object Pascal. TPW 1.0 appeared in the Windows 3.0 era (c. 1991). Borland Pascal 7 (1992) offered unified DOS and Windows tooling including DLL support. TPW 1.5 (1992) brought the TP7-generation language to the Windows-only product. OWL matured alongside these releases. Exact dates for some variants vary by region and packaging; the sequence is well established. The transition spanned roughly four years; many teams maintained both DOS and Windows targets during that period.
Structure map (balanced chapter plan)
Before drilling into details, this article follows a fixed ten-chapter plan so the narrative stays balanced rather than front-loaded:
- Object Pascal in TP 5.5
- TPW 1.0 and first Windows workflow shock
- TPW 1.5 in the post-TP7 landscape
- BP7 as dual-target toolchain
- OWL and message-driven architecture
- migration culprits and pitfalls
- debugging model changes (DOS vs Windows)
- build/deploy pipeline changes
- team workflow and review-model changes
- synthesis and transfer lessons
Each chapter carries similar depth: technical mechanism, failure mode, and practical operator/developer workflow.
Object Pascal arrives: TP 5.5 and the OOP extensions
Turbo Pascal 5.5, released in 1989, introduced Object Pascal: the object type
with inheritance, virtual methods, and constructors/destructors. The additions
were substantial for the language, but the toolchain remained essentially the
same. Compile, link, run. .TPU units still carried compiled code; the linker
still produced .EXE. What changed was what you expressed in those units and
how you structured larger programs.
The object keyword (distinct from the later class keyword in Delphi) defined
a type with a hidden pointer to its virtual method table (VMT). Inheritance was
single; you could not inherit from multiple base objects. Virtual methods
required the virtual directive and had to be overridden with the same
signature. The compiler emitted the VMT layout; if you got the inheritance
hierarchy wrong, the wrong method could be invoked at runtime—a form of bug
that procedural Pascal had never had.
unit Shapes;

interface

type
  TShape = object
    X, Y: Integer;
    procedure Move(Dx, Dy: Integer);
    procedure Draw; virtual;
    constructor Init(AX, AY: Integer);
    destructor Done; virtual;
  end;

  TCircle = object(TShape)
    Radius: Integer;
    procedure Draw; virtual;
    constructor Init(AX, AY, ARadius: Integer);
  end;

implementation

constructor TShape.Init(AX, AY: Integer);
begin
  X := AX;
  Y := AY;
end;

destructor TShape.Done;
begin
  { cleanup }
end;

procedure TShape.Move(Dx, Dy: Integer);
begin
  Inc(X, Dx);
  Inc(Y, Dy);
end;

procedure TShape.Draw;
begin
  { base: no-op or default behavior }
end;

constructor TCircle.Init(AX, AY, ARadius: Integer);
begin
  TShape.Init(AX, AY);
  Radius := ARadius;
end;

procedure TCircle.Draw;
begin
  { draw circle at X,Y with Radius }
end;

end.

For DOS projects, this was still a single-threaded, linear-control-flow world. The object model improved structure and reuse; it did not yet change the execution paradigm. Overlays, BGI, and conventional memory limits applied unchanged. Teams adopting Object Pascal in the late 1980s learned inheritance and polymorphism while keeping familiar toolchain habits.
Constructor and destructor discipline mattered. In the early object model
(pre-class syntax), you called Init explicitly and Done before disposal.
Forgetting Done on objects that held resources (handles, memory) leaked. The
toolchain did not enforce this; it was a coding discipline. Virtual method
tables added a small runtime cost and one more thing to get wrong when mixing
object types—passing a TShape where a TCircle was expected could produce
subtle bugs if the receiver assumed the concrete type.
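The Init/Done discipline is easiest to see with dynamic allocation. A minimal sketch, assuming an object type like the TCircle above plus an illustrative pointer type PCircle (not declared in the Shapes unit as shown):

{ Sketch: explicit object lifetime in the TP 5.5 model }
type
  PCircle = ^TCircle;  { illustrative pointer type }
var
  C: PCircle;
begin
  New(C, Init(10, 20, 5));  { allocate, then run the constructor }
  C^.Draw;
  Dispose(C, Done);         { run the destructor, then free the memory }
  { Omitting Dispose(C, Done) leaks the object and anything Done would
    have released; neither the compiler nor the runtime warns you. }
end;

The extended New and Dispose forms, added alongside the object model, tie construction and destruction to allocation; plain New and Dispose on object pointers skip the constructor and destructor entirely.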
The important point for the Windows transition: Object Pascal gave developers the vocabulary (inheritance, virtual dispatch, encapsulation) that OWL and later frameworks would use. Learning OOP in DOS was preparation for OWL’s message-handler hierarchy.
Toolchain impact was minimal. TP 5.5 still produced .TPU units; the compiler
emitted the VMT for each object type, and virtual calls were dispatched through
that table at run time rather than resolved statically at link time. Debugging
object hierarchies required understanding the VMT
structure, but Turbo Debugger could display object instances and their
fields. Migration from procedural to object-based code was incremental: one
unit at a time, starting with leaf modules that had no dependencies. A common
path: introduce a single object type to encapsulate a record and its
operations, compile and test, then add inheritance where it simplified
structure. Big-bang rewrites to “full OOP” were rare and risky; most teams
evolved their codebases gradually.
Turbo Pascal for Windows 1.0: the first wave
Turbo Pascal for Windows 1.0 arrived in the Windows 3.0 era, commonly cited as
around 1991. The toolchain surface looked familiar: blue IDE, integrated
compiler, linker. Underneath, the target was completely different. Instead of
DOS .EXE and real-mode segments, you produced Windows .EXE binaries that
linked against the Windows API, expected a GUI entry point (WinMain), and
ran inside a message loop.
First-time TPW users discovered that a “Pascal program” was no longer a
straight-line script. The main block ran once to register the window class,
create the main window, and enter GetMessage/DispatchMessage. After that,
everything happened inside the window procedure (WndProc) in response to
messages. A typical beginner error: putting “real” logic in the main block,
wondering why it never ran, and only later realizing the block had already
exited into the message loop. Another: assuming that WndProc would be
called once per “event.” In fact, Windows sends many messages—WM_CREATE,
WM_SIZE, WM_PAINT, WM_COMMAND, and dozens more—and the order and
timing depend on user actions and system behaviour. Learning which messages
mattered for a given task was part of the ramp-up.
program HelloWin;

uses
  WinTypes, WinProcs;

const
  IDC_BUTTON = 100;

function WndProc(Window: HWnd; Message, WParam: Word; LParam: LongInt): LongInt; far;
begin
  case Message of
    wm_Command:
      if WParam = IDC_BUTTON then
        MessageBox(Window, 'Hello from TPW', 'TPW', mb_Ok);
    wm_Destroy:
      PostQuitMessage(0);
  else
    WndProc := DefWindowProc(Window, Message, WParam, LParam);
    Exit;
  end;
  WndProc := 0;
end;

var
  Msg: TMsg;
  WndClass: TWndClass;
  hWnd: HWnd;
begin
  WndClass.style := 0;
  WndClass.lpfnWndProc := @WndProc;
  WndClass.cbClsExtra := 0;
  WndClass.cbWndExtra := 0;
  WndClass.hInstance := HInstance;
  WndClass.hIcon := LoadIcon(0, idi_Application);
  WndClass.hCursor := LoadCursor(0, idc_Arrow);
  WndClass.hbrBackground := GetStockObject(white_Brush);
  WndClass.lpszMenuName := nil;
  WndClass.lpszClassName := 'HelloWin';
  RegisterClass(WndClass);
  hWnd := CreateWindow('HelloWin', 'TPW Hello', ws_OverlappedWindow,
    cw_UseDefault, 0, cw_UseDefault, 0, 0, 0, HInstance, nil);
  ShowWindow(hWnd, sw_ShowNormal);
  UpdateWindow(hWnd);
  while GetMessage(Msg, 0, 0, 0) do
  begin
    TranslateMessage(Msg);
    DispatchMessage(Msg);
  end;
end.

The shift was conceptual: instead of “run from top to bottom,” you “register a
window class, create a window, then sit in a message loop.” Event handling was
reactive. The toolchain still produced .EXE, but the runtime contract was
Windows API calls, far procs, and GetMessage/DispatchMessage.
TPW 1.0 shipped with WinTypes and WinProcs units (API bindings) and
optionally WinCrt for console-style apps. The IDE looked like the DOS Turbo
Pascal IDE but targeted a different runtime. Keyboard shortcuts and menu
structure were familiar, which eased the transition. The debugger, however,
had to handle a different execution model: breakpoints in message handlers
fired when messages arrived, not when you single-stepped through a linear
flow. Setting a breakpoint in WndProc and running would eventually stop
there—but only when a message was dispatched to that window. First-time TPW users often hit:
wrong library linking (mixing DOS and Windows units), missing far on
WndProc, and confusion about when their code actually ran—the main block
sets up and enters the loop; the rest happens inside WndProc when messages
arrive. That inversion was the core mental break.
Linker differences mattered. TPW produced Windows executables with a different
header format, different segment layout, and different startup code. You could
not link a DOS object file into a Windows executable or vice versa. Mixed
projects—e.g. a shared algorithm library—had to compile the same source
twice, once for each target, with target-specific uses and possibly
{$IFDEF} guards. The idea of “one binary runs everywhere” did not exist;
you had DOS binaries and Windows binaries.
Understanding the message loop was essential. GetMessage blocks until a
message is available; TranslateMessage converts keystrokes to WM_CHAR when
needed; DispatchMessage invokes the window procedure for the target window.
Every GUI action in a Windows app flows through this pipeline. A handler that
did too much work (e.g. a long computation) would block the loop and freeze
the UI. DOS programs could ReadKey and wait indefinitely; Windows programs
had to return from handlers quickly and defer heavy work (e.g. via timers or
background processing) to avoid stalling the whole application. Developers
coming from DOS often wrote handlers that performed synchronous file I/O or
lengthy calculations, then wondered why the window would not repaint or
respond to input until the operation finished. The fix was to break work
into smaller chunks or use PeekMessage-based cooperative multitasking—a
technique that required unlearning the “run until done” habit.
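The cooperative pattern looked roughly like this. A sketch, where JobDone and DoOneChunk are illustrative stand-ins for the job’s completion state and one bounded slice of work:

{ Sketch: PeekMessage-based cooperative multitasking }
procedure RunLongJob;
var
  Msg: TMsg;
begin
  while not JobDone do
  begin
    DoOneChunk;  { hypothetical: a bounded slice of the computation }
    { Drain pending messages so paint and input are not starved }
    while PeekMessage(Msg, 0, 0, 0, pm_Remove) do
    begin
      if Msg.Message = wm_Quit then Exit;  { preserve shutdown requests }
      TranslateMessage(Msg);
      DispatchMessage(Msg);
    end;
  end;
end;

Unlike GetMessage, PeekMessage returns immediately when the queue is empty, so the loop keeps the UI responsive between chunks.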
TPW 1.5 and the post-TP7 landscape
TPW 1.5 appeared in 1992, alongside the Borland Pascal 7 generation, and brought that era’s language and tooling to the Windows target: better integration with Windows APIs, improved resource tooling, and alignment with the Borland Pascal 7 family. By this point, DOS and Windows were parallel targets within the same product family, not separate products with different pedigrees.
Build workflows diversified. A team might maintain both a DOS and a Windows configuration: different compiler switches, different libraries, different entry points. Shared units had to stay abstract enough to compile for both.
{ Conditional compilation for dual-target units }
unit SharedCore;

interface

procedure DoWork(Data: Pointer);

implementation

{$IFDEF WINDOWS}
uses WinTypes, WinProcs;
{$ENDIF}
{$IFDEF MSDOS}
uses Dos;
{$ENDIF}

procedure DoWork(Data: Pointer);
begin
  {$IFDEF WINDOWS}
  { Windows-specific implementation }
  {$ENDIF}
  {$IFDEF MSDOS}
  { DOS-specific implementation }
  {$ENDIF}
end;

end.

The {$IFDEF} pattern became standard for code shared across targets. (The compiler predefined WINDOWS for the Windows target and MSDOS for the DOS target.) Not all
logic could be shared; APIs differed. But data structures, algorithms, and
business rules could live in common units with thin platform-specific wrappers.
Teams learned to minimize {$IFDEF} surface and push platform branches to
dedicated units.
A common layout: a Core unit with pure logic (no uses of platform units),
a CoreDOS unit that implemented Core for DOS (overlays, BGI, Dos unit),
and a CoreWin unit that implemented Core for Windows (handles, WinProcs).
The program or a top-level unit chose which implementation to use. This kept
the conditional compilation at a few strategic points rather than scattered
throughout.
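A minimal sketch of the pure-logic layer (unit and routine names are illustrative, not taken from any particular product):

{ Core.pas: pure logic; no platform units, so it compiles for both targets }
unit Core;
interface
function NextJobId: Integer;
implementation
var
  Counter: Integer;  { global data is zero-initialized at startup }
function NextJobId: Integer;
begin
  Inc(Counter);
  NextJobId := Counter;
end;
end.

CoreDOS and CoreWin would each use Core plus their platform units (Dos on one side, WinTypes and WinProcs on the other) and expose matching wrapper routines, so {$IFDEF} appears only where the program selects an implementation.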
TPW 1.5 also improved the resource workflow. Earlier TPW had resource support,
but the integration was rougher. By 1.5, the path from dialog design to linked
.EXE was more streamlined, and teams doing serious Windows development could
rely on it.
A practical consideration: machine requirements. DOS Turbo Pascal ran on an 8088 with 256 KB of RAM. TPW and Windows 3.x demanded more—typically a 286 or 386, 1 MB or more of RAM, and a graphics display. Teams developing on higher-end machines had to remember that target users might have minimal configurations. Testing on a “cramped” setup (e.g. 1 MB RAM, 640×480) caught memory pressure and layout bugs that did not appear on development hardware.
BP7: unified DOS and Windows toolchain
Borland Pascal 7, released in 1992, provided a single box with DOS and Windows support. You could build:
- DOS executables (with overlays, EMS, real-mode semantics)
- Windows executables
- Windows DLLs
DLL building introduced a new artifact type and a new linkage model.
library MyLib;

uses
  WinTypes, WinProcs;

procedure MyExportProc(P: PChar); export;
begin
  { DLL-exported procedure }
end;

function MyExportFunc(I: Integer): Integer; export;
begin
  MyExportFunc := I * 2;
end;

exports
  MyExportProc index 1,
  MyExportFunc index 2;

begin
  { DLL entry/exit handling if needed }
end.

The toolchain produced .DLL instead of (or in addition to) .EXE. Callers
used LoadLibrary and GetProcAddress. Version coupling and calling
conventions mattered more: a Pascal DLL had to match what the caller expected.
Teams learned to isolate DLL interfaces and treat them as stable ABI boundaries.
DLL entry and exit ran at load/unload. If a DLL’s initialization touched
other DLLs or global state, load order could cause subtle failures. Export by
name vs. by ordinal had tradeoffs: ordinals were smaller and faster to resolve
but fragile if the export table changed. Many teams standardized on name-based
exports for maintainability and reserved ordinals for performance-critical
paths. The exports section in the library block was the contract; changing
it broke any caller that relied on it. Adding new exports was usually safe;
removing or reordering required coordinated updates to all clients. Teams
that treated the DLL interface as a stable API and versioned it explicitly
(including in documentation) had fewer integration surprises.
Calling a Pascal DLL from C or another language required matching conventions: pascal vs. cdecl, near vs. far, and structure layout. Teams building mixed-language systems documented the ABI explicitly. A small test program that called each exported function and verified return values caught many integration bugs before they reached production.
BP7’s value was consolidation: one purchase, one documentation set, one support channel for both DOS and Windows. Teams could prototype on DOS (faster iteration, simpler debugging) and port to Windows when the design stabilised, or maintain both targets from a shared codebase from the start.
The DLL workflow itself took time to internalise. A library program had no
main loop; it exported entry points. Callers loaded it, resolved exports, and
called. The DLL’s initialization block ran at load; its finalization (if any)
ran at unload. Thread safety was not a primary concern in 16-bit Windows, but
DLL global state was shared across all callers. A bug in one executable’s use
of a DLL could corrupt state for another. Documentation and code review had to
cover “who loads this DLL, when, and what do they assume about its state?”
DLLs also changed the testing matrix: a fix in a shared DLL required
re-testing every application that used it. Versioning the DLL (e.g. embedding
a version resource) and checking it at load time caught many “wrong DLL”
deployment bugs before they manifested as mysterious crashes.
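A load-time check could be as small as exporting a version function from the DLL and verifying it before any other call. A sketch; GetMyLibVersion and the expected constant are illustrative, not part of the MyLib example above:

{ Sketch: reject a mismatched DLL at startup }
const
  ExpectedVersion = $0102;  { illustrative: major 1, minor 2 }
function GetMyLibVersion: Word; far; external 'MYLIB';

procedure CheckLibVersion;
begin
  if GetMyLibVersion <> ExpectedVersion then
  begin
    MessageBox(0, 'Wrong MYLIB.DLL version installed', 'Startup error', mb_Ok);
    Halt(1);
  end;
end;

Failing loudly at startup turned a “mysterious crash later” into an immediate, diagnosable error message.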
Importing a DLL from Pascal required matching the export signature exactly. A common pattern:
{ In unit that uses the DLL }
procedure MyImportProc(P: PChar); far; external 'MYLIB' index 1;
function MyImportFunc(I: Integer): Integer; far; external 'MYLIB' index 2;

If the DLL used pascal convention (Borland default) and the caller did too,
calls worked. Mixing cdecl and pascal caused stack corruption. Teams
building reusable DLLs often documented the calling convention in the header
or in a separate ABI document.
OWL and message-driven architecture
Object Windows Library (OWL) and similar frameworks wrapped the raw Windows API
in an object-oriented, message-handler style. Instead of a giant case
statement in a single WndProc, you subclassed window types and overrode
message handlers.
unit MyWindow;

interface

uses
  Objects, WinTypes, WinProcs, OWindows;

type
  PMyWindow = ^TMyWindow;
  TMyWindow = object(TWindow)
    procedure WMCommand(var Msg: TMessage); virtual wm_First + wm_Command;
    procedure WMPaint(var Msg: TMessage); virtual wm_First + wm_Paint;
  end;

implementation

procedure TMyWindow.WMCommand(var Msg: TMessage);
begin
  if Msg.WParam = 100 then
    MessageBox(HWindow, 'Button clicked', 'OWL', mb_Ok)
  else
    inherited WMCommand(Msg);
end;

procedure TMyWindow.WMPaint(var Msg: TMessage);
var
  PS: TPaintStruct;
  DC: HDC;
begin
  DC := BeginPaint(HWindow, PS);
  { draw using DC }
  EndPaint(HWindow, PS);
end;

end.

The pattern: each message maps to a virtual method; inherited propagates to
the default handler. Toolchain-wise, you still compiled units and linked, but
the design idiom was “object per window, method per message.” This influenced
how teams structured code and how they debugged: failures showed up as wrong
message routing or missing overrides.
OWL abstracted the raw RegisterClass/CreateWindow/message-loop boilerplate.
You derived from TApplication and TWindow, filled in handlers, and the
framework dealt with registration and dispatch. The tradeoff: learning OWL’s
object graph and lifecycle. Windows created by OWL were owned by the framework;
manual CreateWindow calls mixed with OWL could bypass that ownership and cause
duplicate destruction or leaked handles. Teams that went “all OWL” had fewer
ownership bugs than those that mixed raw API and OWL freely.
The virtual wm_First + wm_Command syntax mapped a Windows message ID to a
method. When a message arrived, OWL’s dispatch logic looked up the method and
called it. If you did not override a message, the base class handled it (or
passed to DefWindowProc). This was a clean separation of concerns: each
window class handled only the messages it cared about.
{ OWL: creating a custom control by inheritance }
type
  PMyEdit = ^TMyEdit;
  TMyEdit = object(TEdit)
    procedure WMChar(var Msg: TMessage); virtual wm_First + wm_Char;
  end;

procedure TMyEdit.WMChar(var Msg: TMessage);
begin
  { Filter or transform input before default handling }
  inherited WMChar(Msg);
end;

This pattern—override, do something, call inherited—became the standard for extending OWL controls. The toolchain compiled and linked the same way; the design vocabulary had expanded.
Choosing between raw API and OWL was a real decision. Raw API gave full control and smaller binaries but required more boilerplate and discipline. OWL added framework overhead but let teams ship Windows apps faster. Many TPW projects started with raw API for learning, then switched to OWL once the team understood the message model. Hybrid approaches existed but demanded careful ownership rules for window handles and resources.
OWL also provided standard dialogs, common controls wrappers, and application lifecycle management. Reinventing these with raw API was possible but time-consuming. Teams that adopted OWL early often had a working prototype in days instead of weeks. The tradeoff was dependency on Borland’s framework and its design decisions; customising behaviour sometimes required diving into OWL source or working around framework limitations. For teams building multiple Windows applications, OWL’s consistency across projects was valuable: once you learned the patterns, new apps came together faster. The investment in learning the framework paid off over several products.
Technical culprits and pitfalls
Several failure modes were common when moving from DOS to Windows. Experienced DOS developers often hit these first; the habits that worked in real mode backfired in Windows.
Far-call discipline. Windows callback procs (WndProc, dialogs, hooks) must
be far. The Windows kernel and USER module invoke your code through function
pointers; in the segmented 16-bit model, a near call to a callback caused
immediate corruption when the system tried to return. Missing far or wrong
declaration led to crashes that were hard to reproduce—sometimes only when a
particular code path was taken. The compiler did not always catch it; runtime
did, and not always with a clear message.
Resource coupling. Windows apps depend on .RC resources (dialogs,
menus, icons). Wrong paths, missing resources, or mismatched IDs produced
obscure startup failures. The linker or resource compiler had to be in the loop,
and the resulting .RES had to link into the .EXE. A dialog defined in .RC
with control ID 100 had to match the wm_Command handler that checked for 100.
Typos or reuse of IDs across dialogs caused wrong controls to be identified.
Teams learned to centralize ID constants in a shared include or unit. Some
teams used a naming scheme (e.g. IDC_BUTTON_SAVE, IDC_EDIT_NAME) to make
the link between resource and handler obvious during code review.
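The centralized-constants convention might look like this; the file and identifier names are illustrative:

{ IDS.INC: one authoritative ID list, included via $I in each unit }
const
  idc_Button_Save = 101;
  idc_Edit_Name   = 102;
  idm_File_Open   = 201;

The .RC file had to repeat the same numbers (a resource script cannot read Pascal constants), so reviews checked the two lists against each other; keeping both short and adjacent in the repository made drift easy to spot.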
Segment and memory model. Windows 3.x used segmented memory. Large
allocations, wrong segment assumptions, or stack overflow in message handlers
could corrupt the heap or cause intermittent faults. DOS habits (assume
sequential execution, small stack) did not translate. In DOS, you often knew
exactly when a procedure returned; in Windows, a message handler could call
SendMessage and re-enter the same or another handler before returning.
Recursive message handling required care with stack depth and static state.
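One defensive idiom was a simple guard flag around non-reentrant work. A sketch, with InRecalc as an illustrative unit-level flag:

{ Sketch: guard a handler against reentrant message dispatch }
var
  InRecalc: Boolean;  { unit-level; zero-initialized to False at startup }

procedure HandleRecalc;
begin
  if InRecalc then
    Exit;  { a nested dispatch reached us again: ignore it }
  InRecalc := True;
  { ... work that may SendMessage and so re-enter message handling ... }
  InRecalc := False;
end;

The guard trades completeness for safety: a reentrant request is dropped rather than allowed to corrupt state mid-update, which was usually the right default for UI recalculation.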
String interop. Pascal String[N] vs. C null-terminated. Windows API
expects PChar and length conventions. Conversion bugs caused truncation,
buffer overrun, or wrong display. Teams needed explicit conversion layers and
disciplined use of buffers.
DLL load order and initialization. DLLs had init/exit sequences. Circular
dependencies or incorrect load order led to startup hangs or access violations.
Build order and uses discipline mattered.
String conversion and buffer safety. Windows API calls often expect
null-terminated PChar. Pascal String is length-prefixed. Passing a raw
String variable where PChar was expected could work by accident (many
implementations had a trailing zero) but was undefined. Correct pattern:
{ Safe Pascal-to-Windows string passing }
procedure ShowText(const S: String);
var
  Buf: array[0..255] of Char;
  I: Integer;
begin
  for I := 0 to Length(S) - 1 do
    Buf[I] := S[I + 1];  { Pascal 1-based indexing }
  Buf[Length(S)] := #0;
  MessageBox(0, Buf, 'Title', mb_Ok);
end;

Teams built small conversion units and used them consistently. Ad-hoc StrPCopy
calls scattered across codebases were a maintenance hazard. A StrUtils or
WinStrings unit with PascalToPChar, PCharToPascal, and perhaps
PCharBuf for temporary buffers reduced copy-paste errors and gave a single
place to fix bugs when a new Windows version changed length semantics.
{ Common mistake: forgetting far on Windows callbacks }
procedure BadProc(Window: HWnd; Msg: Word; W, L: LongInt); { WRONG }
procedure GoodProc(Window: HWnd; Msg: Word; W, L: LongInt); far; { CORRECT }

Debugging workflows: DOS vs Windows
DOS debugging was relatively direct. Single process, linear execution, predictable crash locations. Turbo Debugger could single-step, set breakpoints, inspect memory. Overlay and BGI issues were usually reproducible. If a crash happened at a fixed address, you set a breakpoint there, ran again, and examined the call stack. Deterministic replay was the default.
Windows debugging was harder. Message-driven execution meant control flow jumped between handlers. A bug might only appear when a specific message arrived in a specific order. Reproducing required driving the UI in a particular way. Crashes could occur in system code invoked via callback; the immediate cause might be bad parameters passed from your handler. Null pointer dereferences, wrong handle usage, and stack corruption in message handlers produced intermittent failures that did not correlate with “run it again.”
{ Diagnostic: log message flow to understand ordering }
procedure TMyWindow.DefWndProc(var Msg: TMessage);
begin
  WriteLn(DebugFile, 'Msg=', Msg.Msg, ' W=', Msg.WParam, ' L=', Msg.LParam);
  inherited DefWndProc(Msg);
end;

Practitioners used:
- OutputDebugString and a monitor (e.g. Turbo Debugger for Windows or third-party tools) to capture log output
- Conditional breakpoints in the debugger on message IDs (e.g. break when Msg.Msg = wm_Paint)
- Small harness programs that sent specific messages via SendMessage to isolate behavior without manual UI interaction
- Map files to correlate addresses with symbols when analyzing postmortem dumps
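A harness could be a few lines. A sketch that assumes a known window class name (here the HelloWin example) and a known control ID:

{ Sketch: drive a window under test without touching the UI }
program Harness;
uses
  WinTypes, WinProcs;
var
  Target: HWnd;
begin
  Target := FindWindow('HelloWin', nil);
  if Target <> 0 then
    SendMessage(Target, wm_Command, 100, 0)  { simulate the button press }
  else
    MessageBox(0, 'HelloWin is not running', 'Harness', mb_Ok);
end.

Because SendMessage is synchronous, the harness returns only after the target’s handler has run, which made it useful for stepping through a single message path in the debugger.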
The mental shift: from “re-run until it crashes” to “instrument and trace message flow.” Debugging became hypothesis-driven: which message, which window, which order?
Another technique: build a minimal reproduction. If the bug appeared when
clicking a specific button after resizing the window, create a tiny app with
only that button and that resize logic. Isolating the failure often revealed
that the cause was not where intuition suggested—e.g. a WM_PAINT handler
that assumed state set up in WM_SIZE, but WM_PAINT could arrive before
WM_SIZE in certain scenarios. Understanding Windows’ message ordering and
reentrancy was as important as knowing the API. A handler that called
SendMessage to a child window could find itself re-entered if the child’s
handler did something that triggered another message to the parent. Careful
design avoided such cycles; when they occurred, stack overflow or corrupted
state often resulted.
Build and deploy: DOS vs Windows
DOS deployment was simple: .EXE, optionally .OVR, and .BGI/.CHR in a
known directory. Batch files or simple install scripts sufficed. A typical
release package: one folder, a few files, run the EXE. Path assumptions (e.g.
.\BGI for drivers) had to be correct, but the surface was small. Floppy
distribution was common: a single disk for the program, optionally a second
for BGI drivers or overlay files. Users understood “copy to C:\MYAPP and run.”
Windows deployment added:
- Multiple DLLs (Windows system DLLs plus any you shipped)
- Resource files (icons, dialogs) embedded or alongside
- INI files or registry for configuration
- Different machine profiles (video drivers, memory)
The resource pipeline was new. You authored .RC files, compiled them with
BRC.EXE (Borland Resource Compiler) to .RES, and linked the .RES into
the .EXE. Forgetting the resource step produced a binary that ran but showed
no icon, wrong menu, or broken dialogs. Dialog editor output and hand-written
.RC had to stay in sync; ID collisions caused mysterious behavior. A small convention helped: define
all resource IDs in a single $I-included file or a dedicated unit, and
reference them from both .RC and Pascal. Changing an ID in one place
without the other was a frequent source of “the button does nothing” bugs
that took hours to track down.
Build scripts had to branch by target. Release builds often required separate configurations for DOS and Windows, with different linker options and runtime selection. Teams documented “DOS build checklist” vs. “Windows build checklist” and treated them as separate pipelines. A dual-target product meant two release builds, two test passes, and two support matrices (e.g. “runs on DOS 5.0+” vs. “runs on Windows 3.1+”).
Versioning of deliverables also changed. A DOS product might ship “v1.2”; a Windows product might need “v1.2 for Windows 3.1” vs. “v1.2 for Windows 3.11” if patch-level differences mattered. Installer design entered the picture: copying files into the right place, registering extensions, and creating program group icons. Teams that had never needed an “install” step had to learn one. Early Windows installers were often batch files or simple scripts; later, dedicated installer tools (e.g. Borland’s own offerings) became part of the release workflow. The transition from “copy to floppy and run” to “run setup and follow the wizard” was another incremental change that accumulated over the early 1990s.
Team collaboration and mental model shift
DOS-era teams had a shared mental model: one process, one flow, predictable artifacts. Code reviews focused on logic, overlays, and memory. A developer could read a program from top to bottom and follow execution. Ownership of “the main loop” was clear.
Windows-era teams dealt with:
- Split expertise: some people owned dialog layout (.RC files and the resource editor), others message handlers, others DLL interfaces. The “GUI person” and the “engine person” became distinct roles.
- Asynchronous feel: events could arrive in varied order; testing had to cover combinations. “Click A then B” vs. “Click B then A” could expose different bugs.
- Toolchain fragmentation: resource compiler, different linker flags, different debugger workflows. Build breaks could occur in the resource step, which DOS-only developers had never seen.
Documentation shifted. Instead of “run main, then X, then Y,” teams wrote “on WM_COMMAND with ID Z, the flow is…”. Architecture diagrams showed window hierarchies and message flow, not just procedure call graphs. Onboarding documents included “Windows messaging basics” and “OWL object lifecycle.”
New joiners needed to internalize the event loop and the idea that “your code runs when Windows says so.” That was a larger conceptual jump than learning Object Pascal syntax. Experienced DOS Pascal developers sometimes struggled more than newcomers—unlearning “I control the flow” was harder than never having assumed it.
Code review practices adapted. DOS reviews often traced “what happens when we run.” Windows reviews asked “what happens when the user does X, and in what order do messages arrive?” Test plans shifted from “run through the menu” to “for each dialog, test each control, test tab order, test keyboard shortcuts.” The surface area of “things that can go wrong” grew substantially. Senior developers who had debugged DOS programs for years sometimes needed mentoring from junior developers who had started with Windows—not because the seniors were less skilled, but because the younger developers had never internalised the sequential model and adapted to event-driven design more quickly.
A practical collaboration upgrade in that period was formal handoff contracts between UI and engine work. In DOS-only projects, one developer could often own everything from input parsing to rendering. In TPW projects, that approach scaled poorly because message handlers, dialog resources, and shared core logic changed at different speeds. Teams that stayed healthy wrote explicit contracts:
- which messages a form handled directly versus delegated
- which unit owned validation rules
- which module owned persistence and file I/O
- which callbacks were synchronous, and which were deferred
Without this, “small UI tweaks” frequently broke core behavior because a developer moved logic into a handler that now ran under a different timing context.
This kind of document looked heavy for small teams, but it saved days of debugging. It made expectations checkable in reviews and reduced arguments about “who owns this behavior.” It also improved onboarding: a new developer could read one page and understand the current flow before touching code.
Another change was review vocabulary. DOS reviews asked, “Does this procedure return the right value?” Windows reviews increasingly asked, “In what callback context does this run?” and “What other message paths can trigger this state change?” That second question caught an entire class of defects: duplicated state transitions caused by one logic block being reachable through both menu commands and control notifications.
Teams that developed this callback-context discipline were already preparing for Delphi’s event model, even before switching products. The names changed (OnClick instead of WM_COMMAND branches), but the design concern stayed the same: keep state transitions explicit, idempotent where possible, and reviewable under multiple event paths.
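That discipline can be sketched in TPW/OWL style. The fragment below is illustrative only (the window type, command id, and method names are invented): the point is that a menu item and a button bound to the same command id funnel into one explicit, guarded transition, so reviewers can reason about every message path that reaches it.

```pascal
{ Sketch, not a complete program: assumes the usual OWL units
  (Objects, OWindows, WinTypes) and elides window creation. }
const
  cm_Export = 101;  { same command id bound to a menu item and a button }

type
  TMainWin = object(TWindow)
    Exporting: Boolean;
    procedure CMExport(var Msg: TMessage);
      virtual cm_First + cm_Export;
    procedure DoExport;
  end;

procedure TMainWin.CMExport(var Msg: TMessage);
begin
  DoExport;  { single entry point for both message paths }
end;

procedure TMainWin.DoExport;
begin
  if Exporting then Exit;  { idempotent: re-entry is a no-op }
  Exporting := True;
  { ... perform the export ... }
  Exporting := False;
end;
```

In review, the question "what other message paths can trigger this state change?" has a short answer: only `cm_Export`, and the guard makes duplicate arrivals harmless.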
Synthesis: what the toolchain taught
The transition from DOS Turbo Pascal to Object Pascal and TPW was not a language change alone. The Pascal syntax, unit system, and compilation model persisted. What changed was the execution environment, the artifact graph, and the problem-solving strategies. It was a shift in:
- Control flow: from sequential to event-driven. Your code became a set of handlers invoked by the runtime, not a script you controlled from start to finish.
- Artifacts: from .EXE+.OVR to .EXE+.DLL+resources. The artifact graph grew; build and deploy had more moving parts.
- Debugging: from reproducible traces to message-flow analysis. Crashes became context-dependent; instrumentation and hypothesis replaced simple replay.
- Deployment: from single-directory to multi-component, multi-profile. “Works on my machine” expanded to “works on which video driver, which memory configuration, which Windows patch level.”
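The control-flow shift is visible in the smallest TPW program skeleton. A minimal sketch (class registration and window creation elided): the program body no longer *is* the control flow; it hands control to Windows, and everything interesting happens in window procedures driven by the pump.

```pascal
program MinPump;

uses WinTypes, WinProcs;

var
  Msg: TMsg;

begin
  { In DOS the program body was the control flow.  Here it only
    registers a class and creates a window (elided), then loops:
    every event is fetched from the queue and dispatched to the
    appropriate window procedure until WM_QUIT arrives. }
  while GetMessage(Msg, 0, 0, 0) do
  begin
    TranslateMessage(Msg);
    DispatchMessage(Msg);
  end;
end.
```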
The compiler and linker remained recognizable; the surrounding workflow of resources, callbacks, DLLs, and deployment became the new complexity. Teams that succeeded treated the Windows toolchain as a different system with different rules, not as “Turbo Pascal with a new UI library.” The language carried forward; the problem-solving model had to adapt. Developers who made that mental shift were well positioned for Delphi and the 32-bit Windows world that followed: the lessons of event-driven design, resource pipelines, and DLL boundaries transferred directly. Delphi refined the language and tooling, but the conceptual bridge from DOS to Windows had already been crossed.
Practical migration: DOS to Windows checklist
For teams porting an existing DOS application to Windows, a disciplined sequence reduced risk:
- Isolate platform-dependent code. Identify all Dos, Crt, Graph, and overlay usage. Move them behind abstraction layers or {$IFDEF}-guarded units.
- Verify string handling. Audit every place that touches filenames, user input, or API parameters. Introduce conversion routines and use them consistently.
- Add the resource pipeline. Create a minimal .RC, link it, verify the app still runs. Add dialogs and menus incrementally.
- Replace the main loop. The DOS “repeat until done” loop becomes “register, create, message loop.” Ensure no logic assumed it ran “at startup” in a single pass.
- Test on multiple configurations. Different video drivers, different memory, and different Windows versions surfaced bugs that did not appear in development.
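The first two checklist steps can be combined in one small abstraction unit. This is a sketch with an invented unit name; the point is that callers never see Crt or WinProcs directly, and the Pascal-string-to-PChar conversion happens in exactly one place.

```pascal
unit Status;  { hypothetical abstraction over platform-specific output }

interface

procedure ShowStatus(const S: string);

implementation

{$IFDEF WINDOWS}
uses WinTypes, WinProcs, Strings;

procedure ShowStatus(const S: string);
var
  Buf: array[0..255] of Char;
begin
  { Windows APIs want null-terminated strings, not Pascal strings;
    StrPCopy does the conversion at this single boundary. }
  MessageBox(0, StrPCopy(Buf, S), 'Status', mb_Ok or mb_IconInformation);
end;
{$ELSE}
uses Crt;

procedure ShowStatus(const S: string);
begin
  GotoXY(1, 25);
  Write(S);
end;
{$ENDIF}

end.
```

Once every platform-dependent call sits behind a unit like this, the DOS and Windows targets can share the rest of the source tree, which is exactly the dual-target workflow BP7 encouraged.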
Not every DOS app was worth porting. Those that were tightly coupled to hardware (TSRs, direct port I/O, mode-X graphics) required substantial redesign or remained DOS-only. Business logic and data-heavy applications were better candidates.
A phased approach often worked: first a Windows shell that displayed data (perhaps read from a file format shared with the DOS version), then incremental feature parity. Trying to port everything at once usually led to long integration branches and merge pain. Teams that shipped a minimal Windows version early, then iterated, had better feedback and morale.
Related reading
- Turbo Pascal Toolchain, Part 5: From 6.0 to 7.0 - Compiler, Linker, and Language Growth
- Turbo Pascal Overlay Tutorial: Build, Package, and Debug an OVR Application
- Turbo Pascal BGI Tutorial: Dynamic Drivers, Linked Drivers, and Diagnostic Harnesses
Full series index
- Part 1: Anatomy and Workflow
- Part 2: Objects, Units, and Binary Investigation
- Part 3: Overlays, Memory Models, and Link Strategy
- Part 4: Graphics Drivers, BGI, and Rendering Integration
- Part 5: From 6.0 to 7.0 - Compiler, Linker, and Language Growth
- Part 6: Object Pascal, TPW, and the Windows Transition (this article)
- Part 7: From TPW to Delphi and the RAD Mindset