Timer Capture Without an RTOS

One of the most useful embedded skills is measuring external timing accurately without hiding behind a heavy runtime stack. You do not need an RTOS to capture pulse widths, frequency drift, or event latency with high reliability. You need a clear timing model, disciplined interrupt design, and careful data handoff.

Timer input-capture peripherals are built for this job. They latch counter values on configured edges and let firmware process deltas later. The hardware does the precise timestamping; software handles interpretation.

A robust architecture starts with three decisions:

  1. counter clock source and prescaler
  2. edge policy (rising, falling, both)
  3. overflow handling strategy

If these are vague, accuracy claims will be vague too.

Choose the timer frequency from measurement goals, not convenience. Too slow and quantization error dominates. Too fast and overflow complexity increases, especially on narrow counters. A good target puts one tick well below your required resolution, with margin left over for jitter analysis.
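
A compile-time check keeps that decision explicit. The sketch below assumes an illustrative 48 MHz timer clock, a prescaler of 4, and a 1 µs resolution requirement; none of these numbers come from a specific part.

  /* Minimal sketch: fail the build if one tick is not comfortably below
   * the required resolution. All numbers here are assumptions. */
  #define TIMER_CLK_HZ    48000000UL   /* assumed timer kernel clock          */
  #define PRESCALER       4UL          /* chosen divider                      */
  #define TICK_NS         (1000000000UL / (TIMER_CLK_HZ / PRESCALER))
  #define REQUIRED_RES_NS 1000UL       /* 1 us measurement requirement        */
  #define MARGIN          4UL          /* want >= 4 ticks per resolution step */

  _Static_assert(TICK_NS * MARGIN <= REQUIRED_RES_NS,
                 "timer tick too coarse for the required resolution");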

Input capture ISR design should be minimal:

  • read captured value
  • read/track overflow epoch
  • write compact event record into ring buffer
  • return

Do not compute expensive statistics inside the ISR unless absolutely necessary. A deterministic ISR duration keeps timestamping reliable.
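
A minimal handler is little more than a copy. The sketch below assumes a hypothetical capture register TIMx_CCR1, a software epoch maintained by the overflow ISR, and a ring_push() handoff like the one described next; all names are placeholders, not a vendor API.

  #include <stdint.h>
  #include <stdbool.h>

  typedef struct {
      uint32_t epoch;     /* overflow count at capture time */
      uint16_t capture;   /* latched counter value          */
      uint16_t flags;     /* edge polarity, channel, ...    */
  } capture_event_t;

  extern volatile uint16_t TIMx_CCR1;                /* placeholder capture register      */
  extern volatile uint32_t g_timer_epoch;            /* maintained by the overflow ISR    */
  extern volatile uint32_t g_dropped_events;         /* loss accounting                   */
  extern bool ring_push(const capture_event_t *ev);  /* lock-free handoff, sketched below */

  void TIMx_CAPTURE_IRQHandler(void)                 /* placeholder vector name */
  {
      capture_event_t ev;

      ev.capture = TIMx_CCR1;        /* 1. read the latched value                    */
      ev.epoch   = g_timer_epoch;    /* 2. record the current overflow epoch         */
      ev.flags   = 0u;               /*    (captures near a wrap may also need a     */
                                     /*     pending-overflow check; see further on)  */
      if (!ring_push(&ev)) {         /* 3. compact record into the ring buffer       */
          g_dropped_events++;        /*    account for loss, never block             */
      }
                                     /* 4. return; no statistics, no printf          */
  }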

The ring buffer is the bridge between hard real-time edges and softer application logic. Make it explicit:

  • fixed-size, lock-free where possible
  • head/tail updates with clear ownership
  • overflow counter for dropped samples
  • sequence IDs for gap detection

If sampling can outrun processing, design for graceful loss reporting instead of silent corruption.
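
A single-producer/single-consumer ring with power-of-two sizing is usually enough. The sketch below assumes a single-core MCU where the capture ISR is the only producer and the foreground loop is the only consumer; multi-core or out-of-order parts would need explicit memory barriers, and the names are illustrative.

  #include <stdint.h>
  #include <stdbool.h>

  #define RING_SIZE 64u   /* power of two so index wrapping is a mask */

  typedef struct { uint32_t epoch; uint16_t capture; uint16_t flags; } capture_event_t;

  static capture_event_t   ring[RING_SIZE];
  static volatile uint32_t ring_head;   /* written only by the ISR (producer)       */
  static volatile uint32_t ring_tail;   /* written only by the main loop (consumer) */

  /* Producer side, called from the ISR. Returns false instead of overwriting,
   * so the caller can count the drop. */
  bool ring_push(const capture_event_t *ev)
  {
      uint32_t head = ring_head;
      if ((uint32_t)(head - ring_tail) >= RING_SIZE) {
          return false;                        /* full: report, don't corrupt       */
      }
      ring[head & (RING_SIZE - 1u)] = *ev;
      ring_head = head + 1u;                   /* publish only after the data write */
      return true;
  }

  /* Consumer side, called from the foreground loop. */
  bool ring_pop(capture_event_t *out)
  {
      uint32_t tail = ring_tail;
      if (tail == ring_head) {
          return false;                        /* empty */
      }
      *out = ring[tail & (RING_SIZE - 1u)];
      ring_tail = tail + 1u;
      return true;
  }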

Overflow math is where many implementations become flaky. A 16-bit timer at high clock rate wraps frequently. You need either:

  • software epoch extension in the overflow ISR, or
  • wider hardware timer if available

Then reconstruct absolute timestamps as (epoch << counter_bits) | capture_value.
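
A sketch of that epoch extension, assuming a 16-bit counter and a placeholder overflow vector name. One caveat worth a comment: a capture latched just before the wrap can be paired with an already-incremented epoch, so production code typically checks the overflow-pending flag inside the capture ISR.

  #include <stdint.h>

  #define COUNTER_BITS 16u

  volatile uint32_t g_timer_epoch;       /* software extension of the 16-bit counter */

  void TIMx_OVERFLOW_IRQHandler(void)    /* placeholder vector name */
  {
      /* clear the update/overflow flag here (vendor-specific) */
      g_timer_epoch++;
  }

  /* (epoch << counter_bits) | capture_value, as one 64-bit timestamp. */
  static inline uint64_t absolute_ticks(uint32_t epoch, uint16_t capture)
  {
      return ((uint64_t)epoch << COUNTER_BITS) | capture;
  }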

Validate overflow handling with deliberate stress:

  • low-frequency signals to force many wraps between edges
  • bursty high-frequency signals near ISR capacity
  • mixed duty cycles

If only one scenario is tested, hidden edge cases survive to production.

Debounce and input conditioning matter too. Electrical noise can generate false captures. Hardware filtering, Schmitt inputs, or digital filter settings on capture channels often improve reliability more than post-processing hacks.

For pulse width measurement, both-edge capture is ideal:

  • capture rising edge timestamp
  • capture falling edge timestamp
  • subtract with wrap-safe arithmetic

For frequency measurement, rising-edge-only capture with period averaging is often cleaner.
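
Both cases come down to wrap-safe subtraction. The sketch below assumes the 83 ns tick from the earlier example; the raw 16-bit form is only valid while the pulse is shorter than one full counter period, after which the epoch-extended 64-bit timestamps take over.

  #include <stdint.h>

  #define TICK_NS 83u   /* assumed tick period; follows from the clock/prescaler choice */

  /* Pulse width from raw 16-bit rising and falling captures. Modular
   * subtraction tolerates one counter wrap, so this holds as long as the
   * pulse is shorter than one full counter period. */
  static inline uint32_t pulse_width_ns(uint16_t rise, uint16_t fall)
  {
      uint16_t delta_ticks = (uint16_t)(fall - rise);
      return (uint32_t)delta_ticks * TICK_NS;
  }

  /* Period between consecutive rising edges, using epoch-extended 64-bit
   * timestamps so arbitrarily slow signals are handled. */
  static inline uint64_t period_ns(uint64_t t_prev, uint64_t t_now)
  {
      return (t_now - t_prev) * (uint64_t)TICK_NS;
  }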

Averaging strategy should reflect signal characteristics. Fixed-window averaging smooths noise but can blur short transients. Exponential filters react faster but need careful coefficient tuning. Choose based on what errors are expensive for your application.
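
As an illustration, an integer exponential filter with a small fixed coefficient is cheap enough to run per sample; the divisor of 8 below is an assumed starting point, not a recommendation.

  #include <stdint.h>
  #include <stdbool.h>

  #define EWMA_DIV 8   /* alpha = 1/8: larger divisor = smoother but slower to react */

  typedef struct {
      int32_t avg;
      bool    seeded;
  } ewma_t;

  /* Update the running estimate with one new period/width sample.
   * Note: deltas smaller than EWMA_DIV are lost to truncation, so keep the
   * units fine enough that this sits below your resolution requirement. */
  static inline int32_t ewma_update(ewma_t *f, int32_t sample)
  {
      if (!f->seeded) {
          f->avg    = sample;              /* seed with the first sample to avoid a long ramp */
          f->seeded = true;
      } else {
          f->avg += (sample - f->avg) / EWMA_DIV;
      }
      return f->avg;
  }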

No RTOS does not mean no scheduling discipline. Use a simple cooperative loop:

  • drain capture buffer
  • update derived metrics
  • publish snapshots atomically
  • run non-critical tasks opportunistically

This model is predictable and usually enough for single-MCU measurement nodes.
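
A sketch of that loop, assuming the ring_pop() handoff from earlier and placeholder metric and background-task functions:

  #include <stdint.h>
  #include <stdbool.h>

  typedef struct { uint32_t epoch; uint16_t capture; uint16_t flags; } capture_event_t;

  extern bool ring_pop(capture_event_t *out);
  extern void metrics_update(const capture_event_t *ev);   /* placeholder              */
  extern void metrics_publish_snapshot(void);              /* placeholder              */
  extern void background_tasks_run_once(void);             /* serial, diagnostics, ... */

  int main(void)
  {
      /* hardware_init(): clocks, timer, capture channel, interrupts (vendor-specific) */
      for (;;) {
          capture_event_t ev;

          while (ring_pop(&ev)) {          /* 1. drain the capture buffer         */
              metrics_update(&ev);         /* 2. update derived metrics           */
          }
          metrics_publish_snapshot();      /* 3. publish a consistent snapshot    */
          background_tasks_run_once();     /* 4. non-critical, opportunistic work */
      }
  }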

Atomic publication is important when data is consumed by other contexts (serial output, control loop, diagnostics). Use double-buffered snapshots or short critical sections to avoid torn reads.
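
A double-buffered sketch, assuming a single writer (the foreground loop) and readers that either run at higher priority than the writer or cannot be interrupted by it mid-copy; if neither holds, add a sequence check or a brief critical section around the read.

  #include <stdint.h>

  typedef struct {
      uint32_t pulse_width_ns;
      uint32_t period_ns;
      uint32_t dropped_samples;
  } metrics_t;

  static metrics_t         snap_buf[2];
  static volatile uint32_t snap_active;    /* index of the buffer readers should use */

  /* Writer (foreground loop): fill the inactive buffer, then flip the index
   * with a single aligned word store. */
  void metrics_publish(const metrics_t *m)
  {
      uint32_t next = snap_active ^ 1u;
      snap_buf[next] = *m;
      snap_active = next;
  }

  /* Reader (serial output, control loop, diagnostics): copy the active buffer. */
  void metrics_read(metrics_t *out)
  {
      *out = snap_buf[snap_active];
  }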

Instrumentation should be built in early:

  • dropped-sample count
  • max ISR latency observed
  • max buffer depth reached
  • timestamp monotonicity checks

Without instrumentation, “seems stable” can hide near-overload behavior.
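
One compact way to carry these is a single stats struct updated from the ISR and the drain loop; the field names below are illustrative.

  #include <stdint.h>

  typedef struct {
      uint32_t dropped_samples;     /* failed ring_push() calls                     */
      uint32_t max_isr_ticks;       /* worst-case ISR duration observed, in ticks   */
      uint32_t max_buffer_depth;    /* high-water mark of the capture ring          */
      uint32_t monotonicity_errors; /* reconstructed timestamps that went backwards */
      uint64_t last_timestamp;
  } timing_stats_t;

  static timing_stats_t g_stats;

  /* Called from the foreground loop for every drained event. */
  static inline void stats_check_timestamp(uint64_t ts)
  {
      if (ts < g_stats.last_timestamp) {
          g_stats.monotonicity_errors++;
      }
      g_stats.last_timestamp = ts;
  }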

Another practical pattern is calibration hooks. If timer clock derives from an internal RC oscillator, drift can distort measurements. Add a calibration path using known references where possible, or at least expose drift estimation telemetry so users understand uncertainty.

When integrating with control logic, separate measurement confidence from measurement value. For each computed metric, carry metadata:

  • valid/invalid
  • sample count
  • age
  • error flags

Control decisions should degrade safely on low-confidence inputs.
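
A sketch of metric-plus-metadata, with illustrative field names and flag bits, and one example of a degrade-safely policy:

  #include <stdint.h>
  #include <stdbool.h>

  #define METRIC_FLAG_WRAP_SUSPECT    (1u << 0)
  #define METRIC_FLAG_DROPPED_SAMPLES (1u << 1)
  #define METRIC_FLAG_SIGNAL_LOST     (1u << 2)

  typedef struct {
      uint32_t value_ns;       /* the measurement itself                */
      bool     valid;          /* has enough data been seen?            */
      uint16_t sample_count;   /* samples contributing to the value     */
      uint32_t age_ms;         /* time since the last contributing edge */
      uint32_t error_flags;    /* METRIC_FLAG_* bits                    */
  } metric_t;

  /* Example policy: a control loop treats anything stale or flagged as low
   * confidence and falls back to a safe default. */
  static inline bool metric_usable(const metric_t *m, uint32_t max_age_ms)
  {
      return m->valid && m->error_flags == 0u && m->age_ms <= max_age_ms;
  }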

Testing must include real signal generators and ugly signals:

  • clean square waves for baseline
  • jittered waveforms
  • missing pulses
  • slow edges near threshold
  • EMI-contaminated lines

Embedded timing code that only passes clean-lab signals is unfinished.

One reason people reach for an RTOS early is fear of concurrency complexity. That fear is understandable. But for focused timing tasks, a disciplined interrupt-plus-buffer model is simpler, faster, and easier to audit. You can always layer a scheduler later if system scope grows.

A compact bring-up checklist:

  • verify edge timestamps with logic analyzer correlation
  • force overflow and confirm wrap-safe math
  • saturate input rate and observe drop accounting
  • validate end-to-end latency from edge to published metric
  • confirm behavior after long-duration runs

If all five pass, you have a reliable timing subsystem.

The deeper lesson is architectural: put precision where it belongs. Let hardware timestamp edges. Let ISR move minimal data. Let foreground logic compute and publish. Clean boundaries produce reliable systems.

This design style scales from small sensor interfaces to motor control telemetry and protocol timing diagnostics. It also teaches excellent habits: deterministic ISR design, explicit loss accounting, and confidence-aware outputs.

You do not need an RTOS to do serious timing work. You need explicit constraints, measurable behavior, and the discipline to keep fast paths simple.

2026-02-22