Developing wave-based calibration for vector network analyzers

Current state-of-the-art in RF calibration

Calibration is an essential part of any RF, microwave and millimeter-wave measurement process. The accuracy of a bare-bones high-frequency measurement instrument is jeopardized by non-idealities, cabling and instrument drift. Calibration was therefore introduced together with the instrumentation benches to remove these systematic errors as completely as possible.

This work concentrates on the calibration of a vector network analyzer (VNA), an instrument that characterizes the behavior of systems operating at microwave frequencies. The calibration increases the accuracy of the measurement process by removing the systematic errors.

VNA calibration methods have been developed continuously in the literature since the introduction of the VNA in the 1960s. This has resulted in a wide variety of methods, whose diversity is dictated by the type and the number of standards used.

Standards can be impedances (1-port), transmission lines (2-port) or complete circuits (1-port and 2-port). They can be exactly known, partially known or unknown-but-repeatable, and they depend on the technology used (connectorized, probed, free space). Generally speaking, a minimal set of standards is selected so that the calibration coefficients can be solved for analytically. The class of ‘auto-calibrating’ methods exploits the redundancy in the measurements to some extent to avoid exact prior knowledge of the standards (TRL [1], Unknown thru [2], LRM [3], …), but still requires a standard-specific solution for each set of standards.

Taking a step back to locate the similarities between the state-of-the-art methods, it becomes clear that their common basis is:

  • Calibration of the S-parameters only: Both the device and the instrument are assumed to be linear time-invariant (LTI) systems. Knowing the S-parameters is hence equivalent to knowing the system. Calibration of the S-parameters is therefore sufficient, and the standards are characterized by their S-parameters.
  • Single frequency calibration: LTI systems support the superposition principle, hence the calibration can be solved on a frequency-by-frequency basis. Note that all measurement frequencies have to be calibrated separately.
  • Non-parametric calibration: The calibration is known at the calibration frequencies only. There is no knowledge about the behavior in between calibration frequencies.
  • The calibration depends on the standard set: Each standard set results in a new method, which has to be fully characterized and tested. This limits the flexibility.
  • The calibration is based on analytic expressions: Imposing minimal standard sets leads to equations that can be solved analytically, at the cost of losing the uncertainty estimation of the calibration (see the sketch after this list).
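To make the per-frequency, analytic character of classical calibration concrete, the sketch below solves the textbook one-port error model (directivity e00, source match e11, and reflection tracking folded into dE = e00*e11 - e10*e01) from three exactly known standards, one frequency bin at a time. The function names and array layout are hypothetical; only the error model itself is standard.

```python
import numpy as np

def one_port_cal(gamma_std, gamma_raw):
    """Classic one-port calibration, solved frequency by frequency.

    Error model: G_raw = e00 + e11*G*G_raw - dE*G, which is linear in
    the three unknowns e00, e11 and dE = e00*e11 - e10*e01.

    gamma_std, gamma_raw: complex arrays of shape (3, n_freq) holding
    the known reflections of three standards and their raw measurements.
    """
    n_freq = gamma_std.shape[1]
    e00 = np.empty(n_freq, complex)
    e11 = np.empty(n_freq, complex)
    dE = np.empty(n_freq, complex)
    for k in range(n_freq):  # every frequency bin is solved independently
        A = np.column_stack([np.ones(3),
                             gamma_std[:, k] * gamma_raw[:, k],
                             -gamma_std[:, k]])
        e00[k], e11[k], dE[k] = np.linalg.solve(A, gamma_raw[:, k])
    return e00, e11, dE

def correct(gamma_raw, e00, e11, dE):
    """Move a raw DUT reflection to the calibrated reference plane."""
    return (gamma_raw - e00) / (e11 * gamma_raw - dE)
```

Note how the solution is exact (three equations, three unknowns): no redundancy is left from which a noise or uncertainty estimate could be extracted.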

While the basic assumptions underpinning VNA calibration have remained unchanged since the 1960s, three quantum leaps happened in RF measurements: differential system measurements, nonlinear system measurements and parameter-varying (time-varying or temperature-varying) system measurements.

The latter two classes allow the system behavior to deviate from linear time invariance: it is no longer allowed to use superposition to characterize the response to an arbitrary excitation as a sum of sine-wave responses. Despite the increased complexity of the measurements performed (differential, nonlinear, time-varying), the initial linear S-parameter calibration has largely remained unchanged. The extension needed for nonlinear characterization resulted merely in an add-on to the S-parameter calibration, using a separate standard element such as a pulse generator ([4],[5]). For parameter-varying systems, the calibration will have to be known on a much denser frequency grid.

S-parameters are no longer sufficient to characterize a nonlinear or time-varying DUT. Calibrated waves, rather than their calibrated ratios, become the natural measurands. A shift of the calibration paradigm from S-parameter-based to wave-based calibration is therefore needed. Realizing this shift to waves answers the call for characterization under multi-line spectral excitation, which results in either an increased spectral resolution (time variation) or a wider spectral bandwidth (nonlinearity) of the waves present at the ports of the DUT. Measuring at the excited frequencies only is no longer sufficient.
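As a concrete illustration of the shift, the sketch below applies a wave-based correction at a single port. It assumes, hypothetically, that a 2x2 relative error box T per frequency is available from a classical calibration, and that one complex scale factor k per frequency has been fixed by absolute power and phase standards, in the spirit of the add-on calibration of [4],[5].

```python
import numpy as np

def calibrate_waves(a_raw, b_raw, T, k):
    """Map raw receiver readings to calibrated waves at the DUT plane.

    a_raw, b_raw: complex arrays (n_freq,) with the raw incident and
                  reflected wave readings at every frequency bin.
    T:            relative error boxes, complex array (n_freq, 2, 2),
                  known from a classical calibration up to one complex
                  scale per frequency.
    k:            absolute scale, complex array (n_freq,), fixed by a
                  power meter (magnitude) and a phase reference (phase).
    """
    raw = np.stack([a_raw, b_raw], axis=-1)[..., None]  # (n_freq, 2, 1)
    cal = k[:, None, None] * (T @ raw)                  # apply error box
    return cal[:, 0, 0], cal[:, 1, 0]
```

Because the waves themselves, and not only their ratios, must be correct, the scale k has to be known at every bin of the dense frequency grid, not only at the excited lines.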

This calls for a new paradigm in calibration. Instruments have evolved away from a single-sine single-frequency measurement in different ways as summarized below:

  • Excitation spectra are dense and wideband: Arbitrary waveform generators create spectrally dense excitations over wide modulation bandwidths. Time-varying and nonlinear DUTs expand the bandwidth and increase the resolution even further. The number of calibration frequencies therefore increases by at least two orders of magnitude.
  • Measurements are fast: Acquiring 10 frequency sweeps per second is becoming mainstream, resulting in access to vast amounts of (repeated) data (see the sketch after this list).
  • Setups are agile: Frequency is no longer the only variable that is swept in a measurement. Dependence on signal power, port impedances, bias settings under large signal operation, or temperature in time-varying systems is also characterized and swept.
  • Ad-hoc calibration kits pop up: Specific measurements require specific calibration kits (e.g. on-wafer) to ensure accuracy. These home-made standards have a lower quality than their commercial counterparts and are not specified a priori. Their uncertainty also needs to be propagated.
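The repeated data mentioned above is what makes a stochastic treatment feasible: sample statistics over repeated sweeps yield a nonparametric noise model at every frequency bin. The minimal sketch below, with a hypothetical array layout, computes exactly that.

```python
import numpy as np

def sweep_noise_model(sweeps):
    """Nonparametric noise model from repeated frequency sweeps.

    sweeps: complex array (n_repeat, n_freq) of raw wave readings,
            one row per repeated sweep.
    Returns the per-frequency sample mean and sample noise variance
    (circular complex noise assumed).
    """
    mean = sweeps.mean(axis=0)
    resid = sweeps - mean
    var = (np.abs(resid) ** 2).sum(axis=0) / (sweeps.shape[0] - 1)
    return mean, var
```

These sample variances can later serve as weights in the calibration estimator and as the starting point for uncertainty propagation.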

Goal of this project: methods overview

“Wave-based calibration for nonlinear and parameter-varying system measurement and identification enables accurate, validated measurement and modeling with uncertainty bounds included”

Changing the basic assumptions underpinning the calibration framework will generate a quantum leap in calibration. This evolution is mandatory to keep up with the changes in the measurement instrumentation. The framework will be developed along six main axes:

  • Develop a generic N-port calibration framework to embed flexible, user-determined standard set selections in a single estimation framework (see the sketch after this list).
  • Validate the calibration using a stochastic N-port framework including measured experimental noise properties, (measured) reconnection errors, and (simulated) standard uncertainties.
  • Construct ‘user calibration kits’ based on a user-specified mix of custom-built and commercial standards. Custom-built elements will be characterized by a device model and an uncertainty model, with minimal user interaction. An accurate calibration uncertainty bound will result.
  • Develop a minimum mean square error calibration framework to help the user maximally identify and minimize the influence of parasitic problems through the identification of custom standard sets. This will maximally circumvent the technological limitations of the user's setup.
  • Develop an “optimal” calibration that completes a set of standards with new standards that maximally decrease the uncertainty on the DUT measurements.
  • Develop an identification-friendly calibration. Even small, abrupt (noisy) changes in the error terms over frequency lead to model extraction problems. A combined, single-step instrument calibration and device estimation framework will be developed to maximize the model quality while maximally maintaining the calibration accuracy.
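A minimal sketch of the first axis, under assumed interfaces: any user-selected mix of standards enters a single noise-weighted least-squares cost, so that a new standard set changes the data, not the method. The packing of the error terms in theta and the predict callables are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

def cal_residuals(theta, standards, variances):
    """Stacked, noise-weighted residuals over a user-chosen standard set.

    theta:      real vector packing all error terms (hypothetical packing).
    standards:  list of (predict, measured) pairs; predict(theta) returns
                the modeled raw response of one standard measurement.
    variances:  matching list of noise variances, e.g. obtained from
                repeated sweeps, used as weights.
    """
    res = []
    for (predict, measured), var in zip(standards, variances):
        r = np.atleast_1d((predict(theta) - measured) / np.sqrt(var))
        res.append(np.concatenate([r.real, r.imag]))  # split re/im parts
    return np.concatenate(res)

# fit = least_squares(cal_residuals, theta0, args=(standards, variances))
# The Jacobian at the optimum also yields a parameter covariance, i.e.
# the calibration uncertainty bound the stochastic framework is after.
```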