Complex quantities

There are methods for evaluating the uncertainty of complex measurements that extend the approach employed for real-valued problems. The mathematical details are more complicated, but many concepts are similar.

The material in this section is drawn from journal articles and other reports prepared by members of the RF and microwave metrology community.

The measurement function

To formulate the data analysis, the measurement procedure must be described as an equation that involves all influence quantities. This can be expressed as [1]
\[\boldsymbol{Y} = \boldsymbol{f}(\boldsymbol{X}_1,\boldsymbol{X}_2,\cdots,\boldsymbol{X}_N)\;,\]

where \(\boldsymbol{Y}\) is the measurand and the \(\boldsymbol{X}_i\) are influence quantities.

The measurand can usually be derived from a subset of the \(\boldsymbol{X}_i\), with the remaining terms representing residual errors. These errors are nuisance terms in the calculation (see power measurement), because the best estimates of their values are typically 0 or 1 and do not affect the result. Nevertheless, the uncertainty in the residual error estimates does contribute to the measurement uncertainty.

Example: One-port reflectometer

There is a simple ‘one-port’ measurement model of a vector network analyzer (VNA) that relates the reflection coefficient of a device connected to a VNA to the reflection coefficient that is actually detected internally by the instrument. The model accounts for systematic hardware errors that affect measurements.

The following measurement equation is based on this simple one-port error model

\[\boldsymbol{\Gamma} = \frac {\boldsymbol{\Gamma}_\mathrm{m} - \boldsymbol{E}_\mathrm{D}} {\boldsymbol{E}_\mathrm{M}(\boldsymbol{\Gamma}_\mathrm{m} - \boldsymbol{E}_\mathrm{D}) + \boldsymbol{E}_\mathrm{T}} \;,\]

where \(\boldsymbol{E}_\mathrm{D}\), \(\boldsymbol{E}_\mathrm{M}\) and \(\boldsymbol{E}_\mathrm{T}\) are error terms and \(\boldsymbol{\Gamma}_\mathrm{m}\) is the reflection coefficient actually measured by the VNA.

The terms \(\boldsymbol{E}_\mathrm{D}\), \(\boldsymbol{E}_\mathrm{M}\) and \(\boldsymbol{E}_\mathrm{T}\) are estimated during an initial phase in the VNA measurement called ‘calibration’. Thereafter, the VNA internal software will apply the equation above to provide better estimates of \(\boldsymbol{\Gamma}\) than the raw reading \(\boldsymbol{\Gamma}_\mathrm{m}\).

At present it is difficult to evaluate the measurement uncertainty of the error terms \(\boldsymbol{E}_\mathrm{D}\), \(\boldsymbol{E}_\mathrm{M}\) and \(\boldsymbol{E}_\mathrm{T}\), because of the complexity of VNA operation. However, it is possible to investigate the residual errors of a calibrated VNA, when error-correction is being applied.

In that case a measurement equation of a similar mathematical form can be used

\[\boldsymbol{\Gamma} = \frac {\boldsymbol{\Gamma^\prime} - \boldsymbol{\varepsilon}_\mathrm{D}} {\boldsymbol{\varepsilon}_\mathrm{M}(\boldsymbol{\Gamma^\prime} - \boldsymbol{\varepsilon}_\mathrm{D}) + (1 + \boldsymbol{\varepsilon}_\mathrm{T})} \;,\]

where \(\boldsymbol{\varepsilon}_\mathrm{D}\), \(\boldsymbol{\varepsilon}_\mathrm{M}\) and \(\boldsymbol{\varepsilon}_\mathrm{T}\) are all residual errors estimated as zero and \(\boldsymbol{\Gamma}^\prime\) is the corrected indication on the VNA, which is an estimate of \(\boldsymbol{\Gamma}\). The uncertainties in \(\boldsymbol{\varepsilon}_\mathrm{D}\), \(\boldsymbol{\varepsilon}_\mathrm{M}\) and \(\boldsymbol{\varepsilon}_\mathrm{T}\) contribute to the uncertainty in \(\boldsymbol{\Gamma}^\prime\) as an estimate of \(\boldsymbol{\Gamma}\).
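
For illustration, the error-correction step can be written as a short Python sketch. The function name and all numerical values below are hypothetical; the calculation simply applies the one-port equation given above.

    def correct_one_port(gamma_m, e_d, e_m, e_t):
        # Apply the one-port model: (Gm - ED) / (EM (Gm - ED) + ET)
        return (gamma_m - e_d) / (e_m * (gamma_m - e_d) + e_t)

    # Hypothetical error-term estimates obtained during calibration
    E_D = 0.005 + 0.002j
    E_M = 0.010 - 0.004j
    E_T = 0.980 + 0.020j

    Gamma_m = 0.250 + 0.150j    # raw reflection coefficient indicated by the VNA
    print(correct_one_port(Gamma_m, E_D, E_M, E_T))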

Input estimates

Estimates of the input quantities are used to evaluate the measurement result (see Input estimates). To formulate an uncertainty statement, three additional attributes are needed:

  • the covariance matrix \(\mathbf{v}(\boldsymbol{x}_i)\) (representing the uncertainty of the input estimate)
  • the number of degrees of freedom \(\nu_i\) (associated with the covariance)
  • the type of distribution (associated with the error)

Type-A and Type-B evaluations

As before (see Input estimates) there are two approaches to evaluating the uncertainty of input estimates

  • type-A evaluation of uncertainty uses statistical analysis of data (such as sample statistics of repeated observations)
  • type-B evaluation of uncertainty does not use statistical analysis (e.g., information from calibration certificates, previous measurement data, experience or knowledge about relevant errors, manufacturer specifications, etc)

Type-A evaluation

For example, consider a type-A evaluation of uncertainty for a quantity \(\boldsymbol{Q}\) using a sample of \(N\) independent observations \(\boldsymbol{q}_k\).

The best estimate of \(\boldsymbol{Q}= Q_\mathrm{re}+ \mathrm{j}\,Q_\mathrm{im}\) is the sample mean \(\boldsymbol{q} = q_\mathrm{re}+ \mathrm{j}\,q_\mathrm{im}\), where

\[q_\mathrm{re} = \frac{1}{N}\sum_{k=1}^{N} q_{\mathrm{re}\cdot k} \;, \quad q_\mathrm{im} = \frac{1}{N}\sum_{k=1}^{N} q_{\mathrm{im}\cdot k} \; .\]

The diagonal elements of the covariance matrix for the sample mean are

\[v_{11} = \frac{1}{N(N-1)} \sum_{k=1}^{N} \left( q_{\mathrm{re}\cdot k} - q_\mathrm{re} \right)^2\]
\[v_{22} = \frac{1}{N(N-1)} \sum_{k=1}^{N} \left( q_{\mathrm{im}\cdot k} - q_\mathrm{im} \right)^2\]

and the off-diagonal elements are

\[v_{12} = v_{21} =\frac{1}{N(N-1)} \sum_{k=1}^{N} \left( q_{\mathrm{re}\cdot k} - q_\mathrm{re} \right)\left( q_{\mathrm{im}\cdot k} - q_\mathrm{im} \right)\]

The number of degrees of freedom is

\[\nu = N-1 \;.\]

The type of distribution is usually assumed to be bivariate Gaussian.
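
As a sketch, the type-A calculations above can be carried out in a few lines of Python (using numpy). The observations below are invented purely for illustration.

    import numpy as np

    # Invented sample of N independent complex observations
    q_k = np.array([0.231 + 0.415j, 0.228 + 0.419j, 0.233 + 0.412j,
                    0.230 + 0.417j, 0.229 + 0.414j])
    N = len(q_k)

    # Best estimate: the sample mean
    q = q_k.mean()

    # Covariance matrix of the sample mean: np.cov divides by N-1, so dividing
    # again by N gives the 1/(N(N-1)) factor in the formulas above
    components = np.column_stack([q_k.real, q_k.imag])
    v = np.cov(components, rowvar=False) / N

    # Degrees of freedom
    nu = N - 1

    print(q, nu)
    print(v)        # [[v11, v12], [v21, v22]]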

Warning

A type-A evaluation of uncertainty should not be carried out with data in polar coordinates. Rectangular coordinates should always be used. Results can be significantly biased if polar coordinates are used [Ridler:2000] [Ridler:2002].

Type-B evaluation

There are several types of distribution that may better represent the error in an estimate of a complex influence quantity when phase cannot be measured.

Unknown phase

There are occasions when some information is available about the magnitude of a complex quantity but nothing is known about the phase. This happens, for example, when the VSWR of a component is reported.

In such cases the best estimate of the complex quantity is zero – the geometrical centre of possible values – and the uncertainty depends only on what is known about the magnitude.

The following cases are taken from [Hall:2007] [Hall:2011].

Known magnitude

Figure: A uniform ring distribution; when the magnitude is known but not the phase, the measurement error is drawn from a uniform ring distribution.

When the magnitude of a complex quantity is known \(|\boldsymbol{\Gamma}|=a\), but the phase is not, \(\boldsymbol{\Gamma}\) could lie anywhere on a circle of radius \(a\) around the origin. The best estimate of \(\boldsymbol{\Gamma}\) for the purpose of uncertainty calculation is zero and the associated standard uncertainty in each component is \(a/\sqrt{2}\).

This situation is related to the type-B arcsine (u-shaped) distribution, of half-width \(a\), that is sometimes used in real-valued uncertainty problems. It was first described by Harris and Warner in the context of mismatch in attenuation measurements [Harris:81]. However, Harris and Warner considered a real-valued quantity with an arcsine distribution, whereas here \(u = a/\sqrt{2}\) is the standard uncertainty associated with both the real and imaginary component estimates.

Maximum magnitude

Figure: A uniform disk distribution; when the magnitude is bounded but the phase is unknown, the measurement error is drawn from a uniform disk distribution.

When the maximum magnitude of a complex quantity is known \(|\boldsymbol{\Gamma}| \le a\), but the phase is not, \(\boldsymbol{\Gamma}\) could be anywhere on a disk of radius \(a\) centered at the origin. The best estimate of \(\boldsymbol{\Gamma}\) is zero and the standard uncertainty in the real and imaginary components is \(a/2\).

For the same radius \(a\), the standard uncertainty is lower than for the ring distribution, because the magnitude is, on average, less than its maximum value.

Product of magnitudes

Measurement equations describing RF networks often feature product terms for which phase information may not be available. This is a problem for uncertainty calculation, because the LPU cannot propagate uncertainty through a product when the estimates of both factors are zero.

In such cases, the product can be re-defined as a single independent quantity for the purposes of the uncertainty calculation. For example, if we define

\[\boldsymbol{G} = \boldsymbol{\Gamma}_1 \boldsymbol{\Gamma}_2\]

the standard uncertainty associated with \(\boldsymbol{G}\) is

\[u = \sqrt{2}u_1 u_2 \; ,\]

where \(u_1\) and \(u_2\) are the standard uncertainties associated with estimates of \(\boldsymbol{\Gamma}_1\) and \(\boldsymbol{\Gamma}_2\) that lack phase information.
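
These rules can be collected in a small Python sketch. The function names and the magnitudes used in the example are illustrative only; the expressions are the ones given above.

    import math

    def u_ring(a):
        # Component standard uncertainty when |G| = a and the phase is unknown
        return a / math.sqrt(2)

    def u_disk(a):
        # Component standard uncertainty when |G| <= a and the phase is unknown
        return a / 2.0

    def u_product(u_1, u_2):
        # Standard uncertainty of a product of two unknown-phase quantities
        return math.sqrt(2) * u_1 * u_2

    # Hypothetical example: |G1| = 0.05 is known, |G2| <= 0.10 is only a bound
    print(u_product(u_ring(0.05), u_disk(0.10)))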

Example: power measurement

In a simple power measurement, the measurement equation can be written as

\[P_\mathrm{ g} = M P_\mathrm{ i} \;,\]

where \(P_\mathrm{ i}\) is the nett RF power available to a power sensor (the difference between incident and reflected power) and

\[M = |1 - \boldsymbol{\Gamma}_\mathrm{ s}\boldsymbol{\Gamma}_\mathrm{ g}|^2 \;,\]

which is sometimes referred to as the mismatch error.

Mismatch can dominate the uncertainty budget if phase information about \(\boldsymbol{\Gamma}_\mathrm{ g}\) and \(\boldsymbol{\Gamma}_\mathrm{ s}\) is not available.

Both \(|\boldsymbol{\Gamma}_\mathrm{ g}|\) and \(|\boldsymbol{\Gamma}_\mathrm{ s}|\) will be small, so \(M \approx 1\) and the associated uncertainty must account for the unknown phase.

Considering the real and imaginary components of \(\boldsymbol{G} = \boldsymbol{\Gamma}_\mathrm{s} \boldsymbol{\Gamma}_\mathrm{ g}\), we may write

\[M = 1 - 2G_\mathrm{ re} + G_\mathrm{ re}^2 + G_\mathrm{ im}^2 \;.\]

Differentiating this with respect to \(G_\mathrm{ re}\) and \(G_\mathrm{ im}\), and remembering that \(\boldsymbol{G} \approx 0\), we find that \(M\) is only sensitive to the real component

\[\frac{\partial M}{\partial G_\mathrm{re}} = 2(G_\mathrm{ re} - 1) \approx -2\]
\[\frac{\partial M}{\partial G_\mathrm{im}} = 2G_\mathrm{ im} \approx 0 \;,\]

so the standard uncertainty is

\[u(M) = 2 u(G_\mathrm{ re}) \;.\]

The contribution from each factor can be made explicit by using the equation for a product of unknown-phase factors

\[u(M) = 2 \sqrt{2} \, u(\Gamma_{\mathrm{ s}\cdot\mathrm{ re}}) u(\Gamma_{\mathrm{ g}\cdot\mathrm{ re}}) \;.\]

Then, considering different combinations of ring and disk distributions, we can obtain two alternatives to the conventional treatment of this problem. In the Harris and Warner treatment, \(|\boldsymbol{\Gamma}_{\rm s}|\) and \(|\boldsymbol{\Gamma}_{\rm g}|\) are known and the standard uncertainty is

\[u(M) = \sqrt{2}\,| \boldsymbol{\Gamma}_{\rm s}| | \boldsymbol{\Gamma}_{\rm g}| \;.\]

However, if one magnitude is known and the other is bounded, the uncertainty is reduced

\[u(M) = | \boldsymbol{\Gamma}_{\rm s} | | \boldsymbol{\Gamma}_{\rm g} |\]

and when both magnitudes are bounded the uncertainty is only half the Harris and Warner value

\[u(M) = \frac{|\boldsymbol{\Gamma}_{\rm s}||\boldsymbol{\Gamma}_{\rm g}|}{\sqrt{2}} \;.\]
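
A short Python sketch of the three alternatives follows; the reflection-coefficient magnitudes are hypothetical, and the component uncertainties are obtained from the ring (\(a/\sqrt{2}\)) and disk (\(a/2\)) rules above.

    import math

    def u_mismatch(u_s_re, u_g_re):
        # u(M) = 2 sqrt(2) u(Gs_re) u(Gg_re) for unknown-phase factors
        return 2 * math.sqrt(2) * u_s_re * u_g_re

    mag_s, mag_g = 0.05, 0.10   # hypothetical magnitudes

    # Both magnitudes known (ring-ring): the Harris and Warner result
    print(u_mismatch(mag_s / math.sqrt(2), mag_g / math.sqrt(2)))   # sqrt(2) |Gs| |Gg|

    # One magnitude known, the other bounded (ring-disk)
    print(u_mismatch(mag_s / math.sqrt(2), mag_g / 2))              # |Gs| |Gg|

    # Both magnitudes bounded (disk-disk)
    print(u_mismatch(mag_s / 2, mag_g / 2))                         # |Gs| |Gg| / sqrt(2)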

Propagation of uncertainty and degrees of freedom

Two calculations are needed to produce an uncertainty statement for a complex quantity: the extended form of the LPU appropriate for complex quantities and the extended form of the calculation of effective degrees of freedom.

The law of propagation of uncertainty (LPU)

The covariance matrix associated with a measurement result can be calculated from the covariances of the input quantity estimates and the measurement function.

The uncertainty in a measurement result

\[\boldsymbol{y} = \boldsymbol{f}(\boldsymbol{x}_1,\boldsymbol{x}_2,\cdots,\boldsymbol{x}_N)\;,\]

is represented by a covariance matrix

\[\mathbf{v}(\boldsymbol{y}) = \sum_{i=1}^N \sum_{j=1}^N \mathbf{u}_i(\boldsymbol{y})\, \mathbf{r}( \boldsymbol{x}_i,\boldsymbol{x}_j)\, \mathbf{u}_j(\boldsymbol{y})^\prime \, ,\]

where the prime symbol indicates matrix transpose and

\[\begin{split}\mathbf{r}( \boldsymbol{x}_i,\boldsymbol{x}_j) = \begin{bmatrix} r(x_{\mathrm{re}\cdot i},x_{\mathrm{re}\cdot j}) & r(x_{\mathrm{re}\cdot i},x_{\mathrm{im}\cdot j}) \\ r(x_{\mathrm{im}\cdot i},x_{\mathrm{re}\cdot j}) & r(x_{\mathrm{im}\cdot i},x_{\mathrm{im}\cdot j}) \end{bmatrix}\end{split}\]

is a matrix of the correlation coefficients between the real and imaginary components of inputs \(\boldsymbol{x}_i\) and \(\boldsymbol{x}_j\).

The notion of a component of uncertainty in \(\boldsymbol{y}\) due to the uncertainty of the estimate \(\boldsymbol{x}_i\) is represented by

\[\mathbf{u}_i(\boldsymbol{y}) = \begin{bmatrix} \frac{\boldsymbol{\partial y}}{\boldsymbol{\partial x}_i} \end{bmatrix}\, \mathbf{u}(\boldsymbol{x}_i)\;,\]

where

\[\begin{split}\begin{bmatrix} \frac{\boldsymbol{\partial y}}{\boldsymbol{\partial x}_i} \end{bmatrix}\, = \begin{bmatrix} \frac{\partial y_\mathrm{re}}{\partial x_{\mathrm{re}\cdot i}} & \frac{\partial y_\mathrm{re}}{\partial x_{\mathrm{im}\cdot i}} \\ \frac{\partial y_\mathrm{im}}{\partial x_{\mathrm{re}\cdot i}} & \frac{\partial y_\mathrm{im}}{\partial x_{\mathrm{im}\cdot i}} \end{bmatrix}\;, \quad \mathbf{u}(\boldsymbol{x}_i) = \begin{bmatrix} u(x_{\mathrm{re}\cdot i}) & 0 \\ 0 & u(x_{\mathrm{im}\cdot i}) \end{bmatrix}\, .\end{split}\]
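
A minimal numpy sketch of this propagation is shown below. The function name and calling conventions are not from any particular library: the Jacobian and uncertainty matrices for each input are assumed to be available as 2-by-2 arrays, and inputs are treated as independent unless a correlation matrix is supplied.

    import numpy as np

    def lpu_complex(jacobians, u_inputs, correlations=None):
        # jacobians[i]  : 2x2 matrix of partial derivatives [dy/dx_i]
        # u_inputs[i]   : 2x2 diagonal matrix of standard uncertainties u(x_i)
        # correlations  : optional dict {(i, j): 2x2 matrix r(x_i, x_j)}
        n = len(jacobians)
        u_y = [jacobians[i] @ u_inputs[i] for i in range(n)]    # components of uncertainty
        v = np.zeros((2, 2))
        for i in range(n):
            for j in range(n):
                if correlations and (i, j) in correlations:
                    r = correlations[(i, j)]
                elif i == j:
                    r = np.eye(2)           # independent real and imaginary components
                else:
                    r = np.zeros((2, 2))    # independent inputs
                v += u_y[i] @ r @ u_y[j].T
        return v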

Note

Complex differentiation can be used to calculate \(\boldsymbol{z} = \frac{\boldsymbol{\partial y}}{\boldsymbol{\partial x}_i}\) when the complex partial derivative of the measurement function is well-defined. The required matrix elements are then given by the Cauchy-Riemann relations [Hall:2004]

\[\begin{split}\begin{bmatrix} \frac{\boldsymbol{\partial y}}{\boldsymbol{\partial x}_i} \end{bmatrix}\, = \begin{bmatrix} z_\mathrm{re} & -z_\mathrm{im} \\ z_\mathrm{im} & z_\mathrm{re} \end{bmatrix} \,.\end{split}\]

Unfortunately, there are some cases of interest when the complex measurement equation cannot be differentiated (e.g., when taking the complex modulus), so the Jacobian matrix form above is also useful.
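
When the complex derivative does exist, the matrix can be filled in directly from it, as in this small sketch (the example function and value are arbitrary).

    import numpy as np

    def jacobian_from_complex_derivative(z):
        # Cauchy-Riemann structure: [[z_re, -z_im], [z_im, z_re]]
        return np.array([[z.real, -z.imag],
                         [z.imag,  z.real]])

    # Example: y = 1/x at x = 0.5 + 0.2j, so dy/dx = -1/x**2
    x = 0.5 + 0.2j
    print(jacobian_from_complex_derivative(-1 / x**2))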

Example: Mismatch error

The mismatch term

\[M = \left| 1 - \boldsymbol{\Gamma}_\mathrm{g} \boldsymbol{\Gamma}_\mathrm{s} \right|^2\]

arises in simple power measurements (see power example), due to the non-zero reflection coefficients at the generator (\(\boldsymbol{\Gamma}_\mathrm{g}\)) and at the sensor (\(\boldsymbol{\Gamma}_\mathrm{s}\)).

Suppose both reflection coefficients have been measured, yielding independent estimates \(\boldsymbol{\gamma}_\mathrm{g}\) and \(\boldsymbol{\gamma}_\mathrm{s}\), and standard uncertainties in the real and imaginary components: \(u(\gamma_\mathrm{g \cdot re})\), \(u(\gamma_\mathrm{g \cdot im})\), \(u(\gamma_\mathrm{s \cdot re})\) and \(u(\gamma_\mathrm{s \cdot im})\). There is no correlation between the various estimates.

The mismatch equation involves a complex modulus, so it cannot be differentiated as a complex function (see the note above); instead, the first step is to express the measurement equation in terms of the real and imaginary components.

The estimate of \(M\) is

\[m = 1 - 2(\gamma_\mathrm{g\cdot re} \gamma_\mathrm{s \cdot re} - \gamma_\mathrm{g \cdot im} \gamma_\mathrm{s \cdot im}) + (\gamma_\mathrm{g\cdot re}^2 + \gamma_\mathrm{g \cdot im}^2) (\gamma_\mathrm{s \cdot re}^2 + \gamma_\mathrm{s \cdot im}^2) \;.\]

To obtain expressions for the (real-valued) components of uncertainty, we calculate the partial derivatives of \(m\) with respect to the input quantities

\[\frac{\partial m}{\partial \gamma_\mathrm{g\cdot re}} = 2(\gamma_\mathrm{g\cdot re} |\boldsymbol{\gamma_\mathrm{s}}|^2 - \gamma_\mathrm{s \cdot re})\,; \quad u_\mathrm{g\cdot re}(m) = 2(\gamma_\mathrm{g\cdot re} |\boldsymbol{\gamma_\mathrm{s}}|^2 - \gamma_\mathrm{s \cdot re})\, u(\gamma_\mathrm{g \cdot re})\]
\[\frac{\partial m}{\partial \gamma_\mathrm{g\cdot im}} = 2(\gamma_\mathrm{g\cdot im} |\boldsymbol{\gamma_\mathrm{s}}|^2 + \gamma_\mathrm{s \cdot im})\,; \quad u_\mathrm{g\cdot im}(m) = 2(\gamma_\mathrm{g\cdot im} |\boldsymbol{\gamma_\mathrm{s}}|^2 + \gamma_\mathrm{s \cdot im})\, u(\gamma_\mathrm{g \cdot im})\]
\[\frac{\partial m}{\partial \gamma_\mathrm{s\cdot re}} = 2(\gamma_\mathrm{s\cdot re} |\boldsymbol{\gamma_\mathrm{g}}|^2 - \gamma_\mathrm{g \cdot re})\,; \quad u_\mathrm{s\cdot re}(m) = 2(\gamma_\mathrm{s\cdot re} |\boldsymbol{\gamma_\mathrm{g}}|^2 - \gamma_\mathrm{g \cdot re})\, u(\gamma_\mathrm{s \cdot re})\]
\[\frac{\partial m}{\partial \gamma_\mathrm{s\cdot im}} = 2(\gamma_\mathrm{s \cdot im} |\boldsymbol{\gamma_\mathrm{g}}|^2 + \gamma_\mathrm{g \cdot im})\,; \quad u_\mathrm{s\cdot im}(m) = 2(\gamma_\mathrm{s \cdot im} |\boldsymbol{\gamma_\mathrm{g}}|^2 + \gamma_\mathrm{g \cdot im})\, u(\gamma_\mathrm{s \cdot im})\]

The component of uncertainty matrices associated with \(\boldsymbol{\gamma}_\mathrm{g}\) and \(\boldsymbol{\gamma}_\mathrm{s}\) are (the second columns contain zeros because \(m\) has no imaginary component)

\[\begin{split}\mathbf{u}_\mathrm{g}(m) = \begin{bmatrix} u_\mathrm{g \cdot re}(m) & 0 \\ u_\mathrm{g \cdot im}(m) & 0 \end{bmatrix}\end{split}\]
\[\begin{split}\mathbf{u}_\mathrm{s}(m) = \begin{bmatrix} u_\mathrm{s \cdot re}(m) & 0 \\ u_\mathrm{s \cdot im}(m) & 0 \end{bmatrix}\end{split}\]

and the covariance matrix associated with \(m\) is (the estimates are assumed to be independent, so the correlation matrices are not needed)

\[\mathbf{v}(m) = \mathbf{u}_\mathrm{g}(m)\, \mathbf{u}_\mathrm{g}(m)^\prime + \mathbf{u}_\mathrm{s}(m)\, \mathbf{u}_\mathrm{s}(m)^\prime \, .\]

Only the first element of \(\mathbf{v}(m)\) is non-zero. It represents the standard variance of the real component,

\[v_{11}(m) = u_\mathrm{g \cdot re}(m)^2 + u_\mathrm{g \cdot im}(m)^2 + u_\mathrm{s \cdot re}(m)^2 + u_\mathrm{s \cdot im}(m)^2\]

so the standard uncertainty in the estimate of mismatch is

\[u(m) = \sqrt{u_\mathrm{g \cdot re}(m)^2 + u_\mathrm{g \cdot im}(m)^2 + u_\mathrm{s \cdot re}(m)^2 + u_\mathrm{s \cdot im}(m)^2} \;.\]
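
The whole calculation can be written out numerically, as in the following sketch; the estimates and component uncertainties are hypothetical.

    import math

    # Hypothetical estimates and component standard uncertainties
    g_re, g_im = 0.02, 0.01         # generator reflection coefficient
    s_re, s_im = -0.03, 0.015       # sensor reflection coefficient
    u_g_re = u_g_im = 0.005
    u_s_re = u_s_im = 0.004

    abs_g2 = g_re**2 + g_im**2
    abs_s2 = s_re**2 + s_im**2

    # Components of uncertainty (partial derivative times input uncertainty)
    u1 = 2 * (g_re * abs_s2 - s_re) * u_g_re
    u2 = 2 * (g_im * abs_s2 + s_im) * u_g_im
    u3 = 2 * (s_re * abs_g2 - g_re) * u_s_re
    u4 = 2 * (s_im * abs_g2 + g_im) * u_s_im

    u_m = math.sqrt(u1**2 + u2**2 + u3**2 + u4**2)
    print(u_m)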

Alternative forms of LPU

There are several other formulations of the LPU that can be used for complex problems.

The first two are mathematically equivalent to that just described. The last is a simplification.

  • GUM Method: The real-valued LPU can be used to evaluate the elements of the covariance matrix. Read more....

  • Matrix formulation: A concise matrix formulation of the LPU is possible. Read more....

  • Independent circular uncertainties: A simplified approach can be used that makes rather severe assumptions about the measurement errors.

    The method assumes that uncertainties in the real and imaginary components of each input are equal and that the estimates are independent. It retains the two-dimensional nature of the problem, but uses calculations very similar to the real-valued LPU. Read more....

Effective degrees of freedom

If any influence quantity estimates have finite degrees of freedom [2], an effective number of degrees of freedom must be calculated for the measurement result. This value is needed to calculate a coverage factor.

The complex version of the degrees of freedom calculation may be used when the real and imaginary components of individual inputs are correlated. However, it cannot be used if different inputs with finite degrees of freedom are correlated [Willink:2002].

For each input, we first define the 2-by-2 matrix

\[\mathbf{v}_i = \mathbf{u}_i(\boldsymbol{y})\, \mathbf{r}(\boldsymbol{x}_i,\boldsymbol{x}_i)\, \mathbf{u}_i(\boldsymbol{y})^\prime \;.\]

Then calculate

\[A = 2 \left( \sum v_{i \cdot 11} \right)^2\]
\[D = \sum v_{i \cdot 11} \sum v_{i \cdot 22} + \left( \sum v_{i \cdot 12} \right)^2\]
\[F = 2 \left( \sum v_{i \cdot 22} \right)^2\]

and

\[a = 2 \sum \frac{ v_{i \cdot 11}^2 } {\nu_i}\]
\[d = \sum \frac{ v_{i \cdot 11} v_{i \cdot 22} + v_{i \cdot 12}^2 }{\nu_i}\]
\[f = 2 \sum \frac{ v_{i \cdot 22}^2 }{\nu_i} \; .\]

Finally, the effective degrees of freedom is

\[\nu_\mathrm{eff} = \frac{A + D + F}{a + d + f} \; .\]

Note

In the expressions above, the correlation matrix is

\[\begin{split}\mathbf{r}(\boldsymbol{x}_i,\boldsymbol{x}_i) = \begin{bmatrix} 1 & r(x_{\mathrm{re}\cdot i},x_{\mathrm{im}\cdot i})\\ r(x_{\mathrm{re}\cdot i},x_{\mathrm{im}\cdot i}) & 1 \end{bmatrix} \; ,\end{split}\]

where \(r(x_{\mathrm{re}\cdot i},x_{\mathrm{im}\cdot i})\) is the correlation coefficient between the real and imaginary components of \(\boldsymbol{x}_i\).
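
Given the per-input matrices \(\mathbf{v}_i\) and degrees of freedom \(\nu_i\), the calculation can be sketched in a few lines of Python (the function name is arbitrary).

    import numpy as np

    def nu_eff_complex(v_list, nu_list):
        # v_list[i]  : 2x2 matrix v_i = u_i(y) r(x_i, x_i) u_i(y)'
        # nu_list[i] : degrees of freedom nu_i (use float('inf') if infinite)
        V = sum(v_list)                 # element-wise sum of the v_i

        A = 2 * V[0, 0]**2
        D = V[0, 0] * V[1, 1] + V[0, 1]**2
        F = 2 * V[1, 1]**2

        a = 2 * sum(v[0, 0]**2 / nu for v, nu in zip(v_list, nu_list))
        d = sum((v[0, 0] * v[1, 1] + v[0, 1]**2) / nu for v, nu in zip(v_list, nu_list))
        f = 2 * sum(v[1, 1]**2 / nu for v, nu in zip(v_list, nu_list))

        return (A + D + F) / (a + d + f)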

The uncertainty statement

An uncertainty statement for the measurement of a complex quantity is a region of the complex plane.

The region is conventionally an ellipse centered on \(\boldsymbol{y}\). The shape and orientation of the ellipse depend on the covariance matrix and its area is determined by the coverage factor.

A concise statement of uncertainty can be made in terms of the Mahalanobis distance.

Elliptical regions

The shape and orientation of the ellipse depend on the covariance matrix.

Circles

The shape of the uncertainty region is a circle when the covariance matrix has the form

\[\begin{split}\begin{bmatrix} v & 0 \\ 0 & v \end{bmatrix} \;.\end{split}\]

The standard uncertainties in the real and imaginary components are equal: \(u(y_\mathrm{re}) = u(y_\mathrm{im}) = u = \sqrt{v}\).

The radius of the uncertainty region is

\[U = k_{2,p}(\nu)\, u \;.\]

Figure: A circular uncertainty region; the real and imaginary component uncertainties are equal.

Ellipses aligned with coordinate axes

The principal axes of the uncertainty ellipse are parallel to the real and imaginary coordinate axes when the off-diagonal elements of the covariance matrix are zero but the diagonal elements differ

\[\begin{split}\begin{bmatrix} v_{11} & 0 \\ 0 & v_{22} \end{bmatrix} \;.\end{split}\]

The standard uncertainties are

\[u(y_\mathrm{re}) = \sqrt{v_{11}}\]
\[u(y_\mathrm{im}) = \sqrt{v_{22}}\;.\]

and the lengths of the ellipse semi-axes are

\[k_{2,p}(\nu)\,u(y_\mathrm{re})\]
\[k_{2,p}(\nu)\, u(y_\mathrm{im})\;.\]

Figure: An elliptical uncertainty region; when there is no correlation between the real and imaginary component estimates, the ellipse is aligned with the Cartesian axes.

Rotated ellipses

The most general case is when the off-diagonal elements of the covariance matrix are not zero

\[\begin{split}\begin{bmatrix} v_{11} & v_{12} \\ v_{21} & v_{22} \end{bmatrix} \;.\end{split}\]

The uncertainty ellipse is then oriented at an angle to the coordinate axes. The standard uncertainties are still

\[u(y_\mathrm{re}) = \sqrt{v_{11}}\]
\[u(y_\mathrm{im}) = \sqrt{v_{22}}\]

but the semi-axes of the ellipse are now proportional to the square roots of the eigenvalues of the covariance matrix.

Figure: A rotated elliptical uncertainty region; correlation between the real and imaginary component estimates rotates the ellipse.
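
As a sketch, the semi-axis lengths and orientation of the ellipse can be obtained from an eigendecomposition of the covariance matrix; the matrix and coverage factor below are hypothetical.

    import numpy as np

    def uncertainty_ellipse(v, k):
        # v : 2x2 covariance matrix of the result
        # k : coverage factor k_{2,p}(nu)
        eigvals, eigvecs = np.linalg.eigh(v)        # eigenvalues in ascending order
        semi_axes = k * np.sqrt(eigvals)            # proportional to sqrt of the eigenvalues
        major = eigvecs[:, 1]                       # eigenvector of the larger eigenvalue
        angle = np.arctan2(major[1], major[0])      # orientation of the major axis (radians)
        return semi_axes, angle

    v = np.array([[4.0e-4, 1.5e-4],
                  [1.5e-4, 2.0e-4]])                # hypothetical covariance matrix
    print(uncertainty_ellipse(v, 2.45))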

Mahalanobis distance

The Mahalanobis distance \(\mathbf{d}(\mathbf{z},\mathbf{x})\) may be used to calculate a convenient measure of the separation between a point \(\mathbf{x}\) and the center of an ellipse at \(\mathbf{z}\).

Treating the complex numbers as vectors \(\mathbf{z}=(z_\mathrm{re},z_\mathrm{im})^\mathrm{T}\) and \(\mathbf{x}=(x_\mathrm{re},x_\mathrm{im})^\mathrm{T}\), the Mahalanobis distance is

\[\mathbf{d}(\mathbf{z},\mathbf{x}) = \sqrt{ (\mathbf{z}- \mathbf{x})^\mathrm{T} \mathbf{V}^{-1} (\mathbf{z}- \mathbf{x}) }\]

where \(\mathbf{V}^{-1}\) is the covariance matrix inverse.

An uncertainty region is then the locus of all points \(\mathbf{\xi}\) such that

\[\mathbf{d}(\mathbf{z},\mathbf{\xi}) \le k_{2,p}(\nu) \;.\]

Note

When there is no correlation between the real and imaginary components (\(r=0\)), the Mahalanobis distance reduces to a scaled Euclidean distance

\[\mathbf{d}(\mathbf{z},\mathbf{x}) = \sqrt{ \left(\frac{{z}_\mathrm{re}- {x}_\mathrm{re}}{u_\mathrm{re}}\right)^2 + \left(\frac{{z}_\mathrm{im}- {x}_\mathrm{im}}{u_\mathrm{im}}\right)^2 }\]
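
A small Python sketch of the Mahalanobis-distance test for membership of the uncertainty region is given below; the covariance matrix, points and coverage factor are hypothetical.

    import numpy as np

    def mahalanobis(z, x, v):
        # Distance between complex points z and x, for 2x2 covariance matrix v
        d = np.array([z.real - x.real, z.imag - x.imag])
        return float(np.sqrt(d @ np.linalg.inv(v) @ d))

    v = np.array([[4.0e-4, 1.5e-4],
                  [1.5e-4, 2.0e-4]])    # hypothetical covariance matrix

    # A point xi lies inside the uncertainty region if d(y, xi) <= k_{2,p}(nu)
    print(mahalanobis(0.25 + 0.15j, 0.26 + 0.16j, v) <= 2.45)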

The coverage factor

The coverage factor is calculated from the degrees of freedom \(\nu\) and the required coverage probability \(p\)

\[k_{2,p}(\nu) = \left[ \frac{2\nu}{\nu-1} F_{2,\nu-1}(p)\right]^{1/2} \;,\]

where \(F_{2,\nu-1}(p)\) is the upper \(100p^\mathrm{th}\) percentile of the \(F\)-distribution with 2 and \(\nu-1\) degrees of freedom.

When \(\nu\) is infinite,

\[k_{2,p}(\nu) = \sqrt{\chi_{2,p}^2}\]

where \(\chi_{2,p}^2\) is the \(100p^\mathrm{th}\) point of the chi-squared distribution with two degrees of freedom.

The coverage factors for complex quantities are not the same as those used for real quantities, as the following table shows (for a coverage probability of 95 %)

\(\nu\) Complex Real
2 28.3 4.3
3 7.6 3.2
4 5.1 2.8
5 4.2 2.6
6 3.7 2.5
7 3.5 2.4
8 3.3 2.3
9 3.2 2.3
10 3.1 2.2
50 2.6 2.0
\(\infty\) 2.45 1.96
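
The tabulated coverage factors can be reproduced with a short Python sketch (using scipy, if available); a coverage probability of 95 % is assumed, which matches the values above.

    import math
    from scipy.stats import chi2, f

    def k2(p, nu):
        # Coverage factor for a complex quantity: 100p % coverage, nu degrees of freedom
        if math.isinf(nu):
            return math.sqrt(chi2.ppf(p, 2))
        return math.sqrt(2 * nu / (nu - 1) * f.ppf(p, 2, nu - 1))

    for nu in (2, 5, 10, 50, float('inf')):
        print(nu, round(k2(0.95, nu), 2))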

Footnotes

[1] Complex quantities are written here in bold font, like \(\boldsymbol{X}\).
[2] If an estimate is said to have infinite degrees of freedom then the covariance of the distribution associated with its error is presumed known.
[Ridler:2000] N M Ridler and M J Salter, Evaluating and expressing uncertainty in complex S-parameter measurements, ARFTG 56 Conf. Digest (2000) 63-75
[Ridler:2002] N M Ridler and M J Salter, An approach to the treatment of uncertainty in complex S-parameter measurements, Metrologia 39 (2002) 295-302
[Hall:2004] B D Hall, On the propagation of uncertainty in complex-valued quantities, Metrologia 41 (3) (2004) 173-177 [on-line: http://rf.irl.cri.nz]
[Hall:2007] B D Hall, Some considerations related to the evaluation of measurement uncertainty for complex-valued quantities in radio frequency measurements, Metrologia 44 (2007) L62-L67 [on-line: http://rf.irl.cri.nz]
[Hall:2011] B D Hall, On the expression of measurement uncertainty for complex quantities with unknown phase, Metrologia 48 (2011) 324-332 [on-line: http://rf.irl.cri.nz]
[Harris:81] I A Harris and F L Warner, Re-examination of mismatch uncertainty when measuring microwave power and attenuation, IEE Proc. H: Microwaves, Optics and Antennas 128 (1) (1981) 35-41
[Willink:2002] R Willink and B D Hall, A classical method for uncertainty analysis with multidimensional data, Metrologia 39 (2002) 361-369