Real quantities

A document called the Guide to the Expression of Uncertainty in Measurement [GUM] describes an internationally accepted approach to evaluating and reporting measurement uncertainty for real-valued quantities.

The following notes are based on the GUM.

The measurement function

A measurement procedure must be described by an equation involving all the quantities that influence the result

\[Y = f(X_1,X_2,\cdots,X_N) \;,\]

where \(Y\) is the measurand and the \(X_i\) are influence quantities. When applied to a measurement, this general form may be changed to use more appropriate symbols for the quantities involved.

The function \(f(\cdots)\) must take account of all quantities that affect the outcome of a measurement, including residual errors. The measurand can usually be derived from a subset of the \(X_i\); the remaining terms represent residual errors. These errors are nuisance terms in the calculation, because their best estimates are typically 0 or 1, and so do not change the result. Nevertheless, the uncertainty in the residual-error estimates does contribute to the measurement uncertainty.

Example: an RF power measurement

A model of a simple power measurement illustrates the distinction made here between residual errors and quantities from which the measurand is derived.

Assume that a power sensor has been connected directly to a signal generator with the intention of measuring the power that can be delivered. In this arrangement, reflections at the generator and sensor ports give rise to a mismatch error. The sensor response must also be adjusted using a calibration factor. These aspects of the measurement are expressed in the equation

\[P = \frac{M}{K}\, p_\mathrm{m}\, ,\]

where \(P\) is the power level of interest (the measurand), \(M\) is the mismatch factor and \(K\) is the sensor calibration factor. The power meter indicates a number \(p_\mathrm{m}\). The sensor calibration factor and mismatch factor are used to transform the meter indication \(p_\mathrm{m}\) into an estimate of the power level of interest.

However, the equation above does not take account of what goes on inside the power meter. In a simple model, the raw response to an applied signal will be corrected for an offset error and a gain error by the instrumentation. The residual errors associated with these adjustments affect the accuracy of the measurement.

Since the measurement model must include these residual instrumentation errors, a term \(G\) is added here for a scale factor (gain), and \(Z\) is added for an offset (zero), giving a more detailed measurement equation

\[P = \frac{M}{K}\, \left( \frac{p_\mathrm{m}-Z}{G} \right)\, .\]

The measurement result, our estimate of \(P\), is little changed by adding these terms, because \(G \approx 1\) and \(Z \approx 0\). However, the residual errors must be accounted for in the uncertainty budget. They are examples of the nuisance terms referred to above.
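
This measurement equation is simple enough to express directly in code. The following Python sketch is only illustrative (the GUM does not prescribe any implementation, and all numerical values are hypothetical); it shows that evaluating the function with \(g = 1\) and \(z = 0\) leaves the estimate of \(P\) unchanged.

    def power_estimate(p_m, m, k, g=1.0, z=0.0):
        """Measurement function P = (M/K) * (p_m - Z) / G."""
        return (m / k) * (p_m - z) / g

    # Hypothetical estimates, for illustration only.
    p_m = 1.045e-3          # meter indication (W)
    m, k = 0.998, 0.985     # mismatch factor and calibration factor estimates

    # The simple model (no residual-error terms) ...
    print((m / k) * p_m)
    # ... and the detailed model with g = 1 and z = 0 give the same estimate.
    print(power_estimate(p_m, m, k, g=1.0, z=0.0))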

Input estimates

The quantities \(X_i\) are not exactly known, so estimates must be used to calculate an estimate of the measurand. Sometimes an \(X_i\) will be measured, in other cases estimates will be simply assigned (e.g., residual errors are often estimated as zero).

When calculating uncertainty, the GUM associates additional attributes with the estimates \(x_i\) [1]:

  • the standard uncertainty (of the estimate) \(u(x_i)\),
  • the number of degrees of freedom \(\nu_i\) (associated with the standard uncertainty) and
  • the type of distribution (associated with the error) [2].

Type-A and type-B evaluations

The GUM distinguishes between two different approaches to evaluating the standard uncertainty of input quantities:

  • a type-A evaluation uses statistical analysis of data (such as the sample statistics for a number of repeated observations)
  • a type-B evaluation uses anything other than statistical analysis (e.g., information from: calibration certificates, previous measurement data, experience or knowledge about relevant errors, manufacturer specifications, etc)

Type-A evaluation

For example, consider the results of a type-A evaluation for a quantity \(Q\), using a sample of \(N\) independent observations \(q_k\). The best estimate of \(Q\) is the sample mean

\[q = \frac{1}{N} \sum_{k=1}^{N} q_k \,,\]

the standard uncertainty is the standard deviation of the sample mean

\[u(q) = \sqrt{ \frac{1}{N(N-1)} \sum_{k=1}^{N} (q - q_k)^2 } \, ,\]

and the degrees of freedom

\[\nu = N-1 \, .\]
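
These sample statistics are easy to compute. A minimal Python sketch, using only the standard library and hypothetical observations, is shown below.

    import math

    # Hypothetical repeated observations of a quantity Q.
    q_k = [10.12, 10.07, 10.15, 10.09, 10.11, 10.14]
    N = len(q_k)

    q = sum(q_k) / N        # best estimate: the sample mean
    u_q = math.sqrt(sum((x - q) ** 2 for x in q_k) / (N * (N - 1)))   # std. deviation of the mean
    nu = N - 1              # degrees of freedom

    print(q, u_q, nu)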

The type of distribution

It is generally assumed that the error resulting from a type-A evaluation is Gaussian, because the distribution of the mean of many independent random errors is approximately Gaussian (see: http://en.wikipedia.org/wiki/Central_limit_theorem). However, in type-B evaluations other distributions often better represent what is known about an error.

For example, when an observation is reported with finite resolution (e.g., from an instrument with a digital display), a uniform distribution may better describe the error. The best estimate is the indicated value, which can be considered the midpoint of a range between the levels at which the least-significant digit would change. The standard uncertainty is the standard deviation of a uniform distribution over this range. The degrees of freedom is infinite when the width of the range is known [3].

The following table gives some common conversion factors (\(a\) is the half-width of each distribution)

Distribution    Standard uncertainty
------------    --------------------
Uniform         \(a/\sqrt{3}\)
Triangular      \(a/\sqrt{6}\)
Arcsine         \(a/\sqrt{2}\)
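
Applying these factors in code is a one-line division by the appropriate constant. The helper below is a sketch (the function name and values are illustrative, not from the GUM); the finite-resolution example above corresponds to the uniform case, with \(a\) equal to half of the display resolution.

    import math

    # Divisors that convert a half-width 'a' into a standard uncertainty.
    DIVISORS = {
        "uniform": math.sqrt(3),
        "triangular": math.sqrt(6),
        "arcsine": math.sqrt(2),
    }

    def std_uncertainty(a, distribution="uniform"):
        """Standard uncertainty of a distribution with half-width a."""
        return a / DIVISORS[distribution]

    # Example: a display resolution of 0.01 gives a half-width a = 0.005.
    print(std_uncertainty(0.005, "uniform"))    # ~0.0029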

Propagation of uncertainty

The standard uncertainty of a measurement can be evaluated from the standard uncertainties of the input quantity estimates and the sensitivity of the measurement function to these inputs.

The standard uncertainty of

\[y = f(x_1,x_2,\cdots,x_N)\,\]

as an estimate of \(Y\) is

\[u(y) = \left[ \sum_{i=1}^N \sum_{j=1}^N u_i(y) r(x_i,x_j) u_j(y)\right]^{1/2} \;,\]

where \(r(x_i,x_j)\) is the correlation coefficient between the estimates \(x_i\) and \(x_j\), and the terms \(u_i(y)\), which may be called components of uncertainty, are given by

\[u_i(y) = \frac{\partial f}{\partial x_i} u(x_i)\, .\]

When there is no correlation

\[u(y) = \left[ \sum_{i=1}^N u_i(y)^2 \right]^{1/2} \; .\]

These expressions for \(u(y)\) are commonly referred to as the law of propagation of uncertainty (LPU).
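
A direct numerical translation of the LPU is straightforward. The sketch below is illustrative only (the function name and arguments are not standard): it forms the components of uncertainty from supplied sensitivity coefficients \(\partial f/\partial x_i\) and standard uncertainties \(u(x_i)\), and combines them using a correlation matrix, which defaults to the identity (no correlation).

    import math

    def combined_standard_uncertainty(sensitivities, uncertainties, correlation=None):
        """u(y) from sensitivities df/dx_i, standard uncertainties u(x_i), and
        an optional correlation matrix r(x_i, x_j) (identity if omitted)."""
        n = len(sensitivities)
        # Components of uncertainty: u_i(y) = (df/dx_i) * u(x_i)
        u_i = [c * u for c, u in zip(sensitivities, uncertainties)]
        if correlation is None:
            correlation = [[float(i == j) for j in range(n)] for i in range(n)]
        variance = sum(u_i[i] * correlation[i][j] * u_i[j]
                       for i in range(n) for j in range(n))
        return math.sqrt(variance)

    # Uncorrelated case: reduces to the root-sum-of-squares of the components.
    print(combined_standard_uncertainty([1.0, -2.0], [0.1, 0.05]))    # sqrt(0.1**2 + 0.1**2)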

Example: power measurement (continued)

Returning to the power measurement, there are four components of uncertainty that must be evaluated.

We assume that \(M\), \(K\), \(G\) and \(Z\) have been measured, or estimated, yielding \(m\), \(k\), \(g\) and \(z\), respectively. The corresponding uncertainties in these estimates are \(u(m)\), \(u(k)\), \(u(g)\) and \(u(z)\).

An estimate of the power is then

\[p = \frac{m}{k}\, \left( \frac{p_\mathrm{m}-z}{g} \right)\, .\]

To obtain the components of uncertainty, this equation must be differentiated with respect to each input and then multiplied by the respective standard uncertainty

\[u_{m}(p) = p\,\frac{u(m)}{m} \,,\]
\[u_{k}(p) = -p\,\frac{u(k)}{k} \,,\]
\[u_{g}(p) = -p\,\frac{u(g)}{g} \,,\]

and

\[u_{z}(p) = -\frac{m}{kg} u(z) \,.\]

Finally, assuming the input estimates are uncorrelated, the combined standard uncertainty is

\[u(p) = \left\{ p^2 \left[ \left( \frac{u(m)}{m} \right)^2 + \left( \frac{u(k)}{k} \right)^2 + \left( \frac{u(g)}{g} \right)^2 \right] + \left(\frac{m}{kg}\, u(z)\right)^2 \right\}^{1/2} \,.\]
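
With hypothetical values for the estimates and their standard uncertainties (consistent with the earlier sketch), the combined standard uncertainty can be evaluated directly:

    import math

    # Hypothetical estimates and standard uncertainties, for illustration only.
    p_m = 1.045e-3
    m, u_m = 0.998, 0.005
    k, u_k = 0.985, 0.008
    g, u_g = 1.0, 0.002
    z, u_z = 0.0, 2.0e-6

    p = (m / k) * (p_m - z) / g     # estimate of the power

    u_p = math.sqrt(
        p**2 * ((u_m / m) ** 2 + (u_k / k) ** 2 + (u_g / g) ** 2)
        + ((m / (k * g)) * u_z) ** 2
    )
    print(p, u_p)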

Special cases: simple arithmetic

The general form of the LPU, based on partial derivatives, can be simplified in certain cases. However, the nature of the measurement errors must still be considered: different rules apply depending on whether the estimates are independent or dependent (correlated).

Independent estimates

The following rules can be used to calculate uncertainty with independent estimates.

Equation form                              Uncertainty equation
-------------                              --------------------
\(y = x_1 \oplus x_2 \oplus \cdots\)       \(u(y)^2 = u(x_1)^2 + u(x_2)^2 + \cdots\)
\(y = x_1 \otimes x_2 \otimes \cdots\)     \(\left[\frac{u(y)}{y}\right]^2 = \left[\frac{u(x_1)}{x_1}\right]^2 + \left[\frac{u(x_2)}{x_2}\right]^2 + \cdots\)

The symbol \(\oplus\) is used here to represent either addition or subtraction, because the form of the uncertainty calculation is the same for both. The symbol \(\otimes\) represents either multiplication or division.
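
These two rules translate into a pair of one-line helpers; the sketch below is illustrative (the function names are not standard).

    import math

    def u_sum(*u_x):
        """u(y) for a sum or difference of independent estimates."""
        return math.sqrt(sum(u ** 2 for u in u_x))

    def u_rel_product(*rel_u_x):
        """Relative uncertainty u(y)/y for a product or quotient of independent estimates."""
        return math.sqrt(sum(r ** 2 for r in rel_u_x))

    # y = x1 - x2 with u(x1) = 0.3 and u(x2) = 0.4  ->  u(y) = 0.5
    print(u_sum(0.3, 0.4))
    # y = x1 * x2 / x3 with relative uncertainties of 1 %, 2 % and 2 %  ->  3 %
    print(u_rel_product(0.01, 0.02, 0.02))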

Example: power measurement

The simple power measurement above has the measurement equation

\[p = \frac{m}{k}\, \left( \frac{p_\mathrm{m}-z}{g} \right)\, .\]

There is uncertainty in the estimates \(m\), \(k\), \(g\) and \(z\), which are considered independent (no correlations).

If we write

\[p_\mathrm{m}^\prime = p_\mathrm{m} - z\, ,\]

then the equation only involves multiplication and division

\[p = \frac{m}{k}\, \frac{p_\mathrm{m}^\prime}{g}\, ,\]

so the rule in the second line of the table applies and we can write

\[\left[ \frac{u(p)}{p} \right]^2 = \left[\frac{u(m)}{m}\right]^2 + \left[\frac{u(k)}{k}\right]^2 + \left[\frac{u(g)}{g}\right]^2 + \left[\frac{u(p_\mathrm{m}^\prime)}{p_\mathrm{m}^\prime} \right]^2\]

However, we now need to know the uncertainty \(u( p_\mathrm{m}^\prime )\). The indicated power \(p_\mathrm{m}\) has no uncertainty (we know exactly the number displayed), so \(u(p_\mathrm{m}^\prime)\) is actually the same as \(u(z)\) [5].

\[u( p_\mathrm{m}^\prime ) = u(z)\]

The last term may be expressed as

\[\frac{1}{p^2} \left[ \frac{m}{kg} u(z) \right]^2\]

So, after re-arranging we obtain, as before,

\[u(p) = \left\{ p^2 \left[ \left( \frac{u(m)}{m} \right)^2 + \left( \frac{u(k)}{k} \right)^2 + \left( \frac{u(g)}{g} \right)^2 \right] + \left(\frac{m}{kg}\, u(z)\right)^2 \right\}^{1/2} \; .\]

Dependent estimates

Sometimes estimates are perfectly correlated (i.e., \(r(x_i,x_j) = 1\)). This can happen when the same systematic error influences several different parts of a measurement.

The following table shows how to combine uncertainties in such cases.

Equation form                          Uncertainty equation
-------------                          --------------------
\(y = x_1 \pm x_2 \pm \cdots\)         \(u(y)^2 = [u(x_1) \pm u(x_2) \pm \cdots ]^2\)
\(y = x_1 \star x_2 \star \cdots\)     \(\left[\frac{u(y)}{y}\right]^2 = \left[\frac{u(x_1)}{x_1} \pm \frac{u(x_2)}{x_2} \pm \cdots\right]^2\)

In the second row, the star symbol \(\star\) is used to represent either multiplication or division: if multiplication, the corresponding operation in the uncertainty equation is addition; if division, the corresponding operation is subtraction.
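
For perfectly correlated estimates the components therefore add linearly rather than in quadrature, which allows cancellation. A minimal sketch follows (illustrative helper names; each uncertainty is passed with the sign of its \(\pm\) operation).

    def u_sum_correlated(signed_u_x):
        """u(y) for y = x1 +/- x2 +/- ..., with perfectly correlated estimates.
        Pass each u(x_i) with the sign of the corresponding +/- operation."""
        return abs(sum(signed_u_x))

    def u_rel_product_correlated(signed_rel_u_x):
        """Relative u(y) for a product/quotient of perfectly correlated estimates.
        Pass +u(x_i)/x_i for factors that multiply and -u(x_i)/x_i for divisors."""
        return abs(sum(signed_rel_u_x))

    # y = x1 - x2, where the same systematic error affects both estimates equally:
    print(u_sum_correlated([0.2, -0.2]))             # the error cancels -> 0.0
    # y = x1 / x2 with relative uncertainties 1 % (numerator) and 0.4 % (denominator):
    print(u_rel_product_correlated([0.01, -0.004]))  # 0.006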

Example: power measurement (continued)

The residual error associated with the internal correction for zero offset in the power meter is a typical systematic error. It influences every measurement made until the power meter is adjusted again.

In a more detailed uncertainty analysis of the power measurement, we would encounter the ratio

\[\frac{p_\mathrm{m} - z}{p_\mathrm{mc} - z} \;,\]

where \(p_\mathrm{mc}\) corresponds to the meter indication when setting the gain factor (the ‘c’ is for ‘calibration’).

As shown above, the numerator \(p_\mathrm{m}^\prime = p_\mathrm{m} - z\) has uncertainty \(u(p_\mathrm{m}^\prime) = u(z)\). Similarly, the denominator \(p_\mathrm{mc}^\prime = p_\mathrm{mc} - z\) has uncertainty \(u(p_\mathrm{mc}^\prime) = u(z)\).

Now, the term \(z\) appearing in these equations represents the same residual error and our best estimate of \(z\) is zero. So the relative uncertainties of \(p_\mathrm{m}^\prime\) and \(p_\mathrm{mc}^\prime\) can be written as

\[\frac{u(p_\mathrm{m}^\prime)}{p_\mathrm{m}^\prime} = \frac{u(z)}{p_\mathrm{m}}\quad \mathrm{and} \quad \frac{u(p_\mathrm{mc}^\prime)}{p_\mathrm{mc}^\prime} = \frac{u(z)}{p_\mathrm{mc}}\]

and the relative uncertainty of the required ratio \(p_\mathrm{m}^\prime / p_\mathrm{mc}^\prime\) may be calculated using the second row of the table above

\[\frac{u(p_\mathrm{m}^\prime / p_\mathrm{mc}^\prime)}{p_\mathrm{m}^\prime / p_\mathrm{mc}^\prime} = \left( \frac{1}{p_\mathrm{m}} - \frac{1}{p_\mathrm{mc}}\right) u(z) \;.\]
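
This result can be checked numerically against a direct sensitivity calculation with respect to \(z\); the sketch below uses hypothetical meter indications.

    # Hypothetical meter indications and zero-offset uncertainty.
    p_m, p_mc, u_z = 1.045e-3, 0.950e-3, 2.0e-6

    ratio = p_m / p_mc   # best estimate of (p_m - z)/(p_mc - z), with z estimated as 0

    # Relative uncertainty from the correlated-quotient rule above.
    rel_u = (1.0 / p_m - 1.0 / p_mc) * u_z

    # Direct sensitivity: d/dz [(p_m - z)/(p_mc - z)] = (p_m - p_mc)/p_mc**2 at z = 0.
    u_direct = abs((p_m - p_mc) / p_mc**2) * u_z

    print(abs(rel_u) * ratio, u_direct)   # the two agree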

Effective degrees of freedom and the Welch-Satterthwaite formula

To calculate an expanded uncertainty (the half-width of an uncertainty interval), the standard uncertainty obtained from an LPU calculation must be multiplied by a coverage factor that depends on the number of degrees of freedom associated with the measurement result.

An effective number of degrees of freedom can be calculated using information about the degrees of freedom of the input quantity estimates and the components of uncertainty [6].

The calculation is generally referred to as the Welch-Satterthwaite formula. It can be written as

\[\frac{u(y)^4}{\nu_\mathrm{eff}} = \sum_{i=1}^N \frac{u_i(y)^4}{\nu_i} \, ,\]

where \(\nu_\mathrm{eff}\) is the ‘effective degrees of freedom’ and \(\nu_i\) is the number of degrees of freedom associated with the estimate \(x_i\).

Note

The Welch-Satterthwaite formula is only valid when the input estimates that have finite degrees of freedom are independent (uncorrelated) [4].
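
A direct implementation of the formula is shown below; the component values are hypothetical, and `math.inf` is used for a type-B component whose degrees of freedom are taken to be infinite.

    import math

    def welch_satterthwaite(components, dof):
        """Effective degrees of freedom from components u_i(y) and their dof nu_i
        (assuming uncorrelated components, so u(y)^2 is the sum of squares)."""
        u_y2 = sum(u ** 2 for u in components)              # u(y)^2
        denom = sum(u ** 4 / nu for u, nu in zip(components, dof))
        return math.inf if denom == 0.0 else u_y2 ** 2 / denom

    # Two type-A components with finite dof and one type-B component with infinite dof.
    components = [0.12, 0.08, 0.05]
    dof = [9, 4, math.inf]
    print(welch_satterthwaite(components, dof))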

The uncertainty statement

The uncertainty statement for a measurement is an interval, usually centered on the result \(y\) and with a half-width \(U\)

\[[y - U, y + U]\;.\]

The expanded uncertainty \(U\) is calculated from the standard uncertainty \(u(y)\), the required coverage probability (level of confidence) \(p\) and the effective degrees of freedom \(\nu_\mathrm{eff}\)

\[U = k_p(\nu_\mathrm{eff}) \, u(y) \,,\]

where the coverage factor \(k_p(\nu)\) is obtained from the Student t-distribution with \(\nu\) degrees of freedom: it is the value for which the central interval \([-k_p(\nu),\, k_p(\nu)]\) contains a fraction \(p\) of the distribution [7].

Note

The coverage factor calculation assumes that the distribution of errors associated with the estimate \(y\) is Gaussian. This is usually a fair approximation in well-designed measurements.
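
Putting the last two steps together, the expanded uncertainty can be computed from the Student t-distribution. The sketch below uses `scipy.stats.t` and hypothetical values; the coverage factor is chosen so that the central interval \([-k_p,\,k_p]\) contains a fraction \(p\) of the distribution.

    from scipy.stats import t

    def expanded_uncertainty(u_y, nu_eff, p=0.95):
        """U = k_p(nu_eff) * u(y), with k_p from the Student t-distribution."""
        k_p = t.ppf(0.5 * (1.0 + p), df=nu_eff)   # central interval covers a fraction p
        return k_p * u_y

    # Hypothetical result, standard uncertainty and effective degrees of freedom.
    y, u_y, nu_eff = 10.23, 0.15, 16.3
    U = expanded_uncertainty(u_y, nu_eff, p=0.95)
    print(f"result: {y} +/- {U:.2f} (95 % coverage)")   # interval [y - U, y + U]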

Footnotes

[1] Note that, when referring to physical quantities, the GUM uses upper-case letters; lower-case letters are used for estimates.
[2] The type of distribution attributed to an input quantity is used to determine a scale factor that converts between a distribution width parameter (such as the half-width) and the distribution standard deviation. This standard deviation is then used as the standard uncertainty of the estimate in uncertainty calculations.
[3] An approximate formula for the degrees of freedom, which can be used when the range parameter of a type-B distribution is not well known, is given in the GUM (see equation (G.3)).
[4] The Welch-Satterthwaite formula is an approximate method borrowed from classical statistics that applies to a linear combination of independent Gaussian distributions.
[5] The rule for subtraction from the first row of the table has been used.
[6] Degrees of freedom is a notion borrowed from classical statistics, where it is a measure of the sample size used to estimate a parameter. In the GUM, degrees of freedom is defined less formally and can be attributed to type-B uncertainty evaluations as well.
[7] E.g., \(k_{95}(6) = 2.5\) and \(k_{95}(50) = 2.0\).
[GUM] BIPM, IEC, IFCC, ILAC, ISO, IUPAC, IUPAP and OIML, Evaluation of Measurement Data - Guide to the Expression of Uncertainty in Measurement, JCGM 100:2008 (GUM 1995 with minor corrections), Sèvres, France: Joint Committee for Guides in Metrology, 2008 [online: http://www.bipm.org/en/publications/guides/gum.html]