Measurement Uncertainty

What is uncertainty?

I used to be uncertain - now I'm not so sure. In ordinary use the word 'uncertainty' does not inspire confidence. However, when used in a technical sense as in 'measurement uncertainty' or 'uncertainty of a test result' it carries a specific meaning. It is a parameter, associated with the result of a measurement (eg a calibration or test) that defines the range of the values that could reasonably be attributed to the measured quantity. When uncertainty is evaluated and reported in a specified way it indicates the level of confidence that the value actually lies within the range defined by the uncertainty interval.

How does it arise?

Any measurement is subject to imperfections; some of these are due to random effects, such as short-term fluctuations in temperature, humidity and air-pressure or variability in the performance of the measurer. Repeated measurements will show variation because of these random effects. Other imperfections are due to the practical limits to which correction can be made for systematic effects, such as offset of a measuring instrument, drift in its characteristics between calibrations, personal bias in reading an analogue scale or the uncertainty of the value of a reference standard.

Why is it important?

The uncertainty is a quantitative indication of the quality of the result. It gives an answer to the question, how well does the result represent the value of the quantity being measured? It allows users of the result to assess its reliability, for example for the purposes of comparison of results from different sources or with reference values. Confidence in the comparability of results can help to reduce barriers to trade.

Often, a result is compared with a limiting value defined in a specification or regulation. In this case, knowledge of the uncertainty shows whether the result is well within the acceptable limits or only just makes it. Occasionally a result is so close to the limit that, once the uncertainty has been allowed for, the risk that the measured property does not actually fall within the limit must be considered.

Suppose that a customer has the same test done in more than one laboratory, perhaps on the same sample, more likely on what they may regard as an identical sample of the same product. Would we expect the laboratories to get identical results? Only within limits, we may answer, but when the results are close to the specification limit it may be that one laboratory indicates failure whereas another indicates a pass. From time to time accreditation bodies have to investigate complaints concerning such differences. This can involve much time and effort for all parties, which in many cases could have been avoided if the uncertainty of the result had been known by the customer.

What is done about it?

The standard ISO/IEC 17025:2005 [General requirements for the competence of testing and calibration laboratories] specifies requirements for reporting and evaluating uncertainty of measurement. The problems presented by these requirements vary in nature and severity depending on the technical field and on whether the measurement is a calibration or a test.

Calibration is characterised by the facts that:

  1. repeated measurements can be made
  2. uncertainty of reference instruments is provided at each stage down the calibration chain, starting with the national standard and
  3. customers are aware of the need for a statement of uncertainty in order to ensure that the instrument meets their requirements.

Consequently, calibration laboratories are used to evaluating and reporting uncertainty. In accredited laboratories the uncertainty evaluation is subject to assessment by the accreditation body and is quoted on calibration certificates issued by the laboratory.

The situation in testing is not as well-developed and particular difficulties are encountered. For example, in destructive tests the opportunity to repeat the test is limited to another sample, often at significant extra cost and with the additional uncertainty due to sample variation. Even when repeat tests are technically feasible such an approach may be uneconomic. In some cases a test may not be defined well enough by the standard, leading to potentially inconsistent application and thus another source of uncertainty. In many tests there will be uncertainty components that need to be evaluated on the basis of previous data and experience, in addition to those evaluated from calibration certificates and manufacturers' specifications.

International and accreditation aspects

Accreditation bodies are responsible for ensuring that accredited laboratories meet the requirements of ISO/IEC 17025. The standard requires appropriate methods of analysis to be used for estimating uncertainty of measurement. These methods are considered to be those based on the Guide to the expression of uncertainty in measurement, published by ISO and endorsed by the major international professional bodies. It is a weighty document, and the international accreditation community has taken up its principles and, along with other bodies such as EURACHEM/CITAC, has produced simplified or more specific guidance based on them.

Accreditation bodies are harmonising their implementation of the requirements for expressing uncertainty of measurement through organisations such as the European co-operation for Accreditation (EA) and the International Laboratory Accreditation Co-operation (ILAC).

How is uncertainty evaluated?

Uncertainty is a consequence of the unknown sign of random effects and limits to corrections for systematic effects and is therefore expressed as a quantity, ie an interval about the result. It is evaluated by combining a number of uncertainty components. The components are quantified either by evaluation of the results of several repeated measurements or by estimation based on data from records, previous measurements, knowledge of the equipment and experience of the measurement.

In most cases, repeated measurement results are distributed about the average in the familiar bell-shaped curve or normal distribution, in which there is a greater probability that the value lies closer to the mean than to the extremes. The evaluation from repeated measurements is done by applying a relatively simple mathematical formula. This is derived from statistical theory and the parameter that is determined is the standard deviation.
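As a sketch of that evaluation (the readings below are invented for illustration), the standard deviation of a set of repeated measurements, and the standard uncertainty of their mean, can be computed with the standard library:

```python
import statistics

# Hypothetical repeated readings of the same quantity (units arbitrary).
readings = [10.02, 9.98, 10.01, 9.99, 10.03, 10.00, 9.97, 10.01]

mean = statistics.mean(readings)
# Sample standard deviation (n - 1 in the denominator), the usual
# estimate of the spread of repeated measurements.
s = statistics.stdev(readings)
# Standard uncertainty of the mean: s divided by the square root of n.
u_mean = s / len(readings) ** 0.5

print(f"mean = {mean:.5f}, s = {s:.5f}, u(mean) = {u_mean:.5f}")
```

With more repeats, the spread estimate becomes more reliable and the uncertainty of the mean shrinks in proportion to the square root of the number of readings.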

Uncertainty components quantified by means other than repeated measurements are also expressed as standard deviations, although they may not always be characterised by the normal distribution. For example, it may be possible only to estimate that the value of a quantity lies within bounds (upper and lower limits) such that there is an equal probability of it lying anywhere within those bounds. This is known as a rectangular distribution. There are simple mathematical expressions to evaluate the standard deviation for this and a number of other distributions encountered in measurement. An interesting one that is sometimes encountered, eg in EMC measurements, is the U-shaped distribution.
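The divisors for these distributions follow directly from their standard deviations. A minimal sketch, for bounds of half-width a about the estimate:

```python
import math

def u_rectangular(a: float) -> float:
    """Equal probability anywhere within +/- a: u = a / sqrt(3)."""
    return a / math.sqrt(3)

def u_u_shaped(a: float) -> float:
    """Values concentrated towards the extremes of +/- a,
    as in some EMC mismatch contributions: u = a / sqrt(2)."""
    return a / math.sqrt(2)

# eg a correction known only to lie within +/- 0.5 of zero:
print(round(u_rectangular(0.5), 4))  # 0.2887
print(round(u_u_shaped(0.5), 4))     # 0.3536
```

Note that for the same bounds the U-shaped distribution gives a larger standard uncertainty than the rectangular one, because the value is more likely to lie near the extremes.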

The method of combining the uncertainty components is aimed at producing a realistic rather than pessimistic combined uncertainty. This usually means working out the square root of the sum of the squares of the separate components (the root sum square method). The combined standard uncertainty may be reported as it stands (the one standard deviation level), or, usually, an expanded uncertainty is reported. This is the combined standard uncertainty multiplied by what is known as a coverage factor. The greater this factor the larger the uncertainty interval and, correspondingly, the higher the level of confidence that the value lies within that interval. For a level of confidence of approximately 95% a coverage factor of 2 is used. When reporting uncertainty it is important to indicate the coverage factor or state the level of confidence, or both.
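The root sum square combination and the coverage factor described above can be sketched as follows (the component values are invented, and the components are assumed to be uncorrelated):

```python
import math

def combined_standard_uncertainty(components):
    """Root sum square of standard uncertainties (uncorrelated components)."""
    return math.sqrt(sum(u * u for u in components))

# Illustrative standard uncertainties, all expressed in the same unit:
components = [0.10, 0.25, 0.15]
u_c = combined_standard_uncertainty(components)

# Expanded uncertainty at approximately 95% confidence: coverage factor k = 2.
k = 2
U = k * u_c
print(f"u_c = {u_c:.4f}, U (k = {k}) = {U:.4f}")
```

Note how the largest component (0.25) dominates the combined value, which is why the root sum square is considered realistic rather than pessimistic: simply adding the components would overstate the interval.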

What is best practice?

Sector-specific guidance is still needed in several fields in order to enable laboratories to evaluate uncertainty consistently. Laboratories are being encouraged to evaluate uncertainty, even when reporting is not required; they will then be able to assess the quality of their own results and will be aware whether the result is close to any specified limit. The process of evaluation highlights those aspects of a test or calibration that produce the greatest uncertainty components, thus indicating where improvements could be beneficial. Conversely, it can be seen whether larger uncertainty contributions could be accepted from some sources without significantly increasing the overall interval. This could give the opportunity to use cheaper, less sensitive equipment or provide justification for extending calibration intervals.

Uncertainty evaluation is best done by personnel who are thoroughly familiar with the test or calibration and understand the limitations of the measuring equipment and the influences of external factors, eg environment. Records should be kept showing the assumptions that were made, eg concerning the distribution functions referred to above, and the sources of information for the estimation of component uncertainty values, eg calibration certificates, previous data, experience of the behaviour of relevant materials.

Statements of compliance - effect of uncertainty

This is a difficult area and what is to be reported must be considered in the context of the client's needs. In particular, consideration must be given to the possible consequences and risks associated with a result that is close to the specification limit. The uncertainty may be such as to raise real doubt about the reliability of pass/fail statements. When uncertainty is not taken into account, then the larger the uncertainty, the greater are the chances of passing failures and failing passes. A lower uncertainty is usually attained by using better equipment, better control of environment, and ensuring consistent performance of the test.
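One way to make such pass/fail statements explicit, sketched here for a result checked against an upper specification limit (the function and figures are illustrative, not a prescribed procedure), is to report a result as indeterminate when its uncertainty interval straddles the limit:

```python
def compliance(result: float, limit: float, U: float) -> str:
    """Classify a result against an upper specification limit,
    allowing for expanded uncertainty U. Returns 'pass', 'fail',
    or 'indeterminate' when the interval straddles the limit."""
    lo, hi = result - U, result + U
    if hi <= limit:
        return "pass"       # whole interval below the limit
    if lo > limit:
        return "fail"       # whole interval above the limit
    return "indeterminate"  # the limit lies inside the interval

print(compliance(9.2, limit=10.0, U=0.5))   # pass
print(compliance(9.8, limit=10.0, U=0.5))   # indeterminate
print(compliance(10.8, limit=10.0, U=0.5))  # fail
```

The indeterminate band is exactly where the risks discussed above arise, and shrinking U through better equipment or control narrows that band.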

For some products it may be appropriate for the user to make a judgement of compliance, based on whether the result is within the specified limits with no allowance made for uncertainty. This is often referred to as shared risk, since the end user takes some of the risk of the product not meeting specification. The implications of that risk may vary considerably. Shared risk may be acceptable in non-safety critical performance, for example the EMC characteristics of a domestic radio or TV. However, when testing a heart pacemaker or components for aerospace purposes, the user may require that the risk of the product not complying has to be negligible and would need uncertainty to be taken into account. An important aspect of shared risk is that the parties concerned agree on the uncertainty that is acceptable; otherwise disputes could arise later.


Uncertainty is an unavoidable part of any measurement and it starts to matter when results are close to a specified limit. A proper evaluation of uncertainty is good professional practice and can provide laboratories and customers with valuable information about the quality and reliability of the result. Although the expression of uncertainty is common practice in calibration, there is some way to go in testing; however, there is growing activity in the area and, in time, uncertainty statements will be the norm.

Uncertainty in practice

Print this page, and then use a ruler to measure the distance between these lines, in centimetres to two decimal places (eg 4.28 cm). Make a note of the measurement, and of the ruler you use.

Now do it with another ruler; note the result, and also make a note of which ruler you use. Repeat the job with as many different rulers as you can find, noting the ruler used each time. Are the measurements all the same?

Now ask colleagues to do the same, noting the measurements for each ruler.
Do different people produce different results with the same ruler? Do different rulers give consistent results?

Now give one of the rulers to someone else and get them to measure this distance.
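Once the readings are collected, a short sketch like the one below (all figures and ruler labels are invented) shows how the spread within one ruler and the differences between rulers emerge from the data:

```python
import statistics

# Hypothetical measurements (cm) from the exercise, keyed by ruler.
results = {
    "ruler A": [4.28, 4.27, 4.29, 4.28],
    "ruler B": [4.30, 4.31, 4.30],
    "ruler C": [4.25, 4.27, 4.26],
}

for ruler, values in results.items():
    mean = statistics.mean(values)
    spread = statistics.stdev(values)  # variation between observers: random effects
    print(f"{ruler}: mean = {mean:.3f} cm, spread = {spread:.3f} cm")

# A consistent offset between one ruler's mean and the others suggests
# a systematic effect in that ruler's scale.
```

The spread for each ruler reflects random effects (reading the scale, alignment), while consistent offsets between rulers reflect systematic effects of the kind described earlier.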

How confident in the result are you?