
The Three Most Common Needs for Training on Measurement Uncertainty

Open Access | Oct 2025


1.
Introduction to measurement uncertainty

Measurement uncertainty is essential for assessing, stating and improving the reliability of measurements. It is the basis for ensuring that measurement results are metrologically traceable to the SI [1], [2] and thus crucial for metrology for the European Grand Challenges in health, environment and energy [3], [4], as well as for quality infrastructure [5], industry [6], [7], trade, regulations [8], [9], etc.

Guidance on the evaluation of measurement uncertainty is available in the suite of documents of the GUM (the Guide to the expression of uncertainty in measurement [10], [11], [12], [13], [14], [15]). This guide is published on behalf of the international organizations BIPM, IEC, IFCC, ILAC, ISO, IUPAC, IUPAP and OIML, is applicable to a broad spectrum of measurement problems (see Section 2.G and e.g. [10, sec. 1.1], [14, sec. 1]) and is based on sound principles of probability and statistics [16]. Furthermore, the GUM is adopted by national metrology institutes (see Section 1.B), is widely accepted [16, sec. 2.1.1] and is required in accreditation [17], [9], for the statement of uncertainties on calibration certificates [18], [19] and of calibration and measurement capabilities [2]. The GUM is recommended when assessing conformity with tolerances [20] and is the basis for guidelines, standards and policy documents in many application areas [18], [19], [21], [22], [23].

An understanding of measurement uncertainty from the highest scientific level down to the end user [10, sec. 1.1] is the basis for confidence in measurements. The need for a better understanding has been repeatedly expressed, e.g. in a survey [24] and during a research project [25]. The communities with such a need are multifarious, as uncertainty is associated with diverse measurements. The remainder of this section lists the key communities that have requirements for understanding and evaluating measurement uncertainty (Section 1.A), indicates different ways to provide that understanding (Section 1.B), and describes the overall aims of the present study to support that provision from the perspective of training (Section 1.C).

A.
Communities requiring an understanding of uncertainty

To structure the discussion of the communities that need to understand measurement uncertainty, the article follows the metrological traceability hierarchy and focuses on the key organizations of (national) metrology systems. These are 1) national metrology institutes, designated institutes and other signatories of the CIPM MRA [8], 2) accreditation bodies and technical assessors, 3) calibration laboratories, 4) testing laboratories, and 5) legal metrology authorities and their organizations. In addition, 6) students, lecturers and researchers at universities, 7) the metrology communities represented by EURAMET’s European Metrology Networks (EMNs) and Technical Committees (TCs), and finally 8) the teachers of measurement uncertainty themselves require an understanding of measurement uncertainty.

Other communities may have related or different requirements for understanding uncertainty. For example, schoolteachers and students [26], [27], standardization bodies, industrial companies (e.g., manufacturers interested in stating product specifications or confirming the suitability of test equipment), fields new to metrology such as those arising from the digital transformation [28], and society in general all require some level of understanding of uncertainty. However, these requirements are not explicitly covered in this article.

B.
Support for understanding uncertainty

In addition to guidance documents such as the GUM, there are other ways to increase or support understanding of the concepts, evaluation, usage and reporting of measurement uncertainty. These possibilities include research, examples and software, as well as training.

Methods for evaluating uncertainty are an active area of research, and both those covered by the GUM documents and those not covered are continually being investigated and developed (cf. [29] and Section 3.I). Research on applications [29] and on the didactics of uncertainty (see Section 2.H) is also increasing the understanding of it at the highest level. Altogether, more than 3 500 publications in the last 10 years contained the keyword ‘measurement uncertainty’ in their title (according to a Google Scholar search).

Worked-out examples can illustrate the evaluation and reporting of measurement uncertainty, thus enhancing practitioners’ understanding or demonstrating good practice. Many examples can be found in the GUM documents [10], [11], [12], [15] and in a forthcoming dedicated document [30], as well as in many derived guidelines, standards and training courses [31] and in a compendium [7]. However, the principles underlying these examples need to be understood to properly adapt them to measurement models or conditions other than those specified.

A multitude of diverse software [32] exists to calculate uncertainty according to the GUM [33]. This software ranges from user-friendly web applications and GUIs to comprehensive collections of libraries; it implements the methods of one or several of the GUM documents, and its scope ranges from very broad to highly tailored. Some software provides evidence of its validation (cf. [34]). Generally, dedicated software can simplify or even partially automate the evaluation of uncertainty. Its users, however, still require an understanding of the underlying principles.
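As a minimal illustration of what such software automates (a sketch, not a description of any particular package, and with purely hypothetical input values), the following Python snippet propagates uncertainty through a simple measurement model R = V/I, once with the law of propagation of uncertainty and once with Monte Carlo sampling in the spirit of GUM Supplement 1:

```python
# Minimal sketch (not any specific GUM software): propagate uncertainty
# through the model R = V / I, first with the law of propagation of
# uncertainty (LPU, GUM), then with Monte Carlo (GUM Supplement 1).
import numpy as np

# Best estimates and standard uncertainties (illustrative values only)
v, u_v = 10.0, 0.05   # voltage / V
i, u_i = 2.0, 0.01    # current / A

# LPU for uncorrelated inputs:
# u_c^2(R) = (dR/dV)^2 u^2(V) + (dR/dI)^2 u^2(I)
dR_dV = 1.0 / i
dR_dI = -v / i**2
u_lpu = np.hypot(dR_dV * u_v, dR_dI * u_i)

# Monte Carlo: draw the inputs from Gaussians, propagate through the model
rng = np.random.default_rng(1)
samples = rng.normal(v, u_v, 200_000) / rng.normal(i, u_i, 200_000)
u_mc = samples.std(ddof=1)

print(f"R = {v / i:.4f} ohm, u_LPU = {u_lpu:.4f}, u_MC = {u_mc:.4f}")
```

For this nearly linear model the two approaches agree closely; Monte Carlo becomes valuable when the model is strongly non-linear or the output distribution is markedly non-Gaussian.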

An important cornerstone to increase the understanding of measurement uncertainty is to provide training courses to relevant communities. The next section describes how this article will contribute to improving training and thus the understanding of uncertainty.

C.
Training on uncertainty – content and aim of this article

This work will identify training needs in communities that require an understanding of uncertainty. For this purpose, Section 2 describes the requirements for understanding uncertainty and the current provision of training for each identified community. It also includes an overview of the resources available for uncertainty training. Section 3 then identifies gaps in current training on uncertainty and lists training needs for each community as well as overall needs that may arise due to new developments in metrology. Finally, Section 4 summarizes and identifies the needs most common to all communities and gives recommendations for future developments in uncertainty training.

The aim of this work is to raise the awareness of trainees from each of the communities that require an understanding of uncertainty, to highlight their training needs and identify common ones across the communities. These aspects will help teachers of uncertainty to better know their audiences and to better address their needs. The listed resources for uncertainty training will also support teachers. The combination of increased awareness, identified needs, listed resources and recommendations may serve to guide future developments by uncertainty training providers. Ultimately, this work will contribute to increasing the understanding of uncertainty.

The information presented focuses on measurement uncertainty training offered in Europe. It is based on surveys, interviews and a workshop (see Appendices A to C) conducted as part of the project Measurement Uncertainty (MU) Training (an activity of the European Metrology Network for Mathematics and Statistics, see [35]). Information is also based on letters from and meetings with stakeholders of this project, as well as on protocol statements from project partners (see Appendix D; the stakeholder letters are labelled [SLn] and the protocol statements [PS]). In addition, information from several European Metrology Networks is used (see Appendix E) to describe high-level needs in different metrology fields. Published research also provides insights into uncertainty training.

This article is the first to describe training on measurement uncertainty in many different disciplines. The study merges many different sources of information, but the authors do not claim to fully cover all the needs of all audiences. Inevitably, the study is limited by the sources of information used to describe the state of the art and the needs in uncertainty training. Although the completeness and quality of each of the sources varies, the authors think that the set as a whole provides the reader with a good overview. Reference is made to the publicly available sources. Those sources that cannot be published are described in the appendix to transparently link the information with the conclusions drawn from it.

The authors also hope to stimulate discussion in the communities not covered here.

2.
State of the art of training on measurement uncertainty in Europe

Training is usually based on the GUM or derived documents, often illustrated by examples and supported by software (cf. [31], [34], [36]).

This section provides a more detailed overview of the state of the art in uncertainty training, considering each of the 8 communities listed in Section 1.A separately. Each subsection starts with the community’s requirements for evaluating and understanding uncertainty, which can themselves be very heterogeneous.

A.
NMIs, DIs and other signatories of the CIPM MRA

The 252 institutes participating in the CIPM MRA, i.e. 97 national metrology institutes (NMIs), 151 designated institutes (DIs) and 4 international organizations [37], [38], mutually recognize their calibration and measurement capabilities, which are expressed in terms of measurement uncertainty [8, paragr. T2, T7], [2] and should comply with the GUM (see [39, Note 4]; also [20, Note 3 in 7.6]). These institutes therefore require a deep understanding of uncertainty for research, for participation in key comparisons and for the dissemination of traceability. Some larger NMIs employ teams of mathematicians and statisticians who expand, apply and teach methods to evaluate uncertainty (cf. the authors of [29]). However, the management and possibly even the administration of institutes participating in the CIPM MRA also require some understanding of uncertainty.

In Europe, at least the national metrology institutes in Spain (CEM), Poland (GUM), Italy (INRiM), Portugal (IPQ), France (LNE), Switzerland (METAS), the UK (NPL), Ireland (NSAI), Germany (PTB), Sweden (RISE) and Belgium (SMD), as well as the European Commission’s Joint Research Centre (JRC), each teach courses on or including measurement uncertainty. A recent survey (see Appendix A and [36], [34]) provides an overview of these 34 courses, 15 of which are aimed at audiences that include NMI staff (the remainder are aimed at calibration and testing laboratories, legal metrology staff, and academia). Of these 15 courses (1), 9 are dedicated to measurement uncertainty, 5 to metrology and one to calibration. Table 1 gives, for each technical topic, the number of these courses that teach it. Of the 15 courses, 13 use one or more examples and 10 use software in class. 7 courses are available online, most of them in pre-recorded form. 12 of the courses offer some form of certification.

Table 1.

Number of courses in which various technical aspects of measurement uncertainty are taught for audiences including employees of NMIs, calibration or testing laboratories, in legal metrology and at universities (from [36]). For reference, the equivalent question (Q) number for the questionnaire in Table 2 is given.

| Technical content of uncertainty course | Equivalent topic in Table 2 | NMI | lab | legal | uni |
| --- | --- | --- | --- | --- | --- |
| Mathematical tools reviewed / as prerequisite | Q1 | 7 / 7 | 9 / 7 | 0 / 2 | 6 / 6 |
| (Some) probability concepts | Q2 | 15 | 22 | 6 | 12 |
| Basic metrological concepts | Q3 | 15 | 23 | 6 | 15 |
| Standard uncertainties for input quantities | Q4 | 15 | 24 | 6 | 16 |
| Law of propagation of uncertainty (LPU) | Q5 (maybe Q6) | 13 | 23 | 6 | 15 |
| Propagation of distributions via Monte Carlo | Q11 | 7 | 8 | 1 | 9 |
| Validate LPU against Monte Carlo results | Q13 | 7 | 8 | 1 | 6 |
| LPU & Monte Carlo for multivariate models | Q8, Q12 | 3 | 2 | 0 | 2 |
| Reporting of measurement results | Q18 | 14 | 22 | 6 | 16 |
| Number of courses | | 15 | 24 | 6 | 16 |

In addition to this range of uncertainty courses offered by and for single NMIs, EURAMET e.V. and the BIPM promote knowledge transfer in general. The respective websites [40], [41] offer some training materials and courses that also deal with measurement uncertainty (see Section 2.G). The dedicated activity MU Training of the European Metrology Network for Mathematics and Statistics provides support for teachers of uncertainty and training material such as videos explaining uncertainty aspects and overviews surveying uncertainty courses, software and examples (see [35] and Section 2.H).

B.
Accreditation bodies and technical assessors

The 109 accreditation bodies that have signed the ILAC MRA [42] assess and recognize the competence of conformity assessment bodies (i.e., laboratories, inspection bodies, proficiency testing providers, reference material producers, and biobanks) [43]. As a result, trust in accredited services is being built worldwide [44] and accreditation is one (core) dimension of quality infrastructure [5], [45].

Accreditation bodies shall have the competence to assess conformity assessment bodies (cf. [43, sec. 1.2-3], [46]), which includes assessing that these bodies competently evaluate uncertainties in calibration and, where relevant, in testing [20], in medical laboratories [47, sec. 7.3.4], in inspections [48, sec. 6.2.7-9], in proficiency testing [49, sec. 4.4.5], in the production of reference materials [50, sec. 7.13] and in the operation of biobanks [51, sec. 6.5]. To do so, the accreditation bodies define, evaluate and monitor competence criteria and have a procedure for training assessors [46, sec. 6.1]. In particular, they shall identify training needs and provide access to specific training. Some larger accreditation bodies (e.g., UKAS, ACCREDIA and ANAB) themselves offer training and sometimes guidance (e.g., [22], [52]) on the evaluation of uncertainty for testing and calibration laboratories.

In the course survey [36], only one online course [53] specifically lists ‘auditors who need to implement and assess the estimation of measurement uncertainties [...] of reference materials’ among its audiences. Another course is offered by the UK accreditation body (UKAS), which also states that ‘Measurement uncertainty is a key concern in all accredited calibration and testing, consequently the availability of specialist training is vitally important’ [SL16]. The Italian accreditation body (ACCREDIA) provides uncertainty training during summer schools and states that its ‘technical officers have experience in [...] providing training courses on the standard for accreditation [20] focusing on measurement traceability and the evaluation of measurement uncertainty’ [PS]. Also in Europe, the national accreditation bodies in Belgium (BELAC), Germany (DAkkS) and Ireland (INAB) have expressed an interest in improving training on measurement uncertainty [SL17, SL27, SL29], mainly for laboratory personnel (see Section 3.C).

In addition, a questionnaire was created to assess interest, (self-reported) knowledge and preferred teaching methods on 21 topics related to measurement uncertainty (see Appendix B for details). Table 2 lists the 21 topics on which technical assessors of calibration and testing laboratories were surveyed among the members of the European cooperation for Accreditation (EA) and in Italy (2). In the table, the topics on which the calibration laboratory assessors indicated a high level of knowledge are highlighted in dark gray, light gray highlights a medium level of knowledge and white a lower level. (On a scale from 1 to 4, the average scores for EA range from 3.5 to 2.6 for high knowledge, from 2.5 to 2.0 for medium knowledge and from 1.9 to 1.5 for lower knowledge; see Table 2. These thresholds were chosen so that each column of the table contains 9-10 topics at the highest level, 6-7 at the medium level and 4 at the lowest level.) High knowledge is generally reported for basic concepts, input quantities, combined and expanded uncertainty (Q1-7), as well as for reporting uncertainties (Q18) and statements of conformity (Q21). Italian assessors also indicate high knowledge for Q16 (uncertainty from ILC/PT data).

Table 2.

Topics of the questionnaire on which EA and Italian accredited calibration and testing laboratories and their assessors indicated their knowledge (‘know’) and interest (‘inter’). The values indicate the average score; in the published table, color highlights the level (dark gray = highest level, light gray = medium level, white = lower level) of knowledge and interest reported (see Appendix B and the text for details). Emphasized topics largely correspond to topics included in the course survey [36].

Each cell gives the two average scores know / inter.

| Group | Questionnaire topic | Q | Calib. labs, EA | Calib. labs, Italy | Calib. assessors, EA | Calib. assessors, Italy | Test labs, EA | Test labs, Italy | Test assessors, EA | Test assessors, Italy |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Basic math. & metrolog. concepts | Mathem. elements for evaluating uncertainty | Q1 | 2.81/3.60 | 2.65/3.37 | 2.93/3.74 | 3.12/3.60 | 2.53/3.33 | 2.57/3.23 | 2.68/3.33 | 2.74/3.31 |
| | Probability and statistics elements | Q2 | 2.53/3.50 | 2.46/3.40 | 2.93/3.67 | 3.00/3.48 | 2.44/3.21 | 2.44/3.24 | 2.63/3.24 | 2.55/3.36 |
| | Fundamental concepts of metrology | Q3 | 3.21/3.67 | 3.17/3.31 | 3.42/3.72 | 3.64/3.48 | 2.80/3.47 | 2.85/3.50 | 2.94/3.51 | 3.06/3.57 |
| Propagating uncertainties (GUM approach) | Evaluation of type A & B uncertainty components | Q4 | 2.79/3.66 | 2.54/3.47 | 3.00/3.79 | 3.20/3.64 | 2.22/3.13 | 2.28/3.20 | 2.41/3.14 | 2.70/3.44 |
| | Combined stand. uncertainty for uncorr. inputs | Q5 | 2.60/3.47 | 2.36/3.29 | 2.95/3.67 | 3.20/3.60 | 2.18/3.05 | 2.07/3.12 | 2.32/3.13 | 2.35/3.32 |
| | Combined stand. uncertainty for correlated inputs | Q6 | 2.37/3.48 | 2.24/3.42 | 2.63/3.72 | 2.76/3.52 | 2.16/3.06 | 2.07/3.11 | 2.29/3.01 | 2.39/3.35 |
| | Expanded uncertainty (U) & coverage factors (k) | Q7 | 2.84/3.57 | 2.83/3.38 | 3.19/3.79 | 3.36/3.48 | 2.54/3.29 | 2.61/3.36 | 2.66/3.24 | 2.89/3.47 |
| | Applying multivariate measurement models | Q8 | 1.98/3.24 | 1.66/2.90 | 2.12/3.51 | 2.16/3.28 | 1.84/2.82 | 1.65/2.83 | 2.07/2.96 | 1.85/2.99 |
| | Theoretical or empirical measurement models | Q9 | 2.11/3.24 | 1.99/3.11 | 2.28/3.60 | 2.52/3.44 | 2.04/2.91 | 1.93/3.05 | 2.21/3.00 | 2.11/3.15 |
| | Fitness for purpose and target uncertainty | Q17 | 2.04/3.23 | 2.12/3.29 | 2.47/3.56 | 2.72/3.48 | 2.05/3.08 | 2.23/3.43 | 2.31/3.19 | 2.52/3.57 |
| | Reporting measurement results | Q18 | 2.86/3.53 | 2.90/3.28 | 3.19/3.72 | 3.32/3.25 | 2.66/3.42 | 2.84/3.47 | 2.83/3.48 | 3.07/3.54 |
| Conformity | Statements of conformity to specifications | Q21 | 2.27/3.34 | 2.45/3.49 | 2.65/3.74 | 2.84/3.44 | 2.26/3.20 | 2.39/3.43 | 2.45/3.29 | 2.80/3.65 |
| Uncertainty evaluation for specific data | Least squares method applied to metrology | Q10 | 2.18/3.29 | 2.07/3.24 | 2.49/3.65 | 2.68/3.52 | 1.91/2.76 | 2.07/2.98 | 2.19/2.92 | 2.24/3.15 |
| | Uncertainty based on method validation data | Q15 | 1.96/3.27 | 1.83/3.30 | 2.00/3.53 | 2.25/3.28 | 2.34/3.36 | 2.45/3.49 | 2.51/3.33 | 2.72/3.56 |
| | Uncertainty based on ILC/PT data & experience | Q16 | 2.04/3.30 | 2.28/3.61 | 2.16/3.58 | 3.00/3.52 | 2.15/3.21 | 2.33/3.47 | 2.30/3.18 | 2.61/3.55 |
| | Uncertainty for sampling | Q20 | 1.82/2.89 | 1.65/2.84 | 2.16/3.47 | 2.00/3.04 | 1.99/3.12 | 1.80/3.10 | 2.26/3.26 | 2.11/3.48 |
| Propagating distributions (Monte Carlo) | Single measurand (univariate model) | Q11 | 1.42/3.00 | 1.60/3.09 | 1.93/3.63 | 2.08/3.28 | 1.45/2.38 | 1.44/2.63 | 1.62/2.63 | 1.54/2.89 |
| | More measurands (multivariate model) | Q12 | 1.37/2.95 | 1.49/2.94 | 1.72/3.51 | 1.84/3.24 | 1.37/2.31 | 1.40/2.57 | 1.55/2.55 | 1.42/2.85 |
| | vs. GUM approach for uncertainty evaluation | Q13 | 1.44/2.93 | 1.50/3.11 | 1.84/3.60 | 1.92/3.36 | 1.42/2.50 | 1.36/2.70 | 1.67/2.72 | 1.43/3.06 |
| Bayes approach | Alternative methods for uncertainty | Q14 | 1.44/2.86 | 1.30/2.79 | 1.49/3.26 | 1.64/3.20 | 1.34/2.46 | 1.35/2.62 | 1.63/2.66 | 1.58/3.06 |

Technical assessors for testing laboratories generally report slightly lower levels of knowledge. (The average scores for EA range from 3.0 to 2.32 for high knowledge, from 2.31 to 2.0 for medium knowledge and from 1.7 to 1.5 for lower knowledge.) Table 2 illustrates these levels of knowledge. High knowledge is generally reported for the basic concepts (Q1-3), input quantities (Q4), expanded uncertainty (Q7), reporting uncertainties (Q18), statements of conformity (Q21) and uncertainty from validation data (Q15). While the technical assessors of EA also report high knowledge for Q5 (combined uncertainty), the Italian assessors do so for Q16 (uncertainty from ILC/PT data).

C.
Calibration laboratories

Accredited laboratories must comply with ISO/IEC 17025 [20] when performing calibrations (see also [18]) and tests within their accredited scope [54], and thus establish and maintain the metrological traceability of their measurement results [20]. The latter requires the identification of the contributions to uncertainty [20, Note 3 in 7.6], and for calibrations the evaluation of uncertainty shall comply with the GUM [18]. EA-4/02 [19] implements the GUM for calibrations and is mandatory for EA members.

Consequently, the more than 11 000 calibration laboratories [42] accredited by ILAC signatories shall have the competence to evaluate measurement uncertainty in compliance with the GUM when declaring it on calibration certificates [18]. Calibration laboratories thus require an understanding of evaluating and reporting uncertainty.

The survey of courses on or including measurement uncertainty (see Appendix A and [36], [34]) lists 24 courses aimed at audiences including calibration and testing laboratory staff. Of these 24 courses (1), 18 are dedicated to uncertainty, 5 to calibration, 3 to metrology and one to reference materials. Table 1 shows, for each technical aspect of measurement uncertainty, the number of these courses that teach it. All of these courses offer some form of certification. All but one of the courses use at least one example, and 13 use software in class. One third of the courses are given physically, one third both physically and online, and one third online in pre-recorded form.

Calibration laboratories among the members of EA and in Italy were also surveyed with the questionnaire on interest, (self-reported) knowledge and preferred teaching methods on 21 topics related to measurement uncertainty (see Appendix B and Table 2). The topics are generally at the same knowledge level as for the technical assessors, but the laboratories report slightly lower knowledge scores (by about 0.2 to 0.4 score points).

Tables 1 and 2 consistently show that the law of propagation of uncertainty (LPU), together with its inputs and the reporting of results (Q4, Q5, Q18), is frequently taught (i.e., in more than 90 % of the courses) and that European and Italian calibration laboratories report a high knowledge of these topics.
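For reference, the LPU referred to here is the GUM’s first-order expression [10] for the combined standard uncertainty u_c(y) of a measurement model y = f(x_1, ..., x_N):

```latex
u_c^2(y) = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^{2} u^2(x_i)
         + 2 \sum_{i=1}^{N-1} \sum_{j=i+1}^{N}
           \frac{\partial f}{\partial x_i}\, \frac{\partial f}{\partial x_j}\, u(x_i, x_j)
```

For uncorrelated input quantities (Q5) the covariance terms u(x_i, x_j) vanish; the correlated case corresponds to Q6.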

D.
Testing laboratories

Accredited laboratories must comply with ISO/IEC 17025 [20] when performing tests within their accredited scope [54], and thus establish and maintain metrological traceability of their measurement results [20]. The latter requires the identification of the contributions to uncertainty [20, Note 3 in 7.6].

Analogously to the calibration laboratories, the 65 000 testing laboratories [42] accredited by the ILAC signatories shall also evaluate the uncertainty of their measurement results, or at least estimate it [20, sec. 7.6.3]. However, ILAC G17 [55, clause 3] also recognizes documents that are alternatives to the GUM. Testing laboratories therefore require an understanding of evaluating and reporting uncertainty, although at a different level than calibration laboratories.

The survey of courses on or including measurement uncertainty lists 24 courses aimed at audiences including calibration and testing laboratory staff (see Section 2.C). That is, the survey provides some information on uncertainty training for testing laboratories, but it does not separate this audience from calibration laboratories.

Testing laboratories among the members of EA and in Italy were surveyed with the questionnaire on interest, (self-reported) knowledge and preferred teaching methods on 21 topics related to measurement uncertainty (see Appendix B and Table 2). Testing laboratories generally report a slightly lower level of knowledge than their assessors and than calibration laboratories. The topics are generally at the same knowledge level as for their technical assessors (see Section 2.B).

In addition, Tables 1 and 2 consistently show that input quantities and the reporting of uncertainties (Q4, Q18) are frequently taught (>90 %) and that European and Italian testing laboratories report a high knowledge of them.

E.
Legal metrology authorities and their organizations

Legal metrology aims to ensure trust and fairness in measurements covering trade, and to protect health, safety and the environment [56], [45]. Within this scope, measurement-based requirements and requirements for measuring instruments or systems are laid down in national regulations [57]. The International Organization of Legal Metrology (OIML) with its 64 member states and 63 corresponding states promotes the global harmonization of legal metrology laws, and offers its members guidance for their national legislation [56]. In particular, OIML D 1 [56, sec. 6.5, element no. 10] recommends that measurement results should be traceable when covered by regulations, performed to control regulated prepackages and provided by regulated instruments, and that non-fulfilment should be an offence [56, element no. 29]. The requirements for regulated measurements themselves ordinarily include the required measurement uncertainties [56, sec. 6.5.1, element no. 17]. In all these cases, the uncertainties should be evaluated following the GUM [9], [56, art. 17]. In addition, it is recommended that conformity assessment procedures for regulatory enforcement follow OIML guidance [56, sec. 6.6], which includes the recent document [58] on how to take uncertainty into account in conformity decisions in legal metrology.
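To make the role of uncertainty in such conformity decisions concrete, the following is a schematic guarded-acceptance decision rule; the function, tolerance limits and values are illustrative assumptions, not taken from [58] or any other OIML document:

```python
# Schematic sketch of a guarded-acceptance decision rule (cf. decision rules
# under ISO/IEC 17025 and OIML guidance): an item is accepted only if its
# measured value plus/minus the expanded uncertainty U stays inside the
# tolerance limits. All names and values here are illustrative.

def conforms(measured: float, lower: float, upper: float, U: float) -> bool:
    """Guarded acceptance: shrink the tolerance interval by U on each side."""
    return (lower + U) <= measured <= (upper - U)

# Hypothetical instrument error limits of +/-0.5 % with U = 0.1 % (k = 2):
print(conforms(0.45, -0.5, 0.5, 0.1))  # False: inside tolerance, outside guard band
print(conforms(0.30, -0.5, 0.5, 0.1))  # True: accepted
```

A shared-risk rule would instead compare the measured value directly against the tolerance limits; the guard band shifts the consumer’s risk of falsely accepting a non-conforming item onto the supplier.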

Consequently, local legal metrology authorities, which implement legal controls, conduct surveillance inspections and verifications of instruments and prepackages, and accept or reject them [56, sec. 3.2.4], require an understanding of the importance and evaluation of uncertainty for regulated measurements (see [59] for training content for staff in verification offices). In addition, the national legal metrology authorities (which may or may not be part of a country’s NMI) that develop metrological controls, study requirements and calibration and test equipment, carry out or supervise type evaluation, and provide training in legal metrology [56, sec. 3.2.3] require an understanding of the importance and evaluation of uncertainty in legal metrology. Furthermore, the central government authority in charge of the national metrology policy and of coordinating metrology-related government actions requires an understanding of the importance of uncertainty [56, sec. 3.2.1]. An understanding of the concept of uncertainty is also required by the secretariats, conveners and members of technical committees, subcommittees and project groups at the OIML [58] and at regional legal metrology organizations such as WELMEC.

The European Cooperation in Legal Metrology, WELMEC, had 9 active working groups in 2022, 8 of which were interviewed about their understanding of and need for training on measurement uncertainty (see Appendix C). Several working groups promote aspects of uncertainty in guidelines, such as Guide 4.2 [60], Guide 6.9 [61] and, to some extent, Guide 13.1 [62]; Guide 8.10 [63] recommends sampling plans. In general, WELMEC guides target various conformity assessment activities for the legal control of regulated measuring instruments [SL15]. The working group convenors are usually NMI employees and stated in the interviews that they themselves have appropriate knowledge of measurement uncertainty.

The survey of courses on or including measurement uncertainty (see Appendix A and [36], [34]) lists 6 courses aimed at audiences from verification authorities (cf. Table 1). These 6 courses are offered either by one of three NMIs or by a federated academy for legal metrology. They include (1) 4 courses dedicated to uncertainty and one course each on metrology experiments and on gas pump verification. Table 1 shows, for each technical aspect of measurement uncertainty, the number of these courses that teach it. Software is used in all but one course, and the same courses each offer at least one example. Two courses are also available online and one is online only (in pre-recorded form). All courses offer certification, half of them after an exam.

F.
Students, lecturers and researchers at universities

The evaluation of uncertainty is an essential part of courses on measurement data in university degree programs in metrology and physics [64], [65], [66]. Aspects of uncertainty are also taught in degree programs in engineering [67], chemistry [68], [69], biology [70], [71] and medicine [72] (especially laboratory medicine) and in related fields. The literature shows that state-of-the-art teaching of measurement uncertainty follows the concepts of the GUM; at the same time, however, it also shows that this is not yet the case in all courses [70]. In addition to students and lecturers, researchers whose work is based on measurement results also need to use, report or evaluate uncertainties, and it is their responsibility to use pertinent methods to do so.

The survey of courses (see Appendix A and [36], [34]) lists 16 courses aimed at audiences including students, lecturers and/or researchers at universities. Of these 16 courses (1), 9 are dedicated to uncertainty, 6 to measurement or metrology, one to testing and certification, and one to reference materials. Table 1 shows, for each technical aspect of measurement uncertainty, the number of these courses that teach it. All of these courses offer some form of certification and use at least one example in class. Ten courses use software for uncertainty propagation or uncertainty budget calculation in the classroom. Of the courses, 5 are held physically, 7 both physically and online, and 4 online only (most in pre-recorded form).

Given the number of universities in Europe and the number of degree programs they offer, the survey of uncertainty courses may not provide a representative overview. Particularly those universities or departments without NMI contacts or not teaching GUM concepts may be underrepresented in the survey, considering that 9 of the 16 courses at the universities included in the survey are taught by NMI staff.

In addition to the survey, there is research that provides insight into the content of uncertainty courses at universities and schools. For example, [73, p. 34] summarizes that ‘the topic of measurement uncertainties is traditionally introduced at universities during laboratory courses [74], [75], [76], [77]. [...] Often, these laboratory courses are accompanied by theoretical (statistics) courses that introduce the topic of measurement uncertainties [75], [77], [78], [79].’ In secondary education, however, ‘measurement uncertainty is a topic that is often neglected’ [73, p. iii].

G.
Different metrology fields

There are 12 European Metrology Networks (EMNs) that represent the measurement science community in different fields. These EMNs analyze and address the European and global metrology challenges, and uncertainty has already been explicitly identified as a challenge by 5 of these EMNs (see Appendix E for details). In particular, the EMN

  • Mathmet identified the foundational topic ‘Data Analysis and Uncertainty Evaluation’, as well as the demand for research on uncertainty to support the strategic topics of ‘Artificial Intelligence and Machine Learning’ and ‘Computational Modelling and Virtual Metrology’ (see [29] and Section 3.I for details).

  • Climate and Ocean Observation identified uncertainty as a general metrology challenge, particularly for in situ observations of essential climate variables in the land domain, for ocean observations and for remote sensing [80].

  • Advanced Manufacturing identified requirements for uncertainty, particularly in the cross-cutting topics of intelligent product design, advanced materials and smart manufacture and assembly. The EMN envisages that instruments or their digital twins provide measurement results including uncertainties, and that AI algorithms are validated and ‘calibrated’ for uncertainty determination [81].

  • Radiation Protection’s vision is that ‘quality assurance including measurement traceability to the SI system is available for all measurements in the respective exposure situation addressed under the European legislation.’ They want to contribute to this with reliable data, including uncertainties [82].

  • Smart Electricity Grids identified ‘grid monitoring and data analytics’ as one of 8 themes with particular metrological relevance for smart grids, and identified for this theme the requirement to develop ‘big data analytics and visualisation platforms with adequate evaluation of measurement uncertainty’ and ‘machine learning algorithms for short-term load forecasting’ [83].

  • Laboratory Medicine’s mission is to provide metrological traceability of in vitro diagnostics, which is important in health care, and for devices regulated in EU 2017/746 [84]. Uncertainty is not explicitly mentioned, but is a prerequisite for traceability.

The EMNs were founded only in 2019 or later, so that at the beginning of 2024 the strategic research agendas of 6 EMNs were either not yet available or did not mention the keyword ‘uncertainty’ (c.f. Appendix E). None of the EMNs have (yet) published information on the state of the art of measurement uncertainty training in their field. Nevertheless, some EMNs have identified uncertainty training as one of their priorities, see Section 3.G.

EURAMET advertises and archives events [85] such as trainings, workshops and courses organized in different metrology fields. Between 2017 and 2024, 12 events containing the keyword ‘uncertainty’ were organized, either within projects of EURAMET’s research programs such as EMPIR, as EURAMET capacity building activities, by bodies closely related to EURAMET, or by the EMN Mathmet. These events comprise 5 workshops, 6 training courses and a tutorial, all explicitly dedicated to teaching measurement uncertainty topics, either at a general level or for a specific application (i.e., spectral data, volatile organic compound measurements, chemical analysis and sampling, flow measurements, electrical power and energy, mass calibration and volume measurements). Among the 48 events that included the more generic keyword ‘training’, 10 further events had measurement uncertainty evaluation on their agenda. These trainings were all offered in the last two years and by the same types of organizers as the above-mentioned events on measurement uncertainty. All of these events confirm the cross-disciplinary nature of measurement uncertainty and the transversality of the need for its comprehension, modeling and evaluation in many different metrology fields.

H.
Teachers of measurement uncertainty

In general, following Shulman’s model of teachers’ professional knowledge [86, p. 8], the three dimensions of content knowledge, general pedagogical knowledge, and content-specific pedagogical knowledge (i.e., knowledge of how best to teach a certain topic) are needed to teach a topic successfully. When digital tools are used, additional technological knowledge is needed and it overlaps with all three dimensions mentioned above [87]. Furthermore, teaching is facilitated by a sufficient number of teachers as well as suitable materials that address concepts and support learning. This section focuses on the state of the art related to providing measurement uncertainty training, which is common to the community of teachers, and points to available resources for teaching measurement uncertainty. Section 3.H will point to further needs.

In the case of measurement uncertainty, content knowledge implies a thorough understanding of the concepts and limitations of measurement uncertainty as well as practical competence in the evaluation, use and reporting of measurement uncertainty as described in the GUM suite of documents [10], [11], [12], [13], [14], [15] and in research. In order to design training tailored to a specific audience, awareness of audience needs, expectations and prior training (with respect to metrology, mathematics, etc.) is also required to select relevant content, suitable examples, software, etc. This specific background for each community has been described in the subsections of Sections 2 and 3 and will not be repeated here.

The second and third of the three dimensions needed for successful teaching are general pedagogical skills and pedagogical content knowledge specific to measurement uncertainty. The little research available on the latter gives insights into, for example, learning problems [78], pre- or misconceptions [88], and validated assessment tools (e.g., [89], [90]) to monitor the progress of trainees from mere knowledge of the existence of measurement uncertainty to its handling, assessment, and finally its conclusiveness [27]. In addition, [73, p. 34-35] and [91] conclude that the most important aspect of successfully teaching uncertainty is a concept-based approach with emphasis on the underlying principles rather than just statistical and calculational procedures. Training on general pedagogical skills as well as dedicated training centers can help teachers develop courses (e.g., [92]) or skills that were often not part of their education.

Practical experiences were exchanged between teachers of measurement uncertainty at a recent workshop [93]. The participants emphasized the importance of examples (e.g., an annotated template of a simple uncertainty budget) to illustrate the training content and to enable learning by example, as well as the importance of (interactive) exercises, quizzes and other practical work to reach higher levels of cognition (c.f. [94]). Knowledge transfer can also be improved through interactions between trainees and through follow-up contacts and consultations between experts and trainees after the course content has been implemented. In turn, trainee feedback can support the improvement of future courses. In addition, general pedagogical knowledge about methods such as blended learning and the flipped classroom [95] enables more individual learning and more time for discussions in class. Exchange of good practices and experiences between teachers of uncertainty enhances their capabilities; for example, the MU Training activity can serve as a reference point for uncertainty training and improve both pedagogical content knowledge and pedagogical knowledge. Among others, the activity organized a workshop for non-professional teachers (see [93] and [96]) and created a framework to mutually attend courses within its consortium.
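The kind of annotated uncertainty budget template mentioned above can be sketched in a few lines of Python; the measurement, uncertainty contributions and values below are purely illustrative, not taken from any course material:

```python
import math

# Hypothetical, annotated budget for a simple length measurement.
# Each entry: (source of uncertainty, standard uncertainty u_i in mm,
#              sensitivity coefficient c_i of the measurand w.r.t. that input).
budget = [
    ("indication (Type A, repeated readings)", 0.010, 1.0),
    ("calibration of the reference (Type B)",  0.008, 1.0),
    ("thermal expansion correction (Type B)",  0.005, 1.0),
]

# Combined standard uncertainty per GUM-LPU, assuming uncorrelated inputs:
# u_c^2 = sum_i (c_i * u_i)^2
u_c = math.sqrt(sum((c * u) ** 2 for _, u, c in budget))

# Expanded uncertainty with coverage factor k = 2 (roughly 95 % coverage)
U = 2 * u_c
print(f"u_c = {u_c:.4f} mm, U (k=2) = {U:.4f} mm")
```

In a course, each line of such a template can be annotated with the corresponding GUM concept (Type A/B evaluation, sensitivity coefficient, combined and expanded uncertainty), which supports the concept-based approach advocated above.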

The context in which uncertainty training is given differs substantially. The course survey [36] provides insights into the organizing bodies and the training frameworks, such as metrology and calibration training or various Bachelor, Master and PhD programs. Furthermore, the courses cover different metrology areas and vary in frequency, duration and language.

In addition to the professional knowledge of teachers, Shulman [86, p. 9] points to didactically sound materials and tools to support teachers. These include tailored curricula (c.f. Sections 2 and 3), but also examples and software that can enhance courses on uncertainty and bridge the gap between theory and application. Surveys on both [31], [33] provide guidance for choices tailored to the audience and context. Introductory material and e-learning can help to align the background knowledge of heterogeneous audiences. The MU Training activity offers the exchange of course material within its consortium. The GUM documents themselves [10], [11], [12], [13], [14], [15], textbooks (e.g., [97], [98]) and guidelines in different metrology fields support the teaching of uncertainty. Video material on certain aspects of uncertainty is also available to support teachers [99]. A Digital Learning Environment on Measurement Uncertainties with videos and practice problems for secondary school students has been developed [100] and assessed in [73]. Dedicated e-learning courses are also offered by CEM in Spanish, by LNE in French, and by JRC and NPL in English (see [36]). In general, educational technology has been shown to have a positive impact on learning outcomes when used intentionally [101]. Technological tools are abundant, and some multi-purpose tools are listed in [95].

3.
The training needs of each community

This section identifies separate needs for uncertainty training for each community of NMIs, accreditation bodies, calibration and testing laboratories, legal metrology, universities and different metrology fields. While communities may also have needs for tailored guidance, examples or software on uncertainty, those needs will only be touched upon if related to training.

Training providers, such as the European NMIs, are usually interested in how to better deliver their training. If particular training needs are not specific to any of the audiences, they are deferred to Section 3.H, where the needs of the teachers are summarized. In addition, Section 3.I identifies needs that may emerge due to new developments in metrology.

Section 4 then summarizes the needs identified in this section and provides recommendations and an outlook.

A.
NMIs, DIs and other signatories of the CIPM MRA

The survey on uncertainty courses in Europe [36] highlighted that little training on multivariate models [12] and thus on the calculation of correlation between multiple measurands exists – although these correlations should always be evaluated when measurands depend on common input quantities. There is also a rather small number of courses dedicated to propagating distributions (c.f. Table 1). This shortage may imply that there is a need for such courses and/or that awareness of the importance of both topics needs to be raised.
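The point that measurands sharing common input quantities become correlated can be made concrete with a small Monte Carlo sketch in the spirit of the propagation of distributions; all quantities and values are hypothetical. Two measurands that include the same calibration correction end up strongly correlated even though every sampled input is independent:

```python
import random
random.seed(1)

M = 200_000  # Monte Carlo trials (propagation of distributions)

y1, y2 = [], []
for _ in range(M):
    d_cal = random.gauss(0.0, 0.02)   # shared calibration correction (common input)
    r1 = random.gauss(10.0, 0.01)     # independent repeatability, measurand 1
    r2 = random.gauss(20.0, 0.01)     # independent repeatability, measurand 2
    y1.append(r1 + d_cal)             # both measurands include the same correction
    y2.append(r2 + d_cal)

def mean(v):
    return sum(v) / len(v)

m1, m2 = mean(y1), mean(y2)
cov = mean([(a - m1) * (b - m2) for a, b in zip(y1, y2)])
var1 = mean([(a - m1) ** 2 for a in y1])
var2 = mean([(b - m2) ** 2 for b in y2])
corr = cov / (var1 * var2) ** 0.5
# theory: corr = 0.02**2 / (0.02**2 + 0.01**2) = 0.8
print(f"correlation(y1, y2) = {corr:.2f}")
```

Ignoring such a covariance when the two measurands are later combined (e.g., in a difference or ratio) gives a wrong combined uncertainty, which is why [12] requires these correlations to be evaluated.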

In addition, some of the smaller NMIs aim to extend their offer on uncertainty training because they do not currently teach courses or want to serve new audiences [PS]. Emerging NMIs stated that they themselves have a need for training on uncertainty methods [SL20, on MC] and a need for uncertainty trainers to teach their stakeholders. A DI would like to strengthen its teaching on uncertainty due to retiring teachers and with regard to aspects such as sampling, calibration, testing, binary or ranked tests and conformity assessment [SL22].

B.
Accreditation bodies and technical assessors

The current sources of information provide little insight into the training of accreditation body personnel. Neither the course survey [36] nor the 5 accreditation bodies that are partners or stakeholders of the MU Training activity include training that is explicitly tailored to this audience. The authors suspect that this community attends training that is mainly directed at other audiences, such as NMI or laboratory staff.

The technical assessors were also asked about their level of interest in 21 uncertainty topics in the questionnaire described in Section 2.B and Appendix B (see Table 2). In general, the level of interest is similar to the level of reported knowledge on the topics, but the average interest score is higher and less dispersed than the average knowledge score. For example, for assessors of EA calibration laboratories, average scores range from 3.8 to 3.65 for high interest, from 3.65 to 3.53 for medium interest, and from 3.51 to 3.26 for lower interest. This result can be interpreted as a general lack of structured training on uncertainty topics.

Let us consider the topics that raise even more interest than knowledge compared to other topics and could therefore be taught more to technical assessors. These are Q11 (univariate Monte Carlo) and Q13 (GUM vs. Monte Carlo) for assessors of EA calibration laboratories. For assessors of testing laboratories, topics Q17 (fitness for purpose of uncertainty) and Q20 (uncertainty for sampling) raise more interest than knowledge. Among Italian assessors, reported knowledge and interest seem to diverge on more topics. Surprisingly, the prerequisites for the topics with high knowledge and interest (Q7, 18, 21) do not stimulate equally high interest among the assessors of EA testing laboratories. This means that there is a lack of interest, and partly also a lack of knowledge, for input and combined uncertainties (Q4-6) and for modeling (Q8-9). This outcome may be explained by the requirement in [20, sec. 7.8.3.1] to state uncertainties in test reports only if they are relevant for the test result, the customer or the conformity statement. (For example, stating conformity based on simple acceptance with an uncertainty or uncertainty sources limited by a documented test method [13, sec. 8.2.4, 8.2.5] may avoid explicitly evaluating the uncertainty.) Assessors generally reported less interest in Q12 (multivariate Monte Carlo) and Q14 (Bayesian methods). While for assessors of EA calibration laboratories interest in Q8 (multivariate LPU) and Q20 (uncertainty for sampling) is also low, for testing laboratories it is also low for Q11 (univariate Monte Carlo) and Q13 (GUM vs. Monte Carlo).

For each of the topics in the questionnaire, respondents were also asked about their preferred teaching method, choosing between the options ‘Theoretical’, ‘Exercises illustrated by the teacher’, ‘Exercises carried out by the attendee’ and ‘Use of dedicated software’. For all topics, ‘Exercises illustrated by the teacher’ is the most preferred teaching method (or among the most preferred) of the technical assessors. This can be interpreted as a general demand for more exercises in uncertainty courses (c.f. Sections 1.B and 2.H for the importance, and a caveat, of exercises and examples).

C.
Calibration laboratories

Calibration and testing laboratories are generally expected to have less statistical and numerical competence in evaluating uncertainties than NMIs ([SL6] and [102]). Calibration laboratories offering metrology training expressed the need

  • for a common comparable approach and a clearly structured educational system for the teaching of measurement uncertainty [SL8]

  • for comparable uncertainty training and expertise [SL11], and

  • for a better transfer of the mathematical basics into practice in calibration and testing [SL4].

One standardization body expressed the need for uncertainty training particularly directed at European calibration and testing laboratories, and assumes that the latest edition of ISO/IEC 17025 poses challenges to this audience that require training [SL6], e.g. conformity statements that account for uncertainty. In addition, national accreditation bodies

  • are ‘interested in teaching that is targeted at the laboratory practitioner level including basic measurement equations, methods to recognize and address correlation, and calculation, reporting and use (importing) of measurement uncertainty within GUM-LPU in the presence of dominant type B uncertainties’ [SL16]

  • identified the need to ‘improve the trainings in this technical field [authors’ note: of determining uncertainties in testing and calibration performed by accredited laboratories] and with that the competence of all participating laboratory staff’ [SL17]

  • stated the importance of ‘setting up training to raise awareness about measurement uncertainties and to ensure the deep understanding of their concepts’ for all laboratories that apply for accreditation [SL28], and

  • identified the need for ‘common, best quality material to fill the gap by appropriate trainings targeting very different audiences’ as ‘founding concepts are often not really understood, because drown in extensive ex-cathedra calculus’ [SL28].

In the questionnaire summarized in Table 2, laboratories were also asked about their level of interest in 21 uncertainty topics. The high levels of interest there largely coincide with the expectations for uncertainty training observed by one training provider [102]: the selection, modeling and description of uncertainty sources (Q4-5), establishment and extension of the basic measurement model (Q9), covariance (Q6, 8), Monte Carlo (Q11) and decision rules (Q7, 21). In general, the level of interest is similar to the level of reported knowledge on the topics, but the average interest score is higher and less dispersed than the average knowledge score (see Table 2). Again, this result can be interpreted as a general lack of training on uncertainty topics and confirms the need expressed above by standardization and accreditation bodies. In addition, there are topics that raise even more interest than knowledge compared to other topics. This is generally the case for Q11 (univariate Monte Carlo) and Q13 (GUM vs. Monte Carlo). Among the Italian laboratories, other topics also raise relatively more interest than knowledge. On the other hand, uncertainty for sampling (Q20) raises less interest compared to other topics.

Table 1 and Table 2 coherently show that the propagation of distributions via Monte Carlo and the validation of LPU against Monte Carlo results are rarely taught (33 %) and that European and Italian calibration laboratories have little knowledge about them (generally in the lower third). However, there is interest in both topics, as Table 2 shows. The multivariate extension of LPU and Monte Carlo is taught even less (<10 %), and interest is medium and low, respectively – despite the need to consider correlations when measurands depend on common input quantities. Since calibrations often imply correlations, there may be a need to raise awareness of the importance of multivariate (joint) modeling of measurands.
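What the validation of LPU against Monte Carlo involves can be sketched minimally; the measurement model Y = X1 · X2 and all values below are hypothetical and chosen for illustration only, not a substitute for the full validation procedure of the GUM supplements:

```python
import math
import random
random.seed(42)

# Hypothetical measurement model: Y = X1 * X2
x1, u1 = 10.0, 0.1   # best estimate and standard uncertainty of X1
x2, u2 = 5.0, 0.2    # best estimate and standard uncertainty of X2

# GUM-LPU: u_y^2 = (dY/dX1 * u1)^2 + (dY/dX2 * u2)^2 = (x2*u1)^2 + (x1*u2)^2
u_lpu = math.sqrt((x2 * u1) ** 2 + (x1 * u2) ** 2)

# Propagation of distributions: sample the inputs, evaluate the model
M = 200_000
ys = [random.gauss(x1, u1) * random.gauss(x2, u2) for _ in range(M)]
m = sum(ys) / M
u_mc = math.sqrt(sum((y - m) ** 2 for y in ys) / (M - 1))

print(f"u(Y): LPU = {u_lpu:.3f}, Monte Carlo = {u_mc:.3f}")
```

For this nearly linear case the two results agree closely; courses typically then move to strongly non-linear models, where the Monte Carlo coverage interval departs from the LPU-based one and the validation becomes informative.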

Again, the preferred teaching method for all topics is ‘Exercises illustrated by the teacher’. One could interpret this preference as the general demand for more exercises in uncertainty courses, which reiterates the need for better knowledge transfer into practice and for tailored courses as expressed above by a training provider and by standardization and accreditation bodies.

D.
Testing laboratories

One manufacturer for whom product and component testing is essential stated the need for training on identifying sources of uncertainty, or more generally on applying the ‘hard math’ in practice, and the need for development engineers, managers and suppliers to be trained at different levels [SL23]. A testing laboratory aims to keep its knowledge up-to-date and to improve uncertainties by, among other things, being informed about uncertainty software and courses [SL32].

Table 2 ranks the interest of accredited testing laboratories in uncertainty topics more generally. Testing laboratories generally report average interest scores that are higher and less dispersed than their average knowledge scores. Again, this can be interpreted as a general lack of training on uncertainty topics. Surprisingly, the prerequisites for the topics with high knowledge and interest (Q7, 18, 21) do not stimulate equally high interest among testing laboratories. This means that there is a lack of interest, and partly also a lack of knowledge, for correlated input and combined uncertainties (Q5, 6) and for modeling (Q8, 9). (C.f. assessors of testing laboratories in Section 3.B for an explanation.)

Tables 1 and 2 coherently show that the propagation of distributions via Monte Carlo (Q11) and the validation of LPU against Monte Carlo results (Q13) are rarely taught (33 %) and that European and Italian testing laboratories have little knowledge about them (in the lower third). The multivariate extension of Monte Carlo (Q12) is taught even less (<10 %), and interest is low.

Again, the preferred teaching method for all topics is ‘Exercises illustrated by the teacher’, which could be interpreted as a general demand for more exercises in uncertainty courses.

E.
Legal metrology authorities and their organizations

WELMEC reported a general shortage of teaching material and courses that are both accessible and understandable to the different specialists in legal metrology [SL15], such as notified bodies or field inspectors. During the interviews summarized in Appendix C, WELMEC WG 6 stated the additional need for support in improving Guide 6.9 [61], especially on model building, and in developing software that includes uncertainty for the packer procedure. WG 8 stated the need for training on conformity assessment according to JCGM 106 [13]. In addition, one federated academy for legal metrology stated the need to offer more blended learning with a larger online portfolio for teaching [PS].
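The kind of conformity decision addressed by JCGM 106 can be sketched as a guard-banded decision rule. The tolerance, uncertainty and guard-band factor below are hypothetical; the choice w = 1.64 u corresponds to a specific false-accept risk of about 5 % for a normally distributed measurement error:

```python
# Guard-banded acceptance in the spirit of JCGM 106:
# shrink the acceptance interval by a guard band w = k * u on each side
# to limit the risk of falsely accepting a non-conforming item.
def accept(measured, lower, upper, u, k=1.64):
    """Accept only if the measured value lies inside the guarded interval."""
    w = k * u  # guard band width (k = 1.64: ~5 % specific false-accept risk)
    return lower + w <= measured <= upper - w

# Hypothetical example: tolerance [9.9, 10.1], standard uncertainty u = 0.02
print(accept(10.00, 9.9, 10.1, 0.02))  # well inside -> True
print(accept(10.08, 9.9, 10.1, 0.02))  # inside tolerance but in guard band -> False
```

The second case is exactly the situation field inspectors and notified bodies must recognize: the indication lies within the tolerance, yet the measurement uncertainty prevents a confident statement of conformity.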

The course survey [36] shows a particular lack of courses on the propagation of distributions via Monte Carlo and on multivariate modeling that are accessible to legal audiences. In addition, the few courses covered in the survey highlight that either a more complete overview or more courses on measurement uncertainty is needed for the legal community. WELMEC’s statement above indicates the latter need. If new national or European legislation or guidelines are adopted, the existing courses may need to be adapted.

F.
Students, lecturers and researchers at universities

The relatively low number of courses on uncertainty offered at universities and included in the survey [36] highlights the need for a more complete overview of such courses, especially among those university departments with weaker links to metrology. Among the courses included in the survey, a lack of training on multivariate models at universities is indicated.

Three universities expressed interest in improving their teaching of uncertainty, e.g. with new training material, and in establishing new courses [SL21,33,34]. In addition, they

  • would like to focus more on the concepts and importance of uncertainty, and would like to increase online delivery and other new digital teaching tools in the curricula [SL33]

  • aim to extend their offer to include a course covering uncertainty in regression and the relevance of uncertainties for machine learning models and big data analytics [SL33]

  • aim particularly at the calculation of sensitivity coefficients and expanded uncertainties via LPU taking into account distributions, correlations and a minimum number of measurements, as well as the evaluation of uncertainty in calibrations (including needs for methods, models, budgets, validation), sampling and via Monte Carlo (needing a review of software and uncertainty budget calculations) [SL34].
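The sensitivity coefficients mentioned in the last item need not be derived analytically; the following sketch (with a hypothetical model and values) approximates them by central finite differences and feeds them into GUM-LPU:

```python
# Hypothetical measurement model: measurand Y = f(X1, X2) = X1 / X2
def f(x1, x2):
    return x1 / x2

x = [8.0, 2.0]    # best estimates of the input quantities
u = [0.04, 0.01]  # associated standard uncertainties

# Sensitivity coefficients c_i = df/dx_i via central finite differences
def sensitivity(i, h=1e-6):
    lo, hi = list(x), list(x)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

c = [sensitivity(i) for i in range(len(x))]  # analytically: [0.5, -2.0]

# GUM-LPU for uncorrelated inputs: u_y = sqrt(sum_i (c_i * u_i)^2)
u_y = sum((ci * ui) ** 2 for ci, ui in zip(c, u)) ** 0.5
U = 2 * u_y  # expanded uncertainty, coverage factor k = 2
```

Numerical differentiation of this kind is a convenient classroom device, since it lets students check hand-derived sensitivity coefficients for any differentiable model without symbolic algebra.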

The physics department of a fourth university expressed interest in new teaching approaches of the GUM, applications in basic research and would like to stay up-to-date [SL18]. A fifth university is interested in making reference courses on the fundamentals of measurement uncertainty training available to students in the fields of basic science and engineering [SL26]. In addition, one cooperation expressed that the understanding of uncertainty is not yet fully established in university education in the area of traceability in analytical chemistry [SL1].

G.
Different metrology fields

For the metrology fields of Advanced Manufacturing, Climate and Ocean Observation, Laboratory Medicine, Mathematics and Statistics, Radiation Protection, and Smart Electricity Grids, uncertainty has been identified as a challenge (see Section 2.G and Appendix E). For these fields, the EMN

  • Climate and Ocean Observation also states that ‘One of the most common requests [...] is for training in both instrumental techniques and in uncertainty analysis’ [103, p. 36f]. Their roadmap towards 2025 and 2032 aims to tailor training in uncertainty analysis to communities and to connect these communities to generic training developed, for example, through the EMN Mathmet. In particular, uncertainty training courses or training support is needed for ocean observation and remote sensing.

  • Radiation Protection aims to provide expertise and support their vision [82] with reliable data including uncertainties. There is also interest to interact with the EMN Mathmet [SL30].

  • Advanced Manufacturing aims to leverage metrology advancements through knowledge transfer and training. They want to harmonize fundamental metrology courses, connect national metrology training hubs and to coordinate training material, course development and transfer [81]. Training on uncertainty is particularly important because manufactured components are accepted based on the specified manufacturing tolerances, the measurement values for measurands of the components and their associated measurement uncertainties [SL13].

  • Smart Electricity Grids stated that its training focuses on metrological educational resources specific to scientific and engineering know-how related to electrical energy and power, and that it is interested in extending its training offerings to include courses on the fundamentals of measurement uncertainty [SL25].

  • TraceLabMed and its members offer uncertainty training, e.g., for calibration laboratories, and stated their interest in obtaining feedback and establishing a core curriculum for their audiences [SL31].

  • Mathmet identified [29] the need for research on uncertainty (see Section 3.I for details). During interviews, 7 members of Mathmet’s Stakeholder Advisory Committee expressed interest in training activities on measurement uncertainty (mostly medium or high interest). One of these stakeholders [WELMEC e.V.] also suggested that training activities on applications to conformity decisions would be useful. Currently, the MU Training activity of the EMN is improving the quality, efficiency and dissemination of measurement uncertainty training.

Within EURAMET e.V., scientific and technical cooperation in different fields is organized in Technical Committees (TCs, see [104] and its subpages). TC-Length members [SL7], the TC for Metrology in Chemistry and several subcommittees of the TC for Mass and Related Quantities are involved in knowledge transfer. The TC for Interdisciplinary Metrology also monitors the training needs for early career researchers in metrology [TC-IM Early Career project group] and the TC-Thermometry does this in its field [SL5, TC-T Best practice WG objectives]. TC-Thermometry also initiates courses among NMIs and accredited laboratories, and is interested in an exchange on measurement uncertainty to develop new courses and improve existing ones [SL5]. TC-Length considers improving the understanding of uncertainty concepts to be crucial for decisions in science, industry and legal metrology, and is interested in new training material for uncertainty courses. They particularly observe the need to improve uncertainty training in the practical realization and improvement of length and angle units, as well as in the application of measurement techniques in fields ranging from nanotechnology to advanced manufacturing and long range measurements. They are highly interested in common training material and in exchange with the MU Training activity [SL7].

H.
Teachers of measurement uncertainty

Teachers of measurement uncertainty need knowledge of the subject, general and uncertainty-specific pedagogical knowledge, technology-related knowledge, knowledge of their audience and the context in which they teach, as well as appropriate teaching materials and tools (c.f. Section 2.H). The number of qualified teachers must also meet the demand. Sections 3.A to 3.G collected evidence on which of these aspects need to be improved for particular audiences. Finally, this section collects overarching needs.

Stakeholder letters expressed a need for general and uncertainty-specific pedagogical knowledge. Specifically, NMIs

  • would like to strengthen those who teach and those who require an understanding of uncertainty, particularly outside of calibration laboratories and outside of the areas covered by EA-4/02 [19], i.e., product testing in legal and regulatory contexts, uncertainties in chemical and biological analysis, and multivariate uncertainties [SL19].

  • expressed the need for a basis that facilitates knowledge transfer and understanding of uncertainty, such as the exchange of teaching methods and practical experiments [SL19].

  • emphasized the importance of the Monte Carlo method, correlations between quantities and making sophisticated statistics better accessible to technical staff [SL14].

The general need to strengthen those teaching and those requiring an understanding of uncertainty has been expressed for all areas of chemical measurements, e.g., medicine, food safety, environmental and climate protection [SL1]; in civil engineering research to improve the use of metrology, e.g., for concrete dams, buildings, structures, geotechnics, hydraulics, materials and transportation [SL10]; and for manufacturers in the semiconductor industry and of measuring instruments, to improve the quality of their products. For the latter, exchange and training material on coverage and confidence intervals for non-metrologists, on uncertainty contributions that are unknown or originate from temporal or climatic drifts, on the use of prior knowledge acquired from similar products, and on examples for electrical high-frequency applications are of particular interest [SL24]. One NMI has observed the need for a comprehensive, easy and efficient treatment of uncertainty in regression problems to support teaching in calibration contexts [105]. (The JCGM has also been planning for some time to publish a guide on applications of the least-squares method [30].) The European Commission’s science and knowledge service would like to gain more visibility for its uncertainty training on reference materials, and thus for the importance of reliable measurement results and the role of reference materials, e.g., in food, feed and environmental analysis, engineering and health applications [SL35]. One DI wants to strengthen its teaching of uncertainty as many of its teachers are retiring [SL22]. In countries with fewer metrology resources and smaller audiences for uncertainty topics, it may be more difficult for NMIs and other training providers to offer uncertainty training tailored to heterogeneous audiences and contexts, and to do so regularly (see, e.g., [106], [107] for hints).

Additional needs for uncertainty training may exist in communities that were not surveyed in this article.

I.
Emerging needs

In its strategic research agenda [29], the EMN Mathmet has identified urgent needs, new challenges and opportunities in the areas of mathematics and statistics in metrology. For this purpose, Mathmet identified the strategic topics ‘Artificial Intelligence (AI) and Machine Learning (ML)’ and ‘Computational Modelling and Virtual Metrology’, as well as the foundational topic ‘Data Analysis and Uncertainty Evaluation’, based on an extensive stakeholder consultation process. For the topic ‘Data Analysis and Uncertainty Evaluation’, Mathmet foresees the following challenges relevant to measurement uncertainty: uncertainty evaluation for scientific applications and for small and large sample sizes, Bayesian statistics, analysis of key comparison data, statistical tests, as well as model design, selection and validation, accounting for model errors, and the GUM suite of documents. Persons interviewed by Mathmet reported, among other things, the following applications for which guidance on appropriate uncertainty calculations was still missing: biomedical applications, high frequency applications (non-stable processes), regression problems with complicated uncertainty structures, analytical measurements (chemistry and life sciences, non-stationary data), optical surface and coordinate metrology, and uncertainty calculations of AI and ML data analysis approaches.

For the topic ‘AI and ML’, one of the recurring emerging issues relevant to uncertainty is its modeling and evaluation, to ensure robustness and reliability, and, ultimately, the trustworthiness and traceability of results. Uncertainty quantification of AI/ML predictions is important for the verification and validation of algorithms and software; understanding the impact of measurement uncertainty on the robustness of AI/ML systems is essential to pave the way towards their standardization and to improve decision making based on such systems; assessing the confidence in AI and ML results is needed for the certification of relevant applications. The presentation [108] derives some training needs from current and emerging uncertainty approaches in ML.

For the topic ‘Computational Modelling and Virtual Metrology’, uncertainty evaluation is likewise essential, and open issues remain in this context, especially in ensuring compliance with current metrological standards such as the GUM [10], [11], [12].

Most of the above-mentioned challenges and gaps concerning the modeling and evaluation of measurement uncertainty are not directly accompanied by an explicit need for corresponding uncertainty training. Nonetheless, any new challenge or research topic, in whatever area or application, will also require corresponding support on the training side. Hence, ideas and strategies for the development of future training courses and curricula should be developed in step with the many present and emerging needs for measurement uncertainty evaluation, across the different application fields and for the different audiences that ask for support.

4.
Conclusions
A.
Three common needs for training on uncertainty

This work revealed three major needs in relation to uncertainty training that are common to many of the communities requiring an understanding of uncertainty.

First, there is a need to address a general lack of training on uncertainty for calibration laboratories, testing laboratories and their technical assessors, in legal metrology, in the fields of the EMNs Climate and Ocean Observation, Advanced Manufacturing and Smart Electricity Grids, in the thermometry, length and analytical chemistry communities, and for emerging topics. This lack is most often identified for tailored approaches to uncertainty training (including examples), but also for fundamental or common approaches, and when new guidance or regulation is adopted. Smaller and emerging NMIs need qualified teachers to strengthen their teaching or expertise. Calibration laboratories and teachers of uncertainty see the need to improve general and uncertainty-specific pedagogical knowledge, e.g., by encouraging exchange between those who teach uncertainty.

Second, a better overview of the state of the art of uncertainty training is needed, especially at universities, for accreditation body personnel and in the different metrology fields represented by the European Metrology Networks. A better overview of uncertainty training may also be needed in legal metrology, if the lack of evidence of courses accessible to legal metrology audiences is not entirely due to an overall lack of such courses. In addition, all communities that need an understanding of uncertainty but are not covered in this article could benefit from an overview of the state of the art of uncertainty training.

Third, the information collected and investigated for this study highlights that more training is needed on specific technical topics related to uncertainty evaluation. In particular, little training on the propagation of distributions via Monte Carlo is provided for NMI staff, for calibration and testing laboratories and their assessors, and in legal metrology. This lack includes the validation of the law of propagation of uncertainty against Monte Carlo results. In addition, little training on multivariate measurement models is provided for NMI staff, for calibration and testing laboratories and their assessors, as well as in legal metrology and at universities. Since neither the validation of results from the law of propagation of uncertainty nor the presence of correlation between input and/or output quantities should be a rarity, there is a need to raise these communities' awareness of the generalization from propagating uncertainties to propagating distributions [11], as well as of the extension to several output quantities [12]. This is expected to happen with the recent rebranding of the whole suite of documents as ‘the GUM’ [14], with the desire for more reliable uncertainty statements, and with digitalization efforts. Training is also needed in different communities on other aspects of the evaluation of uncertainty, e.g., on model building, uncertainty for sampling, or the evaluation of input and combined uncertainties.
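To illustrate the two techniques contrasted above, the following sketch (the measurement model and all numerical values are invented for illustration) propagates distributions through a simple model via Monte Carlo, in the spirit of GUM Supplement 1 [11], and compares the result with the first-order law of propagation of uncertainty:

```python
import numpy as np

# Illustrative model (an assumption, not from the article): a resistance
# R = V / I measured via a voltage V and a current I, each with a
# Gaussian state-of-knowledge distribution.
rng = np.random.default_rng(1)
M = 200_000                        # number of Monte Carlo trials

V, u_V = 5.0, 0.05                 # voltage / V: estimate, std. uncertainty
I, u_I = 0.10, 0.002               # current / A: estimate, std. uncertainty

# Monte Carlo (propagation of distributions): sample the inputs and
# evaluate the model for every draw.
V_s = rng.normal(V, u_V, M)
I_s = rng.normal(I, u_I, M)
R_s = V_s / I_s
y_mc, u_mc = R_s.mean(), R_s.std(ddof=1)

# Law of propagation of uncertainty (first order):
# u(R)^2 = (dR/dV)^2 u(V)^2 + (dR/dI)^2 u(I)^2 with dR/dV = 1/I,
# dR/dI = -V/I^2.
u_lpu = np.hypot(u_V / I, V / I**2 * u_I)

print(f"Monte Carlo: R = {y_mc:.3f} ohm, u = {u_mc:.3f} ohm")
print(f"LPU:         R = {V/I:.3f} ohm, u = {u_lpu:.3f} ohm")
```

For this nearly linear model the two approaches agree closely; the validation step mentioned above amounts to checking that such agreement holds within a stated numerical tolerance, and flagging models for which it does not.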

B.
Discussion and outlook

To review the state of the art of, and the needs for, training on measurement uncertainty in Europe, this article merged many different sources of information. Some of these sources were collected as part of the MU Training activity between 2021 and 2024 and range from surveys, interviews and stakeholder letters to workshop presentations. Other sources include research literature, websites about courses, and strategic agendas. The authors do not claim complete or equal coverage of all training on measurement uncertainty in Europe, and caution readers about the resulting limitations.

The needs identified for each community, as well as the three most common needs, indicate where future developments are most required to improve the understanding of uncertainty. This may be aided by a discussion in each community about the underlying causes of the identified needs. Future work could expand the study beyond Europe and compare the needs identified across regions.

Some of these prioritized needs can be taken up and addressed by the MU Training activity, which will continue in the future. This article can also serve as a guide for future developments by the EURAMET Technical Committees and the European Metrology Networks for their communities. For example, the EMN Mathmet is considering planning a summer school on uncertainty. In addition, training providers in all communities, whether covered here or not, can benefit from this study.

This article is also a valuable source of information for teachers of uncertainty: it helps them to better address the needs of their audience and to learn about the research, practical teaching experience, and available materials and tools specific to uncertainty training. If taken up, this information will help to improve training on uncertainty. Ultimately, this work will contribute to increasing the understanding of measurement uncertainty.

Courses are counted twice if they fulfill several of the listed characteristics.

Because the questionnaire was conducted by the Italian accreditation body, Italian participants were strongly represented; the article nevertheless focuses on the European (EA) participants.

Language: English
Page range: 257 - 275
Submitted on: Jun 2, 2025 | Accepted on: Aug 14, 2025 | Published on: Oct 6, 2025
In partnership with: Paradigm Publishing Services
Publication frequency: Volume open

© 2025 Katy Klauenberg, Peter Harris, Philipp Möhrke, Francesca Pennecchi, published by Slovak Academy of Sciences, Institute of Measurement Science
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.