Quality Infrastructure Data
For ten years, we have been dealing with the question of how to measure the development and performance of a country’s Quality Infrastructure. Our continuous effort involves comparing the level of development of different countries and measuring the progress of a country’s Quality Infrastructure over time. This data can be compared with other indicators such as population size, economic power, competitiveness, exports, etc. Measuring Quality Infrastructure requires a reliable database in the areas of metrology, standardization, accreditation and conformity assessment. This blog post deals with metrology data and thereby initiates a series of blog posts dedicated specifically to QI data.
CIPM Mutual Recognition Arrangement
The credibility of the Quality Infrastructure depends on comparable and internationally accepted measurement results. Since 1999, the International Committee for Weights and Measures (CIPM) has operated a Mutual Recognition Arrangement (CIPM MRA), the framework through which National Metrology Institutes (NMIs) demonstrate the international equivalence of their measurement standards and of the calibration and measurement certificates they issue. The outcomes of the CIPM MRA are the internationally recognized, peer-reviewed and approved Calibration and Measurement Capabilities (CMCs) of the participating institutes. Approved CMCs and supporting technical data are publicly available from the CIPM MRA database (the KCDB).
Key Comparison Database (KCDB)
The Key Comparison Database (KCDB) was launched on 14 October 1999 and has since been accessible at https://www.bipm.org/kcdb/. On 29 October 2019, the International Bureau of Weights and Measures (BIPM) relaunched the KCDB website. Users can now search CMCs and comparisons with free-text keywords or via a predefined menu. Additionally, they can filter CMCs by approval date, combine the advanced search with free keywords, filter CMC measurand values and uncertainties numerically, sort results and export search results to a spreadsheet. At the beginning of 2020, the BIPM extended the KCDB’s statistics functions, making it easier to compare the metrological competence of different countries.
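As a sketch of how such a spreadsheet export might be analysed further, the snippet below aggregates a hypothetical, heavily simplified CSV export by country. The column names and rows are invented for illustration and do not reflect the KCDB’s actual export schema.

```python
import csv
import io
from collections import Counter

# Hypothetical, simplified excerpt of a KCDB CMC export; a real export
# contains more columns (service, measurand, uncertainty, approval date, ...).
sample_export = """country,metrology_area,quantity
Germany,EM,DC voltage
Germany,M,Mass standard
Kenya,M,Mass standard
Brazil,T,Temperature
"""

def cmcs_per_country(csv_text):
    """Count exported CMC rows per country."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["country"] for row in reader)

counts = cmcs_per_country(sample_export)
print(counts.most_common())  # [('Germany', 2), ('Kenya', 1), ('Brazil', 1)]
```

The same pattern extends to grouping by metrology area or RMO once the relevant column of the export is known.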
The CIPM MRA and the KCDB distinguish three indicators, each providing different information on the metrological competence of a country’s NMI and Designated Institutes (DIs):
- Key Comparisons (KC) refer to the central measuring techniques in an area. The NMIs prove the accuracy of their measurements by comparing their highest standards.
- Supplementary Comparisons (SC) relate to comparisons conducted by the Regional Metrology Organizations (RMOs) to meet specific needs not covered by the KCs.
- Calibration and Measurement Capabilities (CMC) refer to the ability of NMIs to provide internationally accepted calibration and measurement services.
Together, Key and Supplementary Comparisons (K&SC) relate to the national standards and show the degree of equivalence of measurement results. K&SC provide the scientific evidence underpinning the Calibration and Measurement Capabilities (CMCs).
Currently, the KCDB lists 282 CIPM MRA participants with 1,058 key comparisons, 607 supplementary comparisons and 25,792 CMCs (KCDB, retrieved 30/06/2020).
The KCDB data are separated into different metrological areas:
- Acoustics, Ultrasound, Vibration (AUV)
- Electricity and magnetism (EM)
- Length (L)
- Mass and related quantities (M)
- Photometry and radiometry (PR)
- Chemistry and biology (QM)
- Ionizing Radiation (RI)
- Thermometry (T)
- Time and frequency (TF)
The members of the CIPM Consultative Committee (CC) carry out the key comparisons in their area.
Figure 1: Number of key and supplementary comparisons by metrology area
Figure 2: Number of CMCs by metrology area
The figures show that Key and Supplementary Comparisons (K&SC) and Calibration and Measurement Capabilities (CMCs) are not evenly distributed over the metrology areas. Among the comparisons, the number is highest in Mass (M), followed by Chemistry and Biology (QM), Ionizing Radiation (RI) and Electricity and Magnetism (EM). The number of Supplementary Comparisons (SC) is particularly high for Mass (M) and Electricity and Magnetism (EM). For CMCs, in contrast, the Chemistry and Biology (QM) figures are by far the highest, followed by Electricity and Magnetism (EM), Ionizing Radiation (RI) and Thermometry (T).
Another form of disaggregation is by world region (and country). The KCDB provides the data geographically according to six RMOs.
Figure 3: Regional Metrology Organizations
As noted above, the members of the CIPM Consultative Committees carry out the key comparisons in their field; additionally, up to three RMO members can participate.
Figure 4: Number of CMCs by RMO
Figure 4 shows that the European National Metrology Institutes (EURAMET) hold 11,363 CMCs, 44.1% of all CMCs worldwide. They are followed by the Asia-Pacific NMIs (APMP) with 6,453 CMCs (25.0%), the Pan-American NMIs (SIM) with 4,685 (18.2%), the Eurasian NMIs (COOMET) with 2,650 (10.3%) and the African NMIs (AFRIMETS) with 641 CMCs (2.5%). The NMIs of the Gulf States (GULFMET) do not yet have any CMCs.
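The regional shares follow directly from the CMC counts; a quick arithmetic check using the figures as retrieved on 30/06/2020:

```python
# CMC counts per RMO as retrieved from the KCDB on 30/06/2020.
cmcs_by_rmo = {
    "EURAMET": 11_363,
    "APMP": 6_453,
    "SIM": 4_685,
    "COOMET": 2_650,
    "AFRIMETS": 641,
    "GULFMET": 0,
}

total = sum(cmcs_by_rmo.values())
shares = {rmo: round(100 * n / total, 1) for rmo, n in cmcs_by_rmo.items()}
print(total)              # 25792 -- matches the worldwide CMC count
print(shares["EURAMET"])  # 44.1
```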
KCDB data time series
The Key Comparison Database (KCDB) has provided data for about 20 years.
Figure 5: Total number of key comparisons and supplementary comparisons registered in the KCDB
Figure 5 shows the continuous growth in the number of K&SC from 2003 to 2015, a development that will continue in the years to come. In April 2020, the BIPM counted 1,657 K&SC.
Figure 6: Total number of CMC registered in the KCDB
The number of CMCs has increased significantly over time. However, structural breaks are noticeable, which can be explained by adjustments to the data basis or by methodological changes such as the introduction of the uncertainty tables (see Figure 7). The relaunch of the KCDB at the end of 2019 brought additional changes in the counting method, so that the CMC numbers are only conditionally comparable over time.
Figure 7: Measurement uncertainty table (example)
The differences in measurement uncertainties are a complementary indicator of the metrological competence of an NMI or Designated Institute.
However, it is not always advisable for a country to strive for the highest level of metrological competence, because the costs of achieving higher accuracy increase exponentially. In this respect, a “race for measurement uncertainty” makes no sense; rather, the level of measurement uncertainty should always correspond to the needs of the respective country.
Are CMCs a good measure for national metrological competence?
It is tempting to use the CMCs as an indicator for the international comparison of countries’ metrological competence. The data is available in an open database and can be disaggregated by country.
However, the BIPM emphasizes that CMCs alone are not a good indicator for comparing the metrological performance of different countries:
“NMIs should be advised to use the percentage of coverage of their services by CMCs as a metric of success rather than the number of CMCs … The number of CMCs alone should not be considered a metric of the success of an NMI.” 
Several reasons limit the explanatory power of CMCs as a measure of metrological performance, for example:
- The scopes of CMCs in different metrological areas often differ significantly. In some metrology areas, one CMC stands for a broad measurement competence, whereas in other areas one CMC represents only a particular measurement capability. Individual CMCs also differ in terms of their measurement uncertainty (see Figure 7).
- Counting the CMCs can even have unwanted effects. Some NMIs might be tempted to register as many CMCs as possible to give the impression of high metrological competence.
- The competence of secondary laboratories is also essential to understand the metrological capability of a country. In countries with large, competent calibration laboratories, the NMI may, under certain circumstances, limit itself to essential CMCs. On the other hand, if the national system of calibration laboratories is weakly developed, the need for CMCs increases.
Since the BIPM explicitly offers the KCDB for statistical use, it would be necessary for the CIPM to make the coverage of the individual CMCs more uniform.
One possibility for a more differentiated interpretation is to consider the number of CMCs in specific metrological areas or subareas. The KCDB distinguishes nine areas with a total of 47 subareas. A country’s measurement competence is recognized as broad if its registered CMCs are distributed over different areas and subareas. For example, it makes a difference whether a country has its CMCs exclusively in physical metrology or also in chemical metrology.
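One simple way to quantify how broadly a country’s CMCs are distributed, not an official KCDB metric and shown here with invented numbers, is the entropy of the CMC counts over the metrology areas:

```python
import math

# Invented CMC counts per metrology area for two illustrative countries;
# these are not KCDB figures.
country_a = {"EM": 40, "M": 35, "T": 30, "QM": 25, "L": 20}  # spread over areas
country_b = {"M": 150}                                        # concentrated

def area_spread(cmcs_by_area):
    """Shannon entropy (bits) of a country's CMC distribution over areas:
    0 if all CMCs sit in a single area, larger for a broader spread."""
    total = sum(cmcs_by_area.values())
    return -sum(n / total * math.log2(n / total) for n in cmcs_by_area.values())

print(area_spread(country_a) > area_spread(country_b))  # True
```

A plain count of distinct areas or subareas covered would serve the same purpose; entropy additionally rewards an even spread over those areas.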
Another control variable is the participation of a country in the CIPM Consultative Committees. Only countries that are leaders in the respective measurement area are represented in these committees. If a country is represented in several CCs, this shows that its metrological competence is high and broadly distributed.
When assessing the metrological competence of a country, the Key and Supplementary Comparisons (K&SC) should also be considered. With Key Comparisons, a country’s NMI proves the accuracy of its measurements and lays the foundation for the CMCs. The CMCs express how the NMI converts these competencies into services.
Figure 8 shows a strong correlation between K&SC and CMC.
Figure 8: Correlation between metrological capabilities and comparisons
Countries with a low number of K&SC have relatively few CMCs, and vice versa. Among countries with high levels, however, we observe differences. Germany, for instance, has a large number of K&SC compared to the number of its CMCs; the German NMI, the PTB, deliberately focuses on specific measured quantities. In contrast, the People’s Republic of China has a relatively large number of CMCs compared to the number of its K&SC.
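The kind of correlation shown in Figure 8 can be computed as follows; the country figures here are invented for illustration and are not taken from the KCDB:

```python
# Invented K&SC and CMC counts for five hypothetical countries, only to
# illustrate how the correlation shown in Figure 8 can be computed.
ksc = [5, 40, 120, 260, 310]       # key & supplementary comparisons
cmcs = [30, 300, 900, 1500, 1400]  # calibration and measurement capabilities

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson(ksc, cmcs)
print(round(r, 2))  # close to 1: K&SC and CMC counts move together
```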
All in all, the number of CMCs is the best available proxy for the metrological performance of a country. However, when comparing countries, additional indicators should be used. These include participation in the CIPM Consultative Committees, the distribution of CMCs across different metrological areas and the number of accredited calibration laboratories in the country.
Finally, the metrological competence of a country needs to be appropriate to the level of development and structure of the national economy. In this respect, it is essential to compare the metrological capability metrics with the economic performance, exports and competitiveness of a country.
But this will be the matter of another blog post …
Feature photo by Pexels
HARMES-LIEDTKE, U. & OTEIZA DI MATTEO, J. J. (2011). Measurement of Quality Infrastructure. Discussion Paper. Braunschweig: Physikalisch-Technische Bundesanstalt.
HENSON, A. (2015). The CIPM MRA: Past, present and future. Sèvres: BIPM.
KÜHNE, M. (2012). International Recognition of NMI Calibration and Measurement Capabilities: The CIPM MRA. Varenna: International School of Physics “Enrico Fermi”.
BIPM (2016). Recommendations from the Working Group on the Implementation and Operation of the CIPM MRA. Sèvres: BIPM.
HARMES-LIEDTKE, U. & OTEIZA DI MATTEO, J. J. (2019). Measurement and Performance of Quality Infrastructure. A Proposal for a Global Quality Infrastructure Index. Buenos Aires and Duisburg: Mesopartner and Analyticar.