Pressure Calibration Terminology

Having a general knowledge of these pressure calibration terms and their definitions will help you understand the other concepts on this page. If you're already familiar with measuring instrument specifications or calibration certificates, consider these a review or skip to the next section.

Accuracy vs. Uncertainty

Accuracy and uncertainty are two of the most common terms used in the specifications of pressure measuring and controlling devices; however, they are often confused with each other.
According to the International Vocabulary of Metrology (VIM), measurement uncertainty is defined as the "parameter associated with the result of a measurement that characterizes the dispersion of values that could reasonably be attributed to the measurand," or, in other words, a measure of the possible error in the estimated value as a result of the measurement. In day-to-day terms, it is essentially the accumulation of all the systematic components that contribute to the overall error in measurement. The typical components contributing to an instrument's measurement uncertainty are the defined uncertainty of the reference instrument, the effect of ambient conditions, the intrinsic uncertainty of the instrument itself, and the deviation recorded in measurement.
Accuracy, on the other hand, is defined in the VIM as the "closeness of agreement between a measured quantity value and a true quantity value of a measurand." Accuracy is more a qualitative concept than a quantitative measurement. Manufacturers often use this term to represent the standard value of the maximum difference between measured and actual or true values.
So what does it really mean for pressure calibration?
[Figure: Uncertainty graph]
With pressure as a measurand, the uncertainty of the instrument depends on the reference calibrator's uncertainty; the linearity, hysteresis, and repeatability of measurements across measurement cycles; and the compensation for ambient conditions such as atmospheric pressure, temperature, and humidity. This is typically reported at a certain coverage factor. The coverage factor, usually symbolized by the letter "k," is a numerical multiplier applied to the combined standard uncertainty to derive the expanded uncertainty at a stated confidence level. For example, k = 2 represents a 95% confidence level in reporting the expanded uncertainty, while k = 3 represents a 99% confidence level. In pressure calibration, expanded uncertainty is typically reported at k = 2.
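As a sketch of how such components combine, the hypothetical values below (illustrative only, not from any datasheet) are combined by root-sum-of-squares and then multiplied by the coverage factor:

```python
import math

# Hypothetical component standard uncertainties, all in psi (illustrative values):
u_reference = 0.0010    # uncertainty of the reference instrument
u_ambient = 0.0004      # effect of ambient conditions
u_instrument = 0.0008   # intrinsic uncertainty of the instrument itself
u_measurement = 0.0005  # deviation recorded in measurement

# Combine uncorrelated components by root-sum-of-squares
u_combined = math.sqrt(u_reference**2 + u_ambient**2
                       + u_instrument**2 + u_measurement**2)

# Expanded uncertainty at coverage factor k = 2 (~95% confidence)
k = 2
U = k * u_combined
```

Real uncertainty budgets follow the GUM in detail (sensitivity coefficients, correlated terms); this sketch only shows the root-sum-of-squares step and the coverage factor.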
Because accuracy is a qualitative concept, it allows for more flexibility in interpretation and may be defined differently by different manufacturers. As the overall representation of the closeness of values, it often encompasses the contributions of measurement uncertainty, long-term stability, and a guard band over an interval of time. The purpose of this term is to provide the user with an estimate of the overall worst-case specification of their instrument over the stated time interval.


Precision

Precision is defined by the VIM as "closeness of agreement between indications or measured quantity values obtained by replicate measurements on the same or similar objects under specified conditions." In practical terms, precision describes how close an instrument's readings are to one another when the same measurement is taken multiple times under the same conditions, such as ambient conditions, test setup and reference instrument used.
[Figure: Precision representation]
In pressure calibration, precision plays a significant role when measurements are taken going upscale and downscale in pressure multiple times during the calibration. The error in the same measurement between these cycles determines the precision. It is a specification that encompasses the linearity, hysteresis, and repeatability of the measurement.


Linearity

In an ideal world, all measuring devices are linear, i.e., the deviation between the true value and the measured value throughout the range can be represented by a straight line. Unfortunately, this isn't the case. All measuring instruments have some level of nonlinearity associated with them, meaning the deviation between the true value and the measured value varies across the range.
For pressure calibration, nonlinearity is measured by going upscale through various measuring points and comparing the readings to the true output. Nonlinearity can be compensated for in a few different ways, such as best fit straight line (BFSL), zero-based BFSL or multipoint linearization. Each method has its pros and cons.
The best fit straight line (BFSL) method fits a single straight line that best represents the measuring points and their outputs across the range. The line is drawn to minimize the relative error between the actual curve and the line itself. This method is most commonly used in applications requiring lower accuracy, where the nonlinearity bandwidth is relatively high.
Zero-based BFSL is a derived form of the BFSL method where the line passes through the zero or the minimum point of the range to ensure the offset of the zero point is mitigated.
Multipoint linearization is the most thorough process of the three. This method allows the line segment between multiple points in the range to be modified to come as close as possible to the actual calibration curve. This approach, although tedious, ensures the highest amount of correction toward nonlinearity. Measuring points typically include the zero and span point and then a multitude of different points can be selected within the range of the DUT.
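As an illustration of the BFSL idea, the sketch below fits an ordinary least-squares line through hypothetical calibration points and reports the worst-case deviation from that line (all values are invented for the example; real BFSL definitions vary by standard):

```python
def best_fit_straight_line(applied, indicated):
    """Least-squares straight line through calibration points.

    Returns (slope, offset) such that indicated ~= slope * applied + offset.
    """
    n = len(applied)
    mean_x = sum(applied) / n
    mean_y = sum(indicated) / n
    sxx = sum((x - mean_x) ** 2 for x in applied)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(applied, indicated))
    slope = sxy / sxx
    offset = mean_y - slope * mean_x
    return slope, offset

# Hypothetical 0-100 psi calibration points with slight nonlinearity
applied = [0, 25, 50, 75, 100]
indicated = [0.02, 25.06, 50.09, 75.05, 99.98]

slope, offset = best_fit_straight_line(applied, indicated)
residuals = [y - (slope * x + offset) for x, y in zip(applied, indicated)]
nonlinearity = max(abs(r) for r in residuals)  # worst deviation from the BFSL
```

A zero-based BFSL would instead constrain the line through the zero point, and multipoint linearization would correct each segment between points individually.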


Hysteresis

Hysteresis is the maximum difference in measurement at a specific point between readings taken going upscale and the same readings taken going downscale. For pressure calibration, hysteresis is measured at each recorded pressure value by increasing the pressure to full scale and then releasing it down to the minimum value. Different accreditation standards require different procedures to calculate the overall hysteresis. As an example, DKD-R 6-1 requires the upscale and downscale values to be recorded twice each, and an aggregate hysteresis value is then derived for each pressure point.
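As a sketch, hysteresis can be computed from hypothetical upscale and downscale readings like so (values invented for the example):

```python
# Hypothetical upscale and downscale readings at the same applied pressures (psi)
applied = [0, 25, 50, 75, 100]
upscale = [0.00, 24.97, 49.95, 74.96, 100.00]
downscale = [0.02, 25.03, 50.04, 75.02, 100.00]

# Hysteresis at each point is the upscale/downscale difference;
# the instrument's hysteresis is the worst case across the range.
hysteresis_per_point = [abs(u - d) for u, d in zip(upscale, downscale)]
hysteresis = max(hysteresis_per_point)
```

Guideline-compliant procedures (e.g., DKD-R 6-1) average multiple cycles per point; this sketch shows a single cycle for clarity.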
[Figure: Hysteresis representation]


Repeatability

Measurement repeatability is the degree of closeness between repeated measurements taken with the same procedure, operators, system, and conditions over a short period of time. A typical example of repeatability is a comparison of the measurement output at one point in the range over a certain time interval while keeping all other conditions the same, including the approach and descent to the measuring point.
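A minimal sketch of quantifying repeatability as the standard deviation of repeated readings at one point (the readings are invented for the example):

```python
import statistics

# Hypothetical repeated readings at the same 50 psi point,
# same procedure, operator, setup, and conditions (psi)
readings = [50.012, 50.015, 50.010, 50.013, 50.011]

# Repeatability is often summarized as the sample standard deviation
repeatability = statistics.stdev(readings)
```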


Stability vs. Drift

Stability is defined by the VIM as the "property of a measuring instrument, whereby its metrological properties remain constant in time." It can be quantified as the length of the time interval over which a stated property stays within given limits. For calibration, stability is part of the overall accuracy definition of the instrument, and it plays a crucial role in determining the calibration interval of the instrument. All pressure measuring devices drift over time from the day they were calibrated.
Often, pressure calibration equipment manufacturers specify stability as the expected drift at a specific measuring point or at multiple points in the range. For absolute pressure instruments, this is the zero point. Since a zero-point offset causes a vertical shift in the calibration curve, this point's drift over time becomes the determining factor in maintaining the manufacturer's specifications.

As Found vs. As Left Data

These terms are usually found on calibration certificates when a device returns after being recalibrated. The simplest definition of as-found data is the data a calibration lab finds on a device it has just received, prior to making any adjustments or repairs. As-left data would be what the certificate shows once the calibration is complete and the device is ready to leave the lab. 


Adjustment

As the word suggests, adjustment describes performing some operation on the measuring system or measuring point so that it responds with a prescribed output to the corresponding measured value. In practice, adjustments are performed on specific measuring points so that they respond according to the stated manufacturer's specifications. These are typically the minimum and maximum points in the range, i.e., zero adjustment and span adjustment. Adjustment is often carried out after an as-found calibration has highlighted measuring points not meeting the desired specification.


TAR and TUR

TAR stands for test accuracy ratio, and TUR stands for test uncertainty ratio. Both represent the factor by which the DUT's accuracy or uncertainty, respectively, is worse than that of the reference standard used for its calibration. These ratios are regarded as the practical standard for selecting the optimal reference standard to calibrate the DUTs at hand.

Why Should You Calibrate?

The simple answer is that calibration ensures standardization and fosters safety and efficiency. If you need to know the pressure of a process or environmental condition, the sensor you are relying on for that information should be calibrated to ensure the pressure reading is correct, within the tolerance you deem acceptable. Otherwise, you cannot be certain the pressure reading accuracy is sufficient for your purpose.

A few examples might illustrate this better:

Standardization in processes

A petrochemical researcher has tested a process and determined the most desirable chemical reaction is highly dependent on the pressure of hydrogen gas during the reaction. Refineries use this accepted standard to make their product in the most efficient way. The hydrogen pressure is controlled within recommended limits using feedback from a calibrated pressure sensor. Refineries across the world use this recommended pressure in identical processes. Calibration ensures the pressure is accurate and the reaction conforms to standard practices.

Standardization in weather forecasting and climate study

Barometric pressure is a key predictor of weather and a key data point in climate science. Calibrated barometers, standardized to mean sea level, ensure that the pressures recorded around the world are accurate and reliable for use in forecasting and in the analysis of weather systems and climate.



Safety

A vessel or pipe manufacturer provides standard working and burst pressures for their products. Exceeding these pressures in a process may cause damage or catastrophic failure. Calibrated pressure sensors are placed within these processes to ensure acceptable pressures are not exceeded. It is important to know these pressure sensors are accurate in order to ensure safety.


Efficiency

Testing has proven a steam-electric generator is at its peak efficiency when the steam pressure at the outlet is at a specific level. Above or below this level, the efficiency drops dramatically. Efficiency, in this case, equates directly to bottom-line profits. The tighter the pressure is held to the recommended level, the more efficiently the generator runs and the more cost-effective the output. With a calibrated, high accuracy pressure sensor, the pressure can be held within a tight tolerance to provide maximum efficiency and bottom-line revenue.

Discover even more ways calibrating your pressure instruments can help you by reading "10 Reasons to Calibrate Your Instruments."

How Often Should You Calibrate?

The short answer is as often as you think is necessary for the level of accuracy you need to maintain. All pressure sensors will eventually drift away from their calibrated output. Typically it is the zero point that drifts, which causes the whole calibration curve to shift up or down. There can also be a span drift component, which is a shift in the slope of the curve:

[Figures: zero drift and span drift]

The amount of drift and how long it will take to drift outside of acceptable accuracy specification depends on the quality of the sensor. Most manufacturers of pressure measuring devices will give a calibration interval in their product datasheet. This tells the customer how long they can expect the calibration to remain within the accuracy specification. The calibration interval is usually stated in days or years and is typically anywhere from 90 to 365 days. This interval is determined through statistical analysis of products and usually represents a 95% confidence interval. This means that statistically, 95% of the units will meet their accuracy specification within the limits of the calibration interval. For example, Mensor's calibration interval specification is given as 365 or 180 days, depending on the sensor chosen.

The customer can choose to shorten or lengthen the calibration interval once they take possession of the sensor and have calibration data that supports the decision. An as-found calibration at the end of the calibration interval will show whether the sensor is in tolerance or out of tolerance. If it is found to be in tolerance, it can be put back in service and checked again after another calibration interval. If out of tolerance, offsets can be applied to bring it back in tolerance; in this case, the next interval can be shortened to make sure it holds its accuracy. Successive as-found calibrations provide a history of each individual sensor, which can be used to adjust the calibration interval based on this data and the criticality of the application where the sensor is used.

Read more about why and how often you should calibrate in "10 Reasons to Calibrate Your Instruments."

Where is Pressure Calibration Performed?

Pressure calibrations can be performed in a laboratory environment, a test bench, or in the field. All that is needed to calibrate a pressure indicator, transmitter or transducer is a regulated pressure source, a pressure standard, a way to read the DUT, and the means to connect the DUT to the regulated pressure source and the pressure standard. Pressure rated tubing, fittings, and a manifold to isolate from the measured process pressure may be the only equipment necessary to perform the calibration.
[Figure: Calibration lab setup]
Where to calibrate a pressure sensor is totally up to the user of the device. If, however, there is a requirement for an accredited calibration to ISO/IEC 17025, General Requirements for the Competence of Testing and Calibration Laboratories, then either your organization must be accredited by an accreditation body or the calibration must be performed by an organization that is accredited.
Accreditation to the ISO/IEC 17025 standard ensures the organization conducting the calibration is deemed competent. The standard focuses on general, structural, resource, process, and management system requirements, ensuring results are based on accepted science and that the accredited organization is producing technically valid results.
The Mensor calibration laboratory in San Marcos, Texas, is accredited by A2LA to the ISO/IEC 17025 standard. All devices manufactured here can be returned for calibration, and the lab is also capable of calibrating other pressure devices within our scope of accreditation.

Instruments Used in Pressure Calibration

Deciding what instrument to use for calibrating pressure measuring devices depends on the accuracy of the DUT. For devices that claim the highest achievable accuracy, the reference standard used to calibrate them should also have the highest achievable accuracy.

Accuracy of DUTs can range widely, but for devices with accuracy specifications looser than 1-5%, calibration may not even be necessary. It is completely up to the application and the discretion of the user. Calibration may not be deemed necessary for devices used only as a visual "ballpark" indication that are not critical to any safety or process concern. These devices may be used as a visual estimate of the process pressures or limits being monitored. To calibrate or not is a decision left to the owner of the device.

More critical pressure measuring instruments may require periodic calibration because the application may require more precision in the process pressure being monitored or a tighter tolerance in a variable or a limit. In general, these process instruments might have an accuracy of 0.1 to 1.0% of full scale.

The Calibrator

Common sense says the device being used to calibrate another device should be more accurate than the device being calibrated. A long-standing rule of thumb in the calibration industry prescribes a 4 to 1 test uncertainty ratio (TUR) between the DUT accuracy and the reference standard accuracy. So, for instance, calibrating a 100 psi pressure transducer with an accuracy of 0.04% full scale (FS) would require a reference standard with an accuracy of 0.01% FS for that range.
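The rule of thumb is a simple ratio check, sketched below with the example specs from the text (both specifications must be for the same range and in the same units):

```python
def tur(dut_accuracy_pct_fs, reference_accuracy_pct_fs):
    """Test uncertainty ratio: DUT spec divided by reference spec."""
    return dut_accuracy_pct_fs / reference_accuracy_pct_fs

# 0.04% FS transducer calibrated against a 0.01% FS reference standard
ratio = tur(0.04, 0.01)
assert ratio >= 4, "reference standard not accurate enough for a 4:1 TUR"
```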

Knowing these basics will help determine the equipment that can deliver the accuracy necessary to achieve your calibration goals. There are several levels of calibration that may be encountered in a typical manufacturing or process facility, described below as laboratory, test bench, and field. In general, individual facility quality standards may define these differently.


Laboratory

Laboratory primary standard devices have the highest level of accuracy and are used to calibrate all other devices in your system. They could be deadweight testers, high accuracy piston gauges, or pressure controllers/calibrators. The accuracy of these devices typically ranges from about 0.001% (10 ppm) of reading to 0.01% of full scale, and they should be traceable to the SI units. Their required accuracy is determined by what they must calibrate while maintaining a 4:1 TUR. Adherence to the 4:1 rule can be relaxed, but the actual ratio must then be reported on the calibration certificate. These laboratory devices are typically used in a controlled environment subject to the requirements of ISO/IEC 17025, the guideline for general requirements for the competence of testing and calibration laboratories. Laboratory standards are typically the most expensive devices but are capable of calibrating a large range of lower accuracy devices.


Test Bench

Test bench devices are used outside of the laboratory or in an instrument shop, typically to check or calibrate pressure instruments taken from the field. They possess sufficient accuracy to calibrate lower accuracy field devices. These can be desktop units or panel mount instruments such as controllers, indicators or even pressure transducers. These instruments are sometimes combined into a system that includes a vacuum and pressure source, an electrical measurement device and even a computer for indication and recording. The pressure transducers used in these instruments are periodically calibrated in the laboratory to certify their level of accuracy. To maintain an acceptable TUR with devices from the field, multiple ranges may be necessary, or devices with multiple, interchangeable internal transducer ranges. The accuracy of these devices is typically from 0.01% FS to 0.05% FS, and they cost less than the higher accuracy instruments used in the laboratory.



Field

Field instruments are designed for portable use and typically have limited internal pressure generation plus the capability to attach external higher pressure or vacuum sources. They may have multi-function capability for measuring pressure and electrical signals, data logging, built-in calibration procedures and programs to facilitate field calibration, plus certifications for use in hazardous areas. These multi-function instruments are designed to be self-contained to perform calibrations on site with minimal need for extraneous equipment. They typically have accuracy from 0.025% FS to 0.05% FS. Given their multi-function utility, these instruments are priced comparably to bench instruments and can also be utilized in a bench setting.


In general, what is used to calibrate your pressure instruments in your facility will be determined by your established quality and standard operating procedures. Starting from scratch will require an analysis of the cost given the range and accuracy of the pressure instruments that need to be calibrated.

How is Pressure Calibration Performed?

Understanding the process of performing a calibration can be intimidating even after you have all of the correct equipment to perform the calibration. The process can vary depending on calibration environment, device under test accuracy and the guideline followed to perform the calibration.

The calibration process consists of comparing the DUT reading to a standard's reading and recording the error. Depending on specific pressure calibration requirements of the quality standards, one or more calibration points must be evaluated and an upscale and downscale process may be required. The test points can be at the zero and span or any combination of points in between. The standard must be more accurate than the DUT. The rule of thumb is that it should be four times more accurate but individual requirements may vary from this.

Depending on the choice of the pressure standard the process will involve the manual, semi-automatic or fully automatic recording of pressure readings. The pressure is cycled upscale and/or downscale to the desired pressure point in the range, and the readings from both the pressure standard and the DUT are recorded. These recordings are then reported in a calibration certificate to note the deviation of the DUT from the standard.   

As mentioned, different guidelines detail the process of calibration differently. Below are some of the standards that highlight such differences when calibrating pressure transducers or gauges:
IEC 61298-2 defines the process for “Process measurement and control devices.” The section on "Test procedures and precautions" defines the number of exercise cycles, the number of measurement cycles and test points required.
DKD-R 6-1 “Calibration of Pressure Gauges” defines different processes for different accuracy classes of devices. It also defines exercise cycles, the number of measurement cycles and points, and minimum hold times before a measurement is taken.
EURAMET Calibration Guide No. 17 has basic, standard and comprehensive calibration procedures depending on the uncertainty of the device being calibrated. It requires additional information like the standard deviation of the device’s output at each pressure point.
Keep in mind specific industries may require their own calibration processes.

Calibration Traceability and Pressure Standards

Calibration Traceability

A traceable calibration is a calibration in which the measurement is traceable to the International System of Units (SI) through an unbroken chain of comparable measurements to a National Metrology Institute (NMI).  This type of calibration does not indicate or determine the level of competence of the staff and laboratory that performs the calibrations. It mainly identifies that the standard used in the calibration is traceable to an NMI. NMIs demonstrate the international equivalence of their measurement standards and the calibration and measurement certificates they issue through the framework of CIPM Mutual Recognition Arrangement (CIPM MRA).

Primary vs. Secondary Pressure Standards

There seems to be a good deal of confusion on the difference between primary and secondary standards, mainly because of a lack of technical distinction between the two.
Pressure is a quantity derived from fundamental SI units, so strictly speaking no pressure device can be a primary standard. The lowest uncertainty pressure devices available are considered fundamental pressure standards and are typically ultrasonic interference manometers and piston gauges. They are often referred to as primary standards even though, technically, they are not.
The term primary standard is also sometimes used when referring to the most accurate pressure standard within a facility. In most cases, these are traceable to the best fundamental pressure devices at NMIs. The instruments at these institutes are also called primary standards and are probably more deserving of the title because they are at the pinnacle of accuracy in the chain of traceability.  
To further complicate the issue, calibration laboratories frequently call their lowest uncertainty pressure devices their primary standards. Secondary standards are devices either calibrated by, or traceable to, the aforementioned primary standards, or even other secondary standards.

Accredited Calibrations

A calibration laboratory is accredited when it is found to be in compliance with ISO/IEC 17025, which outlines the general requirements for the competence of testing and calibration laboratories. Accreditation is awarded through an accreditation body that is an ILAC-MRA signatory organization. These accreditation bodies audit the laboratory and its processes to determine that the laboratory is competent to perform calibrations and to issue its calibration results as accredited. Accreditation recognizes a lab's competence in calibration and assures customers that calibrations performed under the scope of accreditation conform to international standards.

The laboratory is audited periodically to ensure continued compliance with the ISO/IEC 17025 standard.

Check out this article for a more detailed look at the differences between NIST traceable and ISO/IEC 17025 accredited calibrations, including a checklist for how to achieve them.

Factors Affecting Pressure Calibration and Corrections

There are several corrections, ranging from simple to complex, which may need to be applied during the calibration of a device under test (DUT).

Head Height

If the reference standard is a pressure controller, the only correction that may need to be applied is what is referred to as a head height correction. The head height correction can be calculated using the following formula:

(ρf - ρa)gh

Where ρf is the density of the pressure medium (kg/m3), ρa is the density of the ambient air (kg/m3), g is the local acceleration due to gravity (m/s2) and h is the difference in height (m). Typically, if the DUT is below the reference level, the value will be negative, and vice versa if the DUT is above the reference level. Depending on the accuracy and resolution of the DUT, a head height correction may need to be applied regardless of the pressure medium. Mensor controllers allow the user to input a head height, and the instrument will calculate the head height correction.
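A small sketch of this correction for an oil-filled line, with illustrative densities and height difference (none of these values come from a real setup):

```python
def head_height_correction(rho_fluid, rho_air, g, height_diff_m):
    """Head height correction in Pa: (rho_f - rho_a) * g * h.

    rho_fluid, rho_air in kg/m^3; g in m/s^2; height_diff_m in m.
    """
    return (rho_fluid - rho_air) * g * height_diff_m

# Hydraulic oil (~860 kg/m^3), ambient air (~1.2 kg/m^3),
# DUT 0.25 m away from the reference level
corr_pa = head_height_correction(rho_fluid=860.0, rho_air=1.2,
                                 g=9.80665, height_diff_m=0.25)
```

For gas media the fluid and air densities are close, so the correction is far smaller than for oil; the sign follows the convention chosen for h.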

Sea Level

Another potentially confusing correction is what is referred to as a sea level correction. This is most important for absolute ranges, particularly barometric pressure ranges. Simply put, this correction provides a common barometric reference regardless of elevation, which makes it easier for meteorologists to monitor weather fronts because all of the barometers are referenced to sea level. As an absolute sensor increases in altitude, its reading approaches absolute zero, as expected. A barometric range sensor vented to atmosphere in Denver, Colorado, for example, may therefore read a station pressure of roughly ~12.0 psi rather than the ~14.5 psi seen near sea level. The barometric pressure reported for Denver, however, will be closer to ~14.5 psi, because a sea level correction has been applied to it. The sea level pressure can be calculated using the following formula:

Station Pressure / e^(-elevation / (29.263 × T))

Where Station Pressure is the current, uncorrected barometric reading (in inHg @ 0˚C), elevation is the current elevation (m) and T is the current temperature (K).
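A sketch of this correction with illustrative Denver-like values (the station pressure, elevation, and temperature are assumptions for the example, not measured data):

```python
import math

def sea_level_pressure(station_pressure_inhg, elevation_m, temp_k):
    """Reduce a station barometric reading to mean sea level.

    Uses the form from the text: P_station / e^(-elevation / (29.263 * T)).
    """
    return station_pressure_inhg / math.exp(-elevation_m / (29.263 * temp_k))

# ~24.6 inHg station pressure at ~1609 m elevation and 288 K
p_sl = sea_level_pressure(24.6, 1609, 288.0)  # back near ~29.8 inHg
```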

For everyday users of pressure controllers or gauges, those may be the only corrections they may encounter. The following corrections apply mainly to piston gauges and the necessity to perform them relies on the desired target specification and associated uncertainty.


Temperature

Another source of error in pressure calibrations is changes in temperature. While all Mensor sensors are compensated over a temperature range during manufacturing, this becomes particularly important for reference standards such as piston gauges, where the temperature must be monitored. Piston-cylinder systems, regardless of composition (steel, tungsten carbide, etc.), must be compensated for temperature during use, as all materials either expand or contract with changes in temperature. The thermal expansion correction can be calculated using the following formula:

1 + (αP + αC)(T - TREF)

Where αP is the thermal expansion coefficient of the piston (1/˚C) and αC is the thermal expansion coefficient of the cylinder (1/˚C), T is the current piston-cylinder temperature (˚C) and TREF is the reference temperature (typically 20˚C).

As the temperature of the piston-cylinder system increases, it expands, causing the area to increase, which causes the pressure generated to decrease. Conversely, as the temperature decreases, the piston-cylinder system contracts, causing the area to decrease, which causes the pressure generated to increase. This correction is applied directly to the area of the piston, and errors can exceed 0.01% of the indicated value if left uncorrected. The thermal expansion coefficients for the piston and cylinder are typically provided by the manufacturer, but they can be experimentally determined.
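A minimal sketch of the thermal expansion factor, assuming illustrative tungsten carbide coefficients (real coefficients come from the manufacturer):

```python
def thermal_expansion_factor(alpha_p, alpha_c, temp_c, temp_ref_c=20.0):
    """Area correction factor: 1 + (alpha_p + alpha_c) * (T - T_ref).

    alpha_p / alpha_c in 1/degC; temperatures in degC.
    """
    return 1.0 + (alpha_p + alpha_c) * (temp_c - temp_ref_c)

# Tungsten carbide piston and cylinder (~4.5e-6 /degC each), operated at 23 degC
factor = thermal_expansion_factor(4.5e-6, 4.5e-6, 23.0)
# The effective area grows by ~27 ppm, so the generated pressure drops accordingly
```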


Distortion

A similar correction that must be made to piston-cylinder systems is referred to as a distortion correction. As the pressure increases on the piston-cylinder system, it will cause the piston area to increase, causing it to effectively generate less pressure. The distortion correction can be calculated using the following formula:

1 + λP

Where λ is the distortion coefficient (1/Pa) and P is the calculated, or target, pressure (Pa). With increasing pressure, the piston area increases, generating less pressure than expected. The distortion coefficient is typically provided by the manufacturer, but it can be experimentally determined.

Surface Tension

A surface tension correction must also be made with oil-lubricated piston-cylinder systems, as the surface tension of the fluid must be overcome to “free” the piston. Essentially, this causes an additional “phantom” mass load that scales with the diameter of the piston: the larger the diameter, the larger the effect. The surface tension correction can be calculated using the following formula:

πDT
Where D is the diameter of the piston (m) and T is the surface tension of the fluid (N/m). This correction is more important at lower pressures, as its relative contribution diminishes with increasing pressure.

Air Buoyancy

One of the most important corrections that must be made to piston-cylinder systems is air buoyancy.

As introduced with the head height correction, the air surrounding us generates pressure; think of it as a column of air. At the same time, it also exerts an upward force on objects, much like a stone in water weighs less than it does on dry land: the water exerts an upward force on the stone, causing it to weigh less. The air around us does exactly the same thing. If this correction is not applied, it can cause an error as high as 0.015% of the indicated value. Any mass, including the piston, needs what is referred to as an air buoyancy correction. The following formula can be used to calculate the air buoyancy correction:

1 - ρa/ρm

Where ρa is the density of the air (kg/m3) and ρm is the density of the masses (kg/m3). This correction is only necessary with gauge calibrations and absolute by atmosphere calibrations. It is negligible for absolute by vacuum calibrations as the ambient air is essentially removed.

Local Gravity

The final correction and arguably the largest contributor to errors, especially in piston-gauge systems, is a correction for local gravity. Earth’s gravity varies across its entire surface, with the lowest acceleration due to gravity being approximately 9.7639 m/s2 and the highest acceleration due to gravity being approximately 9.8337 m/s2. During the pressure calculation for a piston gauge, the local gravity may be used and a gravity correction may not need to be applied. However, many industrial deadweight testers are calibrated to standard gravity (9.80665 m/s2) and must be corrected. Were an industrial deadweight tester calibrated at standard gravity and then taken to the location with the lowest acceleration due to gravity, an error greater than 0.4% of the indicated value would be experienced. The following formula can be used to calculate the correction due to gravity:

gl / gs
Where gl is the local gravity (m/s2) and gs is the standard gravity (m/s2).
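Using the gravity values quoted above, a quick sketch of the worst-case error for an uncorrected deadweight tester:

```python
def gravity_correction(local_g, standard_g=9.80665):
    """Ratio applied to a deadweight tester calibrated at standard gravity."""
    return local_g / standard_g

# Worst case from the text: lowest local gravity on Earth (~9.7639 m/s^2)
factor = gravity_correction(9.7639)
error_pct = (1.0 - factor) * 100.0  # uncorrected error, percent of indicated value
```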


The simple formula for pressure is as follows:

P = F / A = mg / A

This is likely the fundamental formula most people think of when they hear the word “pressure.”  As we dive deeper into the world of precision pressure measurement, we learn that this formula simply isn't thorough enough. The formula that incorporates all of these corrections (for gauge pressure) is as follows:

P = F / A = mg / A
mg / A = (mg(1 - ρa/ρm) + πDT) / (Ae(1 + (αP + αC)(T - TREF))(1 + λP)) + (ρf - ρa)gh
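The complete formula can be sketched in code. Every parameter value below is illustrative (a hypothetical 1 cm² tungsten carbide piston gauge), not any manufacturer's data:

```python
import math

def piston_gauge_pressure(mass_kg, local_g, area_m2, rho_air, rho_mass,
                          piston_diameter_m, surface_tension,
                          alpha_p, alpha_c, temp_c, temp_ref_c,
                          distortion_coeff, nominal_pressure_pa,
                          rho_fluid, head_height_m):
    """Gauge pressure from a piston gauge with all corrections from the text."""
    # Buoyancy-corrected weight of the masses plus the surface tension load
    force = (mass_kg * local_g * (1.0 - rho_air / rho_mass)
             + math.pi * piston_diameter_m * surface_tension)
    # Effective area corrected for thermal expansion and pressure distortion
    area = (area_m2
            * (1.0 + (alpha_p + alpha_c) * (temp_c - temp_ref_c))
            * (1.0 + distortion_coeff * nominal_pressure_pa))
    # Head height correction between reference level and DUT
    head = (rho_fluid - rho_air) * local_g * head_height_m
    return force / area + head

# Hypothetical example: 10 kg load on a ~1 cm^2 piston
p = piston_gauge_pressure(
    mass_kg=10.0, local_g=9.80665, area_m2=1.0e-4,
    rho_air=1.2, rho_mass=7920.0,
    piston_diameter_m=0.0112838, surface_tension=0.031,
    alpha_p=4.5e-6, alpha_c=4.5e-6, temp_c=23.0, temp_ref_c=20.0,
    distortion_coeff=5e-12, nominal_pressure_pa=980000.0,
    rho_fluid=860.0, head_height_m=0.0,
)
```

Each correction term maps one-to-one onto the formula above; with all corrections disabled the function reduces to P = mg / A.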

Related Resources

Basics of Pressure

Pressure Calibration Applications

Understanding Pressure Calibration Equipment

Temperature Calibration

About Mensor

See More from Mensor