In metrology, what is the fundamental difference between accuracy and precision, and why is it possible—and often dangerous—for a measurement to be highly precise but not accurate?
The fundamental difference is that accuracy describes the closeness of a measurement to the true value, while precision describes the closeness of repeated measurements to each other. They are independent concepts, and confusing them can lead to critical errors in science, engineering, and manufacturing.
A simple way to frame it:
- Accuracy is about correctness.
- Precision is about consistency.
Accuracy is a measure of how close a single measurement, or the average of many measurements, is to the true value of the quantity being measured. Poor accuracy reflects systematic error, or bias, in the measurement system.
To improve accuracy, you must identify and eliminate systematic errors, primarily through calibration against a known standard.
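As a minimal sketch of that calibration step (all values here are hypothetical, not from any real instrument): measure a standard of known value repeatedly, take the mean reading's offset from the standard as the estimated bias, and subtract that bias from future readings.

```python
# Minimal sketch of a calibration correction; all values are hypothetical.

REFERENCE_VALUE = 10.000  # mm, certified value of a gauge block (assumed known)

def estimate_bias(readings: list[float]) -> float:
    """Estimate the systematic error: mean reading minus the reference value."""
    return sum(readings) / len(readings) - REFERENCE_VALUE

def correct(raw_reading: float, bias: float) -> float:
    """Apply the calibration correction to a raw reading."""
    return raw_reading - bias

bias = estimate_bias([10.021, 10.019, 10.020])  # repeated readings of the standard
print(f"Estimated bias: {bias:+.3f} mm")        # ~ +0.020 mm (instrument reads high)
print(f"Corrected: {correct(10.020, bias):.3f} mm")  # ~ 10.000 mm
```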
Precision is a measure of how closely a series of repeated measurements agrees with one another, regardless of whether the measurements are close to the true value. Poor precision reflects random error in the measurement system.
High precision means there is very little random error or "scatter" in the data. To improve precision, you must control the measurement process to minimize these random variations.
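The two error types can be quantified separately: the offset of the mean from the true value estimates the systematic error (accuracy), while the standard deviation of repeated readings estimates the random error (precision). A short sketch with made-up readings:

```python
import statistics

true_value = 10.000                            # mm, assumed known from a standard
readings = [10.02, 9.97, 10.05, 9.96, 10.01]   # hypothetical repeated measurements

bias = statistics.mean(readings) - true_value  # systematic error -> accuracy
scatter = statistics.stdev(readings)           # random error -> precision

print(f"bias (accuracy):     {bias:+.3f} mm")  # ~ +0.002 mm
print(f"scatter (precision): {scatter:.3f} mm") # ~ 0.037 mm
```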
Imagine a target where the bullseye represents the true value. Each shot is a measurement.
| Scenario | What the target looks like |
| --------------------------------------------- | --------------------------------------------- |
| 1. High Accuracy, High Precision (Ideal) | All shots are tightly clustered right in the center of the bullseye. The measurements are both correct and consistent. This is the goal of any good measurement system. |
| 2. Low Accuracy, High Precision (Dangerous) | All shots are tightly clustered together, but they are far from the bullseye. The measurements are consistent, but they are consistently wrong. |
| 3. High Accuracy, Low Precision | The shots are scattered widely over the target, but their average position is the bullseye. The system has no bias, but it is noisy and inconsistent. |
| 4. Low Accuracy, Low Precision (Worst Case) | The shots are scattered widely over the target, and their average is not in the bullseye. The measurements are neither correct nor consistent. |
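The four quadrants can be simulated by treating bias as the accuracy knob and the spread (sigma) as the precision knob; the specific values below are arbitrary choices for illustration only.

```python
import numpy as np

# Simulate the four target scenarios for a true value of 10.0 (illustrative).
# bias shifts the whole cluster (accuracy); sigma widens the scatter (precision).
rng = np.random.default_rng(0)
true_value = 10.0
scenarios = {
    "1. high accuracy, high precision": (0.0, 0.01),
    "2. low accuracy,  high precision": (0.5, 0.01),
    "3. high accuracy, low precision ": (0.0, 0.30),
    "4. low accuracy,  low precision ": (0.5, 0.30),
}
for name, (bias, sigma) in scenarios.items():
    samples = rng.normal(true_value + bias, sigma, size=1000)
    print(f"{name}: mean = {samples.mean():.3f}, std = {samples.std():.3f}")
```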
The second scenario (Low Accuracy, High Precision) is the most insidious and dangerous in a practical setting for one key reason: it creates a false sense of confidence.
When an engineer or technician takes a measurement and gets the same result repeatedly (e.g., 10.52 mm, 10.51 mm, 10.52 mm), the high precision (low scatter) makes the result seem reliable and trustworthy. However, if the true value is actually 10.00 mm, the measuring instrument has a systematic error of +0.52 mm.
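Running the numbers from this example makes the trap concrete: the scatter is tiny, so the instrument looks trustworthy, while the bias dwarfs it.

```python
import statistics

readings = [10.52, 10.51, 10.52]  # mm, the repeated readings from the example above
true_value = 10.00                # mm, the actual value

print(f"scatter: {statistics.stdev(readings):.4f} mm")               # ~0.0058 mm: very precise
print(f"bias:    {statistics.mean(readings) - true_value:+.2f} mm")  # ~+0.52 mm: badly inaccurate
```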
Conclusion:
A metrologist's job is to achieve both accuracy and precision. Accuracy is addressed through proper calibration to eliminate systematic bias. Precision is addressed through proper technique, environmental control, and high-quality equipment to minimize random variation. Understanding that a measurement can be consistently wrong is a foundational principle of metrology.