Coefficient of Variation (CV) Statistic
A Coefficient of Variation (CV) Statistic is a scale-invariant relative magnitude dispersion statistic that measures the degree of variation in a set of data relative to the mean (that supports CV statistical analysis tasks).
- AKA: Normalized Root-Mean-Square Deviation (NRMSD), Percent RMS, Relative Standard Deviation (RSD).
- Context:
- CV Statistic Input: dataset, data distribution
- CV Statistic Output: CV value, relative variability measure
- CV Statistic Performance Measure: CV performance metrics such as CV calculation accuracy, CV computational efficiency, and CV interpretation reliability
- It can typically enable Ratio Scale Measurements through CV scale-invariant analysis.
- It can typically support Price Dispersion Analysis through CV price variability measurements.
- It can typically quantify Relative Data Variability through CV ratio calculations.
- It can typically enable Scale-Invariant Comparisons through CV standardized measurements.
- It can typically assess Data Precision through CV percentage expressions.
- It can typically support Quality Control Analysis through CV threshold evaluations.
- It can typically measure Population Variability through CV distribution analysis.
- ...
- It can often facilitate Cross-Dataset Comparisons through CV normalization methods.
- It can often provide Risk Assessment through CV volatility measurements.
- It can often implement Process Control through CV monitoring systems.
- It can often support Jackknifing Algorithms through CV resampling validations.
- ...
- It can range from being a Simple CV Statistic to being a Complex CV Statistic, depending on its CV calculation methodology.
- It can range from being a Basic CV Statistic to being an Advanced CV Statistic, depending on its CV application sophistication.
- It can range from being a Manual CV Statistic to being an Automated CV Statistic, depending on its CV computation approach.
- It can range from being a General-Purpose CV Statistic to being a Domain-Specific CV Statistic, depending on its CV application domain specialization.
- It can range from being a Single-Dataset CV Statistic to being a Multi-Dataset CV Statistic, depending on its CV data scope complexity.
- ...
- It can integrate with Statistical Software for CV automated calculation.
- It can connect to Quality Management Systems for CV process monitoring.
- It can support Data Analysis Platforms for CV comparative analysis.
- ...
- Examples:
- Domain-Specific CV Statistics, such as:
- Analytical Chemistry CV Statistics, such as intra-assay CV statistics and inter-assay CV statistics reporting assay precision as %RSD.
- Engineering CV Statistics, such as manufacturing process CV statistics for part-dimension uniformity (e.g. ball-bearing diameter CVs).
- Financial CV Statistics, such as portfolio return CV statistics and price dispersion CV statistics.
- Biological CV Statistics, such as biomarker CV statistics and population trait variability CV statistics.
- Application-Specific CV Statistics, such as:
- Research CV Statistics, such as cross-study CV statistics used in meta-analyses to compare variability across outcome scales.
- Industrial CV Statistics, such as quality control CV statistics tracked on CV control charts.
- ...
- Counter-Examples:
- Margin of Error Statistics, which measure sampling uncertainty rather than CV relative variability.
- Coefficient of Determination statistics, which measure explained variance proportion rather than CV relative variability.
- Root Mean Squared Error (RMSE) Statistics, which provide absolute error measurements rather than CV scale-invariant comparisons.
- Standard Deviation statistics, which measure absolute dispersion rather than CV relative dispersion.
- Variance statistics, which quantify squared deviations rather than CV normalized variation.
- See: Statistical Concept, Statistical Dispersion, Probability Distribution, Frequency Distribution, Standard Deviation, Relative Variability Measure, Scale-Invariant Metric, Ratio Scale, Price Dispersion Measure, Coefficient of Determination, Jackknifing Algorithm.
References
2025-05-22
- Deep Research
- Question: Provide detailed insights on the Coefficient of Variation (CV) Statistic, covering its applications, calculation methodologies, performance metrics, and typical use cases across domains such as quality control, risk assessment, price dispersion analysis, and cross-dataset comparison.
- Answer: The Coefficient of Variation (CV) is a scale-invariant metric of relative dispersion that provides comprehensive insights across multiple statistical domains. As a standardized, dimensionless measure of dispersion defined as the ratio of the standard deviation to the mean of a dataset, it enables comparative analysis across different measurement scales and units.
- Definition and Mathematical Formulation:
- Mathematical Definition: The Coefficient of Variation (CV) is a standardized, dimensionless measure of dispersion defined as the ratio of the standard deviation to the mean of a dataset. In formula form: CV = σ/μ, where σ is the standard deviation and μ is the mean (often the absolute mean is used to ensure a non-negative CV). It is commonly expressed as a percentage, in which case it may be denoted %CV or %RSD (percent relative standard deviation). For example, a CV of 0.10 corresponds to a 10% relative variation.
- Scale-Invariant Property: By dividing by the mean, the CV provides a scale-invariant measure – if all data values are scaled by a constant factor, the CV remains the same, reflecting relative variability independent of the unit or magnitude of the measurements. This property makes CV particularly useful when comparing variability across datasets with different units or widely different means.
- Ratio Scale Requirement: Importantly, the CV should only be computed for data measured on a ratio scale, i.e. scales with a meaningful zero point. On ratio scales (such as lengths, weights, Kelvin temperature, etc.), it is meaningful to compare variability relative to the mean. In contrast, for interval scales with arbitrary zero (e.g. Celsius temperature), the CV can be misleading or undefined. For instance, 20°C is not "twice as hot" as 10°C, so a CV on such a scale has no clear interpretation. Thus, a non-zero true baseline is required for the CV to be meaningful.
- Mathematical Characteristics: The CV can be seen as the normalized dispersion: it shows how large the standard deviation is relative to the mean. A higher CV indicates greater relative variability in the data, whereas a CV of 0 means no variability (all observations equal the mean). Because it is unitless, the CV enables direct comparison of variability between datasets on different scales. For example, consider two machines producing bolts: Machine A outputs bolts of length 100 ± 5 mm and Machine B outputs 200 ± 10 mm. Although Machine B's output has a larger absolute spread, both have a CV of 5%, indicating they are equally consistent in relative terms.
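As a minimal illustration of the formula and the scale-invariance property, the following Python sketch computes the CV for two hypothetical sets of bolt lengths patterned after the Machine A / Machine B example (the raw readings are invented for illustration; only the standard library statistics module is assumed):

```python
import statistics

def cv(data, as_percent=True):
    """Sample coefficient of variation: sample SD divided by the absolute mean."""
    mean = statistics.mean(data)
    if mean == 0:
        raise ValueError("CV is undefined when the mean is zero")
    ratio = statistics.stdev(data) / abs(mean)   # stdev() uses the n-1 (sample) denominator
    return 100 * ratio if as_percent else ratio

# From the summary statistics in the text: both machines have CV = 5%.
print(5 / 100, 10 / 200)        # 0.05 and 0.05 -> equally consistent in relative terms

# From raw data (hypothetical bolt lengths in mm): scaling all values leaves the CV unchanged.
machine_a = [95.0, 98.0, 100.0, 102.0, 105.0, 100.0, 97.0, 103.0]
machine_b = [2 * x for x in machine_a]
print(f"Machine A CV = {cv(machine_a):.2f}%   Machine B CV = {cv(machine_b):.2f}%")
```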
- Common Aliases and Terminology:
- Relative Standard Deviation (RSD): This term is often used interchangeably with CV, especially in laboratory and quality control contexts. It typically refers to the CV expressed as a percentage (%RSD).
- Normalized Root-Mean-Square Deviation (NRMSD): In modeling and forecasting, the CV concept appears as the normalized root-mean-square error or deviation. For example, in regression analysis, one can compute CV as the ratio of the root-mean-square error (RMSE) to the mean of the dependent variable. This is essentially a CV of the residuals, also called percent RMS error, used to assess relative model fit (a lower value indicates better fit).
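To make the NRMSD reading concrete, here is a small sketch that computes RMSE over paired observed and predicted values and normalizes it by the mean of the observed values (the data lists are hypothetical; this is not tied to any specific regression library):

```python
import math

def nrmsd(observed, predicted, as_percent=True):
    """Normalized RMSE: root-mean-square error divided by the mean of the observed values."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    mean_obs = sum(observed) / n
    ratio = rmse / abs(mean_obs)
    return 100 * ratio if as_percent else ratio

observed  = [102.0, 98.5, 110.2, 95.3, 101.7]   # hypothetical measured values
predicted = [100.1, 99.0, 108.0, 97.2, 103.0]   # hypothetical model predictions
print(f"NRMSD (percent RMS error) = {nrmsd(observed, predicted):.1f}%")  # lower means a better relative fit
```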
- CV Metric or CV%: In some texts and software, the term "CV" itself or "CV%" is used as shorthand for the coefficient of variation (with or without multiplying by 100 for percentage form).
- Other Variations: In certain domains, you might encounter context-specific phrases like coefficient of variation of error (in calibration models) or coefficient of variability. All refer to the same fundamental ratio of dispersion to mean. Regardless of naming, these all convey the idea of relative variability. For clarity: standard deviation must be interpreted in the context of the data's mean and units, whereas the CV provides a unit-independent measure, simplifying comparisons. In analytical chemistry, for instance, reporting the %RSD of assay results is preferred over raw standard deviation because it directly communicates precision in relative terms.
- Core Inputs, Outputs, and Parameters:
- Inputs: To calculate the CV, the primary input is a set of numerical data (a sample or population). In practice, one usually computes the sample mean and sample standard deviation from the data. These summary statistics are sufficient to determine the CV: CV = s/x̄. The data should be continuous or at least quantitative; as noted, it must be on a ratio scale with a true zero. If the data are given along with their statistical summary (mean and SD), those can be used directly without needing the raw data.
- Outputs: The output is a single numeric value representing the coefficient of variation. Often it is reported either as a decimal (e.g. 0.15) or a percentage (e.g. 15% variability). The CV has no unit (the units cancel out in the ratio), which makes it easy to interpret across different contexts. In many software packages or calculators, the CV might be returned alongside other descriptive statistics. For example, a summary might list mean = 50, SD = 5, hence CV = 0.1 (or 10%). In some cases, software will automatically provide %CV in output, especially in biosciences or quality control reports.
- Optional Parameters and Considerations: Confidence Levels: If one needs to communicate the statistical confidence or precision of a CV estimate, additional parameters come into play. Because CV is a ratio of random variables (SD/mean), constructing a confidence interval for a sample CV is not straightforward. However, for large samples or assuming a normal distribution, there are formulas and methods to estimate confidence intervals for the true (population) CV. For instance, methods by McKay or modified McKay use the chi-square distribution to derive lower and upper confidence limits for CV. A user may specify a confidence level (e.g. 95%) as an input to obtain an interval estimate of CV. This is particularly useful when comparing CVs between datasets to determine if differences are statistically significant.
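The McKay and modified McKay formulas require care to state exactly, so the sketch below instead illustrates the idea with a generic percentile bootstrap confidence interval for the CV (an alternative, illustrative approach, not the McKay method; the sample values and the 10,000-resample count are arbitrary choices):

```python
import random
import statistics

def cv(data):
    """Sample CV: sample standard deviation over the absolute mean."""
    return statistics.stdev(data) / abs(statistics.mean(data))

def bootstrap_cv_ci(data, confidence=0.95, n_boot=10_000, seed=42):
    """Percentile bootstrap confidence interval for the coefficient of variation."""
    rng = random.Random(seed)
    n = len(data)
    boot_cvs = sorted(cv([rng.choice(data) for _ in range(n)]) for _ in range(n_boot))
    alpha = 1 - confidence
    lo = boot_cvs[int((alpha / 2) * n_boot)]
    hi = boot_cvs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

sample = [9.8, 10.1, 10.4, 9.7, 10.0, 10.3, 9.9, 10.2, 10.6, 9.5]
lo, hi = bootstrap_cv_ci(sample)
print(f"CV = {cv(sample):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```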
- Performance Metrics and Properties of the CV:
- Calculation Accuracy: As an estimator, the sample CV (using sample SD and mean) has some bias in estimating the true population CV. Specifically, the sample CV, s/x̄, is a biased estimator of σ/μ. The bias is usually small for large samples, but for small sample sizes or highly skewed data it can be notable. Statisticians have derived adjusted formulas to correct this bias. For normally distributed data, one can approximate an unbiased CV by applying a correction factor involving sample size (and even skewness/kurtosis for more accuracy). For most practical purposes, though, the bias of the sample CV is minor when n is reasonably large (e.g. n > 30), and s/x̄ serves as a good point estimate. In terms of numerical accuracy, computing CV is straightforward and not prone to numerical instability except in edge cases (such as x̄ extremely close to zero, which can cause a division-by-zero problem or an exaggerated CV). In those edge cases, the accuracy of interpretation plummets as CV approaches infinity.
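For normally distributed data, a commonly cited first-order bias correction multiplies the sample CV by (1 + 1/(4n)). The sketch below applies that factor (an approximation only; the fuller skewness/kurtosis-adjusted corrections mentioned above are not implemented here):

```python
import statistics

def cv_naive(data):
    """Plain sample CV: s / x̄."""
    return statistics.stdev(data) / abs(statistics.mean(data))

def cv_bias_corrected(data):
    """Approximately unbiased CV for normal data using the (1 + 1/(4n)) correction factor."""
    n = len(data)
    return cv_naive(data) * (1 + 1 / (4 * n))

small_sample = [4.8, 5.1, 5.4, 4.9, 5.2]
print(f"naive CV     = {cv_naive(small_sample):.4f}")
print(f"corrected CV = {cv_bias_corrected(small_sample):.4f}")  # slightly larger for small n
```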
- Computational Efficiency: The computation of CV is extremely efficient. One needs to compute the mean and standard deviation, both of which can be done in one pass through the data. The time complexity is O(n) for n data points (or even O(1) if mean and variance are pre-computed). In practice, calculating a CV for even millions of observations is trivial for modern computers. Memory requirements are minimal (you can compute mean and SD incrementally without storing all data at once). Thus, from a performance standpoint, CV can be computed quickly in real-time, making it suitable for on-line monitoring of processes. Its simplicity also means it's less prone to bugs or errors in implementation. Many spreadsheet software and programming libraries have built-in support for CV or can compute it with one line of code. In summary, computational cost is negligible and not a limiting factor for using CV in any context.
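As a sketch of the one-pass claim, the following class maintains a running mean and variance with Welford's online algorithm and derives the CV on demand, so a stream of process readings can be monitored without storing the data (class and variable names are illustrative):

```python
import math

class RunningCV:
    """Welford's online algorithm: update mean and variance in one pass, then derive the CV."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations from the current mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def cv(self):
        if self.n < 2 or self.mean == 0:
            return float("nan")
        sample_sd = math.sqrt(self.m2 / (self.n - 1))
        return sample_sd / abs(self.mean)

stream = RunningCV()
for reading in [100.2, 99.8, 100.5, 99.9, 100.1, 100.4]:   # e.g. values from an on-line monitor
    stream.update(reading)
print(f"running CV = {stream.cv():.4%}")
```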
- Interpretation Reliability: The CV is generally easy to interpret: it tells you the size of variability relative to the mean. A CV of 5% means the standard deviation is 5% of the mean, indicating very low spread relative to the average; a CV of 50% would indicate quite high relative variability. Because it is dimensionless, it allows apples-to-apples comparison of variability across different datasets or metrics. This reliability in interpretation holds as long as the underlying assumptions are met (ratio-scale data, moderately large mean, etc.). However, there are important caveats: If the mean is extremely small (approaching zero), a moderate absolute variation will blow up the CV, making it very large and unstable. In such cases, the CV can be misleading or not reliable. The CV assumes the mean is an appropriate baseline. In distributions that are highly skewed or heavy-tailed, the mean might not be a robust measure of central tendency, and hence CV might not reliably reflect typical relative variability. For example, in a highly skewed income distribution, the CV could be influenced by a few extreme values (since standard deviation is sensitive to outliers). A robust CV (using median and median absolute deviation, for instance) would be more reliable in such cases.
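One simple robust variant replaces the mean with the median and the standard deviation with the scaled median absolute deviation (MAD). The sketch below shows that formulation on a small income-like sample with one extreme value (several robust CV definitions exist; this one, including the 1.4826 normal-consistency factor, is just an illustration):

```python
import statistics

def robust_cv(data, as_percent=True):
    """Robust CV variant: scaled median absolute deviation divided by the median."""
    med = statistics.median(data)
    mad = statistics.median(abs(x - med) for x in data)
    robust_sd = 1.4826 * mad          # scaling makes the MAD comparable to the SD for normal data
    ratio = robust_sd / abs(med)
    return 100 * ratio if as_percent else ratio

incomes = [32_000, 35_000, 38_000, 41_000, 36_000, 2_500_000]   # one extreme outlier
classic_cv = statistics.stdev(incomes) / statistics.mean(incomes)
print(f"classic CV ≈ {100 * classic_cv:.0f}%")   # dominated by the outlier
print(f"robust  CV ≈ {robust_cv(incomes):.0f}%") # reflects the bulk of the data
```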
- Applications Across Domains:
- Ratio-Scale Measurements: Whenever data are on a ratio scale, CV is a natural choice to describe variability. In the physical and biological sciences, researchers use CV to report variability in measurements such as lengths, weights, reaction times, or concentrations. The requirement of a true zero means measurements like temperature must be converted to an absolute scale (Kelvin) if CV is to be used. CV allows scientists to compare the relative noise or consistency of different measurements. For example, if one instrument measuring weight has a CV of 2% and another measuring volume has a CV of 5%, the weight measurements are relatively more precise even if the units differ. Analytical chemistry provides a clear use-case: when validating an assay or laboratory test, scientists report the CV of repeated measurements to quantify precision. A low CV (e.g. under 5–10%) might indicate the method is very repeatable, whereas a high CV would flag issues with measurement consistency.
- Price Dispersion Analysis: In economics and marketing, the CV is frequently used to measure price dispersion – the variability of prices of the same item across different sellers or markets. Because prices can vary widely in currency units, the CV succinctly captures how large that variation is relative to the average price. For instance, if the average price of a product is $100 and the standard deviation across sellers is $15, the CV is 15%, indicating a moderate dispersion in prices. Researchers often compare CVs of price distributions to study market efficiency, search costs, or the impact of online shopping. As an example, one study of online retail found that for a particular electronic item, prices ranged from $549 to $1138; the coefficient of variation of those prices was about 22.4%, signifying a substantial spread relative to the mean price. Such analyses help economists understand whether high dispersion (high CV) is due to market frictions, and they can compare dispersion over time or between markets using CV as a consistent metric.
- Quality Control and Process Control: In manufacturing and industrial engineering, quality control analysis relies on CV to assess process stability and consistency. Because production processes are expected to produce parts or products within certain tolerances, the CV provides a quick check on whether the variability is in proportion to the mean. A low CV in a production metric (e.g. diameter of screws, weight of filled packages) indicates the process is tightly controlled relative to the target mean. Companies often set benchmark CV thresholds as quality goals. For example, a factory might require that the weight of a cereal box has a CV below 2% to ensure customers get a nearly uniform amount. If a batch is measured and found to have CV = 5%, this high variability would trigger investigation into the production line (maybe a machine is malfunctioning). In chemical process control, similarly, the CV of parameters like temperature or pressure can be monitored; an increase in CV might signal that the process is drifting out of control. In summary, CV serves as a key performance indicator for uniformity. A practical illustration: A pharmaceutical manufacturer measuring the active ingredient in pills finds that pill dosages have a CV of 3%. This low CV means each pill is nearly the same strength, which is crucial for patient safety. In contrast, if some measurement in the process suddenly jumps in CV (say from 2% to 6%), it's a red flag of instability. Specialized control charts have even been developed that plot CV over time to monitor process variability directly.
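A minimal sketch of a batch-level CV threshold check in the spirit of the cereal-box example above (the 2% limit and the fill-weight readings are hypothetical):

```python
import statistics

CV_LIMIT_PERCENT = 2.0   # hypothetical quality threshold for fill-weight uniformity

def batch_cv_percent(weights):
    """Percent CV of one batch of fill weights."""
    return 100 * statistics.stdev(weights) / statistics.mean(weights)

batches = {
    "batch_017": [501.2, 499.8, 500.5, 498.9, 500.1, 501.0],
    "batch_018": [505.5, 485.0, 515.2, 492.8, 512.4, 488.1],   # more erratic filling
}

for name, weights in batches.items():
    cv_pct = batch_cv_percent(weights)
    status = "OK" if cv_pct <= CV_LIMIT_PERCENT else "INVESTIGATE"
    print(f"{name}: CV = {cv_pct:.2f}% -> {status}")
```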
- Risk Assessment in Finance: In finance and investment, the coefficient of variation is used as a risk-to-reward ratio. Here the "risk" is represented by the standard deviation of an investment's returns, and the "reward" is the mean return. By computing CV = volatility / mean return, investors can compare different assets or portfolios on how much risk is taken per unit of return. A lower CV in this context means you get more return per unit of risk (which is desirable), while a higher CV means the investment's return is more volatile relative to its average performance. For example, if Fund A has an average annual return of 8% with a standard deviation of 12%, its CV is 1.5. If Fund B has an average return of 5% with a standard deviation of 5%, its CV is 1.0. Even though Fund A has higher returns, Fund B has a better risk-return tradeoff (less relative volatility). Investors thus prefer assets with lower CV, provided the mean returns are positive. (If the mean return is near zero or negative, the CV becomes negative or very large, which is one limitation in using CV for extremely low-return scenarios.) Beyond investments, in broader risk management the CV can quantify variability in demand forecasts, project costs, or any financial metric to help identify unstable vs. stable elements.
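The Fund A / Fund B comparison reduces to two divisions; here is a tiny sketch using only the summary figures quoted above (no return series is assumed):

```python
def risk_reward_cv(mean_return_pct, volatility_pct):
    """CV as a risk-to-reward ratio: volatility per unit of mean return."""
    if mean_return_pct <= 0:
        raise ValueError("CV is not meaningful for zero or negative mean returns")
    return volatility_pct / mean_return_pct

funds = {"Fund A": (8.0, 12.0), "Fund B": (5.0, 5.0)}   # (mean annual return %, SD of returns %)

for name, (mean_ret, vol) in funds.items():
    print(f"{name}: CV = {risk_reward_cv(mean_ret, vol):.2f}")
# Fund A -> 1.50, Fund B -> 1.00; the lower CV indicates less volatility per unit of return.
```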
- Cross-Dataset Comparisons: Often in data analysis we need to compare the variability of two or more datasets that have different units or vastly different means. The CV is ideal for this scenario because it puts variability on a relative scale. Analysts in fields like ecology, sociology, or multi-site studies use CV to compare variability across groups. For example, suppose one study measures the height of plants (in cm) and another measures the weight of fruits (in grams). Directly comparing standard deviations is meaningless due to units, but comparing CVs (say 12% vs 18%) can tell us which trait is more variable relative to its mean. In social sciences, the CV is used to compare demographic variability: one might compare income inequality across regions using CV of income (though Gini index is more common for inequality, CV is sometimes reported for heterogeneity). In medicine, one could compare the relative variability of a biomarker between two patient groups. In machine learning, CV can help compare the stability of different features or the relative error of different models on different scales. In summary, whenever you have multiple datasets or multiple variables and you want a fair comparison of variability, the CV provides a common yardstick. It has been noted as a simple tool to enable comparisons across datasets with different units or scales. This use is especially powerful in meta-analyses or systematic reviews, where studies report outcomes in different scales; converting variation to CV allows a rough but useful comparison of consistency across studies.
- A Classification Spectrum of CV Statistics and Uses:
- Calculation Methodology (Simple → Complex): The simplest form of CV calculation is the direct ratio of sample standard deviation to mean from raw data. This classic CV assumes one can compute those two summary statistics directly. On the more complex end, there are advanced calculation methodologies and variations of CV. For instance, when data are not normally distributed or contain outliers, one might use robust CV estimators – for example, using the median and interquartile range (IQR) in place of mean and SD to form a robust coefficient of dispersion. Another complex methodology is maximum-likelihood estimation of CV: in cases where data are available as part of a statistical model (such as heteroscedastic data with a functional relationship), one can use MLE to estimate a CV that best fits the data across a range of values. Additionally, calculating CV for grouped data or data with a hierarchical structure might involve more complex formulas (e.g. pooling CVs or using mixed models to estimate variability at different levels). In summary, the "simple" end is a one-line calculation, whereas the "complex" end involves specialized statistical techniques to handle non-standard situations (small sample corrections, bias adjustments, robust measures, etc.).
- Application Sophistication (Basic → Advanced): Basic applications of CV involve using it as a descriptive statistic for a single dataset or experiment – for example, reporting that "Experiment A has a CV of 8%". An intermediate application would be using CV to compare two or more groups (e.g., comparing the CVs of two experimental conditions to see which one has more relative variability). On the advanced end of the spectrum, CV is used in more sophisticated analyses: for instance, incorporating CV into model evaluation (like using CV of residuals to compare model fits, as discussed earlier for NRMSD), or using CV as part of a composite index in a domain (such as a risk metric combining multiple factors). In economics, a basic use is just computing price dispersion CV, while an advanced use might integrate CV of prices into an index of market competitiveness. In biology, a basic use might be CV of a trait in a population; an advanced use might be comparing CVs across many traits to identify which traits are under stabilizing selection. Advanced applications often require statistical testing or confidence intervals for CV – for example, testing if one CV is significantly different from another (which can involve bootstrapping or specific CV-based tests). Thus, the sophistication grows from simple descriptive use to being an integral part of inferential analyses and decision-making frameworks.
- Example Use Cases and Datasets:
- Laboratory Assay Precision: Suppose a lab is validating a new blood test for glucose. They measure a control sample 10 times and get readings (in mg/dL). The mean glucose reading is 100 mg/dL and the standard deviation is 3 mg/dL. The CV is 3%, which indicates high precision (only 3% variation relative to the mean). In fact, in many clinical labs an inter-assay CV below ~5% is considered excellent. If another assay for a different analyte had a CV of 15%, it would be deemed much less reliable. Dataset example: A published study might report something like "The intra-assay CV for cortisol measurements was 4.2%, indicating excellent repeatability of the assay." This gives readers a sense of how much measurement error or variability to expect relative to the concentration level.
- Manufacturing Process Data: A factory produces ball bearings, and quality control measures the diameter of 100 bearings. The data show a mean diameter of 5.00 cm and a standard deviation of 0.02 cm. The CV here is 0.4%. This extremely low CV suggests a very tight manufacturing process – nearly all bearings are virtually the same size (within 0.4% of the mean). If a different production line makes larger bearings with mean 10.00 cm and standard deviation 0.04 cm, that line also has CV = 0.4%. Thus both lines are equally consistent thanks to CV's scale-independence. Use case: Engineers might use this information to decide if a machine needs calibration. Perhaps over time they notice the CV creeping up to 1%; that would signal increased variability. Control charts plotting CV over each batch could catch this trend early.
- Financial Portfolio Comparison: Consider two investment portfolios over a 5-year period. Portfolio X has an average annual return of 8% with standard deviation 10%, so CV ≈ 1.25. Portfolio Y has an average return of 5% with standard deviation 8%, so CV = 1.6. Comparing these, Portfolio X, despite higher volatility in absolute terms, actually has a better risk-reward profile (lower CV). An investor would interpret this as Portfolio X delivering more return per unit risk than Portfolio Y. Dataset example: If one looked at historical returns of the S&P 500 versus a small-cap stock index, you might find something like: S&P 500 mean return ~7%, SD ~15% (CV ~2.1), whereas small-cap index mean ~9%, SD ~20% (CV ~2.22). The small difference in CV suggests a slightly poorer risk-reward for small caps. This kind of analysis is common in finance to quickly filter investment options.
- Common Pitfalls and Limitations:
- Meaningless for Zero or Near-Zero Mean: The CV is not defined for a mean of zero, and it becomes extremely unstable when the mean is very close to zero. In such cases, a tiny change in the mean can dramatically change the CV. For example, if mean = 0.1 and SD = 0.05, CV = 0.5 (50%). If mean drifts down to 0.05 with similar SD, CV = 1.0 (100%). Thus, when mean ~ 0, reporting CV is not informative (or even possible). It's better to use alternate measures or add a constant baseline if scientifically justifiable. This also ties to the requirement of a ratio scale: if a variable can take positive and negative values with mean near zero (e.g. financial returns that hover around 0), the CV will blow up and change sign depending on slight shifts — a clear indication that CV is not suitable in that scenario.
- Not Appropriate for Interval Scales or Arbitrary Origins: As noted earlier, CV should not be used for data on an interval scale (with arbitrary zero). A classic pitfall is computing CV for temperatures in Celsius or Fahrenheit. The CV value would change if one converts Celsius to Kelvin, even though the underlying variability in physical terms hasn't changed. For instance, 0°C is 273.15K, so a dataset with mean ~10°C might have a wildly different CV than in Kelvin. Always ensure the data's zero is meaningful before using CV. If unsure, avoid CV or convert to a ratio scale (e.g., use Kelvin for temperature data, or use differences from a reference if appropriate).
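A quick numeric illustration of why the CV depends on the scale's origin: the same temperature readings yield very different CVs in Celsius versus Kelvin (the readings are arbitrary illustrative values):

```python
import statistics

def cv_percent(data):
    """Percent CV: sample SD over the absolute mean, times 100."""
    return 100 * statistics.stdev(data) / abs(statistics.mean(data))

temps_c = [8.0, 10.5, 12.0, 9.5, 11.0]        # interval scale: the zero point is arbitrary
temps_k = [t + 273.15 for t in temps_c]       # ratio scale: true physical zero

print(f"CV in Celsius: {cv_percent(temps_c):.1f}%")   # large, and depends on the arbitrary origin
print(f"CV in Kelvin:  {cv_percent(temps_k):.2f}%")   # small; the only physically meaningful version
```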
- Sensitivity to Outliers and Non-Normality: The CV inherits the weaknesses of the standard deviation in terms of robustness. A single extreme outlier can inflate the standard deviation and thus the CV. If the dataset has a heavy-tailed or skewed distribution, the CV might give a misleading sense of variability. For example, income data with a few billionaires will have a huge CV, but that doesn't necessarily reflect the variability experienced by most of the population. In such cases, consider using a robust CV measure or complement CV with other dispersion metrics (like the interquartile range or Gini coefficient for inequality). Recent research has highlighted that using robust estimators (median-based CV, etc.) can provide a more stable picture of variability for non-normal data.
- Sampling Variability and Confidence Intervals: The CV from a sample is a statistic and thus subject to sampling error. Particularly for small sample sizes, the CV can be a noisy estimate of the true CV. It's a mistake to assume the sample CV is exact – one should interpret it with caution or, if possible, provide a confidence interval. There are specific methods to calculate the standard error of CV and confidence bounds (some implemented in statistical software and NIST guidelines), usually assuming normality. If one is comparing CVs between two samples, a common pitfall is to just compare their values without testing if the difference is statistically significant. Non-overlapping confidence intervals or specialized tests should be used to make that judgment, otherwise one might wrongly conclude a process is more variable than another due to random chance.
- Interpretation Depends on Context: A CV value by itself doesn't carry a universal interpretation of "good" or "bad" variability. For instance, a CV of 20% might be trivial in one context but huge in another. It's a mistake to apply hard cutoffs universally (e.g., thinking "CV > 30% is always bad"). Always interpret the CV in light of domain norms or requirements. In quality control, 5% may be a critical threshold, whereas in fields like biology, 50% might be common due to natural variability. Ensure you know what typical CV ranges are for your specific application before drawing conclusions.
- Citations:
[1] NIST/SEMATECH, "Coefficient of Variation," Engineering Statistics Handbook – definition and usage notes.
[2] Wikipedia, "Coefficient of Variation," last updated 2025 – general definition, formula, and cautions on a meaningful zero.
[3] UCLA Statistical Consulting, "FAQ: What is the coefficient of variation?" – discussion of CV in single-variable and model contexts.
[4] Corporate Finance Institute, "Coefficient of Variation," 2023 – explanation of CV and its use as a risk-to-reward ratio in finance.
[5] SixSigma.us, "Mastering Relative Variability in Statistics," 2024 – applications of CV in quality control and industry examples.
[6] Ospina & Marmolejo-Ramos (2019), Frontiers in Applied Mathematics and Statistics – study comparing classic and robust CV estimators.
[7] Real Statistics Using Excel, "Coefficient of Variation Testing," 2022 – notes on the bias of the CV and unbiased estimation for normal data.
[8] Investopedia, "Coefficient of Variation: Definition and How to Use It," 2021 – describes CV in the investment-risk context.
[9] Price dispersion literature – e.g. Baye et al. (2004) – demonstrating the use of CV to measure price dispersion in markets.
[10] Biological Cybernetics, "Note on the coefficient of variations of neuronal spike trains," 2017 – example of CV = 1 in neuronal firing (PubMed ID 28432423).
2023
- (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Coefficient_of_variation Retrieved:2023-10-4.
- In probability theory and statistics, the coefficient of variation (CV), also known as Normalized Root-Mean-Square Deviation (NRMSD), Percent RMS, and relative standard deviation (RSD), is a standardized measure of dispersion of a probability distribution or frequency distribution. It is defined as the ratio of the standard deviation [math]\displaystyle{ \sigma }[/math] to the mean [math]\displaystyle{ \mu }[/math] (or its absolute value, [math]\displaystyle{ |\mu| }[/math]), and often expressed as a percentage ("%RSD"). The CV or RSD is widely used in analytical chemistry to express the precision and repeatability of an assay. It is also commonly used in fields such as engineering or physics when doing quality assurance studies and ANOVA gauge R&R, by economists and investors in economic models, and in psychology/neuroscience.