June 2015 – by Dr. Gerald Lebovic, PhD

Results, and the decisions made from them, can only be as good as the quality of the data underlying the statistical analysis. Data quality involves ensuring the collected data is accurate, timely, available, and complete (1). In contrast to the strict protocols of randomized controlled trials, observational databases are often more prone to data quality problems.

Incomplete data can arise for several reasons, such as non-response, lost data, or skip patterns in questionnaires. When data is missing, the reason why has implications for how the variable is treated during analysis; missing data is therefore classified into one of three categories: missing completely at random (MCAR), missing at random (MAR), and non-ignorable data.


MCAR is the simplest type of missing data to deal with. Suppose variable X contains missing data. If the reason the data is missing is independent of X and of every other variable in the dataset, then X is said to be MCAR (2).

Examples of MCAR:

a. A subject simply forgets to fill out their age in a questionnaire.

b. Financial resources only allow a particular variable to be collected on a subset of participants (i.e. occurs by study design).

How is MCAR handled in analysis?

As a rule of thumb, if less than 5% of the observations are missing, the incomplete records can simply be deleted without any significant ramifications (3). However, if more than 5% of the data is missing, deleting those records will reduce the sample size and inflate the standard errors of the parameter estimates. In this case, it is strongly suggested that the missing values be filled in, either by single imputation with the mean, median, or mode, or by multiple imputation.
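A minimal sketch of this rule of thumb in plain Python, using hypothetical ages with MCAR non-responses marked as None:

```python
import statistics

# Hypothetical survey ages; None marks an MCAR non-response.
ages = [34, 51, None, 42, 60, 38, None, 47, 55, 29]

observed = [a for a in ages if a is not None]
missing_fraction = (len(ages) - len(observed)) / len(ages)

if missing_fraction < 0.05:
    # Under 5% missing: complete-case analysis (simply drop the records).
    analysed = observed
else:
    # Otherwise: single imputation with the observed mean.
    mean_age = statistics.mean(observed)
    analysed = [a if a is not None else mean_age for a in ages]

print(missing_fraction)  # 0.2 here, so the mean is imputed
print(analysed)
```

The median or mode could be substituted for the mean in the same way, depending on the variable's distribution and type.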


In the case of MAR, the criteria are a little less strict. Suppose we have two variables, X and Y. The MAR criterion says that the reason X is missing cannot depend on X but it may depend on Y.

Example of MAR:

a. Suppose that older people tend not to report their income level. So long as the missing data on income is not dependent on the income level (i.e. there is no correlation between missing data and those earning high or low income, for example), this would be MAR data.

How is MAR handled in analysis?

Multiple imputation, which extends the idea of single imputation by imputing missing data numerous times, is strongly recommended to improve the quality of the results. In the end, numerous complete datasets exist, each containing the original data with different imputed values. Statistical analysis is run on each of the datasets independently and then combined. Regression methods are able to predict quite well what the missing values would be, and are frequently used to fill in missing data. Furthermore, today’s computer technology allows for the imputation of 50 or more datasets in short time periods. Finally, software has been programmed to combine results to account for the imputed values and adjust parameter estimates and their standard errors accordingly.
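A minimal sketch of the idea in plain Python, using hypothetical (age, income) pairs in which older respondents are less likely to report income: each missing value is filled in by a regression prediction plus random noise (so the imputations reflect uncertainty), the analysis (here simply the mean income) is run on each completed dataset, and the per-dataset estimates are combined. Dedicated multiple-imputation software also pools the standard errors, which this sketch omits.

```python
import random
import statistics

random.seed(0)

# Hypothetical (age, income in $1000s) pairs; None marks missing income.
# MAR: older people are less likely to answer, but given age, missingness
# does not depend on income itself.
data = [(25, 40), (30, 45), (35, 52), (40, 58), (45, None),
        (50, 66), (55, None), (60, 75), (65, None), (70, 82)]

complete = [(x, y) for x, y in data if y is not None]
xs = [x for x, _ in complete]
ys = [y for _, y in complete]

# Ordinary least squares by hand: regress income on age.
x_bar, y_bar = statistics.mean(xs), statistics.mean(ys)
slope = (sum((x - x_bar) * (y - y_bar) for x, y in complete)
         / sum((x - x_bar) ** 2 for x in xs))
intercept = y_bar - slope * x_bar
resid_sd = statistics.stdev(y - (intercept + slope * x) for x, y in complete)

# Multiple imputation: build m completed datasets, adding random noise to
# each regression prediction, and analyze each one.
m = 50
pooled = []
for _ in range(m):
    completed = [y if y is not None else
                 intercept + slope * x + random.gauss(0, resid_sd)
                 for x, y in data]
    pooled.append(statistics.mean(completed))  # analysis on each dataset

# Pooling step: combine the per-dataset estimates (here, a simple average).
print(round(statistics.mean(pooled), 1))
```

The noise term is what distinguishes this from single regression imputation: without it, every dataset would be identical and the uncertainty due to the missing values would be understated.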


In the case of non-ignorable data (sometimes called missing not at random, or MNAR), the missing data is quite informative: the probability that variable X is missing can very well depend on X itself.

Examples of non-ignorable data:

a. A smoker may not want to respond to a question about current smoking status.

b. An overweight person may not want to answer a question about her/his weight.

How is non-ignorable data handled in analysis?

Non-ignorable data is more complex to handle and involves modeling the missing data mechanism (4). One approach that may be taken is a sensitivity analysis. For the smoking example above, this may involve filling in the missing data in two different ways: once with all missing values treated as “smokers” and once with all treated as “non-smokers”. The two datasets are then analyzed separately and the results compared. If the results do not change appreciably, the missing data is of little concern and is reported as such; if they do change, the difference is reported as a limitation of the analysis.
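The smoking sensitivity analysis above can be sketched in a few lines of Python (the responses are hypothetical):

```python
# Hypothetical answers to "Do you currently smoke?"; None marks a
# non-response that may well be non-ignorable.
responses = [True, False, False, None, True, False, None, False, True, False]

def smoking_rate(fill):
    """Proportion of smokers after filling every non-response with `fill`."""
    completed = [fill if r is None else r for r in responses]
    return sum(completed) / len(completed)

# Two extreme scenarios bound the plausible range of the true rate:
low = smoking_rate(False)   # every non-respondent assumed a non-smoker
high = smoking_rate(True)   # every non-respondent assumed a smoker

print(low, high)  # 0.3 0.5
```

If conclusions drawn at 0.3 and at 0.5 agree, the missing responses do not threaten the analysis; if they disagree, that sensitivity is itself the finding to report.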

In summary, missing data is quite common in observational studies and the analytical approach depends on the reason for the missing data. From a statistical modeling perspective, one can never confirm the reason the data is missing; however, a combination of “statistical sleuthing” and discussion with the principal investigator can often shed light on these issues. While it is ideal to have a complete dataset, this is rarely attainable. Nonetheless, the following should be considered to limit the amount of missing data:

  1. Examine the data periodically to ensure the completeness of the data.
  2. Variables that are poorly collected may be better dropped, with the resources reallocated elsewhere.
  3. Set aside budget to attempt to contact people who have left the study or provided no response.
  4. Ensure data entry staff is well trained.
  5. Employ the use of a trusted electronic data capture system.
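For point 1, a periodic completeness check can be as simple as counting the missing values per variable; a sketch with hypothetical study records:

```python
# Hypothetical study records; a periodic completeness report flags
# variables that are accumulating missing values.
records = [
    {"age": 34, "income": 52000, "smoker": False},
    {"age": None, "income": 61000, "smoker": None},
    {"age": 47, "income": None, "smoker": True},
    {"age": 29, "income": None, "smoker": None},
]

report = {}
for field in ("age", "income", "smoker"):
    n_missing = sum(1 for r in records if r[field] is None)
    report[field] = n_missing / len(records)

# List the worst-collected variables first.
for field, frac in sorted(report.items(), key=lambda kv: -kv[1]):
    print(f"{field}: {frac:.0%} missing")
```

Run on a schedule during data collection, a report like this catches poorly collected variables while there is still time to follow up with participants or redesign the form.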


(1) http://www-01.ibm.com/software/data/quality/

(2) Allison, Paul D. Missing data. Thousand Oaks, CA: Sage, 2000.

(3) Harrell, Frank E. Regression modeling strategies: with applications to linear models, logistic regression, and survival analysis. Springer, 2001.

(4) Little, Roderick J.A., and Donald B. Rubin. Statistical Analysis with Missing Data. Wiley, 2002.