10.6 - Screening Biases


Bias results from a problem with the methods of a study that cannot be corrected in the analysis. We can adjust for the effects of confounders in an analysis (for example, by calculating adjusted rates), but we cannot correct for biases.

There are two common types of bias:

Information bias - the data are not accurate, possibly because of faulty instruments or incorrect recording.

Selection bias - the study population is not representative of the larger population, possibly because of a poor sampling process or because many individuals are lost to follow-up.

Common biases in screening include:

• Lead-time bias (information bias) - the systematic error of apparently increased survival from detecting disease at an earlier stage
• Length bias (information bias) - the systematic error from preferentially detecting disease with a long latency or pre-clinical period
• Referral/volunteer bias (selection bias) - the systematic error from detecting disease in persons who have a propensity to seek health care
• Detection bias (information bias) - the detection of clinically insignificant disease

Note: Lead time and length biases reduce the utility of using increased survival time as the measure of success for a screening modality. Instead, screening programs have traditionally been assessed with changes in mortality rates. Disease-specific mortality rates have been the most commonly used measure of disease frequency.

Let's take a look at a chart we saw earlier. Here the disease starts in 1985, is diagnosed in 1992 and the person dies of that disease in 1995. How long is his survival? Three years.

Now we institute an effective screening program. The disease starts in 1985 and is detected by the screening program in 1989. The person dies of the disease in 1995. How long was his survival? Six years. Screening seems to have increased his survival time, correct?

You may also have noted that in either situation, it is 10 years from the time the disease started until the person dies. If our measure is survival time, we can easily produce a lead-time bias. In this example, there is actually no survival benefit from the screening process: the person still died in 1995. He simply knew about the disease for three years longer; that is the only effect of the screening. This example demonstrates a lead-time bias of three years.
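The arithmetic above can be sketched in a few lines of Python (a minimal illustration; the years are taken directly from the example):

```python
# Years from the example: onset 1985, clinical diagnosis 1992,
# screen detection 1989, death 1995.
onset, death = 1985, 1995
clinical_diagnosis = 1992   # diagnosed when symptoms appear
screen_detection = 1989     # found earlier by the screening program

survival_without_screening = death - clinical_diagnosis  # 3 years
survival_with_screening = death - screen_detection       # 6 years

# The person dies in the same year either way; screening only moved the
# diagnosis date. The apparent survival gain is pure lead time.
lead_time_bias = survival_with_screening - survival_without_screening  # 3 years
```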

An effective screening program for a life-threatening disease should extend life. Screening studies with an outcome of survival time are subject to a lead-time bias that can favor the screening process even when there is no actual benefit to the program.

Another way to represent the lead time bias is on a survival curve:

Gordis, L. Epidemiology. Philadelphia: Saunders and Company, 1996

In this graph, at time zero 100% of the people are alive. After five years, 30% are alive. Now we institute a screening program that detects disease one year earlier. In this case, measuring five years from the screening diagnosis yields a 50% survival rate. We have a lead-time bias of a 20-percentage-point increase in five-year survival. All that has been done here is to move the diagnosis back one year!
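A minimal sketch of this shift, using survival values read off the graph (the 50% value four years after clinical diagnosis is implied by the one-year lead time):

```python
# Survival probabilities measured from the TRUE (clinical) diagnosis date.
# 30% at 5 years is stated in the text; 50% at 4 years is the value the
# graph implies, since 5 years after a screening diagnosis is only 4 years
# after the clinical diagnosis would have occurred.
survival_from_clinical_diagnosis = {0: 1.00, 4: 0.50, 5: 0.30}

LEAD_TIME = 1  # screening detects disease one year earlier

five_year_unscreened = survival_from_clinical_diagnosis[5]            # 0.30
five_year_screened = survival_from_clinical_diagnosis[5 - LEAD_TIME]  # 0.50

# Apparent 20-percentage-point gain with no change in anyone's death date.
apparent_gain = five_year_screened - five_year_unscreened
```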

Instead, we need to look at the mortality rates from the disease,  i.e. the mortality rates in the exposed group and the nonexposed group. Mortality rates are the 'gold standard' for measuring the effect of early screening and treatment, not survival time.

Length Bias

Let's use this graph to consider the effect of length bias:

Disease onset is at zero and each line represents an individual. The bottom line, for instance, represents a person with a very slow disease growth rate. The top line, with the steepest slope, represents someone with aggressive disease: rapid growth and early death (D). The individuals with slower-growing disease live long enough to be screened (S).

A screening initiative is more likely to detect slow-growing disease. There is not much that a researcher can do about this type of bias other than to realize it is likely to occur.
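A small simulation can make length bias concrete. This is a hypothetical sketch (the screen time, onset window, and pre-clinical durations are invented for illustration): a single screen catches a case only if the screen falls inside that case's pre-clinical window, so slow-growing disease with a long window is detected far more often.

```python
import random

random.seed(1)

SCREEN_TIME = 50.0  # a single screen at an assumed point in the study period

def detected(onset, preclinical_duration, screen_time=SCREEN_TIME):
    """True if the screen falls within the case's pre-clinical window."""
    return onset <= screen_time < onset + preclinical_duration

def detection_rate(duration, n=100_000):
    """Fraction of cases with the given pre-clinical duration the screen catches."""
    hits = 0
    for _ in range(n):
        onset = random.uniform(0.0, 100.0)  # onsets spread over the study period
        if detected(onset, duration):
            hits += 1
    return hits / n

slow = detection_rate(duration=10.0)  # slow-growing: long pre-clinical phase
fast = detection_rate(duration=1.0)   # aggressive: short pre-clinical phase
# With a 10x longer pre-clinical window, slow-growing cases are roughly
# 10x more likely to be caught, even though the screen treats all cases alike.
```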

Breast Cancer Screening - HIP Experience

Data from the Health Insurance Plan (HIP) in New York are presented as deaths per 10,000 women per year, from all causes and from cardiovascular (CV) causes, in the table below:

Deaths per 10,000 women per year:

                         From all causes    From CV causes
Control women                  54                 25
Experimental women:
  Volunteered                  42                 17
  Refused                      77                 38

The control group has death rates of 54 and 25/10,000. The experimental group, women who were screened, were recruited two different ways:

1) women who volunteered, after being asked to participate

2) women who did not volunteer to be screened, after being asked to participate

The state was able to record the death rates for each of these groups. The women who volunteered for screening had much lower death rates, from both all causes and cardiovascular causes. The women who volunteered appear to be healthier. The women who were offered the program but refused have the highest death rates.
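The comparison can be written out directly from the table (rates transcribed from above):

```python
# Death rates per 10,000 women per year, from the HIP table.
deaths_per_10k = {
    "control":   {"all_causes": 54, "cv_causes": 25},
    "volunteer": {"all_causes": 42, "cv_causes": 17},
    "refused":   {"all_causes": 77, "cv_causes": 38},
}

# For both cause groupings, volunteers die at lower rates than controls and
# refusers at higher rates. Cardiovascular deaths cannot be affected by
# breast cancer screening, so the ordering reflects who chose to volunteer
# (selection bias), not any effect of the screening itself.
for cause in ("all_causes", "cv_causes"):
    assert (deaths_per_10k["volunteer"][cause]
            < deaths_per_10k["control"][cause]
            < deaths_per_10k["refused"][cause])
```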

If a screening study does not include a randomized process for selection, volunteers for the study are likely to be in better health than the general population. Thus, in evaluating a screening study, consider how the subjects were recruited.  Were they volunteers or were they randomized into screening or no-screening groups?

How to Avoid Bias

Can we design a screening study without these biases?

For lead-time bias – use mortality rates rather than survival time.

A randomized clinical trial design can reduce biases:

• For length bias – count all outcomes regardless of the method of detection
• For volunteer bias – count all outcomes regardless of group assignment; follow up those who refuse in order to obtain their outcomes
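The last point amounts to an intention-to-treat style count: outcomes are tallied by randomized group regardless of whether screening actually occurred. A hypothetical sketch (the records are invented for illustration):

```python
# Each subject has a randomized arm and an outcome flag. Some assigned to
# screening refused, but their outcomes are still counted in the screening arm.
subjects = [
    {"arm": "screening", "attended": True,  "died": False},
    {"arm": "screening", "attended": False, "died": True},   # refused, still counted
    {"arm": "screening", "attended": True,  "died": False},
    {"arm": "control",   "attended": False, "died": True},
    {"arm": "control",   "attended": False, "died": False},
    {"arm": "control",   "attended": False, "died": True},
]

def mortality_by_arm(records):
    """Deaths / subjects per randomized arm, ignoring whether screening occurred."""
    totals = {}
    for r in records:
        deaths, n = totals.get(r["arm"], (0, 0))
        totals[r["arm"]] = (deaths + r["died"], n + 1)
    return {arm: deaths / n for arm, (deaths, n) in totals.items()}

rates = mortality_by_arm(subjects)
```

Counting the refuser's death in the screening arm keeps the comparison honest: dropping non-attenders would leave only the (typically healthier) volunteers in the screened group.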