6.5 - Power

The probability of rejecting the null hypothesis, given that the null hypothesis is false, is known as power. In other words, power is the probability of correctly rejecting \(H_0\).

Power
\(Power = 1-\beta\)
where \(\beta\) is the probability of committing a Type II error.

The power of a test can be increased in a number of ways, for example by increasing the sample size, decreasing the standard error, increasing the difference between the sample statistic and the hypothesized parameter, or increasing the alpha level.

When we increase the sample size, decrease the standard error, or increase the difference between the sample statistic and the hypothesized parameter, the p value decreases, making it more likely that we reject the null hypothesis. When we increase the alpha level, there is a larger range of p values for which we would reject the null hypothesis. In all of these cases, we say that statistical power is increased.
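These relationships can be sketched numerically. The snippet below computes the power of a one-sided, one-sample z test under the simplifying assumption of a known population standard deviation; the particular values of \(\mu\), \(\sigma\), and the effect size are illustrative and not taken from this lesson.

```python
from statistics import NormalDist

def z_test_power(mu0, mu1, sigma, n, alpha):
    """Power of a right-tailed one-sample z test of H0: mu = mu0,
    when the true mean is mu1 and sigma is known (illustrative sketch)."""
    z_crit = NormalDist().inv_cdf(1 - alpha)       # rejection cutoff on the z scale
    shift = (mu1 - mu0) / (sigma / n ** 0.5)       # true effect in standard-error units
    return 1 - NormalDist().cdf(z_crit - shift)    # P(reject H0 | mu = mu1)

# Larger n -> smaller standard error -> higher power.
print(z_test_power(500, 503, 100, 100, 0.05))
print(z_test_power(500, 503, 100, 800, 0.05))
# Larger alpha -> larger rejection region -> higher power.
print(z_test_power(500, 503, 100, 800, 0.10))
```

Each change (raising \(n\) or raising \(\alpha\)) increases the printed power, matching the paragraph above.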

There is a relationship between \(\alpha\) and \(\beta\). If the sample size is fixed, then decreasing \(\alpha\) will increase \(\beta\). If we want both \(\alpha\) and \(\beta\) to decrease (i.e., decreasing the likelihood of both Type I and Type II errors), then we should increase the sample size.
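The tradeoff can be tabulated with the same one-sided z-test setup (again a sketch with illustrative numbers, not values from this lesson): with \(n\) fixed, lowering \(\alpha\) raises \(\beta\), while raising \(n\) lets both shrink.

```python
from statistics import NormalDist

def beta_error(mu0, mu1, sigma, n, alpha):
    """Type II error rate (beta) of a right-tailed one-sample z test
    of H0: mu = mu0 when the true mean is mu1 (illustrative sketch)."""
    z_crit = NormalDist().inv_cdf(1 - alpha)
    shift = (mu1 - mu0) / (sigma / n ** 0.5)
    return NormalDist().cdf(z_crit - shift)   # P(fail to reject H0 | mu = mu1)

# With n fixed at 30, lowering alpha raises beta...
for alpha in (0.10, 0.05, 0.01):
    print(alpha, beta_error(0, 0.5, 1, 30, alpha))
# ...but with a larger sample, both alpha and beta can be small at once.
print(0.01, beta_error(0, 0.5, 1, 120, 0.01))
```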

Try it!

Question 1
If the power of a statistical test is increased, for example by increasing the sample size, how does the probability of a Type II error change?

The probability of committing a Type II error is known as \(\beta\).

\(Power+\beta=1\)

\(Power=1-\beta\)

If power increases then \(\beta\) must decrease. So, if the power of a statistical test is increased, for example by increasing the sample size, the probability of committing a Type II error decreases.

Question 2
When we fail to reject the null hypothesis, can we accept the null hypothesis? For example, with a p value of 0.12 we fail to reject the null hypothesis at the 0.05 alpha level. Can we say that the data support the null hypothesis?

No. When we perform a hypothesis test, we only set the Type I error rate (i.e., alpha level) and guard against it. Thus, we can only present the strength of evidence against the null hypothesis. We can sidestep the concern about Type II error if the conclusion never mentions that the null hypothesis is accepted. When the null hypothesis cannot be rejected, there are two possible cases:

1) The null hypothesis is really true.

2) The sample size is not large enough to reject the null hypothesis (i.e., statistical power is too low).

Question 3
A study was conducted by a retail store to determine if the majority of their customers were teenagers. With \(\widehat{p}=0.48\), the null hypothesis was not rejected and the company concluded that they did not have evidence that the majority of their customers were teenagers. But, in reality, the proportion of all of their customers (i.e., the population) who are teenagers is actually \(p=0.53\). Did this research study result in a Type I error, Type II error, or correct decision?

The result of the study was to fail to reject the null hypothesis. In reality, the null hypothesis was false. This is a Type II error.
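A small simulation can illustrate how often a Type II error like this occurs. The sample size below (\(n=500\)) and the number of trials are assumptions for illustration only; the original question does not state a sample size. Each trial draws a sample from a population in which the true proportion of teenagers is 0.53 and runs a right-tailed one-proportion z test of \(H_0: p=0.5\).

```python
import random
from statistics import NormalDist

def reject_h0(n, p_true, alpha=0.05):
    """Right-tailed one-proportion z test of H0: p = 0.5 on one simulated sample."""
    p_hat = sum(random.random() < p_true for _ in range(n)) / n
    se = (0.5 * 0.5 / n) ** 0.5            # standard error under H0
    z = (p_hat - 0.5) / se
    return z > NormalDist().inv_cdf(1 - alpha)

random.seed(1)
n, trials = 500, 2000                      # n = 500 is an assumed, illustrative size
type2_rate = sum(not reject_h0(n, 0.53) for _ in range(trials)) / trials
print(f"Estimated Type II error rate: {type2_rate:.2f}")
```

With a true proportion this close to 0.5, the test fails to reject in well over half of the simulated studies, showing that the Type II error in this scenario is quite likely at moderate sample sizes.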

Question 4
A university conducted a hypothesis test to determine if their students' average SAT-Math score was greater than the national average of 500. They collected a sample of \(n=800\) students and found \(\overline{x}=506\). The t test statistic was 1.70 and \(p=0.045\); therefore, they rejected the null hypothesis and concluded that the mean SAT-Math score at their university was higher than the national average. However, in reality, in the population of all students at the university, the mean SAT-Math score is 503. Did this research study result in a Type I error, Type II error, or correct decision?
This study came to a correct conclusion. They rejected the null hypothesis and concluded that \(\mu>500\) when in reality \(\mu=503\) which is greater than 500.
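As a quick arithmetic check, the reported p value follows from the test statistic. With \(df = 799\) the t distribution is very close to the standard normal, so the standard normal is used below as an approximation; the implied sample standard deviation is also recovered from the pieces given in the question.

```python
from statistics import NormalDist

t_stat = 1.70
# One-sided p-value; at df = 799 the t distribution is nearly standard normal.
p_value = 1 - NormalDist().cdf(t_stat)
print(round(p_value, 3))  # 0.045, matching the reported p value

# Sample standard deviation implied by t = (xbar - mu0) / (s / sqrt(n)).
implied_sd = (506 - 500) / t_stat * 800 ** 0.5
print(round(implied_sd, 1))
```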