Intro to hypothesis testing

Hypothesis testing is all about answering the question: for a parameter θ, is the value θ0 consistent with the data in our observed sample?

We call this the null hypothesis and write

H0 : θ = θ0

where this means that the true (population) value of the parameter θ is equal to some value θ0.

What do we do next? We assume that θ = θ0 in the population, and then check if this assumption is compatible with our observed data. The population with θ = θ0 corresponds to a probability distribution, which we call the null distribution.

Let’s make this concrete. Suppose that we observe data 2, 3, 7 and we know that our data comes from a normal distribution with known variance σ² = 2. Realistically, we won’t know σ², or that our data is normal, but we’ll work with these assumptions for now and relax them later.

Let’s suppose we’re interested in the population mean. Let’s guess that the population mean is 8. In this case we would write the null hypothesis as H0 : μ = 8. This is a ridiculous guess for the population mean given our data, but it’ll illustrate our point. Our null distribution is then Normal(8, 2).

Now that we have a null distribution, we need to dream up a test statistic. In this class, you’ll always be given a test statistic. For now we’ll use the Z statistic.

$$ Z = {\bar x - \mu_0 \over \mathrm{se}\left(\bar x \right)} = {\bar x - \mu_0 \over {\sigma \over \sqrt n}} = {4 - 8 \over \sqrt \frac 23} \approx -4.9 $$
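As a quick sanity check, here’s the same computation in Python (a sketch using numpy, with the data and hypothesized mean from our example):

```python
import numpy as np

x = np.array([2, 3, 7])   # our observed sample
mu_0 = 8                  # hypothesized population mean
sigma = np.sqrt(2)        # known population standard deviation

# Z = (sample mean - hypothesized mean) / standard error of the mean
z = (x.mean() - mu_0) / (sigma / np.sqrt(len(x)))
print(z)  # about -4.9
```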

Recall: a statistic T(X) is a function from a random sample into the real line. Since statistics are functions of random samples, they are themselves random variables.

Test statistics are chosen to have two important properties:

  1. They need to relate to the population parameter we’re interested in measuring
  2. We need to know their sampling distributions

Sampling distributions you say! Why do test statistics have sampling distributions? Because we’re just taking a function of a random sample.
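We can see this directly with a short simulation: draw many samples from the null distribution, compute Z for each, and look at the resulting values (a sketch, assuming the setup from our running example):

```python
import numpy as np

rng = np.random.default_rng(17)
mu_0, sigma, n = 8, np.sqrt(2), 3

# Draw many samples from the null distribution and compute Z for each
samples = rng.normal(mu_0, sigma, size=(10_000, n))
z_stats = (samples.mean(axis=1) - mu_0) / (sigma / np.sqrt(n))

# Under the null, these should look like draws from Normal(0, 1)
print(z_stats.mean(), z_stats.std())  # close to 0 and 1
```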

For this example, we know that

Z ∼ Normal(0, 1)

and now we ask how probable a statistic this extreme is, given that the null distribution is true.

The idea is that if this number is very small, then our null distribution can’t be correct: we shouldn’t observe highly unlikely statistics. This means that hypothesis testing is a form of falsification testing.

For the example above, we are interested in the probability of observing a more extreme test statistic given the null distribution, which in this case is:

P(|Z| > 4.9) = P(Z < −4.9) + P(Z > 4.9) ≈ 9.6 ⋅ 10⁻⁷

This probability is called a p-value. Since it’s very small, we conclude that the null hypothesis is not realistic. In other words, the population mean is statistically distinguishable from 8 (whether or not it is practically distinguishable from 8 is entirely another matter).
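To verify this p-value numerically, here’s a sketch using scipy’s standard normal helpers (norm.cdf for the lower tail, norm.sf for the upper tail):

```python
from scipy.stats import norm

z = -4.9
# Two-sided p-value: probability of a statistic at least this extreme
p_value = norm.cdf(-abs(z)) + norm.sf(abs(z))  # same as 2 * norm.sf(abs(z))
print(p_value)  # about 9.6e-07
```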

This is the gist of hypothesis testing. Of course there’s a bunch of other associated nonsense that obscures the basic idea, which we’ll dive into next.

Things that can go wrong

False positives

We need to be concerned about rejecting the null hypothesis when the null hypothesis is true. This is called a false positive or a Type I error.

If the null hypothesis is true, and we calculate a statistic like we did above, we still expect to see a p-value as small as 9.6 ⋅ 10⁻⁷ about 9.6 ⋅ 10⁻⁵ percent of the time. For small p-values this isn’t an issue, but let’s consider a different null hypothesis of μ0 = 3.9. Now

$$ Z = {\bar x - \mu_0 \over {\sigma \over \sqrt n}} = {4 - 3.9 \over \sqrt \frac 23} \approx 0.12 $$

and our corresponding p-value is

P(|Z| > 0.12) = P(Z < −0.12) + P(Z > 0.12) ≈ 0.9

and we see that this is quite probable! We should definitely not reject the null hypothesis!
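Here’s the same scipy computation for this new null, as a sketch:

```python
import numpy as np
from scipy.stats import norm

x_bar, mu_0, sigma, n = 4, 3.9, np.sqrt(2), 3
z = (x_bar - mu_0) / (sigma / np.sqrt(n))
p_value = 2 * norm.sf(abs(z))
print(z, p_value)  # about 0.12 and 0.90
```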

This leads us to a new question: when should we reject the null hypothesis? A standard choice is to fix an acceptable probability of a false positive, α, ahead of time. One arbitrary but common choice is α = 0.05, which means we are okay with a ${1 \over 20}$ chance of a false positive. We then reject the null hypothesis when the p-value is less than α. This is often called “rejecting the null hypothesis at significance level α”. More formally, we might write

P(reject H0|H0 true) = α
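A small simulation makes this concrete: if we repeatedly sample under the null and test at α = 0.05, we should falsely reject about 5 percent of the time (a sketch, reusing the setup from our running example):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
mu_0, sigma, n, alpha = 8, np.sqrt(2), 3, 0.05

# Repeatedly sample from the null and test H0: mu = 8 each time
samples = rng.normal(mu_0, sigma, size=(100_000, n))
z = (samples.mean(axis=1) - mu_0) / (sigma / np.sqrt(n))
p_values = 2 * norm.sf(np.abs(z))

print((p_values < alpha).mean())  # about 0.05
```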

False negatives

On the other hand, we may also fail to reject the null hypothesis when the null hypothesis is in fact false. We might just not have enough data to reject the null, for example. We call this a false negative or a Type II error, and write its probability as β = P(fail to reject H0|H0 false). The power of a test is the probability of correctly rejecting a false null:

Power = P(reject H0|H0 false) = 1 − β

To achieve a power of 1 − β for a one-sample Z-test, you need

$$ n \approx \left( { \sigma \cdot (z_{\alpha / 2} + z_\beta) \over \mu_0 - \mu_A } \right)^2 $$

where μA is the true mean and μ0 is the proposed mean. We’ll do an exercise later that will help you see where this comes from.
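As a sketch, here’s this formula as a small Python helper (z_test_sample_size is just an illustrative name; norm.ppf gives normal quantiles, and using the upper-quantile convention gives the same answer after squaring):

```python
import numpy as np
from scipy.stats import norm

def z_test_sample_size(sigma, mu_0, mu_a, alpha=0.05, beta=0.2):
    """Approximate n for a two-sided one-sample Z-test to reach power 1 - beta."""
    z_alpha = norm.ppf(1 - alpha / 2)  # upper alpha/2 quantile, e.g. 1.96
    z_beta = norm.ppf(1 - beta)        # upper beta quantile, e.g. 0.84
    return (sigma * (z_alpha + z_beta) / (mu_0 - mu_a)) ** 2

# e.g. sigma = 1.8, true mean 0.2 away from the proposed mean, 80% power
print(np.ceil(z_test_sample_size(sigma=1.8, mu_0=15, mu_a=15.2)))  # about 636
```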

Examples

Z-test

A company claims battery lifetimes are normally distributed with μ = 40 and σ = 5 hours. We are curious if the claim about the mean is reasonable, and collect a random sample of 100 batteries. The sample mean is 39.8. What is the p-value of a Z-test for H0 : μ = 40?

We begin by calculating a Z-score

$$ Z = {\bar x - \mu_0 \over {\sigma \over \sqrt n}} = {39.8 - 40 \over {5 \over \sqrt{100}}} = -0.4 $$

and then we calculate, using the fact that Z ∼ Normal(0, 1),

P(Z < −0.4) + P(Z > 0.4) ≈ 0.69

We might also be interested in a one-sided test, where HA : μ < 40. In this case only the lower tail Z < −0.4 counts, and the p-value is

P(Z < −0.4) ≈ 0.34
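Both of these p-values are easy to check in Python (a sketch with scipy):

```python
import numpy as np
from scipy.stats import norm

x_bar, mu_0, sigma, n = 39.8, 40, 5, 100
z = (x_bar - mu_0) / (sigma / np.sqrt(n))  # -0.4

two_sided = 2 * norm.sf(abs(z))  # about 0.69
one_sided = norm.cdf(z)          # about 0.34, for H_A: mu < 40
print(z, two_sided, one_sided)
```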

Power for Z-test

Suppose a powdered medicine is supposed to have a mean particle diameter of μ = 15 micrometers, and the standard deviation of diameters stays steady around 1.8 micrometers. The company would like to have high power to detect mean diameters 0.2 micrometers away from 15. When n = 100, what is the power of the test if the true μ is 15.2 micrometers? Assume the company is interested in controlling Type I error at an α = 0.05 level.

We will reject the null when our Z score is less than zα/2 or greater than z1 − α/2, that is, when the Z score is less than −1.96 or greater than 1.96. Recall that the Z score is ${\bar x - \mu_0 \over {\sigma \over \sqrt n}}$, which we can rearrange in terms of x̄ to see that we will reject the null when x̄ < 14.65 or x̄ > 15.35.

Now we are interested in the probability of being in this rejection region when the alternative hypothesis μA = 15.2 is true.

P(x̄ > 15.35|μ = 15.2) + P(x̄ < 14.65|μ = 15.2)

and we know that $\bar x \sim \mathrm{Normal} \left(15.2, \left(1.8 / \sqrt{100}\right)^2\right)$ so this equals

0.001 + 0.198 ≈ 0.199

So we have only a power of about 20 percent. This is quite low.
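Here’s this power calculation as a Python sketch, which reproduces the numbers above (norm.cdf and norm.sf accept a mean loc and standard deviation scale):

```python
import numpy as np
from scipy.stats import norm

mu_0, mu_a, sigma, n, alpha = 15, 15.2, 1.8, 100, 0.05
se = sigma / np.sqrt(n)

# Rejection region for the sample mean under the null
cutoff = norm.ppf(1 - alpha / 2) * se
lower, upper = mu_0 - cutoff, mu_0 + cutoff  # about 14.65 and 15.35

# Power: probability of landing in the rejection region when mu = 15.2
power = norm.cdf(lower, loc=mu_a, scale=se) + norm.sf(upper, loc=mu_a, scale=se)
print(power)  # about 0.2
```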