Type I Error
A Type I Error is the error we make when we reject the null hypothesis in a hypothesis test even though the null hypothesis is, in reality, true. Let’s say we work for a camping goods manufacturer that makes bag chairs.
We’ve just gotten word that our chairs are collapsing under a weight much lower than the 350 lbs we’ve rated them at. We conduct a hypothesis test to gather evidence for the claim that our chairs break at a weight less than the 350 lbs they’re rated for.
We’ve just calculated a p-value from the hypothesis test. The p-value, 0.02, happens to be less than our predetermined alpha-level of 5%, or 0.05. We should reject the null hypothesis, because that’s what we do when the p-value is less than our alpha-level. We would conclude, “We have sufficient evidence to support the claim that our chairs break at weights less than 350 lbs.” Based on this test result, we look closely into the materials and manufacturing process of the chairs, because the hypothesis test said something is wrong.
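To make the mechanics concrete, here’s a minimal sketch of that test in Python. The breaking-weight sample below is made up for illustration, and the code assumes scipy 1.6+ for the one-sided `alternative` argument.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of measured breaking weights (lbs) -- illustrative only
breaking_weights = np.array([345, 350, 342, 353, 348, 339,
                             351, 346, 349, 344, 352, 345])

rated_weight = 350   # H0: mean breaking weight is 350 lbs (or more)
alpha = 0.05         # predetermined significance level

# One-sided, one-sample t-test: Ha says the true mean is LESS than 350 lbs
t_stat, p_value = stats.ttest_1samp(breaking_weights, rated_weight,
                                    alternative="less")

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: evidence the chairs break below 350 lbs.")
else:
    print("Fail to reject H0: no evidence of early breaking.")
```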
We fire our parts supplier and hire a new, more expensive one, and we buy all new assembly machines. Problem...what if the null hypothesis is actually, secretly, behind-the-scenes-but-we-don’t-know-it true? What if the chairs aren’t breaking at less than their rating? It’s certainly possible, because the p-value only tells us how unlikely our sample would be if the null hypothesis were true...it’s not a guarantee that the null hypothesis is false.
The decision we made to reject the null hypothesis based on statistically significant evidence is wrong, wrong, wrong, even though we don’t know it. Because we made a Type I Error, we fired our parts supplier and bought all new assembly machines, and now we’re out a bunch of money we didn’t really need to spend. We can reduce the chance of making a Type I Error by choosing a lower alpha-level, since alpha is exactly the probability of rejecting a null hypothesis that is actually true.
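As a rough check on that last claim, here’s a small simulation sketch (the sample size, spread, and trial count are assumptions for illustration): when the null hypothesis is really true, the long-run rate of false rejections tracks whatever alpha we pick.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
rated_weight = 350       # H0 is really true: chairs average 350 lbs
n_chairs, n_trials = 12, 10_000

for alpha in (0.10, 0.05, 0.01):
    false_rejections = 0
    for _ in range(n_trials):
        # Draw a sample from a world where the null hypothesis holds
        sample = rng.normal(loc=rated_weight, scale=5, size=n_chairs)
        _, p = stats.ttest_1samp(sample, rated_weight, alternative="less")
        false_rejections += p < alpha  # rejecting here is a Type I Error
    print(f"alpha = {alpha:.2f} -> observed Type I Error rate "
          f"~ {false_rejections / n_trials:.3f}")
```

Dropping alpha from 0.05 to 0.01 cuts the false-rejection rate accordingly, which is exactly the lever described above.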