Which type of error occurs when the null hypothesis is rejected but is actually true?

We want to know the answer to a research question. We determine our null and alternative hypotheses. Now it is time to make a decision.

The decision is either going to be...

  1. reject the null hypothesis or...
  2. fail to reject the null hypothesis.

Note! Why can’t we say we “accept the null”? The reason is that we assume the null hypothesis is true and look for evidence against it. Therefore, the conclusion should be stated in terms of rejecting, or failing to reject, the null.

Consider the following table. The table shows the decision/conclusion of the hypothesis test and the unknown "reality", or truth. We do not know if the null is true or if it is false. If the null is false and we reject it, then we made the correct decision. If the null hypothesis is true and we fail to reject it, then we made the correct decision.

Decision                               Reality: \(H_0\) is true     Reality: \(H_0\) is false
Reject \(H_0\) (conclude \(H_a\))                                   Correct decision
Fail to reject \(H_0\)                 Correct decision

So what happens when we do not make the correct decision?

When doing hypothesis testing, two types of mistakes may be made, and we call them Type I error and Type II error. If we reject the null hypothesis when it is true, then we have made a Type I error. If the null hypothesis is false and we fail to reject it, we have made another error called a Type II error.

Decision                               Reality: \(H_0\) is true     Reality: \(H_0\) is false
Reject \(H_0\) (conclude \(H_a\))      Type I error                 Correct decision
Fail to reject \(H_0\)                 Correct decision             Type II error

Types of errors

Type I error: rejecting the null hypothesis when the null hypothesis is, in fact, true.

Type II error: failing to reject the null hypothesis when the null hypothesis is, in fact, false.

The “reality”, or truth, about the null hypothesis is unknown and therefore we do not know if we have made the correct decision or if we committed an error. We can, however, define the likelihood of these events.

\(\alpha\) ('Alpha'): the probability of committing a Type I error. Also known as the significance level.

\(\beta\) ('Beta'): the probability of committing a Type II error.

Power: the probability that the null hypothesis is rejected given that it is false (i.e., \(1-\beta\)).

\(\alpha\) and \(\beta\) are probabilities of committing an error, so we want these values to be low. However, for a fixed sample size we cannot decrease both at once: as \(\alpha\) decreases, \(\beta\) increases.

Note! A Type I error can also be thought of as the event that we reject the null hypothesis given that the null is true. In other words, Type I error is a conditional event, and \(\alpha\) is a conditional probability. The same idea applies to Type II error and \(\beta\).
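To make the trade-off between \(\alpha\) and \(\beta\) concrete, here is a minimal sketch in Python (assuming scipy is available) for a one-sided, one-sample z-test. The effect size, sample size, and \(\alpha\) values are illustrative assumptions, not values from this section.

```python
# Sketch: how beta changes as alpha changes, for a one-sided, one-sample z-test.
# The effect size and sample size below are illustrative assumptions.
from scipy.stats import norm

effect_size = 0.5   # assumed standardized difference (mu_a - mu_0) / sigma
n = 25              # assumed sample size

for alpha in (0.10, 0.05, 0.01):
    z_crit = norm.ppf(1 - alpha)                      # rejection cutoff under H0
    beta = norm.cdf(z_crit - effect_size * n ** 0.5)  # P(fail to reject | H0 false)
    print(f"alpha = {alpha:.2f}  beta = {beta:.3f}  power = {1 - beta:.3f}")
```

In this sketch, as \(\alpha\) moves from 0.10 down to 0.01, \(\beta\) rises, which is exactly the trade-off described above.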

No hypothesis test is 100% certain. Because the test is based on probabilities, there is always a chance of making an incorrect conclusion. When you do a hypothesis test, two types of errors are possible: type I and type II. The risks of these two errors are inversely related and determined by the level of significance and the power for the test. Therefore, you should determine which error has more severe consequences for your situation before you define their risks.

Type I error
When the null hypothesis is true and you reject it, you make a type I error. The probability of making a type I error is α, which is the level of significance you set for your hypothesis test. An α of 0.05 indicates that you are willing to accept a 5% chance of being wrong when you reject the null hypothesis. To lower this risk, you must use a lower value for α. However, using a lower value for α means that you will be less likely to detect a true difference if one really exists.

Type II error
When the null hypothesis is false and you fail to reject it, you make a type II error. The probability of making a type II error is β, which depends on the power of the test. You can decrease your risk of committing a type II error by ensuring your test has enough power, for example by making sure your sample size is large enough to detect a practical difference when one truly exists.

The probability of rejecting the null hypothesis when it is false is equal to 1 - β. This value is the power of the test.

                           Truth about the population
Decision based on sample   H0 is true                                                   H0 is false
Fail to reject H0          Correct decision (probability = 1 - α)                       Type II error: fail to reject H0 when it is false (probability = β)
Reject H0                  Type I error: reject H0 when it is true (probability = α)    Correct decision (probability = 1 - β)
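As a rough illustration of the sample-size point, the sketch below (again assuming a one-sided, one-sample z-test with an invented effect size) shows power rising with n at a fixed α of 0.05.

```python
# Sketch: power (1 - beta) grows with sample size at a fixed alpha.
# The effect size and sample sizes are illustrative assumptions.
from scipy.stats import norm

alpha = 0.05
effect_size = 0.5              # assumed standardized difference
z_crit = norm.ppf(1 - alpha)   # rejection cutoff under H0

for n in (10, 25, 50, 100):
    power = 1 - norm.cdf(z_crit - effect_size * n ** 0.5)
    print(f"n = {n:3d}  power = {power:.3f}")
```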

Example of type I and type II error

To understand the interrelationship between type I and type II error, and to determine which error has more severe consequences for your situation, consider the following example.

A medical researcher wants to compare the effectiveness of two medications. The null and alternative hypotheses are:

  • Null hypothesis (H0): μ1 = μ2

    The two medications are equally effective.

  • Alternative hypothesis (H1): μ1 ≠ μ2

    The two medications are not equally effective.

A type I error occurs if the researcher rejects the null hypothesis and concludes that the two medications are different when, in fact, they are not. If the medications have the same effectiveness, the researcher may not consider this error too severe because the patients still benefit from the same level of effectiveness regardless of which medicine they take. However, if a type II error occurs, the researcher fails to reject the null hypothesis when it should be rejected. That is, the researcher concludes that the medications are the same when, in fact, they are different. This error is potentially life-threatening if the less-effective medication is sold to the public instead of the more effective one.
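For illustration only, the sketch below runs a two-sample t-test on simulated effectiveness scores. The means, spread, and group sizes are invented assumptions, not data from the researcher's study; here H0 is false by construction, so a non-significant result would be a type II error.

```python
# Sketch: two-sample t-test for the medication example, on simulated data.
# All numbers (means, standard deviation, group sizes) are invented assumptions.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
alpha = 0.05

# Suppose medication 2 truly is slightly more effective, so H0: mu1 = mu2 is false.
med1 = rng.normal(loc=50.0, scale=10.0, size=40)
med2 = rng.normal(loc=55.0, scale=10.0, size=40)

t_stat, p_value = ttest_ind(med1, med2)
if p_value < alpha:
    print(f"p = {p_value:.3f}: reject H0 (correct decision in this setup)")
else:
    print(f"p = {p_value:.3f}: fail to reject H0 (a type II error in this setup)")
```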

As you conduct your hypothesis tests, consider the risks of making type I and type II errors. If the consequences of making one type of error are more severe or costly than making the other type of error, then choose a level of significance and a power for the test that will reflect the relative severity of those consequences.

What type of error occurs when the null hypothesis is rejected when it is true?

A type I error is also known as a false positive and occurs when a researcher incorrectly rejects a true null hypothesis. This means you report that your findings are significant when in fact they arose by chance.

What is a Type I error in research?

A type I error occurs in research when we reject the null hypothesis and erroneously state that the study found significant differences when, in fact, there was no difference. In other words, it is equivalent to saying that the groups or variables differ when they do not: a false positive.

What causes Type I errors?

Type I errors can result from two sources: random chance and improper research techniques. Random chance: no random sample, whether it's a pre-election poll or an A/B test, can ever perfectly represent the population it intends to describe.
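To illustrate the random-chance source, the simulation sketch below (assuming numpy and scipy are available) draws both groups from the same population, so the null hypothesis is true; at α = 0.05, roughly 5% of the tests still reject it. All numbers are illustrative assumptions.

```python
# Sketch: type I errors arise from random chance alone.
# Both groups come from the same population (H0 is true), yet about
# alpha of the tests reject H0. All numbers are illustrative assumptions.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
alpha = 0.05
n_tests = 10_000

false_positives = 0
for _ in range(n_tests):
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)  # same distribution as group a
    _, p_value = ttest_ind(a, b)
    if p_value < alpha:
        false_positives += 1

print(f"Observed false positive rate: {false_positives / n_tests:.3f} (expected about {alpha})")
```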