
What Is ANOVA Analysis?

02.21.2023 • 7 min read

Sarah Thomas

Subject Matter Expert

Here’s an easy-to-understand overview of ANOVA. We’ll cover why to use it, when to use it, how to use it, and much more.

In This Article

  1. What Is ANOVA Analysis?

  2. Why Use ANOVA?

  3. When To Use ANOVA?

  4. Common Applications

  5. How To Use ANOVA?

  6. What Are the Limitations of ANOVA?

  7. Are There Other Types of ANOVA?

Have you ever wondered how to compare sample averages across many groups in your data set? ANOVA, a statistical method developed by Ronald Fisher in 1918, is a test that allows you to compare means across different sample groups.

In this article, we’ll cover the basics of this method.

What Is ANOVA Analysis?

Analysis of variance (or ANOVA) is a group of statistical methods used to study the relationship between a categorical variable (or subcategories in your data) and a numerical variable. It is an application of linear regression and one of the methods used in inferential statistics.

Suppose you want to study the relationship between:

  • Different groups of leading male actors cast in action movies (a categorical variable)

  • The box office sales of those movies (a numerical variable)

You could use ANOVA to study this relationship.

You may have a working hypothesis that movies starring certain types of leading men (say, those with short hair or a clean-shaven face) have, on average, significantly higher box office sales than movies starring other types (those with long hair or facial hair).

ANOVA helps you determine whether these categories do, in fact, explain part of the variation in sales.

Why Use ANOVA?

ANOVA is useful because it allows you to compare mean outcomes for subgroups in your data. The ANOVA test enables you to determine if a statistically significant difference exists between these group means.

If the difference is statistically significant, you can reject the null hypothesis that each subcategory has the same mean. In other words, you can reject the null hypothesis that the categorical variable does not explain any variation in your outcome variable. You reject this null hypothesis in favor of an alternative hypothesis that at least two of the subgroup means differ, meaning the categorical variable does explain some of the variation in the outcome variable.

Null Hypothesis

$H_0: \mu_1 = \mu_2 = \mu_3 = ... = \mu_k$

Where: $\mu$ is the mean for a subgroup

$k$ is the number of subgroups defined by your categorical variable

Alternative Hypothesis

$H_a$: At least two of the group means $(\mu_1, ..., \mu_k)$ are not equal;

$\mu_i \neq \mu_j$ for some $i \neq j$

When To Use ANOVA?

To use ANOVA, you should identify at least three subgroups, and you need to make the following assumptions about your data:

  • Your independent variable—or explanatory variable—is a categorical variable.

  • Your dependent variable—or response variable—is a numerical variable.

  • The population from which observations are drawn should be normally distributed.

  • Homogeneity of variance exists, meaning the population variance for each subcategory should be the same. We sometimes call this homoscedasticity.

  • Observations in each of the groups are independent of each other and collected through random sampling. If each observation is independent of others, you will also have independent groups.

When your data contains more than two subgroups, statisticians sometimes prefer ANOVA to performing pairwise t-tests, because running multiple t-tests increases the likelihood of making Type I errors. However, there are also ways of adjusting the p-values in your t-tests to deal with this issue.
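If you want a quick way to probe the normality and equal-variance assumptions before running ANOVA, two common checks (not covered in this article) are the Shapiro-Wilk and Levene tests. Here is a minimal sketch in Python using scipy; the group names and box office figures are made up purely for illustration.

```python
# Minimal sketch: checking two ANOVA assumptions with scipy.
# The three groups and their box office figures (in millions) are hypothetical.
from scipy import stats

group_a = [112, 98, 131, 120, 105, 117]
group_b = [95, 88, 102, 110, 99, 93]
group_c = [140, 128, 135, 150, 122, 138]

# Normality within each group (Shapiro-Wilk test): a small p-value
# suggests the group's data departs from a normal distribution.
for name, group in [("A", group_a), ("B", group_b), ("C", group_c)]:
    stat, p = stats.shapiro(group)
    print(f"Group {name}: Shapiro-Wilk p-value = {p:.3f}")

# Homogeneity of variance across groups (Levene's test): a small p-value
# suggests the group variances are not equal.
stat, p = stats.levene(group_a, group_b, group_c)
print(f"Levene's test p-value = {p:.3f}")
```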

Common Applications

ANOVA has applications in many fields. Whenever a research question revolves around differences between subcategories, you should ask yourself whether ANOVA may be useful.

Here are some examples:

Economics

Economists studying income inequality may want to know whether parental education levels (high school diploma, some college, college degree, graduate degree) are correlated with the future earnings of children.

Psychology

A psychologist studying childhood anxiety may want to know whether anxiety levels differ for different age groups.

Business

A business may want to test various advertising campaigns to see whether the ads are linked to improved sales.

Medicine

A medical researcher can use ANOVA to test whether patients undergoing different types of treatments (Treatment A, Treatment B, Treatment C…) experience different overall outcomes.

How To Use ANOVA?

Here is a step-by-step example of how to use a one-way ANOVA test.

Movie Stars and Box Office Sales

Let’s return to our hypothetical example of movie stars and box office sales. Let’s say we have data for action films starring lead male actors. We’ll conduct the analysis of variance in three parts.

Part I. Use Your Data

Use your data to calculate a grand mean and means for each subgroup.

  1. We can first divide the data into four groups based on the leading actor’s hairstyle (bald, buzz cut, short hair, and long hair).

  2. We can then calculate average box office sales for our entire data set. We’ll call this mean the grand mean, $\bar{y}_{gm}$.

  3. Next, we calculate average box office sales for each of our four categories $(\bar{y}_{bald}, \bar{y}_{buzz}, \bar{y}_{short}, \bar{y}_{long})$.

We might see from our data that the sample means are different, but we’ll need to do a bit more analysis and conduct a hypothesis test to determine whether the differences are due to random factors, such as sampling error, or to an actual difference between the groups.
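To make Part I concrete, here is a minimal sketch in Python with NumPy. The four hairstyle groups and their box office figures (in millions of dollars) are invented for illustration only.

```python
# Minimal sketch of Part I: grand mean and group means.
# All sales figures below are hypothetical.
import numpy as np

sales = {
    "bald":  np.array([120, 95, 140, 110]),
    "buzz":  np.array([105, 98, 112, 101]),
    "short": np.array([130, 125, 118, 142]),
    "long":  np.array([90, 88, 102, 97]),
}

# Grand mean: average box office sales across every film in the data set
all_sales = np.concatenate(list(sales.values()))
grand_mean = all_sales.mean()

# Group means: average sales within each hairstyle category
group_means = {group: values.mean() for group, values in sales.items()}

print(f"Grand mean: {grand_mean:.1f}")
for group, mean in group_means.items():
    print(f"  {group}: {mean:.1f}")
```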

Part II. Estimate the Explained Variance in Your Model

To dig deeper, we start by calculating an estimate of explained variance. The intuition is that if any of our categorical groups do matter, some of the overall variance in our data should be explained by the categorical variable.

  1. Start by calculating the sum of squares total (SST). This is the total of all of the squared differences from the grand mean.

$\text{Sum of Squares Total (SST)} = \sum (y_i - \bar{y}_{gm})^2$

2. Next, calculate the sum of squares between (SSB). For each group, you’ll multiply the number of observations in the group by the square of the difference between the group mean and the grand mean. Once you’ve done this for each group, you sum up each of your calculations.

$\text{Sum of Squares Between (SSB)} = \sum n_j (\bar{y}_j - \bar{y}_{gm})^2$

Where:

$n_j$ is the number of observations in group $j$

$\bar{y}_j$ is the mean for group $j$

$\bar{y}_{gm}$ is the grand mean

3. To calculate the explained variance, divide the sum of squares between (SSB) by the sum of squares total (SST). The explained variance gives you an estimate of how much of the total variance in your data can be explained by the categorical variable.

Suppose you find an explained variance of 0.05 or 5%. This figure tells you that your categorical variable can explain an estimated 5% of your overall variance. The explained variance is also called the effect size.

$\text{Explained Variance} = \dfrac{SSB}{SST}$
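Continuing with the same hypothetical data, here is a sketch of Part II: computing SST, SSB, and the explained variance.

```python
# Minimal sketch of Part II: SST, SSB, and explained variance.
# The sales figures are the same hypothetical numbers used above.
import numpy as np

sales = {
    "bald":  np.array([120, 95, 140, 110]),
    "buzz":  np.array([105, 98, 112, 101]),
    "short": np.array([130, 125, 118, 142]),
    "long":  np.array([90, 88, 102, 97]),
}
all_sales = np.concatenate(list(sales.values()))
grand_mean = all_sales.mean()

# SST: squared differences between every observation and the grand mean
sst = ((all_sales - grand_mean) ** 2).sum()

# SSB: n_j times the squared difference between each group mean
# and the grand mean, summed over the groups
ssb = sum(len(v) * (v.mean() - grand_mean) ** 2 for v in sales.values())

# Explained variance (effect size): the share of total variation
# explained by the hairstyle categories
explained_variance = ssb / sst
print(f"SST = {sst:.1f}, SSB = {ssb:.1f}, explained = {explained_variance:.3f}")
```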

Part III. Conduct a Hypothesis Test

Just as you would in any type of inferential statistical analysis, you’ll need to test the statistical significance of your results. We can do this by conducting a hypothesis test.

  1. The first step in the ANOVA hypothesis test is to identify the null hypothesis and the alternative hypothesis.

The null hypothesis ($H_0$) is that the population means for each group are equal $(\mu_{bald} = \mu_{buzz} = \mu_{short} = \mu_{long})$. This would suggest that hairstyles do not help explain the outcome variable. Our alternative hypothesis ($H_a$) is that at least two of the group means are not equal, which suggests that at least some hairstyles do help explain box office sales.

2. Next, we choose a significance level (or alpha level) for the test. Statisticians typically use an alpha level of 0.10 or 0.05, but the choice is yours to make.

3. To conduct the hypothesis test, you’ll use an F-distribution and an F-test statistic. The F-statistic (sometimes called an F-ratio or the ANOVA coefficient) is a ratio where the numerator is derived from the estimate of variance between the subgroups (the $SSB$), and the denominator is derived from the sum of squares error.

$F = \dfrac{MSB}{MSE} = \dfrac{SSB/(\#\text{ of groups} - 1)}{SSE/(\#\text{ of observations} - \#\text{ of groups})}$

Where:

$F$ is the F-statistic, or ANOVA coefficient

$MSB$ is the mean square between (also called the model mean square)

# of groups - 1 is the appropriate degrees of freedom for the $MSB$

$MSE$ is the mean square within (also called the mean squared error)

# of observations - # of groups is the appropriate degrees of freedom for the $MSE$

$SSB$ is the sum of squares between

$SSE$ is the sum of squares error and is equal to $SST - SSB$

You can perform all of these ANOVA calculations using statistical software, like SPSS or R, but the intuition here is simple. If the null hypothesis is true, the variation between the group means should be roughly the same as the variation within each subgroup, and the F-statistic will be close to 1. The F-statistic indicates whether the variation between the groups is similar to or substantially greater than the variation within the groups.

4. As a final step, you can calculate the p-value for your F-statistic. When you perform an ANOVA test using statistical software, you will get an ANOVA table showing you both your F-value and the associated p-value. The larger your sample size, the smaller the F-value needs to be to reach statistical significance.

If the p-value is greater than your significance level, you should fail to reject the null hypothesis. If the p-value is less than the significance level, your results are statistically significant, and you can reject the null hypothesis in favor of the alternative.

As an alternative to this step, you could calculate a critical value and compare your F-value to your critical value. If your F-value is larger than the critical value, you can reject the null hypothesis in favor of the alternative hypothesis.
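Here is a sketch of Part III for the same hypothetical data: computing the F-statistic and p-value by hand, then confirming the result with scipy’s one-way ANOVA.

```python
# Minimal sketch of Part III: F-statistic and p-value.
# The sales figures are the same hypothetical numbers used above.
import numpy as np
from scipy import stats

sales = {
    "bald":  np.array([120, 95, 140, 110]),
    "buzz":  np.array([105, 98, 112, 101]),
    "short": np.array([130, 125, 118, 142]),
    "long":  np.array([90, 88, 102, 97]),
}
groups = list(sales.values())
all_sales = np.concatenate(groups)
grand_mean = all_sales.mean()
k = len(groups)        # number of groups
n = len(all_sales)     # total number of observations

sst = ((all_sales - grand_mean) ** 2).sum()
ssb = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
sse = sst - ssb

# F = MSB / MSE with (k - 1) and (n - k) degrees of freedom
msb = ssb / (k - 1)
mse = sse / (n - k)
f_stat = msb / mse
p_value = stats.f.sf(f_stat, k - 1, n - k)   # right-tail probability
print(f"By hand: F = {f_stat:.2f}, p = {p_value:.4f}")

# The same test in a single call
f_scipy, p_scipy = stats.f_oneway(*groups)
print(f"scipy:   F = {f_scipy:.2f}, p = {p_scipy:.4f}")
```

If the printed p-value falls below your chosen alpha level, you would reject the null hypothesis that all four hairstyle groups share the same mean.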

In these videos, Professor AnnMaria De Mars goes through another ANOVA example looking at psychological distress across groups of patients categorized by their marital status. The first video explains ANOVA, and the second video takes a closer look at the F-statistic.

What Are the Limitations of ANOVA?

Some of the limitations of ANOVA are as follows:

  • The ANOVA test is an omnibus test, meaning it cannot tell you which specific subcategories in your data are statistically significant. To do this, you would need to conduct a post-hoc test, such as a Tukey test (see the sketch after this list), or use other statistical methods.

  • ANOVA analysis assumes that your data is normally distributed within each group. If this is not the case (if there are outliers or the data is skewed), ANOVA may not give you accurate results.

  • ANOVA also assumes that the variances are similar across each group. If there are large differences in variance (or standard deviation) between your groups, you should not use ANOVA.
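If the omnibus test is significant and you want to know which specific groups differ, one option is the Tukey post-hoc test mentioned above. Here is a minimal sketch using the statsmodels library with the same hypothetical data; the figures and hairstyle labels are invented for illustration.

```python
# Minimal sketch: Tukey HSD post-hoc comparisons after a significant ANOVA.
# The box office figures and hairstyle labels are hypothetical.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

sales = np.array([120, 95, 140, 110,    # bald
                  105, 98, 112, 101,    # buzz
                  130, 125, 118, 142,   # short
                  90, 88, 102, 97])     # long
hairstyle = ["bald"] * 4 + ["buzz"] * 4 + ["short"] * 4 + ["long"] * 4

# Pairwise comparisons of group means at a family-wise alpha of 0.05
result = pairwise_tukeyhsd(endog=sales, groups=hairstyle, alpha=0.05)
print(result.summary())
```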

Are There Other Types of ANOVA?

The simplest form of ANOVA is the one-way analysis of variance or single-factor ANOVA test, which we’ve discussed so far in the article. There are other forms of ANOVA tests you could use as well.

Two-way ANOVA. A two-way ANOVA test is an ANOVA analysis that uses two independent categorical variables rather than one.
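As a rough illustration of what a two-way ANOVA might look like in Python, the sketch below uses the statsmodels formula API with an invented data set that adds a second categorical variable (the film’s rating) alongside hairstyle; both factors and all values are hypothetical.

```python
# Minimal sketch: two-way ANOVA with two categorical factors.
# The DataFrame, factor levels, and sales figures are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "sales":     [120, 95, 140, 110, 105, 98, 112, 101, 130, 125, 118, 142],
    "hairstyle": ["bald"] * 4 + ["buzz"] * 4 + ["short"] * 4,
    "rating":    ["PG-13", "R"] * 6,
})

# Fit a linear model with both factors, then produce the ANOVA table
model = smf.ols("sales ~ C(hairstyle) + C(rating)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```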

N-way ANOVA (also called multi-factorial ANOVA). An N-way ANOVA uses three or more categorical independent variables and tests whether these independent variables are correlated with the numerical dependent variable. This is distinct from MANOVA (multivariate ANOVA), which extends the analysis to more than one numerical dependent variable.
