
How to Calculate MSE

Updated March 23, 2017

The mean square error (MSE) is the average of the squared errors between the actual and estimated values in a data sample. Squaring each error keeps positive and negative errors from cancelling each other out, and it gives larger errors more weight than smaller ones in the result. Mean square error is widely used in signal processing applications, such as assessing signal quality, comparing competing signal processing methods and optimising signal processing algorithms.
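In symbols, writing y_i for each actual value, ŷ_i for the corresponding estimate and n for the number of data points (this notation is ours, chosen for illustration), the calculation described in the steps below is:

    \mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2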

Find the difference between the actual and estimated data points in a sample. For example, if you have developed an algorithm for predicting stock prices, the difference between the predicted stock price and the actual price would be the error. If your algorithm predicts £7, £9, £13, £14 and £15 as prices for five stocks on a particular day, and the actual prices are £8, £11, £11, £13 and £15, respectively, then the errors are £1 (£8 - £7), £2 (£11 - £9), -£2 (£11 - £13), -£1 (£13 - £14) and zero (£15 - £15), respectively.
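As a minimal sketch of this step in Python (the variable names are illustrative, not from the article):

    # Actual and predicted stock prices from the example, in pounds
    actual = [8, 11, 11, 13, 15]
    predicted = [7, 9, 13, 14, 15]

    # Error = actual - predicted for each stock
    errors = [a - p for a, p in zip(actual, predicted)]
    print(errors)  # [1, 2, -2, -1, 0]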

Compute the sum of the squared errors. First, square each difference, and then add the squares together. Continuing with the example, the sum of the squared errors is 10 (1 + 4 + 4 + 1 + 0).
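Continuing the sketch above, squaring and summing takes one line:

    # Square each error, then sum: 1 + 4 + 4 + 1 + 0 = 10
    sum_squared_errors = sum(e ** 2 for e in errors)
    print(sum_squared_errors)  # 10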

Divide the sum of the squared errors by the number of data points to calculate the mean square error. To conclude the example, the mean square error is equal to 2 (10 / 5).
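Putting the three steps together, a self-contained helper might look like the following (a sketch under the assumptions above, not a library function):

    def mean_squared_error(actual, predicted):
        """Average of the squared differences between actual and predicted values."""
        errors = [a - p for a, p in zip(actual, predicted)]
        return sum(e ** 2 for e in errors) / len(errors)

    # Reproduces the worked example: MSE = 10 / 5 = 2.0
    print(mean_squared_error([8, 11, 11, 13, 15], [7, 9, 13, 14, 15]))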


About the Author

Based in Ottawa, Canada, Chirantan Basu has been writing since 1995. His work has appeared in various publications and he has performed financial editing at a Wall Street firm. Basu holds a Bachelor of Engineering from Memorial University of Newfoundland, a Master of Business Administration from the University of Ottawa and the Canadian Investment Manager designation from the Canadian Securities Institute.