# L2 loss

A [loss function](https://wiki.g15e.com/pages/Loss%20function.txt) that calculates the square of the difference between actual [label](https://wiki.g15e.com/pages/Label%20(machine%20learning.txt)) values and the values that a [model](https://wiki.g15e.com/pages/Model%20(machine%20learning.txt)) predicts. Due to squaring, L2 loss amplifies the influence of [outliers](https://wiki.g15e.com/pages/Outliers.txt). That is, L2 loss reacts more strongly to bad predictions than [L1 loss](https://wiki.g15e.com/pages/L1%20loss.txt).[^1]

$$ L_2\ \text{loss} = \sum_{i=0}^{n} ( y_i - \hat{y}_i )^2 $$

## See also

- [Regression models](https://wiki.g15e.com/pages/Regression%20model.txt) typically use L2 loss as the loss function.
- [Mean squared error](https://wiki.g15e.com/pages/Mean%20squared%20error.txt) is the average L2 loss per example.

## Footnotes

[^1]: https://developers.google.com/machine-learning/glossary#l2-loss
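
As a quick illustration of the formula above, here is a minimal NumPy sketch (the function name `l2_loss` and the sample values are assumptions, not part of the glossary entry):

```python
import numpy as np

def l2_loss(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Sum of squared differences between actual labels and predictions."""
    return float(np.sum((y_true - y_pred) ** 2))

# A single bad prediction (7.0 vs. the true 2.0) dominates the total because
# its error is squared, which is why L2 loss reacts more strongly to outliers
# than L1 loss does.
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 7.0, 2.9, 4.2])
print(l2_loss(y_true, y_pred))  # ~25.06
```

Dividing this sum by the number of examples gives the mean squared error.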