# L2 regularization

A type of [regularization](https://wiki.g15e.com/pages/Regularization%20(machine%20learning.txt)) that penalizes [weights](https://wiki.g15e.com/pages/Weight%20(machine%20learning.txt)) in proportion to the sum of the *squares* of the weights. L2 regularization helps drive [outlier](https://wiki.g15e.com/pages/Outliers.txt) weights (those with high positive or low negative values) closer to 0 but *not quite to 0*. Features whose weights are very close to 0 remain in the model but don't influence the [model](https://wiki.g15e.com/pages/Model%20(machine%20learning.txt))'s prediction very much.[^1] A code sketch of the penalty is given in the Example section below.

L2 regularization always improves generalization in [linear models](https://wiki.g15e.com/pages/Linear%20model.txt).

## See also

- [L1 regularization](https://wiki.g15e.com/pages/L1%20regularization.txt)

### Footnotes

[^1]: https://developers.google.com/machine-learning/glossary#L2_regularization
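## Example

The penalty added to the training loss is proportional to the sum of squared weights, i.e. lambda * (w1² + w2² + ...). Below is a minimal NumPy sketch (illustrative only, not from the cited glossary; the `fit` function, the `lam` parameter, and the synthetic data are assumptions) showing how the squared-weight penalty shrinks a large weight toward 0 without making it exactly 0:

```python
# Minimal sketch: linear regression trained by gradient descent, with an
# optional L2 penalty lam * sum(w**2) added to the mean squared error.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # 3 features
true_w = np.array([4.0, 0.5, 0.0])       # one outlier-sized weight
y = X @ true_w + rng.normal(scale=0.1, size=200)

def fit(X, y, lam, lr=0.1, steps=2000):
    """Minimize mean squared error + lam * sum(w**2)."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(steps):
        # Gradient of the MSE term plus the L2 term's contribution, 2*lam*w.
        grad = (2 / n) * X.T @ (X @ w - y) + 2 * lam * w
        w -= lr * grad
    return w

print("no regularization:", fit(X, y, lam=0.0))
print("L2, lam=1.0:      ", fit(X, y, lam=1.0))
# With lam > 0 every weight shrinks toward 0 (the outlier weight most of all),
# but none is driven to exactly 0 -- unlike L1 regularization.
```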