# Regularization (machine learning)

> Any mechanism that reduces overfitting.

Any mechanism that reduces [overfitting](https://wiki.g15e.com/pages/Overfitting.txt). Popular types of regularization include:[^1]

- [L1 regularization](https://wiki.g15e.com/pages/L1%20regularization.txt)
- [L2 regularization](https://wiki.g15e.com/pages/L2%20regularization.txt)
- [Dropout regularization](https://wiki.g15e.com/pages/Dropout%20regularization.txt)
- [Early stopping](https://wiki.g15e.com/pages/Early%20stopping.txt) (not a formal regularization method, but it can effectively limit overfitting)

Regularization can also be defined as a penalty on a model's complexity.

## Training loss vs. real-world performance

Regularization is counterintuitive. Increasing regularization usually *increases* training loss, which is confusing because, well, isn't the goal to *minimize* training loss?[^1]

Actually, no. The goal isn't to minimize training loss; it is to make excellent predictions on real-world examples. Remarkably, even though increasing regularization increases training loss, it usually helps models make better predictions on real-world examples. The sketch at the end of this page shows this trade-off numerically.

## Footnotes

[^1]: https://developers.google.com/machine-learning/glossary#regularization
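
## Sketch: the regularization trade-off in code

To make the trade-off above concrete, here is a minimal sketch of L2 regularization as a penalty on a model's complexity. It is not from the glossary: the synthetic data, the `fit_ridge` helper, and the chosen lambda values are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: few samples, many features, so an
# unregularized least-squares fit overfits easily.
n_samples, n_features = 30, 20
true_w = np.zeros(n_features)
true_w[:3] = [2.0, -1.0, 0.5]  # only three features actually matter
X = rng.normal(size=(n_samples, n_features))
y = X @ true_w + rng.normal(scale=1.0, size=n_samples)
X_val = rng.normal(size=(200, n_features))
y_val = X_val @ true_w + rng.normal(scale=1.0, size=200)

def fit_ridge(X, y, lam):
    """L2-regularized least squares (ridge regression), closed form:
    argmin_w ||Xw - y||^2 + lam * ||w||^2."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def mse(X, y, w):
    return np.mean((X @ w - y) ** 2)

# Increasing the penalty strength raises training loss but usually
# improves performance on held-out (i.e. "real-world") examples.
for lam in [0.0, 0.1, 1.0, 10.0]:
    w = fit_ridge(X, y, lam)
    print(f"lambda={lam:5.1f}  train MSE={mse(X, y, w):.3f}  "
          f"val MSE={mse(X_val, y_val, w):.3f}")
```

As lambda grows, training MSE can only go up (the fit is pulled away from the unpenalized optimum), while validation MSE typically improves for moderate lambda before degrading again once the penalty becomes too strong.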