# Early stopping

> A method for regularization that involves ending training *before* training loss finishes decreasing. In early stopping, you intentionally stop training the model when the loss on a validation dataset starts to increase; that is, when generalization performance worsens.

A method for [regularization](https://wiki.g15e.com/pages/Regularization%20(machine%20learning.txt)) that involves ending [training](https://wiki.g15e.com/pages/Training%20(machine%20learning.txt)) *before* training loss finishes decreasing. In early stopping, you intentionally stop training the model when the loss on a [validation dataset](https://wiki.g15e.com/pages/Validation%20set.txt) starts to increase; that is, when [generalization](https://wiki.g15e.com/pages/Generalization%20(machine%20learning.txt)) performance worsens.

Early stopping may seem counterintuitive. After all, telling a [model](https://wiki.g15e.com/pages/Model%20(machine%20learning.txt)) to halt training while the loss is still decreasing may seem like telling a chef to stop cooking before the dessert has fully baked. However, training a model for too long can lead to [overfitting](https://wiki.g15e.com/pages/Overfitting.txt). That is, if you train a model too long, the model may fit the training data so closely that the model doesn't make good predictions on new examples.[^1]

## Footnotes

[^1]: https://developers.google.com/machine-learning/glossary#early-stopping
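The stopping rule described above can be sketched as a small patience-based check: track the best validation loss seen so far, and stop once it has failed to improve for a few consecutive epochs. This is a minimal illustration, not from the source; the loss values and the `patience` parameter are assumptions for the example.

```python
def early_stopping_epoch(val_losses, patience=2):
    """Return the epoch index at which training stops: the point where the
    validation loss has not improved for `patience` consecutive epochs,
    or the last epoch if that never happens."""
    best = float("inf")
    bad_epochs = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            # Validation loss improved: reset the patience counter.
            best = loss
            bad_epochs = 0
        else:
            # No improvement: generalization may be starting to worsen.
            bad_epochs += 1
            if bad_epochs >= patience:
                return epoch
    return len(val_losses) - 1

# Hypothetical run: validation loss falls, bottoms out at epoch 3,
# then rises as the model starts to overfit.
losses = [0.90, 0.70, 0.55, 0.50, 0.52, 0.58, 0.65]
stop = early_stopping_epoch(losses, patience=2)  # stops at epoch 5
```

In practice you would also keep a checkpoint of the model weights from the best epoch (epoch 3 here), since the stopping epoch itself already reflects worsened generalization.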