# Gradient descent

> A mathematical technique to minimize loss. Gradient descent iteratively adjusts weights and biases, gradually finding the best combination to minimize loss. Gradient descent is older--much, much older--than machine learning.[^1]

A mathematical technique to minimize [loss](https://wiki.g15e.com/pages/Loss%20(machine%20learning.txt)). Gradient descent iteratively adjusts [weights](https://wiki.g15e.com/pages/Weight%20(machine%20learning.txt)) and [biases](https://wiki.g15e.com/pages/Bias%20(machine%20learning.txt)), gradually finding the best combination to minimize loss. Gradient descent is older--much, much older--than [machine learning](https://wiki.g15e.com/pages/Machine%20learning.txt).[^1]

## Algorithm

The model begins training with randomized weights and biases near zero, and then repeats the following steps (sketched in the example at the end of this page):[^2]

1. Calculate the loss with the current weights and biases.
2. Determine the direction to move the weights and biases that reduces loss.
3. Move the weights and biases a small amount in that direction.
4. Return to step one and repeat the process until the loss [converges](https://wiki.g15e.com/pages/Convergence%20(machine%20learning.txt)).

## Footnotes

[^1]: https://developers.google.com/machine-learning/glossary#gradient_descent
[^2]: https://developers.google.com/machine-learning/crash-course/linear-regression/gradient-descent
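
## Example

The loop below is a minimal sketch of the four steps above in Python, for a one-feature linear model (`y = weight * x + bias`) with mean squared error as the loss. The function name, training data, learning rate, and convergence tolerance are illustrative assumptions, not part of the sources cited above.

```python
def gradient_descent(xs, ys, learning_rate=0.01, tolerance=1e-9, max_steps=10_000):
    # Start near zero (the text says "randomized ... near zero";
    # plain zeros are fine for this single-model sketch).
    weight, bias = 0.0, 0.0
    n = len(xs)
    prev_loss = float("inf")
    for _ in range(max_steps):
        # Step 1: calculate the loss (MSE) with the current weight and bias.
        errors = [(weight * x + bias) - y for x, y in zip(xs, ys)]
        loss = sum(e * e for e in errors) / n
        # Step 4: stop once the loss converges (barely changes between steps).
        if abs(prev_loss - loss) < tolerance:
            break
        prev_loss = loss
        # Step 2: the gradient points in the direction of steepest *increase*
        # of the loss, so moving against it reduces the loss.
        d_weight = 2 / n * sum(e * x for e, x in zip(errors, xs))
        d_bias = 2 / n * sum(errors)
        # Step 3: move a small amount, scaled by the learning rate.
        weight -= learning_rate * d_weight
        bias -= learning_rate * d_bias
    return weight, bias

# Hypothetical data roughly following y = 2x + 1.
print(gradient_descent([1, 2, 3, 4], [3.1, 4.9, 7.2, 8.8]))
```

The learning rate controls the trade-off in step 3: too small and convergence is slow, too large and the updates can overshoot the minimum and diverge.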