# Gradient boosting

> Gradient boosting is a powerful ensemble machine learning technique used for both regression and classification tasks. It builds a strong predictive model by combining a series of "weak" models, which are typically decision trees.

Gradient boosting is a powerful [ensemble](https://wiki.g15e.com/pages/Ensemble%20model.txt) [machine learning](https://wiki.g15e.com/pages/Machine%20learning.txt) technique used for both [regression](https://wiki.g15e.com/pages/Regression%20model.txt) and [classification](https://wiki.g15e.com/pages/Classification%20model.txt) tasks. It builds a strong predictive model by combining a series of "weak" models, which are typically decision trees.

The key idea is that the model is built in a sequential, stage-wise fashion. Each new weak model is trained to correct the errors, specifically the residual errors, of the model built so far. This process is called "boosting" because each step boosts the overall performance of the model. A minimal sketch of this residual-fitting loop is shown after the list of implementations below.

## Implementations

- [LightGBM](https://wiki.g15e.com/pages/LightGBM.txt)
- [XGBoost](https://wiki.g15e.com/pages/XGBoost.txt)
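For illustration, here is a minimal from-scratch sketch of the sequential residual-fitting idea described above, assuming regression with squared-error loss and scikit-learn decision trees as the weak learners. The class name `SimpleGradientBoosting` and its parameters are illustrative, not the API of LightGBM or XGBoost.

```python
# Minimal gradient-boosting sketch for regression with squared-error loss.
# Each stage fits a small tree to the current residuals and adds a scaled
# version of its predictions to the ensemble.
import numpy as np
from sklearn.tree import DecisionTreeRegressor


class SimpleGradientBoosting:
    def __init__(self, n_stages=100, learning_rate=0.1, max_depth=3):
        self.n_stages = n_stages
        self.learning_rate = learning_rate
        self.max_depth = max_depth
        self.trees = []

    def fit(self, X, y):
        # Start from a constant prediction; the mean minimizes squared error.
        self.base_prediction = float(np.mean(y))
        predictions = np.full(len(y), self.base_prediction)
        for _ in range(self.n_stages):
            # For squared-error loss, the negative gradient is the residual.
            residuals = y - predictions
            tree = DecisionTreeRegressor(max_depth=self.max_depth)
            tree.fit(X, residuals)
            # Each new tree nudges the ensemble toward the targets.
            predictions += self.learning_rate * tree.predict(X)
            self.trees.append(tree)
        return self

    def predict(self, X):
        predictions = np.full(len(X), self.base_prediction)
        for tree in self.trees:
            predictions += self.learning_rate * tree.predict(X)
        return predictions
```

Usage would look like `SimpleGradientBoosting(n_stages=200).fit(X_train, y_train).predict(X_test)`. The learning rate shrinks each tree's contribution, which is why boosted ensembles typically combine many shallow trees rather than a few deep ones.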