# Test loss

A metric representing a [model](https://wiki.g15e.com/pages/Model%20(machine%20learning.txt))'s [loss](https://wiki.g15e.com/pages/Loss%20(machine%20learning.txt)) against the [test set](https://wiki.g15e.com/pages/Test%20set.txt). When building a model, you typically try to minimize test loss. That's because a low test loss is a stronger quality signal than a low [training loss](https://wiki.g15e.com/pages/Training%20loss.txt) or low [validation loss](https://wiki.g15e.com/pages/Validation%20loss.txt).[^1]

A large gap between test loss and training loss or validation loss sometimes suggests that you need to increase the [regularization rate](https://wiki.g15e.com/pages/Regularization%20rate.txt).

## Footnotes

[^1]: https://developers.google.com/machine-learning/glossary#test-loss
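
As a rough illustration of how the three losses are compared in practice, here is a minimal sketch using scikit-learn; the synthetic dataset, the ridge model, and the 2x gap threshold are assumptions for illustration only, not part of the glossary definition.

```python
# Minimal sketch: compute training, validation, and test loss and compare them.
# The dataset, model, and gap threshold below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = X @ rng.normal(size=20) + rng.normal(scale=0.5, size=1000)

# Hold out a test set first, then carve a validation set out of the remainder.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

model = Ridge(alpha=1.0)  # alpha is scikit-learn's regularization strength
model.fit(X_train, y_train)

train_loss = mean_squared_error(y_train, model.predict(X_train))
val_loss = mean_squared_error(y_val, model.predict(X_val))
test_loss = mean_squared_error(y_test, model.predict(X_test))

print(f"training loss:   {train_loss:.4f}")
print(f"validation loss: {val_loss:.4f}")
print(f"test loss:       {test_loss:.4f}")

# A large gap between test loss and training loss can indicate overfitting;
# one common response is to increase the regularization rate (here, alpha).
if test_loss > 2 * train_loss:  # illustrative threshold, not a rule
    print("Large train/test gap: consider increasing the regularization rate.")
```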