# Accuracy (machine learning)

> The number of correct classification predictions divided by the total number of predictions.[^1]

That is, the number of correct classification [predictions](https://wiki.g15e.com/pages/Prediction%20(machine%20learning.txt)) divided by the total number of predictions:[^1]

$$
\text{Accuracy} = \frac{\text{correct preds}}{\text{correct preds} + \text{incorrect preds}}
$$

[Binary classification](https://wiki.g15e.com/pages/Binary%20classification.txt) provides specific names for the different categories of *correct predictions* and *incorrect predictions*. So, the accuracy formula for binary classification is as follows:[^1]

$$
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
$$

where:

- TP is the number of [true positives](https://wiki.g15e.com/pages/True%20positive.txt) (correct predictions).
- TN is the number of [true negatives](https://wiki.g15e.com/pages/True%20negative.txt) (correct predictions).
- FP is the number of [false positives](https://wiki.g15e.com/pages/False%20positive.txt) (incorrect predictions).
- FN is the number of [false negatives](https://wiki.g15e.com/pages/False%20negative.txt) (incorrect predictions).

## When to use

Because it incorporates all four outcomes ([TP](https://wiki.g15e.com/pages/True%20positive.txt), [FP](https://wiki.g15e.com/pages/False%20positive.txt), [TN](https://wiki.g15e.com/pages/True%20negative.txt), [FN](https://wiki.g15e.com/pages/False%20negative.txt)), accuracy can serve as a coarse-grained measure of model quality on a balanced dataset, one with similar numbers of examples in both classes.
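As a minimal sketch (the `accuracy` helper here is illustrative, not from the source), the binary-classification formula can be computed directly from the four confusion-matrix counts; the second call also shows why accuracy is misleading on an imbalanced dataset:

```python
def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Correct predictions (TP + TN) divided by all predictions."""
    return (tp + tn) / (tp + tn + fp + fn)

# Balanced example: 45 TP, 45 TN, 5 FP, 5 FN out of 100 predictions.
print(accuracy(tp=45, tn=45, fp=5, fn=5))   # 0.9

# Imbalanced pitfall: a model that always predicts "negative" on a
# dataset of 990 negatives and 10 positives still scores 99% accuracy
# while detecting no positives at all.
print(accuracy(tp=0, tn=990, fp=0, fn=10))  # 0.99
```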
For this reason, it is often the default evaluation metric used for generic or unspecified models carrying out generic or unspecified tasks.[^2] However, when the dataset is [imbalanced](https://wiki.g15e.com/pages/Class-imbalanced%20dataset.txt), or when one kind of mistake ([FN](https://wiki.g15e.com/pages/False%20negative.txt) or [FP](https://wiki.g15e.com/pages/False%20positive.txt)) is more costly than the other, which is the case in most real-world applications, it's better to optimize for one of the other metrics instead.[^2]

## See also

- [Precision](https://wiki.g15e.com/pages/Precision%20(machine%20learning.txt))
- [Recall](https://wiki.g15e.com/pages/Recall%20(machine%20learning.txt)) (a.k.a. probability of detection)
- [False positive rate](https://wiki.g15e.com/pages/False%20positive%20rate.txt) (a.k.a. probability of false alarm)

## Footnotes

[^1]: https://developers.google.com/machine-learning/glossary#accuracy
[^2]: [ML crash course - Classification](https://wiki.g15e.com/pages/ML%20crash%20course%20-%20Classification.txt)