# False positive rate

> The proportion of actual negative examples for which the model mistakenly predicted the [positive class](https://wiki.g15e.com/pages/Positive%20class%20(machine%20learning.txt)). The following formula calculates the false positive rate:[^1]

$$
\frac{\text{incorrectly classified actual negatives}}{\text{all actual negatives}}
$$

which means:

$$
\text{FPR} = \frac{FP}{FP + TN}
$$

The false positive rate is the x-axis in an [ROC curve](https://wiki.g15e.com/pages/ROC%20curve.txt).

## When (not) to use

Use when [false positives](https://wiki.g15e.com/pages/False%20positive.txt) are more expensive than [false negatives](https://wiki.g15e.com/pages/False%20negative.txt).[^2]

In an [imbalanced dataset](https://wiki.g15e.com/pages/Class-imbalanced%20dataset.txt) where the number of actual negatives is very low, say 1-2 examples in total, FPR is a less meaningful and less useful metric.[^2]

## See also

- [Recall](https://wiki.g15e.com/pages/Recall%20(machine%20learning.txt)) (a.k.a. **probability of detection**; the FPR is correspondingly known as the **probability of false alarm**)
- [Accuracy](https://wiki.g15e.com/pages/Accuracy%20(machine%20learning.txt))
- [Precision](https://wiki.g15e.com/pages/Precision%20(machine%20learning.txt))

## Footnotes

[^1]: https://developers.google.com/machine-learning/glossary#FP_rate
[^2]: [ML crash course - Classification](https://wiki.g15e.com/pages/ML%20crash%20course%20-%20Classification.txt)
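
To make the formula concrete, here is a minimal sketch that computes the FPR from a binary confusion matrix; it assumes scikit-learn is available, and the labels below are made-up illustration data.

```python
# Sketch: compute the false positive rate FPR = FP / (FP + TN)
# from binary labels, assuming scikit-learn; example data is illustrative.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 1, 1, 0, 1, 0]  # actual labels (0 = negative, 1 = positive)
y_pred = [0, 1, 0, 1, 0, 0, 1, 1]  # model predictions

# For binary 0/1 labels, confusion_matrix returns [[TN, FP], [FN, TP]].
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

fpr = fp / (fp + tn)  # proportion of actual negatives predicted positive
print(f"FPR = {fpr:.3f}")  # 2 of the 5 actual negatives were misclassified -> 0.400
```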