# Ground truth (machine learning)

> Reality. The thing that actually happened.[^1]

For example, consider a [binary classification](https://wiki.g15e.com/pages/Binary%20classification.txt) model that predicts whether a student in their first year of university will graduate within six years. Ground truth for this model is whether or not that student actually graduated within six years.[^1]

We assess model quality against ground truth. However, ground truth is not always completely, well, truthful. Consider the following examples of potential imperfections in ground truth:[^1]

- In the graduation example, are we *certain* that the graduation records for each student are always correct? Is the university's record-keeping flawless?
- Suppose the label is a floating-point value measured by instruments (for example, barometers). How can we be sure that each instrument is calibrated identically or that each reading was taken under the same circumstances?
- If the [label](https://wiki.g15e.com/pages/Label%20(machine%20learning).txt) is a matter of human opinion, how can we be sure that each human [rater](https://wiki.g15e.com/pages/Rater%20(machine%20learning).txt) is evaluating events in the same way? To improve consistency, *expert* human raters sometimes intervene.

## Footnotes

[^1]: https://developers.google.com/machine-learning/glossary#ground_truth
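The idea of assessing model quality against ground truth can be sketched in a few lines of plain Python. The labels below are made up for illustration; note that if the ground-truth records themselves are wrong (as the imperfections above suggest they can be), the computed accuracy is wrong too.

```python
# Hypothetical predictions for six students: 1 = predicted to graduate
# within six years, 0 = predicted not to graduate.
predictions = [1, 0, 1, 1, 0, 1]

# Ground truth: what actually happened, per the (possibly imperfect)
# university records.
ground_truth = [1, 0, 0, 1, 0, 1]

# Model quality here is simple accuracy: the fraction of predictions
# that match ground truth.
correct = sum(p == g for p, g in zip(predictions, ground_truth))
accuracy = correct / len(ground_truth)
print(f"Accuracy: {accuracy:.2f}")
```

If even one ground-truth label is recorded incorrectly, this accuracy figure silently measures agreement with the record, not with reality.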