Center for Problem-Oriented Policing

Step 37: Know that to err is human

Crime prevention often involves predictions. Will offenders associated with the problem continue to offend as they have done in the past? Will recent victims become victims again in the near future? Will hot spot places become cold spots, or will they stay hot? Though past behavior may be our best predictor of future behavior, it is not a perfect predictor.

The examples above deal with predicting the future. But we also try to probe the unknown in other ways, including in our responses to problems. A polygraph examiner tries to assess whether a subject is lying. Drug tests are used to determine whether people have recently used illicit drugs. Metal detectors and baggage-screening devices at airports are used to determine whether passengers have weapons on their person or in their luggage. In each of these examples, the examiner is trying to draw a conclusion about an unknown condition. And just like predictions of the future, these assessments can be accurate or inaccurate. Consequently, it is very important to understand how predictions and other judgments can fail.

A useful way to examine errors of prediction and judgment is to compare the prediction to what actually occurs. The columns in Table 1 show two possible predictions: Yes, the outcome will occur; and No, the outcome will not occur. The rows show two actual outcomes: Yes, the outcome did occur; and No, the outcome did not occur.

Table 1: Types of Prediction Errors

                                Prediction
Actual Outcome     YES                           NO
YES                A. Accurate True Positives    B. False Negatives
NO                 C. False Positives            D. Accurate True Negatives

Accuracy Rate:         (A+D)/(A+B+C+D)
False Negative Rate:   B/(A+B+C+D)
False Positive Rate:   C/(A+B+C+D)

Imagine a large number of predictions. When a prediction corresponds to the actual outcome, it is accurate. Cells A and D contain counts of accurate predictions. You can calculate an accuracy rate by adding the number of predictions that fall into these two cells and dividing by the total number of predictions made.

Now let's look at cells B and C. When the decision-maker predicts that the outcome will not occur, but it does occur, the case falls into cell B. This is called a False Negative. Cases in cell C represent situations in which the decision-maker predicted that the outcome would occur, but it did not. These are False Positives. You can calculate error rates for both types by dividing the number of predictions in each cell by the total number of predictions.
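For analysts who want to compute these rates from their own data, here is a minimal Python sketch that applies the formulas from Table 1 (the function name and layout are ours, purely for illustration):

```python
def prediction_error_rates(a, b, c, d):
    """Compute the three rates defined in Table 1.

    a: accurate true positives  (predicted Yes, outcome occurred)
    b: false negatives          (predicted No, outcome occurred)
    c: false positives          (predicted Yes, outcome did not occur)
    d: accurate true negatives  (predicted No, outcome did not occur)
    """
    total = a + b + c + d
    return {
        "accuracy_rate": (a + d) / total,
        "false_negative_rate": b / total,
        "false_positive_rate": c / total,
    }
```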

Let's look at a hypothetical example. To curb crime in rental housing, a police department encourages and helps landlords to conduct background checks. Prospective renters with recent histories of criminal behavior are not accepted. Such a policy implies a prediction: that people with recent histories of criminal involvement will continue that involvement on or near the rental property, and that people without such backgrounds will not engage in this type of behavior. Even advocates of such a policy would agree that these predictions are not perfect, but it would be useful to know two things. First, does the policy reduce rental property crime? An evaluation could answer this question. Second, even if it does reduce crime, what are the negative consequences? Answering this question requires an analysis of the prediction errors.

If we were able to collect the relevant data, we might be able to create a table like Table 2. We see that the policy's predictions are largely accurate: 92.2 percent. But how do we feel about the errors? Should something be done about people without a recent history of criminal involvement who nevertheless commit crimes? Are too many former offenders who are not engaging in criminal behavior being denied housing?

Table 2: Example of Prediction Error Analysis

                               Prior Criminal Involvement
Later Criminal Involvement     YES       NO       Total
YES                            35*        10        45
NO                             35*       496       531
Total                          70        506       576

* In practice these figures are usually unknown (see below).

Accuracy Rate:         92.2%
False Negative Rate:    1.7%
False Positive Rate:    6.1%
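Plugging the counts from Table 2 into the sketch above reproduces the reported rates:

```python
rates = prediction_error_rates(a=35, b=10, c=35, d=496)
# accuracy_rate:       531/576 ≈ 0.922  (92.2%)
# false_negative_rate:  10/576 ≈ 0.017  (1.7%)
# false_positive_rate:  35/576 ≈ 0.061  (6.1%)
```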

Tighter restrictions to curb offending by people who have no recent criminal record might reduce the false negative rate, but they could increase the false positive rate, particularly if the information used to make those decisions is less accurate than the information currently used. On the other hand, making distinctions among applicants with a recent criminal history could decrease the false positive rate, but at the expense of increasing the false negative rate. Such tradeoffs are quite common.

Further, we may regret one type of error more than another. If the crimes being prevented by the landlords are relatively minor, then the false positive rate might be too high. But if the crimes being averted involve serious violence, then the false negative rate may be of greater concern. The consequences of the errors are very important, and people often disagree about them.

Another source of disagreement is the error rates themselves. Such rates are often very difficult to estimate. Consider the starred figures in Table 2. Under most circumstances, these figures will be unknown. The landlords might count how many people they turned away because of a criminal record, but they cannot tell us what such people would have done had they not been turned away. In other situations the shoe is on the other foot: false positives may be known with some precision, but false negatives are unknown. In airport screening, false positives are known because predictions of carrying contraband are followed by closer scrutiny. A passenger who security personnel believe is carrying a firearm will be subject to a very careful search, revealing whether the initial prediction was accurate. However, false negatives are not known with much reliability. A passenger who carries contraband past airport security may not be checked again, so we cannot learn that she was a false negative.

In some circumstances it is possible to use a pilot test to accurately estimate the errors by making the predictions, not acting on them, and carefully observing what happens. This might be difficult to do with offenders, who prefer to keep their misdeeds hidden, but it could work with potential victims or crime places. For example, a response to a problem might involve predicting which places are most likely to be crime sites and then intervening at those locations. Prior to implementing this response, a pilot study could be conducted in which the predictions are made, but no action is taken. If the error rates are unacceptably high, then it might not be worth implementing the response.
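As a sketch of how such a pilot might be tabulated (the records here are invented for illustration), each entry pairs the prediction made for a place with the outcome later observed, with no intervention in between:

```python
# Hypothetical pilot records: (predicted crime site?, crime observed?)
pilot = [
    ("Yes", "Yes"), ("Yes", "No"), ("No", "No"),
    ("No", "Yes"), ("No", "No"), ("Yes", "Yes"),
]

a = sum(p == "Yes" and o == "Yes" for p, o in pilot)  # true positives
b = sum(p == "No" and o == "Yes" for p, o in pilot)   # false negatives
c = sum(p == "Yes" and o == "No" for p, o in pilot)   # false positives
d = sum(p == "No" and o == "No" for p, o in pilot)    # true negatives

print(prediction_error_rates(a, b, c, d))
```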
