
Center for Problem-Oriented Policing


Assessing Responses to Problems: An Introductory Guide for Police Problem-Solvers

Tool Guide No. 1 (2002)

by John E. Eck

Introduction

The purpose of assessing a problem-solving effort is to help you make better decisions by answering two specific questions. First, did the problem decline? Answering this question helps you decide whether to end the problem-solving effort and focus resources on other problems. Second, if the problem did decline, did the response cause the decline? Answering this question helps you decide whether to apply the response to similar problems.

What This Guide Is About

This introduction to problem-solving assessments is intended to help you design evaluations to answer the two questions above. It was written for those who are responsible for evaluating the effectiveness of responses to problems. It assumes a basic understanding of problem-oriented policing and the SARA problem-solving process (scanning, analysis, response, and assessment), but requires little or no experience with assessing problem solutions.

This guide was written based on the assumption that you have no outside assistance. Nevertheless, you should seek the advice and help of researchers with training and experience in evaluation, particularly if the problem you are addressing is large and complex. Requesting aid from an independent outside evaluator can be particularly helpful if there is controversy over a response's usefulness. Local colleges and universities are a good source for such expertise. Many social science departments–economics, political science, sociology, psychology, and criminal justice/criminology–have faculty and graduate students who are knowledgeable in program evaluation and related topics.

This guide is a companion reference to the Problem-Oriented Guides for Police series. Each guide in the series suggests ways to measure a particular problem, and describes possible responses to it. Though the evaluation principles discussed here are intended to apply to the specific problems in the guides, you should be able to apply them to any problem-solving project.

This is an introduction to a complex subject, and it emphasizes evaluation methods that are the most relevant to problem-oriented policing. You should consult the list of recommended readings at the end of the guide if you are interested in exploring the topic of evaluation in greater detail.

† Excluded from this discussion is any mention of significance testing and statistical estimation. Though useful methods, they cannot be described in sufficient detail in a guide of this length for you to use them effectively.

Assessment and Decision-Making

As stated, this guide is about aiding decision-making. There are two key decisions to make regarding any problem-solving effort. First, did the problem decline enough for you to end the effort and apply resources elsewhere? If the problem did not decline substantially, then the job is not done. In such a case, the most appropriate decision may be to reanalyze the problem and develop a new response. Second, if the problem did decline substantially, then it might be worthwhile to apply the response to similar problems.

This guide focuses on the first decision–whether to end the problem-solving effort. The second decision has to do with future response applications. If the problem declined substantially, and if the response at least partly caused the decline, then you might consider using the response with other problems. But if the problem did not decline, or if it got worse, and this was due to an ineffective response, then future problem-solvers should be alerted so they can develop better responses to similar problems. Future decisions about whether to use the response depend in part on assessment information. In this regard, assessment is an essential part of police organizational learning. Without assessments, problem-solvers are constantly reinventing the wheel, and run the risk of repeating the same mistakes. Nevertheless, obtaining valid information to aid in decision-making increases the complexity of assessments.

Making either decision requires a detailed understanding of the problem, of how the response is supposed to reduce the problem, and of the context in which the response has been implemented.1 For this reason, the evaluation process begins as soon as the problem is identified in the scanning stage.

This guide discusses two simple designs–pre-post and interrupted time series. The pre-post design is useful in making only the first type of decision–whether to end the problem-solving effort. The time series design can aid in making both types of decisions.
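
To make the contrast between the two designs concrete, here is a minimal illustrative sketch in Python (ours, not the guide's; all counts are hypothetical). A pre-post design collapses everything into one "before" figure and one "after" figure; an interrupted time series keeps the month-by-month detail so you can see whether the change actually coincides with the response:

    # Minimal sketch: hypothetical monthly incident counts for a problem.
    # A response is implemented at the start of month 13.
    pre = [41, 38, 44, 40, 39, 42, 45, 41, 43, 40, 38, 42]   # months 1-12
    post = [30, 28, 33, 29, 31, 27, 30, 32, 28, 29, 31, 30]  # months 13-24

    # Pre-post design: one "before" number against one "after" number.
    pre_total, post_total = sum(pre), sum(post)
    pct = 100 * (pre_total - post_total) / pre_total
    print(f"Pre-post: {pre_total} before vs. {post_total} after ({pct:.0f}% decline)")

    # Interrupted time series design: examine the whole series, month by month,
    # so a decline that began before the response (a pre-existing trend) is not
    # mistaken for a response effect.
    for month, count in enumerate(pre + post, start=1):
        note = "  <- response begins" if month == 13 else ""
        print(f"month {month:2d}: {count}{note}")

The extra detail in the time series is what allows it to support the second decision (whether the response caused the decline), not just the first.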

Finally, it is worth mentioning how the guide is organized. The body of the text addresses fundamental issues in constructing simple but useful evaluations. The endnotes provide references to more technical books on evaluation; many of these clarify terminology. The appendixes expand on material in the text. Appendix A uses an extended example to show why evaluating responses over longer periods provides a better understanding of response effectiveness. Appendix B describes two advanced designs involving comparison (or "control") groups. Appendix C explains how to calculate a response's net effect on a problem. Appendix D provides a summary of the designs' strengths and weaknesses. Finally, Appendix E provides a checklist for going through the evaluation process, selecting the most applicable design, and drawing reasonable conclusions from evaluation results. You should read the body of the text before examining the appendixes.

In summary, this guide explains, in ordinary language, those aspects of evaluation methods that are most important to police when addressing problems. The next section examines how evaluation fits within the SARA problem-solving process. We will then examine the two major types of evaluation–process and impact.

Evaluation’s Role in Problem-Solving

It is important to distinguish between evaluation and assessment. Evaluation is a scientific process for determining whether a problem declined and whether the solution caused the decline. As we will see, it begins at the moment the problem-solving process begins and continues through the completion of the effort. Assessment occurs at the final stage in the SARA problem-solving process.2 It is the culmination of the evaluation process, the time when you draw conclusions about the problem and its solutions.

Though assessment is the final stage of both evaluation and problem solving, critical decisions about the evaluation are made throughout the process, as indicated in Figure 1. The left side shows the standard SARA process and some of the most basic questions asked at each stage. It also draws attention to the fact that the assessment may produce information requiring the problem-solver to go back to earlier stages to make modifications. This is particularly the case if the response was not as successful as expected.

The right side of Figure 1 lists critical questions to address to conduct an evaluation. During the scanning stage, you must define the problem with sufficient precision to measure it. You will collect baseline data on the nature and scope of the problem during the analysis phase. Virtually every important question to be addressed during analysis will be important during assessment. This is because, during assessment, you want to know if the problem has changed. So data uncovered during analysis become vital baseline information (or "pre-response measures") during assessment.

Fig. 1. The problem-solving process and evaluation

During the response stage, while developing a strategy to reduce the problem, you should also develop an accountability mechanism to be sure the various participants in the response do what they should be doing. As we will see later, one type of evaluation–process–is closely tied to accountability. Thus, while developing a response, it is important to determine how to assess accountability. Also, the type of response has a major influence on how you design the other type of evaluation–impact.

During assessment, you answer the following questions: Did the response occur as planned? Did the problem decline? If so, are there good reasons to believe the decline resulted from the response?

In summary, you begin planning for an evaluation when you take on a problem. The evaluation builds throughout the SARA process, culminates during the assessment, and provides findings that help you determine if you should revisit earlier stages to improve the response. You can use the checklist in Appendix E as a general guide to evaluation throughout the SARA process.

Types of Evaluations

There are two types of evaluations. You should conduct both. As we will see later, they complement each other.

Process Evaluations

Process evaluations ask the following questions: Did the response occur as planned? Did all the response components work? Or, stated more bluntly, Did you do what you said you would do? This is a question of accountability.

Let's start with a hypothetical example. A problem-solving team, after a careful analysis, determines that, to curb a street prostitution problem, they will ask the city's traffic engineering department to make a major thoroughfare one-way, and to create several dead-end streets to thwart cruising by "johns." This will be done immediately after a comprehensive crackdown on the prostitutes in the target area. Convicted prostitutes will be given probation under the condition that they do not enter the target area for a year. Finally, a nonprofit organization will help prostitutes who want to leave their line of work gain the necessary skills for legitimate employment. The vice squad, district patrol officers, prosecutor, local judges, probation office, sheriff's department, traffic engineering department, and nonprofit organization all agree to this plan.

A process evaluation will determine whether the crackdown occurred and, if so, how many arrests police made; whether the traffic engineering department altered street patterns as planned; and how many prostitutes asked for job skills assistance and found legitimate employment. The process evaluation will also examine whether everything occurred in the planned sequence. If you find that the crackdown occurred after the street alterations, that the police arrested only a fraction of the prostitutes, and that none of the prostitutes sought job skills assistance, then you will suspect that the plan was not fully carried out, nor was it carried out in the specified sequence. You might conclude that the response was a colossal failure. However, this evidence gives no indication of success or failure, because a process evaluation does not answer the question, What happened to the problem?

Impact Evaluations

To determine what happened to the problem, you need an impact evaluation. An impact evaluation asks the following questions: Did the problem decline? If so, did the response cause the decline? Continuing with our prostitution example, let's look at how it might work. During the analysis stage of the problem-solving process, patrol officers and vice detectives conduct a census of prostitutes operating in the target area. They also ask the traffic engineering department to install traffic counters on the major thoroughfare and critical side streets to measure traffic flow. This is done to determine how customers move through the area. The vice squad makes covert video recordings of the target area to document how prostitutes interact with potential customers. All of this is done before the problem-solving team selects a response, and the information gained helps the team to do so.

After the response is implemented (though not the planned response, as we have seen), the team decides to repeat these measures to see if the problem has declined. They discover that instead of the 23 prostitutes counted in the first census, only 10 can be found. They also find that there has been a slight decline in traffic on the major thoroughfare on Friday and Saturday nights, but not at other times. However, there has been a substantial decline in side street traffic on Friday and Saturday nights. New covert video recordings show that prostitutes in the area have changed how they approach vehicles, and are acting more cautiously. In short, the team has evidence that the problem has declined after response implementation.
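
The team's summary of these before-and-after measures is simple arithmetic. A minimal sketch (ours, using only the hypothetical census figures from this example):

    # Before-and-after census from the hypothetical prostitution example.
    census_before, census_after = 23, 10

    decline = census_before - census_after
    pct_decline = 100 * decline / census_before
    print(f"Census: {census_before} -> {census_after}, "
          f"a decline of {decline} ({pct_decline:.0f}%)")  # about a 57% decline

The same before-minus-after comparison applies to the traffic counts and any other measure collected during analysis.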

So what has caused the problem to decline? You may be tempted to jump right into trying to answer this question, because it will help you determine if you can attribute the decline to the response. However, this question may not be as important as it first appears. After all, if the goal is to reduce or eliminate the problem, and this occurs, what difference does it make what the cause is? The answer is that it does not matter in the least, unless you are interested in using the same response for similar problems. If you have no interest in using the response again, then all that matters is that you have achieved the goal. You can then use the resources devoted to addressing the problem on some more pressing concern. But if you believe you can use the response again, it is very important to determine if the response caused the decline in the problem.

Let's assume the prostitution problem-solving team believes the response might be useful for addressing similar problems. The response, though not implemented according to plan, might have caused the decline, but it is also possible that something else caused the decline. There are two reasons the team takes this second possibility seriously. First, the actual response was somewhat haphazard, unlike the planned response. If the planned response had been implemented, the team would have a plausible explanation for the decline. But the jury-rigged nature of the actual response makes it a far less plausible explanation for the decline. Second, the impact evaluation is not particularly strong. Later, we will discuss why this is a weak evaluation, and what can be done to strengthen it.

Interpretation of Process and Impact Evaluations

Process and impact evaluations answer different questions, so their combined results are often highly informative. Table 1 summarizes the information you can glean from both evaluations. As you will see in Appendix E, the interpretation of this table depends on the type of design used for the impact evaluation. For the moment, however, we will assume that the evaluation design can show whether the response caused the problem to decline.

Table 1: Interpreting Results of Process and Impact Evaluations

Response implemented as planned, or nearly so:
- Problem declined (Cell A): Evidence that the response caused the decline.
- Problem did not decline (Cell B): Evidence that the response was ineffective, and that a different response should be tried.

Response not implemented, or implemented in a radically different manner than planned:
- Problem declined (Cell C): Suggests that other factors may have caused the decline, or that the response was accidentally effective.
- Problem did not decline (Cell D): Little is learned. Perhaps if the response had been implemented as planned, the problem would have declined, but this is speculative.
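
Because the four cells follow mechanically from two yes/no results, the table can be read as a simple lookup. The sketch below is illustrative only; the function and its names are ours, not part of the guide:

    # Illustrative only: Table 1 as a lookup keyed on two yes/no results.
    # Keys: (response implemented as planned?, problem declined?)
    INTERPRETATIONS = {
        (True, True): ("A", "Evidence that the response caused the decline."),
        (True, False): ("B", "Evidence the response was ineffective; try a different response."),
        (False, True): ("C", "Other factors may have caused the decline, or the "
                             "response was accidentally effective."),
        (False, False): ("D", "Little is learned; whether the planned response "
                              "would have worked remains speculative."),
    }

    def interpret(implemented_as_planned: bool, problem_declined: bool) -> str:
        """Return the Table 1 cell and its interpretation."""
        cell, meaning = INTERPRETATIONS[(implemented_as_planned, problem_declined)]
        return f"Cell {cell}: {meaning}"

    # The prostitution example: response not implemented as planned,
    # yet the problem declined -- cell C.
    print(interpret(implemented_as_planned=False, problem_declined=True))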

When a response is implemented as planned (or nearly so), the conclusions are much easier to interpret (cells A and B). When the response is not implemented as planned, we have more difficulty determining what happened, and what to do next (cells C and D). Cell D is particularly troublesome because all you really know is that "we did not do it, and it did not work." Should you try to implement your original plan, or should you start over from scratch?

Outcomes that fall into cell C merit further discussion. The decline in the problem means that you could end the problem-solving process and go on to something else. If the problem has declined considerably, this might be satisfactory. If, however, the problem is still too big, then you do not know whether to continue or increase the response (on the assumption that it is working, but more is needed). Alternatively, you could seek a different response (on the assumption that the response is not working, and something else is needed). In addition, you do not know if the response will be useful for similar problems. In short, it is difficult to replicate successes when you do not know why you were successful. The basic lesson is that all assessments should contain both a process and an impact evaluation.

A process evaluation involves comparing the planned response with what actually occurred. Much of this information becomes apparent while managing a problem-solving process. If the vice squad is supposed to arrest prostitutes in the target area, you can determine whether they have done so from departmental records and discussions with squad members. Still, there will be judgment calls. For example, how many arrests are required? The response plan may call for the arrest of 75 percent of the prostitutes, but only 60 percent are arrested. Whether this is a serious violation of the plan may be difficult to determine. Much of a process evaluation is descriptive (these people did these things, in this order, using these procedures). Nevertheless, numbers can help. In our example, data on traffic volume show where street alterations have changed driving patterns, and these pattern changes are consistent with what was anticipated in the response plan.
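
That judgment call can at least be framed numerically. In this sketch, the 75 and 60 percent figures come from the example above, but the tolerance threshold is an assumption we introduce purely for illustration; the guide sets no such rule:

    # Hypothetical judgment call: planned vs. actual arrest coverage.
    planned_pct, actual_pct = 75, 60
    shortfall = planned_pct - actual_pct

    # Whether a 15-point shortfall is a "serious violation" of the plan is a
    # judgment call; the threshold below is an assumption, not a standard.
    tolerance = 10
    verdict = "within" if shortfall <= tolerance else "beyond"
    print(f"Shortfall: {shortfall} percentage points ({verdict} the assumed tolerance)")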

In short, a process evaluation tells you what happened, when, and to whom. Though it does not tell you whether the response affected the problem, it is very useful for determining how to interpret impact evaluation results.