Appendix E: Problem-Solving Evaluation Checklist

The following checklist provides a summary of the issues you should consider in evaluating a problem-solving effort. It should be interpreted as a general guide, and not as a set of rigid rules. This checklist is most helpful if used throughout the problem-solving process, beginning in the scanning stage.

I. Early Considerations

You should consider the following questions during the scanning, analysis and response stages.

A. What will the evaluation help you decide?

  1. Should you continue the problem-solving effort? If this is the only decision the evaluation will help you make, then a simple evaluation design will be sufficient (see question III.A).
  2. Should either your agency or other agencies use the response for similar problems? If so, then you should consider using a control group in the evaluation design (see question III.A).
  3. Is there no decision to make? If no decision is required, then an evaluation will not be helpful.

B. Do you know the problem? (You need to answer these questions with some precision to develop and evaluate a cost-effective response. If you cannot, then you need to analyze the problem further.)

  1. Whom does the problem harm? Whom does it not harm?
  2. How can you measure the harm?
  3. Where does the problem occur? Where does it not occur?
  4. When does the problem occur? When does it not occur?
  5. What causes the problem? What prevents or reduces it?

C. Do you know how the response works? (You need to answer these questions to determine if the response is likely to be effective, and to ensure accountability during implementation. If you cannot answer them, then your response plans are inadequate, and you need to focus more on the response stage.)

  1. How does the response affect the causes of the problem?
  2. Who is responsible for implementing the response?
  3. When is the response supposed to be implemented?
  4. Where is the response supposed to be implemented?
  5. How long does the response take to have a noticeable effect on the problem?
  6. Who has the legal authority to implement the response?
  7. What are the likely barriers to implementing the response?

II. Process Evaluation

The process evaluation begins toward the end of the response stage, and continues well into the assessment stage.

A. Did you implement the response? (The closer the actual implementation is to the planned response, the greater confidence you have that the response caused the problem change documented in the impact evaluation. The more variation between what you intended and what occurred, the greater the likelihood that factors other than the response caused changes in the problem.)

  1. Did you implement the response when you were supposed to?
  2. Did you implement the response where you were supposed to?
  3. Did you implement the response for the appropriate group?
  4. Did you otherwise implement the response as planned?

B. Did you implement enough of the response? (You may have implemented the response, but without the resources, duration, or intensity needed to make it effective. A sketch of a simple planned-versus-actual check follows this list.)

  1. Did you have sufficient resources to fully implement the response?
  2. Did you implement the response long enough to have an effect?
  3. Did you implement the response with sufficient intensity?
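
As a rough illustration of the questions in II.A and II.B, the sketch below compares planned implementation figures against what was actually delivered and flags shortfalls in "dosage." It is a minimal Python sketch: the measures, the numbers, and the 80% threshold are all hypothetical, invented for illustration.

```python
# Minimal sketch of a process evaluation check: compare planned and
# actual implementation "dosage". All figures are hypothetical.

planned = {"weeks_running": 12, "patrol_hours_per_week": 40,
           "premises_visited": 150}
actual = {"weeks_running": 9, "patrol_hours_per_week": 25,
          "premises_visited": 140}

for item, plan in planned.items():
    done = actual[item]
    pct = 100 * done / plan
    flag = "" if pct >= 80 else "  <-- under-implemented"
    print(f"{item}: {done}/{plan} ({pct:.0f}% of plan){flag}")
```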

III. Impact Evaluation

Many of the decisions you need to make to conduct an impact evaluation should be considered in the analysis and response stages. This is particularly true of measurement decisions.

A. Do you need a control group? (Answering these questions helps you decide on the complexity of the evaluation design.)

  1. Did you check question I.A.1? If so, then you do not need a control group.
  2. Did you check question I.A.2? If so, then you should use a control group.

B. How often can you measure the problem? (Answering these questions helps you decide whether a time series design is possible. A sketch of the two measurement patterns follows this list.)

  1. Can you measure the problem consistently for many time periods before and after the response? If so, then a time series design is feasible.
  2. Can you measure the problem only a few times before and after the response? If so, then a time series design is not feasible, and you need to use a pre-post design.
  3. Can you take some measures of the problem for many time periods before and after the response, and other measures for only a few time periods before and after the response? If so, then you can use both a time series and a pre-post design.
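
To make the distinction concrete, here is a minimal Python sketch contrasting the two situations. The monthly incident counts and survey totals are hypothetical, invented solely to show the shape of the data each design requires.

```python
# Minimal sketch: the measurement patterns behind the two designs.
# All counts are hypothetical.

from statistics import mean

# Case III.B.1: the problem was measured monthly for 12 months before
# and 12 months after the response -- enough points for a time series.
monthly_before = [42, 45, 39, 48, 44, 46, 41, 43, 47, 45, 40, 44]
monthly_after = [38, 35, 33, 36, 31, 30, 32, 29, 31, 28, 30, 27]

print(f"Time series feasible: {len(monthly_before)} pre and "
      f"{len(monthly_after)} post observations")
print(f"Mean per month: {mean(monthly_before):.1f} before vs "
      f"{mean(monthly_after):.1f} after")

# Case III.B.2: the problem was measured only once before and once
# after (e.g., an annual survey) -- only a pre-post design is possible.
survey_before, survey_after = 524, 380
print(f"Pre-post only: {survey_before} before vs {survey_after} after")
```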

C. What type of evaluation design should you use? (Your answers to the questions in sections A and B, immediately above, provide some basic guidance for answering this question, as shown in Table E.1. Obviously, precise answers depend on the particular circumstances of each problem-solving effort.)

                      A. Question Checked
B. Question Checked   1                                2
1                     Interrupted time series design   Multiple time series design
2                     Pre-post design                  Pre-post design with a control group
3                     Combination of designs above     Combination of designs above

Table E.1 Which Evaluation Design Makes the Most Sense?
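
To illustrate one cell of the table, a pre-post design with a control group supports a simple difference-in-differences comparison: the control group's change estimates what would have happened without the response. The following is a minimal Python sketch with hypothetical incident counts, not a substitute for a careful analysis.

```python
# Minimal sketch of the "pre-post design with a control group" cell of
# Table E.1: a difference-in-differences comparison with hypothetical
# incident counts.

response_before, response_after = 120, 80  # response area
control_before, control_after = 115, 110   # similar control area

response_change = response_after - response_before  # -40
control_change = control_after - control_before     # -5

# The control group's change estimates what would have happened
# without the response; the remainder is the estimated effect.
estimated_effect = response_change - control_change  # -35
print(f"Estimated effect of the response: {estimated_effect:+d} incidents")
```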

D. What type of control group do you need? (This question applies only if you chose one of the options from column 2 under "A. Question Checked" above. If you chose an option from column 1, then skip this section and go to part IV.)

  1. Will you apply the response to an identifiable geographic area (place, neighborhood, etc.)? If so, then the control group should be a very similar geographic area, with a similar problem, preferably located some distance from the response area.
  2. Will you apply the response to a group of identifiable potential victims (young males, elderly women, commuters, etc.)? If so, then the control group should be a very similar group of potential victims.
  3. Will you apply the response to a group of identifiable potential offenders? If so, then the control group should be a very similar group of potential offenders.
  4. Will you apply the response to some other identifiable group of people or things? If so, then the control group should be a very similar group of people or things.
  5. Are you unable to identify a control group for this evaluation? If so, then go back to Table E.1 and pick the appropriate option from column 1 under "A. Question Checked." Then go to part IV.

If you checked one of the first four questions above, then systematically compare the response group's characteristics with the control group's characteristics, and list the major differences. In part V, you will consider whether other factors might have caused the change in the problem. Your list of differences is a list of potential "other factors."
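
One simple way to keep this comparison systematic is to tabulate each group's characteristics side by side and extract the mismatches. Below is a minimal Python sketch; the attributes and values are hypothetical stand-ins for whatever characteristics matter to your problem.

```python
# Minimal sketch: list the major differences between the response and
# control groups, for use in part V as potential "other factors."
# The attributes and values are hypothetical.

response_group = {"median_age": 34, "renter_share": 0.60,
                  "bars_nearby": 5, "lighting_upgraded": True}
control_group = {"median_age": 41, "renter_share": 0.60,
                 "bars_nearby": 2, "lighting_upgraded": False}

differences = {k: (response_group[k], control_group[k])
               for k in response_group
               if response_group[k] != control_group[k]}

for attr, (resp, ctrl) in differences.items():
    print(f"{attr}: response={resp}, control={ctrl}")
```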

IV. Evaluation Conclusions

The following questions fall within the assessment stage and are applicable once you have documented your evaluation results. These questions are designed to help you draw conclusions consistent with your process and impact evaluation results and your evaluation design. You will have to ask more questions than those listed here to fully interpret your particular evaluation results.

A. What are your findings from the process evaluation?

  1. You did not implement the response.
  2. You implemented the response in a radically different manner than planned.
  3. You implemented the response with insufficient resources, for too short a time, or without the required intensity.
  4. You implemented the response almost as planned, and with sufficient resources, for the necessary time, and with the required intensity.

B. What are your findings from the impact evaluation? (Select the design you used: pre-post, pre-post with a control group, time series, or multiple time series. If you used a combination of designs, then interpret your evaluation for each design separately, using Tables E.2 and E.3.)

Pre-post design: Use Table E.2 to interpret your evaluation.

  1. The problem got worse after the response.
  2. The problem did not change after the response.
  3. The problem declined after the response.

Pre-post design with a control group: Use Table E.3 to interpret your evaluation.

  1. The response group's problem got worse, relative to the control group's.
  2. The response group's problem did not change, relative to the control group's.
  3. The response group's problem declined, relative to the control group's.

Time series design: Use Table E.3 to interpret your evaluation.

  1. The problem got worse after the response.
  2. The problem did not change after the response.
  3. The problem declined after the response.

Multiple time series design: Use Table E.3 to interpret your evaluation.

  1. The response group's problem got worse, relative to the control group's.
  2. The response group's problem did not change, relative to the control group's.
  3. The response group's problem declined, relative to the control group's.

V. Overall Impact Evaluation Conclusions

The answers to the following questions are judgment calls and reflect your degree of confidence in the findings, rather than a totally objective assessment of what occurred. Other people, examining the same evidence, could come to different conclusions. For this reason, you should answer these questions (and the question that follows) after several people with different perspectives have examined the assessment information.

  1. Did the problem decline after the response?
  2. If the problem did decline, did it do so at a faster rate after the response than before the response? (A sketch of this trend comparison follows this list.)
  3. If the problem did decline, can you rule out all other plausible explanations for the decline, other than the response? Use your list of differences between the response and control groups to help answer this question.
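
Question 2 amounts to comparing the problem's trend before the response with its trend afterward. Here is a minimal Python sketch that fits a least-squares slope to each period; the monthly counts are hypothetical. A steeper decline after the response is consistent with, but does not by itself prove, a response effect.

```python
# Minimal sketch for question V.2: compare the problem's trend (slope)
# before and after the response. Counts are hypothetical monthly data.

def slope(ys):
    """Least-squares slope of ys against time indexes 0..n-1."""
    n = len(ys)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

before = [50, 49, 48, 48, 47, 46]  # already declining slowly
after = [44, 40, 37, 33, 30, 26]   # declining faster post-response

print(f"Trend before: {slope(before):.2f} per month")
print(f"Trend after:  {slope(after):.2f} per month")
```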

Based on your answers to the preceding questions, are you reasonably confident that the response caused the decline (if any) in the problem?

  1. Yes: If you have thoroughly considered the questions and have answered "Yes" to all of them, then Table E.3 may be helpful. If you used only a pre-post design, then you cannot answer "Yes" to questions 2 and 3. If you used only a pre-post design with a control group, then you cannot answer "Yes" to question 2.
  2. No: If you answered "No" to any of the three questions, then you must interpret Table E.3 with extreme caution. Any recommendations you make regarding the response should entail a frank discussion of alternative explanations.

In Table E.2, your process evaluation results (answers to question IV.A) determine the column, and your impact evaluation results (answers to question IV.B, pre-post design) determine the row.

If you checked IV.A.4 (you implemented the response almost as planned):

  A. The problem declined (IV.B.3 checked): The response may or may not have caused the decline in the problem. Nevertheless, the decline occurred.
  B. The problem got worse or did not change (IV.B.1 or IV.B.2 checked): The response does not seem to have worked, though it is possible the problem would have increased (or increased even more) without it.

If you checked IV.A.1, 2, or 3 (you did not implement the response; implemented it in a radically different manner than planned; or implemented it with insufficient resources, for too short a time, or without the required intensity):

  C. The problem declined (IV.B.3 checked): This suggests that other factors may have caused the decline in the problem, or the response was accidentally effective. Nevertheless, the decline occurred.
  D. The problem got worse or did not change (IV.B.1 or IV.B.2 checked): You have learned little from this evaluation. It is unclear whether you should implement the planned response, or reanalyze the problem and try a different response.

Regardless of the interpretation (A, B, C, or D), you have insufficient evidence to link the response to the problem level. The impact evaluation results neither support nor rule out using the response for similar problems.

Table E.2 Interpreting Results of Process and Impact Evaluations (Pre-Post Designs)

In Table E.3, your process evaluation results (answers to question IV.A) again determine the column, and your impact evaluation results (answers to question IV.B, pre-post design with a control group, time series design, or multiple time series design) determine the row.

If you checked IV.A.4 (you implemented the response almost as planned):

  A. The problem declined (IV.B.3 checked): This is evidence that the response caused the decline in the problem. The response is a potentially useful option for similar problems.
  B. The problem got worse or did not change (IV.B.1 or IV.B.2 checked): This is evidence that the response was ineffective. The response probably should not be used for similar problems. You should reanalyze the problem and try a different response.

If you checked IV.A.1, 2, or 3 (you did not implement the response; implemented it in a radically different manner than planned; or implemented it with insufficient resources, for too short a time, or without the required intensity):

  C. The problem declined (IV.B.3 checked): This suggests that other factors may have caused the decline in the problem, or the response was accidentally effective. You should not recommend this response to address similar problems, since you do not know if it would have an impact.
  D. The problem got worse or did not change (IV.B.1 or IV.B.2 checked): You have learned little from this evaluation. Perhaps if you had implemented the response as planned, you would have had better results, but this is speculative. No recommendations, either for or against the response, are valid.

Table E.3 Interpreting Results of Process and Impact Evaluations (Other Designs)
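
The row-and-column logic shared by Tables E.2 and E.3 reduces to two yes/no findings, as the minimal Python sketch below illustrates. The function name and inputs are hypothetical conveniences.

```python
# Minimal sketch: map process findings (question IV.A) and impact
# findings (question IV.B) to cells A-D of Tables E.2 and E.3.
# The function name is hypothetical.

def table_cell(implemented_as_planned: bool, problem_declined: bool) -> str:
    """Return the interpretation cell (A, B, C, or D)."""
    if implemented_as_planned:               # IV.A.4 checked
        return "A" if problem_declined else "B"
    return "C" if problem_declined else "D"  # IV.A.1, 2, or 3 checked

print(table_cell(True, True))   # cell A of the relevant table
print(table_cell(False, True))  # cell C: other factors may explain the decline
```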