This guidebook has introduced some basic principles for assessing the effectiveness of responses to problems. All evaluations require valid measures of the problem that are systematically taken before and after the implementation of a response. There are two possible goals for any problem-solving evaluation. The first is to demonstrate that the problem declined sufficiently. This is the most basic requirement of an evaluation. For this goal, we are not concerned about whether the reduction was directly caused by the response or by something else entirely.
In many circumstances, it is also useful to determine whether the decline in the problem was due to the response. This is a second goal. If one anticipates using the response again on similar problems (or on the same problem if it returns), it is important to make this determination. This requires an evaluation design that can eliminate the most likely alternative explanations for the decline in the problem. Elimination of those explanations requires either the use of an interrupted time series design or the use of a control group (Appendix B). The control group tells you what the level of the problem is likely to have been in the absence of this problem-solving effort.
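The control-group logic can be sketched as a simple before-and-after calculation. The area names and counts below are hypothetical, invented purely to illustrate how a control group supplies the counterfactual level of the problem:

```python
# Hypothetical problem counts; these figures are invented for
# illustration, not drawn from any real evaluation.
treatment_before = 120   # e.g., incidents in the target area, period before response
treatment_after = 80     # incidents in the target area, period after response
control_before = 110     # incidents in a similar comparison area, same periods
control_after = 104

# The control area's trend estimates what the target area would have
# looked like in the absence of the response (the counterfactual).
control_change = (control_after - control_before) / control_before
expected_without_response = treatment_before * (1 + control_change)

# The decline attributable to the response is the counterfactual
# level minus the level actually observed after the response.
net_effect = expected_without_response - treatment_after

print(f"Expected without response: {expected_without_response:.1f}")
print(f"Estimated net reduction:   {net_effect:.1f}")
```

In this sketch the control area declined slightly on its own, so only part of the target area's drop is credited to the response; a raw before-and-after comparison would have overstated the effect.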
The results of an impact evaluation should be compared to the results of a process evaluation in order to form a detailed picture of whether the response was implemented as planned and what impact it had on the problem. This information helps show whether the response was the cause of the decline in the problem, and what parts of the response are the “active ingredient.”
A recurring theme in this guidebook is that an evaluation design builds on knowledge gained during the analysis of the problem. Competent evaluations require the evaluators to have detailed knowledge about the problem in order to develop useful measures and to anticipate possible reasons for the decline in the problem following a response.
The evaluation of responses can be extremely complex. This guidebook is only an introduction. For small-scale problem-solving efforts, where the costs of mistaken conclusions are not serious and weak causal inferences are tolerable, the information contained here should be sufficient. If, however, there is a great deal riding on the outcome, if it is important to show whether the response caused a drop in the problem, or if there would be serious consequences from drawing the wrong conclusion from the evaluation, you should seek professional assistance in developing a rigorous evaluation. A decision to enlist the support of an outside evaluator should be made as soon as possible once a problem has been identified, so that adequate before-response measures can be taken and a rigorous design can be developed.