In Section A, decide which type of evaluation best fits your policy or program, and state why. Consider impact evaluation, performance evaluation, and efficiency evaluation. Explain why you chose this approach and how it will satisfy each of your stakeholders.
In Section B, determine whether the three prerequisites for evaluation have been met. Detail how each has been met; do not simply answer yes or no.
In Section C, you will develop outcome measures based on the objectives you identified in Stage 2. Restate each objective and explain exactly how you will measure whether it is being achieved and whether your policy or program is producing the desired effect. Be specific.
In Sections D and E, you will identify potential confounding factors and determine techniques for minimizing their effects. The potential confounds to consider are biased selection, biased attrition, and historical confounds. Which are present, and which minimization technique would be most useful to your program or policy? Be specific.
In Section F, you will specify the appropriate research design to be used. Be sure to explain why your choice is preferable to the alternatives.
Finally, in Section G, you will identify the users and uses of your evaluation results. Review the stakeholders you identified in Stage 1. Who will be responsible for communicating the results of your evaluation? How will the results be used?
A. Decide which type of evaluation is appropriate, and why: Do major stakeholders (including those funding the evaluation) want to know whether the program or policy is achieving its objectives (impact), how outcomes change over time (performance), or whether it is worth the investment of resources devoted to its implementation (efficiency)?
B. Determine whether three prerequisites for evaluation have been met: (1) Are objectives clearly defined and measurable? (2) Has the intervention been sufficiently well designed? (3) Was the intervention implemented properly?
C. Develop outcome measures based on objectives: Good outcome measures should be valid (they capture the outcome the objective actually refers to) and reliable (they produce consistent results when measurement is repeated). A brief reliability sketch appears after this outline.
D. Identify potential confounding factors (factors other than the intervention that may have biased observed outcomes): Common confounding factors include biased selection, biased attrition, and history.
E. Determine which technique for minimizing confounding effects can be used, random assignment or nonequivalent comparison groups: Each involves creating some kind of comparison or control group. See the random-assignment sketch after this outline.
F. Specify the appropriate research design to be used: Examples include the simple pretest-posttest design; the pretest-posttest design with a control group; the pretest-posttest design with multiple pretests; the longitudinal design with treatment and control groups; and the cohort design. A worked sketch of the pretest-posttest design with a control group appears after this outline.
G. Identify users and uses of evaluation results: Who is the intended audience, and how can results be effectively and efficiently communicated? How will the results be used?
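One practical way to assess the reliability noted in item C is a test-retest check: administer the same outcome measure to the same participants twice and correlate the scores. The Python sketch below is only an illustration; the scores and the 0.8 rule of thumb are hypothetical assumptions, not data or thresholds from any particular program.

```python
# Minimal sketch: test-retest reliability check for an outcome measure.
# The scores below are hypothetical placeholders, not program data.
import numpy as np

first_administration = [72, 65, 88, 90, 54, 77, 81, 69]    # scores at time 1
second_administration = [70, 68, 85, 93, 57, 75, 80, 66]   # same people, time 2

# Pearson correlation between the two administrations; values near 1.0
# indicate the measure yields consistent (reliable) results over time.
r = np.corrcoef(first_administration, second_administration)[0, 1]
print(f"Test-retest reliability (Pearson r): {r:.2f}")

# A common, context-dependent rule of thumb treats r >= 0.8 as adequate.
print("Adequate reliability" if r >= 0.8 else "Reliability may be a concern")
```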
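For item E, random assignment means letting chance decide who receives the intervention, so that the treatment and control groups are statistically equivalent before the program begins; when random assignment is not feasible, a nonequivalent comparison group is constructed instead. The sketch below shows the random-assignment idea only, using hypothetical participant IDs.

```python
# Minimal sketch: randomly assigning participants to treatment and control
# groups. Participant IDs are hypothetical placeholders.
import random

participants = [f"P{n:03d}" for n in range(1, 21)]   # 20 hypothetical participants

random.seed(42)                # fixed seed so the assignment can be documented
random.shuffle(participants)   # chance, not staff judgment, orders the list

midpoint = len(participants) // 2
treatment_group = participants[:midpoint]   # receive the intervention
control_group = participants[midpoint:]     # do not receive the intervention

print("Treatment group:", treatment_group)
print("Control group:  ", control_group)
```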
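For item F, the logic of the pretest-posttest design with a control group is that the control group's change over time approximates what would have happened without the program, so subtracting it from the treatment group's change isolates the program's effect. The arithmetic below uses hypothetical group averages purely for illustration.

```python
# Minimal sketch: estimating a program effect under a pretest-posttest design
# with a control group. All scores are hypothetical group averages.
treatment_pre, treatment_post = 60.0, 74.0   # treatment group, before and after
control_pre, control_post = 61.0, 66.0       # control group, before and after

treatment_change = treatment_post - treatment_pre   # change with the program
control_change = control_post - control_pre         # change without the program

# The control group's change captures history, maturation, and similar
# confounds; subtracting it isolates the effect attributable to the program.
estimated_effect = treatment_change - control_change
print(f"Estimated program effect: {estimated_effect:.1f} points")
```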