Project Proposal Activity

In this module we specifically address the data sources and methods of data collection and how those types of data can be integrated to enhance validity and credibility. As the final section of your paper, write a description of the sources for your data, how these data will be collected, how they will be analyzed and interpreted, and finally, how they can be integrated and triangulated to enhance validity and credibility.
Your project paper should be in Microsoft Word 2000 or higher. Remember to follow the current edition of APA format. Your paper should be double-spaced and in 12-point font. It should not exceed five pages and should include an additional page that lists your citations.
Every week, your assignment should be submitted using proper APA format. A cover page will not be required. However, the first page of the assignment should include a title and proper headings for sections of your response. In-text citations should follow APA guidelines, and you should include a Reference list as a separate page with entries that match in-text citations. You may include an Appendix if you have additional information (e.g., assessment instruments, tables/figures, other supporting documentation).
All written assignments and responses should follow APA rules for attributing sources.
Assignment 2 Grading Criteria | Maximum Points
Clearly described the sources of data. | 4
Clearly described and justified how data will be collected. | 4
Clearly described methods of analyzing and interpreting the data. | 5
Clearly described how the various kinds of data will be used to triangulate, enhancing the validity and credibility of the project. | 5
Utilized correct APA format for headings, margins, spacing, citations, etc. | 2
Wrote in a clear, concise, and organized manner; demonstrated ethical scholarship in accurate representation and attribution of sources; displayed accurate spelling, grammar, and punctuation. | 4
Total | 24

Readings

Overview

“Where is the wisdom we have lost in knowledge?
Where is the knowledge we have lost in information?”
—from The Rock by T. S. Eliot
When people want to see what is available to solve a problem—whether it is the best place to live in retirement, a psychological test, or an investment opportunity—a diligent search often yields a simple list of options. Sometimes they are disappointed because they are subconsciously looking for the one option that would solve their problem. Instead, they find a whole list of possible solutions, each relatively good or bad, but no easy answer. Program evaluation is the home of many options—lots of alternatives—but no easy solutions. The best solution is more hard work: you have to think through the problem, its demands, and its connections to other problems, and decide what makes the most sense. If we are fortunate, we go from information to knowledge and, in rare cases, to wisdom.
In this module, you will see a long list of options, but that is not all. You will be challenged to evaluate the source and to think about what using a method might mean in your plan. Furthermore, you will think about the differences between qualitative and quantitative methods and how you might synthesize the two. If nothing else, you will see the advantages of doing that. Finally, you will be challenged to go beyond analysis and into interpretation and consider how that is done. After all, recommendations are based on interpretations.
There are a number of information sources. The most immediate and useful, but often least used, are existing records. Second, we are gathering other information all of the time through our observations; when systematic and carefully done, observations can be powerful. Another source of data is surveys, which are becoming less useful because they are so widely used and have some inherent, often undiscussed, psychometric problems. Tests and qualitative information round out the list.
In the end, data—qualitative and quantitative—need to be analyzed and interpreted. There are some guidelines for that.

Common Sources and Methods

Students often think that the best, most efficient way to gather information is through a test or survey. Actually, tests and surveys are not the most sophisticated, information-laden, and complex sources of information. Those sources are the artifacts of everyday life, which exist in files and our own archives and are, strangely, considered by an IRB to be Level 1 or “exempt” because there is no human participant interaction. Documents and records can be the source of excellent information. Gordon Allport, the great American psychologist, published a collection of letters entitled Letters from Jenny (1965) to drive home his belief that correspondence (documents) can be as revealing as tests and can actually lead to more insight.
Observations
We are constantly observing. Thousands of studies demonstrate the fruitfulness of careful observation, which can lead to great insights, as with Piaget’s work on child development and Freud’s theory of the unconscious. Systematic, careful observation combined with thoughtful analysis can lead to great discoveries. In your text, the authors explore the value of structured and unstructured observations, qualitative observations, and site visits.
Surveys and Tests
Surveys are familiar to people in business, education, psychology, and the other social sciences. Hardly a day goes by that we aren’t asked to fill out a survey about something. While it is important to understand the techniques of building a survey and the range of surveys available to the potential evaluator, it is equally important to understand the limitations revealed in the research and their status as statistical measures.
Tests are also used in every discipline, but especially in psychological assessment. The Buros Handbook lists thousands of them, of varying worth, depending on the context, reliability, validity, and other characteristics.
Qualitative Data
Most data are analogue: raw, open, and soft. They usually take work to organize, analyze, and interpret, but they can carry additional meaning that quantitative data cannot. The authors discuss qualitative interviews and focus groups as sources of qualitative information.
Problems in Data Collection and Analysis

As the saying goes, “There is no free lunch.” As a rule, whatever technique we use has limitations and weaknesses. Many have been documented, but many have not. The responsible and careful evaluator is aware of these limitations and actively looks for problems so as not to be caught unaware. As we all know, unwelcome surprises tend to arrive when we are least prepared for them.
In any case, it is important to establish a perspective on analysis and interpretation of findings—qualitative and quantitative—because it is in the interpretations that we can give recommendations for improvement.
Summary

The student who takes this module seriously will develop a perspective on data. He or she will understand in a specific way that there are many sources of data and that the most sophisticated data are often the easiest to find and use and the least risky: the humans who participated in their creation have left their traces and moved on, never to return to the scene. The insightful and analytic evaluator or researcher should be aware of those data, not just as sources of answers but of questions. Thoughtful consideration of data can raise important questions that have not been asked, leading to interesting and significant insights, even discoveries, which in turn generate more questions and insight.
Once the student has gathered information, it is incumbent on him or her to find ways to analyze it and, ultimately, interpret its meaning. In this way, we can move from information to knowledge and from knowledge to wisdom.

Module 6 Readings and Assignments
Complete the following readings early in the module:
•    Read online lecture for Module 6
•    From your textbook, Program evaluation: Alternative approaches and practical guidelines, review:
o    Collecting Evaluation Information: Data Sources and Methods, Analysis, and Interpretation

Program Evaluation: Alternative Approaches and Practical Guidelines
FOURTH EDITION
Jody L. Fitzpatrick
University of Colorado Denver
James R. Sanders
Western Michigan University
Blaine R. Worthen
Utah State University

16 Collecting Evaluative Information: Data Sources and Methods, Analysis, and Interpretation
Orienting Questions
1. When and why do evaluators use mixed methods for data collection?
2. What criteria do evaluators use to select methods?
3. What are common methods for collecting data? How might each method be used?
4. How can stakeholders be involved in selecting and designing measures? In analyzing and interpreting data?
5. How does analysis differ from interpretation? Why is interpretation so important?

In the previous chapters, we described how evaluators work with stakeholders to make important decisions about what evaluation questions will serve as the focus for the study and possible designs and sampling strategies that can be used to answer the questions. In this chapter, we discuss the next choices involved in data collection: selecting sources of information and methods for collecting it; planning procedures for gathering the data; and, finally, collecting, analyzing, and interpreting the results.

Just as with design and sampling, there are many important choices to be made. The selection of methods is influenced by the nature of the questions to be answered, the perspectives of the evaluator and stakeholders, the characteristics of the setting, budget and personnel available for the evaluation, and the state of the art in data-collection methods. Nevertheless, using mixed methods continues to be helpful in obtaining a full picture of the issues. Remember, an evaluator’s methodological tool kit is much larger than that of traditional, single-discipline researchers, because evaluators are working in a variety of natural settings, answering many different questions, and working and communicating with stakeholders who hold many different perspectives. As in Chapter 15, we will discuss critical issues and choices to be made at each stage and will reference more detailed treatments of each method.

Before discussing specific methods, we will again comment briefly on the choice between qualitative and quantitative methods, in this case referring specifically to methods of collecting data. Few, if any, evaluation studies would be complete if they relied solely on either qualitative or quantitative measures. Evaluators should select the method that is most appropriate for answering the evaluation question at hand given the context of the program and its stakeholders. They should first consider the best source for the information and then the most appropriate method or methods for collecting information from that source. The goal is to identify the method that will produce the highest quality information for that particular program and evaluation question, be most informative and credible to the key stakeholders, involve the least bias and intrusion, and be both feasible and cost-effective to use. Quite simple, right? Of course the difficulty can be in determining which of those criteria are most important in answering a specific evaluation question. For some, the quality of evidence might be the most critical issue. For others, feasibility and cost may become critical.

Qualitative methods such as content analysis of existing sources, in-depth interviews, focus groups, and direct observations, as well as more quantitative instruments such as surveys, tests, and telephone interviews, should all be considered. Each of these, and other methods that are more difficult to categorize, provides opportunities for answering evaluative questions. In practice, many methods are difficult to classify as qualitative or quantitative. Some interviews and observations are quite structured and are analyzed using quantitative statistical methods. Some surveys are very unstructured and are analyzed using themes that make the data-gathering device more qualitative in orientation. Our focus will not be on the paradigm or label attached to the method, but rather on how and when each method might be used and the nature of the information it generates.
Common Sources and Methods for Collecting Information
Existing Documents and Records

The evaluator’s first consideration for sources and methods of data collection should be existing information, or documents and records. We recommend considering existing information first for three reasons: (1) using existing information can be considerably more cost-effective than original data collection; (2) such information is nonreactive, meaning it is not changed by the act of collecting or analyzing it, whereas other methods of collecting information typically affect the respondent and may bias the response; (3) way too much information is already collected and not used sufficiently. In our excitement to evaluate a program, we often neglect to look for existing information that might answer some of the evaluation questions.

Lincoln and Guba (1985) made a useful distinction between two categories of existing data: documents and records. Documents include personal or agency records that were not prepared specifically for evaluation purposes or to be used by others in a systematic way. Documents would include minutes or notes from meetings, comments on students or patients in their files, organizational newsletters or messages, correspondence, annual reports, proposals, and so forth. Although most documents consist of text or words, documents can include videos, recordings, or photographs. Because of their more informal or irregular nature, documents may be useful in revealing the perspectives of various individuals or groups. Content analyses of minutes from meetings, newsletters, manuals of state educational standards, lesson plans, or individual notes or correspondence can help portray a true picture of events or views of those events. One of the advantages of documents is that they permit evaluators to capture events, or representations of those events, before the evaluation began, so they are often viewed as more reliable than personal recall and more credible to outside audiences (Hurworth, 2005). Text documents can be scanned onto the computer and analyzed with existing qualitative software using content analysis procedures. (See “Analysis of Qualitative Data” at the end of this chapter.)
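As an illustration of the basic counting step in content analysis, here is a minimal Python sketch. The directory name, file format, and two-category coding scheme are hypothetical examples, not anything from the chapter; dedicated qualitative software offers far richer coding than this simple keyword count.

```python
# Minimal keyword-frequency content analysis of text documents.
# The directory, file names, and coding scheme are hypothetical.
import re
from collections import Counter
from pathlib import Path

# A simple coding scheme: category -> keywords that signal it.
CODES = {
    "parent_involvement": ["parent", "family", "volunteer"],
    "resource_concerns": ["budget", "funding", "materials"],
}

def code_document(text: str) -> Counter:
    """Count how often each category's keywords appear in one document."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for category, keywords in CODES.items():
        counts[category] = sum(words.count(k) for k in keywords)
    return counts

totals = Counter()
for path in Path("minutes").glob("*.txt"):  # e.g., scanned meeting minutes
    totals += code_document(path.read_text(encoding="utf-8"))

for category, count in totals.most_common():
    print(f"{category}: {count} mentions")
```

A real analysis would refine the coding scheme iteratively against the documents; the point here is only that text documents, once digitized, can be searched and tallied systematically.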

Records are official documents or data prepared for use by others and, as such, are typically collected and organized more carefully than documents. Many records are computerized. Some are collected and held by the agency primarily for internal use, but they are more official than documents and, therefore, are collected more systematically. Such records could include personnel data on employee absences and turnover or data on patients or students and the services they receive, their demographics, test scores, health status, attendance, and such. Other records are organized by external agencies to be used for tracking and in research by others. These would include test scores collected by state departments of education, measures of air quality collected by environmental agencies, economic records maintained by a variety of government agencies, census data, and the like. Of course, such public data can be useful for giving a big picture of the context, but they are rarely sensitive enough to be used to identify a program effect. Remember that, although existing information can be cheaper, the cost will not be worth the savings if the information is not valid for the purposes of the current evaluation study. Unlike data collected originally for the study, this information has been collected for other purposes. These purposes may or may not match those of your evaluation.
Identifying Sources and Methods for Original Data Collection: A Process

In many cases, existing data may be helpful but cannot serve to completely answer the evaluation questions to the satisfaction of stakeholders. Therefore, for most studies evaluators will have to collect some original data. In Chapter 14, we reviewed the typical sources of data, that is, the people from whom one might collect information. Recall that common sources for data include:

funding agency officials)
• Persons with special expertise in the program’s content or methodology (other program specialists, college or university researchers)
• Program events or activities that can be observed directly

To select a source and method, evaluators take these steps:

1. Identify the concept or construct that must be measured in each evaluation question specified in the evaluation plan. For example, if the question is: “Did patients’ health improve after a six-week guided meditation class?” the key concept is patients’ health.
2. Consider who has knowledge of this concept. Several sources may emerge. Of course, patients are likely to have the most knowledge of how they feel, but with some conditions, they may not be able to accurately report their condition. In these cases, family members or caregivers may be an important secondary source. Finally, if patients have a specific medical condition, such as high blood pressure, high cholesterol, or diabetes, evaluators may also look to the medical providers or existing records to obtain periodic physical measures. In this case, multiple measures might be very useful to obtain a fuller picture of patients’ health.
3. Consider how the information will be obtained. Will evaluators survey or interview patients and their family members or caregivers? How much detail is needed? How comparable do the responses have to be? If evaluators want to compare responses statistically, surveys may be preferable. If they want a better sense for how patients feel and how the program has or has not helped them, interviews may be a better strategy. Both surveys (of many) and interviews (perhaps with a subset) may be used, but at a higher cost. Similarly, with the patients’ blood pressure or other measures, should evaluators obtain the records from files, or do they also want to talk to the health care providers about the patients’ health?
4. Identify the means to be used to collect the information. Will surveys be administered face-to-face with patients as they come to the meditation class or to another office? Will they be conducted as a short interview by phone? If patients or their family members are likely to use computers, an electronic survey might be used. Finally, surveys could be mailed. Some of these choices have to do with the condition of the patients. Family members or caregivers may be less accessible, so face-to-face administration of survey items or interviews would have to be arranged, and visits to homes might not permit family members to discuss the patient privately. Telephone interviews might introduce similar privacy and validity concerns. Mailed or electronic surveys might be preferable.
5. Determine what training must take place for those collecting the data and how the information will be recorded.
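Purely as an illustration, the five steps can be captured in a simple planning record. The sketch below restates the chapter’s meditation-class example in Python; the structure and field names are our own invention, not a form from the text.

```python
# A sketch of one data-collection plan entry following the five steps above.
# The class and field names are illustrative, not from the textbook.
from dataclasses import dataclass

@dataclass
class MeasurePlan:
    construct: str              # Step 1: what must be measured
    sources: list[str]          # Step 2: who/what has knowledge of it
    methods: list[str]          # Step 3: how the information is obtained
    administration: str         # Step 4: means of collection
    training_notes: str         # Step 5: training and recording procedures

plan = MeasurePlan(
    construct="patients' health",
    sources=["patients", "family members/caregivers", "medical records"],
    methods=["survey (all patients)", "interviews (subset of patients)"],
    administration="mailed or electronic surveys; phone interviews",
    training_notes="train interviewers; standardize recording forms",
)
print(plan)
```

An evaluation with several questions would simply carry one such record per construct, which makes gaps (a construct with no credible source, or a source with no feasible method) easy to spot.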
Another concept to be considered in the evaluation question is the six-week guided meditation class. The evaluators need to determine whether the program is being delivered as planned and perhaps assess its quality. The program theory proposes that this class should take place in a particular way, under leaders with certain types of training and skills, for a certain length of time, and with specified equipment and facilities. A quiet, carpeted room with comfortable places for sitting might be important. In measuring program delivery, evaluators need to identify the critical concepts to monitor or describe, and then use a similar process to identify data sources, methods, and procedures.

We have briefly described and illustrated a process for considering the construct(s) to be measured, potential sources for the construct, and then, given the source, the best manner for collecting the information. Let us move on to discussing different types of data collection, and their strengths and weaknesses, so readers can make a more informed choice about the method(s) to use.
Using Mixed Methods

Note that, as shown in the previous example, evaluators often use mixed methods. In using mixed methods to measure the same construct, evaluators should consider their purposes in order to select the right mix and order for the measures. Mixed measures might be used for any of the following reasons:
—perceptions of health and physical measures of a health indicator—are important and inform evaluators’ views of patients’ overall health.
• Development purposes, when responses to one measure help evaluators in developing the next measure. In these examples, interviews and surveys may be used for development purposes. Interviews may first inform the types or wording of survey questions. The analysis of the survey data may then be followed by interviews with patients or health care providers to learn more about trends found in the survey data.

In the next sections, we will review some common methods for collecting information. We will be building on the classification scheme that we introduced in Chapter 14 (see pp. 348–349). Our focus here, however, is on providing more detail on particular methods that you have chosen to use, describing their strengths and weaknesses, and providing information on some other choices evaluators make in implementing particular methods.
Observations

Observations are essential for almost all evaluations. At minimum, such methods would include site visits to observe the program in operation and making use of one’s observational skills to note contextual issues in any interactions with stakeholders. Observation can be used more extensively to learn about the program operations and outcomes, participants’ reactions and behaviors, interactions and relationships among stakeholders, and other factors vital to answering the evaluation questions. Observation methods for collecting evaluation information may be quantitative or qualitative, structured or unstructured, depending on the approach that best suits the evaluation question to be addressed.
Observations have a major strength: Evaluators are seeing the real thing—the program in delivery—a meeting, children on the playground or students in the halls, participants in their daily lives. If the evaluation questions contain elements that can be observed, evaluators should definitely do so. But, many observations also have a major drawback—the fact that the observation itself may change the thing being observed. So, evaluators may not be seeing the real thing but, instead, the way those present would like to be observed. Program deliverers or participants may, and probably do, behave differently in the presence of a stranger. Some programs or phenomena being observed are so public that the observers’ presence is not noted; the program has an audience already. For example, court proceedings or city council hearings can be observed with little or no concern for reactivity because others—nonevaluators—are there to observe as well. But, in many cases, the presence of the evaluator is obvious and, given the circumstances, observers may need to be introduced and their role explained. Those being observed may have to give informed consent for the observation to take place. In such cases, it is recommended that several observations be conducted and/or that observations continue long enough for those being observed to become more accustomed to the observation and, therefore, behave as they might without the presence of the observer. Evaluators should judge the potential for reactivity in each setting being observed and, if reactivity is a problem, consider how it might be minimized or overcome. We will discuss reactivity more later.

Of course, observations in any evaluation may be confidential or require informed consent. Evaluators should consider their ethical obligation to respect the dignity and privacy of program participants and other stakeholders in any observation.
stakeholder groups? Jorgensen (1989) writes:

The basic goal of these largely unfocused initial observations is to become increasingly familiar with the insiders’ world so as to refine and focus subsequent observation and data collection. It is extremely important that you record these observations as immediately as possible and with the greatest possible detail because never again will you experience the setting as so utterly unfamiliar (p. 82).

Unstructured observations remain useful throughout the evaluation if evaluators are alert to the opportunities. Every meeting is an opportunity to observe stakeholders in action, to note their concerns and needs and their methods of interacting with others. If permitted, informal observations of the program being evaluated should occur frequently.1 Such observations give the evaluator a vital picture of what others (e.g., participants, deliverers, administrators) are experiencing, as well as the physical environment itself. Each member of the evaluation staff should be required to observe the program at least once. Those most involved should observe the program frequently to note changes and gain a greater understanding of the program as it is delivered. When two or more members of the evaluation team have observed the same classes, sessions, or activities, they should discuss their perspectives on their observations. All observers should keep notes to document their perceptions at the time. These notes can later be arranged into themes as appropriate. (See Fitzpatrick and Fetterman [2000] for a discussion of an evaluation with extensive use of program observations or Fitzpatrick and Greene [2001] for a discussion of different perceptions by observers and how these differences can be used.)

1 We say “if permitted” because some program interactions may be private, e.g., therapy sessions or physical exams in health care. Such sessions may ultimately be observed but not in an informal fashion without participants’ consent. By “informal observation” we mean wandering into the program and observing its general nature. Training or educational programs and some social services and judicial programs present this opportunity.
interactions, receptionist-client interactions, and so on.

A final category of observations is participants’ behaviors. What behaviors might one observe? Imagine a school-based conflict-resolution program designed to reduce playground conflicts. Observations of playground behaviors provide an excellent method for observing outcomes. Imagine a new city recycling program for which there are questions about the level of interest and nature of participation. The frequency of participation and the amount and type of refuse recycled can be easily observed, although the process may be a little messy! Students’ attention to task has been a common measure observed in educational research. While many programs focus on outcomes that are difficult to observe, such as self-esteem or the prevention of alcohol and drug abuse, many others lead to outcomes that can be readily observed. This is particularly the case when the target audience or program participants are congregated in the same public area (e.g., hospitals, schools, prisons, parks, or roads).

Structured methods of observation typically involve using some type of form, often called an observation schedule, to record observations. Whenever quantitative observation data are needed, we advise reviewing the literature for existing measures and, if necessary, adapting an existing instrument to the particulars of the program to be evaluated. Other concerns in structured observation involve training observers to ensure consistency across observers. Careful training and measuring for consistency, or reliability, are critical. The differences that emerge in the data should reflect real differences in what is observed, not differences in the observers’ perceptions. It is important to consider not only what to observe but also the sampling for observation: Which sites should be observed? Which sessions or what times should be observed? If individual participants or students are selected from among a group for observation, how will they be chosen? (See Greiner, 2004, for more on structured observations, particularly training observers, calculating inter-rater reliability, and using it for feedback to improve the quality of the observations.)
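Measuring consistency across observers is commonly done with an agreement statistic such as Cohen’s kappa, which corrects raw agreement for agreement expected by chance. The sketch below computes kappa from two observers’ codes for the same sessions; the ratings are invented for illustration, and this is only one of several reliability statistics an evaluator might choose.

```python
# Cohen's kappa for two observers rating the same observation sessions.
# kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
# p_e is the agreement expected by chance from each rater's marginals.
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes from an observation schedule ("on"/"off" task).
a = ["on", "on", "off", "on", "off", "on", "on", "off"]
b = ["on", "off", "off", "on", "off", "on", "on", "on"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 1.0 would be perfect agreement
```

Low kappa values during training signal that the observation schedule or the observers’ instructions need revision before real data collection begins.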

Brandon et al. (2008) describe observations of an inquiry-based science program in middle schools. They identify three purposes for examining implementation: adherence of the implementation to the original model, the amount of exposure students or participants have to the model (dosage), and the quality of the implementation. Adherence is whether programs are delivered according to a logic model or program theory; quality is concerned with how the program is implemented. They note that “Observations are necessary for measuring quality because they avoid self-report biases by providing external reviewers’ perspectives” (2008, p. 246). They make use of videotapes of “key junctures” in the inquiry-based curricula and focus on teachers’ questioning strategies. As they note, the process forces them to make decisions about what to observe and, thus, to “winnow down [a] list of program features to examine the essential characteristics that most directly address program quality and are most likely to affect student learning and understanding” (p. 246). They must make decisions about which schools and teachers to videotape and what features or events to record, and then they must train judges to compare teachers’ responses through the videotapes. The project illustrates the complexity of observation, but also its utility in evaluating and identifying what constitutes quality performance. Readers’ use of observation may be less complex, but it can be informed by the procedures and forms used by Brandon et al. and their focus on quality.

Observations can also be used as quality checks on adherence in program implementation. In such cases, the factors to describe or evaluate should be key elements in the program’s logic models or theory. Program deliverers may be directed to maintain logs or diaries of their activities, and participants may be asked to report what they have experienced in regard to the key elements. But mixed methods, including observation, are useful to document the reliability and validity of these self-report measures. Evaluation staff can train observers to document or describe the key elements of the program using checklists with notes to explain variations in what is observed. Zvoch (2009) provides an example of using classroom observations of teachers delivering two early childhood literacy programs across a large school district. They used observation checklists of key elements in the program and found variation in program implementation both at early and later stages of the program. They were able to identify teacher characteristics and contextual variables associated with adherence to the program model that were then helpful in analyzing and identifying problems. For example, teachers with larger class sizes were less able to adhere to the model. Not surprising, but helpful to know for future program dissemination.
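To make the checklist idea concrete, here is a minimal sketch, with invented data, of how adherence might be summarized from observation checklists and set against a contextual variable such as class size. The data structure and names are our own illustration, not Zvoch’s instruments.

```python
# Summarize adherence from observation checklists (invented data).
# Each classroom's checklist records whether each key program element
# from the logic model was observed during the session.
checklists = {
    "room_a": {"class_size": 18, "elements": [True, True, True, False, True]},
    "room_b": {"class_size": 31, "elements": [True, False, False, True, False]},
    "room_c": {"class_size": 22, "elements": [True, True, False, True, True]},
}

for room, data in checklists.items():
    adherence = sum(data["elements"]) / len(data["elements"])
    print(f"{room}: class size {data['class_size']}, adherence {adherence:.0%}")
```

Tabulating adherence this way across many classrooms is what lets an evaluator notice patterns like the class-size effect described above.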

2 Survey more appropriately refers to the general method, whereas questionnaire, interview protocol, and the like refer to instruments used to collect the actual data.

3 Most organizations make use of satisfaction surveys with clients, parents, and the like. Often these are conducted in a rote and superficial way. We encourage evaluators and administrators to make use of these surveys to add items that might be useful for a particular, timely issue.

• Surveys of stakeholders or the general public to obtain perceptions of the program or of their community and its needs or to involve the public further in policy issues and decisions regarding their community (Henry, 1996).

These are some of the common uses of surveys, but they can be used for many purposes with many different audiences. We will now move to how evaluators identify or develop surveys for evaluation studies.
Table 16.1

Question to be answered | Item type | Item number(s) | Analysis
2. How did clients first learn of the agency? | Multiple-choice | 21 | Percentages
3. What type(s) of services do they receive from the agency? | Checklist | 22–23 | Percentages
4. Do opinions differ by type of service required? | | Score on 2–20 with 22–23 | t-tests and ANOVA, explore

When the purpose of the survey is to measure opinions, behaviors, attitudes, or life circumstances quite specific to the program being evaluated, the evaluators are likely to be faced with developing their own questionnaires. In this case, we recommend developing a design plan for the questionnaire that is analogous to the evaluation design used for the entire evaluation. In the first column, list the questions (not the items) to be answered by the survey. That is, what questions should the results of this survey answer? In the second column, indicate the item type(s) that should be used to obtain this information. A third column may be used after items are developed to reference the numbers of the items that are linked to each question. A fourth column can then specify the means of analysis. Table 16.1 provides an illustration. This design then becomes a guide for planning the questionnaire and analyzing the information obtained. It helps evaluators confirm that they have included a sufficient number of items to answer each question. (Some questions require more items than others.) The design also helps to avoid items that sound interesting but, in fact, don’t really address any of the evaluation questions. Evaluators may decide to include such items, but their purpose should be further explored. Items that do not answer a question of interest lengthen the questionnaire and show disrespect for the time and privacy of the respondent.
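Row 4 of the design plan calls for t-tests and ANOVA to compare opinion scores across service types. A minimal sketch of that analysis step, assuming SciPy is available; the scores and group names below are invented for illustration.

```python
# Compare mean opinion scores (items 2-20) across service types (items 22-23).
# All scores below are invented; real data would come from the questionnaire.
from scipy import stats

counseling = [72, 65, 80, 74, 69, 77]
job_training = [61, 58, 70, 64, 66]
housing = [55, 62, 59, 60, 57, 63]

# Two-group comparison: independent-samples t-test.
t, p = stats.ttest_ind(counseling, job_training)
print(f"t = {t:.2f}, p = {p:.3f}")

# Three or more groups: one-way ANOVA.
f, p = stats.f_oneway(counseling, job_training, housing)
print(f"F = {f:.2f}, p = {p:.3f}")
```

A significant result would then prompt the kind of exploration the table suggests, such as pairwise comparisons to see which service types differ.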

Responses are currently closed, but you can trackback from your own site.

Comments are closed.

Project Proposal Activity

Project Proposal Activity

In this module we specifically address the data sources and methods of data collection and how those types of data can be integrated to enhance validity and credibility. As the final section of your paper, write a description of the sources for your data, how these data will be collected, how they will be analyzed and interpreted, and finally, how they can be integrated and triangulated to enhance validity and credibility.
Your project paper should be in Microsoft Word 2000 or higher. Remember to follow the current edition of APA format. Your paper should be double-spaced and in 12 point font. It should not exceed five pages and should include an additional page that lists your citations.
Every week, your assignment should be submitted using proper APA format. A cover page will not be required. However, the first page of the assignment should include a title and proper headings for sections of your response. In-text citations should follow APA guidelines, and you should include a Reference list as a separate page with entries that match in-text citations. You may include an Appendix if you have additional information (e.g., assessment instruments, tables/figures, other supporting documentation).
All written assignments and responses should follow APA rules for attributing sources.
Assignment 2 Grading Criteria     Maximum Points
Clearly described the sources of data.     4
Clearly described and justified how data will be collected.    4
Clearly described methods of analyzing and interpreting the data.    5
Clearly described how the various kinds of data will be used to triangulate, enhancing the validity and credibility of the project.    5
Utilized correct APA format for headings, margins, spacing, citations, etc.    2
Wrote in a clear, concise, and organized manner; demonstrated ethical scholarship in accurate representation and attribution of sources; displayed accurate spelling, grammar, and punctuation.    4
Total:    24

readings

Overview

“Where is the wisdom we have lost in knowledge?
Where is the knowledge we have lost in information?”
—from The Rock by T.S. Eliott
When persons want to see what is available to solve a problem—whether it is the best place to live when retired, a psychological test, or an investment opportunity—after a diligent search they are often presented with a simple list of options. Sometimes they are disappointed because they are subconsciously looking for the option that would solve their problem. Instead, they find a whole list of possible solutions that are relatively good and bad but no easy answer. Program evaluation is the home of many options—lots of alternatives—but no easy solutions. The best solution is more hard work: you have to think through the problem, its demands, and connections to other problems and decide what makes the most sense. If we are fortunate, we go from information to knowledge and, in rare cases, to wisdom.
In this module, you will see a long list of options, but that is not all. You will be challenged to evaluate the source and to think about what using a method might mean in your plan. Furthermore, you will think about the differences between qualitative and quantitative methods and how you might synthesize the two. If nothing else, you will see the advantages of doing that. Finally, you will be challenged to go beyond analysis and into interpretation and consider how that is done. After all, recommendations are based on interpretations.
The are a number of information sources. The most immediate and useful but often least used are existing records. Secondly, we are gathering other information all of the time, through our observations. Observations, when systematic and carefully done, can be powerful. Another source of data are surveys, which are becoming less useful because they are used so widely and have some inherent often undiscussed psychometric problems. Tests and qualitative information are also in our list.
In the end, data—qualitative and quantitative—need to be analyzed and interpreted. There are some guidelines for that.

Common Sources and Methods

Students often think that the best, most efficient way to gather information is through a test or survey. Actually, tests and surveys are not the most sophisticated, information-laden, and complex sources of information. Those sources are in the artifacts of everyday life and exist in files and our own archives that are, strangely, considered by an IRB to be Level 1 or “exempt” because there is no human participant interaction. On the other hand, documents and records can be the source of excellent information. Gordon Allport, the great American psychologist, published a collection of letters entitled Letters from Jenney (1965) to drive home his belief that correspondence (documents) can be as revealing and actually lead to more insight than tests.
Observations
We are constantly observing. Thousands of studies demonstrate the fruitfulness of careful observation: that it can lead to great insights, as with Piaget’s work on child development and Freud’s insight into the unconscious. We see that systematic and careful observations combined with thoughtful insight can lead to great discoveries. In your text, the authors explore the value of structured and unstructured observations, qualitative observations, and site visits.
Surveys and Tests
Surveys are common to persons in business, education, psychology, and the other social sciences. Hardly a day goes by that we aren’t asked to fill out a survey about something. While it is important to understand the techniques of building a survey and the range of surveys available to the potential evaluator, it is equally important to understand their limitations that are revealed in the research and their status as a statistical measure.
Tests are also used in every discipline but especially in the psychological assessment. The Buros Handbook lists thousands of them, of varying worth and value, depending on the context, reliability, validity, and other characteristics.
Qualitative Data
Most data are analogue: raw, open, and soft. It usually takes work to organize, analyze, and interpret, but it can carry additional meaning that quantitative data cannot. The authors discuss qualitative interviews and focus groups as sources of qualitative information.
Problems in Data Collection and Analysis

As the saying goes, “There is no free lunch.” As a rule, whatever technique we use contains limitations and weaknesses. Many have been documented but many have not. The responsible and careful evaluator is aware of these insights and actually looks for the problems so he or she will not be caught unaware. As all of us know, those times often occur when we are least prepared for them and contain unwelcome surprises.
In any case, it is important to establish a perspective on analysis and interpretation of findings—qualitative and quantitative—because it is in the interpretations that we can give recommendations for improvement.
Summary

The student who takes this module seriously will develop a perspective on data. He or she will understand in a specific way that there are many sources of data and that the most sophisticated data are often the easiest to find and use and are the least risky. The humans who participated in its development have left their traces and moved on, never to return to the scene. The insightful and analytic evaluator or researcher should be aware of those data, not just as sources for answers but questions. Thoughtful consideration of data can lead to important questions that have not been asked and that can lead to interesting and significant insights, even discoveries, that can lead to more questions and insight.
Once the student has gathered information, it is incumbent on him or her to find ways to analyze it and, ultimately, interpret its meaning. In this way, we can move from information to knowledge and from knowledge to wisdom.

Module 6 Readings and Assignments
Complete the following readings early in the module:
•    Read online lecture for Module 6
•    From your textbook, Program evaluation: Alternative approaches and practical guidelines, review:
o    Collecting Evaluation Information: Data Sources and Methods, Analysis, and Interpretation

Program Evaluation: Alternative Approaches and Practical Guidelines
FOURTH EDITION
Jody L. Fitzpatrick
University of Colorado Denver
James R. Sanders
Western Michigan University
Blaine R. Worthen
Utah State University

16 Collecting Evaluative Information: Data Sources and Methods, Analysis, and Interpretation
Orienting Questions
1.

When and why do evaluators use mixed methods for data collection?
2.

What criteria do evaluators use to select methods?
3.

What are common methods for collecting data? How might each method be used?
4.

How can stakeholders be involved in selecting and designing measures? In analyzing and interpreting data?
5.

How does analysis differ from interpretation? Why is interpretation so important?

In the previous chapters, we described how evaluators work with stakeholders to make important decisions about what evaluation questions will serve as the focus for the study and possible designs and sampling strategies that can be used to answer the questions. In this chapter, we discuss the next choices involved in data collection: selecting sources of information and methods for collecting it; planning procedures for gathering the data; and, finally, collecting, analyzing, and interpreting the results.

Just as with design and sampling, there are many important choices to be made. The selection of methods is influenced by the nature of the questions to be answered, the perspectives of the evaluator and stakeholders, the characteristics of the setting, budget and personnel available for the evaluation, and the state of the art in data-collection methods. Nevertheless, using mixed methods continues to be helpful in obtaining a full picture of the issues. Remember, an evaluator’s methodological tool kit is much larger than that of traditional, single-discipline researchers, because evaluators are working in a variety of natural settings, answering many different questions, and working and communicating with stakeholders who hold many different perspectives. As in Chapter 15, we will discuss critical issues and choices to be made at each stage and will reference more detailed treatments of each method.

Before discussing specific methods, we will again comment briefly on the choice between qualitative and quantitative methods, in this case referring specifically to methods of collecting data. Few, if any, evaluation studies would be complete if they relied solely on either qualitative or quantitative measures. Evaluators should select the method that is most appropriate for answering the evaluation question at hand given the context of the program and its stakeholders. They should first consider the best source for the information and then the most appropriate method or methods for collecting information from that source. The goal is to identify the method that will produce the highest quality information for that particular program and evaluation question, be most informative and credible to the key stakeholders, involve the least bias and intrusion, and be both feasible and cost-effective to use. Quite simple, right? Of course the difficulty can be in determining which of those criteria are most important in answering a specific evaluation question. For some, the quality of evidence might be the most critical issue. For others, feasibility and cost may become critical.

Qualitative methods such as content analysis of existing sources, in-depth interviews, focus groups, and direct observations, as well as more quantitative instruments such as surveys, tests, and telephone interviews, should all be considered. Each of these, and other methods that are more difficult to categorize, provide opportunities for answering evaluative questions. In practice, many methods are difficult to classify as qualitative or quantitative. Some interviews and observations are quite structured and are analyzed using quantitative statistical methods. Some surveys are very unstructured and are analyzed using themes that make the data-gathering device more qualitative in orientation. Our focus will not be on the paradigm or label attached to the method, but rather on how and when each method might be used and the nature of the information it generates.
Common Sources and Methods for Collecting Information
Existing Documents and Records

The evaluator’s first consideration for sources and methods of data collection should be existing information, or documents and records. We recommend considering existing information first for three reasons: (1) using existing information can be considerably more cost-effective than original data collection; (2) such information is nonreactive, meaning it is not changed by the act of collecting or analyzing it, whereas other methods of collecting information typically affect the respondent and may bias the response; (3) way too much information is already collected and not used sufficiently. In our excitement to evaluate a program, we often neglect to look for existing information that might answer some of the evaluation questions.

Lincoln and Guba (1985) made a useful distinction between two categories of existing data: documents and records. Documents include personal or agency records that were not prepared specifically for evaluation purposes or to be used by others in a systematic way. Documents would include minutes or notes from meetings, comments on students or patients in their files, organizational newsletters or messages, correspondence, annual reports, proposals, and so forth. Although most documents consist of text or words, documents can include videos, recordings, or photographs. Because of their more informal or irregular nature, documents may be useful in revealing the perspectives of various individuals or groups. Content analyses of minutes from meetings, newsletters, manuals of state educational standards, lesson plans, or individual notes or correspondence can help portray a true picture of events or views of those events. One of the advantages of documents is that they permit evaluators to capture events, or representations of those events, before the evaluation began, so they are often viewed as more reliable than personal recall and more credible to outside audiences (Hurworth, 2005). Text documents can be scanned onto the computer and analyzed with existing qualitative software using content analysis procedures. (See “Analysis of Qualitative Data” at the end of this chapter.)

Records are official documents or data prepared for use by others and, as such, are typically collected and organized more carefully than documents. Many records are computerized. Some are collected and held by the agency primarily for internal use, but they are more official than documents and, therefore, are collected more systematially. Such records could include personnel data on employee absences and turnover or data on patients or students and the services they receive, their demographics, test scores, health status, attendance, and such. Other records are organized by external agencies to be used for tracking and in research by others. These would include test scores collected by state departments of education, measures of air quality collected by environmental agencies, economic records maintained by a variety of government agencies, census data, and the like. Of course, such public data can be useful for giving a big picture of the context, but they are rarely sensitive enough to be used to identify a program effect. Remember that, although existing information can be cheaper, the cost will not be worth the savings if the information is not valid for the purposes of the current evaluation study. Unlike data collected originally for the study, this information has been collected for other purposes. These purposes may or may not match those of your evaluation.
Identifying Sources and Methods for Original Data Collection: A Process

In many cases, existing data may be helpful but cannot serve to completely answer the evaluation questions to the satisfaction of stakeholders. Therefore, for most studies evaluators will have to collect some original data. In Chapter 14, we reviewed the typical sources of data, that is, the people from whom one might collect information. Recall that common sources for data include:

funding agency officials)
• Persons with special expertise in the program’s content or methodology (other program specialists, college or university researchers)
• Program events or activities that can be observed directly

To select a source and method, evaluators take these steps:

1.    Identify the concept or construct that must be measured in each evaluation question specified in the evaluation plan. For example, if the question is: “Did patients’ health improve after a six-week guided meditation class?” the key concept is patients’ health. 2. Consider who has knowledge of this concept. Several sources may emerge. Of course, patients are likely to have the most knowledge of how they feel, but with some conditions, they may not be able to accurately report their condition. In these cases, family members or caregivers may be an important secondary audience. Finally, if patients have a specific medical condition, such as high blood pressure, high cholesterol, or diabetes, evaluators may also look to the medical providers or existing records to obtain periodic physical measures. In this case, multiple measures might be very useful to obtain a fuller picture of patients’ health. 3. Consider how the information will be obtained. Will evaluators survey or interview patients and their family members or caregivers? How much detail is needed? How comparable do the responses have to be? If evaluators want to compare responses statistically, surveys may be preferable. If they want a better sense for how patients feel and how the program has or has not helped them, interviews may be a better strategy. Both surveys (of many) and interviews (perhaps with a subset) may be used but at a higher cost. Similarly, with the patients’ blood pressure or other measures, should evaluators obtain the records from files, or do they also want to talk to the health care providers about the patients’ health? 4. Identify the means to be used to collect the information. Will surveys be administered face-to-face with patients as they come to the meditation class or to another office? Will they be conducted as a short interview by phone? If patients or their family members are likely to use computers, an electronic survey might be used. Finally, surveys could be mailed. Some of these choices have to do with the condition of the patients. Family members or caregivers may be less accessible, so face-to-face administration of survey items or interviews would have to be arranged, and visits to homes might not permit family members to discuss the patient privately. Telephone interviews might introduce similar privacy and validity concerns. Mailed or electronic surveys might be preferable. 5. Determine what training must take place for those collecting the data and how the information will be recorded.
Another concept to be considered in the evaluation question is the six-week guided mediation class. The evaluators need to determine if the program is being delivered as planned and perhaps assess its quality. The program theory proposes that this class should take place in a particular way, under leaders with certain types of training and skills, for a certain length of time, and with specified equipment and facilities. A quiet, carpeted room with comfortable places for sitting might be important. In measuring program delivery, evaluators need to identify the critical concepts to monitor or describe, and then use a similar process to identify data sources, methods, and procedures.

We have briefly described and illustrated a process for considering the construct(s) to be measured, potential sources for the construct, and then, given the source, the best manner for collecting the information. Let us move on to discussing different types of data collection, and their strengths and weaknesses, so readers can make a more informed choice about the method(s) to use.
Using Mixed Methods.

Note that, as shown in the previous example, evaluators often use mixed methods. In using mixed methods to measure the same construct, evaluators should consider their purposes in order to select the right mix and order for the measures. Mixed measures might be used for any of the following reasons:
—perceptions of health and physical measures of a health indicator—are important and inform evaluators’ views of patients’ overall health.
• Development purposes, when responses to one measure help evaluators in developing the next measure. In these examples, interviews and surveys may be used for development purposes. Interviews may first inform the types or wording of survey questions. The analysis of the survey data may then be followed by interviews with patients or health care providers to learn more about trends found in the survey data.

In the next sections, we will review some common methods for collecting information. We will be building on the classification scheme that we introduced in Chapter 14 (see pp. 348–349.) Our focus here, however, is providing more detail on particular methods that you have chosen to use, describing their strengths and weaknesses, and providing information on some other choices evaluators make in implementing particular methods.
Observations

Observations are essential for almost all evaluations. At minimum, such methods would include site visits to observe the program in operation and making use of one’s observational skills to note contextual issues in any interactions with stakeholders. Observation can be used more extensively to learn about the program operations and outcomes, participants’ reactions and behaviors, interactions and relationships among stakeholders, and other factors vital to answering the evaluation questions. Observation methods for collecting evaluation information may be quantitative or qualitative, structured or unstructured, depending on the approach that best suits the evaluation question to be addressed.
Observations have a major strength: Evaluators are seeing the real thing—the program in delivery—a meeting, children on the playground or students in the halls, participants in their daily lives. If the evaluation questions contain elements that can be observed, evaluators should definitely do so. But, many observations also have a major drawback—the fact that the observation itself may change the thing being observed. So, evaluators may not be seeing the real thing but, instead, the way those present would like to be observed. Program deliverers or participants may, and probably do, behave differently in the presence of a stranger. Some programs or phenomena being observed are so public that the observers’ presence is not noted; the program has an audience already. For example, court proceedings or city council hearings can be observed with little or no concern for reactivity because others—nonevaluators—are there to observe as well. But, in many cases, the presence of the evaluator is obvious and, given the circumstances, observers may need to be introduced and their role explained. Those being observed may have to give informed consent for the observation to take place. In such cases, it is recommended that several observations be conducted and/or that observations continue long enough for those being observed to become more accustomed to the observation and, therefore, behave as they might without the presence of the observer. Evaluators should judge the potential for reactivity in each setting being observed and, if reactivity is a problem, consider how it might be minimized or overcome. We will discuss reactivity more later.

Of course, observations in any evaluation may need to be kept confidential or may require informed consent. Evaluators should consider their ethical obligation to respect the dignity and privacy of program participants and other stakeholders in any observation.
Unstructured observations are particularly useful in the early stages of an evaluation, when evaluators are becoming familiar with the program and its setting: What is the environment like? How do participants interact with program staff and other stakeholder groups? Jorgensen (1989) writes:

The basic goal of these largely unfocused initial observations is to become increasingly familiar with the insiders’ world so as to refine and focus subsequent observation and data collection. It is extremely important that you record these observations as immediately as possible and with the greatest possible detail because never again will you experience the setting as so utterly unfamiliar (p. 82).

Unstructured observations remain useful throughout the evaluation if evaluators are alert to the opportunities. Every meeting is an opportunity to observe stakeholders in action, to note their concerns and needs and their methods of interacting with others. If permitted, informal observations of the program being evaluated should occur frequently.1 Such observations give the evaluator a vital picture of what others (e.g., participants, deliverers, administrators) are experiencing, as well as the physical environment itself. Each member of the evaluation staff should be required to observe the program at least once. Those most involved should observe the program frequently to note changes and gain a greater understanding of the program as it is delivered. When two or more members of the evaluation team have observed the same classes, sessions, or activities, they should discuss their perspectives on their observations. All observers should keep notes to document their perceptions at the time. These notes can later be arranged into themes as appropriate. (See Fitzpatrick and Fetterman [2000] for a discussion of an evaluation with extensive use of program observations or Fitzpatrick and Greene [2001] for a discussion of different perceptions by observers and how these differences can be used.)
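As a simple illustration of arranging observers’ notes into themes, the sketch below tallies how often each theme appears across observers. The notes and theme labels here are hypothetical examples of our own, not a prescribed coding scheme.

```python
# A minimal sketch of organizing observers' notes into themes, assuming
# hypothetical note text and theme labels supplied by the evaluation team.
from collections import Counter

# Each note is tagged by the observer with one or more theme labels.
notes = [
    ("Leader arrived late; participants chatted informally", ["logistics"]),
    ("Room was noisy; hallway traffic audible", ["environment"]),
    ("Participants asked many questions about technique", ["engagement"]),
    ("Second observer also noted hallway noise", ["environment"]),
]

theme_counts = Counter(theme for _, themes in notes for theme in themes)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} note(s)")
```

Even a simple tally like this helps the team see which themes recur across observers and sessions and which rest on a single observation.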

1 We say “if permitted” because some program interactions may be private, e.g., therapy sessions or physical exams in health care. Such sessions may ultimately be observed but not in an informal fashion without participants’ consent. By “informal observation” we mean wandering into the program and observing its general nature. Training or educational programs and some social services and judicial programs present this opportunity.
Observation is also well suited to studying interactions among those involved in the program: staff-participant interactions, receptionist-client interactions, and so on.

A final category of observations is participants’ behaviors. What behaviors might one observe? Imagine a school-based conflict-resolution program designed to reduce playground conflicts. Observations of playground behaviors provide an excellent method for observing outcomes. Imagine a new city recycling program for which there are questions about the level of interest and nature of participation. The frequency of participation and the amount and type of refuse recycled can be easily observed, although the process may be a little messy! Students’ attention to task has been a common measure observed in educational research. While many programs focus on outcomes that are difficult to observe, such as self-esteem or preventing alcohol and drug abuse, many others lead to outcomes that can be readily observed. This is particularly the case when the target audience or program participants are congregated in the same public area (e.g., hospitals, schools, prisons, parks, or roads).

Structured methods of observation typically involve using some type of form, often called an observation schedule, to record observations. Whenever quantitative observation data are needed, we advise reviewing the literature for existing measures and, if necessary, adapting an existing instrument to the particulars of the program to be evaluated. Other concerns in structured observation involve training observers to ensure consistency across observers. Careful training and measuring for consistency or reliability are critical. The differences that emerge in the data should be based on real differences in what is observed, not differences in the observers’ perceptions. It is important to consider not only what to observe but also the sampling for observation: Which sites should be observed? Which sessions or what times should be observed? If individual participants or students are selected from among a group for observation, how will they be chosen? (See Greiner, 2004, for more on structured observations, particularly training observers, calculating inter-rater reliability, and using it for feedback to improve the quality of the observations.)
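As one illustration of checking consistency across observers, the following sketch computes percent agreement and Cohen’s kappa, which corrects observed agreement for the agreement expected by chance. The ratings are hypothetical examples of our own.

```python
# A minimal sketch of inter-rater reliability for two observers, assuming
# hypothetical "on-task"/"off-task" ratings of the same six sessions.
from collections import Counter

rater_a = ["on-task", "off-task", "on-task", "on-task", "off-task", "on-task"]
rater_b = ["on-task", "off-task", "on-task", "off-task", "off-task", "on-task"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement from each rater's marginal category frequencies.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)

kappa = (observed - expected) / (1 - expected)
print(f"percent agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```

Kappa values near 1 indicate strong agreement; values near 0 suggest agreement no better than chance and signal a need for further observer training before data collection proceeds.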

Brandon and his colleagues (2008) describe the use of observation in evaluating an inquiry-based science program in middle schools. They identify three purposes for examining implementation: adherence of the implementation to the original model, the amount of exposure students or participants have to the model (dosage), and the quality of the implementation. Adherence concerns whether programs are delivered according to the logic model or program theory; quality is concerned with how well the program is implemented. They note that “Observations are necessary for measuring quality because they avoid self-report biases by providing external reviewers’ perspectives” (2008, p. 246). They make use of videotapes of “key junctures” in the inquiry-based curricula and focus on teachers’ questioning strategies. As they note, the process forces them to make decisions about what to observe and, thus, to “winnow down [a] list of program features to examine the essential characteristics that most directly address program quality and are most likely to affect student learning and understanding” (p. 246). They must make decisions about which schools and teachers to videotape and what features or events to record, and then they must train judges to compare teachers’ responses through the videotapes. The project illustrates the complexity of observation, but also its utility in evaluating and identifying what constitutes quality performance. Readers’ use of observation may be less complex, but it can be informed by the procedures and forms used by Brandon et al. and by their focus on quality.

Observations can also be used as quality checks on adherence in program implementation. In such cases, the factors to describe or evaluate should be key elements in the program’s logic model or theory. Program deliverers may be directed to maintain logs or diaries of their activities, and participants may be asked to report what they have experienced in regard to the key elements. But mixed methods, including observation, are useful to document the reliability and validity of these self-report measures. Evaluation staff can train observers to document or describe the key elements of the program using checklists, with notes to explain variations in what is observed. Zvoch (2009) provides an example of using classroom observations of teachers delivering two early childhood literacy programs across a large school district. The evaluators used observation checklists of key elements in the program and found variation in program implementation at both early and later stages of the program. They were able to identify teacher characteristics and contextual variables associated with adherence to the program model, which were then helpful in analyzing and identifying problems. For example, teachers with larger class sizes were less able to adhere to the model. This is not surprising, but it is helpful to know for future program dissemination.
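As an illustration of how checklist data of this kind might be summarized, the following sketch computes a simple adherence score per classroom. The key elements and observations are hypothetical examples of our own, not those used by Zvoch.

```python
# A minimal sketch of scoring adherence from observation checklists, assuming
# hypothetical key elements drawn from a program's logic model.
key_elements = ["opening review", "guided practice", "independent reading",
                "progress check"]

# One checklist per observed classroom: element -> observed (True/False).
observations = {
    "classroom_1": {"opening review": True, "guided practice": True,
                    "independent reading": True, "progress check": False},
    "classroom_2": {"opening review": True, "guided practice": False,
                    "independent reading": False, "progress check": False},
}

for classroom, checklist in observations.items():
    adherence = sum(checklist[e] for e in key_elements) / len(key_elements)
    print(f"{classroom}: {adherence:.0%} of key elements observed")
```

Scores like these can then be examined alongside contextual variables (class size, teacher experience) to explore why adherence varies, as in the Zvoch example.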

Surveys

2 Survey more appropriately refers to the general method, whereas questionnaire, interview protocol, and the like refer to instruments used to collect the actual data.

3 Most organizations make use of satisfaction surveys with clients, parents, and the like. Often these are conducted in a rote and superficial way. We encourage evaluators and administrators to make use of these surveys to add items that might be useful for a particular, timely issue.

Surveys are used for many purposes in evaluation. Among the common uses are:

• Surveys of stakeholders or the general public to obtain perceptions of the program or of their community and its needs, or to involve the public further in policy issues and decisions regarding their community (Henry, 1996).

These are some of the common uses of surveys, but they can be used for many purposes with many different audiences. We will now move to how evaluators identify or develop surveys for evaluation studies.
Table 16.1

Question to be answered | Item type | Item number(s) | Analysis
2. How did clients first learn of the agency? | Multiple-choice | 21 | Percentages
3. What type(s) of services do they receive from the agency? | Checklist | 22–23 | Percentages
4. Do opinions differ by type of service required? | | Score on items 2–20 with items 22–23 | t-tests and ANOVA, explore

When the purpose of the survey is to measure opinions, behaviors, attitudes, or life circumstances quite specific to the program being evaluated, the evaluators are likely to be faced with developing their own questionnaires. In this case, we recommend developing a design plan for the questionnaire that is analogous to the evaluation design used for the entire evaluation. In the first column, list the questions (not the items) to be answered by the survey. That is, what questions should the results of this survey answer? In the second column, indicate the item type(s) that should be used to obtain this information. A third column may be used after items are developed to reference the numbers of the items that are linked to each question. A fourth column can then specify the means of analysis. Table 16.1 provides an illustration. This design then becomes a guide for planning the questionnaire and analyzing the information obtained. It helps evaluators confirm that they have included a sufficient number of items to answer each question. (Some questions require more items than others.) The design also helps evaluators avoid items that sound interesting but, in fact, don’t really address any of the evaluation questions. Evaluators may decide to include such items, but their purpose should first be explored further. Items that do not answer a question of interest lengthen the questionnaire and show disrespect for the time and privacy of the respondents.
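As a simple illustration of the analyses a design plan like Table 16.1 specifies, the sketch below computes percentages for a multiple-choice item and a t-test comparing opinion scores across two service types. The data are hypothetical, and the scipy library is assumed to be available.

```python
# A minimal sketch of the analyses named in the design plan, assuming
# hypothetical survey responses.
from collections import Counter
from scipy import stats  # assumes scipy is installed

# Item 21 (multiple-choice): how did clients first learn of the agency?
item_21 = ["referral", "friend", "referral", "web", "referral", "friend"]
for choice, count in Counter(item_21).items():
    print(f"{choice}: {count / len(item_21):.0%}")

# Question 4: do opinion scores (summed from items 2-20) differ by the
# service type reported on items 22-23?
scores_service_a = [62, 58, 71, 66, 64]
scores_service_b = [55, 60, 52, 57, 59]
t_stat, p_value = stats.ttest_ind(scores_service_a, scores_service_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

When more than two service types are compared, an ANOVA (e.g., scipy.stats.f_oneway) would replace the t-test, matching the analyses listed in the fourth column of the design plan.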
