- ABSTRACT
The abstract is 125 words or fewer and presents a concise picture of the proposed research, including its major constructs and hypotheses. The abstract is the first section of the paper.
- ACTIVITIES
Actions that are implemented to achieve the desired program results.
- CASE STUDY
A research strategy that investigates a phenomenon in its natural setting using multiple sources of evidence.
- COMPARISON GROUP
A group of people who do not receive the program under investigation and are compared with program recipients on measures of interest. Comparison groups differ from control groups in that they are not selected randomly from the same population as program participants.
- CONTROL GROUP
A group chosen randomly from the same population as the program group but that does not receive the program. It is a stand-in for what the program group would have looked like if it had not received the program. 
- CONVENIENCE SAMPLING
A sampling method in which units are selected because they are readily accessible to the researcher rather than through random selection.
- DEPENDENT VARIABLE
The variable measured as an outcome, whose values are expected to depend on the independent variable.
- DESCRIPTIVE ANALYSIS
Values that describe the characteristics of a sample or population. 
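A descriptive analysis can be as simple as summarizing a sample with a few statistics. A minimal sketch in Python, using hypothetical posttest scores (the data here are illustrative, not from the glossary's sources):

```python
from statistics import mean, median, stdev

# Hypothetical posttest scores from a sample of eight participants
scores = [72, 85, 90, 66, 78, 88, 95, 70]

print(f"mean:   {mean(scores)}")    # central tendency
print(f"median: {median(scores)}")  # middle value, robust to outliers
print(f"stdev:  {stdev(scores)}")   # spread of scores around the mean
```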
- EXPERIMENTAL DESIGNS
Evaluation designs in which potential recipients are randomly assigned to either a "treatment" group (receive the evaluand) or a "control" group (receive either nothing or an alternative intervention). 
- FACT SHEET
Describes a research project or evaluation initiative. Fact sheets often contain technical data lists and statistics or are sometimes a summary of a much longer document.
- FOCUS GROUP
A small panel of persons selected for their knowledge or perspective on a topic of interest that is convened to discuss the topic with the assistance of a facilitator. The discussion is used to identify important themes or to construct descriptive summaries of views and experiences on the focal topic. 
- FORMATIVE EVALUATION
An evaluation conducted during program development or implementation that provides information for improving the program.
- INDEPENDENT VARIABLE
The variable that is manipulated or varied to examine its effect on the dependent variable; in evaluation, typically the program or intervention.
- INFERENTIAL ANALYSIS
Statistical analysis used to reach conclusions that extend beyond the immediate data alone. 
- INPUTS
The resources used to conduct a program.
- INSTITUTIONAL REVIEW BOARD
A review board that evaluates, approves, and monitors all research projects with respect to ethical requirements and practices. 
- INTERVIEW
A data collection method in which participants provide information about their behavior, thoughts, or feelings in response to questions posed by an interviewer.
- LOGIC MODEL
Logic models are visual representations of programs that show how a program is intended to work, that is, how resources that are available to deliver the program are converted into program activities, and how those activities in turn produce intended results. 
- LONGITUDINAL DESIGN
A study design in which data are collected at several points in time from the same individuals, groups, or organizations.
- MIXED METHODS DATA
The use of multiple measures, which include a combination of qualitative and quantitative approaches. 
- NEEDS ASSESSMENT
An evaluative study that answers questions about the social conditions a program is intended to address and the need for the program. 
- OBSERVATION
A data collection method in which the researcher watches and records events and processes.
- OUTCOMES
The end results of the program. Outcomes may be intended or unintended and may be positive or negative.
- OUTCOME EVALUATION
A study of whether the program produced its intended effects. Outcome evaluation focuses on the final phase of the program: its end results.
- POSTTEST
A measure taken after a program ends.
- PRETEST
A measure taken before a program begins.
- PRETEST/POSTTEST DESIGN
A reflexive control design in which only one measure is taken before and after the intervention. 
- PROCESS EVALUATION
A study of how a program is implemented and delivered, focusing on its activities and operations rather than its end results.
- QUALITATIVE DATA
Data that examines phenomena primarily through words and tends to focus on dynamics, meaning and context. 
- QUANTITATIVE DATA
Data that examines phenomena that can be expressed numerically and analyzed statistically. 
- QUASI-EXPERIMENTAL DESIGN
A class of research designs in which program (or treatment) and comparison groups are selected nonrandomly but in which some controls are introduced to minimize threats to the validity of conclusions.
- QUESTIONNAIRE
A set of written questions to which subjects respond. Some restrict the term "questionnaire" to instruments that collect written responses.
- RANDOM ASSIGNMENT
Assignment of potential targets to intervention and control groups on the basis of chance so that every unit in a target population has the same probability as any other to be selected for either group. Also referred to as randomization. 
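The chance-based assignment described above can be sketched in a few lines of Python (a hypothetical illustration, not part of the glossary's sources): shuffling the pool and splitting it gives every unit the same probability of landing in either group.

```python
import random

def randomly_assign(units, seed=None):
    """Shuffle the pool of potential targets and split it in half,
    so every unit has the same chance of either group."""
    rng = random.Random(seed)
    pool = list(units)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (treatment, control)

treatment, control = randomly_assign(range(20), seed=42)
```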
- RELIABILITY
The consistency or stability of a measure over repeated use. An instrument is said to be reliable if repeated efforts to measure the same phenomenon produce the same result.
- SAMPLING
The act of selecting units from a population.
- SECONDARY DATA SOURCES
A source that provides non-original (secondhand) data or information. 
- SNOWBALL SAMPLING
A sampling method in which existing participants identify or recruit additional participants from among people they know.
- SUMMATIVE EVALUATION
A study conducted at the end of a program (or of a phase of the program) to determine the extent to which anticipated outcomes were produced. Summative evaluation is intended to provide information about the worth of the program.
- SYSTEMATIC SAMPLING
A sample drawn by selecting every Nth case from a list of potential units. 
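Selecting every Nth case can be sketched in Python (an illustrative example only; the random starting position keeps each case's selection probability equal):

```python
import random

def systematic_sample(cases, n, seed=None):
    """Select every n-th case from a list, beginning at a random
    starting position between 0 and n - 1."""
    rng = random.Random(seed)
    start = rng.randrange(n)
    return cases[start::n]

# Draw every 10th case from a list of 100 case IDs
sample = systematic_sample(list(range(1, 101)), n=10, seed=7)
```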
- TIME-SERIES DESIGN
Designs that collect data over long time intervals. In evaluation, time-series designs take repeated measurements of key variables at periodic intervals before, during, and after program implementation and analyze changes over time.
- VALIDITY
The extent to which a measure actually measures what it is intended to measure.
- VARIABLE
A measured characteristic, usually expressed quantitatively, that varies across members of a population.
- Crano, W. D., & Brewer, M. B. (2002). Principles and Methods of Social Research (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
- Davidson, E. J. (2005). Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation (1st ed.). Thousand Oaks, CA: Sage Publications.
- Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program Evaluation: Alternative Approaches and Practical Guidelines (4th ed.). Upper Saddle River, NJ: Pearson Education.
- Judd, C. M., Smith, E. R., & Kidder, L. H. (1991). Research Methods in Social Relations (6th ed.). Fort Worth, TX: Holt, Rinehart and Winston.
- McDavid, J. C., & Hawthorn, L. R. L. (2006). Program Evaluation & Performance Measurement: An Introduction to Practice (1st ed.). Thousand Oaks, CA: Sage Publications.
- Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A Systematic Approach (7th ed.). Thousand Oaks, CA: Sage Publications.
- Salkind, N. J. (2004). Statistics for People Who (Think They) Hate Statistics (2nd ed.). Thousand Oaks, CA: Sage Publications.
- Trochim, W. M. K., & Donnelly, J. P. (2008). The Research Methods Knowledge Base (3rd ed.). Mason, OH: Cengage Learning.
- Vogt, W. P. (1999). Dictionary of Statistics & Methodology: A Nontechnical Guide for the Social Sciences (2nd ed.). Thousand Oaks, CA: Sage Publications.
- Weiss, C. H. (1998). Evaluation (2nd ed.). Upper Saddle River, NJ: Prentice Hall.
- United Way of America. (1996). Measuring Program Outcomes: A Practical Approach (2nd ed.). Alexandria, VA: United Way of America.
The references above were used in the development of this evaluation glossary.