Glossary

A

ABSTRACT

A concise summary of the proposed research, typically 125 words or fewer, that presents the major constructs and hypotheses. The abstract is the first section of the paper. [8]

ACTIVITIES

Actions that are implemented to achieve the desired program results.

C

CASE STUDY

A research strategy that investigates a phenomenon in its natural setting using multiple sources of evidence. [10]

COMPARISON GROUP

A group of people who do not receive the program under investigation and who are compared with program recipients on measures of interest. Comparison groups differ from control groups in that they are not selected randomly from the same population as program participants. [10]

CONTROL GROUP

A group chosen randomly from the same population as the program group but that does not receive the program. It is a stand-in for what the program group would have looked like if it had not received the program. [10]

CONVENIENCE SAMPLING

Nonrandom sampling in which the sample is drawn from the target population because of ease of access. [10]

D

DEPENDENT VARIABLE

A measure of the presumed effect in a study. Its values are predicted by other variables, called independent variables. [10]

DESCRIPTIVE ANALYSIS

Analysis that produces values describing the characteristics of a sample or population. [7]
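
For readers who analyze data programmatically, the sketch below shows a few common descriptive values computed in Python. The variable names and sample scores are invented for illustration only.

```python
# Minimal sketch: descriptive values for a hypothetical sample of test scores.
import statistics

scores = [72, 85, 90, 66, 78, 88, 95, 70]  # invented data

print("n:", len(scores))
print("mean:", statistics.mean(scores))
print("median:", statistics.median(scores))
print("standard deviation:", statistics.stdev(scores))
print("min/max:", min(scores), max(scores))
```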

E

EVALUATION

The systematic assessment of the operation and/or outcomes of a program or policy, compared to explicit or implicit standards, in order to contribute to the improvement of the program or policy. [10]

EXPERIMENTAL DESIGNS

Evaluation designs in which potential recipients are randomly assigned to either a "treatment" group (which receives the evaluand) or a "control" group (which receives either nothing or an alternative intervention). [2]

F

FACT SHEET

A brief document that describes a research project or evaluation initiative. Fact sheets often contain technical data lists and statistics, or summarize a much longer document.

FOCUS GROUP

A small panel of persons selected for their knowledge or perspective on a topic of interest that is convened to discuss the topic with the assistance of a facilitator. The discussion is used to identify important themes or to construct descriptive summaries of views and experiences on the focal topic. [6]

FORMATIVE EVALUATION

A type of evaluation conducted during the course of program implementation whose primary purpose is to provide information to improve the program under study. [10]

I

IMPACT

Change or (sometimes) lack of change caused by the evaluand. This term is similar in meaning to the terms outcome and effect. The term impact is often used to refer to long-term outcomes. [2]

INDEPENDENT VARIABLE

The presumed cause of some outcome under study; changes in an independent variable are expected to predict a change in the value of a dependent variable. [10]

INFERENTIAL ANALYSIS

Statistical analysis used to reach conclusions that extend beyond the immediate data alone. [8]
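
As one common example of an inferential procedure, the sketch below runs an independent-samples t-test with SciPy to ask whether a hypothetical program group and comparison group differ on an outcome measure. The groups and scores are invented for illustration only.

```python
# Minimal sketch: an inferential test (independent-samples t-test) comparing
# two hypothetical groups. The scores below are invented for illustration.
from scipy import stats

program_group = [78, 85, 90, 88, 76, 92, 81]
comparison_group = [70, 74, 68, 80, 72, 75, 69]

t_stat, p_value = stats.ttest_ind(program_group, comparison_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests the observed difference would be unlikely if the
# two populations did not actually differ.
```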

INPUTS

The resources used to conduct a program. [10]

INSTITUTIONAL REVIEW BOARD

A review board that evaluates, approves, and monitors all research projects with respect to ethical requirements and practices. [1]

INTERVIEW

A data collection method in which participants provide information about their behavior, thoughts, or feelings in response to questions posed by an interviewer. [1]

L

LOGIC MODEL

Logic models are visual representations of programs that show how a program is intended to work, that is, how resources that are available to deliver the program are converted into program activities, and how those activities in turn produce intended results. [5]

LONGITUDINAL DESIGN

A study design in which data are collected at several points in time from the same individuals, groups, or organizations. [10]

M

MIXED METHODS DATA

Data collected using multiple measures that combine qualitative and quantitative approaches. [3]

N

NEEDS ASSESSMENT

An evaluative study that answers questions about the social conditions a program is intended to address and the need for the program. [6]

O

OBSERVATION

A data collection method in which the researcher watches and records events and processes. [10]

OUTCOME

The end results of the program. Outcomes may be intended or unintended, positive or negative. [10]

OUTCOME EVALUATION

A study of whether or not the program produced the intended program effects. Outcome evaluation refers to the phase of the program being studied, in this case the end results of the program. [10]

P

POSTTEST

A measure taken after a program ends. [10]

PRETEST

A measure taken before a program begins. [10]

PRETEST/POSTTEST DESIGN

A reflexive control design in which a single measure is taken once before and once after the intervention. [6]
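
A minimal sketch, using invented scores, of how pretest and posttest measures for the same participants might be summarized and compared with a paired-samples t-test in Python:

```python
# Minimal sketch: pretest/posttest comparison for the same hypothetical
# participants (scores invented for illustration).
from scipy import stats

pretest = [55, 60, 48, 70, 62, 58]
posttest = [63, 66, 55, 74, 70, 61]

mean_change = sum(post - pre for pre, post in zip(pretest, posttest)) / len(pretest)
t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"mean change = {mean_change:.1f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```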

PROCESS EVALUATION

A form of program evaluation designed to determine whether the program is delivered as intended to the target recipients. Also known as implementation assessment. [6]

Q

QUALITATIVE DATA

Data that capture phenomena primarily in words and tend to focus on dynamics, meaning, and context. [10]

QUANTITATIVE DATA

Data that capture phenomena in a form that can be expressed numerically and analyzed statistically. [10]

QUASI-EXPERIMENTAL DESIGN

A class of research designs in which program (or treatment) and comparison groups are selected nonrandomly but in which some controls are introduced to minimize threats to the validity of conclusions. [10]

QUESTIONNAIRE

A group of written questions to which subjects respond. Some researchers restrict the use of the term "questionnaire" to instruments that are answered in writing. [7]

R

RANDOM ASSIGNMENT

Assignment of potential targets to intervention and control groups on the basis of chance, so that every unit in a target population has the same probability as any other of being selected for either group. Also referred to as randomization. [6]
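
A minimal sketch of chance-based assignment in Python; the participant identifiers and group sizes are hypothetical:

```python
# Minimal sketch: random assignment of hypothetical participants to a
# treatment (intervention) group and a control group.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 invented participant IDs

random.shuffle(participants)            # every unit gets an equal chance
midpoint = len(participants) // 2
treatment_group = participants[:midpoint]
control_group = participants[midpoint:]

print("treatment:", treatment_group)
print("control:", control_group)
```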

RELIABILITY

The consistency or stability of a measure over repeated use. An instrument is said to be reliable if repeated efforts to measure the same phenomenon produce the same result. [10]

S

SAMPLING

The act of selecting units from a population. [10]

SECONDARY DATA SOURCES

Sources that provide non-original (secondhand) data or information. [9]

SNOWBALL SAMPLING

A nonprobability sampling method in which each person interviewed is asked to suggest additional knowledgeable people for interviewing. [6]

STAKEHOLDER

Individuals who conduct, participate in, fund, or manage a program, or who may otherwise affect or be affected by decisions about the program or evaluation. [10]

SUMMATIVE EVALUATION

A study conducted at the end of a program (or of a phase of the program) to determine the extent to which anticipated outcomes were produced. Summative evaluation is intended to provide information about the worth of the program. [10]

SYSTEMATIC SAMPLING

A sample drawn by selecting every Nth case from a list of potential units. [10]
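
A minimal sketch of drawing every Nth case in Python; the list of cases and the sampling interval are invented for illustration:

```python
# Minimal sketch: systematic sampling, selecting every Nth case from a list
# of potential units after a random starting point.
import random

population = [f"case_{i:03d}" for i in range(1, 101)]  # 100 invented cases
interval = 10                                          # N: take every 10th case
start = random.randrange(interval)                     # random start within first interval

sample = population[start::interval]
print(f"selected {len(sample)} cases:", sample)
```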

T

TIME-SERIES DESIGN

Designs that collect data over long time intervals. In evaluation, time-series designs take repeated measurements of key variables at periodic intervals before, during, and after program implementation and analyze changes over time. [10]

V

VALIDITY

The extent to which a measure actually measures what it is intended to measure. [6]

VARIABLE

A measured characteristic, usually expressed quantitatively, that varies across members of a population. [10]

 


The following references were used in the development of the evaluation glossary.

  1. Crano, W. D., & Brewer, M. B. (2002). Principles and Methods of Social Research (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
  2. Davidson, E. J. (2005). Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation (1st ed.). Thousand Oaks, CA: Sage Publications.
  3. Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program Evaluation: Alternative Approaches and Practical Guidelines (4th ed.). Upper Saddle River, NJ: Pearson Education.
  4. Judd, C. M., Smith, E. R., & Kidder, L. H. (1991). Research Methods in Social Relations (6th ed.). Fort Worth, TX: Holt, Rinehart and Winston.
  5. McDavid, J. C., & Hawthorn, L. R. L. (2006). Program Evaluation & Performance Measurement: An Introduction to Practice (1st ed.). Thousand Oaks, CA: Sage Publications.
  6. Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A Systematic Approach (7th ed.). Thousand Oaks, CA: Sage Publications.
  7. Salkind, N. J. (2004). Statistics for People Who (Think They) Hate Statistics (2nd ed.). Thousand Oaks, CA: Sage Publications.
  8. Trochim, W. M. K., & Donnelly, J. P. (2008). The Research Methods Knowledge Base (3rd ed.). Mason, OH: Cengage Learning.
  9. Vogt, W. P. (1999). Dictionary of Statistics & Methodology: A Nontechnical Guide for the Social Sciences (2nd ed.). Thousand Oaks, CA: Sage Publications.
  10. Weiss, C. H. (1998). Evaluation (2nd ed.). Upper Saddle River, NJ: Prentice Hall.
  11. United Way of America. (1996). Measuring Program Outcomes: A Practical Approach (2nd ed.). Alexandria, VA: United Way of America.