Evaluation Resource Review

Series Summary

The Resource Review series includes information about six topics relevant to many CYFAR grantees and links to a curated set of high-quality resources that CYFAR projects may find useful in their work. CYFAR PDTA Center staff will highlight each topic and introduce the related resources through podcast interviews, video overviews, and webinars shared through the CYFAR newsletter and this page.

Scaling Up Your Program

Many SCPs use CYFAR funding to improve and then scale up their programming. Scaling up is a systematic process and should be approached with careful consideration of the available resources, need for and interest in the project topic, and the outcomes the project is able to achieve at each stage.

Hear how the teams behind PROSPER, Youth Community Action, and Juntos scaled up their programs.
Available on Spotify or wherever you get your podcasts.

Evaluation is an important component of the scale-up process and can inform both improvements and expansion. Once an SCP has evaluation data on hand, the team can review the quantitative and qualitative data through data displays and identified themes, then use these analyses to discuss the findings and identify next steps. Once team members have reviewed the data displays and compiled a list of key takeaways or areas for improvement, they can look for connections to see whether a single change to program training, implementation, or support could address multiple needs. Then, discuss how to prioritize and address these needs. For example, do you need to provide a midyear training to keep implementation on track? Do you need to give staff upfront guidance about working with youth? For each need, identify a goal or specific aim for improvement and then the specific steps that would move toward that aim. The specific, measurable, achievable, relevant, time-bound (SMART) framework can be helpful for writing these plans; for example, "increase average session attendance from 60 percent to 80 percent by the end of the next program year" is a SMART aim, while "improve attendance" is not.

Following these discussions, the team can consider challenges that may arise after scaling up implementation and create a plan to avoid them. For example, the project directors may have provided support and troubleshooting to a small group of community sites during the CYFAR grant but would be unable to provide the same level of support to a larger group. How can you make sure that providing the supports needed at scale remains feasible? What can the team do to improve materials or trainings? What is the best way to leverage experienced staff to support new users?

An initial step all projects can take is to consider and then set a goal and timeline for scaling up: Are you hoping to reach new communities in the next year? Become a signature program over the next five years? Promote and offer your program curriculum or framework nationwide? Answers to these questions can help orient your scale-up process in the initial stages and continue to inform your team's actions.

Resources
  • ChildTrends: How to Scale Up Effective Programs Serving Children, Youth, and Families
    https://www.childtrends.org/publications/how-to-scale-up-effective-programs-serving-children-youth-and-families-2
  • Strategies to Scale Up Social Programs (Wallace Foundation)
    https://www.wallacefoundation.org/knowledge-center/Documents/Strategies-to-Scale-Up-Social-Programs.pdf

Reliability and Validity

In addition to the required CYFAR Common Measures, CYFAR SCPs can choose from a wide array of participant surveys, observation measures, and other instruments to gather additional data about their program's process and outcomes. Evaluating these measures can be a daunting task; it is difficult to know whether a measure is reliable, accurate, and valid. Researchers and practitioners who develop measures conduct studies to examine their instruments under both ideal and real-world conditions. These studies enable researchers to publish information about a measure's test-retest reliability, face validity, and criterion validity (each described below). PIs and evaluators selecting additional measures should consider these properties as well as the quality and relevance of the underlying studies. Did the study samples include the same age range as your program participants? Did the studies include a diverse participant sample?

The first question to ask of any assessment instrument is whether its scores are consistent and reliable. If a participant were to complete a measure twice under similar conditions, we would expect their responses to be about the same. To confirm a measure's reliability, review the published information about its test-retest reliability as well as the standard deviation. For observation measures, researchers often report the interrater reliability of the measure, along with the training and review process observers completed to achieve that reliability.
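
To make these statistics concrete, here is a minimal sketch, assuming Python with NumPy and using invented scores, of how test-retest correlation and interrater agreement (Cohen's kappa) can be computed. Published studies report these values for you; a sketch like this is mainly useful if your team is checking a measure on its own data.

    import numpy as np

    # Test-retest reliability: the same eight participants complete the
    # measure twice. All scores here are invented for illustration.
    time1 = np.array([3.0, 4.5, 2.5, 5.0, 3.5, 4.0, 2.0, 4.5])
    time2 = np.array([3.5, 4.0, 2.5, 5.0, 3.0, 4.5, 2.5, 4.0])
    test_retest_r = np.corrcoef(time1, time2)[0, 1]  # Pearson correlation
    print(f"Test-retest r = {test_retest_r:.2f}")    # values near 1.0 suggest stable scores

    # Interrater reliability: two observers rate the same ten sessions
    # using categories 0, 1, and 2 (again, invented ratings).
    rater_a = np.array([0, 1, 2, 1, 0, 2, 1, 1, 2, 0])
    rater_b = np.array([0, 1, 2, 1, 1, 2, 1, 0, 2, 0])
    observed = np.mean(rater_a == rater_b)  # proportion of exact agreement
    # Chance agreement: for each category, the rate at which rater A uses it
    # times the rate at which rater B uses it, summed over categories.
    cats = np.union1d(rater_a, rater_b)
    expected = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in cats)
    kappa = (observed - expected) / (1 - expected)  # Cohen's kappa corrects for chance
    print(f"Agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")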

Next, consider the quality of the research backing the face validity of the measure. Do the measure's items align with the concept or skill being measured? Using a STEM knowledge survey as an example, the items should cover all relevant STEM concepts. Is the wording of each item aligned with the correct type of knowledge or skill? For example, a measure of attitudes toward STEM would be worded differently from a measure of STEM concept knowledge.

Finally, examine the criterion validity of the measure. Do the authors report how well the items correlate with other well-established measures of the same concept or topic? Measures that are highly correlated with other valid measures of the same concept are usually considered valid on the basis of that relation.
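
As a brief, hypothetical illustration in the same vein (the scores below are invented), criterion validity is often summarized with a correlation between totals on the new measure and totals on an established measure of the same concept:

    import numpy as np

    # Hypothetical totals for ten participants on a new measure and on an
    # established, previously validated measure of the same concept.
    new_measure = np.array([12, 18, 9, 22, 15, 20, 11, 17, 14, 19])
    established = np.array([30, 42, 25, 50, 36, 47, 28, 40, 33, 45])
    r = np.corrcoef(new_measure, established)[0, 1]
    print(f"Criterion validity r = {r:.2f}")
    # Strong positive correlations are typically read as evidence that the
    # two instruments capture the same underlying concept.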

Remember that these properties hold only when the measure is used under conditions similar to those reported in the study. Changing anything about the measure, including the order, wording, or number of items, can change its reliability and validity. The same is true of using the measure with a new population, such as administering an established adult measure to youth or children. Adapting or adopting a measure for a new context can be useful for the field, but the results of that application should be rigorously studied and reported so that the reliability and validity can be confirmed. This not only benefits the first program to use the adapted measure, increasing confidence in its own data and analyses, but also benefits other programs that may adopt the measure in the future.

Download this resource from CYFAR Experts: How to Find a Valid and Reliable Measure

Watch the presentation on the CYFAR YouTube Channel.

Resources
  • Reliability vs. Validity in Research | Differences, Types, and Examples
    https://www.scribbr.com/methodology/reliability-vs-validity/
  • WAC Clearinghouse: Reliability and Validity
    https://wac.colostate.edu/resources/writing/guides/reliability-validity/

Collecting and Using Qualitative Data

Many SCPs already collect and use qualitative data in their impact statements, stakeholder reports, and scholarly work. SCPs have access to many sources of qualitative data, including interviews, focus groups, staff notes or logs, and participant journals and other documentation. Qualitative data can provide nuanced contextual information that leads to a deeper understanding of survey findings. As with survey data, staff can collect qualitative data virtually or in person, depending on the team's resources and capacity.

SCPs can use an objective and systematic approach to analyzing qualitative data, such as meeting notes and open-response survey items. Depending on availability and capacity, it can be helpful to have multiple staff members work on qualitative analysis; this increases validity and accuracy and spreads the burden of problem-solving and decision-making during analysis. Another way to increase the validity of qualitative findings is to triangulate conclusions with other data sources. Triangulation is good practice in general and particularly critical if only one person is responsible for qualitative analysis. For instance, if qualitative analysis indicates that participants did not find the material engaging, compare this finding with survey data or staff logs to see whether they corroborate it. This maintains the objectivity of the qualitative analysis and can limit bias.
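
As a minimal sketch of this kind of teamwork, assuming Python and using invented theme codes, a team can flag the excerpts where two coders disagree and tally theme frequencies for triangulation against other sources:

    from collections import Counter

    # Hypothetical theme codes that two staff members assigned to the same
    # ten interview excerpts; the themes and codes are invented.
    coder_1 = ["engagement", "logistics", "engagement", "staffing", "logistics",
               "engagement", "staffing", "logistics", "engagement", "logistics"]
    coder_2 = ["engagement", "logistics", "staffing", "staffing", "logistics",
               "engagement", "staffing", "engagement", "engagement", "logistics"]

    # Flag disagreements for the coders to discuss and reconcile together.
    disagreements = [i for i, (a, b) in enumerate(zip(coder_1, coder_2)) if a != b]
    print(f"Excerpts needing discussion: {disagreements}")

    # Theme frequencies from the reconciled codes can then be compared with
    # survey data or staff logs to see whether the pattern holds.
    print(Counter(coder_1))

Counts like these are a starting point for team discussion, not a substitute for reading the underlying excerpts.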

Resources
  • CDC Evaluation Brief: Analyzing Qualitative Data for Evaluation
    https://www.cdc.gov/healthyyouth/evaluation/pdf/brief19.pdf
  • University of Wisconsin Extension: Using Qualitative Data
    https://fyi.extension.wisc.edu/programdevelopment/using-data/
  • DaSy Center: Strengthening Evaluation with Qualitative Methods
    https://dasycenter.sri.com/downloads/DaSy_papers/DaSy_SSIP_QualMethods.pdf

Data Visualization

CYFAR SCPs work with a wide range of data, from CYFAR Common Measures to interviews to program-specific implementation data. It is important to keep data analysis manageable while maintaining the core principles of systematic and unbiased analysis. Calculating a few simple summaries can make quantitative data easier to understand: means or averages for reported and observed variables, percentages for variables focused on completion or accuracy, and frequencies for counts. Project staff can run these calculations on all available data, on data from each community site, or on data within categories of interest (for example, by age group). SCPs should use their logic model to discuss as a team which distinctions in the data are meaningful, in order to limit the number of analyses. Ensure that the planned analyses will address the program's research questions and provide the most useful information for decision-making. For example, if an SCP wants to learn what factors influence implementation of the program, consider whether implementation may vary by community site, program role, or month.
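
As a minimal sketch, assuming Python with pandas and matplotlib and using invented survey records, summaries like these can be grouped by community site and turned into a simple chart for reports or team discussion:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Invented records: one row per participant survey. Site names, age
    # groups, and values are hypothetical.
    df = pd.DataFrame({
        "site":      ["North", "North", "South", "South", "South", "East"],
        "age_group": ["10-12", "13-15", "10-12", "13-15", "13-15", "10-12"],
        "score":     [3.5, 4.0, 2.5, 3.0, 4.5, 3.5],
        "completed": [True, True, False, True, True, True],
    })

    # Mean score by site, and the percentage of surveys completed overall.
    mean_by_site = df.groupby("site")["score"].mean()
    pct_complete = df["completed"].mean() * 100
    print(mean_by_site)
    print(f"Completed: {pct_complete:.0f}%")

    # A simple bar chart of the site means.
    mean_by_site.plot(kind="bar", ylabel="Mean score", title="Mean score by site")
    plt.tight_layout()
    plt.savefig("mean_score_by_site.png")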

Resources
  • DaSy Center Data Visualization Toolkit
    https://dasycenter.org/datavis-toolkit/
  • UMN Extension: Reporting and Data Visualization
    https://extension.umn.edu/program-design-and-evaluation/evaluating-youth-and-education-programs#reporting-and-data-visualization-2657315

Improving Response Rates and Retention

Anyone who distributes and collects surveys struggles with response rates, and CYFAR SCPs are no exception. Surveys provide valuable information to all programs and are a necessary component of federal reporting requirements, program evaluation, and sustainability. One unique challenge that CYFAR SCPs face is improving response rates without offering incentives. Even so, there are ways to improve response rates and to encourage participants to complete surveys in their entirety. Below are some ideas that SCPs have reported using to increase the amount and quality of their data; a simple way to track whether response rates improve is sketched after the list:

  • Talking with all participants about the importance of the data for maintaining and improving the program.
  • Administering surveys in-person, and providing ample time for all participants to complete the survey.
  • Administering surveys over several days to break up the amount of time it takes participants to complete all of the measures.
  • Increasing the accessibility of the measures through group administration:
    • Reading the items aloud for all participants, encouraging those who wish to work at their own pace to move ahead as needed;
    • Defining any new or unique terms for the group in advance if possible.
  • Using a personal connection with participants to thank them for their time and to encourage them to answer honestly and to the best of their ability.
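
As a minimal sketch (the site names and counts below are invented), tracking the response rate at each site makes it easy to see whether these strategies are working over time:

    # Hypothetical counts of surveys distributed and returned at each site.
    sites = {
        "North": {"distributed": 40, "returned": 22},
        "South": {"distributed": 35, "returned": 28},
        "East":  {"distributed": 25, "returned": 14},
    }

    for site, counts in sites.items():
        rate = counts["returned"] / counts["distributed"] * 100
        print(f"{site}: {rate:.0f}% response rate "
              f"({counts['returned']} of {counts['distributed']})")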

Another issue for many programs is participant retention. CYFAR SCPs invest a great deal of time and money in planning and executing their programming, and ensuring that participants attend the full program helps realize those investments. Establishing a personal connection with participants as early in the program as possible, and reaching out to them when they do not attend, can help improve retention. To learn more about why participants might leave your program, ask those who have missed a session or several sessions why they have not returned. Although program staff may not be able to address every barrier to attendance, identifying common or consistent issues can help the team adapt the program in future iterations. For example, if many participants stop attending due to limited transportation, identifying a site near a bus line may help. For programs in areas without public transit, help participants organize carpools and find rides with one another as needed.

Resources
  • University of Wisconsin Extension: Increasing Response Rates (tip sheet)
    https://fyi.extension.wisc.edu/programdevelopment/files/2019/10/Zoom-Sheet-Increasing-Response-Rates-updated-links.docx
  • Pell Institute: Evaluation Toolkit
    https://www.pellinstitute.org/pell-resources-and-projects/evaluation-toolkit/
  • Qualtrics: How to Increase Online Survey Response Rates
    https://www.qualtrics.com/experience-management/research/tools-increase-response-rate/