Section 3.0: Evaluation Planning

Evaluation is the search for evidence that confirms a program has been implemented and that attributes outcomes to the intervention. In other words, evaluation reveals which program components were implemented, how they were implemented, and whether impacts (such as changes in health status or disease rates) are due to the program itself or to some factor outside the program. Listed below are several reasons to conduct evaluation:

  • To determine if program objectives were achieved
  • To develop quality assurance and control methods for the program
  • To identify the strengths and weaknesses of the program
  • To determine if the program can be generalized to another audience or setting
  • To identify hypotheses regarding health behavior for future programs and studies
  • To build the evidence base
  • To fulfill obligations to funders
  • To be accountable to stakeholders and your audience

Program evaluation should be considered during program planning. If evaluation is not considered until after the program is completed, opportunities to collect important data will be missed. Notably, you will want to collect baseline data on your participants. Baseline data is information about participants (e.g., health status, opinions, knowledge, behaviors) that is gathered before a program begins. Having baseline data allows you to compare participants’ post-intervention information to their pre-intervention information. For example, if you are implementing a nutrition program, you may compare participants’ pre- and post-intervention rates of fruit and vegetable consumption to determine if the intervention had an impact. Additionally, you may choose to keep attendance logs during program sessions. This is another evaluation activity that cannot be conducted unless evaluation planning occurs before program implementation.

Two types of evaluation are generally conducted: process evaluation and impact/outcome evaluation. Both types are described in the subsequent text.

Glossary of Terms


Adaptation – Making changes to an evidence-based program to make it more suitable for a particular organization and/or audience.

Audience – The individuals for whom you implement your program. Depending on your setting, these individuals may also be referred to as a target population, population of interest, or clientele.

Baseline – A starting point. In evidence-based programming, the term “baseline” is usually used in the context of data collection, where baseline data is data collected before a program is implemented.

Buy-in – In the business world, buy-in typically refers to a financial stake. In the context of health programs, the buy-in of stakeholders (community members, organizational leaders, participants, etc.) is generally non-financial: it involves their acceptance of a concept, idea, or proposal.

Credentials – A testimony of qualification, competence, or authority issued to an individual by a third party. Examples of credentials include academic diplomas, academic degrees (e.g., MSW, MPH, PhD), licenses (e.g., MD, RN, LCSW), certifications (e.g., CHES, CPR, first aid), and security clearances.

Data – A collection of facts, such as measurements and statistics.

Evidence – Facts or testimony in support of a conclusion, statement, or belief. In some settings, individuals may refer to “levels of evidence” or “types of evidence”; these terms have specific definitions unique to the setting in which they are used. When referring to evidence-based programs, the term “evidence” generally describes the findings or results of program evaluation studies.

Evidence-based practice – When clinicians (e.g., doctors, nurses) base their healthcare treatment decisions on the findings of current research, their clinical expertise, and the values/preferences of their patients.

Evidence-based program – A program that has been thoroughly evaluated by researchers and shown to produce positive outcomes.

Evidence-informed practice or program – A practice or program that is guided by theories and preliminary research. While there is some indication that these practices and programs produce positive outcomes, the evidence is too weak to refer to them as evidence-based. These are sometimes referred to as “promising” or “emerging” practices and programs.

Fidelity – The extent to which a program is implemented as its developers intended.

Goals – General, non-measurable intentions or outcomes.

Implementation – Putting into action or carrying out a program.

Incentives for participation – Factors that motivate an individual to take part in a program. Organizations sometimes provide incentives to encourage participants to begin and/or remain enrolled in a program. Common incentives include gift cards and program t-shirts.

Instrument – A measurement tool. Instruments can take many forms, including biomedical equipment (e.g., glucometer, blood pressure monitor, weight scale), pencil-and-paper tests, questionnaires, and interviews. A thermometer is an instrument used to measure body temperature; likewise, a survey is an instrument that can be used to measure anxiety.

Intervention – Organized efforts to promote health and prevent disease. This term is used because the efforts intervene, or come between, an individual and a negative health outcome in an attempt to prevent or delay that outcome. “Intervention” and “program” are often used interchangeably.

Interventionist – An individual who implements or carries out the components of a program.

Lay leaders – Individuals without formal healthcare credentials who are trained to lead evidence-based programs.

Medicaid – A publicly funded health insurance program for individuals who have low incomes and fall into certain categories of eligibility.

Medicare – A publicly funded health insurance program for adults over age 65 and individuals with certain disabilities or health conditions.

Objectives – Specific, measurable steps that can be taken to achieve goals.

Partnership – A cooperative relationship between two or more organizations that collaborate to achieve a common goal through the effective use of knowledge, personnel, and other resources.

Peer review – When experts review a professional’s performance, research, or writings. Peer review is a way that qualified professionals self-regulate their professions. Performance, research, or writings that pass the peer review process have increased credibility or trustworthiness.

Primary data – Original or new data collected for a specific research goal.

Program champion – An individual who advocates for a program.

Protocols – Predefined procedural methods. Examples include detailed program implementation procedures, required equipment, required data collection instruments with detailed instructions for administration, and recommended safety precautions.

Quality assurance – A collection of planned, systematic activities applied to ensure that the components of a program are being implemented well.

Readiness – The degree to which an organization is prepared for something.

Secondary data – Previously collected data that is being used for a purpose other than that for which it was originally collected.

Stakeholder – Any individual or group that has a stake or interest in a program.

Theory of behavior change – An attempt to explain how and why people change their behaviors. Researchers typically generate theories of behavior change from research in the fields of psychology, education, health, and other social sciences. When developing evidence-based programs, researchers select a theory, or components from several theories, to guide program development.

Translation – The process of taking a program originally implemented in a controlled, laboratory-like setting and making it suitable for implementation in the community.
