Implement an Evidence-Based Program

Section 4.1: Fidelity Monitoring

The steps to fidelity monitoring include activities that take place before, during, and after program implementation. Each staff member who participates in program implementation can also participate in fidelity monitoring. If program personnel are numerous, it may be useful to select a few of these individuals to form a team that carries out the bulk of the monitoring. Depending on the nature of the program, individuals other than program staff, such as the program’s developers, may also take part.

Before program implementation

  • Identify and understand the key elements or components of the program — Read through the program curriculum and become familiar with each topic to be addressed, as well as the worksheets, checklists, tip sheets, and other handouts that will be given to participants. Note how knowledge and skills build as the program advances.
  • Learn about the theoretical basis of the program — Study the theory or theories of behavior change (e.g., Social Cognitive Theory, Stages of Change, Health Belief Model) that underpin the program. Working knowledge of the theoretical underpinnings will provide an understanding of how program components fit together and why they are essential.
  • Develop a fidelity monitoring tool — If the developers of your program provided a fidelity monitoring tool, this step is simplified for you. If not, work collaboratively to develop an easy-to-complete tool. Checklists are frequently used for this purpose. At minimum, a fidelity checklist should list each key element or component of the program so each can be checked off once completed. Ideally, it will also include designated areas to document when each activity was completed, who completed it, and any contextual factors (e.g., tragedies within the community, programs implemented by other organizations) that might influence the activities' impact. Example fidelity checklists can be found in the Stanford Self-Management Fidelity Toolkit, and a minimal illustrative sketch follows this list.
  • Provide fidelity monitoring training — Training ensures that each staff member responsible for fidelity monitoring understands why fidelity matters and how to use the fidelity monitoring tool.
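
As a loose illustration only (the toolkit itself does not prescribe any software), the sketch below shows one way such a checklist could be captured as structured data. The component names and fields are hypothetical; real examples are available in the Stanford Self-Management Fidelity Toolkit.

```python
# A minimal sketch of a fidelity checklist as structured data.
# Component names and fields are hypothetical illustrations, not
# taken from any specific program's curriculum.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ChecklistItem:
    component: str                       # key element of the program
    completed: bool = False              # checked off once delivered
    completed_on: Optional[date] = None  # when the activity took place
    completed_by: str = ""               # who delivered the activity
    notes: str = ""                      # contextual factors, problems, etc.

# One checklist per session, listing each key component to deliver.
session_1 = [
    ChecklistItem("Welcome and program overview"),
    ChecklistItem("Goal-setting worksheet distributed"),
    ChecklistItem("Action-planning video shown"),
]
```

A paper checklist or spreadsheet with the same columns works just as well; what matters is capturing each component, its completion status, the date, the responsible staff member, and contextual notes.
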
Additional Resource

The National Council on Aging provides The Fidelity Tool, a step-by-step guide to fidelity monitoring.

During program implementation

  • Complete the fidelity monitoring tool as activities are implemented — Update the fidelity monitoring tool soon after activities are implemented; delays between implementation and documentation make details hard to remember. For example, a staff member may forget whether a particular handout was given to participants, or that a particular video could not be shown because the DVD player was not working (see the sketch after this list).
  • Document any implementation problems that arise — Identifying problems helps program planners develop strategies to avoid the same problems in the future. Additionally, program evaluators can use documentation of implementation problems to help explain discrepancies between their program's outcomes and those reported in evaluation studies of the program.
  • Provide ongoing training and support — Program staff can work together to make sure each staff member understands and is fulfilling his or her role in fidelity monitoring. If a fidelity monitoring activity is not working as planned, staff can address it and make changes in a timely manner so it works for the remainder of the program.
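
Continuing the hypothetical sketch above, completing the tool soon after each activity, and documenting problems as they arise, might look like this:

```python
from datetime import date

# Record each activity soon after it happens, while details are fresh.
session_1[0].completed = True
session_1[0].completed_on = date.today()
session_1[0].completed_by = "J. Smith"   # hypothetical staff member

# Document problems even when an activity could not be delivered as
# planned, so evaluators can account for the deviation later.
session_1[2].notes = "Video not shown; DVD player was not working."
```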

After program implementation

  • Review fidelity monitoring tools for completion — Make sure that all of the forms used to monitor fidelity have been completed.
  • Work with program evaluators to determine the extent of program fidelity and whether it influenced outcomes — Program evaluators can consider whether deviations from the program's protocol, such as sessions that were not offered or materials that were not provided, influenced program outcomes (a simple way to summarize fidelity is sketched after this list).
  • Use fidelity monitoring data to plan for future rounds of implementation — Identify strengths to build on and trouble spots to target for improvement the next time the program is implemented. Fidelity monitoring data is also useful for quality assurance activities (see Quality Assurance for more details).
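
Building on the same hypothetical checklist, one simple way to summarize the extent of fidelity is the percentage of key components delivered as intended; real evaluations may also weigh component importance, dose, and quality of delivery.

```python
def fidelity_score(checklist):
    """Percentage of key components delivered as intended."""
    done = sum(1 for item in checklist if item.completed)
    return 100.0 * done / len(checklist)

# Using the hypothetical session_1 checklist from the earlier sketch:
print(f"Session 1 fidelity: {fidelity_score(session_1):.0f}%")
```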

Despite the need for fidelity, it is likely that your organization would like to change certain elements of the program you have chosen to implement. In fact, you may feel that these elements must be changed in order for the program to produce positive outcomes among your audience members. What do you do in these situations? The good news is that some program elements can be changed without jeopardizing fidelity. These are discussed in the following section.

Glossary of Terms

Adaptation – Making changes to an evidence-based program in order to make it more suitable for a particular organization and/or audience.

Audience – The individuals for whom you implement your program. Depending on your setting, these individuals may also be referred to as a target population, population of interest, or clientele.

Baseline – A starting point. In evidence-based programming, the term "baseline" is usually used in the context of data collection, where baseline data is data collected before a program is implemented.

Buy-in – Typically used in the business world, buy-in refers to a financial exchange. In the context of health programs, the buy-in of stakeholders (community members, organizational leaders, participants, etc.) is generally non-financial. It involves their acceptance of a concept, idea, or proposal.

Credentials – A testimony of qualification, competence, or authority issued to an individual by a third party. Examples of credentials include academic diplomas, academic degrees (e.g., MSW, MPH, PhD), licenses (e.g., MD, RN, LCSW), certifications (e.g., CHES, CPR, first aid), and security clearances.

Data – A collection of facts, such as measurements and statistics.

Evidence – Facts or testimony in support of a conclusion, statement, or belief. In some settings, individuals may refer to "levels of evidence" or "types of evidence." These terms will have specific definitions unique to the setting in which they are used. When referring to evidence-based programs, the term "evidence" is generally used to describe the findings or results of program evaluation studies.

Evidence-based practice – When clinicians (e.g., doctors, nurses) base their healthcare treatment decisions on the findings of current research, their clinical expertise, and the values/preferences of their patients.

Evidence-based program – A program that has been thoroughly evaluated by researchers who determined it produces positive outcomes.

Evidence-informed practice or program – A practice or program that is guided by theories and preliminary research. While there is some indication that these practices and programs produce positive outcomes, the evidence is too weak to refer to them as evidence-based. These are sometimes referred to as "promising" or "emerging" practices and programs.

Fidelity – The extent to which a program is implemented as its developers intended.

Goals – General, non-measurable intentions or outcomes.

Implementation – Putting into action or carrying out a program.

Incentives for participation – Factors that motivate an individual to take part in a program. Organizations sometimes provide incentives to encourage participants to begin and/or remain enrolled in a program. Common incentives include gift cards and program t-shirts.

Instrument – A measurement tool. Instruments can take many forms, including biomedical equipment (e.g., glucometer, blood pressure monitor, weight scale), pencil-and-paper tests, questionnaires, and interviews. A thermometer is an instrument used to measure body temperature. Likewise, a survey is an instrument that can be used to measure anxiety.

Intervention – Organized efforts to promote health and prevent disease. This term is used because the efforts intervene, or come between, an individual and a negative health outcome in an attempt to prevent or delay the negative outcome. "Intervention" and "program" are often used interchangeably.

Interventionist – An individual who implements or carries out the components of a program.

Lay leaders – Individuals without formal healthcare credentials who are trained to lead evidence-based programs.

Medicaid – A publicly funded health insurance program for individuals who have low incomes and fall into certain categories of eligibility.

Medicare – A publicly funded health insurance program for adults over age 65 and individuals with certain disabilities or health conditions.

Objectives – Specific, measurable steps that can be taken to achieve goals.

Partnership – A cooperative relationship between two or more organizations that collaborate to achieve a common goal through the effective use of knowledge, personnel, and other resources.

Peer review – When experts review a professional's performance, research, or writings. Peer review is a way that qualified professionals self-regulate their professions. Performance, research, or writings that pass the peer review process have increased credibility or trustworthiness.

Primary data – Original or new data being collected for a specific research goal.

Program champion – An individual who advocates for a program.

Protocols – Predefined procedural methods. Examples include detailed program implementation procedures, required equipment, required data collection instruments with detailed instructions for administration, and recommended safety precautions.

Quality assurance – A collection of planned, systematic activities applied to ensure that the components of a program are being implemented well.

Readiness – The degree to which an organization is prepared or ready for something.

Secondary data – Previously collected data that is being used for a purpose other than that for which it was originally collected.

Stakeholder – Any individual or group that has a stake or interest in a program.

Theory of behavior change – An attempt to explain how and why people change their behaviors. Researchers typically generate theories of behavior change from research in the fields of psychology, education, health, and other social sciences. When developing evidence-based programs, researchers will select a theory or components from several theories to guide program development.

Translation – The process of taking a program originally implemented in a controlled, laboratory-like setting and making it suitable for implementation in the community.
