Select an Evidence-Based Program

Section 3.1

Process Evaluation

Process evaluation assesses the delivery of a program. It examines factors such as the extent to which the program is being implemented as designed, whether the target population is being reached, and the quality of program delivery. Process evaluation can sometimes also serve as formative evaluation, which is evaluation used to improve or fine-tune a program.

Numerous aspects of program delivery can be assessed with process evaluation. Important aspects to evaluate are described below.

  • Justification (the extent to which the program addressed an important need) — Justification can be assessed by comparing needs assessment data with the program to determine if the program addresses a need that was identified during needs assessment.
  • Fidelity (the extent to which the program was implemented as designed) — Fidelity can be assessed with the use of tools such as protocol checklists (either self-administered or administered by a trained observer).
  • Dose delivered (the extent to which all program components were implemented) — Dose delivered can be assessed with program component checklists (either self-administered or administered by a trained observer).
  • Dose received (the extent to which participants took part in program components) — Dose received can be assessed by observing participants’ engagement and measuring knowledge and skill acquisition.
  • Reach (the proportion of the intended audience that took part in the intervention) — Reach can be assessed by dividing the number of individuals who took part in the program by the number of individuals in the intended audience. Attendance logs can help facilitate an assessment of reach.
  • Participant satisfaction (the extent to which participants were satisfied with the program) — Participant satisfaction can be assessed by surveying participants to find out how satisfied they are with the program.
  • Recruitment success (which recruitment methods were effective in attracting participants to the program) — Recruitment success can be assessed by surveying participants to find out how they heard about the program.
  • Accountability (the extent to which staff and partners fulfilled their responsibilities) — Accountability can be assessed by a supervisor comparing actual role and task completion with the roles and tasks assigned during program planning.
  • Context (the extent to which environmental characteristics influenced program implementation or outcomes) — Context can be assessed by surveying participants and program personnel with open-ended questions to determine what they saw and experienced that may have influenced implementation or outcomes.
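Of the aspects above, reach is the one with an explicit calculation: the number of individuals who took part divided by the number in the intended audience. The sketch below illustrates that calculation in Python; the function name and the example figures (45 attendees, 180 in the intended audience) are hypothetical, not from the toolkit.

```python
def reach(num_participants: int, intended_audience_size: int) -> float:
    """Proportion of the intended audience that took part in the program.

    Hypothetical helper illustrating the toolkit's reach formula:
    reach = participants who took part / individuals in the intended audience.
    """
    if intended_audience_size <= 0:
        raise ValueError("intended audience size must be positive")
    return num_participants / intended_audience_size

# Example with made-up numbers: an attendance log shows 45 participants,
# and the needs assessment identified 180 people in the intended audience.
print(f"Reach: {reach(45, 180):.0%}")  # Reach: 25%
```

In practice the numerator typically comes from attendance logs and the denominator from needs assessment or census data, so the main work is defining who counts as "the intended audience" before doing the division.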

Glossary of Terms


Adaptation – Making changes to an evidence-based program in order to make it more suitable for a particular organization and/or audience.

Audience – The individuals for whom you implement your program. Depending on your setting, these individuals may also be referred to as a target population, population of interest, or clientele.

Baseline – A starting point. In evidence-based programming, the term “baseline” is usually used in the context of data collection, where baseline data is data collected before a program is implemented.

Buy-in – Typically used in the business world, buy-in refers to a financial exchange. In the context of health programs, the buy-in of stakeholders (community members, organizational leaders, participants, etc.) is generally non-financial. It involves their acceptance of a concept, idea, or proposal.

Credentials – A testimony of qualification, competence, or authority issued to an individual by a third party. Examples of credentials include academic diplomas, academic degrees (e.g., MSW, MPH, PhD), licenses (e.g., MD, RN, LCSW), certifications (e.g., CHES, CPR, first aid), and security clearances.

Data – A collection of facts, such as measurements and statistics.

Evidence – Facts or testimony in support of a conclusion, statement, or belief. In some settings, individuals may refer to “levels of evidence” or “types of evidence.” These terms will have specific definitions unique to the setting in which they are used. When referring to evidence-based programs, the term “evidence” is generally used to describe the findings or results of program evaluation studies.

Evidence-based practice – When clinicians (e.g., doctors, nurses) base their healthcare treatment decisions on the findings of current research, their clinical expertise, and the values/preferences of their patients.

Evidence-based program – A program that has been thoroughly evaluated by researchers who determined it produces positive outcomes.

Evidence-informed practice or program – A practice or program that is guided by theories and preliminary research. While there is some indication that these practices and programs produce positive outcomes, the evidence is too weak to refer to them as evidence-based. These are sometimes referred to as “promising” or “emerging” practices and programs.

Fidelity – The extent to which a program is being implemented as its developers intended.

Goals – General, non-measurable intentions or outcomes.

Implementation – Putting into action or carrying out a program.

Incentives for participation – Factors that motivate an individual to take part in a program. Organizations sometimes provide incentives to encourage participants to begin and/or remain enrolled in a program. Common incentives include gift cards and program t-shirts.

Instrument – A measurement tool. Instruments can take many forms including biomedical equipment (e.g., glucometer, blood pressure monitor, weight scale), pencil and paper tests, questionnaires, and interviews. A thermometer is an instrument used to measure body temperature. Likewise, a survey is an instrument that can be used to measure anxiety.

Intervention – Organized efforts to promote health and prevent disease. This term is used because the efforts intervene, or come between, an individual and a negative health outcome in an attempt to prevent or delay the negative outcome. “Intervention” and “program” are often used interchangeably.

Interventionist – An individual who implements or carries out the components of a program.

Lay leaders – Individuals without formal healthcare credentials who are trained to lead evidence-based programs.

Medicaid – A publicly funded health insurance program for individuals who have low incomes and fall into certain categories of eligibility.

Medicare – A publicly funded health insurance program for adults over age 65 and individuals with certain disabilities or health conditions.

Objectives – Specific, measurable steps that can be taken to achieve goals.

Partnership – A cooperative relationship between two or more organizations that collaborate to achieve a common goal through the effective use of knowledge, personnel, and other resources.

Peer review – When experts review a professional’s performance, research, or writings. Peer review is a way that qualified professionals self-regulate their professions. Performance, research, or writings that pass the peer review process have increased credibility or trustworthiness.

Primary data – Original or new data being collected for a specific research goal.

Program champion – An individual who advocates for a program.

Protocols – Predefined procedural methods. Examples include detailed program implementation procedures, required equipment, required data collection instruments with detailed instructions for administration, and recommended safety precautions.

Quality assurance – A collection of planned, systematic activities applied to ensure that the components of a program are being implemented well.

Readiness – The degree to which an organization is prepared or ready for something.

Secondary data – Previously collected data that is being used for a purpose other than that for which it was originally collected.

Stakeholder – Any individual or group that has a stake or interest in a program.

Theory of behavior change – An attempt to explain how and why people change their behaviors. Researchers typically generate theories of behavior change from research in the fields of psychology, education, health, and other social sciences. When developing evidence-based programs, researchers will select a theory or components from several theories to guide program development.

Translation – The process of taking a program originally implemented in a controlled, laboratory-like setting and making it suitable for implementation in the community.
