Program evaluation is an integral part of the planning activities of NIAID. Evaluation provides information about NIAID program processes and outcomes and enables the optimization of program performance. NIAID also strives to further program evaluation capacity and training across the Institute.
Program Evaluation Defined
An evaluation is a systematic method for assessing a program or process. In most cases, the purpose of a program evaluation is to help program administrators understand what is working well, what could be modified or improved, and what other programmatic decisions should be made.
The four main types of program evaluations conducted at NIAID are:
- Needs Assessments are conducted to assist in determining the need for a program, defining program goals, and determining how a program should be designed or modified to achieve those goals. A needs assessment is often used as a tool for strategic planning and priority setting.
- Feasibility Studies are conducted as preliminary studies to improve the design of a more complex process evaluation or outcome evaluation. They usually include determining whether conducting an evaluation is appropriate and/or whether the evaluation can be conducted at a reasonable cost.
- Process Evaluations provide information about the efficiency of a program, including how program-critical processes can be improved, and may be conducted at any time during a program’s implementation.
- Outcome Evaluations provide information about a program’s effectiveness and are generally conducted at a point when the program is considered mature and results may be more easily measured.
Note: Experts external to the program often conduct program evaluations, but program managers may also conduct them.
The CDC’s guide on program evaluation provides a glossary that defines many terms commonly used in NIAID evaluations.
The Value of Evaluation
Evaluation activities support decision-making that can lead to proactive management of programs. Program evaluation provides valuable information for performance planning and assessment, leading to informed modifications of strategic plans, resource allocation decisions, and program components or approaches.
Evaluators may contribute the following to a program:
- Ensure public accountability
- Clarify assumptions and assess progress
- Gather dependable and consistent data, perform analyses, identify information gaps
- Advance program knowledge
- Assess budget cost-effectiveness
- Review program effectiveness and usefulness
- Identify opportunities and pathways to achieve objectives, outcomes, and efficiencies
Components of an Evaluation
Using a framework is helpful in determining the steps needed to conduct an evaluation. The CDC framework for conducting an evaluation is commonly used in evaluation settings; however, this is only one of many types of frameworks that can be useful. The CDC evaluation framework includes the following steps:
- Engage Stakeholders
- Describe the Program
- Focus the Evaluation Design
- Gather Credible Evidence
- Justify Conclusions
- Ensure Use and Share Lessons Learned
A logic model, or conceptual framework, depicts how a program is intended to work. It typically illustrates how program resources, population characteristics, program activities, and external factors are expected to influence the achievement of the program’s process, intermediate, and long-term goals. The CDC and OPM provide additional information on developing logic models.
Common Evaluation Methods at NIAID
Every evaluation is unique, but there are several methods and metrics that are commonly used in NIAID program evaluations.
Common Data Collection Methods
- Focus groups
- Data mining
- Administrative and archival data
- Case studies
- Expert panel review
- Network analysis
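As a rough illustration of the network analysis method, an evaluator might build a co-authorship network from a program's publication records to see how collaboration evolves. The sketch below uses only hypothetical data and illustrative function names; it is a minimal example of the general technique, not a description of any specific NIAID analysis.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical publication records; each lists its authors.
publications = [
    {"title": "Paper A", "authors": ["Lee", "Patel", "Garcia"]},
    {"title": "Paper B", "authors": ["Patel", "Garcia"]},
    {"title": "Paper C", "authors": ["Lee", "Chen"]},
]

def build_coauthor_graph(pubs):
    """Count how often each pair of authors publishes together."""
    edges = defaultdict(int)
    for pub in pubs:
        for a, b in combinations(sorted(set(pub["authors"])), 2):
            edges[(a, b)] += 1
    return dict(edges)

graph = build_coauthor_graph(publications)

# Each author's set of distinct collaborators (network degree).
collaborators = defaultdict(set)
for a, b in graph:
    collaborators[a].add(b)
    collaborators[b].add(a)
```

Metrics derived from such a graph, such as the number of distinct collaborators per investigator or the number of cross-institution ties, can then be tracked over time to gauge whether a program is fostering new collaborations.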
Common Evaluation Metrics
- Bibliometrics (e.g., publications, citations, patents, relative citation ratio)
- Other outcome measures (e.g., changes to clinical guidelines, cost per publication)
- Process and efficiency metrics
- Evolution of collaborative networks
- Achievement of milestones
- Program influence and impact
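To make the bibliometric items above concrete, the sketch below computes a citation rate per article and normalizes it against an assumed field-level baseline. This is a simplified stand-in for field-normalized measures like the relative citation ratio, not the actual NIH iCite calculation; the portfolio data and baseline value are invented for illustration.

```python
# Hypothetical program portfolio: citation counts per publication.
portfolio = [
    {"pmid": "P1", "citations": 40, "years_since_pub": 4},
    {"pmid": "P2", "citations": 10, "years_since_pub": 2},
    {"pmid": "P3", "citations": 0,  "years_since_pub": 1},
]

# Assumed field-level citation rate (citations/year); a real analysis
# would derive this from a comparison set, as iCite does for the RCR.
FIELD_CITES_PER_YEAR = 5.0

def citation_rate(pub):
    """Citations per year since publication."""
    return pub["citations"] / max(pub["years_since_pub"], 1)

def normalized_ratio(pub):
    """Article citation rate relative to the assumed field rate."""
    return citation_rate(pub) / FIELD_CITES_PER_YEAR

ratios = {p["pmid"]: round(normalized_ratio(p), 2) for p in portfolio}
```

A ratio above 1.0 would indicate an article cited faster than the assumed field baseline; aggregating such ratios across a portfolio is one common way to compare program-level scientific impact.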
Examples of NIAID Evaluations
The following examples showcase the variety of evaluations that NIAID supports to improve programs and processes across the Institute.
DAIDS: Assessment of the Centers for HIV/AIDS Vaccine Immunology and Immunogen Discovery (CHAVI-ID)
One method that NIAID recently deployed to support scientific research is the large-scale, high-resource, “big science” approach. This method allows the government to fund and manage research on large, complex issues that require input from several disciplines. NIAID uses the approach to support the Centers for HIV/AIDS Vaccine Immunology and Immunogen Discovery (CHAVI-ID). NIAID supported a comprehensive program evaluation to assess CHAVI-ID-related outcomes and identify areas for program improvement. This process and outcome evaluation also aimed to understand the value of CHAVI-ID’s “big science” approach within the larger field of HIV vaccine research.
DMID: Antibacterial Resistance Leadership Group (ARLG) Assessment
NIAID launched a six-year grant in 2013 to develop, design, implement, and manage a clinical research agenda to increase knowledge of antibacterial resistance. This program—the Leadership Group for a Clinical Research Network on Antibacterial Resistance (ARLG)—represents a substantial NIAID investment and brings together researchers from around the world to tackle the increase in antibacterial resistance. NIAID sought to examine this large-scale program to assess outcomes to date and to identify ways to improve the complex program structure and implementation. The mixed-methods approach to this study included a complex portfolio analysis, in-depth interviews, and a survey of the research community.
DEA: Comparative Study of Unsolicited P01 and R01 Grant Mechanisms
The NIAID P01 Program Project Grant mechanism provides funding for at least two inter-related research projects, which are required to address a central research problem in a conceptually inter-related manner, as well as administrative and scientific core services. The individual P01 research projects are each roughly equivalent in scale and scope to an NIH R01 grant, but the aim is that, together, the P01 projects produce greater synergy and address more complex scientific questions than individual R01s. The central question of this study was: given the higher funding costs of P01 grants, what additional value does the P01 provide in comparison with R01s in terms of research synergy, productivity, and scientific impact?
DAIT: The Immunology Database and Analysis Portal (ImmPort) System Evaluation
As the volume of data collected by researchers continues to expand rapidly, the NIH seeks to make data transparent, shared, and used in meaningful ways. The Immunology Database and Analysis Portal (ImmPort) System is one way that NIAID provides a unique and meaningful platform and analytical tools for the scientific community to share and explore clinical and basic research data. To determine how this data repository is being used in the scientific community and how to improve the system’s efficiency and effectiveness, NIAID supported a process evaluation of ImmPort.
DAIDS: Process and Outcome Evaluation of the NIAID DAIDS R21/R33 Mechanism to Fund High-Risk, High-Reward, Product-Oriented Research
NIAID supports high-risk, high-reward research through a biphasic R21/R33 grant mechanism. This mechanism begins with an exploratory R21 grant, which may lead to a larger R33 grant. Through the provision of these grants, NIAID aims to support high-risk, high-reward product-oriented research within the Division of AIDS (DAIDS). In particular, two DAIDS programs utilize this mechanism: the AIDS Vaccine Research (AVR) Program and the Microbicide Innovation Program (MIP). To determine if this biphasic grant mechanism is achieving its goal, NIAID supported a process and outcome evaluation of these R21/R33 research programs.