This workshop covers the Implementation Outcomes Framework and the importance of internal and external validity in determining causality and generalizability. It examines applications of evaluative study designs, focusing on how audit and feedback strategies can be improved through Continuous Quality Improvement and on when to use randomized or non-randomized designs, such as the interrupted time series, to compare the effectiveness of standard and new healthcare practices. The workshop also addresses how to develop implementation science evaluation questions appropriate to different stages of implementation and varying contexts, and how to monitor implementation to identify areas where adaptation is needed.
Learning objectives
- Select and apply an appropriate framework to develop implementation science evaluation questions suited to distinct stages of implementation and varying contexts
- Select appropriate implementation, process, and health impact outcomes to address an evaluation question of interest
- Identify the strengths and trade-offs that must be weighed in designing an implementation research evaluation to balance validity and feasibility