Abstract
Background: Community organizations are essential in addressing health and social needs for populations often underserved by traditional healthcare systems. Recognizing the importance of program evaluation, these organizations increasingly seek to improve their services, demonstrate impact, and secure funding. However, they frequently face barriers such as limited resources, time constraints, and a lack of in-house expertise. Evaluation training programs have been developed to bridge this gap and support community organizations in building evaluation capacity. Despite the availability of such programs, few undergo rigorous evaluation themselves, resulting in limited evidence of their effectiveness in fostering sustainable evaluation capacity within community settings. Addressing this need, this study presents key practices and findings from LaboEval, a five-year initiative launched in 2019 to strengthen evaluation capacity in community organizations across Quebec, Canada.
Approach: LaboEval was developed through an iterative, participatory curriculum design process led by a steering committee of community representatives, students, and evaluation experts. The program offered seven online modules, each accompanied by follow-up coaching sessions grounded in a practical participatory evaluation framework. Modules covered essential topics such as stakeholder engagement, theory of change, data collection methods, knowledge transfer, and strategies for embedding evaluation within organizational practices. Implemented across 16 diverse organizations within the health and social services sectors, LaboEval emphasized flexibility to accommodate the schedules and needs of participants. Mayne's six-step contribution analysis was employed in the fifth year to assess LaboEval's impact on both individual and organizational evaluation capacities. A mixed-methods approach was applied, combining secondary data analysis, semi-structured interviews, document reviews, surveys, and literature reviews to ensure thorough data collection and analysis.
Results: Findings indicate that LaboEval significantly enhanced individual and organizational evaluation skills and practices among participating organizations. Participants reported increased confidence in conducting evaluations and improved integration of evaluation into their organizations’ processes. These improvements included more consistent use of logic models, incorporation of evaluation activities into routine operations, and adoption of appropriate data collection and analysis methods. The program's participatory design, which incorporated direct input from community stakeholders, contributed to its relevance, practicality, and success. Additionally, the modular structure and flexible implementation enhanced participant engagement and retention by accommodating organizational constraints. However, challenges remained, particularly regarding the dissemination of evaluation findings by the organizations, which limited the broader impact of their evaluations on policy discussions.
Implications: LaboEval offers important insights for both researchers and practitioners engaged in capacity-building initiatives. Its participatory curriculum design highlights the importance of involving organizations in the development of training programs, enabling the creation of contextually relevant training that aligns with their specific needs and objectives. Contribution analysis provided a structured evaluation approach, yielding concrete, evidence-based insights into the effectiveness of training strategies and the pathways through which they build evaluation capacity. These insights illuminated specific outcomes, such as increased confidence and skills in evaluation practices, and showed how an understanding of these pathways can refine future capacity-building initiatives and enhance their practical impact. The next phase of LaboEval will focus on expanding the program to additional organizations and strengthening mechanisms that support sustained evaluation practices.
