Table 1
Summary of study characteristics
| Characteristics | | n (%) |
|---|---|---|
| Study design | Quantitative | 12 (44.4%) |
| | Qualitative | 10 (37.0%) |
| | Mixed methods | 5 (18.5%) |
| Implementation location | The Netherlands | 10 (37.0%) |
| | Canada | 6 (22.2%) |
| | United States | 3 (11.1%) |
| | Australia | 1 (3.7%) |
| | United Kingdom | 1 (3.7%) |
| | Iran | 1 (3.7%) |
| | New Zealand | 1 (3.7%) |
| | Multiple locations ᵃ | 4 (14.8%) |
| Setting | Clinical | 21 (77.8%) |
| | Pre-clinical | 3 (11.1%) |
| | Both | 3 (11.1%) |
| Data sources ᵇ | Learner perceptions | 13 (36.1%) |
| | Teacher perceptions | 11 (30.5%) |
| | Assessment data | 12 (33.3%) |
| Kirkpatrick levels | Level 1 | 17 (62.9%) |
| | Level 2 | 3 (11.1%) |
| | Level 1 and Level 2 | 1 (3.7%) |
| | Level 3/Level 4 | 0 (0.0%) |
| | Not applicable | 6 (22.2%) |
ᵃ ‘Multiple locations’ refers only to combinations of the countries listed above

ᵇ Studies could draw on multiple data sources; percentages are calculated from the total of 36 data sources (100%)
Table 2
Inferred strategies from the literature to improve the value and use of programmatic assessment
| Inferred strategy and exemplifying references |
|---|
| Build a shared understanding of programmatic assessment by clearly introducing its nature and purpose, providing explanatory guidelines for individual assessments and how they are used in the system as a whole, and involving teachers and learners in the whole chain of the system [16, 19, 21, 29, 30, 32, 38, 40] |
| Provide teachers and learners with feedback on the quality of the assessment information they provide and how their input contributes to the decision-making process [17, 21, 24, 40] |
| Normalize daily feedback, observation, and follow-up, as well as reflection and continuous improvement [19, 21, 22, 28, 34, 38] |
| Be cautious with mandatory requirements, excessive bureaucracy, and the use of summative signals in the design of programmatic assessment [17, 20–22, 24, 28, 33–35, 40], but keep the approach flexible, fit for purpose, and negotiable, specifically in relation to the information needs of different stakeholders and the realities of the educational context [16, 17, 20, 21, 24, 28, 33, 34, 41] |
| Promote learner agency and the development of lifelong learning capabilities by increasing learners’ ownership of the assessment process [20, 28, 30, 34, 41] |
| Address learners’ and teachers’ assessment beliefs and the implications of a learner-led assessment approach [21, 28, 34, 35, 39], and provide mentorship for novices within programmatic assessment [16, 17, 20–22, 28–30, 33, 34, 38, 40]; more experienced stakeholders can help with the transformation |
| Invest in prolonged and trustworthy teacher–learner relationships to create a safe and supportive environment [16, 17, 21, 33, 35, 39–41]. Frameworks such as ‘The Educational Alliance’ model [44] and the R2C2 model [45] might be helpful in this respect |
| Organize group discussions and ensure shared decision-making; these not only ease teachers’ individual assessment responsibilities but can also improve the assessment outcome [19, 24, 30, 32, 34, 35, 40] |
| Invest in credibility and trustworthiness as quality concepts for stakeholders, the process, and the system [21, 24, 34, 40]. Norcini et al. [46] offer a quality framework for assessment systems |
| Ensure a supportive infrastructure (i.e. available time and resources, effective technology, and sufficient faculty development), while taking the realities of the educational context into account [17, 21, 28, 34, 38, 40] |
| Offer leadership in times of change. Cultural change takes time and, although issues should be addressed quickly, programmatic assessment will not be implemented perfectly from the start [38] |
