
Quality of cost evaluations of physician continuous professional development: Systematic review of reporting and methods

Open Access | Mar 2022

Figures & Tables

Table 1

Methodological quality appraised using the Medical Education Research Study Quality Instrument (MERSQI): operational considerations for cost evaluations and prevalence. For each MERSQI item, the operational adjustment is listed first, followed by each level (score in parentheses) and its prevalence, N (%) of N = 62 studies.

Study design
Operational adjustment: Added option for economic modeling studies (score 1.5)
  1-group post-only (1): 6 (10%)
  1-group pre-post, or modeling (1.5): 20 (32%)
  2-group non-randomized (2): 16 (26%)
  2-group randomized (3): 20 (32%)

Sampling: No. of institutions studied
Operational adjustment: No change
  1 (0.5): 54 (87%)
  2 (1): 1 (2%)
  >2 (1.5): 7 (11%)

Sampling: Response rate
Operational adjustment, for cost data: Data derived from large record sets unlikely to reflect bias (e.g., institutional electronic health record or regional claims database) count as high (score 1.5)
  <50% or not specified (0.5): 24 (39%)
  50–74% (1): 7 (11%)
  ≥75% or large record set (1.5): 31 (50%)

Type of data (data source)
Operational adjustment, for cost data: Details of resource quantitation (both data source and quantity [number of units, not just total cost]) count as high (score 3); cost alone counts as low (score 1)
  Self-reported data, or cost without resource quantitation (1): 8 (13%)
  Objective measurement, or cost with data source and quantity (3): 54 (87%)

Validation of evaluation instrument: Content
Operational adjustment, for cost data: “The degree to which the cost estimation encompasses all aspects of the true cost, encompassing processes to both identify and measure cost” [15]. Evidence could include use of a formal framework (e.g., the Ingredients Method) or the involvement of experts in planning, empiric identification and selection of relevant resources (e.g., time-motion studies or process mapping), and substantiation that a robust data source was used to select, quantitate, or price resources (e.g., detailed description of a computer database)
  Reported (1): 8 (13%)

Validation of evaluation instrument: Internal structure
Operational adjustment, for cost data: “The degree to which the cost estimate is reproducible if the same method is followed” [15]. Evidence could include replicability of the valuation or analysis (e.g., robust examination of the uncertainty of input parameter estimates [sensitivity analysis], independent valuation of costs by two investigators [inter-rater reliability], or comparing cost estimates derived at two different time points [temporal stability])
  Reported (1): 9 (15%)

Validation of evaluation instrument: Relations with other variables
Operational adjustment, for cost data: “The degree to which the cost estimate relates to cost estimates formed using alternative approaches” [15]. Evidence could include examining predicted associations among results obtained using alternative approaches to economic modeling (e.g., sensitivity analysis comparing different base assumptions, valuation methods, statistical models, or economic theories)
  Reported (1): 1 (2%)

Data analysis: Appropriateness
Operational adjustment, for cost data: The following count as “appropriate” (score 1): cost-effectiveness ratio, net benefit, or other similar analysis of cost data (see the formulas after this table)
  Inappropriate for study design (0): 37 (60%)
  Appropriate (1): 25 (40%)

Data analysis: Complexity
Operational adjustment, for cost data: The following count as “beyond descriptive” (score 2): cost-effectiveness ratio, net benefit, visual display of cost-effectiveness
  Descriptive analysis only (1): 37 (60%)
  Beyond descriptive analysis (2): 25 (40%)

Outcomes
Operational adjustment, for cost outcomes: As per Foo, we distinguished education costs in a “test setting” or a “real setting,” namely: “Test settings are those in which the context does not match how the intervention would be utilized in actual practice (e.g., a hypothetical program that was not actually implemented). Real settings are where the intervention is evaluated in a context similar to its anticipated utilization in practice (e.g., an evaluation of a program that is taught to real students)” [15]. However, we assigned points differently than Foo: score 1.5 for cost of education in a test setting, score 2 for cost of education in a real setting, and score 3 for health care costs. Outcomes estimated from previously published research (including health care costs and non-cost outcomes) also score 1.5
  Knowledge, skills, or education costs in a “test” or hypothetical training setting, or estimated from literature (1.5): 1 (2%)
  Behaviors in practice or education costs in a “real” training setting (2): 25 (40%)
  Patient effects, including health care costs (3): 36 (58%)

For each item in a given study, the design feature (study design, outcome, evaluation instrument, etc.) that supported the highest level of coding was selected. For example, for a study reporting both cost and effectiveness (non-cost) outcomes, the outcome corresponding to the highest-scoring level was selected for coding; as a result, in some cases the design features of the cost evaluation (i.e., the features coded in this review) score lower than those reported in this table
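For readers unfamiliar with the analyses that Table 1 counts as “appropriate” and “beyond descriptive,” the two named quantities take the following standard forms. This is a minimal illustration, not taken from the review itself; subscripts 1 and 0 denote intervention and comparator, and λ is a willingness-to-pay threshold the analyst must assume:

```latex
% Incremental cost-effectiveness ratio (intervention 1 vs. comparator 0)
\mathrm{ICER} = \frac{C_1 - C_0}{E_1 - E_0} = \frac{\Delta C}{\Delta E}

% Net monetary benefit at an assumed willingness-to-pay threshold \lambda
\mathrm{NMB} = \lambda\,\Delta E - \Delta C
```

A positive NMB at a given λ (equivalently, an ICER below λ) favors the intervention.

The footnote’s “highest level of coding” rule can likewise be made concrete with a short sketch. The following is a hypothetical illustration of the modified tally, not the authors’ actual coding instrument; the domain and level labels are paraphrased from Table 1:

```python
# Hypothetical sketch of the modified MERSQI tally implied by Table 1:
# within each domain, the feature supporting the highest-scoring level
# is the one that counts (per the table footnote). Labels paraphrased.

MODIFIED_MERSQI = {
    "study_design": {
        "1-group post-only": 1.0,
        "1-group pre-post or modeling": 1.5,
        "2-group non-randomized": 2.0,
        "2-group randomized": 3.0,
    },
    "institutions": {"1": 0.5, "2": 1.0, ">2": 1.5},
    "response_rate": {
        "<50% or not specified": 0.5,
        "50-74%": 1.0,
        ">=75% or large record set": 1.5,
    },
    "data_type": {
        "self-report or cost only": 1.0,
        "objective, or cost with source and quantity": 3.0,
    },
    "validity_content": {"reported": 1.0},
    "validity_internal_structure": {"reported": 1.0},
    "validity_relations": {"reported": 1.0},
    "analysis_appropriateness": {"inappropriate": 0.0, "appropriate": 1.0},
    "analysis_complexity": {"descriptive only": 1.0, "beyond descriptive": 2.0},
    "outcomes": {
        "test setting or estimated from literature": 1.5,
        "real setting": 2.0,
        "patient effects incl. health care costs": 3.0,
    },
}

def score_study(features: dict[str, list[str]]) -> float:
    """Sum domain scores, keeping only the highest-scoring feature
    reported within each domain; unreported domains contribute 0."""
    return sum(
        max(MODIFIED_MERSQI[domain][level] for level in levels)
        for domain, levels in features.items()
    )

# Example: a single-site randomized trial reporting both real-setting
# education costs and patient effects (the latter scores higher, so it
# is the one coded): 3 + 0.5 + 1.5 + 3 + 1 + 2 + 3 = 14.0
print(score_study({
    "study_design": ["2-group randomized"],
    "institutions": ["1"],
    "response_rate": [">=75% or large record set"],
    "data_type": ["objective, or cost with source and quantity"],
    "analysis_appropriateness": ["appropriate"],
    "analysis_complexity": ["beyond descriptive"],
    "outcomes": ["real setting", "patient effects incl. health care costs"],
}))
```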

Fig. 1

Reporting quality as per CHEERS guideline criteria. N = 62 except as indicated. Numbers in [brackets] indicate item number in CHEERS checklist [10]. Details on abstract reporting are provided in Fig. 2. Operational considerations used in coding are provided in Tab. S1 in ESM

Fig. 2

Reporting quality of abstract. N = 56 studies with abstract, except as indicated

Language: English
Submitted on: Dec 3, 2021 | Accepted on: Feb 3, 2022 | Published on: Mar 31, 2022

© 2022 David A. Cook, John M. Wilkinson, Jonathan Foo, published by Bohn Stafleu van Loghum
This work is licensed under the Creative Commons Attribution 4.0 License.