Detecting rater bias using a person-fit statistic: a Monte Carlo simulation study

Open Access | Jan 2018

Abstract

Introduction With the Standards voicing concern for the appropriateness of response processes, we need to explore strategies that allow us to identify inappropriate rater response processes. Although certain statistics can help detect rater bias, their use is complicated either by a lack of data on their actual power to detect rater bias or by the difficulty of applying them in the context of health professions education. This exploratory study aimed to establish whether the lz person-fit statistic is worth pursuing as a means of detecting rater bias.

Methods We conducted a Monte Carlo simulation study to investigate the power of a specific detection statistic: the standardized log-likelihood lz person-fit statistic (PFS). Our primary outcome was the lz detection rate for biased raters, namely raters whom we manipulated into being either stringent (giving lower scores) or lenient (giving higher scores), while controlling for the number of biased raters in a sample (6 levels) and the rate of bias per rater (6 levels).
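The abstract does not specify the underlying measurement model, so the following is a minimal sketch rather than the authors' procedure. It assumes a dichotomous Rasch model with a known rater location (theta), hypothetical values for n_items, n_reps, and bias_rate, and uses the standard Drasgow, Levine, and Williams (1985) formulation of lz to flag a simulated stringent rater:

```python
import numpy as np

rng = np.random.default_rng(42)

def lz_statistic(x, theta, b):
    """Standardized log-likelihood person-fit statistic lz
    (Drasgow, Levine & Williams, 1985) for dichotomous responses
    under a Rasch model; large negative values signal misfit."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))               # model-implied probabilities
    l0 = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p)) # observed log-likelihood
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))  # its expectation
    v = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)   # its variance
    return (l0 - e) / np.sqrt(v)

# Illustrative version of the design: a "stringent" rater is simulated
# by deflecting a fraction of responses toward 0 (lower scores).
n_items, n_reps, bias_rate = 40, 1000, 0.60  # hypothetical values, not the study's
b = rng.normal(0.0, 1.0, n_items)            # item difficulties
theta = 0.0                                  # rater location assumed known

flags = 0
for _ in range(n_reps):
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    x = rng.binomial(1, p)                   # model-consistent responses
    biased = rng.random(n_items) < bias_rate
    x[biased] = 0                            # stringency: force low scores
    if lz_statistic(x, theta, b) < -1.645:   # one-sided z test at alpha = .05
        flags += 1

print(f"Detection rate: {flags / n_reps:.2f}")
```

Under this formulation, lz is approximately standard normal when the model holds, so markedly negative values (e.g., below -1.645 for a one-sided test at alpha = .05) indicate misfit; forcing a fraction of scores downward depresses the observed log-likelihood and drives lz negative, which is what makes stringent raters detectable.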

Results Overall, stringent raters (M = 0.84, SD = 0.23) were easier to detect than lenient raters (M = 0.31, SD = 0.28). Raters with higher rates of bias were easier to detect than those with lower rates (60% bias: M = 0.62, SD = 0.37; 10% bias: M = 0.43, SD = 0.36).

Language: English
Published on: Jan 2, 2018

© 2018 André-Sébastien Aubin, Christina St-Onge, Jean-Sébastien Renaud, published by Bohn Stafleu van Loghum
This work is licensed under the Creative Commons Attribution 4.0 License.