Abstract
Consistent with Joachim Krueger’s information sampling approach to inductive reasoning (Krueger, 2008), the present review delineates a hierarchy of theoretical sampling assumptions. Starting from Bernoulli’s normative law of large numbers at the highest level, the initial part addresses a widely shared descriptive belief: that increasingly large samples generally enable increasingly accurate inferences about true population properties. The next section, however, explains with reference to two phenomena, hot-stove and self-truncation effects, that small samples can be more informative than large samples. Diagnosticity must therefore not be reduced to sample size. In the remainder of the review, the dependency of sampled evidence is explained in terms of Bayesian calculus, which is hard to monitor and control at a metacognitive level, and a distinction is introduced between Brunswikian and Thurstonian sampling. In line with Joachim Krueger’s seminal writings (Krueger, DiDonato, & Freestone, 2012; Robbins & Krueger, 2005), the notion of Thurstonian sampling offers a theoretical account of the major role of projection in inductive information processing.
