The (Sometimes Misguided) Belief in the Law of Large Numbers
By: Klaus Fiedler  
Open Access
Jun 2024

Joachim Krueger’s inductive-reasoning approach (Krueger, 2008; Krueger, DiDonato & Freestone, 2012; Robbins & Krueger, 2005) relies heavily on information sampling. Sampling theories posit that judgments and decisions are mediated by the process of information search in the social and physical environment. Most distal properties of the environment, in which social individuals are ultimately interested – such as risk, danger, honesty, utility – are not amenable to direct observation but have to be inferred or construed from samples of more proximal cues. Teachers cannot directly perceive, but have to infer, their students’ abilities and motivations from samples of students’ correct or incorrect answers and from their collaboration activities. Consumers have no sense organs for the quality of goods and brands; they have to infer their product evaluations from repeated experience or from other customers’ comments and advice. Social individuals cannot literally perceive their communication partners’ honesty; they have to infer this essential antecedent of all social interaction from samples of proximal cues of questionable validity (Hartwig & Bond, 2011; Vrij, Granhag, Mann & Leal, 2011). Virtually all social cognition, indeed, depends essentially on such constructive inferences.

Sample size

Consider, for illustration, a typical decision task of the one-armed or two-armed bandit type. For instance, when planning a holiday trip, a consumer must make a choice between two (or more) hotels in the holiday resort. Granting no previous experience with the candidate hotels, consumers rely on other customers’ evaluative ratings (say, on a scale from 1 to 5). Assuming, for instance, ratings of 4 4 5 3 3 5 for Hotel A and 2 3 3 3 5 2 for Hotel B, the consumer may develop a preference for Hotel A with the higher average rating of MA = 4, whereas the average rating of B is only MB = 3. Yet, given the same average ratings in two samples of double size (4 4 4 4 5 5 3 3 3 3 5 5 vs. 2 2 3 3 3 3 3 3 5 5 2 2), the preference of A over B is presumably stronger. According to Bernoulli’s (1713) law of large numbers and Laplace’s rule of succession (Laplace, 1774/1986), sample means approximate the true population mean as sample size increases. That is, a high or low average in a large sample more likely indicates the true population trend than the same average in a small sample. By and large, then, increasing sample size serves to increase the reliability of, and the confidence in, sample-based estimates.
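
This intuition is easy to verify numerically. The following sketch (my illustration, not from the article) computes the mean and the standard error for the hotel samples above; the doubled samples have the same means but smaller standard errors, so the same difference between A and B is more trustworthy.

```python
from statistics import mean, stdev

def summary(ratings):
    """Return the mean and the standard error of a sample of ratings."""
    n = len(ratings)
    return mean(ratings), stdev(ratings) / n ** 0.5

hotel_a = [4, 4, 5, 3, 3, 5]   # MA = 4
hotel_b = [2, 3, 3, 3, 5, 2]   # MB = 3

for label, sample in [("A, n = 6", hotel_a), ("B, n = 6", hotel_b),
                      ("A, n = 12", hotel_a * 2), ("B, n = 12", hotel_b * 2)]:
    m, se = summary(sample)
    print(f"Hotel {label}: mean = {m:.2f}, SE = {se:.3f}")
```

The doubled samples yield a standard error smaller by roughly a factor of 1/√2, which is exactly why the same mean difference inspires more confidence.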

However, note that the remainder of this article is not concerned with a normative discussion of the mathematics of the law of large numbers, but with the generalized subjective belief that samples of increasing size are increasingly reliable or valid. Just as a larger number of test items increases the test’s reliability (Li, Rosenthal & Rubin, 1996) or increasing the number of judges in a collective judgment enhances the wisdom-of-crowd effect (Surowiecki, 2005), people generally assume that large samples are more reliable and more informative than small samples drawn (at random) from the same population. As a consequence, the same high rate of positive (norm-abiding) behaviors is more likely recognized in the large sample of a majority than in the small sample of a minority, thus producing an illusory correlation in favor of majorities (Costello & Watts, 2019; Fiedler, 2000). The same predominant trend, which is typically desirable and norm-abiding rather than norm-deviant, is more apparent from a majority (large sample) than from a minority (small sample).

By the same token, assuming that the true confirmation rates of two competing hypotheses are constantly high, the tendency to gather more observations about one’s preferred hypothesis H1, compared to H0, will make an equally high confirmation rate more salient for H1 than for H0, yielding a confirmation bias (Klayman & Ha, 1987; Snyder & Swann, 1978). Or, granting that self-referent or ingroup-referent samples are generally larger than other-referent and outgroup-referent samples (Moore & Healy, 2008), most people believe that frequent benevolent behaviors more likely apply to the self or to the ingroup than to others or to outgroups. A reversal of this self-serving or ingroup-serving bias can however be expected for exceptional behaviors with a low base-rate of occurrence.

Doubtlessly, sample size is a major determinant of subjective inductive inferences, in line with a basic mathematical (Bayesian) principle known as the law of large numbers (Bernoulli, 1713; de Finetti, 1972). Judgments or decisions of majorities that diverge from minorities should not be quickly ascribed to a cognitive bias or violation of rational thinking; sensitivity to sample size rather constitutes a central module of adaptive cognition. The Bayesian rule of succession (Savage, 1954) predicts that the population parameter is less extreme than the mean or percentile of a finite sample. For instance, if the proportion of wins in a sample of dichotomous outcomes is Pwin = .80, the underlying probability pwin of wins in the population is smaller (pwin < Pwin). If the observed proportion of wins is low, say Pwin = .25, the true population probability is also less extreme, in this case higher (pwin > Pwin). According to the rule of succession (de Finetti, 1972), the best estimate of the probability of winning given k wins in a sample of size n is pwin = (k + 1)/(n + 2). Thus, given an observed sample proportion of Pwin = .8, the best estimate of the underlying population probability is pwin = (8 + 1)/(10 + 2) = .75 for a sample of size 10 but pwin = (80 + 1)/(100 + 2) ≈ .79 for a sample of size 100.
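
The rule of succession is a one-line computation; this minimal sketch (my illustration, not part of the article) reproduces the estimates above.

```python
def rule_of_succession(k, n):
    """Laplace's rule of succession: estimate of the population probability
    of winning, given k observed wins in a sample of size n."""
    return (k + 1) / (n + 2)

print(rule_of_succession(8, 10))    # Pwin = .8 with n = 10  -> 0.75
print(rule_of_succession(80, 100))  # Pwin = .8 with n = 100 -> ~0.794
print(rule_of_succession(25, 100))  # a low Pwin is shifted upward instead
```

As n grows, the estimate converges to the observed proportion, reflecting the increasing trust placed in larger samples.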

Indeed, statistical knowledge about sampling distributions imposes strong and predictable constraints on inferences from observed samples. The principle of insufficient reason (Savage, 1954) – according to which all possible values of p are equally likely on a priori grounds – implies that more possible p are smaller than a large P (such as P = .8), whereas more possible p are larger than a small P (such as P = .25). It therefore seems logically justified to believe in trends observed in larger samples more than in trends observed in smaller samples.

Two seeming reversals of the law of large numbers

However, this intuitive conclusion may be premature and misleading. Large samples are only diagnostic and trustworthy when sample size is an independent variable, such that small and large subsamples are comparable in all other respects. If this is not the case, because sample size depends on the participants’ own truncation decision, there are good reasons to expect an opposite role played by sample size. Small samples can be more informative (i.e., lead to more accurate inferences) and have a stronger impact on resulting judgments and decisions than larger samples. Let us discuss two striking variants of this counterintuitive case.

Hot-stove effects

The first reason why small samples of rare events can dominate larger samples has been called a hot-stove effect (Denrell & Le Mens, 2023; Denrell & March, 2001), which results from the hedonic avoidance of a highly unpleasant stimulus. Just like a young child who avoids touching the stove again after burning her fingers, we stop visiting a restaurant after we have felt sick there, even when a virus, rather than the food, may have been the actual cause of the sickness. There are plenty of other restaurants in town for future sampling. Thus, a consequential decision may be determined by very small samples of unpleasant experience, making it impossible to correct for a negative first impression. Conversely, there are many opportunities to correct for premature positive first impressions. Because consumers will continue to sample again and again from seemingly pleasant restaurants, there will be many opportunities to recognize unjustified first impressions. Hot-stove effects can thus be conceived as massive biases induced by hedonically unpleasant stimulus sources, which cannot be corrected in future interaction because negative sources will be radically avoided.

Remarkably, a hot-stove effect is fundamentally different from the canonical sample-size effect of the Bernoulli type, in that the resulting preference is not dominated by the most frequent sampling outcome. Hot stove effects rather reflect the extraordinary influence of one or few distinct hedonic experiences. A negative sampling effect, induced through a hot-stove experience, causes a strong and persistent negativity bias against the painful hot-stove, which prevents the agent from getting rid of this negative attitude (Denrell, 2005; Denrell & Le Mens, 2007; 2012; 2023; Le Mens & Denrell, 2011; Fazio, Eiser & Shook, 2004).
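
A minimal simulation conveys this logic (an illustrative sketch under simplified assumptions, not Denrell and March’s original model): an agent keeps sampling an option only as long as the running average of experiences looks non-negative. Negative first impressions freeze because the source is avoided, whereas positive first impressions keep being tested and corrected, so final beliefs are biased downward although the true mean is zero.

```python
import random

def final_belief(true_mean=0.0, periods=50, seed=None):
    """Sample an option while its running average is non-negative;
    a negative impression triggers avoidance and is never corrected."""
    rng = random.Random(seed)
    experiences = [rng.gauss(true_mean, 1.0)]
    for _ in range(periods):
        if sum(experiences) / len(experiences) < 0:
            break                      # hot stove: avoid, belief freezes
        experiences.append(rng.gauss(true_mean, 1.0))
    return sum(experiences) / len(experiences)

beliefs = [final_belief(seed=i) for i in range(5000)]
print(f"true mean: 0.00, mean final belief: {sum(beliefs) / len(beliefs):.2f}")
```

Despite an unbiased environment, the average final belief is clearly negative, mirroring the persistent negativity bias described above.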

Self-truncation effects

Different from such a hot-stove effect, which consists of a persistent and hard-to-unlearn bias against an initially sampled unpleasant stimulus source, a self-truncation effect binds the participants’ preference to the primacy effect of an initial sample. When it is up to participants to truncate sampling at the moment they believe to have gathered sufficient information, sample size is no longer an independent, experimenter-determined variable. As in a hot-stove effect, sample size depends on the individual’s evaluation of initial stimuli or subsamples. However, while the initial hot-stove sample is decidedly negative, the primacy effect in self-truncation can be either positive or negative; it only has to be informative. A consumer truncates sampling when the initial information about one option is sufficient to make a choice. A teacher truncates sampling answers to interview questions and turns to other students when he or she feels the evidence about preceding students is sufficient for grading. As a consequence, the resulting judgments are most pronounced (i.e., most extreme) when a strong primacy effect leads to early truncation. The more extreme and clear-cut the primacy effect in a self-truncation task – regardless of whether it is extremely negative or positive – the earlier a sample will be truncated, and the more extreme and conflict-free will be the final judgment or decision.

In any case, because early truncation and hot-stove truncations will be most likely triggered by early outliers, the Bernoulli law does not always predict a positive correlation between sample size and judgment strength (i.e., increasingly stronger judgments informed by larger than smaller samples). Due to truncation or hot-stove effects, the Bernoulli law is compatible with negative correlations between sample size and judgment strength. If the first few observations in a sample (e.g., a consumer’s first few observations about a product brand) happen to be exceptionally positive, the individual may stop sampling very early and choose the target object at a high level of confidence and without the slightest conflict. Such a positive primacy effect can result in a negative correlation between sample size and extremity of accurate judgments. Moreover, as Prager, Krueger, and Fiedler (2018) and Prager and Fiedler (2021) have shown empirically and through Monte-Carlo simulations, self-truncated decisions can be more accurate and confident when sample size is small rather than large.
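
This negative correlation can be reproduced with a short Monte-Carlo sketch (my illustration under assumed parameters – a √n evidence threshold and unit noise – not the exact simulations of Prager et al.): samplers stop as soon as accumulated evidence is clear-cut, so early stops coincide with extreme judgments.

```python
import random

def truncated_sample(valence, threshold=1.0, max_n=30, rng=random):
    """Draw noisy observations around the true valence until the absolute
    evidence exceeds a subjective threshold (self-truncation) or max_n.
    Returns (sample size, extremity of the final judgment)."""
    total = 0.0
    for n in range(1, max_n + 1):
        total += rng.gauss(valence, 1.0)
        if abs(total) >= threshold * n ** 0.5:   # clear-cut evidence: stop
            break
    return n, abs(total / n)

def corr(x, y):
    """Pearson correlation of two equally long sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

rng = random.Random(1)
sizes, extremities = zip(*[truncated_sample(rng.choice([-0.5, 0.5]), rng=rng)
                           for _ in range(4000)])
print(f"corr(sample size, extremity) = {corr(sizes, extremities):.2f}")
```

The correlation comes out clearly negative: the smaller the self-truncated sample, the more extreme the resulting judgment.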

Understanding the pros and cons of large and small samples is ultimately a Bayesian problem. Assuming flat priors (i.e., assuming that each hypothesis is equally likely on a-priori grounds) and perfect stochastic independence (i.e., that each data point is of equal reliability), the diagnostic value of a sample increases with sample size. This may not be the case when prior odds are not flat at all but clearly higher for truncated than for extended samples.

Bayesian Perspective on Sample Size

The underlying mathematics are not so hard to understand. Applying Bayesian odds notation to sample-based inferences yields

Ωposterior = Ωprior × LR

Thus, the posterior odds Ωposterior = p(H+|E)/p(Hnot+|E) favoring the focal hypotheses H+ over the complementary hypothesis Hnot+, in the light of the sampled evidence E, is a multiplicative function of the prior odds, Ωprior = p(H+)/p(Hnot+) times the likelihood ratio LR = p(E|H+)/p(E|Hnot+), that is, the extent to which E is more likely given the focal hypothesis H+ than given Hnot+.

For a numerical illustration of how Ωposterior depends on sample size, assuming equal priors (i.e., p(H+) = p(Hnot+)) and assuming a sample of n stochastically independent observations (i.e., n as an independent variable), consider a consumer choice informed by n former consumers’ votes. A positive former consumer vote (E = “+”) implies LR > 1, saying that E is more likely given H+ than given Hnot+. Conversely, a negative consumer vote (E = “–”) provides evidence for 1/LR. Assuming stochastically independent observations, “–” implies 1/LR to the same extent as “+” implies LR.

Thus, for a sample of n = 6 votes (4 positive, 2 negative), the Bayesian updating rule says that Ωposterior is the product of Ωprior times LR squared, because the two 1/LR factors cancel two of the four LR factors:

Ωposterior = Ωprior × LR × LR × LR × LR × 1/LR × 1/LR = Ωprior × LR^(4–2) = Ωprior × LR^2

Doubling sample size to n = 12 (8 positive and 4 negative) random votes, Ωposterior increases to Ωprior × LR^(8–4) = Ωprior × LR^4. Generally, Ωposterior increases with n, given that “+” (LR > 1) is more frequent than “–” (1/LR < 1). However, crucially, this only holds as long as n is an independent variable. If n becomes a dependent variable in a self-truncated sample, such that the first few sampled observations get most weight, Ωposterior can no longer be expected to increase with increasing n.
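
The updating rule reduces to a single line of code (a sketch; the LR value of 2 below is an assumed illustrative number, since the article specifies no concrete likelihood ratio):

```python
def posterior_odds(prior_odds, lr, n_positive, n_negative):
    """Bayesian updating over independent observations:
    posterior odds = prior odds * LR^(n_positive - n_negative)."""
    return prior_odds * lr ** (n_positive - n_negative)

# Flat priors (odds = 1) and an assumed LR = 2 per vote:
print(posterior_odds(1, 2, 4, 2))   # n = 6  (4 "+", 2 "-"): LR^2 = 4
print(posterior_odds(1, 2, 8, 4))   # n = 12 (8 "+", 4 "-"): LR^4 = 16
```

With the same 2:1 ratio of positive to negative votes, doubling n squares the posterior odds, exactly as the text describes.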

Note that a similar argument holds for a hot stove effect. If an initial stimulus experience Ehot stove is as aversive as a hot stove, a high initial likelihood ratio LR = p(Ehot stove|Haversive)/p(Ehot stove|Hnot aversive) strongly supports the hypothesis Haversive, which is much more likely than the opposite hypothesis Hnot aversive. This initial likelihood presumably dominates the small frequency with which the hot stove has been encountered (typically only once). Decision makers presumably avoid a hot stove even when a majority of alternative experiences with the same source would have been not at all aversive. A single aversive hot-stove experience can be strong enough to dominate a large number of (forgone) non-aversive experiences.

Likewise, in a self-truncation setting, very few diagnostic observations may have a stronger impact on truncated judgments and decisions than a larger number of non-truncated but less diagnostic observations. Again, self-truncation effects by no means falsify the Bernoulli law. They simply highlight the fact that the law of large numbers only holds when all observations are independent and equally diagnostic. Otherwise, a few highly diagnostic observations can have more Bayesian impact than many more non-diagnostic observations. In this regard, the law of large numbers may be a misnomer; what counts is a sample’s diagnosticity (as reflected in its LR) rather than the large number of stimuli in a sample.
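
In Bayesian terms, the aggregate weight of a sample of independent observations is the product of their likelihood ratios, so a few highly diagnostic observations can outweigh many weakly diagnostic ones. A brief sketch with assumed LR values (my illustration, not from the article):

```python
def evidence(likelihood_ratios):
    """Aggregate Bayesian evidence of independent observations:
    the product of their individual likelihood ratios."""
    product = 1.0
    for lr in likelihood_ratios:
        product *= lr
    return product

few_diagnostic = evidence([5.0, 5.0])   # 2 highly diagnostic observations -> 25
many_weak = evidence([1.2] * 10)        # 10 barely diagnostic observations -> ~6.2
print(few_diagnostic, many_weak)
```

Two observations with LR = 5 carry more evidential weight than ten observations with LR = 1.2, which is the sense in which "law of large numbers" is a misnomer.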

Sampling as a conditional response

A memorable piece of evidence from Hütter, Niese, and Ihmels (2022) corroborates and strengthens this point, showing that diagnosticity need not be a fixed stimulus property, but may be a dynamic property construed by the experimental participants themselves through the stimulus sampling process. As in many other evaluative conditioning experiments, the authors intended to demonstrate that formerly neutral conditional stimulus faces (CSs) will take on the positive or negative valence of the unconditional stimuli (USs; pictures from the International Affective Picture System; Lang & Bradley, 2007) with which they were paired. However, unlike previous experiments, Hütter et al. (2022) let their participants select their own sample of CSs. They not only found a canonical conditioning effect, manifested in an increasingly positive shift of evaluative ratings (post EC – pre EC) of faces paired with positive USs and increasingly negative ratings of faces paired with negative USs. They also found a clear-cut positivity bias in self-selected samples. Participants preferred to sample faces that had been paired on previous trials with pleasant rather than unpleasant pictures. As a consequence, this preference for positive faces accelerated the learning of positive compared to negative conditioned responses.

Yet, of most interest was the finding that the mere act of sampling a neutral face had a similarly strong impact on its positive posttest rating as pairing faces (CSs) with positive pictures (USs). Apparently, then, actively sampling (approaching) a neutral face served to increase its positive diagnostic value. Although CS faces had been carefully pretested to be equally neutral, a few expectedly positive stimuli that were the focus of participants’ information search were given more weight than a larger number of expectedly negative and non-eligible stimuli. Thus, whereas previous research has shown evaluation to depend on mere exposure in an experimenter-determined sample, the seminal research by Hütter et al. (2022) extends this mere-exposure effect, showing that self-determined repeated sampling can exert a similar evaluative effect.

Diagnosticity

So far, we have seen that the law of large numbers does not universally underlie all sample-based inferences. The degree to which decision makers trust in a sample does not solely depend on sample size, as could be expected from a principle labelled “law of large numbers” or from the “wisdom of crowds”. Closer reflection on Bayesian inferences revealed that diagnosticity often dominates sample size in that smaller samples can be more diagnostic than larger samples. In the same vein, we have seen that diagnosticity need not be a fixed environmental stimulus property, as in a hot-stove effect, but could also be the result of subjective construal in the mind of the beholder, as in the Hütter et al. (2022) research.

The same holds for self-truncated sampling as already discussed in a former subsection. Several results by Prager and colleagues (Prager et al., 2018; Prager & Fiedler, 2021; Prager, Fiedler & McCaughey, 2023) highlight the inter-individual variation in diagnosticity. They asked participants to draw from an urn a random sample of traits describing a target person or group and to truncate sampling when they felt ready to make a final judgment. While participants in a self-truncated task have no control over the stimuli they sample – the draws are completely random – they are free to determine the stopping rule.

Consistent with other findings, in several experiments conducted by Prager and colleagues, participants who could stop sampling whenever they wanted arrived at more extreme and more accurate judgments, whereas participants in a yoked control condition, who received exactly the same trait samples as the self-truncation condition, provided clearly less extreme and less accurate judgments. Samples truncated at the very moment when they appeared most diagnostic thus led to more extreme and more correct judgments than the very same samples presented to yoked control participants, who were not in such an optimal mindset for a truncated judgment.

Brunswikian versus Thurstonian sampling

To explain the divergent results in the Self-Truncation Condition and its Yoked Control, Prager and colleagues introduced the terminological distinction between Brunswikian and Thurstonian sampling (Juslin & Olsson, 1997). Whereas Brunswikian sampling refers to the sample contents provided by the information environment, the notion of Thurstonian sampling refers to oscillations within the judge’s mind. According to this notion, the judges’ internal response to a stimulus sample is not invariant but varies a lot over time and context. When a participant in the Self-Truncation Condition decides to truncate a sample, both Brunswikian and Thurstonian sampling render the decision maker ready for a decision. However, when participants in the Yoked Control Condition are exposed to the same (Brunswikian) sample, it is very unlikely that they are simultaneously in the same Thurstonian state of mind. As a consequence, they cannot be expected to be as ready for a sample-based decision as the self-truncating participants.

Projection

Central to Krueger’s sample-based induction theory is projection – the notion that individuals impose their internal ideas onto their inferences about the external environment, as distinguished from reverse inferences from the environment to the self. The manifold ways in which the impact of samples was found to depend on subjective construal, and on internally generated Thurstonian oscillations within the individual’s mind, suggest a process account of projection. Quite distinct from the psycho-dynamic notion of projection as a means of improving self-worth or homeostasis, projection offers an integrative concept that emphasizes the internally determined strategies of information sampling, which often dominate the external constraints imposed by the environment. In other words, projection highlights the importance of Thurstonian sampling beyond the impact of Brunswikian sampling of environmental information.

Yet, it should be noted that inference processes that enable projection cannot be reduced to hot-stove effects (Denrell & Le Mens, 2023), self-truncation (Prager & Fiedler, 2021) and self-determined exposure to conditional stimuli (Hütter et al., 2022). Indeed, the variety of creative sampling goals and environmental constraints is almost unrestricted, as evident from a teacher’s sampling of multiple students’ behavior in a classroom. On one hand, the teacher’s information search process is multiply determined by the students’ manifest activities: how often and how fast they raise their hands, their sitting positions in the classroom, the salience of their body language, their facial expression, and the redundancy of their communicative acts (Fiedler & Walther, 2004). On the other hand, however, sampling in the classroom depends on the teacher’s mood and mindset, her penchant for specific topics and for selected favorite students, her expectations of individual students’ ability (correctness rate) and motivation (hand raising), and the teacher’s vitality, judgment strategies and performance attributions. Moreover, as we have seen, sampling depends dramatically on the Thurstonian random sampling oscillations in the teacher’s mind, not just on intentional Brunswikian sampling of students’ behaviors. Most importantly, perhaps, sampling depends on the interplay of all these external and internal determinants, their alignment and fit, the teacher’s surprise and disappointment function and countless other interactions.

Metacognitive monitoring and control

The bottom line of these considerations about the multiple determinants of information sampling is that even the most considerate and most rational decision maker, who intends to monitor and control all aspects of information intake, has no chance to correct for all the vicissitudes of the resulting sampling biases. Teachers do not even know, consciously, what their precise achievement-related expectations and student preferences are, how often students raised their hands without knowing the correct answer, and how often students did not raise their hand regardless of whether they knew the solution. They cannot keep track of all individual students’ participation rates in all different lessons, and they have no idea about stochastic dependencies (i.e., to what degree a student’s answers to different questions are interdependent). Regarding the determinants we have discussed in this article, they do not know in retrospect which impressions were the result of self-truncation and which were determined by external processes. Even in the latter case, when the number of questions asked of an individual student was definitely not determined by the teacher himself or herself, there is no guarantee that n (i.e., the size of a performance sample) was a purely dependent variable. Even then, samples may have been truncated by the student (who left a lesson) or by the end of the lesson (when the bell rang), or biased by which individual students were questioned earlier or later than others. A recency effect would certainly bias a resulting sample differently from a primacy effect in a self-truncation process. Moreover, even if most influences on a sample happened to be recognized – which is certainly not the case – even the most sophisticated Bayesian would not be able to control and correct for the plethora and complexity of all conceivable sampling biases.

The mediational function of an actuarial sample

The advantage of this uncontrollability of the multiplex sampling process is that an “actuarial sample” (Dawes, Faust & Meehl, 1989; Meehl, 1956) affords an optimal manipulation check or mediation measure. Thus, in addition to the independent variable (e.g., teachers’ prior expectations of boys’ and girls’ performance in maths and in sciences) and the dependent variable (e.g., teachers’ final gradings of boys and girls), a sampling experiment provides an actuarial measure of the sample itself (e.g., the content and ordering of the questions asked to the students, and the students’ actual participation and correctness rates). So a sampling experiment offers all the data required to investigate how and to what extent the impact of the same independent variable was mediated by the specific sample gathered for the sake of judgment and decision making. No self-report and no observational report offers a similarly authentic reflection of the underlying process.

Sample-based versus preformed judgments

Does all this apply to all judgments? There is at least one major article in the literature – on memory-based versus online judgments (Hastie & Park, 1986) – that might be stretched to distinguish between sample-dependent and sample-independent judgments. Although this is possibly not what Hastie and Park had in mind originally, their seminal article motivates a distinction between sample-based judgments (formed inductively from samples of relevant observations) and preformed judgments for which no inductive inference is required. For instance, when an adult, politically mature citizen goes to a political election, he/she knows his/her voting preference beforehand. He/she need not inductively count the numbers of pro and contra arguments in a sample of observed or recalled information, unlike a young voter at the age of 18, who does not yet possess preformed voting preferences, or a stranger in a new country, who has not yet formed preferences for brands or products. A consumer who is familiar with the market may know his/her favorite brands of beer or tea, rather than deriving his/her preferences from samples of customer ratings; or a scientist may “know” his/her evaluation of a theory beforehand, without sampling from an extended meta-analysis. More generally, one might distinguish two types of judgment tasks: those derived from samples of raw information (pro and con arguments) and those that rely on preformed judgment modules residing in the judge’s long-term memory, which circumvent all inductive sampling operations.

Well, from a theoretical or meta-theoretical perspective, this may be a viable position, with interesting implications. For instance, it may turn out that self-referent or ingroup-referent judgments and decisions are more likely preformed, whereas judgments of others or unfamiliar outgroups do not exist as preformed modules in the judge’s mind and are therefore more prone to exhibit the phenomena described in this chapter. The latter judgments have to be formed inductively from the available set of raw information, rather than simply recalled.

However, once we have accepted and justified that Thurstonian sampling (internal oscillation) is as real and as psychologically enlightening as Brunswikian sampling (of external facts or events), one might reframe preformed judgments as cases in which Brunswikian sampling is cut short and all remaining judgment variation reflects Thurstonian oscillations. Yet, pertinent research may show that the distinction is less clear-cut than expected. Even familiar product brands and politicians may not be represented as a fixed position on a reference scale. Judgments and decisions about such familiar targets may reflect random samples or convenience samples from distributions of ever-fluctuating responses. In any case, the notion of Thurstonian sampling may be highly related to the emphasis put by Krueger on projection.

Summary

Starting from the assumption that the distal concepts in the focus of most judgments and decisions (e.g., utility, preference, attraction, risk, uncertainty) are not amenable to direct perception but have to be construed from samples of more proximal cues, we identified inductive sampling as a leading subroutine in judgment and decision making. Bernoulli’s (1713) law of large numbers alerted us to sample size, which is regarded (not only by statisticians) as a major determinant of the impact of a sample. However, putting an emphasis on sample size may be somewhat premature and incompatible with the fact that the diagnosticity of a sample is often determined by factors other than sample size. For illustration, we referred to hot-stove effects and self-truncation effects, in which a minority of highly diagnostic stimuli dominates the impact of sample size. A recent investigation by Hütter, Niese and Ihmels (2022) illustrated that diagnosticity is not a fixed stimulus-inherent property but may emerge as a result of an active sampling process conceived as approach versus avoidance of distinct stimuli.

The endless variety of sampling influences and biases renders the metacognitive control and correction of sampling vicissitudes virtually impossible. However, fortunately, the availability of an actuarial sample more than compensates for this weakness. Although it is almost impossible to monitor and control all sampling errors and sampling biases, the very outcome of the sampling process that constitutes the effective stimulus is vividly apparent. Thus, the actuarial sample provides us with behavioral evidence about the teacher’s attention focus, the trajectory of information search, and the etiology of judgments and evaluations.

One novel discovery of the present research with immense consequences for subsequent choice is the impact of self-truncation. If participants can themselves truncate the sampling process, making sample size n a dependent rather than an independent variable, then the strongest and most confident judgments are based on small rather than large samples. This apparent reversal of the law of large numbers is not only surprising but also remained unrecognized for a long time. It is, moreover, hard to monitor and control in real-life decision making because we typically do not attend to whether the stopping criterion was a dependent or an independent decision. We do not record who terminates a consumer’s choice process, a therapeutic session, a political conference, or a TV interview. We generally assume that an extended process leading to a large sample reflects a careful and conscientious decision process, and we miss the point that the law of large numbers may be more tricky than expected.

With regard to the work of Joachim Krueger, we believe that self-truncation and internally determined sampling of the Thurstonian type are probably at the heart of the major role played by projection in the inductive sampling process.

Funding Information

The work underlying this article was supported by grants provided by the Deutsche Forschungsgemeinschaft (FI 294/29-1; FI 294/30–1).

Competing Interests

The author has no competing interests to declare.

DOI: https://doi.org/10.5334/spo.74 | Journal eISSN: 2752-5341
Language: English
Submitted on: Dec 17, 2023 | Accepted on: May 24, 2024 | Published on: Jun 18, 2024
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2024 Klaus Fiedler, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.