References
- Albers, C., & Lakens, D. (2018). When power analyses based on pilot data are biased: Inaccurate effect size estimators and follow-up bias. Journal of Experimental Social Psychology, 74, 187–195. DOI: 10.1016/j.jesp.2017.09.004
- Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample-size planning for more accurate statistical power: A method adjusting sample effect sizes for publication bias and uncertainty. Psychological Science, 28(11), 1547–1562. DOI: 10.1177/0956797617723724
- Armitage, P., Berry, G., & Matthews, J. (2002). Statistical Methods in Medical Research (4th ed.). Blackwell Science Ltd. DOI: 10.1002/9780470773666
- Arnold, B. F., Hogan, D. R., Colford, J. M., & Hubbard, A. E. (2011). Simulation methods to estimate design power: An overview for applied research. BMC Medical Research Methodology, 11(1), 94. DOI: 10.1186/1471-2288-11-94
- Asendorpf, J. B., Conner, M., De Fruyt, F., et al. (2013). Recommendations for increasing replicability in psychology. European Journal of Personality, 27(2), 108–119. DOI: 10.1002/per.1919
- Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The rules of the game called psychological science. Perspectives on Psychological Science, 7(6), 543–554. DOI: 10.1177/1745691612459060
- Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173–1182. DOI: 10.1037/0022-3514.51.6.1173
- Benjamin, D. J., Berger, J. O., Johannesson, M., et al. (2018). Redefine statistical significance. Nature Human Behaviour, 2(1), 6–10. DOI: 10.1038/s41562-017-0189-z
- Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B (Methodological), 57, 289–300.
- Bloom, H. S. (1995). Minimum detectable effects: A simple way to report the statistical power of experimental designs. Evaluation Review, 19(5), 547–556. DOI: 10.1177/0193841X9501900504
- Brysbaert, M., & Stevens, M. (2018). Power analysis and effect size in mixed effects models: A tutorial. Journal of Cognition, 1(1), 1–20. DOI: 10.5334/joc.10
- Button, K. S., Ioannidis, J. P. A., Mokrysz, C., et al. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376. DOI: 10.1038/nrn3475
- Cohen, J. (1962). The statistical power of abnormal–social psychological research: A review. Journal of Abnormal and Social Psychology, 65(3), 145–153. DOI: 10.1037/h0045186
- Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
- Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45(12), 1304–1312. DOI: 10.1037/0003-066X.45.12.1304
- Cohen, J. (1992a). Statistical power analysis. Current Directions in Psychological Science, 1(3), 98–101. DOI: 10.1111/1467-8721.ep10768783
- Cohen, J. (1992b). A power primer. Psychological Bulletin, 112(1), 155–159. DOI: 10.1037/0033-2909.112.1.155
- Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49(12), 997–1003. DOI: 10.1037/0003-066X.49.12.997
- Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. Mahwah, NJ: L. Erlbaum Associates.
- Cumming, G. (2013). Understanding the New Statistics: Effect Sizes, Confidence Intervals, and Meta-analysis. New York: Routledge.
- De Schryver, M., Hughes, S., Rosseel, Y., & De Houwer, J. (2016). Unreliable yet still replicable: A comment on LeBel and Paunonen (2011). Frontiers in Psychology, 6, 2039. DOI: 10.3389/fpsyg.2015.02039
- Eikeland, H. M. (1975). Epsilon-squared Should Be Preferred to Eta-squared. Technical Report, University of Oslo.
- Ellis, P. D. (2010). The Essential Guide to Effect Sizes. Cambridge University Press. DOI: 10.1017/CBO9780511761676
- Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160. DOI: 10.3758/BRM.41.4.1149
- Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. DOI: 10.3758/BF03193146
- Fritz, C. O., Morris, P. E., & Richler, J. J. (2012). Effect size estimates: Current use, calculations, and interpretation. Journal of Experimental Psychology: General, 141(1), 2–18. DOI: 10.1037/a0024338
- Gelman, A. (2018, March 15). You need 16 times the sample size to estimate an interaction than to estimate a main effect [blog post]. Retrieved from: http://andrewgelman.com/2018/03/15/need-16-times-sample-size-estimate-interaction-estimate-main-effect/
- Gelman, A., & Hill, J. (2006). Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge, England: Cambridge University Press. DOI: 10.1017/CBO9780511790942
- Giner-Sorolla, R. (2018, January 24). Powering your interaction [blog post]. Retrieved from: https://approachingblog.wordpress.com/2018/01/24/powering-your-interaction-2/
- Hayes, A. F., & Scharkow, M. (2013). The relative trustworthiness of inferential tests of the indirect effect in statistical mediation analysis: Does method really matter? Psychological Science, 24(10), 1918–1927. DOI: 10.1177/0956797613480187
- Hunter, J. E., & Schmidt, F. L. (2004). Methods of Meta-Analysis: Correcting Error and Bias in Research Findings (2nd ed.). Thousand Oaks, CA: Sage. DOI: 10.4135/9781412985031
- Hays, W. L. (1963). Statistics for Psychologists. New York: Holt, Rinehart, and Winston.
- Hedges, L. V. (1981). Distribution theory for Glass’s estimator of effect size and related estimators. Journal of Educational Statistics, 6(2), 107–128. DOI: 10.3102/10769986006002107
- Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. DOI: 10.1371/journal.pmed.0020124
- Jaccard, J., & Turrisi, R. (2003). Interaction Effects in Multiple Regression (2nd ed.). Thousand Oaks: Sage. DOI: 10.4135/9781412984522
- Judd, C. M., McClelland, G. H., & Ryan, C. S. (2017). Data Analysis: A Model Comparison Approach to Regression (3rd ed.). New York: Routledge.
- Kelley, T. L. (1935). An unbiased correlation ratio measure. Proceedings of the National Academy of Sciences, 21, 554–559. DOI: 10.1073/pnas.21.9.554
- Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 863. DOI: 10.3389/fpsyg.2013.00863
- Lakens, D. (2014). Performing high-powered studies efficiently with sequential analyses. European Journal of Social Psychology, 44(7), 701–710. DOI: 10.1002/ejsp.2023
- Lakens, D., Adolfi, F. G., Albers, C. J., et al. (2018). Justify your alpha. Nature Human Behaviour, 2, 168–171. DOI: 10.1038/s41562-018-0311-x
- Lane, S. P., & Hennes, E. P. (2017). Power struggles: Estimating sample size for multilevel relationships research. Journal of Social and Personal Relationships, 35(1), 7–31. DOI: 10.1177/0265407517710342
- LeBel, E. P., & Paunonen, S. V. (2011). Sexy but often unreliable: The impact of unreliability on the replicability of experimental findings with implicit measures. Personality and Social Psychology Bulletin, 37(4), 570–583. DOI: 10.1177/0146167211400619
- Liu, X. S. (2014). Statistical Power Analysis for the Social and Behavioral Sciences: Basic and Advanced Techniques. New York: Routledge.
- Maxwell, S. E. (2004). The persistence of underpowered studies in psychological research: Causes, consequences, and remedies. Psychological Methods, 9(2), 147–163. DOI: 10.1037/1082-989X.9.2.147
- Maxwell, S. E., Kelley, K., & Rausch, J. R. (2008). Sample size planning for statistical power and accuracy in parameter estimation. Annual Review of Psychology, 59, 537–563. DOI: 10.1146/annurev.psych.59.103006.093735
- McClelland, G. H., & Judd, C. M. (1993). Statistical difficulties of detecting interactions and moderator effects. Psychological Bulletin, 114(2), 376–390. DOI: 10.1037/0033-2909.114.2.376
- O’Brien, R. G., & Castelloe, J. M. (2007). Sample size analysis for traditional hypothesis testing: Concepts and issues. In: Dmitrienko, A., Chuang-Stein, C., & D’Agostino, R. (Eds.), Pharmaceutical Statistics Using SAS: A Practical Guide (pp. 237–271). Cary, NC: SAS.
- Okada, K. (2013). Is omega squared less biased? A comparison of three major effect size indices in one-way ANOVA. Behaviormetrika, 40(2), 129–147. DOI: 10.2333/bhmk.40.129
- Perugini, M., Gallucci, M., & Costantini, G. (2014). Safeguard power as a protection against imprecise power estimates. Perspectives on Psychological Science, 9(3), 319–332. DOI: 10.1177/1745691614528519
- Porter, K. E. (2017). Statistical power in evaluations that investigate effects on multiple outcomes: A guide for researchers. Journal of Research on Educational Effectiveness, 11(2), 267–295. DOI: 10.1080/19345747.2017.1342887
- Preacher, K. J., & Hayes, A. F. (2004). SPSS and SAS procedures for estimating indirect effects in simple mediation models. Behavior Research Methods, Instruments, & Computers, 36(4), 717–731. DOI: 10.3758/BF03206553
- Preacher, K. J., & Selig, J. P. (2012). Advantages of Monte Carlo confidence intervals for indirect effects. Communication Methods and Measures, 6(2), 77–98. DOI: 10.1080/19312458.2012.679848
- Qiu, W. (2017). powerMediation: Power/Sample Size Calculation for Mediation Analysis. R package version 0.2.7.
- R Core Team. (2017). R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing.
- Richardson, J. T. (2011). Eta squared and partial eta squared as measures of effect size in educational research. Educational Research Review, 6(2), 135–147. DOI: 10.1016/j.edurev.2010.12.001
- Rosenthal, R., & Rosnow, R. L. (1985). Contrast Analysis: Focused Comparisons in the Analysis of Variance. Cambridge University Press.
- Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36. DOI: 10.18637/jss.v048.i02
- Ruscio, J. (2008). A probability-based measure of effect size: Robustness to base rates and other factors. Psychological Methods, 13(1), 19–30. DOI: 10.1037/1082-989X.13.1.19
- Ruscio, J., & Mullen, T. (2012). Confidence intervals for the probability of superiority effect size measure and the area under a receiver operating characteristic curve. Multivariate Behavioral Research, 47(2), 201–223. DOI: 10.1080/00273171.2012.658329
- Schoemann, A. M., Boulton, A. J., & Short, S. D. (2017). Determining power and sample size for simple and complex mediation models. Social Psychological and Personality Science, 8(4), 379–386. DOI: 10.1177/1948550617715068
- Schönbrodt, F. D., & Perugini, M. (2013). At what sample size do correlations stabilize? Journal of Research in Personality, 47(5), 609–612. DOI: 10.1016/j.jrp.2013.05.009
- Schönbrodt, F. D., & Wagenmakers, E. (2017). Bayes factor design analysis: Planning for compelling evidence. Psychonomic Bulletin & Review. Advance online publication. DOI: 10.3758/s13423-017-1230-y
- Schönbrodt, F. D., Wagenmakers, E., Zehetleitner, M., & Perugini, M. (2017). Sequential hypothesis testing with Bayes factors: Efficiently testing mean differences. Psychological Methods, 22(2), 322–339. DOI: 10.1037/met0000061
- Shieh, G. (2009). Detecting interaction effects in moderated multiple regression with continuous variables: Power and sample size considerations. Organizational Research Methods, 12(3), 510–528. DOI: 10.1177/1094428108320370
- Simonsohn, U. (2014). [17] No-way interactions. The Winnower, 5, e142559.90552. DOI: 10.15200/winn.142559.90552
- Sobel, M. E. (1982). Asymptotic confidence intervals for indirect effects in structural equation models. Sociological Methodology, 13, 290–312. DOI: 10.2307/270723
- Sterne, J. A., & Smith, D. G. (2001). Sifting the evidence—What’s wrong with significance tests? Physical Therapy, 81(8), 1464–1469. DOI: 10.1093/ptj/81.8.1464
- Swiatkowski, W., & Dompnier, B. (2017). Replicability crisis in social psychology: Looking at the past to find new pathways for the future. International Review of Social Psychology, 30(1), 111–124. DOI: 10.5334/irsp.66
- Thoemmes, F., MacKinnon, D. P., & Reiser, M. R. (2010). Power analysis for complex mediational designs using Monte Carlo methods. Structural Equation Modeling, 17(3), 510–534. DOI: 10.1080/10705511.2010.489379
- Wahlsten, D. (1991). Sample size to detect a planned contrast and a one degree-of-freedom interaction effect. Psychological Bulletin, 110(3), 587–595. DOI: 10.1037/0033-2909.110.3.587
- Wilkinson, L., & the Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54(8), 594–604. DOI: 10.1037/0003-066X.54.8.594
- Zhang, Z. (2014). Monte Carlo based statistical power analysis for mediation models: Methods and software. Behavior Research Methods, 46(4), 1184–1198. DOI: 10.3758/s13428-013-0424-0
- Zumbo, B. D., & Hubley, A. M. (1998). A note on misconceptions concerning prospective and retrospective power. Journal of the Royal Statistical Society: Series D (The Statistician), 47(2), 385–388. DOI: 10.1111/1467-9884.00139
