References
- Abraham, W. T., & Russell, D. W. (2008). Statistical power analysis in psychological research. Social and Personality Psychology Compass, 2(1), 283–301. 10.1111/j.1751-9004.2007.00052.x
- Agnoli, F., Wicherts, J. M., Veldkamp, C. L., Albiero, P., & Cubelli, R. (2017). Questionable research practices among Italian research psychologists. PLOS ONE, 12(3). 10.1371/journal.pone.0172792
- Amrhein, V., Trafimow, D., & Greenland, S. (2019). Inferential statistics as descriptive statistics: There is no replication crisis if we don’t expect replication. The American Statistician, 73(sup1), 262–270. 10.1080/00031305.2018.1543137
- Appelbaum, M. I., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73(1), 3–25. 10.1037/amp0000191
- Asendorpf, J. B., Conner, M., De Fruyt, F., De Houwer, J., Denissen, J. J. A., Fiedler, K., Fiedler, S., Funder, D. C., Kliegl, R., Nosek, B. A., Perugini, M., Roberts, B. W., Schmitt, M., Van Aken, M. A. G., Weber, H., & Wicherts, J. M. (2013). Recommendations for increasing replicability in psychology. European Journal of Personality, 27(2), 108–119. 10.1002/per.1919
- Baguley, T. (2009). Standardized or simple effect size: What should be reported? British Journal of Psychology, 100(3), 603–617. 10.1348/000712608x377117
- Birkett, M. A., & Day, S. J. (1994). Internal pilot studies for estimating sample size. Statistics in Medicine, 13(23–24), 2455–2463. 10.1002/sim.4780132309
- Bishop, D. (2019). Rein in the four horsemen of irreproducibility. Nature, 568(7753), 435. 10.1038/d41586-019-01307-2
- Brodeur, A., Cook, N., Hartley, J., & Heyes, A. (2022). Do pre-registration and pre-analysis plans reduce p-hacking and publication bias? 10.2139/ssrn.4180594
- Brysbaert, M. (2019). How Many Participants Do We Have to Include in Properly Powered Experiments? A Tutorial of Power Analysis with Reference Tables. Journal of Cognition, 2(1), 16. 10.5334/joc.72
- Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E., & Munafò, M. R. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376. 10.1038/nrn3475
- Buzbas, E. O., Devezer, B., & Baumgaertner, B. (2023). The logical structure of experiments lays the foundation for a theory of reproducibility. Royal Society Open Science, 10, Article 221042. 10.1098/rsos.221042
- Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. The Journal of Abnormal and Social Psychology, 65(3), 145–153. 10.1037/h0045186
- Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Routledge. 10.4324/9780203771587
- Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159. 10.1037/0033-2909.112.1.155
- Colling, L. J., & Szűcs, D. (2021). Statistical inference and the replication crisis. Review of Philosophy and Psychology, 12(1), 121–147. 10.1007/s13164-018-0421-4
- Cumming, G. (2008). Replication and p Intervals: p Values Predict the Future Only Vaguely, but Confidence Intervals Do Much Better. Perspectives on Psychological Science, 3(4), 286–300. 10.1111/j.1745-6924.2008.00079.x
- Cumming, G. (2012). Understanding the new statistics: Effect sizes, confidence intervals, and meta-analysis. Routledge. 10.4324/9780203807002
- Davis, S., Johnson, A. H., Lynch, T., Gray, L., Pryor, E. R., Azuero, A., Soistmann, H. C., Phillips, S. R., & Rice, M. (2020). Inclusion of Effect Size Measures and Clinical Relevance in Research Papers. Nursing Research, 70(3), 222–230. 10.1097/nnr.0000000000000494
- De Rond, M., & Miller, A. N. (2005). Publish or perish: Bane or boon of academic life? Journal of Management Inquiry, 14(4), 321–329. 10.1177/1056492605276850
- Doyen, S., Klein, O., Pichon, C. L., & Cleeremans, A. (2012). Behavioral priming: it’s all in the mind, but whose mind? PLOS ONE, 7(1). 10.1371/journal.pone.0029081
- Ebersole, C. R., Atherton, O. E., Belanger, A. L., Skulborstad, H. M., Allen, J., Banks, J., Baranski, E., Bernstein, M. J., Bonfiglio, D. B. V., Boucher, L., Brown, E. R., Budiman, N. I., Cairo, A. H., Capaldi, C. A., Chartier, C. R., Chung, J. M., Cicero, D. C., Coleman, J. A., Conway, J., … Nosek, B. A. (2016). Many Labs 3: Evaluating participant pool quality across the academic semester via replication. Journal of Experimental Social Psychology, 67, 68–82. 10.1016/j.jesp.2015.10.012
- Ebersole, C. R., Mathur, M. B., Baranski, E., Bart-Plange, D.-J., Buttrick, N. R., Chartier, C. R., Corker, K. S., Corley, M., Hartshorne, J. K., IJzerman, H., Lazarević, L. B., Rabagliati, H., Ropovik, I., Aczel, B., Aeschbach, L. F., Andrighetto, L., Arnal, J. D., Arrow, H., Babincak, P., … Nosek, B. A. (2020). Many Labs 5: Testing pre-data-collection peer review as an intervention to increase replicability. Advances in Methods and Practices in Psychological Science, 3(3), 309–331. 10.1177/2515245920958687
- Ehde, D. M. (2018). Opening editorial: Rehabilitation Psychology [Editorial]. Rehabilitation Psychology, 63(2), 167–169. 10.1037/rep0000233
- Fanelli, D. (2012). Negative results are disappearing from most disciplines and countries. Scientometrics, 90(3), 891–904. 10.1007/s11192-011-0494-7
- Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. 10.3758/bf03193146
- Ferguson, C. J., & Brannick, M. T. (2012). Publication bias in psychological science: Prevalence, methods for identifying and controlling, and implications for the use of meta-analyses. Psychological Methods, 17(1), 120–128. 10.1037/a0024445
- Fraley, R. C., Chong, J. Y., Baacke, K. A., Greco, A. J., Guan, H., & Vazire, S. (2022). Journal N-pact factors from 2011 to 2019: evaluating the quality of social/personality journals with respect to sample size and statistical power. Advances in Methods and Practices in Psychological Science, 5(4). 10.1177/25152459231175075
- Fraley, R. C., & Vazire, S. (2014). The N-Pact Factor: Evaluating the Quality of Empirical Journals with Respect to Sample Size and Statistical Power. PLOS ONE, 9(10). 10.1371/journal.pone.0109019
- Francis, G. (2012). Publication bias and the failure of replication in experimental psychology. Psychonomic Bulletin & Review, 19, 975–991. 10.3758/s13423-012-0322-y
- Fraser, H., Parker, T., Nakagawa, S., Barnett, A., & Fidler, F. (2018). Questionable research practices in ecology and evolution. PLOS ONE, 13(7), e0200303. 10.1371/journal.pone.0200303
- Friede, T., & Miller, F. (2012). Blinded continuous monitoring of nuisance parameters in clinical trials. Journal of the Royal Statistical Society Series C (Applied Statistics), 61(4), 601–618. 10.1111/j.1467-9876.2011.01029.x
- Friese, M., & Frankenbach, J. (2020). p-Hacking and publication bias interact to distort meta-analytic effect size estimates. Psychological Methods, 25(4), 456–471. 10.1037/met0000246
- Fritz, A., Scherndl, T., & Kühberger, A. (2012). A comprehensive review of reporting practices in psychological journals: Are effect sizes really enough? Theory & Psychology, 23(1), 98–122. 10.1177/0959354312436870
- Funder, D. C., & Ozer, D. J. (2019). Evaluating effect size in psychological research: Sense and nonsense. Advances in Methods and Practices in Psychological Science, 2(2), 156–168. 10.1177/2515245919847202
- Gauthier, I. (2018). Inaugural editorial [Editorial]. Journal of Experimental Psychology: Human Perception and Performance, 44(1), 1. 10.1037/xhp0000519
- Giner-Sorolla, R. (2018). From crisis of evidence to a “crisis” of relevance? Incentive-based answers for social psychology’s perennial relevance worries. European Review of Social Psychology, 30(1), 1–38. 10.1080/10463283.2018.1542902
- Giner-Sorolla, R., Montoya, A. K., Reifman, A., Carpenter, T., Lewis, N. A., Jr., Aberson, C. L., Bostyn, D. H., Conrique, B. G., Ng, B. W., Schoemann, A. M., & Soderberg, C. (2024). Power to detect what? Considerations for planning and evaluating sample size. Personality and Social Psychology Review, 28(3), 276–301. 10.1177/10888683241228328
- Greenwald, A. G. (1975). Consequences of prejudice against the null hypothesis. Psychological Bulletin, 82(1), 1–20. 10.1037/h0076157
- Head, M. L., Holman, L., Lanfear, R., Kahn, A. T., & Jennions, M. D. (2015). The extent and consequences of p-hacking in science. PLOS Biology, 13(3), e1002106. 10.1371/journal.pbio.1002106
- Hoenig, J. M., & Heisey, D. M. (2001). The Abuse of Power. The American Statistician, 55(1), 19–24. 10.1198/000313001300339897
- Ioannidis, J. P. A. (2005). Why most published research findings are false. PLOS Medicine, 2(8), e124. 10.1371/journal.pmed.0020124
- Ioannidis, J. P. A. (2008). Why most discovered true associations are inflated. Epidemiology, 19(5), 640–648. 10.1097/ede.0b013e31818131e7
- Ioannidis, J. P. A., Stanley, T. D., & Doucouliagos, H. (2017). The Power of Bias in Economics Research. The Economic Journal, 127(605), F236–F265. 10.1111/ecoj.12461
- John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling. Psychological Science, 23(5), 524–532. 10.1177/0956797611430953
- Kitayama, S. (2017). Journal of Personality and Social Psychology: Attitudes and social cognition [Editorial]. Journal of Personality and Social Psychology, 112(3), 357–360. 10.1037/pspa0000077
- Klein, R. A., Cook, C. L., Ebersole, C. R., Vitiello, C., Nosek, B. A., Hilgard, J., Ahn, P. H., Brady, A. J., Chartier, C. R., Christopherson, C. D., Clay, S., Collisson, B., Crawford, J. T., Cromar, R., Gardiner, G., Gosnell, C. L., Grahe, J., Hall, C., Howard, I., … Ratliff, K. A. (2022). Many Labs 4: Failure to replicate mortality salience effect with and without original author involvement. Collabra: Psychology, 8(1), 1–15. 10.1525/collabra.35271
- Klein, R. A., Ratliff, K., Vianello, M., Adams, A. B., Jr., Bahník, S., Bernstein, N. B., … Nosek, B. A. (2014). Investigating variation in replicability: A “Many Labs” Replication Project. Social Psychology, 45, 142–152. 10.1027/1864-9335/a000178
- Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B., Jr., Alper, S., … Sowden, W. (2018). Many Labs 2: Investigating variation in replicability across samples and settings. Advances in Methods and Practices in Psychological Science, 1(4), 443–490. 10.1177/2515245918810225
- Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: a practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 863. 10.3389/fpsyg.2013.00863
- Lakens, D. (2022). Sample Size Justification. Collabra: Psychology, 8(1). 10.1525/collabra.33267
- Lakens, D., Adolfi, F., Albers, C. J., Anvari, F., Apps, M. A. J., Argamon, S., Baguley, T., Becker, R., Benning, S. D., Bradford, D. E., Buchanan, E. M., Caldwell, A. R., Van Calster, B., Carlsson, R., Chen, S., Chung, B., Colling, L., Collins, G. S., Crook, Z., … Zwaan, R. A. (2018). Justify your alpha. Nature Human Behaviour, 2(3), 168–171. 10.1038/s41562-018-0311-x
- Levitt, H. M., Bamberg, M., Creswell, J. W., Frost, D. M., Josselson, R., & Suárez-Orozco, C. (2018). Journal article reporting standards for qualitative primary, qualitative meta-analytic, and mixed methods research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73(1), 26–46. 10.1037/amp0000151
- Linder, C., & Farahbakhsh, S. (2020). Unfolding the black box of questionable research practices: Where is the line between acceptable and unacceptable practices? Business Ethics Quarterly, 30(3), 335–360. 10.1017/beq.2019.52
- Lindstromberg, S. (2023). The winner’s curse and related perils of low statistical power – spelled out and illustrated. Research Methods in Applied Linguistics, 2(3). 10.1016/j.rmal.2023.100059
- Maxwell, S. E. (2004). The persistence of underpowered studies in psychological research: Causes, consequences, and remedies. Psychological Methods, 9(2), 147–163. 10.1037/1082-989X.9.2.147
- Morawski, J. G. (2019). The replication crisis: How might philosophy and theory of psychology be of use? Journal of Theoretical and Philosophical Psychology, 39(4), 218–238. 10.1037/teo0000129
- Moussa, S., & Charlton, A. (2023). Retraction (mal)practices of elite marketing and social psychology journals in the Dirk Smeesters’ research misconduct case. Accountability in Research, 1–16. 10.1080/08989621.2022.2164489
- Munafò, M. R., Nosek, B. A., Bishop, D. V. M., Button, K. S., Chambers, C. D., Du Sert, N. P., Simonsohn, U., Wagenmakers, E. J., Ware, J., & Ioannidis, J. P. A. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1(1). 10.1038/s41562-016-0021
- Nakagawa, S., Lagisz, M., Yang, Y., & Drobniak, S. M. (2024). Finding the right power balance: Better study design and collaboration can reduce dependence on statistical power. PLOS Biology, 22(1), e3002423. 10.1371/journal.pbio.3002423
- Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S., Breckler, S. J., Buck, S., Chambers, C., Chin, G., Christensen, G., Contestabile, M., Dafoe, A., Eich, E., Freese, J., Glennerster, R., Goroff, D. L., Green, D. P., Hesse, B. W., Humphreys, M., … Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. 10.1126/science.aab2374
- Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences of the United States of America, 115(11), 2600–2606. 10.1073/pnas.1708274114
- Nosek, B. A., Hardwicke, T. E., Moshontz, H., Allard, A., Corker, K. S., Dreber, A., Fidler, F., Hilgard, J., Struhl, M. K., Nuijten, M. B., Rohrer, J. M., Romero, F., Scheel, A. M., Scherer, L. D., Schönbrodt, F. D., & Vazire, S. (2022). Replicability, Robustness, and Reproducibility in Psychological Science. Annual Review of Psychology, 73(1), 719–748. 10.1146/annurev-psych-020821-114157
- Nosek, B. A., & Lakens, D. (2014). Registered reports. Social Psychology, 45(3), 137–141. 10.1027/1864-9335/a000192
- Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia. Perspectives on Psychological Science, 7(6), 615–631. 10.1177/1745691612459058
- O’Keefe, D. J. (2007). Brief report: Post hoc power, observed power, a priori power, retrospective power, prospective power, achieved power: Sorting out appropriate uses of statistical power analyses. Communication Methods and Measures, 1(4), 291–299. 10.1080/19312450701641375
- Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251). 10.1126/science.aac4716
- Pashler, H., & Harris, C. R. (2012). Is the Replicability Crisis Overblown? Three Arguments Examined. Perspectives on Psychological Science, 7(6), 531–536. 10.1177/1745691612463401
- Pashler, H., & Wagenmakers, E. (2012). Editors’ Introduction to the Special Section on Replicability in Psychological Science. Perspectives on Psychological Science, 7(6), 528–530. 10.1177/1745691612465253
- Pek, J., Hoisington-Shaw, K. J., & Wegener, D. (2024). Uses of uncertain statistical power: Designing future studies, not evaluating completed studies. Psychological Methods. Advance online publication. 10.1037/met0000577
- Penders, B. (2022). Process and Bureaucracy: Scientific Reform as Civilisation. Bulletin of Science, Technology & Society, 42(4), 107–116. 10.1177/02704676221126388
- Perugini, M., Gallucci, M., & Costantini, G. (2018). A practical primer to power analysis for simple experimental designs. Revue Internationale de Psychologie Sociale, 31(1), 1–23. 10.5334/irsp.181
- Pupovac, V., Prijić-Samaržija, S., & Petrovečki, M. (2017). Research misconduct in the Croatian scientific community: a survey assessing the forms and characteristics of research misconduct. Science and Engineering Ethics, 23, 165–181. 10.1007/s11948-016-9767-0
- Rodgers, J. L., & Shrout, P. E. (2018). Psychology’s replication crisis as scientific opportunity: A précis for policymakers. Policy Insights from the Behavioral and Brain Sciences, 5(1), 134–141. 10.1177/2372732217749254
- Rosenthal, R. (1979). The “file drawer problem” and tolerance for null results. Psychological Bulletin, 86(3), 638–641. 10.1037/0033-2909.86.3.638
- Schauer, J. M., & Hedges, L. V. (2020). Assessing heterogeneity and power in replications of psychological experiments. Psychological Bulletin, 146(8), 701–719. 10.1037/bul0000232
- Sedlmeier, P., & Gigerenzer, G. (1989). Do studies of statistical power have an effect on the power of studies? Psychological Bulletin, 105(2), 309–316. 10.1037/0033-2909.105.2.309
- Shrout, P. E., & Rodgers, J. L. (2018). Psychology, science, and knowledge construction: Broadening perspectives from the replication crisis. Annual Review of Psychology, 69, 487–510. 10.1146/annurev-psych-122216-011845
- Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. 10.1177/0956797611417632
- Soderberg, C. K., Errington, T. M., Schiavone, S. R., Bottesini, J., Thorn, F. S., Vazire, S., … Nosek, B. A. (2021). Initial evidence of research quality of registered reports compared with the standard publishing model. Nature Human Behaviour, 5(8), 990–997. 10.1038/s41562-021-01142-4
- Stanley, T. D., Carter, E. C., & Doucouliagos, H. (2018). What meta-analyses reveal about the replicability of psychological research. Psychological Bulletin, 144(12), 1325. 10.1037/bul0000169
- Stanley, T. D., Doucouliagos, H., & Ioannidis, J. P. A. (2021). Retrospective median power, false positive meta-analysis and large-scale replication. Research Synthesis Methods, 13(1), 88–108. 10.1002/jrsm.1529
- Stefan, A. M., & Schönbrodt, F. D. (2023). Big little lies: A compendium and simulation of p-hacking strategies. Royal Society Open Science, 10(2), 220346. 10.1098/rsos.220346
- Strathern, M. (1997). ‘Improving ratings’: audit in the British University system. European Review, 5(3), 305–321. 10.1002/(SICI)1234-981X(199707)5:3<305::AID-EURO184>3.0.CO;2-4
- Stroebe, W., & Strack, F. (2014). The Alleged Crisis and the Illusion of Exact Replication. Perspectives on Psychological Science, 9(1), 59–71. 10.1177/1745691613514450
- Świątkowski, W., & Dompnier, B. (2017). Replicability Crisis in Social Psychology: Looking at the Past to Find New Pathways for the Future. International Review of Social Psychology, 30(1), 111–124. 10.5334/irsp.66
- Swift, J. K., Christopherson, C. D., Bird, M. O., Zöld, A., & Goode, J. (2022). Questionable research practices among faculty and students in APA-accredited clinical and counseling psychology doctoral programs. Training and Education in Professional Psychology, 16(3), 299–305. 10.1037/tep0000322
- Szucs, D., & Ioannidis, J. P. (2017). Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature. PLOS Biology, 15(3), e2000797. 10.1371/journal.pbio.2000797
- Tressoldi, P. E., & Giofré, D. (2015). The pervasive avoidance of prospective statistical power: Major consequences and practical solutions. Frontiers in Psychology, 6, 137497. 10.3389/fpsyg.2015.00726
- Tsiatis, A. A. (2006). Information-based monitoring of clinical trials. Statistics in Medicine, 25(19), 3236–3244. 10.1002/sim.2625
- Van Zwet, E. W., & Cator, E. (2021). The significance filter, the winner’s curse and the need to shrink. Statistica Neerlandica, 75(4), 437–452. 10.1111/stan.12241
- Vankelecom, L., Loeys, T., & Moerkerke, B. (2024). How to Safely Reassess Variability and Adapt Sample Size? A Primer for the Independent Samples t Test. Advances in Methods and Practices in Psychological Science, 7(1). 10.1177/25152459231212128
- Vankov, I., Bowers, J., & Munafò, M. R. (2014). On the persistence of low power in psychological science. Quarterly Journal of Experimental Psychology, 67, 1037–1040. 10.1080/17470218.2014.885986
- Wagenmakers, E., Wetzels, R., Borsboom, D., & Van Der Maas, H. L. J. (2011). Why psychologists must change the way they analyze their data: The case of psi: Comment on Bem (2011). Journal of Personality and Social Psychology, 100(3), 426–432. 10.1037/a0022790
- Wang, Y. A. (2023). How to Conduct Power Analysis for Structural Equation Models: A Practical Primer. PsyArXiv. 10.31234/osf.io/4n3uk
- Wassmer, G., & Brannath, W. (2016). Group sequential and confirmatory adaptive designs in clinical trials. Springer. 10.1007/978-3-319-32562-0
- Wicherts, J. (2011). Psychology must learn a lesson from fraud case. Nature, 480, 7. 10.1038/480007a
- Wicherts, J. M., Veldkamp, C. L., Augusteijn, H. E., Bakker, M., Van Aert, R., & Van Assen, M. A. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7. 10.3389/fpsyg.2016.01832
- Wiggins, B. J., & Christopherson, C. D. (2019). The replication crisis in psychology: An overview for theoretical and philosophical psychology. Journal of Theoretical and Philosophical Psychology, 39(4), 202–217. 10.1037/teo0000137
- Wittes, J., & Brittain, E. (1990). The role of internal pilot studies in increasing the efficiency of clinical trials. Statistics in Medicine, 9(1–2), 65–71. 10.1002/sim.4780090110
