
A Practical Primer To Power Analysis for Simple Experimental Designs

Open Access
Jul 2018

References

  1. Albers, C., & Lakens, D. (2018). When power analyses based on pilot data are biased: Inaccurate effect size estimators and follow-up bias. Journal of Experimental Social Psychology, 74, 187–195. DOI: 10.1016/j.jesp.2017.09.004
  2. Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample-size planning for more accurate statistical power: A method adjusting sample effect sizes for publication bias and uncertainty. Psychological Science, 28(11), 1547–1562. DOI: 10.1177/0956797617723724
  3. Armitage, P., Berry, G., & Matthews, J. (2002). Statistical Methods in Medical Research (4th ed.). Blackwell Science Ltd. DOI: 10.1002/9780470773666
  4. Arnold, B. F., Hogan, D. R., Colford, J. M., & Hubbard, A. E. (2011). Simulation methods to estimate design power: An overview for applied research. BMC Medical Research Methodology, 11(1), 94. DOI: 10.1186/1471-2288-11-94
  5. Asendorpf, J. B., Conner, M., De Fruyt, F., et al. (2013). Recommendations for increasing replicability in psychology. European Journal of Personality, 27(2), 108–119. DOI: 10.1002/per.1919
  6. Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The rules of the game called psychological science. Perspectives on Psychological Science, 7(6), 543–554. DOI: 10.1177/1745691612459060
  7. Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173–1182. DOI: 10.1037/0022-3514.51.6.1173
  8. Benjamin, D. J., Berger, J. O., Johannesson, M., et al. (2018). Redefine statistical significance. Nature Human Behaviour, 2(1), 6–10. DOI: 10.1038/s41562-017-0189-z
  9. Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B (Methodological), 57, 289–300.
  10. Bloom, H. S. (1995). Minimum detectable effects: A simple way to report the statistical power of experimental designs. Evaluation Review, 19(5), 547–556. DOI: 10.1177/0193841X9501900504
  11. Brysbaert, M., & Stevens, M. (2018). Power analysis and effect size in mixed effects models: A tutorial. Journal of Cognition, 1(1), 1–20. DOI: 10.5334/joc.10
  12. Button, K. S., Ioannidis, J. P. A., Mokrysz, C., et al. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376. DOI: 10.1038/nrn3475
  13. Cohen, J. (1962). The statistical power of abnormal–social psychological research: A review. Journal of Abnormal and Social Psychology, 65(3), 145–153. DOI: 10.1037/h0045186
  14. Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
  15. Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45(12), 1304–1312. DOI: 10.1037/0003-066X.45.12.1304
  16. Cohen, J. (1992a). Statistical power analysis. Current Directions in Psychological Science, 1(3), 98–101. DOI: 10.1111/1467-8721.ep10768783
  17. Cohen, J. (1992b). A power primer. Psychological Bulletin, 112(1), 155–159. DOI: 10.1037/0033-2909.112.1.155
  18. Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49(12), 997–1003. DOI: 10.1037/0003-066X.49.12.997
  19. Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. Mahwah, NJ: L. Erlbaum Associates.
  20. Cumming, G. (2013). Understanding the New Statistics: Effect Sizes, Confidence Intervals, and Meta-analysis. New York: Routledge.
  21. De Schryver, M., Hughes, S., Rosseel, Y., & De Houwer, J. (2016). Unreliable yet still replicable: A comment on LeBel and Paunonen (2011). Frontiers in Psychology, 6, 2039. DOI: 10.3389/fpsyg.2015.02039
  22. Eikeland, H. M. (1975). Epsilon-squared Should Be Preferred to Eta-squared. Technical Report, University of Oslo.
  23. Ellis, P. D. (2010). The Essential Guide to Effect Sizes. Cambridge University Press. DOI: 10.1017/CBO9780511761676
  24. Faul, F., Erdfelder, E., Buchner, A., & Lang, A. G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160. DOI: 10.3758/BRM.41.4.1149
  25. Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. DOI: 10.3758/BF03193146
  26. Fritz, C. O., Morris, P. E., & Richler, J. J. (2012). Effect size estimates: Current use, calculations, and interpretation. Journal of Experimental Psychology: General, 141(1), 2–18. DOI: 10.1037/a0024338
  27. Gelman, A. (2018, March 15). You need 16 times the sample size to estimate an interaction than to estimate a main effect [blog post]. Retrieved from: http://andrewgelman.com/2018/03/15/need-16-times-sample-size-estimate-interaction-estimate-main-effect/.
  28. Gelman, A., & Hill, J. (2006). Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge, England: Cambridge University Press. DOI: 10.1017/CBO9780511790942
  29. Giner-Sorolla, R. (2018, January 24). Powering your interaction [blog post]. Retrieved from: https://approachingblog.wordpress.com/2018/01/24/powering-your-interaction-2/.
  30. Hayes, A. F., & Scharkow, M. (2013). The relative trustworthiness of inferential tests of the indirect effect in statistical mediation analysis: Does method really matter? Psychological Science, 24(10), 1918–1927. DOI: 10.1177/0956797613480187
  31. Hunter, J. E., & Schmidt, F. L. (2004). Methods of Meta-Analysis: Correcting Error and Bias in Research Findings (2nd ed.). Thousand Oaks, CA: Sage. DOI: 10.4135/9781412985031
  32. Hays, W. L. (1963). Statistics for Psychologists. New York: Holt, Rinehart, and Winston.
  33. Hedges, L. V. (1981). Distribution theory for Glass's estimator of effect size and related estimators. Journal of Educational Statistics, 6(2), 107–128. DOI: 10.3102/10769986006002107
  34. Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. DOI: 10.1371/journal.pmed.0020124
  35. Jaccard, J., & Turrisi, R. (2003). Interaction Effects in Multiple Regression (2nd ed.). Thousand Oaks, CA: Sage. DOI: 10.4135/9781412984522
  36. Judd, C. M., McClelland, G. H., & Ryan, C. S. (2017). Data Analysis: A Model Comparison Approach to Regression (3rd ed.). New York: Routledge.
  37. Kelley, T. L. (1935). An unbiased correlation ratio measure. Proceedings of the National Academy of Sciences, 21, 554–559. DOI: 10.1073/pnas.21.9.554
  38. Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 863. DOI: 10.3389/fpsyg.2013.00863
  39. Lakens, D. (2014). Performing high-powered studies efficiently with sequential analyses. European Journal of Social Psychology, 44(7), 701–710. DOI: 10.1002/ejsp.2023
  40. Lakens, D., Adolfi, F. G., Albers, C. J., et al. (2018). Justify your alpha. Nature Human Behaviour, 2, 168–171. DOI: 10.1038/s41562-018-0311-x
  41. Lane, S. P., & Hennes, E. P. (2017). Power struggles: Estimating sample size for multilevel relationships research. Journal of Social and Personal Relationships, 35(1), 7–31. DOI: 10.1177/0265407517710342
  42. LeBel, E. P., & Paunonen, S. V. (2011). Sexy but often unreliable: The impact of unreliability on the replicability of experimental findings with implicit measures. Personality and Social Psychology Bulletin, 37(4), 570–583. DOI: 10.1177/0146167211400619
  43. Liu, X. S. (2014). Statistical Power Analysis for the Social and Behavioral Sciences: Basic and Advanced Techniques. New York: Routledge.
  44. Maxwell, S. E. (2004). The persistence of underpowered studies in psychological research: Causes, consequences, and remedies. Psychological Methods, 9(2), 147–163. DOI: 10.1037/1082-989X.9.2.147
  45. Maxwell, S. E., Kelley, K., & Rausch, J. R. (2008). Sample size planning for statistical power and accuracy in parameter estimation. Annual Review of Psychology, 59, 537–563. DOI: 10.1146/annurev.psych.59.103006.093735
  46. McClelland, G. H., & Judd, C. M. (1993). Statistical difficulties of detecting interactions and moderator effects. Psychological Bulletin, 114(2), 376–390. DOI: 10.1037/0033-2909.114.2.376
  47. O'Brien, R. G., & Castelloe, J. M. (2007). Sample size analysis for traditional hypothesis testing: Concepts and issues. In: Dmitrienko, A., Chuang-Stein, C., & D'Agostino, R. (eds.), Pharmaceutical Statistics Using SAS: A Practical Guide, 237–271. Cary, NC: SAS.
  48. Okada, K. (2013). Is omega squared less biased? A comparison of three major effect size indices in one-way ANOVA. Behaviormetrika, 40(2), 129–147. DOI: 10.2333/bhmk.40.129
  49. Perugini, M., Gallucci, M., & Costantini, G. (2014). Safeguard power as a protection against imprecise power estimates. Perspectives on Psychological Science, 9(3), 319–332. DOI: 10.1177/1745691614528519
  50. Porter, K. E. (2017). Statistical power in evaluations that investigate effects on multiple outcomes: A guide for researchers. Journal of Research on Educational Effectiveness, 11(2), 267–295. DOI: 10.1080/19345747.2017.1342887
  51. Preacher, K. J., & Hayes, A. F. (2004). SPSS and SAS procedures for estimating indirect effects in simple mediation models. Behavior Research Methods, Instruments, & Computers, 36(4), 717–731. DOI: 10.3758/BF03206553
  52. Preacher, K. J., & Selig, J. P. (2012). Advantages of Monte Carlo confidence intervals for indirect effects. Communication Methods and Measures, 6(2), 77–98. DOI: 10.1080/19312458.2012.679848
  53. Qiu, W. (2017). powerMediation: Power/Sample Size Calculation for Mediation Analysis. R package version 0.2.7.
  54. R Core Team. (2017). R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing.
  55. Richardson, J. T. (2011). Eta squared and partial eta squared as measures of effect size in educational research. Educational Research Review, 6(2), 135–147. DOI: 10.1016/j.edurev.2010.12.001
  56. Rosenthal, R., & Rosnow, R. L. (1985). Contrast Analysis: Focused Comparisons in the Analysis of Variance. Cambridge University Press.
  57. Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36. DOI: 10.18637/jss.v048.i02
  58. Ruscio, J. (2008). A probability-based measure of effect size: Robustness to base rates and other factors. Psychological Methods, 13(1), 19–30. DOI: 10.1037/1082-989X.13.1.19
  59. Ruscio, J., & Mullen, T. (2012). Confidence intervals for the probability of superiority effect size measure and the area under a receiver operating characteristic curve. Multivariate Behavioral Research, 47(2), 201–223. DOI: 10.1080/00273171.2012.658329
  60. Schoemann, A. M., Boulton, A. J., & Short, S. D. (2017). Determining power and sample size for simple and complex mediation models. Social Psychological and Personality Science, 8(4), 379–386. DOI: 10.1177/1948550617715068
  61. Schönbrodt, F. D., & Perugini, M. (2013). At what sample size do correlations stabilize? Journal of Research in Personality, 47(5), 609–612. DOI: 10.1016/j.jrp.2013.05.009
  62. Schönbrodt, F. D., & Wagenmakers, E. (2017). Bayes factor design analysis: Planning for compelling evidence. Psychonomic Bulletin & Review, 1–13. DOI: 10.3758/s13423-017-1230-y
  63. Schönbrodt, F. D., Wagenmakers, E., Zehetleitner, M., & Perugini, M. (2017). Sequential hypothesis testing with Bayes factors: Efficiently testing mean differences. Psychological Methods, 22(2), 322–339. DOI: 10.1037/met0000061
  64. Shieh, G. (2009). Detecting interaction effects in moderated multiple regression with continuous variables: Power and sample size considerations. Organizational Research Methods, 12(3), 510–528. DOI: 10.1177/1094428108320370
  65. Simonsohn, U. (2014). [17] No-way Interactions. The Winnower, 5, e142559.90552. DOI: 10.15200/winn.142559.90552
  66. Sobel, M. E. (1982). Asymptotic confidence intervals for indirect effects in structural equation models. Sociological Methodology, 13, 290–312. DOI: 10.2307/270723
  67. Sterne, J. A., & Smith, D. G. (2001). Sifting the evidence—What's wrong with significance tests? Physical Therapy, 81(8), 1464–1469. DOI: 10.1093/ptj/81.8.1464
  68. Swiatkowski, W., & Dompnier, B. (2017). Replicability crisis in social psychology: Looking at the past to find new pathways for the future. International Review of Social Psychology, 30(1), 111–124. DOI: 10.5334/irsp.66
  69. Thoemmes, F., MacKinnon, D. P., & Reiser, M. R. (2010). Power analysis for complex mediational designs using Monte Carlo methods. Structural Equation Modeling, 17(3), 510–534. DOI: 10.1080/10705511.2010.489379
  70. Wahlsten, D. (1991). Sample size to detect a planned contrast and a one degree-of-freedom interaction effect. Psychological Bulletin, 110(3), 587–595. DOI: 10.1037/0033-2909.110.3.587
  71. Wilkinson, L., & the Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54(8), 594–604. DOI: 10.1037/0003-066X.54.8.594
  72. Zhang, Z. (2014). Monte Carlo based statistical power analysis for mediation models: Methods and software. Behavior Research Methods, 46(4), 1184–1198. DOI: 10.3758/s13428-013-0424-0
  73. Zumbo, B. D., & Hubley, A. M. (1998). A note on misconceptions concerning prospective and retrospective power. Journal of the Royal Statistical Society: Series D (The Statistician), 47(2), 385–388. DOI: 10.1111/1467-9884.00139
DOI: https://doi.org/10.5334/irsp.181 | Journal eISSN: 2397-8570
Language: English
Published on: Jul 9, 2018
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2018 Marco Perugini, Marcello Gallucci, Giulio Costantini, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.