2017
DOI: 10.1177/0956797617723724

Sample-Size Planning for More Accurate Statistical Power: A Method Adjusting Sample Effect Sizes for Publication Bias and Uncertainty

Abstract: The sample size necessary to obtain a desired level of statistical power depends in part on the population value of the effect size, which is, by definition, unknown. A common approach to sample-size planning uses the sample effect size from a prior study as an estimate of the population value of the effect to be detected in the future study. Although this strategy is intuitively appealing, effect-size estimates, taken at face value, are typically not accurate estimates of the population effect size because of…
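As a rough illustration of the planning problem the abstract describes, the sketch below (Python with statsmodels; the effect-size values are made up, and the "adjusted" value merely stands in for whatever a bias-and-uncertainty correction would return, not the paper's actual procedure) shows how strongly the required sample size depends on the effect size fed into a standard power analysis.

```python
# Minimal sketch (not the paper's method): how the planning effect size
# drives the sample size returned by a standard power analysis.
# Assumes statsmodels is installed; the effect-size values are illustrative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

published_d = 0.50   # sample effect size reported in a prior study (illustrative)
adjusted_d = 0.30    # hypothetical value after adjusting for bias and uncertainty

for label, d in [("published", published_d), ("adjusted", adjusted_d)]:
    n_per_group = analysis.solve_power(effect_size=d, power=0.80, alpha=0.05,
                                       alternative="two-sided")
    print(f"{label} d = {d:.2f}: n per group ≈ {n_per_group:.0f}")
```

Deflating the planning value from d = 0.50 to d = 0.30 almost triples the per-group sample size needed for 80% power, which is why taking a published estimate at face value can leave a study badly underpowered.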

Cited by 345 publications (322 citation statements). References 57 publications (78 reference statements).

Citation statements (ordered by relevance):
“…When researchers do run a power analysis to determine the sample size, they need to take into account that published effect sizes from previous research are probably overestimated and might lead to overly small sample sizes. Ways to deal with this are correcting the observed effect sizes for publication bias (Anderson, Kelley, & Maxwell, 2017; van Assen, van Aert, & Wicherts, 2015; Vevea & Hedges, 1995), calculating lower-bound power (Perugini, Gallucci, & Costantini, 2014), or basing a power analysis on the smallest effect size of interest (Ellis, 2010).…”
Section: Discussion (citation type: mentioning)
confidence: 99%
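One remedy this excerpt mentions, lower-bound (safeguard-style) power, can be sketched as follows: instead of the prior study's point estimate, plan for the lower limit of a confidence interval around it. The snippet below is a minimal illustration (Python with scipy and statsmodels); the prior-study numbers and the 60% two-sided interval are illustrative choices, not a prescription taken from the cited papers.

```python
# Minimal sketch of a lower-bound (safeguard-style) power analysis:
# plan for the lower CI limit of the prior study's effect size rather
# than the point estimate. All prior-study numbers are illustrative.
import math
from scipy.stats import norm
from statsmodels.stats.power import TTestIndPower

d_prior, n1, n2 = 0.45, 30, 30          # effect size and group sizes from a prior study
# approximate standard error of Cohen's d for two independent groups
se_d = math.sqrt((n1 + n2) / (n1 * n2) + d_prior**2 / (2 * (n1 + n2)))
d_lower = d_prior - norm.ppf(0.80) * se_d  # lower limit of a two-sided 60% CI

n_per_group = TTestIndPower().solve_power(effect_size=max(d_lower, 0.01),  # guard against a nonpositive bound
                                          power=0.80, alpha=0.05)
print(f"lower-bound d ≈ {d_lower:.2f}, n per group ≈ {n_per_group:.0f}")
```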
“…These effect sizes are likely overestimated, which means that the sample size will be underestimated. A solution is to correct the observed effect sizes (Anderson, Kelley, & Maxwell, 2017; Etz & Vandekerckhove, 2016; van Aert & van Assen, 2017a, 2017b; van Assen et al., 2015; Vevea & Hedges, 1995), calculate lower-bound power (Anderson et al., 2017; Perugini et al., 2014), or base a power analysis on the smallest effect size of interest (Ellis, 2010).…”
Section: Solutions (citation type: mentioning)
confidence: 99%
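The claim that published effect sizes are likely overestimated can be seen in a toy simulation: if only studies with p < .05 get "published", the average published effect exceeds the true effect. The sketch below (Python with numpy and scipy; all numbers illustrative, and the selection rule far cruder than the bias-correction models cited above) demonstrates the inflation.

```python
# Toy simulation of publication bias: when only "significant" studies are
# published, the mean published effect size exceeds the true effect.
# All numbers are illustrative.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
true_d, n = 0.2, 20          # small true effect, small per-group sample
published = []

for _ in range(5000):
    a = rng.normal(true_d, 1.0, n)   # treatment group
    b = rng.normal(0.0, 1.0, n)      # control group
    _, p = ttest_ind(a, b)
    d_obs = (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    if p < 0.05:                     # crude publication filter
        published.append(d_obs)

print(f"true d = {true_d}, mean published d ≈ {np.mean(published):.2f}")
```

With a true standardized effect of 0.2 and 20 participants per group, the mean effect among the "significant" studies comes out far above 0.2, so a power analysis based on it would call for too few participants.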
“…Our study has some limitations. First, although the number of participants recruited is similar to previous studies (Baert et al., ; Price, Greven, Siegle, Koster, & De Raedt, ; Wells & Beevers, ), it is possible that a larger sample could have made it easier to find significant training effects by increasing the statistical power of the study (Anderson, Kelley, & Maxwell, ). Nevertheless, as seen in Table , the number of participants in dot‐probe tasks in depression ranges from 5 (Kruijt et al., ) to 29 (Beevers et al., ) and, therefore, it is unlikely that the limited number of participants explains the lack of significant results.…”
Section: Discussion (citation type: mentioning)
confidence: 88%
“…The replication crisis in cognitive neuroscience inspired us to investigate all sources of delay and noise in our experiments so that our efforts might be better replicated (1–18). If 64% of all psychological science is not reproducible, what percent of this 64% is due to errant stimulus timing or unshielded systems?…”
Section: Discussion (citation type: mentioning)
confidence: 99%