(The Importance of) Understanding Statistical Power

Part 1

The first question you might ask yourself is “What is statistical power?” I promise, it’s not as scary of a concept as it may sound! Statistical power is defined as the probability of rejecting the null hypothesis under the assumption that a *specific true effect* exists in the population. For example, consider the following hypothesis: from a third-person perspective, participants will judge that people have a stronger obligation to help their family members than they do to help complete strangers. (Okay, so I’m going to talk a little bit more [than I said I would in my introduction blog post] about some of the research we do in the Morality Lab).

In planning a study to test this hypothesis, you design a procedure in which participants will be randomly assigned to one of two conditions. Group 1 will read a scenario in which Person A realizes that Person B needs help, but Person A does not know Person B. Group 2 will read an almost identical scenario; except now, Person A and Person B will be cousins. The statistical question is: Will Group 2, on average, judge Person A as having a stronger obligation to help than Group 1? You predict “yes”! If there truly is such an effect, statistical power refers to the probability that you will correctly reject the null hypothesis that Group 1 and Group 2 will not differ in their judgments of obligation strength.

We’ll assume you already have a scenario that you adapted from prior research, and you have a valid and reliable measure of obligation strength (e.g., 0 = none at all to 100 = a great deal). So now what? Can you put the scenarios and measures into Qualtrics, upload the link to MTurk or Prolific, and start collecting data? Almost!

First, you must decide how many participants you’re going to collect data from. Let’s say you decide to collect data from 10 people for Group 1, and 10 people for Group 2 (total N = 20), and your analysis plan is to conduct an independent samples t-test. Before collecting any data, you know that there are 4 possible outcomes for your study:

  1. There truly is NO effect; you don’t find an effect in your sample (correct failure to reject the null)
  2. There truly is NO effect; you do find an effect in your sample (false positive)
  3. There truly IS an effect; you find an effect in your sample (correct rejection of the null)
  4. There truly IS an effect; you don’t find an effect in your sample (false negative)

Statistical power is the probability of #3 (i.e., finding an effect in your sample when there is a specific effect in the population), and it (mostly) depends on three inputs: (1) Sample size; (2) Alpha level (the probability of incorrectly rejecting the null hypothesis when the null is true); and (3) Effect size.

Sample size and alpha level are largely within your control as a researcher (well, to the extent that you have resources and do not simply follow norms for the sake of following norms). In this example, you’ve already decided on collecting two groups of 10 (total N = 20). The standard alpha level used in psychology is 0.05 (think of “p ≤ .05”); this means that if no effect exists in the population, and you run 100 studies investigating that effect, about 5 of those studies, on average, would show a statistically significant difference between groups, even though there’s truly no effect in the population! As a researcher, you could decide to loosen this criterion, or make it stricter; it depends on how comfortable you are with a particular proportion of incorrect decisions. You can imagine how this might matter more for some sciences than others. For simplicity, let’s move forward assuming you want to use 0.05 as your alpha level. Therefore, the final element to determine statistical power is effect size. (This is simplified a bit because there are other inputs to statistical power, such as whether the design is between- versus within-participants, or whether you use a two-tailed versus a one-tailed test). But for now, for the sake of attaining a general understanding of power, just keep in mind that it is tightly tied to sample size, alpha level, and effect size.

Now you might wonder, “What’s an effect size?” In your study, the effect size is simply the magnitude of the difference in obligation strength judgments between Group 1 and Group 2. A popular effect size is Cohen’s d. Cohen’s d is a standardized mean difference effect size; it is calculated by subtracting the mean of Group 2 from the mean of Group 1 and then dividing that difference by the pooled standard deviation of the two groups (see https://rpsychologist.com/cohend/ for a visualization of Cohen’s d and more intuitive interpretations of it). In your subfield, you probably have a sense for what the average effect sizes are in terms of Cohen’s d. For simplicity, let’s assume that the average effect size in your field is an (absolute value) Cohen’s d = 0.20. You have no reason to think that your new study’s effect size will be any different in magnitude from the other effects that are well-documented in your field.
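To make that concrete, here is a minimal sketch of the calculation in R. The numbers below are made up purely for illustration (they are not from any real study); the point is just to show how the pooled standard deviation and d are computed:

group1 <- c(55, 62, 48, 71, 60, 58, 66, 52, 63, 59)   # hypothetical "stranger" condition ratings (0-100 scale)
group2 <- c(64, 70, 58, 75, 61, 69, 66, 72, 57, 68)   # hypothetical "cousin" condition ratings (0-100 scale)
pooled_sd <- sqrt(((length(group1) - 1) * var(group1) +
                   (length(group2) - 1) * var(group2)) /
                  (length(group1) + length(group2) - 2))   # pooled SD of the two groups
cohens_d <- (mean(group1) - mean(group2)) / pooled_sd      # Group 1 minus Group 2, in pooled-SD units
cohens_d

With real data you would plug in your observed group scores in exactly the same way; the formula is all there is to it.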

Finally, you have a sample size, an alpha level, and an expected effect size. You’re ready to ask yourself the following question: “How much statistical power do I have to reject the null hypothesis that the difference between groups is zero, assuming an alpha level of 0.05 and a true effect size of Cohen’s d = 0.20, with 10 participants per group?” There are a few easy ways to answer this question that don’t require a full understanding of the logic behind power analyses. For example, you can download the software “G*Power” (a point-and-click power calculation program), and simply enter the sample size, alpha level, and expected effect size. Using this information, G*Power will tell you exactly how much statistical power you have. Take a guess: how much power would your study have to reject the null, assuming an alpha level = 0.05 and a difference between groups of Cohen’s d = 0.20, with N = 10 per group, using a two-tailed independent samples t-test?

7%! Not 77, not 70, not even 17… you’d have 7% power! This means that if an effect size of Cohen’s d = 0.20 truly exists, your study would only have a 7% chance of rejecting the null. Is a study with such a small possible success rate even worth conducting?!
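If you’d rather check this in R than in G*Power, base R’s power.t.test() does the same kind of calculation; setting sd = 1 makes the raw mean difference (delta) equal to Cohen’s d. This is just a quick sketch of the check, not part of the script used later in this post:

# Power for a two-tailed, two-sample t-test with 10 per group, alpha = .05, true d = 0.20
power.t.test(n = 10, delta = 0.20, sd = 1, sig.level = 0.05)$power
# returns roughly 0.07, matching the G*Power result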

Now that you know this, how can you increase statistical power for your study? There are at least a few straightforward ways. (1) You could increase your sample size. (2) You could set your alpha level to be higher than 0.05. (3) You could use a one-tailed test instead of a two-tailed test (which I’m not going to cover in this post). (4) You could adopt a within-subjects design instead of a between-subjects design (which I’m also not going to cover in this post). (5) You could create a stronger manipulation that you think will lead to a larger effect size; large effect sizes are easier to detect than small effect sizes (which I’ll cover later in this post). Or you could do some combination of all of these!

After some thinking, you reason that your field uses an alpha level = 0.05 as its standard, and you don’t have a good enough reason to argue that your study’s effect should be treated any differently. You also do not have a strong argument for why you should use a one-tailed test instead of a two-tailed test, so you use the standard two-tailed test. Because you want to be able to make an inference about one-off judgments, not comparative judgments, a within-subjects design is not appropriate for your study. Last, you need to collect data soon because you have a conference submission due in the next few days; therefore, you don’t want to spend any extra time thinking about and implementing a stronger manipulation. This leaves sample size. You must increase your sample size to obtain more statistical power. Your next question might be “How much statistical power should I have?”

As with decisions about setting an appropriate alpha level, there is not a clear consensus on how much statistical power is optimal. Your subfield might use 80% power as a rule of thumb. As a reminder, this means that studies in your subfield typically rely on a heuristic that it is sufficient to be able to reject the null hypothesis 80% of the time if a specific effect size truly exists in the population. Put another way, disciplines that use this 80% power heuristic will incorrectly fail to reject the null 20% of the time, even when a specific effect size truly exists in the population! Framed this way, 80% power (to me, at least) does not seem sufficient. Let’s assume, though, that you choose to power your experiment at 80% power, assuming an alpha level of .05, a true effect size of Cohen’s d = 0.20, using a two-tailed independent samples t-test. Before reading on, make a guess to yourself about how many participants you think this would require!

Did you guess something like 100-150 participants per group? 100-150 per group seems plenty! However, the actual answer is 394 participants(!), per group! Your study would need almost 800 participants in total to have an 80% chance of rejecting the null, assuming an effect size of Cohen’s d = 0.20 exists in the population. If you weren’t strapped for time due to your upcoming conference submission, this realization might lead to a reconsideration of the factors that affect statistical power. You might reassess whether a higher alpha level, a one-tailed test, a within-subjects design, a stronger manipulation, or some combination of these, is warranted. (Importantly, and perhaps depressingly, your real-world decisions regarding these issues will depend critically on the available resources you or your lab have at your disposal.) For illustration, however, let’s assume you have unlimited resources.
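By the way, you can verify the 394 figure yourself with the same base R function as before (a quick check, not part of the post’s main script); power.t.test() solves for whichever argument you leave out, here the sample size per group:

power.t.test(delta = 0.20, sd = 1, sig.level = 0.05, power = 0.80)
# n comes out to roughly 393.4 per group, which rounds up to 394 per group (about 788 in total)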

Using two of the common heuristics together (alpha = 0.05, power = 80%) creates an additional, more complex heuristic. Remember: having an alpha level = 0.05 means that your study has a 5% chance of incorrectly rejecting the null when no effect exists in the population; having 80% power means that your study has a 20% chance of incorrectly failing to reject the null if a specific effect size (in this case, a Cohen’s d = 0.20) exists in the population. These two heuristics, together, suggest that you are four times as concerned about avoiding false positives as about avoiding false negatives (5% false positive rate versus 20% false negative rate). Does that seem like a reasonable discrepancy to you? Or do you think that you should be equally concerned with avoiding false positives and false negatives? If the latter, then perhaps you should equalize your false positive and false negative rates. For example, you could keep your alpha level at 0.05 while increasing your statistical power to 95%, meaning that you would only incorrectly fail to reject the null 5% of the time if a Cohen’s d = 0.20 truly exists in the population.

How many participants do you think this would require? (Hint: it’s a lot!)

Your study would require… 651 participants(!) per condition! In total, you’d need over 1,300 participants! By now, you can imagine that predicting an interaction that attenuates this effect would require a massive (and likely a practically infeasible) sample size. To solidify all of this, see the attached screenshot from G*Power below. In this screenshot, power is plotted on the x-axis, whereas total sample size is plotted on the y-axis. Different colored lines are plotted along the plane to show how many total participants you’d need to correctly reject the null at different levels of power, assuming different true effect sizes in the population.
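If you don’t have the screenshot handy (or want to build the same kind of picture yourself), here is a hedged base R sketch of the idea: for each level of power on the x-axis, it computes the total sample size you would need on the y-axis, for a few assumed true effect sizes. It is an approximation of the G*Power plot, not a reproduction of it:

powers <- seq(0.50, 0.99, by = 0.01)          # levels of power to plot on the x-axis
ds     <- c(0.20, 0.50, 0.80)                 # assumed true effect sizes (Cohen's d)
total_n <- sapply(ds, function(d) {
  sapply(powers, function(p) {
    # power.t.test() returns n per group, so double it for the total sample size
    2 * ceiling(power.t.test(power = p, delta = d, sd = 1, sig.level = 0.05)$n)
  })
})
matplot(powers, total_n, type = "l", lty = 1, lwd = 2, col = 1:3,
        xlab = "Power (1 - beta)", ylab = "Total sample size",
        main = "Two-tailed independent samples t-test, alpha = 0.05")
legend("topleft", legend = paste("d =", ds), lty = 1, lwd = 2, col = 1:3)
# For d = 0.20: 80% power needs ~394 per group; 95% power needs ~651 per group (over 1,300 in total).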

Thus far, I hope to have convinced you of the importance of understanding statistical power. Specifically, if you want your studies to have a strict alpha criterion, high power, and be able to detect small effect sizes (e.g., Cohen’s d = 0.20), you will probably need many more participants than you’re currently used to running.

If the concept of power is new to you (or if you haven’t really dug into the details), you might still wonder to yourself, “How do we know that software programs like G*Power are correct in their calculations of statistical power based on combinations of alpha level, sample size, and effect size?” This is where simulations come in handy! Not only do simulations help you to understand how all of these elements are intertwined, but they also allow you to investigate additional questions that programs like G*Power are not built to answer.

There are a few things you might want to understand better: (1) Why does an alpha level of 0.05 mean that you will incorrectly detect an effect 5% of the time when no true effect exists in the population? (2) Why does 80% power mean that you will correctly detect an effect 80% of the time when a true effect exists in the population? (3) What is the relationship between statistical power and expected effect size? (4) How likely is it that any one effect size is accurately estimated at different levels of statistical power? Rather than simply providing answers to these questions, I’m going to show you how you can answer each of these questions yourself using simulations. If you are interested in learning the answers to these questions (or how to answer them yourself), please read on! If not, I hope what has been presented thus far has helped you in thinking more carefully about the importance of statistical power.

TL;DR version of Part 1:
Statistical power refers to the probability of rejecting the null hypothesis under the assumption that a specific true effect size exists. There are various ways to increase statistical power, including but not limited to, increasing sample size and adjusting your alpha level. If you want to detect small effects, you probably need to collect data from more participants than you’re currently used to running.

Part 2
Before diving into Part 2 (on simulations), I just want to apologize to any readers who are not familiar with R; I only know how to run these simulations in R. Even if you’re not familiar with R, you can download R and RStudio, and you should be able to run and reproduce everything with the code I’ve provided. You should also be able to tinker with the numbers if you want to strengthen your own statistical intuitions even more! I also want to give a shoutout to Daniel Lakens for his Coursera course “Improving your statistical inferences” (https://www.coursera.org/learn/statistical-inferences), which is where I first learned in detail about statistical power (among other important meta-science issues). I encourage anyone who has not gone through his courses to do so; they’re extremely informative. The code I’m using for this post is an extension and extreme modification of code that Lakens used in a lesson on the behavior of p-values. Okay, back to our main questions that we can answer with simulations!

Q1: Why does setting an alpha level of 0.05 mean that you will incorrectly detect an effect 5% of the time when no true effect exists in the population (and what else can we learn when answering this question)?

Remember from Part 1 that the alpha level is just the decision criterion you use for labeling your group difference (or any other hypothesis test) as “statistically significant” or not. If you set your alpha level at 0.05, and your study returns a p-value of 0.01, then because that value is below the alpha level of 0.05, you decide to reject the null hypothesis. That is, you would conclude that data at least as extreme as yours would be very unlikely if the null hypothesis were true; therefore, you reject the null hypothesis. But why does an alpha level of 0.05 translate into a 5% false positive rate? Let’s work through some simulations.

In R, you can produce simulated participants. Say you wanted to simulate 10 participants from a population with a known mean and standard deviation. You’d use the following commands; remember your 0 to 100 obligation measure:
x <- rnorm(n = 10, mean = 60, sd = 25)   # Group 1: 10 simulated participants drawn from a normal distribution with M = 60, SD = 25

Then, you can simulate a second set of participants:
y <- rnorm(n = 10, mean = 60, sd = 25)   # Group 2: drawn from an identical population

It should be clear from these two commands that you’re sampling from populations that have identical properties. Importantly—and this is the strength of simulations—you create and thus know the ground truth. Therefore, in simulating two samples of participants from this population, you know that there should be no effect (i.e., no difference between Group 1 and Group 2) when you run a t-test. However, simulating one dataset and conducting one t-test is not going to help you understand why setting an alpha level = 0.05 means that you’ll incorrectly detect an effect 5% of the time when no true effect exists. To fully appreciate this, you need to simulate many datasets. For illustration, I chose to simulate 100,000 datasets with the same commands as above. I also chose to simulate datasets that varied in how large the samples drawn from each population were (N = 10 per group, N = 50 per group, N = 100 per group, N = 200 per group, N = 500 per group, and N = 1000 per group). So in total, there are 600,000 simulated datasets with two groups of participants that were sampled from identical populations. You’ll see the importance of this later. Okay, let’s start working through the simulations!
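Before opening the full script, here is a compact sketch of the same idea so you can see the machinery (it uses fewer simulations than the script and a pooled-variance t-test for brevity; the exact choices in the linked script may differ):

set.seed(1)                    # makes the simulated draws reproducible
n_sims  <- 10000               # the script uses 100,000; fewer is faster for a quick look
n_group <- 10
p_values <- replicate(n_sims, {
  g1 <- rnorm(n_group, mean = 60, sd = 25)   # Group 1, sampled from the "null" population
  g2 <- rnorm(n_group, mean = 60, sd = 25)   # Group 2, sampled from the identical population
  t.test(g1, g2, var.equal = TRUE)$p.value
})
mean(p_values <= 0.05)                                          # proportion of false positives; should be close to .05
hist(p_values, breaks = 20, main = "p-values under the null")   # roughly flat (uniform)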

If you run lines 1 – 150 of the R script, you’ll have the same 600,000 datasets and t-tests that I simulated (plus some extra stuff). Remember: we know that there is no effect (i.e., no difference in obligation judgments between groups) because we simulated two groups of participants from the same population. Before reading on, think about what you’d expect when running hundreds of thousands of independent samples t-tests on groups of participants sampled from the same population. What do you think the distribution of p-values will look like?

After thinking through what you’d expect, run lines 151 – 162 to find out (and create the following plot):

On the x-axis are the observed p-values. On the y-axis are the frequencies of those observed p-values across all t-tests. The plot is faceted by how many participants were sampled from each population (e.g., “N = 10 per group” means that 100,000 simulated datasets were created by sampling 10 participants per group [total N = 20]). The vertical red lines are placed at 0.05 on the x-axis to show what proportion of t-tests returned p-values suggesting a significant difference between means.

You should immediately notice that, when there is no effect, roughly 5% of the p-values still suggest statistically significant differences. (I say “roughly” because this will depend on how many simulations you run – the more simulations, the closer this proportion will be to your alpha level). This is why an alpha level of 0.05 means that you will incorrectly reject the null 5% of the time when no true effect exists. Also notice that, when no true effect exists, p-values are uniformly distributed: every value between 0 and 1 is equally likely. To solidify this point, a p-value of .01 is just as likely as a p-value of .99 when the null hypothesis is true. Neat, right?

You can also look at how raw mean differences and their corresponding standardized effect sizes behave under the null hypothesis. What do you think the distribution of mean differences and standardized effect sizes will look like? Will they also be random? Before reading on, take a minute to think about what you’d expect to see when plotting the effect sizes that correspond to the random p-values. Then go ahead and run lines 164 – 174 to see.

On the x-axis are the observed mean difference estimates (remember our 0 to 100 obligation scale). On the y-axis are the frequencies of those observed mean difference estimates. You should immediately notice how different these are from the distribution of p-values. Notably, the dispersion of sampled mean differences around the true mean difference (i.e., 0 [green vertical line]) is high when samples are small (e.g., N = 10 per group). But as sample sizes increase, the dispersion of sampled mean differences gets tighter and tighter around the true mean difference. Given this plot, what do you expect for sampled Cohen’s d estimates? Take a minute to think about this before reading on. Then go ahead and run lines 176 – 186 to see.

This plot looks almost identical to the mean difference plot (except, now, the x-axis is in Cohen’s d units rather than raw mean difference units). What do these plots mean in conjunction with the plot of randomly distributed p-values?

My interpretation is that all false positives (i.e., all p-values ≤ 0.05) are due to effect sizes that are overestimated in absolute magnitude (i.e., sampled effects that land considerably below or above the true value of 0). We can verify this empirically by looking only at statistically significant effects within each subset of data. Run lines 188 – 194 as a demonstration of this in the “N = 10 per group” data. Here, the average (absolute) mean difference for only statistically significant t-tests is 24.71(!), ranging from 11.26 to 49.01. The average (absolute) Cohen’s d for only statistically significant t-tests is 1.16(!), ranging from 0.94 to 2.42.

Next, run lines 203 – 208, and then lines 217 – 222 to look at only statistically significant effects in other subsets of the data. In the “N = 100 per group” data, the average absolute mean difference is 8.22, ranging from 5.95 to 17.37. The average absolute Cohen’s d is 0.33, ranging from 0.28 to 0.67.

In the “N = 500 per group” data, the average absolute mean difference is 3.68, ranging from 2.86 to 7.82. The average absolute Cohen’s d is 0.15, ranging from 0.12 to 0.32. You should notice a clear pattern. As N per group increases, the statistically significant effect sizes get closer and closer to the true effect size (i.e., 0).
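You can reproduce this whole pattern with a small extension of the sketch above (again, separate from the post’s script): store the p-value, raw mean difference, and Cohen’s d for every simulated null study, then look only at the ones that happened to come out significant. Change n_group to 100 or 500 and watch the significant effect sizes shrink toward 0:

set.seed(2)
n_group <- 10                                 # try 100 or 500 to see the shrinkage
sims <- as.data.frame(t(replicate(10000, {
  g1 <- rnorm(n_group, mean = 60, sd = 25)
  g2 <- rnorm(n_group, mean = 60, sd = 25)
  sd_pooled <- sqrt((var(g1) + var(g2)) / 2)  # simple average is fine because the ns are equal
  c(p    = t.test(g1, g2, var.equal = TRUE)$p.value,
    diff = mean(g1) - mean(g2),
    d    = (mean(g1) - mean(g2)) / sd_pooled)
})))
sig <- subset(sims, p <= 0.05)                # keep only the "significant" studies
mean(abs(sig$diff)); range(abs(sig$diff))     # inflated relative to the true difference of 0
mean(abs(sig$d)); range(abs(sig$d))           # |d| typically well above 1 with 10 per group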

Q1 Takeaways (TL;DR):
First, under the null, p-values are uniformly distributed. Second, assuming the null is true, false positives are equally likely to occur across different sample sizes. Third, however, not all false positives are going to be equally misleading. If you pilot a study with N = 10 per group, and there truly is no effect in the population, you’ll be more likely to observe a large effect size than if you had piloted a study with N = 100 per group. Alright, on to the next question!

Q2: Why does 80% power mean that you will correctly detect an effect 80% of the time when a true effect exists in the population (and what else can we learn when answering this question)?

Go ahead and run lines 232 – 372, which may take a few minutes. This simulates another 600,000 datasets (100,000 datasets per “N per group”). The only difference between these new simulations and the previous simulations is that these new simulations generate participants from populations that differ in obligation strength judgments by a Cohen’s d = 0.20 (M1 = 60, SD1 = 25 vs M2 = 65, SD2 = 25). What do you think the distributions of p-values will look like now? Will they be random? If not, how do you think they’ll look? Take a moment to think about this before reading on.

Go ahead and run lines 375 – 385 to see.

This plot looks vastly different from the distributions of p-values when there truly is no effect. Here, as N per group increases, the probability of rejecting the null increases. But how does this relate to statistical power? Simply put, the proportion of p-values that are equal to or less than your chosen alpha level (here, 0.05) is how much statistical power a t-test would have to reject the null, assuming a specific difference exists (here, d = 0.20). We can verify this empirically by looking only at statistically significant effects within each subset of data. Run lines 412 – 417 to investigate power for the “N = 10 per group” data.

First, you’ll notice that only 7,044 out of 100,000 tests yield a statistically significant group difference in means. This means that, assuming a Cohen’s d = 0.20 difference between groups, you would only have ~7% power to reject the null if you ran a study with N = 10 per group (i.e., there’s only a 7% chance that your study would return a significant difference between means). Framed differently, this means that there is a 93%(!) false negative rate (i.e., you would have a 93% chance of incorrectly failing to reject the null). As a check, you can run a power analysis in G*Power, specifying 7% power to detect a Cohen’s d = 0.20, assuming a two-tailed test and an alpha level = 0.05. G*Power returns the necessary sample size as N = 10 per group (see below)!

In R, if you look at the average absolute mean difference for significant effects in the “N = 10 per group” data, it shows an average difference of 25.66(!), ranging from 10.91 to 51.94! Similarly, the average absolute Cohen’s d = 1.18(!), ranging from 0.94 to 2.98. That is, not a single data set that returns a significant effect accurately estimates the true mean difference or effect size. Why? With such a small sample size, observed effect sizes near d = 0.20 (the true effect size) yield t-statistics that do not exceed the critical t threshold for a two-tailed test with an alpha level = 0.05. Practically speaking, this means that, when true small effects exist and experiments collect data from only small numbers of participants, only exceptionally large observed effect sizes will lead to rejection of the null.
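A quick back-of-the-envelope check makes that “Why?” concrete. For an equal-n two-sample t-test, the t-statistic is (approximately) the observed d times the square root of n/2, so with 10 per group an effect near the true d = 0.20 cannot come close to the critical value:

qt(0.975, df = 18)     # two-tailed critical t with 10 per group (df = 18): about 2.10
0.20 * sqrt(10 / 2)    # approximate t if your sample d equaled the true 0.20: about 0.45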

What else can we learn from these specific simulations? If you look back at the p-value distributions from these data sets (which assume a Cohen’s d = 0.20), you’ll notice that the first sample size at which most of the p-values are at or below 0.05 is the subset of data in which N = 200 per group. Run lines 433 – 438 and see if you can figure out how much power a study with N = 200 per group will have to detect a Cohen’s d = 0.20.

51,579 of 100,000 tests yielded a statistically significant difference. This means that a study with N = 200 per group would only have a ~52% chance of rejecting the null if the true difference between groups is d = 0.20 (and therefore a 48% chance of incorrectly failing to reject the null). You’ll also notice that the average absolute effect size (Cohen’s d = 0.28, ranging from 0.18 to 0.62) for these statistically significant results is much closer to the true effect size than was the case when investigating the significant effects that were sampled with N = 10 per group. Again though, even with N = 200 per group, it’s worth noting that some of the significant effects are still drastically overestimated relative to the true effect size (e.g., a maximum absolute value of d = 0.62). Even more interesting, if you rerun line 438, removing the abs() function, you’ll notice that some of these significant effects are not even in the right direction! Remember, we simulated data so that Group 2 would be higher than Group 1, meaning that the effect size technically should be negative (because d was calculated as Group 1 – Group 2). So even with N = 200 per group, not only might your sampled effect size be overestimated in terms of absolute magnitude, but it’s possible that your sampled effect size will actually be in the opposite direction of the true effect size! I’ll come back to this in more detail when we go through question #4.
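As before, you can sanity-check the simulated power against the analytic answer (a rough check, not part of the script):

power.t.test(n = 200, delta = 0.20, sd = 1, sig.level = 0.05)$power
# roughly 0.51-0.52, in line with the 51,579 / 100,000 significant simulated tests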

Q2 Takeaways (TL;DR):
First, power can be thought of as “assuming a specific effect size, how many significant effects would I observe if I were to run this study 100,000 (or more) times?” And you should now be able to test this yourself if you’re ever confused about how to explain it! Second, increasing your sample size will increase your ability (i.e., power) to detect an effect. Third, if you compare the p-value distributions from this section (where Cohen’s d = 0.20) to the p-value distributions from the previous section (where Cohen’s d = 0), you’ll notice that each set of distributions shows significant effects. This is important to note because it means that any one significant effect is not by itself sufficient for concluding that a true effect exists. Ask yourself: how would you know whether your sampled effect is from the population of effects that can occur when the null is true, or whether it is from the population of effects that can occur when the null is false? This realization has made me appreciate the importance of replication. Of course, there are other issues apart from replication that are important to consider when deciding whether an effect exists (e.g., did you actually measure what you intended to measure, is there a confound in your manipulation, is the effect due to your choice of stimuli, etc.), but these are issues that you should be considering already anyway! Alright, let’s move on to question #3. (Q3 and Q4 are short, I promise!)

Q3: What is the relationship between statistical power and expected effect size?

Run lines 455 – 595 to simulate 600,000 new datasets, now assuming a true effect size of Cohen’s d = 0.50. Before you plot the p-values for these datasets, what do you think the distributions will look like? Will they be distributed identically to the p-values from Question 2 (where d = 0.20), or will they be different? If you think they’ll be different, in what way will they be different?

Run lines 597 – 608 to create the following plot:


Well, it looks kind of similar to Q2’s p-value plot. Can you spot the difference? If you compare these p-value distributions (where Cohen’s d = 0.50) to the distributions of p-values from Q2 (where Cohen’s d = 0.20), you’ll notice that for similar “Ns per group,” there are more statistically significant differences when the true effect is Cohen’s d = 0.50 compared to when the true effect size is Cohen’s d = 0.20. This means that, holding N per group constant, you’ll have more power to detect large effect sizes than small effect sizes. This is especially evident when looking at the bottom three panels of each p-value plot; the bottom panel in this section seems to show almost nothing but significant effects (which is not true of the bottom panel from Q2). Again, you can further verify this by investigating descriptive statistics associated with each subset of data, which I’m not going to go into detail about here. What we can do, though, is drive this point home by running an additional set of simulations in which the true effect size is a Cohen’s d = 0.80. You can do this yourself by running lines 678 – 831 to get the following plot (again, this may take a few minutes):


Q3 Takeaways (TL;DR): To me, there’s only one main takeaway: As the true effect size increases, you will need fewer participants to have a high probability of rejecting the null. If this still isn’t evident, you can compare all three p-value plots with true non-zero effects (Cohen’s ds = 0.20, 0.50, and 0.80), and you should be able to see it. Alright, time for the last question!
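(One quick aside before we get there: if you want an analytic illustration of this takeaway without rerunning the simulations, hold N per group constant and vary the assumed effect size. This is a hedged sketch using base R, not part of the post’s script.)

sapply(c(0.20, 0.50, 0.80), function(d) {
  power.t.test(n = 50, delta = d, sd = 1, sig.level = 0.05)$power
})
# roughly 0.17, 0.70, and 0.98: the same alpha and N buy far more power for larger true effects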

Q4: How likely is it that any one effect size is accurately estimated at different levels of statistical power?

You may not think about this issue too much if what you’re primarily interested in for your research is reliably detecting directions of effects. However, you may come to a point in your research program where you care a lot about accuracy of effect sizes. For example, in your studies on obligation strength, let’s assume that you’ve documented (and replicated) an effect in the predicted direction: People think others have a weaker obligation to help complete strangers than they do to help their cousins. Now you might wonder whether your effect is due to a more fine-grained factor, such as genetic relatedness. You might ask “Will people judge that others have a weaker obligation to help their cousins than to help their siblings?” If you predict “yes,” now you must consider whether the effect size that you’ve already estimated in your previous work will hold when answering this new question.

You might reason that because cousins and siblings are within the same category (i.e., family), people might not distinguish as much between them as when they distinguished between obligations to help strangers versus cousins (i.e., non-family versus family). Let’s assume this is true; we can stipulate that your already documented effect (for comparing obligation judgments to help strangers versus cousins), which you’ve replicated numerous times, suggested that the true effect was actually a Cohen’s d = 0.50. Now, you speculate that your new experiment (i.e., comparing obligation judgments to help cousins versus siblings) will suggest there is still a difference, but likely a much smaller difference. Let’s assume this new hypothesis is true and that the effect size truly is a Cohen’s d = 0.20.

In Q2, I mentioned “…assuming a true effect size of d = 0.20, even with N = 200 per group, not only might your sampled effect size be overestimated in terms of absolute magnitude, but it’s possible that your sampled effect size will actually be in the opposite direction of the true effect size!” Let’s use this as a starting point. Run lines 399 – 409 to get the following plot which assumes a Cohen’s d = -0.20; remember, Group 1 – Group 2 (i.e., “Cousin” obligation judgments – “Sibling” obligation judgments) would technically lead to a negative effect size, as Group 1 had a lower mean in our simulation code:

This plot suggests that, for any one study, higher power (i.e., larger N per group) will lead to a higher probability of your sampled effect size reflecting the true effect size, both in magnitude and direction. But does this pattern hold across different true effect sizes? Re-run lines 622 – 632 to see what the range of observed effect sizes look like when the true effect size is a Cohen’s d = -0.50:

Q4 Takeaways (TL;DR): First, holding the true effect size constant, you’re more likely to accurately estimate that effect size when you have higher power (e.g., larger N per group). Importantly, with lower power, you can even end up observing significant effects in the wrong direction (as we saw when answering Question 2)! This point again illustrates the importance of replication. Second, regardless of the magnitude of the true effect size, the range of observed effect sizes will be wide when N per group is low, but the range will narrow and be closer to the true effect size as N per group increases.

That’s all for this post! If there is one important takeaway from this entire post, I would say that it is imperative to understand statistical power and its implications. It will help you think through whether a non-significant effect is worth following up on versus moving on from. It will help you in evaluating other researchers’ statistical inferences as a casual reader or as a reviewer. It will even help you more appropriately communicate your own research findings. Also, if you have any more burning questions about power that you want to know the answer to, you might now be able to (at least try to) answer them yourself if you play around with simulations!

I hope you found this helpful. If you caught any mistakes, either in my code, in my results, or in my reasoning, please let me know. I enjoy learning about stats and testing my intuitions, but I don’t want to do it wrong. Worse yet, I don’t want to do it wrong and have other people believe my wrongness. If you’ve read all the way to the end, thank you! Since you made it all the way through to the end, here is a picture of my dogs to hopefully bring a smile to your face during these challenging times. Their names are Lomi and Fuzz 😊

P.S. – Here is a list of some folks I recommend following on Twitter and Google Scholar if you’re interested in learning more about meta-science-y or stats-related issues: Daniel Lakens, Lisa DeBruine, Tal Yarkoni, Simine Vazire, Alison Ledgerwood, Danielle Navarro, Berna Devezer, Dan Quintana, James Heathers, Indrajeet Patil, Anne Scheel, Stuart Ritchie, Uli Schimmack, Iris van Rooij, Kristopher Magnusson, and Chelsea Parlett-Pelleriti. I’m certain I’ve left some researchers off of this list, but it is not intentional; I just went into my bookmarked tweets and added names of folks whose tweets I’ve recently saved!

2 thoughts on “(The Importance of) Understanding Statistical Power”

  1. Question – why bother using a t-test and NHST at all when comparing two independent groups, and why not rely on Confidence Intervals and effect sizes, as this presumably would give all the same information and much more without all the misconceptions and limitations associated with significance testing? For example, even in your initial example in reference to different error types you mention “There truly is NO effect…”, but in reality there is always an effect/difference between groups at some decimal point (as pointed out by critics of NHST for years). I ask because I teach stats at both graduate and undergraduate level (psychology) and am trying to see if there are reasons for retaining NHST (and by extension power analysis) that I’m not aware of. Thanks!


    1. Hi Joe – thanks for reading and for your thoughts! I have to say that I feel slightly funny and underqualified trying to respond to someone who teaches stats on a regular basis… but I’ll do my best to communicate how I think about the issues you raised 😊

      First, I think the question of “why bother” is a good one. Certainly some people do not believe that NHST is the only tool that should be used (e.g., https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5133225/; https://journals.sagepub.com/doi/full/10.1177/1745691620958012), and like you mentioned, some people believe that using only estimation would lead us to the same conclusions of a hybrid NHST/estimation approach (e.g., https://journals.sagepub.com/doi/10.1177/0956797613504966).

      Personally, I believe that NHST, if used correctly (i.e., for long-run error control in making reject/retain decisions), is fine. That is, if a researcher wants to make a prediction that condition A will be higher than condition B, and they specify an alpha level with a particular effect size in mind and then power their statistical test appropriately, I don’t see anything wrong with making a tentative reject/retain decision about the null. Do I think researchers should make a strong claim about the magnitude of an effect if they use this approach? No… because we know that any single study may not return the true effect size (and we know that, regardless of the true effect size, extreme deviation from that true effect size is likelier to happen with smaller Ns).

      Second, I take your point that researchers often use NHST to test against an effect size of exactly zero. (Some call this nil-hypothesis significance testing [e.g., https://replicationindex.com/category/nil-hypothesis/]). However, the null can also be operationalized as any other non-zero effect size. For example, a researcher can have the same hypothesis as above (i.e., A > B), but they can stipulate that an effect size that is equal to or smaller than d = 0.20 is not practically or theoretically relevant. That is, they can make the null hypothesis be d = 0.20 and test against that. This kind of logic is also used in equivalence testing, where the goal is instead to provide evidence against a specific effect size (https://journals.sagepub.com/doi/full/10.1177/1948550617697177; https://journals.sagepub.com/doi/10.1177/2515245918770963).

      Third, I unfortunately don’t have a clear-cut answer to the question of whether there are good reasons for retaining all elements of the NHST framework. I lean towards thinking that, in principle, it would be great if all research could focus on magnitudes (and precision) of effects rather than making reject/retain decisions about the null. In practice, however, there likely will not be a “one size fits all” solution. For example, imagine that Lab X has virtually unlimited resources, but Lab Y has extremely limited resources. Because of these resource discrepancies, Lab X might often plan studies with precision in mind, whereas Lab Y might often plan studies based on having reasonable statistical power (e.g., 80%) to detect an effect size that they consider as practically or theoretically relevant. Conducting an experiment that has the goal of precision is always going to require many more participants than one that has the goal of testing for a non-zero but practically meaningful effect size (see https://richarddmorey.medium.com/power-and-precision-47f644ddea5e). Put differently, it seems reasonable to me that the goals of the research team, and the resources they have available to them, will dictate the utility of retaining certain elements of the NHST framework. But I think, as Richard Morey points out in his blog, it doesn’t make sense to abandon all elements of the NHST framework (e.g., power analysis) because the very motivation to abandon those elements (e.g., planning for precision instead of significance testing) runs into the problem of needing to reason about those elements anyway.

      I know my response might not be very satisfying, but I hope I’ve at least responded to your main questions/concerns. Please LMK if I haven’t (or if you disagree with anything I’ve said)!

